* [PATCH 6/8] arm64/kexec: Add pr_devel output
  2015-01-17  0:23   ` Geoff Levand
@ 2015-01-17  0:23     ` Geoff Levand
  -1 siblings, 0 replies; 100+ messages in thread
From: Geoff Levand @ 2015-01-17  0:23 UTC (permalink / raw)
  To: linux-arm-kernel

To aid in debugging kexec problems, and to help when adding new kexec
functionality, add a new routine kexec_image_info() and several inline
pr_devel statements.
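As an illustrative userspace sketch (not part of the patch), the wrapper pattern used here, a macro that captures the call site via __func__ and __LINE__ and forwards it to the real routine, looks like this; the struct and field names below are invented stand-ins for struct kimage:

```c
#include <stdio.h>
#include <string.h>

/* Invented stand-in for struct kimage, for illustration only. */
struct fake_image {
	int type;
	unsigned long start;
	unsigned long nr_segments;
};

/* The macro captures the caller's function name and line number. */
#define image_info(buf, len, i) _image_info(buf, len, __func__, __LINE__, i)

static int _image_info(char *buf, size_t len, const char *func, int line,
		       const struct fake_image *image)
{
	return snprintf(buf, len, "%s:%d: type:%d start:%lx nr_segments:%lu",
			func, line, image->type, image->start,
			image->nr_segments);
}
```

In the kernel the formatting goes through pr_devel() instead, so it compiles away entirely unless DEBUG is defined for the file.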

Signed-off-by: Geoff Levand <geoff@infradead.org>
---
 arch/arm64/kernel/machine_kexec.c | 54 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 54 insertions(+)

diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index b0e5d76..3d84759 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -35,6 +35,37 @@ static bool kexec_is_dtb(const void *dtb)
 }
 
 /**
+ * kexec_image_info - For debugging output.
+ */
+#define kexec_image_info(_i) _kexec_image_info(__func__, __LINE__, _i)
+static void _kexec_image_info(const char *func, int line,
+	const struct kimage *image)
+{
+	unsigned long i;
+
+#if !defined(DEBUG)
+	return;
+#endif
+	pr_devel("%s:%d:\n", func, line);
+	pr_devel("  kexec image info:\n");
+	pr_devel("    type:        %d\n", image->type);
+	pr_devel("    start:       %lx\n", image->start);
+	pr_devel("    head:        %lx\n", image->head);
+	pr_devel("    nr_segments: %lu\n", image->nr_segments);
+
+	for (i = 0; i < image->nr_segments; i++) {
+		pr_devel("      segment[%lu]: %016lx - %016lx, %lx bytes, %lu pages%s\n",
+			i,
+			image->segment[i].mem,
+			image->segment[i].mem + image->segment[i].memsz,
+			image->segment[i].memsz,
+			image->segment[i].memsz /  PAGE_SIZE,
+			(kexec_is_dtb(image->segment[i].buf) ?
+				", dtb segment" : ""));
+	}
+}
+
+/**
  * kexec_find_dtb_seg - Helper routine to find the dtb segment.
  */
 static const struct kexec_segment *kexec_find_dtb_seg(
@@ -67,6 +98,8 @@ int machine_kexec_prepare(struct kimage *image)
 	arm64_kexec_dtb_addr = dtb_seg ? dtb_seg->mem : 0;
 	arm64_kexec_kimage_start = image->start;
 
+	kexec_image_info(image);
+
 	return 0;
 }
 
@@ -121,6 +154,27 @@ void machine_kexec(struct kimage *image)
 	reboot_code_buffer_phys = page_to_phys(image->control_code_page);
 	reboot_code_buffer = phys_to_virt(reboot_code_buffer_phys);
 
+	kexec_image_info(image);
+
+	pr_devel("%s:%d: control_code_page:        %p\n", __func__, __LINE__,
+		image->control_code_page);
+	pr_devel("%s:%d: reboot_code_buffer_phys:  %pa\n", __func__, __LINE__,
+		&reboot_code_buffer_phys);
+	pr_devel("%s:%d: reboot_code_buffer:       %p\n", __func__, __LINE__,
+		reboot_code_buffer);
+	pr_devel("%s:%d: relocate_new_kernel:      %p\n", __func__, __LINE__,
+		relocate_new_kernel);
+	pr_devel("%s:%d: relocate_new_kernel_size: 0x%lx(%lu) bytes\n",
+		__func__, __LINE__, relocate_new_kernel_size,
+		relocate_new_kernel_size);
+
+	pr_devel("%s:%d: kexec_dtb_addr:           %lx\n", __func__, __LINE__,
+		arm64_kexec_dtb_addr);
+	pr_devel("%s:%d: kexec_kimage_head:        %lx\n", __func__, __LINE__,
+		arm64_kexec_kimage_head);
+	pr_devel("%s:%d: kexec_kimage_start:       %lx\n", __func__, __LINE__,
+		arm64_kexec_kimage_start);
+
 	/*
 	 * Copy relocate_new_kernel to the reboot_code_buffer for use
 	 * after the kernel is shut down.
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 100+ messages in thread

* [PATCH 7/8] arm64/kexec: Add checks for KVM
From: Geoff Levand @ 2015-01-17  0:23 UTC (permalink / raw)
  To: linux-arm-kernel

Add runtime checks that fail the arm64 kexec syscall for situations that would
result in system instability due to problems in the KVM kernel support.
These checks should be removed when the KVM problems are resolved.
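The gating logic amounts to the following; this is a hedged userspace model, with booleans standing in for IS_ENABLED(CONFIG_KVM) and is_hyp_mode_available():

```c
#include <errno.h>
#include <stdbool.h>

/* Model of the check added to machine_kexec_prepare(): refuse the kexec
 * syscall with -ENOSYS while a KVM-configured kernel holds EL2. */
static int kexec_prepare_check(bool kvm_configured, bool hyp_mode_available)
{
	if (kvm_configured && hyp_mode_available)
		return -ENOSYS;
	return 0;
}
```

A kernel built with CONFIG_KVM=n, or one booted at EL1, is unaffected by the check.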

Signed-off-by: Geoff Levand <geoff@infradead.org>
---
 arch/arm64/kernel/machine_kexec.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index 3d84759..a36459d 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -16,6 +16,9 @@
 #include <asm/cacheflush.h>
 #include <asm/system_misc.h>
 
+/* TODO: Remove this include when KVM can support a kexec reboot. */
+#include <asm/virt.h>
+
 /* Global variables for the relocate_kernel routine. */
 extern const unsigned char relocate_new_kernel[];
 extern const unsigned long relocate_new_kernel_size;
@@ -100,6 +103,13 @@ int machine_kexec_prepare(struct kimage *image)
 
 	kexec_image_info(image);
 
+	/* TODO: Remove this message when KVM can support a kexec reboot. */
+	if (IS_ENABLED(CONFIG_KVM) && is_hyp_mode_available()) {
+		pr_err("%s: Your kernel is configured with KVM support (CONFIG_KVM=y) which currently does not allow for kexec re-boot.\n",
+			__func__);
+		return -ENOSYS;
+	}
+
 	return 0;
 }
 
-- 
2.1.0


* [PATCH 0/8] arm64 kexec kernel patches V7
       [not found] <cover.1415926876.git.geoff@infradead.orgg>
@ 2015-01-17  0:23   ` Geoff Levand
From: Geoff Levand @ 2015-01-17  0:23 UTC (permalink / raw)
  To: linux-arm-kernel

Hi All,

This series adds the core support for kexec re-boots on arm64.  This v7 of the
series is mainly just a rebase to the latest arm64 for-next/core branch
(v3.19-rc4), and a few very minor changes requested for v6.

I have tested with the ARM VE fast model, the ARM Base model and the ARM
Foundation model with various kernel config options for both the first and
second stage kernels.

To load a second stage kernel and execute a kexec re-boot on arm64, my patches
to kexec-tools [2], which have not yet been merged upstream, are needed.

Patch 1 here moves proc-macros.S from arm64/mm to arm64/include/asm so that the
dcache_line_size macro it defines can be used by kexec's relocate kernel
routine.
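For reference, the computation that dcache_line_size macro performs can be sketched in C; CTR_EL0 bits [19:16] (DminLine) hold log2 of the minimum D-cache line size in 4-byte words:

```c
/* C model of the dcache_line_size assembler macro: extract CTR_EL0
 * bits [19:16] (the ubfm #16, #19 in the macro) and scale 4 bytes
 * per word by that power of two. */
static unsigned long dcache_line_size_bytes(unsigned long ctr_el0)
{
	unsigned long dminline = (ctr_el0 >> 16) & 0xf;

	return 4UL << dminline;
}
```

A DminLine value of 4, common on current cores, yields the familiar 64-byte line.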

Patches 2-4 rework the arm64 hcall mechanism to give the arm64 soft_restart()
routine the ability to switch exception levels from EL1 to EL2 for kernels that
were entered in EL2.

Patches 5-8 add the actual kexec support.

Please consider all patches for inclusion.

Note that the location of my development repositories has changed:

[1]  https://git.kernel.org/cgit/linux/kernel/git/geoff/linux-kexec.git
[2]  https://git.kernel.org/cgit/linux/kernel/git/geoff/kexec-tools.git

Several things are known to have problems on kexec re-boot:

spin-table
----------

PROBLEM: The spin-table enable method does not implement all the methods needed
for CPU hot-plug, so the first stage kernel cannot be shut down properly.

WORK-AROUND: Upgrade to system firmware that provides PSCI enable method
support, OR build the first stage kernel with CONFIG_SMP=n, OR pass 'maxcpus=1'
on the first stage kernel command line.

FIX: Upgrade system firmware to provide PSCI enable method support or add
missing spin-table support to the kernel.

KVM
---

PROBLEM: KVM acquires hypervisor resources on startup, but does not free those
resources on shutdown, so the first stage kernel cannot be shut down properly
when using kexec.

WORK-AROUND:  Build the first stage kernel with CONFIG_KVM=n.

FIX: Fix KVM to support soft_restart().  KVM needs to restore default exception
vectors, etc.

UEFI
----

PROBLEM: UEFI does not manage its runtime services virtual mappings in a way
that is compatible with a kexec re-boot, so the second stage kernel hangs on
boot-up.

WORK-AROUND:  Disable UEFI in firmware.

FIX: Ard Biesheuvel has done work to fix this.  Basic kexec re-boot has been
tested and works.  More comprehensive testing is needed.

/memreserve/
------------

PROBLEM: The use of device tree /memreserve/ entries is not compatible with
kexec re-boot.  The second stage kernel will use the reserved regions and the
system will become unstable.

WORK-AROUND: Pass a user specified DTB using the kexec --dtb option.

FIX: An interface to expose a binary device tree to user space has been
proposed.  User kexec utilities will need to be updated to add support for this
new interface.
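For background, a /memreserve/ entry ends up in the flattened device tree's memory reservation block as (address, size) pairs of big-endian 64-bit values terminated by an all-zero pair (per the flattened device tree format); a minimal reader sketch:

```c
#include <stddef.h>
#include <stdint.h>

/* One entry of the FDT memory reservation block. */
struct fdt_reserve_entry {
	uint64_t address;	/* big-endian */
	uint64_t size;		/* big-endian */
};

/* Portable big-endian to host conversion (reads bytes MSB-first). */
static uint64_t be64_to_host(uint64_t v)
{
	const unsigned char *p = (const unsigned char *)&v;
	uint64_t r = 0;

	for (int i = 0; i < 8; i++)
		r = (r << 8) | p[i];
	return r;
}

/* Count /memreserve/ regions; the list ends with a (0, 0) entry. */
static size_t count_memreserve(const struct fdt_reserve_entry *rsv)
{
	size_t n = 0;

	while (be64_to_host(rsv[n].address) || be64_to_host(rsv[n].size))
		n++;
	return n;
}
```

These are exactly the regions a second stage kernel would need to honor but currently does not, hence the instability described above.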

ACPI
----

PROBLEM: The kernel for ACPI based systems does not export a device tree to the
standard user space location of '/proc/device-tree'.  Current applications
expect to access device tree information from this standard location.

WORK-AROUND: Disable ACPI in firmware, OR pass 'acpi=off' on the first stage
kernel command line, OR pass a user specified DTB using the kexec --dtb option.

FIX: An interface to expose a binary device tree to user space has been
proposed.  User kexec utilities will need to be updated to add support for this
new interface.

----------------------------------------------------------------
The following changes since commit 6083fe74b7bfffc2c7be8c711596608bda0cda6e:

  arm64: respect mem= for EFI (2015-01-16 16:21:58 +0000)

are available in the git repository at:

  git://git.kernel.org/pub/scm/linux/kernel/git/geoff/linux-kexec.git kexec-v7

for you to fetch changes up to 7db998d9533d5efab816edfad3d7010fc2f7e62c:

  arm64/kexec: Enable kexec in the arm64 defconfig (2015-01-16 14:55:28 -0800)

----------------------------------------------------------------
Geoff Levand (8):
      arm64: Move proc-macros.S to include/asm
      arm64: Convert hcalls to use ISS field
      arm64: Add new hcall HVC_CALL_FUNC
      arm64: Add EL2 switch to soft_restart
      arm64/kexec: Add core kexec support
      arm64/kexec: Add pr_devel output
      arm64/kexec: Add checks for KVM
      arm64/kexec: Enable kexec in the arm64 defconfig

 arch/arm64/Kconfig                           |   9 ++
 arch/arm64/configs/defconfig                 |   1 +
 arch/arm64/include/asm/kexec.h               |  47 ++++++
 arch/arm64/include/asm/proc-fns.h            |   4 +-
 arch/arm64/{mm => include/asm}/proc-macros.S |   0
 arch/arm64/include/asm/virt.h                |  33 ++++
 arch/arm64/kernel/Makefile                   |   1 +
 arch/arm64/kernel/hyp-stub.S                 |  45 ++++--
 arch/arm64/kernel/machine_kexec.c            | 219 +++++++++++++++++++++++++++
 arch/arm64/kernel/process.c                  |  10 +-
 arch/arm64/kernel/relocate_kernel.S          | 160 +++++++++++++++++++
 arch/arm64/kvm/hyp.S                         |  18 ++-
 arch/arm64/mm/cache.S                        |   3 +-
 arch/arm64/mm/proc.S                         |  50 ++++--
 include/uapi/linux/kexec.h                   |   1 +
 15 files changed, 563 insertions(+), 38 deletions(-)
 create mode 100644 arch/arm64/include/asm/kexec.h
 rename arch/arm64/{mm => include/asm}/proc-macros.S (100%)
 create mode 100644 arch/arm64/kernel/machine_kexec.c
 create mode 100644 arch/arm64/kernel/relocate_kernel.S

-- 
2.1.0


* [PATCH 1/8] arm64: Move proc-macros.S to include/asm
From: Geoff Levand @ 2015-01-17  0:23 UTC (permalink / raw)
  To: linux-arm-kernel

To allow the assembler macros defined in proc-macros.S to be used outside
the mm code, move the proc-macros.S file from arch/arm64/mm/ to
arch/arm64/include/asm/ and fix up any preprocessor includes to use the new
file location.

Signed-off-by: Geoff Levand <geoff@infradead.org>
---
 arch/arm64/include/asm/proc-macros.S | 54 ++++++++++++++++++++++++++++++++++++
 arch/arm64/mm/cache.S                |  3 +-
 arch/arm64/mm/proc-macros.S          | 54 ------------------------------------
 arch/arm64/mm/proc.S                 |  3 +-
 4 files changed, 56 insertions(+), 58 deletions(-)
 create mode 100644 arch/arm64/include/asm/proc-macros.S
 delete mode 100644 arch/arm64/mm/proc-macros.S

diff --git a/arch/arm64/include/asm/proc-macros.S b/arch/arm64/include/asm/proc-macros.S
new file mode 100644
index 0000000..005d29e
--- /dev/null
+++ b/arch/arm64/include/asm/proc-macros.S
@@ -0,0 +1,54 @@
+/*
+ * Based on arch/arm/mm/proc-macros.S
+ *
+ * Copyright (C) 2012 ARM Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <asm/asm-offsets.h>
+#include <asm/thread_info.h>
+
+/*
+ * vma_vm_mm - get mm pointer from vma pointer (vma->vm_mm)
+ */
+	.macro	vma_vm_mm, rd, rn
+	ldr	\rd, [\rn, #VMA_VM_MM]
+	.endm
+
+/*
+ * mmid - get context id from mm pointer (mm->context.id)
+ */
+	.macro	mmid, rd, rn
+	ldr	\rd, [\rn, #MM_CONTEXT_ID]
+	.endm
+
+/*
+ * dcache_line_size - get the minimum D-cache line size from the CTR register.
+ */
+	.macro	dcache_line_size, reg, tmp
+	mrs	\tmp, ctr_el0			// read CTR
+	ubfm	\tmp, \tmp, #16, #19		// cache line size encoding
+	mov	\reg, #4			// bytes per word
+	lsl	\reg, \reg, \tmp		// actual cache line size
+	.endm
+
+/*
+ * icache_line_size - get the minimum I-cache line size from the CTR register.
+ */
+	.macro	icache_line_size, reg, tmp
+	mrs	\tmp, ctr_el0			// read CTR
+	and	\tmp, \tmp, #0xf		// cache line size encoding
+	mov	\reg, #4			// bytes per word
+	lsl	\reg, \reg, \tmp		// actual cache line size
+	.endm
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index 2560e1e..4fcfffa 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -23,8 +23,7 @@
 #include <asm/assembler.h>
 #include <asm/cpufeature.h>
 #include <asm/alternative-asm.h>
-
-#include "proc-macros.S"
+#include <asm/proc-macros.S>
 
 /*
  *	__flush_dcache_all()
diff --git a/arch/arm64/mm/proc-macros.S b/arch/arm64/mm/proc-macros.S
deleted file mode 100644
index 005d29e..0000000
--- a/arch/arm64/mm/proc-macros.S
+++ /dev/null
@@ -1,54 +0,0 @@
-/*
- * Based on arch/arm/mm/proc-macros.S
- *
- * Copyright (C) 2012 ARM Ltd.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program.  If not, see <http://www.gnu.org/licenses/>.
- */
-
-#include <asm/asm-offsets.h>
-#include <asm/thread_info.h>
-
-/*
- * vma_vm_mm - get mm pointer from vma pointer (vma->vm_mm)
- */
-	.macro	vma_vm_mm, rd, rn
-	ldr	\rd, [\rn, #VMA_VM_MM]
-	.endm
-
-/*
- * mmid - get context id from mm pointer (mm->context.id)
- */
-	.macro	mmid, rd, rn
-	ldr	\rd, [\rn, #MM_CONTEXT_ID]
-	.endm
-
-/*
- * dcache_line_size - get the minimum D-cache line size from the CTR register.
- */
-	.macro	dcache_line_size, reg, tmp
-	mrs	\tmp, ctr_el0			// read CTR
-	ubfm	\tmp, \tmp, #16, #19		// cache line size encoding
-	mov	\reg, #4			// bytes per word
-	lsl	\reg, \reg, \tmp		// actual cache line size
-	.endm
-
-/*
- * icache_line_size - get the minimum I-cache line size from the CTR register.
- */
-	.macro	icache_line_size, reg, tmp
-	mrs	\tmp, ctr_el0			// read CTR
-	and	\tmp, \tmp, #0xf		// cache line size encoding
-	mov	\reg, #4			// bytes per word
-	lsl	\reg, \reg, \tmp		// actual cache line size
-	.endm
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 4e778b1..c507e25 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -25,8 +25,7 @@
 #include <asm/hwcap.h>
 #include <asm/pgtable-hwdef.h>
 #include <asm/pgtable.h>
-
-#include "proc-macros.S"
+#include <asm/proc-macros.S>
 
 #ifdef CONFIG_ARM64_64K_PAGES
 #define TCR_TG_FLAGS	TCR_TG0_64K | TCR_TG1_64K
-- 
2.1.0


* [PATCH 4/8] arm64: Add EL2 switch to soft_restart
From: Geoff Levand @ 2015-01-17  0:23 UTC (permalink / raw)
  To: linux-arm-kernel

When a CPU is reset it needs to be put into the exception level it had when it
entered the kernel.  Update cpu_reset() to accept an argument el2_switch which
signals cpu_reset() to enter the soft reset address at EL2.  If el2_switch is
not set the soft reset address will be entered at EL1.

Update cpu_soft_restart() and soft_restart() to pass the return of
is_hyp_mode_available() as the el2_switch value to cpu_reset().  Also update the
comments of cpu_reset(), cpu_soft_restart() and soft_restart() to reflect this
change.
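The resulting selection logic in soft_restart() can be modeled as follows; the booleans stand in for IS_ENABLED(CONFIG_KVM) and is_hyp_mode_available():

```c
#include <stdbool.h>

/* Model of the el2_switch value soft_restart() passes to
 * cpu_soft_restart(): switch to EL2 only if the CPU entered the kernel
 * at EL2, and never (for now) on a KVM-configured kernel. */
static bool want_el2_switch(bool kvm_configured, bool hyp_mode_available)
{
	if (kvm_configured)	/* TODO in the series: drop once KVM copes */
		return false;
	return hyp_mode_available;
}
```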

Signed-off-by: Geoff Levand <geoff@infradead.org>
---
 arch/arm64/include/asm/proc-fns.h |  4 ++--
 arch/arm64/kernel/process.c       | 10 ++++++++-
 arch/arm64/mm/proc.S              | 47 +++++++++++++++++++++++++++++----------
 3 files changed, 46 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/include/asm/proc-fns.h b/arch/arm64/include/asm/proc-fns.h
index 9a8fd84..339394d 100644
--- a/arch/arm64/include/asm/proc-fns.h
+++ b/arch/arm64/include/asm/proc-fns.h
@@ -32,8 +32,8 @@ extern void cpu_cache_off(void);
 extern void cpu_do_idle(void);
 extern void cpu_do_switch_mm(unsigned long pgd_phys, struct mm_struct *mm);
 extern void cpu_reset(unsigned long addr) __attribute__((noreturn));
-void cpu_soft_restart(phys_addr_t cpu_reset,
-		unsigned long addr) __attribute__((noreturn));
+void cpu_soft_restart(phys_addr_t cpu_reset, unsigned long el2_switch,
+		      unsigned long addr) __attribute__((noreturn));
 extern void cpu_do_suspend(struct cpu_suspend_ctx *ptr);
 extern u64 cpu_do_resume(phys_addr_t ptr, u64 idmap_ttbr);
 
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index fde9923..371bbf1 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -50,6 +50,7 @@
 #include <asm/mmu_context.h>
 #include <asm/processor.h>
 #include <asm/stacktrace.h>
+#include <asm/virt.h>
 
 #ifdef CONFIG_CC_STACKPROTECTOR
 #include <linux/stackprotector.h>
@@ -60,7 +61,14 @@ EXPORT_SYMBOL(__stack_chk_guard);
 void soft_restart(unsigned long addr)
 {
 	setup_mm_for_reboot();
-	cpu_soft_restart(virt_to_phys(cpu_reset), addr);
+
+	/* TODO: Remove this conditional when KVM can support CPU restart. */
+	if (IS_ENABLED(CONFIG_KVM))
+		cpu_soft_restart(virt_to_phys(cpu_reset), 0, addr);
+	else
+		cpu_soft_restart(virt_to_phys(cpu_reset),
+				 is_hyp_mode_available(), addr);
+
 	/* Should never get here */
 	BUG();
 }
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index c507e25..b767032 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -26,6 +26,7 @@
 #include <asm/pgtable-hwdef.h>
 #include <asm/pgtable.h>
 #include <asm/proc-macros.S>
+#include <asm/virt.h>
 
 #ifdef CONFIG_ARM64_64K_PAGES
 #define TCR_TG_FLAGS	TCR_TG0_64K | TCR_TG1_64K
@@ -58,27 +59,48 @@ ENTRY(cpu_cache_off)
 ENDPROC(cpu_cache_off)
 
 /*
- *	cpu_reset(loc)
+ * cpu_reset(el2_switch, loc) - Helper for cpu_soft_restart.
  *
- *	Perform a soft reset of the system.  Put the CPU into the same state
- *	as it would be if it had been reset, and branch to what would be the
- *	reset vector. It must be executed with the flat identity mapping.
+ * @cpu_reset: Physical address of the cpu_reset routine.
+ * @el2_switch: Flag to indicate a switch to EL2 is needed.
+ * @addr: Location to jump to for soft reset.
  *
- *	- loc   - location to jump to for soft reset
+ * Put the CPU into the same state as it would be if it had been reset, and
+ * branch to what would be the reset vector. It must be executed with the
+ * flat identity mapping.
  */
+
 	.align	5
+
 ENTRY(cpu_reset)
-	mrs	x1, sctlr_el1
-	bic	x1, x1, #1
-	msr	sctlr_el1, x1			// disable the MMU
+	mrs	x2, sctlr_el1
+	bic	x2, x2, #1
+	msr	sctlr_el1, x2			// disable the MMU
 	isb
-	ret	x0
+
+	cbz	x0, 1f				// el2_switch?
+	mov	x0, x1
+	mov	x1, xzr
+	mov	x2, xzr
+	mov	x3, xzr
+	hvc	#HVC_CALL_FUNC			// no return
+
+1:	ret	x1
 ENDPROC(cpu_reset)
 
+/*
+ * cpu_soft_restart(cpu_reset, el2_switch, addr) - Perform a cpu soft reset.
+ *
+ * @cpu_reset: Physical address of the cpu_reset routine.
+ * @el2_switch: Flag to indicate a switch to EL2 is needed, passed to cpu_reset.
+ * @addr: Location to jump to for soft reset, passed to cpu_reset.
+ *
+ */
+
 ENTRY(cpu_soft_restart)
-	/* Save address of cpu_reset() and reset address */
-	mov	x19, x0
-	mov	x20, x1
+	mov	x19, x0				// cpu_reset
+	mov	x20, x1				// el2_switch
+	mov	x21, x2				// addr
 
 	/* Turn D-cache off */
 	bl	cpu_cache_off
@@ -87,6 +109,7 @@ ENTRY(cpu_soft_restart)
 	bl	flush_cache_all
 
 	mov	x0, x20
+	mov	x1, x21
 	ret	x19
 ENDPROC(cpu_soft_restart)
 
-- 
2.1.0


* [PATCH 2/8] arm64: Convert hcalls to use ISS field
From: Geoff Levand @ 2015-01-17  0:23 UTC (permalink / raw)
  To: linux-arm-kernel

To allow for additional hcalls to be defined and to make the arm64 hcall API
more consistent across exception vector routines, change the hcall implementations
to use the ISS field of the ESR_EL2 register to specify the hcall type.

The existing arm64 hcall implementations are limited in that they only allow
for two distinct hcalls; with the x0 register either zero, or not zero.  Also,
the API of the hyp-stub exception vector routines and the KVM exception vector
routines differ; hyp-stub uses a non-zero value in x0 to implement
__hyp_set_vectors, whereas KVM uses it to implement kvm_call_hyp.

Define three new preprocessor macros, HVC_GET_VECTORS, HVC_SET_VECTORS and
HVC_CALL_HYP, to be used as hcall type specifiers, and convert the
existing __hyp_get_vectors(), __hyp_set_vectors() and kvm_call_hyp() routines
to use these new macros when executing an HVC call.  Also change the
corresponding hyp-stub and KVM el1_sync exception vector routines to use these
new macros.
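The decode the reworked el1_sync handler performs can be sketched in C; the constants match those used in the patch (EC in ESR_EL2 bits [31:26], HVC64 trap class 0x16, hcall number carried in the ISS field, i.e. the HVC immediate):

```c
#define ESR_ELx_EC_SHIFT	26
#define ESR_ELx_EC_HVC64	0x16
#define ESR_ELx_ISS_MASK	0x01ffffffUL

#define HVC_GET_VECTORS	1
#define HVC_SET_VECTORS	2
#define HVC_CALL_HYP	3

/* Return the hcall type from ESR_EL2, or -1 if not an HVC64 trap. */
static long hcall_type(unsigned long esr)
{
	if ((esr >> ESR_ELx_EC_SHIFT) != ESR_ELx_EC_HVC64)
		return -1;
	return (long)(esr & ESR_ELx_ISS_MASK);
}
```

Using the HVC immediate this way frees x0 for a real argument, which the old zero/non-zero x0 convention could not do.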

Signed-off-by: Geoff Levand <geoff@infradead.org>
---
 arch/arm64/include/asm/virt.h | 20 ++++++++++++++++++++
 arch/arm64/kernel/hyp-stub.S  | 34 ++++++++++++++++++++++------------
 arch/arm64/kvm/hyp.S          | 18 +++++++++++-------
 3 files changed, 53 insertions(+), 19 deletions(-)

diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
index 7a5df52..99c319c 100644
--- a/arch/arm64/include/asm/virt.h
+++ b/arch/arm64/include/asm/virt.h
@@ -21,6 +21,26 @@
 #define BOOT_CPU_MODE_EL1	(0xe11)
 #define BOOT_CPU_MODE_EL2	(0xe12)
 
+/*
+ * HVC_GET_VECTORS - Return the value of the vbar_el2 register.
+ */
+
+#define HVC_GET_VECTORS 1
+
+/*
+ * HVC_SET_VECTORS - Set the value of the vbar_el2 register.
+ *
+ * @x0: Physical address of the new vector table.
+ */
+
+#define HVC_SET_VECTORS 2
+
+/*
+ * HVC_CALL_HYP - Execute a hyp routine.
+ */
+
+#define HVC_CALL_HYP 3
+
 #ifndef __ASSEMBLY__
 
 /*
diff --git a/arch/arm64/kernel/hyp-stub.S b/arch/arm64/kernel/hyp-stub.S
index a272f33..e3db3fd 100644
--- a/arch/arm64/kernel/hyp-stub.S
+++ b/arch/arm64/kernel/hyp-stub.S
@@ -22,6 +22,7 @@
 #include <linux/irqchip/arm-gic-v3.h>
 
 #include <asm/assembler.h>
+#include <asm/kvm_arm.h>
 #include <asm/ptrace.h>
 #include <asm/virt.h>
 
@@ -53,14 +54,22 @@ ENDPROC(__hyp_stub_vectors)
 	.align 11
 
 el1_sync:
-	mrs	x1, esr_el2
-	lsr	x1, x1, #26
-	cmp	x1, #0x16
-	b.ne	2f				// Not an HVC trap
-	cbz	x0, 1f
-	msr	vbar_el2, x0			// Set vbar_el2
+	mrs	x18, esr_el2
+	lsr	x17, x18, #ESR_ELx_EC_SHIFT
+	and	x18, x18, #ESR_ELx_ISS_MASK
+
+	cmp     x17, #ESR_ELx_EC_HVC64
+	b.ne    2f				// Not an HVC trap
+
+	cmp	x18, #HVC_GET_VECTORS
+	b.ne	1f
+	mrs	x0, vbar_el2
 	b	2f
-1:	mrs	x0, vbar_el2			// Return vbar_el2
+
+1:	cmp	x18, #HVC_SET_VECTORS
+	b.ne	2f
+	msr	vbar_el2, x0
+
 2:	eret
 ENDPROC(el1_sync)
 
@@ -100,11 +109,12 @@ ENDPROC(\label)
  * initialisation entry point.
  */
 
-ENTRY(__hyp_get_vectors)
-	mov	x0, xzr
-	// fall through
 ENTRY(__hyp_set_vectors)
-	hvc	#0
+	hvc	#HVC_SET_VECTORS
 	ret
-ENDPROC(__hyp_get_vectors)
 ENDPROC(__hyp_set_vectors)
+
+ENTRY(__hyp_get_vectors)
+	hvc	#HVC_GET_VECTORS
+	ret
+ENDPROC(__hyp_get_vectors)
diff --git a/arch/arm64/kvm/hyp.S b/arch/arm64/kvm/hyp.S
index c0d8202..1916c89 100644
--- a/arch/arm64/kvm/hyp.S
+++ b/arch/arm64/kvm/hyp.S
@@ -27,6 +27,7 @@
 #include <asm/kvm_asm.h>
 #include <asm/kvm_mmu.h>
 #include <asm/memory.h>
+#include <asm/virt.h>
 
 #define CPU_GP_REG_OFFSET(x)	(CPU_GP_REGS + x)
 #define CPU_XREG_OFFSET(x)	CPU_GP_REG_OFFSET(CPU_USER_PT_REGS + 8*x)
@@ -1106,12 +1107,9 @@ __hyp_panic_str:
  * in Hyp mode (see init_hyp_mode in arch/arm/kvm/arm.c).  Return values are
  * passed in r0 and r1.
  *
- * A function pointer with a value of 0 has a special meaning, and is
- * used to implement __hyp_get_vectors in the same way as in
- * arch/arm64/kernel/hyp_stub.S.
  */
 ENTRY(kvm_call_hyp)
-	hvc	#0
+	hvc	#HVC_CALL_HYP
 	ret
 ENDPROC(kvm_call_hyp)
 
@@ -1142,6 +1140,7 @@ el1_sync:					// Guest trapped into EL2
 
 	mrs	x1, esr_el2
 	lsr	x2, x1, #ESR_ELx_EC_SHIFT
+	and	x0, x1, #ESR_ELx_ISS_MASK
 
 	cmp	x2, #ESR_ELx_EC_HVC64
 	b.ne	el1_trap
@@ -1150,15 +1149,19 @@ el1_sync:					// Guest trapped into EL2
 	cbnz	x3, el1_trap			// called HVC
 
 	/* Here, we're pretty sure the host called HVC. */
+	mov	x18, x0
 	pop	x2, x3
 	pop	x0, x1
 
-	/* Check for __hyp_get_vectors */
-	cbnz	x0, 1f
+	cmp	x18, #HVC_GET_VECTORS
+	b.ne	1f
 	mrs	x0, vbar_el2
 	b	2f
 
-1:	push	lr, xzr
+1:	cmp	x18, #HVC_CALL_HYP
+	b.ne	2f
+
+	push	lr, xzr
 
 	/*
 	 * Compute the function address in EL2, and shuffle the parameters.
@@ -1171,6 +1174,7 @@ el1_sync:					// Guest trapped into EL2
 	blr	lr
 
 	pop	lr, xzr
+
 2:	eret
 
 el1_trap:
-- 
2.1.0


* [PATCH 8/8] arm64/kexec: Enable kexec in the arm64 defconfig
From: Geoff Levand @ 2015-01-17  0:23 UTC (permalink / raw)
  To: linux-arm-kernel

Signed-off-by: Geoff Levand <geoff@infradead.org>
---
 arch/arm64/configs/defconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm64/configs/defconfig b/arch/arm64/configs/defconfig
index 5376d90..85285dc 100644
--- a/arch/arm64/configs/defconfig
+++ b/arch/arm64/configs/defconfig
@@ -42,6 +42,7 @@ CONFIG_PREEMPT=y
 CONFIG_KSM=y
 CONFIG_TRANSPARENT_HUGEPAGE=y
 CONFIG_CMA=y
+CONFIG_KEXEC=y
 CONFIG_CMDLINE="console=ttyAMA0"
 # CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
 CONFIG_COMPAT=y
-- 
2.1.0


* [PATCH 3/8] arm64: Add new hcall HVC_CALL_FUNC
From: Geoff Levand @ 2015-01-17  0:23 UTC (permalink / raw)
  To: linux-arm-kernel

Add the new hcall HVC_CALL_FUNC that allows execution of a function at EL2.
During CPU reset the CPU must be brought to the exception level it had on
entry to the kernel.  The HVC_CALL_FUNC hcall will provide the mechanism
needed for this exception level switch.
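A userspace model of the HVC_CALL_FUNC calling convention (an illustrative sketch, not kernel code): the caller puts a function address in x0 and arguments in x1-x3, and the hyp stub moves x1-x3 down into x0-x2 before branching, so the called function sees an ordinary three-argument call:

```c
/* Model of the register shuffle the hyp stub performs for
 * HVC_CALL_FUNC before branching to the function in x0. */
typedef unsigned long (*el2_func_t)(unsigned long, unsigned long,
				    unsigned long);

static unsigned long hvc_call_func_model(unsigned long x0, unsigned long x1,
					 unsigned long x2, unsigned long x3)
{
	el2_func_t fn = (el2_func_t)x0;

	return fn(x1, x2, x3);	/* x1-x3 become the callee's arguments */
}

/* Example callee, standing in for a routine reached at its EL2 address. */
static unsigned long sum3(unsigned long a, unsigned long b, unsigned long c)
{
	return a + b + c;
}
```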

Signed-off-by: Geoff Levand <geoff@infradead.org>
---
 arch/arm64/include/asm/virt.h | 13 +++++++++++++
 arch/arm64/kernel/hyp-stub.S  | 17 ++++++++++++++---
 2 files changed, 27 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
index 99c319c..4f23a48 100644
--- a/arch/arm64/include/asm/virt.h
+++ b/arch/arm64/include/asm/virt.h
@@ -41,6 +41,19 @@
 
 #define HVC_CALL_HYP 3
 
+/*
+ * HVC_CALL_FUNC - Execute a function at EL2.
+ *
+ * @x0: Physical address of the function to be executed.
+ * @x1: Passed as the first argument to the function.
+ * @x2: Passed as the second argument to the function.
+ * @x3: Passed as the third argument to the function.
+ *
+ * The called function must preserve the contents of register x18.
+ */
+
+#define HVC_CALL_FUNC 4
+
 #ifndef __ASSEMBLY__
 
 /*
diff --git a/arch/arm64/kernel/hyp-stub.S b/arch/arm64/kernel/hyp-stub.S
index e3db3fd..b5d36e7 100644
--- a/arch/arm64/kernel/hyp-stub.S
+++ b/arch/arm64/kernel/hyp-stub.S
@@ -66,9 +66,20 @@ el1_sync:
 	mrs	x0, vbar_el2
 	b	2f
 
-1:	cmp	x18, #HVC_SET_VECTORS
-	b.ne	2f
-	msr	vbar_el2, x0
+1:	cmp     x18, #HVC_SET_VECTORS
+	b.ne    1f
+	msr     vbar_el2, x0
+	b       2f
+
+1:	cmp     x18, #HVC_CALL_FUNC
+	b.ne    2f
+	mov     x18, lr
+	mov     lr, x0
+	mov     x0, x1
+	mov     x1, x2
+	mov     x2, x3
+	blr     lr
+	mov     lr, x18
 
 2:	eret
 ENDPROC(el1_sync)
-- 
2.1.0


* [PATCH 5/8] arm64/kexec: Add core kexec support
From: Geoff Levand @ 2015-01-17  0:23 UTC (permalink / raw)
  To: linux-arm-kernel

Add three new files, kexec.h, machine_kexec.c and relocate_kernel.S, to the
arm64 architecture to support the kexec re-boot mechanism (CONFIG_KEXEC) on
arm64 platforms.

Signed-off-by: Geoff Levand <geoff@infradead.org>
---
 arch/arm64/Kconfig                  |   9 ++
 arch/arm64/include/asm/kexec.h      |  47 +++++++++++
 arch/arm64/kernel/Makefile          |   1 +
 arch/arm64/kernel/machine_kexec.c   | 155 ++++++++++++++++++++++++++++++++++
 arch/arm64/kernel/relocate_kernel.S | 160 ++++++++++++++++++++++++++++++++++++
 include/uapi/linux/kexec.h          |   1 +
 6 files changed, 373 insertions(+)
 create mode 100644 arch/arm64/include/asm/kexec.h
 create mode 100644 arch/arm64/kernel/machine_kexec.c
 create mode 100644 arch/arm64/kernel/relocate_kernel.S

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index b1f9a20..d9eb9cd 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -474,6 +474,15 @@ config SECCOMP
 	  and the task is only allowed to execute a few safe syscalls
 	  defined by each seccomp mode.
 
+config KEXEC
+	depends on (!SMP || PM_SLEEP_SMP)
+	bool "kexec system call"
+	---help---
+	  kexec is a system call that implements the ability to shutdown your
+	  current kernel, and to start another kernel.  It is like a reboot
+	  but it is independent of the system firmware.   And like a reboot
+	  you can start any kernel with it, not just Linux.
+
 config XEN_DOM0
 	def_bool y
 	depends on XEN
diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h
new file mode 100644
index 0000000..e7bd7ab
--- /dev/null
+++ b/arch/arm64/include/asm/kexec.h
@@ -0,0 +1,47 @@
+/*
+ * kexec for arm64
+ *
+ * Copyright (C) Linaro.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#if !defined(_ARM64_KEXEC_H)
+#define _ARM64_KEXEC_H
+
+/* Maximum physical address we can use pages from */
+
+#define KEXEC_SOURCE_MEMORY_LIMIT (-1UL)
+
+/* Maximum address we can reach in physical address mode */
+
+#define KEXEC_DESTINATION_MEMORY_LIMIT (-1UL)
+
+/* Maximum address we can use for the control code buffer */
+
+#define KEXEC_CONTROL_MEMORY_LIMIT (-1UL)
+
+#define KEXEC_CONTROL_PAGE_SIZE	4096
+
+#define KEXEC_ARCH KEXEC_ARCH_ARM64
+
+#if !defined(__ASSEMBLY__)
+
+/**
+ * crash_setup_regs() - save registers for the panic kernel
+ *
+ * @newregs: registers are saved here
+ * @oldregs: registers to be saved (may be %NULL)
+ */
+
+static inline void crash_setup_regs(struct pt_regs *newregs,
+				    struct pt_regs *oldregs)
+{
+	/* Empty routine needed to avoid build errors. */
+}
+
+#endif /* !defined(__ASSEMBLY__) */
+
+#endif
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 77d3d95..eff1625 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -34,6 +34,7 @@ arm64-obj-$(CONFIG_KGDB)		+= kgdb.o
 arm64-obj-$(CONFIG_EFI)			+= efi.o efi-stub.o efi-entry.o
 arm64-obj-$(CONFIG_PCI)			+= pci.o
 arm64-obj-$(CONFIG_ARMV8_DEPRECATED)	+= armv8_deprecated.o
+arm64-obj-$(CONFIG_KEXEC)		+= machine_kexec.o relocate_kernel.o
 
 obj-y					+= $(arm64-obj-y) vdso/
 obj-m					+= $(arm64-obj-m)
diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
new file mode 100644
index 0000000..b0e5d76
--- /dev/null
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -0,0 +1,155 @@
+/*
+ * kexec for arm64
+ *
+ * Copyright (C) Linaro.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/kexec.h>
+#include <linux/of_fdt.h>
+#include <linux/slab.h>
+#include <linux/uaccess.h>
+
+#include <asm/cacheflush.h>
+#include <asm/system_misc.h>
+
+/* Global variables for the relocate_kernel routine. */
+extern const unsigned char relocate_new_kernel[];
+extern const unsigned long relocate_new_kernel_size;
+extern unsigned long arm64_kexec_dtb_addr;
+extern unsigned long arm64_kexec_kimage_head;
+extern unsigned long arm64_kexec_kimage_start;
+
+/**
+ * kexec_is_dtb - Helper routine to check the device tree header signature.
+ */
+static bool kexec_is_dtb(const void *dtb)
+{
+	__be32 magic;
+
+	return get_user(magic, (__be32 *)dtb) ? false :
+		(be32_to_cpu(magic) == OF_DT_HEADER);
+}
+
+/**
+ * kexec_find_dtb_seg - Helper routine to find the dtb segment.
+ */
+static const struct kexec_segment *kexec_find_dtb_seg(
+	const struct kimage *image)
+{
+	int i;
+
+	for (i = 0; i < image->nr_segments; i++) {
+		if (kexec_is_dtb(image->segment[i].buf))
+			return &image->segment[i];
+	}
+
+	return NULL;
+}
+
+void machine_kexec_cleanup(struct kimage *image)
+{
+	/* Empty routine needed to avoid build errors. */
+}
+
+/**
+ * machine_kexec_prepare - Prepare for a kexec reboot.
+ *
+ * Called from the core kexec code when a kernel image is loaded.
+ */
+int machine_kexec_prepare(struct kimage *image)
+{
+	const struct kexec_segment *dtb_seg = kexec_find_dtb_seg(image);
+
+	arm64_kexec_dtb_addr = dtb_seg ? dtb_seg->mem : 0;
+	arm64_kexec_kimage_start = image->start;
+
+	return 0;
+}
+
+/**
+ * kexec_list_flush - Helper to flush the kimage list to PoC.
+ */
+static void kexec_list_flush(unsigned long kimage_head)
+{
+	void *dest;
+	unsigned long *entry;
+
+	for (entry = &kimage_head, dest = NULL; ; entry++) {
+		unsigned int flag = *entry &
+			(IND_DESTINATION | IND_INDIRECTION | IND_DONE |
+			IND_SOURCE);
+		void *addr = phys_to_virt(*entry & PAGE_MASK);
+
+		switch (flag) {
+		case IND_INDIRECTION:
+			entry = (unsigned long *)addr - 1;
+			__flush_dcache_area(addr, PAGE_SIZE);
+			break;
+		case IND_DESTINATION:
+			dest = addr;
+			break;
+		case IND_SOURCE:
+			__flush_dcache_area(addr, PAGE_SIZE);
+			dest += PAGE_SIZE;
+			break;
+		case IND_DONE:
+			return;
+		default:
+			BUG();
+		}
+	}
+}
+
+/**
+ * machine_kexec - Do the kexec reboot.
+ *
+ * Called from the core kexec code for a sys_reboot with LINUX_REBOOT_CMD_KEXEC.
+ */
+void machine_kexec(struct kimage *image)
+{
+	phys_addr_t reboot_code_buffer_phys;
+	void *reboot_code_buffer;
+
+	BUG_ON(num_online_cpus() > 1);
+
+	arm64_kexec_kimage_head = image->head;
+
+	reboot_code_buffer_phys = page_to_phys(image->control_code_page);
+	reboot_code_buffer = phys_to_virt(reboot_code_buffer_phys);
+
+	/*
+	 * Copy relocate_new_kernel to the reboot_code_buffer for use
+	 * after the kernel is shut down.
+	 */
+	memcpy(reboot_code_buffer, relocate_new_kernel,
+		relocate_new_kernel_size);
+
+	/* Flush the reboot_code_buffer in preparation for its execution. */
+	__flush_dcache_area(reboot_code_buffer, relocate_new_kernel_size);
+
+	/* Flush the kimage list. */
+	kexec_list_flush(image->head);
+
+	pr_info("Bye!\n");
+
+	/* Disable all DAIF exceptions. */
+	asm volatile ("msr daifset, #0xf" : : : "memory");
+
+	/*
+	 * soft_restart() will shutdown the MMU, disable data caches, then
+	 * transfer control to the reboot_code_buffer which contains a copy of
+	 * the relocate_new_kernel routine.  relocate_new_kernel will use
+	 * physical addressing to relocate the new kernel to its final position
+	 * and then will transfer control to the entry point of the new kernel.
+	 */
+	soft_restart(reboot_code_buffer_phys);
+}
+
+void machine_crash_shutdown(struct pt_regs *regs)
+{
+	/* Empty routine needed to avoid build errors. */
+}
diff --git a/arch/arm64/kernel/relocate_kernel.S b/arch/arm64/kernel/relocate_kernel.S
new file mode 100644
index 0000000..1c1514d
--- /dev/null
+++ b/arch/arm64/kernel/relocate_kernel.S
@@ -0,0 +1,160 @@
+/*
+ * kexec for arm64
+ *
+ * Copyright (C) Linaro.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <asm/assembler.h>
+#include <asm/kexec.h>
+#include <asm/memory.h>
+#include <asm/page.h>
+#include <asm/proc-macros.S>
+
+/* The list entry flags. */
+
+#define IND_DESTINATION_BIT 0
+#define IND_INDIRECTION_BIT 1
+#define IND_DONE_BIT        2
+#define IND_SOURCE_BIT      3
+
+/*
+ * relocate_new_kernel - Put a 2nd stage kernel image in place and boot it.
+ *
+ * The memory that the old kernel occupies may be overwritten when copying the
+ * new image to its final location.  To ensure that the relocate_new_kernel
+ * routine which does that copy is not overwritten, all code and data needed
+ * by relocate_new_kernel must be between the symbols relocate_new_kernel and
+ * relocate_new_kernel_end.  The machine_kexec() routine will copy
+ * relocate_new_kernel to the kexec control_code_page, a special page which
+ * has been set up to be preserved during the copy operation.
+ */
+.globl relocate_new_kernel
+relocate_new_kernel:
+
+	/* Setup the list loop variables. */
+	ldr	x18, arm64_kexec_kimage_head	/* x18 = list entry */
+	dcache_line_size x17, x0		/* x17 = dcache line size */
+	mov	x16, xzr			/* x16 = segment start */
+	mov	x15, xzr			/* x15 = entry ptr */
+	mov	x14, xzr			/* x14 = copy dest */
+
+	/* Check if the new image needs relocation. */
+	cbz	x18, .Ldone
+	tbnz	x18, IND_DONE_BIT, .Ldone
+
+.Lloop:
+	and	x13, x18, PAGE_MASK		/* x13 = addr */
+
+	/* Test the entry flags. */
+.Ltest_source:
+	tbz	x18, IND_SOURCE_BIT, .Ltest_indirection
+
+	mov x20, x14				/*  x20 = copy dest */
+	mov x21, x13				/*  x21 = copy src */
+
+	/* Invalidate dest page to PoC. */
+	mov	x0, x20
+	add	x19, x0, #PAGE_SIZE
+	sub	x1, x17, #1
+	bic	x0, x0, x1
+1:	dc	ivac, x0
+	add	x0, x0, x17
+	cmp	x0, x19
+	b.lo	1b
+	dsb	sy
+
+	/* Copy page. */
+1:	ldp	x22, x23, [x21]
+	ldp	x24, x25, [x21, #16]
+	ldp	x26, x27, [x21, #32]
+	ldp	x28, x29, [x21, #48]
+	add	x21, x21, #64
+	stnp	x22, x23, [x20]
+	stnp	x24, x25, [x20, #16]
+	stnp	x26, x27, [x20, #32]
+	stnp	x28, x29, [x20, #48]
+	add	x20, x20, #64
+	tst	x21, #(PAGE_SIZE - 1)
+	b.ne	1b
+
+	/* dest += PAGE_SIZE */
+	add	x14, x14, PAGE_SIZE
+	b	.Lnext
+
+.Ltest_indirection:
+	tbz	x18, IND_INDIRECTION_BIT, .Ltest_destination
+
+	/* ptr = addr */
+	mov	x15, x13
+	b	.Lnext
+
+.Ltest_destination:
+	tbz	x18, IND_DESTINATION_BIT, .Lnext
+
+	mov	x16, x13
+
+	/* dest = addr */
+	mov	x14, x13
+
+.Lnext:
+	/* entry = *ptr++ */
+	ldr	x18, [x15], #8
+
+	/* while (!(entry & DONE)) */
+	tbz	x18, IND_DONE_BIT, .Lloop
+
+.Ldone:
+	dsb	sy
+	isb
+	ic	ialluis
+	dsb	sy
+	isb
+
+	/* Start new image. */
+	ldr	x4, arm64_kexec_kimage_start
+	ldr	x0, arm64_kexec_dtb_addr
+	mov	x1, xzr
+	mov	x2, xzr
+	mov	x3, xzr
+	br	x4
+
+.align 3	/* To keep the 64-bit values below naturally aligned. */
+
+/* The machine_kexec routines set these variables. */
+
+/*
+ * arm64_kexec_kimage_start - Copy of image->start, the entry point of the new
+ * image.
+ */
+.globl arm64_kexec_kimage_start
+arm64_kexec_kimage_start:
+	.quad	0x0
+
+/*
+ * arm64_kexec_dtb_addr - Physical address of a device tree.
+ */
+.globl arm64_kexec_dtb_addr
+arm64_kexec_dtb_addr:
+	.quad	0x0
+
+/*
+ * arm64_kexec_kimage_head - Copy of image->head, the list of kimage entries.
+ */
+.globl arm64_kexec_kimage_head
+arm64_kexec_kimage_head:
+	.quad	0x0
+
+.Lrelocate_new_kernel_end:
+
+/*
+ * relocate_new_kernel_size - Number of bytes to copy to the control_code_page.
+ */
+.globl relocate_new_kernel_size
+relocate_new_kernel_size:
+	.quad .Lrelocate_new_kernel_end - relocate_new_kernel
+
+.org	KEXEC_CONTROL_PAGE_SIZE
diff --git a/include/uapi/linux/kexec.h b/include/uapi/linux/kexec.h
index 6925f5b..04626b9 100644
--- a/include/uapi/linux/kexec.h
+++ b/include/uapi/linux/kexec.h
@@ -39,6 +39,7 @@
 #define KEXEC_ARCH_SH      (42 << 16)
 #define KEXEC_ARCH_MIPS_LE (10 << 16)
 #define KEXEC_ARCH_MIPS    ( 8 << 16)
+#define KEXEC_ARCH_ARM64   (183 << 16)
 
 /* The artificial cap on the number of segments passed to kexec_load. */
 #define KEXEC_SEGMENT_MAX 16
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 100+ messages in thread

* [PATCH 6/8] arm64/kexec: Add pr_devel output
@ 2015-01-17  0:23     ` Geoff Levand
  0 siblings, 0 replies; 100+ messages in thread
From: Geoff Levand @ 2015-01-17  0:23 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon
  Cc: marc.zyngier, Grant Likely, kexec, linux-arm-kernel, christoffer.dall

To aid in debugging kexec problems, or when adding new functionality to kexec,
add a new routine, kexec_image_info(), and several inline pr_devel statements.

Signed-off-by: Geoff Levand <geoff@infradead.org>
---
 arch/arm64/kernel/machine_kexec.c | 54 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 54 insertions(+)

diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index b0e5d76..3d84759 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -35,6 +35,37 @@ static bool kexec_is_dtb(const void *dtb)
 }
 
 /**
+ * kexec_image_info - For debugging output.
+ */
+#define kexec_image_info(_i) _kexec_image_info(__func__, __LINE__, _i)
+static void _kexec_image_info(const char *func, int line,
+	const struct kimage *image)
+{
+	unsigned long i;
+
+#if !defined(DEBUG)
+	return;
+#endif
+	pr_devel("%s:%d:\n", func, line);
+	pr_devel("  kexec image info:\n");
+	pr_devel("    type:        %d\n", image->type);
+	pr_devel("    start:       %lx\n", image->start);
+	pr_devel("    head:        %lx\n", image->head);
+	pr_devel("    nr_segments: %lu\n", image->nr_segments);
+
+	for (i = 0; i < image->nr_segments; i++) {
+		pr_devel("      segment[%lu]: %016lx - %016lx, %lx bytes, %lu pages%s\n",
+			i,
+			image->segment[i].mem,
+			image->segment[i].mem + image->segment[i].memsz,
+			image->segment[i].memsz,
+			image->segment[i].memsz /  PAGE_SIZE,
+			(kexec_is_dtb(image->segment[i].buf) ?
+				", dtb segment" : ""));
+	}
+}
+
+/**
  * kexec_find_dtb_seg - Helper routine to find the dtb segment.
  */
 static const struct kexec_segment *kexec_find_dtb_seg(
@@ -67,6 +98,8 @@ int machine_kexec_prepare(struct kimage *image)
 	arm64_kexec_dtb_addr = dtb_seg ? dtb_seg->mem : 0;
 	arm64_kexec_kimage_start = image->start;
 
+	kexec_image_info(image);
+
 	return 0;
 }
 
@@ -121,6 +154,27 @@ void machine_kexec(struct kimage *image)
 	reboot_code_buffer_phys = page_to_phys(image->control_code_page);
 	reboot_code_buffer = phys_to_virt(reboot_code_buffer_phys);
 
+	kexec_image_info(image);
+
+	pr_devel("%s:%d: control_code_page:        %p\n", __func__, __LINE__,
+		image->control_code_page);
+	pr_devel("%s:%d: reboot_code_buffer_phys:  %pa\n", __func__, __LINE__,
+		&reboot_code_buffer_phys);
+	pr_devel("%s:%d: reboot_code_buffer:       %p\n", __func__, __LINE__,
+		reboot_code_buffer);
+	pr_devel("%s:%d: relocate_new_kernel:      %p\n", __func__, __LINE__,
+		relocate_new_kernel);
+	pr_devel("%s:%d: relocate_new_kernel_size: 0x%lx(%lu) bytes\n",
+		__func__, __LINE__, relocate_new_kernel_size,
+		relocate_new_kernel_size);
+
+	pr_devel("%s:%d: kexec_dtb_addr:           %lx\n", __func__, __LINE__,
+		arm64_kexec_dtb_addr);
+	pr_devel("%s:%d: kexec_kimage_head:        %lx\n", __func__, __LINE__,
+		arm64_kexec_kimage_head);
+	pr_devel("%s:%d: kexec_kimage_start:       %lx\n", __func__, __LINE__,
+		arm64_kexec_kimage_start);
+
 	/*
 	 * Copy relocate_new_kernel to the reboot_code_buffer for use
 	 * after the kernel is shut down.
-- 
2.1.0



_______________________________________________
kexec mailing list
kexec@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/kexec


* [PATCH 2/8] arm64: Convert hcalls to use ISS field
@ 2015-01-17  0:23     ` Geoff Levand
  0 siblings, 0 replies; 100+ messages in thread
From: Geoff Levand @ 2015-01-17  0:23 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon
  Cc: marc.zyngier, Grant Likely, kexec, linux-arm-kernel, christoffer.dall

To allow for additional hcalls to be defined and to make the arm64 hcall API
more consistent across exception vector routines, change the hcall implementations
to use the ISS field of the ESR_EL2 register to specify the hcall type.

The existing arm64 hcall implementations are limited in that they only allow
for two distinct hcalls; with the x0 register either zero, or not zero.  Also,
the API of the hyp-stub exception vector routines and the KVM exception vector
routines differ; hyp-stub uses a non-zero value in x0 to implement
__hyp_set_vectors, whereas KVM uses it to implement kvm_call_hyp.

Define three new preprocessor macros, HVC_GET_VECTORS, HVC_SET_VECTORS and
HVC_CALL_HYP, to be used as hcall type specifiers, and convert the
existing __hyp_get_vectors(), __hyp_set_vectors() and kvm_call_hyp() routines
to use these new macros when executing an HVC call.  Also change the
corresponding hyp-stub and KVM el1_sync exception vector routines to use these
new macros.

Signed-off-by: Geoff Levand <geoff@infradead.org>
---
 arch/arm64/include/asm/virt.h | 20 ++++++++++++++++++++
 arch/arm64/kernel/hyp-stub.S  | 34 ++++++++++++++++++++++------------
 arch/arm64/kvm/hyp.S          | 18 +++++++++++-------
 3 files changed, 53 insertions(+), 19 deletions(-)

diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
index 7a5df52..99c319c 100644
--- a/arch/arm64/include/asm/virt.h
+++ b/arch/arm64/include/asm/virt.h
@@ -21,6 +21,26 @@
 #define BOOT_CPU_MODE_EL1	(0xe11)
 #define BOOT_CPU_MODE_EL2	(0xe12)
 
+/*
+ * HVC_GET_VECTORS - Return the value of the vbar_el2 register.
+ */
+
+#define HVC_GET_VECTORS 1
+
+/*
+ * HVC_SET_VECTORS - Set the value of the vbar_el2 register.
+ *
+ * @x0: Physical address of the new vector table.
+ */
+
+#define HVC_SET_VECTORS 2
+
+/*
+ * HVC_CALL_HYP - Execute a hyp routine.
+ */
+
+#define HVC_CALL_HYP 3
+
 #ifndef __ASSEMBLY__
 
 /*
diff --git a/arch/arm64/kernel/hyp-stub.S b/arch/arm64/kernel/hyp-stub.S
index a272f33..e3db3fd 100644
--- a/arch/arm64/kernel/hyp-stub.S
+++ b/arch/arm64/kernel/hyp-stub.S
@@ -22,6 +22,7 @@
 #include <linux/irqchip/arm-gic-v3.h>
 
 #include <asm/assembler.h>
+#include <asm/kvm_arm.h>
 #include <asm/ptrace.h>
 #include <asm/virt.h>
 
@@ -53,14 +54,22 @@ ENDPROC(__hyp_stub_vectors)
 	.align 11
 
 el1_sync:
-	mrs	x1, esr_el2
-	lsr	x1, x1, #26
-	cmp	x1, #0x16
-	b.ne	2f				// Not an HVC trap
-	cbz	x0, 1f
-	msr	vbar_el2, x0			// Set vbar_el2
+	mrs	x18, esr_el2
+	lsr	x17, x18, #ESR_ELx_EC_SHIFT
+	and	x18, x18, #ESR_ELx_ISS_MASK
+
+	cmp     x17, #ESR_ELx_EC_HVC64
+	b.ne    2f				// Not an HVC trap
+
+	cmp	x18, #HVC_GET_VECTORS
+	b.ne	1f
+	mrs	x0, vbar_el2
 	b	2f
-1:	mrs	x0, vbar_el2			// Return vbar_el2
+
+1:	cmp	x18, #HVC_SET_VECTORS
+	b.ne	2f
+	msr	vbar_el2, x0
+
 2:	eret
 ENDPROC(el1_sync)
 
@@ -100,11 +109,12 @@ ENDPROC(\label)
  * initialisation entry point.
  */
 
-ENTRY(__hyp_get_vectors)
-	mov	x0, xzr
-	// fall through
 ENTRY(__hyp_set_vectors)
-	hvc	#0
+	hvc	#HVC_SET_VECTORS
 	ret
-ENDPROC(__hyp_get_vectors)
 ENDPROC(__hyp_set_vectors)
+
+ENTRY(__hyp_get_vectors)
+	hvc	#HVC_GET_VECTORS
+	ret
+ENDPROC(__hyp_get_vectors)
diff --git a/arch/arm64/kvm/hyp.S b/arch/arm64/kvm/hyp.S
index c0d8202..1916c89 100644
--- a/arch/arm64/kvm/hyp.S
+++ b/arch/arm64/kvm/hyp.S
@@ -27,6 +27,7 @@
 #include <asm/kvm_asm.h>
 #include <asm/kvm_mmu.h>
 #include <asm/memory.h>
+#include <asm/virt.h>
 
 #define CPU_GP_REG_OFFSET(x)	(CPU_GP_REGS + x)
 #define CPU_XREG_OFFSET(x)	CPU_GP_REG_OFFSET(CPU_USER_PT_REGS + 8*x)
@@ -1106,12 +1107,9 @@ __hyp_panic_str:
  * in Hyp mode (see init_hyp_mode in arch/arm/kvm/arm.c).  Return values are
  * passed in r0 and r1.
  *
- * A function pointer with a value of 0 has a special meaning, and is
- * used to implement __hyp_get_vectors in the same way as in
- * arch/arm64/kernel/hyp_stub.S.
  */
 ENTRY(kvm_call_hyp)
-	hvc	#0
+	hvc	#HVC_CALL_HYP
 	ret
 ENDPROC(kvm_call_hyp)
 
@@ -1142,6 +1140,7 @@ el1_sync:					// Guest trapped into EL2
 
 	mrs	x1, esr_el2
 	lsr	x2, x1, #ESR_ELx_EC_SHIFT
+	and	x0, x1, #ESR_ELx_ISS_MASK
 
 	cmp	x2, #ESR_ELx_EC_HVC64
 	b.ne	el1_trap
@@ -1150,15 +1149,19 @@ el1_sync:					// Guest trapped into EL2
 	cbnz	x3, el1_trap			// called HVC
 
 	/* Here, we're pretty sure the host called HVC. */
+	mov	x18, x0
 	pop	x2, x3
 	pop	x0, x1
 
-	/* Check for __hyp_get_vectors */
-	cbnz	x0, 1f
+	cmp	x18, #HVC_GET_VECTORS
+	b.ne	1f
 	mrs	x0, vbar_el2
 	b	2f
 
-1:	push	lr, xzr
+1:	cmp	x18, #HVC_CALL_HYP
+	b.ne	2f
+
+	push	lr, xzr
 
 	/*
 	 * Compute the function address in EL2, and shuffle the parameters.
@@ -1171,6 +1174,7 @@ el1_sync:					// Guest trapped into EL2
 	blr	lr
 
 	pop	lr, xzr
+
 2:	eret
 
 el1_trap:
-- 
2.1.0





* [PATCH 0/8] arm64 kexec kernel patches V7
@ 2015-01-17  0:23   ` Geoff Levand
  0 siblings, 0 replies; 100+ messages in thread
From: Geoff Levand @ 2015-01-17  0:23 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon
  Cc: Ard Biesheuvel, marc.zyngier, kexec, Deepak Saxena,
	christoffer.dall, Grant Likely, linux-arm-kernel

Hi All,

This series adds the core support for kexec re-boots on arm64.  This v7 of the
series is mainly a rebase onto the latest arm64 for-next/core branch
(v3.19-rc4), plus a few very minor changes requested for v6.

I have tested with the ARM VE fast model, the ARM Base model and the ARM
Foundation model with various kernel config options for both the first and
second stage kernels.

To load a second stage kernel and execute a kexec re-boot on arm64, my patches
to kexec-tools [2], which have not yet been merged upstream, are needed.

Patch 1 here moves proc-macros.S from arm64/mm to arm64/include/asm so that the
dcache_line_size macro it defines can be used by kexec's relocate kernel
routine.

Patches 2-4 rework the arm64 hcall mechanism to give the arm64 soft_restart()
routine the ability to switch exception levels from EL1 to EL2 for kernels that
were entered in EL2.

Patches 5-8 add the actual kexec support.

Please consider all patches for inclusion.

Note that the location of my development repositories has changed:

[1]  https://git.kernel.org/cgit/linux/kernel/git/geoff/linux-kexec.git
[2]  https://git.kernel.org/cgit/linux/kernel/git/geoff/kexec-tools.git

Several things are known to have problems on kexec re-boot:

spin-table
----------

PROBLEM: The spin-table enable method does not implement all the methods needed
for CPU hot-plug, so the first stage kernel cannot be shutdown properly.

WORK-AROUND: Upgrade to system firmware that provides PSCI enable method
support, OR build the first stage kernel with CONFIG_SMP=n, OR pass 'maxcpus=1'
on the first stage kernel command line.

FIX: Upgrade system firmware to provide PSCI enable method support or add
missing spin-table support to the kernel.

KVM
---

PROBLEM: KVM acquires hypervisor resources on startup, but does not free those
resources on shutdown, so the first stage kernel cannot be shutdown properly
when using kexec.

WORK-AROUND:  Build the first stage kernel with CONFIG_KVM=n.

FIX: Fix KVM to support soft_restart().  KVM needs to restore default exception
vectors, etc.

UEFI
----

PROBLEM: UEFI does not manage its runtime services virtual mappings in a way
that is compatible with a kexec re-boot, so the second stage kernel hangs on
boot-up.

WORK-AROUND:  Disable UEFI in firmware.

FIX: Ard Biesheuvel has done work to fix this.  Basic kexec re-boot has been
tested and works.  More comprehensive testing is needed.

/memreserve/
------------

PROBLEM: The use of device tree /memreserve/ entries is not compatible with
kexec re-boot.  The second stage kernel will use the reserved regions and the
system will become unstable.

WORK-AROUND: Pass a user specified DTB using the kexec --dtb option.

FIX: An interface to expose a binary device tree to user space has been
proposed.  User kexec utilities will need to be updated to add support for this
new interface.

ACPI
----

PROBLEM: The kernel for ACPI based systems does not export a device tree to the
standard user space location of 'proc/device-tree'.  Current applications
expect to access device tree information from this standard location.

WORK-AROUND: Disable ACPI in firmware, OR pass 'acpi=off' on the first stage
kernel command line, OR pass a user specified DTB using the kexec --dtb option.

FIX: An interface to expose a binary device tree to user space has been
proposed.  User kexec utilities will need to be updated to add support for this
new interface.

----------------------------------------------------------------
The following changes since commit 6083fe74b7bfffc2c7be8c711596608bda0cda6e:

  arm64: respect mem= for EFI (2015-01-16 16:21:58 +0000)

are available in the git repository at:

  git://git.kernel.org/pub/scm/linux/kernel/git/geoff/linux-kexec.git kexec-v7

for you to fetch changes up to 7db998d9533d5efab816edfad3d7010fc2f7e62c:

  arm64/kexec: Enable kexec in the arm64 defconfig (2015-01-16 14:55:28 -0800)

----------------------------------------------------------------
Geoff Levand (8):
      arm64: Move proc-macros.S to include/asm
      arm64: Convert hcalls to use ISS field
      arm64: Add new hcall HVC_CALL_FUNC
      arm64: Add EL2 switch to soft_restart
      arm64/kexec: Add core kexec support
      arm64/kexec: Add pr_devel output
      arm64/kexec: Add checks for KVM
      arm64/kexec: Enable kexec in the arm64 defconfig

 arch/arm64/Kconfig                           |   9 ++
 arch/arm64/configs/defconfig                 |   1 +
 arch/arm64/include/asm/kexec.h               |  47 ++++++
 arch/arm64/include/asm/proc-fns.h            |   4 +-
 arch/arm64/{mm => include/asm}/proc-macros.S |   0
 arch/arm64/include/asm/virt.h                |  33 ++++
 arch/arm64/kernel/Makefile                   |   1 +
 arch/arm64/kernel/hyp-stub.S                 |  45 ++++--
 arch/arm64/kernel/machine_kexec.c            | 219 +++++++++++++++++++++++++++
 arch/arm64/kernel/process.c                  |  10 +-
 arch/arm64/kernel/relocate_kernel.S          | 160 +++++++++++++++++++
 arch/arm64/kvm/hyp.S                         |  18 ++-
 arch/arm64/mm/cache.S                        |   3 +-
 arch/arm64/mm/proc.S                         |  50 ++++--
 include/uapi/linux/kexec.h                   |   1 +
 15 files changed, 563 insertions(+), 38 deletions(-)
 create mode 100644 arch/arm64/include/asm/kexec.h
 rename arch/arm64/{mm => include/asm}/proc-macros.S (100%)
 create mode 100644 arch/arm64/kernel/machine_kexec.c
 create mode 100644 arch/arm64/kernel/relocate_kernel.S

-- 
2.1.0




* [PATCH 7/8] arm64/kexec: Add checks for KVM
@ 2015-01-17  0:23     ` Geoff Levand
  0 siblings, 0 replies; 100+ messages in thread
From: Geoff Levand @ 2015-01-17  0:23 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon
  Cc: marc.zyngier, Grant Likely, kexec, linux-arm-kernel, christoffer.dall

Add runtime checks that fail the arm64 kexec syscall for situations that would
result in system instability due to problems in the KVM kernel support.
These checks should be removed when the KVM problems are resolved.

Signed-off-by: Geoff Levand <geoff@infradead.org>
---
 arch/arm64/kernel/machine_kexec.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index 3d84759..a36459d 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -16,6 +16,9 @@
 #include <asm/cacheflush.h>
 #include <asm/system_misc.h>
 
+/* TODO: Remove this include when KVM can support a kexec reboot. */
+#include <asm/virt.h>
+
 /* Global variables for the relocate_kernel routine. */
 extern const unsigned char relocate_new_kernel[];
 extern const unsigned long relocate_new_kernel_size;
@@ -100,6 +103,13 @@ int machine_kexec_prepare(struct kimage *image)
 
 	kexec_image_info(image);
 
+	/* TODO: Remove this message when KVM can support a kexec reboot. */
+	if (IS_ENABLED(CONFIG_KVM) && is_hyp_mode_available()) {
+		pr_err("%s: Your kernel is configured with KVM support (CONFIG_KVM=y) which currently does not allow for kexec re-boot.\n",
+			__func__);
+		return -ENOSYS;
+	}
+
 	return 0;
 }
 
-- 
2.1.0





* [PATCH 1/8] arm64: Move proc-macros.S to include/asm
@ 2015-01-17  0:23     ` Geoff Levand
  0 siblings, 0 replies; 100+ messages in thread
From: Geoff Levand @ 2015-01-17  0:23 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon
  Cc: marc.zyngier, Grant Likely, kexec, linux-arm-kernel, christoffer.dall

To allow the assembler macros defined in proc-macros.S to be used outside
the mm code, move the proc-macros.S file from arch/arm64/mm/ to
arch/arm64/include/asm/ and fix up any preprocessor includes to use the new
file location.

Signed-off-by: Geoff Levand <geoff@infradead.org>
---
 arch/arm64/include/asm/proc-macros.S | 54 ++++++++++++++++++++++++++++++++++++
 arch/arm64/mm/cache.S                |  3 +-
 arch/arm64/mm/proc-macros.S          | 54 ------------------------------------
 arch/arm64/mm/proc.S                 |  3 +-
 4 files changed, 56 insertions(+), 58 deletions(-)
 create mode 100644 arch/arm64/include/asm/proc-macros.S
 delete mode 100644 arch/arm64/mm/proc-macros.S

diff --git a/arch/arm64/include/asm/proc-macros.S b/arch/arm64/include/asm/proc-macros.S
new file mode 100644
index 0000000..005d29e
--- /dev/null
+++ b/arch/arm64/include/asm/proc-macros.S
@@ -0,0 +1,54 @@
+/*
+ * Based on arch/arm/mm/proc-macros.S
+ *
+ * Copyright (C) 2012 ARM Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <asm/asm-offsets.h>
+#include <asm/thread_info.h>
+
+/*
+ * vma_vm_mm - get mm pointer from vma pointer (vma->vm_mm)
+ */
+	.macro	vma_vm_mm, rd, rn
+	ldr	\rd, [\rn, #VMA_VM_MM]
+	.endm
+
+/*
+ * mmid - get context id from mm pointer (mm->context.id)
+ */
+	.macro	mmid, rd, rn
+	ldr	\rd, [\rn, #MM_CONTEXT_ID]
+	.endm
+
+/*
+ * dcache_line_size - get the minimum D-cache line size from the CTR register.
+ */
+	.macro	dcache_line_size, reg, tmp
+	mrs	\tmp, ctr_el0			// read CTR
+	ubfm	\tmp, \tmp, #16, #19		// cache line size encoding
+	mov	\reg, #4			// bytes per word
+	lsl	\reg, \reg, \tmp		// actual cache line size
+	.endm
+
+/*
+ * icache_line_size - get the minimum I-cache line size from the CTR register.
+ */
+	.macro	icache_line_size, reg, tmp
+	mrs	\tmp, ctr_el0			// read CTR
+	and	\tmp, \tmp, #0xf		// cache line size encoding
+	mov	\reg, #4			// bytes per word
+	lsl	\reg, \reg, \tmp		// actual cache line size
+	.endm
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index 2560e1e..4fcfffa 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -23,8 +23,7 @@
 #include <asm/assembler.h>
 #include <asm/cpufeature.h>
 #include <asm/alternative-asm.h>
-
-#include "proc-macros.S"
+#include <asm/proc-macros.S>
 
 /*
  *	__flush_dcache_all()
diff --git a/arch/arm64/mm/proc-macros.S b/arch/arm64/mm/proc-macros.S
deleted file mode 100644
index 005d29e..0000000
--- a/arch/arm64/mm/proc-macros.S
+++ /dev/null
@@ -1,54 +0,0 @@
-/*
- * Based on arch/arm/mm/proc-macros.S
- *
- * Copyright (C) 2012 ARM Ltd.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program.  If not, see <http://www.gnu.org/licenses/>.
- */
-
-#include <asm/asm-offsets.h>
-#include <asm/thread_info.h>
-
-/*
- * vma_vm_mm - get mm pointer from vma pointer (vma->vm_mm)
- */
-	.macro	vma_vm_mm, rd, rn
-	ldr	\rd, [\rn, #VMA_VM_MM]
-	.endm
-
-/*
- * mmid - get context id from mm pointer (mm->context.id)
- */
-	.macro	mmid, rd, rn
-	ldr	\rd, [\rn, #MM_CONTEXT_ID]
-	.endm
-
-/*
- * dcache_line_size - get the minimum D-cache line size from the CTR register.
- */
-	.macro	dcache_line_size, reg, tmp
-	mrs	\tmp, ctr_el0			// read CTR
-	ubfm	\tmp, \tmp, #16, #19		// cache line size encoding
-	mov	\reg, #4			// bytes per word
-	lsl	\reg, \reg, \tmp		// actual cache line size
-	.endm
-
-/*
- * icache_line_size - get the minimum I-cache line size from the CTR register.
- */
-	.macro	icache_line_size, reg, tmp
-	mrs	\tmp, ctr_el0			// read CTR
-	and	\tmp, \tmp, #0xf		// cache line size encoding
-	mov	\reg, #4			// bytes per word
-	lsl	\reg, \reg, \tmp		// actual cache line size
-	.endm
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 4e778b1..c507e25 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -25,8 +25,7 @@
 #include <asm/hwcap.h>
 #include <asm/pgtable-hwdef.h>
 #include <asm/pgtable.h>
-
-#include "proc-macros.S"
+#include <asm/proc-macros.S>
 
 #ifdef CONFIG_ARM64_64K_PAGES
 #define TCR_TG_FLAGS	TCR_TG0_64K | TCR_TG1_64K
-- 
2.1.0





* [PATCH 4/8] arm64: Add EL2 switch to soft_restart
@ 2015-01-17  0:23     ` Geoff Levand
  0 siblings, 0 replies; 100+ messages in thread
From: Geoff Levand @ 2015-01-17  0:23 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon
  Cc: marc.zyngier, Grant Likely, kexec, linux-arm-kernel, christoffer.dall

When a CPU is reset, it needs to be put into the exception level it had when it
entered the kernel.  Update cpu_reset() to accept an argument, el2_switch, which
signals cpu_reset() to enter the soft reset address at EL2.  If el2_switch is
not set, the soft reset address will be entered at EL1.

Update cpu_soft_restart() and soft_restart() to pass the return of
is_hyp_mode_available() as the el2_switch value to cpu_reset().  Also update the
comments of cpu_reset(), cpu_soft_restart() and soft_restart() to reflect this
change.

Signed-off-by: Geoff Levand <geoff@infradead.org>
---
 arch/arm64/include/asm/proc-fns.h |  4 ++--
 arch/arm64/kernel/process.c       | 10 ++++++++-
 arch/arm64/mm/proc.S              | 47 +++++++++++++++++++++++++++++----------
 3 files changed, 46 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/include/asm/proc-fns.h b/arch/arm64/include/asm/proc-fns.h
index 9a8fd84..339394d 100644
--- a/arch/arm64/include/asm/proc-fns.h
+++ b/arch/arm64/include/asm/proc-fns.h
@@ -32,8 +32,8 @@ extern void cpu_cache_off(void);
 extern void cpu_do_idle(void);
 extern void cpu_do_switch_mm(unsigned long pgd_phys, struct mm_struct *mm);
 extern void cpu_reset(unsigned long addr) __attribute__((noreturn));
-void cpu_soft_restart(phys_addr_t cpu_reset,
-		unsigned long addr) __attribute__((noreturn));
+void cpu_soft_restart(phys_addr_t cpu_reset, unsigned long el2_switch,
+		      unsigned long addr) __attribute__((noreturn));
 extern void cpu_do_suspend(struct cpu_suspend_ctx *ptr);
 extern u64 cpu_do_resume(phys_addr_t ptr, u64 idmap_ttbr);
 
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index fde9923..371bbf1 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -50,6 +50,7 @@
 #include <asm/mmu_context.h>
 #include <asm/processor.h>
 #include <asm/stacktrace.h>
+#include <asm/virt.h>
 
 #ifdef CONFIG_CC_STACKPROTECTOR
 #include <linux/stackprotector.h>
@@ -60,7 +61,14 @@ EXPORT_SYMBOL(__stack_chk_guard);
 void soft_restart(unsigned long addr)
 {
 	setup_mm_for_reboot();
-	cpu_soft_restart(virt_to_phys(cpu_reset), addr);
+
+	/* TODO: Remove this conditional when KVM can support CPU restart. */
+	if (IS_ENABLED(CONFIG_KVM))
+		cpu_soft_restart(virt_to_phys(cpu_reset), 0, addr);
+	else
+		cpu_soft_restart(virt_to_phys(cpu_reset),
+				 is_hyp_mode_available(), addr);
+
 	/* Should never get here */
 	BUG();
 }
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index c507e25..b767032 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -26,6 +26,7 @@
 #include <asm/pgtable-hwdef.h>
 #include <asm/pgtable.h>
 #include <asm/proc-macros.S>
+#include <asm/virt.h>
 
 #ifdef CONFIG_ARM64_64K_PAGES
 #define TCR_TG_FLAGS	TCR_TG0_64K | TCR_TG1_64K
@@ -58,27 +59,48 @@ ENTRY(cpu_cache_off)
 ENDPROC(cpu_cache_off)
 
 /*
- *	cpu_reset(loc)
+ * cpu_reset(el2_switch, loc) - Helper for cpu_soft_restart.
  *
- *	Perform a soft reset of the system.  Put the CPU into the same state
- *	as it would be if it had been reset, and branch to what would be the
- *	reset vector. It must be executed with the flat identity mapping.
+ * @cpu_reset: Physical address of the cpu_reset routine.
+ * @el2_switch: Flag to indicate a switch to EL2 is needed.
+ * @addr: Location to jump to for soft reset.
  *
- *	- loc   - location to jump to for soft reset
+ * Put the CPU into the same state as it would be if it had been reset, and
+ * branch to what would be the reset vector. It must be executed with the
+ * flat identity mapping.
  */
+
 	.align	5
+
 ENTRY(cpu_reset)
-	mrs	x1, sctlr_el1
-	bic	x1, x1, #1
-	msr	sctlr_el1, x1			// disable the MMU
+	mrs	x2, sctlr_el1
+	bic	x2, x2, #1
+	msr	sctlr_el1, x2			// disable the MMU
 	isb
-	ret	x0
+
+	cbz	x0, 1f				// el2_switch?
+	mov	x0, x1
+	mov	x1, xzr
+	mov	x2, xzr
+	mov	x3, xzr
+	hvc	#HVC_CALL_FUNC			// no return
+
+1:	ret	x1
 ENDPROC(cpu_reset)
 
+/*
+ * cpu_soft_restart(cpu_reset, el2_switch, addr) - Perform a cpu soft reset.
+ *
+ * @cpu_reset: Physical address of the cpu_reset routine.
+ * @el2_switch: Flag to indicate a switch to EL2 is needed, passed to cpu_reset.
+ * @addr: Location to jump to for soft reset, passed to cpu_reset.
+ *
+ */
+
 ENTRY(cpu_soft_restart)
-	/* Save address of cpu_reset() and reset address */
-	mov	x19, x0
-	mov	x20, x1
+	mov	x19, x0				// cpu_reset
+	mov	x20, x1				// el2_switch
+	mov	x21, x2				// addr
 
 	/* Turn D-cache off */
 	bl	cpu_cache_off
@@ -87,6 +109,7 @@ ENTRY(cpu_soft_restart)
 	bl	flush_cache_all
 
 	mov	x0, x20
+	mov	x1, x21
 	ret	x19
 ENDPROC(cpu_soft_restart)
 
-- 
2.1.0




^ permalink raw reply related	[flat|nested] 100+ messages in thread

* [PATCH 8/8] arm64/kexec: Enable kexec in the arm64 defconfig
@ 2015-01-17  0:23     ` Geoff Levand
  0 siblings, 0 replies; 100+ messages in thread
From: Geoff Levand @ 2015-01-17  0:23 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon
  Cc: marc.zyngier, Grant Likely, kexec, linux-arm-kernel, christoffer.dall

Signed-off-by: Geoff Levand <geoff@infradead.org>
---
 arch/arm64/configs/defconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm64/configs/defconfig b/arch/arm64/configs/defconfig
index 5376d90..85285dc 100644
--- a/arch/arm64/configs/defconfig
+++ b/arch/arm64/configs/defconfig
@@ -42,6 +42,7 @@ CONFIG_PREEMPT=y
 CONFIG_KSM=y
 CONFIG_TRANSPARENT_HUGEPAGE=y
 CONFIG_CMA=y
+CONFIG_KEXEC=y
 CONFIG_CMDLINE="console=ttyAMA0"
 # CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
 CONFIG_COMPAT=y
-- 
2.1.0



^ permalink raw reply related	[flat|nested] 100+ messages in thread

* [PATCH 3/8] arm64: Add new hcall HVC_CALL_FUNC
@ 2015-01-17  0:23     ` Geoff Levand
  0 siblings, 0 replies; 100+ messages in thread
From: Geoff Levand @ 2015-01-17  0:23 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon
  Cc: marc.zyngier, Grant Likely, kexec, linux-arm-kernel, christoffer.dall

Add the new hcall HVC_CALL_FUNC that allows execution of a function at EL2.
During CPU reset the CPU must be brought to the exception level it had on
entry to the kernel.  The HVC_CALL_FUNC hcall will provide the mechanism
needed for this exception level switch.

Signed-off-by: Geoff Levand <geoff@infradead.org>
---
 arch/arm64/include/asm/virt.h | 13 +++++++++++++
 arch/arm64/kernel/hyp-stub.S  | 17 ++++++++++++++---
 2 files changed, 27 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
index 99c319c..4f23a48 100644
--- a/arch/arm64/include/asm/virt.h
+++ b/arch/arm64/include/asm/virt.h
@@ -41,6 +41,19 @@
 
 #define HVC_CALL_HYP 3
 
+/*
+ * HVC_CALL_FUNC - Execute a function at EL2.
+ *
+ * @x0: Physical address of the function to be executed.
+ * @x1: Passed as the first argument to the function.
+ * @x2: Passed as the second argument to the function.
+ * @x3: Passed as the third argument to the function.
+ *
+ * The called function must preserve the contents of register x18.
+ */
+
+#define HVC_CALL_FUNC 4
+
 #ifndef __ASSEMBLY__
 
 /*
diff --git a/arch/arm64/kernel/hyp-stub.S b/arch/arm64/kernel/hyp-stub.S
index e3db3fd..b5d36e7 100644
--- a/arch/arm64/kernel/hyp-stub.S
+++ b/arch/arm64/kernel/hyp-stub.S
@@ -66,9 +66,20 @@ el1_sync:
 	mrs	x0, vbar_el2
 	b	2f
 
-1:	cmp	x18, #HVC_SET_VECTORS
-	b.ne	2f
-	msr	vbar_el2, x0
+1:	cmp     x18, #HVC_SET_VECTORS
+	b.ne    1f
+	msr     vbar_el2, x0
+	b       2f
+
+1:	cmp     x18, #HVC_CALL_FUNC
+	b.ne    2f
+	mov     x18, lr
+	mov     lr, x0
+	mov     x0, x1
+	mov     x1, x2
+	mov     x2, x3
+	blr     lr
+	mov     lr, x18
 
 2:	eret
 ENDPROC(el1_sync)
-- 
2.1.0




^ permalink raw reply related	[flat|nested] 100+ messages in thread

* [PATCH 5/8] arm64/kexec: Add core kexec support
@ 2015-01-17  0:23     ` Geoff Levand
  0 siblings, 0 replies; 100+ messages in thread
From: Geoff Levand @ 2015-01-17  0:23 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon
  Cc: marc.zyngier, Grant Likely, kexec, linux-arm-kernel, christoffer.dall

Add three new files, kexec.h, machine_kexec.c and relocate_kernel.S to the
arm64 architecture that add support for the kexec re-boot mechanism
(CONFIG_KEXEC) on arm64 platforms.

Signed-off-by: Geoff Levand <geoff@infradead.org>
---
 arch/arm64/Kconfig                  |   9 ++
 arch/arm64/include/asm/kexec.h      |  47 +++++++++++
 arch/arm64/kernel/Makefile          |   1 +
 arch/arm64/kernel/machine_kexec.c   | 155 ++++++++++++++++++++++++++++++++++
 arch/arm64/kernel/relocate_kernel.S | 160 ++++++++++++++++++++++++++++++++++++
 include/uapi/linux/kexec.h          |   1 +
 6 files changed, 373 insertions(+)
 create mode 100644 arch/arm64/include/asm/kexec.h
 create mode 100644 arch/arm64/kernel/machine_kexec.c
 create mode 100644 arch/arm64/kernel/relocate_kernel.S

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index b1f9a20..d9eb9cd 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -474,6 +474,15 @@ config SECCOMP
 	  and the task is only allowed to execute a few safe syscalls
 	  defined by each seccomp mode.
 
+config KEXEC
+	depends on (!SMP || PM_SLEEP_SMP)
+	bool "kexec system call"
+	---help---
+	  kexec is a system call that implements the ability to shutdown your
+	  current kernel, and to start another kernel.  It is like a reboot
+	  but it is independent of the system firmware.  And like a reboot
+	  you can start any kernel with it, not just Linux.
+
 config XEN_DOM0
 	def_bool y
 	depends on XEN
diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h
new file mode 100644
index 0000000..e7bd7ab
--- /dev/null
+++ b/arch/arm64/include/asm/kexec.h
@@ -0,0 +1,47 @@
+/*
+ * kexec for arm64
+ *
+ * Copyright (C) Linaro.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#if !defined(_ARM64_KEXEC_H)
+#define _ARM64_KEXEC_H
+
+/* Maximum physical address we can use pages from */
+
+#define KEXEC_SOURCE_MEMORY_LIMIT (-1UL)
+
+/* Maximum address we can reach in physical address mode */
+
+#define KEXEC_DESTINATION_MEMORY_LIMIT (-1UL)
+
+/* Maximum address we can use for the control code buffer */
+
+#define KEXEC_CONTROL_MEMORY_LIMIT (-1UL)
+
+#define KEXEC_CONTROL_PAGE_SIZE	4096
+
+#define KEXEC_ARCH KEXEC_ARCH_ARM64
+
+#if !defined(__ASSEMBLY__)
+
+/**
+ * crash_setup_regs() - save registers for the panic kernel
+ *
+ * @newregs: registers are saved here
+ * @oldregs: registers to be saved (may be %NULL)
+ */
+
+static inline void crash_setup_regs(struct pt_regs *newregs,
+				    struct pt_regs *oldregs)
+{
+	/* Empty routine needed to avoid build errors. */
+}
+
+#endif /* !defined(__ASSEMBLY__) */
+
+#endif
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 77d3d95..eff1625 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -34,6 +34,7 @@ arm64-obj-$(CONFIG_KGDB)		+= kgdb.o
 arm64-obj-$(CONFIG_EFI)			+= efi.o efi-stub.o efi-entry.o
 arm64-obj-$(CONFIG_PCI)			+= pci.o
 arm64-obj-$(CONFIG_ARMV8_DEPRECATED)	+= armv8_deprecated.o
+arm64-obj-$(CONFIG_KEXEC)		+= machine_kexec.o relocate_kernel.o
 
 obj-y					+= $(arm64-obj-y) vdso/
 obj-m					+= $(arm64-obj-m)
diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
new file mode 100644
index 0000000..b0e5d76
--- /dev/null
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -0,0 +1,155 @@
+/*
+ * kexec for arm64
+ *
+ * Copyright (C) Linaro.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/kexec.h>
+#include <linux/of_fdt.h>
+#include <linux/slab.h>
+#include <linux/uaccess.h>
+
+#include <asm/cacheflush.h>
+#include <asm/system_misc.h>
+
+/* Global variables for the relocate_kernel routine. */
+extern const unsigned char relocate_new_kernel[];
+extern const unsigned long relocate_new_kernel_size;
+extern unsigned long arm64_kexec_dtb_addr;
+extern unsigned long arm64_kexec_kimage_head;
+extern unsigned long arm64_kexec_kimage_start;
+
+/**
+ * kexec_is_dtb - Helper routine to check the device tree header signature.
+ */
+static bool kexec_is_dtb(const void *dtb)
+{
+	__be32 magic;
+
+	return get_user(magic, (__be32 *)dtb) ? false :
+		(be32_to_cpu(magic) == OF_DT_HEADER);
+}
+
+/**
+ * kexec_find_dtb_seg - Helper routine to find the dtb segment.
+ */
+static const struct kexec_segment *kexec_find_dtb_seg(
+	const struct kimage *image)
+{
+	int i;
+
+	for (i = 0; i < image->nr_segments; i++) {
+		if (kexec_is_dtb(image->segment[i].buf))
+			return &image->segment[i];
+	}
+
+	return NULL;
+}
+
+void machine_kexec_cleanup(struct kimage *image)
+{
+	/* Empty routine needed to avoid build errors. */
+}
+
+/**
+ * machine_kexec_prepare - Prepare for a kexec reboot.
+ *
+ * Called from the core kexec code when a kernel image is loaded.
+ */
+int machine_kexec_prepare(struct kimage *image)
+{
+	const struct kexec_segment *dtb_seg = kexec_find_dtb_seg(image);
+
+	arm64_kexec_dtb_addr = dtb_seg ? dtb_seg->mem : 0;
+	arm64_kexec_kimage_start = image->start;
+
+	return 0;
+}
+
+/**
+ * kexec_list_flush - Helper to flush the kimage list to PoC.
+ */
+static void kexec_list_flush(unsigned long kimage_head)
+{
+	void *dest;
+	unsigned long *entry;
+
+	for (entry = &kimage_head, dest = NULL; ; entry++) {
+		unsigned int flag = *entry &
+			(IND_DESTINATION | IND_INDIRECTION | IND_DONE |
+			IND_SOURCE);
+		void *addr = phys_to_virt(*entry & PAGE_MASK);
+
+		switch (flag) {
+		case IND_INDIRECTION:
+			entry = (unsigned long *)addr - 1;
+			__flush_dcache_area(addr, PAGE_SIZE);
+			break;
+		case IND_DESTINATION:
+			dest = addr;
+			break;
+		case IND_SOURCE:
+			__flush_dcache_area(addr, PAGE_SIZE);
+			dest += PAGE_SIZE;
+			break;
+		case IND_DONE:
+			return;
+		default:
+			BUG();
+		}
+	}
+}
+
+/**
+ * machine_kexec - Do the kexec reboot.
+ *
+ * Called from the core kexec code for a sys_reboot with LINUX_REBOOT_CMD_KEXEC.
+ */
+void machine_kexec(struct kimage *image)
+{
+	phys_addr_t reboot_code_buffer_phys;
+	void *reboot_code_buffer;
+
+	BUG_ON(num_online_cpus() > 1);
+
+	arm64_kexec_kimage_head = image->head;
+
+	reboot_code_buffer_phys = page_to_phys(image->control_code_page);
+	reboot_code_buffer = phys_to_virt(reboot_code_buffer_phys);
+
+	/*
+	 * Copy relocate_new_kernel to the reboot_code_buffer for use
+	 * after the kernel is shut down.
+	 */
+	memcpy(reboot_code_buffer, relocate_new_kernel,
+		relocate_new_kernel_size);
+
+	/* Flush the reboot_code_buffer in preparation for its execution. */
+	__flush_dcache_area(reboot_code_buffer, relocate_new_kernel_size);
+
+	/* Flush the kimage list. */
+	kexec_list_flush(image->head);
+
+	pr_info("Bye!\n");
+
+	/* Disable all DAIF exceptions. */
+	asm volatile ("msr daifset, #0xf" : : : "memory");
+
+	/*
+	 * soft_restart() will shutdown the MMU, disable data caches, then
+	 * transfer control to the reboot_code_buffer which contains a copy of
+	 * the relocate_new_kernel routine.  relocate_new_kernel will use
+	 * physical addressing to relocate the new kernel to its final position
+	 * and then will transfer control to the entry point of the new kernel.
+	 */
+	soft_restart(reboot_code_buffer_phys);
+}
+
+void machine_crash_shutdown(struct pt_regs *regs)
+{
+	/* Empty routine needed to avoid build errors. */
+}
diff --git a/arch/arm64/kernel/relocate_kernel.S b/arch/arm64/kernel/relocate_kernel.S
new file mode 100644
index 0000000..1c1514d
--- /dev/null
+++ b/arch/arm64/kernel/relocate_kernel.S
@@ -0,0 +1,160 @@
+/*
+ * kexec for arm64
+ *
+ * Copyright (C) Linaro.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <asm/assembler.h>
+#include <asm/kexec.h>
+#include <asm/memory.h>
+#include <asm/page.h>
+#include <asm/proc-macros.S>
+
+/* The list entry flags. */
+
+#define IND_DESTINATION_BIT 0
+#define IND_INDIRECTION_BIT 1
+#define IND_DONE_BIT        2
+#define IND_SOURCE_BIT      3
+
+/*
+ * relocate_new_kernel - Put a 2nd stage kernel image in place and boot it.
+ *
+ * The memory that the old kernel occupies may be overwritten when copying the
+ * new image to its final location.  To ensure that the relocate_new_kernel
+ * routine which does that copy is not overwritten all code and data needed
+ * by relocate_new_kernel must be between the symbols relocate_new_kernel and
+ * relocate_new_kernel_end.  The machine_kexec() routine will copy
+ * relocate_new_kernel to the kexec control_code_page, a special page which
+ * has been set up to be preserved during the copy operation.
+ */
+.globl relocate_new_kernel
+relocate_new_kernel:
+
+	/* Setup the list loop variables. */
+	ldr	x18, arm64_kexec_kimage_head	/* x18 = list entry */
+	dcache_line_size x17, x0		/* x17 = dcache line size */
+	mov	x16, xzr			/* x16 = segment start */
+	mov	x15, xzr			/* x15 = entry ptr */
+	mov	x14, xzr			/* x14 = copy dest */
+
+	/* Check if the new image needs relocation. */
+	cbz	x18, .Ldone
+	tbnz	x18, IND_DONE_BIT, .Ldone
+
+.Lloop:
+	and	x13, x18, PAGE_MASK		/* x13 = addr */
+
+	/* Test the entry flags. */
+.Ltest_source:
+	tbz	x18, IND_SOURCE_BIT, .Ltest_indirection
+
+	mov x20, x14				/*  x20 = copy dest */
+	mov x21, x13				/*  x21 = copy src */
+
+	/* Invalidate dest page to PoC. */
+	mov	x0, x20
+	add	x19, x0, #PAGE_SIZE
+	sub	x1, x17, #1
+	bic	x0, x0, x1
+1:	dc	ivac, x0
+	add	x0, x0, x17
+	cmp	x0, x19
+	b.lo	1b
+	dsb	sy
+
+	/* Copy page. */
+1:	ldp	x22, x23, [x21]
+	ldp	x24, x25, [x21, #16]
+	ldp	x26, x27, [x21, #32]
+	ldp	x28, x29, [x21, #48]
+	add	x21, x21, #64
+	stnp	x22, x23, [x20]
+	stnp	x24, x25, [x20, #16]
+	stnp	x26, x27, [x20, #32]
+	stnp	x28, x29, [x20, #48]
+	add	x20, x20, #64
+	tst	x21, #(PAGE_SIZE - 1)
+	b.ne	1b
+
+	/* dest += PAGE_SIZE */
+	add	x14, x14, PAGE_SIZE
+	b	.Lnext
+
+.Ltest_indirection:
+	tbz	x18, IND_INDIRECTION_BIT, .Ltest_destination
+
+	/* ptr = addr */
+	mov	x15, x13
+	b	.Lnext
+
+.Ltest_destination:
+	tbz	x18, IND_DESTINATION_BIT, .Lnext
+
+	mov	x16, x13
+
+	/* dest = addr */
+	mov	x14, x13
+
+.Lnext:
+	/* entry = *ptr++ */
+	ldr	x18, [x15], #8
+
+	/* while (!(entry & DONE)) */
+	tbz	x18, IND_DONE_BIT, .Lloop
+
+.Ldone:
+	dsb	sy
+	isb
+	ic	ialluis
+	dsb	sy
+	isb
+
+	/* Start new image. */
+	ldr	x4, arm64_kexec_kimage_start
+	ldr	x0, arm64_kexec_dtb_addr
+	mov	x1, xzr
+	mov	x2, xzr
+	mov	x3, xzr
+	br	x4
+
+.align 3	/* To keep the 64-bit values below naturally aligned. */
+
+/* The machine_kexec routines set these variables. */
+
+/*
+ * arm64_kexec_kimage_start - Copy of image->start, the entry point of the new
+ * image.
+ */
+.globl arm64_kexec_kimage_start
+arm64_kexec_kimage_start:
+	.quad	0x0
+
+/*
+ * arm64_kexec_dtb_addr - Physical address of a device tree.
+ */
+.globl arm64_kexec_dtb_addr
+arm64_kexec_dtb_addr:
+	.quad	0x0
+
+/*
+ * arm64_kexec_kimage_head - Copy of image->head, the list of kimage entries.
+ */
+.globl arm64_kexec_kimage_head
+arm64_kexec_kimage_head:
+	.quad	0x0
+
+.Lrelocate_new_kernel_end:
+
+/*
+ * relocate_new_kernel_size - Number of bytes to copy to the control_code_page.
+ */
+.globl relocate_new_kernel_size
+relocate_new_kernel_size:
+	.quad .Lrelocate_new_kernel_end - relocate_new_kernel
+
+.org	KEXEC_CONTROL_PAGE_SIZE
diff --git a/include/uapi/linux/kexec.h b/include/uapi/linux/kexec.h
index 6925f5b..04626b9 100644
--- a/include/uapi/linux/kexec.h
+++ b/include/uapi/linux/kexec.h
@@ -39,6 +39,7 @@
 #define KEXEC_ARCH_SH      (42 << 16)
 #define KEXEC_ARCH_MIPS_LE (10 << 16)
 #define KEXEC_ARCH_MIPS    ( 8 << 16)
+#define KEXEC_ARCH_ARM64   (183 << 16)
 
 /* The artificial cap on the number of segments passed to kexec_load. */
 #define KEXEC_SEGMENT_MAX 16
-- 
2.1.0




^ permalink raw reply related	[flat|nested] 100+ messages in thread

* [PATCH 0/8] arm64 kexec kernel patches V7
  2015-01-17  0:23   ` Geoff Levand
@ 2015-01-26 17:44     ` Catalin Marinas
  -1 siblings, 0 replies; 100+ messages in thread
From: Catalin Marinas @ 2015-01-26 17:44 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Geoff,

On Sat, Jan 17, 2015 at 12:23:34AM +0000, Geoff Levand wrote:
> This series adds the core support for kexec re-boots on arm64.  This v7 of the
> series is mainly just a rebase to the latest arm64 for-next/core branch
> (v3.19-rc4), and a few very minor changes requested for v6.

I haven't looked at the series in detail before, so some of my comments
may have already been discussed.

> Several things are known to have problems on kexec re-boot:
> 
> spin-table

I think that's not too bad: for complete kexec support (SMP->SMP) we can
require some CPU unplug mechanism and PSCI is one of them.

> FIX: Upgrade system firmware to provide PSCI enable method support or add
> missing spin-table support to the kernel.

What's the missing spin-table support?

> ACPI
> ----
> 
> PROBLEM: The kernel for ACPI based systems does not export a device tree to the
> standard user space location of '/proc/device-tree'.  Current applications
> expect to access device tree information from this standard location.
> 
> WORK-AROUND: Disable ACPI in firmware, OR pass 'acpi=off' on the first stage
> kernel command line, OR pass a user specified DTB using the kexec --dtb option.
> 
> FIX: An interface to expose a binary device tree to user space has been
> proposed.  User kexec utilities will need to be updated to add support for this
> new interface.

So the fix here is to boot the second stage kernel with dtb, which means
that we mandate the existence of a DT file for any ACPI system. Are
there plans to make the kexec'ed kernel reuse the ACPI tables?

-- 
Catalin

^ permalink raw reply	[flat|nested] 100+ messages in thread

* [PATCH 1/8] arm64: Move proc-macros.S to include/asm
  2015-01-17  0:23     ` Geoff Levand
@ 2015-01-26 17:45       ` Catalin Marinas
  -1 siblings, 0 replies; 100+ messages in thread
From: Catalin Marinas @ 2015-01-26 17:45 UTC (permalink / raw)
  To: linux-arm-kernel

On Sat, Jan 17, 2015 at 12:23:34AM +0000, Geoff Levand wrote:
> To allow the assembler macros defined in proc-macros.S to be used outside
> the mm code move the proc-macros.S file from arch/arm64/mm/ to
> arch/arm64/include/asm/ and fix up any preprocessor includes to use the new
> file location.
> 
> Signed-off-by: Geoff Levand <geoff@infradead.org>
> ---
>  arch/arm64/include/asm/proc-macros.S | 54 ++++++++++++++++++++++++++++++++++++
>  arch/arm64/mm/cache.S                |  3 +-
>  arch/arm64/mm/proc-macros.S          | 54 ------------------------------------
>  arch/arm64/mm/proc.S                 |  3 +-

Actually, I would just merge proc-macros.S into assembler.h. Not worth
keeping the former just for a few macros.

-- 
Catalin

^ permalink raw reply	[flat|nested] 100+ messages in thread

* [PATCH 2/8] arm64: Convert hcalls to use ISS field
  2015-01-17  0:23     ` Geoff Levand
@ 2015-01-26 18:26       ` Catalin Marinas
  -1 siblings, 0 replies; 100+ messages in thread
From: Catalin Marinas @ 2015-01-26 18:26 UTC (permalink / raw)
  To: linux-arm-kernel

On Sat, Jan 17, 2015 at 12:23:34AM +0000, Geoff Levand wrote:
> To allow for additional hcalls to be defined and to make the arm64 hcall API
> more consistent across exception vector routines, change the hcall implementations
> to use the ISS field of the ESR_EL2 register to specify the hcall type.
> 
> The existing arm64 hcall implementations are limited in that they only allow
> for two distinct hcalls; with the x0 register either zero, or not zero.  Also,
> the API of the hyp-stub exception vector routines and the KVM exception vector
> routines differ; hyp-stub uses a non-zero value in x0 to implement
> __hyp_set_vectors, whereas KVM uses it to implement kvm_call_hyp.
> 
> Define three new preprocessor macros HVC_GET_VECTORS, HVC_SET_VECTORS and
> HVC_CALL_HYP and to be used as hcall type specifiers and convert the
> existing __hyp_get_vectors(), __hyp_set_vectors() and kvm_call_hyp() routines
> to use these new macros when executing an HVC call.  Also change the
> corresponding hyp-stub and KVM el1_sync exception vector routines to use these
> new macros.
> 
> Signed-off-by: Geoff Levand <geoff@infradead.org>

Using the #imm value for HVC to separate what gets called looks fine to
me. However, I'd like to see a review from Marc/Christoffer on this
patch.

Some comments below:

> diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
> index 7a5df52..99c319c 100644
> --- a/arch/arm64/include/asm/virt.h
> +++ b/arch/arm64/include/asm/virt.h
> @@ -21,6 +21,26 @@
>  #define BOOT_CPU_MODE_EL1	(0xe11)
>  #define BOOT_CPU_MODE_EL2	(0xe12)
>  
> +/*
> + * HVC_GET_VECTORS - Return the value of the vbar_el2 register.
> + */
> +
> +#define HVC_GET_VECTORS 1
> +
> +/*
> + * HVC_SET_VECTORS - Set the value of the vbar_el2 register.
> + *
> + * @x0: Physical address of the new vector table.
> + */
> +
> +#define HVC_SET_VECTORS 2
> +
> +/*
> + * HVC_CALL_HYP - Execute a hyp routine.
> + */
> +
> +#define HVC_CALL_HYP 3

I think you can ignore this case (make it the default), just define it
as 0 as that's the normal use-case after initialisation and avoid
checking it explicitly.

>  /*
> diff --git a/arch/arm64/kernel/hyp-stub.S b/arch/arm64/kernel/hyp-stub.S
> index a272f33..e3db3fd 100644
> --- a/arch/arm64/kernel/hyp-stub.S
> +++ b/arch/arm64/kernel/hyp-stub.S
> @@ -22,6 +22,7 @@
>  #include <linux/irqchip/arm-gic-v3.h>
>  
>  #include <asm/assembler.h>
> +#include <asm/kvm_arm.h>
>  #include <asm/ptrace.h>
>  #include <asm/virt.h>
>  
> @@ -53,14 +54,22 @@ ENDPROC(__hyp_stub_vectors)
>  	.align 11
>  
>  el1_sync:
> -	mrs	x1, esr_el2
> -	lsr	x1, x1, #26
> -	cmp	x1, #0x16
> -	b.ne	2f				// Not an HVC trap
> -	cbz	x0, 1f
> -	msr	vbar_el2, x0			// Set vbar_el2
> +	mrs	x18, esr_el2
> +	lsr	x17, x18, #ESR_ELx_EC_SHIFT
> +	and	x18, x18, #ESR_ELx_ISS_MASK
> +
> +	cmp     x17, #ESR_ELx_EC_HVC64
> +	b.ne    2f				// Not an HVC trap
> +
> +	cmp	x18, #HVC_GET_VECTORS

* Re: [PATCH 2/8] arm64: Convert hcalls to use ISS field
@ 2015-01-26 18:26       ` Catalin Marinas
  0 siblings, 0 replies; 100+ messages in thread
From: Catalin Marinas @ 2015-01-26 18:26 UTC (permalink / raw)
  To: Geoff Levand
  Cc: Marc Zyngier, kexec, Will Deacon, linux-arm-kernel, grant.likely,
	christoffer.dall

On Sat, Jan 17, 2015 at 12:23:34AM +0000, Geoff Levand wrote:
> To allow for additional hcalls to be defined and to make the arm64 hcall API
> more consistent across exception vector routines, change the hcall implementations
> to use the ISS field of the ESR_EL2 register to specify the hcall type.
> 
> The existing arm64 hcall implementations are limited in that they only allow
> for two distinct hcalls; with the x0 register either zero, or not zero.  Also,
> the API of the hyp-stub exception vector routines and the KVM exception vector
> routines differ; hyp-stub uses a non-zero value in x0 to implement
> __hyp_set_vectors, whereas KVM uses it to implement kvm_call_hyp.
> 
> Define three new preprocessor macros, HVC_GET_VECTORS, HVC_SET_VECTORS and
> HVC_CALL_HYP, to be used as hcall type specifiers, and convert the
> existing __hyp_get_vectors(), __hyp_set_vectors() and kvm_call_hyp() routines
> to use these new macros when executing an HVC call.  Also change the
> corresponding hyp-stub and KVM el1_sync exception vector routines to use these
> new macros.
> 
> Signed-off-by: Geoff Levand <geoff@infradead.org>

Using the #imm value for HVC to separate what gets called looks fine to
me. However, I'd like to see a review from Marc/Christoffer on this
patch.

Some comments below:

> diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
> index 7a5df52..99c319c 100644
> --- a/arch/arm64/include/asm/virt.h
> +++ b/arch/arm64/include/asm/virt.h
> @@ -21,6 +21,26 @@
>  #define BOOT_CPU_MODE_EL1	(0xe11)
>  #define BOOT_CPU_MODE_EL2	(0xe12)
>  
> +/*
> + * HVC_GET_VECTORS - Return the value of the vbar_el2 register.
> + */
> +
> +#define HVC_GET_VECTORS 1
> +
> +/*
> + * HVC_SET_VECTORS - Set the value of the vbar_el2 register.
> + *
> + * @x0: Physical address of the new vector table.
> + */
> +
> +#define HVC_SET_VECTORS 2
> +
> +/*
> + * HVC_CALL_HYP - Execute a hyp routine.
> + */
> +
> +#define HVC_CALL_HYP 3

I think you can ignore this case (make it the default), just define it
as 0 as that's the normal use-case after initialisation and avoid
checking it explicitly.

>  /*
> diff --git a/arch/arm64/kernel/hyp-stub.S b/arch/arm64/kernel/hyp-stub.S
> index a272f33..e3db3fd 100644
> --- a/arch/arm64/kernel/hyp-stub.S
> +++ b/arch/arm64/kernel/hyp-stub.S
> @@ -22,6 +22,7 @@
>  #include <linux/irqchip/arm-gic-v3.h>
>  
>  #include <asm/assembler.h>
> +#include <asm/kvm_arm.h>
>  #include <asm/ptrace.h>
>  #include <asm/virt.h>
>  
> @@ -53,14 +54,22 @@ ENDPROC(__hyp_stub_vectors)
>  	.align 11
>  
>  el1_sync:
> -	mrs	x1, esr_el2
> -	lsr	x1, x1, #26
> -	cmp	x1, #0x16
> -	b.ne	2f				// Not an HVC trap
> -	cbz	x0, 1f
> -	msr	vbar_el2, x0			// Set vbar_el2
> +	mrs	x18, esr_el2
> +	lsr	x17, x18, #ESR_ELx_EC_SHIFT
> +	and	x18, x18, #ESR_ELx_ISS_MASK
> +
> +	cmp     x17, #ESR_ELx_EC_HVC64
> +	b.ne    2f				// Not an HVC trap
> +
> +	cmp	x18, #HVC_GET_VECTORS
> +	b.ne	1f
> +	mrs	x0, vbar_el2
>  	b	2f
> -1:	mrs	x0, vbar_el2			// Return vbar_el2
> +
> +1:	cmp	x18, #HVC_SET_VECTORS
> +	b.ne	2f
> +	msr	vbar_el2, x0
> +
>  2:	eret
>  ENDPROC(el1_sync)

You seem to be using x17 and x18 here freely. Do you have any guarantees
that the caller saved/restored those registers? I guess you assume they
are temporary registers and the caller first branches to a function
(like __kvm_hyp_call) and expects them to be corrupted. But I'm not sure
that's always the case. Take for example the __invoke_psci_fn_hvc where
the function is in C (we should change this for other reasons).

> diff --git a/arch/arm64/kvm/hyp.S b/arch/arm64/kvm/hyp.S
> index c0d8202..1916c89 100644
> --- a/arch/arm64/kvm/hyp.S
> +++ b/arch/arm64/kvm/hyp.S
> @@ -27,6 +27,7 @@
>  #include <asm/kvm_asm.h>
>  #include <asm/kvm_mmu.h>
>  #include <asm/memory.h>
> +#include <asm/virt.h>
>  
>  #define CPU_GP_REG_OFFSET(x)	(CPU_GP_REGS + x)
>  #define CPU_XREG_OFFSET(x)	CPU_GP_REG_OFFSET(CPU_USER_PT_REGS + 8*x)
> @@ -1106,12 +1107,9 @@ __hyp_panic_str:
>   * in Hyp mode (see init_hyp_mode in arch/arm/kvm/arm.c).  Return values are
>   * passed in r0 and r1.
>   *
> - * A function pointer with a value of 0 has a special meaning, and is
> - * used to implement __hyp_get_vectors in the same way as in
> - * arch/arm64/kernel/hyp_stub.S.
>   */
>  ENTRY(kvm_call_hyp)
> -	hvc	#0
> +	hvc	#HVC_CALL_HYP
>  	ret
>  ENDPROC(kvm_call_hyp)
>  
> @@ -1142,6 +1140,7 @@ el1_sync:					// Guest trapped into EL2
>  
>  	mrs	x1, esr_el2
>  	lsr	x2, x1, #ESR_ELx_EC_SHIFT
> +	and	x0, x1, #ESR_ELx_ISS_MASK
>  
>  	cmp	x2, #ESR_ELx_EC_HVC64
>  	b.ne	el1_trap
> @@ -1150,15 +1149,19 @@ el1_sync:					// Guest trapped into EL2
>  	cbnz	x3, el1_trap			// called HVC
>  
>  	/* Here, we're pretty sure the host called HVC. */
> +	mov	x18, x0

Same comment here about corrupting x18. If it is safe, maybe add some
comments in the calling place.

>  	pop	x2, x3
>  	pop	x0, x1
>  
> -	/* Check for __hyp_get_vectors */
> -	cbnz	x0, 1f
> +	cmp	x18, #HVC_GET_VECTORS
> +	b.ne	1f
>  	mrs	x0, vbar_el2
>  	b	2f
>  
> -1:	push	lr, xzr
> +1:	cmp	x18, #HVC_CALL_HYP
> +	b.ne	2f
> +
> +	push	lr, xzr

At this point, we expect either HVC_GET_VECTORS or HVC_CALL_HYP. I think
you can simply assume HVC_CALL_HYP as default and ignore the additional
cmp.

-- 
Catalin

_______________________________________________
kexec mailing list
kexec@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/kexec

^ permalink raw reply	[flat|nested] 100+ messages in thread

* [PATCH 0/8] arm64 kexec kernel patches V7
  2015-01-26 17:44     ` Catalin Marinas
@ 2015-01-26 18:37       ` Grant Likely
  -1 siblings, 0 replies; 100+ messages in thread
From: Grant Likely @ 2015-01-26 18:37 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, Jan 26, 2015 at 5:44 PM, Catalin Marinas
<catalin.marinas@arm.com> wrote:
> Hi Geoff,
>
> On Sat, Jan 17, 2015 at 12:23:34AM +0000, Geoff Levand wrote:
>> This series adds the core support for kexec re-boots on arm64.  This v7 of the
>> series is mainly just a rebase to the latest arm64 for-next/core branch
>> (v3.19-rc4), and a few very minor changes requested for v6.
>
> I haven't looked at the series in detail before, so some of my comments
> may have already been discussed.
>
>> Several things are known to have problems on kexec re-boot:
>>
>> spin-table
>
> I think that's not too bad, for complete kexec support (SMP->SMP) we can
> require some CPU unplug mechanism and PSCI is one of them.
>
>> FIX: Upgrade system firmware to provide PSCI enable method support or add
>> missing spin-table support to the kernel.
>
> What's the missing spin-table support?
>
>> ACPI
>> ----
>>
>> PROBLEM: The kernel for ACPI based systems does not export a device tree to the
>> standard user space location of '/proc/device-tree'.  Current applications
>> expect to access device tree information from this standard location.
>>
>> WORK-AROUND: Disable ACPI in firmware, OR pass 'acpi=off' on the first stage
>> kernel command line, OR pass a user specified DTB using the kexec --dtb option.
>>
>> FIX: An interface to expose a binary device tree to user space has been
>> proposed.  User kexec utilities will need to be updated to add support for this
>> new interface.

The new interface is merged into mainline: /sys/firmware/fdt

> So the fix here is to boot the second stage kernel with dtb, which means
> that we mandate the existence of a DT file for any ACPI system. Are
> there plans to make the kexec'ed kernel reuse the ACPI tables?

Yes, the kexec'ed kernel will reuse the ACPI tables, and any other
data passed by UEFI. The DT we're talking about here is the DT
generated by the kernel's UEFI stub, and the kexec tools want access
to it so they can find the UEFI system table pointer.

g.

^ permalink raw reply	[flat|nested] 100+ messages in thread

* [PATCH 0/8] arm64 kexec kernel patches V7
  2015-01-26 17:44     ` Catalin Marinas
@ 2015-01-26 18:55       ` Mark Rutland
  -1 siblings, 0 replies; 100+ messages in thread
From: Mark Rutland @ 2015-01-26 18:55 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, Jan 26, 2015 at 05:44:14PM +0000, Catalin Marinas wrote:
> Hi Geoff,
> 
> On Sat, Jan 17, 2015 at 12:23:34AM +0000, Geoff Levand wrote:
> > This series adds the core support for kexec re-boots on arm64.  This v7 of the
> > series is mainly just a rebase to the latest arm64 for-next/core branch
> > (v3.19-rc4), and a few very minor changes requested for v6.
> 
> I haven't looked at the series in detail before, so some of my comments
> may have already been discussed.
> 
> > Several things are known to have problems on kexec re-boot:
> > 
> > spin-table
> 
> I think that's not too bad, for complete kexec support (SMP->SMP) we can
> require some CPU unplug mechanism and PSCI is one of them.
> 
> > FIX: Upgrade system firmware to provide PSCI enable method support or add
> > missing spin-table support to the kernel.
> 
> What's the missing spin-table support?

As you mention above, a mechanism for returning the CPUs to FW. There is
no spec for doing this with spin-table, which would have to be written
and vetted.

Mark.

^ permalink raw reply	[flat|nested] 100+ messages in thread

* [PATCH 4/8] arm64: Add EL2 switch to soft_restart
  2015-01-17  0:23     ` Geoff Levand
@ 2015-01-26 19:02       ` Mark Rutland
  -1 siblings, 0 replies; 100+ messages in thread
From: Mark Rutland @ 2015-01-26 19:02 UTC (permalink / raw)
  To: linux-arm-kernel

On Sat, Jan 17, 2015 at 12:23:34AM +0000, Geoff Levand wrote:
> When a CPU is reset it needs to be put into the exception level it had when it
> entered the kernel.  Update cpu_reset() to accept an argument el2_switch which
> signals cpu_reset() to enter the soft reset address at EL2.  If el2_switch is
> not set the soft reset address will be entered at EL1.
> 
> Update cpu_soft_restart() and soft_restart() to pass the return of
> is_hyp_mode_available() as the el2_switch value to cpu_reset().  Also update the
> comments of cpu_reset(), cpu_soft_restart() and soft_restart() to reflect this
> change.
> 
> Signed-off-by: Geoff Levand <geoff@infradead.org>
> ---
>  arch/arm64/include/asm/proc-fns.h |  4 ++--
>  arch/arm64/kernel/process.c       | 10 ++++++++-
>  arch/arm64/mm/proc.S              | 47 +++++++++++++++++++++++++++++----------
>  3 files changed, 46 insertions(+), 15 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/proc-fns.h b/arch/arm64/include/asm/proc-fns.h
> index 9a8fd84..339394d 100644
> --- a/arch/arm64/include/asm/proc-fns.h
> +++ b/arch/arm64/include/asm/proc-fns.h
> @@ -32,8 +32,8 @@ extern void cpu_cache_off(void);
>  extern void cpu_do_idle(void);
>  extern void cpu_do_switch_mm(unsigned long pgd_phys, struct mm_struct *mm);
>  extern void cpu_reset(unsigned long addr) __attribute__((noreturn));
> -void cpu_soft_restart(phys_addr_t cpu_reset,
> -		unsigned long addr) __attribute__((noreturn));
> +void cpu_soft_restart(phys_addr_t cpu_reset, unsigned long el2_switch,
> +		      unsigned long addr) __attribute__((noreturn));
>  extern void cpu_do_suspend(struct cpu_suspend_ctx *ptr);
>  extern u64 cpu_do_resume(phys_addr_t ptr, u64 idmap_ttbr);
>  
> diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
> index fde9923..371bbf1 100644
> --- a/arch/arm64/kernel/process.c
> +++ b/arch/arm64/kernel/process.c
> @@ -50,6 +50,7 @@
>  #include <asm/mmu_context.h>
>  #include <asm/processor.h>
>  #include <asm/stacktrace.h>
> +#include <asm/virt.h>
>  
>  #ifdef CONFIG_CC_STACKPROTECTOR
>  #include <linux/stackprotector.h>
> @@ -60,7 +61,14 @@ EXPORT_SYMBOL(__stack_chk_guard);
>  void soft_restart(unsigned long addr)
>  {
>  	setup_mm_for_reboot();
> -	cpu_soft_restart(virt_to_phys(cpu_reset), addr);
> +
> +	/* TODO: Remove this conditional when KVM can support CPU restart. */
> +	if (IS_ENABLED(CONFIG_KVM))
> +		cpu_soft_restart(virt_to_phys(cpu_reset), 0, addr);

If we haven't torn down KVM, doesn't that mean that KVM is active at EL2
(with MMU and caches on) at this point?

If that's the case then we cannot possibly try to call kexec(), because
we cannot touch the memory used by the page tables for those EL2
mappings. Things will explode if we do.

Mark.

^ permalink raw reply	[flat|nested] 100+ messages in thread

* [PATCH 5/8] arm64/kexec: Add core kexec support
  2015-01-17  0:23     ` Geoff Levand
@ 2015-01-26 19:16       ` Mark Rutland
  -1 siblings, 0 replies; 100+ messages in thread
From: Mark Rutland @ 2015-01-26 19:16 UTC (permalink / raw)
  To: linux-arm-kernel

On Sat, Jan 17, 2015 at 12:23:34AM +0000, Geoff Levand wrote:
> Add three new files, kexec.h, machine_kexec.c and relocate_kernel.S to the
> arm64 architecture that add support for the kexec re-boot mechanism
> (CONFIG_KEXEC) on arm64 platforms.
> 
> Signed-off-by: Geoff Levand <geoff@infradead.org>
> ---
>  arch/arm64/Kconfig                  |   9 ++
>  arch/arm64/include/asm/kexec.h      |  47 +++++++++++
>  arch/arm64/kernel/Makefile          |   1 +
>  arch/arm64/kernel/machine_kexec.c   | 155 ++++++++++++++++++++++++++++++++++
>  arch/arm64/kernel/relocate_kernel.S | 160 ++++++++++++++++++++++++++++++++++++
>  include/uapi/linux/kexec.h          |   1 +
>  6 files changed, 373 insertions(+)
>  create mode 100644 arch/arm64/include/asm/kexec.h
>  create mode 100644 arch/arm64/kernel/machine_kexec.c
>  create mode 100644 arch/arm64/kernel/relocate_kernel.S
> 
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index b1f9a20..d9eb9cd 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -474,6 +474,15 @@ config SECCOMP
>           and the task is only allowed to execute a few safe syscalls
>           defined by each seccomp mode.
> 
> +config KEXEC
> +       depends on (!SMP || PM_SLEEP_SMP)
> +       bool "kexec system call"
> +       ---help---
> +         kexec is a system call that implements the ability to shutdown your
> +         current kernel, and to start another kernel.  It is like a reboot
> +         but it is independent of the system firmware.   And like a reboot
> +         you can start any kernel with it, not just Linux.
> +

[...]

> +/**
> + * kexec_is_dtb - Helper routine to check the device tree header signature.
> + */
> +static bool kexec_is_dtb(const void *dtb)
> +{
> +       __be32 magic;
> +
> +       return get_user(magic, (__be32 *)dtb) ? false :
> +               (be32_to_cpu(magic) == OF_DT_HEADER);
> +}
> +
> +/**
> + * kexec_find_dtb_seg - Helper routine to find the dtb segment.
> + */
> +static const struct kexec_segment *kexec_find_dtb_seg(
> +       const struct kimage *image)
> +{
> +       int i;
> +
> +       for (i = 0; i < image->nr_segments; i++) {
> +               if (kexec_is_dtb(image->segment[i].buf))
> +                       return &image->segment[i];
> +       }
> +
> +       return NULL;
> +}

As mentioned before, _please_ move the dtb handling to the
userspace-provided purgatory. It would be far better to get userspace to
handle setting up the dtb pointer explicitly. That avoids fragility
w.r.t. policy here as userspace will get exactly what it asked for,
nothing more, nothing less.

The fact that this is done on 32-bit arm does not mean that we must do
it here.

[...]

> +       /* Start new image. */
> +       ldr     x4, arm64_kexec_kimage_start
> +       ldr     x0, arm64_kexec_dtb_addr
> +       mov     x1, xzr
> +       mov     x2, xzr
> +       mov     x3, xzr
> +       br      x4

Likewise, this should be part of the userspace-provided purgatory code.
If we're staying true to "like a reboot you can start any kernel with
it, not just Linux", we shouldn't embed the Linux boot protocol here.

Mark.

^ permalink raw reply	[flat|nested] 100+ messages in thread

* [PATCH 7/8] arm64/kexec: Add checks for KVM
  2015-01-17  0:23     ` Geoff Levand
@ 2015-01-26 19:19       ` Mark Rutland
  -1 siblings, 0 replies; 100+ messages in thread
From: Mark Rutland @ 2015-01-26 19:19 UTC (permalink / raw)
  To: linux-arm-kernel

On Sat, Jan 17, 2015 at 12:23:34AM +0000, Geoff Levand wrote:
> Add runtime checks that fail the arm64 kexec syscall for situations that would
> result in system instability due to problems in the KVM kernel support.
> These checks should be removed when the KVM problems are fixed.
> 
> Signed-off-by: Geoff Levand <geoff@infradead.org>
> ---
>  arch/arm64/kernel/machine_kexec.c | 10 ++++++++++
>  1 file changed, 10 insertions(+)
> 
> diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
> index 3d84759..a36459d 100644
> --- a/arch/arm64/kernel/machine_kexec.c
> +++ b/arch/arm64/kernel/machine_kexec.c
> @@ -16,6 +16,9 @@
>  #include <asm/cacheflush.h>
>  #include <asm/system_misc.h>
>  
> +/* TODO: Remove this include when KVM can support a kexec reboot. */
> +#include <asm/virt.h>
> +
>  /* Global variables for the relocate_kernel routine. */
>  extern const unsigned char relocate_new_kernel[];
>  extern const unsigned long relocate_new_kernel_size;
> @@ -100,6 +103,13 @@ int machine_kexec_prepare(struct kimage *image)
>  
>  	kexec_image_info(image);
>  
> +	/* TODO: Remove this message when KVM can support a kexec reboot. */
> +	if (IS_ENABLED(CONFIG_KVM) && is_hyp_mode_available()) {
> +		pr_err("%s: Your kernel is configured with KVM support (CONFIG_KVM=y) which currently does not allow for kexec re-boot.\n",
> +			__func__);
> +		return -ENOSYS;
> +	}

If you really don't want to implement KVM teardown, surely this should
be at the start of the series, so we don't have a point in the middle
where things may explode in this case?

Mark.

^ permalink raw reply	[flat|nested] 100+ messages in thread

* [PATCH 7/8] arm64/kexec: Add checks for KVM
  2015-01-26 19:19       ` Mark Rutland
@ 2015-01-26 20:39         ` Christoffer Dall
  -1 siblings, 0 replies; 100+ messages in thread
From: Christoffer Dall @ 2015-01-26 20:39 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, Jan 26, 2015 at 8:19 PM, Mark Rutland <mark.rutland@arm.com> wrote:
> On Sat, Jan 17, 2015 at 12:23:34AM +0000, Geoff Levand wrote:
>> Add runtime checks that fail the arm64 kexec syscall for situations that would
>> result in system instability due to problems in the KVM kernel support.
>> These checks should be removed when the KVM problems are resolved.
>>
>> Signed-off-by: Geoff Levand <geoff@infradead.org>
>> ---
>>  arch/arm64/kernel/machine_kexec.c | 10 ++++++++++
>>  1 file changed, 10 insertions(+)
>>
>> diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
>> index 3d84759..a36459d 100644
>> --- a/arch/arm64/kernel/machine_kexec.c
>> +++ b/arch/arm64/kernel/machine_kexec.c
>> @@ -16,6 +16,9 @@
>>  #include <asm/cacheflush.h>
>>  #include <asm/system_misc.h>
>>
>> +/* TODO: Remove this include when KVM can support a kexec reboot. */
>> +#include <asm/virt.h>
>> +
>>  /* Global variables for the relocate_kernel routine. */
>>  extern const unsigned char relocate_new_kernel[];
>>  extern const unsigned long relocate_new_kernel_size;
>> @@ -100,6 +103,13 @@ int machine_kexec_prepare(struct kimage *image)
>>
>>       kexec_image_info(image);
>>
>> +     /* TODO: Remove this message when KVM can support a kexec reboot. */
>> +     if (IS_ENABLED(CONFIG_KVM) && is_hyp_mode_available()) {
>> +             pr_err("%s: Your kernel is configured with KVM support (CONFIG_KVM=y) which currently does not allow for kexec re-boot.\n",
>> +                     __func__);
>> +             return -ENOSYS;
>> +     }
>
> If you really don't want to implement KVM teardown, surely this should
> be at the start of the series, so we don't have a point in the middle
> where things may explode in this case?
>
So this caters to systems that don't support KVM (don't boot
in EL2) but are configured with both KVM and KEXEC?

Why not just make the kexec config dependent on !KVM ?
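
For illustration, the dependency being suggested would amount to a one-line
Kconfig change along these lines (a sketch only; the actual arm64 KEXEC
Kconfig entry is not shown in this thread, so the prompt text and other
options here are assumptions):

```
config KEXEC
	bool "kexec system call"
	depends on !KVM
	help
	  kexec is a system call that implements the ability to shut down
	  your current kernel and start another one.
```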

-Christoffer

^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: [PATCH 7/8] arm64/kexec: Add checks for KVM
@ 2015-01-26 20:39         ` Christoffer Dall
  0 siblings, 0 replies; 100+ messages in thread
From: Christoffer Dall @ 2015-01-26 20:39 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Geoff Levand, Catalin Marinas, Will Deacon, Marc Zyngier,
	grant.likely, kexec, linux-arm-kernel

On Mon, Jan 26, 2015 at 8:19 PM, Mark Rutland <mark.rutland@arm.com> wrote:
> On Sat, Jan 17, 2015 at 12:23:34AM +0000, Geoff Levand wrote:
>> Add runtime checks that fail the arm64 kexec syscall for situations that would
>> result in system instability due to problems in the KVM kernel support.
>> These checks should be removed when the KVM problems are resolved.
>>
>> Signed-off-by: Geoff Levand <geoff@infradead.org>
>> ---
>>  arch/arm64/kernel/machine_kexec.c | 10 ++++++++++
>>  1 file changed, 10 insertions(+)
>>
>> diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
>> index 3d84759..a36459d 100644
>> --- a/arch/arm64/kernel/machine_kexec.c
>> +++ b/arch/arm64/kernel/machine_kexec.c
>> @@ -16,6 +16,9 @@
>>  #include <asm/cacheflush.h>
>>  #include <asm/system_misc.h>
>>
>> +/* TODO: Remove this include when KVM can support a kexec reboot. */
>> +#include <asm/virt.h>
>> +
>>  /* Global variables for the relocate_kernel routine. */
>>  extern const unsigned char relocate_new_kernel[];
>>  extern const unsigned long relocate_new_kernel_size;
>> @@ -100,6 +103,13 @@ int machine_kexec_prepare(struct kimage *image)
>>
>>       kexec_image_info(image);
>>
>> +     /* TODO: Remove this message when KVM can support a kexec reboot. */
>> +     if (IS_ENABLED(CONFIG_KVM) && is_hyp_mode_available()) {
>> +             pr_err("%s: Your kernel is configured with KVM support (CONFIG_KVM=y) which currently does not allow for kexec re-boot.\n",
>> +                     __func__);
>> +             return -ENOSYS;
>> +     }
>
> If you really don't want to implement KVM teardown, surely this should
> be at the start of the series, so we don't have a point in the middle
> where things may explode in this case?
>
So this caters to systems that don't support KVM (don't boot
in EL2) but are configured with both KVM and KEXEC?

Why not just make the kexec config dependent on !KVM ?

-Christoffer


^ permalink raw reply	[flat|nested] 100+ messages in thread

* [PATCH 0/8] arm64 kexec kernel patches V7
  2015-01-26 17:44     ` Catalin Marinas
@ 2015-01-26 20:57       ` Geoff Levand
  -1 siblings, 0 replies; 100+ messages in thread
From: Geoff Levand @ 2015-01-26 20:57 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Catalin,


On Mon, 2015-01-26 at 17:44 +0000, Catalin Marinas wrote:
> On Sat, Jan 17, 2015 at 12:23:34AM +0000, Geoff Levand wrote:
> > This series adds the core support for kexec re-boots on arm64.  This v7 of the
> > series is mainly just a rebase to the latest arm64 for-next/core branch
> > (v3.19-rc4), and a few very minor changes requested for v6.
> 
> I haven't looked at the series in detail before, so some of my comments
> may have already been discussed.
> 
> > Several things are known to have problems on kexec re-boot:
> > 
> > spin-table
> 
> I think that's not too bad, for complete kexec support (SMP->SMP) we can
> require some CPU unplug mechanism and PSCI is one of them.
> 
> > FIX: Upgrade system firmware to provide PSCI enable method support or add
> > missing spin-table support to the kernel.
> 
> What's the missing spin-table support?

I had a working spin-table implementation, but I have not kept that
up to date.  It is somewhat complicated, and it requires a mechanism
to return the secondary CPUs to the spin table code.  As Mark
mentioned, that mechanism would need to be decided on.  I don't plan
to put any more work into spin-table support.

Just for anyone interested, my old spin-table support patches are in
this branch:

https://git.kernel.org/cgit/linux/kernel/git/geoff/linux-kexec.git/log/?h=spin-table

> 
> > ACPI
> > ----
> > 
> > PROBLEM: The kernel for ACPI based systems does not export a device tree to the
> > standard user space location of '/proc/device-tree'.  Current applications
> > expect to access device tree information from this standard location.
> > 
> > WORK-AROUND: Disable ACPI in firmware, OR pass 'acpi=off' on the first stage
> > kernel command line, OR pass a user specified DTB using the kexec --dtb option.
> > 
> > FIX: An interface to expose a binary device tree to user space has been
> > proposed.  User kexec utilities will need to be updated to add support for this
> > new interface.
> 
> So the fix here is to boot the second stage kernel with dtb, which means
> that we mandate the existence of a DT file for any ACPI system. Are
> there plans to make the kexec'ed kernel reuse the ACPI tables?

As Grant mentioned, the dtb the UEFI stub creates and passes to the first
stage kernel is what is passed to the second stage kernel.  The second
stage kernel uses that to set up its ACPI support.

-Geoff

^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: [PATCH 0/8] arm64 kexec kernel patches V7
@ 2015-01-26 20:57       ` Geoff Levand
  0 siblings, 0 replies; 100+ messages in thread
From: Geoff Levand @ 2015-01-26 20:57 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: Ard Biesheuvel, Marc Zyngier, kexec, Will Deacon, Deepak Saxena,
	linux-arm-kernel, grant.likely, christoffer.dall

Hi Catalin,


On Mon, 2015-01-26 at 17:44 +0000, Catalin Marinas wrote:
> On Sat, Jan 17, 2015 at 12:23:34AM +0000, Geoff Levand wrote:
> > This series adds the core support for kexec re-boots on arm64.  This v7 of the
> > series is mainly just a rebase to the latest arm64 for-next/core branch
> > (v3.19-rc4), and a few very minor changes requested for v6.
> 
> I haven't looked at the series in detail before, so some of my comments
> may have already been discussed.
> 
> > Several things are known to have problems on kexec re-boot:
> > 
> > spin-table
> 
> I think that's not too bad, for complete kexec support (SMP->SMP) we can
> require some CPU unplug mechanism and PSCI is one of them.
> 
> > FIX: Upgrade system firmware to provide PSCI enable method support or add
> > missing spin-table support to the kernel.
> 
> What's the missing spin-table support?

I had a working spin-table implementation, but I have not kept that
up to date.  It is somewhat complicated, and it requires a mechanism
to return the secondary CPUs to the spin table code.  As Mark
mentioned, that mechanism would need to be decided on.  I don't plan
to put any more work into spin-table support.

Just for anyone interested, my old spin-table support patches are in
this branch:

https://git.kernel.org/cgit/linux/kernel/git/geoff/linux-kexec.git/log/?h=spin-table

> 
> > ACPI
> > ----
> > 
> > PROBLEM: The kernel for ACPI based systems does not export a device tree to the
> > standard user space location of '/proc/device-tree'.  Current applications
> > expect to access device tree information from this standard location.
> > 
> > WORK-AROUND: Disable ACPI in firmware, OR pass 'acpi=off' on the first stage
> > kernel command line, OR pass a user specified DTB using the kexec --dtb option.
> > 
> > FIX: An interface to expose a binary device tree to user space has been
> > proposed.  User kexec utilities will need to be updated to add support for this
> > new interface.
> 
> So the fix here is to boot the second stage kernel with dtb, which means
> that we mandate the existence of a DT file for any ACPI system. Are
> there plans to make the kexec'ed kernel reuse the ACPI tables?

As Grant mentioned, the dtb the UEFI stub creates and passes to the first
stage kernel is what is passed to the second stage kernel.  The second
stage kernel uses that to set up its ACPI support.

-Geoff





^ permalink raw reply	[flat|nested] 100+ messages in thread

* [PATCH 7/8] arm64/kexec: Add checks for KVM
  2015-01-26 20:39         ` Christoffer Dall
@ 2015-01-26 20:58           ` Geoff Levand
  -1 siblings, 0 replies; 100+ messages in thread
From: Geoff Levand @ 2015-01-26 20:58 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, 2015-01-26 at 21:39 +0100, Christoffer Dall wrote:
> On Mon, Jan 26, 2015 at 8:19 PM, Mark Rutland <mark.rutland@arm.com> wrote:
> > On Sat, Jan 17, 2015 at 12:23:34AM +0000, Geoff Levand wrote:
> >> Add runtime checks that fail the arm64 kexec syscall for situations that would
> >> result in system instability due to problems in the KVM kernel support.
> >> These checks should be removed when the KVM problems are resolved.
> >>
> >> Signed-off-by: Geoff Levand <geoff@infradead.org>
> >> ---
> >>  arch/arm64/kernel/machine_kexec.c | 10 ++++++++++
> >>  1 file changed, 10 insertions(+)
> >>
> >> diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
> >> index 3d84759..a36459d 100644
> >> --- a/arch/arm64/kernel/machine_kexec.c
> >> +++ b/arch/arm64/kernel/machine_kexec.c
> >> @@ -16,6 +16,9 @@
> >>  #include <asm/cacheflush.h>
> >>  #include <asm/system_misc.h>
> >>
> >> +/* TODO: Remove this include when KVM can support a kexec reboot. */
> >> +#include <asm/virt.h>
> >> +
> >>  /* Global variables for the relocate_kernel routine. */
> >>  extern const unsigned char relocate_new_kernel[];
> >>  extern const unsigned long relocate_new_kernel_size;
> >> @@ -100,6 +103,13 @@ int machine_kexec_prepare(struct kimage *image)
> >>
> >>       kexec_image_info(image);
> >>
> >> +     /* TODO: Remove this message when KVM can support a kexec reboot. */
> >> +     if (IS_ENABLED(CONFIG_KVM) && is_hyp_mode_available()) {
> >> +             pr_err("%s: Your kernel is configured with KVM support (CONFIG_KVM=y) which currently does not allow for kexec re-boot.\n",
> >> +                     __func__);
> >> +             return -ENOSYS;
> >> +     }
> >
> > If you really don't want to implement KVM teardown, surely this should
> > be at the start of the series, so we don't have a point in the middle
> > where things may explode in this case?
> >
> So this caters to systems that don't support KVM (don't boot
> in EL2) but are configured with both KVM and KEXEC?
> 
> Why not just make the kexec config dependent on !KVM ?

Sure, that would work.  I put it this way so we can get build testing of
kexec since the arm64 defconfig has KVM set.

-Geoff

^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: [PATCH 7/8] arm64/kexec: Add checks for KVM
@ 2015-01-26 20:58           ` Geoff Levand
  0 siblings, 0 replies; 100+ messages in thread
From: Geoff Levand @ 2015-01-26 20:58 UTC (permalink / raw)
  To: Christoffer Dall
  Cc: Mark Rutland, Marc Zyngier, Catalin Marinas, Will Deacon,
	grant.likely, kexec, linux-arm-kernel

On Mon, 2015-01-26 at 21:39 +0100, Christoffer Dall wrote:
> On Mon, Jan 26, 2015 at 8:19 PM, Mark Rutland <mark.rutland@arm.com> wrote:
> > On Sat, Jan 17, 2015 at 12:23:34AM +0000, Geoff Levand wrote:
> >> Add runtime checks that fail the arm64 kexec syscall for situations that would
> >> result in system instability due to problems in the KVM kernel support.
> >> These checks should be removed when the KVM problems are resolved.
> >>
> >> Signed-off-by: Geoff Levand <geoff@infradead.org>
> >> ---
> >>  arch/arm64/kernel/machine_kexec.c | 10 ++++++++++
> >>  1 file changed, 10 insertions(+)
> >>
> >> diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
> >> index 3d84759..a36459d 100644
> >> --- a/arch/arm64/kernel/machine_kexec.c
> >> +++ b/arch/arm64/kernel/machine_kexec.c
> >> @@ -16,6 +16,9 @@
> >>  #include <asm/cacheflush.h>
> >>  #include <asm/system_misc.h>
> >>
> >> +/* TODO: Remove this include when KVM can support a kexec reboot. */
> >> +#include <asm/virt.h>
> >> +
> >>  /* Global variables for the relocate_kernel routine. */
> >>  extern const unsigned char relocate_new_kernel[];
> >>  extern const unsigned long relocate_new_kernel_size;
> >> @@ -100,6 +103,13 @@ int machine_kexec_prepare(struct kimage *image)
> >>
> >>       kexec_image_info(image);
> >>
> >> +     /* TODO: Remove this message when KVM can support a kexec reboot. */
> >> +     if (IS_ENABLED(CONFIG_KVM) && is_hyp_mode_available()) {
> >> +             pr_err("%s: Your kernel is configured with KVM support (CONFIG_KVM=y) which currently does not allow for kexec re-boot.\n",
> >> +                     __func__);
> >> +             return -ENOSYS;
> >> +     }
> >
> > If you really don't want to implement KVM teardown, surely this should
> > be at the start of the series, so we don't have a point in the middle
> > where things may explode in this case?
> >
> So this caters to systems that don't support KVM (don't boot
> in EL2) but are configured with both KVM and KEXEC?
> 
> Why not just make the kexec config dependent on !KVM ?

Sure, that would work.  I put it this way so we can get build testing of
kexec since the arm64 defconfig has KVM set.

-Geoff




^ permalink raw reply	[flat|nested] 100+ messages in thread

* [PATCH 7/8] arm64/kexec: Add checks for KVM
  2015-01-26 19:19       ` Mark Rutland
@ 2015-01-26 21:00         ` Geoff Levand
  -1 siblings, 0 replies; 100+ messages in thread
From: Geoff Levand @ 2015-01-26 21:00 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Mark,

On Mon, 2015-01-26 at 19:19 +0000, Mark Rutland wrote:
> On Sat, Jan 17, 2015 at 12:23:34AM +0000, Geoff Levand wrote:
> > Add runtime checks that fail the arm64 kexec syscall for situations that would
> > result in system instability due to problems in the KVM kernel support.
> > These checks should be removed when the KVM problems are resolved.
> > 
> > Signed-off-by: Geoff Levand <geoff@infradead.org>
> > ---
> >  arch/arm64/kernel/machine_kexec.c | 10 ++++++++++
> >  1 file changed, 10 insertions(+)
> > 
> > diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
> > index 3d84759..a36459d 100644
> > --- a/arch/arm64/kernel/machine_kexec.c
> > +++ b/arch/arm64/kernel/machine_kexec.c
> > @@ -16,6 +16,9 @@
> >  #include <asm/cacheflush.h>
> >  #include <asm/system_misc.h>
> >  
> > +/* TODO: Remove this include when KVM can support a kexec reboot. */
> > +#include <asm/virt.h>
> > +
> >  /* Global variables for the relocate_kernel routine. */
> >  extern const unsigned char relocate_new_kernel[];
> >  extern const unsigned long relocate_new_kernel_size;
> > @@ -100,6 +103,13 @@ int machine_kexec_prepare(struct kimage *image)
> >  
> >  	kexec_image_info(image);
> >  
> > +	/* TODO: Remove this message when KVM can support a kexec reboot. */
> > +	if (IS_ENABLED(CONFIG_KVM) && is_hyp_mode_available()) {
> > +		pr_err("%s: Your kernel is configured with KVM support (CONFIG_KVM=y) which currently does not allow for kexec re-boot.\n",
> > +			__func__);
> > +		return -ENOSYS;
> > +	}
> 
> If you really don't want to implement KVM teardown, surely this should
> be at the start of the series, so we don't have a point in the middle
> where things may explode in this case?

Yes, you're right.  I'm hoping we can get the KVM fix done soon so this
is not needed.

-Geoff

^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: [PATCH 7/8] arm64/kexec: Add checks for KVM
@ 2015-01-26 21:00         ` Geoff Levand
  0 siblings, 0 replies; 100+ messages in thread
From: Geoff Levand @ 2015-01-26 21:00 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Marc Zyngier, Catalin Marinas, Will Deacon, christoffer.dall,
	grant.likely, kexec, linux-arm-kernel

Hi Mark,

On Mon, 2015-01-26 at 19:19 +0000, Mark Rutland wrote:
> On Sat, Jan 17, 2015 at 12:23:34AM +0000, Geoff Levand wrote:
> > Add runtime checks that fail the arm64 kexec syscall for situations that would
> > result in system instability due to problems in the KVM kernel support.
> > These checks should be removed when the KVM problems are resolved.
> > 
> > Signed-off-by: Geoff Levand <geoff@infradead.org>
> > ---
> >  arch/arm64/kernel/machine_kexec.c | 10 ++++++++++
> >  1 file changed, 10 insertions(+)
> > 
> > diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
> > index 3d84759..a36459d 100644
> > --- a/arch/arm64/kernel/machine_kexec.c
> > +++ b/arch/arm64/kernel/machine_kexec.c
> > @@ -16,6 +16,9 @@
> >  #include <asm/cacheflush.h>
> >  #include <asm/system_misc.h>
> >  
> > +/* TODO: Remove this include when KVM can support a kexec reboot. */
> > +#include <asm/virt.h>
> > +
> >  /* Global variables for the relocate_kernel routine. */
> >  extern const unsigned char relocate_new_kernel[];
> >  extern const unsigned long relocate_new_kernel_size;
> > @@ -100,6 +103,13 @@ int machine_kexec_prepare(struct kimage *image)
> >  
> >  	kexec_image_info(image);
> >  
> > +	/* TODO: Remove this message when KVM can support a kexec reboot. */
> > +	if (IS_ENABLED(CONFIG_KVM) && is_hyp_mode_available()) {
> > +		pr_err("%s: Your kernel is configured with KVM support (CONFIG_KVM=y) which currently does not allow for kexec re-boot.\n",
> > +			__func__);
> > +		return -ENOSYS;
> > +	}
> 
> If you really don't want to implement KVM teardown, surely this should
> be at the start of the series, so we don't have a point in the middle
> where things may explode in this case?

Yes, you're right.  I'm hoping we can get the KVM fix done soon so this
is not needed.

-Geoff




^ permalink raw reply	[flat|nested] 100+ messages in thread

* [PATCH 4/8] arm64: Add EL2 switch to soft_restart
  2015-01-26 19:02       ` Mark Rutland
@ 2015-01-26 21:48         ` Geoff Levand
  -1 siblings, 0 replies; 100+ messages in thread
From: Geoff Levand @ 2015-01-26 21:48 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Mark,

On Mon, 2015-01-26 at 19:02 +0000, Mark Rutland wrote:
> On Sat, Jan 17, 2015 at 12:23:34AM +0000, Geoff Levand wrote:
> > When a CPU is reset it needs to be put into the exception level it had when it
> > entered the kernel.  Update cpu_reset() to accept an argument el2_switch which
> > signals cpu_reset() to enter the soft reset address at EL2.  If el2_switch is
> > not set the soft reset address will be entered at EL1.
> > 
> > Update cpu_soft_restart() and soft_restart() to pass the return of
> > is_hyp_mode_available() as the el2_switch value to cpu_reset().  Also update the
> > comments of cpu_reset(), cpu_soft_restart() and soft_restart() to reflect this
> > change.
> > 
> > Signed-off-by: Geoff Levand <geoff@infradead.org>
> > ---
> >  arch/arm64/include/asm/proc-fns.h |  4 ++--
> >  arch/arm64/kernel/process.c       | 10 ++++++++-
> >  arch/arm64/mm/proc.S              | 47 +++++++++++++++++++++++++++++----------
> >  3 files changed, 46 insertions(+), 15 deletions(-)
> > 
> > diff --git a/arch/arm64/include/asm/proc-fns.h b/arch/arm64/include/asm/proc-fns.h
> > index 9a8fd84..339394d 100644
> > --- a/arch/arm64/include/asm/proc-fns.h
> > +++ b/arch/arm64/include/asm/proc-fns.h
> > @@ -32,8 +32,8 @@ extern void cpu_cache_off(void);
> >  extern void cpu_do_idle(void);
> >  extern void cpu_do_switch_mm(unsigned long pgd_phys, struct mm_struct *mm);
> >  extern void cpu_reset(unsigned long addr) __attribute__((noreturn));
> > -void cpu_soft_restart(phys_addr_t cpu_reset,
> > -		unsigned long addr) __attribute__((noreturn));
> > +void cpu_soft_restart(phys_addr_t cpu_reset, unsigned long el2_switch,
> > +		      unsigned long addr) __attribute__((noreturn));
> >  extern void cpu_do_suspend(struct cpu_suspend_ctx *ptr);
> >  extern u64 cpu_do_resume(phys_addr_t ptr, u64 idmap_ttbr);
> >  
> > diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
> > index fde9923..371bbf1 100644
> > --- a/arch/arm64/kernel/process.c
> > +++ b/arch/arm64/kernel/process.c
> > @@ -50,6 +50,7 @@
> >  #include <asm/mmu_context.h>
> >  #include <asm/processor.h>
> >  #include <asm/stacktrace.h>
> > +#include <asm/virt.h>
> >  
> >  #ifdef CONFIG_CC_STACKPROTECTOR
> >  #include <linux/stackprotector.h>
> > @@ -60,7 +61,14 @@ EXPORT_SYMBOL(__stack_chk_guard);
> >  void soft_restart(unsigned long addr)
> >  {
> >  	setup_mm_for_reboot();
> > -	cpu_soft_restart(virt_to_phys(cpu_reset), addr);
> > +
> > +	/* TODO: Remove this conditional when KVM can support CPU restart. */
> > +	if (IS_ENABLED(CONFIG_KVM))
> > +		cpu_soft_restart(virt_to_phys(cpu_reset), 0, addr);
> 
> If we haven't torn down KVM, doesn't that mean that KVM is active at EL2
> (with MMU and caches on) at this point?
> 
> If that's the case then we cannot possibly try to call kexec(), because
> we cannot touch the memory used by the page tables for those EL2
> mappings. Things will explode if we do.

This conditional just means: if KVM is configured, do things the old way
(don't try to switch exception levels).  It is there to handle the system
shutdown case.

Another patch in this series '[PATCH 7/8] arm64/kexec: Add checks for
KVM' assures kexec cannot happen when KVM is configured.

-Geoff

^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: [PATCH 4/8] arm64: Add EL2 switch to soft_restart
@ 2015-01-26 21:48         ` Geoff Levand
  0 siblings, 0 replies; 100+ messages in thread
From: Geoff Levand @ 2015-01-26 21:48 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Marc Zyngier, Catalin Marinas, Will Deacon, christoffer.dall,
	grant.likely, kexec, linux-arm-kernel

Hi Mark,

On Mon, 2015-01-26 at 19:02 +0000, Mark Rutland wrote:
> On Sat, Jan 17, 2015 at 12:23:34AM +0000, Geoff Levand wrote:
> > When a CPU is reset it needs to be put into the exception level it had when it
> > entered the kernel.  Update cpu_reset() to accept an argument el2_switch which
> > signals cpu_reset() to enter the soft reset address at EL2.  If el2_switch is
> > not set the soft reset address will be entered at EL1.
> > 
> > Update cpu_soft_restart() and soft_restart() to pass the return of
> > is_hyp_mode_available() as the el2_switch value to cpu_reset().  Also update the
> > comments of cpu_reset(), cpu_soft_restart() and soft_restart() to reflect this
> > change.
> > 
> > Signed-off-by: Geoff Levand <geoff@infradead.org>
> > ---
> >  arch/arm64/include/asm/proc-fns.h |  4 ++--
> >  arch/arm64/kernel/process.c       | 10 ++++++++-
> >  arch/arm64/mm/proc.S              | 47 +++++++++++++++++++++++++++++----------
> >  3 files changed, 46 insertions(+), 15 deletions(-)
> > 
> > diff --git a/arch/arm64/include/asm/proc-fns.h b/arch/arm64/include/asm/proc-fns.h
> > index 9a8fd84..339394d 100644
> > --- a/arch/arm64/include/asm/proc-fns.h
> > +++ b/arch/arm64/include/asm/proc-fns.h
> > @@ -32,8 +32,8 @@ extern void cpu_cache_off(void);
> >  extern void cpu_do_idle(void);
> >  extern void cpu_do_switch_mm(unsigned long pgd_phys, struct mm_struct *mm);
> >  extern void cpu_reset(unsigned long addr) __attribute__((noreturn));
> > -void cpu_soft_restart(phys_addr_t cpu_reset,
> > -		unsigned long addr) __attribute__((noreturn));
> > +void cpu_soft_restart(phys_addr_t cpu_reset, unsigned long el2_switch,
> > +		      unsigned long addr) __attribute__((noreturn));
> >  extern void cpu_do_suspend(struct cpu_suspend_ctx *ptr);
> >  extern u64 cpu_do_resume(phys_addr_t ptr, u64 idmap_ttbr);
> >  
> > diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
> > index fde9923..371bbf1 100644
> > --- a/arch/arm64/kernel/process.c
> > +++ b/arch/arm64/kernel/process.c
> > @@ -50,6 +50,7 @@
> >  #include <asm/mmu_context.h>
> >  #include <asm/processor.h>
> >  #include <asm/stacktrace.h>
> > +#include <asm/virt.h>
> >  
> >  #ifdef CONFIG_CC_STACKPROTECTOR
> >  #include <linux/stackprotector.h>
> > @@ -60,7 +61,14 @@ EXPORT_SYMBOL(__stack_chk_guard);
> >  void soft_restart(unsigned long addr)
> >  {
> >  	setup_mm_for_reboot();
> > -	cpu_soft_restart(virt_to_phys(cpu_reset), addr);
> > +
> > +	/* TODO: Remove this conditional when KVM can support CPU restart. */
> > +	if (IS_ENABLED(CONFIG_KVM))
> > +		cpu_soft_restart(virt_to_phys(cpu_reset), 0, addr);
> 
> If we haven't torn down KVM, doesn't that mean that KVM is active at EL2
> (with MMU and caches on) at this point?
> 
> If that's the case then we cannot possibly try to call kexec(), because
> we cannot touch the memory used by the page tables for those EL2
> mappings. Things will explode if we do.

This conditional just means: if KVM is configured, do things the old way
(don't try to switch exception levels).  It is there to handle the system
shutdown case.

Another patch in this series '[PATCH 7/8] arm64/kexec: Add checks for
KVM' assures kexec cannot happen when KVM is configured.

-Geoff





^ permalink raw reply	[flat|nested] 100+ messages in thread

* [PATCH 4/8] arm64: Add EL2 switch to soft_restart
  2015-01-26 21:48         ` Geoff Levand
@ 2015-01-27 16:46           ` Mark Rutland
  -1 siblings, 0 replies; 100+ messages in thread
From: Mark Rutland @ 2015-01-27 16:46 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, Jan 26, 2015 at 09:48:48PM +0000, Geoff Levand wrote:
> Hi Mark,
> 
> On Mon, 2015-01-26 at 19:02 +0000, Mark Rutland wrote:
> > On Sat, Jan 17, 2015 at 12:23:34AM +0000, Geoff Levand wrote:
> > > When a CPU is reset it needs to be put into the exception level it had when it
> > > entered the kernel.  Update cpu_reset() to accept an argument el2_switch which
> > > signals cpu_reset() to enter the soft reset address at EL2.  If el2_switch is
> > > not set the soft reset address will be entered at EL1.
> > > 
> > > Update cpu_soft_restart() and soft_restart() to pass the return of
> > > is_hyp_mode_available() as the el2_switch value to cpu_reset().  Also update the
> > > comments of cpu_reset(), cpu_soft_restart() and soft_restart() to reflect this
> > > change.
> > > 
> > > Signed-off-by: Geoff Levand <geoff@infradead.org>
> > > ---
> > >  arch/arm64/include/asm/proc-fns.h |  4 ++--
> > >  arch/arm64/kernel/process.c       | 10 ++++++++-
> > >  arch/arm64/mm/proc.S              | 47 +++++++++++++++++++++++++++++----------
> > >  3 files changed, 46 insertions(+), 15 deletions(-)
> > > 
> > > diff --git a/arch/arm64/include/asm/proc-fns.h b/arch/arm64/include/asm/proc-fns.h
> > > index 9a8fd84..339394d 100644
> > > --- a/arch/arm64/include/asm/proc-fns.h
> > > +++ b/arch/arm64/include/asm/proc-fns.h
> > > @@ -32,8 +32,8 @@ extern void cpu_cache_off(void);
> > >  extern void cpu_do_idle(void);
> > >  extern void cpu_do_switch_mm(unsigned long pgd_phys, struct mm_struct *mm);
> > >  extern void cpu_reset(unsigned long addr) __attribute__((noreturn));
> > > -void cpu_soft_restart(phys_addr_t cpu_reset,
> > > -		unsigned long addr) __attribute__((noreturn));
> > > +void cpu_soft_restart(phys_addr_t cpu_reset, unsigned long el2_switch,
> > > +		      unsigned long addr) __attribute__((noreturn));
> > >  extern void cpu_do_suspend(struct cpu_suspend_ctx *ptr);
> > >  extern u64 cpu_do_resume(phys_addr_t ptr, u64 idmap_ttbr);
> > >  
> > > diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
> > > index fde9923..371bbf1 100644
> > > --- a/arch/arm64/kernel/process.c
> > > +++ b/arch/arm64/kernel/process.c
> > > @@ -50,6 +50,7 @@
> > >  #include <asm/mmu_context.h>
> > >  #include <asm/processor.h>
> > >  #include <asm/stacktrace.h>
> > > +#include <asm/virt.h>
> > >  
> > >  #ifdef CONFIG_CC_STACKPROTECTOR
> > >  #include <linux/stackprotector.h>
> > > @@ -60,7 +61,14 @@ EXPORT_SYMBOL(__stack_chk_guard);
> > >  void soft_restart(unsigned long addr)
> > >  {
> > >  	setup_mm_for_reboot();
> > > -	cpu_soft_restart(virt_to_phys(cpu_reset), addr);
> > > +
> > > +	/* TODO: Remove this conditional when KVM can support CPU restart. */
> > > +	if (IS_ENABLED(CONFIG_KVM))
> > > +		cpu_soft_restart(virt_to_phys(cpu_reset), 0, addr);
> > 
> > If we haven't torn down KVM, doesn't that mean that KVM is active at EL2
> > (with MMU and caches on) at this point?
> > 
> > If that's the case then we cannot possibly try to call kexec(), because
> > we cannot touch the memory used by the page tables for those EL2
> > mappings. Things will explode if we do.
> 
> This conditional just means: if KVM is configured, do things the old way
> (don't try to switch exception levels).  It is there to handle the system
> shutdown case.

Having grepped treewide for soft_restart, other than kexec there are no
users for arm64. So surely kexec is the only case to cater for at the
moment?

> Another patch in this series '[PATCH 7/8] arm64/kexec: Add checks for
> KVM' assures kexec cannot happen when KVM is configured.

It would be better to just move this earlier (or even better, implement
KVM teardown).

Mark.

^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: [PATCH 4/8] arm64: Add EL2 switch to soft_restart
@ 2015-01-27 16:46           ` Mark Rutland
  0 siblings, 0 replies; 100+ messages in thread
From: Mark Rutland @ 2015-01-27 16:46 UTC (permalink / raw)
  To: Geoff Levand
  Cc: Marc Zyngier, Catalin Marinas, Will Deacon, christoffer.dall,
	grant.likely, kexec, linux-arm-kernel

On Mon, Jan 26, 2015 at 09:48:48PM +0000, Geoff Levand wrote:
> Hi Mark,
> 
> On Mon, 2015-01-26 at 19:02 +0000, Mark Rutland wrote:
> > On Sat, Jan 17, 2015 at 12:23:34AM +0000, Geoff Levand wrote:
> > > When a CPU is reset it needs to be put into the exception level it had when it
> > > entered the kernel.  Update cpu_reset() to accept an argument el2_switch which
> > > signals cpu_reset() to enter the soft reset address at EL2.  If el2_switch is
> > > not set the soft reset address will be entered at EL1.
> > > 
> > > Update cpu_soft_restart() and soft_restart() to pass the return of
> > > is_hyp_mode_available() as the el2_switch value to cpu_reset().  Also update the
> > > comments of cpu_reset(), cpu_soft_restart() and soft_restart() to reflect this
> > > change.
> > > 
> > > Signed-off-by: Geoff Levand <geoff@infradead.org>
> > > ---
> > >  arch/arm64/include/asm/proc-fns.h |  4 ++--
> > >  arch/arm64/kernel/process.c       | 10 ++++++++-
> > >  arch/arm64/mm/proc.S              | 47 +++++++++++++++++++++++++++++----------
> > >  3 files changed, 46 insertions(+), 15 deletions(-)
> > > 
> > > diff --git a/arch/arm64/include/asm/proc-fns.h b/arch/arm64/include/asm/proc-fns.h
> > > index 9a8fd84..339394d 100644
> > > --- a/arch/arm64/include/asm/proc-fns.h
> > > +++ b/arch/arm64/include/asm/proc-fns.h
> > > @@ -32,8 +32,8 @@ extern void cpu_cache_off(void);
> > >  extern void cpu_do_idle(void);
> > >  extern void cpu_do_switch_mm(unsigned long pgd_phys, struct mm_struct *mm);
> > >  extern void cpu_reset(unsigned long addr) __attribute__((noreturn));
> > > -void cpu_soft_restart(phys_addr_t cpu_reset,
> > > -		unsigned long addr) __attribute__((noreturn));
> > > +void cpu_soft_restart(phys_addr_t cpu_reset, unsigned long el2_switch,
> > > +		      unsigned long addr) __attribute__((noreturn));
> > >  extern void cpu_do_suspend(struct cpu_suspend_ctx *ptr);
> > >  extern u64 cpu_do_resume(phys_addr_t ptr, u64 idmap_ttbr);
> > >  
> > > diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
> > > index fde9923..371bbf1 100644
> > > --- a/arch/arm64/kernel/process.c
> > > +++ b/arch/arm64/kernel/process.c
> > > @@ -50,6 +50,7 @@
> > >  #include <asm/mmu_context.h>
> > >  #include <asm/processor.h>
> > >  #include <asm/stacktrace.h>
> > > +#include <asm/virt.h>
> > >  
> > >  #ifdef CONFIG_CC_STACKPROTECTOR
> > >  #include <linux/stackprotector.h>
> > > @@ -60,7 +61,14 @@ EXPORT_SYMBOL(__stack_chk_guard);
> > >  void soft_restart(unsigned long addr)
> > >  {
> > >  	setup_mm_for_reboot();
> > > -	cpu_soft_restart(virt_to_phys(cpu_reset), addr);
> > > +
> > > +	/* TODO: Remove this conditional when KVM can support CPU restart. */
> > > +	if (IS_ENABLED(CONFIG_KVM))
> > > +		cpu_soft_restart(virt_to_phys(cpu_reset), 0, addr);
> > 
> > If we haven't torn down KVM, doesn't that mean that KVM is active at EL2
> > (with MMU and caches on) at this point?
> > 
> > If that's the case then we cannot possibly try to call kexec(), because
> > we cannot touch the memory used by the page tables for those EL2
> > mappings. Things will explode if we do.
> 
> This conditional is just if KVM, do things the old way (don't try to
> switch exception levels).  It is to handle the system shutdown case.

Having grepped treewide for soft_restart, other than kexec there are no
users for arm64. So surely kexec is the only case to cater for at the
moment?

> Another patch in this series '[PATCH 7/8] arm64/kexec: Add checks for
> KVM' assures kexec cannot happen when KVM is configured.

It would be better to just move this earlier (or even better, implement
kvm teardown).

Mark.

_______________________________________________
kexec mailing list
kexec@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/kexec

^ permalink raw reply	[flat|nested] 100+ messages in thread

* [PATCH 3/8] arm64: Add new hcall HVC_CALL_FUNC
  2015-01-17  0:23     ` Geoff Levand
@ 2015-01-27 17:39       ` Catalin Marinas
  -1 siblings, 0 replies; 100+ messages in thread
From: Catalin Marinas @ 2015-01-27 17:39 UTC (permalink / raw)
  To: linux-arm-kernel

On Sat, Jan 17, 2015 at 12:23:34AM +0000, Geoff Levand wrote:
> diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
> index 99c319c..4f23a48 100644
> --- a/arch/arm64/include/asm/virt.h
> +++ b/arch/arm64/include/asm/virt.h
> @@ -41,6 +41,19 @@
>  
>  #define HVC_CALL_HYP 3
>  
> +/*
> + * HVC_CALL_FUNC - Execute a function at EL2.
> + *
> + * @x0: Physical address of the function to be executed.
> + * @x1: Passed as the first argument to the function.
> + * @x2: Passed as the second argument to the function.
> + * @x3: Passed as the third argument to the function.
> + *
> + * The called function must preserve the contents of register x18.

Can you pick a register that's normally callee saved?

> + */
> +
> +#define HVC_CALL_FUNC 4
> +
>  #ifndef __ASSEMBLY__
>  
>  /*
> diff --git a/arch/arm64/kernel/hyp-stub.S b/arch/arm64/kernel/hyp-stub.S
> index e3db3fd..b5d36e7 100644
> --- a/arch/arm64/kernel/hyp-stub.S
> +++ b/arch/arm64/kernel/hyp-stub.S
> @@ -66,9 +66,20 @@ el1_sync:
>  	mrs	x0, vbar_el2
>  	b	2f
>  
> -1:	cmp	x18, #HVC_SET_VECTORS
> -	b.ne	2f
> -	msr	vbar_el2, x0
> +1:	cmp     x18, #HVC_SET_VECTORS

This line doesn't seem to have any change, apart from some whitespace.
Or did you want to drop the label?

> +	b.ne    1f
> +	msr     vbar_el2, x0
> +	b       2f
> +
> +1:	cmp     x18, #HVC_CALL_FUNC
> +	b.ne    2f
> +	mov     x18, lr
> +	mov     lr, x0
> +	mov     x0, x1
> +	mov     x1, x2
> +	mov     x2, x3
> +	blr     lr
> +	mov     lr, x18
>  
>  2:	eret
>  ENDPROC(el1_sync)

What is the calling convention for this HVC? You mentioned x18 above but
what about other registers that the called function may corrupt (x18 is
a temporary register, so it's not expected to be callee saved).

-- 
Catalin

^ permalink raw reply	[flat|nested] 100+ messages in thread

* [PATCH 4/8] arm64: Add EL2 switch to soft_restart
  2015-01-17  0:23     ` Geoff Levand
@ 2015-01-27 17:57       ` Catalin Marinas
  -1 siblings, 0 replies; 100+ messages in thread
From: Catalin Marinas @ 2015-01-27 17:57 UTC (permalink / raw)
  To: linux-arm-kernel

On Sat, Jan 17, 2015 at 12:23:34AM +0000, Geoff Levand wrote:
>  ENTRY(cpu_reset)
> -	mrs	x1, sctlr_el1
> -	bic	x1, x1, #1
> -	msr	sctlr_el1, x1			// disable the MMU
> +	mrs	x2, sctlr_el1
> +	bic	x2, x2, #1
> +	msr	sctlr_el1, x2			// disable the MMU
>  	isb
> -	ret	x0
> +
> +	cbz	x0, 1f				// el2_switch?
> +	mov	x0, x1
> +	mov	x1, xzr
> +	mov	x2, xzr
> +	mov	x3, xzr
> +	hvc	#HVC_CALL_FUNC			// no return

If that's the only user of HVC_CALL_FUNC, why do we bother with
arguments and a calling convention?

-- 
Catalin

^ permalink raw reply	[flat|nested] 100+ messages in thread

* [PATCH 3/8] arm64: Add new hcall HVC_CALL_FUNC
  2015-01-27 17:39       ` Catalin Marinas
@ 2015-01-27 18:00         ` Mark Rutland
  -1 siblings, 0 replies; 100+ messages in thread
From: Mark Rutland @ 2015-01-27 18:00 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Jan 27, 2015 at 05:39:47PM +0000, Catalin Marinas wrote:
> On Sat, Jan 17, 2015 at 12:23:34AM +0000, Geoff Levand wrote:
> > diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
> > index 99c319c..4f23a48 100644
> > --- a/arch/arm64/include/asm/virt.h
> > +++ b/arch/arm64/include/asm/virt.h
> > @@ -41,6 +41,19 @@
> >  
> >  #define HVC_CALL_HYP 3
> >  
> > +/*
> > + * HVC_CALL_FUNC - Execute a function at EL2.
> > + *
> > + * @x0: Physical address of the function to be executed.
> > + * @x1: Passed as the first argument to the function.
> > + * @x2: Passed as the second argument to the function.
> > + * @x3: Passed as the third argument to the function.
> > + *
> > + * The called function must preserve the contents of register x18.
> 
> Can you pick a register that's normally callee saved?

We're in the hyp-stub, so we don't have a stack in EL2. Therefore we
can't stack any of the existing callee-saved register values in order to
be able to use them.

One way to avoid that would be to have an asm block which issues the HVC at
EL1 stack/unstack the LR around the HVC. Then we're free to corrupt the
LR at EL2 in order to call the provided function.
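
For illustration, such an EL1-side wrapper might look something like
this (a sketch only, not part of this series; the symbol name is
invented):

	/*
	 * hyp_call_func - call a function at EL2 via HVC_CALL_FUNC.
	 *
	 * x0 = physical address of the function, x1-x3 = its arguments.
	 * The LR is stacked here at EL1, so the EL2 stub (which has no
	 * stack of its own) would be free to corrupt it when branching
	 * to the function.
	 */
	ENTRY(hyp_call_func)
		str	lr, [sp, #-16]!		// preserve lr at EL1
		hvc	#HVC_CALL_FUNC
		ldr	lr, [sp], #16		// restore lr
		ret
	ENDPROC(hyp_call_func)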

[...]

> > +1:	cmp     x18, #HVC_CALL_FUNC
> > +	b.ne    2f
> > +	mov     x18, lr
> > +	mov     lr, x0
> > +	mov     x0, x1
> > +	mov     x1, x2
> > +	mov     x2, x3
> > +	blr     lr
> > +	mov     lr, x18
> >  
> >  2:	eret
> >  ENDPROC(el1_sync)
> 
> What is the calling convention for this HVC? You mentioned x18 above but
> what about other registers that the called function may corrupt (x18 is
> a temporary register, so it's not expected to be callee saved).

Other than x18, the usual PCS rules apply here. We don't have a stack,
so the function we call can't make a nested call to anything else.

Mark.

^ permalink raw reply	[flat|nested] 100+ messages in thread

* [PATCH 4/8] arm64: Add EL2 switch to soft_restart
  2015-01-27 16:46           ` Mark Rutland
@ 2015-01-27 18:34             ` Geoff Levand
  -1 siblings, 0 replies; 100+ messages in thread
From: Geoff Levand @ 2015-01-27 18:34 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Mark,

On Tue, 2015-01-27 at 16:46 +0000, Mark Rutland wrote:
> On Mon, Jan 26, 2015 at 09:48:48PM +0000, Geoff Levand wrote:
> > This conditional is just if KVM, do things the old way (don't try to
> > switch exception levels).  It is to handle the system shutdown case.
> 
> Having grepped treewide for soft_restart, other than kexec there are no
> users for arm64. So surely kexec is the only case to cater for at the
> moment?

Yes, I think you're right, and so it seems we can drop this patch and
just have the 'Add checks for KVM' patch.

> > Another patch in this series '[PATCH 7/8] arm64/kexec: Add checks for
> > KVM' assures kexec cannot happen when KVM is configured.
> 
> It would be better to just move this earlier (or even better, implement
> kvm teardown).

Yes, I hope we don't really need to have any KVM work-arounds.

-Geoff

^ permalink raw reply	[flat|nested] 100+ messages in thread

* [PATCH V2 1/8] arm64: Fold proc-macros.S into assembler.h
  2015-01-17  0:23     ` Geoff Levand
@ 2015-01-27 19:33       ` Geoff Levand
  -1 siblings, 0 replies; 100+ messages in thread
From: Geoff Levand @ 2015-01-27 19:33 UTC (permalink / raw)
  To: linux-arm-kernel

To allow the assembler macros defined in arch/arm64/mm/proc-macros.S to be used
outside the mm code, move the contents of proc-macros.S to
arch/arm64/include/asm/assembler.h, delete proc-macros.S, and fix up all
references to proc-macros.S.

Signed-off-by: Geoff Levand <geoff@infradead.org>
---
 arch/arm64/include/asm/assembler.h | 37 +++++++++++++++++++++++++-
 arch/arm64/mm/cache.S              |  2 --
 arch/arm64/mm/proc-macros.S        | 54 --------------------------------------
 arch/arm64/mm/proc.S               |  2 --
 4 files changed, 36 insertions(+), 59 deletions(-)
 delete mode 100644 arch/arm64/mm/proc-macros.S

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index 5901480..80436d3 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -1,5 +1,5 @@
 /*
- * Based on arch/arm/include/asm/assembler.h
+ * Based on arch/arm/include/asm/assembler.h, arch/arm/mm/proc-macros.S
  *
  * Copyright (C) 1996-2000 Russell King
  * Copyright (C) 2012 ARM Ltd.
@@ -20,6 +20,7 @@
 #error "Only include this from assembly code"
 #endif
 
+#include <asm/asm-offsets.h>
 #include <asm/ptrace.h>
 #include <asm/thread_info.h>
 
@@ -155,3 +156,37 @@ lr	.req	x30		// link register
 #endif
 	orr	\rd, \lbits, \hbits, lsl #32
 	.endm
+
+/*
+ * vma_vm_mm - get mm pointer from vma pointer (vma->vm_mm)
+ */
+	.macro	vma_vm_mm, rd, rn
+	ldr	\rd, [\rn, #VMA_VM_MM]
+	.endm
+
+/*
+ * mmid - get context id from mm pointer (mm->context.id)
+ */
+	.macro	mmid, rd, rn
+	ldr	\rd, [\rn, #MM_CONTEXT_ID]
+	.endm
+
+/*
+ * dcache_line_size - get the minimum D-cache line size from the CTR register.
+ */
+	.macro	dcache_line_size, reg, tmp
+	mrs	\tmp, ctr_el0			// read CTR
+	ubfm	\tmp, \tmp, #16, #19		// cache line size encoding
+	mov	\reg, #4			// bytes per word
+	lsl	\reg, \reg, \tmp		// actual cache line size
+	.endm
+
+/*
+ * icache_line_size - get the minimum I-cache line size from the CTR register.
+ */
+	.macro	icache_line_size, reg, tmp
+	mrs	\tmp, ctr_el0			// read CTR
+	and	\tmp, \tmp, #0xf		// cache line size encoding
+	mov	\reg, #4			// bytes per word
+	lsl	\reg, \reg, \tmp		// actual cache line size
+	.endm
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index 2560e1e..2d7a67c 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -24,8 +24,6 @@
 #include <asm/cpufeature.h>
 #include <asm/alternative-asm.h>
 
-#include "proc-macros.S"
-
 /*
  *	__flush_dcache_all()
  *
diff --git a/arch/arm64/mm/proc-macros.S b/arch/arm64/mm/proc-macros.S
deleted file mode 100644
index 005d29e..0000000
--- a/arch/arm64/mm/proc-macros.S
+++ /dev/null
@@ -1,54 +0,0 @@
-/*
- * Based on arch/arm/mm/proc-macros.S
- *
- * Copyright (C) 2012 ARM Ltd.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program.  If not, see <http://www.gnu.org/licenses/>.
- */
-
-#include <asm/asm-offsets.h>
-#include <asm/thread_info.h>
-
-/*
- * vma_vm_mm - get mm pointer from vma pointer (vma->vm_mm)
- */
-	.macro	vma_vm_mm, rd, rn
-	ldr	\rd, [\rn, #VMA_VM_MM]
-	.endm
-
-/*
- * mmid - get context id from mm pointer (mm->context.id)
- */
-	.macro	mmid, rd, rn
-	ldr	\rd, [\rn, #MM_CONTEXT_ID]
-	.endm
-
-/*
- * dcache_line_size - get the minimum D-cache line size from the CTR register.
- */
-	.macro	dcache_line_size, reg, tmp
-	mrs	\tmp, ctr_el0			// read CTR
-	ubfm	\tmp, \tmp, #16, #19		// cache line size encoding
-	mov	\reg, #4			// bytes per word
-	lsl	\reg, \reg, \tmp		// actual cache line size
-	.endm
-
-/*
- * icache_line_size - get the minimum I-cache line size from the CTR register.
- */
-	.macro	icache_line_size, reg, tmp
-	mrs	\tmp, ctr_el0			// read CTR
-	and	\tmp, \tmp, #0xf		// cache line size encoding
-	mov	\reg, #4			// bytes per word
-	lsl	\reg, \reg, \tmp		// actual cache line size
-	.endm
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 28eebfb..fe69f6e 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -26,8 +26,6 @@
 #include <asm/pgtable-hwdef.h>
 #include <asm/pgtable.h>
 
-#include "proc-macros.S"
-
 #ifdef CONFIG_ARM64_64K_PAGES
 #define TCR_TG_FLAGS	TCR_TG0_64K | TCR_TG1_64K
 #else
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 100+ messages in thread

* [PATCH 7/8] arm64/kexec: Add checks for KVM
  2015-01-26 19:19       ` Mark Rutland
@ 2015-01-29  9:36         ` AKASHI Takahiro
  -1 siblings, 0 replies; 100+ messages in thread
From: AKASHI Takahiro @ 2015-01-29  9:36 UTC (permalink / raw)
  To: linux-arm-kernel

Hello,

On 01/27/2015 04:19 AM, Mark Rutland wrote:
> On Sat, Jan 17, 2015 at 12:23:34AM +0000, Geoff Levand wrote:
>> Add runtime checks that fail the arm64 kexec syscall for situations that would
>> result in system instability due to problems in the KVM kernel support.
>> These checks should be removed when the KVM problems are resolved.
>>
>> Signed-off-by: Geoff Levand <geoff@infradead.org>
>> ---
>>   arch/arm64/kernel/machine_kexec.c | 10 ++++++++++
>>   1 file changed, 10 insertions(+)
>>
>> diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
>> index 3d84759..a36459d 100644
>> --- a/arch/arm64/kernel/machine_kexec.c
>> +++ b/arch/arm64/kernel/machine_kexec.c
>> @@ -16,6 +16,9 @@
>>   #include <asm/cacheflush.h>
>>   #include <asm/system_misc.h>
>>
>> +/* TODO: Remove this include when KVM can support a kexec reboot. */
>> +#include <asm/virt.h>
>> +
>>   /* Global variables for the relocate_kernel routine. */
>>   extern const unsigned char relocate_new_kernel[];
>>   extern const unsigned long relocate_new_kernel_size;
>> @@ -100,6 +103,13 @@ int machine_kexec_prepare(struct kimage *image)
>>
>>   	kexec_image_info(image);
>>
>> +	/* TODO: Remove this message when KVM can support a kexec reboot. */
>> +	if (IS_ENABLED(CONFIG_KVM) && is_hyp_mode_available()) {
>> +		pr_err("%s: Your kernel is configured with KVM support (CONFIG_KVM=y) which currently does not allow for kexec re-boot.\n",
>> +			__func__);
>> +		return -ENOSYS;
>> +	}
>
> If you really don't want to implement KVM teardown, surely this should
> be at the start of the series, so we don't have a point in the middle
> where things may explode in this case?

I'm going to fix this KVM issue (teardown) in cooperation with Geoff.

Looking into the kvm init code, kvm_arch_init() in particular,
I guess that the teardown function (kvm_arch_exit()) should do:
   (reverting kvm_timer_hyp_init() per cpu)
- stop arch timer

   (reverting cpu_init_hyp_mode() per cpu)
- flush TLB
- jump into the identity mapping (using boot_hyp_pgd?)
- disable MMU?
- restore vbar_el2 to __hyp_stub_vectors (or NULL?)

   (reverting kvm_mmu_init())
- Do we need to free page tables and a bounce(trampoline) page?
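
As a rough pseudo-code sketch, the per-cpu part of that teardown could
be ordered along these lines (all function names below are invented for
illustration, not existing kernel API):

	static void cpu_hyp_teardown(void *unused)
	{
		kvm_timer_cpu_down();	/* stop the arch timer on this cpu */
		kvm_flush_hyp_tlbs();	/* flush the EL2 TLBs */

		/*
		 * Jump to code running in the identity mapping, disable
		 * the EL2 MMU, then point vbar_el2 back at
		 * __hyp_stub_vectors.
		 */
		kvm_hyp_reset(virt_to_phys(__hyp_stub_vectors));
	}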

Is this good enough for safely shutting down kvm?
Do I miss anything essential, or can I skip anything?

I really appreciate your comments.
-Takahiro AKASHI

> Mark.
>

^ permalink raw reply	[flat|nested] 100+ messages in thread

* [PATCH 7/8] arm64/kexec: Add checks for KVM
  2015-01-26 19:19       ` Mark Rutland
@ 2015-01-29  9:57         ` AKASHI Takahiro
  -1 siblings, 0 replies; 100+ messages in thread
From: AKASHI Takahiro @ 2015-01-29  9:57 UTC (permalink / raw)
  To: linux-arm-kernel

Hello,

On 01/27/2015 04:19 AM, Mark Rutland wrote:
> On Sat, Jan 17, 2015 at 12:23:34AM +0000, Geoff Levand wrote:
>> Add runtime checks that fail the arm64 kexec syscall for situations that would
>> result in system instability due to problems in the KVM kernel support.
>> These checks should be removed when the KVM problems are resolved.
>>
>> Signed-off-by: Geoff Levand <geoff@infradead.org>
>> ---
>>   arch/arm64/kernel/machine_kexec.c | 10 ++++++++++
>>   1 file changed, 10 insertions(+)
>>
>> diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
>> index 3d84759..a36459d 100644
>> --- a/arch/arm64/kernel/machine_kexec.c
>> +++ b/arch/arm64/kernel/machine_kexec.c
>> @@ -16,6 +16,9 @@
>>   #include <asm/cacheflush.h>
>>   #include <asm/system_misc.h>
>>
>> +/* TODO: Remove this include when KVM can support a kexec reboot. */
>> +#include <asm/virt.h>
>> +
>>   /* Global variables for the relocate_kernel routine. */
>>   extern const unsigned char relocate_new_kernel[];
>>   extern const unsigned long relocate_new_kernel_size;
>> @@ -100,6 +103,13 @@ int machine_kexec_prepare(struct kimage *image)
>>
>>   	kexec_image_info(image);
>>
>> +	/* TODO: Remove this message when KVM can support a kexec reboot. */
>> +	if (IS_ENABLED(CONFIG_KVM) && is_hyp_mode_available()) {
>> +		pr_err("%s: Your kernel is configured with KVM support (CONFIG_KVM=y) which currently does not allow for kexec re-boot.\n",
>> +			__func__);
>> +		return -ENOSYS;
>> +	}
>
> If you really don't want to implement KVM teardown, surely this should
> be at the start of the series, so we don't have a point in the middle
> where things may explode in this case?

I'm going to fix this KVM issue (teardown) in cooperation with Geoff.

Looking into the kvm init code, kvm_arch_init() in particular,
I guess the teardown function (kvm_arch_exit()) should do the following:
   (reverting kvm_timer_hyp_init() per cpu)
- stop the arch timer

   (reverting cpu_init_hyp_mode() per cpu)
- flush the TLB
- jump into the identity mapping (using boot_hyp_pgd?)
- disable the MMU?
- restore vbar_el2 to __hyp_stub_vectors (or NULL?)

   (reverting kvm_mmu_init())
- Do we need to free the page tables and the bounce (trampoline) page?

Is this good enough to safely shut down kvm?
Am I missing anything essential, or can I skip anything?

I really appreciate your comments.
-Takahiro AKASHI

> Mark.
>
> _______________________________________________
> linux-arm-kernel mailing list
> linux-arm-kernel@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
>


* [PATCH 7/8] arm64/kexec: Add checks for KVM
  2015-01-29  9:57         ` AKASHI Takahiro
@ 2015-01-29 10:59           ` Marc Zyngier
  -1 siblings, 0 replies; 100+ messages in thread
From: Marc Zyngier @ 2015-01-29 10:59 UTC (permalink / raw)
  To: linux-arm-kernel


On 29/01/15 09:57, AKASHI Takahiro wrote:
> Hello,
> 
> On 01/27/2015 04:19 AM, Mark Rutland wrote:
>> On Sat, Jan 17, 2015 at 12:23:34AM +0000, Geoff Levand wrote:
>>> Add runtime checks that fail the arm64 kexec syscall for situations that would
>>> result in system instability due to problems in the KVM kernel support.
>>> These checks should be removed when the KVM problems are resolved.
>>>
>>> Signed-off-by: Geoff Levand <geoff@infradead.org>
>>> ---
>>>   arch/arm64/kernel/machine_kexec.c | 10 ++++++++++
>>>   1 file changed, 10 insertions(+)
>>>
>>> diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
>>> index 3d84759..a36459d 100644
>>> --- a/arch/arm64/kernel/machine_kexec.c
>>> +++ b/arch/arm64/kernel/machine_kexec.c
>>> @@ -16,6 +16,9 @@
>>>   #include <asm/cacheflush.h>
>>>   #include <asm/system_misc.h>
>>>
>>> +/* TODO: Remove this include when KVM can support a kexec reboot. */
>>> +#include <asm/virt.h>
>>> +
>>>   /* Global variables for the relocate_kernel routine. */
>>>   extern const unsigned char relocate_new_kernel[];
>>>   extern const unsigned long relocate_new_kernel_size;
>>> @@ -100,6 +103,13 @@ int machine_kexec_prepare(struct kimage *image)
>>>
>>>   	kexec_image_info(image);
>>>
>>> +	/* TODO: Remove this message when KVM can support a kexec reboot. */
>>> +	if (IS_ENABLED(CONFIG_KVM) && is_hyp_mode_available()) {
>>> +		pr_err("%s: Your kernel is configured with KVM support (CONFIG_KVM=y) which currently does not allow for kexec re-boot.\n",
>>> +			__func__);
>>> +		return -ENOSYS;
>>> +	}
>>
>> If you really don't want to implement KVM teardown, surely this should
>> be at the start of the series, so we don't have a point in the middle
>> where things may explode in this case?
> 
> I'm going to fix this KVM issue (teardown) in cooperation with Geoff.
> 
> Looking into kvm init code, kvm_arch_init() in particular,
> I guess that teardown function (kvm_arch_exit()) should do
>    (reverting kvm_timer_hyp_init() per cpu)
> - stop arch timer

No need, there shouldn't be any guest running at that point.

>    (reverting cpu_init_hyp_mode() per cpu)
> - flush TLB
> - jump into the identity mapping (using boot_hyp_pgd?)
> - disable MMU?

Yes, and for that, you need to go back to an idmap

> - restore vbar_el2 to __hyp_stub_vectors (or NULL?)

It doesn't matter, you want to stay in HYP for the next kernel.

>    (reverting kvm_mmu_init())
> - Do we need to free page tables and a bounce(trampoline) page?

I don't think that's useful, you've killed the kernel already.

> Is this good enough for safely shutting down kvm?
> Am I missing anything essential, or can I skip anything?

I've outlined the steps a couple of days ago there:
http://www.spinics.net/lists/arm-kernel/msg395177.html

and the outline above should give you a nice list of things to look at.
All in all, it is pretty straightforward, provided that you're careful
enough (commit 5a677ce044f has most of the information; just read
it backwards... ;-).

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...


* [PATCH 7/8] arm64/kexec: Add checks for KVM
  2015-01-29 10:59           ` Marc Zyngier
@ 2015-01-29 18:47             ` Mark Rutland
  -1 siblings, 0 replies; 100+ messages in thread
From: Mark Rutland @ 2015-01-29 18:47 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, Jan 29, 2015 at 10:59:45AM +0000, Marc Zyngier wrote:
> 
> On 29/01/15 09:57, AKASHI Takahiro wrote:
> > Hello,
> > 
> > On 01/27/2015 04:19 AM, Mark Rutland wrote:
> >> On Sat, Jan 17, 2015 at 12:23:34AM +0000, Geoff Levand wrote:
> >>> Add runtime checks that fail the arm64 kexec syscall for situations that would
> >>> result in system instability due to problems in the KVM kernel support.
> >>> These checks should be removed when the KVM problems are resolved.
> >>>
> >>> Signed-off-by: Geoff Levand <geoff@infradead.org>
> >>> ---
> >>>   arch/arm64/kernel/machine_kexec.c | 10 ++++++++++
> >>>   1 file changed, 10 insertions(+)
> >>>
> >>> diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
> >>> index 3d84759..a36459d 100644
> >>> --- a/arch/arm64/kernel/machine_kexec.c
> >>> +++ b/arch/arm64/kernel/machine_kexec.c
> >>> @@ -16,6 +16,9 @@
> >>>   #include <asm/cacheflush.h>
> >>>   #include <asm/system_misc.h>
> >>>
> >>> +/* TODO: Remove this include when KVM can support a kexec reboot. */
> >>> +#include <asm/virt.h>
> >>> +
> >>>   /* Global variables for the relocate_kernel routine. */
> >>>   extern const unsigned char relocate_new_kernel[];
> >>>   extern const unsigned long relocate_new_kernel_size;
> >>> @@ -100,6 +103,13 @@ int machine_kexec_prepare(struct kimage *image)
> >>>
> >>>   	kexec_image_info(image);
> >>>
> >>> +	/* TODO: Remove this message when KVM can support a kexec reboot. */
> >>> +	if (IS_ENABLED(CONFIG_KVM) && is_hyp_mode_available()) {
> >>> +		pr_err("%s: Your kernel is configured with KVM support (CONFIG_KVM=y) which currently does not allow for kexec re-boot.\n",
> >>> +			__func__);
> >>> +		return -ENOSYS;
> >>> +	}
> >>
> >> If you really don't want to implement KVM teardown, surely this should
> >> be at the start of the series, so we don't have a point in the middle
> >> where things may explode in this case?
> > 
> > I'm going to fix this KVM issue (teardown) in cooperation with Geoff.
> > 
> > Looking into kvm init code, kvm_arch_init() in particular,
> > I guess that teardown function (kvm_arch_exit()) should do
> >    (reverting kvm_timer_hyp_init() per cpu)
> > - stop arch timer
> 
> No need, there shouldn't be any guest running at that point.
> 
> >    (reverting cpu_init_hyp_mode() per cpu)
> > - flush TLB
> >>> - jump into the identity mapping (using boot_hyp_pgd?)
> > - disable MMU?
> 
> Yes, and for that, you need to go back to an idmap
> 
> > - restore vbar_el2 to __hyp_stub_vectors (or NULL?)
> 
> It doesn't matter, you want to stay in HYP for the next kernel.

Well, it depends on how the teardown is implemented. I'd imagined we'd
have KVM tear itself down (restoring the hyp stub) as part of shutdown.
Later kexec/soft_restart would call the stub to get back to EL2.

That way kexec doesn't need to know anything about KVM, and it has one
path that works regardless of whether KVM is compiled into the kernel.

Mark.


* [PATCH 7/8] arm64/kexec: Add checks for KVM
  2015-01-29 18:47             ` Mark Rutland
@ 2015-01-30  6:10               ` AKASHI Takahiro
  -1 siblings, 0 replies; 100+ messages in thread
From: AKASHI Takahiro @ 2015-01-30  6:10 UTC (permalink / raw)
  To: linux-arm-kernel

Hello Marc, Mark

Thank you for your useful comments.
I need to look at Marc's commit (5a677ce044f) more carefully
regarding the trampoline code.

On 01/30/2015 03:47 AM, Mark Rutland wrote:
> On Thu, Jan 29, 2015 at 10:59:45AM +0000, Marc Zyngier wrote:
>>
>> On 29/01/15 09:57, AKASHI Takahiro wrote:
>>> Hello,
>>>
>>> On 01/27/2015 04:19 AM, Mark Rutland wrote:
>>>> On Sat, Jan 17, 2015 at 12:23:34AM +0000, Geoff Levand wrote:
>>>>> Add runtime checks that fail the arm64 kexec syscall for situations that would
>>>>> result in system instability due to problems in the KVM kernel support.
>>>>> These checks should be removed when the KVM problems are resolved.
>>>>>
>>>>> Signed-off-by: Geoff Levand <geoff@infradead.org>
>>>>> ---
>>>>>    arch/arm64/kernel/machine_kexec.c | 10 ++++++++++
>>>>>    1 file changed, 10 insertions(+)
>>>>>
>>>>> diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
>>>>> index 3d84759..a36459d 100644
>>>>> --- a/arch/arm64/kernel/machine_kexec.c
>>>>> +++ b/arch/arm64/kernel/machine_kexec.c
>>>>> @@ -16,6 +16,9 @@
>>>>>    #include <asm/cacheflush.h>
>>>>>    #include <asm/system_misc.h>
>>>>>
>>>>> +/* TODO: Remove this include when KVM can support a kexec reboot. */
>>>>> +#include <asm/virt.h>
>>>>> +
>>>>>    /* Global variables for the relocate_kernel routine. */
>>>>>    extern const unsigned char relocate_new_kernel[];
>>>>>    extern const unsigned long relocate_new_kernel_size;
>>>>> @@ -100,6 +103,13 @@ int machine_kexec_prepare(struct kimage *image)
>>>>>
>>>>>    	kexec_image_info(image);
>>>>>
>>>>> +	/* TODO: Remove this message when KVM can support a kexec reboot. */
>>>>> +	if (IS_ENABLED(CONFIG_KVM) && is_hyp_mode_available()) {
>>>>> +		pr_err("%s: Your kernel is configured with KVM support (CONFIG_KVM=y) which currently does not allow for kexec re-boot.\n",
>>>>> +			__func__);
>>>>> +		return -ENOSYS;
>>>>> +	}
>>>>
>>>> If you really don't want to implement KVM teardown, surely this should
>>>> be at the start of the series, so we don't have a point in the middle
>>>> where things may explode in this case?
>>>
>>> I'm going to fix this KVM issue (teardown) in cooperation with Geoff.
>>>
>>> Looking into kvm init code, kvm_arch_init() in particular,
>>> I guess that teardown function (kvm_arch_exit()) should do
>>>     (reverting kvm_timer_hyp_init() per cpu)
>>> - stop arch timer
>>
>> No need, there shouldn't be any guest running at that point.
>>
>>>     (reverting cpu_init_hyp_mode() per cpu)
>>> - flush TLB
>>> - jump into the identity mapping (using boot_hyp_pgd?)
>>> - disable MMU?
>>
>> Yes, and for that, you need to go back to an idmap
>>
>>> - restore vbar_el2 to __hyp_stub_vectors (or NULL?)
>>
>> It doesn't matter, you want to stay in HYP for the next kernel.
>
> Well, it depends on how the teardown is implemented. I'd imagined we'd
> have KVM tear itself down (restoring the hyp stub) as part of shut down.
> Later kexec/soft_restart would call the stub to get back to EL2.
>
> That way kexec doesn't need to know anything about KVM, and it has one
> path that works regardless of whether KVM is compiled into the kernel.

Initially, I thought that we would define kvm_arch_exit() and call it
somewhere in the middle of the kexec path (no firm idea where yet).
But Geoff suggested that I implement a new hvc call, HVC_CPU_SHUTDOWN(??),
and have it called via cpu_notifier(CPU_DYING_FROZEN), initiated by
machine_shutdown() from kernel_kexec().
(As you know, a hook is already there for cpu online in kvm/arm.c.)

Is this the right place to put the teardown function?
(I'm not sure whether this hook will also be called on the boot cpu.)

Thanks,
-Takahiro AKASHI



> Mark.
>


* [PATCH 7/8] arm64/kexec: Add checks for KVM
  2015-01-30  6:10               ` AKASHI Takahiro
@ 2015-01-30 12:14                 ` Mark Rutland
  -1 siblings, 0 replies; 100+ messages in thread
From: Mark Rutland @ 2015-01-30 12:14 UTC (permalink / raw)
  To: linux-arm-kernel

On Fri, Jan 30, 2015 at 06:10:53AM +0000, AKASHI Takahiro wrote:
> Hello Marc, Mark
> 
> Thank you for your useful comments.
> I need to look at Marc's commit(5a677ce044f) more carefully
> about trampoline code.
> 
> On 01/30/2015 03:47 AM, Mark Rutland wrote:
> > On Thu, Jan 29, 2015 at 10:59:45AM +0000, Marc Zyngier wrote:
> >>
> >> On 29/01/15 09:57, AKASHI Takahiro wrote:
> >>> Hello,
> >>>
> >>> On 01/27/2015 04:19 AM, Mark Rutland wrote:
> >>>> On Sat, Jan 17, 2015 at 12:23:34AM +0000, Geoff Levand wrote:
> >>>>> Add runtime checks that fail the arm64 kexec syscall for situations that would
> >>>>> result in system instability due to problems in the KVM kernel support.
> >>>>> These checks should be removed when the KVM problems are resolved.
> >>>>>
> >>>>> Signed-off-by: Geoff Levand <geoff@infradead.org>
> >>>>> ---
> >>>>>    arch/arm64/kernel/machine_kexec.c | 10 ++++++++++
> >>>>>    1 file changed, 10 insertions(+)
> >>>>>
> >>>>> diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
> >>>>> index 3d84759..a36459d 100644
> >>>>> --- a/arch/arm64/kernel/machine_kexec.c
> >>>>> +++ b/arch/arm64/kernel/machine_kexec.c
> >>>>> @@ -16,6 +16,9 @@
> >>>>>    #include <asm/cacheflush.h>
> >>>>>    #include <asm/system_misc.h>
> >>>>>
> >>>>> +/* TODO: Remove this include when KVM can support a kexec reboot. */
> >>>>> +#include <asm/virt.h>
> >>>>> +
> >>>>>    /* Global variables for the relocate_kernel routine. */
> >>>>>    extern const unsigned char relocate_new_kernel[];
> >>>>>    extern const unsigned long relocate_new_kernel_size;
> >>>>> @@ -100,6 +103,13 @@ int machine_kexec_prepare(struct kimage *image)
> >>>>>
> >>>>>    	kexec_image_info(image);
> >>>>>
> >>>>> +	/* TODO: Remove this message when KVM can support a kexec reboot. */
> >>>>> +	if (IS_ENABLED(CONFIG_KVM) && is_hyp_mode_available()) {
> >>>>> +		pr_err("%s: Your kernel is configured with KVM support (CONFIG_KVM=y) which currently does not allow for kexec re-boot.\n",
> >>>>> +			__func__);
> >>>>> +		return -ENOSYS;
> >>>>> +	}
> >>>>
> >>>> If you really don't want to implement KVM teardown, surely this should
> >>>> be at the start of the series, so we don't have a point in the middle
> >>>> where things may explode in this case?
> >>>
> >>> I'm going to fix this KVM issue (teardown) in cooperation with Geoff.
> >>>
> >>> Looking into kvm init code, kvm_arch_init() in particular,
> >>> I guess that teardown function (kvm_arch_exit()) should do
> >>>     (reverting kvm_timer_hyp_init() per cpu)
> >>> - stop arch timer
> >>
> >> No need, there shouldn't be any guest running at that point.
> >>
> >>>     (reverting cpu_init_hyp_mode() per cpu)
> >>> - flush TLB
> >>> - jump into the identity mapping (using boot_hyp_pgd?)
> >>> - disable MMU?
> >>
> >> Yes, and for that, you need to go back to an idmap
> >>
> >>> - restore vbar_el2 to __hyp_stub_vectors (or NULL?)
> >>
> >> It doesn't matter, you want to stay in HYP for the next kernel.
> >
> > Well, it depends on how the teardown is implemented. I'd imagined we'd
> > have KVM tear itself down (restoring the hyp stub) as part of shut down.
> > Later kexec/soft_restart would call the stub to get back to EL2.
> >
> > That way kexec doesn't need to know anything about KVM, and it has one
> > path that works regardless of whether KVM is compiled into the kernel.
> 
> Initially, I thought that we would define kvm_arch_exit() and call it
> somewhere in the middle of kexec path (no idea yet).
> But Geoff suggested that I implement a new hvc call, HVC_CPU_SHUTDOWN(??),
> and have it called via cpu_notifier(CPU_DYING_FROZEN) initiated by
> machine_shutdown() from kernel_kexec().
> (As you know, a hook is already there for cpu online in kvm/arm.c.)

I think it would make far more sense to have the KVM teardown entirely
contained within KVM (e.g. in kvm_arch_exit). That way there's no need
for an interdependency between KVM and kexec, and we'd be a step closer
to KVM as a module.

I'm not keen on having KVM teardown in the middle of the kexec path.

Mark.

^ permalink raw reply	[flat|nested] 100+ messages in thread

* [PATCH 7/8] arm64/kexec: Add checks for KVM
  2015-01-30  6:10               ` AKASHI Takahiro
@ 2015-01-30 19:48                 ` Geoff Levand
  -1 siblings, 0 replies; 100+ messages in thread
From: Geoff Levand @ 2015-01-30 19:48 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Takahiro.

On Fri, 2015-01-30 at 15:10 +0900, AKASHI Takahiro wrote:
> Initially, I thought that we would define kvm_arch_exit() and call it
> somewhere in the middle of kexec path (no idea yet).
> But Geoff suggested that I implement a new hvc call, HVC_CPU_SHUTDOWN(??),
> and have it called via cpu_notifier(CPU_DYING_FROZEN) initiated by
> machine_shutdown() from kernel_kexec().

As an initial implementation we can hook into the CPU_DYING_FROZEN
notifier sent to hyp_init_cpu_notify().  The longer term solution
should use kvm_arch_hardware_enable() and kvm_arch_hardware_disable().

The calls to cpu_notifier(CPU_DYING_FROZEN) are part of cpu hot
plug, and independent of kexec.  If someone were to add spin-table
cpu un-plug, then it would be used for that also.  It seems we should
be able to test without kexec by using cpu hot plug.

To tear down KVM you need to get back to hyp mode, and hence
the need for HVC_CPU_SHUTDOWN.  The sequence I envisioned would
be like this:

cpu_notifier(CPU_DYING_FROZEN)
 -> kvm_cpu_shutdown() 
    prepare for hvc
    -> HVC_CPU_SHUTDOWN
       now in hyp mode, do KVM tear down, restore default exception vectors

Once the default exception vectors are restored soft_restart()
can then execute the cpu_reset routine in EL2.

Some notes are here for those with access:  https://cards.linaro.org/browse/KWG-611

-Geoff

^ permalink raw reply	[flat|nested] 100+ messages in thread

* [PATCH 4/8] arm64: Add EL2 switch to soft_restart
  2015-01-27 17:57       ` Catalin Marinas
@ 2015-01-30 21:47         ` Geoff Levand
  -1 siblings, 0 replies; 100+ messages in thread
From: Geoff Levand @ 2015-01-30 21:47 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Catalin,

On Tue, 2015-01-27 at 17:57 +0000, Catalin Marinas wrote:
> On Sat, Jan 17, 2015 at 12:23:34AM +0000, Geoff Levand wrote:
> >  ENTRY(cpu_reset)
> > -	mrs	x1, sctlr_el1
> > -	bic	x1, x1, #1
> > -	msr	sctlr_el1, x1			// disable the MMU
> > +	mrs	x2, sctlr_el1
> > +	bic	x2, x2, #1
> > +	msr	sctlr_el1, x2			// disable the MMU
> >  	isb
> > -	ret	x0
> > +
> > +	cbz	x0, 1f				// el2_switch?
> > +	mov	x0, x1
> > +	mov	x1, xzr
> > +	mov	x2, xzr
> > +	mov	x3, xzr
> > +	hvc	#HVC_CALL_FUNC			// no return
> 
> If that's the only user of HVC_CALL_FUNC, why do we bother with
> arguments, calling convention?

It was intended that HVC_CALL_FUNC be a mechanism to call a generic
function in hyp mode.  cpu_reset() is the only user of it now.

As Mark explained in another post, the use of x18 by the hyp stub
is to avoid the complication of setting up a stack for EL2.  We
thought this solution was acceptable since there are relatively
few HVC_CALL_FUNC calls and we could assure they would do the
right thing.

-Geoff 

^ permalink raw reply	[flat|nested] 100+ messages in thread

* [PATCH 3/8] arm64: Add new hcall HVC_CALL_FUNC
  2015-01-27 17:39       ` Catalin Marinas
@ 2015-01-30 21:52         ` Geoff Levand
  -1 siblings, 0 replies; 100+ messages in thread
From: Geoff Levand @ 2015-01-30 21:52 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Catalin,


On Tue, 2015-01-27 at 17:39 +0000, Catalin Marinas wrote:
> On Sat, Jan 17, 2015 at 12:23:34AM +0000, Geoff Levand wrote:
> > diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
> > index 99c319c..4f23a48 100644
> > --- a/arch/arm64/include/asm/virt.h
> > +++ b/arch/arm64/include/asm/virt.h
> > @@ -41,6 +41,19 @@
> >  
> >  #define HVC_CALL_HYP 3
> >  
> > +/*
> > + * HVC_CALL_FUNC - Execute a function at EL2.
> > + *
> > + * @x0: Physical address of the function to be executed.
> > + * @x1: Passed as the first argument to the function.
> > + * @x2: Passed as the second argument to the function.
> > + * @x3: Passed as the third argument to the function.
> > + *
> > + * The called function must preserve the contents of register x18.
> 
> Can you pick a register that's normally callee saved?

Mark covered this in his reply.

> > + */
> > +
> > +#define HVC_CALL_FUNC 4
> > +
> >  #ifndef __ASSEMBLY__
> >  
> >  /*
> > diff --git a/arch/arm64/kernel/hyp-stub.S b/arch/arm64/kernel/hyp-stub.S
> > index e3db3fd..b5d36e7 100644
> > --- a/arch/arm64/kernel/hyp-stub.S
> > +++ b/arch/arm64/kernel/hyp-stub.S
> > @@ -66,9 +66,20 @@ el1_sync:
> >  	mrs	x0, vbar_el2
> >  	b	2f
> >  
> > -1:	cmp	x18, #HVC_SET_VECTORS
> > -	b.ne	2f
> > -	msr	vbar_el2, x0
> > +1:	cmp     x18, #HVC_SET_VECTORS
> 
> This line doesn't seem to have any change, apart from some whitespace.
> Or did you want to drop the label?

Some whitespace problems got in there from so many rebases.  I went
through the series and cleaned them up.


> > +	b.ne    1f
> > +	msr     vbar_el2, x0
> > +	b       2f
> > +
> > +1:	cmp     x18, #HVC_CALL_FUNC
> > +	b.ne    2f
> > +	mov     x18, lr
> > +	mov     lr, x0
> > +	mov     x0, x1
> > +	mov     x1, x2
> > +	mov     x2, x3
> > +	blr     lr
> > +	mov     lr, x18
> >  
> >  2:	eret
> >  ENDPROC(el1_sync)
> 
> What is the calling convention for this HVC? You mentioned x18 above but
> what about other registers that the called function may corrupt (x18 is
> a temporary register, so it's not expected to be callee saved).

Again, Mark covered this in his reply.

^ permalink raw reply	[flat|nested] 100+ messages in thread

* [PATCH 2/8] arm64: Convert hcalls to use ISS field
  2015-01-26 18:26       ` Catalin Marinas
@ 2015-01-30 23:31         ` Geoff Levand
  -1 siblings, 0 replies; 100+ messages in thread
From: Geoff Levand @ 2015-01-30 23:31 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, 2015-01-26 at 18:26 +0000, Catalin Marinas wrote:
> On Sat, Jan 17, 2015 at 12:23:34AM +0000, Geoff Levand wrote:
> > To allow for additional hcalls to be defined and to make the arm64 hcall API
> > more consistent across exception vector routines, change the hcall implementations
> > to use the ISS field of the ESR_EL2 register to specify the hcall type.
> > 
> > The existing arm64 hcall implementations are limited in that they only allow
> > for two distinct hcalls; with the x0 register either zero, or not zero.  Also,
> > the API of the hyp-stub exception vector routines and the KVM exception vector
> > routines differ; hyp-stub uses a non-zero value in x0 to implement
> > __hyp_set_vectors, whereas KVM uses it to implement kvm_call_hyp.
> > 
> > Define three new preprocessor macros HVC_GET_VECTORS, HVC_SET_VECTORS and
> > HVC_CALL_HYP to be used as hcall type specifiers and convert the
> > existing __hyp_get_vectors(), __hyp_set_vectors() and kvm_call_hyp() routines
> > to use these new macros when executing an HVC call.  Also change the
> > corresponding hyp-stub and KVM el1_sync exception vector routines to use these
> > new macros.
> > 
> > Signed-off-by: Geoff Levand <geoff@infradead.org>
> 
> Using the #imm value for HVC to separate what gets called looks fine to
> me. However, I'd like to see a review from Marc/Christoffer on this
> patch.

Marc, Christoffer, comments please?

> Some comments below:
> 
> > diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
> > index 7a5df52..99c319c 100644
> > --- a/arch/arm64/include/asm/virt.h
> > +++ b/arch/arm64/include/asm/virt.h
> > @@ -21,6 +21,26 @@
> >  #define BOOT_CPU_MODE_EL1	(0xe11)
> >  #define BOOT_CPU_MODE_EL2	(0xe12)
> >  
> > +/*
> > + * HVC_GET_VECTORS - Return the value of the vbar_el2 register.
> > + */
> > +
> > +#define HVC_GET_VECTORS 1
> > +
> > +/*
> > + * HVC_SET_VECTORS - Set the value of the vbar_el2 register.
> > + *
> > + * @x0: Physical address of the new vector table.
> > + */
> > +
> > +#define HVC_SET_VECTORS 2
> > +
> > +/*
> > + * HVC_CALL_HYP - Execute a hyp routine.
> > + */
> > +
> > +#define HVC_CALL_HYP 3
> 
> I think you can ignore this case (make it the default), just define it
> as 0 as that's the normal use-case after initialisation and avoid
> checking it explicitly.

OK, I changed this so that HVC_CALL_HYP is the default at 0.

> >  /*
> > diff --git a/arch/arm64/kernel/hyp-stub.S b/arch/arm64/kernel/hyp-stub.S
> > index a272f33..e3db3fd 100644
> > --- a/arch/arm64/kernel/hyp-stub.S
> > +++ b/arch/arm64/kernel/hyp-stub.S
> > @@ -22,6 +22,7 @@
> >  #include <linux/irqchip/arm-gic-v3.h>
> >  
> >  #include <asm/assembler.h>
> > +#include <asm/kvm_arm.h>
> >  #include <asm/ptrace.h>
> >  #include <asm/virt.h>
> >  
> > @@ -53,14 +54,22 @@ ENDPROC(__hyp_stub_vectors)
> >  	.align 11
> >  
> >  el1_sync:
> > -	mrs	x1, esr_el2
> > -	lsr	x1, x1, #26
> > -	cmp	x1, #0x16
> > -	b.ne	2f				// Not an HVC trap
> > -	cbz	x0, 1f
> > -	msr	vbar_el2, x0			// Set vbar_el2
> > +	mrs	x18, esr_el2
> > +	lsr	x17, x18, #ESR_ELx_EC_SHIFT
> > +	and	x18, x18, #ESR_ELx_ISS_MASK
> > +
> > +	cmp     x17, #ESR_ELx_EC_HVC64
> > +	b.ne    2f				// Not an HVC trap
> > +
> > +	cmp	x18, #HVC_GET_VECTORS
> > +	b.ne	1f
> > +	mrs	x0, vbar_el2
> >  	b	2f
> > -1:	mrs	x0, vbar_el2			// Return vbar_el2
> > +
> > +1:	cmp	x18, #HVC_SET_VECTORS
> > +	b.ne	2f
> > +	msr	vbar_el2, x0
> > +
> >  2:	eret
> >  ENDPROC(el1_sync)
> 
> You seem to be using x17 and x18 here freely. Do you have any guarantees
> that the caller saved/restored those registers? I guess you assume they
> are temporary registers and the caller first branches to a function
> (like __kvm_hyp_call) and expects them to be corrupted. But I'm not sure
> that's always the case. Take for example the __invoke_psci_fn_hvc where
> the function is in C (we should change this for other reasons).

Yes, I assume the compiler will not expect them to be preserved.  I
missed __invoke_psci_fn_hvc.  Can we just add x17 and x18 to the
clobbered list?

        asm volatile(
                        __asmeq("%0", "x0")
                        __asmeq("%1", "x1")
                        __asmeq("%2", "x2")
                        __asmeq("%3", "x3")
                        "hvc    #0\n"
                : "+r" (function_id)
-               : "r" (arg0), "r" (arg1), "r" (arg2));
+               : "r" (arg0), "r" (arg1), "r" (arg2)
+               : "x17", "x18");


> > diff --git a/arch/arm64/kvm/hyp.S b/arch/arm64/kvm/hyp.S
> > index c0d8202..1916c89 100644
> > --- a/arch/arm64/kvm/hyp.S
> > +++ b/arch/arm64/kvm/hyp.S
> > @@ -27,6 +27,7 @@
> >  #include <asm/kvm_asm.h>
> >  #include <asm/kvm_mmu.h>
> >  #include <asm/memory.h>
> > +#include <asm/virt.h>
> >  
> >  #define CPU_GP_REG_OFFSET(x)	(CPU_GP_REGS + x)
> >  #define CPU_XREG_OFFSET(x)	CPU_GP_REG_OFFSET(CPU_USER_PT_REGS + 8*x)
> > @@ -1106,12 +1107,9 @@ __hyp_panic_str:
> >   * in Hyp mode (see init_hyp_mode in arch/arm/kvm/arm.c).  Return values are
> >   * passed in r0 and r1.
> >   *
> > - * A function pointer with a value of 0 has a special meaning, and is
> > - * used to implement __hyp_get_vectors in the same way as in
> > - * arch/arm64/kernel/hyp_stub.S.
> >   */
> >  ENTRY(kvm_call_hyp)
> > -	hvc	#0
> > +	hvc	#HVC_CALL_HYP
> >  	ret
> >  ENDPROC(kvm_call_hyp)
> >  
> > @@ -1142,6 +1140,7 @@ el1_sync:					// Guest trapped into EL2
> >  
> >  	mrs	x1, esr_el2
> >  	lsr	x2, x1, #ESR_ELx_EC_SHIFT
> > +	and	x0, x1, #ESR_ELx_ISS_MASK
> >  
> >  	cmp	x2, #ESR_ELx_EC_HVC64
> >  	b.ne	el1_trap
> > @@ -1150,15 +1149,19 @@ el1_sync:					// Guest trapped into EL2
> >  	cbnz	x3, el1_trap			// called HVC
> >  
> >  	/* Here, we're pretty sure the host called HVC. */
> > +	mov	x18, x0
> 
> Same comment here about corrupting x18. If it is safe, maybe add some
> comments in the calling place.

I added a comment regarding this to virt.h where the HVC_XXX macros
are defined.  I'll post that fixed up patch for review.

> 
> >  	pop	x2, x3
> >  	pop	x0, x1
> >  
> > -	/* Check for __hyp_get_vectors */
> > -	cbnz	x0, 1f
> > +	cmp	x18, #HVC_GET_VECTORS
> > +	b.ne	1f
> >  	mrs	x0, vbar_el2
> >  	b	2f
> >  
> > -1:	push	lr, xzr
> > +1:	cmp	x18, #HVC_CALL_HYP
> > +	b.ne	2f
> > +
> > +	push	lr, xzr
> 
> At this point, we expect either HVC_GET_VECTORS or HVC_CALL_HYP. I think
> you can simply assume HVC_CALL_HYP as default and ignore the additional
> cmp.

OK, did that.

-Geoff

^ permalink raw reply	[flat|nested] 100+ messages in thread

* [PATCH v2 2/8] arm64: Convert hcalls to use ISS field
  2015-01-17  0:23     ` Geoff Levand
@ 2015-01-30 23:33       ` Geoff Levand
  -1 siblings, 0 replies; 100+ messages in thread
From: Geoff Levand @ 2015-01-30 23:33 UTC (permalink / raw)
  To: linux-arm-kernel

To allow for additional hcalls to be defined and to make the arm64 hcall API
more consistent across exception vector routines, change the hcall implementations
to use the ISS field of the ESR_EL2 register to specify the hcall type.

The existing arm64 hcall implementations are limited in that they only allow
for two distinct hcalls; with the x0 register either zero, or not zero.  Also,
the API of the hyp-stub exception vector routines and the KVM exception vector
routines differ; hyp-stub uses a non-zero value in x0 to implement
__hyp_set_vectors, whereas KVM uses it to implement kvm_call_hyp.

Define three new preprocessor macros HVC_CALL_HYP, HVC_GET_VECTORS, and
HVC_SET_VECTORS to be used as hcall type specifiers and convert the
existing __hyp_get_vectors(), __hyp_set_vectors() and kvm_call_hyp() routines
to use these new macros when executing an HVC call.  Also change the
corresponding hyp-stub and KVM el1_sync exception vector routines to use these
new macros.

Signed-off-by: Geoff Levand <geoff@infradead.org>
---
 arch/arm64/include/asm/virt.h | 27 +++++++++++++++++++++++++++
 arch/arm64/kernel/hyp-stub.S  | 32 +++++++++++++++++++++-----------
 arch/arm64/kernel/psci.c      |  3 ++-
 arch/arm64/kvm/hyp.S          | 16 +++++++++-------
 4 files changed, 59 insertions(+), 19 deletions(-)

diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
index 7a5df52..eb10368 100644
--- a/arch/arm64/include/asm/virt.h
+++ b/arch/arm64/include/asm/virt.h
@@ -18,6 +18,33 @@
 #ifndef __ASM__VIRT_H
 #define __ASM__VIRT_H
 
+/*
+ * The arm64 hcall implementation uses the ISS field of the ESR_EL2 register to
+ * specify the hcall type.  The exception handlers are allowed to use registers
+ * x17 and x18 in their implementation.  Any routine issuing an hcall must not
+ * expect these registers to be preserved.
+ */
+
+/*
+ * HVC_CALL_HYP - Execute a hyp routine.
+ */
+
+#define HVC_CALL_HYP 0
+
+/*
+ * HVC_GET_VECTORS - Return the value of the vbar_el2 register.
+ */
+
+#define HVC_GET_VECTORS 1
+
+/*
+ * HVC_SET_VECTORS - Set the value of the vbar_el2 register.
+ *
+ * @x0: Physical address of the new vector table.
+ */
+
+#define HVC_SET_VECTORS 2
+
 #define BOOT_CPU_MODE_EL1	(0xe11)
 #define BOOT_CPU_MODE_EL2	(0xe12)
 
diff --git a/arch/arm64/kernel/hyp-stub.S b/arch/arm64/kernel/hyp-stub.S
index a272f33..017ab519 100644
--- a/arch/arm64/kernel/hyp-stub.S
+++ b/arch/arm64/kernel/hyp-stub.S
@@ -22,6 +22,7 @@
 #include <linux/irqchip/arm-gic-v3.h>
 
 #include <asm/assembler.h>
+#include <asm/kvm_arm.h>
 #include <asm/ptrace.h>
 #include <asm/virt.h>
 
@@ -53,14 +54,22 @@ ENDPROC(__hyp_stub_vectors)
 	.align 11
 
 el1_sync:
-	mrs	x1, esr_el2
-	lsr	x1, x1, #26
-	cmp	x1, #0x16
+	mrs	x18, esr_el2
+	lsr	x17, x18, #ESR_ELx_EC_SHIFT
+	and	x18, x18, #ESR_ELx_ISS_MASK
+
+	cmp	x17, #ESR_ELx_EC_HVC64
 	b.ne	2f				// Not an HVC trap
-	cbz	x0, 1f
-	msr	vbar_el2, x0			// Set vbar_el2
+
+	cmp	x18, #HVC_GET_VECTORS
+	b.ne	1f
+	mrs	x0, vbar_el2
 	b	2f
-1:	mrs	x0, vbar_el2			// Return vbar_el2
+
+1:	cmp	x18, #HVC_SET_VECTORS
+	b.ne	2f
+	msr	vbar_el2, x0
+
 2:	eret
 ENDPROC(el1_sync)
 
@@ -100,11 +109,12 @@ ENDPROC(\label)
  * initialisation entry point.
  */
 
-ENTRY(__hyp_get_vectors)
-	mov	x0, xzr
-	// fall through
 ENTRY(__hyp_set_vectors)
-	hvc	#0
+	hvc	#HVC_SET_VECTORS
 	ret
-ENDPROC(__hyp_get_vectors)
 ENDPROC(__hyp_set_vectors)
+
+ENTRY(__hyp_get_vectors)
+	hvc	#HVC_GET_VECTORS
+	ret
+ENDPROC(__hyp_get_vectors)
diff --git a/arch/arm64/kernel/psci.c b/arch/arm64/kernel/psci.c
index 3425f31..7043fd7 100644
--- a/arch/arm64/kernel/psci.c
+++ b/arch/arm64/kernel/psci.c
@@ -123,7 +123,8 @@ static noinline int __invoke_psci_fn_hvc(u64 function_id, u64 arg0, u64 arg1,
 			__asmeq("%3", "x3")
 			"hvc	#0\n"
 		: "+r" (function_id)
-		: "r" (arg0), "r" (arg1), "r" (arg2));
+		: "r" (arg0), "r" (arg1), "r" (arg2)
+		: "x17", "x18");
 
 	return function_id;
 }
diff --git a/arch/arm64/kvm/hyp.S b/arch/arm64/kvm/hyp.S
index c0d8202..42c9851 100644
--- a/arch/arm64/kvm/hyp.S
+++ b/arch/arm64/kvm/hyp.S
@@ -27,6 +27,7 @@
 #include <asm/kvm_asm.h>
 #include <asm/kvm_mmu.h>
 #include <asm/memory.h>
+#include <asm/virt.h>
 
 #define CPU_GP_REG_OFFSET(x)	(CPU_GP_REGS + x)
 #define CPU_XREG_OFFSET(x)	CPU_GP_REG_OFFSET(CPU_USER_PT_REGS + 8*x)
@@ -1106,12 +1107,9 @@ __hyp_panic_str:
  * in Hyp mode (see init_hyp_mode in arch/arm/kvm/arm.c).  Return values are
  * passed in r0 and r1.
  *
- * A function pointer with a value of 0 has a special meaning, and is
- * used to implement __hyp_get_vectors in the same way as in
- * arch/arm64/kernel/hyp_stub.S.
  */
 ENTRY(kvm_call_hyp)
-	hvc	#0
+	hvc	#HVC_CALL_HYP
 	ret
 ENDPROC(kvm_call_hyp)
 
@@ -1142,6 +1140,7 @@ el1_sync:					// Guest trapped into EL2
 
 	mrs	x1, esr_el2
 	lsr	x2, x1, #ESR_ELx_EC_SHIFT
+	and	x0, x1, #ESR_ELx_ISS_MASK
 
 	cmp	x2, #ESR_ELx_EC_HVC64
 	b.ne	el1_trap
@@ -1150,15 +1149,18 @@ el1_sync:					// Guest trapped into EL2
 	cbnz	x3, el1_trap			// called HVC
 
 	/* Here, we're pretty sure the host called HVC. */
+	mov	x18, x0
 	pop	x2, x3
 	pop	x0, x1
 
-	/* Check for __hyp_get_vectors */
-	cbnz	x0, 1f
+	cmp	x18, #HVC_GET_VECTORS
+	b.ne	1f
 	mrs	x0, vbar_el2
 	b	2f
 
-1:	push	lr, xzr
+1:	/* Default to HVC_CALL_HYP. */
+
+	push	lr, xzr
 
 	/*
 	 * Compute the function address in EL2, and shuffle the parameters.
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 100+ messages in thread

* [PATCH 7/8] arm64/kexec: Add checks for KVM
  2015-01-30 19:48                 ` Geoff Levand
@ 2015-02-02  8:18                   ` AKASHI Takahiro
  -1 siblings, 0 replies; 100+ messages in thread
From: AKASHI Takahiro @ 2015-02-02  8:18 UTC (permalink / raw)
  To: linux-arm-kernel

Geoff,

On 01/31/2015 04:48 AM, Geoff Levand wrote:
> Hi Takahiro.
>
> On Fri, 2015-01-30 at 15:10 +0900, AKASHI Takahiro wrote:
>> Initially, I thought that we would define kvm_arch_exit() and call it
>> somewhere in the middle of kexec path (no idea yet).
>> But Geoff suggested me to implement a new hvc call, HVC_CPU_SHUTDOWN(??),
>> and make it called via cpu_notifier(CPU_DYING_FROZEN) initiated by
>> machine_shutdown() from kernel_kexec().
>
> As an initial implementation we can hook into the CPU_DYING_FROZEN
> notifier sent to hyp_init_cpu_notify().  The longer term solution
> should use kvm_arch_hardware_enable() and kvm_arch_hardware_disable().

Are these two different approaches? I mean,
kexec will initiate cpu hotplug:
kernel_kexec() -> machine_shutdown() -> disable_nonboot_cpus()
    -> _cpu_down() -> cpu_notify_nofail(CPU_DEAD|...)

On the other hand, kvm already has a hook into kvm_arch_hardware_disable():
   (ignoring kvm_usage_count here)
kvm_cpu_hotplug(CPU_DYING) -> hardware_disable()
    -> hardware_disable_nolock() -> kvm_arch_hardware_disable()

So it seems that we don't have to add a new hook at hyp_init_cpu_notify()
if kvm_arch_hardware_disable() is properly implemented.
disable_nonboot_cpus() will not invoke cpu hotplug on the *boot* cpu, however,
and we should handle it in a separate way.

Do I misunderstand anything here?


-Takahiro AKASHI

> The calls to cpu_notifier(CPU_DYING_FROZEN) are part of cpu hot
> plug, and independent of kexec.  If someone were to add spin-table
> cpu un-plug, then it would be used for that also.  It seems we should
> be able to test without kexec by using cpu hot plug.
>
> To tear down KVM you need to get back to hyp mode, and hence
> the need for HVC_CPU_SHUTDOWN.  The sequence I envisioned would
> be like this:
>
> cpu_notifier(CPU_DYING_FROZEN)
>   -> kvm_cpu_shutdown()
>      prepare for hvc
>      -> HVC_CPU_SHUTDOWN
>         now in hyp mode, do KVM tear down, restore default exception vectors
>
> Once the default exception vectors are restored soft_restart()
> can then execute the cpu_reset routine in EL2.
>
> Some notes are here for those with access:  https://cards.linaro.org/browse/KWG-611
>
> -Geoff
>

^ permalink raw reply	[flat|nested] 100+ messages in thread

* [PATCH 2/8] arm64: Convert hcalls to use ISS field
  2015-01-30 23:31         ` Geoff Levand
@ 2015-02-02 16:04           ` Catalin Marinas
  -1 siblings, 0 replies; 100+ messages in thread
From: Catalin Marinas @ 2015-02-02 16:04 UTC (permalink / raw)
  To: linux-arm-kernel

On Fri, Jan 30, 2015 at 11:31:21PM +0000, Geoff Levand wrote:
> On Mon, 2015-01-26 at 18:26 +0000, Catalin Marinas wrote:
> > On Sat, Jan 17, 2015 at 12:23:34AM +0000, Geoff Levand wrote:
> > >  /*
> > > diff --git a/arch/arm64/kernel/hyp-stub.S b/arch/arm64/kernel/hyp-stub.S
> > > index a272f33..e3db3fd 100644
> > > --- a/arch/arm64/kernel/hyp-stub.S
> > > +++ b/arch/arm64/kernel/hyp-stub.S
> > > @@ -22,6 +22,7 @@
> > >  #include <linux/irqchip/arm-gic-v3.h>
> > >  
> > >  #include <asm/assembler.h>
> > > +#include <asm/kvm_arm.h>
> > >  #include <asm/ptrace.h>
> > >  #include <asm/virt.h>
> > >  
> > > @@ -53,14 +54,22 @@ ENDPROC(__hyp_stub_vectors)
> > >  	.align 11
> > >  
> > >  el1_sync:
> > > -	mrs	x1, esr_el2
> > > -	lsr	x1, x1, #26
> > > -	cmp	x1, #0x16
> > > -	b.ne	2f				// Not an HVC trap
> > > -	cbz	x0, 1f
> > > -	msr	vbar_el2, x0			// Set vbar_el2
> > > +	mrs	x18, esr_el2
> > > +	lsr	x17, x18, #ESR_ELx_EC_SHIFT
> > > +	and	x18, x18, #ESR_ELx_ISS_MASK
> > > +
> > > +	cmp     x17, #ESR_ELx_EC_HVC64
> > > +	b.ne    2f				// Not an HVC trap
> > > +
> > > +	cmp	x18, #HVC_GET_VECTORS
> > > +	b.ne	1f
> > > +	mrs	x0, vbar_el2
> > >  	b	2f
> > > -1:	mrs	x0, vbar_el2			// Return vbar_el2
> > > +
> > > +1:	cmp	x18, #HVC_SET_VECTORS
> > > +	b.ne	2f
> > > +	msr	vbar_el2, x0
> > > +
> > >  2:	eret
> > >  ENDPROC(el1_sync)
> > 
> > You seem to be using x17 and x18 here freely. Do you have any guarantees
> > that the caller saved/restored those registers? I guess you assume they
> > are temporary registers and the caller first branches to a function
> > (like __kvm_hyp_call) and expects them to be corrupted. But I'm not sure
> > that's always the case. Take for example the __invoke_psci_fn_hvc where
> > the function is in C (we should change this for other reasons).
> 
> Yes, I assume the compiler will not expect them to be preserved.  I
> missed __invoke_psci_fn_hvc.  Can we just add x17 and x18 to the
> clobber list?
> 
>         asm volatile(
>                         __asmeq("%0", "x0")
>                         __asmeq("%1", "x1")
>                         __asmeq("%2", "x2")
>                         __asmeq("%3", "x3")
>                         "hvc    #0\n"
>                 : "+r" (function_id)
> -               : "r" (arg0), "r" (arg1), "r" (arg2));
> +               : "r" (arg0), "r" (arg1), "r" (arg2)
> +               : "x17", "x18");

I think we can ignore these because they would be called from a guest
context and IIUC we would only clobber x18 on the host HVC side.

-- 
Catalin

^ permalink raw reply	[flat|nested] 100+ messages in thread

* [PATCH 7/8] arm64/kexec: Add checks for KVM
  2015-02-02  8:18                   ` AKASHI Takahiro
@ 2015-02-06  0:11                     ` Geoff Levand
  -1 siblings, 0 replies; 100+ messages in thread
From: Geoff Levand @ 2015-02-06  0:11 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Takahiro,

On Mon, 2015-02-02 at 17:18 +0900, AKASHI Takahiro wrote:
> On 01/31/2015 04:48 AM, Geoff Levand wrote:
> > As an initial implementation we can hook into the CPU_DYING_FROZEN
> > notifier sent to hyp_init_cpu_notify().  The longer term solution
> > should use kvm_arch_hardware_enable() and kvm_arch_hardware_disable().
> 
> Are these two different approaches? 

Yes, these are two different solutions: one initial work-around, and a
more involved proper solution.  Hooking into the CPU_DYING_FROZEN
notifier would be an initial fix.  The proper solution would be to move
the KVM setup to kvm_arch_hardware_enable(), and the shutdown to
kvm_arch_hardware_disable().


> kernel_kexec() -> machine_shutdown() -> disable_nonboot_cpus()
>     -> _cpu_down() -> cpu_notify_nofail(CPU_DEAD|...)
> 
> On the other hand, kvm already has a hook into kvm_arch_hardware_disable():
>    (ignoring kvm_usage_count here)
> kvm_cpu_hotplug(CPU_DYING) -> hardware_disable()
>     -> hardware_disable_nolock() -> kvm_arch_hardware_disable()
> 
> So it seems that we don't have to add a new hook at hyp_init_cpu_notify()
> if kvm_arch_hardware_disable() is properly implemented.

Yes, that is correct.  But, as above, you would also need to update the
KVM startup to use kvm_arch_hardware_enable().

> disable_nonboot_cpus() will not invoke cpu hotplug on *boot* cpu, and
> we should handle it in a separate way though.

IIRC, the secondary cpus go through PSCI on shutdown, and that path
is working OK.  Maybe I am mistaken though.

The primary cpu shutdown (hyp stubs restored) is what is missing.  The
primary cpu goes through cpu_soft_restart(), and that is what is
currently failing.

-Geoff

^ permalink raw reply	[flat|nested] 100+ messages in thread

* [PATCH 7/8] arm64/kexec: Add checks for KVM
  2015-02-06  0:11                     ` Geoff Levand
@ 2015-02-06  4:18                       ` AKASHI Takahiro
  -1 siblings, 0 replies; 100+ messages in thread
From: AKASHI Takahiro @ 2015-02-06  4:18 UTC (permalink / raw)
  To: linux-arm-kernel

On 02/06/2015 09:11 AM, Geoff Levand wrote:
> Hi Takahiro,
>
> On Mon, 2015-02-02 at 17:18 +0900, AKASHI Takahiro wrote:
>> On 01/31/2015 04:48 AM, Geoff Levand wrote:
>>> As an initial implementation we can hook into the CPU_DYING_FROZEN
>>> notifier sent to hyp_init_cpu_notify().  The longer term solution
>>> should use kvm_arch_hardware_enable() and kvm_arch_hardware_disable().
>>
>> Are these two different approaches?
>
> Yes, these are two different solutions: one initial work-around, and a
> more involved proper solution.  Hooking into the CPU_DYING_FROZEN
> notifier would be an initial fix.  The proper solution would be to move
> the KVM setup to kvm_arch_hardware_enable(), and the shutdown to
> kvm_arch_hardware_disable().
>
>
>> kernel_kexec() -> machine_shutdown() -> disable_nonboot_cpus()
>>      -> _cpu_down() -> cpu_notify_nofail(CPU_DEAD|...)
>>
>> On the other hand, kvm already has a hook into kvm_arch_hardware_disable():
>>     (ignoring kvm_usage_count here)
>> kvm_cpu_hotplug(CPU_DYING) -> hardware_disable()
>>      -> hardware_disable_nolock() -> kvm_arch_hardware_disable()
>>
>> So it seems that we don't have to add a new hook at hyp_init_cpu_notify()
>> if kvm_arch_hardware_disable() is properly implemented.
>
> Yes, that is correct.  But, as above, you would also need to update the
> KVM startup to use kvm_arch_hardware_enable().
>
>> disable_nonboot_cpus() will not invoke cpu hotplug on *boot* cpu, and
>> we should handle it in a separate way though.
>
> IIRC, the secondary cpus go through PSCI on shutdown, and that path
> is working OK.  Maybe I am mistaken though.

If so, why should we add a hook at hyp_init_cpu_notify() as an initial work-around?

> The primary cpu shutdown (hyp stubs restored) is what is missing.  The
> primary cpu goes through cpu_soft_restart(), and that is what is
> currently failing.

Yeah, we will call the teardown function manually in soft_restart().

-Takahiro AKASHI
>
> -Geoff
>

^ permalink raw reply	[flat|nested] 100+ messages in thread

* [PATCH 7/8] arm64/kexec: Add checks for KVM
  2015-02-06  4:18                       ` AKASHI Takahiro
@ 2015-02-06  7:06                         ` Geoff Levand
  -1 siblings, 0 replies; 100+ messages in thread
From: Geoff Levand @ 2015-02-06  7:06 UTC (permalink / raw)
  To: linux-arm-kernel

On Fri, 2015-02-06 at 13:18 +0900, AKASHI Takahiro wrote:
> On 02/06/2015 09:11 AM, Geoff Levand wrote:
> > Hi Takahiro,
> >
> > On Mon, 2015-02-02 at 17:18 +0900, AKASHI Takahiro wrote:
> >> On 01/31/2015 04:48 AM, Geoff Levand wrote:
> >>> As an initial implementation we can hook into the CPU_DYING_FROZEN
> >>> notifier sent to hyp_init_cpu_notify().  The longer term solution
> >>> should use kvm_arch_hardware_enable() and kvm_arch_hardware_disable().
> >>
> >> Are these two different approaches?
> >
> > Yes, these are two different solutions: one initial work-around, and a
> > more involved proper solution.  Hooking into the CPU_DYING_FROZEN
> > notifier would be an initial fix.  The proper solution would be to move
> > the KVM setup to kvm_arch_hardware_enable(), and the shutdown to
> > kvm_arch_hardware_disable().
> >
> >
> >> kernel_kexec() -> machine_shutdown() -> disable_nonboot_cpu()
> >>      -> _cpu_down() -> cpu_notify_nofail(CPU_DEAD|...)
> >>
> >> On the other hand, kvm already has a hook into kvm_arch_hardware_disable():
> >>     (ignoring kvm_usage_count here)
> >> kvm_cpu_hotplug(CPU_DYING) -> hardware_disable()
> >>      -> hardware_disable_nolock() -> kvm_arch_hardware_disable()
> >>
> >> So it seems that we don't have to add a new hook at hyp_init_cpu_notify()
> >> if kvm_arch_hardware_disable() is properly implemented.
> >
> > Yes, that is correct.  But, as above, you would also need to update the
> > KVM startup to use kvm_arch_hardware_enable().
> >
> >> disable_nonboot_cpu() will not invoke cpu hotplug on *boot* cpu, and
> >> we should handle it in a separate way though.
> >
> > IIRC, the secondary cpus go through PSCI on shutdown, and that path
> > is working OK.  Maybe I am mistaken though.
> 
> If so, why should we add a hook at hyp_init_cpu_notify() as an initial work-around?

To tear down KVM on the primary cpu.

> > The primary cpu shutdown (hyp stubs restored) is what is missing.  The
> > primary cpu goes through cpu_soft_restart(), and that is what is
> > currently failing.
> 
> Yeah, we will call the teardown function manually in soft_restart().

I think the KVM tear down should happen independent of soft_restart().
When soft_restart() is called, KVM should have already been torn down.

-Geoff

^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: [PATCH 7/8] arm64/kexec: Add checks for KVM
@ 2015-02-06  7:06                         ` Geoff Levand
  0 siblings, 0 replies; 100+ messages in thread
From: Geoff Levand @ 2015-02-06  7:06 UTC (permalink / raw)
  To: AKASHI Takahiro
  Cc: Mark Rutland, Marc Zyngier, Catalin Marinas, Will Deacon,
	linux-arm-kernel, grant.likely, kexec, christoffer.dall

On Fri, 2015-02-06 at 13:18 +0900, AKASHI Takahiro wrote:
> On 02/06/2015 09:11 AM, Geoff Levand wrote:
> > Hi Takahiro,
> >
> > On Mon, 2015-02-02 at 17:18 +0900, AKASHI Takahiro wrote:
> >> On 01/31/2015 04:48 AM, Geoff Levand wrote:
> >>> As an initial implementation we can hook into the CPU_DYING_FROZEN
> >>> notifier sent to hyp_init_cpu_notify().  The longer term solution
> >>> should use kvm_arch_hardware_enable() and kvm_arch_hardware_disable().
> >>
> >> Are these two different approaches?
> >
> > Yes, these are two different solutions: one initial work-around, and a
> > more involved proper solution.  Hooking into the CPU_DYING_FROZEN
> > notifier would be an initial fix.  The proper solution would be to move
> > the KVM setup to kvm_arch_hardware_enable(), and the shutdown to
> > kvm_arch_hardware_disable().
> >
> >
> >> kernel_kexec() -> machine_shutdown() -> disable_nonboot_cpu()
> >>      -> _cpu_down() -> cpu_notify_nofail(CPU_DEAD|...)
> >>
> >> On the other hand, kvm already has a hook into kvm_arch_hardware_disable():
> >>     (ignoring kvm_usage_count here)
> >> kvm_cpu_hotplug(CPU_DYING) -> hardware_disable()
> >>      -> hardware_disable_nolock() -> kvm_arch_hardware_disable()
> >>
> >> So it seems that we don't have to add a new hook at hyp_init_cpu_notify()
> >> if kvm_arch_hardware_disable() is properly implemented.
> >
> > Yes, that is correct.  But, as above, you would also need to update the
> > KVM startup to use kvm_arch_hardware_enable().
> >
> >> disable_nonboot_cpu() will not invoke cpu hotplug on *boot* cpu, and
> >> we should handle it in a separate way though.
> >
> > IIRC, the secondary cpus go through PSCI on shutdown, and that path
> > is working OK.  Maybe I am mistaken though.
> 
> If so, why should we add a hook at hyp_init_cpu_notify() as an initial work-around?

To tear down KVM on the primary cpu.

> > The primary cpu shutdown (hyp stubs restored) is what is missing.  The
> > primary cpu goes through cpu_soft_restart(), and that is what is
> > currently failing.
> 
> Yeah, we will call the teardown function manually in soft_restart().

I think the KVM tear down should happen independent of soft_restart().
When soft_restart() is called, KVM should have already been torn down.

-Geoff




^ permalink raw reply	[flat|nested] 100+ messages in thread

* [PATCH v2 2/8] arm64: Convert hcalls to use ISS field
  2015-01-30 23:33       ` Geoff Levand
@ 2015-02-19 20:57         ` Christoffer Dall
  -1 siblings, 0 replies; 100+ messages in thread
From: Christoffer Dall @ 2015-02-19 20:57 UTC (permalink / raw)
  To: linux-arm-kernel

On Fri, Jan 30, 2015 at 03:33:48PM -0800, Geoff Levand wrote:
> To allow for additional hcalls to be defined and to make the arm64 hcall API
> more consistent across exception vector routines, change the hcall implementations
> to use the ISS field of the ESR_EL2 register to specify the hcall type.

how does this make things more consistent?  Do we have other examples of
things using the immediate field which I'm missing?

> 
> The existing arm64 hcall implementations are limited in that they only allow
> for two distinct hcalls; with the x0 register either zero, or not zero.  Also,
> the API of the hyp-stub exception vector routines and the KVM exception vector
> routines differ; hyp-stub uses a non-zero value in x0 to implement
> __hyp_set_vectors, whereas KVM uses it to implement kvm_call_hyp.

this seems orthogonal to the use of the immediate field vs. x0 though,
so why is the immediate field preferred again?

> 
> Define three new preprocessor macros HVC_CALL_HYP, HVC_GET_VECTORS, and
> HVC_SET_VECTORS to be used as hcall type specifiers and convert the
> existing __hyp_get_vectors(), __hyp_set_vectors() and kvm_call_hyp() routines
> to use these new macros when executing an HVC call.  Also change the
> corresponding hyp-stub and KVM el1_sync exception vector routines to use these
> new macros.
> 
> Signed-off-by: Geoff Levand <geoff@infradead.org>
> ---
>  arch/arm64/include/asm/virt.h | 27 +++++++++++++++++++++++++++
>  arch/arm64/kernel/hyp-stub.S  | 32 +++++++++++++++++++++-----------
>  arch/arm64/kernel/psci.c      |  3 ++-
>  arch/arm64/kvm/hyp.S          | 16 +++++++++-------
>  4 files changed, 59 insertions(+), 19 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
> index 7a5df52..eb10368 100644
> --- a/arch/arm64/include/asm/virt.h
> +++ b/arch/arm64/include/asm/virt.h
> @@ -18,6 +18,33 @@
>  #ifndef __ASM__VIRT_H
>  #define __ASM__VIRT_H
>  
> +/*
> + * The arm64 hcall implementation uses the ISS field of the ESR_EL2 register to
> + * specify the hcall type.  The exception handlers are allowed to use registers
> + * x17 and x18 in their implementation.  Any routine issuing an hcall must not
> + * expect these registers to be preserved.
> + */

I thought the existing use of registers was based on the arm procedure
call standard so we didn't have to worry about adding more caller-save
registers.

Don't we now have to start adding code around callers to make sure
callers know that x17 and x18 may be clobbered?

Thanks,
-Christoffer

^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: [PATCH v2 2/8] arm64: Convert hcalls to use ISS field
@ 2015-02-19 20:57         ` Christoffer Dall
  0 siblings, 0 replies; 100+ messages in thread
From: Christoffer Dall @ 2015-02-19 20:57 UTC (permalink / raw)
  To: Geoff Levand
  Cc: marc.zyngier, Catalin Marinas, Will Deacon, Grant Likely, kexec,
	linux-arm-kernel

On Fri, Jan 30, 2015 at 03:33:48PM -0800, Geoff Levand wrote:
> To allow for additional hcalls to be defined and to make the arm64 hcall API
> more consistent across exception vector routines, change the hcall implementations
> to use the ISS field of the ESR_EL2 register to specify the hcall type.

how does this make things more consistent?  Do we have other examples of
things using the immediate field which I'm missing?

> 
> The existing arm64 hcall implementations are limited in that they only allow
> for two distinct hcalls; with the x0 register either zero, or not zero.  Also,
> the API of the hyp-stub exception vector routines and the KVM exception vector
> routines differ; hyp-stub uses a non-zero value in x0 to implement
> __hyp_set_vectors, whereas KVM uses it to implement kvm_call_hyp.

this seems orthogonal to the use of the immediate field vs. x0 though,
so why is the immediate field preferred again?

> 
> Define three new preprocessor macros HVC_CALL_HYP, HVC_GET_VECTORS, and
> HVC_SET_VECTORS to be used as hcall type specifiers and convert the
> existing __hyp_get_vectors(), __hyp_set_vectors() and kvm_call_hyp() routines
> to use these new macros when executing an HVC call.  Also change the
> corresponding hyp-stub and KVM el1_sync exception vector routines to use these
> new macros.
> 
> Signed-off-by: Geoff Levand <geoff@infradead.org>
> ---
>  arch/arm64/include/asm/virt.h | 27 +++++++++++++++++++++++++++
>  arch/arm64/kernel/hyp-stub.S  | 32 +++++++++++++++++++++-----------
>  arch/arm64/kernel/psci.c      |  3 ++-
>  arch/arm64/kvm/hyp.S          | 16 +++++++++-------
>  4 files changed, 59 insertions(+), 19 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
> index 7a5df52..eb10368 100644
> --- a/arch/arm64/include/asm/virt.h
> +++ b/arch/arm64/include/asm/virt.h
> @@ -18,6 +18,33 @@
>  #ifndef __ASM__VIRT_H
>  #define __ASM__VIRT_H
>  
> +/*
> + * The arm64 hcall implementation uses the ISS field of the ESR_EL2 register to
> + * specify the hcall type.  The exception handlers are allowed to use registers
> + * x17 and x18 in their implementation.  Any routine issuing an hcall must not
> + * expect these registers to be preserved.
> + */

I thought the existing use of registers was based on the arm procedure
call standard so we didn't have to worry about adding more caller-save
registers.

Don't we now have to start adding code around callers to make sure
callers know that x17 and x18 may be clobbered?

Thanks,
-Christoffer


^ permalink raw reply	[flat|nested] 100+ messages in thread

* [PATCH v2 2/8] arm64: Convert hcalls to use ISS field
  2015-02-19 20:57         ` Christoffer Dall
@ 2015-02-25 22:09           ` Geoff Levand
  -1 siblings, 0 replies; 100+ messages in thread
From: Geoff Levand @ 2015-02-25 22:09 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Christoffer,

On Thu, 2015-02-19 at 21:57 +0100, Christoffer Dall wrote:
> On Fri, Jan 30, 2015 at 03:33:48PM -0800, Geoff Levand wrote:
> > To allow for additional hcalls to be defined and to make the arm64 hcall API
> > more consistent across exception vector routines, change the hcall implementations
> > to use the ISS field of the ESR_EL2 register to specify the hcall type.
> 
> how does this make things more consistent?  Do we have other examples of
> things using the immediate field which I'm missing?

As I detail in the next paragraph, by consistent I mean in the API exposed,
not in the implementation.  This point is really secondary to the need for
more hyper calls as I discuss below.

> > The existing arm64 hcall implementations are limited in that they only allow
> > for two distinct hcalls; with the x0 register either zero, or not zero.  Also,
> > the API of the hyp-stub exception vector routines and the KVM exception vector
> > routines differ; hyp-stub uses a non-zero value in x0 to implement
> > __hyp_set_vectors, whereas KVM uses it to implement kvm_call_hyp.
> 
> this seems orthogonal to the use of the immediate field vs. x0 though,
> > so why is the immediate field preferred again?

When a CPU is reset via cpu_soft_restart() we need to execute the caller
supplied reset routine at the exception level the kernel was entered at.
So, for a kernel entered at EL2, we need a way to execute that routine at
EL2.

The current hyp-stub vector implementation, which uses x0, is limited
to two hyper calls; __hyp_get_vectors and __hyp_set_vectors.  To
support cpu_soft_restart() we need a third hyper call, one which
allows for code to be executed at EL2.  My proposed use of the
immediate value of the hvc instruction will allow for 2^16 distinct
hyper calls.
 
> > Define three new preprocessor macros HVC_CALL_HYP, HVC_GET_VECTORS, and
> > HVC_SET_VECTORS to be used as hcall type specifiers and convert the
> > existing __hyp_get_vectors(), __hyp_set_vectors() and kvm_call_hyp() routines
> > to use these new macros when executing an HVC call.  Also change the
> > corresponding hyp-stub and KVM el1_sync exception vector routines to use these
> > new macros.
> > 
> > Signed-off-by: Geoff Levand <geoff@infradead.org>
> > ---
> >  arch/arm64/include/asm/virt.h | 27 +++++++++++++++++++++++++++
> >  arch/arm64/kernel/hyp-stub.S  | 32 +++++++++++++++++++++-----------
> >  arch/arm64/kernel/psci.c      |  3 ++-
> >  arch/arm64/kvm/hyp.S          | 16 +++++++++-------
> >  4 files changed, 59 insertions(+), 19 deletions(-)
> > 
> > diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
> > index 7a5df52..eb10368 100644
> > --- a/arch/arm64/include/asm/virt.h
> > +++ b/arch/arm64/include/asm/virt.h
> > @@ -18,6 +18,33 @@
> >  #ifndef __ASM__VIRT_H
> >  #define __ASM__VIRT_H
> >  
> > +/*
> > + * The arm64 hcall implementation uses the ISS field of the ESR_EL2 register to
> > + * specify the hcall type.  The exception handlers are allowed to use registers
> > + * x17 and x18 in their implementation.  Any routine issuing an hcall must not
> > + * expect these registers to be preserved.
> > + */
> 
> > I thought the existing use of registers was based on the arm procedure
> call standard so we didn't have to worry about adding more caller-save
> registers.
> 
> Don't we now have to start adding code around callers to make sure
> callers know that x17 and x18 may be clobbered?

We use x17 and x18 to allow hyper calls to work without a stack, which
is needed for cpu_soft_restart(). The procedure call standard says that
these are temporary registers, so a C compiler should not expect these
to be preserved.

-Geoff

^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: [PATCH v2 2/8] arm64: Convert hcalls to use ISS field
@ 2015-02-25 22:09           ` Geoff Levand
  0 siblings, 0 replies; 100+ messages in thread
From: Geoff Levand @ 2015-02-25 22:09 UTC (permalink / raw)
  To: Christoffer Dall
  Cc: marc.zyngier, Catalin Marinas, Will Deacon, Grant Likely, kexec,
	linux-arm-kernel

Hi Christoffer,

On Thu, 2015-02-19 at 21:57 +0100, Christoffer Dall wrote:
> On Fri, Jan 30, 2015 at 03:33:48PM -0800, Geoff Levand wrote:
> > To allow for additional hcalls to be defined and to make the arm64 hcall API
> > more consistent across exception vector routines, change the hcall implementations
> > to use the ISS field of the ESR_EL2 register to specify the hcall type.
> 
> how does this make things more consistent?  Do we have other examples of
> things using the immediate field which I'm missing?

As I detail in the next paragraph, by consistent I mean in the API exposed,
not in the implementation.  This point is really secondary to the need for
more hyper calls as I discuss below.

> > The existing arm64 hcall implementations are limited in that they only allow
> > for two distinct hcalls; with the x0 register either zero, or not zero.  Also,
> > the API of the hyp-stub exception vector routines and the KVM exception vector
> > routines differ; hyp-stub uses a non-zero value in x0 to implement
> > __hyp_set_vectors, whereas KVM uses it to implement kvm_call_hyp.
> 
> this seems orthogonal to the use of the immediate field vs. x0 though,
> > so why is the immediate field preferred again?

When a CPU is reset via cpu_soft_restart() we need to execute the caller
supplied reset routine at the exception level the kernel was entered at.
So, for a kernel entered at EL2, we need a way to execute that routine at
EL2.

The current hyp-stub vector implementation, which uses x0, is limited
to two hyper calls; __hyp_get_vectors and __hyp_set_vectors.  To
support cpu_soft_restart() we need a third hyper call, one which
allows for code to be executed at EL2.  My proposed use of the
immediate value of the hvc instruction will allow for 2^16 distinct
hyper calls.
 
> > Define three new preprocessor macros HVC_CALL_HYP, HVC_GET_VECTORS, and
> > HVC_SET_VECTORS to be used as hcall type specifiers and convert the
> > existing __hyp_get_vectors(), __hyp_set_vectors() and kvm_call_hyp() routines
> > to use these new macros when executing an HVC call.  Also change the
> > corresponding hyp-stub and KVM el1_sync exception vector routines to use these
> > new macros.
> > 
> > Signed-off-by: Geoff Levand <geoff@infradead.org>
> > ---
> >  arch/arm64/include/asm/virt.h | 27 +++++++++++++++++++++++++++
> >  arch/arm64/kernel/hyp-stub.S  | 32 +++++++++++++++++++++-----------
> >  arch/arm64/kernel/psci.c      |  3 ++-
> >  arch/arm64/kvm/hyp.S          | 16 +++++++++-------
> >  4 files changed, 59 insertions(+), 19 deletions(-)
> > 
> > diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
> > index 7a5df52..eb10368 100644
> > --- a/arch/arm64/include/asm/virt.h
> > +++ b/arch/arm64/include/asm/virt.h
> > @@ -18,6 +18,33 @@
> >  #ifndef __ASM__VIRT_H
> >  #define __ASM__VIRT_H
> >  
> > +/*
> > + * The arm64 hcall implementation uses the ISS field of the ESR_EL2 register to
> > + * specify the hcall type.  The exception handlers are allowed to use registers
> > + * x17 and x18 in their implementation.  Any routine issuing an hcall must not
> > + * expect these registers to be preserved.
> > + */
> 
> > I thought the existing use of registers was based on the arm procedure
> call standard so we didn't have to worry about adding more caller-save
> registers.
> 
> Don't we now have to start adding code around callers to make sure
> callers know that x17 and x18 may be clobbered?

We use x17 and x18 to allow hyper calls to work without a stack, which
is needed for cpu_soft_restart(). The procedure call standard says that
these are temporary registers, so a C compiler should not expect these
to be preserved.

-Geoff





^ permalink raw reply	[flat|nested] 100+ messages in thread

* [PATCH v2 2/8] arm64: Convert hcalls to use ISS field
  2015-02-25 22:09           ` Geoff Levand
@ 2015-03-02 22:13             ` Christoffer Dall
  -1 siblings, 0 replies; 100+ messages in thread
From: Christoffer Dall @ 2015-03-02 22:13 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Feb 25, 2015 at 02:09:30PM -0800, Geoff Levand wrote:
> Hi Christoffer,
> 
> On Thu, 2015-02-19 at 21:57 +0100, Christoffer Dall wrote:
> > On Fri, Jan 30, 2015 at 03:33:48PM -0800, Geoff Levand wrote:
> > > To allow for additional hcalls to be defined and to make the arm64 hcall API
> > > more consistent across exception vector routines, change the hcall implementations
> > > to use the ISS field of the ESR_EL2 register to specify the hcall type.
> > 
> > how does this make things more consistent?  Do we have other examples of
> > things using the immediate field which I'm missing?
> 
> As I detail in the next paragraph, by consistent I mean in the API exposed,
> not in the implementation.  This point is really secondary to the need for
> more hyper calls as I discuss below.
> 
> > > The existing arm64 hcall implementations are limited in that they only allow
> > > for two distinct hcalls; with the x0 register either zero, or not zero.  Also,
> > > the API of the hyp-stub exception vector routines and the KVM exception vector
> > > routines differ; hyp-stub uses a non-zero value in x0 to implement
> > > __hyp_set_vectors, whereas KVM uses it to implement kvm_call_hyp.
> > 
> > this seems orthogonal to the use of the immediate field vs. x0 though,
> > so why is the immediate field preferred again?
> 
> When a CPU is reset via cpu_soft_restart() we need to execute the caller
> supplied reset routine at the exception level the kernel was entered at.
> So, for a kernel entered at EL2, we need a way to execute that routine at
> EL2.
> 
> The current hyp-stub vector implementation, which uses x0, is limited
> to two hyper calls; __hyp_get_vectors and __hyp_set_vectors.  To
> support cpu_soft_restart() we need a third hyper call, one which
> allows for code to be executed at EL2.  My proposed use of the
> immediate value of the hvc instruction will allow for 2^16 distinct
> hyper calls.
>  

right, but using x0 allows for 2^64 distinct hypercalls.  Just to be
clear, I'm fine with using immediate field if there are no good reasons
not to, I was just curious as to what direct benefit it has.  After
thinking about it a bit, from my point of view, the benefit would be the
clarity that x0 is first argument like a normal procedure call, so no
need to shift things around.  Is this part of the equation or am I
missing the overall purpose here?

> > > Define three new preprocessor macros HVC_CALL_HYP, HVC_GET_VECTORS, and
> > > HVC_SET_VECTORS to be used as hcall type specifiers and convert the
> > > existing __hyp_get_vectors(), __hyp_set_vectors() and kvm_call_hyp() routines
> > > to use these new macros when executing an HVC call.  Also change the
> > > corresponding hyp-stub and KVM el1_sync exception vector routines to use these
> > > new macros.
> > > 
> > > Signed-off-by: Geoff Levand <geoff@infradead.org>
> > > ---
> > >  arch/arm64/include/asm/virt.h | 27 +++++++++++++++++++++++++++
> > >  arch/arm64/kernel/hyp-stub.S  | 32 +++++++++++++++++++++-----------
> > >  arch/arm64/kernel/psci.c      |  3 ++-
> > >  arch/arm64/kvm/hyp.S          | 16 +++++++++-------
> > >  4 files changed, 59 insertions(+), 19 deletions(-)
> > > 
> > > diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
> > > index 7a5df52..eb10368 100644
> > > --- a/arch/arm64/include/asm/virt.h
> > > +++ b/arch/arm64/include/asm/virt.h
> > > @@ -18,6 +18,33 @@
> > >  #ifndef __ASM__VIRT_H
> > >  #define __ASM__VIRT_H
> > >  
> > > +/*
> > > + * The arm64 hcall implementation uses the ISS field of the ESR_EL2 register to
> > > + * specify the hcall type.  The exception handlers are allowed to use registers
> > > + * x17 and x18 in their implementation.  Any routine issuing an hcall must not
> > > + * expect these registers to be preserved.
> > > + */
> > 
> > I thought the existing use of registers was based on the arm procedure
> > call standard so we didn't have to worry about adding more caller-save
> > registers.
> > 
> > Don't we now have to start adding code around callers to make sure
> > callers know that x17 and x18 may be clobbered?
> 
> We use x17 and x18 to allow hyper calls to work without a stack, which
> is needed for cpu_soft_restart(). The procedure call standard says that
> these are temporary registers, so a C compiler should not expect these
> to be preserved.
> 
Then why not use x9-x15 or x0-x7, which the AAPCS clearly specifies as
caller-saved, instead of registers which may have special meaning
etc.?

-Christoffer

^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: [PATCH v2 2/8] arm64: Convert hcalls to use ISS field
@ 2015-03-02 22:13             ` Christoffer Dall
  0 siblings, 0 replies; 100+ messages in thread
From: Christoffer Dall @ 2015-03-02 22:13 UTC (permalink / raw)
  To: Geoff Levand
  Cc: marc.zyngier, Catalin Marinas, Will Deacon, Grant Likely, kexec,
	linux-arm-kernel

On Wed, Feb 25, 2015 at 02:09:30PM -0800, Geoff Levand wrote:
> Hi Christoffer,
> 
> On Thu, 2015-02-19 at 21:57 +0100, Christoffer Dall wrote:
> > On Fri, Jan 30, 2015 at 03:33:48PM -0800, Geoff Levand wrote:
> > > To allow for additional hcalls to be defined and to make the arm64 hcall API
> > > more consistent across exception vector routines, change the hcall implementations
> > > to use the ISS field of the ESR_EL2 register to specify the hcall type.
> > 
> > how does this make things more consistent?  Do we have other examples of
> > things using the immediate field which I'm missing?
> 
> As I detail in the next paragraph, by consistent I mean in the API exposed,
> not in the implementation.  This point is really secondary to the need for
> more hyper calls as I discuss below.
> 
> > > The existing arm64 hcall implementations are limited in that they only allow
> > > for two distinct hcalls; with the x0 register either zero, or not zero.  Also,
> > > the API of the hyp-stub exception vector routines and the KVM exception vector
> > > routines differ; hyp-stub uses a non-zero value in x0 to implement
> > > __hyp_set_vectors, whereas KVM uses it to implement kvm_call_hyp.
> > 
> > this seems orthogonal to the use of the immediate field vs. x0 though,
> > so why is the immediate field preferred again?
> 
> When a CPU is reset via cpu_soft_restart() we need to execute the caller
> supplied reset routine at the exception level the kernel was entered at.
> So, for a kernel entered at EL2, we need a way to execute that routine at
> EL2.
> 
> The current hyp-stub vector implementation, which uses x0, is limited
> to two hyper calls; __hyp_get_vectors and __hyp_set_vectors.  To
> support cpu_soft_restart() we need a third hyper call, one which
> allows for code to be executed at EL2.  My proposed use of the
> immediate value of the hvc instruction will allow for 2^16 distinct
> hyper calls.
>  

right, but using x0 allows for 2^64 distinct hypercalls.  Just to be
clear, I'm fine with using immediate field if there are no good reasons
not to, I was just curious as to what direct benefit it has.  After
thinking about it a bit, from my point of view, the benefit would be the
clarity that x0 is first argument like a normal procedure call, so no
need to shift things around.  Is this part of the equation or am I
missing the overall purpose here?

> > > Define three new preprocessor macros HVC_CALL_HYP, HVC_GET_VECTORS, and
> > > HVC_SET_VECTORS to be used as hcall type specifiers and convert the
> > > existing __hyp_get_vectors(), __hyp_set_vectors() and kvm_call_hyp() routines
> > > to use these new macros when executing an HVC call.  Also change the
> > > corresponding hyp-stub and KVM el1_sync exception vector routines to use these
> > > new macros.
> > > 
> > > Signed-off-by: Geoff Levand <geoff@infradead.org>
> > > ---
> > >  arch/arm64/include/asm/virt.h | 27 +++++++++++++++++++++++++++
> > >  arch/arm64/kernel/hyp-stub.S  | 32 +++++++++++++++++++++-----------
> > >  arch/arm64/kernel/psci.c      |  3 ++-
> > >  arch/arm64/kvm/hyp.S          | 16 +++++++++-------
> > >  4 files changed, 59 insertions(+), 19 deletions(-)
> > > 
> > > diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
> > > index 7a5df52..eb10368 100644
> > > --- a/arch/arm64/include/asm/virt.h
> > > +++ b/arch/arm64/include/asm/virt.h
> > > @@ -18,6 +18,33 @@
> > >  #ifndef __ASM__VIRT_H
> > >  #define __ASM__VIRT_H
> > >  
> > > +/*
> > > + * The arm64 hcall implementation uses the ISS field of the ESR_EL2 register to
> > > + * specify the hcall type.  The exception handlers are allowed to use registers
> > > + * x17 and x18 in their implementation.  Any routine issuing an hcall must not
> > > + * expect these registers to be preserved.
> > > + */
> > 
> > I thought the existing use of registers was based on the arm procedure
> > call standard so we didn't have to worry about adding more caller-save
> > registers.
> > 
> > Don't we now have to start adding code around callers to make sure
> > callers know that x17 and x18 may be clobbered?
> 
> We use x17 and x18 to allow hyper calls to work without a stack, which
> is needed for cpu_soft_restart(). The procedure call standard says that
> these are temporary registers, so a C compiler should not expect these
> to be preserved.
> 
Then why not use x9-x15 or x0-x7, which the AAPCS clearly specifies as
caller-saved, instead of registers which may have special meaning
etc.?

-Christoffer


^ permalink raw reply	[flat|nested] 100+ messages in thread

* [PATCH v2 2/8] arm64: Convert hcalls to use ISS field
  2015-03-02 22:13             ` Christoffer Dall
@ 2015-03-02 23:22               ` Geoff Levand
  -1 siblings, 0 replies; 100+ messages in thread
From: Geoff Levand @ 2015-03-02 23:22 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Christoffer,

On Mon, 2015-03-02 at 14:13 -0800, Christoffer Dall wrote:
> On Wed, Feb 25, 2015 at 02:09:30PM -0800, Geoff Levand wrote:
> > The current hyp-stub vector implementation, which uses x0, is limited
> > to two hyper calls; __hyp_get_vectors and __hyp_set_vectors.  To
> > support cpu_soft_restart() we need a third hyper call, one which
> > allows for code to be executed at EL2.  My proposed use of the
> > immediate value of the hvc instruction will allow for 2^16 distinct
> > hyper calls.  
> 
> right, but using x0 allows for 2^64 distinct hypercalls.  Just to be
> clear, I'm fine with using immediate field if there are no good reasons
> not to, I was just curious as to what direct benefit it has.  After
> thinking about it a bit, from my point of view, the benefit would be the
> clarity that x0 is first argument like a normal procedure call, so no
> need to shift things around.  Is this part of the equation or am I
> missing the overall purpose here?

Yes, in general it will make marshaling of args, etc. easier.  Also,
to me, if we are going to change the implementation, this seems to be
the most natural way.
 
> > > > + * The arm64 hcall implementation uses the ISS field of the ESR_EL2 register to
> > > > + * specify the hcall type.  The exception handlers are allowed to use registers
> > > > + * x17 and x18 in their implementation.  Any routine issuing an hcall must not
> > > > + * expect these registers to be preserved.
> > > > + */
> > > 
> > > I thought the existing use of registers was based on the arm procedure
> > > call standard so we didn't have to worry about adding more caller-save
> > > registers.
> > > 
> > > Don't we now have to start adding code around callers to make sure
> > > callers know that x17 and x18 may be clobbered?
> > 
> > We use x17 and x18 to allow hyper calls to work without a stack, which
> > is needed for cpu_soft_restart(). The procedure call standard says that
> > these are temporary registers, so a C compiler should not expect these
> > to be preserved.
> > 
> Then why not use x9-x15 or x0-x7, which the AAPCS clearly specifies as
> caller-saved, instead of registers which may have special meaning
> etc.?

OK, I will change these to x9, x10.  I'll post a v8 patch set soon.

-Geoff

^ permalink raw reply	[flat|nested] 100+ messages in thread

* [PATCH v2 2/8] arm64: Convert hcalls to use ISS field
  2015-03-02 23:22               ` Geoff Levand
@ 2015-03-03 21:47                 ` Christopher Covington
  -1 siblings, 0 replies; 100+ messages in thread
From: Christopher Covington @ 2015-03-03 21:47 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Geoff,

On 03/02/2015 06:22 PM, Geoff Levand wrote:
> Hi Christoffer,
> 
> On Mon, 2015-03-02 at 14:13 -0800, Christoffer Dall wrote:
>> On Wed, Feb 25, 2015 at 02:09:30PM -0800, Geoff Levand wrote:
>>> The current hyp-stub vector implementation, which uses x0, is limited
>>> to two hyper calls; __hyp_get_vectors and __hyp_set_vectors.  To
>>> support cpu_soft_restart() we need a third hyper call, one which
>>> allows for code to be executed at EL2.  My proposed use of the
>>> immediate value of the hvc instruction will allow for 2^16 distinct
>>> hyper calls.  
>>
>> right, but using x0 allows for 2^64 distinct hypercalls.  Just to be
>> clear, I'm fine with using immediate field if there are no good reasons
>> not to, I was just curious as to what direct benefit it has.  After
>> thinking about it a bit, from my point of view, the benefit would be the
>> clarity that x0 is first argument like a normal procedure call, so no
>> need to shift things around.  Is this part of the equation or am I
>> missing the overall purpose here?
> 
> Yes, in general it will make marshaling of args, etc. easier.  Also,
> to me, if we are going to change the implementation it seems to be
> the most natural way.

From reading the architecture documentation, I too expected the hypervisor
call instruction's immediate and the instruction specific syndrome to be used.
However I vaguely recall someone pointing out that reading the exception
syndrome register and extracting the instruction specific syndrome is bound to
take longer than simply using a general purpose register.
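[Editorial note: the handler-side decode under discussion can be modeled as follows. This is a sketch using the architectural field positions (EC in ESR_EL2[31:26], ISS in ESR_EL2[24:0], HVC immediate in ISS[15:0]); the constant names are illustrative, not the kernel's.]

```python
# Model of how an EL2 vector could recover the hvc immediate from ESR_EL2.
ESR_EC_SHIFT = 26
ESR_EC_MASK = 0x3f
ESR_ISS_MASK = 0x01ffffff   # ISS occupies bits [24:0]
EC_HVC64 = 0x16             # exception class: HVC executed in AArch64 state

def decode_esr(esr):
    """Split an ESR_EL2 value into its exception class and ISS field."""
    ec = (esr >> ESR_EC_SHIFT) & ESR_EC_MASK
    iss = esr & ESR_ISS_MASK
    return ec, iss

# For `hvc #3`, hardware reports EC_HVC64 with the immediate in ISS[15:0]
# (bit 25 is the IL bit and is masked out of the ISS).
esr = (EC_HVC64 << ESR_EC_SHIFT) | (1 << 25) | 3
ec, iss = decode_esr(esr)
assert ec == EC_HVC64 and (iss & 0xffff) == 3
```

In the actual vector code this decode is a read of esr_el2 followed by a single AND to mask the ISS, which is the cost being weighed against passing the call number in a general purpose register.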

One might also consider alignment with the SMC Calling Convention document
[1], which while originally written for SMC, is also used for HVC by PSCI [2].

1. http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.den0028a/index.html
2. http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.den0022c/index.html

Chris

-- 
Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 100+ messages in thread

* [PATCH v2 2/8] arm64: Convert hcalls to use ISS field
  2015-03-03 21:47                 ` Christopher Covington
@ 2015-03-03 22:35                   ` Geoff Levand
  -1 siblings, 0 replies; 100+ messages in thread
From: Geoff Levand @ 2015-03-03 22:35 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Christopher,

On Tue, 2015-03-03 at 16:47 -0500, Christopher Covington wrote:
> On 03/02/2015 06:22 PM, Geoff Levand wrote:
> > Yes, in general it will make marshaling of args, etc. easier.  Also,
> > to me, if we are going to change the implementation it seems to be
> > the most natural way.
> 
> From reading the architecture documentation, I too expected the hypervisor
> call instruction's immediate and the instruction specific syndrome to be used.
> However I vaguely recall someone pointing out that reading the exception
> syndrome register and extracting the instruction specific syndrome is bound to
> take longer than simply using a general purpose register.
> 
> One might also consider alignment with the SMC Calling Convention document
> [1], which while originally written for SMC, is also used for HVC by PSCI [2].
> 
> 1. http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.den0028a/index.html
> 2. http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.den0022c/index.html

On looking at the SMC document, I found this:
 
  The SMC instruction encodes an immediate value as defined by the ARM
  architecture [1][2]. The size of this and mechanism to access the
  immediate value differ between the ARM instruction sets. Additionally,
  it is time consuming for 32-bit Secure Monitor code to access this
  immediate value. Consequently:

   o An SMC immediate value of Zero must be used.
   o All other SMC immediate values are reserved. 

The first problem of differing access methods does not exist for our
case, the kernel will always use the same method.

As for the second problem, the current implementation already reads
esr_el2.  The new code just adds an AND instruction to mask the ISS
field.  I don't think this would be more overhead than shifting
registers.

One alternative would be to use a high register, say x7, and limit the
hcalls to args x0-x6, but I don't think this gains much over using the
immediate.
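[Editorial note: the marshaling difference between the two conventions can be sketched as below. The handler numbers and table are purely illustrative, not the kernel's actual hcall API.]

```python
# Contrast of the two dispatch conventions discussed in this thread.

def hcall_via_x0(regs, handlers):
    """x0 carries the hcall number, so real arguments start at x1."""
    nr = regs[0]
    return handlers[nr](*regs[1:])

def hcall_via_imm(imm, regs, handlers):
    """The hvc immediate (recovered from ESR_EL2.ISS) selects the handler,
    leaving x0 onward free for arguments, as in a normal procedure call."""
    return handlers[imm](*regs)

handlers = {0: lambda *a: ("get_vectors",) + a,
            1: lambda *a: ("set_vectors",) + a}

# Same logical call, two encodings; with the immediate, the argument
# stays in x0 and nothing needs shifting.
assert hcall_via_x0([1, 0x8000], handlers) == ("set_vectors", 0x8000)
assert hcall_via_imm(1, [0x8000], handlers) == ("set_vectors", 0x8000)
```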

-Geoff

^ permalink raw reply	[flat|nested] 100+ messages in thread

end of thread, other threads:[~2015-03-03 22:35 UTC | newest]

Thread overview: 100+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <cover.1415926876.git.geoff@infradead.orgg>
2015-01-17  0:23 ` [PATCH 0/8] arm64 kexec kernel patches V7 Geoff Levand
2015-01-17  0:23   ` [PATCH 7/8] arm64/kexec: Add checks for KVM Geoff Levand
2015-01-26 19:19     ` Mark Rutland
2015-01-26 20:39       ` Christoffer Dall
2015-01-26 20:58         ` Geoff Levand
2015-01-26 21:00       ` Geoff Levand
2015-01-29  9:36       ` AKASHI Takahiro
2015-01-29  9:57       ` AKASHI Takahiro
2015-01-29 10:59         ` Marc Zyngier
2015-01-29 18:47           ` Mark Rutland
2015-01-30  6:10             ` AKASHI Takahiro
2015-01-30 12:14               ` Mark Rutland
2015-01-30 19:48               ` Geoff Levand
2015-02-02  8:18                 ` AKASHI Takahiro
2015-02-06  0:11                   ` Geoff Levand
2015-02-06  4:18                     ` AKASHI Takahiro
2015-02-06  7:06                       ` Geoff Levand
2015-01-17  0:23   ` [PATCH 2/8] arm64: Convert hcalls to use ISS field Geoff Levand
2015-01-26 18:26     ` Catalin Marinas
2015-01-30 23:31       ` Geoff Levand
2015-02-02 16:04         ` Catalin Marinas
2015-01-30 23:33     ` [PATCH v2 " Geoff Levand
2015-02-19 20:57       ` Christoffer Dall
2015-02-25 22:09         ` Geoff Levand
2015-03-02 22:13           ` Christoffer Dall
2015-03-02 23:22             ` Geoff Levand
2015-03-03 21:47               ` Christopher Covington
2015-03-03 22:35                 ` Geoff Levand
2015-01-17  0:23   ` [PATCH 8/8] arm64/kexec: Enable kexec in the arm64 defconfig Geoff Levand
2015-01-17  0:23   ` [PATCH 6/8] arm64/kexec: Add pr_devel output Geoff Levand
2015-01-17  0:23   ` [PATCH 3/8] arm64: Add new hcall HVC_CALL_FUNC Geoff Levand
2015-01-27 17:39     ` Catalin Marinas
2015-01-27 18:00       ` Mark Rutland
2015-01-30 21:52       ` Geoff Levand
2015-01-17  0:23   ` [PATCH 1/8] arm64: Move proc-macros.S to include/asm Geoff Levand
2015-01-26 17:45     ` Catalin Marinas
2015-01-27 19:33     ` [PATCH V2 1/8] arm64: Fold proc-macros.S into assembler.h Geoff Levand
2015-01-17  0:23   ` [PATCH 4/8] arm64: Add EL2 switch to soft_restart Geoff Levand
2015-01-26 19:02     ` Mark Rutland
2015-01-26 21:48       ` Geoff Levand
2015-01-27 16:46         ` Mark Rutland
2015-01-27 18:34           ` Geoff Levand
2015-01-27 17:57     ` Catalin Marinas
2015-01-30 21:47       ` Geoff Levand
2015-01-17  0:23   ` [PATCH 5/8] arm64/kexec: Add core kexec support Geoff Levand
2015-01-26 19:16     ` Mark Rutland
2015-01-26 17:44   ` [PATCH 0/8] arm64 kexec kernel patches V7 Catalin Marinas
2015-01-26 18:37     ` Grant Likely
2015-01-26 18:55     ` Mark Rutland
2015-01-26 20:57     ` Geoff Levand

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.