* [PATCH v3 00/23] arm64: Virtualization Host Extension support
From: Marc Zyngier @ 2016-02-03 17:59 UTC
  To: Catalin Marinas, Will Deacon, Christoffer Dall
  Cc: linux-arm-kernel, linux-kernel, kvm, kvmarm

ARMv8.1 comes with the "Virtualization Host Extension" (VHE for
short), which enables simpler support of Type-2 hypervisors.

This extension allows the kernel to run directly at EL2, and
significantly reduces the number of system registers shared between
host and guest, cutting the overhead of virtualization.

In order to have the same kernel binary running on all versions of the
architecture, this series makes heavy use of runtime code patching.
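
To give a flavour of that patching, here is a minimal sketch in the
spirit of the kern_hyp_va patch below (a simplified illustration, not
the exact code from the series; ALTERNATIVE() comes from
<asm/alternative.h>, the mask from <asm/kvm_mmu.h>):

	/* Non-VHE CPUs mask kernel VAs into the HYP range; on VHE
	 * CPUs the cpufeature code patches the AND into a NOP at
	 * boot, since the kernel already runs at EL2. */
	static inline unsigned long __kern_hyp_va(unsigned long v)
	{
		asm volatile(ALTERNATIVE("and %0, %0, %1",
					 "nop",
					 ARM64_HAS_VIRT_HOST_EXTN)
			     : "+r" (v)
			     : "i" (HYP_PAGE_OFFSET_MASK));
		return v;
	}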

The first 22 patches massage the KVM code to deal with VHE and enable
Linux to run at EL2. The last patch catches an ugly case when
VHE-capable CPUs are paired with some of their less capable siblings.
This should never happen, but hey...

I have deliberately left out some of the more "advanced"
optimizations, as they are likely to distract the reviewer from the
core infrastructure, which is what I care about at the moment.

Note: GDB is currently busted on VHE systems, as it checks for version
      6 of the debug architecture, while VHE implements version 7. The
      binutils people are on the case.

This has been tested on the FVP_Base_SLV-V8-A model, and is based on
v4.5-rc2. I've put a branch out at:

git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms.git kvm-arm64/vhe

* From v2:
  - Added support for perf to count kernel events in EL2
  - Added support for EL2 breakpoints
  - Moved the VTCR_EL2 setup from assembly to C
  - Made the fault handling easier to understand (hopefully)
  - Plenty of smaller fixups

* From v1:
  - Full rewrite now that the World Switch is written in C code.
  - Dropped the "early IRQ handling" for the moment.

Marc Zyngier (23):
  arm/arm64: KVM: Add hook for C-based stage2 init
  arm64: KVM: Switch to C-based stage2 init
  arm/arm64: Add new is_kernel_in_hyp_mode predicate
  arm64: Allow the arch timer to use the HYP timer
  arm64: Add ARM64_HAS_VIRT_HOST_EXTN feature
  arm64: KVM: Skip HYP setup when already running in HYP
  arm64: KVM: VHE: Patch out use of HVC
  arm64: KVM: VHE: Patch out kern_hyp_va
  arm64: KVM: VHE: Introduce unified system register accessors
  arm64: KVM: VHE: Differentiate host/guest sysreg save/restore
  arm64: KVM: VHE: Split save/restore of registers shared between guest
    and host
  arm64: KVM: VHE: Use unified system register accessors
  arm64: KVM: VHE: Enable minimal sysreg save/restore
  arm64: KVM: VHE: Make __fpsimd_enabled VHE aware
  arm64: KVM: VHE: Implement VHE activate/deactivate_traps
  arm64: KVM: VHE: Use unified sysreg accessors for timer
  arm64: KVM: VHE: Add fpsimd enabling on guest access
  arm64: KVM: VHE: Add alternative panic handling
  arm64: KVM: Move most of the fault decoding to C
  arm64: perf: Count EL2 events if the kernel is running in HYP
  arm64: hw_breakpoint: Allow EL2 breakpoints if running in HYP
  arm64: VHE: Add support for running Linux in EL2 mode
  arm64: Panic when VHE and non VHE CPUs coexist

 arch/arm/include/asm/kvm_host.h        |   4 +
 arch/arm/include/asm/virt.h            |   5 +
 arch/arm/kvm/arm.c                     | 174 +++++++++++++++++++----------
 arch/arm/kvm/mmu.c                     |   7 ++
 arch/arm64/Kconfig                     |  13 +++
 arch/arm64/include/asm/cpufeature.h    |   3 +-
 arch/arm64/include/asm/hw_breakpoint.h |  49 ++++++---
 arch/arm64/include/asm/kvm_arm.h       |   6 +-
 arch/arm64/include/asm/kvm_asm.h       |   2 +
 arch/arm64/include/asm/kvm_emulate.h   |   3 +
 arch/arm64/include/asm/kvm_host.h      |   6 +
 arch/arm64/include/asm/kvm_mmu.h       |  12 +-
 arch/arm64/include/asm/virt.h          |  27 +++++
 arch/arm64/kernel/asm-offsets.c        |   3 -
 arch/arm64/kernel/cpufeature.c         |  11 ++
 arch/arm64/kernel/head.S               |  48 +++++++-
 arch/arm64/kernel/perf_event.c         |  14 ++-
 arch/arm64/kernel/smp.c                |   3 +
 arch/arm64/kvm/hyp-init.S              |  18 ---
 arch/arm64/kvm/hyp.S                   |   7 ++
 arch/arm64/kvm/hyp/Makefile            |   1 +
 arch/arm64/kvm/hyp/entry.S             |   6 +
 arch/arm64/kvm/hyp/hyp-entry.S         | 109 ++++++------------
 arch/arm64/kvm/hyp/hyp.h               | 108 ++++++++++++++++--
 arch/arm64/kvm/hyp/s2-setup.c          |  44 ++++++++
 arch/arm64/kvm/hyp/switch.c            | 196 ++++++++++++++++++++++++++++++---
 arch/arm64/kvm/hyp/sysreg-sr.c         | 147 ++++++++++++++++---------
 arch/arm64/kvm/hyp/timer-sr.c          |  10 +-
 drivers/clocksource/arm_arch_timer.c   |  96 +++++++++-------
 29 files changed, 837 insertions(+), 295 deletions(-)
 create mode 100644 arch/arm64/kvm/hyp/s2-setup.c

-- 
2.1.4

* [PATCH v3 01/23] arm/arm64: KVM: Add hook for C-based stage2 init
From: Marc Zyngier @ 2016-02-03 17:59 UTC
  To: Catalin Marinas, Will Deacon, Christoffer Dall
  Cc: linux-arm-kernel, linux-kernel, kvm, kvmarm

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm/include/asm/kvm_host.h   | 4 ++++
 arch/arm/kvm/arm.c                | 1 +
 arch/arm64/include/asm/kvm_host.h | 4 ++++
 3 files changed, 9 insertions(+)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index f9f2779..f1e86f1 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -220,6 +220,10 @@ static inline void __cpu_init_hyp_mode(phys_addr_t boot_pgd_ptr,
 	kvm_call_hyp((void*)hyp_stack_ptr, vector_ptr, pgd_ptr);
 }
 
+static inline void __cpu_init_stage2(void)
+{
+}
+
 static inline int kvm_arch_dev_ioctl_check_extension(long ext)
 {
 	return 0;
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index dda1959..6b76e01 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -985,6 +985,7 @@ static void cpu_init_hyp_mode(void *dummy)
 	vector_ptr = (unsigned long)__kvm_hyp_vector;
 
 	__cpu_init_hyp_mode(boot_pgd_ptr, pgd_ptr, hyp_stack_ptr, vector_ptr);
+	__cpu_init_stage2();
 
 	kvm_arm_init_debug();
 }
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 689d4c9..fe86cf9 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -332,6 +332,10 @@ static inline void __cpu_init_hyp_mode(phys_addr_t boot_pgd_ptr,
 		     hyp_stack_ptr, vector_ptr);
 }
 
+static inline void __cpu_init_stage2(void)
+{
+}
+
 static inline void kvm_arch_hardware_disable(void) {}
 static inline void kvm_arch_hardware_unsetup(void) {}
 static inline void kvm_arch_sync_events(struct kvm *kvm) {}
-- 
2.1.4

* [PATCH v3 02/23] arm64: KVM: Switch to C-based stage2 init
From: Marc Zyngier @ 2016-02-03 17:59 UTC
  To: Catalin Marinas, Will Deacon, Christoffer Dall
  Cc: linux-arm-kernel, linux-kernel, kvm, kvmarm

There is no real need to leave the stage2 initialization as part
of the early HYP bootstrap, and we can easily postpone it to
the point where we can safely run C code.

This will help VHE, which doesn't need any of this bootstrap.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/include/asm/kvm_asm.h  |  2 ++
 arch/arm64/include/asm/kvm_host.h |  2 ++
 arch/arm64/kvm/hyp-init.S         | 18 ----------------
 arch/arm64/kvm/hyp/Makefile       |  1 +
 arch/arm64/kvm/hyp/s2-setup.c     | 44 +++++++++++++++++++++++++++++++++++++++
 5 files changed, 49 insertions(+), 18 deletions(-)
 create mode 100644 arch/arm64/kvm/hyp/s2-setup.c

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 52b777b..67ac580 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -48,6 +48,8 @@ extern u64 __vgic_v3_get_ich_vtr_el2(void);
 
 extern u32 __kvm_get_mdcr_el2(void);
 
+extern void __init_stage2_translation(void);
+
 #endif
 
 #endif /* __ARM_KVM_ASM_H__ */
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index fe86cf9..43688d9 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -25,6 +25,7 @@
 #include <linux/types.h>
 #include <linux/kvm_types.h>
 #include <asm/kvm.h>
+#include <asm/kvm_asm.h>
 #include <asm/kvm_mmio.h>
 
 #define __KVM_HAVE_ARCH_INTC_INITIALIZED
@@ -334,6 +335,7 @@ static inline void __cpu_init_hyp_mode(phys_addr_t boot_pgd_ptr,
 
 static inline void __cpu_init_stage2(void)
 {
+	kvm_call_hyp(__init_stage2_translation);
 }
 
 static inline void kvm_arch_hardware_disable(void) {}
diff --git a/arch/arm64/kvm/hyp-init.S b/arch/arm64/kvm/hyp-init.S
index 3e568dc..80fda7e 100644
--- a/arch/arm64/kvm/hyp-init.S
+++ b/arch/arm64/kvm/hyp-init.S
@@ -87,24 +87,6 @@ __do_hyp_init:
 #endif
 	msr	tcr_el2, x4
 
-	ldr	x4, =VTCR_EL2_FLAGS
-	/*
-	 * Read the PARange bits from ID_AA64MMFR0_EL1 and set the PS bits in
-	 * VTCR_EL2.
-	 */
-	mrs	x5, ID_AA64MMFR0_EL1
-	bfi	x4, x5, #16, #3
-	/*
-	 * Read the VMIDBits bits from ID_AA64MMFR1_EL1 and set the VS bit in
-	 * VTCR_EL2.
-	 */
-	mrs	x5, ID_AA64MMFR1_EL1
-	ubfx	x5, x5, #5, #1
-	lsl	x5, x5, #VTCR_EL2_VS
-	orr	x4, x4, x5
-
-	msr	vtcr_el2, x4
-
 	mrs	x4, mair_el1
 	msr	mair_el2, x4
 	isb
diff --git a/arch/arm64/kvm/hyp/Makefile b/arch/arm64/kvm/hyp/Makefile
index 826032b..5326e66 100644
--- a/arch/arm64/kvm/hyp/Makefile
+++ b/arch/arm64/kvm/hyp/Makefile
@@ -12,3 +12,4 @@ obj-$(CONFIG_KVM_ARM_HOST) += switch.o
 obj-$(CONFIG_KVM_ARM_HOST) += fpsimd.o
 obj-$(CONFIG_KVM_ARM_HOST) += tlb.o
 obj-$(CONFIG_KVM_ARM_HOST) += hyp-entry.o
+obj-$(CONFIG_KVM_ARM_HOST) += s2-setup.o
diff --git a/arch/arm64/kvm/hyp/s2-setup.c b/arch/arm64/kvm/hyp/s2-setup.c
new file mode 100644
index 0000000..17e8cc0
--- /dev/null
+++ b/arch/arm64/kvm/hyp/s2-setup.c
@@ -0,0 +1,44 @@
+/*
+ * Copyright (C) 2016 - ARM Ltd
+ * Author: Marc Zyngier <marc.zyngier@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/types.h>
+#include <asm/kvm_arm.h>
+#include <asm/kvm_asm.h>
+
+#include "hyp.h"
+
+void __hyp_text __init_stage2_translation(void)
+{
+	u64 val = VTCR_EL2_FLAGS;
+	u64 tmp;
+
+	/*
+	 * Read the PARange bits from ID_AA64MMFR0_EL1 and set the PS
+	 * bits in VTCR_EL2. Amusingly, the PARange is 4 bits, while
+	 * PS is only 3. Fortunately, bit 19 is RES0 in VTCR_EL2...
+	 */
+	val |= (read_sysreg(id_aa64mmfr0_el1) & 7) << 16;
+
+	/*
+	 * Read the VMIDBits bits from ID_AA64MMFR1_EL1 and set the VS
+	 * bit in VTCR_EL2.
+	 */
+	tmp = (read_sysreg(id_aa64mmfr1_el1) >> 4) & 0xf;
+	val |= (tmp == 2) ? VTCR_EL2_VS : 0;
+
+	write_sysreg(val, vtcr_el2);
+}
-- 
2.1.4
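
For reference, the same computation as a standalone C sketch (the
VTCR_EL2_* macros are from <asm/kvm_arm.h>; this is an illustration,
not part of the patch):

	/* What __init_stage2_translation() above builds up. */
	static u64 compute_vtcr(u64 mmfr0, u64 mmfr1)
	{
		u64 val = VTCR_EL2_FLAGS;

		/* PARange is 4 bits but PS only 3: keep the low 3 */
		val |= (mmfr0 & 7) << 16;

		/* VMIDBits == 2 means 16bit VMIDs are implemented */
		if (((mmfr1 >> 4) & 0xf) == 2)
			val |= VTCR_EL2_VS;

		return val;		/* destined for VTCR_EL2 */
	}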

* [PATCH v3 03/23] arm/arm64: Add new is_kernel_in_hyp_mode predicate
From: Marc Zyngier @ 2016-02-03 17:59 UTC
  To: Catalin Marinas, Will Deacon, Christoffer Dall
  Cc: linux-arm-kernel, linux-kernel, kvm, kvmarm

With the ARMv8.1 VHE extension, it will be possible to run the kernel
at EL2 (aka HYP mode). In order for the kernel to easily find out
where it is running, add a new predicate that returns whether or
not the kernel is in HYP mode.

For completeness, the 32bit code also gets such a predicate (always
returning false) so that code common to both architectures (timers,
KVM) can use it transparently.

Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm/include/asm/virt.h   |  5 +++++
 arch/arm64/include/asm/virt.h | 10 ++++++++++
 2 files changed, 15 insertions(+)

diff --git a/arch/arm/include/asm/virt.h b/arch/arm/include/asm/virt.h
index 4371f45..b6a3cef 100644
--- a/arch/arm/include/asm/virt.h
+++ b/arch/arm/include/asm/virt.h
@@ -74,6 +74,11 @@ static inline bool is_hyp_mode_mismatched(void)
 {
 	return !!(__boot_cpu_mode & BOOT_CPU_MODE_MISMATCH);
 }
+
+static inline bool is_kernel_in_hyp_mode(void)
+{
+	return false;
+}
 #endif
 
 #endif /* __ASSEMBLY__ */
diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
index 7a5df52..9f22dd6 100644
--- a/arch/arm64/include/asm/virt.h
+++ b/arch/arm64/include/asm/virt.h
@@ -23,6 +23,8 @@
 
 #ifndef __ASSEMBLY__
 
+#include <asm/ptrace.h>
+
 /*
  * __boot_cpu_mode records what mode CPUs were booted in.
  * A correctly-implemented bootloader must start all CPUs in the same mode:
@@ -50,6 +52,14 @@ static inline bool is_hyp_mode_mismatched(void)
 	return __boot_cpu_mode[0] != __boot_cpu_mode[1];
 }
 
+static inline bool is_kernel_in_hyp_mode(void)
+{
+	u64 el;
+
+	asm("mrs %0, CurrentEL" : "=r" (el));
+	return el == CurrentEL_EL2;
+}
+
 /* The section containing the hypervisor text */
 extern char __hyp_text_start[];
 extern char __hyp_text_end[];
-- 
2.1.4
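
A usage sketch, mirroring the arch timer rework later in this series
(enum ppi_nr and its values belong to the timer driver):

	static enum ppi_nr pick_timer_ppi(void)
	{
		if (is_kernel_in_hyp_mode())
			return HYP_PPI;		/* kernel at EL2 */
		return PHYS_SECURE_PPI;		/* kernel at EL1 */
	}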

* [PATCH v3 04/23] arm64: Allow the arch timer to use the HYP timer
From: Marc Zyngier @ 2016-02-03 17:59 UTC
  To: Catalin Marinas, Will Deacon, Christoffer Dall
  Cc: linux-arm-kernel, linux-kernel, kvm, kvmarm

With the ARMv8.1 VHE extension, the kernel can run in HYP mode, and
thus use the HYP timer instead of the normal guest timer, in a mostly
transparent way, except for the interrupt line.

This patch reworks the arch timer code to allow the selection of
the HYP PPI, falling back to the guest timer if it is not
available.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 drivers/clocksource/arm_arch_timer.c | 96 ++++++++++++++++++++++--------------
 1 file changed, 59 insertions(+), 37 deletions(-)

diff --git a/drivers/clocksource/arm_arch_timer.c b/drivers/clocksource/arm_arch_timer.c
index c64d543..ffe9d1c 100644
--- a/drivers/clocksource/arm_arch_timer.c
+++ b/drivers/clocksource/arm_arch_timer.c
@@ -67,7 +67,7 @@ static int arch_timer_ppi[MAX_TIMER_PPI];
 
 static struct clock_event_device __percpu *arch_timer_evt;
 
-static bool arch_timer_use_virtual = true;
+static enum ppi_nr arch_timer_uses_ppi = VIRT_PPI;
 static bool arch_timer_c3stop;
 static bool arch_timer_mem_use_virtual;
 
@@ -263,14 +263,20 @@ static void __arch_timer_setup(unsigned type,
 		clk->name = "arch_sys_timer";
 		clk->rating = 450;
 		clk->cpumask = cpumask_of(smp_processor_id());
-		if (arch_timer_use_virtual) {
-			clk->irq = arch_timer_ppi[VIRT_PPI];
+		clk->irq = arch_timer_ppi[arch_timer_uses_ppi];
+		switch (arch_timer_uses_ppi) {
+		case VIRT_PPI:
 			clk->set_state_shutdown = arch_timer_shutdown_virt;
 			clk->set_next_event = arch_timer_set_next_event_virt;
-		} else {
-			clk->irq = arch_timer_ppi[PHYS_SECURE_PPI];
+			break;
+		case PHYS_SECURE_PPI:
+		case PHYS_NONSECURE_PPI:
+		case HYP_PPI:
 			clk->set_state_shutdown = arch_timer_shutdown_phys;
 			clk->set_next_event = arch_timer_set_next_event_phys;
+			break;
+		default:
+			BUG();
 		}
 	} else {
 		clk->features |= CLOCK_EVT_FEAT_DYNIRQ;
@@ -338,17 +344,20 @@ static void arch_counter_set_user_access(void)
 	arch_timer_set_cntkctl(cntkctl);
 }
 
+static bool arch_timer_has_nonsecure_ppi(void)
+{
+	return (arch_timer_uses_ppi == PHYS_SECURE_PPI &&
+		arch_timer_ppi[PHYS_NONSECURE_PPI]);
+}
+
 static int arch_timer_setup(struct clock_event_device *clk)
 {
 	__arch_timer_setup(ARCH_CP15_TIMER, clk);
 
-	if (arch_timer_use_virtual)
-		enable_percpu_irq(arch_timer_ppi[VIRT_PPI], 0);
-	else {
-		enable_percpu_irq(arch_timer_ppi[PHYS_SECURE_PPI], 0);
-		if (arch_timer_ppi[PHYS_NONSECURE_PPI])
-			enable_percpu_irq(arch_timer_ppi[PHYS_NONSECURE_PPI], 0);
-	}
+	enable_percpu_irq(arch_timer_ppi[arch_timer_uses_ppi], 0);
+
+	if (arch_timer_has_nonsecure_ppi())
+		enable_percpu_irq(arch_timer_ppi[PHYS_NONSECURE_PPI], 0);
 
 	arch_counter_set_user_access();
 	if (IS_ENABLED(CONFIG_ARM_ARCH_TIMER_EVTSTREAM))
@@ -390,7 +399,7 @@ static void arch_timer_banner(unsigned type)
 		     (unsigned long)arch_timer_rate / 1000000,
 		     (unsigned long)(arch_timer_rate / 10000) % 100,
 		     type & ARCH_CP15_TIMER ?
-			arch_timer_use_virtual ? "virt" : "phys" :
+		     (arch_timer_uses_ppi == VIRT_PPI) ? "virt" : "phys" :
 			"",
 		     type == (ARCH_CP15_TIMER | ARCH_MEM_TIMER) ?  "/" : "",
 		     type & ARCH_MEM_TIMER ?
@@ -460,7 +469,7 @@ static void __init arch_counter_register(unsigned type)
 
 	/* Register the CP15 based counter if we have one */
 	if (type & ARCH_CP15_TIMER) {
-		if (IS_ENABLED(CONFIG_ARM64) || arch_timer_use_virtual)
+		if (IS_ENABLED(CONFIG_ARM64) || arch_timer_uses_ppi == VIRT_PPI)
 			arch_timer_read_counter = arch_counter_get_cntvct;
 		else
 			arch_timer_read_counter = arch_counter_get_cntpct;
@@ -490,13 +499,9 @@ static void arch_timer_stop(struct clock_event_device *clk)
 	pr_debug("arch_timer_teardown disable IRQ%d cpu #%d\n",
 		 clk->irq, smp_processor_id());
 
-	if (arch_timer_use_virtual)
-		disable_percpu_irq(arch_timer_ppi[VIRT_PPI]);
-	else {
-		disable_percpu_irq(arch_timer_ppi[PHYS_SECURE_PPI]);
-		if (arch_timer_ppi[PHYS_NONSECURE_PPI])
-			disable_percpu_irq(arch_timer_ppi[PHYS_NONSECURE_PPI]);
-	}
+	disable_percpu_irq(arch_timer_ppi[arch_timer_uses_ppi]);
+	if (arch_timer_has_nonsecure_ppi())
+		disable_percpu_irq(arch_timer_ppi[PHYS_NONSECURE_PPI]);
 
 	clk->set_state_shutdown(clk);
 }
@@ -562,12 +567,14 @@ static int __init arch_timer_register(void)
 		goto out;
 	}
 
-	if (arch_timer_use_virtual) {
-		ppi = arch_timer_ppi[VIRT_PPI];
+	ppi = arch_timer_ppi[arch_timer_uses_ppi];
+	switch (arch_timer_uses_ppi) {
+	case VIRT_PPI:
 		err = request_percpu_irq(ppi, arch_timer_handler_virt,
 					 "arch_timer", arch_timer_evt);
-	} else {
-		ppi = arch_timer_ppi[PHYS_SECURE_PPI];
+		break;
+	case PHYS_SECURE_PPI:
+	case PHYS_NONSECURE_PPI:
 		err = request_percpu_irq(ppi, arch_timer_handler_phys,
 					 "arch_timer", arch_timer_evt);
 		if (!err && arch_timer_ppi[PHYS_NONSECURE_PPI]) {
@@ -578,6 +585,13 @@ static int __init arch_timer_register(void)
 				free_percpu_irq(arch_timer_ppi[PHYS_SECURE_PPI],
 						arch_timer_evt);
 		}
+		break;
+	case HYP_PPI:
+		err = request_percpu_irq(ppi, arch_timer_handler_phys,
+					 "arch_timer", arch_timer_evt);
+		break;
+	default:
+		BUG();
 	}
 
 	if (err) {
@@ -602,15 +616,10 @@ static int __init arch_timer_register(void)
 out_unreg_notify:
 	unregister_cpu_notifier(&arch_timer_cpu_nb);
 out_free_irq:
-	if (arch_timer_use_virtual)
-		free_percpu_irq(arch_timer_ppi[VIRT_PPI], arch_timer_evt);
-	else {
-		free_percpu_irq(arch_timer_ppi[PHYS_SECURE_PPI],
+	free_percpu_irq(arch_timer_ppi[arch_timer_uses_ppi], arch_timer_evt);
+	if (arch_timer_has_nonsecure_ppi())
+		free_percpu_irq(arch_timer_ppi[PHYS_NONSECURE_PPI],
 				arch_timer_evt);
-		if (arch_timer_ppi[PHYS_NONSECURE_PPI])
-			free_percpu_irq(arch_timer_ppi[PHYS_NONSECURE_PPI],
-					arch_timer_evt);
-	}
 
 out_free:
 	free_percpu(arch_timer_evt);
@@ -697,12 +706,25 @@ static void __init arch_timer_init(void)
 	 *
 	 * If no interrupt provided for virtual timer, we'll have to
 	 * stick to the physical timer. It'd better be accessible...
+	 *
+	 * On ARMv8.1 with VH extensions, the kernel runs in HYP. VHE
+	 * accesses to CNTP_*_EL1 registers are silently redirected to
+	 * their CNTHP_*_EL2 counterparts, and use a different PPI
+	 * number.
 	 */
 	if (is_hyp_mode_available() || !arch_timer_ppi[VIRT_PPI]) {
-		arch_timer_use_virtual = false;
+		bool has_ppi;
+
+		if (is_kernel_in_hyp_mode()) {
+			arch_timer_uses_ppi = HYP_PPI;
+			has_ppi = !!arch_timer_ppi[HYP_PPI];
+		} else {
+			arch_timer_uses_ppi = PHYS_SECURE_PPI;
+			has_ppi = (!!arch_timer_ppi[PHYS_SECURE_PPI] ||
+				   !!arch_timer_ppi[PHYS_NONSECURE_PPI]);
+		}
 
-		if (!arch_timer_ppi[PHYS_SECURE_PPI] ||
-		    !arch_timer_ppi[PHYS_NONSECURE_PPI]) {
+		if (!has_ppi) {
 			pr_warn("arch_timer: No interrupt available, giving up\n");
 			return;
 		}
@@ -735,7 +757,7 @@ static void __init arch_timer_of_init(struct device_node *np)
 	 */
 	if (IS_ENABLED(CONFIG_ARM) &&
 	    of_property_read_bool(np, "arm,cpu-registers-not-fw-configured"))
-			arch_timer_use_virtual = false;
+		arch_timer_uses_ppi = PHYS_SECURE_PPI;
 
 	arch_timer_init();
 }
-- 
2.1.4
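
For context, the PPI numbering the patch above switches on is the
existing enum from the arch timer header (the four PPIs come from the
"arm,armv8-timer" DT node in this fixed order):

	enum ppi_nr {
		PHYS_SECURE_PPI,	/* CNTP, secure EL1 */
		PHYS_NONSECURE_PPI,	/* CNTP, non-secure EL1 */
		VIRT_PPI,		/* CNTV, the usual guest timer */
		HYP_PPI,		/* CNTHP, taken when at EL2 */
		MAX_TIMER_PPI
	};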

* [PATCH v3 05/23] arm64: Add ARM64_HAS_VIRT_HOST_EXTN feature
From: Marc Zyngier @ 2016-02-03 17:59 UTC
  To: Catalin Marinas, Will Deacon, Christoffer Dall
  Cc: linux-arm-kernel, linux-kernel, kvm, kvmarm

Add a new ARM64_HAS_VIRT_HOST_EXTN feature to indicate that the
CPU has the ARMv8.1 VHE capability.

This will be used to trigger kernel patching in KVM.

Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/include/asm/cpufeature.h |  3 ++-
 arch/arm64/kernel/cpufeature.c      | 11 +++++++++++
 2 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 8f271b8..c705d6a 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -30,8 +30,9 @@
 #define ARM64_HAS_LSE_ATOMICS			5
 #define ARM64_WORKAROUND_CAVIUM_23154		6
 #define ARM64_WORKAROUND_834220			7
+#define ARM64_HAS_VIRT_HOST_EXTN		8
 
-#define ARM64_NCAPS				8
+#define ARM64_NCAPS				9
 
 #ifndef __ASSEMBLY__
 
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 5c90aa4..ba74519 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -26,6 +26,7 @@
 #include <asm/cpu_ops.h>
 #include <asm/processor.h>
 #include <asm/sysreg.h>
+#include <asm/virt.h>
 
 unsigned long elf_hwcap __read_mostly;
 EXPORT_SYMBOL_GPL(elf_hwcap);
@@ -621,6 +622,11 @@ static bool has_useable_gicv3_cpuif(const struct arm64_cpu_capabilities *entry)
 	return has_sre;
 }
 
+static bool runs_at_el2(const struct arm64_cpu_capabilities *entry)
+{
+	return is_kernel_in_hyp_mode();
+}
+
 static const struct arm64_cpu_capabilities arm64_features[] = {
 	{
 		.desc = "GIC system register CPU interface",
@@ -651,6 +657,11 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.min_field_value = 2,
 	},
 #endif /* CONFIG_AS_LSE && CONFIG_ARM64_LSE_ATOMICS */
+	{
+		.desc = "Virtualization Host Extensions",
+		.capability = ARM64_HAS_VIRT_HOST_EXTN,
+		.matches = runs_at_el2,
+	},
 	{},
 };
 
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 119+ messages in thread

* [PATCH v3 06/23] arm64: KVM: Skip HYP setup when already running in HYP
  2016-02-03 17:59 ` Marc Zyngier
@ 2016-02-03 17:59   ` Marc Zyngier
  0 siblings, 0 replies; 119+ messages in thread
From: Marc Zyngier @ 2016-02-03 17:59 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Christoffer Dall
  Cc: linux-arm-kernel, linux-kernel, kvm, kvmarm

With the kernel running at EL2, there is no point trying to
configure page tables for HYP, as the kernel is already mapped.

Take this opportunity to refactor the whole init a bit, allowing
the various parts of the hypervisor bringup to be split across
multiple functions.
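
A condensed view of the control flow this gives kvm_arch_init() (a
sketch of the code below, with the error paths trimmed):

	err = init_common_resources();
	if (err)
		return err;

	if (is_kernel_in_hyp_mode())
		err = init_vhe_mode();	/* already at EL2, no HYP setup */
	else
		err = init_hyp_mode();	/* EL1 host: full HYP bringup */
	if (err)
		goto out_err;

	err = init_subsystems();	/* vgic, timer, perf, coproc */
	if (err)
		goto out_hyp;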

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm/kvm/arm.c | 173 +++++++++++++++++++++++++++++++++++------------------
 arch/arm/kvm/mmu.c |   7 +++
 2 files changed, 121 insertions(+), 59 deletions(-)

diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 6b76e01..58f89e3 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -967,6 +967,11 @@ long kvm_arch_vm_ioctl(struct file *filp,
 	}
 }
 
+static void cpu_init_stage2(void *dummy)
+{
+	__cpu_init_stage2();
+}
+
 static void cpu_init_hyp_mode(void *dummy)
 {
 	phys_addr_t boot_pgd_ptr;
@@ -1036,6 +1041,82 @@ static inline void hyp_cpu_pm_init(void)
 }
 #endif
 
+static void teardown_common_resources(void)
+{
+	free_percpu(kvm_host_cpu_state);
+}
+
+static int init_common_resources(void)
+{
+	kvm_host_cpu_state = alloc_percpu(kvm_cpu_context_t);
+	if (!kvm_host_cpu_state) {
+		kvm_err("Cannot allocate host CPU state\n");
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+static int init_subsystems(void)
+{
+	int err;
+
+	/*
+	 * Init HYP view of VGIC
+	 */
+	err = kvm_vgic_hyp_init();
+	switch (err) {
+	case 0:
+		vgic_present = true;
+		break;
+	case -ENODEV:
+	case -ENXIO:
+		vgic_present = false;
+		break;
+	default:
+		return err;
+	}
+
+	/*
+	 * Init HYP architected timer support
+	 */
+	err = kvm_timer_hyp_init();
+	if (err)
+		return err;
+
+	kvm_perf_init();
+	kvm_coproc_table_init();
+
+	return 0;
+}
+
+static void teardown_hyp_mode(void)
+{
+	int cpu;
+
+	if (is_kernel_in_hyp_mode())
+		return;
+
+	free_hyp_pgds();
+	for_each_possible_cpu(cpu)
+		free_page(per_cpu(kvm_arm_hyp_stack_page, cpu));
+}
+
+static int init_vhe_mode(void)
+{
+	/*
+	 * Execute the init code on each CPU.
+	 */
+	on_each_cpu(cpu_init_stage2, NULL, 1);
+
+	/* set size of VMID supported by CPU */
+	kvm_vmid_bits = kvm_get_vmid_bits();
+	kvm_info("%d-bit VMID\n", kvm_vmid_bits);
+
+	kvm_info("VHE mode initialized successfully\n");
+	return 0;
+}
+
 /**
  * Inits Hyp-mode on all online CPUs
  */
@@ -1066,7 +1147,7 @@ static int init_hyp_mode(void)
 		stack_page = __get_free_page(GFP_KERNEL);
 		if (!stack_page) {
 			err = -ENOMEM;
-			goto out_free_stack_pages;
+			goto out_err;
 		}
 
 		per_cpu(kvm_arm_hyp_stack_page, cpu) = stack_page;
@@ -1078,13 +1159,13 @@ static int init_hyp_mode(void)
 	err = create_hyp_mappings(__kvm_hyp_code_start, __kvm_hyp_code_end);
 	if (err) {
 		kvm_err("Cannot map world-switch code\n");
-		goto out_free_mappings;
+		goto out_err;
 	}
 
 	err = create_hyp_mappings(__start_rodata, __end_rodata);
 	if (err) {
 		kvm_err("Cannot map rodata section\n");
-		goto out_free_mappings;
+		goto out_err;
 	}
 
 	/*
@@ -1096,20 +1177,10 @@ static int init_hyp_mode(void)
 
 		if (err) {
 			kvm_err("Cannot map hyp stack\n");
-			goto out_free_mappings;
+			goto out_err;
 		}
 	}
 
-	/*
-	 * Map the host CPU structures
-	 */
-	kvm_host_cpu_state = alloc_percpu(kvm_cpu_context_t);
-	if (!kvm_host_cpu_state) {
-		err = -ENOMEM;
-		kvm_err("Cannot allocate host CPU state\n");
-		goto out_free_mappings;
-	}
-
 	for_each_possible_cpu(cpu) {
 		kvm_cpu_context_t *cpu_ctxt;
 
@@ -1118,7 +1189,7 @@ static int init_hyp_mode(void)
 
 		if (err) {
 			kvm_err("Cannot map host CPU state: %d\n", err);
-			goto out_free_context;
+			goto out_err;
 		}
 	}
 
@@ -1127,34 +1198,22 @@ static int init_hyp_mode(void)
 	 */
 	on_each_cpu(cpu_init_hyp_mode, NULL, 1);
 
-	/*
-	 * Init HYP view of VGIC
-	 */
-	err = kvm_vgic_hyp_init();
-	switch (err) {
-	case 0:
-		vgic_present = true;
-		break;
-	case -ENODEV:
-	case -ENXIO:
-		vgic_present = false;
-		break;
-	default:
-		goto out_free_context;
-	}
-
-	/*
-	 * Init HYP architected timer support
-	 */
-	err = kvm_timer_hyp_init();
-	if (err)
-		goto out_free_context;
-
 #ifndef CONFIG_HOTPLUG_CPU
 	free_boot_hyp_pgd();
 #endif
 
-	kvm_perf_init();
+	cpu_notifier_register_begin();
+
+	err = __register_cpu_notifier(&hyp_init_cpu_nb);
+
+	cpu_notifier_register_done();
+
+	if (err) {
+		kvm_err("Cannot register HYP init CPU notifier (%d)\n", err);
+		goto out_err;
+	}
+
+	hyp_cpu_pm_init();
 
 	/* set size of VMID supported by CPU */
 	kvm_vmid_bits = kvm_get_vmid_bits();
@@ -1163,14 +1222,9 @@ static int init_hyp_mode(void)
 	kvm_info("Hyp mode initialized successfully\n");
 
 	return 0;
-out_free_context:
-	free_percpu(kvm_host_cpu_state);
-out_free_mappings:
-	free_hyp_pgds();
-out_free_stack_pages:
-	for_each_possible_cpu(cpu)
-		free_page(per_cpu(kvm_arm_hyp_stack_page, cpu));
+
 out_err:
+	teardown_hyp_mode();
 	kvm_err("error initializing Hyp mode: %d\n", err);
 	return err;
 }
@@ -1214,26 +1268,27 @@ int kvm_arch_init(void *opaque)
 		}
 	}
 
-	cpu_notifier_register_begin();
-
-	err = init_hyp_mode();
+	err = init_common_resources();
 	if (err)
-		goto out_err;
+		return err;
 
-	err = __register_cpu_notifier(&hyp_init_cpu_nb);
-	if (err) {
-		kvm_err("Cannot register HYP init CPU notifier (%d)\n", err);
+	if (is_kernel_in_hyp_mode())
+		err = init_vhe_mode();
+	else
+		err = init_hyp_mode();
+	if (err)
 		goto out_err;
-	}
-
-	cpu_notifier_register_done();
 
-	hyp_cpu_pm_init();
+	err = init_subsystems();
+	if (err)
+		goto out_hyp;
 
-	kvm_coproc_table_init();
 	return 0;
+
+out_hyp:
+	teardown_hyp_mode();
 out_err:
-	cpu_notifier_register_done();
+	teardown_common_resources();
 	return err;
 }
 
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index aba61fd..920d0c3 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -28,6 +28,7 @@
 #include <asm/kvm_mmio.h>
 #include <asm/kvm_asm.h>
 #include <asm/kvm_emulate.h>
+#include <asm/virt.h>
 
 #include "trace.h"
 
@@ -598,6 +599,9 @@ int create_hyp_mappings(void *from, void *to)
 	unsigned long start = KERN_TO_HYP((unsigned long)from);
 	unsigned long end = KERN_TO_HYP((unsigned long)to);
 
+	if (is_kernel_in_hyp_mode())
+		return 0;
+
 	start = start & PAGE_MASK;
 	end = PAGE_ALIGN(end);
 
@@ -630,6 +634,9 @@ int create_hyp_io_mappings(void *from, void *to, phys_addr_t phys_addr)
 	unsigned long start = KERN_TO_HYP((unsigned long)from);
 	unsigned long end = KERN_TO_HYP((unsigned long)to);
 
+	if (is_kernel_in_hyp_mode())
+		return 0;
+
 	/* Check for a valid kernel IO mapping */
 	if (!is_vmalloc_addr(from) || !is_vmalloc_addr(to - 1))
 		return -EINVAL;
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 119+ messages in thread

* [PATCH v3 07/23] arm64: KVM: VHE: Patch out use of HVC
  2016-02-03 17:59 ` Marc Zyngier
@ 2016-02-03 18:00   ` Marc Zyngier
  0 siblings, 0 replies; 119+ messages in thread
From: Marc Zyngier @ 2016-02-03 18:00 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Christoffer Dall
  Cc: linux-arm-kernel, linux-kernel, kvm, kvmarm

With VHE, the host never issues an HVC instruction to get into the
KVM code, as we can simply branch there.

Use runtime code patching to simplify things a bit.
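
Conceptually, the result is equivalent to the following C sketch (the
real implementation below is assembly, and hvc_call() is a made-up
stand-in for the HVC #0 trap path):

	u64 kvm_call_hyp(void *hypfn, u64 a0, u64 a1, u64 a2)
	{
		u64 (*fn)(u64, u64, u64) = hypfn;

		if (cpus_have_cap(ARM64_HAS_VIRT_HOST_EXTN))
			return fn(a0, a1, a2);	/* direct branch at EL2 */

		return hvc_call(hypfn, a0, a1, a2);
	}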

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/kvm/hyp.S           |  7 +++++++
 arch/arm64/kvm/hyp/hyp-entry.S | 40 +++++++++++++++++++++++++++++++---------
 2 files changed, 38 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kvm/hyp.S b/arch/arm64/kvm/hyp.S
index 0ccdcbb..0689a74 100644
--- a/arch/arm64/kvm/hyp.S
+++ b/arch/arm64/kvm/hyp.S
@@ -17,7 +17,9 @@
 
 #include <linux/linkage.h>
 
+#include <asm/alternative.h>
 #include <asm/assembler.h>
+#include <asm/cpufeature.h>
 
 /*
  * u64 kvm_call_hyp(void *hypfn, ...);
@@ -38,6 +40,11 @@
  * arch/arm64/kernel/hyp_stub.S.
  */
 ENTRY(kvm_call_hyp)
+alternative_if_not ARM64_HAS_VIRT_HOST_EXTN	
 	hvc	#0
 	ret
+alternative_else
+	b	__vhe_hyp_call
+	nop
+alternative_endif
 ENDPROC(kvm_call_hyp)
diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
index 93e8d983..1bdeee7 100644
--- a/arch/arm64/kvm/hyp/hyp-entry.S
+++ b/arch/arm64/kvm/hyp/hyp-entry.S
@@ -38,6 +38,34 @@
 	ldp	x0, x1, [sp], #16
 .endm
 
+.macro do_el2_call
+	/*
+	 * Shuffle the parameters before calling the function
+	 * pointed to in x0. Assumes parameters in x[1,2,3].
+	 */
+	sub	sp, sp, #16
+	str	lr, [sp]
+	mov	lr, x0
+	mov	x0, x1
+	mov	x1, x2
+	mov	x2, x3
+	blr	lr
+	ldr	lr, [sp]
+	add	sp, sp, #16
+.endm
+
+ENTRY(__vhe_hyp_call)
+	do_el2_call
+	/*
+	 * We used to rely on having an exception return to get
+	 * an implicit isb. In the E2H case, we don't have it anymore.
+	 * Rather than changing all the leaf functions, just do it here
+	 * before returning to the rest of the kernel.
+	 */
+	isb
+	ret
+ENDPROC(__vhe_hyp_call)
+	
 el1_sync:				// Guest trapped into EL2
 	save_x0_to_x3
 
@@ -58,19 +86,13 @@ el1_sync:				// Guest trapped into EL2
 	mrs	x0, vbar_el2
 	b	2f
 
-1:	stp	lr, xzr, [sp, #-16]!
-
+1:
 	/*
-	 * Compute the function address in EL2, and shuffle the parameters.
+	 * Perform the EL2 call
 	 */
 	kern_hyp_va	x0
-	mov	lr, x0
-	mov	x0, x1
-	mov	x1, x2
-	mov	x2, x3
-	blr	lr
+	do_el2_call
 
-	ldp	lr, xzr, [sp], #16
 2:	eret
 
 el1_trap:
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 119+ messages in thread

* [PATCH v3 08/23] arm64: KVM: VHE: Patch out kern_hyp_va
  2016-02-03 17:59 ` Marc Zyngier
@ 2016-02-03 18:00   ` Marc Zyngier
  0 siblings, 0 replies; 119+ messages in thread
From: Marc Zyngier @ 2016-02-03 18:00 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Christoffer Dall
  Cc: linux-arm-kernel, linux-kernel, kvm, kvmarm

The kern_hyp_va macro is pretty meaningless with VHE, as there is
only one mapping - the kernel one.

In order to keep the code readable and efficient, use runtime
patching to replace the 'and' instruction used to compute the VA
with a 'nop'.
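
For reference, the non-VHE rule being patched out simply masks the top
bits of the kernel VA (a sketch; 39 stands in for VA_BITS):

	unsigned long kern_hyp_va_sketch(unsigned long v)
	{
		/* HYP VA = kernel VA & ((1 << VA_BITS) - 1); nop with VHE */
		return v & ((1UL << 39) - 1);
	}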

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/include/asm/kvm_mmu.h | 12 +++++++++++-
 arch/arm64/kvm/hyp/hyp.h         | 25 ++++++++++++++++++++++---
 2 files changed, 33 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 7364339..9a9318a 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -23,13 +23,16 @@
 #include <asm/cpufeature.h>
 
 /*
- * As we only have the TTBR0_EL2 register, we cannot express
+ * As ARMv8.0 only has the TTBR0_EL2 register, we cannot express
  * "negative" addresses. This makes it impossible to directly share
  * mappings with the kernel.
  *
  * Instead, give the HYP mode its own VA region at a fixed offset from
  * the kernel by just masking the top bits (which are all ones for a
  * kernel address).
+ *
+ * ARMv8.1 (using VHE) does have a TTBR1_EL2, and doesn't use these
+ * macros (the entire kernel runs at EL2).
  */
 #define HYP_PAGE_OFFSET_SHIFT	VA_BITS
 #define HYP_PAGE_OFFSET_MASK	((UL(1) << HYP_PAGE_OFFSET_SHIFT) - 1)
@@ -56,12 +59,19 @@
 
 #ifdef __ASSEMBLY__
 
+#include <asm/alternative.h>
+#include <asm/cpufeature.h>
+
 /*
  * Convert a kernel VA into a HYP VA.
  * reg: VA to be converted.
  */
 .macro kern_hyp_va	reg
+alternative_if_not ARM64_HAS_VIRT_HOST_EXTN	
 	and	\reg, \reg, #HYP_PAGE_OFFSET_MASK
+alternative_else
+	nop
+alternative_endif
 .endm
 
 #else
diff --git a/arch/arm64/kvm/hyp/hyp.h b/arch/arm64/kvm/hyp/hyp.h
index fb27517..fc502f3 100644
--- a/arch/arm64/kvm/hyp/hyp.h
+++ b/arch/arm64/kvm/hyp/hyp.h
@@ -25,9 +25,28 @@
 
 #define __hyp_text __section(.hyp.text) notrace
 
-#define kern_hyp_va(v) (typeof(v))((unsigned long)(v) & HYP_PAGE_OFFSET_MASK)
-#define hyp_kern_va(v) (typeof(v))((unsigned long)(v) - HYP_PAGE_OFFSET \
-						      + PAGE_OFFSET)
+static inline unsigned long __kern_hyp_va(unsigned long v)
+{
+	asm volatile(ALTERNATIVE("and %0, %0, %1",
+				 "nop",
+				 ARM64_HAS_VIRT_HOST_EXTN)
+		     : "+r" (v) : "i" (HYP_PAGE_OFFSET_MASK));
+	return v;
+}
+
+#define kern_hyp_va(v) (typeof(v))(__kern_hyp_va((unsigned long)(v)))
+
+static inline unsigned long __hyp_kern_va(unsigned long v)
+{
+	u64 offset = PAGE_OFFSET - HYP_PAGE_OFFSET;
+	asm volatile(ALTERNATIVE("add %0, %0, %1",
+				 "nop",
+				 ARM64_HAS_VIRT_HOST_EXTN)
+		     : "+r" (v) : "r" (offset));
+	return v;
+}
+
+#define hyp_kern_va(v) (typeof(v))(__hyp_kern_va((unsigned long)(v)))
 
 /**
  * hyp_alternate_select - Generates patchable code sequences that are
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 119+ messages in thread

* [PATCH v3 09/23] arm64: KVM: VHE: Introduce unified system register accessors
  2016-02-03 17:59 ` Marc Zyngier
@ 2016-02-03 18:00   ` Marc Zyngier
  0 siblings, 0 replies; 119+ messages in thread
From: Marc Zyngier @ 2016-02-03 18:00 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Christoffer Dall
  Cc: linux-arm-kernel, linux-kernel, kvm, kvmarm

VHE brings its own bag of new system registers, or rather system
register accessors, as it defines new ways to access both guest
and host system registers. For example, from the host:

- The host TCR_EL2 register is accessed using the TCR_EL1 accessor
- The guest TCR_EL1 register is accessed using the TCR_EL12 accessor

Obviously, this is confusing. A way to somehow reduce the complexity
of writing code for both ARMv8 and ARMv8.1 is to use a set of unified
accessors that will generate the right sysreg, depending on the mode
the CPU is running in. For example:

- read_sysreg_el1(tcr) will use TCR_EL1 on ARMv8, and TCR_EL12 on
  ARMv8.1 with VHE.
- read_sysreg_el2(tcr) will use TCR_EL2 on ARMv8, and TCR_EL1 on
  ARMv8.1 with VHE.

We end up with three sets of accessors ({read,write}_sysreg_el[012])
that can be directly used from C code. We take this opportunity to
also add the definition for the new VHE sysregs.
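
Typical usage from the world-switch code then looks like this (a
sketch; ctxt stands for a struct kvm_cpu_context pointer):

	/* same C source, patched to access TCR_EL1 or TCR_EL12 at boot */
	ctxt->sys_regs[TCR_EL1] = read_sysreg_el1(tcr);
	write_sysreg_el1(ctxt->sys_regs[TCR_EL1], tcr);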

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/kvm/hyp/hyp.h | 72 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 72 insertions(+)

diff --git a/arch/arm64/kvm/hyp/hyp.h b/arch/arm64/kvm/hyp/hyp.h
index fc502f3..744c919 100644
--- a/arch/arm64/kvm/hyp/hyp.h
+++ b/arch/arm64/kvm/hyp/hyp.h
@@ -48,6 +48,78 @@ static inline unsigned long __hyp_kern_va(unsigned long v)
 
 #define hyp_kern_va(v) (typeof(v))(__hyp_kern_va((unsigned long)(v)))
 
+#define read_sysreg_elx(r,nvh,vh)					\
+	({								\
+		u64 reg;						\
+		asm volatile(ALTERNATIVE("mrs %0, " __stringify(r##nvh),\
+					 "mrs_s %0, " __stringify(r##vh),\
+					 ARM64_HAS_VIRT_HOST_EXTN)	\
+			     : "=r" (reg));				\
+		reg;							\
+	})
+
+#define write_sysreg_elx(v,r,nvh,vh)					\
+	do {								\
+		u64 __val = (u64)(v);					\
+		asm volatile(ALTERNATIVE("msr " __stringify(r##nvh) ", %x0",\
+					 "msr_s " __stringify(r##vh) ", %x0",\
+					 ARM64_HAS_VIRT_HOST_EXTN)	\
+					 : : "rZ" (__val));		\
+	} while (0)
+
+/*
+ * Unified accessors for registers that have a different encoding
+ * between VHE and non-VHE. They must be specified without their "ELx"
+ * encoding.
+ */
+#define read_sysreg_el2(r)						\
+	({								\
+		u64 reg;						\
+		asm volatile(ALTERNATIVE("mrs %0, " __stringify(r##_EL2),\
+					 "mrs %0, " __stringify(r##_EL1),\
+					 ARM64_HAS_VIRT_HOST_EXTN)	\
+			     : "=r" (reg));				\
+		reg;							\
+	})
+
+#define write_sysreg_el2(v,r)						\
+	do {								\
+		u64 __val = (u64)(v);					\
+		asm volatile(ALTERNATIVE("msr " __stringify(r##_EL2) ", %x0",\
+					 "msr " __stringify(r##_EL1) ", %x0",\
+					 ARM64_HAS_VIRT_HOST_EXTN)	\
+					 : : "rZ" (__val));		\
+	} while (0)
+
+#define read_sysreg_el0(r)	read_sysreg_elx(r, _EL0, _EL02)
+#define write_sysreg_el0(v,r)	write_sysreg_elx(v, r, _EL0, _EL02)
+#define read_sysreg_el1(r)	read_sysreg_elx(r, _EL1, _EL12)
+#define write_sysreg_el1(v,r)	write_sysreg_elx(v, r, _EL1, _EL12)
+
+/* The VHE specific system registers and their encoding */
+#define sctlr_EL12              sys_reg(3, 5, 1, 0, 0)
+#define cpacr_EL12              sys_reg(3, 5, 1, 0, 2)
+#define ttbr0_EL12              sys_reg(3, 5, 2, 0, 0)
+#define ttbr1_EL12              sys_reg(3, 5, 2, 0, 1)
+#define tcr_EL12                sys_reg(3, 5, 2, 0, 2)
+#define afsr0_EL12              sys_reg(3, 5, 5, 1, 0)
+#define afsr1_EL12              sys_reg(3, 5, 5, 1, 1)
+#define esr_EL12                sys_reg(3, 5, 5, 2, 0)
+#define far_EL12                sys_reg(3, 5, 6, 0, 0)
+#define mair_EL12               sys_reg(3, 5, 10, 2, 0)
+#define amair_EL12              sys_reg(3, 5, 10, 3, 0)
+#define vbar_EL12               sys_reg(3, 5, 12, 0, 0)
+#define contextidr_EL12         sys_reg(3, 5, 13, 0, 1)
+#define cntkctl_EL12            sys_reg(3, 5, 14, 1, 0)
+#define cntp_tval_EL02          sys_reg(3, 5, 14, 2, 0)
+#define cntp_ctl_EL02           sys_reg(3, 5, 14, 2, 1)
+#define cntp_cval_EL02          sys_reg(3, 5, 14, 2, 2)
+#define cntv_tval_EL02          sys_reg(3, 5, 14, 3, 0)
+#define cntv_ctl_EL02           sys_reg(3, 5, 14, 3, 1)
+#define cntv_cval_EL02          sys_reg(3, 5, 14, 3, 2)
+#define spsr_EL12               sys_reg(3, 5, 4, 0, 0)
+#define elr_EL12                sys_reg(3, 5, 4, 0, 1)
+
 /**
  * hyp_alternate_select - Generates patchable code sequences that are
  * used to switch between two implementations of a function, depending
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 119+ messages in thread

* [PATCH v3 09/23] arm64: KVM: VHE: Introduce unified system register accessors
@ 2016-02-03 18:00   ` Marc Zyngier
  0 siblings, 0 replies; 119+ messages in thread
From: Marc Zyngier @ 2016-02-03 18:00 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Christoffer Dall
  Cc: kvmarm, linux-kernel, linux-arm-kernel, kvm

VHE brings its own bag of new system registers, or rather system
register accessors, as it define new ways to access both guest
and host system registers. For example, from the host:

- The host TCR_EL2 register is accessed using the TCR_EL1 accessor
- The guest TCR_EL1 register is accessed using the TCR_EL12 accessor

Obviously, this is confusing. A way to somehow reduce the complexity
of writing code for both ARMv8 and ARMv8.1 is to use a set of unified
accessors that will generate the right sysreg, depending on the mode
the CPU is running in. For example:

- read_sysreg_el1(tcr) will use TCR_EL1 on ARMv8, and TCR_EL12 on
  ARMv8.1 with VHE.
- read_sysreg_el2(tcr) will use TCR_EL2 on ARMv8, and TCR_EL1 on
  ARMv8.1 with VHE.

We end up with three sets of accessors ({read,write}_sysreg_el[012])
that can be directly used from C code. We take this opportunity to
also add the definition for the new VHE sysregs.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/kvm/hyp/hyp.h | 72 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 72 insertions(+)

diff --git a/arch/arm64/kvm/hyp/hyp.h b/arch/arm64/kvm/hyp/hyp.h
index fc502f3..744c919 100644
--- a/arch/arm64/kvm/hyp/hyp.h
+++ b/arch/arm64/kvm/hyp/hyp.h
@@ -48,6 +48,78 @@ static inline unsigned long __hyp_kern_va(unsigned long v)
 
 #define hyp_kern_va(v) (typeof(v))(__hyp_kern_va((unsigned long)(v)))
 
+#define read_sysreg_elx(r,nvh,vh)					\
+	({								\
+		u64 reg;						\
+		asm volatile(ALTERNATIVE("mrs %0, " __stringify(r##nvh),\
+					 "mrs_s %0, " __stringify(r##vh),\
+					 ARM64_HAS_VIRT_HOST_EXTN)	\
+			     : "=r" (reg));				\
+		reg;							\
+	})
+
+#define write_sysreg_elx(v,r,nvh,vh)					\
+	do {								\
+		u64 __val = (u64)(v);					\
+		asm volatile(ALTERNATIVE("msr " __stringify(r##nvh) ", %x0",\
+					 "msr_s " __stringify(r##vh) ", %x0",\
+					 ARM64_HAS_VIRT_HOST_EXTN)	\
+					 : : "rZ" (__val));		\
+	} while (0)
+
+/*
+ * Unified accessors for registers that have a different encoding
+ * between VHE and non-VHE. They must be specified without their "ELx"
+ * encoding.
+ */
+#define read_sysreg_el2(r)						\
+	({								\
+		u64 reg;						\
+		asm volatile(ALTERNATIVE("mrs %0, " __stringify(r##_EL2),\
+					 "mrs %0, " __stringify(r##_EL1),\
+					 ARM64_HAS_VIRT_HOST_EXTN)	\
+			     : "=r" (reg));				\
+		reg;							\
+	})
+
+#define write_sysreg_el2(v,r)						\
+	do {								\
+		u64 __val = (u64)(v);					\
+		asm volatile(ALTERNATIVE("msr " __stringify(r##_EL2) ", %x0",\
+					 "msr " __stringify(r##_EL1) ", %x0",\
+					 ARM64_HAS_VIRT_HOST_EXTN)	\
+					 : : "rZ" (__val));		\
+	} while (0)
+
+#define read_sysreg_el0(r)	read_sysreg_elx(r, _EL0, _EL02)
+#define write_sysreg_el0(v,r)	write_sysreg_elx(v, r, _EL0, _EL02)
+#define read_sysreg_el1(r)	read_sysreg_elx(r, _EL1, _EL12)
+#define write_sysreg_el1(v,r)	write_sysreg_elx(v, r, _EL1, _EL12)
+
+/* The VHE specific system registers and their encoding */
+#define sctlr_EL12              sys_reg(3, 5, 1, 0, 0)
+#define cpacr_EL12              sys_reg(3, 5, 1, 0, 2)
+#define ttbr0_EL12              sys_reg(3, 5, 2, 0, 0)
+#define ttbr1_EL12              sys_reg(3, 5, 2, 0, 1)
+#define tcr_EL12                sys_reg(3, 5, 2, 0, 2)
+#define afsr0_EL12              sys_reg(3, 5, 5, 1, 0)
+#define afsr1_EL12              sys_reg(3, 5, 5, 1, 1)
+#define esr_EL12                sys_reg(3, 5, 5, 2, 0)
+#define far_EL12                sys_reg(3, 5, 6, 0, 0)
+#define mair_EL12               sys_reg(3, 5, 10, 2, 0)
+#define amair_EL12              sys_reg(3, 5, 10, 3, 0)
+#define vbar_EL12               sys_reg(3, 5, 12, 0, 0)
+#define contextidr_EL12         sys_reg(3, 5, 13, 0, 1)
+#define cntkctl_EL12            sys_reg(3, 5, 14, 1, 0)
+#define cntp_tval_EL02          sys_reg(3, 5, 14, 2, 0)
+#define cntp_ctl_EL02           sys_reg(3, 5, 14, 2, 1)
+#define cntp_cval_EL02          sys_reg(3, 5, 14, 2, 2)
+#define cntv_tval_EL02          sys_reg(3, 5, 14, 3, 0)
+#define cntv_ctl_EL02           sys_reg(3, 5, 14, 3, 1)
+#define cntv_cval_EL02          sys_reg(3, 5, 14, 3, 2)
+#define spsr_EL12               sys_reg(3, 5, 4, 0, 0)
+#define elr_EL12                sys_reg(3, 5, 4, 0, 1)
+
 /**
  * hyp_alternate_select - Generates patchable code sequences that are
  * used to switch between two implementations of a function, depending
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 119+ messages in thread

* [PATCH v3 09/23] arm64: KVM: VHE: Introduce unified system register accessors
@ 2016-02-03 18:00   ` Marc Zyngier
  0 siblings, 0 replies; 119+ messages in thread
From: Marc Zyngier @ 2016-02-03 18:00 UTC (permalink / raw)
  To: linux-arm-kernel

VHE brings its own bag of new system registers, or rather system
register accessors, as it define new ways to access both guest
and host system registers. For example, from the host:

- The host TCR_EL2 register is accessed using the TCR_EL1 accessor
- The guest TCR_EL1 register is accessed using the TCR_EL12 accessor

Obviously, this is confusing. A way to somehow reduce the complexity
of writing code for both ARMv8 and ARMv8.1 is to use a set of unified
accessors that will generate the right sysreg, depending on the mode
the CPU is running in. For example:

- read_sysreg_el1(tcr) will use TCR_EL1 on ARMv8, and TCR_EL12 on
  ARMv8.1 with VHE.
- read_sysreg_el2(tcr) will use TCR_EL2 on ARMv8, and TCR_EL1 on
  ARMv8.1 with VHE.

We end up with three sets of accessors ({read,write}_sysreg_el[012])
that can be directly used from C code. We take this opportunity to
also add the definition for the new VHE sysregs.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/kvm/hyp/hyp.h | 72 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 72 insertions(+)

diff --git a/arch/arm64/kvm/hyp/hyp.h b/arch/arm64/kvm/hyp/hyp.h
index fc502f3..744c919 100644
--- a/arch/arm64/kvm/hyp/hyp.h
+++ b/arch/arm64/kvm/hyp/hyp.h
@@ -48,6 +48,78 @@ static inline unsigned long __hyp_kern_va(unsigned long v)
 
 #define hyp_kern_va(v) (typeof(v))(__hyp_kern_va((unsigned long)(v)))
 
+#define read_sysreg_elx(r,nvh,vh)					\
+	({								\
+		u64 reg;						\
+		asm volatile(ALTERNATIVE("mrs %0, " __stringify(r##nvh),\
+					 "mrs_s %0, " __stringify(r##vh),\
+					 ARM64_HAS_VIRT_HOST_EXTN)	\
+			     : "=r" (reg));				\
+		reg;							\
+	})
+
+#define write_sysreg_elx(v,r,nvh,vh)					\
+	do {								\
+		u64 __val = (u64)(v);					\
+		asm volatile(ALTERNATIVE("msr " __stringify(r##nvh) ", %x0",\
+					 "msr_s " __stringify(r##vh) ", %x0",\
+					 ARM64_HAS_VIRT_HOST_EXTN)	\
+					 : : "rZ" (__val));		\
+	} while (0)
+
+/*
+ * Unified accessors for registers that have a different encoding
+ * between VHE and non-VHE. They must be specified without their "ELx"
+ * encoding.
+ */
+#define read_sysreg_el2(r)						\
+	({								\
+		u64 reg;						\
+		asm volatile(ALTERNATIVE("mrs %0, " __stringify(r##_EL2),\
+					 "mrs %0, " __stringify(r##_EL1),\
+					 ARM64_HAS_VIRT_HOST_EXTN)	\
+			     : "=r" (reg));				\
+		reg;							\
+	})
+
+#define write_sysreg_el2(v,r)						\
+	do {								\
+		u64 __val = (u64)(v);					\
+		asm volatile(ALTERNATIVE("msr " __stringify(r##_EL2) ", %x0",\
+					 "msr " __stringify(r##_EL1) ", %x0",\
+					 ARM64_HAS_VIRT_HOST_EXTN)	\
+					 : : "rZ" (__val));		\
+	} while (0)
+
+#define read_sysreg_el0(r)	read_sysreg_elx(r, _EL0, _EL02)
+#define write_sysreg_el0(v,r)	write_sysreg_elx(v, r, _EL0, _EL02)
+#define read_sysreg_el1(r)	read_sysreg_elx(r, _EL1, _EL12)
+#define write_sysreg_el1(v,r)	write_sysreg_elx(v, r, _EL1, _EL12)
+
+/* The VHE specific system registers and their encoding */
+#define sctlr_EL12              sys_reg(3, 5, 1, 0, 0)
+#define cpacr_EL12              sys_reg(3, 5, 1, 0, 2)
+#define ttbr0_EL12              sys_reg(3, 5, 2, 0, 0)
+#define ttbr1_EL12              sys_reg(3, 5, 2, 0, 1)
+#define tcr_EL12                sys_reg(3, 5, 2, 0, 2)
+#define afsr0_EL12              sys_reg(3, 5, 5, 1, 0)
+#define afsr1_EL12              sys_reg(3, 5, 5, 1, 1)
+#define esr_EL12                sys_reg(3, 5, 5, 2, 0)
+#define far_EL12                sys_reg(3, 5, 6, 0, 0)
+#define mair_EL12               sys_reg(3, 5, 10, 2, 0)
+#define amair_EL12              sys_reg(3, 5, 10, 3, 0)
+#define vbar_EL12               sys_reg(3, 5, 12, 0, 0)
+#define contextidr_EL12         sys_reg(3, 5, 13, 0, 1)
+#define cntkctl_EL12            sys_reg(3, 5, 14, 1, 0)
+#define cntp_tval_EL02          sys_reg(3, 5, 14, 2, 0)
+#define cntp_ctl_EL02           sys_reg(3, 5, 14, 2, 1)
+#define cntp_cval_EL02          sys_reg(3, 5, 14, 2, 2)
+#define cntv_tval_EL02          sys_reg(3, 5, 14, 3, 0)
+#define cntv_ctl_EL02           sys_reg(3, 5, 14, 3, 1)
+#define cntv_cval_EL02          sys_reg(3, 5, 14, 3, 2)
+#define spsr_EL12               sys_reg(3, 5, 4, 0, 0)
+#define elr_EL12                sys_reg(3, 5, 4, 0, 1)
+
 /**
  * hyp_alternate_select - Generates patchable code sequences that are
  * used to switch between two implementations of a function, depending
-- 
2.1.4

* [PATCH v3 10/23] arm64: KVM: VHE: Differenciate host/guest sysreg save/restore
@ 2016-02-03 18:00   ` Marc Zyngier
  0 siblings, 0 replies; 119+ messages in thread
From: Marc Zyngier @ 2016-02-03 18:00 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Christoffer Dall
  Cc: linux-arm-kernel, linux-kernel, kvm, kvmarm

With ARMv8, host and guest share the same system register file,
making the save/restore procedure completely symmetrical.
With VHE, host and guest now have different requirements, as they
use different sysregs.

In order to prepare for this, add split sysreg save/restore functions
for both host and guest. No functional changes yet.

Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/kvm/hyp/hyp.h       |  6 ++++--
 arch/arm64/kvm/hyp/switch.c    | 10 +++++-----
 arch/arm64/kvm/hyp/sysreg-sr.c | 24 ++++++++++++++++++++++--
 3 files changed, 31 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kvm/hyp/hyp.h b/arch/arm64/kvm/hyp/hyp.h
index 744c919..5dfa883 100644
--- a/arch/arm64/kvm/hyp/hyp.h
+++ b/arch/arm64/kvm/hyp/hyp.h
@@ -153,8 +153,10 @@ void __vgic_v3_restore_state(struct kvm_vcpu *vcpu);
 void __timer_save_state(struct kvm_vcpu *vcpu);
 void __timer_restore_state(struct kvm_vcpu *vcpu);
 
-void __sysreg_save_state(struct kvm_cpu_context *ctxt);
-void __sysreg_restore_state(struct kvm_cpu_context *ctxt);
+void __sysreg_save_host_state(struct kvm_cpu_context *ctxt);
+void __sysreg_restore_host_state(struct kvm_cpu_context *ctxt);
+void __sysreg_save_guest_state(struct kvm_cpu_context *ctxt);
+void __sysreg_restore_guest_state(struct kvm_cpu_context *ctxt);
 void __sysreg32_save_state(struct kvm_vcpu *vcpu);
 void __sysreg32_restore_state(struct kvm_vcpu *vcpu);
 
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index ca8f5a5..9071dee 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -98,7 +98,7 @@ static int __hyp_text __guest_run(struct kvm_vcpu *vcpu)
 	host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
 	guest_ctxt = &vcpu->arch.ctxt;
 
-	__sysreg_save_state(host_ctxt);
+	__sysreg_save_host_state(host_ctxt);
 	__debug_cond_save_host_state(vcpu);
 
 	__activate_traps(vcpu);
@@ -112,7 +112,7 @@ static int __hyp_text __guest_run(struct kvm_vcpu *vcpu)
 	 * to Cortex-A57 erratum #852523.
 	 */
 	__sysreg32_restore_state(vcpu);
-	__sysreg_restore_state(guest_ctxt);
+	__sysreg_restore_guest_state(guest_ctxt);
 	__debug_restore_state(vcpu, kern_hyp_va(vcpu->arch.debug_ptr), guest_ctxt);
 
 	/* Jump in the fire! */
@@ -121,7 +121,7 @@ static int __hyp_text __guest_run(struct kvm_vcpu *vcpu)
 
 	fp_enabled = __fpsimd_enabled();
 
-	__sysreg_save_state(guest_ctxt);
+	__sysreg_save_guest_state(guest_ctxt);
 	__sysreg32_save_state(vcpu);
 	__timer_save_state(vcpu);
 	__vgic_save_state(vcpu);
@@ -129,7 +129,7 @@ static int __hyp_text __guest_run(struct kvm_vcpu *vcpu)
 	__deactivate_traps(vcpu);
 	__deactivate_vm(vcpu);
 
-	__sysreg_restore_state(host_ctxt);
+	__sysreg_restore_host_state(host_ctxt);
 
 	if (fp_enabled) {
 		__fpsimd_save_state(&guest_ctxt->gp_regs.fp_regs);
@@ -161,7 +161,7 @@ void __hyp_text __noreturn __hyp_panic(void)
 		host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
 		__deactivate_traps(vcpu);
 		__deactivate_vm(vcpu);
-		__sysreg_restore_state(host_ctxt);
+		__sysreg_restore_host_state(host_ctxt);
 	}
 
 	/* Call panic for real */
diff --git a/arch/arm64/kvm/hyp/sysreg-sr.c b/arch/arm64/kvm/hyp/sysreg-sr.c
index 42563098..bd5b543 100644
--- a/arch/arm64/kvm/hyp/sysreg-sr.c
+++ b/arch/arm64/kvm/hyp/sysreg-sr.c
@@ -24,7 +24,7 @@
 #include "hyp.h"
 
 /* ctxt is already in the HYP VA space */
-void __hyp_text __sysreg_save_state(struct kvm_cpu_context *ctxt)
+static void __hyp_text __sysreg_save_state(struct kvm_cpu_context *ctxt)
 {
 	ctxt->sys_regs[MPIDR_EL1]	= read_sysreg(vmpidr_el2);
 	ctxt->sys_regs[CSSELR_EL1]	= read_sysreg(csselr_el1);
@@ -57,7 +57,17 @@ void __hyp_text __sysreg_save_state(struct kvm_cpu_context *ctxt)
 	ctxt->gp_regs.spsr[KVM_SPSR_EL1]= read_sysreg(spsr_el1);
 }
 
-void __hyp_text __sysreg_restore_state(struct kvm_cpu_context *ctxt)
+void __hyp_text __sysreg_save_host_state(struct kvm_cpu_context *ctxt)
+{
+	__sysreg_save_state(ctxt);
+}
+
+void __hyp_text __sysreg_save_guest_state(struct kvm_cpu_context *ctxt)
+{
+	__sysreg_save_state(ctxt);
+}
+
+static void __hyp_text __sysreg_restore_state(struct kvm_cpu_context *ctxt)
 {
 	write_sysreg(ctxt->sys_regs[MPIDR_EL1],	  vmpidr_el2);
 	write_sysreg(ctxt->sys_regs[CSSELR_EL1],  csselr_el1);
@@ -90,6 +100,16 @@ void __hyp_text __sysreg_restore_state(struct kvm_cpu_context *ctxt)
 	write_sysreg(ctxt->gp_regs.spsr[KVM_SPSR_EL1], spsr_el1);
 }
 
+void __hyp_text __sysreg_restore_host_state(struct kvm_cpu_context *ctxt)
+{
+	__sysreg_restore_state(ctxt);
+}
+
+void __hyp_text __sysreg_restore_guest_state(struct kvm_cpu_context *ctxt)
+{
+	__sysreg_restore_state(ctxt);
+}
+
 void __hyp_text __sysreg32_save_state(struct kvm_vcpu *vcpu)
 {
 	u64 *spsr, *sysreg;
-- 
2.1.4

* [PATCH v3 11/23] arm64: KVM: VHE: Split save/restore of registers shared between guest and host
@ 2016-02-03 18:00   ` Marc Zyngier
  0 siblings, 0 replies; 119+ messages in thread
From: Marc Zyngier @ 2016-02-03 18:00 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Christoffer Dall
  Cc: linux-arm-kernel, linux-kernel, kvm, kvmarm

A handful of system registers are still shared between host and guest,
even while using VHE (tpidr*_el[01] and actlr_el1).

Also, some of the vcpu state (sp_el0, PC and PSTATE) must be
saved/restored on entry/exit, as they are used on the host as well.

In order to facilitate the introduction of a VHE-specific sysreg
save/restore, move the accesses to these registers to their own
save/restore functions.

No functional change.

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/kvm/hyp/sysreg-sr.c | 48 +++++++++++++++++++++++++++++-------------
 1 file changed, 33 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/kvm/hyp/sysreg-sr.c b/arch/arm64/kvm/hyp/sysreg-sr.c
index bd5b543..61bad17 100644
--- a/arch/arm64/kvm/hyp/sysreg-sr.c
+++ b/arch/arm64/kvm/hyp/sysreg-sr.c
@@ -23,13 +23,29 @@
 
 #include "hyp.h"
 
-/* ctxt is already in the HYP VA space */
+/*
+ * Non-VHE: Both host and guest must save everything.
+ *
+ * VHE: Host must save tpidr*_el[01], actlr_el1, sp0, pc, pstate, and
+ * guest must save everything.
+ */
+
+static void __hyp_text __sysreg_save_common_state(struct kvm_cpu_context *ctxt)
+{
+	ctxt->sys_regs[ACTLR_EL1]	= read_sysreg(actlr_el1);
+	ctxt->sys_regs[TPIDR_EL0]	= read_sysreg(tpidr_el0);
+	ctxt->sys_regs[TPIDRRO_EL0]	= read_sysreg(tpidrro_el0);
+	ctxt->sys_regs[TPIDR_EL1]	= read_sysreg(tpidr_el1);
+	ctxt->gp_regs.regs.sp		= read_sysreg(sp_el0);
+	ctxt->gp_regs.regs.pc		= read_sysreg(elr_el2);
+	ctxt->gp_regs.regs.pstate	= read_sysreg(spsr_el2);
+}
+
 static void __hyp_text __sysreg_save_state(struct kvm_cpu_context *ctxt)
 {
 	ctxt->sys_regs[MPIDR_EL1]	= read_sysreg(vmpidr_el2);
 	ctxt->sys_regs[CSSELR_EL1]	= read_sysreg(csselr_el1);
 	ctxt->sys_regs[SCTLR_EL1]	= read_sysreg(sctlr_el1);
-	ctxt->sys_regs[ACTLR_EL1]	= read_sysreg(actlr_el1);
 	ctxt->sys_regs[CPACR_EL1]	= read_sysreg(cpacr_el1);
 	ctxt->sys_regs[TTBR0_EL1]	= read_sysreg(ttbr0_el1);
 	ctxt->sys_regs[TTBR1_EL1]	= read_sysreg(ttbr1_el1);
@@ -41,17 +57,11 @@ static void __hyp_text __sysreg_save_state(struct kvm_cpu_context *ctxt)
 	ctxt->sys_regs[MAIR_EL1]	= read_sysreg(mair_el1);
 	ctxt->sys_regs[VBAR_EL1]	= read_sysreg(vbar_el1);
 	ctxt->sys_regs[CONTEXTIDR_EL1]	= read_sysreg(contextidr_el1);
-	ctxt->sys_regs[TPIDR_EL0]	= read_sysreg(tpidr_el0);
-	ctxt->sys_regs[TPIDRRO_EL0]	= read_sysreg(tpidrro_el0);
-	ctxt->sys_regs[TPIDR_EL1]	= read_sysreg(tpidr_el1);
 	ctxt->sys_regs[AMAIR_EL1]	= read_sysreg(amair_el1);
 	ctxt->sys_regs[CNTKCTL_EL1]	= read_sysreg(cntkctl_el1);
 	ctxt->sys_regs[PAR_EL1]		= read_sysreg(par_el1);
 	ctxt->sys_regs[MDSCR_EL1]	= read_sysreg(mdscr_el1);
 
-	ctxt->gp_regs.regs.sp		= read_sysreg(sp_el0);
-	ctxt->gp_regs.regs.pc		= read_sysreg(elr_el2);
-	ctxt->gp_regs.regs.pstate	= read_sysreg(spsr_el2);
 	ctxt->gp_regs.sp_el1		= read_sysreg(sp_el1);
 	ctxt->gp_regs.elr_el1		= read_sysreg(elr_el1);
 	ctxt->gp_regs.spsr[KVM_SPSR_EL1]= read_sysreg(spsr_el1);
@@ -60,11 +70,24 @@ static void __hyp_text __sysreg_save_state(struct kvm_cpu_context *ctxt)
 void __hyp_text __sysreg_save_host_state(struct kvm_cpu_context *ctxt)
 {
 	__sysreg_save_state(ctxt);
+	__sysreg_save_common_state(ctxt);
 }
 
 void __hyp_text __sysreg_save_guest_state(struct kvm_cpu_context *ctxt)
 {
 	__sysreg_save_state(ctxt);
+	__sysreg_save_common_state(ctxt);
+}
+
+static void __hyp_text __sysreg_restore_common_state(struct kvm_cpu_context *ctxt)
+{
+	write_sysreg(ctxt->sys_regs[ACTLR_EL1],	  actlr_el1);
+	write_sysreg(ctxt->sys_regs[TPIDR_EL0],	  tpidr_el0);
+	write_sysreg(ctxt->sys_regs[TPIDRRO_EL0], tpidrro_el0);
+	write_sysreg(ctxt->sys_regs[TPIDR_EL1],	  tpidr_el1);
+	write_sysreg(ctxt->gp_regs.regs.sp,	  sp_el0);
+	write_sysreg(ctxt->gp_regs.regs.pc,	  elr_el2);
+	write_sysreg(ctxt->gp_regs.regs.pstate,	  spsr_el2);
 }
 
 static void __hyp_text __sysreg_restore_state(struct kvm_cpu_context *ctxt)
@@ -72,7 +95,6 @@ static void __hyp_text __sysreg_restore_state(struct kvm_cpu_context *ctxt)
 	write_sysreg(ctxt->sys_regs[MPIDR_EL1],	  vmpidr_el2);
 	write_sysreg(ctxt->sys_regs[CSSELR_EL1],  csselr_el1);
 	write_sysreg(ctxt->sys_regs[SCTLR_EL1],	  sctlr_el1);
-	write_sysreg(ctxt->sys_regs[ACTLR_EL1],	  actlr_el1);
 	write_sysreg(ctxt->sys_regs[CPACR_EL1],	  cpacr_el1);
 	write_sysreg(ctxt->sys_regs[TTBR0_EL1],	  ttbr0_el1);
 	write_sysreg(ctxt->sys_regs[TTBR1_EL1],	  ttbr1_el1);
@@ -84,17 +106,11 @@ static void __hyp_text __sysreg_restore_state(struct kvm_cpu_context *ctxt)
 	write_sysreg(ctxt->sys_regs[MAIR_EL1],	  mair_el1);
 	write_sysreg(ctxt->sys_regs[VBAR_EL1],	  vbar_el1);
 	write_sysreg(ctxt->sys_regs[CONTEXTIDR_EL1], contextidr_el1);
-	write_sysreg(ctxt->sys_regs[TPIDR_EL0],	  tpidr_el0);
-	write_sysreg(ctxt->sys_regs[TPIDRRO_EL0], tpidrro_el0);
-	write_sysreg(ctxt->sys_regs[TPIDR_EL1],	  tpidr_el1);
 	write_sysreg(ctxt->sys_regs[AMAIR_EL1],	  amair_el1);
 	write_sysreg(ctxt->sys_regs[CNTKCTL_EL1], cntkctl_el1);
 	write_sysreg(ctxt->sys_regs[PAR_EL1],	  par_el1);
 	write_sysreg(ctxt->sys_regs[MDSCR_EL1],	  mdscr_el1);
 
-	write_sysreg(ctxt->gp_regs.regs.sp,	sp_el0);
-	write_sysreg(ctxt->gp_regs.regs.pc,	elr_el2);
-	write_sysreg(ctxt->gp_regs.regs.pstate,	spsr_el2);
 	write_sysreg(ctxt->gp_regs.sp_el1,	sp_el1);
 	write_sysreg(ctxt->gp_regs.elr_el1,	elr_el1);
 	write_sysreg(ctxt->gp_regs.spsr[KVM_SPSR_EL1], spsr_el1);
@@ -103,11 +119,13 @@ static void __hyp_text __sysreg_restore_state(struct kvm_cpu_context *ctxt)
 void __hyp_text __sysreg_restore_host_state(struct kvm_cpu_context *ctxt)
 {
 	__sysreg_restore_state(ctxt);
+	__sysreg_restore_common_state(ctxt);
 }
 
 void __hyp_text __sysreg_restore_guest_state(struct kvm_cpu_context *ctxt)
 {
 	__sysreg_restore_state(ctxt);
+	__sysreg_restore_common_state(ctxt);
 }
 
 void __hyp_text __sysreg32_save_state(struct kvm_vcpu *vcpu)
-- 
2.1.4

* [PATCH v3 12/23] arm64: KVM: VHE: Use unified system register accessors
@ 2016-02-03 18:00   ` Marc Zyngier
  0 siblings, 0 replies; 119+ messages in thread
From: Marc Zyngier @ 2016-02-03 18:00 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Christoffer Dall
  Cc: linux-arm-kernel, linux-kernel, kvm, kvmarm

Use the recently introduced unified system register accessors for
those sysregs that behave differently depending on whether VHE
is in use.

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/kvm/hyp/sysreg-sr.c | 84 +++++++++++++++++++++---------------------
 1 file changed, 42 insertions(+), 42 deletions(-)

diff --git a/arch/arm64/kvm/hyp/sysreg-sr.c b/arch/arm64/kvm/hyp/sysreg-sr.c
index 61bad17..7d7d757 100644
--- a/arch/arm64/kvm/hyp/sysreg-sr.c
+++ b/arch/arm64/kvm/hyp/sysreg-sr.c
@@ -37,34 +37,34 @@ static void __hyp_text __sysreg_save_common_state(struct kvm_cpu_context *ctxt)
 	ctxt->sys_regs[TPIDRRO_EL0]	= read_sysreg(tpidrro_el0);
 	ctxt->sys_regs[TPIDR_EL1]	= read_sysreg(tpidr_el1);
 	ctxt->gp_regs.regs.sp		= read_sysreg(sp_el0);
-	ctxt->gp_regs.regs.pc		= read_sysreg(elr_el2);
-	ctxt->gp_regs.regs.pstate	= read_sysreg(spsr_el2);
+	ctxt->gp_regs.regs.pc		= read_sysreg_el2(elr);
+	ctxt->gp_regs.regs.pstate	= read_sysreg_el2(spsr);
 }
 
 static void __hyp_text __sysreg_save_state(struct kvm_cpu_context *ctxt)
 {
 	ctxt->sys_regs[MPIDR_EL1]	= read_sysreg(vmpidr_el2);
 	ctxt->sys_regs[CSSELR_EL1]	= read_sysreg(csselr_el1);
-	ctxt->sys_regs[SCTLR_EL1]	= read_sysreg(sctlr_el1);
-	ctxt->sys_regs[CPACR_EL1]	= read_sysreg(cpacr_el1);
-	ctxt->sys_regs[TTBR0_EL1]	= read_sysreg(ttbr0_el1);
-	ctxt->sys_regs[TTBR1_EL1]	= read_sysreg(ttbr1_el1);
-	ctxt->sys_regs[TCR_EL1]		= read_sysreg(tcr_el1);
-	ctxt->sys_regs[ESR_EL1]		= read_sysreg(esr_el1);
-	ctxt->sys_regs[AFSR0_EL1]	= read_sysreg(afsr0_el1);
-	ctxt->sys_regs[AFSR1_EL1]	= read_sysreg(afsr1_el1);
-	ctxt->sys_regs[FAR_EL1]		= read_sysreg(far_el1);
-	ctxt->sys_regs[MAIR_EL1]	= read_sysreg(mair_el1);
-	ctxt->sys_regs[VBAR_EL1]	= read_sysreg(vbar_el1);
-	ctxt->sys_regs[CONTEXTIDR_EL1]	= read_sysreg(contextidr_el1);
-	ctxt->sys_regs[AMAIR_EL1]	= read_sysreg(amair_el1);
-	ctxt->sys_regs[CNTKCTL_EL1]	= read_sysreg(cntkctl_el1);
+	ctxt->sys_regs[SCTLR_EL1]	= read_sysreg_el1(sctlr);
+	ctxt->sys_regs[CPACR_EL1]	= read_sysreg_el1(cpacr);
+	ctxt->sys_regs[TTBR0_EL1]	= read_sysreg_el1(ttbr0);
+	ctxt->sys_regs[TTBR1_EL1]	= read_sysreg_el1(ttbr1);
+	ctxt->sys_regs[TCR_EL1]		= read_sysreg_el1(tcr);
+	ctxt->sys_regs[ESR_EL1]		= read_sysreg_el1(esr);
+	ctxt->sys_regs[AFSR0_EL1]	= read_sysreg_el1(afsr0);
+	ctxt->sys_regs[AFSR1_EL1]	= read_sysreg_el1(afsr1);
+	ctxt->sys_regs[FAR_EL1]		= read_sysreg_el1(far);
+	ctxt->sys_regs[MAIR_EL1]	= read_sysreg_el1(mair);
+	ctxt->sys_regs[VBAR_EL1]	= read_sysreg_el1(vbar);
+	ctxt->sys_regs[CONTEXTIDR_EL1]	= read_sysreg_el1(contextidr);
+	ctxt->sys_regs[AMAIR_EL1]	= read_sysreg_el1(amair);
+	ctxt->sys_regs[CNTKCTL_EL1]	= read_sysreg_el1(cntkctl);
 	ctxt->sys_regs[PAR_EL1]		= read_sysreg(par_el1);
 	ctxt->sys_regs[MDSCR_EL1]	= read_sysreg(mdscr_el1);
 
 	ctxt->gp_regs.sp_el1		= read_sysreg(sp_el1);
-	ctxt->gp_regs.elr_el1		= read_sysreg(elr_el1);
-	ctxt->gp_regs.spsr[KVM_SPSR_EL1]= read_sysreg(spsr_el1);
+	ctxt->gp_regs.elr_el1		= read_sysreg_el1(elr);
+	ctxt->gp_regs.spsr[KVM_SPSR_EL1]= read_sysreg_el1(spsr);
 }
 
 void __hyp_text __sysreg_save_host_state(struct kvm_cpu_context *ctxt)
@@ -86,34 +86,34 @@ static void __hyp_text __sysreg_restore_common_state(struct kvm_cpu_context *ctx
 	write_sysreg(ctxt->sys_regs[TPIDRRO_EL0], tpidrro_el0);
 	write_sysreg(ctxt->sys_regs[TPIDR_EL1],	  tpidr_el1);
 	write_sysreg(ctxt->gp_regs.regs.sp,	  sp_el0);
-	write_sysreg(ctxt->gp_regs.regs.pc,	  elr_el2);
-	write_sysreg(ctxt->gp_regs.regs.pstate,	  spsr_el2);
+	write_sysreg_el2(ctxt->gp_regs.regs.pc,	  elr);
+	write_sysreg_el2(ctxt->gp_regs.regs.pstate, spsr);
 }
 
 static void __hyp_text __sysreg_restore_state(struct kvm_cpu_context *ctxt)
 {
-	write_sysreg(ctxt->sys_regs[MPIDR_EL1],	  vmpidr_el2);
-	write_sysreg(ctxt->sys_regs[CSSELR_EL1],  csselr_el1);
-	write_sysreg(ctxt->sys_regs[SCTLR_EL1],	  sctlr_el1);
-	write_sysreg(ctxt->sys_regs[CPACR_EL1],	  cpacr_el1);
-	write_sysreg(ctxt->sys_regs[TTBR0_EL1],	  ttbr0_el1);
-	write_sysreg(ctxt->sys_regs[TTBR1_EL1],	  ttbr1_el1);
-	write_sysreg(ctxt->sys_regs[TCR_EL1],	  tcr_el1);
-	write_sysreg(ctxt->sys_regs[ESR_EL1],	  esr_el1);
-	write_sysreg(ctxt->sys_regs[AFSR0_EL1],	  afsr0_el1);
-	write_sysreg(ctxt->sys_regs[AFSR1_EL1],	  afsr1_el1);
-	write_sysreg(ctxt->sys_regs[FAR_EL1],	  far_el1);
-	write_sysreg(ctxt->sys_regs[MAIR_EL1],	  mair_el1);
-	write_sysreg(ctxt->sys_regs[VBAR_EL1],	  vbar_el1);
-	write_sysreg(ctxt->sys_regs[CONTEXTIDR_EL1], contextidr_el1);
-	write_sysreg(ctxt->sys_regs[AMAIR_EL1],	  amair_el1);
-	write_sysreg(ctxt->sys_regs[CNTKCTL_EL1], cntkctl_el1);
-	write_sysreg(ctxt->sys_regs[PAR_EL1],	  par_el1);
-	write_sysreg(ctxt->sys_regs[MDSCR_EL1],	  mdscr_el1);
-
-	write_sysreg(ctxt->gp_regs.sp_el1,	sp_el1);
-	write_sysreg(ctxt->gp_regs.elr_el1,	elr_el1);
-	write_sysreg(ctxt->gp_regs.spsr[KVM_SPSR_EL1], spsr_el1);
+	write_sysreg(ctxt->sys_regs[MPIDR_EL1],		vmpidr_el2);
+	write_sysreg(ctxt->sys_regs[CSSELR_EL1],	csselr_el1);
+	write_sysreg_el1(ctxt->sys_regs[SCTLR_EL1],	sctlr);
+	write_sysreg_el1(ctxt->sys_regs[CPACR_EL1],	cpacr);
+	write_sysreg_el1(ctxt->sys_regs[TTBR0_EL1],	ttbr0);
+	write_sysreg_el1(ctxt->sys_regs[TTBR1_EL1],	ttbr1);
+	write_sysreg_el1(ctxt->sys_regs[TCR_EL1],	tcr);
+	write_sysreg_el1(ctxt->sys_regs[ESR_EL1],	esr);
+	write_sysreg_el1(ctxt->sys_regs[AFSR0_EL1],	afsr0);
+	write_sysreg_el1(ctxt->sys_regs[AFSR1_EL1],	afsr1);
+	write_sysreg_el1(ctxt->sys_regs[FAR_EL1],	far);
+	write_sysreg_el1(ctxt->sys_regs[MAIR_EL1],	mair);
+	write_sysreg_el1(ctxt->sys_regs[VBAR_EL1],	vbar);
+	write_sysreg_el1(ctxt->sys_regs[CONTEXTIDR_EL1],contextidr);
+	write_sysreg_el1(ctxt->sys_regs[AMAIR_EL1],	amair);
+	write_sysreg_el1(ctxt->sys_regs[CNTKCTL_EL1], 	cntkctl);
+	write_sysreg(ctxt->sys_regs[PAR_EL1],		par_el1);
+	write_sysreg(ctxt->sys_regs[MDSCR_EL1],		mdscr_el1);
+
+	write_sysreg(ctxt->gp_regs.sp_el1,		sp_el1);
+	write_sysreg_el1(ctxt->gp_regs.elr_el1,		elr);
+	write_sysreg_el1(ctxt->gp_regs.spsr[KVM_SPSR_EL1],spsr);
 }
 
 void __hyp_text __sysreg_restore_host_state(struct kvm_cpu_context *ctxt)
-- 
2.1.4

* [PATCH v3 13/23] arm64: KVM: VHE: Enable minimal sysreg save/restore
@ 2016-02-03 18:00   ` Marc Zyngier
  0 siblings, 0 replies; 119+ messages in thread
From: Marc Zyngier @ 2016-02-03 18:00 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Christoffer Dall
  Cc: linux-arm-kernel, linux-kernel, kvm, kvmarm

We're now in a position where we can introduce VHE's minimal
save/restore, which is limited to the handful of shared sysregs.

Add the required alternative function calls that result in a
"do nothing" call on VHE, and the normal save/restore for non-VHE.

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/kvm/hyp/sysreg-sr.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/hyp/sysreg-sr.c b/arch/arm64/kvm/hyp/sysreg-sr.c
index 7d7d757..74b5f81 100644
--- a/arch/arm64/kvm/hyp/sysreg-sr.c
+++ b/arch/arm64/kvm/hyp/sysreg-sr.c
@@ -23,6 +23,9 @@
 
 #include "hyp.h"
 
+/* Yes, this does nothing, on purpose */
+static void __hyp_text __sysreg_do_nothing(struct kvm_cpu_context *ctxt) { }
+
 /*
  * Non-VHE: Both host and guest must save everything.
  *
@@ -67,9 +70,13 @@ static void __hyp_text __sysreg_save_state(struct kvm_cpu_context *ctxt)
 	ctxt->gp_regs.spsr[KVM_SPSR_EL1]= read_sysreg_el1(spsr);
 }
 
+static hyp_alternate_select(__sysreg_call_save_host_state,
+			    __sysreg_save_state, __sysreg_do_nothing,
+			    ARM64_HAS_VIRT_HOST_EXTN);
+
 void __hyp_text __sysreg_save_host_state(struct kvm_cpu_context *ctxt)
 {
-	__sysreg_save_state(ctxt);
+	__sysreg_call_save_host_state()(ctxt);
 	__sysreg_save_common_state(ctxt);
 }
 
@@ -116,9 +123,13 @@ static void __hyp_text __sysreg_restore_state(struct kvm_cpu_context *ctxt)
 	write_sysreg_el1(ctxt->gp_regs.spsr[KVM_SPSR_EL1],spsr);
 }
 
+static hyp_alternate_select(__sysreg_call_restore_host_state,
+			    __sysreg_restore_state, __sysreg_do_nothing,
+			    ARM64_HAS_VIRT_HOST_EXTN);
+
 void __hyp_text __sysreg_restore_host_state(struct kvm_cpu_context *ctxt)
 {
-	__sysreg_restore_state(ctxt);
+	__sysreg_call_restore_host_state()(ctxt);
 	__sysreg_restore_common_state(ctxt);
 }
 
-- 
2.1.4

* [PATCH v3 14/23] arm64: KVM: VHE: Make __fpsimd_enabled VHE aware
  2016-02-03 17:59 ` Marc Zyngier
@ 2016-02-03 18:00   ` Marc Zyngier
  0 siblings, 0 replies; 119+ messages in thread
From: Marc Zyngier @ 2016-02-03 18:00 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Christoffer Dall
  Cc: linux-arm-kernel, linux-kernel, kvm, kvmarm

As non-VHE and VHE have different ways to express the trapping of
FPSIMD registers to EL2, make __fpsimd_enabled a patchable predicate
and provide a VHE implementation.
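
For context, the world switch consumes this predicate right after the
guest exits; a minimal sketch of the caller, following the existing
__guest_run() flow in switch.c:

	/* If the guest ever touched FP/SIMD, its state is live in the
	 * registers and must be saved before the host state comes back. */
	fp_enabled = __fpsimd_enabled();

	if (fp_enabled) {
		__fpsimd_save_state(&guest_ctxt->gp_regs.fp_regs);
		__fpsimd_restore_state(&host_ctxt->gp_regs.fp_regs);
	}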

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/include/asm/kvm_arm.h |  3 +++
 arch/arm64/kvm/hyp/hyp.h         |  5 +----
 arch/arm64/kvm/hyp/switch.c      | 19 +++++++++++++++++++
 3 files changed, 23 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index 738a95f..498335e 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -217,4 +217,7 @@
 	ECN(SOFTSTP_CUR), ECN(WATCHPT_LOW), ECN(WATCHPT_CUR), \
 	ECN(BKPT32), ECN(VECTOR32), ECN(BRK64)
 
+#define CPACR_EL1_FPEN		(3 << 20)
+#define CPACR_EL1_TTA		(1 << 28)
+
 #endif /* __ARM64_KVM_ARM_H__ */
diff --git a/arch/arm64/kvm/hyp/hyp.h b/arch/arm64/kvm/hyp/hyp.h
index 5dfa883..44eaff7 100644
--- a/arch/arm64/kvm/hyp/hyp.h
+++ b/arch/arm64/kvm/hyp/hyp.h
@@ -171,10 +171,7 @@ void __debug_cond_restore_host_state(struct kvm_vcpu *vcpu);
 
 void __fpsimd_save_state(struct user_fpsimd_state *fp_regs);
 void __fpsimd_restore_state(struct user_fpsimd_state *fp_regs);
-static inline bool __fpsimd_enabled(void)
-{
-	return !(read_sysreg(cptr_el2) & CPTR_EL2_TFP);
-}
+bool __fpsimd_enabled(void);
 
 u64 __guest_enter(struct kvm_vcpu *vcpu, struct kvm_cpu_context *host_ctxt);
 void __noreturn __hyp_do_panic(unsigned long, ...);
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index 9071dee..0db161e 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -17,6 +17,25 @@
 
 #include "hyp.h"
 
+static bool __hyp_text __fpsimd_enabled_nvhe(void)
+{
+	return !(read_sysreg(cptr_el2) & CPTR_EL2_TFP);
+}
+
+static bool __hyp_text __fpsimd_enabled_vhe(void)
+{
+	return !!(read_sysreg(cpacr_el1) & CPACR_EL1_FPEN);
+}
+
+static hyp_alternate_select(__fpsimd_is_enabled,
+			    __fpsimd_enabled_nvhe, __fpsimd_enabled_vhe,
+			    ARM64_HAS_VIRT_HOST_EXTN);
+
+bool __hyp_text __fpsimd_enabled(void)
+{
+	return __fpsimd_is_enabled()();
+}
+
 static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu)
 {
 	u64 val;
-- 
2.1.4

* [PATCH v3 15/23] arm64: KVM: VHE: Implement VHE activate/deactivate_traps
  2016-02-03 17:59 ` Marc Zyngier
@ 2016-02-03 18:00   ` Marc Zyngier
  0 siblings, 0 replies; 119+ messages in thread
From: Marc Zyngier @ 2016-02-03 18:00 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Christoffer Dall
  Cc: linux-arm-kernel, linux-kernel, kvm, kvmarm

Running the kernel in HYP mode requires the HCR_E2H bit to be set
at all times, and the HCR_TGE bit to be set when running as a host
(and cleared when running as a guest). At the same time, the vector
must be set to the current role of the kernel (either host or
hypervisor), and a couple of system registers differ between VHE
and non-VHE.

We implement these by using another set of alternate functions
that get dynamically patched.
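
For reference, the host configuration below decomposes as follows (bit
positions as per the #defines in kvm_arm.h; the helper is illustrative
only and not part of the patch):

	static inline u64 hcr_host_vhe_flags_sketch(void)
	{
		u64 tge = 1UL << 27;	/* HCR_TGE: route EL0 exceptions to EL2 (host mode) */
		u64 rw  = 1UL << 31;	/* HCR_RW:  lower ELs are AArch64 */
		u64 e2h = 1UL << 34;	/* HCR_E2H: VHE register redirection enabled */

		return rw | tge | e2h;	/* == HCR_HOST_VHE_FLAGS */
	}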

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/include/asm/kvm_arm.h     |  3 ++-
 arch/arm64/include/asm/kvm_emulate.h |  3 +++
 arch/arm64/kvm/hyp/switch.c          | 47 +++++++++++++++++++++++++++++++++---
 3 files changed, 49 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index 498335e..100cbec 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -23,6 +23,7 @@
 #include <asm/types.h>
 
 /* Hyp Configuration Register (HCR) bits */
+#define HCR_E2H		(UL(1) << 34)
 #define HCR_ID		(UL(1) << 33)
 #define HCR_CD		(UL(1) << 32)
 #define HCR_RW_SHIFT	31
@@ -81,7 +82,7 @@
 			 HCR_AMO | HCR_SWIO | HCR_TIDCP | HCR_RW)
 #define HCR_VIRT_EXCP_MASK (HCR_VA | HCR_VI | HCR_VF)
 #define HCR_INT_OVERRIDE   (HCR_FMO | HCR_IMO)
-
+#define HCR_HOST_VHE_FLAGS (HCR_RW | HCR_TGE | HCR_E2H)
 
 /* Hyp System Control Register (SCTLR_EL2) bits */
 #define SCTLR_EL2_EE	(1 << 25)
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 3066328..5ae0c69 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -29,6 +29,7 @@
 #include <asm/kvm_mmio.h>
 #include <asm/ptrace.h>
 #include <asm/cputype.h>
+#include <asm/virt.h>
 
 unsigned long *vcpu_reg32(const struct kvm_vcpu *vcpu, u8 reg_num);
 unsigned long *vcpu_spsr32(const struct kvm_vcpu *vcpu);
@@ -43,6 +44,8 @@ void kvm_inject_pabt(struct kvm_vcpu *vcpu, unsigned long addr);
 static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
 {
 	vcpu->arch.hcr_el2 = HCR_GUEST_FLAGS;
+	if (is_kernel_in_hyp_mode())
+		vcpu->arch.hcr_el2 |= HCR_E2H;
 	if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features))
 		vcpu->arch.hcr_el2 &= ~HCR_RW;
 }
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index 0db161e..686ca35 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -15,6 +15,8 @@
  * along with this program.  If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <asm/kvm_asm.h>
+
 #include "hyp.h"
 
 static bool __hyp_text __fpsimd_enabled_nvhe(void)
@@ -36,6 +38,27 @@ bool __hyp_text __fpsimd_enabled(void)
 	return __fpsimd_is_enabled()();
 }
 
+static void __hyp_text __activate_traps_vhe(void)
+{
+	u64 val;
+
+	val = read_sysreg(cpacr_el1);
+	val |= CPACR_EL1_TTA;
+	val &= ~CPACR_EL1_FPEN;
+	write_sysreg(val, cpacr_el1);
+
+	write_sysreg(__kvm_hyp_vector, vbar_el1);
+}
+
+static void __hyp_text __activate_traps_nvhe(void)
+{
+	write_sysreg(CPTR_EL2_TTA | CPTR_EL2_TFP, cptr_el2);
+}
+
+static hyp_alternate_select(__activate_traps_arch,
+			    __activate_traps_nvhe, __activate_traps_vhe,
+			    ARM64_HAS_VIRT_HOST_EXTN);
+
 static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu)
 {
 	u64 val;
@@ -55,16 +78,34 @@ static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu)
 	write_sysreg(val, hcr_el2);
 	/* Trap on AArch32 cp15 c15 accesses (EL1 or EL0) */
 	write_sysreg(1 << 15, hstr_el2);
-	write_sysreg(CPTR_EL2_TTA | CPTR_EL2_TFP, cptr_el2);
 	write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
+	__activate_traps_arch()();
 }
 
-static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu)
+static void __hyp_text __deactivate_traps_vhe(void)
+{
+	extern char vectors[];	/* kernel exception vectors */
+
+	write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2);
+	write_sysreg(CPACR_EL1_FPEN, cpacr_el1);
+	write_sysreg(vectors, vbar_el1);
+}
+
+static void __hyp_text __deactivate_traps_nvhe(void)
 {
 	write_sysreg(HCR_RW, hcr_el2);
+	write_sysreg(0, cptr_el2);
+}
+
+static hyp_alternate_select(__deactivate_traps_arch,
+			    __deactivate_traps_nvhe, __deactivate_traps_vhe,
+			    ARM64_HAS_VIRT_HOST_EXTN);
+
+static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu)
+{
+	__deactivate_traps_arch()();
 	write_sysreg(0, hstr_el2);
 	write_sysreg(read_sysreg(mdcr_el2) & MDCR_EL2_HPMN_MASK, mdcr_el2);
-	write_sysreg(0, cptr_el2);
 }
 
 static void __hyp_text __activate_vm(struct kvm_vcpu *vcpu)
-- 
2.1.4

* [PATCH v3 16/23] arm64: KVM: VHE: Use unified sysreg accessors for timer
  2016-02-03 17:59 ` Marc Zyngier
@ 2016-02-03 18:00   ` Marc Zyngier
  0 siblings, 0 replies; 119+ messages in thread
From: Marc Zyngier @ 2016-02-03 18:00 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Christoffer Dall
  Cc: linux-arm-kernel, linux-kernel, kvm, kvmarm

Switch the timer code to the unified sysreg accessors.
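
The interesting part is the register encoding each alternative ends up
using; going by the unified accessor patch earlier in the series, a
sketch of both sides:

	u64 val;

	/*
	 * non-VHE: the guest's EL0 timer registers are reachable
	 * directly from EL2:		mrs   %0, cntv_ctl_el0
	 * VHE: the kernel itself runs at EL2, so the EL0 name now
	 * targets the host; the guest copy is reached through the
	 * *_EL02 alias instead:	mrs_s %0, cntv_ctl_el02
	 */
	val = read_sysreg_el0(cntv_ctl);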

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/kvm/hyp/timer-sr.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/kvm/hyp/timer-sr.c b/arch/arm64/kvm/hyp/timer-sr.c
index 1051e5d..f276d9e 100644
--- a/arch/arm64/kvm/hyp/timer-sr.c
+++ b/arch/arm64/kvm/hyp/timer-sr.c
@@ -31,12 +31,12 @@ void __hyp_text __timer_save_state(struct kvm_vcpu *vcpu)
 	u64 val;
 
 	if (kvm->arch.timer.enabled) {
-		timer->cntv_ctl = read_sysreg(cntv_ctl_el0);
-		timer->cntv_cval = read_sysreg(cntv_cval_el0);
+		timer->cntv_ctl = read_sysreg_el0(cntv_ctl);
+		timer->cntv_cval = read_sysreg_el0(cntv_cval);
 	}
 
 	/* Disable the virtual timer */
-	write_sysreg(0, cntv_ctl_el0);
+	write_sysreg_el0(0, cntv_ctl);
 
 	/* Allow physical timer/counter access for the host */
 	val = read_sysreg(cnthctl_el2);
@@ -64,8 +64,8 @@ void __hyp_text __timer_restore_state(struct kvm_vcpu *vcpu)
 
 	if (kvm->arch.timer.enabled) {
 		write_sysreg(kvm->arch.timer.cntvoff, cntvoff_el2);
-		write_sysreg(timer->cntv_cval, cntv_cval_el0);
+		write_sysreg_el0(timer->cntv_cval, cntv_cval);
 		isb();
-		write_sysreg(timer->cntv_ctl, cntv_ctl_el0);
+		write_sysreg_el0(timer->cntv_ctl, cntv_ctl);
 	}
 }
-- 
2.1.4

* [PATCH v3 17/23] arm64: KVM: VHE: Add fpsimd enabling on guest access
  2016-02-03 17:59 ` Marc Zyngier
@ 2016-02-03 18:00   ` Marc Zyngier
  0 siblings, 0 replies; 119+ messages in thread
From: Marc Zyngier @ 2016-02-03 18:00 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Christoffer Dall
  Cc: linux-arm-kernel, linux-kernel, kvm, kvmarm

Despite the fact that a VHE enabled kernel runs at EL2, it uses
CPACR_EL1 to trap FPSIMD access. Add the required alternative
code to re-enable guest FPSIMD access when it has trapped to
EL2.
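
In C terms, the assembly alternative below amounts to this sketch
(is_kernel_in_hyp_mode() stands in for the boot-time patched
ARM64_HAS_VIRT_HOST_EXTN check; the real code has to live in the
guest entry path, hence assembly):

	if (!is_kernel_in_hyp_mode())	/* non-VHE: clear the trap bit */
		write_sysreg(read_sysreg(cptr_el2) & ~CPTR_EL2_TFP, cptr_el2);
	else				/* VHE: set the 2-bit enable field */
		write_sysreg(read_sysreg(cpacr_el1) | CPACR_EL1_FPEN, cpacr_el1);
	isb();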

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/kvm/hyp/entry.S | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
index fd0fbe9..ce9e5e5 100644
--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -130,9 +130,15 @@ ENDPROC(__guest_exit)
 ENTRY(__fpsimd_guest_restore)
 	stp	x4, lr, [sp, #-16]!
 
+alternative_if_not ARM64_HAS_VIRT_HOST_EXTN
 	mrs	x2, cptr_el2
 	bic	x2, x2, #CPTR_EL2_TFP
 	msr	cptr_el2, x2
+alternative_else
+	mrs	x2, cpacr_el1
+	orr	x2, x2, #CPACR_EL1_FPEN
+	msr	cpacr_el1, x2
+alternative_endif
 	isb
 
 	mrs	x3, tpidr_el2
-- 
2.1.4

* [PATCH v3 18/23] arm64: KVM: VHE: Add alternative panic handling
  2016-02-03 17:59 ` Marc Zyngier
@ 2016-02-03 18:00   ` Marc Zyngier
  0 siblings, 0 replies; 119+ messages in thread
From: Marc Zyngier @ 2016-02-03 18:00 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Christoffer Dall
  Cc: linux-arm-kernel, linux-kernel, kvm, kvmarm

As the kernel fully runs in HYP when VHE is enabled, we can
directly branch to the kernel's panic() implementation, and
not perform an exception return.

Add the alternative code to deal with this.
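
The subtlety here is the address space the panic string lives in;
roughly (sketch only, using helpers already present in this series):

	const char *str = __hyp_panic_string;

	if (!is_kernel_in_hyp_mode())	/* non-VHE: HYP VA != kernel VA */
		str = (const char *)hyp_kern_va((unsigned long)str);
	/* VHE: single address space, panic() takes the string as-is */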

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/kvm/hyp/switch.c | 35 +++++++++++++++++++++++++++--------
 1 file changed, 27 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index 686ca35..e90683a 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -206,11 +206,34 @@ __alias(__guest_run) int __kvm_vcpu_run(struct kvm_vcpu *vcpu);
 
 static const char __hyp_panic_string[] = "HYP panic:\nPS:%08llx PC:%016llx ESR:%08llx\nFAR:%016llx HPFAR:%016llx PAR:%016llx\nVCPU:%p\n";
 
-void __hyp_text __noreturn __hyp_panic(void)
+static void __hyp_text __hyp_call_panic_nvhe(u64 spsr, u64 elr, u64 par)
 {
 	unsigned long str_va = (unsigned long)__hyp_panic_string;
-	u64 spsr = read_sysreg(spsr_el2);
-	u64 elr = read_sysreg(elr_el2);
+
+	__hyp_do_panic(hyp_kern_va(str_va),
+		       spsr,  elr,
+		       read_sysreg(esr_el2),   read_sysreg_el2(far),
+		       read_sysreg(hpfar_el2), par,
+		       (void *)read_sysreg(tpidr_el2));
+}
+
+static void __hyp_text __hyp_call_panic_vhe(u64 spsr, u64 elr, u64 par)
+{
+	panic(__hyp_panic_string,
+	      spsr,  elr,
+	      read_sysreg_el2(esr),   read_sysreg_el2(far),
+	      read_sysreg(hpfar_el2), par,
+	      (void *)read_sysreg(tpidr_el2));
+}
+
+static hyp_alternate_select(__hyp_call_panic,
+			    __hyp_call_panic_nvhe, __hyp_call_panic_vhe,
+			    ARM64_HAS_VIRT_HOST_EXTN);
+
+void __hyp_text __noreturn __hyp_panic(void)
+{
+	u64 spsr = read_sysreg_el2(spsr);
+	u64 elr = read_sysreg_el2(elr);
 	u64 par = read_sysreg(par_el1);
 
 	if (read_sysreg(vttbr_el2)) {
@@ -225,11 +248,7 @@ void __hyp_text __noreturn __hyp_panic(void)
 	}
 
 	/* Call panic for real */
-	__hyp_do_panic(hyp_kern_va(str_va),
-		       spsr,  elr,
-		       read_sysreg(esr_el2),   read_sysreg(far_el2),
-		       read_sysreg(hpfar_el2), par,
-		       (void *)read_sysreg(tpidr_el2));
+	__hyp_call_panic()(spsr, elr, par);
 
 	unreachable();
 }
-- 
2.1.4

* [PATCH v3 19/23] arm64: KVM: Move most of the fault decoding to C
  2016-02-03 17:59 ` Marc Zyngier
@ 2016-02-03 18:00   ` Marc Zyngier
  0 siblings, 0 replies; 119+ messages in thread
From: Marc Zyngier @ 2016-02-03 18:00 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Christoffer Dall
  Cc: linux-arm-kernel, linux-kernel, kvm, kvmarm

The fault decoding process (including computing the IPA in the case
of a permission fault) would be much better done in C code, as we
have a reasonable infrastructure to deal with the VHE/non-VHE
differences.

Let's move the whole thing to C, including the workaround for
erratum 834220, and just patch the odd ESR_EL2 access remaining
in hyp-entry.S.
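
For reference, a worked example of the PAR_EL1 -> HPFAR_EL2 conversion
that the C code below performs (bit layouts per the ARMv8 ARM: PAR_EL1
returns PA[47:12] in bits [47:12], HPFAR_EL2 carries IPA[47:12] in bits
[39:4]):

	u64 par   = 0x0000000089abc000;	/* successful AT result, F bit clear */
	u64 hpfar = ((par >> 12) & ((1UL << 36) - 1)) << 4;
					/* hpfar == 0x89abc0 */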

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/kernel/asm-offsets.c |  3 --
 arch/arm64/kvm/hyp/hyp-entry.S  | 69 +++------------------------------
 arch/arm64/kvm/hyp/switch.c     | 85 +++++++++++++++++++++++++++++++++++++++++
 3 files changed, 90 insertions(+), 67 deletions(-)

diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index fffa4ac6..b0ab4e9 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -110,9 +110,6 @@ int main(void)
   DEFINE(CPU_USER_PT_REGS,	offsetof(struct kvm_regs, regs));
   DEFINE(CPU_FP_REGS,		offsetof(struct kvm_regs, fp_regs));
   DEFINE(VCPU_FPEXC32_EL2,	offsetof(struct kvm_vcpu, arch.ctxt.sys_regs[FPEXC32_EL2]));
-  DEFINE(VCPU_ESR_EL2,		offsetof(struct kvm_vcpu, arch.fault.esr_el2));
-  DEFINE(VCPU_FAR_EL2,		offsetof(struct kvm_vcpu, arch.fault.far_el2));
-  DEFINE(VCPU_HPFAR_EL2,	offsetof(struct kvm_vcpu, arch.fault.hpfar_el2));
   DEFINE(VCPU_HOST_CONTEXT,	offsetof(struct kvm_vcpu, arch.host_cpu_context));
 #endif
 #ifdef CONFIG_CPU_PM
diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
index 1bdeee7..3488894 100644
--- a/arch/arm64/kvm/hyp/hyp-entry.S
+++ b/arch/arm64/kvm/hyp/hyp-entry.S
@@ -19,7 +19,6 @@
 
 #include <asm/alternative.h>
 #include <asm/assembler.h>
-#include <asm/asm-offsets.h>
 #include <asm/cpufeature.h>
 #include <asm/kvm_arm.h>
 #include <asm/kvm_asm.h>
@@ -69,7 +68,11 @@ ENDPROC(__vhe_hyp_call)
 el1_sync:				// Guest trapped into EL2
 	save_x0_to_x3
 
+alternative_if_not ARM64_HAS_VIRT_HOST_EXTN
 	mrs	x1, esr_el2
+alternative_else
+	mrs	x1, esr_el1
+alternative_endif
 	lsr	x2, x1, #ESR_ELx_EC_SHIFT
 
 	cmp	x2, #ESR_ELx_EC_HVC64
@@ -105,72 +108,10 @@ el1_trap:
 	cmp	x2, #ESR_ELx_EC_FP_ASIMD
 	b.eq	__fpsimd_guest_restore
 
-	cmp	x2, #ESR_ELx_EC_DABT_LOW
-	mov	x0, #ESR_ELx_EC_IABT_LOW
-	ccmp	x2, x0, #4, ne
-	b.ne	1f		// Not an abort we care about
-
-	/* This is an abort. Check for permission fault */
-alternative_if_not ARM64_WORKAROUND_834220
-	and	x2, x1, #ESR_ELx_FSC_TYPE
-	cmp	x2, #FSC_PERM
-	b.ne	1f		// Not a permission fault
-alternative_else
-	nop			// Use the permission fault path to
-	nop			// check for a valid S1 translation,
-	nop			// regardless of the ESR value.
-alternative_endif
-
-	/*
-	 * Check for Stage-1 page table walk, which is guaranteed
-	 * to give a valid HPFAR_EL2.
-	 */
-	tbnz	x1, #7, 1f	// S1PTW is set
-
-	/* Preserve PAR_EL1 */
-	mrs	x3, par_el1
-	stp	x3, xzr, [sp, #-16]!
-
-	/*
-	 * Permission fault, HPFAR_EL2 is invalid.
-	 * Resolve the IPA the hard way using the guest VA.
-	 * Stage-1 translation already validated the memory access rights.
-	 * As such, we can use the EL1 translation regime, and don't have
-	 * to distinguish between EL0 and EL1 access.
-	 */
-	mrs	x2, far_el2
-	at	s1e1r, x2
-	isb
-
-	/* Read result */
-	mrs	x3, par_el1
-	ldp	x0, xzr, [sp], #16	// Restore PAR_EL1 from the stack
-	msr	par_el1, x0
-	tbnz	x3, #0, 3f		// Bail out if we failed the translation
-	ubfx	x3, x3, #12, #36	// Extract IPA
-	lsl	x3, x3, #4		// and present it like HPFAR
-	b	2f
-
-1:	mrs	x3, hpfar_el2
-	mrs	x2, far_el2
-
-2:	mrs	x0, tpidr_el2
-	str	w1, [x0, #VCPU_ESR_EL2]
-	str	x2, [x0, #VCPU_FAR_EL2]
-	str	x3, [x0, #VCPU_HPFAR_EL2]
-
+	mrs	x0, tpidr_el2
 	mov	x1, #ARM_EXCEPTION_TRAP
 	b	__guest_exit
 
-	/*
-	 * Translation failed. Just return to the guest and
-	 * let it fault again. Another CPU is probably playing
-	 * behind our back.
-	 */
-3:	restore_x0_to_x3
-
-	eret
-
 el1_irq:
 	save_x0_to_x3
 	mrs	x0, tpidr_el2
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index e90683a..a192357 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -15,6 +15,7 @@
  * along with this program.  If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <linux/types.h>
 #include <asm/kvm_asm.h>
 
 #include "hyp.h"
@@ -145,6 +146,86 @@ static void __hyp_text __vgic_restore_state(struct kvm_vcpu *vcpu)
 	__vgic_call_restore_state()(vcpu);
 }
 
+static bool __hyp_text __true_value(void)
+{
+	return true;
+}
+
+static bool __hyp_text __false_value(void)
+{
+	return false;
+}
+
+static hyp_alternate_select(__check_arm_834220,
+			    __false_value, __true_value,
+			    ARM64_WORKAROUND_834220);
+
+static bool __hyp_text __translate_far_to_hpfar(u64 far, u64 *hpfar)
+{
+	u64 par, tmp;
+
+	/*
+	 * Resolve the IPA the hard way using the guest VA.
+	 *
+	 * Stage-1 translation already validated the memory access
+	 * rights. As such, we can use the EL1 translation regime, and
+	 * don't have to distinguish between EL0 and EL1 access.
+	 *
+	 * We do need to save/restore PAR_EL1 though, as we haven't
+	 * saved the guest context yet, and we may return early...
+	 */
+	par = read_sysreg(par_el1);
+	asm volatile("at s1e1r, %0" : : "r" (far));
+	isb();
+
+	tmp = read_sysreg(par_el1);
+	write_sysreg(par, par_el1);
+
+	if (unlikely(tmp & 1))
+		return false; /* Translation failed, back to guest */
+
+	/* Convert PAR to HPFAR format */
+	*hpfar = ((tmp >> 12) & ((1UL << 36) - 1)) << 4;
+	return true;
+}
+
+static bool __hyp_text __populate_fault_info(struct kvm_vcpu *vcpu)
+{
+	u64 esr = read_sysreg_el2(esr);
+	u8 ec = esr >> ESR_ELx_EC_SHIFT;
+	u64 hpfar, far;
+
+	vcpu->arch.fault.esr_el2 = esr;
+
+	if (ec != ESR_ELx_EC_DABT_LOW && ec != ESR_ELx_EC_IABT_LOW)
+		return true;
+
+	far = read_sysreg_el2(far);
+
+	/*
+	 * The HPFAR can be invalid if the stage 2 fault did not
+	 * happen during a stage 1 page table walk (the ESR_EL2.S1PTW
+	 * bit is clear) and one of the two following cases is true:
+	 *   1. The fault was due to a permission fault
+	 *   2. The processor carries erratum 834220
+	 *
+	 * Therefore, for all non S1PTW faults where we either have a
+	 * permission fault or the errata workaround is enabled, we
+	 * resolve the IPA using the AT instruction.
+	 */
+	if (!(esr & ESR_ELx_S1PTW) &&
+	    (__check_arm_834220()() || (esr & ESR_ELx_FSC_TYPE) == FSC_PERM)) {
+		if (!__translate_far_to_hpfar(far, &hpfar))
+			return false;
+	} else {
+		hpfar = read_sysreg(hpfar_el2);
+	}
+
+	vcpu->arch.fault.far_el2 = far;
+	vcpu->arch.fault.hpfar_el2 = hpfar;
+	return true;
+}
+
 static int __hyp_text __guest_run(struct kvm_vcpu *vcpu)
 {
 	struct kvm_cpu_context *host_ctxt;
@@ -176,9 +257,13 @@ static int __hyp_text __guest_run(struct kvm_vcpu *vcpu)
 	__debug_restore_state(vcpu, kern_hyp_va(vcpu->arch.debug_ptr), guest_ctxt);
 
 	/* Jump in the fire! */
+again:
 	exit_code = __guest_enter(vcpu, host_ctxt);
 	/* And we're baaack! */
 
+	if (exit_code == ARM_EXCEPTION_TRAP && !__populate_fault_info(vcpu))
+		goto again;
+
 	fp_enabled = __fpsimd_enabled();
 
 	__sysreg_save_guest_state(guest_ctxt);
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 119+ messages in thread

* [PATCH v3 19/23] arm64: KVM: Move most of the fault decoding to C
@ 2016-02-03 18:00   ` Marc Zyngier
  0 siblings, 0 replies; 119+ messages in thread
From: Marc Zyngier @ 2016-02-03 18:00 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Christoffer Dall
  Cc: kvmarm, linux-kernel, linux-arm-kernel, kvm

The fault decoding process (including computing the IPA in the case
of a permission fault) would be much better done in C code, as we
have a reasonable infrastructure to deal with the VHE/non-VHE
differences.

Let's move the whole thing to C, including the workaround for
erratum 834220, and just patch the odd ESR_EL2 access remaining
in hyp-entry.S.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/kernel/asm-offsets.c |  3 --
 arch/arm64/kvm/hyp/hyp-entry.S  | 69 +++------------------------------
 arch/arm64/kvm/hyp/switch.c     | 85 +++++++++++++++++++++++++++++++++++++++++
 3 files changed, 90 insertions(+), 67 deletions(-)

diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index fffa4ac6..b0ab4e9 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -110,9 +110,6 @@ int main(void)
   DEFINE(CPU_USER_PT_REGS,	offsetof(struct kvm_regs, regs));
   DEFINE(CPU_FP_REGS,		offsetof(struct kvm_regs, fp_regs));
   DEFINE(VCPU_FPEXC32_EL2,	offsetof(struct kvm_vcpu, arch.ctxt.sys_regs[FPEXC32_EL2]));
-  DEFINE(VCPU_ESR_EL2,		offsetof(struct kvm_vcpu, arch.fault.esr_el2));
-  DEFINE(VCPU_FAR_EL2,		offsetof(struct kvm_vcpu, arch.fault.far_el2));
-  DEFINE(VCPU_HPFAR_EL2,	offsetof(struct kvm_vcpu, arch.fault.hpfar_el2));
   DEFINE(VCPU_HOST_CONTEXT,	offsetof(struct kvm_vcpu, arch.host_cpu_context));
 #endif
 #ifdef CONFIG_CPU_PM
diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
index 1bdeee7..3488894 100644
--- a/arch/arm64/kvm/hyp/hyp-entry.S
+++ b/arch/arm64/kvm/hyp/hyp-entry.S
@@ -19,7 +19,6 @@
 
 #include <asm/alternative.h>
 #include <asm/assembler.h>
-#include <asm/asm-offsets.h>
 #include <asm/cpufeature.h>
 #include <asm/kvm_arm.h>
 #include <asm/kvm_asm.h>
@@ -69,7 +68,11 @@ ENDPROC(__vhe_hyp_call)
 el1_sync:				// Guest trapped into EL2
 	save_x0_to_x3
 
+alternative_if_not ARM64_HAS_VIRT_HOST_EXTN
 	mrs	x1, esr_el2
+alternative_else
+	mrs	x1, esr_el1
+alternative_endif
 	lsr	x2, x1, #ESR_ELx_EC_SHIFT
 
 	cmp	x2, #ESR_ELx_EC_HVC64
@@ -105,72 +108,10 @@ el1_trap:
 	cmp	x2, #ESR_ELx_EC_FP_ASIMD
 	b.eq	__fpsimd_guest_restore
 
-	cmp	x2, #ESR_ELx_EC_DABT_LOW
-	mov	x0, #ESR_ELx_EC_IABT_LOW
-	ccmp	x2, x0, #4, ne
-	b.ne	1f		// Not an abort we care about
-
-	/* This is an abort. Check for permission fault */
-alternative_if_not ARM64_WORKAROUND_834220
-	and	x2, x1, #ESR_ELx_FSC_TYPE
-	cmp	x2, #FSC_PERM
-	b.ne	1f		// Not a permission fault
-alternative_else
-	nop			// Use the permission fault path to
-	nop			// check for a valid S1 translation,
-	nop			// regardless of the ESR value.
-alternative_endif
-
-	/*
-	 * Check for Stage-1 page table walk, which is guaranteed
-	 * to give a valid HPFAR_EL2.
-	 */
-	tbnz	x1, #7, 1f	// S1PTW is set
-
-	/* Preserve PAR_EL1 */
-	mrs	x3, par_el1
-	stp	x3, xzr, [sp, #-16]!
-
-	/*
-	 * Permission fault, HPFAR_EL2 is invalid.
-	 * Resolve the IPA the hard way using the guest VA.
-	 * Stage-1 translation already validated the memory access rights.
-	 * As such, we can use the EL1 translation regime, and don't have
-	 * to distinguish between EL0 and EL1 access.
-	 */
-	mrs	x2, far_el2
-	at	s1e1r, x2
-	isb
-
-	/* Read result */
-	mrs	x3, par_el1
-	ldp	x0, xzr, [sp], #16	// Restore PAR_EL1 from the stack
-	msr	par_el1, x0
-	tbnz	x3, #0, 3f		// Bail out if we failed the translation
-	ubfx	x3, x3, #12, #36	// Extract IPA
-	lsl	x3, x3, #4		// and present it like HPFAR
-	b	2f
-
-1:	mrs	x3, hpfar_el2
-	mrs	x2, far_el2
-
-2:	mrs	x0, tpidr_el2
-	str	w1, [x0, #VCPU_ESR_EL2]
-	str	x2, [x0, #VCPU_FAR_EL2]
-	str	x3, [x0, #VCPU_HPFAR_EL2]
-
+	mrs	x0, tpidr_el2
 	mov	x1, #ARM_EXCEPTION_TRAP
 	b	__guest_exit
 
-	/*
-	 * Translation failed. Just return to the guest and
-	 * let it fault again. Another CPU is probably playing
-	 * behind our back.
-	 */
-3:	restore_x0_to_x3
-
-	eret
-
 el1_irq:
 	save_x0_to_x3
 	mrs	x0, tpidr_el2
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index e90683a..a192357 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -15,6 +15,7 @@
  * along with this program.  If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <linux/types.h>
 #include <asm/kvm_asm.h>
 
 #include "hyp.h"
@@ -145,6 +146,86 @@ static void __hyp_text __vgic_restore_state(struct kvm_vcpu *vcpu)
 	__vgic_call_restore_state()(vcpu);
 }
 
+static bool __hyp_text __true_value(void)
+{
+	return true;
+}
+
+static bool __hyp_text __false_value(void)
+{
+	return false;
+}
+
+static hyp_alternate_select(__check_arm_834220,
+			    __false_value, __true_value,
+			    ARM64_WORKAROUND_834220);
+
+static bool __hyp_text __translate_far_to_hpfar(u64 far, u64 *hpfar)
+{
+	u64 par, tmp;
+
+	/*
+	 * Resolve the IPA the hard way using the guest VA.
+	 *
+	 * Stage-1 translation already validated the memory access
+	 * rights. As such, we can use the EL1 translation regime, and
+	 * don't have to distinguish between EL0 and EL1 access.
+	 *
+	 * We do need to save/restore PAR_EL1 though, as we haven't
+	 * saved the guest context yet, and we may return early...
+	 */
+	par = read_sysreg(par_el1);
+	asm volatile("at s1e1r, %0" : : "r" (far));
+	isb();
+
+	tmp = read_sysreg(par_el1);
+	write_sysreg(par, par_el1);
+
+	if (unlikely(tmp & 1))
+		return false; /* Translation failed, back to guest */
+
+	/* Convert PAR to HPFAR format */
+	*hpfar = ((tmp >> 12) & ((1UL << 36) - 1)) << 4;
+	return true;
+}
+
+static bool __hyp_text __populate_fault_info(struct kvm_vcpu *vcpu)
+{
+	u64 esr = read_sysreg_el2(esr);
+	u8 ec = esr >> ESR_ELx_EC_SHIFT;
+	u64 hpfar, far;
+
+	vcpu->arch.fault.esr_el2 = esr;
+
+	if (ec != ESR_ELx_EC_DABT_LOW && ec != ESR_ELx_EC_IABT_LOW)
+		return true;
+
+	far = read_sysreg_el2(far);
+
+	/*
+	 * The HPFAR can be invalid if the stage 2 fault did not
+	 * happen during a stage 1 page table walk (the ESR_EL2.S1PTW
+	 * bit is clear) and one of the two following cases is true:
+	 *   1. The fault was due to a permission fault
+	 *   2. The processor carries erratum 834220
+	 *
+	 * Therefore, for all non S1PTW faults where we either have a
+	 * permission fault or the errata workaround is enabled, we
+	 * resolve the IPA using the AT instruction.
+	 */
+	if (!(esr & ESR_ELx_S1PTW) &&
+	    (__check_arm_834220()() || (esr & ESR_ELx_FSC_TYPE) == FSC_PERM)) {
+		if (!__translate_far_to_hpfar(far, &hpfar))
+			return false;
+	} else {
+		hpfar = read_sysreg(hpfar_el2);
+	}
+
+	vcpu->arch.fault.far_el2 = far;
+	vcpu->arch.fault.hpfar_el2 = hpfar;
+	return true;
+}
+
 static int __hyp_text __guest_run(struct kvm_vcpu *vcpu)
 {
 	struct kvm_cpu_context *host_ctxt;
@@ -176,9 +257,13 @@ static int __hyp_text __guest_run(struct kvm_vcpu *vcpu)
 	__debug_restore_state(vcpu, kern_hyp_va(vcpu->arch.debug_ptr), guest_ctxt);
 
 	/* Jump in the fire! */
+again:
 	exit_code = __guest_enter(vcpu, host_ctxt);
 	/* And we're baaack! */
 
+	if (exit_code == ARM_EXCEPTION_TRAP && !__populate_fault_info(vcpu))
+		goto again;
+
 	fp_enabled = __fpsimd_enabled();
 
 	__sysreg_save_guest_state(guest_ctxt);
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 119+ messages in thread
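
The PAR_EL1 to HPFAR_EL2 conversion in __translate_far_to_hpfar() above is
easy to sanity-check outside the kernel. A minimal standalone sketch (plain
user-space C; the physical address below is purely illustrative):

#include <stdint.h>
#include <stdio.h>

/* PAR_EL1 carries the output PA in bits [47:12]; HPFAR_EL2 wants
 * FIPA[47:12] placed at bits [39:4], hence the mask-and-shift. */
static uint64_t par_to_hpfar(uint64_t par)
{
	return ((par >> 12) & ((1ULL << 36) - 1)) << 4;
}

int main(void)
{
	uint64_t par = 0x80001000;	/* hypothetical PA, fault bit (bit 0) clear */

	/* Prints 0x800010: PA page 0x80001 presented HPFAR-style */
	printf("HPFAR = %#llx\n", (unsigned long long)par_to_hpfar(par));
	return 0;
}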

* [PATCH v3 20/23] arm64: perf: Count EL2 events if the kernel is running in HYP
@ 2016-02-03 18:00   ` Marc Zyngier
  0 siblings, 0 replies; 119+ messages in thread
From: Marc Zyngier @ 2016-02-03 18:00 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Christoffer Dall
  Cc: linux-arm-kernel, linux-kernel, kvm, kvmarm

When the kernel is running in HYP (with VHE), it is necessary to
include EL2 events if the user requests counting kernel or
hypervisor events.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/kernel/perf_event.c | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index f7ab14c..6013a38 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -20,6 +20,7 @@
  */
 
 #include <asm/irq_regs.h>
+#include <asm/virt.h>
 
 #include <linux/of.h>
 #include <linux/perf/arm_pmu.h>
@@ -693,10 +694,15 @@ static int armv8pmu_set_event_filter(struct hw_perf_event *event,
 		return -EPERM;
 	if (attr->exclude_user)
 		config_base |= ARMV8_EXCLUDE_EL0;
-	if (attr->exclude_kernel)
-		config_base |= ARMV8_EXCLUDE_EL1;
-	if (!attr->exclude_hv)
-		config_base |= ARMV8_INCLUDE_EL2;
+	if (is_kernel_in_hyp_mode()) {
+		if (!attr->exclude_kernel || !attr->exclude_hv)
+			config_base |= ARMV8_INCLUDE_EL2;
+	} else {
+		if (attr->exclude_kernel)
+			config_base |= ARMV8_EXCLUDE_EL1;
+		if (!attr->exclude_hv)
+			config_base |= ARMV8_INCLUDE_EL2;
+	}
 
 	/*
 	 * Install the filter into config_base as this is used to
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 119+ messages in thread
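
The new filter behaviour is observable from user space: on a VHE host, an
event that does not exclude the kernel (or the hypervisor) must now count at
EL2. A hedged sketch using the standard perf_event_open(2) interface (the
event choice is illustrative, not part of this series):

#include <linux/perf_event.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Count CPU cycles in kernel/hyp context only. With VHE the kernel
 * runs at EL2, so leaving exclude_kernel and exclude_hv clear makes
 * armv8pmu_set_event_filter() set ARMV8_INCLUDE_EL2. */
static int open_kernel_cycles(void)
{
	struct perf_event_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_HARDWARE;
	attr.config = PERF_COUNT_HW_CPU_CYCLES;
	attr.exclude_user = 1;		/* EL0 not counted */

	return syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
}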

* [PATCH v3 21/23] arm64: hw_breakpoint: Allow EL2 breakpoints if running in HYP
@ 2016-02-03 18:00   ` Marc Zyngier
  0 siblings, 0 replies; 119+ messages in thread
From: Marc Zyngier @ 2016-02-03 18:00 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Christoffer Dall
  Cc: linux-arm-kernel, linux-kernel, kvm, kvmarm

With VHE, we place kernel {watch,break}-points at EL2 to get things
like kgdb and "perf -e mem:..." working.

This requires a bit of repainting in the low-level encode/decode,
but is otherwise pretty simple.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/include/asm/hw_breakpoint.h | 49 +++++++++++++++++++++-------------
 1 file changed, 31 insertions(+), 18 deletions(-)

diff --git a/arch/arm64/include/asm/hw_breakpoint.h b/arch/arm64/include/asm/hw_breakpoint.h
index 9732908..0da0272 100644
--- a/arch/arm64/include/asm/hw_breakpoint.h
+++ b/arch/arm64/include/asm/hw_breakpoint.h
@@ -18,6 +18,7 @@
 
 #include <asm/cputype.h>
 #include <asm/cpufeature.h>
+#include <asm/virt.h>
 
 #ifdef __KERNEL__
 
@@ -35,24 +36,6 @@ struct arch_hw_breakpoint {
 	struct arch_hw_breakpoint_ctrl ctrl;
 };
 
-static inline u32 encode_ctrl_reg(struct arch_hw_breakpoint_ctrl ctrl)
-{
-	return (ctrl.len << 5) | (ctrl.type << 3) | (ctrl.privilege << 1) |
-		ctrl.enabled;
-}
-
-static inline void decode_ctrl_reg(u32 reg,
-				   struct arch_hw_breakpoint_ctrl *ctrl)
-{
-	ctrl->enabled	= reg & 0x1;
-	reg >>= 1;
-	ctrl->privilege	= reg & 0x3;
-	reg >>= 2;
-	ctrl->type	= reg & 0x3;
-	reg >>= 2;
-	ctrl->len	= reg & 0xff;
-}
-
 /* Breakpoint */
 #define ARM_BREAKPOINT_EXECUTE	0
 
@@ -76,6 +59,36 @@ static inline void decode_ctrl_reg(u32 reg,
 #define ARM_KERNEL_STEP_ACTIVE	1
 #define ARM_KERNEL_STEP_SUSPEND	2
 
+#define DBG_HMC_HYP		(1 << 13)
+#define DBG_SSC_HYP		(3 << 14)
+
+static inline u32 encode_ctrl_reg(struct arch_hw_breakpoint_ctrl ctrl)
+{
+	u32 val = (ctrl.len << 5) | (ctrl.type << 3) | ctrl.enabled;
+
+	if (is_kernel_in_hyp_mode() && ctrl.privilege == AARCH64_BREAKPOINT_EL1)
+		val |= DBG_HMC_HYP | DBG_SSC_HYP;
+	else
+		val |= ctrl.privilege << 1;
+
+	return val;
+}
+
+static inline void decode_ctrl_reg(u32 reg,
+				   struct arch_hw_breakpoint_ctrl *ctrl)
+{
+	ctrl->enabled	= reg & 0x1;
+	reg >>= 1;
+	if (is_kernel_in_hyp_mode())
+		ctrl->privilege = !!(reg & (DBG_HMC_HYP >> 1));
+	else
+		ctrl->privilege	= reg & 0x3;
+	reg >>= 2;
+	ctrl->type	= reg & 0x3;
+	reg >>= 2;
+	ctrl->len	= reg & 0xff;
+}
+
 /*
  * Limits.
  * Changing these will require modifications to the register accessors.
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 119+ messages in thread
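
To make the new encoding concrete: on a VHE host, a kernel breakpoint asks
for an HMC/SSC match rather than PMC=EL1. A hedged in-kernel sketch (the
caller and the .len value are illustrative; the helpers are the ones defined
in the hunk above):

/* Illustrative only, not part of the patch. */
static u32 example_kernel_bp_ctrl(void)
{
	struct arch_hw_breakpoint_ctrl ctrl = {
		.len		= 0xf,	/* 4-byte match */
		.type		= ARM_BREAKPOINT_EXECUTE,
		.privilege	= AARCH64_BREAKPOINT_EL1,
		.enabled	= 1,
	};

	/*
	 * In HYP mode this includes DBG_HMC_HYP | DBG_SSC_HYP, so the
	 * breakpoint fires at EL2 where the kernel now runs; otherwise
	 * it encodes the plain EL1 privilege as before.
	 */
	return encode_ctrl_reg(ctrl);
}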

* [PATCH v3 22/23] arm64: VHE: Add support for running Linux in EL2 mode
@ 2016-02-03 18:00   ` Marc Zyngier
  0 siblings, 0 replies; 119+ messages in thread
From: Marc Zyngier @ 2016-02-03 18:00 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Christoffer Dall
  Cc: linux-arm-kernel, linux-kernel, kvm, kvmarm

With ARMv8.1 VHE, the architecture is able to (almost) transparently
run the kernel at EL2, despite being written for EL1.

This patch takes care of the "almost" part, mostly preventing the kernel
from dropping from EL2 to EL1, and setting up the HYP configuration.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/Kconfig       | 13 +++++++++++++
 arch/arm64/kernel/head.S | 28 +++++++++++++++++++++++++++-
 2 files changed, 40 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 8cc6228..cf118d9 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -750,6 +750,19 @@ config ARM64_LSE_ATOMICS
 	  not support these instructions and requires the kernel to be
 	  built with binutils >= 2.25.
 
+config ARM64_VHE
+	bool "Enable support for Virtualization Host Extensions (VHE)"
+	default y
+	help
+	  Virtualization Host Extensions (VHE) allow the kernel to run
+	  directly at EL2 (instead of EL1) on processors that support
+	  it. This leads to better performance for KVM, as it reduces
+	  the cost of the world switch.
+
+	  Selecting this option allows the VHE feature to be detected
+	  at runtime, and does not affect processors that do not
+	  implement this feature.
+
 endmenu
 
 endmenu
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 917d981..6f2f377 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -30,6 +30,7 @@
 #include <asm/cache.h>
 #include <asm/cputype.h>
 #include <asm/kernel-pgtable.h>
+#include <asm/kvm_arm.h>
 #include <asm/memory.h>
 #include <asm/pgtable-hwdef.h>
 #include <asm/pgtable.h>
@@ -464,9 +465,27 @@ CPU_LE(	bic	x0, x0, #(3 << 24)	)	// Clear the EE and E0E bits for EL1
 	isb
 	ret
 
+2:
+#ifdef CONFIG_ARM64_VHE
+	/*
+	 * Check for VHE being present. For the rest of the EL2 setup,
+	 * x2 being non-zero indicates that we do have VHE, and that the
+	 * kernel is intended to run at EL2.
+	 */
+	mrs	x2, id_aa64mmfr1_el1
+	ubfx	x2, x2, #8, #4
+#else
+	mov	x2, xzr
+#endif
+
 	/* Hyp configuration. */
-2:	mov	x0, #(1 << 31)			// 64-bit EL1
+	mov	x0, #HCR_RW			// 64-bit EL1
+	cbz	x2, set_hcr
+	orr	x0, x0, #HCR_TGE		// Enable Host Extensions
+	orr	x0, x0, #HCR_E2H
+set_hcr:
 	msr	hcr_el2, x0
+	isb
 
 	/* Generic timers. */
 	mrs	x0, cnthctl_el2
@@ -526,6 +545,13 @@ CPU_LE(	movk	x0, #0x30d0, lsl #16	)	// Clear EE and E0E on LE systems
 	/* Stage-2 translation */
 	msr	vttbr_el2, xzr
 
+	cbz	x2, install_el2_stub
+
+	mov	w20, #BOOT_CPU_MODE_EL2		// This CPU booted in EL2
+	isb
+	ret
+
+install_el2_stub:
 	/* Hypervisor stub */
 	adrp	x0, __hyp_stub_vectors
 	add	x0, x0, #:lo12:__hyp_stub_vectors
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 119+ messages in thread
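
For readers less fluent in arm64 assembly, the EL2 setup above boils down to
the following C-like sketch (the helper names are hypothetical; the HCR_*
constants are the <asm/kvm_arm.h> ones used in the hunk):

/* VHE support is advertised in ID_AA64MMFR1_EL1 bits [11:8]. */
u64 mmfr1 = read_sysreg(id_aa64mmfr1_el1);
bool vhe = IS_ENABLED(CONFIG_ARM64_VHE) && ((mmfr1 >> 8) & 0xf);

u64 hcr = HCR_RW;			/* EL1 is AArch64 */
if (vhe)
	hcr |= HCR_TGE | HCR_E2H;	/* enable the Host Extensions */
write_sysreg(hcr, hcr_el2);

if (vhe)
	boot_cpu_mode = BOOT_CPU_MODE_EL2;	/* stay at EL2, skip the stub */
else
	install_el2_stub();			/* hypothetical helper */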

* [PATCH v3 23/23] arm64: Panic when VHE and non VHE CPUs coexist
@ 2016-02-03 18:00   ` Marc Zyngier
  0 siblings, 0 replies; 119+ messages in thread
From: Marc Zyngier @ 2016-02-03 18:00 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Christoffer Dall
  Cc: linux-arm-kernel, linux-kernel, kvm, kvmarm

Having both VHE and non-VHE capable CPUs in the same system
is likely to be a recipe for disaster.

If the boot CPU has VHE, but a secondary does not, we won't be
able to downgrade and run the kernel at EL1. Add CPU hotplug
to the mix, and this produces a terrifying mess.

Let's solve the problem once and for all. If you mix VHE and
non-VHE CPUs in the same system, you deserve to lose, and this
patch makes sure you don't get a chance.

This is implemented by storing the kernel execution level in
a global variable. Secondaries will park themselves in a
WFI loop if they observe a mismatch. Also, the primary CPU
will detect that the secondary CPU has died on a mismatched
execution level. Panic will follow.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/include/asm/virt.h | 17 +++++++++++++++++
 arch/arm64/kernel/head.S      | 20 ++++++++++++++++++++
 arch/arm64/kernel/smp.c       |  3 +++
 3 files changed, 40 insertions(+)

diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
index 9f22dd6..f81a345 100644
--- a/arch/arm64/include/asm/virt.h
+++ b/arch/arm64/include/asm/virt.h
@@ -36,6 +36,11 @@
  */
 extern u32 __boot_cpu_mode[2];
 
+/*
+ * __run_cpu_mode records the mode the boot CPU uses for the kernel.
+ */
+extern u32 __run_cpu_mode[2];
+
 void __hyp_set_vectors(phys_addr_t phys_vector_base);
 phys_addr_t __hyp_get_vectors(void);
 
@@ -60,6 +65,18 @@ static inline bool is_kernel_in_hyp_mode(void)
 	return el == CurrentEL_EL2;
 }
 
+static inline bool is_kernel_mode_mismatched(void)
+{
+	/*
+	 * A mismatched CPU will have written its own CurrentEL in
+	 * __run_cpu_mode[1] (initially set to zero) after failing to
+	 * match the value in __run_cpu_mode[0]. Thus, a non-zero
+	 * value in __run_cpu_mode[1] is enough to detect the
+	 * pathological case.
+	 */
+	return !!ACCESS_ONCE(__run_cpu_mode[1]);
+}
+
 /* The section containing the hypervisor text */
 extern char __hyp_text_start[];
 extern char __hyp_text_end[];
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 6f2f377..f9b6a5b 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -578,7 +578,24 @@ ENTRY(set_cpu_boot_mode_flag)
 1:	str	w20, [x1]			// This CPU has booted in EL1
 	dmb	sy
 	dc	ivac, x1			// Invalidate potentially stale cache line
+	adr_l	x1, __run_cpu_mode
+	ldr	w0, [x1]
+	mrs	x20, CurrentEL
+	cbz	x0, skip_el_check
+	cmp	x0, x20
+	bne	mismatched_el
 	ret
+skip_el_check:			// Only the first CPU gets to set the rule
+	str	w20, [x1]
+	dmb	sy
+	dc	ivac, x1	// Invalidate potentially stale cache line
+	ret
+mismatched_el:
+	str	w20, [x1, #4]
+	dmb	sy
+	dc	ivac, x1	// Invalidate potentially stale cache line
+1:	wfi
+	b	1b
 ENDPROC(set_cpu_boot_mode_flag)
 
 /*
@@ -593,6 +610,9 @@ ENDPROC(set_cpu_boot_mode_flag)
 ENTRY(__boot_cpu_mode)
 	.long	BOOT_CPU_MODE_EL2
 	.long	BOOT_CPU_MODE_EL1
+ENTRY(__run_cpu_mode)
+	.long	0
+	.long	0
 	.popsection
 
 	/*
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index b1adc51..bc7650a 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -113,6 +113,9 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
 			pr_crit("CPU%u: failed to come online\n", cpu);
 			ret = -EIO;
 		}
+
+		if (is_kernel_mode_mismatched())
+			panic("CPU%u: incompatible execution level", cpu);
 	} else {
 		pr_err("CPU%u: failed to boot: %d\n", cpu, ret);
 	}
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 119+ messages in thread
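
The __run_cpu_mode[] handshake can be modelled outside the kernel to see why
a non-zero second slot is a sufficient mismatch signal. A hedged user-space
sketch (the names mirror the patch; C11 atomics stand in for the dmb/dc
cache maintenance):

#include <stdatomic.h>
#include <stdbool.h>

static _Atomic unsigned int run_cpu_mode[2];	/* [0] = rule, [1] = mismatch */

static void set_cpu_boot_mode_flag(unsigned int current_el)
{
	unsigned int rule = atomic_load(&run_cpu_mode[0]);

	if (rule == 0)
		atomic_store(&run_cpu_mode[0], current_el);	/* first CPU sets the rule */
	else if (rule != current_el)
		atomic_store(&run_cpu_mode[1], current_el);	/* then park in WFI */
}

static bool is_kernel_mode_mismatched(void)
{
	/* Slot 1 is only ever written by a mismatched secondary. */
	return atomic_load(&run_cpu_mode[1]) != 0;
}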

* Re: [PATCH v3 11/23] arm64: KVM: VHE: Split save/restore of registers shared between guest and host
@ 2016-02-04 19:15     ` Christoffer Dall
  0 siblings, 0 replies; 119+ messages in thread
From: Christoffer Dall @ 2016-02-04 19:15 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Catalin Marinas, Will Deacon, linux-arm-kernel, linux-kernel,
	kvm, kvmarm

On Wed, Feb 03, 2016 at 06:00:04PM +0000, Marc Zyngier wrote:
> A handful of system registers are still shared between host and guest,
> even while using VHE (tpidr*_el[01] and actlr_el1).
> 
> Also, some of the vcpu state (sp_el0, PC and PSTATE) must be
> saved/restored on entry/exit, as they are used on the host as well.
> 
> In order to facilitate the introduction of a VHE-specific sysreg
> save/restore, move the access to these registers to their
> own save/restore functions.
> 
> No functionnal change.

uber nit: s/functionnal/functional/

> 
> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
>  arch/arm64/kvm/hyp/sysreg-sr.c | 48 +++++++++++++++++++++++++++++-------------
>  1 file changed, 33 insertions(+), 15 deletions(-)
> 
> diff --git a/arch/arm64/kvm/hyp/sysreg-sr.c b/arch/arm64/kvm/hyp/sysreg-sr.c
> index bd5b543..61bad17 100644
> --- a/arch/arm64/kvm/hyp/sysreg-sr.c
> +++ b/arch/arm64/kvm/hyp/sysreg-sr.c
> @@ -23,13 +23,29 @@
>  
>  #include "hyp.h"
>  
> -/* ctxt is already in the HYP VA space */
> +/*
> + * Non-VHE: Both host and guest must save everything.
> + *
> + * VHE: Host must save tpidr*_el[01], actlr_el1, sp0, pc, pstate, and
> + * guest must save everything.
> + */
> +
> +static void __hyp_text __sysreg_save_common_state(struct kvm_cpu_context *ctxt)
> +{
> +	ctxt->sys_regs[ACTLR_EL1]	= read_sysreg(actlr_el1);
> +	ctxt->sys_regs[TPIDR_EL0]	= read_sysreg(tpidr_el0);
> +	ctxt->sys_regs[TPIDRRO_EL0]	= read_sysreg(tpidrro_el0);
> +	ctxt->sys_regs[TPIDR_EL1]	= read_sysreg(tpidr_el1);
> +	ctxt->gp_regs.regs.sp		= read_sysreg(sp_el0);
> +	ctxt->gp_regs.regs.pc		= read_sysreg(elr_el2);
> +	ctxt->gp_regs.regs.pstate	= read_sysreg(spsr_el2);
> +}
> +
>  static void __hyp_text __sysreg_save_state(struct kvm_cpu_context *ctxt)
>  {
>  	ctxt->sys_regs[MPIDR_EL1]	= read_sysreg(vmpidr_el2);
>  	ctxt->sys_regs[CSSELR_EL1]	= read_sysreg(csselr_el1);
>  	ctxt->sys_regs[SCTLR_EL1]	= read_sysreg(sctlr_el1);
> -	ctxt->sys_regs[ACTLR_EL1]	= read_sysreg(actlr_el1);
>  	ctxt->sys_regs[CPACR_EL1]	= read_sysreg(cpacr_el1);
>  	ctxt->sys_regs[TTBR0_EL1]	= read_sysreg(ttbr0_el1);
>  	ctxt->sys_regs[TTBR1_EL1]	= read_sysreg(ttbr1_el1);
> @@ -41,17 +57,11 @@ static void __hyp_text __sysreg_save_state(struct kvm_cpu_context *ctxt)
>  	ctxt->sys_regs[MAIR_EL1]	= read_sysreg(mair_el1);
>  	ctxt->sys_regs[VBAR_EL1]	= read_sysreg(vbar_el1);
>  	ctxt->sys_regs[CONTEXTIDR_EL1]	= read_sysreg(contextidr_el1);
> -	ctxt->sys_regs[TPIDR_EL0]	= read_sysreg(tpidr_el0);
> -	ctxt->sys_regs[TPIDRRO_EL0]	= read_sysreg(tpidrro_el0);
> -	ctxt->sys_regs[TPIDR_EL1]	= read_sysreg(tpidr_el1);
>  	ctxt->sys_regs[AMAIR_EL1]	= read_sysreg(amair_el1);
>  	ctxt->sys_regs[CNTKCTL_EL1]	= read_sysreg(cntkctl_el1);
>  	ctxt->sys_regs[PAR_EL1]		= read_sysreg(par_el1);
>  	ctxt->sys_regs[MDSCR_EL1]	= read_sysreg(mdscr_el1);
>  
> -	ctxt->gp_regs.regs.sp		= read_sysreg(sp_el0);
> -	ctxt->gp_regs.regs.pc		= read_sysreg(elr_el2);
> -	ctxt->gp_regs.regs.pstate	= read_sysreg(spsr_el2);
>  	ctxt->gp_regs.sp_el1		= read_sysreg(sp_el1);
>  	ctxt->gp_regs.elr_el1		= read_sysreg(elr_el1);
>  	ctxt->gp_regs.spsr[KVM_SPSR_EL1]= read_sysreg(spsr_el1);
> @@ -60,11 +70,24 @@ static void __hyp_text __sysreg_save_state(struct kvm_cpu_context *ctxt)
>  void __hyp_text __sysreg_save_host_state(struct kvm_cpu_context *ctxt)
>  {
>  	__sysreg_save_state(ctxt);
> +	__sysreg_save_common_state(ctxt);
>  }
>  
>  void __hyp_text __sysreg_save_guest_state(struct kvm_cpu_context *ctxt)
>  {
>  	__sysreg_save_state(ctxt);
> +	__sysreg_save_common_state(ctxt);
> +}
> +
> +static void __hyp_text __sysreg_restore_common_state(struct kvm_cpu_context *ctxt)
> +{
> +	write_sysreg(ctxt->sys_regs[ACTLR_EL1],	  actlr_el1);
> +	write_sysreg(ctxt->sys_regs[TPIDR_EL0],	  tpidr_el0);
> +	write_sysreg(ctxt->sys_regs[TPIDRRO_EL0], tpidrro_el0);
> +	write_sysreg(ctxt->sys_regs[TPIDR_EL1],	  tpidr_el1);
> +	write_sysreg(ctxt->gp_regs.regs.sp,	  sp_el0);
> +	write_sysreg(ctxt->gp_regs.regs.pc,	  elr_el2);
> +	write_sysreg(ctxt->gp_regs.regs.pstate,	  spsr_el2);
>  }
>  
>  static void __hyp_text __sysreg_restore_state(struct kvm_cpu_context *ctxt)
> @@ -72,7 +95,6 @@ static void __hyp_text __sysreg_restore_state(struct kvm_cpu_context *ctxt)
>  	write_sysreg(ctxt->sys_regs[MPIDR_EL1],	  vmpidr_el2);
>  	write_sysreg(ctxt->sys_regs[CSSELR_EL1],  csselr_el1);
>  	write_sysreg(ctxt->sys_regs[SCTLR_EL1],	  sctlr_el1);
> -	write_sysreg(ctxt->sys_regs[ACTLR_EL1],	  actlr_el1);
>  	write_sysreg(ctxt->sys_regs[CPACR_EL1],	  cpacr_el1);
>  	write_sysreg(ctxt->sys_regs[TTBR0_EL1],	  ttbr0_el1);
>  	write_sysreg(ctxt->sys_regs[TTBR1_EL1],	  ttbr1_el1);
> @@ -84,17 +106,11 @@ static void __hyp_text __sysreg_restore_state(struct kvm_cpu_context *ctxt)
>  	write_sysreg(ctxt->sys_regs[MAIR_EL1],	  mair_el1);
>  	write_sysreg(ctxt->sys_regs[VBAR_EL1],	  vbar_el1);
>  	write_sysreg(ctxt->sys_regs[CONTEXTIDR_EL1], contextidr_el1);
> -	write_sysreg(ctxt->sys_regs[TPIDR_EL0],	  tpidr_el0);
> -	write_sysreg(ctxt->sys_regs[TPIDRRO_EL0], tpidrro_el0);
> -	write_sysreg(ctxt->sys_regs[TPIDR_EL1],	  tpidr_el1);
>  	write_sysreg(ctxt->sys_regs[AMAIR_EL1],	  amair_el1);
>  	write_sysreg(ctxt->sys_regs[CNTKCTL_EL1], cntkctl_el1);
>  	write_sysreg(ctxt->sys_regs[PAR_EL1],	  par_el1);
>  	write_sysreg(ctxt->sys_regs[MDSCR_EL1],	  mdscr_el1);
>  
> -	write_sysreg(ctxt->gp_regs.regs.sp,	sp_el0);
> -	write_sysreg(ctxt->gp_regs.regs.pc,	elr_el2);
> -	write_sysreg(ctxt->gp_regs.regs.pstate,	spsr_el2);
>  	write_sysreg(ctxt->gp_regs.sp_el1,	sp_el1);
>  	write_sysreg(ctxt->gp_regs.elr_el1,	elr_el1);
>  	write_sysreg(ctxt->gp_regs.spsr[KVM_SPSR_EL1], spsr_el1);
> @@ -103,11 +119,13 @@ static void __hyp_text __sysreg_restore_state(struct kvm_cpu_context *ctxt)
>  void __hyp_text __sysreg_restore_host_state(struct kvm_cpu_context *ctxt)
>  {
>  	__sysreg_restore_state(ctxt);
> +	__sysreg_restore_common_state(ctxt);
>  }
>  
>  void __hyp_text __sysreg_restore_guest_state(struct kvm_cpu_context *ctxt)
>  {
>  	__sysreg_restore_state(ctxt);
> +	__sysreg_restore_common_state(ctxt);
>  }
>  
>  void __hyp_text __sysreg32_save_state(struct kvm_vcpu *vcpu)
> -- 
> 2.1.4
> 

^ permalink raw reply	[flat|nested] 119+ messages in thread

* Re: [PATCH v3 19/23] arm64: KVM: Move most of the fault decoding to C
  2016-02-03 18:00   ` Marc Zyngier
@ 2016-02-04 19:20     ` Christoffer Dall
  -1 siblings, 0 replies; 119+ messages in thread
From: Christoffer Dall @ 2016-02-04 19:20 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Catalin Marinas, Will Deacon, linux-arm-kernel, linux-kernel,
	kvm, kvmarm

On Wed, Feb 03, 2016 at 06:00:12PM +0000, Marc Zyngier wrote:
> The fault decoding process (including computing the IPA in the case
> of a permission fault) would be much better done in C code, as we
> have a reasonable infrastructure to deal with the VHE/non-VHE
> differences.
> 
> Let's move the whole thing to C, including the workaround for
> erratum 834220, and just patch the odd ESR_EL2 access remaining
> in hyp-entry.S.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
>  arch/arm64/kernel/asm-offsets.c |  3 --
>  arch/arm64/kvm/hyp/hyp-entry.S  | 69 +++------------------------------
>  arch/arm64/kvm/hyp/switch.c     | 85 +++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 90 insertions(+), 67 deletions(-)
> 
> diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
> index fffa4ac6..b0ab4e9 100644
> --- a/arch/arm64/kernel/asm-offsets.c
> +++ b/arch/arm64/kernel/asm-offsets.c
> @@ -110,9 +110,6 @@ int main(void)
>    DEFINE(CPU_USER_PT_REGS,	offsetof(struct kvm_regs, regs));
>    DEFINE(CPU_FP_REGS,		offsetof(struct kvm_regs, fp_regs));
>    DEFINE(VCPU_FPEXC32_EL2,	offsetof(struct kvm_vcpu, arch.ctxt.sys_regs[FPEXC32_EL2]));
> -  DEFINE(VCPU_ESR_EL2,		offsetof(struct kvm_vcpu, arch.fault.esr_el2));
> -  DEFINE(VCPU_FAR_EL2,		offsetof(struct kvm_vcpu, arch.fault.far_el2));
> -  DEFINE(VCPU_HPFAR_EL2,	offsetof(struct kvm_vcpu, arch.fault.hpfar_el2));
>    DEFINE(VCPU_HOST_CONTEXT,	offsetof(struct kvm_vcpu, arch.host_cpu_context));
>  #endif
>  #ifdef CONFIG_CPU_PM
> diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
> index 1bdeee7..3488894 100644
> --- a/arch/arm64/kvm/hyp/hyp-entry.S
> +++ b/arch/arm64/kvm/hyp/hyp-entry.S
> @@ -19,7 +19,6 @@
>  
>  #include <asm/alternative.h>
>  #include <asm/assembler.h>
> -#include <asm/asm-offsets.h>
>  #include <asm/cpufeature.h>
>  #include <asm/kvm_arm.h>
>  #include <asm/kvm_asm.h>
> @@ -69,7 +68,11 @@ ENDPROC(__vhe_hyp_call)
>  el1_sync:				// Guest trapped into EL2
>  	save_x0_to_x3
>  
> +alternative_if_not ARM64_HAS_VIRT_HOST_EXTN
>  	mrs	x1, esr_el2
> +alternative_else
> +	mrs	x1, esr_el1
> +alternative_endif
>  	lsr	x2, x1, #ESR_ELx_EC_SHIFT
>  
>  	cmp	x2, #ESR_ELx_EC_HVC64
> @@ -105,72 +108,10 @@ el1_trap:
>  	cmp	x2, #ESR_ELx_EC_FP_ASIMD
>  	b.eq	__fpsimd_guest_restore
>  
> -	cmp	x2, #ESR_ELx_EC_DABT_LOW
> -	mov	x0, #ESR_ELx_EC_IABT_LOW
> -	ccmp	x2, x0, #4, ne
> -	b.ne	1f		// Not an abort we care about
> -
> -	/* This is an abort. Check for permission fault */
> -alternative_if_not ARM64_WORKAROUND_834220
> -	and	x2, x1, #ESR_ELx_FSC_TYPE
> -	cmp	x2, #FSC_PERM
> -	b.ne	1f		// Not a permission fault
> -alternative_else
> -	nop			// Use the permission fault path to
> -	nop			// check for a valid S1 translation,
> -	nop			// regardless of the ESR value.
> -alternative_endif
> -
> -	/*
> -	 * Check for Stage-1 page table walk, which is guaranteed
> -	 * to give a valid HPFAR_EL2.
> -	 */
> -	tbnz	x1, #7, 1f	// S1PTW is set
> -
> -	/* Preserve PAR_EL1 */
> -	mrs	x3, par_el1
> -	stp	x3, xzr, [sp, #-16]!
> -
> -	/*
> -	 * Permission fault, HPFAR_EL2 is invalid.
> -	 * Resolve the IPA the hard way using the guest VA.
> -	 * Stage-1 translation already validated the memory access rights.
> -	 * As such, we can use the EL1 translation regime, and don't have
> -	 * to distinguish between EL0 and EL1 access.
> -	 */
> -	mrs	x2, far_el2
> -	at	s1e1r, x2
> -	isb
> -
> -	/* Read result */
> -	mrs	x3, par_el1
> -	ldp	x0, xzr, [sp], #16	// Restore PAR_EL1 from the stack
> -	msr	par_el1, x0
> -	tbnz	x3, #0, 3f		// Bail out if we failed the translation
> -	ubfx	x3, x3, #12, #36	// Extract IPA
> -	lsl	x3, x3, #4		// and present it like HPFAR
> -	b	2f
> -
> -1:	mrs	x3, hpfar_el2
> -	mrs	x2, far_el2
> -
> -2:	mrs	x0, tpidr_el2
> -	str	w1, [x0, #VCPU_ESR_EL2]
> -	str	x2, [x0, #VCPU_FAR_EL2]
> -	str	x3, [x0, #VCPU_HPFAR_EL2]
> -
> +	mrs	x0, tpidr_el2
>  	mov	x1, #ARM_EXCEPTION_TRAP
>  	b	__guest_exit
>  
> -	/*
> -	 * Translation failed. Just return to the guest and
> -	 * let it fault again. Another CPU is probably playing
> -	 * behind our back.
> -	 */
> -3:	restore_x0_to_x3
> -
> -	eret
> -
>  el1_irq:
>  	save_x0_to_x3
>  	mrs	x0, tpidr_el2
> diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
> index e90683a..a192357 100644
> --- a/arch/arm64/kvm/hyp/switch.c
> +++ b/arch/arm64/kvm/hyp/switch.c
> @@ -15,6 +15,7 @@
>   * along with this program.  If not, see <http://www.gnu.org/licenses/>.
>   */
>  
> +#include <linux/types.h>
>  #include <asm/kvm_asm.h>
>  
>  #include "hyp.h"
> @@ -145,6 +146,86 @@ static void __hyp_text __vgic_restore_state(struct kvm_vcpu *vcpu)
>  	__vgic_call_restore_state()(vcpu);
>  }
>  
> +static bool __hyp_text __true_value(void)
> +{
> +	return true;
> +}
> +
> +static bool __hyp_text __false_value(void)
> +{
> +	return false;
> +}
> +
> +static hyp_alternate_select(__check_arm_834220,
> +			    __false_value, __true_value,
> +			    ARM64_WORKAROUND_834220);
> +
> +static bool __hyp_text __translate_far_to_hpfar(u64 far, u64 *hpfar)
> +{
> +	u64 par, tmp;
> +
> +	/*
> +	 * Resolve the IPA the hard way using the guest VA.
> +	 *
> +	 * Stage-1 translation already validated the memory access
> +	 * rights. As such, we can use the EL1 translation regime, and
> +	 * don't have to distinguish between EL0 and EL1 access.
> +	 *
> +	 * We do need to save/restore PAR_EL1 though, as we haven't
> +	 * saved the guest context yet, and we may return early...
> +	 */
> +	par = read_sysreg(par_el1);
> +	asm volatile("at s1e1r, %0" : : "r" (far));
> +	isb();
> +
> +	tmp = read_sysreg(par_el1);
> +	write_sysreg(par, par_el1);
> +
> +	if (unlikely(tmp & 1))
> +		return false; /* Translation failed, back to guest */
> +
> +	/* Convert PAR to HPFAR format */
> +	*hpfar = ((tmp >> 12) & ((1UL << 36) - 1)) << 4;
> +	return true;
> +}
> +
> +static bool __hyp_text __populate_fault_info(struct kvm_vcpu *vcpu)
> +{
> +	u64 esr = read_sysreg_el2(esr);
> +	u8 ec = esr >> ESR_ELx_EC_SHIFT;
> +	u64 hpfar, far;
> +
> +	vcpu->arch.fault.esr_el2 = esr;
> +
> +	if (ec != ESR_ELx_EC_DABT_LOW && ec != ESR_ELx_EC_IABT_LOW)
> +		return true;
> +
> +	far = read_sysreg_el2(far);
> +
> +	/*
> +	 * The HPFAR can be invalid if the stage 2 fault did not
> +	 * happen during a stage 1 page table walk (the ESR_EL2.S1PTW
> +	 * bit is clear) and one of the two following cases are true:
> +	 *   1. The fault was due to a permission fault
> +	 *   2. The processor carries errata 834220
> +	 *
> +	 * Therefore, for all non S1PTW faults where we either have a
> +	 * permission fault or the errata workaround is enabled, we
> +	 * resolve the IPA using the AT instruction.
> +	 */
> +	if (!(esr & ESR_ELx_S1PTW) &&
> +	    (__check_arm_834220()() || (esr & ESR_ELx_FSC_TYPE) == FSC_PERM)) {
> +		if (!__translate_far_to_hpfar(far, &hpfar))
> +			return false;
> +	} else {
> +		hpfar = read_sysreg(hpfar_el2);
> +	}
> +
> +	vcpu->arch.fault.far_el2 = far;
> +	vcpu->arch.fault.hpfar_el2 = hpfar;
> +	return true;
> +}
> +
>  static int __hyp_text __guest_run(struct kvm_vcpu *vcpu)
>  {
>  	struct kvm_cpu_context *host_ctxt;
> @@ -176,9 +257,13 @@ static int __hyp_text __guest_run(struct kvm_vcpu *vcpu)
>  	__debug_restore_state(vcpu, kern_hyp_va(vcpu->arch.debug_ptr), guest_ctxt);
>  
>  	/* Jump in the fire! */
> +again:
>  	exit_code = __guest_enter(vcpu, host_ctxt);
>  	/* And we're baaack! */
>  
> +	if (exit_code == ARM_EXCEPTION_TRAP && !__populate_fault_info(vcpu))
> +		goto again;
> +
>  	fp_enabled = __fpsimd_enabled();
>  
>  	__sysreg_save_guest_state(guest_ctxt);
> -- 
> 2.1.4
> 

Thanks for the rewrite; I find this code really nice now, especially
compared to the confusing job-security-creating stuff we had in
assembly before.

-Christoffer

^ permalink raw reply	[flat|nested] 119+ messages in thread
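
Two details in the patch above are worth unpacking. First, the
hyp_alternate_select() construct defines a function that returns a
function pointer, patched at boot time; hence the double call in
__check_arm_834220()(): the outer call picks the variant, the inner
call runs it. Second, the PAR_EL1 to HPFAR_EL2 conversion moves the
physical address field PAR[47:12] into HPFAR[39:4]. A worked example
with a made-up address (not taken from the patch):

	/* Assume the AT instruction resolved the IPA 0x87654000: */
	u64 par   = 0x0000000087654000;	/* bit 0 (F) clear: success */
	u64 hpfar = ((par >> 12) & ((1UL << 36) - 1)) << 4;

	/* par >> 12           == 0x87654   PA[47:12], the page number */
	/* & ((1UL << 36) - 1)    keeps the 36-bit page number field   */
	/* << 4                == 0x876540  HPFAR_EL2's FIPA[39:4]     */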

* Re: [PATCH v3 20/23] arm64: perf: Count EL2 events if the kernel is running in HYP
  2016-02-03 18:00   ` Marc Zyngier
@ 2016-02-04 19:23     ` Christoffer Dall
  -1 siblings, 0 replies; 119+ messages in thread
From: Christoffer Dall @ 2016-02-04 19:23 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Catalin Marinas, Will Deacon, linux-arm-kernel, linux-kernel,
	kvm, kvmarm

On Wed, Feb 03, 2016 at 06:00:13PM +0000, Marc Zyngier wrote:
> When the kernel is running in HYP (with VHE), it is necessary to
> include EL2 events if the user requests counting kernel or
> hypervisor events.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
>  arch/arm64/kernel/perf_event.c | 14 ++++++++++----
>  1 file changed, 10 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
> index f7ab14c..6013a38 100644
> --- a/arch/arm64/kernel/perf_event.c
> +++ b/arch/arm64/kernel/perf_event.c
> @@ -20,6 +20,7 @@
>   */
>  
>  #include <asm/irq_regs.h>
> +#include <asm/virt.h>
>  
>  #include <linux/of.h>
>  #include <linux/perf/arm_pmu.h>
> @@ -693,10 +694,15 @@ static int armv8pmu_set_event_filter(struct hw_perf_event *event,
>  		return -EPERM;
>  	if (attr->exclude_user)
>  		config_base |= ARMV8_EXCLUDE_EL0;
> -	if (attr->exclude_kernel)
> -		config_base |= ARMV8_EXCLUDE_EL1;
> -	if (!attr->exclude_hv)
> -		config_base |= ARMV8_INCLUDE_EL2;
> +	if (is_kernel_in_hyp_mode()) {
> +		if (!attr->exclude_kernel || !attr->exclude_hv)
> +			config_base |= ARMV8_INCLUDE_EL2;

is exclude_hv a generic perf construct?

is it really correct to count everything the host kernel does (even
unrelated to running VMs) when counting hypervisor events?

Other than my ignorance in this area, this patch looks fine to me.

Thanks,
-Christoffer

> +	} else {
> +		if (attr->exclude_kernel)
> +			config_base |= ARMV8_EXCLUDE_EL1;
> +		if (!attr->exclude_hv)
> +			config_base |= ARMV8_INCLUDE_EL2;
> +	}
>  
>  	/*
>  	 * Install the filter into config_base as this is used to
> -- 
> 2.1.4
> 

^ permalink raw reply	[flat|nested] 119+ messages in thread

* Re: [PATCH v3 23/23] arm64: Panic when VHE and non VHE CPUs coexist
  2016-02-03 18:00   ` Marc Zyngier
@ 2016-02-04 19:25     ` Christoffer Dall
  -1 siblings, 0 replies; 119+ messages in thread
From: Christoffer Dall @ 2016-02-04 19:25 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Catalin Marinas, Will Deacon, linux-arm-kernel, linux-kernel,
	kvm, kvmarm

On Wed, Feb 03, 2016 at 06:00:16PM +0000, Marc Zyngier wrote:
> Having both VHE and non-VHE capable CPUs in the same system
> is likely to be a recipe for disaster.
> 
> If the boot CPU has VHE, but a secondary is not, we won't be
> able to downgrade and run the kernel at EL1. Add CPU hotplug
> to the mix, and this produces a terrifying mess.
> 
> Let's solve the problem once and for all. If you mix VHE and
> non-VHE CPUs in the same system, you deserve to lose, and this
> patch makes sure you don't get a chance.
> 
> This is implemented by storing the kernel execution level in
> a global variable. Secondaries will park themselves in a
> WFI loop if they observe a mismatch. Also, the primary CPU
> will detect that the secondary CPU has died on a mismatched
> execution level. Panic will follow.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>

^ permalink raw reply	[flat|nested] 119+ messages in thread
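
The mechanism described in the commit message boils down to something
like the sketch below (names are illustrative; the actual patch records
the boot CPU's execution mode in its own variable and does the parking
in the secondary entry path):

	/* Recorded once, by the boot CPU */
	static bool boot_cpu_ran_in_hyp;	/* illustrative name */

	void record_boot_cpu_mode(void)
	{
		boot_cpu_ran_in_hyp = is_kernel_in_hyp_mode();
	}

	void secondary_mode_check(void)
	{
		if (is_kernel_in_hyp_mode() != boot_cpu_ran_in_hyp) {
			/*
			 * Mismatched exception level: park forever. The
			 * boot CPU notices that the secondary never came
			 * online and panics.
			 */
			while (1)
				wfi();
		}
	}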

* Re: [PATCH v3 22/23] arm64: VHE: Add support for running Linux in EL2 mode
  2016-02-03 18:00   ` Marc Zyngier
@ 2016-02-04 19:25     ` Christoffer Dall
  -1 siblings, 0 replies; 119+ messages in thread
From: Christoffer Dall @ 2016-02-04 19:25 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Catalin Marinas, Will Deacon, linux-arm-kernel, linux-kernel,
	kvm, kvmarm

On Wed, Feb 03, 2016 at 06:00:15PM +0000, Marc Zyngier wrote:
> With ARMv8.1 VHE, the architecture is able to (almost) transparently
> run the kernel at EL2, despite being written for EL1.
> 
> This patch takes care of the "almost" part, mostly preventing the kernel
> from dropping from EL2 to EL1, and setting up the HYP configuration.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>

^ permalink raw reply	[flat|nested] 119+ messages in thread
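
The "almost" in the commit message is largely about the boot path: a
VHE-capable CPU still enters the kernel with EL2 configured as a
classic hypervisor level, so the kernel must set HCR_EL2.E2H (bit 34)
and then refrain from dropping to EL1. Expressed as C for readability
(the patch does this in assembly in the early boot code; current_el()
and cpu_has_vhe() are illustrative helpers, not names from the patch):

	if (current_el() == CurrentEL_EL2 && cpu_has_vhe()) {
		u64 hcr = read_sysreg(hcr_el2);

		write_sysreg(hcr | HCR_E2H, hcr_el2);	/* EL2 hosts the OS */
		isb();
		/* ... carry on booting at EL2, skipping the drop to EL1 ... */
	}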

* Re: [PATCH v3 00/23] arm64: Virtualization Host Extension support
  2016-02-03 17:59 ` Marc Zyngier
@ 2016-02-04 19:26   ` Christoffer Dall
  -1 siblings, 0 replies; 119+ messages in thread
From: Christoffer Dall @ 2016-02-04 19:26 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Catalin Marinas, Will Deacon, linux-arm-kernel, linux-kernel,
	kvm, kvmarm

On Wed, Feb 03, 2016 at 05:59:53PM +0000, Marc Zyngier wrote:
> ARMv8.1 comes with the "Virtualization Host Extension" (VHE for
> short), which enables simpler support of Type-2 hypervisors.
> 
> This extension allows the kernel to directly run at EL2, and
> significantly reduces the number of system registers shared between
> host and guest, reducing the overhead of virtualization.
> 
> In order to have the same kernel binary running on all versions of the
> architecture, this series makes heavy use of runtime code patching.
> 
> The first 22 patches massage the KVM code to deal with VHE and enable
> Linux to run at EL2. The last patch catches an ugly case when VHE
> capable CPUs are paired with some of their less capable siblings. This
> should never happen, but hey...
> 
> I have deliberately left out some of the more "advanced"
> optimizations, as they are likely to distract the reviewer from the
> core infrastructure, which is what I care about at the moment.
> 
> Note: GDB is currently busted on VHE systems, as it checks for version
>       6 on the debug architecture, while VHE is version 7. The
>       binutils people are on the case.
> 
> This has been tested on the FVP_Base_SLV-V8-A model, and based on
> v4.5-rc2. I've put a branch out on:
> 
> git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms.git kvm-arm64/vhe
> 

You can have my reviewed-by on all patches that I didn't already review
or ack explicitly.

The only exception is the debug stuff, where I didn't manage to page in
the context, so hopefully someone familiar with that code can have a look.

Then this is ready to be queued.

Thanks,
-Christoffer

^ permalink raw reply	[flat|nested] 119+ messages in thread

* Re: [PATCH v3 00/23] arm64: Virtualization Host Extension support
  2016-02-04 19:26   ` Christoffer Dall
@ 2016-02-05  8:56     ` Marc Zyngier
  -1 siblings, 0 replies; 119+ messages in thread
From: Marc Zyngier @ 2016-02-05  8:56 UTC (permalink / raw)
  To: Christoffer Dall
  Cc: Catalin Marinas, Will Deacon, linux-arm-kernel, linux-kernel,
	kvm, kvmarm

On 04/02/16 19:26, Christoffer Dall wrote:
> On Wed, Feb 03, 2016 at 05:59:53PM +0000, Marc Zyngier wrote:
>> ARMv8.1 comes with the "Virtualization Host Extension" (VHE for
>> short), which enables simpler support of Type-2 hypervisors.
>>
>> This extension allows the kernel to directly run at EL2, and
>> significantly reduces the number of system registers shared between
>> host and guest, reducing the overhead of virtualization.
>>
>> In order to have the same kernel binary running on all versions of the
>> architecture, this series makes heavy use of runtime code patching.
>>
>> The first 22 patches massage the KVM code to deal with VHE and enable
>> Linux to run at EL2. The last patch catches an ugly case when VHE
>> capable CPUs are paired with some of their less capable siblings. This
>> should never happen, but hey...
>>
>> I have deliberately left out some of the more "advanced"
>> optimizations, as they are likely to distract the reviewer from the
>> core infrastructure, which is what I care about at the moment.
>>
>> Note: GDB is currently busted on VHE systems, as it checks for version
>>       6 on the debug architecture, while VHE is version 7. The
>>       binutils people are on the case.
>>
>> This has been tested on the FVP_Base_SLV-V8-A model, and based on
>> v4.5-rc2. I've put a branch out on:
>>
>> git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms.git kvm-arm64/vhe
>>
> 
> You can have my reviewed-by on all patches that I didn't already review
> or ack explicitly.

Thanks for the review, much appreciated.

> The only exception is the debug stuff, where I didn't manage to page in
> the context, so hopefully someone familiar with that code can have a look.

I think this will have to be Mr Deacon.

Thanks again,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 119+ messages in thread

* Re: [PATCH v3 20/23] arm64: perf: Count EL2 events if the kernel is running in HYP
  2016-02-04 19:23     ` Christoffer Dall
@ 2016-02-05  9:02       ` Marc Zyngier
  -1 siblings, 0 replies; 119+ messages in thread
From: Marc Zyngier @ 2016-02-05  9:02 UTC (permalink / raw)
  To: Christoffer Dall
  Cc: Catalin Marinas, Will Deacon, linux-arm-kernel, linux-kernel,
	kvm, kvmarm

On 04/02/16 19:23, Christoffer Dall wrote:
> On Wed, Feb 03, 2016 at 06:00:13PM +0000, Marc Zyngier wrote:
>> When the kernel is running in HYP (with VHE), it is necessary to
>> include EL2 events if the user requests counting kernel or
>> hypervisor events.
>>
>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
>> ---
>>  arch/arm64/kernel/perf_event.c | 14 ++++++++++----
>>  1 file changed, 10 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
>> index f7ab14c..6013a38 100644
>> --- a/arch/arm64/kernel/perf_event.c
>> +++ b/arch/arm64/kernel/perf_event.c
>> @@ -20,6 +20,7 @@
>>   */
>>  
>>  #include <asm/irq_regs.h>
>> +#include <asm/virt.h>
>>  
>>  #include <linux/of.h>
>>  #include <linux/perf/arm_pmu.h>
>> @@ -693,10 +694,15 @@ static int armv8pmu_set_event_filter(struct hw_perf_event *event,
>>  		return -EPERM;
>>  	if (attr->exclude_user)
>>  		config_base |= ARMV8_EXCLUDE_EL0;
>> -	if (attr->exclude_kernel)
>> -		config_base |= ARMV8_EXCLUDE_EL1;
>> -	if (!attr->exclude_hv)
>> -		config_base |= ARMV8_INCLUDE_EL2;
>> +	if (is_kernel_in_hyp_mode()) {
>> +		if (!attr->exclude_kernel || !attr->exclude_hv)
>> +			config_base |= ARMV8_INCLUDE_EL2;
> 
> is exclude_hv a generic perf construct?

Yes. It seems to be universally used on all architectures.

> is it really correct to count everything the host kernel does (even
> unrelated to running VMs) when counting hypervisor events?

With VHE, the roles are blurred, and the PMU can only count events
occurring while at a given exception level. It has no notion of context.

> Other than my ignorance in this area, this patch looks fine to me.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 119+ messages in thread
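
To make the mapping concrete: exclude_hv is one of the generic
bitfields in struct perf_event_attr, next to exclude_user and
exclude_kernel, and under VHE the patch deliberately collapses the
kernel/hypervisor distinction, since both now mean EL2. A sketch of the
effect (illustrative user settings; the ARMV8_* filter bits are the
ones from the patch):

	struct perf_event_attr attr = { 0 };

	attr.exclude_user   = 1;	/* no EL0 events */
	attr.exclude_kernel = 0;	/* do count the kernel */
	attr.exclude_hv     = 1;	/* nominally skip the hypervisor */

	/*
	 * Non-VHE: the kernel is at EL1, so only ARMV8_EXCLUDE_EL0 is
	 * set; EL1 is counted and EL2 is not.
	 * VHE: the kernel runs at EL2, so !exclude_kernel alone forces
	 * ARMV8_INCLUDE_EL2; EL2 events are counted despite exclude_hv,
	 * as "kernel" and "hypervisor" are no longer separable.
	 */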

* Re: [PATCH v3 03/23] arm/arm64: Add new is_kernel_in_hyp_mode predicate
  2016-02-03 17:59   ` Marc Zyngier
@ 2016-02-08 14:41     ` Catalin Marinas
  -1 siblings, 0 replies; 119+ messages in thread
From: Catalin Marinas @ 2016-02-08 14:41 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Will Deacon, Christoffer Dall, kvmarm, linux-kernel,
	linux-arm-kernel, kvm

On Wed, Feb 03, 2016 at 05:59:56PM +0000, Marc Zyngier wrote:
> With the ARMv8.1 VHE extension, it will be possible to run the kernel
> at EL2 (aka HYP mode). In order for the kernel to easily find out
> where it is running, add a new predicate that returns whether or
> not the kernel is in HYP mode.
> 
> For completeness, the 32bit code also gets such a predicate (always
> returning false) so that code common to both architectures (timers,
> KVM) can use it transparently.
> 
> Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

Acked-by: Catalin Marinas <catalin.marinas@arm.com>

^ permalink raw reply	[flat|nested] 119+ messages in thread

* Re: [PATCH v3 05/23] arm64: Add ARM64_HAS_VIRT_HOST_EXTN feature
  2016-02-03 17:59   ` Marc Zyngier
  (?)
@ 2016-02-08 14:42     ` Catalin Marinas
  -1 siblings, 0 replies; 119+ messages in thread
From: Catalin Marinas @ 2016-02-08 14:42 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Will Deacon, Christoffer Dall, kvmarm, linux-kernel,
	linux-arm-kernel, kvm

On Wed, Feb 03, 2016 at 05:59:58PM +0000, Marc Zyngier wrote:
> Add a new ARM64_HAS_VIRT_HOST_EXTN feature to indicate that the
> CPU has the ARMv8.1 VHE capability.
> 
> This will be used to trigger kernel patching in KVM.
> 
> Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

Acked-by: Catalin Marinas <catalin.marinas@arm.com>

^ permalink raw reply	[flat|nested] 119+ messages in thread

* Re: [PATCH v3 19/23] arm64: KVM: Move most of the fault decoding to C
  2016-02-03 18:00   ` Marc Zyngier
@ 2016-02-08 14:44     ` Catalin Marinas
  -1 siblings, 0 replies; 119+ messages in thread
From: Catalin Marinas @ 2016-02-08 14:44 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Will Deacon, Christoffer Dall, kvmarm, linux-kernel,
	linux-arm-kernel, kvm

On Wed, Feb 03, 2016 at 06:00:12PM +0000, Marc Zyngier wrote:
> The fault decoding process (including computing the IPA in the case
> of a permission fault) would be much better done in C code, as we
> have a reasonable infrastructure to deal with the VHE/non-VHE
> differences.
> 
> Let's move the whole thing to C, including the workaround for
> erratum 834220, and just patch the odd ESR_EL2 access remaining
> in hyp-entry.S.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

Acked-by: Catalin Marinas <catalin.marinas@arm.com>

^ permalink raw reply	[flat|nested] 119+ messages in thread

* Re: [PATCH v3 20/23] arm64: perf: Count EL2 events if the kernel is running in HYP
  2016-02-03 18:00   ` Marc Zyngier
  (?)
@ 2016-02-08 14:48     ` Catalin Marinas
  -1 siblings, 0 replies; 119+ messages in thread
From: Catalin Marinas @ 2016-02-08 14:48 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Will Deacon, Christoffer Dall, kvmarm, linux-kernel,
	linux-arm-kernel, kvm

On Wed, Feb 03, 2016 at 06:00:13PM +0000, Marc Zyngier wrote:
> When the kernel is running in HYP (with VHE), it is necessary to
> include EL2 events if the user requests counting kernel or
> hypervisor events.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

Acked-by: Catalin Marinas <catalin.marinas@arm.com>

^ permalink raw reply	[flat|nested] 119+ messages in thread

* Re: [PATCH v3 21/23] arm64: hw_breakpoint: Allow EL2 breakpoints if running in HYP
  2016-02-03 18:00   ` Marc Zyngier
@ 2016-02-08 15:56     ` Catalin Marinas
  -1 siblings, 0 replies; 119+ messages in thread
From: Catalin Marinas @ 2016-02-08 15:56 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Will Deacon, Christoffer Dall, kvmarm, linux-kernel,
	linux-arm-kernel, kvm

On Wed, Feb 03, 2016 at 06:00:14PM +0000, Marc Zyngier wrote:
> @@ -76,6 +59,36 @@ static inline void decode_ctrl_reg(u32 reg,
>  #define ARM_KERNEL_STEP_ACTIVE	1
>  #define ARM_KERNEL_STEP_SUSPEND	2
>  
> +#define DBG_HMC_HYP		(1 << 13)
> +#define DBG_SSC_HYP		(3 << 14)
> +
> +static inline u32 encode_ctrl_reg(struct arch_hw_breakpoint_ctrl ctrl)
> +{
> +	u32 val = (ctrl.len << 5) | (ctrl.type << 3) | ctrl.enabled;
> +
> +	if (is_kernel_in_hyp_mode() && ctrl.privilege == AARCH64_BREAKPOINT_EL1)
> +		val |= DBG_HMC_HYP | DBG_SSC_HYP;
> +	else
> +		val |= ctrl.privilege << 1;
> +
> +	return val;
> +}
> +
> +static inline void decode_ctrl_reg(u32 reg,
> +				   struct arch_hw_breakpoint_ctrl *ctrl)
> +{
> +	ctrl->enabled	= reg & 0x1;
> +	reg >>= 1;
> +	if (is_kernel_in_hyp_mode())
> +		ctrl->privilege = !!(reg & (DBG_HMC_HYP >> 1));

I don't particularly like this part as it's not clear just by looking at
the code that it, in fact, generates AARCH64_BREAKPOINT_EL1. I would
make this clearer:

	if (is_kernel_in_hyp_mode() && (reg & (DBG_HMC_HYP >> 1)))
		ctrl->privilege = AARCH64_BREAKPOINT_EL1;

Alternatively, you could define the PMC field value as:

#define AARCH64_BREAKPOINT_EL2	0

and change the privilege to EL1 after masking, something like:

	ctrl->privilege = reg & 0x3;
	if (ctrl->privilege == AARCH64_BREAKPOINT_EL2)
		ctrl->privilege = AARCH64_BREAKPOINT_EL1;

BTW, do we need to check is_kernel_in_hyp_mode() when decoding? Is there
anything else that could have set these SSC/HMC/PMC fields other than
encode_ctrl_reg()?

-- 
Catalin

^ permalink raw reply	[flat|nested] 119+ messages in thread

* Re: [PATCH v3 22/23] arm64: VHE: Add support for running Linux in EL2 mode
  2016-02-03 18:00   ` Marc Zyngier
  (?)
@ 2016-02-08 15:58     ` Catalin Marinas
  -1 siblings, 0 replies; 119+ messages in thread
From: Catalin Marinas @ 2016-02-08 15:58 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Will Deacon, Christoffer Dall, kvmarm, linux-kernel,
	linux-arm-kernel, kvm

On Wed, Feb 03, 2016 at 06:00:15PM +0000, Marc Zyngier wrote:
> With ARMv8.1 VHE, the architecture is able to (almost) transparently
> run the kernel at EL2, despite the kernel being written for EL1.
> 
> This patch takes care of the "almost" part, mostly preventing the kernel
> from dropping from EL2 to EL1, and setting up the HYP configuration.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

Acked-by: Catalin Marinas <catalin.marinas@arm.com>

^ permalink raw reply	[flat|nested] 119+ messages in thread

* Re: [PATCH v3 23/23] arm64: Panic when VHE and non VHE CPUs coexist
  2016-02-03 18:00   ` Marc Zyngier
@ 2016-02-08 16:04     ` Catalin Marinas
  -1 siblings, 0 replies; 119+ messages in thread
From: Catalin Marinas @ 2016-02-08 16:04 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Will Deacon, Christoffer Dall, kvmarm, linux-kernel,
	linux-arm-kernel, kvm

On Wed, Feb 03, 2016 at 06:00:16PM +0000, Marc Zyngier wrote:
> Having both VHE and non-VHE capable CPUs in the same system
> is likely to be a recipe for disaster.
> 
> If the boot CPU has VHE but a secondary does not, we won't be
> able to downgrade and run the kernel at EL1. Add CPU hotplug
> to the mix, and this produces a terrifying mess.
> 
> Let's solve the problem once and for all. If you mix VHE and
> non-VHE CPUs in the same system, you deserve to lose, and this
> patch makes sure you don't get a chance.
> 
> This is implemented by storing the kernel execution level in
> a global variable. Secondaries will park themselves in a
> WFI loop if they observe a mismatch. Also, the primary CPU
> will detect that the secondary CPU has died on a mismatched
> execution level. Panic will follow.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
>  arch/arm64/include/asm/virt.h | 17 +++++++++++++++++
>  arch/arm64/kernel/head.S      | 20 ++++++++++++++++++++
>  arch/arm64/kernel/smp.c       |  3 +++
>  3 files changed, 40 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
> index 9f22dd6..f81a345 100644
> --- a/arch/arm64/include/asm/virt.h
> +++ b/arch/arm64/include/asm/virt.h
> @@ -36,6 +36,11 @@
>   */
>  extern u32 __boot_cpu_mode[2];
>  
> +/*
> + * __run_cpu_mode records the mode the boot CPU uses for the kernel.
> + */
> +extern u32 __run_cpu_mode[2];
> +
>  void __hyp_set_vectors(phys_addr_t phys_vector_base);
>  phys_addr_t __hyp_get_vectors(void);
>  
> @@ -60,6 +65,18 @@ static inline bool is_kernel_in_hyp_mode(void)
>  	return el == CurrentEL_EL2;
>  }
>  
> +static inline bool is_kernel_mode_mismatched(void)
> +{
> +	/*
> +	 * A mismatched CPU will have written its own CurrentEL in
> +	 * __run_cpu_mode[1] (initially set to zero) after failing to
> +	 * match the value in __run_cpu_mode[0]. Thus, a non-zero
> +	 * value in __run_cpu_mode[1] is enough to detect the
> +	 * pathological case.
> +	 */
> +	return !!ACCESS_ONCE(__run_cpu_mode[1]);
> +}
> +
>  /* The section containing the hypervisor text */
>  extern char __hyp_text_start[];
>  extern char __hyp_text_end[];
> diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> index 6f2f377..f9b6a5b 100644
> --- a/arch/arm64/kernel/head.S
> +++ b/arch/arm64/kernel/head.S
> @@ -578,7 +578,24 @@ ENTRY(set_cpu_boot_mode_flag)
>  1:	str	w20, [x1]			// This CPU has booted in EL1
>  	dmb	sy
>  	dc	ivac, x1			// Invalidate potentially stale cache line
> +	adr_l	x1, __run_cpu_mode
> +	ldr	w0, [x1]
> +	mrs	x20, CurrentEL
> +	cbz	x0, skip_el_check
> +	cmp	x0, x20
> +	bne	mismatched_el
>  	ret
> +skip_el_check:			// Only the first CPU gets to set the rule
> +	str	w20, [x1]
> +	dmb	sy
> +	dc	ivac, x1	// Invalidate potentially stale cache line
> +	ret
> +mismatched_el:
> +	str	w20, [x1, #4]
> +	dmb	sy
> +	dc	ivac, x1	// Invalidate potentially stale cache line
> +1:	wfi
> +	b	1b
>  ENDPROC(set_cpu_boot_mode_flag)

Do we need to wait for the D-cache maintenance completion before
entering WFI (like issuing a DSB)? Same for the skip_el_check path
before the RET.

Otherwise:

Acked-by: Catalin Marinas <catalin.marinas@arm.com>

^ permalink raw reply	[flat|nested] 119+ messages in thread

* Re: [PATCH v3 23/23] arm64: Panic when VHE and non VHE CPUs coexist
  2016-02-08 16:04     ` Catalin Marinas
  (?)
@ 2016-02-08 16:24       ` Mark Rutland
  -1 siblings, 0 replies; 119+ messages in thread
From: Mark Rutland @ 2016-02-08 16:24 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: Marc Zyngier, kvm, Will Deacon, linux-kernel, kvmarm, linux-arm-kernel

On Mon, Feb 08, 2016 at 04:04:46PM +0000, Catalin Marinas wrote:
> On Wed, Feb 03, 2016 at 06:00:16PM +0000, Marc Zyngier wrote:
> > Having both VHE and non-VHE capable CPUs in the same system
> > is likely to be a recipe for disaster.
> > 
> > If the boot CPU has VHE but a secondary does not, we won't be
> > able to downgrade and run the kernel at EL1. Add CPU hotplug
> > to the mix, and this produces a terrifying mess.
> > 
> > Let's solve the problem once and for all. If you mix VHE and
> > non-VHE CPUs in the same system, you deserve to lose, and this
> > patch makes sure you don't get a chance.
> > 
> > This is implemented by storing the kernel execution level in
> > a global variable. Secondaries will park themselves in a
> > WFI loop if they observe a mismatch. Also, the primary CPU
> > will detect that the secondary CPU has died on a mismatched
> > execution level. Panic will follow.
> > 
> > Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> > ---
> >  arch/arm64/include/asm/virt.h | 17 +++++++++++++++++
> >  arch/arm64/kernel/head.S      | 20 ++++++++++++++++++++
> >  arch/arm64/kernel/smp.c       |  3 +++
> >  3 files changed, 40 insertions(+)
> > 
> > diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
> > index 9f22dd6..f81a345 100644
> > --- a/arch/arm64/include/asm/virt.h
> > +++ b/arch/arm64/include/asm/virt.h
> > @@ -36,6 +36,11 @@
> >   */
> >  extern u32 __boot_cpu_mode[2];
> >  
> > +/*
> > + * __run_cpu_mode records the mode the boot CPU uses for the kernel.
> > + */
> > +extern u32 __run_cpu_mode[2];
> > +
> >  void __hyp_set_vectors(phys_addr_t phys_vector_base);
> >  phys_addr_t __hyp_get_vectors(void);
> >  
> > @@ -60,6 +65,18 @@ static inline bool is_kernel_in_hyp_mode(void)
> >  	return el == CurrentEL_EL2;
> >  }
> >  
> > +static inline bool is_kernel_mode_mismatched(void)
> > +{
> > +	/*
> > +	 * A mismatched CPU will have written its own CurrentEL in
> > +	 * __run_cpu_mode[1] (initially set to zero) after failing to
> > +	 * match the value in __run_cpu_mode[0]. Thus, a non-zero
> > +	 * value in __run_cpu_mode[1] is enough to detect the
> > +	 * pathological case.
> > +	 */
> > +	return !!ACCESS_ONCE(__run_cpu_mode[1]);
> > +}
> > +
> >  /* The section containing the hypervisor text */
> >  extern char __hyp_text_start[];
> >  extern char __hyp_text_end[];
> > diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> > index 6f2f377..f9b6a5b 100644
> > --- a/arch/arm64/kernel/head.S
> > +++ b/arch/arm64/kernel/head.S
> > @@ -578,7 +578,24 @@ ENTRY(set_cpu_boot_mode_flag)
> >  1:	str	w20, [x1]			// This CPU has booted in EL1
> >  	dmb	sy
> >  	dc	ivac, x1			// Invalidate potentially stale cache line
> > +	adr_l	x1, __run_cpu_mode
> > +	ldr	w0, [x1]
> > +	mrs	x20, CurrentEL
> > +	cbz	x0, skip_el_check
> > +	cmp	x0, x20
> > +	bne	mismatched_el
> >  	ret
> > +skip_el_check:			// Only the first CPU gets to set the rule
> > +	str	w20, [x1]
> > +	dmb	sy
> > +	dc	ivac, x1	// Invalidate potentially stale cache line
> > +	ret
> > +mismatched_el:
> > +	str	w20, [x1, #4]
> > +	dmb	sy
> > +	dc	ivac, x1	// Invalidate potentially stale cache line
> > +1:	wfi
> > +	b	1b
> >  ENDPROC(set_cpu_boot_mode_flag)
> 
> Do we need to wait for the D-cache maintenance completion before
> entering WFI (like issuing a DSB)? Same for the skip_el_check path
> before the RET.

We would need that to complete the maintenance, yes.

However, given we're going into WFI immediately afterwards, and not
signalling completion to other CPUs, what we gain is somewhat
questionable.

Perhaps it's always better to do the maintenance on the read side (for
consistency with [1]).
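
For __run_cpu_mode, that would mean something along these lines on the
read side, mirroring what [1] does for __boot_cpu_mode (a sketch,
assuming the usual __flush_dcache_area() helper):

	static inline bool is_kernel_mode_mismatched(void)
	{
		/* the writer may have had its caches off; flush first */
		__flush_dcache_area(__run_cpu_mode, sizeof(__run_cpu_mode));
		return !!ACCESS_ONCE(__run_cpu_mode[1]);
	}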

Regardless, we'd probably want a DSB to ensure the write had
completed...
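
i.e. something like this on top of the hunk above (sketch only):

	mismatched_el:
		str	w20, [x1, #4]
		dmb	sy
		dc	ivac, x1	// Invalidate potentially stale cache line
		dsb	sy		// ... and wait for it to complete
	1:	wfi
		b	1b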

Mark.

[1] http://lists.infradead.org/pipermail/linux-arm-kernel/2016-February/404661.html

^ permalink raw reply	[flat|nested] 119+ messages in thread

* Re: [PATCH v3 21/23] arm64: hw_breakpoint: Allow EL2 breakpoints if running in HYP
  2016-02-08 15:56     ` Catalin Marinas
@ 2016-02-08 16:45       ` Marc Zyngier
  -1 siblings, 0 replies; 119+ messages in thread
From: Marc Zyngier @ 2016-02-08 16:45 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: Will Deacon, Christoffer Dall, kvmarm, linux-kernel,
	linux-arm-kernel, kvm

On 08/02/16 15:56, Catalin Marinas wrote:
> On Wed, Feb 03, 2016 at 06:00:14PM +0000, Marc Zyngier wrote:
>> @@ -76,6 +59,36 @@ static inline void decode_ctrl_reg(u32 reg,
>>  #define ARM_KERNEL_STEP_ACTIVE	1
>>  #define ARM_KERNEL_STEP_SUSPEND	2
>>  
>> +#define DBG_HMC_HYP		(1 << 13)
>> +#define DBG_SSC_HYP		(3 << 14)
>> +
>> +static inline u32 encode_ctrl_reg(struct arch_hw_breakpoint_ctrl ctrl)
>> +{
>> +	u32 val = (ctrl.len << 5) | (ctrl.type << 3) | ctrl.enabled;
>> +
>> +	if (is_kernel_in_hyp_mode() && ctrl.privilege == AARCH64_BREAKPOINT_EL1)
>> +		val |= DBG_HMC_HYP | DBG_SSC_HYP;
>> +	else
>> +		val |= ctrl.privilege << 1;
>> +
>> +	return val;
>> +}
>> +
>> +static inline void decode_ctrl_reg(u32 reg,
>> +				   struct arch_hw_breakpoint_ctrl *ctrl)
>> +{
>> +	ctrl->enabled	= reg & 0x1;
>> +	reg >>= 1;
>> +	if (is_kernel_in_hyp_mode())
>> +		ctrl->privilege = !!(reg & (DBG_HMC_HYP >> 1));
> 
> I don't particularly like this part as it's not clear just by looking at
> the code that it, in fact, generates AARCH64_BREAKPOINT_EL1. I would
> make this clearer:
> 
> 	if (is_kernel_in_hyp_mode() && (reg & (DBG_HMC_HYP >> 1)))
> 		ctrl->privilege = AARCH64_BREAKPOINT_EL1;
> 
> Alternatively, you could define the PMC field value as:
> 
> #define AARCH64_BREAKPOINT_EL2	0
> 
> and change the privilege to EL1 after masking, something like:
> 
> 	ctrl->privilege = reg & 0x3;
> 	if (ctrl->privilege == AARCH64_BREAKPOINT_EL2)
> 		ctrl->privilege = AARCH64_BREAKPOINT_EL1;
> 
> BTW, do we need to check is_kernel_in_hyp_mode() when decoding? Is there
> anything else that could have set this SSC/HMC/PMC fields other than
> encode_ctrl_reg()?

I was being overzealous, and your solution is clearly better. I ended
up with the following:

diff --git a/arch/arm64/include/asm/hw_breakpoint.h b/arch/arm64/include/asm/hw_breakpoint.h
index 9732908..c872b2f 100644
--- a/arch/arm64/include/asm/hw_breakpoint.h
+++ b/arch/arm64/include/asm/hw_breakpoint.h
@@ -18,6 +18,7 @@
 
 #include <asm/cputype.h>
 #include <asm/cpufeature.h>
+#include <asm/virt.h>
 
 #ifdef __KERNEL__
 
@@ -35,24 +36,6 @@ struct arch_hw_breakpoint {
 	struct arch_hw_breakpoint_ctrl ctrl;
 };
 
-static inline u32 encode_ctrl_reg(struct arch_hw_breakpoint_ctrl ctrl)
-{
-	return (ctrl.len << 5) | (ctrl.type << 3) | (ctrl.privilege << 1) |
-		ctrl.enabled;
-}
-
-static inline void decode_ctrl_reg(u32 reg,
-				   struct arch_hw_breakpoint_ctrl *ctrl)
-{
-	ctrl->enabled	= reg & 0x1;
-	reg >>= 1;
-	ctrl->privilege	= reg & 0x3;
-	reg >>= 2;
-	ctrl->type	= reg & 0x3;
-	reg >>= 2;
-	ctrl->len	= reg & 0xff;
-}
-
 /* Breakpoint */
 #define ARM_BREAKPOINT_EXECUTE	0
 
@@ -62,6 +45,7 @@ static inline void decode_ctrl_reg(u32 reg,
 #define AARCH64_ESR_ACCESS_MASK	(1 << 6)
 
 /* Privilege Levels */
+#define AARCH64_BREAKPOINT_EL2	0
 #define AARCH64_BREAKPOINT_EL1	1
 #define AARCH64_BREAKPOINT_EL0	2
 
@@ -76,6 +60,35 @@ static inline void decode_ctrl_reg(u32 reg,
 #define ARM_KERNEL_STEP_ACTIVE	1
 #define ARM_KERNEL_STEP_SUSPEND	2
 
+#define DBG_HMC_HYP		(1 << 13)
+#define DBG_SSC_HYP		(3 << 14)
+
+static inline u32 encode_ctrl_reg(struct arch_hw_breakpoint_ctrl ctrl)
+{
+	u32 val = (ctrl.len << 5) | (ctrl.type << 3) | ctrl.enabled;
+
+	if (is_kernel_in_hyp_mode() && ctrl.privilege == AARCH64_BREAKPOINT_EL1)
+		val |= DBG_HMC_HYP | DBG_SSC_HYP;
+	else
+		val |= ctrl.privilege << 1;
+
+	return val;
+}
+
+static inline void decode_ctrl_reg(u32 reg,
+				   struct arch_hw_breakpoint_ctrl *ctrl)
+{
+	ctrl->enabled	= reg & 0x1;
+	reg >>= 1;
+	ctrl->privilege	= reg & 0x3;
+	if (ctrl->privilege == AARCH64_BREAKPOINT_EL2)
+		ctrl->privilege	= AARCH64_BREAKPOINT_EL1;
+	reg >>= 2;
+	ctrl->type	= reg & 0x3;
+	reg >>= 2;
+	ctrl->len	= reg & 0xff;
+}
+
 /*
  * Limits.
  * Changing these will require modifications to the register accessors.
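
For what it's worth, an EL1 breakpoint under VHE now round-trips as
expected (quick sketch):

	struct arch_hw_breakpoint_ctrl c = {
		.privilege	= AARCH64_BREAKPOINT_EL1,
		.enabled	= 1,
	};
	u32 reg = encode_ctrl_reg(c);	/* SSC/HMC set, PMC == 0b00 */
	decode_ctrl_reg(reg, &c);	/* privilege back to EL1 */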

Was that what you had in mind?

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply related	[flat|nested] 119+ messages in thread

* Re: [PATCH v3 21/23] arm64: hw_breakpoint: Allow EL2 breakpoints if running in HYP
  2016-02-08 16:45       ` Marc Zyngier
@ 2016-02-08 16:52         ` Catalin Marinas
  -1 siblings, 0 replies; 119+ messages in thread
From: Catalin Marinas @ 2016-02-08 16:52 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvm, Will Deacon, linux-kernel, Christoffer Dall, kvmarm,
	linux-arm-kernel

On Mon, Feb 08, 2016 at 04:45:44PM +0000, Marc Zyngier wrote:
> I was being overzealous, and your solution is clearly better. I ended
> up with the following:
> 
> diff --git a/arch/arm64/include/asm/hw_breakpoint.h b/arch/arm64/include/asm/hw_breakpoint.h
> index 9732908..c872b2f 100644
> --- a/arch/arm64/include/asm/hw_breakpoint.h
> +++ b/arch/arm64/include/asm/hw_breakpoint.h
[...]
> @@ -62,6 +45,7 @@ static inline void decode_ctrl_reg(u32 reg,
>  #define AARCH64_ESR_ACCESS_MASK	(1 << 6)
>  
>  /* Privilege Levels */
> +#define AARCH64_BREAKPOINT_EL2	0
>  #define AARCH64_BREAKPOINT_EL1	1
>  #define AARCH64_BREAKPOINT_EL0	2
>  
> @@ -76,6 +60,35 @@ static inline void decode_ctrl_reg(u32 reg,
>  #define ARM_KERNEL_STEP_ACTIVE	1
>  #define ARM_KERNEL_STEP_SUSPEND	2
>  
> +#define DBG_HMC_HYP		(1 << 13)
> +#define DBG_SSC_HYP		(3 << 14)
> +
> +static inline u32 encode_ctrl_reg(struct arch_hw_breakpoint_ctrl ctrl)
> +{
> +	u32 val = (ctrl.len << 5) | (ctrl.type << 3) | ctrl.enabled;
> +
> +	if (is_kernel_in_hyp_mode() && ctrl.privilege == AARCH64_BREAKPOINT_EL1)
> +		val |= DBG_HMC_HYP | DBG_SSC_HYP;

Nitpick, for completeness:

		val |= DBG_HMC_HYP | DBG_SSC_HYP | (AARCH64_BREAKPOINT_EL2 << 1);

> +	else
> +		val |= ctrl.privilege << 1;
> +
> +	return val;
> +}
> +
> +static inline void decode_ctrl_reg(u32 reg,
> +				   struct arch_hw_breakpoint_ctrl *ctrl)
> +{
> +	ctrl->enabled	= reg & 0x1;
> +	reg >>= 1;
> +	ctrl->privilege	= reg & 0x3;
> +	if (ctrl->privilege == AARCH64_BREAKPOINT_EL2)
> +		ctrl->privilege	= AARCH64_BREAKPOINT_EL1;
> +	reg >>= 2;
> +	ctrl->type	= reg & 0x3;
> +	reg >>= 2;
> +	ctrl->len	= reg & 0xff;
> +}
> +
>  /*
>   * Limits.
>   * Changing these will require modifications to the register accessors.
> 
> Was that what you had in mind?

Looks fine:

Acked-by: Catalin Marinas <catalin.marinas@arm.com>

^ permalink raw reply	[flat|nested] 119+ messages in thread

* Re: [PATCH v3 23/23] arm64: Panic when VHE and non VHE CPUs coexist
  2016-02-08 16:24       ` Mark Rutland
@ 2016-02-09 12:02         ` Catalin Marinas
  -1 siblings, 0 replies; 119+ messages in thread
From: Catalin Marinas @ 2016-02-09 12:02 UTC (permalink / raw)
  To: Mark Rutland
  Cc: kvm, Marc Zyngier, Will Deacon, linux-kernel, kvmarm, linux-arm-kernel

On Mon, Feb 08, 2016 at 04:24:03PM +0000, Mark Rutland wrote:
> On Mon, Feb 08, 2016 at 04:04:46PM +0000, Catalin Marinas wrote:
> > On Wed, Feb 03, 2016 at 06:00:16PM +0000, Marc Zyngier wrote:
> > > diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> > > index 6f2f377..f9b6a5b 100644
> > > --- a/arch/arm64/kernel/head.S
> > > +++ b/arch/arm64/kernel/head.S
> > > @@ -578,7 +578,24 @@ ENTRY(set_cpu_boot_mode_flag)
> > >  1:	str	w20, [x1]			// This CPU has booted in EL1
> > >  	dmb	sy
> > >  	dc	ivac, x1			// Invalidate potentially stale cache line
> > > +	adr_l	x1, __run_cpu_mode
> > > +	ldr	w0, [x1]
> > > +	mrs	x20, CurrentEL
> > > +	cbz	x0, skip_el_check
> > > +	cmp	x0, x20
> > > +	bne	mismatched_el
> > >  	ret
> > > +skip_el_check:			// Only the first CPU gets to set the rule
> > > +	str	w20, [x1]
> > > +	dmb	sy
> > > +	dc	ivac, x1	// Invalidate potentially stale cache line
> > > +	ret
> > > +mismatched_el:
> > > +	str	w20, [x1, #4]
> > > +	dmb	sy
> > > +	dc	ivac, x1	// Invalidate potentially stale cache line
> > > +1:	wfi
> > > +	b	1b
> > >  ENDPROC(set_cpu_boot_mode_flag)
> > 
> > Do we need to wait for the D-cache maintenance completion before
> > entering WFI (like issuing a DSB)? Same for the skip_el_check path
> > before the RET.
> 
> We would need that to complete the maintenance, yes.
> 
> However, given we're going into WFI immediately afterwards, and not
> signalling completion to other CPUs, what we gain is somewhat
> questionable.
> 
> Perhaps it's always better to do the maintenance on the read side (for
> consistency with [1]).
> 
> [1] http://lists.infradead.org/pipermail/linux-arm-kernel/2016-February/404661.html

While [1] looks clearer to me, it seems that it is not consistent with
the rest of the head.S file. Maybe we can clean them up as a separate
patch and leave the dc ivac here. We also seem to be missing barriers
in other similar situations in head.S.

-- 
Catalin

^ permalink raw reply	[flat|nested] 119+ messages in thread
