* [PATCH 0/5] KVM: arm64: Restrict host hypercalls when pKVM is enabled
@ 2021-09-23 11:22 ` Will Deacon
  0 siblings, 0 replies; 34+ messages in thread
From: Will Deacon @ 2021-09-23 11:22 UTC (permalink / raw)
  To: linux-arm-kernel; +Cc: Marc Zyngier, Catalin Marinas, Will Deacon, kvmarm

Hi folks,

This series restricts the hypercalls available to the KVM host on arm64
when pKVM is enabled so that it is not possible for the host to use them
to replace the EL2 component with something else.

This occurs in two stages: when switching to the pKVM vectors, the stub
hypercalls are removed and then later when pKVM is finalised, the pKVM
init hypercalls are removed.

There are still a few remaining calls that are dubious from the point of
view of protecting the guest (e.g. __kvm_adjust_pc), but these will be
dealt with later, once we have more VM state at EL2 to play with.

Patches based on -rc2. Feedback welcome.

Cheers,

Will

Cc: Marc Zyngier <maz@kernel.org>
Cc: Quentin Perret <qperret@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Alexandru Elisei <alexandru.elisei@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: kvmarm@lists.cs.columbia.edu

--->8

Will Deacon (5):
  arm64: Prevent kexec and hibernation if is_protected_kvm_enabled()
  KVM: arm64: Reject stub hypercalls after pKVM has been initialised
  KVM: arm64: Propagate errors from __pkvm_prot_finalize hypercall
  KVM: arm64: Prevent re-finalisation of pKVM for a given CPU
  KVM: arm64: Disable privileged hypercalls after pKVM finalisation

 arch/arm64/include/asm/kvm_asm.h      | 43 ++++++++++---------
 arch/arm64/kernel/smp.c               |  3 +-
 arch/arm64/kvm/arm.c                  | 61 ++++++++++++++++++---------
 arch/arm64/kvm/hyp/nvhe/host.S        | 26 ++++++++----
 arch/arm64/kvm/hyp/nvhe/hyp-main.c    | 26 +++++++-----
 arch/arm64/kvm/hyp/nvhe/mem_protect.c |  3 ++
 6 files changed, 103 insertions(+), 59 deletions(-)

-- 
2.33.0.464.g1972c5931b-goog

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH 1/5] arm64: Prevent kexec and hibernation if is_protected_kvm_enabled()
  2021-09-23 11:22 ` Will Deacon
@ 2021-09-23 11:22   ` Will Deacon
  -1 siblings, 0 replies; 34+ messages in thread
From: Will Deacon @ 2021-09-23 11:22 UTC (permalink / raw)
  To: linux-arm-kernel; +Cc: Marc Zyngier, Catalin Marinas, Will Deacon, kvmarm

When pKVM is enabled, the hypervisor code at EL2 and its data structures
are inaccessible to the host kernel and cannot be torn down or replaced
as this would defeat the integrity properties which pKVM aims to provide.
Furthermore, the ABI between the host and EL2 is flexible and private to
whatever the current implementation of KVM requires and so booting a new
kernel with an old EL2 component is very likely to end in disaster.

In preparation for uninstalling the hyp stub calls which are relied upon
to reset EL2, disable kexec and hibernation in the host when protected
KVM is enabled.

Cc: Marc Zyngier <maz@kernel.org>
Cc: Quentin Perret <qperret@google.com>
Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/smp.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 6f6ff072acbd..44369b99a57e 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -1128,5 +1128,6 @@ bool cpus_are_stuck_in_kernel(void)
 {
 	bool smp_spin_tables = (num_possible_cpus() > 1 && !have_cpu_die());
 
-	return !!cpus_stuck_in_kernel || smp_spin_tables;
+	return !!cpus_stuck_in_kernel || smp_spin_tables ||
+		is_protected_kvm_enabled();
 }
-- 
2.33.0.464.g1972c5931b-goog


* [PATCH 2/5] KVM: arm64: Reject stub hypercalls after pKVM has been initialised
  2021-09-23 11:22 ` Will Deacon
@ 2021-09-23 11:22   ` Will Deacon
  -1 siblings, 0 replies; 34+ messages in thread
From: Will Deacon @ 2021-09-23 11:22 UTC (permalink / raw)
  To: linux-arm-kernel; +Cc: Marc Zyngier, Catalin Marinas, Will Deacon, kvmarm

The stub hypercalls provide mechanisms to reset and replace the EL2 code,
so uninstall them once pKVM has been initialised in order to ensure the
integrity of the hypervisor code.

To ensure pKVM initialisation remains functional, split cpu_hyp_reinit()
into two helper functions, separating usage of the stub from usage of
the pKVM hypercalls on either side of __pkvm_init on the boot CPU.

Cc: Marc Zyngier <maz@kernel.org>
Cc: Quentin Perret <qperret@google.com>
Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/arm64/kvm/arm.c           | 31 +++++++++++++++++++++++--------
 arch/arm64/kvm/hyp/nvhe/host.S | 26 +++++++++++++++++---------
 2 files changed, 40 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index fe102cd2e518..9506cf88fa0e 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1579,25 +1579,33 @@ static void cpu_set_hyp_vector(void)
 		kvm_call_hyp_nvhe(__pkvm_cpu_set_vector, data->slot);
 }
 
-static void cpu_hyp_reinit(void)
+static void cpu_hyp_init_context(void)
 {
 	kvm_init_host_cpu_context(&this_cpu_ptr_hyp_sym(kvm_host_data)->host_ctxt);
 
-	cpu_hyp_reset();
-
-	if (is_kernel_in_hyp_mode())
-		kvm_timer_init_vhe();
-	else
+	if (!is_kernel_in_hyp_mode())
 		cpu_init_hyp_mode();
+}
 
+static void cpu_hyp_init_features(void)
+{
 	cpu_set_hyp_vector();
-
 	kvm_arm_init_debug();
 
+	if (is_kernel_in_hyp_mode())
+		kvm_timer_init_vhe();
+
 	if (vgic_present)
 		kvm_vgic_init_cpu_hardware();
 }
 
+static void cpu_hyp_reinit(void)
+{
+	cpu_hyp_reset();
+	cpu_hyp_init_context();
+	cpu_hyp_init_features();
+}
+
 static void _kvm_arch_hardware_enable(void *discard)
 {
 	if (!__this_cpu_read(kvm_arm_hardware_enabled)) {
@@ -1788,10 +1796,17 @@ static int do_pkvm_init(u32 hyp_va_bits)
 	int ret;
 
 	preempt_disable();
-	hyp_install_host_vector();
+	cpu_hyp_init_context();
 	ret = kvm_call_hyp_nvhe(__pkvm_init, hyp_mem_base, hyp_mem_size,
 				num_possible_cpus(), kern_hyp_va(per_cpu_base),
 				hyp_va_bits);
+	cpu_hyp_init_features();
+
+	/*
+	 * The stub hypercalls are now disabled, so set our local flag to
+	 * prevent a later re-init attempt in kvm_arch_hardware_enable().
+	 */
+	__this_cpu_write(kvm_arm_hardware_enabled, 1);
 	preempt_enable();
 
 	return ret;
diff --git a/arch/arm64/kvm/hyp/nvhe/host.S b/arch/arm64/kvm/hyp/nvhe/host.S
index 4b652ffb591d..0c6116d34e18 100644
--- a/arch/arm64/kvm/hyp/nvhe/host.S
+++ b/arch/arm64/kvm/hyp/nvhe/host.S
@@ -110,17 +110,14 @@ SYM_FUNC_START(__hyp_do_panic)
 	b	__host_enter_for_panic
 SYM_FUNC_END(__hyp_do_panic)
 
-.macro host_el1_sync_vect
-	.align 7
-.L__vect_start\@:
-	stp	x0, x1, [sp, #-16]!
-	mrs	x0, esr_el2
-	lsr	x0, x0, #ESR_ELx_EC_SHIFT
-	cmp	x0, #ESR_ELx_EC_HVC64
-	b.ne	__host_exit
-
+SYM_FUNC_START(__host_hvc)
 	ldp	x0, x1, [sp]		// Don't fixup the stack yet
 
+	/* No stub for you, sonny Jim */
+alternative_if ARM64_KVM_PROTECTED_MODE
+	b	__host_exit
+alternative_else_nop_endif
+
 	/* Check for a stub HVC call */
 	cmp	x0, #HVC_STUB_HCALL_NR
 	b.hs	__host_exit
@@ -137,6 +134,17 @@ SYM_FUNC_END(__hyp_do_panic)
 	ldr	x5, =__kvm_handle_stub_hvc
 	hyp_pa	x5, x6
 	br	x5
+SYM_FUNC_END(__host_hvc)
+
+.macro host_el1_sync_vect
+	.align 7
+.L__vect_start\@:
+	stp	x0, x1, [sp, #-16]!
+	mrs	x0, esr_el2
+	lsr	x0, x0, #ESR_ELx_EC_SHIFT
+	cmp	x0, #ESR_ELx_EC_HVC64
+	b.eq	__host_hvc
+	b	__host_exit
 .L__vect_end\@:
 .if ((.L__vect_end\@ - .L__vect_start\@) > 0x80)
 	.error "host_el1_sync_vect larger than vector entry"
-- 
2.33.0.464.g1972c5931b-goog


* [PATCH 3/5] KVM: arm64: Propagate errors from __pkvm_prot_finalize hypercall
  2021-09-23 11:22 ` Will Deacon
@ 2021-09-23 11:22   ` Will Deacon
  -1 siblings, 0 replies; 34+ messages in thread
From: Will Deacon @ 2021-09-23 11:22 UTC (permalink / raw)
  To: linux-arm-kernel; +Cc: Marc Zyngier, Catalin Marinas, Will Deacon, kvmarm

If the __pkvm_prot_finalize hypercall returns an error, we WARN but fail
to propagate the failure code back to kvm_arch_init().

Pass a pointer to a zero-initialised return variable so that failure
to finalise the pKVM protections on a host CPU can be reported back to
KVM.

Cc: Marc Zyngier <maz@kernel.org>
Cc: Quentin Perret <qperret@google.com>
Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/arm64/kvm/arm.c | 30 +++++++++++++++++++-----------
 1 file changed, 19 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 9506cf88fa0e..13bbf35896cd 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1986,9 +1986,25 @@ static int init_hyp_mode(void)
 	return err;
 }
 
-static void _kvm_host_prot_finalize(void *discard)
+static void _kvm_host_prot_finalize(void *arg)
 {
-	WARN_ON(kvm_call_hyp_nvhe(__pkvm_prot_finalize));
+	int *err = arg;
+
+	if (WARN_ON(kvm_call_hyp_nvhe(__pkvm_prot_finalize)))
+		WRITE_ONCE(*err, -EINVAL);
+}
+
+static int pkvm_drop_host_privileges(void)
+{
+	int ret = 0;
+
+	/*
+	 * Flip the static key upfront as that may no longer be possible
+	 * once the host stage 2 is installed.
+	 */
+	static_branch_enable(&kvm_protected_mode_initialized);
+	on_each_cpu(_kvm_host_prot_finalize, &ret, 1);
+	return ret;
 }
 
 static int finalize_hyp_mode(void)
@@ -2002,15 +2018,7 @@ static int finalize_hyp_mode(void)
 	 * None of other sections should ever be introspected.
 	 */
 	kmemleak_free_part(__hyp_bss_start, __hyp_bss_end - __hyp_bss_start);
-
-	/*
-	 * Flip the static key upfront as that may no longer be possible
-	 * once the host stage 2 is installed.
-	 */
-	static_branch_enable(&kvm_protected_mode_initialized);
-	on_each_cpu(_kvm_host_prot_finalize, NULL, 1);
-
-	return 0;
+	return pkvm_drop_host_privileges();
 }
 
 struct kvm_vcpu *kvm_mpidr_to_vcpu(struct kvm *kvm, unsigned long mpidr)
-- 
2.33.0.464.g1972c5931b-goog


* [PATCH 4/5] KVM: arm64: Prevent re-finalisation of pKVM for a given CPU
  2021-09-23 11:22 ` Will Deacon
@ 2021-09-23 11:22   ` Will Deacon
  -1 siblings, 0 replies; 34+ messages in thread
From: Will Deacon @ 2021-09-23 11:22 UTC (permalink / raw)
  To: linux-arm-kernel; +Cc: Marc Zyngier, Catalin Marinas, Will Deacon, kvmarm

__pkvm_prot_finalize() completes the deprivileging of the host when pKVM
is in use by installing a stage-2 translation table for the calling CPU.

Issuing the hypercall multiple times for a given CPU makes little sense,
but in such a case just return early with -EPERM rather than go through
the whole page-table dance again.

Cc: Marc Zyngier <maz@kernel.org>
Cc: Quentin Perret <qperret@google.com>
Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/arm64/kvm/hyp/nvhe/mem_protect.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index bacd493a4eac..cafe17e5fa8f 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -123,6 +123,9 @@ int __pkvm_prot_finalize(void)
 	struct kvm_s2_mmu *mmu = &host_kvm.arch.mmu;
 	struct kvm_nvhe_init_params *params = this_cpu_ptr(&kvm_init_params);
 
+	if (params->hcr_el2 & HCR_VM)
+		return -EPERM;
+
 	params->vttbr = kvm_get_vttbr(mmu);
 	params->vtcr = host_kvm.arch.vtcr;
 	params->hcr_el2 |= HCR_VM;
-- 
2.33.0.464.g1972c5931b-goog


* [PATCH 5/5] KVM: arm64: Disable privileged hypercalls after pKVM finalisation
  2021-09-23 11:22 ` Will Deacon
@ 2021-09-23 11:22   ` Will Deacon
  -1 siblings, 0 replies; 34+ messages in thread
From: Will Deacon @ 2021-09-23 11:22 UTC (permalink / raw)
  To: linux-arm-kernel; +Cc: Marc Zyngier, Catalin Marinas, Will Deacon, kvmarm

After pKVM has been 'finalised' using the __pkvm_prot_finalize hypercall,
the calling CPU will have a Stage-2 translation enabled to prevent access
to memory pages owned by EL2.

Although this forms a significant part of the process to deprivilege the
host kernel, we also need to ensure that the hypercall interface is
reduced so that the EL2 code cannot, for example, be re-initialised using
a new set of vectors.

Re-order the hypercalls so that only a suffix remains available after
finalisation of pKVM.

Cc: Marc Zyngier <maz@kernel.org>
Cc: Quentin Perret <qperret@google.com>
Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/kvm_asm.h   | 43 ++++++++++++++++--------------
 arch/arm64/kvm/hyp/nvhe/hyp-main.c | 26 +++++++++++-------
 2 files changed, 39 insertions(+), 30 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index e86045ac43ba..68630fd382c5 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -43,27 +43,30 @@
 
 #define KVM_HOST_SMCCC_FUNC(name) KVM_HOST_SMCCC_ID(__KVM_HOST_SMCCC_FUNC_##name)
 
+/* Hypercalls available only prior to pKVM finalisation */
 #define __KVM_HOST_SMCCC_FUNC___kvm_hyp_init			0
-#define __KVM_HOST_SMCCC_FUNC___kvm_vcpu_run			1
-#define __KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context		2
-#define __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid_ipa		3
-#define __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid		4
-#define __KVM_HOST_SMCCC_FUNC___kvm_flush_cpu_context		5
-#define __KVM_HOST_SMCCC_FUNC___kvm_timer_set_cntvoff		6
-#define __KVM_HOST_SMCCC_FUNC___kvm_enable_ssbs			7
-#define __KVM_HOST_SMCCC_FUNC___vgic_v3_get_gic_config		8
-#define __KVM_HOST_SMCCC_FUNC___vgic_v3_read_vmcr		9
-#define __KVM_HOST_SMCCC_FUNC___vgic_v3_write_vmcr		10
-#define __KVM_HOST_SMCCC_FUNC___vgic_v3_init_lrs		11
-#define __KVM_HOST_SMCCC_FUNC___kvm_get_mdcr_el2		12
-#define __KVM_HOST_SMCCC_FUNC___vgic_v3_save_aprs		13
-#define __KVM_HOST_SMCCC_FUNC___vgic_v3_restore_aprs		14
-#define __KVM_HOST_SMCCC_FUNC___pkvm_init			15
-#define __KVM_HOST_SMCCC_FUNC___pkvm_host_share_hyp		16
-#define __KVM_HOST_SMCCC_FUNC___pkvm_create_private_mapping	17
-#define __KVM_HOST_SMCCC_FUNC___pkvm_cpu_set_vector		18
-#define __KVM_HOST_SMCCC_FUNC___pkvm_prot_finalize		19
-#define __KVM_HOST_SMCCC_FUNC___kvm_adjust_pc			20
+#define __KVM_HOST_SMCCC_FUNC___kvm_get_mdcr_el2		1
+#define __KVM_HOST_SMCCC_FUNC___pkvm_init			2
+#define __KVM_HOST_SMCCC_FUNC___pkvm_create_private_mapping	3
+#define __KVM_HOST_SMCCC_FUNC___pkvm_cpu_set_vector		4
+#define __KVM_HOST_SMCCC_FUNC___kvm_enable_ssbs			5
+#define __KVM_HOST_SMCCC_FUNC___vgic_v3_init_lrs		6
+#define __KVM_HOST_SMCCC_FUNC___vgic_v3_get_gic_config		7
+#define __KVM_HOST_SMCCC_FUNC___pkvm_prot_finalize		8
+
+/* Hypercalls available after pKVM finalisation */
+#define __KVM_HOST_SMCCC_FUNC___pkvm_host_share_hyp		9
+#define __KVM_HOST_SMCCC_FUNC___kvm_adjust_pc			10
+#define __KVM_HOST_SMCCC_FUNC___kvm_vcpu_run			11
+#define __KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context		12
+#define __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid_ipa		13
+#define __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid		14
+#define __KVM_HOST_SMCCC_FUNC___kvm_flush_cpu_context		15
+#define __KVM_HOST_SMCCC_FUNC___kvm_timer_set_cntvoff		16
+#define __KVM_HOST_SMCCC_FUNC___vgic_v3_read_vmcr		17
+#define __KVM_HOST_SMCCC_FUNC___vgic_v3_write_vmcr		18
+#define __KVM_HOST_SMCCC_FUNC___vgic_v3_save_aprs		19
+#define __KVM_HOST_SMCCC_FUNC___vgic_v3_restore_aprs		20
 
 #ifndef __ASSEMBLY__
 
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 2da6aa8da868..4120e34288e1 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -165,36 +165,42 @@ typedef void (*hcall_t)(struct kvm_cpu_context *);
 #define HANDLE_FUNC(x)	[__KVM_HOST_SMCCC_FUNC_##x] = (hcall_t)handle_##x
 
 static const hcall_t host_hcall[] = {
-	HANDLE_FUNC(__kvm_vcpu_run),
+	/* ___kvm_hyp_init */
+	HANDLE_FUNC(__kvm_get_mdcr_el2),
+	HANDLE_FUNC(__pkvm_init),
+	HANDLE_FUNC(__pkvm_create_private_mapping),
+	HANDLE_FUNC(__pkvm_cpu_set_vector),
+	HANDLE_FUNC(__kvm_enable_ssbs),
+	HANDLE_FUNC(__vgic_v3_init_lrs),
+	HANDLE_FUNC(__pkvm_prot_finalize),
+
+	HANDLE_FUNC(__pkvm_host_share_hyp),
 	HANDLE_FUNC(__kvm_adjust_pc),
+	HANDLE_FUNC(__kvm_vcpu_run),
 	HANDLE_FUNC(__kvm_flush_vm_context),
 	HANDLE_FUNC(__kvm_tlb_flush_vmid_ipa),
 	HANDLE_FUNC(__kvm_tlb_flush_vmid),
 	HANDLE_FUNC(__kvm_flush_cpu_context),
 	HANDLE_FUNC(__kvm_timer_set_cntvoff),
-	HANDLE_FUNC(__kvm_enable_ssbs),
 	HANDLE_FUNC(__vgic_v3_get_gic_config),
 	HANDLE_FUNC(__vgic_v3_read_vmcr),
 	HANDLE_FUNC(__vgic_v3_write_vmcr),
-	HANDLE_FUNC(__vgic_v3_init_lrs),
-	HANDLE_FUNC(__kvm_get_mdcr_el2),
 	HANDLE_FUNC(__vgic_v3_save_aprs),
 	HANDLE_FUNC(__vgic_v3_restore_aprs),
-	HANDLE_FUNC(__pkvm_init),
-	HANDLE_FUNC(__pkvm_cpu_set_vector),
-	HANDLE_FUNC(__pkvm_host_share_hyp),
-	HANDLE_FUNC(__pkvm_create_private_mapping),
-	HANDLE_FUNC(__pkvm_prot_finalize),
 };
 
 static void handle_host_hcall(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(unsigned long, id, host_ctxt, 0);
+	unsigned long hcall_min = 0;
 	hcall_t hfn;
 
+	if (static_branch_unlikely(&kvm_protected_mode_initialized))
+		hcall_min = __KVM_HOST_SMCCC_FUNC___pkvm_prot_finalize;
+
 	id -= KVM_HOST_SMCCC_ID(0);
 
-	if (unlikely(id >= ARRAY_SIZE(host_hcall)))
+	if (unlikely(id < hcall_min || id >= ARRAY_SIZE(host_hcall)))
 		goto inval;
 
 	hfn = host_hcall[id];
-- 
2.33.0.464.g1972c5931b-goog


+#define __KVM_HOST_SMCCC_FUNC___kvm_vcpu_run			11
+#define __KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context		12
+#define __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid_ipa		13
+#define __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid		14
+#define __KVM_HOST_SMCCC_FUNC___kvm_flush_cpu_context		15
+#define __KVM_HOST_SMCCC_FUNC___kvm_timer_set_cntvoff		16
+#define __KVM_HOST_SMCCC_FUNC___vgic_v3_read_vmcr		17
+#define __KVM_HOST_SMCCC_FUNC___vgic_v3_write_vmcr		18
+#define __KVM_HOST_SMCCC_FUNC___vgic_v3_save_aprs		19
+#define __KVM_HOST_SMCCC_FUNC___vgic_v3_restore_aprs		20
 
 #ifndef __ASSEMBLY__
 
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 2da6aa8da868..4120e34288e1 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -165,36 +165,42 @@ typedef void (*hcall_t)(struct kvm_cpu_context *);
 #define HANDLE_FUNC(x)	[__KVM_HOST_SMCCC_FUNC_##x] = (hcall_t)handle_##x
 
 static const hcall_t host_hcall[] = {
-	HANDLE_FUNC(__kvm_vcpu_run),
+	/* ___kvm_hyp_init */
+	HANDLE_FUNC(__kvm_get_mdcr_el2),
+	HANDLE_FUNC(__pkvm_init),
+	HANDLE_FUNC(__pkvm_create_private_mapping),
+	HANDLE_FUNC(__pkvm_cpu_set_vector),
+	HANDLE_FUNC(__kvm_enable_ssbs),
+	HANDLE_FUNC(__vgic_v3_init_lrs),
+	HANDLE_FUNC(__pkvm_prot_finalize),
+
+	HANDLE_FUNC(__pkvm_host_share_hyp),
 	HANDLE_FUNC(__kvm_adjust_pc),
+	HANDLE_FUNC(__kvm_vcpu_run),
 	HANDLE_FUNC(__kvm_flush_vm_context),
 	HANDLE_FUNC(__kvm_tlb_flush_vmid_ipa),
 	HANDLE_FUNC(__kvm_tlb_flush_vmid),
 	HANDLE_FUNC(__kvm_flush_cpu_context),
 	HANDLE_FUNC(__kvm_timer_set_cntvoff),
-	HANDLE_FUNC(__kvm_enable_ssbs),
 	HANDLE_FUNC(__vgic_v3_get_gic_config),
 	HANDLE_FUNC(__vgic_v3_read_vmcr),
 	HANDLE_FUNC(__vgic_v3_write_vmcr),
-	HANDLE_FUNC(__vgic_v3_init_lrs),
-	HANDLE_FUNC(__kvm_get_mdcr_el2),
 	HANDLE_FUNC(__vgic_v3_save_aprs),
 	HANDLE_FUNC(__vgic_v3_restore_aprs),
-	HANDLE_FUNC(__pkvm_init),
-	HANDLE_FUNC(__pkvm_cpu_set_vector),
-	HANDLE_FUNC(__pkvm_host_share_hyp),
-	HANDLE_FUNC(__pkvm_create_private_mapping),
-	HANDLE_FUNC(__pkvm_prot_finalize),
 };
 
 static void handle_host_hcall(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(unsigned long, id, host_ctxt, 0);
+	unsigned long hcall_min = 0;
 	hcall_t hfn;
 
+	if (static_branch_unlikely(&kvm_protected_mode_initialized))
+		hcall_min = __KVM_HOST_SMCCC_FUNC___pkvm_prot_finalize;
+
 	id -= KVM_HOST_SMCCC_ID(0);
 
-	if (unlikely(id >= ARRAY_SIZE(host_hcall)))
+	if (unlikely(id < hcall_min || id >= ARRAY_SIZE(host_hcall)))
 		goto inval;
 
 	hfn = host_hcall[id];
-- 
2.33.0.464.g1972c5931b-goog


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel


* Re: [PATCH 1/5] arm64: Prevent kexec and hibernation if is_protected_kvm_enabled()
  2021-09-23 11:22   ` Will Deacon
@ 2021-09-23 11:45     ` Mark Rutland
  -1 siblings, 0 replies; 34+ messages in thread
From: Mark Rutland @ 2021-09-23 11:45 UTC (permalink / raw)
  To: Will Deacon; +Cc: Marc Zyngier, kvmarm, linux-arm-kernel, Catalin Marinas

On Thu, Sep 23, 2021 at 12:22:52PM +0100, Will Deacon wrote:
> When pKVM is enabled, the hypervisor code at EL2 and its data structures
> are inaccessible to the host kernel and cannot be torn down or replaced
> as this would defeat the integrity properties which pKVM aims to provide.
> Furthermore, the ABI between the host and EL2 is flexible and private to
> whatever the current implementation of KVM requires and so booting a new
> kernel with an old EL2 component is very likely to end in disaster.
> 
> In preparation for uninstalling the hyp stub calls which are relied upon
> to reset EL2, disable kexec and hibernation in the host when protected
> KVM is enabled.
> 
> Cc: Marc Zyngier <maz@kernel.org>
> Cc: Quentin Perret <qperret@google.com>
> Signed-off-by: Will Deacon <will@kernel.org>
> ---
>  arch/arm64/kernel/smp.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
> index 6f6ff072acbd..44369b99a57e 100644
> --- a/arch/arm64/kernel/smp.c
> +++ b/arch/arm64/kernel/smp.c
> @@ -1128,5 +1128,6 @@ bool cpus_are_stuck_in_kernel(void)
>  {
>  	bool smp_spin_tables = (num_possible_cpus() > 1 && !have_cpu_die());
>  
> -	return !!cpus_stuck_in_kernel || smp_spin_tables;
> +	return !!cpus_stuck_in_kernel || smp_spin_tables ||
> +		is_protected_kvm_enabled();
>  }

IIUC you'll also need to do something to prevent kdump, since even with
CPUs stuck in the kernel that will try to do a kexec on the crashed CPU
and __cpu_soft_restart() won't be able to return to EL2.

You could fiddle with the BUG_ON() in machine_kexec() to die in this
case too.
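[Editorial note: a rough, hypothetical userspace model of the concern above. The function names are stand-ins; the real machine_kexec() BUG_ON() involves more state than this.]

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Hypothetical model of the kdump concern. cpus_are_stuck_in_kernel()
 * follows the shape of patch 1; kdump_should_die() is a stand-in for
 * extending the BUG_ON() in machine_kexec(): once pKVM owns EL2, a
 * crashed CPU cannot drop back to EL2, so a crash kexec should be
 * refused rather than attempted.
 */
static bool cpus_stuck_in_kernel;
static bool smp_spin_tables;
static bool protected_kvm_enabled;

static bool cpus_are_stuck_in_kernel(void)
{
	return cpus_stuck_in_kernel || smp_spin_tables ||
	       protected_kvm_enabled;
}

static bool kdump_should_die(bool in_crash_kexec)
{
	/* Regular kexec is already fenced off via cpus_are_stuck_in_kernel() */
	if (!in_crash_kexec)
		return cpus_are_stuck_in_kernel();
	/* ...but the crash path ignores stuck CPUs, so check pKVM here too. */
	return protected_kvm_enabled;
}
```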

Thanks,
Mark.

> -- 
> 2.33.0.464.g1972c5931b-goog
> 

* Re: [PATCH 0/5] KVM: arm64: Restrict host hypercalls when pKVM is enabled
  2021-09-23 11:22 ` Will Deacon
@ 2021-09-23 12:21   ` Marc Zyngier
  -1 siblings, 0 replies; 34+ messages in thread
From: Marc Zyngier @ 2021-09-23 12:21 UTC (permalink / raw)
  To: Will Deacon; +Cc: Catalin Marinas, kvmarm, linux-arm-kernel

On Thu, 23 Sep 2021 12:22:51 +0100,
Will Deacon <will@kernel.org> wrote:
> 
> Hi folks,
> 
> This series restricts the hypercalls available to the KVM host on arm64
> when pKVM is enabled so that it is not possible for the host to use them
> to replace the EL2 component with something else.
> 
> This occurs in two stages: when switching to the pKVM vectors, the stub
> hypercalls are removed and then later when pKVM is finalised, the pKVM
> init hypercalls are removed.
> 
> There are still a few dubious calls remaining in terms of protecting the
> guest (e.g. __kvm_adjust_pc) but these will be dealt with later when we
> have more VM state at EL2 to play with.

Yup. This particular one should have an equivalent at EL2 and pending
exceptions committed to the state before exiting to EL1.

	M.

-- 
Without deviation from the norm, progress is not possible.

* Re: [PATCH 1/5] arm64: Prevent kexec and hibernation if is_protected_kvm_enabled()
  2021-09-23 11:45     ` Mark Rutland
@ 2021-09-23 12:29       ` Will Deacon
  -1 siblings, 0 replies; 34+ messages in thread
From: Will Deacon @ 2021-09-23 12:29 UTC (permalink / raw)
  To: Mark Rutland; +Cc: Marc Zyngier, kvmarm, linux-arm-kernel, Catalin Marinas

On Thu, Sep 23, 2021 at 12:45:06PM +0100, Mark Rutland wrote:
> On Thu, Sep 23, 2021 at 12:22:52PM +0100, Will Deacon wrote:
> > When pKVM is enabled, the hypervisor code at EL2 and its data structures
> > are inaccessible to the host kernel and cannot be torn down or replaced
> > as this would defeat the integrity properties which pKVM aims to provide.
> > Furthermore, the ABI between the host and EL2 is flexible and private to
> > whatever the current implementation of KVM requires and so booting a new
> > kernel with an old EL2 component is very likely to end in disaster.
> > 
> > In preparation for uninstalling the hyp stub calls which are relied upon
> > to reset EL2, disable kexec and hibernation in the host when protected
> > KVM is enabled.
> > 
> > Cc: Marc Zyngier <maz@kernel.org>
> > Cc: Quentin Perret <qperret@google.com>
> > Signed-off-by: Will Deacon <will@kernel.org>
> > ---
> >  arch/arm64/kernel/smp.c | 3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> > 
> > diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
> > index 6f6ff072acbd..44369b99a57e 100644
> > --- a/arch/arm64/kernel/smp.c
> > +++ b/arch/arm64/kernel/smp.c
> > @@ -1128,5 +1128,6 @@ bool cpus_are_stuck_in_kernel(void)
> >  {
> >  	bool smp_spin_tables = (num_possible_cpus() > 1 && !have_cpu_die());
> >  
> > -	return !!cpus_stuck_in_kernel || smp_spin_tables;
> > +	return !!cpus_stuck_in_kernel || smp_spin_tables ||
> > +		is_protected_kvm_enabled();
> >  }
> 
> IIUC you'll also need to do something to prevent kdump, since even with
> CPUs stuck in the kernel that will try to do a kexec on the crashed CPU
> and __cpu_soft_restart() won't be able to return to EL2.
> 
> You could fiddle with the BUG_ON() in machine_kexec() to die in this
> case too.

I wondered about that, and I'm happy to do it if you reckon it's better,
but if the host is crashing _anyway_ then I wasn't convinced it was worth
the effort. With the approach here, we'll WARN and then enter the kdump
kernel at EL1 which maybe might work sometimes possibly? I suppose if the
kdump kernel is careful about the memory it accesses, then it has a
fighting chance of doing something useful.

Will

* Re: [PATCH 5/5] KVM: arm64: Disable privileged hypercalls after pKVM finalisation
  2021-09-23 11:22   ` Will Deacon
@ 2021-09-23 12:56     ` Marc Zyngier
  -1 siblings, 0 replies; 34+ messages in thread
From: Marc Zyngier @ 2021-09-23 12:56 UTC (permalink / raw)
  To: Will Deacon; +Cc: Catalin Marinas, kvmarm, linux-arm-kernel

On Thu, 23 Sep 2021 12:22:56 +0100,
Will Deacon <will@kernel.org> wrote:
> 
> After pKVM has been 'finalised' using the __pkvm_prot_finalize hypercall,
> the calling CPU will have a Stage-2 translation enabled to prevent access
> to memory pages owned by EL2.
> 
> Although this forms a significant part of the process to deprivilege the
> host kernel, we also need to ensure that the hypercall interface is
> reduced so that the EL2 code cannot, for example, be re-initialised using
> a new set of vectors.
> 
> Re-order the hypercalls so that only a suffix remains available after
> finalisation of pKVM.
> 
> Cc: Marc Zyngier <maz@kernel.org>
> Cc: Quentin Perret <qperret@google.com>
> Signed-off-by: Will Deacon <will@kernel.org>
> ---
>  arch/arm64/include/asm/kvm_asm.h   | 43 ++++++++++++++++--------------
>  arch/arm64/kvm/hyp/nvhe/hyp-main.c | 26 +++++++++++-------
>  2 files changed, 39 insertions(+), 30 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
> index e86045ac43ba..68630fd382c5 100644
> --- a/arch/arm64/include/asm/kvm_asm.h
> +++ b/arch/arm64/include/asm/kvm_asm.h
> @@ -43,27 +43,30 @@
>  
>  #define KVM_HOST_SMCCC_FUNC(name) KVM_HOST_SMCCC_ID(__KVM_HOST_SMCCC_FUNC_##name)
>  
> +/* Hypercalls available only prior to pKVM finalisation */
>  #define __KVM_HOST_SMCCC_FUNC___kvm_hyp_init			0
> -#define __KVM_HOST_SMCCC_FUNC___kvm_vcpu_run			1
> -#define __KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context		2
> -#define __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid_ipa		3
> -#define __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid		4
> -#define __KVM_HOST_SMCCC_FUNC___kvm_flush_cpu_context		5
> -#define __KVM_HOST_SMCCC_FUNC___kvm_timer_set_cntvoff		6
> -#define __KVM_HOST_SMCCC_FUNC___kvm_enable_ssbs			7
> -#define __KVM_HOST_SMCCC_FUNC___vgic_v3_get_gic_config		8
> -#define __KVM_HOST_SMCCC_FUNC___vgic_v3_read_vmcr		9
> -#define __KVM_HOST_SMCCC_FUNC___vgic_v3_write_vmcr		10
> -#define __KVM_HOST_SMCCC_FUNC___vgic_v3_init_lrs		11
> -#define __KVM_HOST_SMCCC_FUNC___kvm_get_mdcr_el2		12
> -#define __KVM_HOST_SMCCC_FUNC___vgic_v3_save_aprs		13
> -#define __KVM_HOST_SMCCC_FUNC___vgic_v3_restore_aprs		14
> -#define __KVM_HOST_SMCCC_FUNC___pkvm_init			15
> -#define __KVM_HOST_SMCCC_FUNC___pkvm_host_share_hyp		16
> -#define __KVM_HOST_SMCCC_FUNC___pkvm_create_private_mapping	17
> -#define __KVM_HOST_SMCCC_FUNC___pkvm_cpu_set_vector		18
> -#define __KVM_HOST_SMCCC_FUNC___pkvm_prot_finalize		19
> -#define __KVM_HOST_SMCCC_FUNC___kvm_adjust_pc			20
> +#define __KVM_HOST_SMCCC_FUNC___kvm_get_mdcr_el2		1
> +#define __KVM_HOST_SMCCC_FUNC___pkvm_init			2
> +#define __KVM_HOST_SMCCC_FUNC___pkvm_create_private_mapping	3
> +#define __KVM_HOST_SMCCC_FUNC___pkvm_cpu_set_vector		4
> +#define __KVM_HOST_SMCCC_FUNC___kvm_enable_ssbs			5
> +#define __KVM_HOST_SMCCC_FUNC___vgic_v3_init_lrs		6
> +#define __KVM_HOST_SMCCC_FUNC___vgic_v3_get_gic_config		7
> +#define __KVM_HOST_SMCCC_FUNC___pkvm_prot_finalize		8
> +
> +/* Hypercalls available after pKVM finalisation */
> +#define __KVM_HOST_SMCCC_FUNC___pkvm_host_share_hyp		9
> +#define __KVM_HOST_SMCCC_FUNC___kvm_adjust_pc			10
> +#define __KVM_HOST_SMCCC_FUNC___kvm_vcpu_run			11
> +#define __KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context		12
> +#define __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid_ipa		13
> +#define __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid		14
> +#define __KVM_HOST_SMCCC_FUNC___kvm_flush_cpu_context		15
> +#define __KVM_HOST_SMCCC_FUNC___kvm_timer_set_cntvoff		16
> +#define __KVM_HOST_SMCCC_FUNC___vgic_v3_read_vmcr		17
> +#define __KVM_HOST_SMCCC_FUNC___vgic_v3_write_vmcr		18
> +#define __KVM_HOST_SMCCC_FUNC___vgic_v3_save_aprs		19
> +#define __KVM_HOST_SMCCC_FUNC___vgic_v3_restore_aprs		20
>  
>  #ifndef __ASSEMBLY__
>  
> diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
> index 2da6aa8da868..4120e34288e1 100644
> --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
> +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
> @@ -165,36 +165,42 @@ typedef void (*hcall_t)(struct kvm_cpu_context *);
>  #define HANDLE_FUNC(x)	[__KVM_HOST_SMCCC_FUNC_##x] = (hcall_t)handle_##x
>  
>  static const hcall_t host_hcall[] = {
> -	HANDLE_FUNC(__kvm_vcpu_run),
> +	/* ___kvm_hyp_init */
> +	HANDLE_FUNC(__kvm_get_mdcr_el2),
> +	HANDLE_FUNC(__pkvm_init),
> +	HANDLE_FUNC(__pkvm_create_private_mapping),
> +	HANDLE_FUNC(__pkvm_cpu_set_vector),
> +	HANDLE_FUNC(__kvm_enable_ssbs),
> +	HANDLE_FUNC(__vgic_v3_init_lrs),
> +	HANDLE_FUNC(__pkvm_prot_finalize),
> +
> +	HANDLE_FUNC(__pkvm_host_share_hyp),
>  	HANDLE_FUNC(__kvm_adjust_pc),
> +	HANDLE_FUNC(__kvm_vcpu_run),
>  	HANDLE_FUNC(__kvm_flush_vm_context),
>  	HANDLE_FUNC(__kvm_tlb_flush_vmid_ipa),
>  	HANDLE_FUNC(__kvm_tlb_flush_vmid),
>  	HANDLE_FUNC(__kvm_flush_cpu_context),
>  	HANDLE_FUNC(__kvm_timer_set_cntvoff),
> -	HANDLE_FUNC(__kvm_enable_ssbs),
>  	HANDLE_FUNC(__vgic_v3_get_gic_config),
>  	HANDLE_FUNC(__vgic_v3_read_vmcr),
>  	HANDLE_FUNC(__vgic_v3_write_vmcr),
> -	HANDLE_FUNC(__vgic_v3_init_lrs),
> -	HANDLE_FUNC(__kvm_get_mdcr_el2),
>  	HANDLE_FUNC(__vgic_v3_save_aprs),
>  	HANDLE_FUNC(__vgic_v3_restore_aprs),
> -	HANDLE_FUNC(__pkvm_init),
> -	HANDLE_FUNC(__pkvm_cpu_set_vector),
> -	HANDLE_FUNC(__pkvm_host_share_hyp),
> -	HANDLE_FUNC(__pkvm_create_private_mapping),
> -	HANDLE_FUNC(__pkvm_prot_finalize),
>  };
>  
>  static void handle_host_hcall(struct kvm_cpu_context *host_ctxt)
>  {
>  	DECLARE_REG(unsigned long, id, host_ctxt, 0);
> +	unsigned long hcall_min = 0;
>  	hcall_t hfn;
>  
> +	if (static_branch_unlikely(&kvm_protected_mode_initialized))
> +		hcall_min = __KVM_HOST_SMCCC_FUNC___pkvm_prot_finalize;
> +
>  	id -= KVM_HOST_SMCCC_ID(0);
>  
> -	if (unlikely(id >= ARRAY_SIZE(host_hcall)))
> +	if (unlikely(id < hcall_min || id >= ARRAY_SIZE(host_hcall)))

So I can still issue a pkvm_prot_finalize after finalisation? Seems
odd. As hcall_min has to be inclusive, you probably want it to be set
to __KVM_HOST_SMCCC_FUNC___pkvm_host_share_hyp once protected.
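[Editorial note: the difference can be sketched in userspace, using only the IDs as renumbered by the patch; everything else is elided.]

```c
#include <assert.h>
#include <stdbool.h>

/* IDs as renumbered by this patch. */
enum {
	FUNC___pkvm_prot_finalize = 8,	/* last pre-finalisation hcall */
	FUNC___pkvm_host_share_hyp = 9,	/* first post-finalisation hcall */
	NR_HOST_HCALLS = 21,
};

/* The check as posted: hcall_min is inclusive, so once protected the
 * host can still issue __pkvm_prot_finalize. */
static bool allowed_as_posted(unsigned long id)
{
	return !(id < FUNC___pkvm_prot_finalize || id >= NR_HOST_HCALLS);
}

/* With the suggested minimum, re-finalisation is rejected as well. */
static bool allowed_as_suggested(unsigned long id)
{
	return !(id < FUNC___pkvm_host_share_hyp || id >= NR_HOST_HCALLS);
}
```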

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

* Re: [PATCH 5/5] KVM: arm64: Disable privileged hypercalls after pKVM finalisation
  2021-09-23 11:22   ` Will Deacon
@ 2021-09-23 12:58     ` Will Deacon
  -1 siblings, 0 replies; 34+ messages in thread
From: Will Deacon @ 2021-09-23 12:58 UTC (permalink / raw)
  To: linux-arm-kernel; +Cc: Marc Zyngier, Catalin Marinas, kvmarm

On Thu, Sep 23, 2021 at 12:22:56PM +0100, Will Deacon wrote:
> After pKVM has been 'finalised' using the __pkvm_prot_finalize hypercall,
> the calling CPU will have a Stage-2 translation enabled to prevent access
> to memory pages owned by EL2.
> 
> Although this forms a significant part of the process to deprivilege the
> host kernel, we also need to ensure that the hypercall interface is
> reduced so that the EL2 code cannot, for example, be re-initialised using
> a new set of vectors.
> 
> Re-order the hypercalls so that only a suffix remains available after
> finalisation of pKVM.
> 
> Cc: Marc Zyngier <maz@kernel.org>
> Cc: Quentin Perret <qperret@google.com>
> Signed-off-by: Will Deacon <will@kernel.org>
> ---
>  arch/arm64/include/asm/kvm_asm.h   | 43 ++++++++++++++++--------------
>  arch/arm64/kvm/hyp/nvhe/hyp-main.c | 26 +++++++++++-------
>  2 files changed, 39 insertions(+), 30 deletions(-)

[...]

> diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
> index 2da6aa8da868..4120e34288e1 100644
> --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
> +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
> @@ -165,36 +165,42 @@ typedef void (*hcall_t)(struct kvm_cpu_context *);
>  #define HANDLE_FUNC(x)	[__KVM_HOST_SMCCC_FUNC_##x] = (hcall_t)handle_##x
>  
>  static const hcall_t host_hcall[] = {
> -	HANDLE_FUNC(__kvm_vcpu_run),
> +	/* ___kvm_hyp_init */
> +	HANDLE_FUNC(__kvm_get_mdcr_el2),
> +	HANDLE_FUNC(__pkvm_init),
> +	HANDLE_FUNC(__pkvm_create_private_mapping),
> +	HANDLE_FUNC(__pkvm_cpu_set_vector),
> +	HANDLE_FUNC(__kvm_enable_ssbs),
> +	HANDLE_FUNC(__vgic_v3_init_lrs),
> +	HANDLE_FUNC(__pkvm_prot_finalize),
> +
> +	HANDLE_FUNC(__pkvm_host_share_hyp),
>  	HANDLE_FUNC(__kvm_adjust_pc),
> +	HANDLE_FUNC(__kvm_vcpu_run),
>  	HANDLE_FUNC(__kvm_flush_vm_context),
>  	HANDLE_FUNC(__kvm_tlb_flush_vmid_ipa),
>  	HANDLE_FUNC(__kvm_tlb_flush_vmid),
>  	HANDLE_FUNC(__kvm_flush_cpu_context),
>  	HANDLE_FUNC(__kvm_timer_set_cntvoff),
> -	HANDLE_FUNC(__kvm_enable_ssbs),
>  	HANDLE_FUNC(__vgic_v3_get_gic_config),
>  	HANDLE_FUNC(__vgic_v3_read_vmcr),
>  	HANDLE_FUNC(__vgic_v3_write_vmcr),
> -	HANDLE_FUNC(__vgic_v3_init_lrs),
> -	HANDLE_FUNC(__kvm_get_mdcr_el2),
>  	HANDLE_FUNC(__vgic_v3_save_aprs),
>  	HANDLE_FUNC(__vgic_v3_restore_aprs),
> -	HANDLE_FUNC(__pkvm_init),
> -	HANDLE_FUNC(__pkvm_cpu_set_vector),
> -	HANDLE_FUNC(__pkvm_host_share_hyp),
> -	HANDLE_FUNC(__pkvm_create_private_mapping),
> -	HANDLE_FUNC(__pkvm_prot_finalize),

Not that it makes any functional difference, but I was trying to keep this
in numerical order and evidently didn't manage it after renumbering
__vgic_v3_get_gic_config. Will fix for v2.

Will
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


* Re: [PATCH 5/5] KVM: arm64: Disable privileged hypercalls after pKVM finalisation
  2021-09-23 12:56     ` Marc Zyngier
@ 2021-09-23 13:02       ` Will Deacon
  -1 siblings, 0 replies; 34+ messages in thread
From: Will Deacon @ 2021-09-23 13:02 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: Catalin Marinas, kvmarm, linux-arm-kernel

On Thu, Sep 23, 2021 at 01:56:21PM +0100, Marc Zyngier wrote:
> On Thu, 23 Sep 2021 12:22:56 +0100,
> Will Deacon <will@kernel.org> wrote:
> > 
> > After pKVM has been 'finalised' using the __pkvm_prot_finalize hypercall,
> > the calling CPU will have a Stage-2 translation enabled to prevent access
> > to memory pages owned by EL2.
> > 
> > Although this forms a significant part of the process to deprivilege the
> > host kernel, we also need to ensure that the hypercall interface is
> > reduced so that the EL2 code cannot, for example, be re-initialised using
> > a new set of vectors.
> > 
> > Re-order the hypercalls so that only a suffix remains available after
> > finalisation of pKVM.
> > 
> > Cc: Marc Zyngier <maz@kernel.org>
> > Cc: Quentin Perret <qperret@google.com>
> > Signed-off-by: Will Deacon <will@kernel.org>
> > ---
> >  arch/arm64/include/asm/kvm_asm.h   | 43 ++++++++++++++++--------------
> >  arch/arm64/kvm/hyp/nvhe/hyp-main.c | 26 +++++++++++-------
> >  2 files changed, 39 insertions(+), 30 deletions(-)
> > 
> > diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
> > index e86045ac43ba..68630fd382c5 100644
> > --- a/arch/arm64/include/asm/kvm_asm.h
> > +++ b/arch/arm64/include/asm/kvm_asm.h
> > @@ -43,27 +43,30 @@
> >  
> >  #define KVM_HOST_SMCCC_FUNC(name) KVM_HOST_SMCCC_ID(__KVM_HOST_SMCCC_FUNC_##name)
> >  
> > +/* Hypercalls available only prior to pKVM finalisation */
> >  #define __KVM_HOST_SMCCC_FUNC___kvm_hyp_init			0
> > -#define __KVM_HOST_SMCCC_FUNC___kvm_vcpu_run			1
> > -#define __KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context		2
> > -#define __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid_ipa		3
> > -#define __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid		4
> > -#define __KVM_HOST_SMCCC_FUNC___kvm_flush_cpu_context		5
> > -#define __KVM_HOST_SMCCC_FUNC___kvm_timer_set_cntvoff		6
> > -#define __KVM_HOST_SMCCC_FUNC___kvm_enable_ssbs			7
> > -#define __KVM_HOST_SMCCC_FUNC___vgic_v3_get_gic_config		8
> > -#define __KVM_HOST_SMCCC_FUNC___vgic_v3_read_vmcr		9
> > -#define __KVM_HOST_SMCCC_FUNC___vgic_v3_write_vmcr		10
> > -#define __KVM_HOST_SMCCC_FUNC___vgic_v3_init_lrs		11
> > -#define __KVM_HOST_SMCCC_FUNC___kvm_get_mdcr_el2		12
> > -#define __KVM_HOST_SMCCC_FUNC___vgic_v3_save_aprs		13
> > -#define __KVM_HOST_SMCCC_FUNC___vgic_v3_restore_aprs		14
> > -#define __KVM_HOST_SMCCC_FUNC___pkvm_init			15
> > -#define __KVM_HOST_SMCCC_FUNC___pkvm_host_share_hyp		16
> > -#define __KVM_HOST_SMCCC_FUNC___pkvm_create_private_mapping	17
> > -#define __KVM_HOST_SMCCC_FUNC___pkvm_cpu_set_vector		18
> > -#define __KVM_HOST_SMCCC_FUNC___pkvm_prot_finalize		19
> > -#define __KVM_HOST_SMCCC_FUNC___kvm_adjust_pc			20
> > +#define __KVM_HOST_SMCCC_FUNC___kvm_get_mdcr_el2		1
> > +#define __KVM_HOST_SMCCC_FUNC___pkvm_init			2
> > +#define __KVM_HOST_SMCCC_FUNC___pkvm_create_private_mapping	3
> > +#define __KVM_HOST_SMCCC_FUNC___pkvm_cpu_set_vector		4
> > +#define __KVM_HOST_SMCCC_FUNC___kvm_enable_ssbs			5
> > +#define __KVM_HOST_SMCCC_FUNC___vgic_v3_init_lrs		6
> > +#define __KVM_HOST_SMCCC_FUNC___vgic_v3_get_gic_config		7
> > +#define __KVM_HOST_SMCCC_FUNC___pkvm_prot_finalize		8
> > +
> > +/* Hypercalls available after pKVM finalisation */
> > +#define __KVM_HOST_SMCCC_FUNC___pkvm_host_share_hyp		9
> > +#define __KVM_HOST_SMCCC_FUNC___kvm_adjust_pc			10
> > +#define __KVM_HOST_SMCCC_FUNC___kvm_vcpu_run			11
> > +#define __KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context		12
> > +#define __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid_ipa		13
> > +#define __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid		14
> > +#define __KVM_HOST_SMCCC_FUNC___kvm_flush_cpu_context		15
> > +#define __KVM_HOST_SMCCC_FUNC___kvm_timer_set_cntvoff		16
> > +#define __KVM_HOST_SMCCC_FUNC___vgic_v3_read_vmcr		17
> > +#define __KVM_HOST_SMCCC_FUNC___vgic_v3_write_vmcr		18
> > +#define __KVM_HOST_SMCCC_FUNC___vgic_v3_save_aprs		19
> > +#define __KVM_HOST_SMCCC_FUNC___vgic_v3_restore_aprs		20
> >  
> >  #ifndef __ASSEMBLY__
> >  
> > diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
> > index 2da6aa8da868..4120e34288e1 100644
> > --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
> > +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
> > @@ -165,36 +165,42 @@ typedef void (*hcall_t)(struct kvm_cpu_context *);
> >  #define HANDLE_FUNC(x)	[__KVM_HOST_SMCCC_FUNC_##x] = (hcall_t)handle_##x
> >  
> >  static const hcall_t host_hcall[] = {
> > -	HANDLE_FUNC(__kvm_vcpu_run),
> > +	/* ___kvm_hyp_init */
> > +	HANDLE_FUNC(__kvm_get_mdcr_el2),
> > +	HANDLE_FUNC(__pkvm_init),
> > +	HANDLE_FUNC(__pkvm_create_private_mapping),
> > +	HANDLE_FUNC(__pkvm_cpu_set_vector),
> > +	HANDLE_FUNC(__kvm_enable_ssbs),
> > +	HANDLE_FUNC(__vgic_v3_init_lrs),
> > +	HANDLE_FUNC(__pkvm_prot_finalize),
> > +
> > +	HANDLE_FUNC(__pkvm_host_share_hyp),
> >  	HANDLE_FUNC(__kvm_adjust_pc),
> > +	HANDLE_FUNC(__kvm_vcpu_run),
> >  	HANDLE_FUNC(__kvm_flush_vm_context),
> >  	HANDLE_FUNC(__kvm_tlb_flush_vmid_ipa),
> >  	HANDLE_FUNC(__kvm_tlb_flush_vmid),
> >  	HANDLE_FUNC(__kvm_flush_cpu_context),
> >  	HANDLE_FUNC(__kvm_timer_set_cntvoff),
> > -	HANDLE_FUNC(__kvm_enable_ssbs),
> >  	HANDLE_FUNC(__vgic_v3_get_gic_config),
> >  	HANDLE_FUNC(__vgic_v3_read_vmcr),
> >  	HANDLE_FUNC(__vgic_v3_write_vmcr),
> > -	HANDLE_FUNC(__vgic_v3_init_lrs),
> > -	HANDLE_FUNC(__kvm_get_mdcr_el2),
> >  	HANDLE_FUNC(__vgic_v3_save_aprs),
> >  	HANDLE_FUNC(__vgic_v3_restore_aprs),
> > -	HANDLE_FUNC(__pkvm_init),
> > -	HANDLE_FUNC(__pkvm_cpu_set_vector),
> > -	HANDLE_FUNC(__pkvm_host_share_hyp),
> > -	HANDLE_FUNC(__pkvm_create_private_mapping),
> > -	HANDLE_FUNC(__pkvm_prot_finalize),
> >  };
> >  
> >  static void handle_host_hcall(struct kvm_cpu_context *host_ctxt)
> >  {
> >  	DECLARE_REG(unsigned long, id, host_ctxt, 0);
> > +	unsigned long hcall_min = 0;
> >  	hcall_t hfn;
> >  
> > +	if (static_branch_unlikely(&kvm_protected_mode_initialized))
> > +		hcall_min = __KVM_HOST_SMCCC_FUNC___pkvm_prot_finalize;
> > +
> >  	id -= KVM_HOST_SMCCC_ID(0);
> >  
> > -	if (unlikely(id >= ARRAY_SIZE(host_hcall)))
> > +	if (unlikely(id < hcall_min || id >= ARRAY_SIZE(host_hcall)))
> 
> So I can still issue a pkvm_prot_finalize after finalisation? Seems
> odd. As hcall_min has to be inclusive, you probably want it to be set
> to __KVM_HOST_SMCCC_FUNC___pkvm_host_share_hyp once protected.

Yeah, I ended up addressing that one in the previous patch. The problem is
that we need to allow pkvm_prot_finalize to be called on each CPU, so I
think we'd end up having an extra "really finalize damnit!" call to be
issued _once_ after each CPU is done with the finalisation if we want
to lock it down.

The approach I took instead is to make pkvm_prot_finalize return -EBUSY
if it's called on a CPU where it's already been called.

Will


* Re: [PATCH 5/5] KVM: arm64: Disable privileged hypercalls after pKVM finalisation
  2021-09-23 13:02       ` Will Deacon
@ 2021-09-23 13:11         ` Marc Zyngier
  -1 siblings, 0 replies; 34+ messages in thread
From: Marc Zyngier @ 2021-09-23 13:11 UTC (permalink / raw)
  To: Will Deacon; +Cc: Catalin Marinas, kvmarm, linux-arm-kernel

On Thu, 23 Sep 2021 14:02:11 +0100,
Will Deacon <will@kernel.org> wrote:
> 
> On Thu, Sep 23, 2021 at 01:56:21PM +0100, Marc Zyngier wrote:
> > On Thu, 23 Sep 2021 12:22:56 +0100,
> > Will Deacon <will@kernel.org> wrote:

[...]

> > >  static void handle_host_hcall(struct kvm_cpu_context *host_ctxt)
> > >  {
> > >  	DECLARE_REG(unsigned long, id, host_ctxt, 0);
> > > +	unsigned long hcall_min = 0;
> > >  	hcall_t hfn;
> > >  
> > > +	if (static_branch_unlikely(&kvm_protected_mode_initialized))
> > > +		hcall_min = __KVM_HOST_SMCCC_FUNC___pkvm_prot_finalize;
> > > +
> > >  	id -= KVM_HOST_SMCCC_ID(0);
> > >  
> > > -	if (unlikely(id >= ARRAY_SIZE(host_hcall)))
> > > +	if (unlikely(id < hcall_min || id >= ARRAY_SIZE(host_hcall)))
> > 
> > So I can still issue a pkvm_prot_finalize after finalisation? Seems
> > odd. As hcall_min has to be inclusive, you probably want it to be set
> > to __KVM_HOST_SMCCC_FUNC___pkvm_host_share_hyp once protected.
> 
> Yeah, I ended up addressing that one in the previous patch. The problem is
> that we need to allow pkvm_prot_finalize to be called on each CPU, so I
> think we'd end up having an extra "really finalize damnit!" call to be
> issued _once_ after each CPU is done with the finalisation if we want
> to lock it down.
> 
> The approach I took instead is to make pkvm_prot_finalize return -EBUSY
> if it's called on a CPU where it's already been called.

Ah, I see. Serves me right for reading patches out of order. Finalise
is of course per-CPU, and the static key global. Epic fail.

Probably deserves a comment, because I'm surely going to jump at that
again in three months.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.


* Re: [PATCH 3/5] KVM: arm64: Propagate errors from __pkvm_prot_finalize hypercall
  2021-09-23 11:22   ` Will Deacon
@ 2021-09-29 13:36     ` Quentin Perret
  -1 siblings, 0 replies; 34+ messages in thread
From: Quentin Perret @ 2021-09-29 13:36 UTC (permalink / raw)
  To: Will Deacon; +Cc: Marc Zyngier, Catalin Marinas, kvmarm, linux-arm-kernel

On Thursday 23 Sep 2021 at 12:22:54 (+0100), Will Deacon wrote:
> If the __pkvm_prot_finalize hypercall returns an error, we WARN but fail
> to propagate the failure code back to kvm_arch_init().
> 
> Pass a pointer to a zero-initialised return variable so that failure
> to finalise the pKVM protections on a host CPU can be reported back to
> KVM.
> 
> Cc: Marc Zyngier <maz@kernel.org>
> Cc: Quentin Perret <qperret@google.com>
> Signed-off-by: Will Deacon <will@kernel.org>
> ---
>  arch/arm64/kvm/arm.c | 30 +++++++++++++++++++-----------
>  1 file changed, 19 insertions(+), 11 deletions(-)
> 
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index 9506cf88fa0e..13bbf35896cd 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -1986,9 +1986,25 @@ static int init_hyp_mode(void)
>  	return err;
>  }
>  
> -static void _kvm_host_prot_finalize(void *discard)
> +static void _kvm_host_prot_finalize(void *arg)
>  {
> -	WARN_ON(kvm_call_hyp_nvhe(__pkvm_prot_finalize));
> +	int *err = arg;
> +
> +	if (WARN_ON(kvm_call_hyp_nvhe(__pkvm_prot_finalize)))
> +		WRITE_ONCE(*err, -EINVAL);
> +}

I was going to suggest to propagate the hypercall's error code directly,
but this becomes very racy so n/m...

But this got me thinking about what we should do when the hyp init fails
while the protected mode has been explicitly enabled on the kernel
cmdline. That is, if we continue and boot the kernel w/o KVM support,
then I don't know how e.g. EL3 can know that it shouldn't give keys to
VMs because the kernel (and EL2) can't be trusted. It feels like it is
the kernel's responsibility to do something while it _is_ still
trustworthy.

I guess we could make any error code fatal in kvm_arch_init() when
is_protected_kvm_enabled() is on, or something along those lines? Maybe
dependent on CONFIG_NVHE_EL2_DEBUG=n?

It's probably a bit theoretical because there really shouldn't be any
reason to fail hyp init in production when using a signed kernel image
etc etc, but then if that is the case the additional check I'm
suggesting shouldn't hurt and will give us some peace of mind. Thoughts?

Thanks,
Quentin


* Re: [PATCH 2/5] KVM: arm64: Reject stub hypercalls after pKVM has been initialised
  2021-09-23 11:22   ` Will Deacon
@ 2021-09-29 13:37     ` Quentin Perret
  -1 siblings, 0 replies; 34+ messages in thread
From: Quentin Perret @ 2021-09-29 13:37 UTC (permalink / raw)
  To: Will Deacon; +Cc: Marc Zyngier, Catalin Marinas, kvmarm, linux-arm-kernel

On Thursday 23 Sep 2021 at 12:22:53 (+0100), Will Deacon wrote:
> The stub hypercalls provide mechanisms to reset and replace the EL2 code,
> so uninstall them once pKVM has been initialised in order to ensure the
> integrity of the hypervisor code.
> 
> To ensure pKVM initialisation remains functional, split cpu_hyp_reinit()
> into two helper functions to separate usage of the stub from usage of
> pkvm hypercalls either side of __pkvm_init on the boot CPU.
> 
> Cc: Marc Zyngier <maz@kernel.org>
> Cc: Quentin Perret <qperret@google.com>
> Signed-off-by: Will Deacon <will@kernel.org>

Reviewed-by: Quentin Perret <qperret@google.com>

Thanks,
Quentin


* Re: [PATCH 4/5] KVM: arm64: Prevent re-finalisation of pKVM for a given CPU
  2021-09-23 11:22   ` Will Deacon
@ 2021-09-29 13:41     ` Quentin Perret
  -1 siblings, 0 replies; 34+ messages in thread
From: Quentin Perret @ 2021-09-29 13:41 UTC (permalink / raw)
  To: Will Deacon; +Cc: Marc Zyngier, Catalin Marinas, kvmarm, linux-arm-kernel

On Thursday 23 Sep 2021 at 12:22:55 (+0100), Will Deacon wrote:
> __pkvm_prot_finalize() completes the deprivileging of the host when pKVM
> is in use by installing a stage-2 translation table for the calling CPU.
> 
> Issuing the hypercall multiple times for a given CPU makes little sense,
> but in such a case just return early with -EPERM rather than go through
> the whole page-table dance again.
> 
> Cc: Marc Zyngier <maz@kernel.org>
> Cc: Quentin Perret <qperret@google.com>
> Signed-off-by: Will Deacon <will@kernel.org>
> ---
>  arch/arm64/kvm/hyp/nvhe/mem_protect.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> index bacd493a4eac..cafe17e5fa8f 100644
> --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> @@ -123,6 +123,9 @@ int __pkvm_prot_finalize(void)
>  	struct kvm_s2_mmu *mmu = &host_kvm.arch.mmu;
>  	struct kvm_nvhe_init_params *params = this_cpu_ptr(&kvm_init_params);
>  
> +	if (params->hcr_el2 & HCR_VM)
> +		return -EPERM;

And you check this rather than the static key because we flip it upfront
I guess. Makes sense to me, but maybe a little comment would be useful :)
In any case:

Reviewed-by: Quentin Perret <qperret@google.com>

Thanks,
Quentin

* Re: [PATCH 3/5] KVM: arm64: Propagate errors from __pkvm_prot_finalize hypercall
  2021-09-29 13:36     ` Quentin Perret
@ 2021-10-05 11:30       ` Will Deacon
  -1 siblings, 0 replies; 34+ messages in thread
From: Will Deacon @ 2021-10-05 11:30 UTC (permalink / raw)
  To: Quentin Perret; +Cc: Marc Zyngier, Catalin Marinas, kvmarm, linux-arm-kernel

Hey Quentin,

On Wed, Sep 29, 2021 at 02:36:47PM +0100, Quentin Perret wrote:
> On Thursday 23 Sep 2021 at 12:22:54 (+0100), Will Deacon wrote:
> > If the __pkvm_prot_finalize hypercall returns an error, we WARN but fail
> > to propagate the failure code back to kvm_arch_init().
> > 
> > Pass a pointer to a zero-initialised return variable so that failure
> > to finalise the pKVM protections on a host CPU can be reported back to
> > KVM.
> > 
> > Cc: Marc Zyngier <maz@kernel.org>
> > Cc: Quentin Perret <qperret@google.com>
> > Signed-off-by: Will Deacon <will@kernel.org>
> > ---
> >  arch/arm64/kvm/arm.c | 30 +++++++++++++++++++-----------
> >  1 file changed, 19 insertions(+), 11 deletions(-)
> > 
> > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > index 9506cf88fa0e..13bbf35896cd 100644
> > --- a/arch/arm64/kvm/arm.c
> > +++ b/arch/arm64/kvm/arm.c
> > @@ -1986,9 +1986,25 @@ static int init_hyp_mode(void)
> >  	return err;
> >  }
> >  
> > -static void _kvm_host_prot_finalize(void *discard)
> > +static void _kvm_host_prot_finalize(void *arg)
> >  {
> > -	WARN_ON(kvm_call_hyp_nvhe(__pkvm_prot_finalize));
> > +	int *err = arg;
> > +
> > +	if (WARN_ON(kvm_call_hyp_nvhe(__pkvm_prot_finalize)))
> > +		WRITE_ONCE(*err, -EINVAL);
> > +}
> 
> I was going to suggest propagating the hypercall's error code directly,
> but this becomes very racy so n/m...
> 
> But this got me thinking about what we should do when the hyp init fails
> while the protected mode has been explicitly enabled on the kernel
> cmdline. That is, if we continue and boot the kernel w/o KVM support,
> then I don't know how e.g. EL3 can know that it shouldn't give keys to
> VMs because the kernel (and EL2) can't be trusted. It feels like it is
> the kernel's responsibility to do something while it _is_ still
> trustworthy.
> 
> I guess we could make any error code fatal in kvm_arch_init() when
> is_protected_kvm_enabled() is on, or something along those lines? Maybe
> dependent on CONFIG_NVHE_EL2_DEBUG=n?
> 
> It's probably a bit theoretical because there really shouldn't be any
> reason to fail hyp init in production when using a signed kernel image
> etc etc, but then if that is the case the additional check I'm
> suggesting shouldn't hurt and will give us some peace of mind. Thoughts?

It's an interesting one.

I'm not hugely keen on crashing the system if we fail to deprivilege the
host (which I think is effectively what is happening in the case you
describe), but you're right that we need to disable pKVM somehow in this
case. I think the best thing would be to wipe the pvmfw memory; that would
mean that the host can do whatever it likes at EL2, as the keys will no
longer be available.

I'll make a note about this, since I've parked the pvmfw patches until
we've got more of the pKVM infrastructure up and running.

Will

end of thread, other threads:[~2021-10-05 11:32 UTC | newest]

Thread overview: 34+ messages (download: mbox.gz / follow: Atom feed)
2021-09-23 11:22 [PATCH 0/5] KVM: arm64: Restrict host hypercalls when pKVM is enabled Will Deacon
2021-09-23 11:22 ` Will Deacon
2021-09-23 11:22 ` [PATCH 1/5] arm64: Prevent kexec and hibernation if is_protected_kvm_enabled() Will Deacon
2021-09-23 11:22   ` Will Deacon
2021-09-23 11:45   ` Mark Rutland
2021-09-23 11:45     ` Mark Rutland
2021-09-23 12:29     ` Will Deacon
2021-09-23 12:29       ` Will Deacon
2021-09-23 11:22 ` [PATCH 2/5] KVM: arm64: Reject stub hypercalls after pKVM has been initialised Will Deacon
2021-09-23 11:22   ` Will Deacon
2021-09-29 13:37   ` Quentin Perret
2021-09-29 13:37     ` Quentin Perret
2021-09-23 11:22 ` [PATCH 3/5] KVM: arm64: Propagate errors from __pkvm_prot_finalize hypercall Will Deacon
2021-09-23 11:22   ` Will Deacon
2021-09-29 13:36   ` Quentin Perret
2021-09-29 13:36     ` Quentin Perret
2021-10-05 11:30     ` Will Deacon
2021-10-05 11:30       ` Will Deacon
2021-09-23 11:22 ` [PATCH 4/5] KVM: arm64: Prevent re-finalisation of pKVM for a given CPU Will Deacon
2021-09-23 11:22   ` Will Deacon
2021-09-29 13:41   ` Quentin Perret
2021-09-29 13:41     ` Quentin Perret
2021-09-23 11:22 ` [PATCH 5/5] KVM: arm64: Disable privileged hypercalls after pKVM finalisation Will Deacon
2021-09-23 11:22   ` Will Deacon
2021-09-23 12:56   ` Marc Zyngier
2021-09-23 12:56     ` Marc Zyngier
2021-09-23 13:02     ` Will Deacon
2021-09-23 13:02       ` Will Deacon
2021-09-23 13:11       ` Marc Zyngier
2021-09-23 13:11         ` Marc Zyngier
2021-09-23 12:58   ` Will Deacon
2021-09-23 12:58     ` Will Deacon
2021-09-23 12:21 ` [PATCH 0/5] KVM: arm64: Restrict host hypercalls when pKVM is enabled Marc Zyngier
2021-09-23 12:21   ` Marc Zyngier
