* [PATCH v6 0/6] Add ARMv8.3 pointer authentication for kvm guest
From: Amit Daniel Kachhap @ 2019-02-19  9:24 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	Kristina Martsenko, linux-kernel, Amit Daniel Kachhap,
	Mark Rutland, James Morse, Julien Thierry

Hi,

This patch series adds pointer authentication support for KVM guests and
is based on top of Linux 5.0-rc6. The basic patches in this series were
originally posted by Mark Rutland[1,2], and those postings contain some
history of this work.

Extension Overview:
=============================================

The ARMv8.3 pointer authentication extension adds functionality to detect
modification of pointer values, mitigating certain classes of attack such as
stack smashing, and making return oriented programming attacks harder.

The extension introduces the concept of a pointer authentication code (PAC),
which is stored in some upper bits of pointers. Each PAC is derived from the
original pointer, another 64-bit value (e.g. the stack pointer), and a secret
128-bit key.
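
Schematically (a simplification; the architected default algorithm is the
QARMA block cipher, though an implementation may substitute its own):

	PAC = truncate(QARMA(key, pointer, modifier))

with the result packed into the pointer bits left unused by the configured
virtual address size.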

New instructions are added which can be used to:

* Insert a PAC into a pointer
* Strip a PAC from a pointer
* Authenticate and strip a PAC from a pointer
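
For illustration, a ptrauth-aware compiler typically wraps a non-leaf
function like this (a sketch only, assuming the APIA key with SP as the
modifier; the exact instruction choice depends on compiler flags):

	func:
		paciasp				// sign LR (x30) with APIAKey, SP as modifier
		stp	x29, x30, [sp, #-16]!	// spill the signed LR
		...
		ldp	x29, x30, [sp], #16	// reload the signed LR
		autiasp				// authenticate LR; a bad PAC leaves an invalid address
		ret				// so the return faults rather than being redirected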

A detailed description of ARMv8.3 pointer authentication support in
userspace/kernel can be found in Kristina's generic pointer authentication
patch series[3].

KVM guest work:
==============================================

If pointer authentication is enabled for a KVM guest, the new PAC instructions
do not trap to EL2. If it is not enabled, they are either ignored (if in the
HINT space) or trap to EL2 as illegal instructions. Since each KVM guest vcpu
runs as a host thread, it has ptrauth keys initialized for PAC use. On a world
switch between host and guest, these keys are swapped.

There were some review comments by Christoffer Dall on the original series
[1,2,3], and this patch series tries to address them.

The current v6 patch series incorporates most of the suggestions from James
Morse, Kristina, Julien and Dave.

This patch series is based on just a single patch from Dave Martin [8], which
adds control checks for accessing sys registers.

Changes since v5 [7]: Major changes are listed below.

* Split the hcr_el2 and mdcr_el2 save/restore into two patches.
* Reverted to save/restore of the sys-reg keys as done in V4 [5]. James Morse
  suggested implementing the ptrauth utilities in a single place in the arm64
  core and using them from KVM; however, that change deviates from the
  existing sys-reg implementations and is not scalable.
* Invoked the key switch C functions from the __guest_enter/__guest_exit
  assembly.
* Host key save is now done inside vcpu_load.
* Reverted the masking of the cpufeature ID registers for ptrauth when it is
  disabled from userspace.
* The ptrauth key registers are no longer reset conditionally.
* Code and documentation cleanup.

Changes since v4 [6]: Several suggestions from James Morse
* Moved the host registers to be saved/restored inside struct kvm_cpu_context.
* As with hcr_el2, save/restore the mdcr_el2 register as well.
* Added save routines for the ptrauth keys in the generic arm64 core and
  used them during the KVM context switch.
* Defined a GCC attribute, __no_ptrauth, which prevents the compiler from
  generating ptrauth instructions in a function. This is taken from Kristina's
  earlier kernel pointer authentication support patches [4].
* Dropped the patch to mask the cpufeature when not enabled from userspace;
  now only the key registers are masked from the register list.

Changes since v3 [5]:
* Use pointer authentication only when VHE is present, as ARMv8.3 implies the
  ARMv8.1 features are present.
* Added back the lazy context handling of ptrauth instructions from the V2
  version.
* Added more details in Documentation.

Changes since v2 [1,2]:
* Allow host and guest to have different HCR_EL2 settings, not just the
  constant values HCR_HOST_VHE_FLAGS or HCR_HOST_NVHE_FLAGS.
* Optimise the reading of HCR_EL2 in the host/guest switch by fetching it once
  during KVM initialisation and using that value later.
* Context switch the pointer authentication keys when switching between guest
  and host. Pointer authentication was handled lazily in the earlier
  version[2]; that is removed now to keep things simple, but it can be
  revisited later if a significant performance issue shows up.
* Added a userspace option to choose pointer authentication (see the sketch
  below).
* Based on the userspace option, the ptrauth cpufeature will be visible.
* Based on the userspace option, the ptrauth key registers will be accessible.
* A small document is added on how to enable pointer authentication from the
  userspace KVM API.
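
For reference, opting in from a VMM would look roughly like the snippet
below (a sketch only: error handling is elided, and vmfd/vcpufd are assumed
to come from the usual KVM_CREATE_VM/KVM_CREATE_VCPU calls):

	struct kvm_vcpu_init init;

	ioctl(vmfd, KVM_ARM_PREFERRED_TARGET, &init);
	init.features[0] |= 1 << KVM_ARM_VCPU_PTRAUTH;	/* feature bit 4 */
	ioctl(vcpufd, KVM_ARM_VCPU_INIT, &init);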

Looking for feedback and comments.

Thanks,
Amit

[1]: https://lore.kernel.org/lkml/20171127163806.31435-11-mark.rutland@arm.com/
[2]: https://lore.kernel.org/lkml/20171127163806.31435-10-mark.rutland@arm.com/
[3]: https://lkml.org/lkml/2018/12/7/666
[4]: https://lore.kernel.org/lkml/20181005084754.20950-1-kristina.martsenko@arm.com/
[5]: https://lkml.org/lkml/2018/10/17/594
[6]: https://lkml.org/lkml/2018/12/18/80
[7]: https://lkml.org/lkml/2019/1/28/49
[8]: https://lore.kernel.org/linux-arm-kernel/1547757219-19439-13-git-send-email-Dave.Martin@arm.com/


Linux (5.0-rc6 based):

Amit Daniel Kachhap (5):
  arm64/kvm: preserve host HCR_EL2 value
  arm64/kvm: preserve host MDCR_EL2 value
  arm64/kvm: context-switch ptrauth registers
  arm64/kvm: add a userspace option to enable pointer authentication
  arm64/kvm: control accessibility of ptrauth key registers

 Documentation/arm64/pointer-authentication.txt |  13 ++-
 Documentation/virtual/kvm/api.txt              |   4 +
 arch/arm/include/asm/kvm_host.h                |   4 +-
 arch/arm64/include/asm/kvm_asm.h               |   2 +
 arch/arm64/include/asm/kvm_emulate.h           |  22 ++---
 arch/arm64/include/asm/kvm_host.h              |  45 ++++++++--
 arch/arm64/include/asm/kvm_hyp.h               |   9 +-
 arch/arm64/include/uapi/asm/kvm.h              |   1 +
 arch/arm64/kernel/traps.c                      |   1 +
 arch/arm64/kvm/debug.c                         |  28 ++----
 arch/arm64/kvm/guest.c                         |   2 +-
 arch/arm64/kvm/handle_exit.c                   |  21 +++--
 arch/arm64/kvm/hyp/Makefile                    |   1 +
 arch/arm64/kvm/hyp/entry.S                     |  17 ++++
 arch/arm64/kvm/hyp/ptrauth-sr.c                | 115 +++++++++++++++++++++++++
 arch/arm64/kvm/hyp/switch.c                    |  40 ++++-----
 arch/arm64/kvm/hyp/sysreg-sr.c                 |  27 +++++-
 arch/arm64/kvm/hyp/tlb.c                       |   6 +-
 arch/arm64/kvm/reset.c                         |   3 +
 arch/arm64/kvm/sys_regs.c                      |  66 +++++++++++---
 include/uapi/linux/kvm.h                       |   1 +
 virt/kvm/arm/arm.c                             |   4 +-
 22 files changed, 338 insertions(+), 94 deletions(-)
 create mode 100644 arch/arm64/kvm/hyp/ptrauth-sr.c

kvmtool:

Repo: git.kernel.org/pub/scm/linux/kernel/git/will/kvmtool.git

Amit Daniel Kachhap (1):
  arm/kvm: arm64: Add a vcpu feature for pointer authentication

 arm/aarch32/include/kvm/kvm-cpu-arch.h    | 1 +
 arm/aarch64/include/asm/kvm.h             | 1 +
 arm/aarch64/include/kvm/kvm-config-arch.h | 4 +++-
 arm/aarch64/include/kvm/kvm-cpu-arch.h    | 1 +
 arm/include/arm-common/kvm-config-arch.h  | 1 +
 arm/kvm-cpu.c                             | 6 ++++++
 include/linux/kvm.h                       | 1 +
 7 files changed, 14 insertions(+), 1 deletion(-)

-- 
2.7.4



* [PATCH v6 1/6] arm64/kvm: preserve host HCR_EL2 value
From: Amit Daniel Kachhap @ 2019-02-19  9:24 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	Kristina Martsenko, linux-kernel, Amit Daniel Kachhap,
	Mark Rutland, James Morse, Julien Thierry

From: Mark Rutland <mark.rutland@arm.com>

When restoring HCR_EL2 for the host, KVM uses HCR_HOST_VHE_FLAGS, which
is a constant value. This works today, as the host HCR_EL2 value is
always the same, but this will get in the way of supporting extensions
that require HCR_EL2 bits to be set conditionally for the host.

To allow such features to work without KVM having to explicitly handle every
possible host feature combination, this patch has KVM save/restore the host
HCR_EL2 value when switching to/from a guest. The register is saved once
during per-cpu hypervisor initialisation and simply restored after each
switch back from the guest.

To fetch HCR_EL2 during KVM initialisation, a hyp call is made using
kvm_call_hyp; this is needed in the non-VHE (nVHE) case, where the register
must be read at EL2.

For the hyp TLB maintenance code, __tlb_switch_to_host_vhe() is updated
to toggle the TGE bit with a RMW sequence, as we already do in
__tlb_switch_to_guest_vhe().

The value of hcr_el2 is now stored in struct kvm_cpu_context as both host
and guest can now use this field in a common way.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
[Added __cpu_copy_hyp_conf, hcr_el2 field in struct kvm_cpu_context]
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---
 arch/arm/include/asm/kvm_host.h      |  2 ++
 arch/arm64/include/asm/kvm_asm.h     |  2 ++
 arch/arm64/include/asm/kvm_emulate.h | 22 +++++++++++-----------
 arch/arm64/include/asm/kvm_host.h    | 13 ++++++++++++-
 arch/arm64/include/asm/kvm_hyp.h     |  2 +-
 arch/arm64/kvm/guest.c               |  2 +-
 arch/arm64/kvm/hyp/switch.c          | 23 +++++++++++++----------
 arch/arm64/kvm/hyp/sysreg-sr.c       | 21 ++++++++++++++++++++-
 arch/arm64/kvm/hyp/tlb.c             |  6 +++++-
 virt/kvm/arm/arm.c                   |  1 +
 10 files changed, 68 insertions(+), 26 deletions(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index ca56537..05706b4 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -273,6 +273,8 @@ static inline void __cpu_init_stage2(void)
 	kvm_call_hyp(__init_stage2_translation);
 }
 
+static inline void __cpu_copy_hyp_conf(void) {}
+
 static inline int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 {
 	return 0;
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index f5b79e9..8acd73f 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -80,6 +80,8 @@ extern void __vgic_v3_init_lrs(void);
 
 extern u32 __kvm_get_mdcr_el2(void);
 
+extern void __kvm_populate_host_regs(void);
+
 /* Home-grown __this_cpu_{ptr,read} variants that always work at HYP */
 #define __hyp_this_cpu_ptr(sym)						\
 	({								\
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 506386a..0dbe795 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -50,25 +50,25 @@ void kvm_inject_pabt32(struct kvm_vcpu *vcpu, unsigned long addr);
 
 static inline bool vcpu_el1_is_32bit(struct kvm_vcpu *vcpu)
 {
-	return !(vcpu->arch.hcr_el2 & HCR_RW);
+	return !(vcpu->arch.ctxt.hcr_el2 & HCR_RW);
 }
 
 static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
 {
-	vcpu->arch.hcr_el2 = HCR_GUEST_FLAGS;
+	vcpu->arch.ctxt.hcr_el2 = HCR_GUEST_FLAGS;
 	if (is_kernel_in_hyp_mode())
-		vcpu->arch.hcr_el2 |= HCR_E2H;
+		vcpu->arch.ctxt.hcr_el2 |= HCR_E2H;
 	if (cpus_have_const_cap(ARM64_HAS_RAS_EXTN)) {
 		/* route synchronous external abort exceptions to EL2 */
-		vcpu->arch.hcr_el2 |= HCR_TEA;
+		vcpu->arch.ctxt.hcr_el2 |= HCR_TEA;
 		/* trap error record accesses */
-		vcpu->arch.hcr_el2 |= HCR_TERR;
+		vcpu->arch.ctxt.hcr_el2 |= HCR_TERR;
 	}
 	if (cpus_have_const_cap(ARM64_HAS_STAGE2_FWB))
-		vcpu->arch.hcr_el2 |= HCR_FWB;
+		vcpu->arch.ctxt.hcr_el2 |= HCR_FWB;
 
 	if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features))
-		vcpu->arch.hcr_el2 &= ~HCR_RW;
+		vcpu->arch.ctxt.hcr_el2 &= ~HCR_RW;
 
 	/*
 	 * TID3: trap feature register accesses that we virtualise.
@@ -76,22 +76,22 @@ static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
 	 * are currently virtualised.
 	 */
 	if (!vcpu_el1_is_32bit(vcpu))
-		vcpu->arch.hcr_el2 |= HCR_TID3;
+		vcpu->arch.ctxt.hcr_el2 |= HCR_TID3;
 }
 
 static inline unsigned long *vcpu_hcr(struct kvm_vcpu *vcpu)
 {
-	return (unsigned long *)&vcpu->arch.hcr_el2;
+	return (unsigned long *)&vcpu->arch.ctxt.hcr_el2;
 }
 
 static inline void vcpu_clear_wfe_traps(struct kvm_vcpu *vcpu)
 {
-	vcpu->arch.hcr_el2 &= ~HCR_TWE;
+	vcpu->arch.ctxt.hcr_el2 &= ~HCR_TWE;
 }
 
 static inline void vcpu_set_wfe_traps(struct kvm_vcpu *vcpu)
 {
-	vcpu->arch.hcr_el2 |= HCR_TWE;
+	vcpu->arch.ctxt.hcr_el2 |= HCR_TWE;
 }
 
 static inline unsigned long vcpu_get_vsesr(struct kvm_vcpu *vcpu)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 7732d0b..1b2e05b 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -203,6 +203,8 @@ struct kvm_cpu_context {
 		u32 copro[NR_COPRO_REGS];
 	};
 
+	/* HYP host/guest configuration */
+	u64 hcr_el2;
 	struct kvm_vcpu *__hyp_running_vcpu;
 };
 
@@ -212,7 +214,6 @@ struct kvm_vcpu_arch {
 	struct kvm_cpu_context ctxt;
 
 	/* HYP configuration */
-	u64 hcr_el2;
 	u32 mdcr_el2;
 
 	/* Exception Information */
@@ -458,6 +459,16 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
 
 static inline void __cpu_init_stage2(void) {}
 
+/**
+ * __cpu_copy_hyp_conf - copy the boot hyp configuration registers
+ *
+ * It is called once per-cpu during CPU hyp initialisation.
+ */
+static inline void __cpu_copy_hyp_conf(void)
+{
+	kvm_call_hyp(__kvm_populate_host_regs);
+}
+
 /* Guest/host FPSIMD coordination helpers */
 int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu);
 void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index a80a7ef..6e65cad 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -151,7 +151,7 @@ void __fpsimd_restore_state(struct user_fpsimd_state *fp_regs);
 bool __fpsimd_enabled(void);
 
 void activate_traps_vhe_load(struct kvm_vcpu *vcpu);
-void deactivate_traps_vhe_put(void);
+void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu);
 
 u64 __guest_enter(struct kvm_vcpu *vcpu, struct kvm_cpu_context *host_ctxt);
 void __noreturn __hyp_do_panic(unsigned long, ...);
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index dd436a5..e2f0268 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -345,7 +345,7 @@ int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu,
 int __kvm_arm_vcpu_get_events(struct kvm_vcpu *vcpu,
 			      struct kvm_vcpu_events *events)
 {
-	events->exception.serror_pending = !!(vcpu->arch.hcr_el2 & HCR_VSE);
+	events->exception.serror_pending = !!(vcpu->arch.ctxt.hcr_el2 & HCR_VSE);
 	events->exception.serror_has_esr = cpus_have_const_cap(ARM64_HAS_RAS_EXTN);
 
 	if (events->exception.serror_pending && events->exception.serror_has_esr)
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index b0b1478..006bd33 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -126,7 +126,7 @@ static void __hyp_text __activate_traps_nvhe(struct kvm_vcpu *vcpu)
 
 static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu)
 {
-	u64 hcr = vcpu->arch.hcr_el2;
+	u64 hcr = vcpu->arch.ctxt.hcr_el2;
 
 	write_sysreg(hcr, hcr_el2);
 
@@ -139,10 +139,10 @@ static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu)
 		__activate_traps_nvhe(vcpu);
 }
 
-static void deactivate_traps_vhe(void)
+static void deactivate_traps_vhe(struct kvm_cpu_context *host_ctxt)
 {
 	extern char vectors[];	/* kernel exception vectors */
-	write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2);
+	write_sysreg(host_ctxt->hcr_el2, hcr_el2);
 
 	/*
 	 * ARM erratum 1165522 requires the actual execution of the above
@@ -155,7 +155,7 @@ static void deactivate_traps_vhe(void)
 	write_sysreg(vectors, vbar_el1);
 }
 
-static void __hyp_text __deactivate_traps_nvhe(void)
+static void __hyp_text __deactivate_traps_nvhe(struct kvm_cpu_context *host_ctxt)
 {
 	u64 mdcr_el2 = read_sysreg(mdcr_el2);
 
@@ -165,25 +165,28 @@ static void __hyp_text __deactivate_traps_nvhe(void)
 	mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT;
 
 	write_sysreg(mdcr_el2, mdcr_el2);
-	write_sysreg(HCR_HOST_NVHE_FLAGS, hcr_el2);
+	write_sysreg(host_ctxt->hcr_el2, hcr_el2);
 	write_sysreg(CPTR_EL2_DEFAULT, cptr_el2);
 }
 
 static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu)
 {
+	struct kvm_cpu_context *host_ctxt;
+
+	host_ctxt = vcpu->arch.host_cpu_context;
 	/*
 	 * If we pended a virtual abort, preserve it until it gets
 	 * cleared. See D1.14.3 (Virtual Interrupts) for details, but
 	 * the crucial bit is "On taking a vSError interrupt,
 	 * HCR_EL2.VSE is cleared to 0."
 	 */
-	if (vcpu->arch.hcr_el2 & HCR_VSE)
-		vcpu->arch.hcr_el2 = read_sysreg(hcr_el2);
+	if (vcpu->arch.ctxt.hcr_el2 & HCR_VSE)
+		vcpu->arch.ctxt.hcr_el2 = read_sysreg(hcr_el2);
 
 	if (has_vhe())
-		deactivate_traps_vhe();
+		deactivate_traps_vhe(host_ctxt);
 	else
-		__deactivate_traps_nvhe();
+		__deactivate_traps_nvhe(host_ctxt);
 }
 
 void activate_traps_vhe_load(struct kvm_vcpu *vcpu)
@@ -191,7 +194,7 @@ void activate_traps_vhe_load(struct kvm_vcpu *vcpu)
 	__activate_traps_common(vcpu);
 }
 
-void deactivate_traps_vhe_put(void)
+void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu)
 {
 	u64 mdcr_el2 = read_sysreg(mdcr_el2);
 
diff --git a/arch/arm64/kvm/hyp/sysreg-sr.c b/arch/arm64/kvm/hyp/sysreg-sr.c
index 68d6f7c..68ddc0f 100644
--- a/arch/arm64/kvm/hyp/sysreg-sr.c
+++ b/arch/arm64/kvm/hyp/sysreg-sr.c
@@ -21,6 +21,7 @@
 #include <asm/kvm_asm.h>
 #include <asm/kvm_emulate.h>
 #include <asm/kvm_hyp.h>
+#include <asm/kvm_mmu.h>
 
 /*
  * Non-VHE: Both host and guest must save everything.
@@ -294,7 +295,7 @@ void kvm_vcpu_put_sysregs(struct kvm_vcpu *vcpu)
 	if (!has_vhe())
 		return;
 
-	deactivate_traps_vhe_put();
+	deactivate_traps_vhe_put(vcpu);
 
 	__sysreg_save_el1_state(guest_ctxt);
 	__sysreg_save_user_state(guest_ctxt);
@@ -316,3 +317,21 @@ void __hyp_text __kvm_enable_ssbs(void)
 	"msr	sctlr_el2, %0"
 	: "=&r" (tmp) : "L" (SCTLR_ELx_DSSBS));
 }
+
+/**
+ * __kvm_populate_host_regs - Stores host register values
+ *
+ * This function is invoked via kvm_call_hyp from EL1, so that the host's
+ * EL2 register values can be read and stored away.
+ */
+void __hyp_text __kvm_populate_host_regs(void)
+{
+	struct kvm_cpu_context *host_ctxt;
+
+	if (has_vhe())
+		host_ctxt = this_cpu_ptr(&kvm_host_cpu_state);
+	else
+		host_ctxt = __hyp_this_cpu_ptr(kvm_host_cpu_state);
+
+	host_ctxt->hcr_el2 = read_sysreg(hcr_el2);
+}
diff --git a/arch/arm64/kvm/hyp/tlb.c b/arch/arm64/kvm/hyp/tlb.c
index 76c3086..c5e7144 100644
--- a/arch/arm64/kvm/hyp/tlb.c
+++ b/arch/arm64/kvm/hyp/tlb.c
@@ -86,12 +86,16 @@ static hyp_alternate_select(__tlb_switch_to_guest,
 static void __hyp_text __tlb_switch_to_host_vhe(struct kvm *kvm,
 						struct tlb_inv_context *cxt)
 {
+	u64 val;
+
 	/*
 	 * We're done with the TLB operation, let's restore the host's
 	 * view of HCR_EL2.
 	 */
 	write_sysreg(0, vttbr_el2);
-	write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2);
+	val = read_sysreg(hcr_el2);
+	val |= HCR_TGE;
+	write_sysreg(val, hcr_el2);
 	isb();
 
 	if (cpus_have_const_cap(ARM64_WORKAROUND_1165522)) {
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 9e350fd3..8e18f7f 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -1328,6 +1328,7 @@ static void cpu_hyp_reinit(void)
 		cpu_init_hyp_mode(NULL);
 
 	kvm_arm_init_debug();
+	__cpu_copy_hyp_conf();
 
 	if (vgic_present)
 		kvm_vgic_init_cpu_hardware();
-- 
2.7.4



* [PATCH v6 2/6] arm64/kvm: preserve host MDCR_EL2 value
From: Amit Daniel Kachhap @ 2019-02-19  9:24 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	Kristina Martsenko, linux-kernel, Amit Daniel Kachhap,
	Mark Rutland, James Morse, Julien Thierry

Save the host MDCR_EL2 value during KVM HYP initialisation and restore it
after every switch back from guest to host. There should be no functional
change due to this.

The value of mdcr_el2 is now stored in struct kvm_cpu_context as
both host and guest can now use this field in a common way.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---
 arch/arm/include/asm/kvm_host.h   |  1 -
 arch/arm64/include/asm/kvm_host.h |  6 ++----
 arch/arm64/kvm/debug.c            | 28 ++++++----------------------
 arch/arm64/kvm/hyp/switch.c       | 17 ++++-------------
 arch/arm64/kvm/hyp/sysreg-sr.c    |  6 ++++++
 virt/kvm/arm/arm.c                |  1 -
 6 files changed, 18 insertions(+), 41 deletions(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 05706b4..704667e 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -294,7 +294,6 @@ static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
 static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
 
-static inline void kvm_arm_init_debug(void) {}
 static inline void kvm_arm_setup_debug(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arm_clear_debug(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu) {}
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 1b2e05b..2f1bb86 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -205,6 +205,8 @@ struct kvm_cpu_context {
 
 	/* HYP host/guest configuration */
 	u64 hcr_el2;
+	u32 mdcr_el2;
+
 	struct kvm_vcpu *__hyp_running_vcpu;
 };
 
@@ -213,9 +215,6 @@ typedef struct kvm_cpu_context kvm_cpu_context_t;
 struct kvm_vcpu_arch {
 	struct kvm_cpu_context ctxt;
 
-	/* HYP configuration */
-	u32 mdcr_el2;
-
 	/* Exception Information */
 	struct kvm_vcpu_fault_info fault;
 
@@ -446,7 +445,6 @@ static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
 static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
 
-void kvm_arm_init_debug(void);
 void kvm_arm_setup_debug(struct kvm_vcpu *vcpu);
 void kvm_arm_clear_debug(struct kvm_vcpu *vcpu);
 void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
index f39801e..99dc0a4 100644
--- a/arch/arm64/kvm/debug.c
+++ b/arch/arm64/kvm/debug.c
@@ -32,8 +32,6 @@
 				DBG_MDSCR_KDE | \
 				DBG_MDSCR_MDE)
 
-static DEFINE_PER_CPU(u32, mdcr_el2);
-
 /**
  * save/restore_guest_debug_regs
  *
@@ -65,21 +63,6 @@ static void restore_guest_debug_regs(struct kvm_vcpu *vcpu)
 }
 
 /**
- * kvm_arm_init_debug - grab what we need for debug
- *
- * Currently the sole task of this function is to retrieve the initial
- * value of mdcr_el2 so we can preserve MDCR_EL2.HPMN which has
- * presumably been set-up by some knowledgeable bootcode.
- *
- * It is called once per-cpu during CPU hyp initialisation.
- */
-
-void kvm_arm_init_debug(void)
-{
-	__this_cpu_write(mdcr_el2, kvm_call_hyp(__kvm_get_mdcr_el2));
-}
-
-/**
  * kvm_arm_reset_debug_ptr - reset the debug ptr to point to the vcpu state
  */
 
@@ -111,6 +94,7 @@ void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu)
 
 void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
 {
+	kvm_cpu_context_t *host_cxt = this_cpu_ptr(&kvm_host_cpu_state);
 	bool trap_debug = !(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY);
 	unsigned long mdscr;
 
@@ -120,8 +104,8 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
 	 * This also clears MDCR_EL2_E2PB_MASK to disable guest access
 	 * to the profiling buffer.
 	 */
-	vcpu->arch.mdcr_el2 = __this_cpu_read(mdcr_el2) & MDCR_EL2_HPMN_MASK;
-	vcpu->arch.mdcr_el2 |= (MDCR_EL2_TPM |
+	vcpu->arch.ctxt.mdcr_el2 = host_cxt->mdcr_el2 & MDCR_EL2_HPMN_MASK;
+	vcpu->arch.ctxt.mdcr_el2 |= (MDCR_EL2_TPM |
 				MDCR_EL2_TPMS |
 				MDCR_EL2_TPMCR |
 				MDCR_EL2_TDRA |
@@ -130,7 +114,7 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
 	/* Is Guest debugging in effect? */
 	if (vcpu->guest_debug) {
 		/* Route all software debug exceptions to EL2 */
-		vcpu->arch.mdcr_el2 |= MDCR_EL2_TDE;
+		vcpu->arch.ctxt.mdcr_el2 |= MDCR_EL2_TDE;
 
 		/* Save guest debug state */
 		save_guest_debug_regs(vcpu);
@@ -202,13 +186,13 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
 
 	/* Trap debug register access */
 	if (trap_debug)
-		vcpu->arch.mdcr_el2 |= MDCR_EL2_TDA;
+		vcpu->arch.ctxt.mdcr_el2 |= MDCR_EL2_TDA;
 
 	/* If KDE or MDE are set, perform a full save/restore cycle. */
 	if (vcpu_read_sys_reg(vcpu, MDSCR_EL1) & (DBG_MDSCR_KDE | DBG_MDSCR_MDE))
 		vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY;
 
-	trace_kvm_arm_set_dreg32("MDCR_EL2", vcpu->arch.mdcr_el2);
+	trace_kvm_arm_set_dreg32("MDCR_EL2", vcpu->arch.ctxt.mdcr_el2);
 	trace_kvm_arm_set_dreg32("MDSCR_EL1", vcpu_read_sys_reg(vcpu, MDSCR_EL1));
 }
 
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index 006bd33..03b36f1 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -82,7 +82,7 @@ static void __hyp_text __activate_traps_common(struct kvm_vcpu *vcpu)
 	 */
 	write_sysreg(0, pmselr_el0);
 	write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0);
-	write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
+	write_sysreg(vcpu->arch.ctxt.mdcr_el2, mdcr_el2);
 }
 
 static void __hyp_text __deactivate_traps_common(void)
@@ -157,14 +157,9 @@ static void deactivate_traps_vhe(struct kvm_cpu_context *host_ctxt)
 
 static void __hyp_text __deactivate_traps_nvhe(struct kvm_cpu_context *host_ctxt)
 {
-	u64 mdcr_el2 = read_sysreg(mdcr_el2);
-
 	__deactivate_traps_common();
 
-	mdcr_el2 &= MDCR_EL2_HPMN_MASK;
-	mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT;
-
-	write_sysreg(mdcr_el2, mdcr_el2);
+	write_sysreg(host_ctxt->mdcr_el2, mdcr_el2);
 	write_sysreg(host_ctxt->hcr_el2, hcr_el2);
 	write_sysreg(CPTR_EL2_DEFAULT, cptr_el2);
 }
@@ -196,13 +191,9 @@ void activate_traps_vhe_load(struct kvm_vcpu *vcpu)
 
 void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu)
 {
-	u64 mdcr_el2 = read_sysreg(mdcr_el2);
-
-	mdcr_el2 &= MDCR_EL2_HPMN_MASK |
-		    MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT |
-		    MDCR_EL2_TPMS;
+	struct kvm_cpu_context *host_ctxt = vcpu->arch.host_cpu_context;
 
-	write_sysreg(mdcr_el2, mdcr_el2);
+	write_sysreg(host_ctxt->mdcr_el2, mdcr_el2);
 
 	__deactivate_traps_common();
 }
diff --git a/arch/arm64/kvm/hyp/sysreg-sr.c b/arch/arm64/kvm/hyp/sysreg-sr.c
index 68ddc0f..42ec50f 100644
--- a/arch/arm64/kvm/hyp/sysreg-sr.c
+++ b/arch/arm64/kvm/hyp/sysreg-sr.c
@@ -334,4 +334,10 @@ void __hyp_text __kvm_populate_host_regs(void)
 		host_ctxt = __hyp_this_cpu_ptr(kvm_host_cpu_state);
 
 	host_ctxt->hcr_el2 = read_sysreg(hcr_el2);
+	/*
+	 * Retrieve the initial value of mdcr_el2 so we can preserve
+	 * MDCR_EL2.HPMN which has presumably been set-up by some
+	 * knowledgeable bootcode.
+	 */
+	host_ctxt->mdcr_el2 = read_sysreg(mdcr_el2);
 }
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 8e18f7f..2032a66 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -1327,7 +1327,6 @@ static void cpu_hyp_reinit(void)
 	else
 		cpu_init_hyp_mode(NULL);
 
-	kvm_arm_init_debug();
 	__cpu_copy_hyp_conf();
 
 	if (vgic_present)
-- 
2.7.4



* [PATCH v6 3/6] arm64/kvm: context-switch ptrauth registers
From: Amit Daniel Kachhap @ 2019-02-19  9:24 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	Kristina Martsenko, linux-kernel, Amit Daniel Kachhap,
	Mark Rutland, James Morse, Julien Thierry

From: Mark Rutland <mark.rutland@arm.com>

When pointer authentication is supported, a guest may wish to use it.
This patch adds the necessary KVM infrastructure for this to work, with
a semi-lazy context switch of the pointer auth state.

The pointer authentication feature is only enabled when VHE is built into the
kernel and present in the CPU implementation, so only the VHE code paths are
modified.

When we schedule a vcpu, we disable guest usage of the pointer authentication
instructions and accesses to the keys. While these are disabled, we avoid
context-switching the keys. When we trap the guest trying to use pointer
authentication functionality, we change to eagerly context-switching the
keys and enable the feature. The next time the vcpu is scheduled out/in, we
start again. However, the host key registers are saved at vcpu load time, as
they remain constant for each vcpu schedule.

Pointer authentication consists of address authentication and generic
authentication, and CPUs in a system might have varied support for
either. Where support for either feature is not uniform, it is hidden
from guests via ID register emulation, as a result of the cpufeature
framework in the host.

Unfortunately, address authentication and generic authentication cannot
be trapped separately, as the architecture provides a single EL2 trap
covering both. If we wish to expose one without the other, we cannot
prevent a (badly-written) guest from intermittently using a feature
which is not uniformly supported (when scheduled on a physical CPU which
supports the relevant feature). Hence, this patch expects both types of
authentication to be present in a CPU.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
[Only VHE, key switch from assembly, kvm_supports_ptrauth
checks, save host key in vcpu_load]
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---
 arch/arm/include/asm/kvm_host.h   |   1 +
 arch/arm64/include/asm/kvm_host.h |  23 +++++++++
 arch/arm64/include/asm/kvm_hyp.h  |   7 +++
 arch/arm64/kernel/traps.c         |   1 +
 arch/arm64/kvm/handle_exit.c      |  21 +++++---
 arch/arm64/kvm/hyp/Makefile       |   1 +
 arch/arm64/kvm/hyp/entry.S        |  17 +++++++
 arch/arm64/kvm/hyp/ptrauth-sr.c   | 101 ++++++++++++++++++++++++++++++++++++++
 arch/arm64/kvm/sys_regs.c         |  37 +++++++++++++-
 virt/kvm/arm/arm.c                |   2 +
 10 files changed, 201 insertions(+), 10 deletions(-)
 create mode 100644 arch/arm64/kvm/hyp/ptrauth-sr.c

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 704667e..b200c14 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -345,6 +345,7 @@ static inline int kvm_arm_have_ssbd(void)
 
 static inline void kvm_vcpu_load_sysregs(struct kvm_vcpu *vcpu) {}
 static inline void kvm_vcpu_put_sysregs(struct kvm_vcpu *vcpu) {}
+static inline void kvm_arm_vcpu_ptrauth_reset(struct kvm_vcpu *vcpu) {}
 
 #define __KVM_HAVE_ARCH_VM_ALLOC
 struct kvm *kvm_arch_alloc_vm(void);
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 2f1bb86..1bacf78 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -146,6 +146,18 @@ enum vcpu_sysreg {
 	PMSWINC_EL0,	/* Software Increment Register */
 	PMUSERENR_EL0,	/* User Enable Register */
 
+	/* Pointer Authentication Registers */
+	APIAKEYLO_EL1,
+	APIAKEYHI_EL1,
+	APIBKEYLO_EL1,
+	APIBKEYHI_EL1,
+	APDAKEYLO_EL1,
+	APDAKEYHI_EL1,
+	APDBKEYLO_EL1,
+	APDBKEYHI_EL1,
+	APGAKEYLO_EL1,
+	APGAKEYHI_EL1,
+
 	/* 32bit specific registers. Keep them at the end of the range */
 	DACR32_EL2,	/* Domain Access Control Register */
 	IFSR32_EL2,	/* Instruction Fault Status Register */
@@ -439,6 +451,17 @@ static inline bool kvm_arch_requires_vhe(void)
 	return false;
 }
 
+static inline bool kvm_supports_ptrauth(void)
+{
+	return has_vhe() && system_supports_address_auth() &&
+				system_supports_generic_auth();
+}
+
+void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu);
+void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu);
+void kvm_arm_vcpu_ptrauth_reset(struct kvm_vcpu *vcpu);
+void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu);
+
 static inline void kvm_arch_hardware_unsetup(void) {}
 static inline void kvm_arch_sync_events(struct kvm *kvm) {}
 static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {}
diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 6e65cad..09e061a 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -153,6 +153,13 @@ bool __fpsimd_enabled(void);
 void activate_traps_vhe_load(struct kvm_vcpu *vcpu);
 void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu);
 
+void __ptrauth_switch_to_guest(struct kvm_vcpu *vcpu,
+			       struct kvm_cpu_context *host_ctxt,
+			       struct kvm_cpu_context *guest_ctxt);
+void __ptrauth_switch_to_host(struct kvm_vcpu *vcpu,
+			      struct kvm_cpu_context *guest_ctxt,
+			      struct kvm_cpu_context *host_ctxt);
+
 u64 __guest_enter(struct kvm_vcpu *vcpu, struct kvm_cpu_context *host_ctxt);
 void __noreturn __hyp_do_panic(unsigned long, ...);
 
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index 4e2fb87..5cac605 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -749,6 +749,7 @@ static const char *esr_class_str[] = {
 	[ESR_ELx_EC_CP14_LS]		= "CP14 LDC/STC",
 	[ESR_ELx_EC_FP_ASIMD]		= "ASIMD",
 	[ESR_ELx_EC_CP10_ID]		= "CP10 MRC/VMRS",
+	[ESR_ELx_EC_PAC]		= "Pointer authentication trap",
 	[ESR_ELx_EC_CP14_64]		= "CP14 MCRR/MRRC",
 	[ESR_ELx_EC_ILL]		= "PSTATE.IL",
 	[ESR_ELx_EC_SVC32]		= "SVC (AArch32)",
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 0b79834..7622ab3 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -174,19 +174,24 @@ static int handle_sve(struct kvm_vcpu *vcpu, struct kvm_run *run)
 }
 
 /*
+ * Handle the guest trying to use a ptrauth instruction, or trying to access a
+ * ptrauth register.
+ */
+void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu)
+{
+	if (kvm_supports_ptrauth())
+		kvm_arm_vcpu_ptrauth_enable(vcpu);
+	else
+		kvm_inject_undefined(vcpu);
+}
+
+/*
  * Guest usage of a ptrauth instruction (which the guest EL1 did not turn into
  * a NOP).
  */
 static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu, struct kvm_run *run)
 {
-	/*
-	 * We don't currently support ptrauth in a guest, and we mask the ID
-	 * registers to prevent well-behaved guests from trying to make use of
-	 * it.
-	 *
-	 * Inject an UNDEF, as if the feature really isn't present.
-	 */
-	kvm_inject_undefined(vcpu);
+	kvm_arm_vcpu_ptrauth_trap(vcpu);
 	return 1;
 }
 
diff --git a/arch/arm64/kvm/hyp/Makefile b/arch/arm64/kvm/hyp/Makefile
index 82d1904..17cec99 100644
--- a/arch/arm64/kvm/hyp/Makefile
+++ b/arch/arm64/kvm/hyp/Makefile
@@ -19,6 +19,7 @@ obj-$(CONFIG_KVM_ARM_HOST) += switch.o
 obj-$(CONFIG_KVM_ARM_HOST) += fpsimd.o
 obj-$(CONFIG_KVM_ARM_HOST) += tlb.o
 obj-$(CONFIG_KVM_ARM_HOST) += hyp-entry.o
+obj-$(CONFIG_KVM_ARM_HOST) += ptrauth-sr.o
 
 # KVM code is run at a different exception code with a different map, so
 # compiler instrumentation that inserts callbacks or checks into the code may
diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
index 675fdc1..b78cc15 100644
--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -64,6 +64,12 @@ ENTRY(__guest_enter)
 
 	add	x18, x0, #VCPU_CONTEXT
 
+#ifdef	CONFIG_ARM64_PTR_AUTH
+	// Prepare parameters for __ptrauth_switch_to_guest(vcpu, host, guest).
+	mov	x2, x18
+	bl	__ptrauth_switch_to_guest
+#endif
+
 	// Restore guest regs x0-x17
 	ldp	x0, x1,   [x18, #CPU_XREG_OFFSET(0)]
 	ldp	x2, x3,   [x18, #CPU_XREG_OFFSET(2)]
@@ -118,6 +124,17 @@ ENTRY(__guest_exit)
 
 	get_host_ctxt	x2, x3
 
+#ifdef	CONFIG_ARM64_PTR_AUTH
+	// Prepare parameters for __ptrauth_switch_to_host(vcpu, guest, host).
+	// Save x0 and x2 into callee-saved registers, as they are used later.
+	mov	x19, x0
+	mov	x20, x2
+	sub	x0, x1, #VCPU_CONTEXT
+	ldr	x29, [x2, #CPU_XREG_OFFSET(29)]
+	bl	__ptrauth_switch_to_host
+	mov	x0, x19
+	mov	x2, x20
+#endif
 	// Now restore the host regs
 	restore_callee_saved_regs x2
 
diff --git a/arch/arm64/kvm/hyp/ptrauth-sr.c b/arch/arm64/kvm/hyp/ptrauth-sr.c
new file mode 100644
index 0000000..528ee6e
--- /dev/null
+++ b/arch/arm64/kvm/hyp/ptrauth-sr.c
@@ -0,0 +1,101 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * arch/arm64/kvm/hyp/ptrauth-sr.c: Guest/host ptrauth save/restore
+ *
+ * Copyright 2018 Arm Limited
+ * Author: Mark Rutland <mark.rutland@arm.com>
+ *         Amit Daniel Kachhap <amit.kachhap@arm.com>
+ */
+#include <linux/compiler.h>
+#include <linux/kvm_host.h>
+
+#include <asm/cpucaps.h>
+#include <asm/cpufeature.h>
+#include <asm/kvm_asm.h>
+#include <asm/kvm_hyp.h>
+#include <asm/pointer_auth.h>
+
+static __always_inline bool __ptrauth_is_enabled(struct kvm_vcpu *vcpu)
+{
+	return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) &&
+			vcpu->arch.ctxt.hcr_el2 & (HCR_API | HCR_APK);
+}
+
+#define __ptrauth_save_key(regs, key)						\
+({										\
+	regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1);	\
+	regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1);	\
+})
+
+static __always_inline void __ptrauth_save_state(struct kvm_cpu_context *ctxt)
+{
+	__ptrauth_save_key(ctxt->sys_regs, APIA);
+	__ptrauth_save_key(ctxt->sys_regs, APIB);
+	__ptrauth_save_key(ctxt->sys_regs, APDA);
+	__ptrauth_save_key(ctxt->sys_regs, APDB);
+	__ptrauth_save_key(ctxt->sys_regs, APGA);
+}
+
+#define __ptrauth_restore_key(regs, key) 					\
+({										\
+	write_sysreg_s(regs[key ## KEYLO_EL1], SYS_ ## key ## KEYLO_EL1);	\
+	write_sysreg_s(regs[key ## KEYHI_EL1], SYS_ ## key ## KEYHI_EL1);	\
+})
+
+static __always_inline void __ptrauth_restore_state(struct kvm_cpu_context *ctxt)
+{
+	__ptrauth_restore_key(ctxt->sys_regs, APIA);
+	__ptrauth_restore_key(ctxt->sys_regs, APIB);
+	__ptrauth_restore_key(ctxt->sys_regs, APDA);
+	__ptrauth_restore_key(ctxt->sys_regs, APDB);
+	__ptrauth_restore_key(ctxt->sys_regs, APGA);
+}
+
+/**
+ * This function changes the keys, so give it the pointer-authentication-safe
+ * GCC attribute if the kernel is protected by ptrauth.
+ */
+void __ptrauth_switch_to_guest(struct kvm_vcpu *vcpu,
+				  struct kvm_cpu_context *host_ctxt,
+				  struct kvm_cpu_context *guest_ctxt)
+{
+	if (!__ptrauth_is_enabled(vcpu))
+		return;
+
+	__ptrauth_restore_state(guest_ctxt);
+}
+
+/**
+ * This function changes the keys, so give it the pointer-authentication-safe
+ * GCC attribute if the kernel is protected by ptrauth.
+ */
+void __ptrauth_switch_to_host(struct kvm_vcpu *vcpu,
+				 struct kvm_cpu_context *guest_ctxt,
+				 struct kvm_cpu_context *host_ctxt)
+{
+	if (!__ptrauth_is_enabled(vcpu))
+		return;
+
+	__ptrauth_save_state(guest_ctxt);
+	__ptrauth_restore_state(host_ctxt);
+}
+
+/**
+ * kvm_arm_vcpu_ptrauth_reset - resets ptrauth for vcpu schedule
+ *
+ * @vcpu: The VCPU pointer
+ *
+ * This function may be used to disable ptrauth and handle it lazily via
+ * traps. However, the host key registers are saved here as they don't
+ * change during a host/guest switch.
+ */
+void kvm_arm_vcpu_ptrauth_reset(struct kvm_vcpu *vcpu)
+{
+	struct kvm_cpu_context *host_ctxt;
+
+	if (kvm_supports_ptrauth()) {
+		kvm_arm_vcpu_ptrauth_disable(vcpu);
+		host_ctxt = vcpu->arch.host_cpu_context;
+		__ptrauth_save_state(host_ctxt);
+	}
+}
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index a6c9381..12529df 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -986,6 +986,32 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	{ SYS_DESC(SYS_PMEVTYPERn_EL0(n)),					\
 	  access_pmu_evtyper, reset_unknown, (PMEVTYPER0_EL0 + n), }
 
+
+void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu)
+{
+	vcpu->arch.ctxt.hcr_el2 |= (HCR_API | HCR_APK);
+}
+
+void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu)
+{
+	vcpu->arch.ctxt.hcr_el2 &= ~(HCR_API | HCR_APK);
+}
+
+static bool trap_ptrauth(struct kvm_vcpu *vcpu,
+			 struct sys_reg_params *p,
+			 const struct sys_reg_desc *rd)
+{
+	kvm_arm_vcpu_ptrauth_trap(vcpu);
+	return false;
+}
+
+#define __PTRAUTH_KEY(k)						\
+	{ SYS_DESC(SYS_## k), trap_ptrauth, reset_unknown, k }
+
+#define PTRAUTH_KEY(k)							\
+	__PTRAUTH_KEY(k ## KEYLO_EL1),					\
+	__PTRAUTH_KEY(k ## KEYHI_EL1)
+
 static bool access_cntp_tval(struct kvm_vcpu *vcpu,
 		struct sys_reg_params *p,
 		const struct sys_reg_desc *r)
@@ -1045,9 +1071,10 @@ static u64 read_id_reg(struct sys_reg_desc const *r, bool raz)
 					 (0xfUL << ID_AA64ISAR1_API_SHIFT) |
 					 (0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
 					 (0xfUL << ID_AA64ISAR1_GPI_SHIFT);
-		if (val & ptrauth_mask)
+		if (!kvm_supports_ptrauth()) {
 			kvm_debug("ptrauth unsupported for guests, suppressing\n");
-		val &= ~ptrauth_mask;
+			val &= ~ptrauth_mask;
+		}
 	} else if (id == SYS_ID_AA64MMFR1_EL1) {
 		if (val & (0xfUL << ID_AA64MMFR1_LOR_SHIFT))
 			kvm_debug("LORegions unsupported for guests, suppressing\n");
@@ -1316,6 +1343,12 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	{ SYS_DESC(SYS_TTBR1_EL1), access_vm_reg, reset_unknown, TTBR1_EL1 },
 	{ SYS_DESC(SYS_TCR_EL1), access_vm_reg, reset_val, TCR_EL1, 0 },
 
+	PTRAUTH_KEY(APIA),
+	PTRAUTH_KEY(APIB),
+	PTRAUTH_KEY(APDA),
+	PTRAUTH_KEY(APDB),
+	PTRAUTH_KEY(APGA),
+
 	{ SYS_DESC(SYS_AFSR0_EL1), access_vm_reg, reset_unknown, AFSR0_EL1 },
 	{ SYS_DESC(SYS_AFSR1_EL1), access_vm_reg, reset_unknown, AFSR1_EL1 },
 	{ SYS_DESC(SYS_ESR_EL1), access_vm_reg, reset_unknown, ESR_EL1 },
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 2032a66..d7e003f 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -388,6 +388,8 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 		vcpu_clear_wfe_traps(vcpu);
 	else
 		vcpu_set_wfe_traps(vcpu);
+
+	kvm_arm_vcpu_ptrauth_reset(vcpu);
 }
 
 void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
-- 
2.7.4



* [PATCH v6 4/6] arm64/kvm: add a userspace option to enable pointer authentication
From: Amit Daniel Kachhap @ 2019-02-19  9:24 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	Kristina Martsenko, linux-kernel, Amit Daniel Kachhap,
	Mark Rutland, James Morse, Julien Thierry

This feature allows a KVM guest to use pointer authentication instructions;
when the flag is not set, these instructions are treated as undefined. It
uses the existing vcpu API KVM_ARM_VCPU_INIT to supply this parameter
instead of creating a new API.

A new register is not created to pass this parameter via the SET/GET_ONE_REG
interface, as supplying just a flag (KVM_ARM_VCPU_PTRAUTH) is enough to
enable this feature.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---
 Documentation/arm64/pointer-authentication.txt |  9 +++++----
 Documentation/virtual/kvm/api.txt              |  4 ++++
 arch/arm64/include/asm/kvm_host.h              |  3 ++-
 arch/arm64/include/uapi/asm/kvm.h              |  1 +
 arch/arm64/kvm/handle_exit.c                   |  2 +-
 arch/arm64/kvm/hyp/ptrauth-sr.c                | 16 +++++++++++++++-
 arch/arm64/kvm/reset.c                         |  3 +++
 arch/arm64/kvm/sys_regs.c                      | 26 +++++++++++++-------------
 include/uapi/linux/kvm.h                       |  1 +
 9 files changed, 45 insertions(+), 20 deletions(-)

diff --git a/Documentation/arm64/pointer-authentication.txt b/Documentation/arm64/pointer-authentication.txt
index a25cd21..0529a7d 100644
--- a/Documentation/arm64/pointer-authentication.txt
+++ b/Documentation/arm64/pointer-authentication.txt
@@ -82,7 +82,8 @@ pointers).
 Virtualization
 --------------
 
-Pointer authentication is not currently supported in KVM guests. KVM
-will mask the feature bits from ID_AA64ISAR1_EL1, and attempted use of
-the feature will result in an UNDEFINED exception being injected into
-the guest.
+Pointer authentication is enabled in KVM guest when virtual machine is
+created by passing a flag (KVM_ARM_VCPU_PTRAUTH) requesting this feature
+to be enabled. Without this flag, pointer authentication is not enabled
+in KVM guests and attempted use of the feature will result in an UNDEFINED
+exception being injected into the guest.
diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index 356156f..1e646fb 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -2642,6 +2642,10 @@ Possible features:
 	  Depends on KVM_CAP_ARM_PSCI_0_2.
 	- KVM_ARM_VCPU_PMU_V3: Emulate PMUv3 for the CPU.
 	  Depends on KVM_CAP_ARM_PMU_V3.
+	- KVM_ARM_VCPU_PTRAUTH: Emulate Pointer authentication for the CPU.
+	  Depends on KVM_CAP_ARM_PTRAUTH and only on arm64 architecture. If
+	  set, then the KVM guest allows the execution of pointer authentication
+	  instructions. Otherwise, KVM treats these instructions as undefined.
 
 
 4.83 KVM_ARM_PREFERRED_TARGET
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 1bacf78..2768a53 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -43,7 +43,7 @@
 
 #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS
 
-#define KVM_VCPU_MAX_FEATURES 4
+#define KVM_VCPU_MAX_FEATURES 5
 
 #define KVM_REQ_SLEEP \
 	KVM_ARCH_REQ_FLAGS(0, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
@@ -451,6 +451,7 @@ static inline bool kvm_arch_requires_vhe(void)
 	return false;
 }
 
+bool kvm_arm_vcpu_ptrauth_allowed(const struct kvm_vcpu *vcpu);
 static inline bool kvm_supports_ptrauth(void)
 {
 	return has_vhe() && system_supports_address_auth() &&
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index 97c3478..5f82ca1 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -102,6 +102,7 @@ struct kvm_regs {
 #define KVM_ARM_VCPU_EL1_32BIT		1 /* CPU running a 32bit VM */
 #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
 #define KVM_ARM_VCPU_PMU_V3		3 /* Support guest PMUv3 */
+#define KVM_ARM_VCPU_PTRAUTH		4 /* VCPU uses address authentication */
 
 struct kvm_vcpu_init {
 	__u32 target;
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 7622ab3..d9f583b 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -179,7 +179,7 @@ static int handle_sve(struct kvm_vcpu *vcpu, struct kvm_run *run)
  */
 void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu)
 {
-	if (kvm_supports_ptrauth())
+	if (kvm_arm_vcpu_ptrauth_allowed(vcpu))
 		kvm_arm_vcpu_ptrauth_enable(vcpu);
 	else
 		kvm_inject_undefined(vcpu);
diff --git a/arch/arm64/kvm/hyp/ptrauth-sr.c b/arch/arm64/kvm/hyp/ptrauth-sr.c
index 528ee6e..6846a23 100644
--- a/arch/arm64/kvm/hyp/ptrauth-sr.c
+++ b/arch/arm64/kvm/hyp/ptrauth-sr.c
@@ -93,9 +93,23 @@ void kvm_arm_vcpu_ptrauth_reset(struct kvm_vcpu *vcpu)
 {
 	struct kvm_cpu_context *host_ctxt;
 
-	if (kvm_supports_ptrauth()) {
+	if (kvm_arm_vcpu_ptrauth_allowed(vcpu)) {
 		kvm_arm_vcpu_ptrauth_disable(vcpu);
 		host_ctxt = vcpu->arch.host_cpu_context;
 		__ptrauth_save_state(host_ctxt);
 	}
 }
+
+/**
+ * kvm_arm_vcpu_ptrauth_allowed - checks if ptrauth feature is allowed by user
+ *
+ * @vcpu: The VCPU pointer
+ *
+ * This function checks whether the userspace option to enable ptrauth in
+ * the guest kernel was set.
+ */
+bool kvm_arm_vcpu_ptrauth_allowed(const struct kvm_vcpu *vcpu)
+{
+	return kvm_supports_ptrauth() &&
+		test_bit(KVM_ARM_VCPU_PTRAUTH, vcpu->arch.features);
+}
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index b72a3dd..987e0c3c 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -91,6 +91,9 @@ int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_ARM_VM_IPA_SIZE:
 		r = kvm_ipa_limit;
 		break;
+	case KVM_CAP_ARM_PTRAUTH:
+		r = kvm_supports_ptrauth();
+		break;
 	default:
 		r = 0;
 	}
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 12529df..f7bcc60 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1055,7 +1055,7 @@ static bool access_cntp_cval(struct kvm_vcpu *vcpu,
 }
 
 /* Read a sanitised cpufeature ID register by sys_reg_desc */
-static u64 read_id_reg(struct sys_reg_desc const *r, bool raz)
+static u64 read_id_reg(struct kvm_vcpu *vcpu, struct sys_reg_desc const *r, bool raz)
 {
 	u32 id = sys_reg((u32)r->Op0, (u32)r->Op1,
 			 (u32)r->CRn, (u32)r->CRm, (u32)r->Op2);
@@ -1071,7 +1071,7 @@ static u64 read_id_reg(struct sys_reg_desc const *r, bool raz)
 					 (0xfUL << ID_AA64ISAR1_API_SHIFT) |
 					 (0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
 					 (0xfUL << ID_AA64ISAR1_GPI_SHIFT);
-		if (!kvm_supports_ptrauth()) {
+		if (!kvm_arm_vcpu_ptrauth_allowed(vcpu)) {
 			kvm_debug("ptrauth unsupported for guests, suppressing\n");
 			val &= ~ptrauth_mask;
 		}
@@ -1095,7 +1095,7 @@ static bool __access_id_reg(struct kvm_vcpu *vcpu,
 	if (p->is_write)
 		return write_to_read_only(vcpu, p, r);
 
-	p->regval = read_id_reg(r, raz);
+	p->regval = read_id_reg(vcpu, r, raz);
 	return true;
 }
 
@@ -1124,17 +1124,17 @@ static u64 sys_reg_to_index(const struct sys_reg_desc *reg);
  * are stored, and for set_id_reg() we don't allow the effective value
  * to be changed.
  */
-static int __get_id_reg(const struct sys_reg_desc *rd, void __user *uaddr,
-			bool raz)
+static int __get_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
+			void __user *uaddr, bool raz)
 {
 	const u64 id = sys_reg_to_index(rd);
-	const u64 val = read_id_reg(rd, raz);
+	const u64 val = read_id_reg(vcpu, rd, raz);
 
 	return reg_to_user(uaddr, &val, id);
 }
 
-static int __set_id_reg(const struct sys_reg_desc *rd, void __user *uaddr,
-			bool raz)
+static int __set_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
+			void __user *uaddr, bool raz)
 {
 	const u64 id = sys_reg_to_index(rd);
 	int err;
@@ -1145,7 +1145,7 @@ static int __set_id_reg(const struct sys_reg_desc *rd, void __user *uaddr,
 		return err;
 
 	/* This is what we mean by invariant: you can't change it. */
-	if (val != read_id_reg(rd, raz))
+	if (val != read_id_reg(vcpu, rd, raz))
 		return -EINVAL;
 
 	return 0;
@@ -1154,25 +1154,25 @@ static int __set_id_reg(const struct sys_reg_desc *rd, void __user *uaddr,
 static int get_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 		      const struct kvm_one_reg *reg, void __user *uaddr)
 {
-	return __get_id_reg(rd, uaddr, false);
+	return __get_id_reg(vcpu, rd, uaddr, false);
 }
 
 static int set_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 		      const struct kvm_one_reg *reg, void __user *uaddr)
 {
-	return __set_id_reg(rd, uaddr, false);
+	return __set_id_reg(vcpu, rd, uaddr, false);
 }
 
 static int get_raz_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 			  const struct kvm_one_reg *reg, void __user *uaddr)
 {
-	return __get_id_reg(rd, uaddr, true);
+	return __get_id_reg(vcpu, rd, uaddr, true);
 }
 
 static int set_raz_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 			  const struct kvm_one_reg *reg, void __user *uaddr)
 {
-	return __set_id_reg(rd, uaddr, true);
+	return __set_id_reg(vcpu, rd, uaddr, true);
 }
 
 /* sys_reg_desc initialiser for known cpufeature ID registers */
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 6d4ea4b..a553477 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -988,6 +988,7 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_ARM_VM_IPA_SIZE 165
 #define KVM_CAP_MANUAL_DIRTY_LOG_PROTECT 166
 #define KVM_CAP_HYPERV_CPUID 167
+#define KVM_CAP_ARM_PTRAUTH 168
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
-- 
2.7.4



* [PATCH v6 5/6] arm64/kvm: control accessibility of ptrauth key registers
From: Amit Daniel Kachhap @ 2019-02-19  9:24 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	Kristina Martsenko, linux-kernel, Amit Daniel Kachhap,
	Mark Rutland, James Morse, Julien Thierry

The ptrauth key registers are conditionally present in the guest system
register list, based on the userspace-specified flag KVM_ARM_VCPU_PTRAUTH.

The reset routines still set these registers to default values, but they are
left that way as they are only conditionally accessible (set/get).

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---
This patch needs patch [1] by Dave Martin, which adds the infrastructure to manage sys-reg accessibility in a scalable way.

[1]: https://lore.kernel.org/linux-arm-kernel/1547757219-19439-13-git-send-email-Dave.Martin@arm.com/ 

 Documentation/arm64/pointer-authentication.txt | 4 ++++
 arch/arm64/kvm/sys_regs.c                      | 7 ++++++-
 2 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/Documentation/arm64/pointer-authentication.txt b/Documentation/arm64/pointer-authentication.txt
index 0529a7d..996e435 100644
--- a/Documentation/arm64/pointer-authentication.txt
+++ b/Documentation/arm64/pointer-authentication.txt
@@ -87,3 +87,7 @@ created by passing a flag (KVM_ARM_VCPU_PTRAUTH) requesting this feature
 to be enabled. Without this flag, pointer authentication is not enabled
 in KVM guests and attempted use of the feature will result in an UNDEFINED
 exception being injected into the guest.
+
+Additionally, when KVM_ARM_VCPU_PTRAUTH is not set, KVM will filter out
+the Pointer Authentication system key registers from the KVM_GET/SET_REG_*
+ioctls.
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index f7bcc60..c2f4974 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1005,8 +1005,13 @@ static bool trap_ptrauth(struct kvm_vcpu *vcpu,
 	return false;
 }
 
+static bool check_ptrauth(const struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd)
+{
+	return kvm_arm_vcpu_ptrauth_allowed(vcpu);
+}
+
 #define __PTRAUTH_KEY(k)						\
-	{ SYS_DESC(SYS_## k), trap_ptrauth, reset_unknown, k }
+	{ SYS_DESC(SYS_## k), trap_ptrauth, reset_unknown, k, .check_present = check_ptrauth }
 
 #define PTRAUTH_KEY(k)							\
 	__PTRAUTH_KEY(k ## KEYLO_EL1),					\
-- 
2.7.4



* [kvmtool PATCH v6 6/6] arm/kvm: arm64: Add a vcpu feature for pointer authentication
  2019-02-19  9:24 [PATCH v6 0/6] Add ARMv8.3 pointer authentication for kvm guest Amit Daniel Kachhap
                   ` (4 preceding siblings ...)
  2019-02-19  9:24 ` [PATCH v6 5/6] arm64/kvm: control accessibility of ptrauth key registers Amit Daniel Kachhap
@ 2019-02-19  9:24 ` Amit Daniel Kachhap
  2019-02-21 15:54   ` Dave Martin
  2019-02-26 18:03 ` [PATCH v6 0/6] Add ARMv8.3 pointer authentication for kvm guest James Morse
  6 siblings, 1 reply; 41+ messages in thread
From: Amit Daniel Kachhap @ 2019-02-19  9:24 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	Kristina Martsenko, linux-kernel, Amit Daniel Kachhap,
	Mark Rutland, James Morse, Julien Thierry

This adds a runtime capability to the KVM tool to enable ARMv8.3 Pointer
Authentication in the guest kernel. The command line option --ptrauth is
required to turn it on.
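
A typical invocation (kernel image name hypothetical) would be:

  $ lkvm run -k Image --ptrauth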

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
 arm/aarch32/include/kvm/kvm-cpu-arch.h    | 1 +
 arm/aarch64/include/asm/kvm.h             | 1 +
 arm/aarch64/include/kvm/kvm-config-arch.h | 4 +++-
 arm/aarch64/include/kvm/kvm-cpu-arch.h    | 1 +
 arm/include/arm-common/kvm-config-arch.h  | 1 +
 arm/kvm-cpu.c                             | 6 ++++++
 include/linux/kvm.h                       | 1 +
 7 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/arm/aarch32/include/kvm/kvm-cpu-arch.h b/arm/aarch32/include/kvm/kvm-cpu-arch.h
index d28ea67..520ea76 100644
--- a/arm/aarch32/include/kvm/kvm-cpu-arch.h
+++ b/arm/aarch32/include/kvm/kvm-cpu-arch.h
@@ -13,4 +13,5 @@
 #define ARM_CPU_ID		0, 0, 0
 #define ARM_CPU_ID_MPIDR	5
 
+#define ARM_VCPU_PTRAUTH_FEATURE	0
 #endif /* KVM__KVM_CPU_ARCH_H */
diff --git a/arm/aarch64/include/asm/kvm.h b/arm/aarch64/include/asm/kvm.h
index 97c3478..1068fd1 100644
--- a/arm/aarch64/include/asm/kvm.h
+++ b/arm/aarch64/include/asm/kvm.h
@@ -102,6 +102,7 @@ struct kvm_regs {
 #define KVM_ARM_VCPU_EL1_32BIT		1 /* CPU running a 32bit VM */
 #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
 #define KVM_ARM_VCPU_PMU_V3		3 /* Support guest PMUv3 */
+#define KVM_ARM_VCPU_PTRAUTH		4 /* CPU uses pointer authentication */
 
 struct kvm_vcpu_init {
 	__u32 target;
diff --git a/arm/aarch64/include/kvm/kvm-config-arch.h b/arm/aarch64/include/kvm/kvm-config-arch.h
index 04be43d..2074684 100644
--- a/arm/aarch64/include/kvm/kvm-config-arch.h
+++ b/arm/aarch64/include/kvm/kvm-config-arch.h
@@ -8,7 +8,9 @@
 			"Create PMUv3 device"),				\
 	OPT_U64('\0', "kaslr-seed", &(cfg)->kaslr_seed,			\
 			"Specify random seed for Kernel Address Space "	\
-			"Layout Randomization (KASLR)"),
+			"Layout Randomization (KASLR)"),		\
+	OPT_BOOLEAN('\0', "ptrauth", &(cfg)->has_ptrauth,		\
+			"Enable address authentication"),
 
 #include "arm-common/kvm-config-arch.h"
 
diff --git a/arm/aarch64/include/kvm/kvm-cpu-arch.h b/arm/aarch64/include/kvm/kvm-cpu-arch.h
index a9d8563..496ece8 100644
--- a/arm/aarch64/include/kvm/kvm-cpu-arch.h
+++ b/arm/aarch64/include/kvm/kvm-cpu-arch.h
@@ -17,4 +17,5 @@
 #define ARM_CPU_CTRL		3, 0, 1, 0
 #define ARM_CPU_CTRL_SCTLR_EL1	0
 
+#define ARM_VCPU_PTRAUTH_FEATURE	(1UL << KVM_ARM_VCPU_PTRAUTH)
 #endif /* KVM__KVM_CPU_ARCH_H */
diff --git a/arm/include/arm-common/kvm-config-arch.h b/arm/include/arm-common/kvm-config-arch.h
index 5734c46..5badcbd 100644
--- a/arm/include/arm-common/kvm-config-arch.h
+++ b/arm/include/arm-common/kvm-config-arch.h
@@ -10,6 +10,7 @@ struct kvm_config_arch {
 	bool		aarch32_guest;
 	bool		has_pmuv3;
 	u64		kaslr_seed;
+	bool		has_ptrauth;
 	enum irqchip_type irqchip;
 	u64		fw_addr;
 };
diff --git a/arm/kvm-cpu.c b/arm/kvm-cpu.c
index 7780251..4ac80f8 100644
--- a/arm/kvm-cpu.c
+++ b/arm/kvm-cpu.c
@@ -68,6 +68,12 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned long cpu_id)
 		vcpu_init.features[0] |= (1UL << KVM_ARM_VCPU_PSCI_0_2);
 	}
 
+	/* Set KVM_ARM_VCPU_PTRAUTH if available */
+	if (kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH)) {
+		if (kvm->cfg.arch.has_ptrauth)
+			vcpu_init.features[0] |= ARM_VCPU_PTRAUTH_FEATURE;
+	}
+
 	/*
 	 * If the preferred target ioctl is successful then
 	 * use preferred target else try each and every target type
diff --git a/include/linux/kvm.h b/include/linux/kvm.h
index 6d4ea4b..a553477 100644
--- a/include/linux/kvm.h
+++ b/include/linux/kvm.h
@@ -988,6 +988,7 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_ARM_VM_IPA_SIZE 165
 #define KVM_CAP_MANUAL_DIRTY_LOG_PROTECT 166
 #define KVM_CAP_HYPERV_CPUID 167
+#define KVM_CAP_ARM_PTRAUTH 168
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
-- 
2.7.4



* Re: [PATCH v6 1/6] arm64/kvm: preserve host HCR_EL2 value
  2019-02-19  9:24 ` [PATCH v6 1/6] arm64/kvm: preserve host HCR_EL2 value Amit Daniel Kachhap
@ 2019-02-21 11:50   ` Mark Rutland
  2019-02-25 18:09     ` Marc Zyngier
  2019-02-28  6:43     ` Amit Daniel Kachhap
  2019-02-21 15:49   ` Dave Martin
  2019-02-25 17:39   ` James Morse
  2 siblings, 2 replies; 41+ messages in thread
From: Mark Rutland @ 2019-02-21 11:50 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: linux-arm-kernel, Christoffer Dall, Marc Zyngier,
	Catalin Marinas, Will Deacon, Andrew Jones, Dave Martin,
	Ramana Radhakrishnan, kvmarm, Kristina Martsenko, linux-kernel,
	James Morse, Julien Thierry

Hi,

On Tue, Feb 19, 2019 at 02:54:26PM +0530, Amit Daniel Kachhap wrote:
> From: Mark Rutland <mark.rutland@arm.com>
> 
> When restoring HCR_EL2 for the host, KVM uses HCR_HOST_VHE_FLAGS, which
> is a constant value. This works today, as the host HCR_EL2 value is
> always the same, but this will get in the way of supporting extensions
> that require HCR_EL2 bits to be set conditionally for the host.
> 
> To allow such features to work without KVM having to explicitly handle
> every possible host feature combination, this patch has KVM save/restore
> for the host HCR when switching to/from a guest HCR. The saving of the
> register is done once during cpu hypervisor initialization state and is
> just restored after switch from guest.
> 
> For fetching HCR_EL2 during kvm initialisation, a hyp call is made using
> kvm_call_hyp and is helpful in NHVE case.
> 
> For the hyp TLB maintenance code, __tlb_switch_to_host_vhe() is updated
> to toggle the TGE bit with a RMW sequence, as we already do in
> __tlb_switch_to_guest_vhe().
> 
> The value of hcr_el2 is now stored in struct kvm_cpu_context as both host
> and guest can now use this field in a common way.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> [Added __cpu_copy_hyp_conf, hcr_el2 field in struct kvm_cpu_context]
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Christoffer Dall <christoffer.dall@arm.com>
> Cc: kvmarm@lists.cs.columbia.edu

[...]

> +/**
> + * __cpu_copy_hyp_conf - copy the boot hyp configuration registers
> + *
> + * It is called once per-cpu during CPU hyp initialisation.
> + */
> +static inline void __cpu_copy_hyp_conf(void)

I think this would be better named as something like:

  cpu_init_host_ctxt()

... as that makes it a bit clearer as to what is being initialized.

[...]

> +/**
> + * __kvm_populate_host_regs - Stores host register values
> + *
> + * This function acts as a function handler parameter for kvm_call_hyp and
> + * may be called from EL1 exception level to fetch the register value.
> + */
> +void __hyp_text __kvm_populate_host_regs(void)
> +{
> +	struct kvm_cpu_context *host_ctxt;
> +
> +	if (has_vhe())
> +		host_ctxt = this_cpu_ptr(&kvm_host_cpu_state);
> +	else
> +		host_ctxt = __hyp_this_cpu_ptr(kvm_host_cpu_state);

Do we need the has_vhe() check here?

Can't we always do:

	host_ctxt = __hyp_this_cpu_ptr(kvm_host_cpu_state);

... regardless of VHE? Or is that broken for VHE somehow?

Thanks,
Mark.


* Re: [PATCH v6 2/6] arm64/kvm: preserve host MDCR_EL2 value
  2019-02-19  9:24 ` [PATCH v6 2/6] arm64/kvm: preserve host MDCR_EL2 value Amit Daniel Kachhap
@ 2019-02-21 11:57   ` Mark Rutland
  2019-02-21 15:51   ` Dave Martin
  1 sibling, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2019-02-21 11:57 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: linux-arm-kernel, Christoffer Dall, Marc Zyngier,
	Catalin Marinas, Will Deacon, Andrew Jones, Dave Martin,
	Ramana Radhakrishnan, kvmarm, Kristina Martsenko, linux-kernel,
	James Morse, Julien Thierry

On Tue, Feb 19, 2019 at 02:54:27PM +0530, Amit Daniel Kachhap wrote:
> Save host MDCR_EL2 value during kvm HYP initialisation and restore
> after every switch from host to guest. There should not be any
> change in functionality due to this.
> 
> The value of mdcr_el2 is now stored in struct kvm_cpu_context as
> both host and guest can now use this field in a common way.
> 
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Christoffer Dall <christoffer.dall@arm.com>
> Cc: kvmarm@lists.cs.columbia.edu
> ---
>  arch/arm/include/asm/kvm_host.h   |  1 -
>  arch/arm64/include/asm/kvm_host.h |  6 ++----
>  arch/arm64/kvm/debug.c            | 28 ++++++----------------------
>  arch/arm64/kvm/hyp/switch.c       | 17 ++++-------------
>  arch/arm64/kvm/hyp/sysreg-sr.c    |  6 ++++++
>  virt/kvm/arm/arm.c                |  1 -
>  6 files changed, 18 insertions(+), 41 deletions(-)

This looks like a nice cleanup! FWIW:

Acked-by: Mark Rutland <mark.rutland@arm.com>

Thanks,
Mark.

> 
> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
> index 05706b4..704667e 100644
> --- a/arch/arm/include/asm/kvm_host.h
> +++ b/arch/arm/include/asm/kvm_host.h
> @@ -294,7 +294,6 @@ static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {}
>  static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
>  static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
>  
> -static inline void kvm_arm_init_debug(void) {}
>  static inline void kvm_arm_setup_debug(struct kvm_vcpu *vcpu) {}
>  static inline void kvm_arm_clear_debug(struct kvm_vcpu *vcpu) {}
>  static inline void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu) {}
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 1b2e05b..2f1bb86 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -205,6 +205,8 @@ struct kvm_cpu_context {
>  
>  	/* HYP host/guest configuration */
>  	u64 hcr_el2;
> +	u32 mdcr_el2;
> +
>  	struct kvm_vcpu *__hyp_running_vcpu;
>  };
>  
> @@ -213,9 +215,6 @@ typedef struct kvm_cpu_context kvm_cpu_context_t;
>  struct kvm_vcpu_arch {
>  	struct kvm_cpu_context ctxt;
>  
> -	/* HYP configuration */
> -	u32 mdcr_el2;
> -
>  	/* Exception Information */
>  	struct kvm_vcpu_fault_info fault;
>  
> @@ -446,7 +445,6 @@ static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {}
>  static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
>  static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
>  
> -void kvm_arm_init_debug(void);
>  void kvm_arm_setup_debug(struct kvm_vcpu *vcpu);
>  void kvm_arm_clear_debug(struct kvm_vcpu *vcpu);
>  void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu);
> diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
> index f39801e..99dc0a4 100644
> --- a/arch/arm64/kvm/debug.c
> +++ b/arch/arm64/kvm/debug.c
> @@ -32,8 +32,6 @@
>  				DBG_MDSCR_KDE | \
>  				DBG_MDSCR_MDE)
>  
> -static DEFINE_PER_CPU(u32, mdcr_el2);
> -
>  /**
>   * save/restore_guest_debug_regs
>   *
> @@ -65,21 +63,6 @@ static void restore_guest_debug_regs(struct kvm_vcpu *vcpu)
>  }
>  
>  /**
> - * kvm_arm_init_debug - grab what we need for debug
> - *
> - * Currently the sole task of this function is to retrieve the initial
> - * value of mdcr_el2 so we can preserve MDCR_EL2.HPMN which has
> - * presumably been set-up by some knowledgeable bootcode.
> - *
> - * It is called once per-cpu during CPU hyp initialisation.
> - */
> -
> -void kvm_arm_init_debug(void)
> -{
> -	__this_cpu_write(mdcr_el2, kvm_call_hyp(__kvm_get_mdcr_el2));
> -}
> -
> -/**
>   * kvm_arm_reset_debug_ptr - reset the debug ptr to point to the vcpu state
>   */
>  
> @@ -111,6 +94,7 @@ void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu)
>  
>  void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
>  {
> +	kvm_cpu_context_t *host_cxt = this_cpu_ptr(&kvm_host_cpu_state);
>  	bool trap_debug = !(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY);
>  	unsigned long mdscr;
>  
> @@ -120,8 +104,8 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
>  	 * This also clears MDCR_EL2_E2PB_MASK to disable guest access
>  	 * to the profiling buffer.
>  	 */
> -	vcpu->arch.mdcr_el2 = __this_cpu_read(mdcr_el2) & MDCR_EL2_HPMN_MASK;
> -	vcpu->arch.mdcr_el2 |= (MDCR_EL2_TPM |
> +	vcpu->arch.ctxt.mdcr_el2 = host_cxt->mdcr_el2 & MDCR_EL2_HPMN_MASK;
> +	vcpu->arch.ctxt.mdcr_el2 |= (MDCR_EL2_TPM |
>  				MDCR_EL2_TPMS |
>  				MDCR_EL2_TPMCR |
>  				MDCR_EL2_TDRA |
> @@ -130,7 +114,7 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
>  	/* Is Guest debugging in effect? */
>  	if (vcpu->guest_debug) {
>  		/* Route all software debug exceptions to EL2 */
> -		vcpu->arch.mdcr_el2 |= MDCR_EL2_TDE;
> +		vcpu->arch.ctxt.mdcr_el2 |= MDCR_EL2_TDE;
>  
>  		/* Save guest debug state */
>  		save_guest_debug_regs(vcpu);
> @@ -202,13 +186,13 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
>  
>  	/* Trap debug register access */
>  	if (trap_debug)
> -		vcpu->arch.mdcr_el2 |= MDCR_EL2_TDA;
> +		vcpu->arch.ctxt.mdcr_el2 |= MDCR_EL2_TDA;
>  
>  	/* If KDE or MDE are set, perform a full save/restore cycle. */
>  	if (vcpu_read_sys_reg(vcpu, MDSCR_EL1) & (DBG_MDSCR_KDE | DBG_MDSCR_MDE))
>  		vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY;
>  
> -	trace_kvm_arm_set_dreg32("MDCR_EL2", vcpu->arch.mdcr_el2);
> +	trace_kvm_arm_set_dreg32("MDCR_EL2", vcpu->arch.ctxt.mdcr_el2);
>  	trace_kvm_arm_set_dreg32("MDSCR_EL1", vcpu_read_sys_reg(vcpu, MDSCR_EL1));
>  }
>  
> diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
> index 006bd33..03b36f1 100644
> --- a/arch/arm64/kvm/hyp/switch.c
> +++ b/arch/arm64/kvm/hyp/switch.c
> @@ -82,7 +82,7 @@ static void __hyp_text __activate_traps_common(struct kvm_vcpu *vcpu)
>  	 */
>  	write_sysreg(0, pmselr_el0);
>  	write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0);
> -	write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
> +	write_sysreg(vcpu->arch.ctxt.mdcr_el2, mdcr_el2);
>  }
>  
>  static void __hyp_text __deactivate_traps_common(void)
> @@ -157,14 +157,9 @@ static void deactivate_traps_vhe(struct kvm_cpu_context *host_ctxt)
>  
>  static void __hyp_text __deactivate_traps_nvhe(struct kvm_cpu_context *host_ctxt)
>  {
> -	u64 mdcr_el2 = read_sysreg(mdcr_el2);
> -
>  	__deactivate_traps_common();
>  
> -	mdcr_el2 &= MDCR_EL2_HPMN_MASK;
> -	mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT;
> -
> -	write_sysreg(mdcr_el2, mdcr_el2);
> +	write_sysreg(host_ctxt->mdcr_el2, mdcr_el2);
>  	write_sysreg(host_ctxt->hcr_el2, hcr_el2);
>  	write_sysreg(CPTR_EL2_DEFAULT, cptr_el2);
>  }
> @@ -196,13 +191,9 @@ void activate_traps_vhe_load(struct kvm_vcpu *vcpu)
>  
>  void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu)
>  {
> -	u64 mdcr_el2 = read_sysreg(mdcr_el2);
> -
> -	mdcr_el2 &= MDCR_EL2_HPMN_MASK |
> -		    MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT |
> -		    MDCR_EL2_TPMS;
> +	struct kvm_cpu_context *host_ctxt = vcpu->arch.host_cpu_context;
>  
> -	write_sysreg(mdcr_el2, mdcr_el2);
> +	write_sysreg(host_ctxt->mdcr_el2, mdcr_el2);
>  
>  	__deactivate_traps_common();
>  }
> diff --git a/arch/arm64/kvm/hyp/sysreg-sr.c b/arch/arm64/kvm/hyp/sysreg-sr.c
> index 68ddc0f..42ec50f 100644
> --- a/arch/arm64/kvm/hyp/sysreg-sr.c
> +++ b/arch/arm64/kvm/hyp/sysreg-sr.c
> @@ -334,4 +334,10 @@ void __hyp_text __kvm_populate_host_regs(void)
>  		host_ctxt = __hyp_this_cpu_ptr(kvm_host_cpu_state);
>  
>  	host_ctxt->hcr_el2 = read_sysreg(hcr_el2);
> +	/*
> +	 * Retrieve the initial value of mdcr_el2 so we can preserve
> +	 * MDCR_EL2.HPMN which has presumably been set-up by some
> +	 * knowledgeable bootcode.
> +	 */
> +	host_ctxt->mdcr_el2 = read_sysreg(mdcr_el2);
>  }
> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> index 8e18f7f..2032a66 100644
> --- a/virt/kvm/arm/arm.c
> +++ b/virt/kvm/arm/arm.c
> @@ -1327,7 +1327,6 @@ static void cpu_hyp_reinit(void)
>  	else
>  		cpu_init_hyp_mode(NULL);
>  
> -	kvm_arm_init_debug();
>  	__cpu_copy_hyp_conf();
>  
>  	if (vgic_present)
> -- 
> 2.7.4
> 


* Re: [PATCH v6 3/6] arm64/kvm: context-switch ptrauth registers
  2019-02-19  9:24 ` [PATCH v6 3/6] arm64/kvm: context-switch ptrauth registers Amit Daniel Kachhap
@ 2019-02-21 12:29   ` Mark Rutland
  2019-02-21 15:51     ` Dave Martin
  2019-02-28  9:07     ` Amit Daniel Kachhap
  2019-02-21 15:53   ` Dave Martin
  2019-02-26 18:31   ` James Morse
  2 siblings, 2 replies; 41+ messages in thread
From: Mark Rutland @ 2019-02-21 12:29 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: linux-arm-kernel, Christoffer Dall, Marc Zyngier,
	Catalin Marinas, Will Deacon, Andrew Jones, Dave Martin,
	Ramana Radhakrishnan, kvmarm, Kristina Martsenko, linux-kernel,
	James Morse, Julien Thierry

On Tue, Feb 19, 2019 at 02:54:28PM +0530, Amit Daniel Kachhap wrote:
> From: Mark Rutland <mark.rutland@arm.com>
> 
> When pointer authentication is supported, a guest may wish to use it.
> This patch adds the necessary KVM infrastructure for this to work, with
> a semi-lazy context switch of the pointer auth state.
> 
> Pointer authentication feature is only enabled when VHE is built
> in the kernel and present into CPU implementation so only VHE code
> paths are modified.

Nit: s/into/in the/

> 
> When we schedule a vcpu, we disable guest usage of pointer
> authentication instructions and accesses to the keys. While these are
> disabled, we avoid context-switching the keys. When we trap the guest
> trying to use pointer authentication functionality, we change to eagerly
> context-switching the keys, and enable the feature. The next time the
> vcpu is scheduled out/in, we start again. However the host key registers
> are saved in vcpu load stage as they remain constant for each vcpu
> schedule.
> 
> Pointer authentication consists of address authentication and generic
> authentication, and CPUs in a system might have varied support for
> either. Where support for either feature is not uniform, it is hidden
> from guests via ID register emulation, as a result of the cpufeature
> framework in the host.
> 
> Unfortunately, address authentication and generic authentication cannot
> be trapped separately, as the architecture provides a single EL2 trap
> covering both. If we wish to expose one without the other, we cannot
> prevent a (badly-written) guest from intermittently using a feature
> which is not uniformly supported (when scheduled on a physical CPU which
> supports the relevant feature). Hence, this patch expects both type of
> authentication to be present in a cpu.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> [Only VHE, key switch from assembly, kvm_supports_ptrauth
> checks, save host key in vcpu_load]

Hmm, why do we need to do the key switch in assembly, given it's not
used in-kernel right now?

Is that in preparation for in-kernel pointer auth usage? If so, please
call that out in the commit message.

[...]

> diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
> index 4e2fb87..5cac605 100644
> --- a/arch/arm64/kernel/traps.c
> +++ b/arch/arm64/kernel/traps.c
> @@ -749,6 +749,7 @@ static const char *esr_class_str[] = {
>  	[ESR_ELx_EC_CP14_LS]		= "CP14 LDC/STC",
>  	[ESR_ELx_EC_FP_ASIMD]		= "ASIMD",
>  	[ESR_ELx_EC_CP10_ID]		= "CP10 MRC/VMRS",
> +	[ESR_ELx_EC_PAC]		= "Pointer authentication trap",

For consistency with the other strings, can we please make this "PAC"?

[...]

> diff --git a/arch/arm64/kvm/hyp/Makefile b/arch/arm64/kvm/hyp/Makefile
> index 82d1904..17cec99 100644
> --- a/arch/arm64/kvm/hyp/Makefile
> +++ b/arch/arm64/kvm/hyp/Makefile
> @@ -19,6 +19,7 @@ obj-$(CONFIG_KVM_ARM_HOST) += switch.o
>  obj-$(CONFIG_KVM_ARM_HOST) += fpsimd.o
>  obj-$(CONFIG_KVM_ARM_HOST) += tlb.o
>  obj-$(CONFIG_KVM_ARM_HOST) += hyp-entry.o
> +obj-$(CONFIG_KVM_ARM_HOST) += ptrauth-sr.o

Huh, so we're actually doing the switch in C code...

>  # KVM code is run at a different exception code with a different map, so
>  # compiler instrumentation that inserts callbacks or checks into the code may
> diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
> index 675fdc1..b78cc15 100644
> --- a/arch/arm64/kvm/hyp/entry.S
> +++ b/arch/arm64/kvm/hyp/entry.S
> @@ -64,6 +64,12 @@ ENTRY(__guest_enter)
>  
>  	add	x18, x0, #VCPU_CONTEXT
>  
> +#ifdef	CONFIG_ARM64_PTR_AUTH
> +	// Prepare parameter for __ptrauth_switch_to_guest(vcpu, host, guest).
> +	mov	x2, x18
> +	bl	__ptrauth_switch_to_guest
> +#endif

... and conditionally *calling* that switch code from assembly ...

> +
>  	// Restore guest regs x0-x17
>  	ldp	x0, x1,   [x18, #CPU_XREG_OFFSET(0)]
>  	ldp	x2, x3,   [x18, #CPU_XREG_OFFSET(2)]
> @@ -118,6 +124,17 @@ ENTRY(__guest_exit)
>  
>  	get_host_ctxt	x2, x3
>  
> +#ifdef	CONFIG_ARM64_PTR_AUTH
> +	// Prepare parameter for __ptrauth_switch_to_host(vcpu, guest, host).
> +	// Save x0, x2 which are used later in callee saved registers.
> +	mov	x19, x0
> +	mov	x20, x2
> +	sub	x0, x1, #VCPU_CONTEXT
> +	ldr	x29, [x2, #CPU_XREG_OFFSET(29)]
> +	bl	__ptrauth_switch_to_host
> +	mov	x0, x19
> +	mov	x2, x20
> +#endif

... which adds a load of boilerplate for no immediate gain.

Do we really need to do this in assembly today?
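
If not, could we keep __guest_enter/__guest_exit unchanged and call the
helpers from the run loop in switch.c instead? e.g. (entirely untested,
reusing the helper names from this patch):

	/* In kvm_vcpu_run_vhe(), around guest entry/exit: */
	__ptrauth_switch_to_guest(vcpu, host_ctxt, guest_ctxt);
	exit_code = __guest_enter(vcpu, host_ctxt);
	__ptrauth_switch_to_host(vcpu, guest_ctxt, host_ctxt);

... which would avoid the register juggling entirely.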

Thanks,
Mark.


* Re: [PATCH v6 4/6] arm64/kvm: add a userspace option to enable pointer authentication
  2019-02-19  9:24 ` [PATCH v6 4/6] arm64/kvm: add a userspace option to enable pointer authentication Amit Daniel Kachhap
@ 2019-02-21 12:34   ` Mark Rutland
  2019-02-28  9:25     ` Amit Daniel Kachhap
  2019-02-21 15:53   ` Dave Martin
  2019-02-26 18:33   ` James Morse
  2 siblings, 1 reply; 41+ messages in thread
From: Mark Rutland @ 2019-02-21 12:34 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: linux-arm-kernel, Christoffer Dall, Marc Zyngier,
	Catalin Marinas, Will Deacon, Andrew Jones, Dave Martin,
	Ramana Radhakrishnan, kvmarm, Kristina Martsenko, linux-kernel,
	James Morse, Julien Thierry

On Tue, Feb 19, 2019 at 02:54:29PM +0530, Amit Daniel Kachhap wrote:
> This feature will allow the KVM guest to allow the handling of
> pointer authentication instructions or to treat them as undefined
> if not set. It uses the existing vcpu API KVM_ARM_VCPU_INIT to
> supply this parameter instead of creating a new API.
> 
> A new register is not created to pass this parameter via
> SET/GET_ONE_REG interface as just a flag (KVM_ARM_VCPU_PTRAUTH)
> supplied is enough to enable this feature.
> 
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Christoffer Dall <christoffer.dall@arm.com>
> Cc: kvmarm@lists.cs.columbia.edu
> ---
>  Documentation/arm64/pointer-authentication.txt |  9 +++++----
>  Documentation/virtual/kvm/api.txt              |  4 ++++
>  arch/arm64/include/asm/kvm_host.h              |  3 ++-
>  arch/arm64/include/uapi/asm/kvm.h              |  1 +
>  arch/arm64/kvm/handle_exit.c                   |  2 +-
>  arch/arm64/kvm/hyp/ptrauth-sr.c                | 16 +++++++++++++++-
>  arch/arm64/kvm/reset.c                         |  3 +++
>  arch/arm64/kvm/sys_regs.c                      | 26 +++++++++++++-------------
>  include/uapi/linux/kvm.h                       |  1 +
>  9 files changed, 45 insertions(+), 20 deletions(-)
> 
> diff --git a/Documentation/arm64/pointer-authentication.txt b/Documentation/arm64/pointer-authentication.txt
> index a25cd21..0529a7d 100644
> --- a/Documentation/arm64/pointer-authentication.txt
> +++ b/Documentation/arm64/pointer-authentication.txt
> @@ -82,7 +82,8 @@ pointers).
>  Virtualization
>  --------------
>  
> -Pointer authentication is not currently supported in KVM guests. KVM
> -will mask the feature bits from ID_AA64ISAR1_EL1, and attempted use of
> -the feature will result in an UNDEFINED exception being injected into
> -the guest.
> +Pointer authentication is enabled in KVM guest when virtual machine is
> +created by passing a flag (KVM_ARM_VCPU_PTRAUTH) requesting this feature
> +to be enabled. Without this flag, pointer authentication is not enabled
> +in KVM guests and attempted use of the feature will result in an UNDEFINED
> +exception being injected into the guest.
> diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
> index 356156f..1e646fb 100644
> --- a/Documentation/virtual/kvm/api.txt
> +++ b/Documentation/virtual/kvm/api.txt
> @@ -2642,6 +2642,10 @@ Possible features:
>  	  Depends on KVM_CAP_ARM_PSCI_0_2.
>  	- KVM_ARM_VCPU_PMU_V3: Emulate PMUv3 for the CPU.
>  	  Depends on KVM_CAP_ARM_PMU_V3.
> +	- KVM_ARM_VCPU_PTRAUTH: Emulate Pointer authentication for the CPU.
> +	  Depends on KVM_CAP_ARM_PTRAUTH and only on arm64 architecture. If
> +	  set, then the KVM guest allows the execution of pointer authentication
> +	  instructions. Otherwise, KVM treats these instructions as undefined.

I think that we should have separate flags for address auth and generic
auth, to match the ID register split.

For now, we can have KVM only support the case where both are set, but
it gives us freedom to support either in isolation if we have to in
future, without an ABI break.
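
i.e. something like:

	#define KVM_ARM_VCPU_PTRAUTH_ADDRESS	4 /* VCPU uses address authentication */
	#define KVM_ARM_VCPU_PTRAUTH_GENERIC	5 /* VCPU uses generic authentication */

... with KVM_ARM_VCPU_INIT rejecting (for now) the combinations we don't
support, i.e. only one of the two set.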

Thanks,
Mark.


* Re: [PATCH v6 1/6] arm64/kvm: preserve host HCR_EL2 value
  2019-02-19  9:24 ` [PATCH v6 1/6] arm64/kvm: preserve host HCR_EL2 value Amit Daniel Kachhap
  2019-02-21 11:50   ` Mark Rutland
@ 2019-02-21 15:49   ` Dave Martin
  2019-03-01  5:56     ` Amit Daniel Kachhap
  2019-02-25 17:39   ` James Morse
  2 siblings, 1 reply; 41+ messages in thread
From: Dave Martin @ 2019-02-21 15:49 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: linux-arm-kernel, Marc Zyngier, Catalin Marinas, Will Deacon,
	Kristina Martsenko, kvmarm, Ramana Radhakrishnan, linux-kernel

On Tue, Feb 19, 2019 at 02:54:26PM +0530, Amit Daniel Kachhap wrote:
> From: Mark Rutland <mark.rutland@arm.com>
> 
> When restoring HCR_EL2 for the host, KVM uses HCR_HOST_VHE_FLAGS, which
> is a constant value. This works today, as the host HCR_EL2 value is
> always the same, but this will get in the way of supporting extensions
> that require HCR_EL2 bits to be set conditionally for the host.
> 
> To allow such features to work without KVM having to explicitly handle
> every possible host feature combination, this patch has KVM save/restore
> for the host HCR when switching to/from a guest HCR. The saving of the
> register is done once during cpu hypervisor initialization state and is
> just restored after switch from guest.
> 
> For fetching HCR_EL2 during kvm initialisation, a hyp call is made using
> kvm_call_hyp and is helpful in NHVE case.

Minor nit: NVHE misspelled.  This looks a bit like it's naming an arch
feature rather than a kernel implementation detail though.  Maybe write
"non-VHE".

> For the hyp TLB maintenance code, __tlb_switch_to_host_vhe() is updated
> to toggle the TGE bit with a RMW sequence, as we already do in
> __tlb_switch_to_guest_vhe().
> 
> The value of hcr_el2 is now stored in struct kvm_cpu_context as both host
> and guest can now use this field in a common way.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> [Added __cpu_copy_hyp_conf, hcr_el2 field in struct kvm_cpu_context]
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Christoffer Dall <christoffer.dall@arm.com>
> Cc: kvmarm@lists.cs.columbia.edu
> ---
>  arch/arm/include/asm/kvm_host.h      |  2 ++
>  arch/arm64/include/asm/kvm_asm.h     |  2 ++
>  arch/arm64/include/asm/kvm_emulate.h | 22 +++++++++++-----------
>  arch/arm64/include/asm/kvm_host.h    | 13 ++++++++++++-
>  arch/arm64/include/asm/kvm_hyp.h     |  2 +-
>  arch/arm64/kvm/guest.c               |  2 +-
>  arch/arm64/kvm/hyp/switch.c          | 23 +++++++++++++----------
>  arch/arm64/kvm/hyp/sysreg-sr.c       | 21 ++++++++++++++++++++-
>  arch/arm64/kvm/hyp/tlb.c             |  6 +++++-
>  virt/kvm/arm/arm.c                   |  1 +
>  10 files changed, 68 insertions(+), 26 deletions(-)
> 
> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
> index ca56537..05706b4 100644
> --- a/arch/arm/include/asm/kvm_host.h
> +++ b/arch/arm/include/asm/kvm_host.h
> @@ -273,6 +273,8 @@ static inline void __cpu_init_stage2(void)
>  	kvm_call_hyp(__init_stage2_translation);
>  }
>  
> +static inline void __cpu_copy_hyp_conf(void) {}
> +
>  static inline int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>  {
>  	return 0;
> diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
> index f5b79e9..8acd73f 100644
> --- a/arch/arm64/include/asm/kvm_asm.h
> +++ b/arch/arm64/include/asm/kvm_asm.h
> @@ -80,6 +80,8 @@ extern void __vgic_v3_init_lrs(void);
>  
>  extern u32 __kvm_get_mdcr_el2(void);
>  
> +extern void __kvm_populate_host_regs(void);
> +
>  /* Home-grown __this_cpu_{ptr,read} variants that always work at HYP */
>  #define __hyp_this_cpu_ptr(sym)						\
>  	({								\
> diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
> index 506386a..0dbe795 100644
> --- a/arch/arm64/include/asm/kvm_emulate.h
> +++ b/arch/arm64/include/asm/kvm_emulate.h
> @@ -50,25 +50,25 @@ void kvm_inject_pabt32(struct kvm_vcpu *vcpu, unsigned long addr);
>  
>  static inline bool vcpu_el1_is_32bit(struct kvm_vcpu *vcpu)
>  {
> -	return !(vcpu->arch.hcr_el2 & HCR_RW);
> +	return !(vcpu->arch.ctxt.hcr_el2 & HCR_RW);

Putting hcr_el2 into struct kvm_cpu_context creates a lot of splatter
here, and I'm wondering whether it's really necessary.  Otherwise,
we could just put the per-vcpu guest HCR_EL2 value in struct
kvm_vcpu_arch.

Is the *host* hcr_el2 value really different per-vcpu?  That looks
odd.  I would have thought this is fixed across the system at KVM
startup time.

Having a single global host hcr_el2 would also avoid the need for
__kvm_populate_host_regs(): instead, we just decide what HCR_EL2 is to
be ahead of time and set a global variable that we map into Hyp.


Or does the host HCR_EL2 need to vary at runtime for some reason I've
missed?
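
Concretely, I'm imagining something like the following (entirely
untested, names made up, VHE case only):

	/* Computed once at KVM init time, then mapped into Hyp: */
	u64 kvm_host_hcr_el2 __ro_after_init;

	static void compute_host_hcr_el2(void)
	{
		kvm_host_hcr_el2 = HCR_HOST_VHE_FLAGS;

		/* Conditional host bits get ORed in here, e.g.: */
		if (system_supports_address_auth())
			kvm_host_hcr_el2 |= HCR_API | HCR_APK;
	}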

[...]

+void __hyp_text __kvm_populate_host_regs(void)
+{
+       struct kvm_cpu_context *host_ctxt;
+
+       if (has_vhe())
+               host_ctxt = this_cpu_ptr(&kvm_host_cpu_state);
+       else
+               host_ctxt = __hyp_this_cpu_ptr(kvm_host_cpu_state);

According to the comment by the definition of __hyp_this_cpu_ptr(), this
always works at Hyp.  I also see other callers with no fallback to
this_cpu_ptr() like we have here.

So, can we simply always call __hyp_this_cpu_ptr() here?

(I'm not familiar with this, myself.)

Cheers
---Dave


* Re: [PATCH v6 2/6] arm64/kvm: preserve host MDCR_EL2 value
  2019-02-19  9:24 ` [PATCH v6 2/6] arm64/kvm: preserve host MDCR_EL2 value Amit Daniel Kachhap
  2019-02-21 11:57   ` Mark Rutland
@ 2019-02-21 15:51   ` Dave Martin
  2019-03-01  6:10     ` Amit Daniel Kachhap
  1 sibling, 1 reply; 41+ messages in thread
From: Dave Martin @ 2019-02-21 15:51 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: linux-arm-kernel, Marc Zyngier, Catalin Marinas, Will Deacon,
	Kristina Martsenko, kvmarm, Ramana Radhakrishnan, linux-kernel

On Tue, Feb 19, 2019 at 02:54:27PM +0530, Amit Daniel Kachhap wrote:
> Save host MDCR_EL2 value during kvm HYP initialisation and restore
> after every switch from host to guest. There should not be any
> change in functionality due to this.
> 
> The value of mdcr_el2 is now stored in struct kvm_cpu_context as
> both host and guest can now use this field in a common way.

Is MDCR_EL2 somehow relevant to pointer auth?

It's not entirely clear why this patch is here.

If this is a cleanup to align the handling of this register with
how HCR_EL2 is handled, it would be good to explain that in the commit
message.

> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Christoffer Dall <christoffer.dall@arm.com>
> Cc: kvmarm@lists.cs.columbia.edu
> ---
>  arch/arm/include/asm/kvm_host.h   |  1 -
>  arch/arm64/include/asm/kvm_host.h |  6 ++----
>  arch/arm64/kvm/debug.c            | 28 ++++++----------------------
>  arch/arm64/kvm/hyp/switch.c       | 17 ++++-------------
>  arch/arm64/kvm/hyp/sysreg-sr.c    |  6 ++++++
>  virt/kvm/arm/arm.c                |  1 -
>  6 files changed, 18 insertions(+), 41 deletions(-)
> 
> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
> index 05706b4..704667e 100644
> --- a/arch/arm/include/asm/kvm_host.h
> +++ b/arch/arm/include/asm/kvm_host.h
> @@ -294,7 +294,6 @@ static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {}
>  static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
>  static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
>  
> -static inline void kvm_arm_init_debug(void) {}
>  static inline void kvm_arm_setup_debug(struct kvm_vcpu *vcpu) {}
>  static inline void kvm_arm_clear_debug(struct kvm_vcpu *vcpu) {}
>  static inline void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu) {}
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 1b2e05b..2f1bb86 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -205,6 +205,8 @@ struct kvm_cpu_context {
>  
>  	/* HYP host/guest configuration */
>  	u64 hcr_el2;
> +	u32 mdcr_el2;
> +

ARMv8-A says MDCR_EL2 is a 64-bit register.

Bits [63:20] are currently RES0, so this is probably not a big deal.
But it would be better to make this 64-bit to prevent future accidents.
It may be better to make that change in a separate patch.

This is probably non-urgent, since this is clearly not causing problems
for anyone today.

[...]

Cheers
---Dave


* Re: [PATCH v6 3/6] arm64/kvm: context-switch ptrauth registers
  2019-02-21 12:29   ` Mark Rutland
@ 2019-02-21 15:51     ` Dave Martin
  2019-03-01  6:17       ` Amit Daniel Kachhap
  2019-02-28  9:07     ` Amit Daniel Kachhap
  1 sibling, 1 reply; 41+ messages in thread
From: Dave Martin @ 2019-02-21 15:51 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Amit Daniel Kachhap, linux-kernel, Marc Zyngier, Catalin Marinas,
	Will Deacon, Kristina Martsenko, kvmarm, Ramana Radhakrishnan,
	linux-arm-kernel

On Thu, Feb 21, 2019 at 12:29:42PM +0000, Mark Rutland wrote:
> On Tue, Feb 19, 2019 at 02:54:28PM +0530, Amit Daniel Kachhap wrote:
> > From: Mark Rutland <mark.rutland@arm.com>
> > 
> > When pointer authentication is supported, a guest may wish to use it.
> > This patch adds the necessary KVM infrastructure for this to work, with
> > a semi-lazy context switch of the pointer auth state.
> > 
> > Pointer authentication feature is only enabled when VHE is built
> > in the kernel and present into CPU implementation so only VHE code
> > paths are modified.
> 
> Nit: s/into/in the/
> 
> > 
> > When we schedule a vcpu, we disable guest usage of pointer
> > authentication instructions and accesses to the keys. While these are
> > disabled, we avoid context-switching the keys. When we trap the guest
> > trying to use pointer authentication functionality, we change to eagerly
> > context-switching the keys, and enable the feature. The next time the
> > vcpu is scheduled out/in, we start again. However the host key registers
> > are saved in vcpu load stage as they remain constant for each vcpu
> > schedule.
> > 
> > Pointer authentication consists of address authentication and generic
> > authentication, and CPUs in a system might have varied support for
> > either. Where support for either feature is not uniform, it is hidden
> > from guests via ID register emulation, as a result of the cpufeature
> > framework in the host.
> > 
> > Unfortunately, address authentication and generic authentication cannot
> > be trapped separately, as the architecture provides a single EL2 trap
> > covering both. If we wish to expose one without the other, we cannot
> > prevent a (badly-written) guest from intermittently using a feature
> > which is not uniformly supported (when scheduled on a physical CPU which
> > supports the relevant feature). Hence, this patch expects both type of
> > authentication to be present in a cpu.
> > 
> > Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> > [Only VHE, key switch from assembly, kvm_supports_ptrauth
> > checks, save host key in vcpu_load]
> 
> Hmm, why do we need to do the key switch in assembly, given it's not
> used in-kernel right now?
> 
> Is that in preparation for in-kernel pointer auth usage? If so, please
> call that out in the commit message.

[...]

> Huh, so we're actually doing the switch in C code...
> 
> >  # KVM code is run at a different exception code with a different map, so
> >  # compiler instrumentation that inserts callbacks or checks into the code may
> > diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
> > index 675fdc1..b78cc15 100644
> > --- a/arch/arm64/kvm/hyp/entry.S
> > +++ b/arch/arm64/kvm/hyp/entry.S
> > @@ -64,6 +64,12 @@ ENTRY(__guest_enter)
> >  
> >  	add	x18, x0, #VCPU_CONTEXT
> >  
> > +#ifdef	CONFIG_ARM64_PTR_AUTH
> > +	// Prepare parameter for __ptrauth_switch_to_guest(vcpu, host, guest).
> > +	mov	x2, x18
> > +	bl	__ptrauth_switch_to_guest
> > +#endif
> 
> ... and conditionally *calling* that switch code from assembly ...
> 
> > +
> >  	// Restore guest regs x0-x17
> >  	ldp	x0, x1,   [x18, #CPU_XREG_OFFSET(0)]
> >  	ldp	x2, x3,   [x18, #CPU_XREG_OFFSET(2)]
> > @@ -118,6 +124,17 @@ ENTRY(__guest_exit)
> >  
> >  	get_host_ctxt	x2, x3
> >  
> > +#ifdef	CONFIG_ARM64_PTR_AUTH
> > +	// Prepare parameter for __ptrauth_switch_to_host(vcpu, guest, host).
> > +	// Save x0, x2 which are used later in callee saved registers.
> > +	mov	x19, x0
> > +	mov	x20, x2
> > +	sub	x0, x1, #VCPU_CONTEXT
> > +	ldr	x29, [x2, #CPU_XREG_OFFSET(29)]
> > +	bl	__ptrauth_switch_to_host
> > +	mov	x0, x19
> > +	mov	x2, x20
> > +#endif
> 
> ... which adds a load of boilerplate for no immediate gain.
> 
> Do we really need to do this in assembly today?

If we will need to move this to assembly when we add in-kernel ptrauth
support, it may be best to have it in assembly from the start, to reduce
unnecessary churn.

But having a mix of C and assembly is likely to make things more
complicated: we should go with one or the other IMHO.

Cheers
---Dave


* Re: [PATCH v6 3/6] arm64/kvm: context-switch ptrauth registers
  2019-02-19  9:24 ` [PATCH v6 3/6] arm64/kvm: context-switch ptrauth registers Amit Daniel Kachhap
  2019-02-21 12:29   ` Mark Rutland
@ 2019-02-21 15:53   ` Dave Martin
  2019-03-01  9:35     ` Amit Daniel Kachhap
  2019-02-26 18:31   ` James Morse
  2 siblings, 1 reply; 41+ messages in thread
From: Dave Martin @ 2019-02-21 15:53 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: linux-arm-kernel, Marc Zyngier, Catalin Marinas, Will Deacon,
	Kristina Martsenko, kvmarm, Ramana Radhakrishnan, linux-kernel

On Tue, Feb 19, 2019 at 02:54:28PM +0530, Amit Daniel Kachhap wrote:
> From: Mark Rutland <mark.rutland@arm.com>
> 
> When pointer authentication is supported, a guest may wish to use it.
> This patch adds the necessary KVM infrastructure for this to work, with
> a semi-lazy context switch of the pointer auth state.
> 
> Pointer authentication feature is only enabled when VHE is built
> in the kernel and present into CPU implementation so only VHE code
> paths are modified.
> 
> When we schedule a vcpu, we disable guest usage of pointer
> authentication instructions and accesses to the keys. While these are
> disabled, we avoid context-switching the keys. When we trap the guest
> trying to use pointer authentication functionality, we change to eagerly
> context-switching the keys, and enable the feature. The next time the
> vcpu is scheduled out/in, we start again. However the host key registers
> are saved in vcpu load stage as they remain constant for each vcpu
> schedule.
> 
> Pointer authentication consists of address authentication and generic
> authentication, and CPUs in a system might have varied support for
> either. Where support for either feature is not uniform, it is hidden
> from guests via ID register emulation, as a result of the cpufeature
> framework in the host.
> 
> Unfortunately, address authentication and generic authentication cannot
> be trapped separately, as the architecture provides a single EL2 trap
> covering both. If we wish to expose one without the other, we cannot
> prevent a (badly-written) guest from intermittently using a feature
> which is not uniformly supported (when scheduled on a physical CPU which
> supports the relevant feature). Hence, this patch expects both types of
> authentication to be present in a cpu.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> [Only VHE, key switch from assembly, kvm_supports_ptrauth
> checks, save host key in vcpu_load]
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> Reviewed-by: Julien Thierry <julien.thierry@arm.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Christoffer Dall <christoffer.dall@arm.com>
> Cc: kvmarm@lists.cs.columbia.edu
> ---
>  arch/arm/include/asm/kvm_host.h   |   1 +
>  arch/arm64/include/asm/kvm_host.h |  23 +++++++++
>  arch/arm64/include/asm/kvm_hyp.h  |   7 +++
>  arch/arm64/kernel/traps.c         |   1 +
>  arch/arm64/kvm/handle_exit.c      |  21 +++++---
>  arch/arm64/kvm/hyp/Makefile       |   1 +
>  arch/arm64/kvm/hyp/entry.S        |  17 +++++++
>  arch/arm64/kvm/hyp/ptrauth-sr.c   | 101 ++++++++++++++++++++++++++++++++++++++
>  arch/arm64/kvm/sys_regs.c         |  37 +++++++++++++-
>  virt/kvm/arm/arm.c                |   2 +
>  10 files changed, 201 insertions(+), 10 deletions(-)
>  create mode 100644 arch/arm64/kvm/hyp/ptrauth-sr.c

[...]

> diff --git a/arch/arm64/kvm/hyp/ptrauth-sr.c b/arch/arm64/kvm/hyp/ptrauth-sr.c
> new file mode 100644
> index 0000000..528ee6e
> --- /dev/null
> +++ b/arch/arm64/kvm/hyp/ptrauth-sr.c
> @@ -0,0 +1,101 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * arch/arm64/kvm/hyp/ptrauth-sr.c: Guest/host ptrauth save/restore
> + *
> + * Copyright 2018 Arm Limited
> + * Author: Mark Rutland <mark.rutland@arm.com>
> + *         Amit Daniel Kachhap <amit.kachhap@arm.com>
> + */
> +#include <linux/compiler.h>
> +#include <linux/kvm_host.h>
> +
> +#include <asm/cpucaps.h>
> +#include <asm/cpufeature.h>
> +#include <asm/kvm_asm.h>
> +#include <asm/kvm_hyp.h>
> +#include <asm/pointer_auth.h>
> +
> +static __always_inline bool __ptrauth_is_enabled(struct kvm_vcpu *vcpu)
> +{
> +	return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) &&
> +			vcpu->arch.ctxt.hcr_el2 & (HCR_API | HCR_APK);
> +}
> +
> +#define __ptrauth_save_key(regs, key)						\
> +({										\
> +	regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1);	\
> +	regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1);	\
> +})
> +
> +static __always_inline void __ptrauth_save_state(struct kvm_cpu_context *ctxt)

Why __always_inline?

> +{
> +	__ptrauth_save_key(ctxt->sys_regs, APIA);
> +	__ptrauth_save_key(ctxt->sys_regs, APIB);
> +	__ptrauth_save_key(ctxt->sys_regs, APDA);
> +	__ptrauth_save_key(ctxt->sys_regs, APDB);
> +	__ptrauth_save_key(ctxt->sys_regs, APGA);
> +}
> +
> +#define __ptrauth_restore_key(regs, key) 					\
> +({										\
> +	write_sysreg_s(regs[key ## KEYLO_EL1], SYS_ ## key ## KEYLO_EL1);	\
> +	write_sysreg_s(regs[key ## KEYHI_EL1], SYS_ ## key ## KEYHI_EL1);	\
> +})
> +
> +static __always_inline void __ptrauth_restore_state(struct kvm_cpu_context *ctxt)

Same here.  I would hope these just need to be marked with the correct
function attribute to disable ptrauth by the compiler.  I don't see why
it makes a difference whether it's inline or not.

If the compiler semantics are not sufficiently clear, make it a macro.

(Bikeshedding here, so it you feel this has already been discussed to
death I'm happy for this to stay as-is.)

> +{
> +	__ptrauth_restore_key(ctxt->sys_regs, APIA);
> +	__ptrauth_restore_key(ctxt->sys_regs, APIB);
> +	__ptrauth_restore_key(ctxt->sys_regs, APDA);
> +	__ptrauth_restore_key(ctxt->sys_regs, APDB);
> +	__ptrauth_restore_key(ctxt->sys_regs, APGA);
> +}
> +
> +/**
> + * This function changes the key so assign Pointer Authentication safe
> + * GCC attribute if protected by it.
> + */

(I'd have preferred to keep __noptrauth here and define it do nothing for
now.  But I'll defer to others on that, since this has already been
discussed...)

> +void __ptrauth_switch_to_guest(struct kvm_vcpu *vcpu,
> +				  struct kvm_cpu_context *host_ctxt,
> +				  struct kvm_cpu_context *guest_ctxt)
> +{
> +	if (!__ptrauth_is_enabled(vcpu))
> +		return;
> +
> +	__ptrauth_restore_state(guest_ctxt);
> +}
> +
> +/**
> + * This function changes the key so assign Pointer Authentication safe
> + * GCC attribute if protected by it.
> + */
> +void __ptrauth_switch_to_host(struct kvm_vcpu *vcpu,
> +				 struct kvm_cpu_context *guest_ctxt,
> +				 struct kvm_cpu_context *host_ctxt)
> +{
> +	if (!__ptrauth_is_enabled(vcpu))
> +		return;
> +
> +	__ptrauth_save_state(guest_ctxt);
> +	__ptrauth_restore_state(host_ctxt);
> +}
> +
> +/**
> + * kvm_arm_vcpu_ptrauth_reset - resets ptrauth for vcpu schedule
> + *
> + * @vcpu: The VCPU pointer
> + *
> + * This function may be used to disable ptrauth and use it in a lazy context
> + * via traps. However host key registers are saved here as they dont change
> + * during host/guest switch.
> + */
> +void kvm_arm_vcpu_ptrauth_reset(struct kvm_vcpu *vcpu)

I feel this is not a good name.  It sounds too much like it resets the
registers as part of vcpu reset, whereas really it's doing something
completely different.

(Do you reset the regs anywhere btw?  I may have missed it...)

> +{
> +	struct kvm_cpu_context *host_ctxt;
> +
> +	if (kvm_supports_ptrauth()) {
> +		kvm_arm_vcpu_ptrauth_disable(vcpu);
> +		host_ctxt = vcpu->arch.host_cpu_context;
> +		__ptrauth_save_state(host_ctxt);
> +	}
> +}
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index a6c9381..12529df 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -986,6 +986,32 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  	{ SYS_DESC(SYS_PMEVTYPERn_EL0(n)),					\
>  	  access_pmu_evtyper, reset_unknown, (PMEVTYPER0_EL0 + n), }
>  
> +
> +void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu)
> +{
> +	vcpu->arch.ctxt.hcr_el2 |= (HCR_API | HCR_APK);

Pedantic nit: surplus ().

(Although opinions differ, and keeping them looks more symmetric with
kvm_arm_vcpu_ptrauth_disable() -- either way, the code can stay as-is if
you prefer.)

> +}
> +
> +void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu)
> +{
> +	vcpu->arch.ctxt.hcr_el2 &= ~(HCR_API | HCR_APK);
> +}
> +
> +static bool trap_ptrauth(struct kvm_vcpu *vcpu,
> +			 struct sys_reg_params *p,
> +			 const struct sys_reg_desc *rd)
> +{
> +	kvm_arm_vcpu_ptrauth_trap(vcpu);
> +	return false;

Can we ever get here?  Won't PAC traps always be handled via
handle_exit()?

Or can we also take sysreg access traps when the guest tries to access
the ptrauth key registers?

(I'm now wondering how this works for SVE.)

> +}
> +
> +#define __PTRAUTH_KEY(k)						\
> +	{ SYS_DESC(SYS_## k), trap_ptrauth, reset_unknown, k }
> +
> +#define PTRAUTH_KEY(k)							\
> +	__PTRAUTH_KEY(k ## KEYLO_EL1),					\
> +	__PTRAUTH_KEY(k ## KEYHI_EL1)
> +
>  static bool access_cntp_tval(struct kvm_vcpu *vcpu,
>  		struct sys_reg_params *p,
>  		const struct sys_reg_desc *r)
> @@ -1045,9 +1071,10 @@ static u64 read_id_reg(struct sys_reg_desc const *r, bool raz)
>  					 (0xfUL << ID_AA64ISAR1_API_SHIFT) |
>  					 (0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
>  					 (0xfUL << ID_AA64ISAR1_GPI_SHIFT);
> -		if (val & ptrauth_mask)
> +		if (!kvm_supports_ptrauth()) {

Don't we now always print this when ptrauth is not supported?

Previously we only printed a message in the interesting case, i.e.,
where the host supports ptrauth but we cannot offer it to the guest.

>  			kvm_debug("ptrauth unsupported for guests, suppressing\n");
> -		val &= ~ptrauth_mask;
> +			val &= ~ptrauth_mask;
> +		}
>  	} else if (id == SYS_ID_AA64MMFR1_EL1) {
>  		if (val & (0xfUL << ID_AA64MMFR1_LOR_SHIFT))
>  			kvm_debug("LORegions unsupported for guests, suppressing\n");
> @@ -1316,6 +1343,12 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>  	{ SYS_DESC(SYS_TTBR1_EL1), access_vm_reg, reset_unknown, TTBR1_EL1 },
>  	{ SYS_DESC(SYS_TCR_EL1), access_vm_reg, reset_val, TCR_EL1, 0 },
>  
> +	PTRAUTH_KEY(APIA),
> +	PTRAUTH_KEY(APIB),
> +	PTRAUTH_KEY(APDA),
> +	PTRAUTH_KEY(APDB),
> +	PTRAUTH_KEY(APGA),
> +
>  	{ SYS_DESC(SYS_AFSR0_EL1), access_vm_reg, reset_unknown, AFSR0_EL1 },
>  	{ SYS_DESC(SYS_AFSR1_EL1), access_vm_reg, reset_unknown, AFSR1_EL1 },
>  	{ SYS_DESC(SYS_ESR_EL1), access_vm_reg, reset_unknown, ESR_EL1 },
> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> index 2032a66..d7e003f 100644
> --- a/virt/kvm/arm/arm.c
> +++ b/virt/kvm/arm/arm.c
> @@ -388,6 +388,8 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>  		vcpu_clear_wfe_traps(vcpu);
>  	else
>  		vcpu_set_wfe_traps(vcpu);
> +
> +	kvm_arm_vcpu_ptrauth_reset(vcpu);
>  }
>  
>  void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
> -- 
> 2.7.4
> 
> _______________________________________________
> kvmarm mailing list
> kvmarm@lists.cs.columbia.edu
> https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


* Re: [PATCH v6 4/6] arm64/kvm: add a userspace option to enable pointer authentication
  2019-02-19  9:24 ` [PATCH v6 4/6] arm64/kvm: add a userspace option to enable pointer authentication Amit Daniel Kachhap
  2019-02-21 12:34   ` Mark Rutland
@ 2019-02-21 15:53   ` Dave Martin
  2019-03-01  9:41     ` Amit Daniel Kachhap
  2019-02-26 18:33   ` James Morse
  2 siblings, 1 reply; 41+ messages in thread
From: Dave Martin @ 2019-02-21 15:53 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: linux-arm-kernel, Marc Zyngier, Catalin Marinas, Will Deacon,
	Kristina Martsenko, kvmarm, Ramana Radhakrishnan, linux-kernel

On Tue, Feb 19, 2019 at 02:54:29PM +0530, Amit Daniel Kachhap wrote:
> This feature will allow the KVM guest to allow the handling of
> pointer authentication instructions or to treat them as undefined
> if not set. It uses the existing vcpu API KVM_ARM_VCPU_INIT to
> supply this parameter instead of creating a new API.
> 
> A new register is not created to pass this parameter via
> SET/GET_ONE_REG interface as just a flag (KVM_ARM_VCPU_PTRAUTH)
> supplied is enough to enable this feature.
> 
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Christoffer Dall <christoffer.dall@arm.com>
> Cc: kvmarm@lists.cs.columbia.edu
> ---
>  Documentation/arm64/pointer-authentication.txt |  9 +++++----
>  Documentation/virtual/kvm/api.txt              |  4 ++++
>  arch/arm64/include/asm/kvm_host.h              |  3 ++-
>  arch/arm64/include/uapi/asm/kvm.h              |  1 +
>  arch/arm64/kvm/handle_exit.c                   |  2 +-
>  arch/arm64/kvm/hyp/ptrauth-sr.c                | 16 +++++++++++++++-
>  arch/arm64/kvm/reset.c                         |  3 +++
>  arch/arm64/kvm/sys_regs.c                      | 26 +++++++++++++-------------
>  include/uapi/linux/kvm.h                       |  1 +
>  9 files changed, 45 insertions(+), 20 deletions(-)
> 

[...]

> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 1bacf78..2768a53 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -43,7 +43,7 @@
>  
>  #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS
>  
> -#define KVM_VCPU_MAX_FEATURES 4
> +#define KVM_VCPU_MAX_FEATURES 5
>  
>  #define KVM_REQ_SLEEP \
>  	KVM_ARCH_REQ_FLAGS(0, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
> @@ -451,6 +451,7 @@ static inline bool kvm_arch_requires_vhe(void)
>  	return false;
>  }
>  
> +bool kvm_arm_vcpu_ptrauth_allowed(const struct kvm_vcpu *vcpu);
>  static inline bool kvm_supports_ptrauth(void)
>  {
>  	return has_vhe() && system_supports_address_auth() &&

[...]

> diff --git a/arch/arm64/kvm/hyp/ptrauth-sr.c b/arch/arm64/kvm/hyp/ptrauth-sr.c
> index 528ee6e..6846a23 100644
> --- a/arch/arm64/kvm/hyp/ptrauth-sr.c
> +++ b/arch/arm64/kvm/hyp/ptrauth-sr.c
> @@ -93,9 +93,23 @@ void kvm_arm_vcpu_ptrauth_reset(struct kvm_vcpu *vcpu)

[...]

> +/**
> + * kvm_arm_vcpu_ptrauth_allowed - checks if ptrauth feature is allowed by user
> + *
> + * @vcpu: The VCPU pointer
> + *
> + * This function will be used to check userspace option to have ptrauth or not
> + * in the guest kernel.
> + */
> +bool kvm_arm_vcpu_ptrauth_allowed(const struct kvm_vcpu *vcpu)
> +{
> +	return kvm_supports_ptrauth() &&
> +		test_bit(KVM_ARM_VCPU_PTRAUTH, vcpu->arch.features);
> +}

Nit: for SVE is called the equivalent helper vcpu_has_sve(vcpu).

Neither naming is more correct, but it would make sense to be
consistent.  We will likely accumulate more of these vcpu feature
predicates over time.

Given that this is trivial and will be used all over the place, it
probably makes sense to define it in kvm_host.h rather than having it
out of line in a separate C file.
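
e.g. (modulo naming):

	static inline bool vcpu_has_ptrauth(const struct kvm_vcpu *vcpu)
	{
		return kvm_supports_ptrauth() &&
			test_bit(KVM_ARM_VCPU_PTRAUTH, vcpu->arch.features);
	}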

> diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
> index b72a3dd..987e0c3c 100644
> --- a/arch/arm64/kvm/reset.c
> +++ b/arch/arm64/kvm/reset.c
> @@ -91,6 +91,9 @@ int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>  	case KVM_CAP_ARM_VM_IPA_SIZE:
>  		r = kvm_ipa_limit;
>  		break;
> +	case KVM_CAP_ARM_PTRAUTH:
> +		r = kvm_supports_ptrauth();
> +		break;
>  	default:
>  		r = 0;
>  	}
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 12529df..f7bcc60 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -1055,7 +1055,7 @@ static bool access_cntp_cval(struct kvm_vcpu *vcpu,
>  }
>  
>  /* Read a sanitised cpufeature ID register by sys_reg_desc */
> -static u64 read_id_reg(struct sys_reg_desc const *r, bool raz)
> +static u64 read_id_reg(struct kvm_vcpu *vcpu, struct sys_reg_desc const *r, bool raz)
>  {
>  	u32 id = sys_reg((u32)r->Op0, (u32)r->Op1,
>  			 (u32)r->CRn, (u32)r->CRm, (u32)r->Op2);
> @@ -1071,7 +1071,7 @@ static u64 read_id_reg(struct sys_reg_desc const *r, bool raz)
>  					 (0xfUL << ID_AA64ISAR1_API_SHIFT) |
>  					 (0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
>  					 (0xfUL << ID_AA64ISAR1_GPI_SHIFT);
> -		if (!kvm_supports_ptrauth()) {
> +		if (!kvm_arm_vcpu_ptrauth_allowed(vcpu)) {
>  			kvm_debug("ptrauth unsupported for guests, suppressing\n");
>  			val &= ~ptrauth_mask;
>  		}
> @@ -1095,7 +1095,7 @@ static bool __access_id_reg(struct kvm_vcpu *vcpu,
>  	if (p->is_write)
>  		return write_to_read_only(vcpu, p, r);
>  
> -	p->regval = read_id_reg(r, raz);
> +	p->regval = read_id_reg(vcpu, r, raz);
>  	return true;
>  }

The SVE KVM series makes various overlapping changes to propagate the vcpu
into the relevant places, but hopefully the rebase is not too painful.
Many of the changes are probably virtually identical between the two
series.

See for example [1].  Maybe you could cherry-pick and drop the
equivalent changes here (though if your series is picked up first, I
will live with it ;)

[...]

Cheers
---Dave


[1] [PATCH v5 10/26] KVM: arm64: Propagate vcpu into read_id_reg()
https://lists.cs.columbia.edu/pipermail/kvmarm/2019-February/034687.html


* Re: [PATCH v6 5/6] arm64/kvm: control accessibility of ptrauth key registers
  2019-02-19  9:24 ` [PATCH v6 5/6] arm64/kvm: control accessibility of ptrauth key registers Amit Daniel Kachhap
@ 2019-02-21 15:53   ` Dave Martin
  2019-02-26 18:34   ` James Morse
  1 sibling, 0 replies; 41+ messages in thread
From: Dave Martin @ 2019-02-21 15:53 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: linux-arm-kernel, Marc Zyngier, Catalin Marinas, Will Deacon,
	Kristina Martsenko, kvmarm, Ramana Radhakrishnan, linux-kernel

On Tue, Feb 19, 2019 at 02:54:30PM +0530, Amit Daniel Kachhap wrote:
> According to userspace settings, ptrauth key registers are conditionally
> present in guest system register list based on user specified flag
> KVM_ARM_VCPU_PTRAUTH.
> 
> Reset routines still set these registers to default values, but they are
> left like that as they are conditionally accessible (set/get).
> 
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Christoffer Dall <christoffer.dall@arm.com>
> Cc: kvmarm@lists.cs.columbia.edu
> ---
> This patch needs patch [1] by Dave Martin and adds feature to manage accessibility in a scalable way.
> 
> [1]: https://lore.kernel.org/linux-arm-kernel/1547757219-19439-13-git-send-email-Dave.Martin@arm.com/ 

FYI, check_present() has changed a bit in the SVE v5 series [2].

The precise interface is still under discussion, so please take a look
and feel free to comment.

You'll probably need to tweak some things so that the KVM_GET_REG_LIST
output is consistent with the set of regs that do/don't yield -ENOENT in
KVM_GET_ONE_REG/KVM_SET_ONE_REG.

See other patches in the series for examples of how I use the modified
interface.
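
For illustration, the kind of predicate I mean (a sketch only, written
against this series' check_present() hook, which may yet change):

	/* One predicate for KVM_GET_REG_LIST and the GET/SET_ONE_REG paths */
	static bool sys_reg_present(const struct kvm_vcpu *vcpu,
				    const struct sys_reg_desc *rd)
	{
		return !rd->check_present || rd->check_present(vcpu, rd);
	}

Regs for which this returns false would then be skipped when enumerating for
KVM_GET_REG_LIST, and would yield -ENOENT from KVM_GET_ONE_REG and
KVM_SET_ONE_REG.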

[...]

> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index f7bcc60..c2f4974 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -1005,8 +1005,13 @@ static bool trap_ptrauth(struct kvm_vcpu *vcpu,
>  	return false;
>  }
>  
> +static bool check_ptrauth(const struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd)
> +{
> +	return kvm_arm_vcpu_ptrauth_allowed(vcpu);
> +}
> +
>  #define __PTRAUTH_KEY(k)						\
> -	{ SYS_DESC(SYS_## k), trap_ptrauth, reset_unknown, k }
> +	{ SYS_DESC(SYS_## k), trap_ptrauth, reset_unknown, k , .check_present = check_ptrauth}

Cheers
---Dave


[2] [PATCH v5 12/26] KVM: arm64: Support runtime sysreg visibility filtering
https://lists.cs.columbia.edu/pipermail/kvmarm/2019-February/034671.html


* Re: [kvmtool PATCH v6 6/6] arm/kvm: arm64: Add a vcpu feature for pointer authentication
  2019-02-19  9:24 ` [kvmtool PATCH v6 6/6] arm/kvm: arm64: Add a vcpu feature for pointer authentication Amit Daniel Kachhap
@ 2019-02-21 15:54   ` Dave Martin
  2019-03-01 10:37     ` Amit Daniel Kachhap
  0 siblings, 1 reply; 41+ messages in thread
From: Dave Martin @ 2019-02-21 15:54 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: linux-arm-kernel, Marc Zyngier, Catalin Marinas, Will Deacon,
	Kristina Martsenko, kvmarm, Ramana Radhakrishnan, linux-kernel

On Tue, Feb 19, 2019 at 02:54:31PM +0530, Amit Daniel Kachhap wrote:
> This is a runtime capability for KVM tool to enable Armv8.3 Pointer
> Authentication in guest kernel. A command line option --ptrauth is
> required for this.
> 
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> ---
>  arm/aarch32/include/kvm/kvm-cpu-arch.h    | 1 +
>  arm/aarch64/include/asm/kvm.h             | 1 +
>  arm/aarch64/include/kvm/kvm-config-arch.h | 4 +++-
>  arm/aarch64/include/kvm/kvm-cpu-arch.h    | 1 +
>  arm/include/arm-common/kvm-config-arch.h  | 1 +
>  arm/kvm-cpu.c                             | 6 ++++++
>  include/linux/kvm.h                       | 1 +
>  7 files changed, 14 insertions(+), 1 deletion(-)
> 
> diff --git a/arm/aarch32/include/kvm/kvm-cpu-arch.h b/arm/aarch32/include/kvm/kvm-cpu-arch.h
> index d28ea67..520ea76 100644
> --- a/arm/aarch32/include/kvm/kvm-cpu-arch.h
> +++ b/arm/aarch32/include/kvm/kvm-cpu-arch.h
> @@ -13,4 +13,5 @@
>  #define ARM_CPU_ID		0, 0, 0
>  #define ARM_CPU_ID_MPIDR	5
>  
> +#define ARM_VCPU_PTRAUTH_FEATURE	0
>  #endif /* KVM__KVM_CPU_ARCH_H */
> diff --git a/arm/aarch64/include/asm/kvm.h b/arm/aarch64/include/asm/kvm.h
> index 97c3478..1068fd1 100644
> --- a/arm/aarch64/include/asm/kvm.h
> +++ b/arm/aarch64/include/asm/kvm.h
> @@ -102,6 +102,7 @@ struct kvm_regs {
>  #define KVM_ARM_VCPU_EL1_32BIT		1 /* CPU running a 32bit VM */
>  #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
>  #define KVM_ARM_VCPU_PMU_V3		3 /* Support guest PMUv3 */
> +#define KVM_ARM_VCPU_PTRAUTH		4 /* CPU uses pointer authentication */
>  
>  struct kvm_vcpu_init {
>  	__u32 target;
> diff --git a/arm/aarch64/include/kvm/kvm-config-arch.h b/arm/aarch64/include/kvm/kvm-config-arch.h
> index 04be43d..2074684 100644
> --- a/arm/aarch64/include/kvm/kvm-config-arch.h
> +++ b/arm/aarch64/include/kvm/kvm-config-arch.h
> @@ -8,7 +8,9 @@
>  			"Create PMUv3 device"),				\
>  	OPT_U64('\0', "kaslr-seed", &(cfg)->kaslr_seed,			\
>  			"Specify random seed for Kernel Address Space "	\
> -			"Layout Randomization (KASLR)"),
> +			"Layout Randomization (KASLR)"),		\
> +	OPT_BOOLEAN('\0', "ptrauth", &(cfg)->has_ptrauth,		\
> +			"Enable address authentication"),

Nit: doesn't this enable address *and* generic authentication?  The
discussion on what capabilities and enables the ABI exposes probably
needs to conclude before we can finalise this here.

However, I would recommend that we provide a single option here that
turns both address authentication and generic authentication on, even
if the ABI treats them independently.  This is expected to be the common
case by far.

We can always add more fine-grained options later if it turns out to be
necessary.
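
To illustrate (a sketch only; the _ADDRESS/_GENERIC flag names are
hypothetical, pending the ABI discussion):

	/* arm/kvm-cpu.c: one --ptrauth option sets both feature bits */
	if (kvm->cfg.arch.has_ptrauth &&
	    kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH))
		vcpu_init.features[0] |=
			(1UL << KVM_ARM_VCPU_PTRAUTH_ADDRESS) |
			(1UL << KVM_ARM_VCPU_PTRAUTH_GENERIC);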

>  #include "arm-common/kvm-config-arch.h"
>  
> diff --git a/arm/aarch64/include/kvm/kvm-cpu-arch.h b/arm/aarch64/include/kvm/kvm-cpu-arch.h
> index a9d8563..496ece8 100644
> --- a/arm/aarch64/include/kvm/kvm-cpu-arch.h
> +++ b/arm/aarch64/include/kvm/kvm-cpu-arch.h
> @@ -17,4 +17,5 @@
>  #define ARM_CPU_CTRL		3, 0, 1, 0
>  #define ARM_CPU_CTRL_SCTLR_EL1	0
>  
> +#define ARM_VCPU_PTRAUTH_FEATURE	(1UL << KVM_ARM_VCPU_PTRAUTH)
>  #endif /* KVM__KVM_CPU_ARCH_H */
> diff --git a/arm/include/arm-common/kvm-config-arch.h b/arm/include/arm-common/kvm-config-arch.h
> index 5734c46..5badcbd 100644
> --- a/arm/include/arm-common/kvm-config-arch.h
> +++ b/arm/include/arm-common/kvm-config-arch.h
> @@ -10,6 +10,7 @@ struct kvm_config_arch {
>  	bool		aarch32_guest;
>  	bool		has_pmuv3;
>  	u64		kaslr_seed;
> +	bool		has_ptrauth;
>  	enum irqchip_type irqchip;
>  	u64		fw_addr;
>  };
> diff --git a/arm/kvm-cpu.c b/arm/kvm-cpu.c
> index 7780251..4ac80f8 100644
> --- a/arm/kvm-cpu.c
> +++ b/arm/kvm-cpu.c
> @@ -68,6 +68,12 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned long cpu_id)
>  		vcpu_init.features[0] |= (1UL << KVM_ARM_VCPU_PSCI_0_2);
>  	}
>  
> +	/* Set KVM_ARM_VCPU_PTRAUTH if available */
> +	if (kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH)) {
> +		if (kvm->cfg.arch.has_ptrauth)
> +			vcpu_init.features[0] |= ARM_VCPU_PTRAUTH_FEATURE;
> +	}
> +

I'm not too keen on requiring a dummy #define for AArch32 here.  How do
we handle other subarch-specific feature flags?  Is there something we
can reuse?

(For SVE I didn't have a proper solution for this yet: my kvmtool
patches are still a dirty hack...)

[...]

Cheers
---Dave


* Re: [PATCH v6 1/6] arm64/kvm: preserve host HCR_EL2 value
  2019-02-19  9:24 ` [PATCH v6 1/6] arm64/kvm: preserve host HCR_EL2 value Amit Daniel Kachhap
  2019-02-21 11:50   ` Mark Rutland
  2019-02-21 15:49   ` Dave Martin
@ 2019-02-25 17:39   ` James Morse
  2019-02-26 10:06     ` James Morse
  2019-03-02 11:09     ` Amit Daniel Kachhap
  2 siblings, 2 replies; 41+ messages in thread
From: James Morse @ 2019-02-25 17:39 UTC (permalink / raw)
  To: Amit Daniel Kachhap, linux-arm-kernel
  Cc: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	Kristina Martsenko, linux-kernel, Mark Rutland, Julien Thierry

Hi Amit,

On 19/02/2019 09:24, Amit Daniel Kachhap wrote:
> From: Mark Rutland <mark.rutland@arm.com>
> 
> When restoring HCR_EL2 for the host, KVM uses HCR_HOST_VHE_FLAGS, which
> is a constant value. This works today, as the host HCR_EL2 value is
> always the same, but this will get in the way of supporting extensions
> that require HCR_EL2 bits to be set conditionally for the host.
> 
> To allow such features to work without KVM having to explicitly handle
> every possible host feature combination, this patch has KVM save/restore
> for the host HCR when switching to/from a guest HCR. The saving of the
> register is done once during cpu hypervisor initialization state and is
> just restored after switch from guest.
> 
> For fetching HCR_EL2 during kvm initialisation, a hyp call is made using
> kvm_call_hyp and is helpful in NHVE case.
> 
> For the hyp TLB maintenance code, __tlb_switch_to_host_vhe() is updated
> to toggle the TGE bit with a RMW sequence, as we already do in
> __tlb_switch_to_guest_vhe().
> 
> The value of hcr_el2 is now stored in struct kvm_cpu_context as both host
> and guest can now use this field in a common way.


> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
> index ca56537..05706b4 100644
> --- a/arch/arm/include/asm/kvm_host.h
> +++ b/arch/arm/include/asm/kvm_host.h
> @@ -273,6 +273,8 @@ static inline void __cpu_init_stage2(void)
>  	kvm_call_hyp(__init_stage2_translation);
>  }
>  
> +static inline void __cpu_copy_hyp_conf(void) {}
> +

I agree Mark's suggestion of adding 'host_ctxt' in here makes it clearer what it is.


> diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
> index 506386a..0dbe795 100644
> --- a/arch/arm64/include/asm/kvm_emulate.h
> +++ b/arch/arm64/include/asm/kvm_emulate.h

Hmmm, there is still a fair amount of churn due to moving the struct definition, but it's
easy enough to ignore as it's mechanical. A preparatory patch that switched as many as
possible to '*vcpu_hcr() = ' would cut the churn down some more, but I don't think it's
worth the extra effort.


> diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
> index a80a7ef..6e65cad 100644
> --- a/arch/arm64/include/asm/kvm_hyp.h
> +++ b/arch/arm64/include/asm/kvm_hyp.h
> @@ -151,7 +151,7 @@ void __fpsimd_restore_state(struct user_fpsimd_state *fp_regs);
>  bool __fpsimd_enabled(void);
>  
>  void activate_traps_vhe_load(struct kvm_vcpu *vcpu);
> -void deactivate_traps_vhe_put(void);
> +void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu);

I've forgotten why this is needed. You don't add a user of vcpu to
deactivate_traps_vhe_put() in this patch.


> diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
> index b0b1478..006bd33 100644
> --- a/arch/arm64/kvm/hyp/switch.c
> +++ b/arch/arm64/kvm/hyp/switch.c
> @@ -191,7 +194,7 @@ void activate_traps_vhe_load(struct kvm_vcpu *vcpu)

> -void deactivate_traps_vhe_put(void)
> +void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu)
>  {
>  	u64 mdcr_el2 = read_sysreg(mdcr_el2);
>  

Why does deactivate_traps_vhe_put() need the vcpu?


> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 7732d0b..1b2e05b 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -458,6 +459,16 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
>
>  static inline void __cpu_init_stage2(void) {}
>
> +/**
> + * __cpu_copy_hyp_conf - copy the boot hyp configuration registers
> + *
> + * It is called once per-cpu during CPU hyp initialisation.
> + */

Is it just the boot cpu?


> +static inline void __cpu_copy_hyp_conf(void)
> +{
> +	kvm_call_hyp(__kvm_populate_host_regs);
> +}
> +


> diff --git a/arch/arm64/kvm/hyp/sysreg-sr.c b/arch/arm64/kvm/hyp/sysreg-sr.c
> index 68d6f7c..68ddc0f 100644
> --- a/arch/arm64/kvm/hyp/sysreg-sr.c
> +++ b/arch/arm64/kvm/hyp/sysreg-sr.c
> @@ -21,6 +21,7 @@
>  #include <asm/kvm_asm.h>
>  #include <asm/kvm_emulate.h>
>  #include <asm/kvm_hyp.h>
> +#include <asm/kvm_mmu.h>

... what's kvm_mmu.h needed for?
The __hyp_this_cpu_ptr() you add comes from kvm_asm.h.

/me tries it.

Heh, hyp_symbol_addr(). kvm_asm.h should include this, but can't because the
kvm_ksym_ref() dependency is the other way round. This is just going to bite us somewhere
else later!
If we want to fix it now, moving hyp_symbol_addr() to kvm_asm.h would fix it. It's
generating adrp/add so the 'asm' label is fair, and it really should live with its EL1
counterpart kvm_ksym_ref().


> @@ -294,7 +295,7 @@ void kvm_vcpu_put_sysregs(struct kvm_vcpu *vcpu)
>  	if (!has_vhe())
>  		return;
>  
> -	deactivate_traps_vhe_put();
> +	deactivate_traps_vhe_put(vcpu);
>  
>  	__sysreg_save_el1_state(guest_ctxt);
>  	__sysreg_save_user_state(guest_ctxt);
> @@ -316,3 +317,21 @@ void __hyp_text __kvm_enable_ssbs(void)
>  	"msr	sctlr_el2, %0"
>  	: "=&r" (tmp) : "L" (SCTLR_ELx_DSSBS));
>  }
> +
> +/**
> + * __kvm_populate_host_regs - Stores host register values
> + *
> + * This function acts as a function handler parameter for kvm_call_hyp and
> + * may be called from EL1 exception level to fetch the register value.
> + */
> +void __hyp_text __kvm_populate_host_regs(void)
> +{
> +	struct kvm_cpu_context *host_ctxt;


> +	if (has_vhe())
> +		host_ctxt = this_cpu_ptr(&kvm_host_cpu_state);
> +	else
> +		host_ctxt = __hyp_this_cpu_ptr(kvm_host_cpu_state);

You can use __hyp_this_cpu_ptr() here, even on VHE.

For VHE the guts are the same and it's simpler to use the same version in both cases.


__hyp_this_cpu_ptr(sym) == hyp_symbol_addr(sym) + tpidr_el2;

hyp_symbol_addr() here is just to guarantee the address is generated based on where we're
executing from, not loaded from a literal pool which would give us the link-time address.
(or whenever kaslr applied the relocations). This matters for non-VHE because the compiler
can't know the code has an EL2 address as well as its link-time address.

This doesn't matter for VHE, as there is no additional different address.

(the other trickery is that on non-VHE the tpidr_el2 value isn't actually the same as the
host's... but on VHE it is)
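
So this could just be (untested, but I'd expect it to work on both paths):

	void __hyp_text __kvm_populate_host_regs(void)
	{
		struct kvm_cpu_context *host_ctxt;

		/* Valid at EL2 for both VHE and non-VHE */
		host_ctxt = __hyp_this_cpu_ptr(kvm_host_cpu_state);
		host_ctxt->hcr_el2 = read_sysreg(hcr_el2);
	}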


> +	host_ctxt->hcr_el2 = read_sysreg(hcr_el2);
> +}


> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> index 9e350fd3..8e18f7f 100644
> --- a/virt/kvm/arm/arm.c
> +++ b/virt/kvm/arm/arm.c
> @@ -1328,6 +1328,7 @@ static void cpu_hyp_reinit(void)
>  		cpu_init_hyp_mode(NULL);
>  
>  	kvm_arm_init_debug();
> +	__cpu_copy_hyp_conf();

Your commit message says:
| The saving of the register is done once during cpu hypervisor initialization state

But cpu_hyp_reinit() is called each time secondary CPUs come online. It's also called as
part of the cpu-idle mechanism via hyp_init_cpu_pm_notifier(). cpu-idle can ask the
firmware to power-off the CPU until an interrupt becomes pending for it. KVM's EL2 state
disappears when this happens, these calls take care of setting it back up again. On Juno,
this can happen tens of times a second, and this adds an extra call to EL2.

init_subsystems() would be the alternative place for this, but it wouldn't catch CPUs that
came online after booting. I think you need something in cpu_hyp_reinit() or
__cpu_copy_hyp_conf() to ensure it only happens once per CPU.

I think you can test whether the HCR_EL2 value is zero, assuming zero means uninitialised.
A VHE system would always set E2H, and a non-VHE system has to set RW.


>  	if (vgic_present)
>  		kvm_vgic_init_cpu_hardware();
> 


Thanks,

James


* Re: [PATCH v6 1/6] arm64/kvm: preserve host HCR_EL2 value
  2019-02-21 11:50   ` Mark Rutland
@ 2019-02-25 18:09     ` Marc Zyngier
  2019-02-28  6:43     ` Amit Daniel Kachhap
  1 sibling, 0 replies; 41+ messages in thread
From: Marc Zyngier @ 2019-02-25 18:09 UTC (permalink / raw)
  To: Mark Rutland, Amit Daniel Kachhap
  Cc: linux-arm-kernel, Christoffer Dall, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	Kristina Martsenko, linux-kernel, James Morse, Julien Thierry

On 21/02/2019 11:50, Mark Rutland wrote:
> Hi,
> 
> On Tue, Feb 19, 2019 at 02:54:26PM +0530, Amit Daniel Kachhap wrote:
>> From: Mark Rutland <mark.rutland@arm.com>
>>
>> When restoring HCR_EL2 for the host, KVM uses HCR_HOST_VHE_FLAGS, which
>> is a constant value. This works today, as the host HCR_EL2 value is
>> always the same, but this will get in the way of supporting extensions
>> that require HCR_EL2 bits to be set conditionally for the host.
>>
>> To allow such features to work without KVM having to explicitly handle
>> every possible host feature combination, this patch has KVM save/restore
>> for the host HCR when switching to/from a guest HCR. The saving of the
>> register is done once during cpu hypervisor initialization state and is
>> just restored after switch from guest.
>>
>> For fetching HCR_EL2 during kvm initialisation, a hyp call is made using
>> kvm_call_hyp and is helpful in NHVE case.
>>
>> For the hyp TLB maintenance code, __tlb_switch_to_host_vhe() is updated
>> to toggle the TGE bit with a RMW sequence, as we already do in
>> __tlb_switch_to_guest_vhe().
>>
>> The value of hcr_el2 is now stored in struct kvm_cpu_context as both host
>> and guest can now use this field in a common way.
>>
>> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
>> [Added __cpu_copy_hyp_conf, hcr_el2 field in struct kvm_cpu_context]
>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>> Cc: Christoffer Dall <christoffer.dall@arm.com>
>> Cc: kvmarm@lists.cs.columbia.edu
> 
> [...]
> 
>> +/**
>> + * __cpu_copy_hyp_conf - copy the boot hyp configuration registers
>> + *
>> + * It is called once per-cpu during CPU hyp initialisation.
>> + */
>> +static inline void __cpu_copy_hyp_conf(void)
> 
> I think this would be better named as something like:
> 
>   cpu_init_host_ctxt()
> 
> ... as that makes it a bit clearer as to what is being initialized.
> 
> [...]
> 
>> +/**
>> + * __kvm_populate_host_regs - Stores host register values
>> + *
>> + * This function acts as a function handler parameter for kvm_call_hyp and
>> + * may be called from EL1 exception level to fetch the register value.
>> + */
>> +void __hyp_text __kvm_populate_host_regs(void)
>> +{
>> +	struct kvm_cpu_context *host_ctxt;
>> +
>> +	if (has_vhe())
>> +		host_ctxt = this_cpu_ptr(&kvm_host_cpu_state);
>> +	else
>> +		host_ctxt = __hyp_this_cpu_ptr(kvm_host_cpu_state);
> 
> Do we need the has_vhe() check here?
> 
> Can't we always do:
> 
> 	host_ctxt = __hyp_this_cpu_ptr(kvm_host_cpu_state);
> 
> ... regardless of VHE? Or is that broken for VHE somehow?

The whole point of __hyp_this_cpu_ptr is that it is always valid...
See 85478bab40917 for details.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...


* Re: [PATCH v6 1/6] arm64/kvm: preserve host HCR_EL2 value
  2019-02-25 17:39   ` James Morse
@ 2019-02-26 10:06     ` James Morse
  2019-03-02 11:09     ` Amit Daniel Kachhap
  1 sibling, 0 replies; 41+ messages in thread
From: James Morse @ 2019-02-26 10:06 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: linux-arm-kernel, Christoffer Dall, Marc Zyngier,
	Catalin Marinas, Will Deacon, Andrew Jones, Dave Martin,
	Ramana Radhakrishnan, kvmarm, Kristina Martsenko, linux-kernel,
	Mark Rutland, Julien Thierry

Hi Amit,

On 25/02/2019 17:39, James Morse wrote:
> On 19/02/2019 09:24, Amit Daniel Kachhap wrote:
>> From: Mark Rutland <mark.rutland@arm.com>
>> When restoring HCR_EL2 for the host, KVM uses HCR_HOST_VHE_FLAGS, which
>> is a constant value. This works today, as the host HCR_EL2 value is
>> always the same, but this will get in the way of supporting extensions
>> that require HCR_EL2 bits to be set conditionally for the host.
>>
>> To allow such features to work without KVM having to explicitly handle
>> every possible host feature combination, this patch has KVM save/restore
>> for the host HCR when switching to/from a guest HCR. The saving of the
>> register is done once during cpu hypervisor initialization state and is
>> just restored after switch from guest.
>>
>> For fetching HCR_EL2 during kvm initialisation, a hyp call is made using
>> kvm_call_hyp and is helpful in NHVE case.
>>
>> For the hyp TLB maintenance code, __tlb_switch_to_host_vhe() is updated
>> to toggle the TGE bit with a RMW sequence, as we already do in
>> __tlb_switch_to_guest_vhe().
>>
>> The value of hcr_el2 is now stored in struct kvm_cpu_context as both host
>> and guest can now use this field in a common way.

>> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
>> index 9e350fd3..8e18f7f 100644
>> --- a/virt/kvm/arm/arm.c
>> +++ b/virt/kvm/arm/arm.c
>> @@ -1328,6 +1328,7 @@ static void cpu_hyp_reinit(void)
>>  		cpu_init_hyp_mode(NULL);
>>  
>>  	kvm_arm_init_debug();
>> +	__cpu_copy_hyp_conf();
> 
> Your commit message says:
> | The saving of the register is done once during cpu hypervisor initialization state
> 
> But cpu_hyp_reinit() is called each time secondary CPUs come online. Its also called as
> part of the cpu-idle mechanism via hyp_init_cpu_pm_notifier(). cpu-idle can ask the
> firmware to power-off the CPU until an interrupt becomes pending for it. KVM's EL2 state
> disappears when this happens, these calls take care of setting it back up again. On Juno,
> this can happen tens of times a second, and this adds an extra call to EL2.

The bit I missed was that the MDCR_EL2 copy is behind kvm_arm_init_debug(), so we already have
an unnecessary EL2 call here; it's nothing new.

Assuming the deactivate_traps_vhe_put() vcpu isn't needed, and with Mark's comments addressed:
Reviewed-by: James Morse <james.morse@arm.com>


If we can avoid repeated calls to EL2 once we've got HCR_EL2+MDCR_EL2, even better!


Thanks,

James


* Re: [PATCH v6 0/6] Add ARMv8.3 pointer authentication for kvm guest
  2019-02-19  9:24 [PATCH v6 0/6] Add ARMv8.3 pointer authentication for kvm guest Amit Daniel Kachhap
                   ` (5 preceding siblings ...)
  2019-02-19  9:24 ` [kvmtool PATCH v6 6/6] arm/kvm: arm64: Add a vcpu feature for pointer authentication Amit Daniel Kachhap
@ 2019-02-26 18:03 ` James Morse
  6 siblings, 0 replies; 41+ messages in thread
From: James Morse @ 2019-02-26 18:03 UTC (permalink / raw)
  To: Amit Daniel Kachhap, linux-arm-kernel
  Cc: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	Kristina Martsenko, linux-kernel, Mark Rutland, Julien Thierry

Hi Amit,

On 19/02/2019 09:24, Amit Daniel Kachhap wrote:
> This patch series adds pointer authentication support for KVM guest and
> is based on top of Linux 5.0-rc6. The basic patches in this series was
> originally posted by Mark Rutland earlier[1,2] and contains some history
> of this work.
> 
> Extension Overview:
> =============================================
> 
> The ARMv8.3 pointer authentication extension adds functionality to detect
> modification of pointer values, mitigating certain classes of attack such as
> stack smashing, and making return oriented programming attacks harder.
> 
> The extension introduces the concept of a pointer authentication code (PAC),
> which is stored in some upper bits of pointers. Each PAC is derived from the
> original pointer, another 64-bit value (e.g. the stack pointer), and a secret
> 128-bit key.
> 
> New instructions are added which can be used to:
> 
> * Insert a PAC into a pointer
> * Strip a PAC from a pointer
> * Authenticate and strip a PAC from a pointer
> 
> The detailed description of ARMv8.3 pointer authentication support in
> userspace/kernel and can be found in Kristina's generic pointer authentication
> patch series[3].


> This patch series is based on just a single patch from Dave Martin [8] which add
> control checks for accessing sys registers. 

Ooeer: if you miss this patch (like I did), the series still applies to rc6, it just
doesn't build. If you depend on extra patches like this, please re-post them as part of
the series. (You need to add your Signed-off-by if you picked the patch up from the list.)

This lets people apply the series from the list (everyone has a script to do this),
without having to go and find the dependencies.


> [8]: https://lore.kernel.org/linux-arm-kernel/1547757219-19439-13-git-send-email-Dave.Martin@arm.com/

This is v4 of Dave's patch. He changed the subject and posted a v5 here:
https://lore.kernel.org/linux-arm-kernel/1550519559-15915-13-git-send-email-Dave.Martin@arm.com/

Re-posting the patch you tested with would avoid someone accidentally picking up v5, then
trying to work out how it's supposed to work with your series. (check_present() was
replaced by a restrictions() bitmask).


As we can't have both, and v5 of that patch has been reviewed, could you rebase onto it?
You'll need to pick up any tags and make any changes reviewers asked for. If you could
note 'this v7 patch is Dave's v5 with $changes', then it makes it clear what is going on.



Thanks,

James


* Re: [PATCH v6 3/6] arm64/kvm: context-switch ptrauth registers
  2019-02-19  9:24 ` [PATCH v6 3/6] arm64/kvm: context-switch ptrauth registers Amit Daniel Kachhap
  2019-02-21 12:29   ` Mark Rutland
  2019-02-21 15:53   ` Dave Martin
@ 2019-02-26 18:31   ` James Morse
  2019-03-04 10:51     ` Amit Daniel Kachhap
  2 siblings, 1 reply; 41+ messages in thread
From: James Morse @ 2019-02-26 18:31 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: linux-arm-kernel, Christoffer Dall, Marc Zyngier,
	Catalin Marinas, Will Deacon, Andrew Jones, Dave Martin,
	Ramana Radhakrishnan, kvmarm, Kristina Martsenko, linux-kernel,
	Mark Rutland, Julien Thierry

Hi Amit,

On 19/02/2019 09:24, Amit Daniel Kachhap wrote:
> From: Mark Rutland <mark.rutland@arm.com>
> 
> When pointer authentication is supported, a guest may wish to use it.
> This patch adds the necessary KVM infrastructure for this to work, with
> a semi-lazy context switch of the pointer auth state.
> 
> Pointer authentication feature is only enabled when VHE is built
> in the kernel and present into CPU implementation so only VHE code
> paths are modified.

> When we schedule a vcpu, we disable guest usage of pointer
> authentication instructions and accesses to the keys. While these are
> disabled, we avoid context-switching the keys. When we trap the guest
> trying to use pointer authentication functionality, we change to eagerly
> context-switching the keys, and enable the feature. The next time the
> vcpu is scheduled out/in, we start again.

> However the host key registers
> are saved in vcpu load stage as they remain constant for each vcpu
> schedule.

(I think we can get away with doing this later ... with the hope of never doing it!)


> Pointer authentication consists of address authentication and generic
> authentication, and CPUs in a system might have varied support for
> either. Where support for either feature is not uniform, it is hidden
> from guests via ID register emulation, as a result of the cpufeature
> framework in the host.
> 
> Unfortunately, address authentication and generic authentication cannot
> be trapped separately, as the architecture provides a single EL2 trap
> covering both. If we wish to expose one without the other, we cannot
> prevent a (badly-written) guest from intermittently using a feature
> which is not uniformly supported (when scheduled on a physical CPU which
> supports the relevant feature). Hence, this patch expects both type of
> authentication to be present in a cpu.


> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 2f1bb86..1bacf78 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -146,6 +146,18 @@ enum vcpu_sysreg {

> +static inline bool kvm_supports_ptrauth(void)
> +{
> +	return has_vhe() && system_supports_address_auth() &&
> +				system_supports_generic_auth();
> +}

Do we intend to support the implementation-defined algorithm? I'd assumed not.

system_supports_address_auth() is defined as:
| static inline bool system_supports_address_auth(void)
| {
| 	return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) &&
| 		(cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) ||
| 		cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF));
| }


So we could return true from kvm_supports_ptrauth() even if we only support the imp-def
algorithm.

I think we should hide the imp-def ptrauth support as KVM hides all other imp-def
features. This lets us avoid trying to migrate values that have been signed with the
imp-def algorithm.

I'm worried that it could include some value that we can't migrate (e.g. the SoC serial
number). Does the ARM-ARM say this can't happen?

All I can find is D5.1.5 "Pointer authentication in AArch64 state" of DDI0487D.a which has
this clause for the imp-def algorithm:
| For a set of arguments passed to the function, must give the same result for all PEs
| that a thread of execution could migrate between.

... with KVM we've extended the scope of migration significantly.


Could we check the cpus_have_const_cap() values for the two architected algorithms directly?
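
i.e. something like:

	static inline bool kvm_supports_ptrauth(void)
	{
		return has_vhe() &&
			cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) &&
			cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_ARCH);
	}

... so that a system which only implements the imp-def algorithm doesn't
advertise ptrauth to guests.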


> diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
> index 6e65cad..09e061a 100644
> --- a/arch/arm64/include/asm/kvm_hyp.h
> +++ b/arch/arm64/include/asm/kvm_hyp.h
> @@ -153,6 +153,13 @@ bool __fpsimd_enabled(void);
>  void activate_traps_vhe_load(struct kvm_vcpu *vcpu);
>  void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu);
>  
> +void __ptrauth_switch_to_guest(struct kvm_vcpu *vcpu,
> +			       struct kvm_cpu_context *host_ctxt,
> +			       struct kvm_cpu_context *guest_ctxt);
> +void __ptrauth_switch_to_host(struct kvm_vcpu *vcpu,
> +			      struct kvm_cpu_context *guest_ctxt,
> +			      struct kvm_cpu_context *host_ctxt);


Why do you need the vcpu and the guest_ctxt?
Would it be possible for these to just take the vcpu, and to pull the host context from
the per-cpu variable?
This would avoid any future bugs where the ctxts are the wrong way round; taking two is
unusual in KVM, but necessary here.

As you're calling these from asm you want the compiler to do as much of the type mangling
as possible.
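
i.e. (sketch):

	void __ptrauth_switch_to_guest(struct kvm_vcpu *vcpu);
	void __ptrauth_switch_to_host(struct kvm_vcpu *vcpu);

... with each function deriving both contexts itself:

	struct kvm_cpu_context *host_ctxt = vcpu->arch.host_cpu_context;
	struct kvm_cpu_context *guest_ctxt = &vcpu->arch.ctxt;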


> diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
> index 4e2fb87..5cac605 100644
> --- a/arch/arm64/kernel/traps.c
> +++ b/arch/arm64/kernel/traps.c
> @@ -749,6 +749,7 @@ static const char *esr_class_str[] = {
>  	[ESR_ELx_EC_CP14_LS]		= "CP14 LDC/STC",
>  	[ESR_ELx_EC_FP_ASIMD]		= "ASIMD",
>  	[ESR_ELx_EC_CP10_ID]		= "CP10 MRC/VMRS",
> +	[ESR_ELx_EC_PAC]		= "Pointer authentication trap",
>  	[ESR_ELx_EC_CP14_64]		= "CP14 MCRR/MRRC",
>  	[ESR_ELx_EC_ILL]		= "PSTATE.IL",
>  	[ESR_ELx_EC_SVC32]		= "SVC (AArch32)",

Is this needed? Or was it just missing from the parts already merged. (should it be a
separate patch for the arch code)

It looks like KVM only prints it from kvm_handle_unknown_ec(), which would never happen as
arm_exit_handlers[] has an entry for ESR_ELx_EC_PAC.

Is it true that the host could never take this trap either?, as it can only be taken when
HCR_EL2.TGE is clear.
(breadcrumbs from the ESR_ELx definition to "Trap to EL2 of EL0 accesses to Pointer
authentication instructions")


> diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
> index 675fdc1..b78cc15 100644
> --- a/arch/arm64/kvm/hyp/entry.S
> +++ b/arch/arm64/kvm/hyp/entry.S
> @@ -64,6 +64,12 @@ ENTRY(__guest_enter)
>  
>  	add	x18, x0, #VCPU_CONTEXT
>  
> +#ifdef	CONFIG_ARM64_PTR_AUTH
> +	// Prepare parameter for __ptrauth_switch_to_guest(vcpu, host, guest).
> +	mov	x2, x18
> +	bl	__ptrauth_switch_to_guest
> +#endif

This calls back into C code with x18 clobbered... is that allowed?
x18 has this weird platform-register/temporary-register behaviour that depends on the
compiler. The PCS[0] has a note that 'hand-coded assembler should avoid it entirely'!

We have to assume that compiler-generated code is using it for something and depends on that
value, which means we need to preserve or save/restore it when calling into C.


The upshot? Could you use one of the callee-saved registers instead of x18, then move the
value back into x18 after your C call.


> @@ -118,6 +124,17 @@ ENTRY(__guest_exit)
>  
>  	get_host_ctxt	x2, x3
>  
> +#ifdef	CONFIG_ARM64_PTR_AUTH
> +	// Prepare parameter for __ptrauth_switch_to_host(vcpu, guest, host).
> +	// Save x0, x2 which are used later in callee saved registers.
> +	mov	x19, x0
> +	mov	x20, x2
> +	sub	x0, x1, #VCPU_CONTEXT

> +	ldr	x29, [x2, #CPU_XREG_OFFSET(29)]

Is this to make the stack-trace look plausible?


> +	bl	__ptrauth_switch_to_host
> +	mov	x0, x19
> +	mov	x2, x20
> +#endif

(ditching the host_ctxt would let you move this above get_host_ctxt and the need to
preserve its result)


> diff --git a/arch/arm64/kvm/hyp/ptrauth-sr.c b/arch/arm64/kvm/hyp/ptrauth-sr.c
> new file mode 100644
> index 0000000..528ee6e
> --- /dev/null
> +++ b/arch/arm64/kvm/hyp/ptrauth-sr.c
> @@ -0,0 +1,101 @@

> +static __always_inline bool __ptrauth_is_enabled(struct kvm_vcpu *vcpu)

This __always_inline still looks weird! You said it might be needed to make it
function-pointer safe. If it is, could you add a comment explaining why?

(alternatives would be making it an #ifdef, disabling ptrauth for the whole file, or
annotating this function too)


> +#define __ptrauth_save_key(regs, key)						\
> +({										\
> +	regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1);	\
> +	regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1);	\
> +})
> +
> +static __always_inline void __ptrauth_save_state(struct kvm_cpu_context *ctxt)
> +{
> +	__ptrauth_save_key(ctxt->sys_regs, APIA);
> +	__ptrauth_save_key(ctxt->sys_regs, APIB);
> +	__ptrauth_save_key(ctxt->sys_regs, APDA);
> +	__ptrauth_save_key(ctxt->sys_regs, APDB);
> +	__ptrauth_save_key(ctxt->sys_regs, APGA);
> +}
> +
> +#define __ptrauth_restore_key(regs, key) 					\
> +({										\
> +	write_sysreg_s(regs[key ## KEYLO_EL1], SYS_ ## key ## KEYLO_EL1);	\
> +	write_sysreg_s(regs[key ## KEYHI_EL1], SYS_ ## key ## KEYHI_EL1);	\
> +})
> +
> +static __always_inline void __ptrauth_restore_state(struct kvm_cpu_context *ctxt)
> +{
> +	__ptrauth_restore_key(ctxt->sys_regs, APIA);
> +	__ptrauth_restore_key(ctxt->sys_regs, APIB);
> +	__ptrauth_restore_key(ctxt->sys_regs, APDA);
> +	__ptrauth_restore_key(ctxt->sys_regs, APDB);
> +	__ptrauth_restore_key(ctxt->sys_regs, APGA);

Are writes to these registers self-synchronising? I'd assume not, as they come as a pair.

I think this means we need an isb() here to ensure that when restoring the host registers
the next host authentication attempt uses the key we wrote here? We don't need it for the
guest, so we could put it at the end of __ptrauth_switch_to_host().
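
i.e. a sketch of the tail of __ptrauth_switch_to_host() (the body is elided
above, so this only shows where the barrier would go):

	__ptrauth_save_state(guest_ctxt);
	__ptrauth_restore_state(host_ctxt);
	isb();	/* ensure the restored keys are used by later host auths */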


> +/**
> + * This function changes the key so assign Pointer Authentication safe
> + * GCC attribute if protected by it.
> + */

... this comment is the reminder for 'once we have host kernel ptrauth support'? Could we
add something to say that the attribute would only be needed once we have kernel support?
Otherwise it reads like we're waiting for GCC support.

(I assume LLVM has a similar attribute ... is it exactly the same?)


> +void __ptrauth_switch_to_guest(struct kvm_vcpu *vcpu,
> +				  struct kvm_cpu_context *host_ctxt,
> +				  struct kvm_cpu_context *guest_ctxt)
> +{

> +}

> +void __ptrauth_switch_to_host(struct kvm_vcpu *vcpu,
> +				 struct kvm_cpu_context *guest_ctxt,
> +				 struct kvm_cpu_context *host_ctxt)
> +{

> +}


Could you add NOKPROBE_SYMBOL(symbol_name) for these? This adds them to the kprobe
blacklist as they aren't part of the __hyp_text (and don't need to be, as this is VHE-only).
Without this, you can patch a software-breakpoint in here, which KVM won't handle as it's
already switched VBAR for entry to the guest.

Details in 7d82602909ed ("KVM: arm64: Forbid kprobing of the VHE world-switch code")

(... this turned up in a kernel version later than you based on ...)
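
i.e., in ptrauth-sr.c, something like:

	#include <linux/kprobes.h>

	NOKPROBE_SYMBOL(__ptrauth_switch_to_guest);
	NOKPROBE_SYMBOL(__ptrauth_switch_to_host);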


> +/**
> + * kvm_arm_vcpu_ptrauth_reset - resets ptrauth for vcpu schedule
> + *
> + * @vcpu: The VCPU pointer
> + *
> + * This function may be used to disable ptrauth and use it in a lazy context
> + * via traps. However host key registers are saved here as they dont change
> + * during host/guest switch.
> + */
> +void kvm_arm_vcpu_ptrauth_reset(struct kvm_vcpu *vcpu)
> +{
> +	struct kvm_cpu_context *host_ctxt;
> +
> +	if (kvm_supports_ptrauth()) {
> +		kvm_arm_vcpu_ptrauth_disable(vcpu);
> +		host_ctxt = vcpu->arch.host_cpu_context;

> +		__ptrauth_save_state(host_ctxt);

Could you equally do this host-save in kvm_arm_vcpu_ptrauth_trap() before
kvm_arm_vcpu_ptrauth_enable()? This would avoid saving the keys if the guest never gets
the opportunity to change them. At the moment we do it on every vcpu_load().


As kvm_arm_vcpu_ptrauth_reset() isn't used as part of the world-switch, could we keep it
outside the 'hyp' directory? The Makefile for that directory expects to be building the
hyp text, so it disables KASAN, KCOV and friends.
kvm_arm_vcpu_ptrauth_reset() is safe for all of these, and it's good for it to be covered
by this debug infrastructure. Could you move it to guest.c?
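
Roughly (a sketch, assuming the host-key save also moves into the trap path
as suggested above):

	/* guest.c */
	void kvm_arm_vcpu_ptrauth_reset(struct kvm_vcpu *vcpu)
	{
		if (kvm_supports_ptrauth())
			kvm_arm_vcpu_ptrauth_disable(vcpu);
	}

	void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu)
	{
		if (kvm_supports_ptrauth()) {
			__ptrauth_save_state(vcpu->arch.host_cpu_context);
			kvm_arm_vcpu_ptrauth_enable(vcpu);
		}
	}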


> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index a6c9381..12529df 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c> @@ -1045,9 +1071,10 @@ static u64 read_id_reg(struct sys_reg_desc const *r, bool raz)
>  					 (0xfUL << ID_AA64ISAR1_API_SHIFT) |
>  					 (0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
>  					 (0xfUL << ID_AA64ISAR1_GPI_SHIFT);
> -		if (val & ptrauth_mask)
> +		if (!kvm_supports_ptrauth()) {
>  			kvm_debug("ptrauth unsupported for guests, suppressing\n");
> -		val &= ~ptrauth_mask;
> +			val &= ~ptrauth_mask;
> +		}

This means the debug message gets printed even on systems that don't support ptrauth in
hardware. (val & ptrauth_mask) used to cut them out; now kvm_supports_ptrauth() fails if the
static keys are false, and we end up printing this message.
Now that KVM supports pointer-auth, I don't think the debug message is useful; can we
remove it? (It would now mean 'you didn't ask for ptrauth to be turned on'.)


Could we always mask out the imp-def bits?
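
Something like this, against the hunk above (sketch):

	if (!kvm_arm_vcpu_ptrauth_allowed(vcpu))
		val &= ~ptrauth_mask;
	/* Always hide the imp-def algorithm from guests */
	val &= ~((0xfUL << ID_AA64ISAR1_API_SHIFT) |
		 (0xfUL << ID_AA64ISAR1_GPI_SHIFT));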


This patch needs to be merged together with 4 & 5 so the user-abi is as it should be. This
means the check_present/restrictions thing needs sorting so they're ready together.


Thanks,

James


[0] http://infocenter.arm.com/help/topic/com.arm.doc.ihi0055b/IHI0055B_aapcs64.pdf


* Re: [PATCH v6 4/6] arm64/kvm: add a userspace option to enable pointer authentication
  2019-02-19  9:24 ` [PATCH v6 4/6] arm64/kvm: add a userspace option to enable pointer authentication Amit Daniel Kachhap
  2019-02-21 12:34   ` Mark Rutland
  2019-02-21 15:53   ` Dave Martin
@ 2019-02-26 18:33   ` James Morse
  2019-03-04 10:56     ` Amit Daniel Kachhap
  2 siblings, 1 reply; 41+ messages in thread
From: James Morse @ 2019-02-26 18:33 UTC (permalink / raw)
  To: Amit Daniel Kachhap, linux-arm-kernel
  Cc: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	Kristina Martsenko, linux-kernel, Mark Rutland, Julien Thierry

Hi Amit,

On 19/02/2019 09:24, Amit Daniel Kachhap wrote:
> This feature will allow the KVM guest to allow the handling of
> pointer authentication instructions or to treat them as undefined
> if not set. It uses the existing vcpu API KVM_ARM_VCPU_INIT to
> supply this parameter instead of creating a new API.
> 
> A new register is not created to pass this parameter via
> SET/GET_ONE_REG interface as just a flag (KVM_ARM_VCPU_PTRAUTH)
> supplied is enough to enable this feature.

and an attempt to restore the id register with the other version would fail.


> diff --git a/Documentation/arm64/pointer-authentication.txt b/Documentation/arm64/pointer-authentication.txt
> index a25cd21..0529a7d 100644
> --- a/Documentation/arm64/pointer-authentication.txt
> +++ b/Documentation/arm64/pointer-authentication.txt
> @@ -82,7 +82,8 @@ pointers).
>  Virtualization
>  --------------
>  
> -Pointer authentication is not currently supported in KVM guests. KVM
> -will mask the feature bits from ID_AA64ISAR1_EL1, and attempted use of
> -the feature will result in an UNDEFINED exception being injected into
> -the guest.

> +Pointer authentication is enabled in KVM guest when virtual machine is
> +created by passing a flag (KVM_ARM_VCPU_PTRAUTH)

(This is still mixing VM and VCPU)


> + requesting this feature to be enabled.

... on each vcpu?


> +Without this flag, pointer authentication is not enabled
> +in KVM guests and attempted use of the feature will result in an UNDEFINED
> +exception being injected into the guest.

'guests' here suggests it's a VM property. If you set it on some VCPUs but not others, KVM
will generate undefs instead of enabling the feature (which is the right thing to do).

I think it needs to be clear this is a per-vcpu property.


> diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
> index 97c3478..5f82ca1 100644
> --- a/arch/arm64/include/uapi/asm/kvm.h
> +++ b/arch/arm64/include/uapi/asm/kvm.h
> @@ -102,6 +102,7 @@ struct kvm_regs {
>  #define KVM_ARM_VCPU_EL1_32BIT		1 /* CPU running a 32bit VM */
>  #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
>  #define KVM_ARM_VCPU_PMU_V3		3 /* Support guest PMUv3 */

> +#define KVM_ARM_VCPU_PTRAUTH		4 /* VCPU uses address authentication */

Just address authentication? I agree with Mark we should have two bits to match what gets
exposed to EL0. One would then be address, the other generic.
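
i.e. something like (the names and numbering here are illustrative only,
pending the ABI discussion):

	#define KVM_ARM_VCPU_PTRAUTH_ADDRESS	4 /* VCPU uses address authentication */
	#define KVM_ARM_VCPU_PTRAUTH_GENERIC	5 /* VCPU uses generic authentication */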


> diff --git a/arch/arm64/kvm/hyp/ptrauth-sr.c b/arch/arm64/kvm/hyp/ptrauth-sr.c
> index 528ee6e..6846a23 100644
> --- a/arch/arm64/kvm/hyp/ptrauth-sr.c
> +++ b/arch/arm64/kvm/hyp/ptrauth-sr.c
> @@ -93,9 +93,23 @@ void kvm_arm_vcpu_ptrauth_reset(struct kvm_vcpu *vcpu)

> +/**
> + * kvm_arm_vcpu_ptrauth_allowed - checks if ptrauth feature is allowed by user
> + *
> + * @vcpu: The VCPU pointer
> + *
> + * This function will be used to check userspace option to have ptrauth or not
> + * in the guest kernel.
> + */
> +bool kvm_arm_vcpu_ptrauth_allowed(const struct kvm_vcpu *vcpu)
> +{
> +	return kvm_supports_ptrauth() &&
> +		test_bit(KVM_ARM_VCPU_PTRAUTH, vcpu->arch.features);
> +}

This isn't used from the world-switch, so could it be moved to guest.c?


> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 12529df..f7bcc60 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -1055,7 +1055,7 @@ static bool access_cntp_cval(struct kvm_vcpu *vcpu,
>  }
>  
>  /* Read a sanitised cpufeature ID register by sys_reg_desc */
> -static u64 read_id_reg(struct sys_reg_desc const *r, bool raz)
> +static u64 read_id_reg(struct kvm_vcpu *vcpu, struct sys_reg_desc const *r, bool raz)

(It might be easier on the reviewer to move these mechanical changes to an earlier patch)


Looks good,

Thanks,

James


* Re: [PATCH v6 5/6] arm64/kvm: control accessibility of ptrauth key registers
  2019-02-19  9:24 ` [PATCH v6 5/6] arm64/kvm: control accessibility of ptrauth key registers Amit Daniel Kachhap
  2019-02-21 15:53   ` Dave Martin
@ 2019-02-26 18:34   ` James Morse
  1 sibling, 0 replies; 41+ messages in thread
From: James Morse @ 2019-02-26 18:34 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: linux-arm-kernel, Christoffer Dall, Marc Zyngier,
	Catalin Marinas, Will Deacon, Andrew Jones, Dave Martin,
	Ramana Radhakrishnan, kvmarm, Kristina Martsenko, linux-kernel,
	Mark Rutland, Julien Thierry

Hi Amit,

On 19/02/2019 09:24, Amit Daniel Kachhap wrote:
> According to userspace settings, ptrauth key registers are conditionally
> present in guest system register list based on user specified flag
> KVM_ARM_VCPU_PTRAUTH.
> 
> Reset routines still set these registers to default values, but they are
> left like that as they are conditionally accessible (set/get).

What problem is this patch solving?

I think it's that now we have ptrauth support, we have a glut of new registers visible to
user-space. This breaks migration and save/resume if the target machine doesn't have
pointer-auth configured, even if the guest wasn't using it.
Because we've got a VCPU bit, we can be smarter about this, and only expose the registers
if user-space was able to enable ptrauth.


> ---
> This patch needs patch [1] by Dave Martin and adds feature to manage accessibility in a scalable way.
> 
> [1]: https://lore.kernel.org/linux-arm-kernel/1547757219-19439-13-git-send-email-Dave.Martin@arm.com/ 

This is v4. Shortly before you posted this there was a v5 (but the subject changed, easily
missed). V5 has subsequently been reviewed. As we can't have both, could you rebase onto
that v5 so that there is one way of doing this?

(In general, if you could re-post the version you developed/tested with, it would make it
clear what is going on.)


> diff --git a/Documentation/arm64/pointer-authentication.txt b/Documentation/arm64/pointer-authentication.txt
> index 0529a7d..996e435 100644
> --- a/Documentation/arm64/pointer-authentication.txt
> +++ b/Documentation/arm64/pointer-authentication.txt
> @@ -87,3 +87,7 @@ created by passing a flag (KVM_ARM_VCPU_PTRAUTH) requesting this feature
>  to be enabled. Without this flag, pointer authentication is not enabled
>  in KVM guests and attempted use of the feature will result in an UNDEFINED
>  exception being injected into the guest.
> +
> +Additionally, when KVM_ARM_VCPU_PTRAUTH is not set then KVM will filter
> +out the Pointer Authentication system key registers from KVM_GET/SET_REG_*
> +ioctls.
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index f7bcc60..c2f4974 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -1005,8 +1005,13 @@ static bool trap_ptrauth(struct kvm_vcpu *vcpu,
>  	return false;
>  }
>  
> +static bool check_ptrauth(const struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd)
> +{
> +	return kvm_arm_vcpu_ptrauth_allowed(vcpu);
> +}
> +
>  #define __PTRAUTH_KEY(k)						\
> -	{ SYS_DESC(SYS_## k), trap_ptrauth, reset_unknown, k }
> +	{ SYS_DESC(SYS_## k), trap_ptrauth, reset_unknown, k , .check_present = check_ptrauth}


Looks good. I'm pretty sure the changes due to Dave's v5 map neatly.


Thanks,

James


* Re: [PATCH v6 1/6] arm64/kvm: preserve host HCR_EL2 value
  2019-02-21 11:50   ` Mark Rutland
  2019-02-25 18:09     ` Marc Zyngier
@ 2019-02-28  6:43     ` Amit Daniel Kachhap
  1 sibling, 0 replies; 41+ messages in thread
From: Amit Daniel Kachhap @ 2019-02-28  6:43 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, Christoffer Dall, Marc Zyngier,
	Catalin Marinas, Will Deacon, Andrew Jones, Dave Martin,
	Ramana Radhakrishnan, kvmarm, Kristina Martsenko, linux-kernel,
	James Morse, Julien Thierry

Hi,

On 2/21/19 5:20 PM, Mark Rutland wrote:
> Hi,
> 
> On Tue, Feb 19, 2019 at 02:54:26PM +0530, Amit Daniel Kachhap wrote:
>> From: Mark Rutland <mark.rutland@arm.com>
>>
>> When restoring HCR_EL2 for the host, KVM uses HCR_HOST_VHE_FLAGS, which
>> is a constant value. This works today, as the host HCR_EL2 value is
>> always the same, but this will get in the way of supporting extensions
>> that require HCR_EL2 bits to be set conditionally for the host.
>>
>> To allow such features to work without KVM having to explicitly handle
>> every possible host feature combination, this patch has KVM save/restore
>> for the host HCR when switching to/from a guest HCR. The saving of the
>> register is done once during cpu hypervisor initialization state and is
>> just restored after switch from guest.
>>
>> For fetching HCR_EL2 during kvm initialisation, a hyp call is made using
>> kvm_call_hyp and is helpful in NHVE case.
>>
>> For the hyp TLB maintenance code, __tlb_switch_to_host_vhe() is updated
>> to toggle the TGE bit with a RMW sequence, as we already do in
>> __tlb_switch_to_guest_vhe().
>>
>> The value of hcr_el2 is now stored in struct kvm_cpu_context as both host
>> and guest can now use this field in a common way.
>>
>> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
>> [Added __cpu_copy_hyp_conf, hcr_el2 field in struct kvm_cpu_context]
>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>> Cc: Christoffer Dall <christoffer.dall@arm.com>
>> Cc: kvmarm@lists.cs.columbia.edu
> 
> [...]
> 
>> +/**
>> + * __cpu_copy_hyp_conf - copy the boot hyp configuration registers
>> + *
>> + * It is called once per-cpu during CPU hyp initialisation.
>> + */
>> +static inline void __cpu_copy_hyp_conf(void)
> 
> I think this would be better named as something like:
> 
>    cpu_init_host_ctxt()
> 
> ... as that makes it a bit clearer as to what is being initialized.
ok, Agreed with the name.
> 
> [...]
> 
>> +/**
>> + * __kvm_populate_host_regs - Stores host register values
>> + *
>> + * This function acts as a function handler parameter for kvm_call_hyp and
>> + * may be called from EL1 exception level to fetch the register value.
>> + */
>> +void __hyp_text __kvm_populate_host_regs(void)
>> +{
>> +	struct kvm_cpu_context *host_ctxt;
>> +
>> +	if (has_vhe())
>> +		host_ctxt = this_cpu_ptr(&kvm_host_cpu_state);
>> +	else
>> +		host_ctxt = __hyp_this_cpu_ptr(kvm_host_cpu_state);
> 
> Do we need the has_vhe() check here?
> 
> Can't we always do:
> 
> 	host_ctxt = __hyp_this_cpu_ptr(kvm_host_cpu_state);
> 
> ... regardless of VHE? Or is that broken for VHE somehow?
Yes it works fine for VHE. I missed this.

Thanks,
Amit
> 
> Thanks,
> Mark.
> 


* Re: [PATCH v6 3/6] arm64/kvm: context-switch ptrauth registers
  2019-02-21 12:29   ` Mark Rutland
  2019-02-21 15:51     ` Dave Martin
@ 2019-02-28  9:07     ` Amit Daniel Kachhap
  1 sibling, 0 replies; 41+ messages in thread
From: Amit Daniel Kachhap @ 2019-02-28  9:07 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, Christoffer Dall, Marc Zyngier,
	Catalin Marinas, Will Deacon, Andrew Jones, Dave Martin,
	Ramana Radhakrishnan, kvmarm, Kristina Martsenko, linux-kernel,
	James Morse, Julien Thierry

Hi Mark,

On 2/21/19 5:59 PM, Mark Rutland wrote:
> On Tue, Feb 19, 2019 at 02:54:28PM +0530, Amit Daniel Kachhap wrote:
>> From: Mark Rutland <mark.rutland@arm.com>
>>
>> When pointer authentication is supported, a guest may wish to use it.
>> This patch adds the necessary KVM infrastructure for this to work, with
>> a semi-lazy context switch of the pointer auth state.
>>
>> Pointer authentication feature is only enabled when VHE is built
>> in the kernel and present into CPU implementation so only VHE code
>> paths are modified.
> 
> Nit: s/into/in the/
ok.
> 
>>
>> When we schedule a vcpu, we disable guest usage of pointer
>> authentication instructions and accesses to the keys. While these are
>> disabled, we avoid context-switching the keys. When we trap the guest
>> trying to use pointer authentication functionality, we change to eagerly
>> context-switching the keys, and enable the feature. The next time the
>> vcpu is scheduled out/in, we start again. However the host key registers
>> are saved in vcpu load stage as they remain constant for each vcpu
>> schedule.
>>
>> Pointer authentication consists of address authentication and generic
>> authentication, and CPUs in a system might have varied support for
>> either. Where support for either feature is not uniform, it is hidden
>> from guests via ID register emulation, as a result of the cpufeature
>> framework in the host.
>>
>> Unfortunately, address authentication and generic authentication cannot
>> be trapped separately, as the architecture provides a single EL2 trap
>> covering both. If we wish to expose one without the other, we cannot
>> prevent a (badly-written) guest from intermittently using a feature
>> which is not uniformly supported (when scheduled on a physical CPU which
>> supports the relevant feature). Hence, this patch expects both type of
>> authentication to be present in a cpu.
>>
>> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
>> [Only VHE, key switch from from assembly, kvm_supports_ptrauth
>> checks, save host key in vcpu_load]
> 
> Hmm, why do we need to do the key switch in assembly, given it's not
> used in-kernel right now?
> 
> Is that in preparation for in-kernel pointer auth usage? If so, please
> call that out in the commit message.
ok sure.
> 
> [...]
> 
>> diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
>> index 4e2fb87..5cac605 100644
>> --- a/arch/arm64/kernel/traps.c
>> +++ b/arch/arm64/kernel/traps.c
>> @@ -749,6 +749,7 @@ static const char *esr_class_str[] = {
>>   	[ESR_ELx_EC_CP14_LS]		= "CP14 LDC/STC",
>>   	[ESR_ELx_EC_FP_ASIMD]		= "ASIMD",
>>   	[ESR_ELx_EC_CP10_ID]		= "CP10 MRC/VMRS",
>> +	[ESR_ELx_EC_PAC]		= "Pointer authentication trap",
> 
> For consistency with the other strings, can we please make this "PAC"?
ok. It makes sense.
> 
> [...]
> 
>> diff --git a/arch/arm64/kvm/hyp/Makefile b/arch/arm64/kvm/hyp/Makefile
>> index 82d1904..17cec99 100644
>> --- a/arch/arm64/kvm/hyp/Makefile
>> +++ b/arch/arm64/kvm/hyp/Makefile
>> @@ -19,6 +19,7 @@ obj-$(CONFIG_KVM_ARM_HOST) += switch.o
>>   obj-$(CONFIG_KVM_ARM_HOST) += fpsimd.o
>>   obj-$(CONFIG_KVM_ARM_HOST) += tlb.o
>>   obj-$(CONFIG_KVM_ARM_HOST) += hyp-entry.o
>> +obj-$(CONFIG_KVM_ARM_HOST) += ptrauth-sr.o
> 
> Huh, so we're actually doing the switch in C code...
> 
>>   # KVM code is run at a different exception code with a different map, so
>>   # compiler instrumentation that inserts callbacks or checks into the code may
>> diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
>> index 675fdc1..b78cc15 100644
>> --- a/arch/arm64/kvm/hyp/entry.S
>> +++ b/arch/arm64/kvm/hyp/entry.S
>> @@ -64,6 +64,12 @@ ENTRY(__guest_enter)
>>   
>>   	add	x18, x0, #VCPU_CONTEXT
>>   
>> +#ifdef	CONFIG_ARM64_PTR_AUTH
>> +	// Prepare parameter for __ptrauth_switch_to_guest(vcpu, host, guest).
>> +	mov	x2, x18
>> +	bl	__ptrauth_switch_to_guest
>> +#endif
> 
> ... and conditionally *calling* that switch code from assembly ...
> 
>> +
>>   	// Restore guest regs x0-x17
>>   	ldp	x0, x1,   [x18, #CPU_XREG_OFFSET(0)]
>>   	ldp	x2, x3,   [x18, #CPU_XREG_OFFSET(2)]
>> @@ -118,6 +124,17 @@ ENTRY(__guest_exit)
>>   
>>   	get_host_ctxt	x2, x3
>>   
>> +#ifdef	CONFIG_ARM64_PTR_AUTH
>> +	// Prepare parameter for __ptrauth_switch_to_host(vcpu, guest, host).
>> +	// Save x0, x2 which are used later in callee saved registers.
>> +	mov	x19, x0
>> +	mov	x20, x2
>> +	sub	x0, x1, #VCPU_CONTEXT
>> +	ldr	x29, [x2, #CPU_XREG_OFFSET(29)]
>> +	bl	__ptrauth_switch_to_host
>> +	mov	x0, x19
>> +	mov	x2, x20
>> +#endif
> 
> ... which adds a load of boilerplate for no immediate gain.
Some parameter optimisation may be possible here, as the guest and host
ctxt can be derived from the vcpu itself, as James suggested in other
review comments. I thought about doing all the save/restore in assembly,
but as the host keys are now saved in the vcpu_load stage in C as an
optimisation, those C routines are reused here as well.

Again, all of this code is beneficial for in-kernel ptrauth, so in case
of strong objection I can revert to the old way.
> 
> Do we really need to do this in assembly today?
During the last patchset review [1], James provided a lot of supporting 
arguments to have these switch routines called from assembly due to 
function outlining between kvm_vcpu_run_vhe() and __kvm_vcpu_run_nvhe().

[1]: https://lkml.org/lkml/2019/1/31/662

Thanks,
Amit D
> 
> Thanks,
> Mark.
> 

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v6 4/6] arm64/kvm: add a userspace option to enable pointer authentication
  2019-02-21 12:34   ` Mark Rutland
@ 2019-02-28  9:25     ` Amit Daniel Kachhap
  0 siblings, 0 replies; 41+ messages in thread
From: Amit Daniel Kachhap @ 2019-02-28  9:25 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, Christoffer Dall, Marc Zyngier,
	Catalin Marinas, Will Deacon, Andrew Jones, Dave Martin,
	Ramana Radhakrishnan, kvmarm, Kristina Martsenko, linux-kernel,
	James Morse, Julien Thierry

Hi,

On 2/21/19 6:04 PM, Mark Rutland wrote:
> On Tue, Feb 19, 2019 at 02:54:29PM +0530, Amit Daniel Kachhap wrote:
>> This feature will allow the KVM guest to allow the handling of
>> pointer authentication instructions or to treat them as undefined
>> if not set. It uses the existing vcpu API KVM_ARM_VCPU_INIT to
>> supply this parameter instead of creating a new API.
>>
>> A new register is not created to pass this parameter via
>> SET/GET_ONE_REG interface as just a flag (KVM_ARM_VCPU_PTRAUTH)
>> supplied is enough to enable this feature.
>>
>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>> Cc: Mark Rutland <mark.rutland@arm.com>
>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>> Cc: Christoffer Dall <christoffer.dall@arm.com>
>> Cc: kvmarm@lists.cs.columbia.edu
>> ---
>>   Documentation/arm64/pointer-authentication.txt |  9 +++++----
>>   Documentation/virtual/kvm/api.txt              |  4 ++++
>>   arch/arm64/include/asm/kvm_host.h              |  3 ++-
>>   arch/arm64/include/uapi/asm/kvm.h              |  1 +
>>   arch/arm64/kvm/handle_exit.c                   |  2 +-
>>   arch/arm64/kvm/hyp/ptrauth-sr.c                | 16 +++++++++++++++-
>>   arch/arm64/kvm/reset.c                         |  3 +++
>>   arch/arm64/kvm/sys_regs.c                      | 26 +++++++++++++-------------
>>   include/uapi/linux/kvm.h                       |  1 +
>>   9 files changed, 45 insertions(+), 20 deletions(-)
>>
>> diff --git a/Documentation/arm64/pointer-authentication.txt b/Documentation/arm64/pointer-authentication.txt
>> index a25cd21..0529a7d 100644
>> --- a/Documentation/arm64/pointer-authentication.txt
>> +++ b/Documentation/arm64/pointer-authentication.txt
>> @@ -82,7 +82,8 @@ pointers).
>>   Virtualization
>>   --------------
>>   
>> -Pointer authentication is not currently supported in KVM guests. KVM
>> -will mask the feature bits from ID_AA64ISAR1_EL1, and attempted use of
>> -the feature will result in an UNDEFINED exception being injected into
>> -the guest.
>> +Pointer authentication is enabled in KVM guest when virtual machine is
>> +created by passing a flag (KVM_ARM_VCPU_PTRAUTH) requesting this feature
>> +to be enabled. Without this flag, pointer authentication is not enabled
>> +in KVM guests and attempted use of the feature will result in an UNDEFINED
>> +exception being injected into the guest.
>> diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
>> index 356156f..1e646fb 100644
>> --- a/Documentation/virtual/kvm/api.txt
>> +++ b/Documentation/virtual/kvm/api.txt
>> @@ -2642,6 +2642,10 @@ Possible features:
>>   	  Depends on KVM_CAP_ARM_PSCI_0_2.
>>   	- KVM_ARM_VCPU_PMU_V3: Emulate PMUv3 for the CPU.
>>   	  Depends on KVM_CAP_ARM_PMU_V3.
>> +	- KVM_ARM_VCPU_PTRAUTH: Emulate Pointer authentication for the CPU.
>> +	  Depends on KVM_CAP_ARM_PTRAUTH and only on arm64 architecture. If
>> +	  set, then the KVM guest allows the execution of pointer authentication
>> +	  instructions. Otherwise, KVM treats these instructions as undefined.
> 
> I think that we should have separate flags for address auth and generic
> auth, to match the ID register split.
> 
> For now, we can have KVM only support the case where both are set, but
> it gives us freedom to support either in isolation if we have to in
> future, without an ABI break.
Yes, I agree with you about having two flags, one for address and one for
generic ptrauth. I will add them in the next iteration.

Thanks,
Amit D
> 
> Thanks,
> Mark.
> 

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v6 1/6] arm64/kvm: preserve host HCR_EL2 value
  2019-02-21 15:49   ` Dave Martin
@ 2019-03-01  5:56     ` Amit Daniel Kachhap
  0 siblings, 0 replies; 41+ messages in thread
From: Amit Daniel Kachhap @ 2019-03-01  5:56 UTC (permalink / raw)
  To: Dave Martin
  Cc: linux-arm-kernel, Marc Zyngier, Catalin Marinas, Will Deacon,
	Kristina Martsenko, kvmarm, Ramana Radhakrishnan, linux-kernel

Hi,

On 2/21/19 9:19 PM, Dave Martin wrote:
> On Tue, Feb 19, 2019 at 02:54:26PM +0530, Amit Daniel Kachhap wrote:
>> From: Mark Rutland <mark.rutland@arm.com>
>>
>> When restoring HCR_EL2 for the host, KVM uses HCR_HOST_VHE_FLAGS, which
>> is a constant value. This works today, as the host HCR_EL2 value is
>> always the same, but this will get in the way of supporting extensions
>> that require HCR_EL2 bits to be set conditionally for the host.
>>
>> To allow such features to work without KVM having to explicitly handle
>> every possible host feature combination, this patch has KVM save/restore
>> for the host HCR when switching to/from a guest HCR. The saving of the
>> register is done once during cpu hypervisor initialization state and is
>> just restored after switch from guest.
>>
>> For fetching HCR_EL2 during kvm initialisation, a hyp call is made using
>> kvm_call_hyp and is helpful in NHVE case.
> 
> Minor nit: NVHE misspelled.  This looks a bit like it's naming an arch
> feature rather than a kernel implementation detail though.  Maybe write
> "non-VHE".
yes.
> 
>> For the hyp TLB maintenance code, __tlb_switch_to_host_vhe() is updated
>> to toggle the TGE bit with a RMW sequence, as we already do in
>> __tlb_switch_to_guest_vhe().
>>
>> The value of hcr_el2 is now stored in struct kvm_cpu_context as both host
>> and guest can now use this field in a common way.
>>
>> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
>> [Added __cpu_copy_hyp_conf, hcr_el2 field in struct kvm_cpu_context]
>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>> Cc: Christoffer Dall <christoffer.dall@arm.com>
>> Cc: kvmarm@lists.cs.columbia.edu
>> ---
>>   arch/arm/include/asm/kvm_host.h      |  2 ++
>>   arch/arm64/include/asm/kvm_asm.h     |  2 ++
>>   arch/arm64/include/asm/kvm_emulate.h | 22 +++++++++++-----------
>>   arch/arm64/include/asm/kvm_host.h    | 13 ++++++++++++-
>>   arch/arm64/include/asm/kvm_hyp.h     |  2 +-
>>   arch/arm64/kvm/guest.c               |  2 +-
>>   arch/arm64/kvm/hyp/switch.c          | 23 +++++++++++++----------
>>   arch/arm64/kvm/hyp/sysreg-sr.c       | 21 ++++++++++++++++++++-
>>   arch/arm64/kvm/hyp/tlb.c             |  6 +++++-
>>   virt/kvm/arm/arm.c                   |  1 +
>>   10 files changed, 68 insertions(+), 26 deletions(-)
>>
>> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
>> index ca56537..05706b4 100644
>> --- a/arch/arm/include/asm/kvm_host.h
>> +++ b/arch/arm/include/asm/kvm_host.h
>> @@ -273,6 +273,8 @@ static inline void __cpu_init_stage2(void)
>>   	kvm_call_hyp(__init_stage2_translation);
>>   }
>>   
>> +static inline void __cpu_copy_hyp_conf(void) {}
>> +
>>   static inline int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>>   {
>>   	return 0;
>> diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
>> index f5b79e9..8acd73f 100644
>> --- a/arch/arm64/include/asm/kvm_asm.h
>> +++ b/arch/arm64/include/asm/kvm_asm.h
>> @@ -80,6 +80,8 @@ extern void __vgic_v3_init_lrs(void);
>>   
>>   extern u32 __kvm_get_mdcr_el2(void);
>>   
>> +extern void __kvm_populate_host_regs(void);
>> +
>>   /* Home-grown __this_cpu_{ptr,read} variants that always work at HYP */
>>   #define __hyp_this_cpu_ptr(sym)						\
>>   	({								\
>> diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
>> index 506386a..0dbe795 100644
>> --- a/arch/arm64/include/asm/kvm_emulate.h
>> +++ b/arch/arm64/include/asm/kvm_emulate.h
>> @@ -50,25 +50,25 @@ void kvm_inject_pabt32(struct kvm_vcpu *vcpu, unsigned long addr);
>>   
>>   static inline bool vcpu_el1_is_32bit(struct kvm_vcpu *vcpu)
>>   {
>> -	return !(vcpu->arch.hcr_el2 & HCR_RW);
>> +	return !(vcpu->arch.ctxt.hcr_el2 & HCR_RW);
> 
> Putting hcr_el2 into struct kvm_cpu_context creates a lot of splatter
> here, and I'm wondering whether it's really necessary.  Otherwise,
> we could just put the per-vcpu guest HCR_EL2 value in struct
> kvm_vcpu_arch.
I did it like that in the v4 version [1], but comments were raised that
having the hcr_el2 field in two places was repetition and should be avoided.

[1]: https://lkml.org/lkml/2019/1/4/433
> 
> Is the *host* hcr_el2 value really different per-vcpu?  That looks
> odd.  I would have thought this is fixed across the system at KVM
> startup time.
> 
> Having a single global host hcr_el2 would also avoid the need for
> __kvm_populate_host_regs(): instead, we just decide what HCR_EL2 is to
> be ahead of time and set a global variable that we map into Hyp.
> 
> 
> Or does the host HCR_EL2 need to vary at runtime for some reason I've
> missed?
This patch basically stops the host hcr_el2 from using fixed values like
HCR_HOST_NVHE_FLAGS/HCR_HOST_VHE_FLAGS during the context switch, and
instead saves the real value at boot time. It is just preparation for
configuring the host hcr_el2 dynamically; currently the value is the same
for all cpus.

I suppose it is better to keep the host hcr_el2 per-cpu to take care of
heterogeneous systems. The host mdcr_el2 is already stored on a per-cpu
basis (arch/arm64/kvm/debug.c).
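To make the intent concrete, a before/after sketch of the exit path (not
the exact patch hunk):

	/* before: a fixed value */
	write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2);

	/* after: restore whatever the host actually had at init time */
	write_sysreg(host_ctxt->hcr_el2, hcr_el2);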
> 
> [...]
> 
> +void __hyp_text __kvm_populate_host_regs(void)
> +{
> +       struct kvm_cpu_context *host_ctxt;
> +
> +       if (has_vhe())
> +               host_ctxt = this_cpu_ptr(&kvm_host_cpu_state);
> +       else
> +               host_ctxt = __hyp_this_cpu_ptr(kvm_host_cpu_state);
> 
> According to the comment by the definition of __hyp_this_cpu_ptr(), this
> always works at Hyp.  I also see other calls with no fallback
> this_cpu_ptr() call like we have here.
> 
> So, can we simply always call __hyp_this_cpu_ptr() here?
Yes, I missed this.

Thanks,
Amit D
> 
> (I'm not familiar with this, myself.)
> 
> Cheers
> ---Dave
> 

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v6 2/6] arm64/kvm: preserve host MDCR_EL2 value
  2019-02-21 15:51   ` Dave Martin
@ 2019-03-01  6:10     ` Amit Daniel Kachhap
  0 siblings, 0 replies; 41+ messages in thread
From: Amit Daniel Kachhap @ 2019-03-01  6:10 UTC (permalink / raw)
  To: Dave Martin
  Cc: linux-arm-kernel, Marc Zyngier, Catalin Marinas, Will Deacon,
	Kristina Martsenko, kvmarm, Ramana Radhakrishnan, linux-kernel

Hi,

On 2/21/19 9:21 PM, Dave Martin wrote:
> On Tue, Feb 19, 2019 at 02:54:27PM +0530, Amit Daniel Kachhap wrote:
>> Save host MDCR_EL2 value during kvm HYP initialisation and restore
>> after every switch from host to guest. There should not be any
>> change in functionality due to this.
>>
>> The value of mdcr_el2 is now stored in struct kvm_cpu_context as
>> both host and guest can now use this field in a common way.
> 
> Is MDCR_EL2 somehow relevant to pointer auth?
> 
> It's not entirely clear why this patch is here.
> 
> If this is a cleanup to align the handling of this register with
> how HCR_EL2 is handled, it would be good to explain that in the commit
> message.
Agreed, I will add more information to the commit message.
> 
>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>> Cc: Mark Rutland <mark.rutland@arm.com>
>> Cc: Christoffer Dall <christoffer.dall@arm.com>
>> Cc: kvmarm@lists.cs.columbia.edu
>> ---
>>   arch/arm/include/asm/kvm_host.h   |  1 -
>>   arch/arm64/include/asm/kvm_host.h |  6 ++----
>>   arch/arm64/kvm/debug.c            | 28 ++++++----------------------
>>   arch/arm64/kvm/hyp/switch.c       | 17 ++++-------------
>>   arch/arm64/kvm/hyp/sysreg-sr.c    |  6 ++++++
>>   virt/kvm/arm/arm.c                |  1 -
>>   6 files changed, 18 insertions(+), 41 deletions(-)
>>
>> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
>> index 05706b4..704667e 100644
>> --- a/arch/arm/include/asm/kvm_host.h
>> +++ b/arch/arm/include/asm/kvm_host.h
>> @@ -294,7 +294,6 @@ static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {}
>>   static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
>>   static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
>>   
>> -static inline void kvm_arm_init_debug(void) {}
>>   static inline void kvm_arm_setup_debug(struct kvm_vcpu *vcpu) {}
>>   static inline void kvm_arm_clear_debug(struct kvm_vcpu *vcpu) {}
>>   static inline void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu) {}
>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
>> index 1b2e05b..2f1bb86 100644
>> --- a/arch/arm64/include/asm/kvm_host.h
>> +++ b/arch/arm64/include/asm/kvm_host.h
>> @@ -205,6 +205,8 @@ struct kvm_cpu_context {
>>   
>>   	/* HYP host/guest configuration */
>>   	u64 hcr_el2;
>> +	u32 mdcr_el2;
>> +
> 
> ARMv8-A says MDCR_EL2 is a 64-bit register.
> 
> Bits [63:20] are currently RES0, so this is probably not a big deal.
> But it would be better to make this 64-bit to prevent future accidents.
> It may be better to make that change in a separate patch.
Yes, this is a potential issue. I will fix it in a separate patch.
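The fix would simply widen the field to the architected register width,
i.e.:

	/* HYP host/guest configuration */
	u64 hcr_el2;
	u64 mdcr_el2;	/* 64-bit as architected; was u32 */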

Thanks,
Amit D
> 
> This is probably non-urgent, since this is clearly not causing problems
> for anyone today.
> 
> [...]
> 
> Cheers
> ---Dave
> 

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v6 3/6] arm64/kvm: context-switch ptrauth registers
  2019-02-21 15:51     ` Dave Martin
@ 2019-03-01  6:17       ` Amit Daniel Kachhap
  0 siblings, 0 replies; 41+ messages in thread
From: Amit Daniel Kachhap @ 2019-03-01  6:17 UTC (permalink / raw)
  To: Dave Martin, Mark Rutland
  Cc: linux-kernel, Marc Zyngier, Catalin Marinas, Will Deacon,
	Kristina Martsenko, kvmarm, Ramana Radhakrishnan,
	linux-arm-kernel



On 2/21/19 9:21 PM, Dave Martin wrote:
> On Thu, Feb 21, 2019 at 12:29:42PM +0000, Mark Rutland wrote:
>> On Tue, Feb 19, 2019 at 02:54:28PM +0530, Amit Daniel Kachhap wrote:
>>> From: Mark Rutland <mark.rutland@arm.com>
>>>
>>> When pointer authentication is supported, a guest may wish to use it.
>>> This patch adds the necessary KVM infrastructure for this to work, with
>>> a semi-lazy context switch of the pointer auth state.
>>>
>>> Pointer authentication feature is only enabled when VHE is built
>>> in the kernel and present into CPU implementation so only VHE code
>>> paths are modified.
>>
>> Nit: s/into/in the/
>>
>>>
>>> When we schedule a vcpu, we disable guest usage of pointer
>>> authentication instructions and accesses to the keys. While these are
>>> disabled, we avoid context-switching the keys. When we trap the guest
>>> trying to use pointer authentication functionality, we change to eagerly
>>> context-switching the keys, and enable the feature. The next time the
>>> vcpu is scheduled out/in, we start again. However the host key registers
>>> are saved in vcpu load stage as they remain constant for each vcpu
>>> schedule.
>>>
>>> Pointer authentication consists of address authentication and generic
>>> authentication, and CPUs in a system might have varied support for
>>> either. Where support for either feature is not uniform, it is hidden
>>> from guests via ID register emulation, as a result of the cpufeature
>>> framework in the host.
>>>
>>> Unfortunately, address authentication and generic authentication cannot
>>> be trapped separately, as the architecture provides a single EL2 trap
>>> covering both. If we wish to expose one without the other, we cannot
>>> prevent a (badly-written) guest from intermittently using a feature
>>> which is not uniformly supported (when scheduled on a physical CPU which
>>> supports the relevant feature). Hence, this patch expects both type of
>>> authentication to be present in a cpu.
>>>
>>> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
>>> [Only VHE, key switch from from assembly, kvm_supports_ptrauth
>>> checks, save host key in vcpu_load]
>>
>> Hmm, why do we need to do the key switch in assembly, given it's not
>> used in-kernel right now?
>>
>> Is that in preparation for in-kernel pointer auth usage? If so, please
>> call that out in the commit message.
> 
> [...]
> 
>> Huh, so we're actually doing the switch in C code...
>>
>>>   # KVM code is run at a different exception code with a different map, so
>>>   # compiler instrumentation that inserts callbacks or checks into the code may
>>> diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
>>> index 675fdc1..b78cc15 100644
>>> --- a/arch/arm64/kvm/hyp/entry.S
>>> +++ b/arch/arm64/kvm/hyp/entry.S
>>> @@ -64,6 +64,12 @@ ENTRY(__guest_enter)
>>>   
>>>   	add	x18, x0, #VCPU_CONTEXT
>>>   
>>> +#ifdef	CONFIG_ARM64_PTR_AUTH
>>> +	// Prepare parameter for __ptrauth_switch_to_guest(vcpu, host, guest).
>>> +	mov	x2, x18
>>> +	bl	__ptrauth_switch_to_guest
>>> +#endif
>>
>> ... and conditionally *calling* that switch code from assembly ...
>>
>>> +
>>>   	// Restore guest regs x0-x17
>>>   	ldp	x0, x1,   [x18, #CPU_XREG_OFFSET(0)]
>>>   	ldp	x2, x3,   [x18, #CPU_XREG_OFFSET(2)]
>>> @@ -118,6 +124,17 @@ ENTRY(__guest_exit)
>>>   
>>>   	get_host_ctxt	x2, x3
>>>   
>>> +#ifdef	CONFIG_ARM64_PTR_AUTH
>>> +	// Prepare parameter for __ptrauth_switch_to_host(vcpu, guest, host).
>>> +	// Save x0, x2 which are used later in callee saved registers.
>>> +	mov	x19, x0
>>> +	mov	x20, x2
>>> +	sub	x0, x1, #VCPU_CONTEXT
>>> +	ldr	x29, [x2, #CPU_XREG_OFFSET(29)]
>>> +	bl	__ptrauth_switch_to_host
>>> +	mov	x0, x19
>>> +	mov	x2, x20
>>> +#endif
>>
>> ... which adds a load of boilerplate for no immediate gain.
>>
>> Do we really need to do this in assembly today?
> 
> If we will need to move this to assembly when we add in-kernel ptrauth
> support, it may be best to have it in assembly from the start, to reduce
> unnecessary churn.
> 
> But having a mix of C and assembly is likely to make things more
> complicated: we should go with one or the other IMHO.
ok, I will check on this.

Thanks,
Amit D
> 
> Cheers
> ---Dave
> 

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v6 3/6] arm64/kvm: context-switch ptrauth registers
  2019-02-21 15:53   ` Dave Martin
@ 2019-03-01  9:35     ` Amit Daniel Kachhap
  0 siblings, 0 replies; 41+ messages in thread
From: Amit Daniel Kachhap @ 2019-03-01  9:35 UTC (permalink / raw)
  To: Dave Martin
  Cc: linux-arm-kernel, Marc Zyngier, Catalin Marinas, Will Deacon,
	Kristina Martsenko, kvmarm, Ramana Radhakrishnan, linux-kernel

Hi,

On 2/21/19 9:23 PM, Dave Martin wrote:
> On Tue, Feb 19, 2019 at 02:54:28PM +0530, Amit Daniel Kachhap wrote:
>> From: Mark Rutland <mark.rutland@arm.com>
>>
>> When pointer authentication is supported, a guest may wish to use it.
>> This patch adds the necessary KVM infrastructure for this to work, with
>> a semi-lazy context switch of the pointer auth state.
>>
>> Pointer authentication feature is only enabled when VHE is built
>> in the kernel and present into CPU implementation so only VHE code
>> paths are modified.
>>
>> When we schedule a vcpu, we disable guest usage of pointer
>> authentication instructions and accesses to the keys. While these are
>> disabled, we avoid context-switching the keys. When we trap the guest
>> trying to use pointer authentication functionality, we change to eagerly
>> context-switching the keys, and enable the feature. The next time the
>> vcpu is scheduled out/in, we start again. However the host key registers
>> are saved in vcpu load stage as they remain constant for each vcpu
>> schedule.
>>
>> Pointer authentication consists of address authentication and generic
>> authentication, and CPUs in a system might have varied support for
>> either. Where support for either feature is not uniform, it is hidden
>> from guests via ID register emulation, as a result of the cpufeature
>> framework in the host.
>>
>> Unfortunately, address authentication and generic authentication cannot
>> be trapped separately, as the architecture provides a single EL2 trap
>> covering both. If we wish to expose one without the other, we cannot
>> prevent a (badly-written) guest from intermittently using a feature
>> which is not uniformly supported (when scheduled on a physical CPU which
>> supports the relevant feature). Hence, this patch expects both type of
>> authentication to be present in a cpu.
>>
>> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
>> [Only VHE, key switch from from assembly, kvm_supports_ptrauth
>> checks, save host key in vcpu_load]
>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>> Reviewed-by: Julien Thierry <julien.thierry@arm.com>
>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>> Cc: Christoffer Dall <christoffer.dall@arm.com>
>> Cc: kvmarm@lists.cs.columbia.edu
>> ---
>>   arch/arm/include/asm/kvm_host.h   |   1 +
>>   arch/arm64/include/asm/kvm_host.h |  23 +++++++++
>>   arch/arm64/include/asm/kvm_hyp.h  |   7 +++
>>   arch/arm64/kernel/traps.c         |   1 +
>>   arch/arm64/kvm/handle_exit.c      |  21 +++++---
>>   arch/arm64/kvm/hyp/Makefile       |   1 +
>>   arch/arm64/kvm/hyp/entry.S        |  17 +++++++
>>   arch/arm64/kvm/hyp/ptrauth-sr.c   | 101 ++++++++++++++++++++++++++++++++++++++
>>   arch/arm64/kvm/sys_regs.c         |  37 +++++++++++++-
>>   virt/kvm/arm/arm.c                |   2 +
>>   10 files changed, 201 insertions(+), 10 deletions(-)
>>   create mode 100644 arch/arm64/kvm/hyp/ptrauth-sr.c
> 
> [...]
> 
>> diff --git a/arch/arm64/kvm/hyp/ptrauth-sr.c b/arch/arm64/kvm/hyp/ptrauth-sr.c
>> new file mode 100644
>> index 0000000..528ee6e
>> --- /dev/null
>> +++ b/arch/arm64/kvm/hyp/ptrauth-sr.c
>> @@ -0,0 +1,101 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +/*
>> + * arch/arm64/kvm/hyp/ptrauth-sr.c: Guest/host ptrauth save/restore
>> + *
>> + * Copyright 2018 Arm Limited
>> + * Author: Mark Rutland <mark.rutland@arm.com>
>> + *         Amit Daniel Kachhap <amit.kachhap@arm.com>
>> + */
>> +#include <linux/compiler.h>
>> +#include <linux/kvm_host.h>
>> +
>> +#include <asm/cpucaps.h>
>> +#include <asm/cpufeature.h>
>> +#include <asm/kvm_asm.h>
>> +#include <asm/kvm_hyp.h>
>> +#include <asm/pointer_auth.h>
>> +
>> +static __always_inline bool __ptrauth_is_enabled(struct kvm_vcpu *vcpu)
>> +{
>> +	return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) &&
>> +			vcpu->arch.ctxt.hcr_el2 & (HCR_API | HCR_APK);
>> +}
>> +
>> +#define __ptrauth_save_key(regs, key)						\
>> +({										\
>> +	regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1);	\
>> +	regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1);	\
>> +})
>> +
>> +static __always_inline void __ptrauth_save_state(struct kvm_cpu_context *ctxt)
> 
> Why __always_inline?
> 
>> +{
>> +	__ptrauth_save_key(ctxt->sys_regs, APIA);
>> +	__ptrauth_save_key(ctxt->sys_regs, APIB);
>> +	__ptrauth_save_key(ctxt->sys_regs, APDA);
>> +	__ptrauth_save_key(ctxt->sys_regs, APDB);
>> +	__ptrauth_save_key(ctxt->sys_regs, APGA);
>> +}
>> +
>> +#define __ptrauth_restore_key(regs, key) 					\
>> +({										\
>> +	write_sysreg_s(regs[key ## KEYLO_EL1], SYS_ ## key ## KEYLO_EL1);	\
>> +	write_sysreg_s(regs[key ## KEYHI_EL1], SYS_ ## key ## KEYHI_EL1);	\
>> +})
>> +
>> +static __always_inline void __ptrauth_restore_state(struct kvm_cpu_context *ctxt)
> 
> Same here.  I would hope these just need to be marked with the correct
> function attribute to disable ptrauth by the compiler.  I don't see why
> it makes a difference whether it's inline or not.
> 
> If the compiler semantics are not sufficiently clear, make it a macro.
ok.
> 
> (Bikeshedding here, so it you feel this has already been discussed to
> death I'm happy for this to stay as-is.)
> 
>> +{
>> +	__ptrauth_restore_key(ctxt->sys_regs, APIA);
>> +	__ptrauth_restore_key(ctxt->sys_regs, APIB);
>> +	__ptrauth_restore_key(ctxt->sys_regs, APDA);
>> +	__ptrauth_restore_key(ctxt->sys_regs, APDB);
>> +	__ptrauth_restore_key(ctxt->sys_regs, APGA);
>> +}
>> +
>> +/**
>> + * This function changes the key so assign Pointer Authentication safe
>> + * GCC attribute if protected by it.
>> + */
> 
> (I'd have preferred to keep __noptrauth here and define it do nothing for
> now.  But I'll defer to others on that, since this has already been
> discussed...)

OK, the __noptrauth annotation will make it clear. I will add it to all the
error-prone C functions in the next iteration.
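Per your suggestion, the annotation can be kept and defined to do nothing
for now, e.g.:

/*
 * Marks functions that change the ptrauth keys and therefore must not
 * themselves be compiled with pointer authentication. Currently a no-op.
 */
#define __noptrauth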
> 
>> +void __ptrauth_switch_to_guest(struct kvm_vcpu *vcpu,
>> +				  struct kvm_cpu_context *host_ctxt,
>> +				  struct kvm_cpu_context *guest_ctxt)
>> +{
>> +	if (!__ptrauth_is_enabled(vcpu))
>> +		return;
>> +
>> +	__ptrauth_restore_state(guest_ctxt);
>> +}
>> +
>> +/**
>> + * This function changes the key so assign Pointer Authentication safe
>> + * GCC attribute if protected by it.
>> + */
>> +void __ptrauth_switch_to_host(struct kvm_vcpu *vcpu,
>> +				 struct kvm_cpu_context *guest_ctxt,
>> +				 struct kvm_cpu_context *host_ctxt)
>> +{
>> +	if (!__ptrauth_is_enabled(vcpu))
>> +		return;
>> +
>> +	__ptrauth_save_state(guest_ctxt);
>> +	__ptrauth_restore_state(host_ctxt);
>> +}
>> +
>> +/**
>> + * kvm_arm_vcpu_ptrauth_reset - resets ptrauth for vcpu schedule
>> + *
>> + * @vcpu: The VCPU pointer
>> + *
>> + * This function may be used to disable ptrauth and use it in a lazy context
>> + * via traps. However host key registers are saved here as they dont change
>> + * during host/guest switch.
>> + */
>> +void kvm_arm_vcpu_ptrauth_reset(struct kvm_vcpu *vcpu)
> 
> I feel this is not a good name.  It sounds too much like it resets the
> registers as part of vcpu reset, whereas really it's doing something
> completely different.
> 
> (Do you reset the regs anywhere btw?  I may have missed it...)
No, there is no reset of the registers. Maybe a name like
kvm_arm_vcpu_ptrauth_setup_lazy would be better.
> 
>> +{
>> +	struct kvm_cpu_context *host_ctxt;
>> +
>> +	if (kvm_supports_ptrauth()) {
>> +		kvm_arm_vcpu_ptrauth_disable(vcpu);
>> +		host_ctxt = vcpu->arch.host_cpu_context;
>> +		__ptrauth_save_state(host_ctxt);
>> +	}
>> +}
>> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
>> index a6c9381..12529df 100644
>> --- a/arch/arm64/kvm/sys_regs.c
>> +++ b/arch/arm64/kvm/sys_regs.c
>> @@ -986,6 +986,32 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>>   	{ SYS_DESC(SYS_PMEVTYPERn_EL0(n)),					\
>>   	  access_pmu_evtyper, reset_unknown, (PMEVTYPER0_EL0 + n), }
>>   
>> +
>> +void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu)
>> +{
>> +	vcpu->arch.ctxt.hcr_el2 |= (HCR_API | HCR_APK);
> 
> Pedantic nit: surplus ().
> 
> (Although opinions differ, and keeping them looks more symmetric with
> kvm_arm_vcpu_ptrauth_disable() -- either way, the code can stay as-is if
> you prefer.)
ok.
> 
>> +}
>> +
>> +void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu)
>> +{
>> +	vcpu->arch.ctxt.hcr_el2 &= ~(HCR_API | HCR_APK);
>> +}
>> +
>> +static bool trap_ptrauth(struct kvm_vcpu *vcpu,
>> +			 struct sys_reg_params *p,
>> +			 const struct sys_reg_desc *rd)
>> +{
>> +	kvm_arm_vcpu_ptrauth_trap(vcpu);
>> +	return false;
> 
> Can we ever get here?  Won't PAC traps always be handled via
> handle_exit()?
> 
> Or can we also take sysreg access traps when the guest tries to access
> the ptrauth key registers?

When the guest kernel forks a thread, the key registers are accessed to
fill them with ptrauth keys, and at that moment the APK bit in hcr_el2 is
not yet set. This causes a trap to EL2, and the above function is invoked.
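For reference, the trap handler invoked above just flips the vcpu to the
eager path (a sketch of kvm_arm_vcpu_ptrauth_trap(), which is not visible
in this hunk; the exact body may differ):

void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu)
{
	/*
	 * Start eagerly context-switching the keys; the guest then
	 * retries the faulting access with the traps disabled.
	 */
	if (kvm_supports_ptrauth())
		kvm_arm_vcpu_ptrauth_enable(vcpu);
}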
> 
> (I'm now wondering how this works for SVE.)
Not sure. Need to check.
> 
>> +}
>> +
>> +#define __PTRAUTH_KEY(k)						\
>> +	{ SYS_DESC(SYS_## k), trap_ptrauth, reset_unknown, k }
>> +
>> +#define PTRAUTH_KEY(k)							\
>> +	__PTRAUTH_KEY(k ## KEYLO_EL1),					\
>> +	__PTRAUTH_KEY(k ## KEYHI_EL1)
>> +
>>   static bool access_cntp_tval(struct kvm_vcpu *vcpu,
>>   		struct sys_reg_params *p,
>>   		const struct sys_reg_desc *r)
>> @@ -1045,9 +1071,10 @@ static u64 read_id_reg(struct sys_reg_desc const *r, bool raz)
>>   					 (0xfUL << ID_AA64ISAR1_API_SHIFT) |
>>   					 (0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
>>   					 (0xfUL << ID_AA64ISAR1_GPI_SHIFT);
>> -		if (val & ptrauth_mask)
>> +		if (!kvm_supports_ptrauth()) {
> 
> Don't we now always print this when ptrauth is not supported?
> 
> Previously we only printed a message in the interesting case, i.e.,
> where the host supports ptrauch but we cannot offer it to the guest.
Yes, agreed. I will add proper checks here to skip the print on hosts
without ptrauth.
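i.e. restoring the old behaviour of only printing in the interesting case
(sketch):

		if (!kvm_supports_ptrauth()) {
			if (val & ptrauth_mask)
				kvm_debug("ptrauth unsupported for guests, suppressing\n");
			val &= ~ptrauth_mask;
		}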

Thanks,
Amit D
> 
>>   			kvm_debug("ptrauth unsupported for guests, suppressing\n");
>> -		val &= ~ptrauth_mask;
>> +			val &= ~ptrauth_mask;
>> +		}
>>   	} else if (id == SYS_ID_AA64MMFR1_EL1) {
>>   		if (val & (0xfUL << ID_AA64MMFR1_LOR_SHIFT))
>>   			kvm_debug("LORegions unsupported for guests, suppressing\n");
>> @@ -1316,6 +1343,12 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>>   	{ SYS_DESC(SYS_TTBR1_EL1), access_vm_reg, reset_unknown, TTBR1_EL1 },
>>   	{ SYS_DESC(SYS_TCR_EL1), access_vm_reg, reset_val, TCR_EL1, 0 },
>>   
>> +	PTRAUTH_KEY(APIA),
>> +	PTRAUTH_KEY(APIB),
>> +	PTRAUTH_KEY(APDA),
>> +	PTRAUTH_KEY(APDB),
>> +	PTRAUTH_KEY(APGA),
>> +
>>   	{ SYS_DESC(SYS_AFSR0_EL1), access_vm_reg, reset_unknown, AFSR0_EL1 },
>>   	{ SYS_DESC(SYS_AFSR1_EL1), access_vm_reg, reset_unknown, AFSR1_EL1 },
>>   	{ SYS_DESC(SYS_ESR_EL1), access_vm_reg, reset_unknown, ESR_EL1 },
>> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
>> index 2032a66..d7e003f 100644
>> --- a/virt/kvm/arm/arm.c
>> +++ b/virt/kvm/arm/arm.c
>> @@ -388,6 +388,8 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>>   		vcpu_clear_wfe_traps(vcpu);
>>   	else
>>   		vcpu_set_wfe_traps(vcpu);
>> +
>> +	kvm_arm_vcpu_ptrauth_reset(vcpu);
>>   }
>>   
>>   void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
>> -- 
>> 2.7.4
>>
>> _______________________________________________
>> kvmarm mailing list
>> kvmarm@lists.cs.columbia.edu
>> https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v6 4/6] arm64/kvm: add a userspace option to enable pointer authentication
  2019-02-21 15:53   ` Dave Martin
@ 2019-03-01  9:41     ` Amit Daniel Kachhap
  2019-03-01 12:22       ` Dave P Martin
  0 siblings, 1 reply; 41+ messages in thread
From: Amit Daniel Kachhap @ 2019-03-01  9:41 UTC (permalink / raw)
  To: Dave Martin
  Cc: linux-arm-kernel, Marc Zyngier, Catalin Marinas, Will Deacon,
	Kristina Martsenko, kvmarm, Ramana Radhakrishnan, linux-kernel

Hi,

On 2/21/19 9:23 PM, Dave Martin wrote:
> On Tue, Feb 19, 2019 at 02:54:29PM +0530, Amit Daniel Kachhap wrote:
>> This feature will allow the KVM guest to allow the handling of
>> pointer authentication instructions or to treat them as undefined
>> if not set. It uses the existing vcpu API KVM_ARM_VCPU_INIT to
>> supply this parameter instead of creating a new API.
>>
>> A new register is not created to pass this parameter via
>> SET/GET_ONE_REG interface as just a flag (KVM_ARM_VCPU_PTRAUTH)
>> supplied is enough to enable this feature.
>>
>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>> Cc: Mark Rutland <mark.rutland@arm.com>
>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>> Cc: Christoffer Dall <christoffer.dall@arm.com>
>> Cc: kvmarm@lists.cs.columbia.edu
>> ---
>>   Documentation/arm64/pointer-authentication.txt |  9 +++++----
>>   Documentation/virtual/kvm/api.txt              |  4 ++++
>>   arch/arm64/include/asm/kvm_host.h              |  3 ++-
>>   arch/arm64/include/uapi/asm/kvm.h              |  1 +
>>   arch/arm64/kvm/handle_exit.c                   |  2 +-
>>   arch/arm64/kvm/hyp/ptrauth-sr.c                | 16 +++++++++++++++-
>>   arch/arm64/kvm/reset.c                         |  3 +++
>>   arch/arm64/kvm/sys_regs.c                      | 26 +++++++++++++-------------
>>   include/uapi/linux/kvm.h                       |  1 +
>>   9 files changed, 45 insertions(+), 20 deletions(-)
>>
> 
> [...]
> 
>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
>> index 1bacf78..2768a53 100644
>> --- a/arch/arm64/include/asm/kvm_host.h
>> +++ b/arch/arm64/include/asm/kvm_host.h
>> @@ -43,7 +43,7 @@
>>   
>>   #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS
>>   
>> -#define KVM_VCPU_MAX_FEATURES 4
>> +#define KVM_VCPU_MAX_FEATURES 5
>>   
>>   #define KVM_REQ_SLEEP \
>>   	KVM_ARCH_REQ_FLAGS(0, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
>> @@ -451,6 +451,7 @@ static inline bool kvm_arch_requires_vhe(void)
>>   	return false;
>>   }
>>   
>> +bool kvm_arm_vcpu_ptrauth_allowed(const struct kvm_vcpu *vcpu);
>>   static inline bool kvm_supports_ptrauth(void)
>>   {
>>   	return has_vhe() && system_supports_address_auth() &&
> 
> [...]
> 
>> diff --git a/arch/arm64/kvm/hyp/ptrauth-sr.c b/arch/arm64/kvm/hyp/ptrauth-sr.c
>> index 528ee6e..6846a23 100644
>> --- a/arch/arm64/kvm/hyp/ptrauth-sr.c
>> +++ b/arch/arm64/kvm/hyp/ptrauth-sr.c
>> @@ -93,9 +93,23 @@ void kvm_arm_vcpu_ptrauth_reset(struct kvm_vcpu *vcpu)
> 
> [...]
> 
>> +/**
>> + * kvm_arm_vcpu_ptrauth_allowed - checks if ptrauth feature is allowed by user
>> + *
>> + * @vcpu: The VCPU pointer
>> + *
>> + * This function will be used to check userspace option to have ptrauth or not
>> + * in the guest kernel.
>> + */
>> +bool kvm_arm_vcpu_ptrauth_allowed(const struct kvm_vcpu *vcpu)
>> +{
>> +	return kvm_supports_ptrauth() &&
>> +		test_bit(KVM_ARM_VCPU_PTRAUTH, vcpu->arch.features);
>> +}
> 
> Nit: for SVE is called the equivalent helper vcpu_has_sve(vcpu).
> 
> Neither naming is more correct, but it would make sense to be
> consistent.  We will likely accumulate more of these vcpu feature
> predicates over time.
> 
> Given that this is trivial and will be used all over the place, it
> probably makes sense to define it in kvm_host.h rather than having it
> out of line in a separate C file.
OK, I checked the SVE implementation; a vcpu_has_ptrauth macro makes more
sense.
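Something like this in kvm_host.h, mirroring vcpu_has_sve() (sketch):

#define vcpu_has_ptrauth(vcpu)					\
	(kvm_supports_ptrauth() &&				\
	 test_bit(KVM_ARM_VCPU_PTRAUTH, (vcpu)->arch.features))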
> 
>> diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
>> index b72a3dd..987e0c3c 100644
>> --- a/arch/arm64/kvm/reset.c
>> +++ b/arch/arm64/kvm/reset.c
>> @@ -91,6 +91,9 @@ int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>>   	case KVM_CAP_ARM_VM_IPA_SIZE:
>>   		r = kvm_ipa_limit;
>>   		break;
>> +	case KVM_CAP_ARM_PTRAUTH:
>> +		r = kvm_supports_ptrauth();
>> +		break;
>>   	default:
>>   		r = 0;
>>   	}
>> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
>> index 12529df..f7bcc60 100644
>> --- a/arch/arm64/kvm/sys_regs.c
>> +++ b/arch/arm64/kvm/sys_regs.c
>> @@ -1055,7 +1055,7 @@ static bool access_cntp_cval(struct kvm_vcpu *vcpu,
>>   }
>>   
>>   /* Read a sanitised cpufeature ID register by sys_reg_desc */
>> -static u64 read_id_reg(struct sys_reg_desc const *r, bool raz)
>> +static u64 read_id_reg(struct kvm_vcpu *vcpu, struct sys_reg_desc const *r, bool raz)
>>   {
>>   	u32 id = sys_reg((u32)r->Op0, (u32)r->Op1,
>>   			 (u32)r->CRn, (u32)r->CRm, (u32)r->Op2);
>> @@ -1071,7 +1071,7 @@ static u64 read_id_reg(struct sys_reg_desc const *r, bool raz)
>>   					 (0xfUL << ID_AA64ISAR1_API_SHIFT) |
>>   					 (0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
>>   					 (0xfUL << ID_AA64ISAR1_GPI_SHIFT);
>> -		if (!kvm_supports_ptrauth()) {
>> +		if (!kvm_arm_vcpu_ptrauth_allowed(vcpu)) {
>>   			kvm_debug("ptrauth unsupported for guests, suppressing\n");
>>   			val &= ~ptrauth_mask;
>>   		}
>> @@ -1095,7 +1095,7 @@ static bool __access_id_reg(struct kvm_vcpu *vcpu,
>>   	if (p->is_write)
>>   		return write_to_read_only(vcpu, p, r);
>>   
>> -	p->regval = read_id_reg(r, raz);
>> +	p->regval = read_id_reg(vcpu, r, raz);
>>   	return true;
>>   }
> 
> The SVE KVM series makes various overlapping changes to propagate vcpuo
> into the relevant places, but hopefully the rebase is not too painful.
> Many of the changes are probably virtually identical between the two
> series.
> 
> See for example [1].  Maybe you could cherry-pick and drop the
> equivalent changes here (though if your series is picked up first, I
> will live with it ;)
Yes no issue. I will cherry-pick your specific patch and rebase mine on it.

Thanks,
Amit D
> 
> [...]
> 
> Cheers
> ---Dave
> 
> 
> [1] [PATCH v5 10/26] KVM: arm64: Propagate vcpu into read_id_reg()
> https://lists.cs.columbia.edu/pipermail/kvmarm/2019-February/034687.html
> 

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [kvmtool PATCH v6 6/6] arm/kvm: arm64: Add a vcpu feature for pointer authentication
  2019-02-21 15:54   ` Dave Martin
@ 2019-03-01 10:37     ` Amit Daniel Kachhap
  2019-03-01 11:24       ` Dave P Martin
  0 siblings, 1 reply; 41+ messages in thread
From: Amit Daniel Kachhap @ 2019-03-01 10:37 UTC (permalink / raw)
  To: Dave Martin
  Cc: linux-arm-kernel, Marc Zyngier, Catalin Marinas, Will Deacon,
	Kristina Martsenko, kvmarm, Ramana Radhakrishnan, linux-kernel

Hi,

On 2/21/19 9:24 PM, Dave Martin wrote:
> On Tue, Feb 19, 2019 at 02:54:31PM +0530, Amit Daniel Kachhap wrote:
>> This is a runtime capabality for KVM tool to enable Armv8.3 Pointer
>> Authentication in guest kernel. A command line option --ptrauth is
>> required for this.
>>
>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>> ---
>>   arm/aarch32/include/kvm/kvm-cpu-arch.h    | 1 +
>>   arm/aarch64/include/asm/kvm.h             | 1 +
>>   arm/aarch64/include/kvm/kvm-config-arch.h | 4 +++-
>>   arm/aarch64/include/kvm/kvm-cpu-arch.h    | 1 +
>>   arm/include/arm-common/kvm-config-arch.h  | 1 +
>>   arm/kvm-cpu.c                             | 6 ++++++
>>   include/linux/kvm.h                       | 1 +
>>   7 files changed, 14 insertions(+), 1 deletion(-)
>>
>> diff --git a/arm/aarch32/include/kvm/kvm-cpu-arch.h b/arm/aarch32/include/kvm/kvm-cpu-arch.h
>> index d28ea67..520ea76 100644
>> --- a/arm/aarch32/include/kvm/kvm-cpu-arch.h
>> +++ b/arm/aarch32/include/kvm/kvm-cpu-arch.h
>> @@ -13,4 +13,5 @@
>>   #define ARM_CPU_ID		0, 0, 0
>>   #define ARM_CPU_ID_MPIDR	5
>>   
>> +#define ARM_VCPU_PTRAUTH_FEATURE	0
>>   #endif /* KVM__KVM_CPU_ARCH_H */
>> diff --git a/arm/aarch64/include/asm/kvm.h b/arm/aarch64/include/asm/kvm.h
>> index 97c3478..1068fd1 100644
>> --- a/arm/aarch64/include/asm/kvm.h
>> +++ b/arm/aarch64/include/asm/kvm.h
>> @@ -102,6 +102,7 @@ struct kvm_regs {
>>   #define KVM_ARM_VCPU_EL1_32BIT		1 /* CPU running a 32bit VM */
>>   #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
>>   #define KVM_ARM_VCPU_PMU_V3		3 /* Support guest PMUv3 */
>> +#define KVM_ARM_VCPU_PTRAUTH		4 /* CPU uses pointer authentication */
>>   
>>   struct kvm_vcpu_init {
>>   	__u32 target;
>> diff --git a/arm/aarch64/include/kvm/kvm-config-arch.h b/arm/aarch64/include/kvm/kvm-config-arch.h
>> index 04be43d..2074684 100644
>> --- a/arm/aarch64/include/kvm/kvm-config-arch.h
>> +++ b/arm/aarch64/include/kvm/kvm-config-arch.h
>> @@ -8,7 +8,9 @@
>>   			"Create PMUv3 device"),				\
>>   	OPT_U64('\0', "kaslr-seed", &(cfg)->kaslr_seed,			\
>>   			"Specify random seed for Kernel Address Space "	\
>> -			"Layout Randomization (KASLR)"),
>> +			"Layout Randomization (KASLR)"),		\
>> +	OPT_BOOLEAN('\0', "ptrauth", &(cfg)->has_ptrauth,		\
>> +			"Enable address authentication"),
> 
> Nit: doesn't this enable address *and* generic authentication?  The
> discussion on what capababilities and enables the ABI exposes probably
> needs to conclude before we can finalise this here.
ok.
> 
> However, I would recommend that we provide a single option here that
> turns both address authentication and generic authentication on, even
> if the ABI treats them independently.  This is expected to be the common
> case by far.
ok
> 
> We can always add more fine-grained options later if it turns out to be
> necessary.
Mark suggested providing two flags [1], one for address and one for generic
authentication, so I was thinking of adding two features like:

+#define KVM_ARM_VCPU_PTRAUTH_ADDR	4 /* CPU uses pointer address authentication */
+#define KVM_ARM_VCPU_PTRAUTH_GENERIC	5 /* CPU uses pointer generic authentication */

and supplying both of them together at the KVM_ARM_VCPU_INIT stage. KVM
would expect both features to be requested together.

[1]: https://www.spinics.net/lists/arm-kernel/msg709181.html
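On the kernel side, the KVM_ARM_VCPU_INIT path could then reject one flag
without the other, along these lines (a sketch only; whether a lone flag
should instead be treated independently is still an open question):

	if (test_bit(KVM_ARM_VCPU_PTRAUTH_ADDR, vcpu->arch.features) !=
	    test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features))
		return -EINVAL;	/* both must be requested together */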
> 
>>   #include "arm-common/kvm-config-arch.h"
>>   
>> diff --git a/arm/aarch64/include/kvm/kvm-cpu-arch.h b/arm/aarch64/include/kvm/kvm-cpu-arch.h
>> index a9d8563..496ece8 100644
>> --- a/arm/aarch64/include/kvm/kvm-cpu-arch.h
>> +++ b/arm/aarch64/include/kvm/kvm-cpu-arch.h
>> @@ -17,4 +17,5 @@
>>   #define ARM_CPU_CTRL		3, 0, 1, 0
>>   #define ARM_CPU_CTRL_SCTLR_EL1	0
>>   
>> +#define ARM_VCPU_PTRAUTH_FEATURE	(1UL << KVM_ARM_VCPU_PTRAUTH)
>>   #endif /* KVM__KVM_CPU_ARCH_H */
>> diff --git a/arm/include/arm-common/kvm-config-arch.h b/arm/include/arm-common/kvm-config-arch.h
>> index 5734c46..5badcbd 100644
>> --- a/arm/include/arm-common/kvm-config-arch.h
>> +++ b/arm/include/arm-common/kvm-config-arch.h
>> @@ -10,6 +10,7 @@ struct kvm_config_arch {
>>   	bool		aarch32_guest;
>>   	bool		has_pmuv3;
>>   	u64		kaslr_seed;
>> +	bool		has_ptrauth;
>>   	enum irqchip_type irqchip;
>>   	u64		fw_addr;
>>   };
>> diff --git a/arm/kvm-cpu.c b/arm/kvm-cpu.c
>> index 7780251..4ac80f8 100644
>> --- a/arm/kvm-cpu.c
>> +++ b/arm/kvm-cpu.c
>> @@ -68,6 +68,12 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned long cpu_id)
>>   		vcpu_init.features[0] |= (1UL << KVM_ARM_VCPU_PSCI_0_2);
>>   	}
>>   
>> +	/* Set KVM_ARM_VCPU_PTRAUTH if available */
>> +	if (kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH)) {
>> +		if (kvm->cfg.arch.has_ptrauth)
>> +			vcpu_init.features[0] |= ARM_VCPU_PTRAUTH_FEATURE;
>> +	}
>> +
> 
> I'm not too keen on requiring a dummy #define for AArch32 here.  How do
> we handle other subarch-specific feature flags?  Is there something we
> can reuse?
I will check it.

Thanks,
Amit D
> 
> (For SVE I didn''t have a proper solution for this yet: my kvmtool
> patches are still a dirty hack...)
> 
> [...]
> 
> Cheers
> ---Dave
> 

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [kvmtool PATCH v6 6/6] arm/kvm: arm64: Add a vcpu feature for pointer authentication
  2019-03-01 10:37     ` Amit Daniel Kachhap
@ 2019-03-01 11:24       ` Dave P Martin
  2019-03-04 11:08         ` Amit Daniel Kachhap
  0 siblings, 1 reply; 41+ messages in thread
From: Dave P Martin @ 2019-03-01 11:24 UTC (permalink / raw)
  To: Amit Kachhap
  Cc: linux-arm-kernel, Marc Zyngier, Catalin Marinas, Will Deacon,
	Kristina Martsenko, kvmarm, Ramana Radhakrishnan, linux-kernel

On Fri, Mar 01, 2019 at 10:37:54AM +0000, Amit Daniel Kachhap wrote:
> Hi,
>
> On 2/21/19 9:24 PM, Dave Martin wrote:
> > On Tue, Feb 19, 2019 at 02:54:31PM +0530, Amit Daniel Kachhap wrote:

[...]

> >> diff --git a/arm/aarch64/include/kvm/kvm-config-arch.h b/arm/aarch64/include/kvm/kvm-config-arch.h
> >> index 04be43d..2074684 100644
> >> --- a/arm/aarch64/include/kvm/kvm-config-arch.h
> >> +++ b/arm/aarch64/include/kvm/kvm-config-arch.h
> >> @@ -8,7 +8,9 @@
> >>   			"Create PMUv3 device"),				\
> >>   	OPT_U64('\0', "kaslr-seed", &(cfg)->kaslr_seed,			\
> >>   			"Specify random seed for Kernel Address Space "	\
> >> -			"Layout Randomization (KASLR)"),
> >> +			"Layout Randomization (KASLR)"),		\
> >> +	OPT_BOOLEAN('\0', "ptrauth", &(cfg)->has_ptrauth,		\
> >> +			"Enable address authentication"),
> >
> > Nit: doesn't this enable address *and* generic authentication?  The
> > discussion on what capababilities and enables the ABI exposes probably
> > needs to conclude before we can finalise this here.
> ok.
> >
> > However, I would recommend that we provide a single option here that
> > turns both address authentication and generic authentication on, even
> > if the ABI treats them independently.  This is expected to be the common
> > case by far.
> ok
> >
> > We can always add more fine-grained options later if it turns out to be
> > necessary.
> Mark suggested to provide 2 flags [1] for Address and Generic
> authentication so I was thinking of adding 2 features like,
>
> +#define KVM_ARM_VCPU_PTRAUTH_ADDR	4 /* CPU uses pointer address
> authentication */
> +#define KVM_ARM_VCPU_PTRAUTH_GENERIC	5 /* CPU uses pointer generic
> authentication */
>
> And supply both of them concatenated in VCPU_INIT stage. Kernel KVM
> would expect both feature requested together.

Seems reasonable.  Do you mean the kernel would treat it as an error if
only one of these flags is passed to KVM_ARM_VCPU_INIT, or would KVM
simply treat them as independent?

[...]

> >> diff --git a/arm/kvm-cpu.c b/arm/kvm-cpu.c
> >> index 7780251..4ac80f8 100644
> >> --- a/arm/kvm-cpu.c
> >> +++ b/arm/kvm-cpu.c
> >> @@ -68,6 +68,12 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned long cpu_id)
> >>   		vcpu_init.features[0] |= (1UL << KVM_ARM_VCPU_PSCI_0_2);
> >>   	}
> >>
> >> +	/* Set KVM_ARM_VCPU_PTRAUTH if available */
> >> +	if (kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH)) {
> >> +		if (kvm->cfg.arch.has_ptrauth)
> >> +			vcpu_init.features[0] |= ARM_VCPU_PTRAUTH_FEATURE;
> >> +	}
> >> +
> >
> > I'm not too keen on requiring a dummy #define for AArch32 here.  How do
> > we handle other subarch-specific feature flags?  Is there something we
> > can reuse?
> I will check it.

OK

Cheers
---Dave

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v6 4/6] arm64/kvm: add a userspace option to enable pointer authentication
  2019-03-01  9:41     ` Amit Daniel Kachhap
@ 2019-03-01 12:22       ` Dave P Martin
  0 siblings, 0 replies; 41+ messages in thread
From: Dave P Martin @ 2019-03-01 12:22 UTC (permalink / raw)
  To: Amit Kachhap
  Cc: linux-arm-kernel, Marc Zyngier, Catalin Marinas, Will Deacon,
	Kristina Martsenko, kvmarm, Ramana Radhakrishnan, linux-kernel

On Fri, Mar 01, 2019 at 09:41:20AM +0000, Amit Daniel Kachhap wrote:
> Hi,
>
> On 2/21/19 9:23 PM, Dave Martin wrote:
> > On Tue, Feb 19, 2019 at 02:54:29PM +0530, Amit Daniel Kachhap wrote:
> >> This feature will allow the KVM guest to allow the handling of
> >> pointer authentication instructions or to treat them as undefined
> >> if not set. It uses the existing vcpu API KVM_ARM_VCPU_INIT to
> >> supply this parameter instead of creating a new API.
> >>
> >> A new register is not created to pass this parameter via
> >> SET/GET_ONE_REG interface as just a flag (KVM_ARM_VCPU_PTRAUTH)
> >> supplied is enough to enable this feature.
> >>
> >> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> >> Cc: Mark Rutland <mark.rutland@arm.com>
> >> Cc: Marc Zyngier <marc.zyngier@arm.com>
> >> Cc: Christoffer Dall <christoffer.dall@arm.com>
> >> Cc: kvmarm@lists.cs.columbia.edu

[...]

> >> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> >> index 12529df..f7bcc60 100644
> >> --- a/arch/arm64/kvm/sys_regs.c
> >> +++ b/arch/arm64/kvm/sys_regs.c
> >> @@ -1055,7 +1055,7 @@ static bool access_cntp_cval(struct kvm_vcpu *vcpu,
> >>   }
> >>
> >>   /* Read a sanitised cpufeature ID register by sys_reg_desc */
> >> -static u64 read_id_reg(struct sys_reg_desc const *r, bool raz)
> >> +static u64 read_id_reg(struct kvm_vcpu *vcpu, struct sys_reg_desc const *r, bool raz)
> >>   {
> >>   	u32 id = sys_reg((u32)r->Op0, (u32)r->Op1,
> >>   			 (u32)r->CRn, (u32)r->CRm, (u32)r->Op2);
> >> @@ -1071,7 +1071,7 @@ static u64 read_id_reg(struct sys_reg_desc const *r, bool raz)
> >>   					 (0xfUL << ID_AA64ISAR1_API_SHIFT) |
> >>   					 (0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
> >>   					 (0xfUL << ID_AA64ISAR1_GPI_SHIFT);
> >> -		if (!kvm_supports_ptrauth()) {
> >> +		if (!kvm_arm_vcpu_ptrauth_allowed(vcpu)) {
> >>   			kvm_debug("ptrauth unsupported for guests, suppressing\n");
> >>   			val &= ~ptrauth_mask;
> >>   		}
> >> @@ -1095,7 +1095,7 @@ static bool __access_id_reg(struct kvm_vcpu *vcpu,
> >>   	if (p->is_write)
> >>   		return write_to_read_only(vcpu, p, r);
> >>
> >> -	p->regval = read_id_reg(r, raz);
> >> +	p->regval = read_id_reg(vcpu, r, raz);
> >>   	return true;
> >>   }
> >
> > The SVE KVM series makes various overlapping changes to propagate vcpuo
> > into the relevant places, but hopefully the rebase is not too painful.
> > Many of the changes are probably virtually identical between the two
> > series.
> >
> > See for example [1].  Maybe you could cherry-pick and drop the
> > equivalent changes here (though if your series is picked up first, I
> > will live with it ;)
> Yes no issue. I will cherry-pick your specific patch and rebase mine on it.

OK, thanks.

Unfortunately it is likely to churn a bit due to review-- my v6 series
will rename some stuff.  Hopefully it will be stable from then on.

Cheers
---Dave

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v6 1/6] arm64/kvm: preserve host HCR_EL2 value
  2019-02-25 17:39   ` James Morse
  2019-02-26 10:06     ` James Morse
@ 2019-03-02 11:09     ` Amit Daniel Kachhap
  1 sibling, 0 replies; 41+ messages in thread
From: Amit Daniel Kachhap @ 2019-03-02 11:09 UTC (permalink / raw)
  To: James Morse, linux-arm-kernel
  Cc: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	Kristina Martsenko, linux-kernel, Mark Rutland, Julien Thierry

Hi,

On 2/25/19 11:09 PM, James Morse wrote:
> Hi Amit,
> 
> On 19/02/2019 09:24, Amit Daniel Kachhap wrote:
>> From: Mark Rutland <mark.rutland@arm.com>
>>
>> When restoring HCR_EL2 for the host, KVM uses HCR_HOST_VHE_FLAGS, which
>> is a constant value. This works today, as the host HCR_EL2 value is
>> always the same, but this will get in the way of supporting extensions
>> that require HCR_EL2 bits to be set conditionally for the host.
>>
>> To allow such features to work without KVM having to explicitly handle
>> every possible host feature combination, this patch has KVM save/restore
>> for the host HCR when switching to/from a guest HCR. The saving of the
>> register is done once during cpu hypervisor initialization state and is
>> just restored after switch from guest.
>>
>> For fetching HCR_EL2 during kvm initialisation, a hyp call is made using
>> kvm_call_hyp and is helpful in NHVE case.
>>
>> For the hyp TLB maintenance code, __tlb_switch_to_host_vhe() is updated
>> to toggle the TGE bit with a RMW sequence, as we already do in
>> __tlb_switch_to_guest_vhe().
>>
>> The value of hcr_el2 is now stored in struct kvm_cpu_context as both host
>> and guest can now use this field in a common way.
> 
> 
>> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
>> index ca56537..05706b4 100644
>> --- a/arch/arm/include/asm/kvm_host.h
>> +++ b/arch/arm/include/asm/kvm_host.h
>> @@ -273,6 +273,8 @@ static inline void __cpu_init_stage2(void)
>>   	kvm_call_hyp(__init_stage2_translation);
>>   }
>>   
>> +static inline void __cpu_copy_hyp_conf(void) {}
>> +
> 
> I agree Mark's suggestion of adding 'host_ctxt' in here makes it clearer what it is.
ok.
> 
> 
>> diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
>> index 506386a..0dbe795 100644
>> --- a/arch/arm64/include/asm/kvm_emulate.h
>> +++ b/arch/arm64/include/asm/kvm_emulate.h
> 
> Hmmm, there is still a fair amount of churn due to moving the struct definition, but its
> easy enough to ignore as its mechanical. A preparatory patch that switched as may as
> possible to '*vcpu_hcr() = ' would cut the churn down some more, but I don't think its
> worth the extra effort.
> 
> 
>> diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
>> index a80a7ef..6e65cad 100644
>> --- a/arch/arm64/include/asm/kvm_hyp.h
>> +++ b/arch/arm64/include/asm/kvm_hyp.h
>> @@ -151,7 +151,7 @@ void __fpsimd_restore_state(struct user_fpsimd_state *fp_regs);
>>   bool __fpsimd_enabled(void);
>>   
>>   void activate_traps_vhe_load(struct kvm_vcpu *vcpu);
>> -void deactivate_traps_vhe_put(void);
>> +void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu);
> 
> I've forgotten why this is needed. You don't add a user of vcpu to
> deactivate_traps_vhe_put() in this patch.
> 
> 
>> diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
>> index b0b1478..006bd33 100644
>> --- a/arch/arm64/kvm/hyp/switch.c
>> +++ b/arch/arm64/kvm/hyp/switch.c
>> @@ -191,7 +194,7 @@ void activate_traps_vhe_load(struct kvm_vcpu *vcpu)
> 
>> -void deactivate_traps_vhe_put(void)
>> +void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu)
>>   {
>>   	u64 mdcr_el2 = read_sysreg(mdcr_el2);
>>   
> 
> Why does deactivate_traps_vhe_put() need the vcpu?
The vcpu is needed for the next patch, which saves/restores mdcr_el2. I
will move this change into that patch.
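i.e. something along these lines in that patch (a sketch under that
assumption, not the final hunk):

void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu)
{
	struct kvm_cpu_context *host_ctxt = vcpu->arch.host_cpu_context;

	/* restore the host's saved MDCR_EL2 instead of a recomputed value */
	write_sysreg(host_ctxt->mdcr_el2, mdcr_el2);
}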
> 
> 
>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
>> index 7732d0b..1b2e05b 100644
>> --- a/arch/arm64/include/asm/kvm_host.h
>> +++ b/arch/arm64/include/asm/kvm_host.h
>> @@ -458,6 +459,16 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
>>
>>   static inline void __cpu_init_stage2(void) {}
>>
>> +/**
>> + * __cpu_copy_hyp_conf - copy the boot hyp configuration registers
>> + *
>> + * It is called once per-cpu during CPU hyp initialisation.
>> + */
> 
> Is it just the boot cpu?
> 
> 
>> +static inline void __cpu_copy_hyp_conf(void)
>> +{
>> +	kvm_call_hyp(__kvm_populate_host_regs);
>> +}
>> +
> 
> 
>> diff --git a/arch/arm64/kvm/hyp/sysreg-sr.c b/arch/arm64/kvm/hyp/sysreg-sr.c
>> index 68d6f7c..68ddc0f 100644
>> --- a/arch/arm64/kvm/hyp/sysreg-sr.c
>> +++ b/arch/arm64/kvm/hyp/sysreg-sr.c
>> @@ -21,6 +21,7 @@
>>   #include <asm/kvm_asm.h>
>>   #include <asm/kvm_emulate.h>
>>   #include <asm/kvm_hyp.h>
>> +#include <asm/kvm_mmu.h>
> 
> ... what's kvm_mmu.h needed for?
> The __hyp_this_cpu_ptr() you add comes from kvm_asm.h.
> 
> /me tries it.
> 
> Heh, hyp_symbol_addr(). kvm_asm.h should include this, but can't because the
> kvm_ksym_ref() dependency is the other-way round. This is just going to bite us somewhere
> else later!
> If we want to fix it now, moving hyp_symbol_addr() to kvm_asm.h would fix it. It's
> generating adrp/add so the 'asm' label is fair, and it really should live with its EL1
> counterpart kvm_ksym_ref().
> 
Yes moving hyp_symbol_addr() fixes the dependency error.
> 
>> @@ -294,7 +295,7 @@ void kvm_vcpu_put_sysregs(struct kvm_vcpu *vcpu)
>>   	if (!has_vhe())
>>   		return;
>>   
>> -	deactivate_traps_vhe_put();
>> +	deactivate_traps_vhe_put(vcpu);
>>   
>>   	__sysreg_save_el1_state(guest_ctxt);
>>   	__sysreg_save_user_state(guest_ctxt);
>> @@ -316,3 +317,21 @@ void __hyp_text __kvm_enable_ssbs(void)
>>   	"msr	sctlr_el2, %0"
>>   	: "=&r" (tmp) : "L" (SCTLR_ELx_DSSBS));
>>   }
>> +
>> +/**
>> + * __kvm_populate_host_regs - Stores host register values
>> + *
>> + * This function acts as a function handler parameter for kvm_call_hyp and
>> + * may be called from EL1 exception level to fetch the register value.
>> + */
>> +void __hyp_text __kvm_populate_host_regs(void)
>> +{
>> +	struct kvm_cpu_context *host_ctxt;
> 
> 
>> +	if (has_vhe())
>> +		host_ctxt = this_cpu_ptr(&kvm_host_cpu_state);
>> +	else
>> +		host_ctxt = __hyp_this_cpu_ptr(kvm_host_cpu_state);
> 
> You can use __hyp_this_cpu_ptr() here, even on VHE.
> 
> For VHE the guts are the same and its simpler to use the same version in both cases.
> 
> 
> __hyp_this_cpu_ptr(sym) == hyp_symbol_addr(sym) + tpidr_el2;
> 
> hyp_symbol_addr() here is just to guarantee the address is generated based on where we're
> executing from, not loaded from a literal pool which would give us the link-time address.
> (or whenever kaslr applied the relocations). This matters for non-VHE because the compiler
> can't know the code has an EL2 address as well as its link-time address.
> 
> This doesn't matter for VHE, as there is no additional different address.
> 
> (the other trickery is on non-VHE the tpidr_el2 value isn't actually the same as the
> hosts.. but on VHE it is)
> 
> 
Thanks for the details.
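Based on that, __kvm_populate_host_regs() can drop the has_vhe() split
entirely. Untested sketch, using the same symbols as in the patch:

	void __hyp_text __kvm_populate_host_regs(void)
	{
		struct kvm_cpu_context *host_ctxt;

		/* Resolves correctly for both VHE and non-VHE */
		host_ctxt = __hyp_this_cpu_ptr(kvm_host_cpu_state);
		host_ctxt->hcr_el2 = read_sysreg(hcr_el2);
	}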

>> +	host_ctxt->hcr_el2 = read_sysreg(hcr_el2);
>> +}
> 
> 
>> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
>> index 9e350fd3..8e18f7f 100644
>> --- a/virt/kvm/arm/arm.c
>> +++ b/virt/kvm/arm/arm.c
>> @@ -1328,6 +1328,7 @@ static void cpu_hyp_reinit(void)
>>   		cpu_init_hyp_mode(NULL);
>>   
>>   	kvm_arm_init_debug();
>> +	__cpu_copy_hyp_conf();
> 
> Your commit message says:
> | The saving of the register is done once during cpu hypervisor initialization state
> 
> But cpu_hyp_reinit() is called each time secondary CPUs come online. Its also called as
> part of the cpu-idle mechanism via hyp_init_cpu_pm_notifier(). cpu-idle can ask the
> firmware to power-off the CPU until an interrupt becomes pending for it. KVM's EL2 state
> disappears when this happens, these calls take care of setting it back up again. On Juno,
> this can happen tens of times a second, and this adds an extra call to EL2.
> 
> init_subsystems() would be the alternative place for this, but it wouldn't catch CPUs that
> came online after booting. I think you need something in cpu_hyp_reinit() or
> __cpu_copy_hyp_conf() to ensure it only happens once per CPU.
OK, I will check on it.
> 
> I think you can test whether the HCR_EL2 value is zero, assuming zero means uninitialised.
> A VHE system would always set E2H, and a non-VHE system has to set RW.
It is not zero at this point; it is already set to its initial value.
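Since the value may therefore not be a reliable 'uninitialised' marker,
one option is an explicit per-cpu flag. Untested sketch; hyp_conf_copied
is a made-up variable that would need a DEFINE_PER_CPU() in arm.c:

	static inline void __cpu_copy_hyp_conf(void)
	{
		/*
		 * The flag lives in normal EL1 memory, so it survives
		 * the EL2 state being lost over cpu-idle power-off.
		 */
		if (__this_cpu_read(hyp_conf_copied))
			return;

		kvm_call_hyp(__kvm_populate_host_regs);
		__this_cpu_write(hyp_conf_copied, true);
	}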

Thanks,
Amit D
> 
> 
>>   	if (vgic_present)
>>   		kvm_vgic_init_cpu_hardware();
>>
> 
> 
> Thanks,
> 
> James
> 


* Re: [PATCH v6 3/6] arm64/kvm: context-switch ptrauth registers
  2019-02-26 18:31   ` James Morse
@ 2019-03-04 10:51     ` Amit Daniel Kachhap
  0 siblings, 0 replies; 41+ messages in thread
From: Amit Daniel Kachhap @ 2019-03-04 10:51 UTC (permalink / raw)
  To: James Morse
  Cc: linux-arm-kernel, Christoffer Dall, Marc Zyngier,
	Catalin Marinas, Will Deacon, Andrew Jones, Dave Martin,
	Ramana Radhakrishnan, kvmarm, Kristina Martsenko, linux-kernel,
	Mark Rutland, Julien Thierry

Hi James,

On 2/27/19 12:01 AM, James Morse wrote:
> Hi Amit,
> 
> On 19/02/2019 09:24, Amit Daniel Kachhap wrote:
>> From: Mark Rutland <mark.rutland@arm.com>
>>
>> When pointer authentication is supported, a guest may wish to use it.
>> This patch adds the necessary KVM infrastructure for this to work, with
>> a semi-lazy context switch of the pointer auth state.
>>
>> Pointer authentication feature is only enabled when VHE is built
>> in the kernel and present into CPU implementation so only VHE code
>> paths are modified.
> 
>> When we schedule a vcpu, we disable guest usage of pointer
>> authentication instructions and accesses to the keys. While these are
>> disabled, we avoid context-switching the keys. When we trap the guest
>> trying to use pointer authentication functionality, we change to eagerly
>> context-switching the keys, and enable the feature. The next time the
>> vcpu is scheduled out/in, we start again.
> 
>> However the host key registers
>> are saved in vcpu load stage as they remain constant for each vcpu
>> schedule.
> 
> (I think we can get away with doing this later ... with the hope of doing it never!)
> 
> 
>> Pointer authentication consists of address authentication and generic
>> authentication, and CPUs in a system might have varied support for
>> either. Where support for either feature is not uniform, it is hidden
>> from guests via ID register emulation, as a result of the cpufeature
>> framework in the host.
>>
>> Unfortunately, address authentication and generic authentication cannot
>> be trapped separately, as the architecture provides a single EL2 trap
>> covering both. If we wish to expose one without the other, we cannot
>> prevent a (badly-written) guest from intermittently using a feature
>> which is not uniformly supported (when scheduled on a physical CPU which
>> supports the relevant feature). Hence, this patch expects both type of
>> authentication to be present in a cpu.
> 
> 
>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
>> index 2f1bb86..1bacf78 100644
>> --- a/arch/arm64/include/asm/kvm_host.h
>> +++ b/arch/arm64/include/asm/kvm_host.h
>> @@ -146,6 +146,18 @@ enum vcpu_sysreg {
> 
>> +static inline bool kvm_supports_ptrauth(void)
>> +{
>> +	return has_vhe() && system_supports_address_auth() &&
>> +				system_supports_generic_auth();
>> +}
> 
> Do we intend to support the implementation defined algorithm? I'd assumed not.
> 
> system_supports_address_auth() is defined as:
> | static inline bool system_supports_address_auth(void)
> | {
> | 	return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) &&
> | 		(cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) ||
> | 		cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF));
> | }
> 
> 
> So we could return true from kvm_supports_ptrauth() even if we only support the imp-def
> algorithm.
> 
> I think we should hide the imp-def ptrauth support as KVM hides all other imp-def
> features. This lets us avoid trying to migrate values that have been signed with the
> imp-def algorithm.
I suppose the imp-def algorithm should not make any difference for
migration, even if the two systems use different imp-def algorithms.
The PAC field in the LR is generated at runtime, so the only things
that matter are the key values and SP, and those are taken care of.
Also, the model I am testing on uses the imp-def algorithm. Or am I
missing something?
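That said, if we do decide to hide the imp-def algorithm, the check
could use the architected caps directly, as you suggest. Sketch,
assuming the ARM64_HAS_*_AUTH_ARCH caps from the cpufeature series:

	static inline bool kvm_supports_ptrauth(void)
	{
		return has_vhe() &&
		       cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) &&
		       cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_ARCH);
	}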
> 
> I'm worried that it could include some value that we can't migrate (e.g. the SoC serial
> number). Does the ARM-ARM say this can't happen?
> 
> All I can find is D5.1.5 "Pointer authentication in AArch64 state" of DDI0487D.a which has
> this clause for the imp-def algorithm:
> | For a set of arguments passed to the function, must give the same result for all PEs
> | that a thread of execution could migrate between.
> 
> ... with KVM we've extended the scope of migration significantly.
> 
> 
> Could we check the cpus_have_const_cap() values for the two architected algorithms directly?
> 
> 
>> diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
>> index 6e65cad..09e061a 100644
>> --- a/arch/arm64/include/asm/kvm_hyp.h
>> +++ b/arch/arm64/include/asm/kvm_hyp.h
>> @@ -153,6 +153,13 @@ bool __fpsimd_enabled(void);
>>   void activate_traps_vhe_load(struct kvm_vcpu *vcpu);
>>   void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu);
>>   
>> +void __ptrauth_switch_to_guest(struct kvm_vcpu *vcpu,
>> +			       struct kvm_cpu_context *host_ctxt,
>> +			       struct kvm_cpu_context *guest_ctxt);
>> +void __ptrauth_switch_to_host(struct kvm_vcpu *vcpu,
>> +			      struct kvm_cpu_context *guest_ctxt,
>> +			      struct kvm_cpu_context *host_ctxt);
> 
> 
> Why do you need the vcpu and the guest_ctxt?
> Would it be possible for these to just take the vcpu, and to pull the host context from
> the per-cpu variable?
> This would avoid any future bugs where the ctxt's are the wrong way round, taking two is
> unusual in KVM, but necessary here.
> 
> As you're calling these from asm you want the compiler to do as much of the type mangling
> as possible.
Yes, it is possible. I will implement it in the upcoming version.
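Something along these lines for the guest side, with the contexts taken
from the vcpu instead of being passed in. Untested sketch; the host keys
are assumed to have been saved already, at vcpu_load/trap time:

	void __ptrauth_switch_to_guest(struct kvm_vcpu *vcpu)
	{
		struct kvm_cpu_context *guest_ctxt = &vcpu->arch.ctxt;

		if (!__ptrauth_is_enabled(vcpu))
			return;

		__ptrauth_restore_state(guest_ctxt);
	}

Pulling the contexts from the vcpu also removes the risk of the two
context arguments being swapped at a call site.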
> 
> 
>> diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
>> index 4e2fb87..5cac605 100644
>> --- a/arch/arm64/kernel/traps.c
>> +++ b/arch/arm64/kernel/traps.c
>> @@ -749,6 +749,7 @@ static const char *esr_class_str[] = {
>>   	[ESR_ELx_EC_CP14_LS]		= "CP14 LDC/STC",
>>   	[ESR_ELx_EC_FP_ASIMD]		= "ASIMD",
>>   	[ESR_ELx_EC_CP10_ID]		= "CP10 MRC/VMRS",
>> +	[ESR_ELx_EC_PAC]		= "Pointer authentication trap",
>>   	[ESR_ELx_EC_CP14_64]		= "CP14 MCRR/MRRC",
>>   	[ESR_ELx_EC_ILL]		= "PSTATE.IL",
>>   	[ESR_ELx_EC_SVC32]		= "SVC (AArch32)",
> 
> Is this needed? Or was it just missing from the parts already merged. (should it be a
> separate patch for the arch code)
Yes, you are right; it looks like this was missed from commit
aa6eece8ec5095e479. I suppose it can be posted as a separate patch.
> 
> It looks like KVM only prints it from kvm_handle_unknown_ec(), which would never happen as
> arm_exit_handlers[] has an entry for ESR_ELx_EC_PAC.
yes.
> 
> Is it true that the host could never take this trap either?, as it can only be taken when
> HCR_EL2.TGE is clear.
> (breadcrumbs from the ESR_ELx definition to "Trap to EL2 of EL0 accesses to Pointer
> authentication instructions")
>
Yes, most of the ptrauth instructions are treated as NOPs, but some
instructions like PACGA and XPAC [0] are always enabled and may trap if
CONFIG_ARM64_PTR_AUTH is disabled. In VHE mode this does not trap, as
HCR_EL2.TGE is set. But in non-VHE mode this causes a hang instead of a
trap when tested with HCR_EL2.API=0. I am checking this further.

[0]: DDI0487D_a_arm (page: D5-2390)

>> diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
>> index 675fdc1..b78cc15 100644
>> --- a/arch/arm64/kvm/hyp/entry.S
>> +++ b/arch/arm64/kvm/hyp/entry.S
>> @@ -64,6 +64,12 @@ ENTRY(__guest_enter)
>>   
>>   	add	x18, x0, #VCPU_CONTEXT
>>   
>> +#ifdef	CONFIG_ARM64_PTR_AUTH
>> +	// Prepare parameter for __ptrauth_switch_to_guest(vcpu, host, guest).
>> +	mov	x2, x18
>> +	bl	__ptrauth_switch_to_guest
>> +#endif
> 
> This calls back into C code with x18 clobbered... is that allowed?
> x18 has this weird platform-register/temporary-register behaviour, that depends on the
> compiler. The PCS[0] has a note that 'hand-coded assembler should avoid it entirely'!
Yes, I agree with you that x18 may get clobbered.
> 
> Can we assume that compiler generated code is using it from something, and depends on that
> value, which means we need to preserve or save/restore it when calling into C.
> 
> 
> The upshot? Could you use one of the callee saved registers instead of x18, then move it
> after your C call.
Yes, using a callee-saved register is an option.
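For the enter path that could look like the below. Sketch only; it
assumes x28 holds no live value here (the host's x28 was already saved
into the host context) and that nothing relies on x0/x1 after the call:

	// Preserve the guest context pointer across the C call
	mov	x28, x18
	mov	x2, x18				// guest ctxt argument
	bl	__ptrauth_switch_to_guest	// may clobber x0-x18
	mov	x18, x28			// recover ctxt pointer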
> 
> 
>> @@ -118,6 +124,17 @@ ENTRY(__guest_exit)
>>   
>>   	get_host_ctxt	x2, x3
>>   
>> +#ifdef	CONFIG_ARM64_PTR_AUTH
>> +	// Prepare parameter for __ptrauth_switch_to_host(vcpu, guest, host).
>> +	// Save x0, x2 which are used later in callee saved registers.
>> +	mov	x19, x0
>> +	mov	x20, x2
>> +	sub	x0, x1, #VCPU_CONTEXT
> 
>> +	ldr	x29, [x2, #CPU_XREG_OFFSET(29)]
> 
> Is this to make the stack-trace look plausible?
> 
> 
>> +	bl	__ptrauth_switch_to_host
>> +	mov	x0, x19
>> +	mov	x2, x20
>> +#endif
> 
> (ditching the host_ctxt would let you move this above get_host_ctxt and the need to
> preserve its result)
ok.
> 
> 
>> diff --git a/arch/arm64/kvm/hyp/ptrauth-sr.c b/arch/arm64/kvm/hyp/ptrauth-sr.c
>> new file mode 100644
>> index 0000000..528ee6e
>> --- /dev/null
>> +++ b/arch/arm64/kvm/hyp/ptrauth-sr.c
>> @@ -0,0 +1,101 @@
> 
>> +static __always_inline bool __ptrauth_is_enabled(struct kvm_vcpu *vcpu)
> 
> This __always_inline still looks weird! You said it might be needed to make it function
> pointer safe. If it is, could you add a comment explaining why.
> 
> (alternatives would be making it an #ifdef, disabling ptrauth for the whole file, or
> annotating this function too)
OK, a __noptrauth annotation may be better, as some functions in this
file already use it.
> 
> 
>> +#define __ptrauth_save_key(regs, key)						\
>> +({										\
>> +	regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1);	\
>> +	regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1);	\
>> +})
>> +
>> +static __always_inline void __ptrauth_save_state(struct kvm_cpu_context *ctxt)
>> +{
>> +	__ptrauth_save_key(ctxt->sys_regs, APIA);
>> +	__ptrauth_save_key(ctxt->sys_regs, APIB);
>> +	__ptrauth_save_key(ctxt->sys_regs, APDA);
>> +	__ptrauth_save_key(ctxt->sys_regs, APDB);
>> +	__ptrauth_save_key(ctxt->sys_regs, APGA);
>> +}
>> +
>> +#define __ptrauth_restore_key(regs, key) 					\
>> +({										\
>> +	write_sysreg_s(regs[key ## KEYLO_EL1], SYS_ ## key ## KEYLO_EL1);	\
>> +	write_sysreg_s(regs[key ## KEYHI_EL1], SYS_ ## key ## KEYHI_EL1);	\
>> +})
>> +
>> +static __always_inline void __ptrauth_restore_state(struct kvm_cpu_context *ctxt)
>> +{
>> +	__ptrauth_restore_key(ctxt->sys_regs, APIA);
>> +	__ptrauth_restore_key(ctxt->sys_regs, APIB);
>> +	__ptrauth_restore_key(ctxt->sys_regs, APDA);
>> +	__ptrauth_restore_key(ctxt->sys_regs, APDB);
>> +	__ptrauth_restore_key(ctxt->sys_regs, APGA);
> 
> Are writes to these registers self synchronising? I'd assume not, as they come as a pair.
> 
> I think this means we need an isb() here to ensure that when restoring the host registers
> the next host authentication attempt uses the key we wrote here? We don't need it for the
> guest, so we could put it at the end of __ptrauth_switch_to_host().
Yes, an isb() is required.
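I will add it at the end of the host restore path. Untested sketch,
using the single-vcpu-argument form discussed above:

	void __ptrauth_switch_to_host(struct kvm_vcpu *vcpu)
	{
		struct kvm_cpu_context *guest_ctxt = &vcpu->arch.ctxt;
		struct kvm_cpu_context *host_ctxt = vcpu->arch.host_cpu_context;

		if (!__ptrauth_is_enabled(vcpu))
			return;

		__ptrauth_save_state(guest_ctxt);
		__ptrauth_restore_state(host_ctxt);
		/* Ensure later host authentication uses the restored keys */
		isb();
	}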
> 
> 
>> +/**
>> + * This function changes the key so assign Pointer Authentication safe
>> + * GCC attribute if protected by it.
>> + */
> 
> ... this comment is the reminder for 'once we have host kernel ptrauth support'? could we
> add something to say that kernel support is when the attribute would be needed. Otherwise
> it reads like we're waiting for GCC support.
ok.
> 
> (I assume LLVM has a similar attribute ... is it exactly the same?)
> 
> 
>> +void __ptrauth_switch_to_guest(struct kvm_vcpu *vcpu,
>> +				  struct kvm_cpu_context *host_ctxt,
>> +				  struct kvm_cpu_context *guest_ctxt)
>> +{
> 
>> +}
> 
>> +void __ptrauth_switch_to_host(struct kvm_vcpu *vcpu,
>> +				 struct kvm_cpu_context *guest_ctxt,
>> +				 struct kvm_cpu_context *host_ctxt)
>> +{
> 
>> +}
> 
> 
> Could you add NOKPROBE_SYMBOL(symbol_name) for these. This adds them to the kprobe
> blacklist as they aren't part of the __hyp_text. (and don't need to be as its VHE only).
> Without this, you can patch a software-breakpoint in here, which KVM won't handle as its
> already switched VBAR for entry to the guest.
ok.
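i.e. something like this after the function definitions:

	NOKPROBE_SYMBOL(__ptrauth_switch_to_guest);
	NOKPROBE_SYMBOL(__ptrauth_switch_to_host);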
> 
> Details in 7d82602909ed ("KVM: arm64: Forbid kprobing of the VHE world-switch code")
> 
> (... this turned up in a kernel version later than you based on ...)
> 
> 
>> +/**
>> + * kvm_arm_vcpu_ptrauth_reset - resets ptrauth for vcpu schedule
>> + *
>> + * @vcpu: The VCPU pointer
>> + *
>> + * This function may be used to disable ptrauth and use it in a lazy context
>> + * via traps. However host key registers are saved here as they dont change
>> + * during host/guest switch.
>> + */
>> +void kvm_arm_vcpu_ptrauth_reset(struct kvm_vcpu *vcpu)
>> +{
>> +	struct kvm_cpu_context *host_ctxt;
>> +
>> +	if (kvm_supports_ptrauth()) {
>> +		kvm_arm_vcpu_ptrauth_disable(vcpu);
>> +		host_ctxt = vcpu->arch.host_cpu_context;
> 
>> +		__ptrauth_save_state(host_ctxt);
> 
> Could you equally do this host-save in kvm_arm_vcpu_ptrauth_trap() before
> kvm_arm_vcpu_ptrauth_enable()? This would avoid saving the keys if the guest never gets
> the opportunity to change them. At the moment we do it on every vcpu_load().
OK, nice suggestion. It works fine in kvm_arm_vcpu_ptrauth_trap().
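i.e. roughly the following. Untested sketch; kvm_arm_vcpu_ptrauth_enable()
and the trap handler name are from this patch:

	void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu)
	{
		if (kvm_supports_ptrauth()) {
			/* Save host keys lazily, on first guest use */
			__ptrauth_save_state(vcpu->arch.host_cpu_context);
			kvm_arm_vcpu_ptrauth_enable(vcpu);
		} else {
			kvm_inject_undefined(vcpu);
		}
	}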
> 
> 
> As kvm_arm_vcpu_ptrauth_reset() isn't used as part of the world-switch, could we keep it
> outside the 'hyp' directory? The Makefile for that directory expects to be building the
> hyp text, so it disables KASAN, KCOV and friends.
> kvm_arm_vcpu_ptrauth_reset() is safe for all of these, and its good for it to be covered
> by this debug infrastructure. Could you move it to guest.c?
ok.
> 
> 
>> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
>> index a6c9381..12529df 100644
>> --- a/arch/arm64/kvm/sys_regs.c
>> +++ b/arch/arm64/kvm/sys_regs.c
>> @@ -1045,9 +1071,10 @@ static u64 read_id_reg(struct sys_reg_desc const *r, bool raz)
>>   					 (0xfUL << ID_AA64ISAR1_API_SHIFT) |
>>   					 (0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
>>   					 (0xfUL << ID_AA64ISAR1_GPI_SHIFT);
>> -		if (val & ptrauth_mask)
>> +		if (!kvm_supports_ptrauth()) {
>>   			kvm_debug("ptrauth unsupported for guests, suppressing\n");
>> -		val &= ~ptrauth_mask;
>> +			val &= ~ptrauth_mask;
>> +		}
> 
> This means that debug message gets printed even on systems that don't support ptrauth in
> hardware. (val&ptrauth_mask) used to cut them out, now kvm_supports_ptrauth() fails if the
> static keys are false, and we end up printing this message.
> Now that KVM supports pointer-auth, I don't think the debug message is useful, can we
> remove it? (it would now mean 'you didn't ask for ptrauth to be turned on')
ok.
> 
> 
> Could we always mask out the imp-def bits?
I guess not, for the reasons explained above.
> 
> 
> This patch needs to be merged together with 4 & 5 so the user-abi is as it should be. This
> means the check_present/restrictions thing needs sorting so they're ready together.
ok.

Thanks,
Amit D
> 
> Thanks,
> 
> James
> 
> 
> [0] http://infocenter.arm.com/help/topic/com.arm.doc.ihi0055b/IHI0055B_aapcs64.pdf
> 


* Re: [PATCH v6 4/6] arm64/kvm: add a userspace option to enable pointer authentication
  2019-02-26 18:33   ` James Morse
@ 2019-03-04 10:56     ` Amit Daniel Kachhap
  0 siblings, 0 replies; 41+ messages in thread
From: Amit Daniel Kachhap @ 2019-03-04 10:56 UTC (permalink / raw)
  To: James Morse, linux-arm-kernel
  Cc: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	Kristina Martsenko, linux-kernel, Mark Rutland, Julien Thierry

Hi James,

On 2/27/19 12:03 AM, James Morse wrote:
> Hi Amit,
> 
> On 19/02/2019 09:24, Amit Daniel Kachhap wrote:
>> This feature will allow the KVM guest to allow the handling of
>> pointer authentication instructions or to treat them as undefined
>> if not set. It uses the existing vcpu API KVM_ARM_VCPU_INIT to
>> supply this parameter instead of creating a new API.
>>
>> A new register is not created to pass this parameter via
>> SET/GET_ONE_REG interface as just a flag (KVM_ARM_VCPU_PTRAUTH)
>> supplied is enough to enable this feature.
> 
> and an attempt to restore the id register with the other version would fail.
> 
> 
>> diff --git a/Documentation/arm64/pointer-authentication.txt b/Documentation/arm64/pointer-authentication.txt
>> index a25cd21..0529a7d 100644
>> --- a/Documentation/arm64/pointer-authentication.txt
>> +++ b/Documentation/arm64/pointer-authentication.txt
>> @@ -82,7 +82,8 @@ pointers).
>>   Virtualization
>>   --------------
>>   
>> -Pointer authentication is not currently supported in KVM guests. KVM
>> -will mask the feature bits from ID_AA64ISAR1_EL1, and attempted use of
>> -the feature will result in an UNDEFINED exception being injected into
>> -the guest.
> 
>> +Pointer authentication is enabled in KVM guest when virtual machine is
>> +created by passing a flag (KVM_ARM_VCPU_PTRAUTH)
> 
> (This is still mixing VM and VCPU)
> 
> 
>> + requesting this feature to be enabled.
> 
> .. on each vcpu?
> 
> 
>> +Without this flag, pointer authentication is not enabled
>> +in KVM guests and attempted use of the feature will result in an UNDEFINED
>> +exception being injected into the guest.
> 
> 'guests' here suggests its a VM property. If you set it on some VCPU but not others KVM
> will generate undefs instead of enabling the feature. (which is the right thing to do)
> 
> I think it needs to be clear this is a per-vcpu property.
ok.
> 
> 
>> diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
>> index 97c3478..5f82ca1 100644
>> --- a/arch/arm64/include/uapi/asm/kvm.h
>> +++ b/arch/arm64/include/uapi/asm/kvm.h
>> @@ -102,6 +102,7 @@ struct kvm_regs {
>>   #define KVM_ARM_VCPU_EL1_32BIT		1 /* CPU running a 32bit VM */
>>   #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
>>   #define KVM_ARM_VCPU_PMU_V3		3 /* Support guest PMUv3 */
> 
>> +#define KVM_ARM_VCPU_PTRAUTH		4 /* VCPU uses address authentication */
> 
> Just address authentication? I agree with Mark we should have two bits to match what gets
> exposed to EL0. One would then be address, the other generic.
ok.
> 
> 
>> diff --git a/arch/arm64/kvm/hyp/ptrauth-sr.c b/arch/arm64/kvm/hyp/ptrauth-sr.c
>> index 528ee6e..6846a23 100644
>> --- a/arch/arm64/kvm/hyp/ptrauth-sr.c
>> +++ b/arch/arm64/kvm/hyp/ptrauth-sr.c
>> @@ -93,9 +93,23 @@ void kvm_arm_vcpu_ptrauth_reset(struct kvm_vcpu *vcpu)
> 
>> +/**
>> + * kvm_arm_vcpu_ptrauth_allowed - checks if ptrauth feature is allowed by user
>> + *
>> + * @vcpu: The VCPU pointer
>> + *
>> + * This function will be used to check userspace option to have ptrauth or not
>> + * in the guest kernel.
>> + */
>> +bool kvm_arm_vcpu_ptrauth_allowed(const struct kvm_vcpu *vcpu)
>> +{
>> +	return kvm_supports_ptrauth() &&
>> +		test_bit(KVM_ARM_VCPU_PTRAUTH, vcpu->arch.features);
>> +}
> 
> This isn't used from world-switch, could it be moved to guest.c?
Yes, sure.
> 
> 
>> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
>> index 12529df..f7bcc60 100644
>> --- a/arch/arm64/kvm/sys_regs.c
>> +++ b/arch/arm64/kvm/sys_regs.c
>> @@ -1055,7 +1055,7 @@ static bool access_cntp_cval(struct kvm_vcpu *vcpu,
>>   }
>>   
>>   /* Read a sanitised cpufeature ID register by sys_reg_desc */
>> -static u64 read_id_reg(struct sys_reg_desc const *r, bool raz)
>> +static u64 read_id_reg(struct kvm_vcpu *vcpu, struct sys_reg_desc const *r, bool raz)
> 
> (It might be easier on the reviewer to move these mechanical changes to an earlier patch)
Yes, after including some of Dave's SVE patches this won't be required.

Thanks,
Amit D
> 
> 
> Looks good,
> 
> Thanks,
> 
> James
> 


* Re: [kvmtool PATCH v6 6/6] arm/kvm: arm64: Add a vcpu feature for pointer authentication
  2019-03-01 11:24       ` Dave P Martin
@ 2019-03-04 11:08         ` Amit Daniel Kachhap
  2019-03-05 11:11           ` Dave Martin
  0 siblings, 1 reply; 41+ messages in thread
From: Amit Daniel Kachhap @ 2019-03-04 11:08 UTC (permalink / raw)
  To: Dave P Martin
  Cc: linux-arm-kernel, Marc Zyngier, Catalin Marinas, Will Deacon,
	Kristina Martsenko, kvmarm, Ramana Radhakrishnan, linux-kernel


Hi Dave,

On 3/1/19 4:54 PM, Dave P Martin wrote:
> On Fri, Mar 01, 2019 at 10:37:54AM +0000, Amit Daniel Kachhap wrote:
>> Hi,
>>
>> On 2/21/19 9:24 PM, Dave Martin wrote:
>>> On Tue, Feb 19, 2019 at 02:54:31PM +0530, Amit Daniel Kachhap wrote:
> 
> [...]
> 
>>>> diff --git a/arm/aarch64/include/kvm/kvm-config-arch.h b/arm/aarch64/include/kvm/kvm-config-arch.h
>>>> index 04be43d..2074684 100644
>>>> --- a/arm/aarch64/include/kvm/kvm-config-arch.h
>>>> +++ b/arm/aarch64/include/kvm/kvm-config-arch.h
>>>> @@ -8,7 +8,9 @@
>>>>    			"Create PMUv3 device"),				\
>>>>    	OPT_U64('\0', "kaslr-seed", &(cfg)->kaslr_seed,			\
>>>>    			"Specify random seed for Kernel Address Space "	\
>>>> -			"Layout Randomization (KASLR)"),
>>>> +			"Layout Randomization (KASLR)"),		\
>>>> +	OPT_BOOLEAN('\0', "ptrauth", &(cfg)->has_ptrauth,		\
>>>> +			"Enable address authentication"),
>>>
>>> Nit: doesn't this enable address *and* generic authentication?  The
>>> discussion on what capababilities and enables the ABI exposes probably
>>> needs to conclude before we can finalise this here.
>> ok.
>>>
>>> However, I would recommend that we provide a single option here that
>>> turns both address authentication and generic authentication on, even
>>> if the ABI treats them independently.  This is expected to be the common
>>> case by far.
>> ok
>>>
>>> We can always add more fine-grained options later if it turns out to be
>>> necessary.
>> Mark suggested to provide 2 flags [1] for Address and Generic
>> authentication so I was thinking of adding 2 features like,
>>
>> +#define KVM_ARM_VCPU_PTRAUTH_ADDR		4 /* CPU uses pointer address
>> authentication */
>> +#define KVM_ARM_VCPU_PTRAUTH_GENERIC		5 /* CPU uses pointer generic
>> authentication */
>>
>> And supply both of them concatenated in VCPU_INIT stage. Kernel KVM
>> would expect both feature requested together.
> 
> Seems reasonable.  Do you mean the kernel would treat it as an error if
> only one of these flags is passed to KVM_ARM_VCPU_INIT, or would KVM
> simply treat them as independent?
Ptrauth would be used only if both flags are passed together; otherwise
it stays disabled. This is just to finalise the user-side ABI for now;
KVM can be updated later.
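i.e. a helper along these lines at vcpu init time. Sketch only;
vcpu_ptrauth_requested() is a made-up name, and the two feature bits are
the ones proposed above:

	static bool vcpu_ptrauth_requested(const struct kvm_vcpu *vcpu)
	{
		/* Both flags together, or ptrauth stays disabled */
		return test_bit(KVM_ARM_VCPU_PTRAUTH_ADDR, vcpu->arch.features) &&
		       test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features);
	}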

Thanks,
Amit D
> 
> [...]
> 
>>>> diff --git a/arm/kvm-cpu.c b/arm/kvm-cpu.c
>>>> index 7780251..4ac80f8 100644
>>>> --- a/arm/kvm-cpu.c
>>>> +++ b/arm/kvm-cpu.c
>>>> @@ -68,6 +68,12 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned long cpu_id)
>>>>    		vcpu_init.features[0] |= (1UL << KVM_ARM_VCPU_PSCI_0_2);
>>>>    	}
>>>>    
>>>> +	/* Set KVM_ARM_VCPU_PTRAUTH if available */
>>>> +	if (kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH)) {
>>>> +		if (kvm->cfg.arch.has_ptrauth)
>>>> +			vcpu_init.features[0] |= ARM_VCPU_PTRAUTH_FEATURE;
>>>> +	}
>>>> +
>>>
>>> I'm not too keen on requiring a dummy #define for AArch32 here.  How do
>>> we handle other subarch-specific feature flags?  Is there something we
>>> can reuse?
>> I will check it.
> 
> OK
> 
> Cheers
> ---Dave
> 


* Re: [kvmtool PATCH v6 6/6] arm/kvm: arm64: Add a vcpu feature for pointer authentication
  2019-03-04 11:08         ` Amit Daniel Kachhap
@ 2019-03-05 11:11           ` Dave Martin
  0 siblings, 0 replies; 41+ messages in thread
From: Dave Martin @ 2019-03-05 11:11 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: Marc Zyngier, Catalin Marinas, Will Deacon, linux-kernel,
	Kristina Martsenko, Ramana Radhakrishnan, kvmarm,
	linux-arm-kernel

On Mon, Mar 04, 2019 at 04:38:18PM +0530, Amit Daniel Kachhap wrote:
> 
> Hi Dave,
> 
> On 3/1/19 4:54 PM, Dave P Martin wrote:
> >On Fri, Mar 01, 2019 at 10:37:54AM +0000, Amit Daniel Kachhap wrote:
> >>Hi,
> >>
> >>On 2/21/19 9:24 PM, Dave Martin wrote:
> >>>On Tue, Feb 19, 2019 at 02:54:31PM +0530, Amit Daniel Kachhap wrote:
> >
> >[...]
> >
> >>>>diff --git a/arm/aarch64/include/kvm/kvm-config-arch.h b/arm/aarch64/include/kvm/kvm-config-arch.h
> >>>>index 04be43d..2074684 100644
> >>>>--- a/arm/aarch64/include/kvm/kvm-config-arch.h
> >>>>+++ b/arm/aarch64/include/kvm/kvm-config-arch.h
> >>>>@@ -8,7 +8,9 @@
> >>>>   			"Create PMUv3 device"),				\
> >>>>   	OPT_U64('\0', "kaslr-seed", &(cfg)->kaslr_seed,			\
> >>>>   			"Specify random seed for Kernel Address Space "	\
> >>>>-			"Layout Randomization (KASLR)"),
> >>>>+			"Layout Randomization (KASLR)"),		\
> >>>>+	OPT_BOOLEAN('\0', "ptrauth", &(cfg)->has_ptrauth,		\
> >>>>+			"Enable address authentication"),
> >>>
> >>>Nit: doesn't this enable address *and* generic authentication?  The
> >>>discussion on what capababilities and enables the ABI exposes probably
> >>>needs to conclude before we can finalise this here.
> >>ok.
> >>>
> >>>However, I would recommend that we provide a single option here that
> >>>turns both address authentication and generic authentication on, even
> >>>if the ABI treats them independently.  This is expected to be the common
> >>>case by far.
> >>ok
> >>>
> >>>We can always add more fine-grained options later if it turns out to be
> >>>necessary.
> >>Mark suggested to provide 2 flags [1] for Address and Generic
> >>authentication so I was thinking of adding 2 features like,
> >>
> >>+#define KVM_ARM_VCPU_PTRAUTH_ADDR		4 /* CPU uses pointer address
> >>authentication */
> >>+#define KVM_ARM_VCPU_PTRAUTH_GENERIC		5 /* CPU uses pointer generic
> >>authentication */
> >>
> >>And supply both of them concatenated in VCPU_INIT stage. Kernel KVM
> >>would expect both feature requested together.
> >
> >Seems reasonable.  Do you mean the kernel would treat it as an error if
> >only one of these flags is passed to KVM_ARM_VCPU_INIT, or would KVM
> >simply treat them as independent?
> If both flags are passed together then only start using ptrauth otherwise
> keep ptrauth disabled. This is just to finalize the user side abi as of now
> and KVM can be updated later.

If just one flag is passed, I think KVM_ARM_VCPU_INIT should just fail.
Otherwise we risk userspace becoming accidentally reliant on behaviour
that may change in the future.

Cheers
---Dave


Thread overview: 41+ messages
2019-02-19  9:24 [PATCH v6 0/6] Add ARMv8.3 pointer authentication for kvm guest Amit Daniel Kachhap
2019-02-19  9:24 ` [PATCH v6 1/6] arm64/kvm: preserve host HCR_EL2 value Amit Daniel Kachhap
2019-02-21 11:50   ` Mark Rutland
2019-02-25 18:09     ` Marc Zyngier
2019-02-28  6:43     ` Amit Daniel Kachhap
2019-02-21 15:49   ` Dave Martin
2019-03-01  5:56     ` Amit Daniel Kachhap
2019-02-25 17:39   ` James Morse
2019-02-26 10:06     ` James Morse
2019-03-02 11:09     ` Amit Daniel Kachhap
2019-02-19  9:24 ` [PATCH v6 2/6] arm64/kvm: preserve host MDCR_EL2 value Amit Daniel Kachhap
2019-02-21 11:57   ` Mark Rutland
2019-02-21 15:51   ` Dave Martin
2019-03-01  6:10     ` Amit Daniel Kachhap
2019-02-19  9:24 ` [PATCH v6 3/6] arm64/kvm: context-switch ptrauth registers Amit Daniel Kachhap
2019-02-21 12:29   ` Mark Rutland
2019-02-21 15:51     ` Dave Martin
2019-03-01  6:17       ` Amit Daniel Kachhap
2019-02-28  9:07     ` Amit Daniel Kachhap
2019-02-21 15:53   ` Dave Martin
2019-03-01  9:35     ` Amit Daniel Kachhap
2019-02-26 18:31   ` James Morse
2019-03-04 10:51     ` Amit Daniel Kachhap
2019-02-19  9:24 ` [PATCH v6 4/6] arm64/kvm: add a userspace option to enable pointer authentication Amit Daniel Kachhap
2019-02-21 12:34   ` Mark Rutland
2019-02-28  9:25     ` Amit Daniel Kachhap
2019-02-21 15:53   ` Dave Martin
2019-03-01  9:41     ` Amit Daniel Kachhap
2019-03-01 12:22       ` Dave P Martin
2019-02-26 18:33   ` James Morse
2019-03-04 10:56     ` Amit Daniel Kachhap
2019-02-19  9:24 ` [PATCH v6 5/6] arm64/kvm: control accessibility of ptrauth key registers Amit Daniel Kachhap
2019-02-21 15:53   ` Dave Martin
2019-02-26 18:34   ` James Morse
2019-02-19  9:24 ` [kvmtool PATCH v6 6/6] arm/kvm: arm64: Add a vcpu feature for pointer authentication Amit Daniel Kachhap
2019-02-21 15:54   ` Dave Martin
2019-03-01 10:37     ` Amit Daniel Kachhap
2019-03-01 11:24       ` Dave P Martin
2019-03-04 11:08         ` Amit Daniel Kachhap
2019-03-05 11:11           ` Dave Martin
2019-02-26 18:03 ` [PATCH v6 0/6] Add ARMv8.3 pointer authentication for kvm guest James Morse
