* [PATCH v8 0/9] Add ARMv8.3 pointer authentication for kvm guest
@ 2019-04-02  2:27 Amit Daniel Kachhap
  2019-04-02  2:27 ` [PATCH v8 1/9] KVM: arm64: Propagate vcpu into read_id_reg() Amit Daniel Kachhap
                   ` (8 more replies)
  0 siblings, 9 replies; 30+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-02  2:27 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	Kristina Martsenko, linux-kernel, Amit Daniel Kachhap,
	Mark Rutland, James Morse, Julien Thierry

Hi,

This patch series adds pointer authentication support for KVM guests and
is based on top of Linux 5.1-rc3. The basic patches in this series were
originally posted by Mark Rutland earlier [1,2], and those postings contain
some history of this work.

Extension Overview:
=============================================

The ARMv8.3 pointer authentication extension adds functionality to detect
modification of pointer values, mitigating certain classes of attack such as
stack smashing, and making return oriented programming attacks harder.

The extension introduces the concept of a pointer authentication code (PAC),
which is stored in some upper bits of pointers. Each PAC is derived from the
original pointer, another 64-bit value (e.g. the stack pointer), and a secret
128-bit key.

New instructions are added which can be used to:

* Insert a PAC into a pointer
* Strip a PAC from a pointer
* Authenticate and strip a PAC from a pointer
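
As a rough illustration (not code from this series), the address-key
instructions can be used from C via inline assembly. The helpers below are
a hedged sketch: the APIA key and the modifier are chosen arbitrarily, an
ARMv8.3-capable assembler is assumed, and HINT-space variants such as
PACIASP/AUTIASP exist as well:

        /* Sign a pointer with the APIA key and an arbitrary modifier. */
        static inline void *pacia_sign(void *ptr, unsigned long mod)
        {
                asm("pacia %0, %1" : "+r" (ptr) : "r" (mod));
                return ptr;             /* PAC now lives in the upper bits */
        }

        /* Authenticate: a PAC mismatch corrupts the pointer so that a
         * later dereference faults. */
        static inline void *pacia_auth(void *ptr, unsigned long mod)
        {
                asm("autia %0, %1" : "+r" (ptr) : "r" (mod));
                return ptr;
        }

        /* Strip the PAC without any check (instruction-pointer variant). */
        static inline void *pac_strip(void *ptr)
        {
                asm("xpaci %0" : "+r" (ptr));
                return ptr;
        }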

A detailed description of ARMv8.3 pointer authentication support in
userspace/kernel can be found in Kristina's generic pointer authentication
patch series [3].

KVM guest work:
==============================================

If pointer authentication is enabled for a KVM guest then the new PAC
instructions will not trap to EL2. If it is not enabled, they are executed
as NOPs if they are in the HINT region, or trapped to EL2 as illegal
instructions otherwise. Since each KVM guest vcpu runs as a host thread,
it has keys initialized for it which are used by PAC. When a world switch
happens between host and guest, these keys are switched.
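
Conceptually, the switch looks like the sketch below (illustrative only:
the series does this in assembly from __guest_enter/__guest_exit and
stores the keys in the vcpu's sysreg file; the helper name is made up):

        /* Sketch: install one context's ptrauth keys (one pair shown). */
        static void ptrauth_restore_keys(struct kvm_cpu_context *ctxt)
        {
                write_sysreg_s(ctxt->sys_regs[APIAKEYLO_EL1], SYS_APIAKEYLO_EL1);
                write_sysreg_s(ctxt->sys_regs[APIAKEYHI_EL1], SYS_APIAKEYHI_EL1);
                /* ... APIB, APDA, APDB and APGA key pairs likewise ... */
                isb();  /* make sure the new keys are in effect */
        }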

The current v8 patch series incorporates review comments and suggestions
from James Morse, Kristina Martsenko, Julien Thierry and Dave Martin.

The first two patches are cherry-picked from Dave Martin's v6 series
[10, 11], which adds SVE support for KVM guests.

Changes since v7 [9]: Major changes are listed below and detailed changes
		      are in each patch.
 * Comments and documentation updated to reflect using the address/generic
   feature flags together.
 * Dropped the documentation patch and added those details in the relevant
   patches.
 * Rebased the patch series on 2 patches of Dave Martin's v6 SVE series.
 * Small bug fixes.

Changes since v6 [8]: Major changes are listed below.

* Pointer authentication key switch is now done entirely in assembly.
* An isb instruction is added after the keys are switched to the host.
* Use __hyp_this_cpu_ptr for both VHE and nVHE mode.
* 2 separate flags for address and generic authentication.
* kvm_arm_vcpu_ptrauth_allowed renamed to has_vcpu_ptrauth.
* kvm_arm_vcpu_ptrauth_reset renamed to kvm_arm_vcpu_ptrauth_setup_lazy.
* Saving of the host key registers is now done in the ptrauth instruction trap.
* A fix to add kern_hyp_va to get the correct host_ctxt pointer in nVHE mode.
* Patches re-structured to better reflect ABI change.

Changes since v5 [7]: Major changes are listed below.

* Split hcr_el2 and mdcr_el2 save/restore in two patches.
* Reverted to save/restore of sys-reg keys as done in v4 [5]. James Morse
  suggested keeping the ptrauth utilities in a single place in the arm core
  and using them from KVM. However, that change deviates from the existing
  sys-reg implementations and is not scalable.
* Invoked the key switch C functions from __guest_enter/__guest_exit assembly.
* Host key save is now done inside vcpu_load.
* Reverted the masking of cpufeature ID registers for ptrauth when disabled
  from userspace.
* Reset of ptrauth key registers is no longer done conditionally.
* Code and Documentation cleanup.

Changes since v4 [6]: Several suggestions from James Morse
* Move host registers to be saved/restored inside struct kvm_cpu_context.
* Similar to hcr_el2, save/restore mdcr_el2 register also.
* Added save routines for ptrauth keys in generic arm core and
  use them during KVM context switch.
* Defined a GCC attribute __no_ptrauth which suppresses generation of
  ptrauth instructions in a function. This is taken from Kristina's
  earlier kernel pointer authentication support patches [4].
* Dropped a patch to mask cpufeature when not enabled from userspace and
  now only key registers are masked from register list.

Changes since v3 [5]:
* Use pointer authentication only when VHE is present, as ARMv8.3 implies the
  ARMv8.1 features are present.
* Added back lazy context handling of ptrauth instructions from the v2 version.
* Added more details in Documentation.

Changes since v2 [1,2]:
* Allow host and guest to have different HCR_EL2 settings, not just the
  constant values HCR_HOST_VHE_FLAGS or HCR_HOST_NVHE_FLAGS.
* Optimise the reading of HCR_EL2 in the host/guest switch by fetching it once
  during KVM initialisation and using it later.
* Context switch pointer authentication keys when switching between guest
  and host. Pointer authentication was handled lazily earlier [2]; this is
  removed now to keep things simple. However, it can be revisited later if
  there is a significant performance issue.
* Added a userspace option to choose pointer authentication.
* Based on the userspace option, ptrauth cpufeature will be visible.
* Based on the userspace option, ptrauth key registers will be accessible.
* A small document is added on how to enable pointer authentication via the
  userspace KVM API.

Looking for feedback and comments.

Thanks,
Amit

[1]: https://lore.kernel.org/lkml/20171127163806.31435-11-mark.rutland@arm.com/
[2]: https://lore.kernel.org/lkml/20171127163806.31435-10-mark.rutland@arm.com/
[3]: https://lkml.org/lkml/2018/12/7/666
[4]: https://lore.kernel.org/lkml/20181005084754.20950-1-kristina.martsenko@arm.com/
[5]: https://lkml.org/lkml/2018/10/17/594
[6]: https://lkml.org/lkml/2018/12/18/80
[7]: https://lkml.org/lkml/2019/1/28/49
[8]: https://lkml.org/lkml/2019/2/19/190 
[9]: https://lkml.org/lkml/2019/3/19/125 
[10]: https://patchwork.kernel.org/patch/10860139/
[11]: https://patchwork.kernel.org/patch/10860141/

Linux (5.1-rc3 based):

Amit Daniel Kachhap (4):
  KVM: arm64: Move hyp_symbol_addr to fix dependency
  KVM: arm/arm64: preserve host MDCR_EL2 value
  KVM: arm64: Add vcpu feature flags to control ptrauth accessibility
  KVM: arm64: Add capability to advertise ptrauth for guest

Dave Martin (2):
  KVM: arm64: Propagate vcpu into read_id_reg()
  KVM: arm64: Support runtime sysreg visibility filtering

Mark Rutland (2):
  KVM: arm/arm64: preserve host HCR_EL2 value
  KVM: arm/arm64: context-switch ptrauth registers

 Documentation/arm64/pointer-authentication.txt |  19 ++++-
 Documentation/virtual/kvm/api.txt              |   7 ++
 arch/arm/include/asm/kvm_host.h                |   4 +-
 arch/arm64/Kconfig                             |   5 +-
 arch/arm64/include/asm/kvm_asm.h               |  22 ++++-
 arch/arm64/include/asm/kvm_emulate.h           |  24 +++---
 arch/arm64/include/asm/kvm_host.h              |  46 +++++++++--
 arch/arm64/include/asm/kvm_hyp.h               |   2 +-
 arch/arm64/include/asm/kvm_mmu.h               |  20 -----
 arch/arm64/include/asm/kvm_ptrauth_asm.h       | 106 +++++++++++++++++++++++++
 arch/arm64/include/uapi/asm/kvm.h              |   2 +
 arch/arm64/kernel/asm-offsets.c                |   6 ++
 arch/arm64/kvm/debug.c                         |  28 ++-----
 arch/arm64/kvm/guest.c                         |  16 +++-
 arch/arm64/kvm/handle_exit.c                   |  24 ++++--
 arch/arm64/kvm/hyp/debug-sr.c                  |   5 --
 arch/arm64/kvm/hyp/entry.S                     |   7 ++
 arch/arm64/kvm/hyp/switch.c                    |  40 +++++-----
 arch/arm64/kvm/hyp/sysreg-sr.c                 |  22 ++++-
 arch/arm64/kvm/hyp/tlb.c                       |   6 +-
 arch/arm64/kvm/reset.c                         |  13 +++
 arch/arm64/kvm/sys_regs.c                      |  93 ++++++++++++++++++----
 arch/arm64/kvm/sys_regs.h                      |  25 ++++++
 include/uapi/linux/kvm.h                       |   2 +
 virt/kvm/arm/arm.c                             |   4 +-
 25 files changed, 424 insertions(+), 124 deletions(-)
 create mode 100644 arch/arm64/include/asm/kvm_ptrauth_asm.h

kvmtool:

Repo: git.kernel.org/pub/scm/linux/kernel/git/will/kvmtool.git

Amit Daniel Kachhap (1):
  KVM: arm/arm64: Add a vcpu feature for pointer authentication

 arm/aarch32/include/kvm/kvm-cpu-arch.h    | 1 +
 arm/aarch64/include/asm/kvm.h             | 2 ++
 arm/aarch64/include/kvm/kvm-config-arch.h | 4 +++-
 arm/aarch64/include/kvm/kvm-cpu-arch.h    | 2 ++
 arm/include/arm-common/kvm-config-arch.h  | 1 +
 arm/kvm-cpu.c                             | 7 +++++++
 include/linux/kvm.h                       | 2 ++
 7 files changed, 18 insertions(+), 1 deletion(-)

-- 
2.7.4



* [PATCH v8 1/9] KVM: arm64: Propagate vcpu into read_id_reg()
  2019-04-02  2:27 [PATCH v8 0/9] Add ARMv8.3 pointer authentication for kvm guest Amit Daniel Kachhap
@ 2019-04-02  2:27 ` Amit Daniel Kachhap
  2019-04-02  2:27 ` [PATCH v8 2/9] KVM: arm64: Support runtime sysreg visibility filtering Amit Daniel Kachhap
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 30+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-02  2:27 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	Kristina Martsenko, linux-kernel, Amit Daniel Kachhap,
	Mark Rutland, James Morse, Julien Thierry

From: Dave Martin <Dave.Martin@arm.com>

Architecture features that are conditionally visible to the guest
will require run-time checks in the ID register accessor functions.
In particular, read_id_reg() will need to perform checks in order
to generate the correct emulated value for certain ID register
fields, such as ID_AA64PFR0_EL1.SVE.

This patch propagates vcpu into read_id_reg() so that future
patches can add run-time checks on the guest configuration here.

For now, there is no functional change.
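
As an illustration of the intended use (hypothetical sketch, not part of
this patch; vcpu_has_sve() is assumed from the SVE series), a later patch
could then do something like:

        u64 val = raz ? 0 : read_sanitised_ftr_reg(id);

        /* Runtime check made possible by propagating vcpu: */
        if (id == SYS_ID_AA64PFR0_EL1 && !vcpu_has_sve(vcpu))
                val &= ~(0xfUL << ID_AA64PFR0_SVE_SHIFT);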

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Alex Bennee <alex.bennee@linaro.org>
---
 arch/arm64/kvm/sys_regs.c | 23 +++++++++++++----------
 1 file changed, 13 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 539feec..a5d14b5 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1044,7 +1044,8 @@ static bool access_arch_timer(struct kvm_vcpu *vcpu,
 }
 
 /* Read a sanitised cpufeature ID register by sys_reg_desc */
-static u64 read_id_reg(struct sys_reg_desc const *r, bool raz)
+static u64 read_id_reg(const struct kvm_vcpu *vcpu,
+		struct sys_reg_desc const *r, bool raz)
 {
 	u32 id = sys_reg((u32)r->Op0, (u32)r->Op1,
 			 (u32)r->CRn, (u32)r->CRm, (u32)r->Op2);
@@ -1078,7 +1079,7 @@ static bool __access_id_reg(struct kvm_vcpu *vcpu,
 	if (p->is_write)
 		return write_to_read_only(vcpu, p, r);
 
-	p->regval = read_id_reg(r, raz);
+	p->regval = read_id_reg(vcpu, r, raz);
 	return true;
 }
 
@@ -1107,16 +1108,18 @@ static u64 sys_reg_to_index(const struct sys_reg_desc *reg);
  * are stored, and for set_id_reg() we don't allow the effective value
  * to be changed.
  */
-static int __get_id_reg(const struct sys_reg_desc *rd, void __user *uaddr,
+static int __get_id_reg(const struct kvm_vcpu *vcpu,
+			const struct sys_reg_desc *rd, void __user *uaddr,
 			bool raz)
 {
 	const u64 id = sys_reg_to_index(rd);
-	const u64 val = read_id_reg(rd, raz);
+	const u64 val = read_id_reg(vcpu, rd, raz);
 
 	return reg_to_user(uaddr, &val, id);
 }
 
-static int __set_id_reg(const struct sys_reg_desc *rd, void __user *uaddr,
+static int __set_id_reg(const struct kvm_vcpu *vcpu,
+			const struct sys_reg_desc *rd, void __user *uaddr,
 			bool raz)
 {
 	const u64 id = sys_reg_to_index(rd);
@@ -1128,7 +1131,7 @@ static int __set_id_reg(const struct sys_reg_desc *rd, void __user *uaddr,
 		return err;
 
 	/* This is what we mean by invariant: you can't change it. */
-	if (val != read_id_reg(rd, raz))
+	if (val != read_id_reg(vcpu, rd, raz))
 		return -EINVAL;
 
 	return 0;
@@ -1137,25 +1140,25 @@ static int __set_id_reg(const struct sys_reg_desc *rd, void __user *uaddr,
 static int get_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 		      const struct kvm_one_reg *reg, void __user *uaddr)
 {
-	return __get_id_reg(rd, uaddr, false);
+	return __get_id_reg(vcpu, rd, uaddr, false);
 }
 
 static int set_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 		      const struct kvm_one_reg *reg, void __user *uaddr)
 {
-	return __set_id_reg(rd, uaddr, false);
+	return __set_id_reg(vcpu, rd, uaddr, false);
 }
 
 static int get_raz_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 			  const struct kvm_one_reg *reg, void __user *uaddr)
 {
-	return __get_id_reg(rd, uaddr, true);
+	return __get_id_reg(vcpu, rd, uaddr, true);
 }
 
 static int set_raz_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 			  const struct kvm_one_reg *reg, void __user *uaddr)
 {
-	return __set_id_reg(rd, uaddr, true);
+	return __set_id_reg(vcpu, rd, uaddr, true);
 }
 
 static bool access_ctr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
-- 
2.7.4



* [PATCH v8 2/9] KVM: arm64: Support runtime sysreg visibility filtering
  2019-04-02  2:27 [PATCH v8 0/9] Add ARMv8.3 pointer authentication for kvm guest Amit Daniel Kachhap
  2019-04-02  2:27 ` [PATCH v8 1/9] KVM: arm64: Propagate vcpu into read_id_reg() Amit Daniel Kachhap
@ 2019-04-02  2:27 ` Amit Daniel Kachhap
  2019-04-02  2:27 ` [PATCH v8 3/9] KVM: arm64: Move hyp_symbol_addr to fix dependency Amit Daniel Kachhap
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 30+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-02  2:27 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	Kristina Martsenko, linux-kernel, Amit Daniel Kachhap,
	Mark Rutland, James Morse, Julien Thierry

From: Dave Martin <Dave.Martin@arm.com>

Some optional features of the Arm architecture add new system
registers that are not present in the base architecture.

Where these features are optional for the guest, the visibility of
these registers may need to depend on some runtime configuration,
such as a flag passed to KVM_ARM_VCPU_INIT.

For example, ZCR_EL1 and ID_AA64ZFR0_EL1 need to be hidden if SVE
is not enabled for the guest, even though these registers may be
present in the hardware and visible to the host at EL2.

Adding special-case checks all over the place for individual
registers is going to get messy as the number of conditionally-
visible registers grows.

In order to help solve this problem, this patch adds a new sysreg
method visibility() that can be used to hook in any needed runtime
visibility checks.  This method can currently return
REG_HIDDEN_USER to inhibit enumeration and ioctl access to the
register for userspace, and REG_HIDDEN_GUEST to inhibit runtime
access by the guest using MSR/MRS.  Wrappers are added to allow
these flags to be conveniently queried.

This approach allows a conditionally modified view of individual
system registers such as the CPU ID registers, in addition to
completely hiding registers where appropriate.
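
A visibility handler might then look like this (illustrative sketch; SVE
is just an example here and vcpu_has_sve() is assumed from the SVE series):

        static unsigned int sve_visibility(const struct kvm_vcpu *vcpu,
                                           const struct sys_reg_desc *rd)
        {
                if (vcpu_has_sve(vcpu))
                        return 0;       /* fully visible */

                return REG_HIDDEN_USER | REG_HIDDEN_GUEST;
        }

with the sys_reg_desc table entry gaining ".visibility = sve_visibility".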

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
---
 arch/arm64/kvm/sys_regs.c | 24 +++++++++++++++++++++---
 arch/arm64/kvm/sys_regs.h | 25 +++++++++++++++++++++++++
 2 files changed, 46 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index a5d14b5..c86a7b0 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1927,6 +1927,12 @@ static void perform_access(struct kvm_vcpu *vcpu,
 {
 	trace_kvm_sys_access(*vcpu_pc(vcpu), params, r);
 
+	/* Check for regs disabled by runtime config */
+	if (sysreg_hidden_from_guest(vcpu, r)) {
+		kvm_inject_undefined(vcpu);
+		return;
+	}
+
 	/*
 	 * Not having an accessor means that we have configured a trap
 	 * that we don't know how to handle. This certainly qualifies
@@ -2438,6 +2444,10 @@ int kvm_arm_sys_reg_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg
 	if (!r)
 		return get_invariant_sys_reg(reg->id, uaddr);
 
+	/* Check for regs disabled by runtime config */
+	if (sysreg_hidden_from_user(vcpu, r))
+		return -ENOENT;
+
 	if (r->get_user)
 		return (r->get_user)(vcpu, r, reg, uaddr);
 
@@ -2459,6 +2469,10 @@ int kvm_arm_sys_reg_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg
 	if (!r)
 		return set_invariant_sys_reg(reg->id, uaddr);
 
+	/* Check for regs disabled by runtime config */
+	if (sysreg_hidden_from_user(vcpu, r))
+		return -ENOENT;
+
 	if (r->set_user)
 		return (r->set_user)(vcpu, r, reg, uaddr);
 
@@ -2515,7 +2529,8 @@ static bool copy_reg_to_user(const struct sys_reg_desc *reg, u64 __user **uind)
 	return true;
 }
 
-static int walk_one_sys_reg(const struct sys_reg_desc *rd,
+static int walk_one_sys_reg(const struct kvm_vcpu *vcpu,
+			    const struct sys_reg_desc *rd,
 			    u64 __user **uind,
 			    unsigned int *total)
 {
@@ -2526,6 +2541,9 @@ static int walk_one_sys_reg(const struct sys_reg_desc *rd,
 	if (!(rd->reg || rd->get_user))
 		return 0;
 
+	if (sysreg_hidden_from_user(vcpu, rd))
+		return 0;
+
 	if (!copy_reg_to_user(rd, uind))
 		return -EFAULT;
 
@@ -2554,9 +2572,9 @@ static int walk_sys_regs(struct kvm_vcpu *vcpu, u64 __user *uind)
 		int cmp = cmp_sys_reg(i1, i2);
 		/* target-specific overrides generic entry. */
 		if (cmp <= 0)
-			err = walk_one_sys_reg(i1, &uind, &total);
+			err = walk_one_sys_reg(vcpu, i1, &uind, &total);
 		else
-			err = walk_one_sys_reg(i2, &uind, &total);
+			err = walk_one_sys_reg(vcpu, i2, &uind, &total);
 
 		if (err)
 			return err;
diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h
index 3b1bc7f..2be9950 100644
--- a/arch/arm64/kvm/sys_regs.h
+++ b/arch/arm64/kvm/sys_regs.h
@@ -64,8 +64,15 @@ struct sys_reg_desc {
 			const struct kvm_one_reg *reg, void __user *uaddr);
 	int (*set_user)(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 			const struct kvm_one_reg *reg, void __user *uaddr);
+
+	/* Return mask of REG_* runtime visibility overrides */
+	unsigned int (*visibility)(const struct kvm_vcpu *vcpu,
+				   const struct sys_reg_desc *rd);
 };
 
+#define REG_HIDDEN_USER		(1 << 0) /* hidden from userspace ioctls */
+#define REG_HIDDEN_GUEST	(1 << 1) /* hidden from guest */
+
 static inline void print_sys_reg_instr(const struct sys_reg_params *p)
 {
 	/* Look, we even formatted it for you to paste into the table! */
@@ -102,6 +109,24 @@ static inline void reset_val(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r
 	__vcpu_sys_reg(vcpu, r->reg) = r->val;
 }
 
+static inline bool sysreg_hidden_from_guest(const struct kvm_vcpu *vcpu,
+					    const struct sys_reg_desc *r)
+{
+	if (likely(!r->visibility))
+		return false;
+
+	return r->visibility(vcpu, r) & REG_HIDDEN_GUEST;
+}
+
+static inline bool sysreg_hidden_from_user(const struct kvm_vcpu *vcpu,
+					   const struct sys_reg_desc *r)
+{
+	if (likely(!r->visibility))
+		return false;
+
+	return r->visibility(vcpu, r) & REG_HIDDEN_USER;
+}
+
 static inline int cmp_sys_reg(const struct sys_reg_desc *i1,
 			      const struct sys_reg_desc *i2)
 {
-- 
2.7.4



* [PATCH v8 3/9] KVM: arm64: Move hyp_symbol_addr to fix dependency
  2019-04-02  2:27 [PATCH v8 0/9] Add ARMv8.3 pointer authentication for kvm guest Amit Daniel Kachhap
  2019-04-02  2:27 ` [PATCH v8 1/9] KVM: arm64: Propagate vcpu into read_id_reg() Amit Daniel Kachhap
  2019-04-02  2:27 ` [PATCH v8 2/9] KVM: arm64: Support runtime sysreg visibility filtering Amit Daniel Kachhap
@ 2019-04-02  2:27 ` Amit Daniel Kachhap
  2019-04-05 11:02   ` Dave Martin
  2019-04-02  2:27 ` [PATCH v8 4/9] KVM: arm/arm64: preserve host HCR_EL2 value Amit Daniel Kachhap
                   ` (5 subsequent siblings)
  8 siblings, 1 reply; 30+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-02  2:27 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	Kristina Martsenko, linux-kernel, Amit Daniel Kachhap,
	Mark Rutland, James Morse, Julien Thierry

Currently hyp_symbol_addr is defined in kvm_mmu.h and is needed by
__hyp_this_cpu_ptr in kvm_asm.h, but kvm_asm.h cannot include kvm_mmu.h
directly because kvm_mmu.h uses kvm_ksym_ref, which is defined inside
kvm_asm.h. Hence, hyp_symbol_addr is moved into kvm_asm.h to break this
circular dependency.

Also, hyp_symbol_addr's counterpart, kvm_ksym_ref, is already in kvm_asm.h,
making it more sensible to move hyp_symbol_addr to the same file.
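
For reference, __hyp_this_cpu_ptr's use of the macro is roughly as follows
(paraphrased sketch of the existing helper, not new code):

        /* Per-cpu pointer at EL2: PC-relative symbol address plus the
         * per-cpu offset kept in tpidr_el2. */
        void *__ptr = hyp_symbol_addr(sym);
        __ptr += read_sysreg(tpidr_el2);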

Suggested-by: James Morse <james.morse@arm.com>
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---

Changes since v7:
* Slight change in commit message [Julien Thierry].

 arch/arm64/include/asm/kvm_asm.h | 20 ++++++++++++++++++++
 arch/arm64/include/asm/kvm_mmu.h | 20 --------------------
 2 files changed, 20 insertions(+), 20 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index f5b79e9..57a07e8 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -80,6 +80,26 @@ extern void __vgic_v3_init_lrs(void);
 
 extern u32 __kvm_get_mdcr_el2(void);
 
+/*
+ * Obtain the PC-relative address of a kernel symbol
+ * s: symbol
+ *
+ * The goal of this macro is to return a symbol's address based on a
+ * PC-relative computation, as opposed to a loading the VA from a
+ * constant pool or something similar. This works well for HYP, as an
+ * absolute VA is guaranteed to be wrong. Only use this if trying to
+ * obtain the address of a symbol (i.e. not something you obtained by
+ * following a pointer).
+ */
+#define hyp_symbol_addr(s)						\
+	({								\
+		typeof(s) *addr;					\
+		asm("adrp	%0, %1\n"				\
+		    "add	%0, %0, :lo12:%1\n"			\
+		    : "=r" (addr) : "S" (&s));				\
+		addr;							\
+	})
+
 /* Home-grown __this_cpu_{ptr,read} variants that always work at HYP */
 #define __hyp_this_cpu_ptr(sym)						\
 	({								\
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index b0742a1..3dea6af 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -118,26 +118,6 @@ static inline unsigned long __kern_hyp_va(unsigned long v)
 #define kern_hyp_va(v) 	((typeof(v))(__kern_hyp_va((unsigned long)(v))))
 
 /*
- * Obtain the PC-relative address of a kernel symbol
- * s: symbol
- *
- * The goal of this macro is to return a symbol's address based on a
- * PC-relative computation, as opposed to a loading the VA from a
- * constant pool or something similar. This works well for HYP, as an
- * absolute VA is guaranteed to be wrong. Only use this if trying to
- * obtain the address of a symbol (i.e. not something you obtained by
- * following a pointer).
- */
-#define hyp_symbol_addr(s)						\
-	({								\
-		typeof(s) *addr;					\
-		asm("adrp	%0, %1\n"				\
-		    "add	%0, %0, :lo12:%1\n"			\
-		    : "=r" (addr) : "S" (&s));				\
-		addr;							\
-	})
-
-/*
  * We currently support using a VM-specified IPA size. For backward
  * compatibility, the default IPA size is fixed to 40bits.
  */
-- 
2.7.4



* [PATCH v8 4/9] KVM: arm/arm64: preserve host HCR_EL2 value
  2019-04-02  2:27 [PATCH v8 0/9] Add ARMv8.3 pointer authentication for kvm guest Amit Daniel Kachhap
                   ` (2 preceding siblings ...)
  2019-04-02  2:27 ` [PATCH v8 3/9] KVM: arm64: Move hyp_symbol_addr to fix dependency Amit Daniel Kachhap
@ 2019-04-02  2:27 ` Amit Daniel Kachhap
  2019-04-05 11:02   ` Dave Martin
  2019-04-06 10:37   ` James Morse
  2019-04-02  2:27 ` [PATCH v8 5/9] KVM: arm/arm64: preserve host MDCR_EL2 value Amit Daniel Kachhap
                   ` (4 subsequent siblings)
  8 siblings, 2 replies; 30+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-02  2:27 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	Kristina Martsenko, linux-kernel, Amit Daniel Kachhap,
	Mark Rutland, James Morse, Julien Thierry

From: Mark Rutland <mark.rutland@arm.com>

When restoring HCR_EL2 for the host, KVM uses HCR_HOST_VHE_FLAGS, which
is a constant value. This works today, as the host HCR_EL2 value is
always the same, but this will get in the way of supporting extensions
that require HCR_EL2 bits to be set conditionally for the host.

To allow such features to work without KVM having to explicitly handle
every possible host feature combination, this patch has KVM save/restore
the host HCR_EL2 value when switching to/from a guest. The register is
saved once per cpu during hypervisor initialisation and restored after
each switch from the guest.

To fetch HCR_EL2 during KVM initialisation, a hyp call is made using
kvm_call_hyp; this is needed in the non-VHE case.

For the hyp TLB maintenance code, __tlb_switch_to_host_vhe() is updated
to toggle the TGE bit with a RMW sequence, as we already do in
__tlb_switch_to_guest_vhe().

The value of hcr_el2 is now stored in struct kvm_cpu_context as both host
and guest can now use this field in a common way.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
[Added cpu_init_host_ctxt, hcr_el2 field in struct kvm_cpu_context,
save hcr_el2 in hyp init stage]
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---
 arch/arm/include/asm/kvm_host.h      |  2 ++
 arch/arm64/include/asm/kvm_asm.h     |  2 ++
 arch/arm64/include/asm/kvm_emulate.h | 24 ++++++++++++------------
 arch/arm64/include/asm/kvm_host.h    | 15 ++++++++++++++-
 arch/arm64/kvm/guest.c               |  2 +-
 arch/arm64/kvm/hyp/switch.c          | 22 +++++++++++++---------
 arch/arm64/kvm/hyp/sysreg-sr.c       | 14 ++++++++++++++
 arch/arm64/kvm/hyp/tlb.c             |  6 +++++-
 virt/kvm/arm/arm.c                   |  1 +
 9 files changed, 64 insertions(+), 24 deletions(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 770d732..6d0aac4 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -322,6 +322,8 @@ static inline void __cpu_init_stage2(void)
 	kvm_call_hyp(__init_stage2_translation);
 }
 
+static inline void cpu_init_host_ctxt(void) {}
+
 static inline int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 {
 	return 0;
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 57a07e8..a68205c 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -80,6 +80,8 @@ extern void __vgic_v3_init_lrs(void);
 
 extern u32 __kvm_get_mdcr_el2(void);
 
+extern void __kvm_populate_host_regs(void);
+
 /*
  * Obtain the PC-relative address of a kernel symbol
  * s: symbol
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index d384279..426815d 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -50,25 +50,25 @@ void kvm_inject_pabt32(struct kvm_vcpu *vcpu, unsigned long addr);
 
 static inline bool vcpu_el1_is_32bit(struct kvm_vcpu *vcpu)
 {
-	return !(vcpu->arch.hcr_el2 & HCR_RW);
+	return !(vcpu->arch.ctxt.hcr_el2 & HCR_RW);
 }
 
 static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
 {
-	vcpu->arch.hcr_el2 = HCR_GUEST_FLAGS;
+	vcpu->arch.ctxt.hcr_el2 = HCR_GUEST_FLAGS;
 	if (is_kernel_in_hyp_mode())
-		vcpu->arch.hcr_el2 |= HCR_E2H;
+		vcpu->arch.ctxt.hcr_el2 |= HCR_E2H;
 	if (cpus_have_const_cap(ARM64_HAS_RAS_EXTN)) {
 		/* route synchronous external abort exceptions to EL2 */
-		vcpu->arch.hcr_el2 |= HCR_TEA;
+		vcpu->arch.ctxt.hcr_el2 |= HCR_TEA;
 		/* trap error record accesses */
-		vcpu->arch.hcr_el2 |= HCR_TERR;
+		vcpu->arch.ctxt.hcr_el2 |= HCR_TERR;
 	}
 	if (cpus_have_const_cap(ARM64_HAS_STAGE2_FWB))
-		vcpu->arch.hcr_el2 |= HCR_FWB;
+		vcpu->arch.ctxt.hcr_el2 |= HCR_FWB;
 
 	if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features))
-		vcpu->arch.hcr_el2 &= ~HCR_RW;
+		vcpu->arch.ctxt.hcr_el2 &= ~HCR_RW;
 
 	/*
 	 * TID3: trap feature register accesses that we virtualise.
@@ -76,26 +76,26 @@ static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
 	 * are currently virtualised.
 	 */
 	if (!vcpu_el1_is_32bit(vcpu))
-		vcpu->arch.hcr_el2 |= HCR_TID3;
+		vcpu->arch.ctxt.hcr_el2 |= HCR_TID3;
 
 	if (cpus_have_const_cap(ARM64_MISMATCHED_CACHE_TYPE) ||
 	    vcpu_el1_is_32bit(vcpu))
-		vcpu->arch.hcr_el2 |= HCR_TID2;
+		vcpu->arch.ctxt.hcr_el2 |= HCR_TID2;
 }
 
 static inline unsigned long *vcpu_hcr(struct kvm_vcpu *vcpu)
 {
-	return (unsigned long *)&vcpu->arch.hcr_el2;
+	return (unsigned long *)&vcpu->arch.ctxt.hcr_el2;
 }
 
 static inline void vcpu_clear_wfe_traps(struct kvm_vcpu *vcpu)
 {
-	vcpu->arch.hcr_el2 &= ~HCR_TWE;
+	vcpu->arch.ctxt.hcr_el2 &= ~HCR_TWE;
 }
 
 static inline void vcpu_set_wfe_traps(struct kvm_vcpu *vcpu)
 {
-	vcpu->arch.hcr_el2 |= HCR_TWE;
+	vcpu->arch.ctxt.hcr_el2 |= HCR_TWE;
 }
 
 static inline unsigned long vcpu_get_vsesr(struct kvm_vcpu *vcpu)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index a01fe087..3b09fd0 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -209,6 +209,8 @@ struct kvm_cpu_context {
 		u32 copro[NR_COPRO_REGS];
 	};
 
+	/* HYP host/guest configuration */
+	u64 hcr_el2;
 	struct kvm_vcpu *__hyp_running_vcpu;
 };
 
@@ -225,7 +227,6 @@ struct kvm_vcpu_arch {
 	struct kvm_cpu_context ctxt;
 
 	/* HYP configuration */
-	u64 hcr_el2;
 	u32 mdcr_el2;
 
 	/* Exception Information */
@@ -510,6 +511,18 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
 
 static inline void __cpu_init_stage2(void) {}
 
+/**
+ * cpu_init_host_ctxt - save the boot hyp configuration registers
+ *
+ * It is called per-cpu during CPU hyp initialisation and the
+ * registers context saved may be used during host/guest context
+ * switch.
+ */
+static inline void cpu_init_host_ctxt(void)
+{
+	kvm_call_hyp(__kvm_populate_host_regs);
+}
+
 /* Guest/host FPSIMD coordination helpers */
 int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu);
 void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index dd436a5..e2f0268 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -345,7 +345,7 @@ int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu,
 int __kvm_arm_vcpu_get_events(struct kvm_vcpu *vcpu,
 			      struct kvm_vcpu_events *events)
 {
-	events->exception.serror_pending = !!(vcpu->arch.hcr_el2 & HCR_VSE);
+	events->exception.serror_pending = !!(vcpu->arch.ctxt.hcr_el2 & HCR_VSE);
 	events->exception.serror_has_esr = cpus_have_const_cap(ARM64_HAS_RAS_EXTN);
 
 	if (events->exception.serror_pending && events->exception.serror_has_esr)
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index 3563fe6..f5cefa1 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -129,7 +129,7 @@ static void __hyp_text __activate_traps_nvhe(struct kvm_vcpu *vcpu)
 
 static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu)
 {
-	u64 hcr = vcpu->arch.hcr_el2;
+	u64 hcr = vcpu->arch.ctxt.hcr_el2;
 
 	write_sysreg(hcr, hcr_el2);
 
@@ -142,10 +142,10 @@ static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu)
 		__activate_traps_nvhe(vcpu);
 }
 
-static void deactivate_traps_vhe(void)
+static void deactivate_traps_vhe(struct kvm_cpu_context *host_ctxt)
 {
 	extern char vectors[];	/* kernel exception vectors */
-	write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2);
+	write_sysreg(host_ctxt->hcr_el2, hcr_el2);
 
 	/*
 	 * ARM erratum 1165522 requires the actual execution of the above
@@ -159,9 +159,10 @@ static void deactivate_traps_vhe(void)
 }
 NOKPROBE_SYMBOL(deactivate_traps_vhe);
 
-static void __hyp_text __deactivate_traps_nvhe(void)
+static void __hyp_text __deactivate_traps_nvhe(struct kvm_cpu_context *host_ctxt)
 {
 	u64 mdcr_el2 = read_sysreg(mdcr_el2);
+	struct kvm_cpu_context *hyp_host_ctxt = kern_hyp_va(host_ctxt);
 
 	__deactivate_traps_common();
 
@@ -169,25 +170,28 @@ static void __hyp_text __deactivate_traps_nvhe(void)
 	mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT;
 
 	write_sysreg(mdcr_el2, mdcr_el2);
-	write_sysreg(HCR_HOST_NVHE_FLAGS, hcr_el2);
+	write_sysreg(hyp_host_ctxt->hcr_el2, hcr_el2);
 	write_sysreg(CPTR_EL2_DEFAULT, cptr_el2);
 }
 
 static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu)
 {
+	struct kvm_cpu_context *host_ctxt;
+
+	host_ctxt = vcpu->arch.host_cpu_context;
 	/*
 	 * If we pended a virtual abort, preserve it until it gets
 	 * cleared. See D1.14.3 (Virtual Interrupts) for details, but
 	 * the crucial bit is "On taking a vSError interrupt,
 	 * HCR_EL2.VSE is cleared to 0."
 	 */
-	if (vcpu->arch.hcr_el2 & HCR_VSE)
-		vcpu->arch.hcr_el2 = read_sysreg(hcr_el2);
+	if (vcpu->arch.ctxt.hcr_el2 & HCR_VSE)
+		vcpu->arch.ctxt.hcr_el2 = read_sysreg(hcr_el2);
 
 	if (has_vhe())
-		deactivate_traps_vhe();
+		deactivate_traps_vhe(host_ctxt);
 	else
-		__deactivate_traps_nvhe();
+		__deactivate_traps_nvhe(host_ctxt);
 }
 
 void activate_traps_vhe_load(struct kvm_vcpu *vcpu)
diff --git a/arch/arm64/kvm/hyp/sysreg-sr.c b/arch/arm64/kvm/hyp/sysreg-sr.c
index c52a845..277f82b 100644
--- a/arch/arm64/kvm/hyp/sysreg-sr.c
+++ b/arch/arm64/kvm/hyp/sysreg-sr.c
@@ -320,3 +320,17 @@ void __hyp_text __kvm_enable_ssbs(void)
 	"msr	sctlr_el2, %0"
 	: "=&r" (tmp) : "L" (SCTLR_ELx_DSSBS));
 }
+
+/**
+ * __kvm_populate_host_regs - Fetches host register values
+ *
+ * This function acts as a function handler parameter for kvm_call_hyp and
+ * may be called from EL1 exception level to fetch the host register values.
+ */
+void __hyp_text __kvm_populate_host_regs(void)
+{
+	struct kvm_cpu_context *host_ctxt;
+
+	host_ctxt = __hyp_this_cpu_ptr(kvm_host_cpu_state);
+	host_ctxt->hcr_el2 = read_sysreg(hcr_el2);
+}
diff --git a/arch/arm64/kvm/hyp/tlb.c b/arch/arm64/kvm/hyp/tlb.c
index 76c3086..c5e7144 100644
--- a/arch/arm64/kvm/hyp/tlb.c
+++ b/arch/arm64/kvm/hyp/tlb.c
@@ -86,12 +86,16 @@ static hyp_alternate_select(__tlb_switch_to_guest,
 static void __hyp_text __tlb_switch_to_host_vhe(struct kvm *kvm,
 						struct tlb_inv_context *cxt)
 {
+	u64 val;
+
 	/*
 	 * We're done with the TLB operation, let's restore the host's
 	 * view of HCR_EL2.
 	 */
 	write_sysreg(0, vttbr_el2);
-	write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2);
+	val = read_sysreg(hcr_el2);
+	val |= HCR_TGE;
+	write_sysreg(val, hcr_el2);
 	isb();
 
 	if (cpus_have_const_cap(ARM64_WORKAROUND_1165522)) {
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 99c3738..e8c2ee6 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -1323,6 +1323,7 @@ static void cpu_hyp_reinit(void)
 		cpu_init_hyp_mode(NULL);
 
 	kvm_arm_init_debug();
+	cpu_init_host_ctxt();
 
 	if (vgic_present)
 		kvm_vgic_init_cpu_hardware();
-- 
2.7.4



* [PATCH v8 5/9] KVM: arm/arm64: preserve host MDCR_EL2 value
  2019-04-02  2:27 [PATCH v8 0/9] Add ARMv8.3 pointer authentication for kvm guest Amit Daniel Kachhap
                   ` (3 preceding siblings ...)
  2019-04-02  2:27 ` [PATCH v8 4/9] KVM: arm/arm64: preserve host HCR_EL2 value Amit Daniel Kachhap
@ 2019-04-02  2:27 ` Amit Daniel Kachhap
  2019-04-05 11:02   ` Dave Martin
  2019-04-02  2:27 ` [PATCH v8 6/9] KVM: arm64: Add vcpu feature flags to control ptrauth accessibility Amit Daniel Kachhap
                   ` (3 subsequent siblings)
  8 siblings, 1 reply; 30+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-02  2:27 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	Kristina Martsenko, linux-kernel, Amit Daniel Kachhap,
	Mark Rutland, James Morse, Julien Thierry

Save the host MDCR_EL2 value during KVM HYP initialisation and restore
it after every switch from the guest back to the host. There should not
be any change in functionality due to this.

The value of mdcr_el2 is now stored in struct kvm_cpu_context as
both host and guest can now use this field in a common way.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---

Changes since v7:
* Removed unused function __kvm_get_mdcr_el2 [Kristina].

 arch/arm/include/asm/kvm_host.h   |  1 -
 arch/arm64/include/asm/kvm_asm.h  |  2 --
 arch/arm64/include/asm/kvm_host.h |  6 ++----
 arch/arm64/include/asm/kvm_hyp.h  |  2 +-
 arch/arm64/kvm/debug.c            | 28 ++++++----------------------
 arch/arm64/kvm/hyp/debug-sr.c     |  5 -----
 arch/arm64/kvm/hyp/switch.c       | 18 +++++-------------
 arch/arm64/kvm/hyp/sysreg-sr.c    |  8 +++++++-
 virt/kvm/arm/arm.c                |  1 -
 9 files changed, 21 insertions(+), 50 deletions(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 6d0aac4..a928565 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -343,7 +343,6 @@ static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
 static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
 
-static inline void kvm_arm_init_debug(void) {}
 static inline void kvm_arm_setup_debug(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arm_clear_debug(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu) {}
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index a68205c..a15ba55 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -78,8 +78,6 @@ extern u64 __vgic_v3_read_vmcr(void);
 extern void __vgic_v3_write_vmcr(u32 vmcr);
 extern void __vgic_v3_init_lrs(void);
 
-extern u32 __kvm_get_mdcr_el2(void);
-
 extern void __kvm_populate_host_regs(void);
 
 /*
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 3b09fd0..e3ccd7b 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -211,6 +211,8 @@ struct kvm_cpu_context {
 
 	/* HYP host/guest configuration */
 	u64 hcr_el2;
+	u32 mdcr_el2;
+
 	struct kvm_vcpu *__hyp_running_vcpu;
 };
 
@@ -226,9 +228,6 @@ struct vcpu_reset_state {
 struct kvm_vcpu_arch {
 	struct kvm_cpu_context ctxt;
 
-	/* HYP configuration */
-	u32 mdcr_el2;
-
 	/* Exception Information */
 	struct kvm_vcpu_fault_info fault;
 
@@ -498,7 +497,6 @@ static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
 static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
 
-void kvm_arm_init_debug(void);
 void kvm_arm_setup_debug(struct kvm_vcpu *vcpu);
 void kvm_arm_clear_debug(struct kvm_vcpu *vcpu);
 void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 4da765f..7fcde8a 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -152,7 +152,7 @@ void __fpsimd_restore_state(struct user_fpsimd_state *fp_regs);
 bool __fpsimd_enabled(void);
 
 void activate_traps_vhe_load(struct kvm_vcpu *vcpu);
-void deactivate_traps_vhe_put(void);
+void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu);
 
 u64 __guest_enter(struct kvm_vcpu *vcpu, struct kvm_cpu_context *host_ctxt);
 void __noreturn __hyp_do_panic(unsigned long, ...);
diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
index fd917d6..99dc0a4 100644
--- a/arch/arm64/kvm/debug.c
+++ b/arch/arm64/kvm/debug.c
@@ -32,8 +32,6 @@
 				DBG_MDSCR_KDE | \
 				DBG_MDSCR_MDE)
 
-static DEFINE_PER_CPU(u32, mdcr_el2);
-
 /**
  * save/restore_guest_debug_regs
  *
@@ -65,21 +63,6 @@ static void restore_guest_debug_regs(struct kvm_vcpu *vcpu)
 }
 
 /**
- * kvm_arm_init_debug - grab what we need for debug
- *
- * Currently the sole task of this function is to retrieve the initial
- * value of mdcr_el2 so we can preserve MDCR_EL2.HPMN which has
- * presumably been set-up by some knowledgeable bootcode.
- *
- * It is called once per-cpu during CPU hyp initialisation.
- */
-
-void kvm_arm_init_debug(void)
-{
-	__this_cpu_write(mdcr_el2, kvm_call_hyp_ret(__kvm_get_mdcr_el2));
-}
-
-/**
  * kvm_arm_reset_debug_ptr - reset the debug ptr to point to the vcpu state
  */
 
@@ -111,6 +94,7 @@ void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu)
 
 void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
 {
+	kvm_cpu_context_t *host_cxt = this_cpu_ptr(&kvm_host_cpu_state);
 	bool trap_debug = !(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY);
 	unsigned long mdscr;
 
@@ -120,8 +104,8 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
 	 * This also clears MDCR_EL2_E2PB_MASK to disable guest access
 	 * to the profiling buffer.
 	 */
-	vcpu->arch.mdcr_el2 = __this_cpu_read(mdcr_el2) & MDCR_EL2_HPMN_MASK;
-	vcpu->arch.mdcr_el2 |= (MDCR_EL2_TPM |
+	vcpu->arch.ctxt.mdcr_el2 = host_cxt->mdcr_el2 & MDCR_EL2_HPMN_MASK;
+	vcpu->arch.ctxt.mdcr_el2 |= (MDCR_EL2_TPM |
 				MDCR_EL2_TPMS |
 				MDCR_EL2_TPMCR |
 				MDCR_EL2_TDRA |
@@ -130,7 +114,7 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
 	/* Is Guest debugging in effect? */
 	if (vcpu->guest_debug) {
 		/* Route all software debug exceptions to EL2 */
-		vcpu->arch.mdcr_el2 |= MDCR_EL2_TDE;
+		vcpu->arch.ctxt.mdcr_el2 |= MDCR_EL2_TDE;
 
 		/* Save guest debug state */
 		save_guest_debug_regs(vcpu);
@@ -202,13 +186,13 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
 
 	/* Trap debug register access */
 	if (trap_debug)
-		vcpu->arch.mdcr_el2 |= MDCR_EL2_TDA;
+		vcpu->arch.ctxt.mdcr_el2 |= MDCR_EL2_TDA;
 
 	/* If KDE or MDE are set, perform a full save/restore cycle. */
 	if (vcpu_read_sys_reg(vcpu, MDSCR_EL1) & (DBG_MDSCR_KDE | DBG_MDSCR_MDE))
 		vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY;
 
-	trace_kvm_arm_set_dreg32("MDCR_EL2", vcpu->arch.mdcr_el2);
+	trace_kvm_arm_set_dreg32("MDCR_EL2", vcpu->arch.ctxt.mdcr_el2);
 	trace_kvm_arm_set_dreg32("MDSCR_EL1", vcpu_read_sys_reg(vcpu, MDSCR_EL1));
 }
 
diff --git a/arch/arm64/kvm/hyp/debug-sr.c b/arch/arm64/kvm/hyp/debug-sr.c
index 5000976..f49a3f7 100644
--- a/arch/arm64/kvm/hyp/debug-sr.c
+++ b/arch/arm64/kvm/hyp/debug-sr.c
@@ -198,8 +198,3 @@ void __hyp_text __debug_switch_to_host(struct kvm_vcpu *vcpu)
 
 	vcpu->arch.flags &= ~KVM_ARM64_DEBUG_DIRTY;
 }
-
-u32 __hyp_text __kvm_get_mdcr_el2(void)
-{
-	return read_sysreg(mdcr_el2);
-}
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index f5cefa1..fe76e24 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -84,7 +84,7 @@ static void __hyp_text __activate_traps_common(struct kvm_vcpu *vcpu)
 	 */
 	write_sysreg(0, pmselr_el0);
 	write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0);
-	write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
+	write_sysreg(vcpu->arch.ctxt.mdcr_el2, mdcr_el2);
 }
 
 static void __hyp_text __deactivate_traps_common(void)
@@ -161,15 +161,11 @@ NOKPROBE_SYMBOL(deactivate_traps_vhe);
 
 static void __hyp_text __deactivate_traps_nvhe(struct kvm_cpu_context *host_ctxt)
 {
-	u64 mdcr_el2 = read_sysreg(mdcr_el2);
 	struct kvm_cpu_context *hyp_host_ctxt = kern_hyp_va(host_ctxt);
 
 	__deactivate_traps_common();
 
-	mdcr_el2 &= MDCR_EL2_HPMN_MASK;
-	mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT;
-
-	write_sysreg(mdcr_el2, mdcr_el2);
+	write_sysreg(hyp_host_ctxt->mdcr_el2, mdcr_el2);
 	write_sysreg(hyp_host_ctxt->hcr_el2, hcr_el2);
 	write_sysreg(CPTR_EL2_DEFAULT, cptr_el2);
 }
@@ -199,15 +195,11 @@ void activate_traps_vhe_load(struct kvm_vcpu *vcpu)
 	__activate_traps_common(vcpu);
 }
 
-void deactivate_traps_vhe_put(void)
+void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu)
 {
-	u64 mdcr_el2 = read_sysreg(mdcr_el2);
-
-	mdcr_el2 &= MDCR_EL2_HPMN_MASK |
-		    MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT |
-		    MDCR_EL2_TPMS;
+	struct kvm_cpu_context *host_ctxt = vcpu->arch.host_cpu_context;
 
-	write_sysreg(mdcr_el2, mdcr_el2);
+	write_sysreg(host_ctxt->mdcr_el2, mdcr_el2);
 
 	__deactivate_traps_common();
 }
diff --git a/arch/arm64/kvm/hyp/sysreg-sr.c b/arch/arm64/kvm/hyp/sysreg-sr.c
index 277f82b..6afd309 100644
--- a/arch/arm64/kvm/hyp/sysreg-sr.c
+++ b/arch/arm64/kvm/hyp/sysreg-sr.c
@@ -298,7 +298,7 @@ void kvm_vcpu_put_sysregs(struct kvm_vcpu *vcpu)
 	if (!has_vhe())
 		return;
 
-	deactivate_traps_vhe_put();
+	deactivate_traps_vhe_put(vcpu);
 
 	__sysreg_save_el1_state(guest_ctxt);
 	__sysreg_save_user_state(guest_ctxt);
@@ -333,4 +333,10 @@ void __hyp_text __kvm_populate_host_regs(void)
 
 	host_ctxt = __hyp_this_cpu_ptr(kvm_host_cpu_state);
 	host_ctxt->hcr_el2 = read_sysreg(hcr_el2);
+	/*
+	 * Retrieve the initial value of mdcr_el2 so we can preserve
+	 * MDCR_EL2.HPMN which has presumably been set-up by some
+	 * knowledgeable bootcode.
+	 */
+	host_ctxt->mdcr_el2 = read_sysreg(mdcr_el2);
 }
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index e8c2ee6..58de0ca 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -1322,7 +1322,6 @@ static void cpu_hyp_reinit(void)
 	else
 		cpu_init_hyp_mode(NULL);
 
-	kvm_arm_init_debug();
 	cpu_init_host_ctxt();
 
 	if (vgic_present)
-- 
2.7.4



* [PATCH v8 6/9] KVM: arm64: Add vcpu feature flags to control ptrauth accessibility
  2019-04-02  2:27 [PATCH v8 0/9] Add ARMv8.3 pointer authentication for kvm guest Amit Daniel Kachhap
                   ` (4 preceding siblings ...)
  2019-04-02  2:27 ` [PATCH v8 5/9] KVM: arm/arm64: preserve host MDCR_EL2 value Amit Daniel Kachhap
@ 2019-04-02  2:27 ` Amit Daniel Kachhap
  2019-04-05 11:02   ` Dave Martin
  2019-04-02  2:27 ` [PATCH v8 7/9] KVM: arm/arm64: context-switch ptrauth registers Amit Daniel Kachhap
                   ` (2 subsequent siblings)
  8 siblings, 1 reply; 30+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-02  2:27 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	Kristina Martsenko, linux-kernel, Amit Daniel Kachhap,
	Mark Rutland, James Morse, Julien Thierry

Since pointer authentication will be enabled or disabled on a per-vcpu
basis, vcpu feature flags are added so that it is known which vcpus have
it enabled from userspace.

These features will allow the KVM guest to handle pointer authentication
instructions, or to treat them as undefined if the flags are not set.

The helper macro added checks the feature flags along with other
conditions, such as VHE mode being present and system support for
pointer address/generic authentication.

Necessary documentation is added to reflect these changes.
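
From userspace, enabling the feature might look like the following sketch
(error handling omitted; the vm and vcpu file descriptors are assumed to
be set up already):

        struct kvm_vcpu_init init;

        /* Start from the preferred target, then request both features. */
        ioctl(vm_fd, KVM_ARM_PREFERRED_TARGET, &init);
        init.features[0] |= (1 << KVM_ARM_VCPU_PTRAUTH_ADDRESS) |
                            (1 << KVM_ARM_VCPU_PTRAUTH_GENERIC);
        ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init);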

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---

Changes since v7:
* Moved the check for userspace features in this patch [James Morse].
* Moved the vcpu feature flags Documentation in this patch [James Morse].

 Documentation/arm64/pointer-authentication.txt | 13 +++++++++----
 Documentation/virtual/kvm/api.txt              |  4 ++++
 arch/arm64/include/asm/kvm_host.h              |  8 +++++++-
 arch/arm64/include/uapi/asm/kvm.h              |  2 ++
 arch/arm64/kvm/reset.c                         |  7 +++++++
 5 files changed, 29 insertions(+), 5 deletions(-)

diff --git a/Documentation/arm64/pointer-authentication.txt b/Documentation/arm64/pointer-authentication.txt
index 5baca42..b164886 100644
--- a/Documentation/arm64/pointer-authentication.txt
+++ b/Documentation/arm64/pointer-authentication.txt
@@ -87,7 +87,12 @@ used to get and set the keys for a thread.
 Virtualization
 --------------
 
-Pointer authentication is not currently supported in KVM guests. KVM
-will mask the feature bits from ID_AA64ISAR1_EL1, and attempted use of
-the feature will result in an UNDEFINED exception being injected into
-the guest.
+Pointer authentication is enabled in KVM guest when each virtual cpu is
+initialised by passing flags KVM_ARM_VCPU_PTRAUTH_[ADDRESS/GENERIC] and
+requesting these two separate cpu features to be enabled. The current KVM
+guest implementation works by enabling both features together, so both these
+userspace flags are checked together before enabling pointer authentication.
+The separate userspace flag will allow to have no userspace ABI changes when
+both features are implemented in an isolated way in future.
+
+Pointer Authentication is supported in KVM guest only in VHE mode.
diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index 7de9eee..aaa048d 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -2659,6 +2659,10 @@ Possible features:
 	  Depends on KVM_CAP_ARM_PSCI_0_2.
 	- KVM_ARM_VCPU_PMU_V3: Emulate PMUv3 for the CPU.
 	  Depends on KVM_CAP_ARM_PMU_V3.
+	- KVM_ARM_VCPU_PTRAUTH_ADDRESS: Enables Address Pointer authentication
+	  for the CPU and supported only on arm64 architecture.
+	- KVM_ARM_VCPU_PTRAUTH_GENERIC: Enables Generic Pointer authentication
+	  for the CPU and supported only on arm64 architecture.
 
 
 4.83 KVM_ARM_PREFERRED_TARGET
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index e3ccd7b..9dd2918 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -45,7 +45,7 @@
 
 #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS
 
-#define KVM_VCPU_MAX_FEATURES 4
+#define KVM_VCPU_MAX_FEATURES 6
 
 #define KVM_REQ_SLEEP \
 	KVM_ARCH_REQ_FLAGS(0, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
@@ -491,6 +491,12 @@ static inline bool kvm_arch_requires_vhe(void)
 	return false;
 }
 
+#define vcpu_has_ptrauth(vcpu)	(has_vhe() && \
+		system_supports_address_auth() && \
+		system_supports_generic_auth() && \
+		test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) && \
+		test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features))
+
 static inline void kvm_arch_hardware_unsetup(void) {}
 static inline void kvm_arch_sync_events(struct kvm *kvm) {}
 static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {}
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index 97c3478..8806f71 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -102,6 +102,8 @@ struct kvm_regs {
 #define KVM_ARM_VCPU_EL1_32BIT		1 /* CPU running a 32bit VM */
 #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
 #define KVM_ARM_VCPU_PMU_V3		3 /* Support guest PMUv3 */
+#define KVM_ARM_VCPU_PTRAUTH_ADDRESS	4 /* VCPU uses address authentication */
+#define KVM_ARM_VCPU_PTRAUTH_GENERIC	5 /* VCPU uses generic authentication */
 
 struct kvm_vcpu_init {
 	__u32 target;
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index f16a5f8..717afed 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -128,6 +128,13 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 	if (loaded)
 		kvm_arch_vcpu_put(vcpu);
 
+	if (test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) ||
+		test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features)) {
+		/* Verify that KVM startup matches the conditions for ptrauth */
+		if (!vcpu_has_ptrauth(vcpu))
+			goto out;
+	}
+
 	switch (vcpu->arch.target) {
 	default:
 		if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features)) {
-- 
2.7.4



* [PATCH v8 7/9] KVM: arm/arm64: context-switch ptrauth registers
  2019-04-02  2:27 [PATCH v8 0/9] Add ARMv8.3 pointer authentication for kvm guest Amit Daniel Kachhap
                   ` (5 preceding siblings ...)
  2019-04-02  2:27 ` [PATCH v8 6/9] KVM: arm64: Add vcpu feature flags to control ptrauth accessibility Amit Daniel Kachhap
@ 2019-04-02  2:27 ` Amit Daniel Kachhap
  2019-04-04 19:29   ` Kristina Martsenko
  2019-04-02  2:27 ` [PATCH v8 8/9] KVM: arm64: Add capability to advertise ptrauth for guest Amit Daniel Kachhap
  2019-04-02  2:27 ` [kvmtool PATCH v8 9/9] KVM: arm/arm64: Add a vcpu feature for pointer authentication Amit Daniel Kachhap
  8 siblings, 1 reply; 30+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-02  2:27 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	Kristina Martsenko, linux-kernel, Amit Daniel Kachhap,
	Mark Rutland, James Morse, Julien Thierry

From: Mark Rutland <mark.rutland@arm.com>

When pointer authentication is supported, a guest may wish to use it.
This patch adds the necessary KVM infrastructure for this to work, with
a semi-lazy context switch of the pointer auth state.

The pointer authentication feature is only enabled when VHE is built
into the kernel and present in the CPU implementation, so only VHE code
paths are modified.

When we schedule a vcpu, we disable guest usage of pointer
authentication instructions and accesses to the keys. While these are
disabled, we avoid context-switching the keys. When we trap the guest
trying to use pointer authentication functionality, we change to eagerly
context-switching the keys, and enable the feature. The next time the
vcpu is scheduled out/in, we start again. As an optimisation, the host
key save is done inside the ptrauth instruction/register access trap
itself.
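
In outline, the trap-time path behaves like the sketch below (names follow
the series' conventions; see the diff for the actual code):

        /* First guest use of ptrauth: flip to eager key switching if the
         * feature is enabled for this vcpu, else inject an UNDEF. */
        void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu)
        {
                if (vcpu_has_ptrauth(vcpu)) {
                        vcpu_ptrauth_enable(vcpu);
                        __ptrauth_save_state(vcpu->arch.host_cpu_context);
                } else {
                        kvm_inject_undefined(vcpu);
                }
        }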

Pointer authentication consists of address authentication and generic
authentication, and CPUs in a system might have varied support for
either. Where support for either feature is not uniform, it is hidden
from guests via ID register emulation, as a result of the cpufeature
framework in the host.

Unfortunately, address authentication and generic authentication cannot
be trapped separately, as the architecture provides a single EL2 trap
covering both. If we wish to expose one without the other, we cannot
prevent a (badly-written) guest from intermittently using a feature
which is not uniformly supported (when scheduled on a physical CPU which
supports the relevant feature). Hence, this patch expects both types of
authentication to be present in a CPU.

This key switch is done from the guest enter/exit assembly in preparation
for the upcoming in-kernel pointer authentication support. These key
switching routines are not implemented in C code, as that may cause
pointer authentication key signing errors in some situations.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
[Only VHE, key switch in full assembly, vcpu_has_ptrauth checks
, save host key in ptrauth exception trap]
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---

Changes since v7:
* Changed assembly label to number [Julien Thierry].
* Added a comment to explain the logic of calculating immediate offset.
* Proper license comment [Kristina Martsenko].
* Removed warning message inside kvm_vcpu_reset [Kristina Martsenko, James Morse].
* Added goto instead of return in kvm_vcpu_reset [Kristina Martsenko].
* Fixed kvm_arm_vcpu_ptrauth_setup_lazy missing function for arch/arm [Kristina].
* Created SYS_AP*_EL1 enums in increasing order to fix for enum randomisation [James Morse].
* Added details for KVM guest support in arch/arm64/Kconfig [Kristina Martsenko].

 Documentation/arm64/pointer-authentication.txt |   6 ++
 arch/arm/include/asm/kvm_host.h                |   1 +
 arch/arm64/Kconfig                             |   5 +-
 arch/arm64/include/asm/kvm_host.h              |  17 ++++
 arch/arm64/include/asm/kvm_ptrauth_asm.h       | 106 +++++++++++++++++++++++++
 arch/arm64/kernel/asm-offsets.c                |   6 ++
 arch/arm64/kvm/guest.c                         |  14 ++++
 arch/arm64/kvm/handle_exit.c                   |  24 ++++--
 arch/arm64/kvm/hyp/entry.S                     |   7 ++
 arch/arm64/kvm/sys_regs.c                      |  46 ++++++++++-
 virt/kvm/arm/arm.c                             |   2 +
 11 files changed, 221 insertions(+), 13 deletions(-)
 create mode 100644 arch/arm64/include/asm/kvm_ptrauth_asm.h

diff --git a/Documentation/arm64/pointer-authentication.txt b/Documentation/arm64/pointer-authentication.txt
index b164886..bb134b9 100644
--- a/Documentation/arm64/pointer-authentication.txt
+++ b/Documentation/arm64/pointer-authentication.txt
@@ -95,4 +95,10 @@ userspace flags are checked together before enabling pointer authentication.
 The separate userspace flag will allow to have no userspace ABI changes when
 both features are implemented in an isolated way in future.
 
+Additionally, when these vcpu feature flags are not set, KVM will filter
+out the Pointer Authentication system key registers from the
+KVM_GET/SET_REG_* ioctls and mask those features from the cpufeature ID
+registers. Any attempt to use the Pointer Authentication instructions
+will result in an UNDEFINED exception being injected into the guest.
+
 Pointer Authentication is supported in KVM guest only in VHE mode.
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index a928565..c7a624b 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -361,6 +361,7 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
 static inline void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu) {}
+static inline void kvm_arm_vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu) {}
 
 static inline void kvm_arm_vhe_guest_enter(void) {}
 static inline void kvm_arm_vhe_guest_exit(void) {}
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 7e34b9e..9e8506e 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1301,8 +1301,9 @@ config ARM64_PTR_AUTH
 	  context-switched along with the process.
 
 	  The feature is detected at runtime. If the feature is not present in
-	  hardware it will not be advertised to userspace nor will it be
-	  enabled.
+	  hardware it will not be advertised to userspace/KVM guests nor will
+	  it be enabled. However, KVM guests also require CONFIG_ARM64_VHE=y to
+	  use this feature.
 
 endmenu
 
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 9dd2918..becac19 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -152,6 +152,18 @@ enum vcpu_sysreg {
 	PMSWINC_EL0,	/* Software Increment Register */
 	PMUSERENR_EL0,	/* User Enable Register */
 
+	/* Pointer Authentication Registers. Keep them in increasing order. */
+	APIAKEYLO_EL1,
+	APIAKEYHI_EL1 = APIAKEYLO_EL1 + 1,
+	APIBKEYLO_EL1 = APIAKEYLO_EL1 + 2,
+	APIBKEYHI_EL1 = APIAKEYLO_EL1 + 3,
+	APDAKEYLO_EL1 = APIAKEYLO_EL1 + 4,
+	APDAKEYHI_EL1 = APIAKEYLO_EL1 + 5,
+	APDBKEYLO_EL1 = APIAKEYLO_EL1 + 6,
+	APDBKEYHI_EL1 = APIAKEYLO_EL1 + 7,
+	APGAKEYLO_EL1 = APIAKEYLO_EL1 + 8,
+	APGAKEYHI_EL1 = APIAKEYLO_EL1 + 9,
+
 	/* 32bit specific registers. Keep them at the end of the range */
 	DACR32_EL2,	/* Domain Access Control Register */
 	IFSR32_EL2,	/* Instruction Fault Status Register */
@@ -497,6 +509,11 @@ static inline bool kvm_arch_requires_vhe(void)
 		test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) && \
 		test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features))
 
+void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu);
+void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu);
+void kvm_arm_vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu);
+void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu);
+
 static inline void kvm_arch_hardware_unsetup(void) {}
 static inline void kvm_arch_sync_events(struct kvm *kvm) {}
 static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {}
diff --git a/arch/arm64/include/asm/kvm_ptrauth_asm.h b/arch/arm64/include/asm/kvm_ptrauth_asm.h
new file mode 100644
index 0000000..65f99e9
--- /dev/null
+++ b/arch/arm64/include/asm/kvm_ptrauth_asm.h
@@ -0,0 +1,106 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* arch/arm64/include/asm/kvm_ptrauth_asm.h: Guest/host ptrauth save/restore
+ * Copyright 2019 Arm Limited
+ * Author: Mark Rutland <mark.rutland@arm.com>
+ *         Amit Daniel Kachhap <amit.kachhap@arm.com>
+ */
+
+#ifndef __ASM_KVM_PTRAUTH_ASM_H
+#define __ASM_KVM_PTRAUTH_ASM_H
+
+#ifndef __ASSEMBLY__
+
+#define __ptrauth_save_key(regs, key)						\
+({										\
+	regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1);	\
+	regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1);	\
+})
+
+#define __ptrauth_save_state(ctxt)						\
+({										\
+	__ptrauth_save_key(ctxt->sys_regs, APIA);				\
+	__ptrauth_save_key(ctxt->sys_regs, APIB);				\
+	__ptrauth_save_key(ctxt->sys_regs, APDA);				\
+	__ptrauth_save_key(ctxt->sys_regs, APDB);				\
+	__ptrauth_save_key(ctxt->sys_regs, APGA);				\
+})
+
+#else /* __ASSEMBLY__ */
+
+#include <asm/sysreg.h>
+
+#ifdef	CONFIG_ARM64_PTR_AUTH
+
+#define PTRAUTH_REG_OFFSET(x)	(x - CPU_APIAKEYLO_EL1)
+
+/*
+ * CPU_AP*_EL1 values exceed the immediate offset range (512) of the stp
+ * instruction, so the macros below take CPU_APIAKEYLO_EL1 as a base and
+ * calculate each key's offset from it, avoiding an extra add instruction.
+ * These macros assume the key offsets are laid out in increasing order.
+ */
+.macro	ptrauth_save_state base, reg1, reg2
+	mrs_s	\reg1, SYS_APIAKEYLO_EL1
+	mrs_s	\reg2, SYS_APIAKEYHI_EL1
+	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIAKEYLO_EL1)]
+	mrs_s	\reg1, SYS_APIBKEYLO_EL1
+	mrs_s	\reg2, SYS_APIBKEYHI_EL1
+	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIBKEYLO_EL1)]
+	mrs_s	\reg1, SYS_APDAKEYLO_EL1
+	mrs_s	\reg2, SYS_APDAKEYHI_EL1
+	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDAKEYLO_EL1)]
+	mrs_s	\reg1, SYS_APDBKEYLO_EL1
+	mrs_s	\reg2, SYS_APDBKEYHI_EL1
+	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDBKEYLO_EL1)]
+	mrs_s	\reg1, SYS_APGAKEYLO_EL1
+	mrs_s	\reg2, SYS_APGAKEYHI_EL1
+	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APGAKEYLO_EL1)]
+.endm
+
+.macro	ptrauth_restore_state base, reg1, reg2
+	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIAKEYLO_EL1)]
+	msr_s	SYS_APIAKEYLO_EL1, \reg1
+	msr_s	SYS_APIAKEYHI_EL1, \reg2
+	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIBKEYLO_EL1)]
+	msr_s	SYS_APIBKEYLO_EL1, \reg1
+	msr_s	SYS_APIBKEYHI_EL1, \reg2
+	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDAKEYLO_EL1)]
+	msr_s	SYS_APDAKEYLO_EL1, \reg1
+	msr_s	SYS_APDAKEYHI_EL1, \reg2
+	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDBKEYLO_EL1)]
+	msr_s	SYS_APDBKEYLO_EL1, \reg1
+	msr_s	SYS_APDBKEYHI_EL1, \reg2
+	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APGAKEYLO_EL1)]
+	msr_s	SYS_APGAKEYLO_EL1, \reg1
+	msr_s	SYS_APGAKEYHI_EL1, \reg2
+.endm
+
+.macro ptrauth_switch_to_guest g_ctxt, reg1, reg2, reg3
+	ldr	\reg1, [\g_ctxt, #CPU_HCR_EL2]
+	and	\reg1, \reg1, #(HCR_API | HCR_APK)
+	cbz	\reg1, 1f
+	add	\reg1, \g_ctxt, #CPU_APIAKEYLO_EL1
+	ptrauth_restore_state	\reg1, \reg2, \reg3
+1:
+.endm
+
+.macro ptrauth_switch_to_host g_ctxt, h_ctxt, reg1, reg2, reg3
+	ldr	\reg1, [\g_ctxt, #CPU_HCR_EL2]
+	and	\reg1, \reg1, #(HCR_API | HCR_APK)
+	cbz	\reg1, 2f
+	add	\reg1, \g_ctxt, #CPU_APIAKEYLO_EL1
+	ptrauth_save_state	\reg1, \reg2, \reg3
+	add	\reg1, \h_ctxt, #CPU_APIAKEYLO_EL1
+	ptrauth_restore_state	\reg1, \reg2, \reg3
+	isb
+2:
+.endm
+
+#else /* !CONFIG_ARM64_PTR_AUTH */
+.macro ptrauth_switch_to_guest g_ctxt, reg1, reg2, reg3
+.endm
+.macro ptrauth_switch_to_host g_ctxt, h_ctxt, reg1, reg2, reg3
+.endm
+#endif /* CONFIG_ARM64_PTR_AUTH */
+#endif /* __ASSEMBLY__ */
+#endif /* __ASM_KVM_PTRAUTH_ASM_H */
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 7f40dcb..12ca916 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -126,6 +126,12 @@ int main(void)
   DEFINE(VCPU_FAULT_DISR,	offsetof(struct kvm_vcpu, arch.fault.disr_el1));
   DEFINE(VCPU_WORKAROUND_FLAGS,	offsetof(struct kvm_vcpu, arch.workaround_flags));
   DEFINE(CPU_GP_REGS,		offsetof(struct kvm_cpu_context, gp_regs));
+  DEFINE(CPU_APIAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APIAKEYLO_EL1]));
+  DEFINE(CPU_APIBKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APIBKEYLO_EL1]));
+  DEFINE(CPU_APDAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APDAKEYLO_EL1]));
+  DEFINE(CPU_APDBKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APDBKEYLO_EL1]));
+  DEFINE(CPU_APGAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APGAKEYLO_EL1]));
+  DEFINE(CPU_HCR_EL2,		offsetof(struct kvm_cpu_context, hcr_el2));
   DEFINE(CPU_USER_PT_REGS,	offsetof(struct kvm_regs, regs));
   DEFINE(HOST_CONTEXT_VCPU,	offsetof(struct kvm_cpu_context, __hyp_running_vcpu));
 #endif
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index e2f0268..9f591ad 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -544,3 +544,17 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
 
 	return ret;
 }
+
+/**
+ * kvm_arm_vcpu_ptrauth_setup_lazy - set up lazy ptrauth for vcpu scheduling
+ *
+ * @vcpu: The VCPU pointer
+ *
+ * This function disables ptrauth for the vcpu so that its first use by the
+ * guest traps, at which point it is enabled lazily.
+ */
+void kvm_arm_vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu)
+{
+	if (vcpu_has_ptrauth(vcpu))
+		kvm_arm_vcpu_ptrauth_disable(vcpu);
+}
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 0b79834..5838ff9 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -30,6 +30,7 @@
 #include <asm/kvm_coproc.h>
 #include <asm/kvm_emulate.h>
 #include <asm/kvm_mmu.h>
+#include <asm/kvm_ptrauth_asm.h>
 #include <asm/debug-monitors.h>
 #include <asm/traps.h>
 
@@ -174,19 +175,26 @@ static int handle_sve(struct kvm_vcpu *vcpu, struct kvm_run *run)
 }
 
 /*
+ * Handle the guest trying to use a ptrauth instruction, or trying to access a
+ * ptrauth register.
+ */
+void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu)
+{
+	if (vcpu_has_ptrauth(vcpu)) {
+		kvm_arm_vcpu_ptrauth_enable(vcpu);
+		__ptrauth_save_state(vcpu->arch.host_cpu_context);
+	} else {
+		kvm_inject_undefined(vcpu);
+	}
+}
+
+/*
  * Guest usage of a ptrauth instruction (which the guest EL1 did not turn into
  * a NOP).
  */
 static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu, struct kvm_run *run)
 {
-	/*
-	 * We don't currently support ptrauth in a guest, and we mask the ID
-	 * registers to prevent well-behaved guests from trying to make use of
-	 * it.
-	 *
-	 * Inject an UNDEF, as if the feature really isn't present.
-	 */
-	kvm_inject_undefined(vcpu);
+	kvm_arm_vcpu_ptrauth_trap(vcpu);
 	return 1;
 }
 
diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
index 675fdc1..3a70213 100644
--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -24,6 +24,7 @@
 #include <asm/kvm_arm.h>
 #include <asm/kvm_asm.h>
 #include <asm/kvm_mmu.h>
+#include <asm/kvm_ptrauth_asm.h>
 
 #define CPU_GP_REG_OFFSET(x)	(CPU_GP_REGS + x)
 #define CPU_XREG_OFFSET(x)	CPU_GP_REG_OFFSET(CPU_USER_PT_REGS + 8*x)
@@ -64,6 +65,9 @@ ENTRY(__guest_enter)
 
 	add	x18, x0, #VCPU_CONTEXT
 
+	// Macro ptrauth_switch_to_guest(guest cxt, tmp1, tmp2, tmp3).
+	ptrauth_switch_to_guest x18, x0, x1, x2
+
 	// Restore guest regs x0-x17
 	ldp	x0, x1,   [x18, #CPU_XREG_OFFSET(0)]
 	ldp	x2, x3,   [x18, #CPU_XREG_OFFSET(2)]
@@ -118,6 +122,9 @@ ENTRY(__guest_exit)
 
 	get_host_ctxt	x2, x3
 
+	// Macro ptrauth_switch_to_host(guest cxt, host cxt, tmp1, tmp2, tmp3).
+	ptrauth_switch_to_host x1, x2, x3, x4, x5
+
 	// Now restore the host regs
 	restore_callee_saved_regs x2
 
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index c86a7b0..a9adb92 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1007,6 +1007,38 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	{ SYS_DESC(SYS_PMEVTYPERn_EL0(n)),					\
 	  access_pmu_evtyper, reset_unknown, (PMEVTYPER0_EL0 + n), }
 
+void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu)
+{
+	vcpu->arch.ctxt.hcr_el2 |= (HCR_API | HCR_APK);
+}
+
+void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu)
+{
+	vcpu->arch.ctxt.hcr_el2 &= ~(HCR_API | HCR_APK);
+}
+
+static bool trap_ptrauth(struct kvm_vcpu *vcpu,
+			 struct sys_reg_params *p,
+			 const struct sys_reg_desc *rd)
+{
+	kvm_arm_vcpu_ptrauth_trap(vcpu);
+	return false;
+}
+
+static unsigned int ptrauth_visibility(const struct kvm_vcpu *vcpu,
+			const struct sys_reg_desc *rd)
+{
+	return vcpu_has_ptrauth(vcpu) ? 0 : REG_HIDDEN_USER | REG_HIDDEN_GUEST;
+}
+
+#define __PTRAUTH_KEY(k)						\
+	{ SYS_DESC(SYS_## k), trap_ptrauth, reset_unknown, k,		\
+	.visibility = ptrauth_visibility}
+
+#define PTRAUTH_KEY(k)							\
+	__PTRAUTH_KEY(k ## KEYLO_EL1),					\
+	__PTRAUTH_KEY(k ## KEYHI_EL1)
+
 static bool access_arch_timer(struct kvm_vcpu *vcpu,
 			      struct sys_reg_params *p,
 			      const struct sys_reg_desc *r)
@@ -1061,9 +1093,11 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu,
 					 (0xfUL << ID_AA64ISAR1_API_SHIFT) |
 					 (0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
 					 (0xfUL << ID_AA64ISAR1_GPI_SHIFT);
-		if (val & ptrauth_mask)
-			kvm_debug("ptrauth unsupported for guests, suppressing\n");
-		val &= ~ptrauth_mask;
+		if (!vcpu_has_ptrauth(vcpu)) {
+			if (val & ptrauth_mask)
+				kvm_debug("ptrauth unsupported for guests, suppressing\n");
+			val &= ~ptrauth_mask;
+		}
 	}
 
 	return val;
@@ -1387,6 +1421,12 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	{ SYS_DESC(SYS_TTBR1_EL1), access_vm_reg, reset_unknown, TTBR1_EL1 },
 	{ SYS_DESC(SYS_TCR_EL1), access_vm_reg, reset_val, TCR_EL1, 0 },
 
+	PTRAUTH_KEY(APIA),
+	PTRAUTH_KEY(APIB),
+	PTRAUTH_KEY(APDA),
+	PTRAUTH_KEY(APDB),
+	PTRAUTH_KEY(APGA),
+
 	{ SYS_DESC(SYS_AFSR0_EL1), access_vm_reg, reset_unknown, AFSR0_EL1 },
 	{ SYS_DESC(SYS_AFSR1_EL1), access_vm_reg, reset_unknown, AFSR1_EL1 },
 	{ SYS_DESC(SYS_ESR_EL1), access_vm_reg, reset_unknown, ESR_EL1 },
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 58de0ca..50ae066 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -385,6 +385,8 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 		vcpu_clear_wfe_traps(vcpu);
 	else
 		vcpu_set_wfe_traps(vcpu);
+
+	kvm_arm_vcpu_ptrauth_setup_lazy(vcpu);
 }
 
 void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v8 8/9] KVM: arm64: Add capability to advertise ptrauth for guest
  2019-04-02  2:27 [PATCH v8 0/9] Add ARMv8.3 pointer authentication for kvm guest Amit Daniel Kachhap
                   ` (6 preceding siblings ...)
  2019-04-02  2:27 ` [PATCH v8 7/9] KVM: arm/arm64: context-switch ptrauth registers Amit Daniel Kachhap
@ 2019-04-02  2:27 ` Amit Daniel Kachhap
  2019-04-05 11:03   ` Dave Martin
  2019-04-02  2:27 ` [kvmtool PATCH v8 9/9] KVM: arm/arm64: Add a vcpu feature for pointer authentication Amit Daniel Kachhap
  8 siblings, 1 reply; 30+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-02  2:27 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	Kristina Martsenko, linux-kernel, Amit Daniel Kachhap,
	Mark Rutland, James Morse, Julien Thierry

This patch advertises two capabilities corresponding to the address
and generic pointer authentication cpu features. These capabilities
depend upon system support for pointer authentication and VHE mode.

The current arm64 KVM implementation ties support for address and
generic authentication together. However, a separate ABI capability is
added for each of them so that a future implementation supporting them
independently will not require any ABI changes.
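
For illustration, a VMM would probe both capabilities before requesting
the vcpu features (a minimal sketch; the function name is made up here
and error handling is elided):

	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/* fd is the /dev/kvm or VM file descriptor. */
	static int kvm_supports_ptrauth(int fd)
	{
		return ioctl(fd, KVM_CHECK_EXTENSION, KVM_CAP_ARM_PTRAUTH_ADDRESS) > 0 &&
		       ioctl(fd, KVM_CHECK_EXTENSION, KVM_CAP_ARM_PTRAUTH_GENERIC) > 0;
	}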

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---

Changes since v7:
* Created 2 capabilities KVM_CAP_ARM_PTRAUTH_ADDRESS and KVM_CAP_ARM_PTRAUTH_GENERIC
  instead of one KVM_CAP_ARM_PTRAUTH [Kristina Martsenko].
* Added documentation here itself instead of in a new patch.

 Documentation/virtual/kvm/api.txt | 3 +++
 arch/arm64/kvm/reset.c            | 6 ++++++
 include/uapi/linux/kvm.h          | 2 ++
 3 files changed, 11 insertions(+)

diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index aaa048d..9b56892 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -2661,8 +2661,11 @@ Possible features:
 	  Depends on KVM_CAP_ARM_PMU_V3.
 	- KVM_ARM_VCPU_PTRAUTH_ADDRESS: Enables Address Pointer authentication
 	  for the CPU and supported only on arm64 architecture.
+	  Depends on KVM_CAP_ARM_PTRAUTH_ADDRESS.
 	- KVM_ARM_VCPU_PTRAUTH_GENERIC: Enables Generic Pointer authentication
 	  for the CPU and supported only on arm64 architecture.
+	  Depends on KVM_CAP_ARM_PTRAUTH_GENERIC.
+	  More details in Documentation/arm64/pointer-authentication.txt.
 
 
 4.83 KVM_ARM_PREFERRED_TARGET
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index 717afed..8aa8982 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -92,6 +92,12 @@ int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_ARM_VM_IPA_SIZE:
 		r = kvm_ipa_limit;
 		break;
+	case KVM_CAP_ARM_PTRAUTH_ADDRESS:
+		r = has_vhe() && system_supports_address_auth();
+		break;
+	case KVM_CAP_ARM_PTRAUTH_GENERIC:
+		r = has_vhe() && system_supports_generic_auth();
+		break;
 	default:
 		r = 0;
 	}
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 6d4ea4b..500ac2b 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -988,6 +988,8 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_ARM_VM_IPA_SIZE 165
 #define KVM_CAP_MANUAL_DIRTY_LOG_PROTECT 166
 #define KVM_CAP_HYPERV_CPUID 167
+#define KVM_CAP_ARM_PTRAUTH_ADDRESS 168
+#define KVM_CAP_ARM_PTRAUTH_GENERIC 169
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [kvmtool PATCH v8 9/9] KVM: arm/arm64: Add a vcpu feature for pointer authentication
  2019-04-02  2:27 [PATCH v8 0/9] Add ARMv8.3 pointer authentication for kvm guest Amit Daniel Kachhap
                   ` (7 preceding siblings ...)
  2019-04-02  2:27 ` [PATCH v8 8/9] KVM: arm64: Add capability to advertise ptrauth for guest Amit Daniel Kachhap
@ 2019-04-02  2:27 ` Amit Daniel Kachhap
  2019-04-05 11:04   ` Dave Martin
  8 siblings, 1 reply; 30+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-02  2:27 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	Kristina Martsenko, linux-kernel, Amit Daniel Kachhap,
	Mark Rutland, James Morse, Julien Thierry

This adds a runtime capability check to kvmtool to enable ARMv8.3
Pointer Authentication in the guest kernel. The two vcpu features
KVM_ARM_VCPU_PTRAUTH_[ADDRESS/GENERIC] are supplied together to enable
Pointer Authentication in the KVM guest, after checking both capabilities.

A command line option --ptrauth is also required to select this feature.
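
An illustrative invocation (the image path and vcpu count are just
examples; -k and -c are existing kvmtool options):

	$ lkvm run -k Image -c 2 --ptrauth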

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---

Changes since v7:
* Added check for capability KVM_CAP_ARM_PTRAUTH_GENERIC

 arm/aarch32/include/kvm/kvm-cpu-arch.h    | 1 +
 arm/aarch64/include/asm/kvm.h             | 2 ++
 arm/aarch64/include/kvm/kvm-config-arch.h | 4 +++-
 arm/aarch64/include/kvm/kvm-cpu-arch.h    | 2 ++
 arm/include/arm-common/kvm-config-arch.h  | 1 +
 arm/kvm-cpu.c                             | 7 +++++++
 include/linux/kvm.h                       | 2 ++
 7 files changed, 18 insertions(+), 1 deletion(-)

diff --git a/arm/aarch32/include/kvm/kvm-cpu-arch.h b/arm/aarch32/include/kvm/kvm-cpu-arch.h
index d28ea67..520ea76 100644
--- a/arm/aarch32/include/kvm/kvm-cpu-arch.h
+++ b/arm/aarch32/include/kvm/kvm-cpu-arch.h
@@ -13,4 +13,5 @@
 #define ARM_CPU_ID		0, 0, 0
 #define ARM_CPU_ID_MPIDR	5
 
+#define ARM_VCPU_PTRAUTH_FEATURE	0
 #endif /* KVM__KVM_CPU_ARCH_H */
diff --git a/arm/aarch64/include/asm/kvm.h b/arm/aarch64/include/asm/kvm.h
index 97c3478..d4d0d8c 100644
--- a/arm/aarch64/include/asm/kvm.h
+++ b/arm/aarch64/include/asm/kvm.h
@@ -102,6 +102,8 @@ struct kvm_regs {
 #define KVM_ARM_VCPU_EL1_32BIT		1 /* CPU running a 32bit VM */
 #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
 #define KVM_ARM_VCPU_PMU_V3		3 /* Support guest PMUv3 */
+#define KVM_ARM_VCPU_PTRAUTH_ADDRESS	4 /* CPU uses address pointer authentication */
+#define KVM_ARM_VCPU_PTRAUTH_GENERIC	5 /* CPU uses generic pointer authentication */
 
 struct kvm_vcpu_init {
 	__u32 target;
diff --git a/arm/aarch64/include/kvm/kvm-config-arch.h b/arm/aarch64/include/kvm/kvm-config-arch.h
index 04be43d..2074684 100644
--- a/arm/aarch64/include/kvm/kvm-config-arch.h
+++ b/arm/aarch64/include/kvm/kvm-config-arch.h
@@ -8,7 +8,9 @@
 			"Create PMUv3 device"),				\
 	OPT_U64('\0', "kaslr-seed", &(cfg)->kaslr_seed,			\
 			"Specify random seed for Kernel Address Space "	\
-			"Layout Randomization (KASLR)"),
+			"Layout Randomization (KASLR)"),		\
+	OPT_BOOLEAN('\0', "ptrauth", &(cfg)->has_ptrauth,		\
+			"Enable address authentication"),
 
 #include "arm-common/kvm-config-arch.h"
 
diff --git a/arm/aarch64/include/kvm/kvm-cpu-arch.h b/arm/aarch64/include/kvm/kvm-cpu-arch.h
index a9d8563..fcc2107 100644
--- a/arm/aarch64/include/kvm/kvm-cpu-arch.h
+++ b/arm/aarch64/include/kvm/kvm-cpu-arch.h
@@ -17,4 +17,6 @@
 #define ARM_CPU_CTRL		3, 0, 1, 0
 #define ARM_CPU_CTRL_SCTLR_EL1	0
 
+#define ARM_VCPU_PTRAUTH_FEATURE	((1UL << KVM_ARM_VCPU_PTRAUTH_ADDRESS) \
+					| (1UL << KVM_ARM_VCPU_PTRAUTH_GENERIC))
 #endif /* KVM__KVM_CPU_ARCH_H */
diff --git a/arm/include/arm-common/kvm-config-arch.h b/arm/include/arm-common/kvm-config-arch.h
index 5734c46..5badcbd 100644
--- a/arm/include/arm-common/kvm-config-arch.h
+++ b/arm/include/arm-common/kvm-config-arch.h
@@ -10,6 +10,7 @@ struct kvm_config_arch {
 	bool		aarch32_guest;
 	bool		has_pmuv3;
 	u64		kaslr_seed;
+	bool		has_ptrauth;
 	enum irqchip_type irqchip;
 	u64		fw_addr;
 };
diff --git a/arm/kvm-cpu.c b/arm/kvm-cpu.c
index 7780251..398c9d6 100644
--- a/arm/kvm-cpu.c
+++ b/arm/kvm-cpu.c
@@ -68,6 +68,13 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned long cpu_id)
 		vcpu_init.features[0] |= (1UL << KVM_ARM_VCPU_PSCI_0_2);
 	}
 
+	/* Set KVM_ARM_VCPU_PTRAUTH_[ADDRESS/GENERIC] if available */
+	if (kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_ADDRESS) &&
+		kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_GENERIC)) {
+		if (kvm->cfg.arch.has_ptrauth)
+			vcpu_init.features[0] |= ARM_VCPU_PTRAUTH_FEATURE;
+	}
+
 	/*
 	 * If the preferred target ioctl is successful then
 	 * use preferred target else try each and every target type
diff --git a/include/linux/kvm.h b/include/linux/kvm.h
index 6d4ea4b..500ac2b 100644
--- a/include/linux/kvm.h
+++ b/include/linux/kvm.h
@@ -988,6 +988,8 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_ARM_VM_IPA_SIZE 165
 #define KVM_CAP_MANUAL_DIRTY_LOG_PROTECT 166
 #define KVM_CAP_HYPERV_CPUID 167
+#define KVM_CAP_ARM_PTRAUTH_ADDRESS 168
+#define KVM_CAP_ARM_PTRAUTH_GENERIC 169
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* Re: [PATCH v8 7/9] KVM: arm/arm64: context-switch ptrauth registers
  2019-04-02  2:27 ` [PATCH v8 7/9] KVM: arm/arm64: context-switch ptrauth registers Amit Daniel Kachhap
@ 2019-04-04 19:29   ` Kristina Martsenko
  2019-04-05 11:00     ` Amit Daniel Kachhap
  0 siblings, 1 reply; 30+ messages in thread
From: Kristina Martsenko @ 2019-04-04 19:29 UTC (permalink / raw)
  To: Amit Daniel Kachhap, linux-arm-kernel
  Cc: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	linux-kernel, Mark Rutland, James Morse, Julien Thierry

On 02/04/2019 03:27, Amit Daniel Kachhap wrote:
> From: Mark Rutland <mark.rutland@arm.com>
> 
> When pointer authentication is supported, a guest may wish to use it.
> This patch adds the necessary KVM infrastructure for this to work, with
> a semi-lazy context switch of the pointer auth state.

[...]

> diff --git a/arch/arm64/include/asm/kvm_ptrauth_asm.h b/arch/arm64/include/asm/kvm_ptrauth_asm.h
> new file mode 100644
> index 0000000..65f99e9
> --- /dev/null
> +++ b/arch/arm64/include/asm/kvm_ptrauth_asm.h
> @@ -0,0 +1,106 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/* arch/arm64/include/asm/kvm_ptrauth_asm.h: Guest/host ptrauth save/restore
> + * Copyright 2019 Arm Limited
> + * Author: Mark Rutland <mark.rutland@arm.com>
> + *         Amit Daniel Kachhap <amit.kachhap@arm.com>
> + */
> +
> +#ifndef __ASM_KVM_PTRAUTH_ASM_H
> +#define __ASM_KVM_PTRAUTH_ASM_H
> +
> +#ifndef __ASSEMBLY__
> +
> +#define __ptrauth_save_key(regs, key)						\
> +({										\
> +	regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1);	\
> +	regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1);	\
> +})
> +
> +#define __ptrauth_save_state(ctxt)						\
> +({										\
> +	__ptrauth_save_key(ctxt->sys_regs, APIA);				\
> +	__ptrauth_save_key(ctxt->sys_regs, APIB);				\
> +	__ptrauth_save_key(ctxt->sys_regs, APDA);				\
> +	__ptrauth_save_key(ctxt->sys_regs, APDB);				\
> +	__ptrauth_save_key(ctxt->sys_regs, APGA);				\
> +})
> +
> +#else /* __ASSEMBLY__ */
> +
> +#include <asm/sysreg.h>
> +
> +#ifdef	CONFIG_ARM64_PTR_AUTH
> +
> +#define PTRAUTH_REG_OFFSET(x)	(x - CPU_APIAKEYLO_EL1)
> +
> +/*
> + * CPU_AP*_EL1 values exceed the immediate offset range (512) of the stp
> + * instruction, so the macros below take CPU_APIAKEYLO_EL1 as a base and
> + * calculate each key's offset from it, avoiding an extra add instruction.
> + * These macros assume the key offsets are laid out in increasing order.
> + */
> +.macro	ptrauth_save_state base, reg1, reg2
> +	mrs_s	\reg1, SYS_APIAKEYLO_EL1
> +	mrs_s	\reg2, SYS_APIAKEYHI_EL1
> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIAKEYLO_EL1)]
> +	mrs_s	\reg1, SYS_APIBKEYLO_EL1
> +	mrs_s	\reg2, SYS_APIBKEYHI_EL1
> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIBKEYLO_EL1)]
> +	mrs_s	\reg1, SYS_APDAKEYLO_EL1
> +	mrs_s	\reg2, SYS_APDAKEYHI_EL1
> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDAKEYLO_EL1)]
> +	mrs_s	\reg1, SYS_APDBKEYLO_EL1
> +	mrs_s	\reg2, SYS_APDBKEYHI_EL1
> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDBKEYLO_EL1)]
> +	mrs_s	\reg1, SYS_APGAKEYLO_EL1
> +	mrs_s	\reg2, SYS_APGAKEYHI_EL1
> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APGAKEYLO_EL1)]
> +.endm
> +
> +.macro	ptrauth_restore_state base, reg1, reg2
> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIAKEYLO_EL1)]
> +	msr_s	SYS_APIAKEYLO_EL1, \reg1
> +	msr_s	SYS_APIAKEYHI_EL1, \reg2
> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIBKEYLO_EL1)]
> +	msr_s	SYS_APIBKEYLO_EL1, \reg1
> +	msr_s	SYS_APIBKEYHI_EL1, \reg2
> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDAKEYLO_EL1)]
> +	msr_s	SYS_APDAKEYLO_EL1, \reg1
> +	msr_s	SYS_APDAKEYHI_EL1, \reg2
> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDBKEYLO_EL1)]
> +	msr_s	SYS_APDBKEYLO_EL1, \reg1
> +	msr_s	SYS_APDBKEYHI_EL1, \reg2
> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APGAKEYLO_EL1)]
> +	msr_s	SYS_APGAKEYLO_EL1, \reg1
> +	msr_s	SYS_APGAKEYHI_EL1, \reg2
> +.endm
> +
> +.macro ptrauth_switch_to_guest g_ctxt, reg1, reg2, reg3
> +	ldr	\reg1, [\g_ctxt, #CPU_HCR_EL2]
> +	and	\reg1, \reg1, #(HCR_API | HCR_APK)
> +	cbz	\reg1, 1f
> +	add	\reg1, \g_ctxt, #CPU_APIAKEYLO_EL1
> +	ptrauth_restore_state	\reg1, \reg2, \reg3
> +1:

Nit: the label in assembly macros is usually a larger number (see
assembler.h or alternative.h for example). I think this is to avoid
future accidents like

	cbz	x0, 1f
	ptrauth_switch_to_guest	x1, x2, x3, x4
	add	x5, x5, x6
1:
	...

where the code would incorrectly branch to the label inside
ptrauth_switch_to_guest, instead of the one after it.
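
For example (illustrative; any sufficiently unusual number works, in the
style of assembler.h):

	cbz	\reg1, 9998f
	...
9998: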

Thanks,
Kristina

> +.endm
> +
> +.macro ptrauth_switch_to_host g_ctxt, h_ctxt, reg1, reg2, reg3
> +	ldr	\reg1, [\g_ctxt, #CPU_HCR_EL2]
> +	and	\reg1, \reg1, #(HCR_API | HCR_APK)
> +	cbz	\reg1, 2f
> +	add	\reg1, \g_ctxt, #CPU_APIAKEYLO_EL1
> +	ptrauth_save_state	\reg1, \reg2, \reg3
> +	add	\reg1, \h_ctxt, #CPU_APIAKEYLO_EL1
> +	ptrauth_restore_state	\reg1, \reg2, \reg3
> +	isb
> +2:
> +.endm
> +
> +#else /* !CONFIG_ARM64_PTR_AUTH */
> +.macro ptrauth_switch_to_guest g_ctxt, reg1, reg2, reg3
> +.endm
> +.macro ptrauth_switch_to_host g_ctxt, h_ctxt, reg1, reg2, reg3
> +.endm
> +#endif /* CONFIG_ARM64_PTR_AUTH */
> +#endif /* __ASSEMBLY__ */
> +#endif /* __ASM_KVM_PTRAUTH_ASM_H */

> diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
> index 675fdc1..3a70213 100644
> --- a/arch/arm64/kvm/hyp/entry.S
> +++ b/arch/arm64/kvm/hyp/entry.S
> @@ -24,6 +24,7 @@
>  #include <asm/kvm_arm.h>
>  #include <asm/kvm_asm.h>
>  #include <asm/kvm_mmu.h>
> +#include <asm/kvm_ptrauth_asm.h>
>  
>  #define CPU_GP_REG_OFFSET(x)	(CPU_GP_REGS + x)
>  #define CPU_XREG_OFFSET(x)	CPU_GP_REG_OFFSET(CPU_USER_PT_REGS + 8*x)
> @@ -64,6 +65,9 @@ ENTRY(__guest_enter)
>  
>  	add	x18, x0, #VCPU_CONTEXT
>  
> +	// Macro ptrauth_switch_to_guest(guest cxt, tmp1, tmp2, tmp3).
> +	ptrauth_switch_to_guest x18, x0, x1, x2
> +
>  	// Restore guest regs x0-x17
>  	ldp	x0, x1,   [x18, #CPU_XREG_OFFSET(0)]
>  	ldp	x2, x3,   [x18, #CPU_XREG_OFFSET(2)]
> @@ -118,6 +122,9 @@ ENTRY(__guest_exit)
>  
>  	get_host_ctxt	x2, x3
>  
> +	// Macro ptrauth_switch_to_host(guest cxt, host cxt, tmp1, tmp2, tmp3).
> +	ptrauth_switch_to_host x1, x2, x3, x4, x5
> +
>  	// Now restore the host regs
>  	restore_callee_saved_regs x2
>  

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v8 7/9] KVM: arm/arm64: context-switch ptrauth registers
  2019-04-04 19:29   ` Kristina Martsenko
@ 2019-04-05 11:00     ` Amit Daniel Kachhap
  0 siblings, 0 replies; 30+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-05 11:00 UTC (permalink / raw)
  To: Kristina Martsenko, linux-arm-kernel
  Cc: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	linux-kernel, Mark Rutland, James Morse, Julien Thierry

Hi Kristina,

On 4/5/19 12:59 AM, Kristina Martsenko wrote:
> On 02/04/2019 03:27, Amit Daniel Kachhap wrote:
>> From: Mark Rutland <mark.rutland@arm.com>
>>
>> When pointer authentication is supported, a guest may wish to use it.
>> This patch adds the necessary KVM infrastructure for this to work, with
>> a semi-lazy context switch of the pointer auth state.
> 
> [...]
> 
>> diff --git a/arch/arm64/include/asm/kvm_ptrauth_asm.h b/arch/arm64/include/asm/kvm_ptrauth_asm.h
>> new file mode 100644
>> index 0000000..65f99e9
>> --- /dev/null
>> +++ b/arch/arm64/include/asm/kvm_ptrauth_asm.h
>> @@ -0,0 +1,106 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/* arch/arm64/include/asm/kvm_ptrauth_asm.h: Guest/host ptrauth save/restore
>> + * Copyright 2019 Arm Limited
>> + * Author: Mark Rutland <mark.rutland@arm.com>
>> + *         Amit Daniel Kachhap <amit.kachhap@arm.com>
>> + */
>> +
>> +#ifndef __ASM_KVM_PTRAUTH_ASM_H
>> +#define __ASM_KVM_PTRAUTH_ASM_H
>> +
>> +#ifndef __ASSEMBLY__
>> +
>> +#define __ptrauth_save_key(regs, key)						\
>> +({										\
>> +	regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1);	\
>> +	regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1);	\
>> +})
>> +
>> +#define __ptrauth_save_state(ctxt)						\
>> +({										\
>> +	__ptrauth_save_key(ctxt->sys_regs, APIA);				\
>> +	__ptrauth_save_key(ctxt->sys_regs, APIB);				\
>> +	__ptrauth_save_key(ctxt->sys_regs, APDA);				\
>> +	__ptrauth_save_key(ctxt->sys_regs, APDB);				\
>> +	__ptrauth_save_key(ctxt->sys_regs, APGA);				\
>> +})
>> +
>> +#else /* __ASSEMBLY__ */
>> +
>> +#include <asm/sysreg.h>
>> +
>> +#ifdef	CONFIG_ARM64_PTR_AUTH
>> +
>> +#define PTRAUTH_REG_OFFSET(x)	(x - CPU_APIAKEYLO_EL1)
>> +
>> +/*
>> + * CPU_AP*_EL1 values exceed the immediate offset range (512) of the stp
>> + * instruction, so the macros below take CPU_APIAKEYLO_EL1 as a base and
>> + * calculate each key's offset from it, avoiding an extra add instruction.
>> + * These macros assume the key offsets are laid out in increasing order.
>> + */
>> +.macro	ptrauth_save_state base, reg1, reg2
>> +	mrs_s	\reg1, SYS_APIAKEYLO_EL1
>> +	mrs_s	\reg2, SYS_APIAKEYHI_EL1
>> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIAKEYLO_EL1)]
>> +	mrs_s	\reg1, SYS_APIBKEYLO_EL1
>> +	mrs_s	\reg2, SYS_APIBKEYHI_EL1
>> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIBKEYLO_EL1)]
>> +	mrs_s	\reg1, SYS_APDAKEYLO_EL1
>> +	mrs_s	\reg2, SYS_APDAKEYHI_EL1
>> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDAKEYLO_EL1)]
>> +	mrs_s	\reg1, SYS_APDBKEYLO_EL1
>> +	mrs_s	\reg2, SYS_APDBKEYHI_EL1
>> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDBKEYLO_EL1)]
>> +	mrs_s	\reg1, SYS_APGAKEYLO_EL1
>> +	mrs_s	\reg2, SYS_APGAKEYHI_EL1
>> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APGAKEYLO_EL1)]
>> +.endm
>> +
>> +.macro	ptrauth_restore_state base, reg1, reg2
>> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIAKEYLO_EL1)]
>> +	msr_s	SYS_APIAKEYLO_EL1, \reg1
>> +	msr_s	SYS_APIAKEYHI_EL1, \reg2
>> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIBKEYLO_EL1)]
>> +	msr_s	SYS_APIBKEYLO_EL1, \reg1
>> +	msr_s	SYS_APIBKEYHI_EL1, \reg2
>> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDAKEYLO_EL1)]
>> +	msr_s	SYS_APDAKEYLO_EL1, \reg1
>> +	msr_s	SYS_APDAKEYHI_EL1, \reg2
>> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDBKEYLO_EL1)]
>> +	msr_s	SYS_APDBKEYLO_EL1, \reg1
>> +	msr_s	SYS_APDBKEYHI_EL1, \reg2
>> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APGAKEYLO_EL1)]
>> +	msr_s	SYS_APGAKEYLO_EL1, \reg1
>> +	msr_s	SYS_APGAKEYHI_EL1, \reg2
>> +.endm
>> +
>> +.macro ptrauth_switch_to_guest g_ctxt, reg1, reg2, reg3
>> +	ldr	\reg1, [\g_ctxt, #CPU_HCR_EL2]
>> +	and	\reg1, \reg1, #(HCR_API | HCR_APK)
>> +	cbz	\reg1, 1f
>> +	add	\reg1, \g_ctxt, #CPU_APIAKEYLO_EL1
>> +	ptrauth_restore_state	\reg1, \reg2, \reg3
>> +1:
> 
> Nit: the label in assembly macros is usually a larger number (see
> assembler.h or alternative.h for example). I think this is to avoid
> future accidents like
> 
> 	cbz	x0, 1f
> 	ptrauth_switch_to_guest	x1, x2, x3, x4
> 	add	x5, x5, x6
> 1:
> 	...
> 
> where the code would incorrectly branch to the label inside
> ptrauth_switch_to_guest, instead of the one after it.
Yes, I agree that these kinds of labels are problematic. I will update
this in the next iteration.

Thanks,
Amit Daniel
> 
> Thanks,
> Kristina
> 
>> +.endm
>> +
>> +.macro ptrauth_switch_to_host g_ctxt, h_ctxt, reg1, reg2, reg3
>> +	ldr	\reg1, [\g_ctxt, #CPU_HCR_EL2]
>> +	and	\reg1, \reg1, #(HCR_API | HCR_APK)
>> +	cbz	\reg1, 2f
>> +	add	\reg1, \g_ctxt, #CPU_APIAKEYLO_EL1
>> +	ptrauth_save_state	\reg1, \reg2, \reg3
>> +	add	\reg1, \h_ctxt, #CPU_APIAKEYLO_EL1
>> +	ptrauth_restore_state	\reg1, \reg2, \reg3
>> +	isb
>> +2:
>> +.endm
>> +
>> +#else /* !CONFIG_ARM64_PTR_AUTH */
>> +.macro ptrauth_switch_to_guest g_ctxt, reg1, reg2, reg3
>> +.endm
>> +.macro ptrauth_switch_to_host g_ctxt, h_ctxt, reg1, reg2, reg3
>> +.endm
>> +#endif /* CONFIG_ARM64_PTR_AUTH */
>> +#endif /* __ASSEMBLY__ */
>> +#endif /* __ASM_KVM_PTRAUTH_ASM_H */
> 
>> diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
>> index 675fdc1..3a70213 100644
>> --- a/arch/arm64/kvm/hyp/entry.S
>> +++ b/arch/arm64/kvm/hyp/entry.S
>> @@ -24,6 +24,7 @@
>>   #include <asm/kvm_arm.h>
>>   #include <asm/kvm_asm.h>
>>   #include <asm/kvm_mmu.h>
>> +#include <asm/kvm_ptrauth_asm.h>
>>   
>>   #define CPU_GP_REG_OFFSET(x)	(CPU_GP_REGS + x)
>>   #define CPU_XREG_OFFSET(x)	CPU_GP_REG_OFFSET(CPU_USER_PT_REGS + 8*x)
>> @@ -64,6 +65,9 @@ ENTRY(__guest_enter)
>>   
>>   	add	x18, x0, #VCPU_CONTEXT
>>   
>> +	// Macro ptrauth_switch_to_guest(guest cxt, tmp1, tmp2, tmp3).
>> +	ptrauth_switch_to_guest x18, x0, x1, x2
>> +
>>   	// Restore guest regs x0-x17
>>   	ldp	x0, x1,   [x18, #CPU_XREG_OFFSET(0)]
>>   	ldp	x2, x3,   [x18, #CPU_XREG_OFFSET(2)]
>> @@ -118,6 +122,9 @@ ENTRY(__guest_exit)
>>   
>>   	get_host_ctxt	x2, x3
>>   
>> +	// Macro ptrauth_switch_to_host(guest cxt, host cxt, tmp1, tmp2, tmp3).
>> +	ptrauth_switch_to_host x1, x2, x3, x4, x5
>> +
>>   	// Now restore the host regs
>>   	restore_callee_saved_regs x2
>>   

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v8 3/9] KVM: arm64: Move hyp_symbol_addr to fix dependency
  2019-04-02  2:27 ` [PATCH v8 3/9] KVM: arm64: Move hyp_symbol_addr to fix dependency Amit Daniel Kachhap
@ 2019-04-05 11:02   ` Dave Martin
  2019-04-08  3:42     ` Amit Daniel Kachhap
  0 siblings, 1 reply; 30+ messages in thread
From: Dave Martin @ 2019-04-05 11:02 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: linux-arm-kernel, Marc Zyngier, Catalin Marinas, Will Deacon,
	Kristina Martsenko, kvmarm, Ramana Radhakrishnan, linux-kernel

On Tue, Apr 02, 2019 at 07:57:11AM +0530, Amit Daniel Kachhap wrote:
> Currently hyp_symbol_addr is placed in kvm_mmu.h which is mostly
> used by __hyp_this_cpu_ptr in kvm_asm.h but it cannot include
> kvm_mmu.h directly as kvm_mmu.h uses kvm_ksym_ref which is
> defined inside kvm_asm.h. Hence, hyp_symbol_addr is moved inside
> kvm_asm.h to fix this dependency on each other.
> 
> Also, hyp_symbol_addr's corresponding counterpart, kvm_ksym_ref,
> is already in kvm_asm.h, making it more sensible to move
> hyp_symbol_addr to the same file.
> 
> Suggested by: James Morse <james.morse@arm.com>
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> Reviewed-by: Julien Thierry <julien.thierry@arm.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Christoffer Dall <christoffer.dall@arm.com>
> Cc: kvmarm@lists.cs.columbia.edu
> ---
> 
> Changes since v7:
> * Slight change in commit message [Julien Thierry].
> 
>  arch/arm64/include/asm/kvm_asm.h | 20 ++++++++++++++++++++
>  arch/arm64/include/asm/kvm_mmu.h | 20 --------------------
>  2 files changed, 20 insertions(+), 20 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
> index f5b79e9..57a07e8 100644
> --- a/arch/arm64/include/asm/kvm_asm.h
> +++ b/arch/arm64/include/asm/kvm_asm.h
> @@ -80,6 +80,26 @@ extern void __vgic_v3_init_lrs(void);
>  
>  extern u32 __kvm_get_mdcr_el2(void);
>  
> +/*
> + * Obtain the PC-relative address of a kernel symbol
> + * s: symbol
> + *
> + * The goal of this macro is to return a symbol's address based on a
> + * PC-relative computation, as opposed to a loading the VA from a
> + * constant pool or something similar. This works well for HYP, as an
> + * absolute VA is guaranteed to be wrong. Only use this if trying to
> + * obtain the address of a symbol (i.e. not something you obtained by
> + * following a pointer).
> + */
> +#define hyp_symbol_addr(s)						\
> +	({								\
> +		typeof(s) *addr;					\
> +		asm("adrp	%0, %1\n"				\
> +		    "add	%0, %0, :lo12:%1\n"			\
> +		    : "=r" (addr) : "S" (&s));				\
> +		addr;							\
> +	})
> +

Do we need to include <asm/kvm_asm.h> in vgic-v2-cpuif-proxy.c now?

Otherwise,

Reviewed-by: Dave Martin <Dave.Martin@arm.com>

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v8 4/9] KVM: arm/arm64: preserve host HCR_EL2 value
  2019-04-02  2:27 ` [PATCH v8 4/9] KVM: arm/arm64: preserve host HCR_EL2 value Amit Daniel Kachhap
@ 2019-04-05 11:02   ` Dave Martin
  2019-04-08  4:31     ` Amit Daniel Kachhap
  2019-04-06 10:37   ` James Morse
  1 sibling, 1 reply; 30+ messages in thread
From: Dave Martin @ 2019-04-05 11:02 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: linux-arm-kernel, Marc Zyngier, Catalin Marinas, Will Deacon,
	Kristina Martsenko, kvmarm, Ramana Radhakrishnan, linux-kernel

On Tue, Apr 02, 2019 at 07:57:12AM +0530, Amit Daniel Kachhap wrote:
> From: Mark Rutland <mark.rutland@arm.com>
> 
> When restoring HCR_EL2 for the host, KVM uses HCR_HOST_VHE_FLAGS, which
> is a constant value. This works today, as the host HCR_EL2 value is
> always the same, but this will get in the way of supporting extensions
> that require HCR_EL2 bits to be set conditionally for the host.
> 
> To allow such features to work without KVM having to explicitly handle
> every possible host feature combination, this patch has KVM save/restore
> for the host HCR when switching to/from a guest HCR. The saving of the
> register is done once during cpu hypervisor initialization state and is
> just restored after switch from guest.
> 
> For fetching HCR_EL2 during kvm initialisation, a hyp call is made using
> kvm_call_hyp and is helpful in non-VHE case.
> 
> For the hyp TLB maintenance code, __tlb_switch_to_host_vhe() is updated
> to toggle the TGE bit with a RMW sequence, as we already do in
> __tlb_switch_to_guest_vhe().
> 
> The value of hcr_el2 is now stored in struct kvm_cpu_context as both host
> and guest can now use this field in a common way.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> [Added cpu_init_host_ctxt, hcr_el2 field in struct kvm_cpu_context,
> save hcr_el2 in hyp init stage]
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> Reviewed-by: James Morse <james.morse@arm.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Christoffer Dall <christoffer.dall@arm.com>
> Cc: kvmarm@lists.cs.columbia.edu

[...]

> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index a01fe087..3b09fd0 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -209,6 +209,8 @@ struct kvm_cpu_context {
>  		u32 copro[NR_COPRO_REGS];
>  	};
>  
> +	/* HYP host/guest configuration */
> +	u64 hcr_el2;

Minor nit: You could delete "host/guest" from the comment here.  This is
implied by the fact that the member is in struct kvm_cpu_context in the
first place.

>  	struct kvm_vcpu *__hyp_running_vcpu;
>  };

[...]

> diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
> index 3563fe6..f5cefa1 100644
> --- a/arch/arm64/kvm/hyp/switch.c
> +++ b/arch/arm64/kvm/hyp/switch.c

[...]

> @@ -159,9 +159,10 @@ static void deactivate_traps_vhe(void)
>  }
>  NOKPROBE_SYMBOL(deactivate_traps_vhe);
>  
> -static void __hyp_text __deactivate_traps_nvhe(void)
> +static void __hyp_text __deactivate_traps_nvhe(struct kvm_cpu_context *host_ctxt)

Where __hyp_text functions accept pointer arguments, they are usually
hyp pointers already...  (see below)

>  {
>  	u64 mdcr_el2 = read_sysreg(mdcr_el2);
> +	struct kvm_cpu_context *hyp_host_ctxt = kern_hyp_va(host_ctxt);
>  
>  	__deactivate_traps_common();
>  
> @@ -169,25 +170,28 @@ static void __hyp_text __deactivate_traps_nvhe(void)
>  	mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT;
>  
>  	write_sysreg(mdcr_el2, mdcr_el2);
> -	write_sysreg(HCR_HOST_NVHE_FLAGS, hcr_el2);
> +	write_sysreg(hyp_host_ctxt->hcr_el2, hcr_el2);
>  	write_sysreg(CPTR_EL2_DEFAULT, cptr_el2);
>  }
>  
>  static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu)
>  {
> +	struct kvm_cpu_context *host_ctxt;
> +
> +	host_ctxt = vcpu->arch.host_cpu_context;

host_ctxt is not otherwise used here, so can we convert it up-front so
that the argument to __deactivate_traps_nvhe() and
deactivate_traps_vhe() is a hyp pointer already?

So:

	struct kvm_cpu_context *hyp_host_ctxt;

	hyp_host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);

>  	/*
>  	 * If we pended a virtual abort, preserve it until it gets
>  	 * cleared. See D1.14.3 (Virtual Interrupts) for details, but
>  	 * the crucial bit is "On taking a vSError interrupt,
>  	 * HCR_EL2.VSE is cleared to 0."
>  	 */
> -	if (vcpu->arch.hcr_el2 & HCR_VSE)
> -		vcpu->arch.hcr_el2 = read_sysreg(hcr_el2);
> +	if (vcpu->arch.ctxt.hcr_el2 & HCR_VSE)
> +		vcpu->arch.ctxt.hcr_el2 = read_sysreg(hcr_el2);
>  
>  	if (has_vhe())
> -		deactivate_traps_vhe();
> +		deactivate_traps_vhe(host_ctxt);
>  	else
> -		__deactivate_traps_nvhe();
> +		__deactivate_traps_nvhe(host_ctxt);

Then just pass hyp_host_ctxt to both of these, and drop the
kern_hyp_va() conversion from __deactivate_traps_nvhe().

This may be a bit less confusing.

Alternatively, just pass in the vcpu pointer (since this pattern is
already well established all over the place).

Another option could be to pull the hcr_el2 write out of the backends
entirely and put it in this common code instead.  This doesn't look
straightforward though (or at least, I don't remember enough about how
all these traps handling functions fit together...)

[...]

Cheers
---Dave

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v8 5/9] KVM: arm/arm64: preserve host MDCR_EL2 value
  2019-04-02  2:27 ` [PATCH v8 5/9] KVM: arm/arm64: preserve host MDCR_EL2 value Amit Daniel Kachhap
@ 2019-04-05 11:02   ` Dave Martin
  2019-04-08  4:39     ` Amit Daniel Kachhap
  0 siblings, 1 reply; 30+ messages in thread
From: Dave Martin @ 2019-04-05 11:02 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: linux-arm-kernel, Marc Zyngier, Catalin Marinas, Will Deacon,
	Kristina Martsenko, kvmarm, Ramana Radhakrishnan, linux-kernel

On Tue, Apr 02, 2019 at 07:57:13AM +0530, Amit Daniel Kachhap wrote:
> Save host MDCR_EL2 value during kvm HYP initialisation and restore
> after every switch from host to guest. There should not be any
> change in functionality due to this.
> 
> The value of mdcr_el2 is now stored in struct kvm_cpu_context as
> both host and guest can now use this field in a common way.
> 
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> Acked-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Christoffer Dall <christoffer.dall@arm.com>
> Cc: kvmarm@lists.cs.columbia.edu
> ---
> 
> Changes since v7:
> * Removed unused function __kvm_get_mdcr_el2 [Kristina].
> 
>  arch/arm/include/asm/kvm_host.h   |  1 -
>  arch/arm64/include/asm/kvm_asm.h  |  2 --
>  arch/arm64/include/asm/kvm_host.h |  6 ++----
>  arch/arm64/include/asm/kvm_hyp.h  |  2 +-
>  arch/arm64/kvm/debug.c            | 28 ++++++----------------------
>  arch/arm64/kvm/hyp/debug-sr.c     |  5 -----
>  arch/arm64/kvm/hyp/switch.c       | 18 +++++-------------
>  arch/arm64/kvm/hyp/sysreg-sr.c    |  8 +++++++-
>  virt/kvm/arm/arm.c                |  1 -
>  9 files changed, 21 insertions(+), 50 deletions(-)
> 
> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
> index 6d0aac4..a928565 100644
> --- a/arch/arm/include/asm/kvm_host.h
> +++ b/arch/arm/include/asm/kvm_host.h
> @@ -343,7 +343,6 @@ static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {}
>  static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
>  static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
>  
> -static inline void kvm_arm_init_debug(void) {}
>  static inline void kvm_arm_setup_debug(struct kvm_vcpu *vcpu) {}
>  static inline void kvm_arm_clear_debug(struct kvm_vcpu *vcpu) {}
>  static inline void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu) {}
> diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
> index a68205c..a15ba55 100644
> --- a/arch/arm64/include/asm/kvm_asm.h
> +++ b/arch/arm64/include/asm/kvm_asm.h
> @@ -78,8 +78,6 @@ extern u64 __vgic_v3_read_vmcr(void);
>  extern void __vgic_v3_write_vmcr(u32 vmcr);
>  extern void __vgic_v3_init_lrs(void);
>  
> -extern u32 __kvm_get_mdcr_el2(void);
> -
>  extern void __kvm_populate_host_regs(void);
>  
>  /*
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 3b09fd0..e3ccd7b 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -211,6 +211,8 @@ struct kvm_cpu_context {
>  
>  	/* HYP host/guest configuration */
>  	u64 hcr_el2;
> +	u32 mdcr_el2;
> +
>  	struct kvm_vcpu *__hyp_running_vcpu;
>  };
>  
> @@ -226,9 +228,6 @@ struct vcpu_reset_state {
>  struct kvm_vcpu_arch {
>  	struct kvm_cpu_context ctxt;
>  
> -	/* HYP configuration */
> -	u32 mdcr_el2;
> -
>  	/* Exception Information */
>  	struct kvm_vcpu_fault_info fault;
>  
> @@ -498,7 +497,6 @@ static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {}
>  static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
>  static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
>  
> -void kvm_arm_init_debug(void);
>  void kvm_arm_setup_debug(struct kvm_vcpu *vcpu);
>  void kvm_arm_clear_debug(struct kvm_vcpu *vcpu);
>  void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu);
> diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
> index 4da765f..7fcde8a 100644
> --- a/arch/arm64/include/asm/kvm_hyp.h
> +++ b/arch/arm64/include/asm/kvm_hyp.h
> @@ -152,7 +152,7 @@ void __fpsimd_restore_state(struct user_fpsimd_state *fp_regs);
>  bool __fpsimd_enabled(void);
>  
>  void activate_traps_vhe_load(struct kvm_vcpu *vcpu);
> -void deactivate_traps_vhe_put(void);
> +void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu);
>  
>  u64 __guest_enter(struct kvm_vcpu *vcpu, struct kvm_cpu_context *host_ctxt);
>  void __noreturn __hyp_do_panic(unsigned long, ...);
> diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
> index fd917d6..99dc0a4 100644
> --- a/arch/arm64/kvm/debug.c
> +++ b/arch/arm64/kvm/debug.c
> @@ -32,8 +32,6 @@
>  				DBG_MDSCR_KDE | \
>  				DBG_MDSCR_MDE)
>  
> -static DEFINE_PER_CPU(u32, mdcr_el2);
> -
>  /**
>   * save/restore_guest_debug_regs
>   *
> @@ -65,21 +63,6 @@ static void restore_guest_debug_regs(struct kvm_vcpu *vcpu)
>  }
>  
>  /**
> - * kvm_arm_init_debug - grab what we need for debug
> - *
> - * Currently the sole task of this function is to retrieve the initial
> - * value of mdcr_el2 so we can preserve MDCR_EL2.HPMN which has
> - * presumably been set-up by some knowledgeable bootcode.
> - *
> - * It is called once per-cpu during CPU hyp initialisation.
> - */
> -
> -void kvm_arm_init_debug(void)
> -{
> -	__this_cpu_write(mdcr_el2, kvm_call_hyp_ret(__kvm_get_mdcr_el2));
> -}
> -
> -/**
>   * kvm_arm_reset_debug_ptr - reset the debug ptr to point to the vcpu state
>   */
>  
> @@ -111,6 +94,7 @@ void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu)
>  
>  void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
>  {
> +	kvm_cpu_context_t *host_cxt = this_cpu_ptr(&kvm_host_cpu_state);
>  	bool trap_debug = !(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY);
>  	unsigned long mdscr;
>  
> @@ -120,8 +104,8 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
>  	 * This also clears MDCR_EL2_E2PB_MASK to disable guest access
>  	 * to the profiling buffer.
>  	 */
> -	vcpu->arch.mdcr_el2 = __this_cpu_read(mdcr_el2) & MDCR_EL2_HPMN_MASK;
> -	vcpu->arch.mdcr_el2 |= (MDCR_EL2_TPM |
> +	vcpu->arch.ctxt.mdcr_el2 = host_cxt->mdcr_el2 & MDCR_EL2_HPMN_MASK;
> +	vcpu->arch.ctxt.mdcr_el2 |= (MDCR_EL2_TPM |
>  				MDCR_EL2_TPMS |
>  				MDCR_EL2_TPMCR |
>  				MDCR_EL2_TDRA |
> @@ -130,7 +114,7 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
>  	/* Is Guest debugging in effect? */
>  	if (vcpu->guest_debug) {
>  		/* Route all software debug exceptions to EL2 */
> -		vcpu->arch.mdcr_el2 |= MDCR_EL2_TDE;
> +		vcpu->arch.ctxt.mdcr_el2 |= MDCR_EL2_TDE;
>  
>  		/* Save guest debug state */
>  		save_guest_debug_regs(vcpu);
> @@ -202,13 +186,13 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
>  
>  	/* Trap debug register access */
>  	if (trap_debug)
> -		vcpu->arch.mdcr_el2 |= MDCR_EL2_TDA;
> +		vcpu->arch.ctxt.mdcr_el2 |= MDCR_EL2_TDA;
>  
>  	/* If KDE or MDE are set, perform a full save/restore cycle. */
>  	if (vcpu_read_sys_reg(vcpu, MDSCR_EL1) & (DBG_MDSCR_KDE | DBG_MDSCR_MDE))
>  		vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY;
>  
> -	trace_kvm_arm_set_dreg32("MDCR_EL2", vcpu->arch.mdcr_el2);
> +	trace_kvm_arm_set_dreg32("MDCR_EL2", vcpu->arch.ctxt.mdcr_el2);
>  	trace_kvm_arm_set_dreg32("MDSCR_EL1", vcpu_read_sys_reg(vcpu, MDSCR_EL1));
>  }
>  
> diff --git a/arch/arm64/kvm/hyp/debug-sr.c b/arch/arm64/kvm/hyp/debug-sr.c
> index 5000976..f49a3f7 100644
> --- a/arch/arm64/kvm/hyp/debug-sr.c
> +++ b/arch/arm64/kvm/hyp/debug-sr.c
> @@ -198,8 +198,3 @@ void __hyp_text __debug_switch_to_host(struct kvm_vcpu *vcpu)
>  
>  	vcpu->arch.flags &= ~KVM_ARM64_DEBUG_DIRTY;
>  }
> -
> -u32 __hyp_text __kvm_get_mdcr_el2(void)
> -{
> -	return read_sysreg(mdcr_el2);
> -}
> diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
> index f5cefa1..fe76e24 100644
> --- a/arch/arm64/kvm/hyp/switch.c
> +++ b/arch/arm64/kvm/hyp/switch.c
> @@ -84,7 +84,7 @@ static void __hyp_text __activate_traps_common(struct kvm_vcpu *vcpu)
>  	 */
>  	write_sysreg(0, pmselr_el0);
>  	write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0);
> -	write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
> +	write_sysreg(vcpu->arch.ctxt.mdcr_el2, mdcr_el2);
>  }
>  
>  static void __hyp_text __deactivate_traps_common(void)
> @@ -161,15 +161,11 @@ NOKPROBE_SYMBOL(deactivate_traps_vhe);
>  
>  static void __hyp_text __deactivate_traps_nvhe(struct kvm_cpu_context *host_ctxt)
>  {
> -	u64 mdcr_el2 = read_sysreg(mdcr_el2);
>  	struct kvm_cpu_context *hyp_host_ctxt = kern_hyp_va(host_ctxt);
>  
>  	__deactivate_traps_common();
>  
> -	mdcr_el2 &= MDCR_EL2_HPMN_MASK;
> -	mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT;
> -
> -	write_sysreg(mdcr_el2, mdcr_el2);
> +	write_sysreg(hyp_host_ctxt->mdcr_el2, mdcr_el2);
>  	write_sysreg(hyp_host_ctxt->hcr_el2, hcr_el2);
>  	write_sysreg(CPTR_EL2_DEFAULT, cptr_el2);
>  }
> @@ -199,15 +195,11 @@ void activate_traps_vhe_load(struct kvm_vcpu *vcpu)
>  	__activate_traps_common(vcpu);
>  }
>  
> -void deactivate_traps_vhe_put(void)
> +void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu)
>  {
> -	u64 mdcr_el2 = read_sysreg(mdcr_el2);
> -
> -	mdcr_el2 &= MDCR_EL2_HPMN_MASK |
> -		    MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT |
> -		    MDCR_EL2_TPMS;
> +	struct kvm_cpu_context *host_ctxt = vcpu->arch.host_cpu_context;
>  
> -	write_sysreg(mdcr_el2, mdcr_el2);
> +	write_sysreg(host_ctxt->mdcr_el2, mdcr_el2);
>  
>  	__deactivate_traps_common();
>  }
> diff --git a/arch/arm64/kvm/hyp/sysreg-sr.c b/arch/arm64/kvm/hyp/sysreg-sr.c
> index 277f82b..6afd309 100644
> --- a/arch/arm64/kvm/hyp/sysreg-sr.c
> +++ b/arch/arm64/kvm/hyp/sysreg-sr.c
> @@ -298,7 +298,7 @@ void kvm_vcpu_put_sysregs(struct kvm_vcpu *vcpu)
>  	if (!has_vhe())
>  		return;
>  
> -	deactivate_traps_vhe_put();
> +	deactivate_traps_vhe_put(vcpu);
>  
>  	__sysreg_save_el1_state(guest_ctxt);
>  	__sysreg_save_user_state(guest_ctxt);
> @@ -333,4 +333,10 @@ void __hyp_text __kvm_populate_host_regs(void)
>  
>  	host_ctxt = __hyp_this_cpu_ptr(kvm_host_cpu_state);
>  	host_ctxt->hcr_el2 = read_sysreg(hcr_el2);
> +	/*
> +	 * Retrieve the initial value of mdcr_el2 so we can preserve
> +	 * MDCR_EL2.HPMN which has presumably been set-up by some
> +	 * knowledgeable bootcode.
> +	 */
> +	host_ctxt->mdcr_el2 = read_sysreg(mdcr_el2);
>  }
> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> index e8c2ee6..58de0ca 100644
> --- a/virt/kvm/arm/arm.c
> +++ b/virt/kvm/arm/arm.c
> @@ -1322,7 +1322,6 @@ static void cpu_hyp_reinit(void)
>  	else
>  		cpu_init_hyp_mode(NULL);
>  
> -	kvm_arm_init_debug();
>  	cpu_init_host_ctxt();
>  
>  	if (vgic_present)

Looks reasonable overall.

I was wondering whether it would make sense to move some masking into
__kvm_populate_host_regs(), but probably not.

Reviewed-by: Dave Martin <Dave.Martin@arm.com>

Cheers
---Dave


> -- 
> 2.7.4
> 
> _______________________________________________
> kvmarm mailing list
> kvmarm@lists.cs.columbia.edu
> https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v8 6/9] KVM: arm64: Add vcpu feature flags to control ptrauth accessibility
  2019-04-02  2:27 ` [PATCH v8 6/9] KVM: arm64: Add vcpu feature flags to control ptrauth accessibility Amit Daniel Kachhap
@ 2019-04-05 11:02   ` Dave Martin
  2019-04-08  5:12     ` Amit Daniel Kachhap
  0 siblings, 1 reply; 30+ messages in thread
From: Dave Martin @ 2019-04-05 11:02 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: linux-arm-kernel, Marc Zyngier, Catalin Marinas, Will Deacon,
	Kristina Martsenko, kvmarm, Ramana Radhakrishnan, linux-kernel

On Tue, Apr 02, 2019 at 07:57:14AM +0530, Amit Daniel Kachhap wrote:
> Since Pointer authentication will be enabled or disabled on a
> per-vcpu basis, vcpu feature flags are added in order to know which
> vcpus have it enabled from userspace.
> 
> These features will allow the KVM guest to handle pointer
> authentication instructions or to treat them as undefined if not set.
> 
> The helper macro added checks the feature flag along with other
> conditions, such as VHE mode being present and system support for
> address/generic pointer authentication.

Can this patch be put after the context switch patch instead?

Here, we accept a request from userspace to enable ptrauth, but it will
mysteriously fail to work.  I worked around a similar issue by defining
KVM_ARM64_GUEST_HAS_SVE early in the SVE series, but putting the logic
to set this flag in vcpu->arch.flags later on (see also comments about
this below).

> Necessary documentations are added to reflect the changes done.
> 
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Christoffer Dall <christoffer.dall@arm.com>
> Cc: kvmarm@lists.cs.columbia.edu
> ---
> 
> Changes since v7:
> * Moved the check for userspace features in this patch [James Morse].
> * Moved the vcpu feature flags Documentation in this patch [James Morse].
> 
>  Documentation/arm64/pointer-authentication.txt | 13 +++++++++----
>  Documentation/virtual/kvm/api.txt              |  4 ++++
>  arch/arm64/include/asm/kvm_host.h              |  8 +++++++-
>  arch/arm64/include/uapi/asm/kvm.h              |  2 ++
>  arch/arm64/kvm/reset.c                         |  7 +++++++
>  5 files changed, 29 insertions(+), 5 deletions(-)
> 
> diff --git a/Documentation/arm64/pointer-authentication.txt b/Documentation/arm64/pointer-authentication.txt
> index 5baca42..b164886 100644
> --- a/Documentation/arm64/pointer-authentication.txt
> +++ b/Documentation/arm64/pointer-authentication.txt
> @@ -87,7 +87,12 @@ used to get and set the keys for a thread.
>  Virtualization
>  --------------
>  
> -Pointer authentication is not currently supported in KVM guests. KVM
> -will mask the feature bits from ID_AA64ISAR1_EL1, and attempted use of
> -the feature will result in an UNDEFINED exception being injected into
> -the guest.
> +Pointer authentication is enabled in KVM guest when each virtual cpu is
> +initialised by passing flags KVM_ARM_VCPU_PTRAUTH_[ADDRESS/GENERIC] and
> +requesting these two separate cpu features to be enabled. The current KVM
> +guest implementation works by enabling both features together, so both these
> +userspace flags are checked together before enabling pointer authentication.
> +The separate userspace flag will allow to have no userspace ABI changes when
> +both features are implemented in an isolated way in future.

Nit: we might make this change, but we don't promise that it will happen.

So, maybe write:

"[...] have no userspace ABI changes if support is added in the future
to allow these two features to be enabled independently of one another."

> +
> +Pointer Authentication is supported in KVM guest only in VHE mode.
> diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
> index 7de9eee..aaa048d 100644
> --- a/Documentation/virtual/kvm/api.txt
> +++ b/Documentation/virtual/kvm/api.txt
> @@ -2659,6 +2659,10 @@ Possible features:
>  	  Depends on KVM_CAP_ARM_PSCI_0_2.
>  	- KVM_ARM_VCPU_PMU_V3: Emulate PMUv3 for the CPU.
>  	  Depends on KVM_CAP_ARM_PMU_V3.
> +	- KVM_ARM_VCPU_PTRAUTH_ADDRESS: Enables Address Pointer authentication
> +	  for the CPU and supported only on arm64 architecture.

We should probably add:

	Must be requested if KVM_ARM_VCPU_PTRAUTH_GENERIC is also requested.

> +	- KVM_ARM_VCPU_PTRAUTH_GENERIC: Enables Generic Pointer authentication
> +	  for the CPU and supported only on arm64 architecture.

Similarly:

	Must be requested if KVM_ARM_VCPU_PTRAUTH_ADDRESS is also requested.

(Or otherwise explain that both features must be enabled together or not at
all.)

> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index e3ccd7b..9dd2918 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -45,7 +45,7 @@
>  
>  #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS
>  
> -#define KVM_VCPU_MAX_FEATURES 4
> +#define KVM_VCPU_MAX_FEATURES 6
>  
>  #define KVM_REQ_SLEEP \
>  	KVM_ARCH_REQ_FLAGS(0, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
> @@ -491,6 +491,12 @@ static inline bool kvm_arch_requires_vhe(void)
>  	return false;
>  }
>  
> +#define vcpu_has_ptrauth(vcpu)	(has_vhe() && \
> +		system_supports_address_auth() && \
> +		system_supports_generic_auth() && \
> +		test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) && \
> +		test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features))

We're checking 5 things here, which we don't necessarily want to do
every time.

Is this used on any hot path?

This kind of thing is one reason why I added vcpu->arch.flags: we can
make the policy decision about whether to set the flag in
kvm_reset_vcpu(), then afterwards we only need to check the flag.
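
For illustration, a minimal sketch of that approach (the flag name and
bit position here are invented, mirroring what the SVE series does):

	/* hypothetical flag, set once in kvm_reset_vcpu() */
	#define KVM_ARM64_GUEST_HAS_PTRAUTH	(1 << 7)

	static void kvm_vcpu_ptrauth_setup(struct kvm_vcpu *vcpu)
	{
		if (has_vhe() &&
		    system_supports_address_auth() &&
		    system_supports_generic_auth() &&
		    test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) &&
		    test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features))
			vcpu->arch.flags |= KVM_ARM64_GUEST_HAS_PTRAUTH;
	}

	/* everywhere else, a single cheap test */
	#define vcpu_has_ptrauth(vcpu) \
		(!!((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH))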

> +
>  static inline void kvm_arch_hardware_unsetup(void) {}
>  static inline void kvm_arch_sync_events(struct kvm *kvm) {}
>  static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {}
> diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
> index 97c3478..8806f71 100644
> --- a/arch/arm64/include/uapi/asm/kvm.h
> +++ b/arch/arm64/include/uapi/asm/kvm.h
> @@ -102,6 +102,8 @@ struct kvm_regs {
>  #define KVM_ARM_VCPU_EL1_32BIT		1 /* CPU running a 32bit VM */
>  #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
>  #define KVM_ARM_VCPU_PMU_V3		3 /* Support guest PMUv3 */
> +#define KVM_ARM_VCPU_PTRAUTH_ADDRESS	4 /* VCPU uses address authentication */
> +#define KVM_ARM_VCPU_PTRAUTH_GENERIC	5 /* VCPU uses generic authentication */
>  
>  struct kvm_vcpu_init {
>  	__u32 target;
> diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
> index f16a5f8..717afed 100644
> --- a/arch/arm64/kvm/reset.c
> +++ b/arch/arm64/kvm/reset.c
> @@ -128,6 +128,13 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
>  	if (loaded)
>  		kvm_arch_vcpu_put(vcpu);
>  
> +	if (test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) ||
> +		test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features)) {
> +		/* Verify that KVM startup matches the conditions for ptrauth */
> +		if (!vcpu_has_ptrauth(vcpu))
> +			goto out;
> +	}
> +

This looks like it works, but I find the way vcpu->arch.features is used
in two different ways at the same time a bit confusing.

Cheers
---Dave

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v8 8/9] KVM: arm64: Add capability to advertise ptrauth for guest
  2019-04-02  2:27 ` [PATCH v8 8/9] KVM: arm64: Add capability to advertise ptrauth for guest Amit Daniel Kachhap
@ 2019-04-05 11:03   ` Dave Martin
  2019-04-08  8:51     ` Amit Daniel Kachhap
  0 siblings, 1 reply; 30+ messages in thread
From: Dave Martin @ 2019-04-05 11:03 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: linux-arm-kernel, Marc Zyngier, Catalin Marinas, Will Deacon,
	Kristina Martsenko, kvmarm, Ramana Radhakrishnan, linux-kernel,
	Mark Rutland

On Tue, Apr 02, 2019 at 07:57:16AM +0530, Amit Daniel Kachhap wrote:
> This patch advertises the capability of two cpu features called address
> pointer authentication and generic pointer authentication. These
> capabilities depend upon system support for pointer authentication and
> VHE mode.
> 
> The current arm64 KVM partially implements pointer authentication,
> and support for address/generic authentication is tied together.
> However, separate ABI requirements for both of them are added so that
> a future isolated implementation will not require any ABI changes.
> 
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Christoffer Dall <christoffer.dall@arm.com>
> Cc: kvmarm@lists.cs.columbia.edu
> ---
> 
> Changes since v7:
> * Created 2 capabilities KVM_CAP_ARM_PTRAUTH_ADDRESS and KVM_CAP_ARM_PTRAUTH_GENERIC
>   instead of one KVM_CAP_ARM_PTRAUTH [Kristina Martsenko].
> * Added documentation here itself instead of in a new patch.
> 
>  Documentation/virtual/kvm/api.txt | 3 +++
>  arch/arm64/kvm/reset.c            | 6 ++++++
>  include/uapi/linux/kvm.h          | 2 ++
>  3 files changed, 11 insertions(+)
> 
> diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
> index aaa048d..9b56892 100644
> --- a/Documentation/virtual/kvm/api.txt
> +++ b/Documentation/virtual/kvm/api.txt
> @@ -2661,8 +2661,11 @@ Possible features:
>  	  Depends on KVM_CAP_ARM_PMU_V3.
>  	- KVM_ARM_VCPU_PTRAUTH_ADDRESS: Enables Address Pointer authentication
>  	  for the CPU and supported only on arm64 architecture.
> +	  Depends on KVM_CAP_ARM_PTRAUTH_ADDRESS.
>  	- KVM_ARM_VCPU_PTRAUTH_GENERIC: Enables Generic Pointer authentication
>  	  for the CPU and supported only on arm64 architecture.
> +	  Depends on KVM_CAP_ARM_PTRAUTH_GENERIC.
>  
>  
>  4.83 KVM_ARM_PREFERRED_TARGET
> diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
> index 717afed..8aa8982 100644
> --- a/arch/arm64/kvm/reset.c
> +++ b/arch/arm64/kvm/reset.c
> @@ -92,6 +92,12 @@ int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>  	case KVM_CAP_ARM_VM_IPA_SIZE:
>  		r = kvm_ipa_limit;
>  		break;
> +	case KVM_CAP_ARM_PTRAUTH_ADDRESS:
> +		r = has_vhe() && system_supports_address_auth();
> +		break;
> +	case KVM_CAP_ARM_PTRAUTH_GENERIC:
> +		r = has_vhe() && system_supports_generic_auth();
> +		break;

If some hardware supports just one auth type, we would report just one
of these caps.  Although we have the rule that userspace is not allowed
to request these independently in KVM_ARM_VCPU_INIT anyway, I think it
would be easier for userspace if we suppress both caps if either auth
type isn't available on the host.  e.g.:

	case KVM_CAP_ARM_PTRAUTH_ADDRESS:
	case KVM_CAP_ARM_PTRAUTH_GENERIC:
		r = has_vhe() && system_supports_address_auth() &&
			system_supports_generic_auth();

We could revert back to the above code later on, and apply the ABI
relaxations described in my response to the vcpu features patch, if
someday we add support to KVM for coping with host hardware that
supports just one auth type.


I'd like Mark to comment on this, since he's more aware of the
architectural situation than I am.

Cheers
---Dave

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [kvmtool PATCH v8 9/9] KVM: arm/arm64: Add a vcpu feature for pointer authentication
  2019-04-02  2:27 ` [kvmtool PATCH v8 9/9] KVM: arm/arm64: Add a vcpu feature for pointer authentication Amit Daniel Kachhap
@ 2019-04-05 11:04   ` Dave Martin
  2019-04-08  8:43     ` Amit Daniel Kachhap
  0 siblings, 1 reply; 30+ messages in thread
From: Dave Martin @ 2019-04-05 11:04 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: linux-arm-kernel, Marc Zyngier, Catalin Marinas, Will Deacon,
	Kristina Martsenko, kvmarm, Ramana Radhakrishnan, linux-kernel

On Tue, Apr 02, 2019 at 07:57:17AM +0530, Amit Daniel Kachhap wrote:
> This is a runtime capability for KVM tool to enable Arm64 8.3 Pointer
> Authentication in the guest kernel. Two vcpu features
> KVM_ARM_VCPU_PTRAUTH_[ADDRESS/GENERIC] are supplied together to enable
> Pointer Authentication in KVM guest after checking the capability.
> 
> A command line option --ptrauth is also required to select this feature.
> 
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> ---
> 
> Changes since v7:
> * Added check for capability KVM_CAP_ARM_PTRAUTH_GENERIC
> 
>  arm/aarch32/include/kvm/kvm-cpu-arch.h    | 1 +
>  arm/aarch64/include/asm/kvm.h             | 2 ++
>  arm/aarch64/include/kvm/kvm-config-arch.h | 4 +++-
>  arm/aarch64/include/kvm/kvm-cpu-arch.h    | 2 ++
>  arm/include/arm-common/kvm-config-arch.h  | 1 +
>  arm/kvm-cpu.c                             | 7 +++++++
>  include/linux/kvm.h                       | 2 ++
>  7 files changed, 18 insertions(+), 1 deletion(-)
> 
> diff --git a/arm/aarch32/include/kvm/kvm-cpu-arch.h b/arm/aarch32/include/kvm/kvm-cpu-arch.h
> index d28ea67..520ea76 100644
> --- a/arm/aarch32/include/kvm/kvm-cpu-arch.h
> +++ b/arm/aarch32/include/kvm/kvm-cpu-arch.h
> @@ -13,4 +13,5 @@
>  #define ARM_CPU_ID		0, 0, 0
>  #define ARM_CPU_ID_MPIDR	5
>  
> +#define ARM_VCPU_PTRAUTH_FEATURE	0
>  #endif /* KVM__KVM_CPU_ARCH_H */
> diff --git a/arm/aarch64/include/asm/kvm.h b/arm/aarch64/include/asm/kvm.h
> index 97c3478..d4d0d8c 100644
> --- a/arm/aarch64/include/asm/kvm.h
> +++ b/arm/aarch64/include/asm/kvm.h
> @@ -102,6 +102,8 @@ struct kvm_regs {
>  #define KVM_ARM_VCPU_EL1_32BIT		1 /* CPU running a 32bit VM */
>  #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
>  #define KVM_ARM_VCPU_PMU_V3		3 /* Support guest PMUv3 */
> +#define KVM_ARM_VCPU_PTRAUTH_ADDRESS	4 /* CPU uses address pointer authentication */
> +#define KVM_ARM_VCPU_PTRAUTH_GENERIC	5 /* CPU uses generic pointer authentication */
>  
>  struct kvm_vcpu_init {
>  	__u32 target;
> diff --git a/arm/aarch64/include/kvm/kvm-config-arch.h b/arm/aarch64/include/kvm/kvm-config-arch.h
> index 04be43d..2074684 100644
> --- a/arm/aarch64/include/kvm/kvm-config-arch.h
> +++ b/arm/aarch64/include/kvm/kvm-config-arch.h
> @@ -8,7 +8,9 @@
>  			"Create PMUv3 device"),				\
>  	OPT_U64('\0', "kaslr-seed", &(cfg)->kaslr_seed,			\
>  			"Specify random seed for Kernel Address Space "	\
> -			"Layout Randomization (KASLR)"),
> +			"Layout Randomization (KASLR)"),		\
> +	OPT_BOOLEAN('\0', "ptrauth", &(cfg)->has_ptrauth,		\
> +			"Enable address authentication"),

This should probably say "pointer", not "address" now, since we enable
both kinds of ptrauth together.  (Sorry!)

When discussing how to control SVE, I was eventually convinced that it
is more user-friendly to make SVE default to on if present, and maybe
provide two options --disable-sve, --enable-sve, in case the user wants
to force it off or on instead of just getting what the host supports.

Passing --enable-sve on a host that doesn't support SVE would then lead
to kvmtool bailing out with an error, which is probably better than
silently turning it off.

I don't have this change in my kvmtool patches yet.

What's your view?  It makes sense to do things the same way for all
features if we can.
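
For illustration, the tri-state could look roughly like this (the
option and config field names are invented here):

	OPT_BOOLEAN('\0', "enable-ptrauth", &(cfg)->enable_ptrauth,	\
			"Enable pointer authentication, fail if unsupported"), \
	OPT_BOOLEAN('\0', "disable-ptrauth", &(cfg)->disable_ptrauth,	\
			"Disable pointer authentication"),

with the vcpu init code then doing something like:

	bool supported;

	supported = kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_ADDRESS) &&
		    kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_GENERIC);

	if (kvm->cfg.arch.enable_ptrauth && !supported)
		die("--enable-ptrauth requested, but the host does not support it");

	if (supported && !kvm->cfg.arch.disable_ptrauth)
		vcpu_init.features[0] |= ARM_VCPU_PTRAUTH_FEATURE;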

>  #include "arm-common/kvm-config-arch.h"
>  
> diff --git a/arm/aarch64/include/kvm/kvm-cpu-arch.h b/arm/aarch64/include/kvm/kvm-cpu-arch.h
> index a9d8563..fcc2107 100644
> --- a/arm/aarch64/include/kvm/kvm-cpu-arch.h
> +++ b/arm/aarch64/include/kvm/kvm-cpu-arch.h
> @@ -17,4 +17,6 @@
>  #define ARM_CPU_CTRL		3, 0, 1, 0
>  #define ARM_CPU_CTRL_SCTLR_EL1	0
>  
> +#define ARM_VCPU_PTRAUTH_FEATURE	((1UL << KVM_ARM_VCPU_PTRAUTH_ADDRESS) \
> +					| (1UL << KVM_ARM_VCPU_PTRAUTH_GENERIC))
>  #endif /* KVM__KVM_CPU_ARCH_H */
> diff --git a/arm/include/arm-common/kvm-config-arch.h b/arm/include/arm-common/kvm-config-arch.h
> index 5734c46..5badcbd 100644
> --- a/arm/include/arm-common/kvm-config-arch.h
> +++ b/arm/include/arm-common/kvm-config-arch.h
> @@ -10,6 +10,7 @@ struct kvm_config_arch {
>  	bool		aarch32_guest;
>  	bool		has_pmuv3;
>  	u64		kaslr_seed;
> +	bool		has_ptrauth;
>  	enum irqchip_type irqchip;
>  	u64		fw_addr;
>  };
> diff --git a/arm/kvm-cpu.c b/arm/kvm-cpu.c
> index 7780251..398c9d6 100644
> --- a/arm/kvm-cpu.c
> +++ b/arm/kvm-cpu.c
> @@ -68,6 +68,13 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned long cpu_id)
>  		vcpu_init.features[0] |= (1UL << KVM_ARM_VCPU_PSCI_0_2);
>  	}
>  
> +	/* Set KVM_ARM_VCPU_PTRAUTH_[ADDRESS/GENERIC] if available */
> +	if (kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_ADDRESS) &&
> +		kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_GENERIC)) {
> +		if (kvm->cfg.arch.has_ptrauth)
> +			vcpu_init.features[0] |= ARM_VCPU_PTRAUTH_FEATURE;
> +	}
> +
>  	/*
>  	 * If the preferred target ioctl is successful then
>  	 * use preferred target else try each and every target type
> diff --git a/include/linux/kvm.h b/include/linux/kvm.h
> index 6d4ea4b..500ac2b 100644
> --- a/include/linux/kvm.h
> +++ b/include/linux/kvm.h
> @@ -988,6 +988,8 @@ struct kvm_ppc_resize_hpt {
>  #define KVM_CAP_ARM_VM_IPA_SIZE 165
>  #define KVM_CAP_MANUAL_DIRTY_LOG_PROTECT 166
>  #define KVM_CAP_HYPERV_CPUID 167
> +#define KVM_CAP_ARM_PTRAUTH_ADDRESS 168
> +#define KVM_CAP_ARM_PTRAUTH_GENERIC 169

(Note, 168 will probably be KVM_CAP_ARM_SVE, now that the SVE patches
are in kvmarm/next.)

Cheers
---Dave

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v8 4/9] KVM: arm/arm64: preserve host HCR_EL2 value
  2019-04-02  2:27 ` [PATCH v8 4/9] KVM: arm/arm64: preserve host HCR_EL2 value Amit Daniel Kachhap
  2019-04-05 11:02   ` Dave Martin
@ 2019-04-06 10:37   ` James Morse
  2019-04-08 13:05     ` Amit Daniel Kachhap
  1 sibling, 1 reply; 30+ messages in thread
From: James Morse @ 2019-04-06 10:37 UTC (permalink / raw)
  To: Amit Daniel Kachhap, linux-arm-kernel
  Cc: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	Kristina Martsenko, linux-kernel, Mark Rutland, Julien Thierry

Hi Amit,

On 02/04/2019 03:27, Amit Daniel Kachhap wrote:
> From: Mark Rutland <mark.rutland@arm.com>
> 
> When restoring HCR_EL2 for the host, KVM uses HCR_HOST_VHE_FLAGS, which
> is a constant value. This works today, as the host HCR_EL2 value is
> always the same, but this will get in the way of supporting extensions
> that require HCR_EL2 bits to be set conditionally for the host.
> 
> To allow such features to work without KVM having to explicitly handle
> every possible host feature combination, this patch has KVM save/restore
> for the host HCR when switching to/from a guest HCR. The saving of the
> register is done once during the cpu hypervisor initialisation stage and
> it is just restored after the switch from the guest.
> 
> For fetching HCR_EL2 during kvm initialisation, a hyp call is made using
> kvm_call_hyp, which is helpful in the non-VHE case.
> 
> For the hyp TLB maintenance code, __tlb_switch_to_host_vhe() is updated
> to toggle the TGE bit with a RMW sequence, as we already do in
> __tlb_switch_to_guest_vhe().
> 
> The value of hcr_el2 is now stored in struct kvm_cpu_context as both host
> and guest can now use this field in a common way.
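
(For reference, the RMW sequence mentioned above amounts to:

	u64 val = read_sysreg(hcr_el2);

	val |= HCR_TGE;		/* cleared instead on the guest path */
	write_sysreg(val, hcr_el2);

rather than writing the fixed HCR_HOST_VHE_FLAGS constant, so whatever
value was saved for the host stays intact.)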

These HCR_EL2 flags have had me confused for quite a while.
I thought this was preserving the value that head.S or cpufeature.c had set, and with
ptrauth we couldn't know what this register should be anymore, so the host flags would
have to vary.

Kristina's explanation of it[0] clarified things, and with a bit more digging it appears
we always set API/APK, even if the hardware doesn't support the feature (as it's harmless).
So we don't need to vary the host flags...
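
(Concretely, the constant restore value can then keep the ptrauth bits
set unconditionally, something like:

	#define HCR_HOST_NVHE_FLAGS	(HCR_RW | HCR_API | HCR_APK)

with the exact flag set here quoted from memory, so double-check.)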

My question is, what breaks if this patch isn't merged? (The MDCR change is cleanup we
can do because of this HCR change.) Is this HCR change just cleanup too? If so, can we
merge ptrauth without either, so we only make the change when it's needed? (It will
cause some changes in your patch 7, but I can't see where you depend on the host flags.)

I recall Christoffer wanting to keep the restored DAIF register value on guest-exit static
to avoid extra loads/stores when we know what the value would be. I think the same logic
applies here.

You mentioned in the cover letter the series has some history to it!


Thanks,

James

[0] http://lore.kernel.org/r/7ec2f950-7587-5ecd-6caa-c2fd091ad22c@arm.com

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v8 3/9] KVM: arm64: Move hyp_symbol_addr to fix dependency
  2019-04-05 11:02   ` Dave Martin
@ 2019-04-08  3:42     ` Amit Daniel Kachhap
  0 siblings, 0 replies; 30+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-08  3:42 UTC (permalink / raw)
  To: Dave Martin
  Cc: linux-arm-kernel, Marc Zyngier, Catalin Marinas, Will Deacon,
	Kristina Martsenko, kvmarm, Ramana Radhakrishnan, linux-kernel

Hi,

On 4/5/19 4:32 PM, Dave Martin wrote:
> On Tue, Apr 02, 2019 at 07:57:11AM +0530, Amit Daniel Kachhap wrote:
>> Currently hyp_symbol_addr is placed in kvm_mmu.h, which is mostly
>> used by __hyp_this_cpu_ptr in kvm_asm.h, but it cannot include
>> kvm_mmu.h directly as kvm_mmu.h uses kvm_ksym_ref, which is
>> defined inside kvm_asm.h. Hence, hyp_symbol_addr is moved inside
>> kvm_asm.h to fix this circular dependency.
>>
>> Also, hyp_symbol_addr's corresponding counterpart, kvm_ksym_ref,
>> is already in kvm_asm.h, making it more sensible to move
>> hyp_symbol_addr to the same file.
>>
>> Suggested-by: James Morse <james.morse@arm.com>
>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>> Reviewed-by: Julien Thierry <julien.thierry@arm.com>
>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>> Cc: Christoffer Dall <christoffer.dall@arm.com>
>> Cc: kvmarm@lists.cs.columbia.edu
>> ---
>>
>> Changes since v7:
>> * Slight change in commit message [Julien Thierry].
>>
>>   arch/arm64/include/asm/kvm_asm.h | 20 ++++++++++++++++++++
>>   arch/arm64/include/asm/kvm_mmu.h | 20 --------------------
>>   2 files changed, 20 insertions(+), 20 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
>> index f5b79e9..57a07e8 100644
>> --- a/arch/arm64/include/asm/kvm_asm.h
>> +++ b/arch/arm64/include/asm/kvm_asm.h
>> @@ -80,6 +80,26 @@ extern void __vgic_v3_init_lrs(void);
>>   
>>   extern u32 __kvm_get_mdcr_el2(void);
>>   
>> +/*
>> + * Obtain the PC-relative address of a kernel symbol
>> + * s: symbol
>> + *
>> + * The goal of this macro is to return a symbol's address based on a
>> + * PC-relative computation, as opposed to loading the VA from a
>> + * constant pool or something similar. This works well for HYP, as an
>> + * absolute VA is guaranteed to be wrong. Only use this if trying to
>> + * obtain the address of a symbol (i.e. not something you obtained by
>> + * following a pointer).
>> + */
>> +#define hyp_symbol_addr(s)						\
>> +	({								\
>> +		typeof(s) *addr;					\
>> +		asm("adrp	%0, %1\n"				\
>> +		    "add	%0, %0, :lo12:%1\n"			\
>> +		    : "=r" (addr) : "S" (&s));				\
>> +		addr;							\
>> +	})
>> +
> 
> Do we need to include <asm/kvm_asm.h> in vgic-v2-cpuif-proxy.c now?
Yes, asm/kvm_asm.h should be included now.
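
For what it's worth, typical usage of the macro is then simply (symbol
name invented for illustration):

	extern char some_hyp_symbol[];

	/* PC-relative, so valid under the EL2 mapping too */
	char *p = hyp_symbol_addr(some_hyp_symbol);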
> 
> Otherwise,
> 
> Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Thanks,
Amit D
> 

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v8 4/9] KVM: arm/arm64: preserve host HCR_EL2 value
  2019-04-05 11:02   ` Dave Martin
@ 2019-04-08  4:31     ` Amit Daniel Kachhap
  0 siblings, 0 replies; 30+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-08  4:31 UTC (permalink / raw)
  To: Dave Martin
  Cc: linux-arm-kernel, Marc Zyngier, Catalin Marinas, Will Deacon,
	Kristina Martsenko, kvmarm, Ramana Radhakrishnan, linux-kernel

Hi,

On 4/5/19 4:32 PM, Dave Martin wrote:
> On Tue, Apr 02, 2019 at 07:57:12AM +0530, Amit Daniel Kachhap wrote:
>> From: Mark Rutland <mark.rutland@arm.com>
>>
>> When restoring HCR_EL2 for the host, KVM uses HCR_HOST_VHE_FLAGS, which
>> is a constant value. This works today, as the host HCR_EL2 value is
>> always the same, but this will get in the way of supporting extensions
>> that require HCR_EL2 bits to be set conditionally for the host.
>>
>> To allow such features to work without KVM having to explicitly handle
>> every possible host feature combination, this patch has KVM save/restore
>> for the host HCR when switching to/from a guest HCR. The saving of the
>> register is done once during the cpu hypervisor initialisation stage and
>> it is just restored after the switch from the guest.
>>
>> For fetching HCR_EL2 during kvm initialisation, a hyp call is made using
>> kvm_call_hyp, which is helpful in the non-VHE case.
>>
>> For the hyp TLB maintenance code, __tlb_switch_to_host_vhe() is updated
>> to toggle the TGE bit with a RMW sequence, as we already do in
>> __tlb_switch_to_guest_vhe().
>>
>> The value of hcr_el2 is now stored in struct kvm_cpu_context as both host
>> and guest can now use this field in a common way.
>>
>> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
>> [Added cpu_init_host_ctxt, hcr_el2 field in struct kvm_cpu_context,
>> save hcr_el2 in hyp init stage]
>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>> Reviewed-by: James Morse <james.morse@arm.com>
>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>> Cc: Christoffer Dall <christoffer.dall@arm.com>
>> Cc: kvmarm@lists.cs.columbia.edu
> 
> [...]
> 
>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
>> index a01fe087..3b09fd0 100644
>> --- a/arch/arm64/include/asm/kvm_host.h
>> +++ b/arch/arm64/include/asm/kvm_host.h
>> @@ -209,6 +209,8 @@ struct kvm_cpu_context {
>>   		u32 copro[NR_COPRO_REGS];
>>   	};
>>   
>> +	/* HYP host/guest configuration */
>> +	u64 hcr_el2;
> 
> Minor nit: You could delete "host/guest" from the comment here.  This is
> implied by the fact that the member is in struct kvm_cpu_context in the
> first place.
ok. Agree with you.
> 
>>   	struct kvm_vcpu *__hyp_running_vcpu;
>>   };
> 
> [...]
> 
>> diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
>> index 3563fe6..f5cefa1 100644
>> --- a/arch/arm64/kvm/hyp/switch.c
>> +++ b/arch/arm64/kvm/hyp/switch.c
> 
> [...]
> 
>> @@ -159,9 +159,10 @@ static void deactivate_traps_vhe(void)
>>   }
>>   NOKPROBE_SYMBOL(deactivate_traps_vhe);
>>   
>> -static void __hyp_text __deactivate_traps_nvhe(void)
>> +static void __hyp_text __deactivate_traps_nvhe(struct kvm_cpu_context *host_ctxt)
> 
> Where __hyp_text functions accept pointer arguments, they are usually
> hyp pointers already...  (see below)
> 
>>   {
>>   	u64 mdcr_el2 = read_sysreg(mdcr_el2);
>> +	struct kvm_cpu_context *hyp_host_ctxt = kern_hyp_va(host_ctxt);
>>   
>>   	__deactivate_traps_common();
>>   
>> @@ -169,25 +170,28 @@ static void __hyp_text __deactivate_traps_nvhe(void)
>>   	mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT;
>>   
>>   	write_sysreg(mdcr_el2, mdcr_el2);
>> -	write_sysreg(HCR_HOST_NVHE_FLAGS, hcr_el2);
>> +	write_sysreg(hyp_host_ctxt->hcr_el2, hcr_el2);
>>   	write_sysreg(CPTR_EL2_DEFAULT, cptr_el2);
>>   }
>>   
>>   static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu)
>>   {
>> +	struct kvm_cpu_context *host_ctxt;
>> +
>> +	host_ctxt = vcpu->arch.host_cpu_context;
> 
> host_ctxt is not otherwise used here, so can we convert it up-front so
> that the argument to __deactivate_traps_nvhe() and
> deactivate_traps_vhe() is a hyp pointer already?
> 
> So:
> 
> 	struct kvm_cpu_context *hyp_host_ctxt;
> 
> 	hyp_host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
> 
>>   	/*
>>   	 * If we pended a virtual abort, preserve it until it gets
>>   	 * cleared. See D1.14.3 (Virtual Interrupts) for details, but
>>   	 * the crucial bit is "On taking a vSError interrupt,
>>   	 * HCR_EL2.VSE is cleared to 0."
>>   	 */
>> -	if (vcpu->arch.hcr_el2 & HCR_VSE)
>> -		vcpu->arch.hcr_el2 = read_sysreg(hcr_el2);
>> +	if (vcpu->arch.ctxt.hcr_el2 & HCR_VSE)
>> +		vcpu->arch.ctxt.hcr_el2 = read_sysreg(hcr_el2);
>>   
>>   	if (has_vhe())
>> -		deactivate_traps_vhe();
>> +		deactivate_traps_vhe(host_ctxt);
>>   	else
>> -		__deactivate_traps_nvhe();
>> +		__deactivate_traps_nvhe(host_ctxt);
> 
> Then just pass hyp_host_ctxt to both of these, and drop the
> kern_hyp_va() conversion from __deactivate_traps_nvhe().
> 
> This may be a bit less confusing.
Yes your explanation makes sense.
> 
> Alternatively, just pass in the vcpu pointer (since this pattern is
> already well established all over the place).
I think passing vcpu as a parameter will make it consistent with other 
existing functions. The __kvm_vcpu_run_nvhe function also takes a vcpu 
and extracts hyp_host_ctxt.
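
Something like this, presumably (sketch only):

	static void __hyp_text __deactivate_traps_nvhe(struct kvm_vcpu *vcpu)
	{
		struct kvm_cpu_context *hyp_host_ctxt;

		/* vcpu is already a hyp VA here; host_cpu_context is not */
		hyp_host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);

		/* ... restore mdcr_el2/hcr_el2 from hyp_host_ctxt as before ... */
	}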
> 
> Another option could be to pull the hcr_el2 write out of the backends
> entirely and put it in this common code instead.  This doesn't look
> straightforward though (or at least, I don't remember enough about how
> all these traps handling functions fit together...)
ok.

Thanks,
Amit D
> 
> [...]
> 
> Cheers
> ---Dave
> 

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v8 5/9] KVM: arm/arm64: preserve host MDCR_EL2 value
  2019-04-05 11:02   ` Dave Martin
@ 2019-04-08  4:39     ` Amit Daniel Kachhap
  0 siblings, 0 replies; 30+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-08  4:39 UTC (permalink / raw)
  To: Dave Martin
  Cc: linux-arm-kernel, Marc Zyngier, Catalin Marinas, Will Deacon,
	Kristina Martsenko, kvmarm, Ramana Radhakrishnan, linux-kernel

Hi,

On 4/5/19 4:32 PM, Dave Martin wrote:
> On Tue, Apr 02, 2019 at 07:57:13AM +0530, Amit Daniel Kachhap wrote:
>> Save the host MDCR_EL2 value during kvm HYP initialisation and restore
>> it after every switch from the guest back to the host. There should not be any
>> change in functionality due to this.
>>
>> The value of mdcr_el2 is now stored in struct kvm_cpu_context as
>> both host and guest can now use this field in a common way.
>>
>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>> Acked-by: Mark Rutland <mark.rutland@arm.com>
>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>> Cc: Mark Rutland <mark.rutland@arm.com>
>> Cc: Christoffer Dall <christoffer.dall@arm.com>
>> Cc: kvmarm@lists.cs.columbia.edu
>> ---
>>
>> Changes since v7:
>> * Removed unused function __kvm_get_mdcr_el2 [Kristina].
>>
>>   arch/arm/include/asm/kvm_host.h   |  1 -
>>   arch/arm64/include/asm/kvm_asm.h  |  2 --
>>   arch/arm64/include/asm/kvm_host.h |  6 ++----
>>   arch/arm64/include/asm/kvm_hyp.h  |  2 +-
>>   arch/arm64/kvm/debug.c            | 28 ++++++----------------------
>>   arch/arm64/kvm/hyp/debug-sr.c     |  5 -----
>>   arch/arm64/kvm/hyp/switch.c       | 18 +++++-------------
>>   arch/arm64/kvm/hyp/sysreg-sr.c    |  8 +++++++-
>>   virt/kvm/arm/arm.c                |  1 -
>>   9 files changed, 21 insertions(+), 50 deletions(-)
>>
>> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
>> index 6d0aac4..a928565 100644
>> --- a/arch/arm/include/asm/kvm_host.h
>> +++ b/arch/arm/include/asm/kvm_host.h
>> @@ -343,7 +343,6 @@ static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {}
>>   static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
>>   static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
>>   
>> -static inline void kvm_arm_init_debug(void) {}
>>   static inline void kvm_arm_setup_debug(struct kvm_vcpu *vcpu) {}
>>   static inline void kvm_arm_clear_debug(struct kvm_vcpu *vcpu) {}
>>   static inline void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu) {}
>> diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
>> index a68205c..a15ba55 100644
>> --- a/arch/arm64/include/asm/kvm_asm.h
>> +++ b/arch/arm64/include/asm/kvm_asm.h
>> @@ -78,8 +78,6 @@ extern u64 __vgic_v3_read_vmcr(void);
>>   extern void __vgic_v3_write_vmcr(u32 vmcr);
>>   extern void __vgic_v3_init_lrs(void);
>>   
>> -extern u32 __kvm_get_mdcr_el2(void);
>> -
>>   extern void __kvm_populate_host_regs(void);
>>   
>>   /*
>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
>> index 3b09fd0..e3ccd7b 100644
>> --- a/arch/arm64/include/asm/kvm_host.h
>> +++ b/arch/arm64/include/asm/kvm_host.h
>> @@ -211,6 +211,8 @@ struct kvm_cpu_context {
>>   
>>   	/* HYP host/guest configuration */
>>   	u64 hcr_el2;
>> +	u32 mdcr_el2;
>> +
>>   	struct kvm_vcpu *__hyp_running_vcpu;
>>   };
>>   
>> @@ -226,9 +228,6 @@ struct vcpu_reset_state {
>>   struct kvm_vcpu_arch {
>>   	struct kvm_cpu_context ctxt;
>>   
>> -	/* HYP configuration */
>> -	u32 mdcr_el2;
>> -
>>   	/* Exception Information */
>>   	struct kvm_vcpu_fault_info fault;
>>   
>> @@ -498,7 +497,6 @@ static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {}
>>   static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
>>   static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
>>   
>> -void kvm_arm_init_debug(void);
>>   void kvm_arm_setup_debug(struct kvm_vcpu *vcpu);
>>   void kvm_arm_clear_debug(struct kvm_vcpu *vcpu);
>>   void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu);
>> diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
>> index 4da765f..7fcde8a 100644
>> --- a/arch/arm64/include/asm/kvm_hyp.h
>> +++ b/arch/arm64/include/asm/kvm_hyp.h
>> @@ -152,7 +152,7 @@ void __fpsimd_restore_state(struct user_fpsimd_state *fp_regs);
>>   bool __fpsimd_enabled(void);
>>   
>>   void activate_traps_vhe_load(struct kvm_vcpu *vcpu);
>> -void deactivate_traps_vhe_put(void);
>> +void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu);
>>   
>>   u64 __guest_enter(struct kvm_vcpu *vcpu, struct kvm_cpu_context *host_ctxt);
>>   void __noreturn __hyp_do_panic(unsigned long, ...);
>> diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
>> index fd917d6..99dc0a4 100644
>> --- a/arch/arm64/kvm/debug.c
>> +++ b/arch/arm64/kvm/debug.c
>> @@ -32,8 +32,6 @@
>>   				DBG_MDSCR_KDE | \
>>   				DBG_MDSCR_MDE)
>>   
>> -static DEFINE_PER_CPU(u32, mdcr_el2);
>> -
>>   /**
>>    * save/restore_guest_debug_regs
>>    *
>> @@ -65,21 +63,6 @@ static void restore_guest_debug_regs(struct kvm_vcpu *vcpu)
>>   }
>>   
>>   /**
>> - * kvm_arm_init_debug - grab what we need for debug
>> - *
>> - * Currently the sole task of this function is to retrieve the initial
>> - * value of mdcr_el2 so we can preserve MDCR_EL2.HPMN which has
>> - * presumably been set-up by some knowledgeable bootcode.
>> - *
>> - * It is called once per-cpu during CPU hyp initialisation.
>> - */
>> -
>> -void kvm_arm_init_debug(void)
>> -{
>> -	__this_cpu_write(mdcr_el2, kvm_call_hyp_ret(__kvm_get_mdcr_el2));
>> -}
>> -
>> -/**
>>    * kvm_arm_reset_debug_ptr - reset the debug ptr to point to the vcpu state
>>    */
>>   
>> @@ -111,6 +94,7 @@ void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu)
>>   
>>   void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
>>   {
>> +	kvm_cpu_context_t *host_cxt = this_cpu_ptr(&kvm_host_cpu_state);
>>   	bool trap_debug = !(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY);
>>   	unsigned long mdscr;
>>   
>> @@ -120,8 +104,8 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
>>   	 * This also clears MDCR_EL2_E2PB_MASK to disable guest access
>>   	 * to the profiling buffer.
>>   	 */
>> -	vcpu->arch.mdcr_el2 = __this_cpu_read(mdcr_el2) & MDCR_EL2_HPMN_MASK;
>> -	vcpu->arch.mdcr_el2 |= (MDCR_EL2_TPM |
>> +	vcpu->arch.ctxt.mdcr_el2 = host_cxt->mdcr_el2 & MDCR_EL2_HPMN_MASK;
>> +	vcpu->arch.ctxt.mdcr_el2 |= (MDCR_EL2_TPM |
>>   				MDCR_EL2_TPMS |
>>   				MDCR_EL2_TPMCR |
>>   				MDCR_EL2_TDRA |
>> @@ -130,7 +114,7 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
>>   	/* Is Guest debugging in effect? */
>>   	if (vcpu->guest_debug) {
>>   		/* Route all software debug exceptions to EL2 */
>> -		vcpu->arch.mdcr_el2 |= MDCR_EL2_TDE;
>> +		vcpu->arch.ctxt.mdcr_el2 |= MDCR_EL2_TDE;
>>   
>>   		/* Save guest debug state */
>>   		save_guest_debug_regs(vcpu);
>> @@ -202,13 +186,13 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
>>   
>>   	/* Trap debug register access */
>>   	if (trap_debug)
>> -		vcpu->arch.mdcr_el2 |= MDCR_EL2_TDA;
>> +		vcpu->arch.ctxt.mdcr_el2 |= MDCR_EL2_TDA;
>>   
>>   	/* If KDE or MDE are set, perform a full save/restore cycle. */
>>   	if (vcpu_read_sys_reg(vcpu, MDSCR_EL1) & (DBG_MDSCR_KDE | DBG_MDSCR_MDE))
>>   		vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY;
>>   
>> -	trace_kvm_arm_set_dreg32("MDCR_EL2", vcpu->arch.mdcr_el2);
>> +	trace_kvm_arm_set_dreg32("MDCR_EL2", vcpu->arch.ctxt.mdcr_el2);
>>   	trace_kvm_arm_set_dreg32("MDSCR_EL1", vcpu_read_sys_reg(vcpu, MDSCR_EL1));
>>   }
>>   
>> diff --git a/arch/arm64/kvm/hyp/debug-sr.c b/arch/arm64/kvm/hyp/debug-sr.c
>> index 5000976..f49a3f7 100644
>> --- a/arch/arm64/kvm/hyp/debug-sr.c
>> +++ b/arch/arm64/kvm/hyp/debug-sr.c
>> @@ -198,8 +198,3 @@ void __hyp_text __debug_switch_to_host(struct kvm_vcpu *vcpu)
>>   
>>   	vcpu->arch.flags &= ~KVM_ARM64_DEBUG_DIRTY;
>>   }
>> -
>> -u32 __hyp_text __kvm_get_mdcr_el2(void)
>> -{
>> -	return read_sysreg(mdcr_el2);
>> -}
>> diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
>> index f5cefa1..fe76e24 100644
>> --- a/arch/arm64/kvm/hyp/switch.c
>> +++ b/arch/arm64/kvm/hyp/switch.c
>> @@ -84,7 +84,7 @@ static void __hyp_text __activate_traps_common(struct kvm_vcpu *vcpu)
>>   	 */
>>   	write_sysreg(0, pmselr_el0);
>>   	write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0);
>> -	write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
>> +	write_sysreg(vcpu->arch.ctxt.mdcr_el2, mdcr_el2);
>>   }
>>   
>>   static void __hyp_text __deactivate_traps_common(void)
>> @@ -161,15 +161,11 @@ NOKPROBE_SYMBOL(deactivate_traps_vhe);
>>   
>>   static void __hyp_text __deactivate_traps_nvhe(struct kvm_cpu_context *host_ctxt)
>>   {
>> -	u64 mdcr_el2 = read_sysreg(mdcr_el2);
>>   	struct kvm_cpu_context *hyp_host_ctxt = kern_hyp_va(host_ctxt);
>>   
>>   	__deactivate_traps_common();
>>   
>> -	mdcr_el2 &= MDCR_EL2_HPMN_MASK;
>> -	mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT;
>> -
>> -	write_sysreg(mdcr_el2, mdcr_el2);
>> +	write_sysreg(hyp_host_ctxt->mdcr_el2, mdcr_el2);
>>   	write_sysreg(hyp_host_ctxt->hcr_el2, hcr_el2);
>>   	write_sysreg(CPTR_EL2_DEFAULT, cptr_el2);
>>   }
>> @@ -199,15 +195,11 @@ void activate_traps_vhe_load(struct kvm_vcpu *vcpu)
>>   	__activate_traps_common(vcpu);
>>   }
>>   
>> -void deactivate_traps_vhe_put(void)
>> +void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu)
>>   {
>> -	u64 mdcr_el2 = read_sysreg(mdcr_el2);
>> -
>> -	mdcr_el2 &= MDCR_EL2_HPMN_MASK |
>> -		    MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT |
>> -		    MDCR_EL2_TPMS;
>> +	struct kvm_cpu_context *host_ctxt = vcpu->arch.host_cpu_context;
>>   
>> -	write_sysreg(mdcr_el2, mdcr_el2);
>> +	write_sysreg(host_ctxt->mdcr_el2, mdcr_el2);
>>   
>>   	__deactivate_traps_common();
>>   }
>> diff --git a/arch/arm64/kvm/hyp/sysreg-sr.c b/arch/arm64/kvm/hyp/sysreg-sr.c
>> index 277f82b..6afd309 100644
>> --- a/arch/arm64/kvm/hyp/sysreg-sr.c
>> +++ b/arch/arm64/kvm/hyp/sysreg-sr.c
>> @@ -298,7 +298,7 @@ void kvm_vcpu_put_sysregs(struct kvm_vcpu *vcpu)
>>   	if (!has_vhe())
>>   		return;
>>   
>> -	deactivate_traps_vhe_put();
>> +	deactivate_traps_vhe_put(vcpu);
>>   
>>   	__sysreg_save_el1_state(guest_ctxt);
>>   	__sysreg_save_user_state(guest_ctxt);
>> @@ -333,4 +333,10 @@ void __hyp_text __kvm_populate_host_regs(void)
>>   
>>   	host_ctxt = __hyp_this_cpu_ptr(kvm_host_cpu_state);
>>   	host_ctxt->hcr_el2 = read_sysreg(hcr_el2);
>> +	/*
>> +	 * Retrieve the initial value of mdcr_el2 so we can preserve
>> +	 * MDCR_EL2.HPMN which has presumably been set-up by some
>> +	 * knowledgeable bootcode.
>> +	 */
>> +	host_ctxt->mdcr_el2 = read_sysreg(mdcr_el2);
>>   }
>> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
>> index e8c2ee6..58de0ca 100644
>> --- a/virt/kvm/arm/arm.c
>> +++ b/virt/kvm/arm/arm.c
>> @@ -1322,7 +1322,6 @@ static void cpu_hyp_reinit(void)
>>   	else
>>   		cpu_init_hyp_mode(NULL);
>>   
>> -	kvm_arm_init_debug();
>>   	cpu_init_host_ctxt();
>>   
>>   	if (vgic_present)
> 
> Looks reasonable overall.
> 
> I was wondering whether it would make sense to move some masking into
> __kvm_populate_host_regs(), but probably not.
> 
> Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Thanks for the review.

Thanks,
Amit D.
> 
> Cheers
> ---Dave
> 
> 
>> -- 
>> 2.7.4
>>
>> _______________________________________________
>> kvmarm mailing list
>> kvmarm@lists.cs.columbia.edu
>> https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v8 6/9] KVM: arm64: Add vcpu feature flags to control ptrauth accessibility
  2019-04-05 11:02   ` Dave Martin
@ 2019-04-08  5:12     ` Amit Daniel Kachhap
  0 siblings, 0 replies; 30+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-08  5:12 UTC (permalink / raw)
  To: Dave Martin
  Cc: linux-arm-kernel, Marc Zyngier, Catalin Marinas, Will Deacon,
	Kristina Martsenko, kvmarm, Ramana Radhakrishnan, linux-kernel

Hi,

On 4/5/19 4:32 PM, Dave Martin wrote:
> On Tue, Apr 02, 2019 at 07:57:14AM +0530, Amit Daniel Kachhap wrote:
>> Since Pointer authentication will be enabled or disabled on a
>> per-vcpu basis, vcpu feature flags are added in order to know which
>> vcpus have it enabled from userspace.
>>
>> These features will allow the KVM guest to handle pointer
>> authentication instructions or to treat them as undefined if not set.
>>
>> The helper macro added checks the feature flag along with other
>> conditions, such as VHE mode being present and system support for
>> address/generic pointer authentication.
> 
> Can this patch be put after the context switch patch instead?
> 
> Here, we accept a request from userspace to enable ptrauth, but it will
> mysteriously fail to work.  I worked around a similar issue by defining
> KVM_ARM64_GUEST_HAS_SVE early in the SVE series, but putting the logic
> to set this flag in vcpu->arch.flags later on (see also comments about
> this below).
> 
>> Necessary documentations are added to reflect the changes done.
>>
>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>> Cc: Mark Rutland <mark.rutland@arm.com>
>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>> Cc: Christoffer Dall <christoffer.dall@arm.com>
>> Cc: kvmarm@lists.cs.columbia.edu
>> ---
>>
>> Changes since v7:
>> * Moved the check for userspace features in this patch [James Morse].
>> * Moved the vcpu feature flags Documentation in this patch [James Morse].
>>
>>   Documentation/arm64/pointer-authentication.txt | 13 +++++++++----
>>   Documentation/virtual/kvm/api.txt              |  4 ++++
>>   arch/arm64/include/asm/kvm_host.h              |  8 +++++++-
>>   arch/arm64/include/uapi/asm/kvm.h              |  2 ++
>>   arch/arm64/kvm/reset.c                         |  7 +++++++
>>   5 files changed, 29 insertions(+), 5 deletions(-)
>>
>> diff --git a/Documentation/arm64/pointer-authentication.txt b/Documentation/arm64/pointer-authentication.txt
>> index 5baca42..b164886 100644
>> --- a/Documentation/arm64/pointer-authentication.txt
>> +++ b/Documentation/arm64/pointer-authentication.txt
>> @@ -87,7 +87,12 @@ used to get and set the keys for a thread.
>>   Virtualization
>>   --------------
>>   
>> -Pointer authentication is not currently supported in KVM guests. KVM
>> -will mask the feature bits from ID_AA64ISAR1_EL1, and attempted use of
>> -the feature will result in an UNDEFINED exception being injected into
>> -the guest.
>> +Pointer authentication is enabled in KVM guest when each virtual cpu is
>> +initialised by passing flags KVM_ARM_VCPU_PTRAUTH_[ADDRESS/GENERIC] and
>> +requesting these two separate cpu features to be enabled. The current KVM
>> +guest implementation works by enabling both features together, so both these
>> +userspace flags are checked together before enabling pointer authentication.
>> +The separate userspace flag will allow to have no userspace ABI changes when
>> +both features are implemented in an isolated way in future.
> 
> Nit: we might make this change, but we don't promise that it will happen.
> 
> So, maybe write:
> 
> "[...] have no userspace ABI changes if support is added in the future
> to allow these two features to be enabled independently of one another."
Ok. sounds good.
> 
>> +
>> +Pointer Authentication is supported in KVM guest only in VHE mode.
>> diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
>> index 7de9eee..aaa048d 100644
>> --- a/Documentation/virtual/kvm/api.txt
>> +++ b/Documentation/virtual/kvm/api.txt
>> @@ -2659,6 +2659,10 @@ Possible features:
>>   	  Depends on KVM_CAP_ARM_PSCI_0_2.
>>   	- KVM_ARM_VCPU_PMU_V3: Emulate PMUv3 for the CPU.
>>   	  Depends on KVM_CAP_ARM_PMU_V3.
>> +	- KVM_ARM_VCPU_PTRAUTH_ADDRESS: Enables Address Pointer authentication
>> +	  for the CPU and supported only on arm64 architecture.
> 
> We should probably add:
> 
> 	Must be requested if KVM_ARM_VCPU_PTRAUTH_GENERIC is also requested.
> 
>> +	- KVM_ARM_VCPU_PTRAUTH_GENERIC: Enables Generic Pointer authentication
>> +	  for the CPU and supported only on arm64 architecture.
> 
> Similarly:
> 
> 	Must be requested if KVM_ARM_VCPU_PTRAUTH_ADDRESS is also requested.
ok.
> 
> (Or otherwise explain that both features must be enabled together or not at
> all.)
> 
>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
>> index e3ccd7b..9dd2918 100644
>> --- a/arch/arm64/include/asm/kvm_host.h
>> +++ b/arch/arm64/include/asm/kvm_host.h
>> @@ -45,7 +45,7 @@
>>   
>>   #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS
>>   
>> -#define KVM_VCPU_MAX_FEATURES 4
>> +#define KVM_VCPU_MAX_FEATURES 6
>>   
>>   #define KVM_REQ_SLEEP \
>>   	KVM_ARCH_REQ_FLAGS(0, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
>> @@ -491,6 +491,12 @@ static inline bool kvm_arch_requires_vhe(void)
>>   	return false;
>>   }
>>   
>> +#define vcpu_has_ptrauth(vcpu)	(has_vhe() && \
>> +		system_supports_address_auth() && \
>> +		system_supports_generic_auth() && \
>> +		test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) && \
>> +		test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features))
> 
> We're checking 5 things here, which we don't necessarily want to do
> every time.
> 
> Is this used on any hot path?
These checks are used at vcpu_load level but not at vcpu_run level.
> 
> This kind of thing is one reason why I added vcpu->arch.flags: we can
> make the policy decision about whether to set the flag in
> kvm_reset_vcpu(), then afterwards we only need to check the flag.
Yes, agreed that the deep checks can be avoided. Let me check how your 
SVE series uses vcpu->arch.flags.
> 
>> +
>>   static inline void kvm_arch_hardware_unsetup(void) {}
>>   static inline void kvm_arch_sync_events(struct kvm *kvm) {}
>>   static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {}
>> diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
>> index 97c3478..8806f71 100644
>> --- a/arch/arm64/include/uapi/asm/kvm.h
>> +++ b/arch/arm64/include/uapi/asm/kvm.h
>> @@ -102,6 +102,8 @@ struct kvm_regs {
>>   #define KVM_ARM_VCPU_EL1_32BIT		1 /* CPU running a 32bit VM */
>>   #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
>>   #define KVM_ARM_VCPU_PMU_V3		3 /* Support guest PMUv3 */
>> +#define KVM_ARM_VCPU_PTRAUTH_ADDRESS	4 /* VCPU uses address authentication */
>> +#define KVM_ARM_VCPU_PTRAUTH_GENERIC	5 /* VCPU uses generic authentication */
>>   
>>   struct kvm_vcpu_init {
>>   	__u32 target;
>> diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
>> index f16a5f8..717afed 100644
>> --- a/arch/arm64/kvm/reset.c
>> +++ b/arch/arm64/kvm/reset.c
>> @@ -128,6 +128,13 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
>>   	if (loaded)
>>   		kvm_arch_vcpu_put(vcpu);
>>   
>> +	if (test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) ||
>> +		test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features)) {
>> +		/* Verify that KVM startup matches the conditions for ptrauth */
>> +		if (!vcpu_has_ptrauth(vcpu))
>> +			goto out;
>> +	}
>> +
> 
> This looks like it works, but I find the way vcpu->arch.features is used
> in two different ways at the same time a bit confusing.
ok. Will check if there is some other way of placing the checks using vcpu->arch.flags.

Thanks,
Amit D.
> 
> Cheers
> ---Dave
> 

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [kvmtool PATCH v8 9/9] KVM: arm/arm64: Add a vcpu feature for pointer authentication
  2019-04-05 11:04   ` Dave Martin
@ 2019-04-08  8:43     ` Amit Daniel Kachhap
  0 siblings, 0 replies; 30+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-08  8:43 UTC (permalink / raw)
  To: Dave Martin
  Cc: linux-arm-kernel, Marc Zyngier, Catalin Marinas, Will Deacon,
	Kristina Martsenko, kvmarm, Ramana Radhakrishnan, linux-kernel

Hi Dave,

On 4/5/19 4:34 PM, Dave Martin wrote:
> On Tue, Apr 02, 2019 at 07:57:17AM +0530, Amit Daniel Kachhap wrote:
>> This is a runtime capability for KVM tool to enable Arm64 8.3 Pointer
>> Authentication in the guest kernel. Two vcpu features
>> KVM_ARM_VCPU_PTRAUTH_[ADDRESS/GENERIC] are supplied together to enable
>> Pointer Authentication in KVM guest after checking the capability.
>>
>> A command line option --ptrauth is also required to select this feature.
>>
>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>> ---
>>
>> Changes since v7:
>> * Added check for capability KVM_CAP_ARM_PTRAUTH_GENERIC
>>
>>   arm/aarch32/include/kvm/kvm-cpu-arch.h    | 1 +
>>   arm/aarch64/include/asm/kvm.h             | 2 ++
>>   arm/aarch64/include/kvm/kvm-config-arch.h | 4 +++-
>>   arm/aarch64/include/kvm/kvm-cpu-arch.h    | 2 ++
>>   arm/include/arm-common/kvm-config-arch.h  | 1 +
>>   arm/kvm-cpu.c                             | 7 +++++++
>>   include/linux/kvm.h                       | 2 ++
>>   7 files changed, 18 insertions(+), 1 deletion(-)
>>
>> diff --git a/arm/aarch32/include/kvm/kvm-cpu-arch.h b/arm/aarch32/include/kvm/kvm-cpu-arch.h
>> index d28ea67..520ea76 100644
>> --- a/arm/aarch32/include/kvm/kvm-cpu-arch.h
>> +++ b/arm/aarch32/include/kvm/kvm-cpu-arch.h
>> @@ -13,4 +13,5 @@
>>   #define ARM_CPU_ID		0, 0, 0
>>   #define ARM_CPU_ID_MPIDR	5
>>   
>> +#define ARM_VCPU_PTRAUTH_FEATURE	0
>>   #endif /* KVM__KVM_CPU_ARCH_H */
>> diff --git a/arm/aarch64/include/asm/kvm.h b/arm/aarch64/include/asm/kvm.h
>> index 97c3478..d4d0d8c 100644
>> --- a/arm/aarch64/include/asm/kvm.h
>> +++ b/arm/aarch64/include/asm/kvm.h
>> @@ -102,6 +102,8 @@ struct kvm_regs {
>>   #define KVM_ARM_VCPU_EL1_32BIT		1 /* CPU running a 32bit VM */
>>   #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
>>   #define KVM_ARM_VCPU_PMU_V3		3 /* Support guest PMUv3 */
>> +#define KVM_ARM_VCPU_PTRAUTH_ADDRESS	4 /* CPU uses address pointer authentication */
>> +#define KVM_ARM_VCPU_PTRAUTH_GENERIC	5 /* CPU uses generic pointer authentication */
>>   
>>   struct kvm_vcpu_init {
>>   	__u32 target;
>> diff --git a/arm/aarch64/include/kvm/kvm-config-arch.h b/arm/aarch64/include/kvm/kvm-config-arch.h
>> index 04be43d..2074684 100644
>> --- a/arm/aarch64/include/kvm/kvm-config-arch.h
>> +++ b/arm/aarch64/include/kvm/kvm-config-arch.h
>> @@ -8,7 +8,9 @@
>>   			"Create PMUv3 device"),				\
>>   	OPT_U64('\0', "kaslr-seed", &(cfg)->kaslr_seed,			\
>>   			"Specify random seed for Kernel Address Space "	\
>> -			"Layout Randomization (KASLR)"),
>> +			"Layout Randomization (KASLR)"),		\
>> +	OPT_BOOLEAN('\0', "ptrauth", &(cfg)->has_ptrauth,		\
>> +			"Enable address authentication"),
> 
> This should probably say "pointer", not "address" now, since we enable
> both kinds of ptrauth together.  (Sorry!)
yes.
> 
> When discussing how to control SVE, I was eventually convinced that it
> is more user-friendly to make SVE default to on if present, and maybe
> provide two options --disable-sve, --enable-sve, in case the user wants
> to force it off or on instead of just getting what the host supports.
> 
> Passing --enable-sve on a host that doesn't support SVE would then lead
> to kvmtool bailing out with an error, which is probably better than
> silently turning it off.
I agree that leaving ptrauth on by default makes more sense, as it is 
easy for userspace to invoke kvmtool without adding any extra 
parameter. However, all 4 current vcpu features except power-off have 
a configuration option to turn them on.

I suppose for --enable-sve, the failure will happen at the capability 
check ioctl rather than at KVM_ARM_VCPU_INIT.
> 
> I don't have this change in my kvmtool patches yet.
> 
> What's your view?  It makes sense to do things the same way for all
> features if we can.
Yes, it is sensible to have the same tuning options for all new features.
> 
>>   #include "arm-common/kvm-config-arch.h"
>>   
>> diff --git a/arm/aarch64/include/kvm/kvm-cpu-arch.h b/arm/aarch64/include/kvm/kvm-cpu-arch.h
>> index a9d8563..fcc2107 100644
>> --- a/arm/aarch64/include/kvm/kvm-cpu-arch.h
>> +++ b/arm/aarch64/include/kvm/kvm-cpu-arch.h
>> @@ -17,4 +17,6 @@
>>   #define ARM_CPU_CTRL		3, 0, 1, 0
>>   #define ARM_CPU_CTRL_SCTLR_EL1	0
>>   
>> +#define ARM_VCPU_PTRAUTH_FEATURE	((1UL << KVM_ARM_VCPU_PTRAUTH_ADDRESS) \
>> +					| (1UL << KVM_ARM_VCPU_PTRAUTH_GENERIC))
>>   #endif /* KVM__KVM_CPU_ARCH_H */
>> diff --git a/arm/include/arm-common/kvm-config-arch.h b/arm/include/arm-common/kvm-config-arch.h
>> index 5734c46..5badcbd 100644
>> --- a/arm/include/arm-common/kvm-config-arch.h
>> +++ b/arm/include/arm-common/kvm-config-arch.h
>> @@ -10,6 +10,7 @@ struct kvm_config_arch {
>>   	bool		aarch32_guest;
>>   	bool		has_pmuv3;
>>   	u64		kaslr_seed;
>> +	bool		has_ptrauth;
>>   	enum irqchip_type irqchip;
>>   	u64		fw_addr;
>>   };
>> diff --git a/arm/kvm-cpu.c b/arm/kvm-cpu.c
>> index 7780251..398c9d6 100644
>> --- a/arm/kvm-cpu.c
>> +++ b/arm/kvm-cpu.c
>> @@ -68,6 +68,13 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned long cpu_id)
>>   		vcpu_init.features[0] |= (1UL << KVM_ARM_VCPU_PSCI_0_2);
>>   	}
>>   
>> +	/* Set KVM_ARM_VCPU_PTRAUTH_[ADDRESS/GENERIC] if available */
>> +	if (kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_ADDRESS) &&
>> +		kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_GENERIC)) {
>> +		if (kvm->cfg.arch.has_ptrauth)
>> +			vcpu_init.features[0] |= ARM_VCPU_PTRAUTH_FEATURE;
>> +	}
>> +
>>   	/*
>>   	 * If the preferred target ioctl is successful then
>>   	 * use preferred target else try each and every target type
>> diff --git a/include/linux/kvm.h b/include/linux/kvm.h
>> index 6d4ea4b..500ac2b 100644
>> --- a/include/linux/kvm.h
>> +++ b/include/linux/kvm.h
>> @@ -988,6 +988,8 @@ struct kvm_ppc_resize_hpt {
>>   #define KVM_CAP_ARM_VM_IPA_SIZE 165
>>   #define KVM_CAP_MANUAL_DIRTY_LOG_PROTECT 166
>>   #define KVM_CAP_HYPERV_CPUID 167
>> +#define KVM_CAP_ARM_PTRAUTH_ADDRESS 168
>> +#define KVM_CAP_ARM_PTRAUTH_GENERIC 169
> 
> (Note, 168 will probably be KVM_CAP_ARM_SVE, now that the SVE patches
> are in kvmarm/next.)
ok thanks for the information.

Thanks,
Amit Daniel
> 
> Cheers
> ---Dave
> 

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v8 8/9] KVM: arm64: Add capability to advertise ptrauth for guest
  2019-04-05 11:03   ` Dave Martin
@ 2019-04-08  8:51     ` Amit Daniel Kachhap
  0 siblings, 0 replies; 30+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-08  8:51 UTC (permalink / raw)
  To: Dave Martin
  Cc: linux-arm-kernel, Marc Zyngier, Catalin Marinas, Will Deacon,
	Kristina Martsenko, kvmarm, Ramana Radhakrishnan, linux-kernel,
	Mark Rutland

Hi,

On 4/5/19 4:33 PM, Dave Martin wrote:
> On Tue, Apr 02, 2019 at 07:57:16AM +0530, Amit Daniel Kachhap wrote:
>> This patch advertises the capability of two cpu features called address
>> pointer authentication and generic pointer authentication. These
>> capabilities depend upon system support for pointer authentication and
>> VHE mode.
>>
>> The current arm64 KVM partially implements pointer authentication,
>> and support for address/generic authentication is tied together.
>> However, separate ABI requirements for both of them are added so that
>> a future isolated implementation will not require any ABI changes.
>>
>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>> Cc: Mark Rutland <mark.rutland@arm.com>
>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>> Cc: Christoffer Dall <christoffer.dall@arm.com>
>> Cc: kvmarm@lists.cs.columbia.edu
>> ---
>>
>> Changes since v7:
>> * Created 2 capabilities KVM_CAP_ARM_PTRAUTH_ADDRESS and KVM_CAP_ARM_PTRAUTH_GENERIC
>>    instead of one KVM_CAP_ARM_PTRAUTH [Kristina Martsenko].
>> * Added documentation here itself instead of in a new patch.
>>
>>   Documentation/virtual/kvm/api.txt | 3 +++
>>   arch/arm64/kvm/reset.c            | 6 ++++++
>>   include/uapi/linux/kvm.h          | 2 ++
>>   3 files changed, 11 insertions(+)
>>
>> diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
>> index aaa048d..9b56892 100644
>> --- a/Documentation/virtual/kvm/api.txt
>> +++ b/Documentation/virtual/kvm/api.txt
>> @@ -2661,8 +2661,11 @@ Possible features:
>>   	  Depends on KVM_CAP_ARM_PMU_V3.
>>   	- KVM_ARM_VCPU_PTRAUTH_ADDRESS: Enables Address Pointer authentication
>>   	  for the CPU and supported only on arm64 architecture.
>> +	  Depends on KVM_CAP_ARM_PTRAUTH_ADDRESS.
>>   	- KVM_ARM_VCPU_PTRAUTH_GENERIC: Enables Generic Pointer authentication
>>   	  for the CPU and supported only on arm64 architecture.
>> +	  Depends on KVM_CAP_ARM_PTRAUTH_GENERIC.
>>   
>>   
>>   4.83 KVM_ARM_PREFERRED_TARGET
>> diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
>> index 717afed..8aa8982 100644
>> --- a/arch/arm64/kvm/reset.c
>> +++ b/arch/arm64/kvm/reset.c
>> @@ -92,6 +92,12 @@ int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>>   	case KVM_CAP_ARM_VM_IPA_SIZE:
>>   		r = kvm_ipa_limit;
>>   		break;
>> +	case KVM_CAP_ARM_PTRAUTH_ADDRESS:
>> +		r = has_vhe() && system_supports_address_auth();
>> +		break;
>> +	case KVM_CAP_ARM_PTRAUTH_GENERIC:
>> +		r = has_vhe() && system_supports_generic_auth();
>> +		break;
> 
> If some hardware supports just one auth type, we would report just one
> of these caps.  Although we have the rule that userspace is not allowed
> to request these independently in KVM_ARM_VCPU_INIT anyway, I think it
> would be easier for userspace if we suppress both caps if either auth
> type isn't available on the host.  e.g.:
> 
> 	case KVM_CAP_ARM_PTRAUTH_ADDRESS:
> 	case KVM_CAP_ARM_PTRAUTH_GENERIC:
> 		r = has_vhe() && system_supports_address_auth() &&
> 			system_supports_generic_auth();
> 		break;
> 
> We could revert to the above code later on, and apply the ABI
> relaxations described in my response to the vcpu features patch, if
> someday we add support to KVM for coping with host hardware that
> supports just one auth type.
These two separate capabilities are introduced in this iteration, so I 
was not sure whether to apply the suppression at the capability ioctl 
level or at the KVM_ARM_VCPU_INIT ioctl level. But I agree that this way 
will be clearer to userspace.
> 
> 
> I'd like Mark to comment on this, since he's more aware of the
> architectural situation than I am.
ok.

Thanks,
Amit D.
> 
> Cheers
> ---Dave
> 

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v8 4/9] KVM: arm/arm64: preserve host HCR_EL2 value
  2019-04-06 10:37   ` James Morse
@ 2019-04-08 13:05     ` Amit Daniel Kachhap
  2019-04-08 18:39       ` Kristina Martsenko
  0 siblings, 1 reply; 30+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-08 13:05 UTC (permalink / raw)
  To: James Morse, linux-arm-kernel
  Cc: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	Kristina Martsenko, linux-kernel, Mark Rutland, Julien Thierry

Hi James,

On 4/6/19 4:07 PM, James Morse wrote:
> Hi Amit,
> 
> On 02/04/2019 03:27, Amit Daniel Kachhap wrote:
>> From: Mark Rutland <mark.rutland@arm.com>
>>
>> When restoring HCR_EL2 for the host, KVM uses HCR_HOST_VHE_FLAGS, which
>> is a constant value. This works today, as the host HCR_EL2 value is
>> always the same, but this will get in the way of supporting extensions
>> that require HCR_EL2 bits to be set conditionally for the host.
>>
>> To allow such features to work without KVM having to explicitly handle
>> every possible host feature combination, this patch has KVM save/restore
>> for the host HCR when switching to/from a guest HCR. The saving of the
>> register is done once during cpu hypervisor initialization and it is
>> simply restored after the switch from the guest.
>>
>> For fetching HCR_EL2 during kvm initialisation, a hyp call is made using
>> kvm_call_hyp, which is helpful in the non-VHE case.
>>
>> For the hyp TLB maintenance code, __tlb_switch_to_host_vhe() is updated
>> to toggle the TGE bit with a RMW sequence, as we already do in
>> __tlb_switch_to_guest_vhe().
>>
>> The value of hcr_el2 is now stored in struct kvm_cpu_context as both host
>> and guest can now use this field in a common way.
> 
> These HCR_EL2 flags have had me confused for quite a while.
> I thought this was preserving the value that head.S or cpufeature.c had set, and with
> ptrauth we couldn't know what this register should be anymore; the host flags would have to vary.
> 
> Kristina's explanation of it[0] clarified things, and with a bit more digging it appears
> we always set API/APK, even if the hardware doesn't support the feature (as it's harmless).
> So we don't need to vary the host flags...

API/APK is always set for NVHE host mode.
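
(For reference, if I am reading the current base kernel correctly, the
nVHE host flags already include these bits; roughly, from
arch/arm64/include/asm/kvm_arm.h:

	#define HCR_HOST_NVHE_FLAGS	(HCR_RW | HCR_API | HCR_APK)

so the host value itself does not need to vary with the hardware.)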
> 
> My question is, what breaks if this patch isn't merged? (the MDCR change is cleanup we can
> do because of this HCR change), is this HCR change just cleanup too? If so, can we merge
> ptrauth without either, so we only make the change when it's needed? (it will cause some
> changes in your patch 7, but I can't see where you depend on the host flags).

Yes, you are right that this patch does not directly affect pointer 
authentication functionality, but it contains several optimizations and 
cleanups, such as:

* It removes the assignment of the static flags 
HCR_HOST_VHE_FLAGS/HCR_HOST_NVHE_FLAGS from switch.c, so the switching 
functions are now more generic in nature.
* Currently the variation in hcr_el2 flags is across modes (VHE/NVHE). 
Any future conditional change to the host HCR_EL2 within those modes 
will not require code changes in switch.c.
* The save of hcr_el2 is done once at hyp init time, so it adds no cost 
to the world switch path.

I am fine with posting it separately as well.
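
(For anyone following along, the RMW sequence mentioned in the commit
message replaces the old constant write of HCR_HOST_VHE_FLAGS in
__tlb_switch_to_host_vhe() with something like:

	u64 val;

	/* Set TGE again without clobbering the other saved host bits */
	val = read_sysreg(hcr_el2);
	val |= HCR_TGE;
	write_sysreg(val, hcr_el2);

mirroring how __tlb_switch_to_guest_vhe() already clears TGE before the
guest TLB maintenance.)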
> 
> I recall Christoffer wanting to keep the restored DAIF register value on guest-exit static
> to avoid extra loads/stores when we know what the value would be. I think the same logic
> applies here.
Yes, saving the host registers once was suggested by Christoffer.

Thanks,
Amit D
> 
> You mentioned in the cover letter the series has some history to it!
> 
> 
> Thanks,
> 
> James
> 
> [0] http://lore.kernel.org/r/7ec2f950-7587-5ecd-6caa-c2fd091ad22c@arm.com
> 

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v8 4/9] KVM: arm/arm64: preserve host HCR_EL2 value
  2019-04-08 13:05     ` Amit Daniel Kachhap
@ 2019-04-08 18:39       ` Kristina Martsenko
  2019-04-09  8:38         ` Marc Zyngier
  2019-04-10  6:44         ` Amit Daniel Kachhap
  0 siblings, 2 replies; 30+ messages in thread
From: Kristina Martsenko @ 2019-04-08 18:39 UTC (permalink / raw)
  To: Amit Daniel Kachhap, James Morse, linux-arm-kernel
  Cc: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	linux-kernel, Mark Rutland, Julien Thierry

On 08/04/2019 14:05, Amit Daniel Kachhap wrote:
> Hi James,
> 
> On 4/6/19 4:07 PM, James Morse wrote:
>> Hi Amit,
>>
>> On 02/04/2019 03:27, Amit Daniel Kachhap wrote:
>>> From: Mark Rutland <mark.rutland@arm.com>
>>>
>>> When restoring HCR_EL2 for the host, KVM uses HCR_HOST_VHE_FLAGS, which
>>> is a constant value. This works today, as the host HCR_EL2 value is
>>> always the same, but this will get in the way of supporting extensions
>>> that require HCR_EL2 bits to be set conditionally for the host.
>>>
>>> To allow such features to work without KVM having to explicitly handle
>>> every possible host feature combination, this patch has KVM save/restore
>>> for the host HCR when switching to/from a guest HCR. The saving of the
>>> register is done once during cpu hypervisor initialization and it is
>>> simply restored after the switch from the guest.
>>>
>>> For fetching HCR_EL2 during kvm initialisation, a hyp call is made using
>>> kvm_call_hyp, which is helpful in the non-VHE case.
>>>
>>> For the hyp TLB maintenance code, __tlb_switch_to_host_vhe() is updated
>>> to toggle the TGE bit with a RMW sequence, as we already do in
>>> __tlb_switch_to_guest_vhe().
>>>
>>> The value of hcr_el2 is now stored in struct kvm_cpu_context as both host
>>> and guest can now use this field in a common way.
>>
>> These HCR_EL2 flags have had me confused for quite a while.
>> I thought this was preserving the value that head.S or cpufeature.c had set, and with
>> ptrauth we couldn't know what this register should be anymore; the host flags would have to vary.
>>
>> Kristina's explanation of it[0] clarified things, and with a bit more digging it appears
>> we always set API/APK, even if the hardware doesn't support the feature (as it's harmless).
>> So we don't need to vary the host flags...
> 
> API/APK is always set for NVHE host mode.
>>
>> My question is, what breaks if this patch isn't merged? (the MDCR change is cleanup we can
>> do because of this HCR change), is this HCR change just cleanup too? If so, can we merge
>> ptrauth without either, so we only make the change when it's needed? (it will cause some
>> changes in your patch 7, but I can't see where you depend on the host flags).
> 
> Yes, you are right that this patch does not directly affect pointer authentication functionality, but it contains several optimizations and cleanups, such as:
> 
> * It removes the assignment of the static flags HCR_HOST_VHE_FLAGS/HCR_HOST_NVHE_FLAGS from switch.c, so the switching functions are now more generic in nature.
> * Currently the variation in hcr_el2 flags is across modes (VHE/NVHE). Any future conditional change to the host HCR_EL2 within those modes will not require code changes in switch.c.
> * The save of hcr_el2 is done once at hyp init time, so it adds no cost to the world switch path.
> 
> I am fine with posting it separately as well.

FWIW I think it makes sense to post the HCR and MDCR patches separately
from this series. That should make it clear that pointer auth does not
depend on these changes, and should make it easier to evaluate the
changes on their own.

Others' opinions are welcome as well.

>> I recall Christoffer wanting to keep the restored DAIF register value on guest-exit static
>> to avoid extra loads/stores when we know what the value would be. I think the same logic
>> applies here.
> Yes, saving the host registers once was suggested by Christoffer.

I'm not familiar with this, but James may be referring to
kvm_arm_vhe_guest_exit, which restores DAIF to a constant value. It
seems like originally the patch saved/restored DAIF [1], but it was
decided that a constant value was better.
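
For completeness, kvm_arm_vhe_guest_exit currently reads roughly as
follows (modulo the comment explaining the isb):

	static inline void kvm_arm_vhe_guest_exit(void)
	{
		/* Restore a known-constant DAIF state, not a saved copy */
		local_daif_restore(DAIF_PROCCTX_NOIRQ);
		isb();
	}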

Thanks,
Kristina

[1] https://www.spinics.net/lists/arm-kernel/msg599798.html

>> You mentioned in the cover letter the series has some history to it!
>>
>>
>> Thanks,
>>
>> James
>>
>> [0] http://lore.kernel.org/r/7ec2f950-7587-5ecd-6caa-c2fd091ad22c@arm.com
>>


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v8 4/9] KVM: arm/arm64: preserve host HCR_EL2 value
  2019-04-08 18:39       ` Kristina Martsenko
@ 2019-04-09  8:38         ` Marc Zyngier
  2019-04-10  6:45           ` Amit Daniel Kachhap
  2019-04-10  6:44         ` Amit Daniel Kachhap
  1 sibling, 1 reply; 30+ messages in thread
From: Marc Zyngier @ 2019-04-09  8:38 UTC (permalink / raw)
  To: Kristina Martsenko, Amit Daniel Kachhap, James Morse, linux-arm-kernel
  Cc: Christoffer Dall, Catalin Marinas, Will Deacon, Andrew Jones,
	Dave Martin, Ramana Radhakrishnan, kvmarm, linux-kernel,
	Mark Rutland, Julien Thierry

On 08/04/2019 19:39, Kristina Martsenko wrote:
> On 08/04/2019 14:05, Amit Daniel Kachhap wrote:
>> Hi James,
>>
>> On 4/6/19 4:07 PM, James Morse wrote:
>>> Hi Amit,
>>>
>>> On 02/04/2019 03:27, Amit Daniel Kachhap wrote:
>>>> From: Mark Rutland <mark.rutland@arm.com>
>>>>
>>>> When restoring HCR_EL2 for the host, KVM uses HCR_HOST_VHE_FLAGS, which
>>>> is a constant value. This works today, as the host HCR_EL2 value is
>>>> always the same, but this will get in the way of supporting extensions
>>>> that require HCR_EL2 bits to be set conditionally for the host.
>>>>
>>>> To allow such features to work without KVM having to explicitly handle
>>>> every possible host feature combination, this patch has KVM save/restore
>>>> for the host HCR when switching to/from a guest HCR. The saving of the
>>>> register is done once during cpu hypervisor initialization and it is
>>>> simply restored after the switch from the guest.
>>>>
>>>> For fetching HCR_EL2 during kvm initialisation, a hyp call is made using
>>>> kvm_call_hyp, which is helpful in the non-VHE case.
>>>>
>>>> For the hyp TLB maintenance code, __tlb_switch_to_host_vhe() is updated
>>>> to toggle the TGE bit with a RMW sequence, as we already do in
>>>> __tlb_switch_to_guest_vhe().
>>>>
>>>> The value of hcr_el2 is now stored in struct kvm_cpu_context as both host
>>>> and guest can now use this field in a common way.
>>>
>>> These HCR_EL2 flags have had me confused for quite a while.
>>> I thought this was preserving the value that head.S or cpufeature.c had set, and with
>>> ptrauth we couldn't know what this register should be anymore; the host flags would have to vary.
>>>
>>> Kristina's explanation of it[0] clarified things, and with a bit more digging it appears
>>> we always set API/APK, even if the hardware doesn't support the feature (as it's harmless).
>>> So we don't need to vary the host flags...
>>
>> API/APK is always set for NVHE host mode.
>>>
>>> My question is, what breaks if this patch isn't merged? (the MDCR change is cleanup we can
>>> do because of this HCR change), is this HCR change just cleanup too? If so, can we merge
>>> ptrauth without either, so we only make the change when it's needed? (it will cause some
>>> changes in your patch 7, but I can't see where you depend on the host flags).
>>
>> Yes, you are right that this patch does not directly affect pointer authentication functionality, but it contains several optimizations and cleanups, such as:
>>
>> * It removes the assignment of the static flags HCR_HOST_VHE_FLAGS/HCR_HOST_NVHE_FLAGS from switch.c, so the switching functions are now more generic in nature.
>> * Currently the variation in hcr_el2 flags is across modes (VHE/NVHE). Any future conditional change to the host HCR_EL2 within those modes will not require code changes in switch.c.
>> * The save of hcr_el2 is done once at hyp init time, so it adds no cost to the world switch path.
>>
>> I am fine with posting it separately as well.
> 
> FWIW I think it makes sense to post the HCR and MDCR patches separately
> from this series. That should make it clear that pointer auth does not
> depend on these changes, and should make it easier to evaluate the
> changes on their own.
> 
> Others' opinions are welcome as well.

Agreed. I'm quite eager to move forward with this series, and the fewer
unrelated changes it makes, the better. Cleanups and optimizations can
always be merged at a later time.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v8 4/9] KVM: arm/arm64: preserve host HCR_EL2 value
  2019-04-08 18:39       ` Kristina Martsenko
  2019-04-09  8:38         ` Marc Zyngier
@ 2019-04-10  6:44         ` Amit Daniel Kachhap
  1 sibling, 0 replies; 30+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-10  6:44 UTC (permalink / raw)
  To: Kristina Martsenko, James Morse, linux-arm-kernel
  Cc: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	linux-kernel, Mark Rutland, Julien Thierry

Hi,

On 4/9/19 12:09 AM, Kristina Martsenko wrote:
> On 08/04/2019 14:05, Amit Daniel Kachhap wrote:
>> Hi James,
>>
>> On 4/6/19 4:07 PM, James Morse wrote:
>>> Hi Amit,
>>>
>>> On 02/04/2019 03:27, Amit Daniel Kachhap wrote:
>>>> From: Mark Rutland <mark.rutland@arm.com>
>>>>
>>>> When restoring HCR_EL2 for the host, KVM uses HCR_HOST_VHE_FLAGS, which
>>>> is a constant value. This works today, as the host HCR_EL2 value is
>>>> always the same, but this will get in the way of supporting extensions
>>>> that require HCR_EL2 bits to be set conditionally for the host.
>>>>
>>>> To allow such features to work without KVM having to explicitly handle
>>>> every possible host feature combination, this patch has KVM save/restore
>>>> for the host HCR when switching to/from a guest HCR. The saving of the
>>>> register is done once during cpu hypervisor initialization and it is
>>>> simply restored after the switch from the guest.
>>>>
>>>> For fetching HCR_EL2 during kvm initialisation, a hyp call is made using
>>>> kvm_call_hyp, which is helpful in the non-VHE case.
>>>>
>>>> For the hyp TLB maintenance code, __tlb_switch_to_host_vhe() is updated
>>>> to toggle the TGE bit with a RMW sequence, as we already do in
>>>> __tlb_switch_to_guest_vhe().
>>>>
>>>> The value of hcr_el2 is now stored in struct kvm_cpu_context as both host
>>>> and guest can now use this field in a common way.
>>>
>>> These HCR_EL2 flags have had me confused for quite a while.
>>> I thought this was preserving the value that head.S or cpufeature.c had set, and with
>>> ptrauth we couldn't know what this register should be anymore; the host flags would have to vary.
>>>
>>> Kristina's explanation of it[0] clarified things, and with a bit more digging it appears
>>> we always set API/APK, even if the hardware doesn't support the feature (as it's harmless).
>>> So we don't need to vary the host flags...
>>
>> API/APK is always set for NVHE host mode.
>>>
>>> My question is, what breaks if this patch isn't merged? (the MDCR change is cleanup we can
>>> do because of this HCR change), is this HCR change just cleanup too? If so, can we merge
>>> ptrauth without either, so we only make the change when it's needed? (it will cause some
>>> changes in your patch 7, but I can't see where you depend on the host flags).
>>
>> Yes, you are right that this patch does not directly affect pointer authentication functionality, but it contains several optimizations and cleanups, such as:
>>
>> * It removes the assignment of the static flags HCR_HOST_VHE_FLAGS/HCR_HOST_NVHE_FLAGS from switch.c, so the switching functions are now more generic in nature.
>> * Currently the variation in hcr_el2 flags is across modes (VHE/NVHE). Any future conditional change to the host HCR_EL2 within those modes will not require code changes in switch.c.
>> * The save of hcr_el2 is done once at hyp init time, so it adds no cost to the world switch path.
>>
>> I am fine with posting it separately as well.
> 
> FWIW I think it makes sense to post the HCR and MDCR patches separately
> from this series. That should make it clear that pointer auth does not
> depend on these changes, and should make it easier to evaluate the
> changes on their own.
ok.
> 
> Others' opinions are welcome as well.
> 
>>> I recall Christoffer wanting to keep the restored DAIF register value on guest-exit static
>>> to avoid extra loads/stores when we know what the value would be. I think the same logic
>>> applies here.
>> Yes, saving the host registers once was suggested by Christoffer.
> 
> I'm not familiar with this, but James may be referring to
> kvm_arm_vhe_guest_exit, which restores DAIF to a constant value. It
> seems like originally the patch saved/restored DAIF [1], but it was
> decided that a constant value was better.
Thanks for the pointer. I misunderstood it.

Thanks,
Amit D
> 
> Thanks,
> Kristina
> 
> [1] https://www.spinics.net/lists/arm-kernel/msg599798.html
> 
>>> You mentioned in the cover letter the series has some history to it!
>>>
>>>
>>> Thanks,
>>>
>>> James
>>>
>>> [0] http://lore.kernel.org/r/7ec2f950-7587-5ecd-6caa-c2fd091ad22c@arm.com
>>>
> 

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v8 4/9] KVM: arm/arm64: preserve host HCR_EL2 value
  2019-04-09  8:38         ` Marc Zyngier
@ 2019-04-10  6:45           ` Amit Daniel Kachhap
  0 siblings, 0 replies; 30+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-10  6:45 UTC (permalink / raw)
  To: Marc Zyngier, Kristina Martsenko, James Morse, linux-arm-kernel
  Cc: Christoffer Dall, Catalin Marinas, Will Deacon, Andrew Jones,
	Dave Martin, Ramana Radhakrishnan, kvmarm, linux-kernel,
	Mark Rutland, Julien Thierry


Hi Marc,
On 4/9/19 2:08 PM, Marc Zyngier wrote:
> On 08/04/2019 19:39, Kristina Martsenko wrote:
>> On 08/04/2019 14:05, Amit Daniel Kachhap wrote:
>>> Hi James,
>>>
>>> On 4/6/19 4:07 PM, James Morse wrote:
>>>> Hi Amit,
>>>>
>>>> On 02/04/2019 03:27, Amit Daniel Kachhap wrote:
>>>>> From: Mark Rutland <mark.rutland@arm.com>
>>>>>
>>>>> When restoring HCR_EL2 for the host, KVM uses HCR_HOST_VHE_FLAGS, which
>>>>> is a constant value. This works today, as the host HCR_EL2 value is
>>>>> always the same, but this will get in the way of supporting extensions
>>>>> that require HCR_EL2 bits to be set conditionally for the host.
>>>>>
>>>>> To allow such features to work without KVM having to explicitly handle
>>>>> every possible host feature combination, this patch has KVM save/restore
>>>>> for the host HCR when switching to/from a guest HCR. The saving of the
>>>>> register is done once during cpu hypervisor initialization and it is
>>>>> simply restored after the switch from the guest.
>>>>>
>>>>> For fetching HCR_EL2 during kvm initialisation, a hyp call is made using
>>>>> kvm_call_hyp, which is helpful in the non-VHE case.
>>>>>
>>>>> For the hyp TLB maintenance code, __tlb_switch_to_host_vhe() is updated
>>>>> to toggle the TGE bit with a RMW sequence, as we already do in
>>>>> __tlb_switch_to_guest_vhe().
>>>>>
>>>>> The value of hcr_el2 is now stored in struct kvm_cpu_context as both host
>>>>> and guest can now use this field in a common way.
>>>>
>>>> These HCR_EL2 flags have had me confused for quite a while.
>>>> I thought this was preserving the value that head.S or cpufeature.c had set, and with
>>>> ptrauth we couldn't know what this register should be anymore; the host flags would have to vary.
>>>>
>>>> Kristina's explanation of it[0] clarified things, and with a bit more digging it appears
>>>> we always set API/APK, even if the hardware doesn't support the feature (as it's harmless).
>>>> So we don't need to vary the host flags...
>>>
>>> API/APK is always set for NVHE host mode.
>>>>
>>>> My question is, what breaks if this patch isn't merged? (the MDCR change is cleanup we can
>>>> do because of this HCR change), is this HCR change just cleanup too? If so, can we merge
>>>> ptrauth without either, so we only make the change when it's needed? (it will cause some
>>>> changes in your patch 7, but I can't see where you depend on the host flags).
>>>
>>> Yes, you are right that this patch does not directly affect pointer authentication functionality, but it contains several optimizations and cleanups, such as:
>>>
>>> * It removes the assignment of the static flags HCR_HOST_VHE_FLAGS/HCR_HOST_NVHE_FLAGS from switch.c, so the switching functions are now more generic in nature.
>>> * Currently the variation in hcr_el2 flags is across modes (VHE/NVHE). Any future conditional change to the host HCR_EL2 within those modes will not require code changes in switch.c.
>>> * The save of hcr_el2 is done once at hyp init time, so it adds no cost to the world switch path.
>>>
>>> I am fine with posting it separately as well.
>>
>> FWIW I think it makes sense to post the HCR and MDCR patches separately
>> from this series. That should make it clear that pointer auth does not
>> depend on these changes, and should make it easier to evaluate the
>> changes on their own.
>>
>> Others' opinions are welcome as well.
> 
> Agreed. I'm quite eager to move forward with this series, and the fewer
> unrelated changes it makes, the better. Cleanups and optimizations can
> always be merged at a later time.
Yes, sure. I will re-spin the patch series shortly.

Thanks,
Amit D.
> 
> Thanks,
> 
> 	M.
> 

^ permalink raw reply	[flat|nested] 30+ messages in thread

end of thread, other threads:[~2019-04-10  6:45 UTC | newest]

Thread overview: 30+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-04-02  2:27 [PATCH v8 0/9] Add ARMv8.3 pointer authentication for kvm guest Amit Daniel Kachhap
2019-04-02  2:27 ` [PATCH v8 1/9] KVM: arm64: Propagate vcpu into read_id_reg() Amit Daniel Kachhap
2019-04-02  2:27 ` [PATCH v8 2/9] KVM: arm64: Support runtime sysreg visibility filtering Amit Daniel Kachhap
2019-04-02  2:27 ` [PATCH v8 3/9] KVM: arm64: Move hyp_symbol_addr to fix dependency Amit Daniel Kachhap
2019-04-05 11:02   ` Dave Martin
2019-04-08  3:42     ` Amit Daniel Kachhap
2019-04-02  2:27 ` [PATCH v8 4/9] KVM: arm/arm64: preserve host HCR_EL2 value Amit Daniel Kachhap
2019-04-05 11:02   ` Dave Martin
2019-04-08  4:31     ` Amit Daniel Kachhap
2019-04-06 10:37   ` James Morse
2019-04-08 13:05     ` Amit Daniel Kachhap
2019-04-08 18:39       ` Kristina Martsenko
2019-04-09  8:38         ` Marc Zyngier
2019-04-10  6:45           ` Amit Daniel Kachhap
2019-04-10  6:44         ` Amit Daniel Kachhap
2019-04-02  2:27 ` [PATCH v8 5/9] KVM: arm/arm64: preserve host MDCR_EL2 value Amit Daniel Kachhap
2019-04-05 11:02   ` Dave Martin
2019-04-08  4:39     ` Amit Daniel Kachhap
2019-04-02  2:27 ` [PATCH v8 6/9] KVM: arm64: Add vcpu feature flags to control ptrauth accessibility Amit Daniel Kachhap
2019-04-05 11:02   ` Dave Martin
2019-04-08  5:12     ` Amit Daniel Kachhap
2019-04-02  2:27 ` [PATCH v8 7/9] KVM: arm/arm64: context-switch ptrauth registers Amit Daniel Kachhap
2019-04-04 19:29   ` Kristina Martsenko
2019-04-05 11:00     ` Amit Daniel Kachhap
2019-04-02  2:27 ` [PATCH v8 8/9] KVM: arm64: Add capability to advertise ptrauth for guest Amit Daniel Kachhap
2019-04-05 11:03   ` Dave Martin
2019-04-08  8:51     ` Amit Daniel Kachhap
2019-04-02  2:27 ` [kvmtool PATCH v8 9/9] KVM: arm/arm64: Add a vcpu feature for pointer authentication Amit Daniel Kachhap
2019-04-05 11:04   ` Dave Martin
2019-04-08  8:43     ` Amit Daniel Kachhap
