All of lore.kernel.org
 help / color / mirror / Atom feed
* [PATCH v9 0/5] Add ARMv8.3 pointer authentication for kvm guest
@ 2019-04-12  3:20 ` Amit Daniel Kachhap
  0 siblings, 0 replies; 77+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-12  3:20 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	Kristina Martsenko, linux-kernel, Amit Daniel Kachhap,
	Mark Rutland, James Morse, Julien Thierry

Hi,

This patch series adds pointer authentication support for KVM guest and
is based on top of Linux kvmarm/next repo. The basic patches in this series was
originally posted by Mark Rutland earlier[1,2] and contains some history
of this work.

Extension Overview:
=============================================

The ARMv8.3 pointer authentication extension adds functionality to detect
modification of pointer values, mitigating certain classes of attack such as
stack smashing, and making return oriented programming attacks harder.

The extension introduces the concept of a pointer authentication code (PAC),
which is stored in some upper bits of pointers. Each PAC is derived from the
original pointer, another 64-bit value (e.g. the stack pointer), and a secret
128-bit key.

New instructions are added which can be used to:

* Insert a PAC into a pointer
* Strip a PAC from a pointer
* Authenticate and strip a PAC from a pointer

The detailed description of ARMv8.3 pointer authentication support in
userspace/kernel and can be found in Kristina's generic pointer authentication
patch series[3].

KVM guest work:
==============================================

If pointer authentication is enabled for KVM guests then the new PAC instructions
will not trap to EL2. If not then they may be ignored if in HINT region or trapped
in EL2 as illegal instruction. Since KVM guest vcpu runs as a thread so they have
a key initialized which will be used by PAC. When world switch happens between
host and guest then this key is exchanged.

The current v9 patch series contains review comments and suggestions by
Kristina Martsenko, Dave Martin, James Morse and Mark Zyngier.

Changes since v8 [10]: Major changes are listed below and detail changes are in
		      each patch.
* Added a new vcpu specific arch flag to control enabling/disabling ptrauth.
* Patches restructured as 3 patches related to hcr_el2, mdcr_el2 and
  hyp_symbol_addr cleanup and optimization dropped. They will be posted separately.

Changes since v7 [9]: Major changes are listed below and detail changes are in
		      each patch.
 * Comments and Documentation updated to reflect using address/generic
   features flag together.
 * Dropped the documentation patch and added those details in the relevant
   patches.
 * Rebased the patch series on 2 patches of Dave Martin v6 SVE series. 
 * Small bug fixes.

Changes since v6 [8]: Major changes are listed below.

* Pointer authentication key switch entirely in assembly now.
* isb instruction added after Key switched to host.
* Use __hyp_this_cpu_ptr for both VHE and nVHE mode.
* 2 separate flags for address and generic authentication.
* kvm_arm_vcpu_ptrauth_allowed renamed to has_vcpu_ptrauth.
* kvm_arm_vcpu_ptrauth_reset renamed to kvm_arm_vcpu_ptrauth_setup_lazy.
* Save of host Key registers now done in ptrauth instruction trap.
* A fix to add kern_hyp_va to get correct host_ctxt pointer in nVHE mode.
* Patches re-structured to better reflect ABI change.

Changes since v5 [7]: Major changes are listed below.

* Split hcr_el2 and mdcr_el2 save/restore in two patches.
* Reverted back save/restore of sys-reg keys as done in V4 [5]. There was
  suggestion by James Morse to use ptrauth utilities in a single place
  in arm core and use them from kvm. However this change deviates from the
  existing sys-reg implementations and is not scalable.
* Invoked the key switch C functions from __guest_enter/__guest_exit assembly.
* Host key save is now done inside vcpu_load.
* Reverted back masking of cpufeature ID registers for ptrauth when disabled
  from userpace.
* Reset of ptrauth key registers not done conditionally.
* Code and Documentation cleanup.

Changes since v4 [6]: Several suggestions from James Morse
* Move host registers to be saved/restored inside struct kvm_cpu_context.
* Similar to hcr_el2, save/restore mdcr_el2 register also.
* Added save routines for ptrauth keys in generic arm core and
  use them during KVM context switch.
* Defined a GCC attribute __no_ptrauth which discards generating
  ptrauth instructions in a function. This is taken from Kristina's
  earlier kernel pointer authentication support patches [4].
* Dropped a patch to mask cpufeature when not enabled from userspace and
  now only key registers are masked from register list.

Changes since v3 [5]:
* Use pointer authentication only when VHE is present as ARM8.3 implies ARM8.1
  features to be present.
* Added lazy context handling of ptrauth instructions from V2 version again. 
* Added more details in Documentation.

Changes since v2 [1,2]:
* Allow host and guest to have different HCR_EL2 settings and not just constant
  value HCR_HOST_VHE_FLAGS or HCR_HOST_NVHE_FLAGS.
* Optimise the reading of HCR_EL2 in host/guest switch by fetching it once
  during KVM initialisation state and using it later.
* Context switch pointer authentication keys when switching between guest
  and host. Pointer authentication was enabled in a lazy context earlier[2] and
  is removed now to make it simple. However it can be revisited later if there
  is significant performance issue.
* Added a userspace option to choose pointer authentication.
* Based on the userspace option, ptrauth cpufeature will be visible.
* Based on the userspace option, ptrauth key registers will be accessible.
* A small document is added on how to enable pointer authentication from
  userspace KVM API.

Looking for feedback and comments.

Thanks,
Amit

[1]: https://lore.kernel.org/lkml/20171127163806.31435-11-mark.rutland@arm.com/
[2]: https://lore.kernel.org/lkml/20171127163806.31435-10-mark.rutland@arm.com/
[3]: https://lkml.org/lkml/2018/12/7/666
[4]: https://lore.kernel.org/lkml/20181005084754.20950-1-kristina.martsenko@arm.com/
[5]: https://lkml.org/lkml/2018/10/17/594
[6]: https://lkml.org/lkml/2018/12/18/80
[7]: https://lkml.org/lkml/2019/1/28/49
[8]: https://lkml.org/lkml/2019/2/19/190 
[9]: https://lkml.org/lkml/2019/3/19/125 
[10]: https://lkml.org/lkml/2019/4/1/1595

Linux (5.1-rc2 based kvmarm/next repo):

Amit Daniel Kachhap (3):
  KVM: arm64: Add a vcpu flag to control ptrauth for guest
  KVM: arm64: Add userspace flag to enable pointer authentication
  KVM: arm64: Add capability to advertise ptrauth for guest

Mark Rutland (1):
  KVM: arm/arm64: context-switch ptrauth registers

 Documentation/arm64/pointer-authentication.txt |  22 ++++-
 Documentation/virtual/kvm/api.txt              |   8 ++
 arch/arm/include/asm/kvm_host.h                |   1 +
 arch/arm64/Kconfig                             |   5 +-
 arch/arm64/include/asm/kvm_host.h              |  23 +++++-
 arch/arm64/include/asm/kvm_ptrauth_asm.h       | 106 +++++++++++++++++++++++++
 arch/arm64/include/uapi/asm/kvm.h              |   2 +
 arch/arm64/kernel/asm-offsets.c                |   6 ++
 arch/arm64/kvm/guest.c                         |  14 ++++
 arch/arm64/kvm/handle_exit.c                   |  24 ++++--
 arch/arm64/kvm/hyp/entry.S                     |   7 ++
 arch/arm64/kvm/reset.c                         |  29 +++++++
 arch/arm64/kvm/sys_regs.c                      |  46 ++++++++++-
 include/uapi/linux/kvm.h                       |   2 +
 virt/kvm/arm/arm.c                             |   2 +
 15 files changed, 279 insertions(+), 18 deletions(-)
 create mode 100644 arch/arm64/include/asm/kvm_ptrauth_asm.h

kvmtool:

Repo: git.kernel.org/pub/scm/linux/kernel/git/will/kvmtool.git
Amit Daniel Kachhap (1):
  KVM: arm/arm64: Add a vcpu feature for pointer authentication

 arm/aarch32/include/kvm/kvm-cpu-arch.h    |  1 +
 arm/aarch64/include/asm/kvm.h             |  2 ++
 arm/aarch64/include/kvm/kvm-config-arch.h |  6 +++++-
 arm/aarch64/include/kvm/kvm-cpu-arch.h    |  2 ++
 arm/include/arm-common/kvm-config-arch.h  |  2 ++
 arm/kvm-cpu.c                             | 11 +++++++++++
 include/linux/kvm.h                       |  2 ++
 7 files changed, 25 insertions(+), 1 deletion(-)

-- 
2.7.4


^ permalink raw reply	[flat|nested] 77+ messages in thread

* [PATCH v9 0/5] Add ARMv8.3 pointer authentication for kvm guest
@ 2019-04-12  3:20 ` Amit Daniel Kachhap
  0 siblings, 0 replies; 77+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-12  3:20 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Marc Zyngier, Catalin Marinas, Will Deacon, Kristina Martsenko,
	kvmarm, Ramana Radhakrishnan, Amit Daniel Kachhap, Dave Martin,
	linux-kernel

Hi,

This patch series adds pointer authentication support for KVM guest and
is based on top of Linux kvmarm/next repo. The basic patches in this series was
originally posted by Mark Rutland earlier[1,2] and contains some history
of this work.

Extension Overview:
=============================================

The ARMv8.3 pointer authentication extension adds functionality to detect
modification of pointer values, mitigating certain classes of attack such as
stack smashing, and making return oriented programming attacks harder.

The extension introduces the concept of a pointer authentication code (PAC),
which is stored in some upper bits of pointers. Each PAC is derived from the
original pointer, another 64-bit value (e.g. the stack pointer), and a secret
128-bit key.

New instructions are added which can be used to:

* Insert a PAC into a pointer
* Strip a PAC from a pointer
* Authenticate and strip a PAC from a pointer

The detailed description of ARMv8.3 pointer authentication support in
userspace/kernel and can be found in Kristina's generic pointer authentication
patch series[3].

KVM guest work:
==============================================

If pointer authentication is enabled for KVM guests then the new PAC instructions
will not trap to EL2. If not then they may be ignored if in HINT region or trapped
in EL2 as illegal instruction. Since KVM guest vcpu runs as a thread so they have
a key initialized which will be used by PAC. When world switch happens between
host and guest then this key is exchanged.

The current v9 patch series contains review comments and suggestions by
Kristina Martsenko, Dave Martin, James Morse and Mark Zyngier.

Changes since v8 [10]: Major changes are listed below and detail changes are in
		      each patch.
* Added a new vcpu specific arch flag to control enabling/disabling ptrauth.
* Patches restructured as 3 patches related to hcr_el2, mdcr_el2 and
  hyp_symbol_addr cleanup and optimization dropped. They will be posted separately.

Changes since v7 [9]: Major changes are listed below and detail changes are in
		      each patch.
 * Comments and Documentation updated to reflect using address/generic
   features flag together.
 * Dropped the documentation patch and added those details in the relevant
   patches.
 * Rebased the patch series on 2 patches of Dave Martin v6 SVE series. 
 * Small bug fixes.

Changes since v6 [8]: Major changes are listed below.

* Pointer authentication key switch entirely in assembly now.
* isb instruction added after Key switched to host.
* Use __hyp_this_cpu_ptr for both VHE and nVHE mode.
* 2 separate flags for address and generic authentication.
* kvm_arm_vcpu_ptrauth_allowed renamed to has_vcpu_ptrauth.
* kvm_arm_vcpu_ptrauth_reset renamed to kvm_arm_vcpu_ptrauth_setup_lazy.
* Save of host Key registers now done in ptrauth instruction trap.
* A fix to add kern_hyp_va to get correct host_ctxt pointer in nVHE mode.
* Patches re-structured to better reflect ABI change.

Changes since v5 [7]: Major changes are listed below.

* Split hcr_el2 and mdcr_el2 save/restore in two patches.
* Reverted back save/restore of sys-reg keys as done in V4 [5]. There was
  suggestion by James Morse to use ptrauth utilities in a single place
  in arm core and use them from kvm. However this change deviates from the
  existing sys-reg implementations and is not scalable.
* Invoked the key switch C functions from __guest_enter/__guest_exit assembly.
* Host key save is now done inside vcpu_load.
* Reverted back masking of cpufeature ID registers for ptrauth when disabled
  from userpace.
* Reset of ptrauth key registers not done conditionally.
* Code and Documentation cleanup.

Changes since v4 [6]: Several suggestions from James Morse
* Move host registers to be saved/restored inside struct kvm_cpu_context.
* Similar to hcr_el2, save/restore mdcr_el2 register also.
* Added save routines for ptrauth keys in generic arm core and
  use them during KVM context switch.
* Defined a GCC attribute __no_ptrauth which discards generating
  ptrauth instructions in a function. This is taken from Kristina's
  earlier kernel pointer authentication support patches [4].
* Dropped a patch to mask cpufeature when not enabled from userspace and
  now only key registers are masked from register list.

Changes since v3 [5]:
* Use pointer authentication only when VHE is present as ARM8.3 implies ARM8.1
  features to be present.
* Added lazy context handling of ptrauth instructions from V2 version again. 
* Added more details in Documentation.

Changes since v2 [1,2]:
* Allow host and guest to have different HCR_EL2 settings and not just constant
  value HCR_HOST_VHE_FLAGS or HCR_HOST_NVHE_FLAGS.
* Optimise the reading of HCR_EL2 in host/guest switch by fetching it once
  during KVM initialisation state and using it later.
* Context switch pointer authentication keys when switching between guest
  and host. Pointer authentication was enabled in a lazy context earlier[2] and
  is removed now to make it simple. However it can be revisited later if there
  is significant performance issue.
* Added a userspace option to choose pointer authentication.
* Based on the userspace option, ptrauth cpufeature will be visible.
* Based on the userspace option, ptrauth key registers will be accessible.
* A small document is added on how to enable pointer authentication from
  userspace KVM API.

Looking for feedback and comments.

Thanks,
Amit

[1]: https://lore.kernel.org/lkml/20171127163806.31435-11-mark.rutland@arm.com/
[2]: https://lore.kernel.org/lkml/20171127163806.31435-10-mark.rutland@arm.com/
[3]: https://lkml.org/lkml/2018/12/7/666
[4]: https://lore.kernel.org/lkml/20181005084754.20950-1-kristina.martsenko@arm.com/
[5]: https://lkml.org/lkml/2018/10/17/594
[6]: https://lkml.org/lkml/2018/12/18/80
[7]: https://lkml.org/lkml/2019/1/28/49
[8]: https://lkml.org/lkml/2019/2/19/190 
[9]: https://lkml.org/lkml/2019/3/19/125 
[10]: https://lkml.org/lkml/2019/4/1/1595

Linux (5.1-rc2 based kvmarm/next repo):

Amit Daniel Kachhap (3):
  KVM: arm64: Add a vcpu flag to control ptrauth for guest
  KVM: arm64: Add userspace flag to enable pointer authentication
  KVM: arm64: Add capability to advertise ptrauth for guest

Mark Rutland (1):
  KVM: arm/arm64: context-switch ptrauth registers

 Documentation/arm64/pointer-authentication.txt |  22 ++++-
 Documentation/virtual/kvm/api.txt              |   8 ++
 arch/arm/include/asm/kvm_host.h                |   1 +
 arch/arm64/Kconfig                             |   5 +-
 arch/arm64/include/asm/kvm_host.h              |  23 +++++-
 arch/arm64/include/asm/kvm_ptrauth_asm.h       | 106 +++++++++++++++++++++++++
 arch/arm64/include/uapi/asm/kvm.h              |   2 +
 arch/arm64/kernel/asm-offsets.c                |   6 ++
 arch/arm64/kvm/guest.c                         |  14 ++++
 arch/arm64/kvm/handle_exit.c                   |  24 ++++--
 arch/arm64/kvm/hyp/entry.S                     |   7 ++
 arch/arm64/kvm/reset.c                         |  29 +++++++
 arch/arm64/kvm/sys_regs.c                      |  46 ++++++++++-
 include/uapi/linux/kvm.h                       |   2 +
 virt/kvm/arm/arm.c                             |   2 +
 15 files changed, 279 insertions(+), 18 deletions(-)
 create mode 100644 arch/arm64/include/asm/kvm_ptrauth_asm.h

kvmtool:

Repo: git.kernel.org/pub/scm/linux/kernel/git/will/kvmtool.git
Amit Daniel Kachhap (1):
  KVM: arm/arm64: Add a vcpu feature for pointer authentication

 arm/aarch32/include/kvm/kvm-cpu-arch.h    |  1 +
 arm/aarch64/include/asm/kvm.h             |  2 ++
 arm/aarch64/include/kvm/kvm-config-arch.h |  6 +++++-
 arm/aarch64/include/kvm/kvm-cpu-arch.h    |  2 ++
 arm/include/arm-common/kvm-config-arch.h  |  2 ++
 arm/kvm-cpu.c                             | 11 +++++++++++
 include/linux/kvm.h                       |  2 ++
 7 files changed, 25 insertions(+), 1 deletion(-)

-- 
2.7.4

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 77+ messages in thread

* [PATCH v9 0/5] Add ARMv8.3 pointer authentication for kvm guest
@ 2019-04-12  3:20 ` Amit Daniel Kachhap
  0 siblings, 0 replies; 77+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-12  3:20 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Andrew Jones, Julien Thierry, Marc Zyngier,
	Catalin Marinas, Will Deacon, Christoffer Dall,
	Kristina Martsenko, kvmarm, James Morse, Ramana Radhakrishnan,
	Amit Daniel Kachhap, Dave Martin, linux-kernel

Hi,

This patch series adds pointer authentication support for KVM guest and
is based on top of Linux kvmarm/next repo. The basic patches in this series was
originally posted by Mark Rutland earlier[1,2] and contains some history
of this work.

Extension Overview:
=============================================

The ARMv8.3 pointer authentication extension adds functionality to detect
modification of pointer values, mitigating certain classes of attack such as
stack smashing, and making return oriented programming attacks harder.

The extension introduces the concept of a pointer authentication code (PAC),
which is stored in some upper bits of pointers. Each PAC is derived from the
original pointer, another 64-bit value (e.g. the stack pointer), and a secret
128-bit key.

New instructions are added which can be used to:

* Insert a PAC into a pointer
* Strip a PAC from a pointer
* Authenticate and strip a PAC from a pointer

The detailed description of ARMv8.3 pointer authentication support in
userspace/kernel and can be found in Kristina's generic pointer authentication
patch series[3].

KVM guest work:
==============================================

If pointer authentication is enabled for KVM guests then the new PAC instructions
will not trap to EL2. If not then they may be ignored if in HINT region or trapped
in EL2 as illegal instruction. Since KVM guest vcpu runs as a thread so they have
a key initialized which will be used by PAC. When world switch happens between
host and guest then this key is exchanged.

The current v9 patch series contains review comments and suggestions by
Kristina Martsenko, Dave Martin, James Morse and Mark Zyngier.

Changes since v8 [10]: Major changes are listed below and detail changes are in
		      each patch.
* Added a new vcpu specific arch flag to control enabling/disabling ptrauth.
* Patches restructured as 3 patches related to hcr_el2, mdcr_el2 and
  hyp_symbol_addr cleanup and optimization dropped. They will be posted separately.

Changes since v7 [9]: Major changes are listed below and detail changes are in
		      each patch.
 * Comments and Documentation updated to reflect using address/generic
   features flag together.
 * Dropped the documentation patch and added those details in the relevant
   patches.
 * Rebased the patch series on 2 patches of Dave Martin v6 SVE series. 
 * Small bug fixes.

Changes since v6 [8]: Major changes are listed below.

* Pointer authentication key switch entirely in assembly now.
* isb instruction added after Key switched to host.
* Use __hyp_this_cpu_ptr for both VHE and nVHE mode.
* 2 separate flags for address and generic authentication.
* kvm_arm_vcpu_ptrauth_allowed renamed to has_vcpu_ptrauth.
* kvm_arm_vcpu_ptrauth_reset renamed to kvm_arm_vcpu_ptrauth_setup_lazy.
* Save of host Key registers now done in ptrauth instruction trap.
* A fix to add kern_hyp_va to get correct host_ctxt pointer in nVHE mode.
* Patches re-structured to better reflect ABI change.

Changes since v5 [7]: Major changes are listed below.

* Split hcr_el2 and mdcr_el2 save/restore in two patches.
* Reverted back save/restore of sys-reg keys as done in V4 [5]. There was
  suggestion by James Morse to use ptrauth utilities in a single place
  in arm core and use them from kvm. However this change deviates from the
  existing sys-reg implementations and is not scalable.
* Invoked the key switch C functions from __guest_enter/__guest_exit assembly.
* Host key save is now done inside vcpu_load.
* Reverted back masking of cpufeature ID registers for ptrauth when disabled
  from userpace.
* Reset of ptrauth key registers not done conditionally.
* Code and Documentation cleanup.

Changes since v4 [6]: Several suggestions from James Morse
* Move host registers to be saved/restored inside struct kvm_cpu_context.
* Similar to hcr_el2, save/restore mdcr_el2 register also.
* Added save routines for ptrauth keys in generic arm core and
  use them during KVM context switch.
* Defined a GCC attribute __no_ptrauth which discards generating
  ptrauth instructions in a function. This is taken from Kristina's
  earlier kernel pointer authentication support patches [4].
* Dropped a patch to mask cpufeature when not enabled from userspace and
  now only key registers are masked from register list.

Changes since v3 [5]:
* Use pointer authentication only when VHE is present as ARM8.3 implies ARM8.1
  features to be present.
* Added lazy context handling of ptrauth instructions from V2 version again. 
* Added more details in Documentation.

Changes since v2 [1,2]:
* Allow host and guest to have different HCR_EL2 settings and not just constant
  value HCR_HOST_VHE_FLAGS or HCR_HOST_NVHE_FLAGS.
* Optimise the reading of HCR_EL2 in host/guest switch by fetching it once
  during KVM initialisation state and using it later.
* Context switch pointer authentication keys when switching between guest
  and host. Pointer authentication was enabled in a lazy context earlier[2] and
  is removed now to make it simple. However it can be revisited later if there
  is significant performance issue.
* Added a userspace option to choose pointer authentication.
* Based on the userspace option, ptrauth cpufeature will be visible.
* Based on the userspace option, ptrauth key registers will be accessible.
* A small document is added on how to enable pointer authentication from
  userspace KVM API.

Looking for feedback and comments.

Thanks,
Amit

[1]: https://lore.kernel.org/lkml/20171127163806.31435-11-mark.rutland@arm.com/
[2]: https://lore.kernel.org/lkml/20171127163806.31435-10-mark.rutland@arm.com/
[3]: https://lkml.org/lkml/2018/12/7/666
[4]: https://lore.kernel.org/lkml/20181005084754.20950-1-kristina.martsenko@arm.com/
[5]: https://lkml.org/lkml/2018/10/17/594
[6]: https://lkml.org/lkml/2018/12/18/80
[7]: https://lkml.org/lkml/2019/1/28/49
[8]: https://lkml.org/lkml/2019/2/19/190 
[9]: https://lkml.org/lkml/2019/3/19/125 
[10]: https://lkml.org/lkml/2019/4/1/1595

Linux (5.1-rc2 based kvmarm/next repo):

Amit Daniel Kachhap (3):
  KVM: arm64: Add a vcpu flag to control ptrauth for guest
  KVM: arm64: Add userspace flag to enable pointer authentication
  KVM: arm64: Add capability to advertise ptrauth for guest

Mark Rutland (1):
  KVM: arm/arm64: context-switch ptrauth registers

 Documentation/arm64/pointer-authentication.txt |  22 ++++-
 Documentation/virtual/kvm/api.txt              |   8 ++
 arch/arm/include/asm/kvm_host.h                |   1 +
 arch/arm64/Kconfig                             |   5 +-
 arch/arm64/include/asm/kvm_host.h              |  23 +++++-
 arch/arm64/include/asm/kvm_ptrauth_asm.h       | 106 +++++++++++++++++++++++++
 arch/arm64/include/uapi/asm/kvm.h              |   2 +
 arch/arm64/kernel/asm-offsets.c                |   6 ++
 arch/arm64/kvm/guest.c                         |  14 ++++
 arch/arm64/kvm/handle_exit.c                   |  24 ++++--
 arch/arm64/kvm/hyp/entry.S                     |   7 ++
 arch/arm64/kvm/reset.c                         |  29 +++++++
 arch/arm64/kvm/sys_regs.c                      |  46 ++++++++++-
 include/uapi/linux/kvm.h                       |   2 +
 virt/kvm/arm/arm.c                             |   2 +
 15 files changed, 279 insertions(+), 18 deletions(-)
 create mode 100644 arch/arm64/include/asm/kvm_ptrauth_asm.h

kvmtool:

Repo: git.kernel.org/pub/scm/linux/kernel/git/will/kvmtool.git
Amit Daniel Kachhap (1):
  KVM: arm/arm64: Add a vcpu feature for pointer authentication

 arm/aarch32/include/kvm/kvm-cpu-arch.h    |  1 +
 arm/aarch64/include/asm/kvm.h             |  2 ++
 arm/aarch64/include/kvm/kvm-config-arch.h |  6 +++++-
 arm/aarch64/include/kvm/kvm-cpu-arch.h    |  2 ++
 arm/include/arm-common/kvm-config-arch.h  |  2 ++
 arm/kvm-cpu.c                             | 11 +++++++++++
 include/linux/kvm.h                       |  2 ++
 7 files changed, 25 insertions(+), 1 deletion(-)

-- 
2.7.4


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 77+ messages in thread

* [PATCH v9 1/5] KVM: arm64: Add a vcpu flag to control ptrauth for guest
@ 2019-04-12  3:20   ` Amit Daniel Kachhap
  0 siblings, 0 replies; 77+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-12  3:20 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	Kristina Martsenko, linux-kernel, Amit Daniel Kachhap,
	Mark Rutland, James Morse, Julien Thierry

A per vcpu flag is added to check if pointer authentication is
enabled for the vcpu or not. This flag may be enabled according to
the necessary user policies and host capabilities.

This patch also adds a helper to check the flag.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---

Changes since v8:
* Added a new per vcpu flag which will store Pointer Authentication enable
  status instead of checking them again. [Dave Martin]

 arch/arm64/include/asm/kvm_host.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 9d57cf8..31dbc7c 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -355,10 +355,14 @@ struct kvm_vcpu_arch {
 #define KVM_ARM64_HOST_SVE_ENABLED	(1 << 4) /* SVE enabled for EL0 */
 #define KVM_ARM64_GUEST_HAS_SVE		(1 << 5) /* SVE exposed to guest */
 #define KVM_ARM64_VCPU_SVE_FINALIZED	(1 << 6) /* SVE config completed */
+#define KVM_ARM64_GUEST_HAS_PTRAUTH	(1 << 7) /* PTRAUTH exposed to guest */
 
 #define vcpu_has_sve(vcpu) (system_supports_sve() && \
 			    ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_SVE))
 
+#define vcpu_has_ptrauth(vcpu)	\
+			((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH)
+
 #define vcpu_gp_regs(v)		(&(v)->arch.ctxt.gp_regs)
 
 /*
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 77+ messages in thread

* [PATCH v9 1/5] KVM: arm64: Add a vcpu flag to control ptrauth for guest
@ 2019-04-12  3:20   ` Amit Daniel Kachhap
  0 siblings, 0 replies; 77+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-12  3:20 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Marc Zyngier, Catalin Marinas, Will Deacon, Kristina Martsenko,
	kvmarm, Ramana Radhakrishnan, Amit Daniel Kachhap, Dave Martin,
	linux-kernel

A per vcpu flag is added to check if pointer authentication is
enabled for the vcpu or not. This flag may be enabled according to
the necessary user policies and host capabilities.

This patch also adds a helper to check the flag.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---

Changes since v8:
* Added a new per vcpu flag which will store Pointer Authentication enable
  status instead of checking them again. [Dave Martin]

 arch/arm64/include/asm/kvm_host.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 9d57cf8..31dbc7c 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -355,10 +355,14 @@ struct kvm_vcpu_arch {
 #define KVM_ARM64_HOST_SVE_ENABLED	(1 << 4) /* SVE enabled for EL0 */
 #define KVM_ARM64_GUEST_HAS_SVE		(1 << 5) /* SVE exposed to guest */
 #define KVM_ARM64_VCPU_SVE_FINALIZED	(1 << 6) /* SVE config completed */
+#define KVM_ARM64_GUEST_HAS_PTRAUTH	(1 << 7) /* PTRAUTH exposed to guest */
 
 #define vcpu_has_sve(vcpu) (system_supports_sve() && \
 			    ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_SVE))
 
+#define vcpu_has_ptrauth(vcpu)	\
+			((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH)
+
 #define vcpu_gp_regs(v)		(&(v)->arch.ctxt.gp_regs)
 
 /*
-- 
2.7.4

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 77+ messages in thread

* [PATCH v9 1/5] KVM: arm64: Add a vcpu flag to control ptrauth for guest
@ 2019-04-12  3:20   ` Amit Daniel Kachhap
  0 siblings, 0 replies; 77+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-12  3:20 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Andrew Jones, Julien Thierry, Marc Zyngier,
	Catalin Marinas, Will Deacon, Christoffer Dall,
	Kristina Martsenko, kvmarm, James Morse, Ramana Radhakrishnan,
	Amit Daniel Kachhap, Dave Martin, linux-kernel

A per vcpu flag is added to check if pointer authentication is
enabled for the vcpu or not. This flag may be enabled according to
the necessary user policies and host capabilities.

This patch also adds a helper to check the flag.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---

Changes since v8:
* Added a new per vcpu flag which will store Pointer Authentication enable
  status instead of checking them again. [Dave Martin]

 arch/arm64/include/asm/kvm_host.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 9d57cf8..31dbc7c 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -355,10 +355,14 @@ struct kvm_vcpu_arch {
 #define KVM_ARM64_HOST_SVE_ENABLED	(1 << 4) /* SVE enabled for EL0 */
 #define KVM_ARM64_GUEST_HAS_SVE		(1 << 5) /* SVE exposed to guest */
 #define KVM_ARM64_VCPU_SVE_FINALIZED	(1 << 6) /* SVE config completed */
+#define KVM_ARM64_GUEST_HAS_PTRAUTH	(1 << 7) /* PTRAUTH exposed to guest */
 
 #define vcpu_has_sve(vcpu) (system_supports_sve() && \
 			    ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_SVE))
 
+#define vcpu_has_ptrauth(vcpu)	\
+			((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH)
+
 #define vcpu_gp_regs(v)		(&(v)->arch.ctxt.gp_regs)
 
 /*
-- 
2.7.4


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 77+ messages in thread

* [PATCH v9 2/5] KVM: arm/arm64: context-switch ptrauth registers
@ 2019-04-12  3:20   ` Amit Daniel Kachhap
  0 siblings, 0 replies; 77+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-12  3:20 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	Kristina Martsenko, linux-kernel, Amit Daniel Kachhap,
	Mark Rutland, James Morse, Julien Thierry

From: Mark Rutland <mark.rutland@arm.com>

When pointer authentication is supported, a guest may wish to use it.
This patch adds the necessary KVM infrastructure for this to work, with
a semi-lazy context switch of the pointer auth state.

Pointer authentication feature is only enabled when VHE is built
in the kernel and present in the CPU implementation so only VHE code
paths are modified.

When we schedule a vcpu, we disable guest usage of pointer
authentication instructions and accesses to the keys. While these are
disabled, we avoid context-switching the keys. When we trap the guest
trying to use pointer authentication functionality, we change to eagerly
context-switching the keys, and enable the feature. The next time the
vcpu is scheduled out/in, we start again. However the host key save is
optimized and implemented inside ptrauth instruction/register access
trap.

Pointer authentication consists of address authentication and generic
authentication, and CPUs in a system might have varied support for
either. Where support for either feature is not uniform, it is hidden
from guests via ID register emulation, as a result of the cpufeature
framework in the host.

Unfortunately, address authentication and generic authentication cannot
be trapped separately, as the architecture provides a single EL2 trap
covering both. If we wish to expose one without the other, we cannot
prevent a (badly-written) guest from intermittently using a feature
which is not uniformly supported (when scheduled on a physical CPU which
supports the relevant feature). Hence, this patch expects both type of
authentication to be present in a cpu.

This switch of key is done from guest enter/exit assembly as preparation
for the upcoming in-kernel pointer authentication support. Hence, these
key switching routines are not implemented in C code as they may cause
pointer authentication key signing error in some situations.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
[Only VHE, key switch in full assembly, vcpu_has_ptrauth checks
, save host key in ptrauth exception trap]
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---

Changes since v9:
* Used high order number for branching in assembly macros. [Kristina Martsenko]
* Taken care of different offset for hcr_el2 now.

 arch/arm/include/asm/kvm_host.h          |   1 +
 arch/arm64/Kconfig                       |   5 +-
 arch/arm64/include/asm/kvm_host.h        |  17 +++++
 arch/arm64/include/asm/kvm_ptrauth_asm.h | 106 +++++++++++++++++++++++++++++++
 arch/arm64/kernel/asm-offsets.c          |   6 ++
 arch/arm64/kvm/guest.c                   |  14 ++++
 arch/arm64/kvm/handle_exit.c             |  24 ++++---
 arch/arm64/kvm/hyp/entry.S               |   7 ++
 arch/arm64/kvm/sys_regs.c                |  46 +++++++++++++-
 virt/kvm/arm/arm.c                       |   2 +
 10 files changed, 215 insertions(+), 13 deletions(-)
 create mode 100644 arch/arm64/include/asm/kvm_ptrauth_asm.h

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index e80cfc1..7a5c7f8 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -363,6 +363,7 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
 static inline void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu) {}
+static inline void kvm_arm_vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu) {}
 
 static inline void kvm_arm_vhe_guest_enter(void) {}
 static inline void kvm_arm_vhe_guest_exit(void) {}
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 7e34b9e..9e8506e 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1301,8 +1301,9 @@ config ARM64_PTR_AUTH
 	  context-switched along with the process.
 
 	  The feature is detected at runtime. If the feature is not present in
-	  hardware it will not be advertised to userspace nor will it be
-	  enabled.
+	  hardware it will not be advertised to userspace/KVM guest nor will it
+	  be enabled. However, KVM guest also require CONFIG_ARM64_VHE=y to use
+	  this feature.
 
 endmenu
 
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 31dbc7c..a585d82 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -161,6 +161,18 @@ enum vcpu_sysreg {
 	PMSWINC_EL0,	/* Software Increment Register */
 	PMUSERENR_EL0,	/* User Enable Register */
 
+	/* Pointer Authentication Registers in a strict increasing order. */
+	APIAKEYLO_EL1,
+	APIAKEYHI_EL1 = APIAKEYLO_EL1 + 1,
+	APIBKEYLO_EL1 = APIAKEYLO_EL1 + 2,
+	APIBKEYHI_EL1 = APIAKEYLO_EL1 + 3,
+	APDAKEYLO_EL1 = APIAKEYLO_EL1 + 4,
+	APDAKEYHI_EL1 = APIAKEYLO_EL1 + 5,
+	APDBKEYLO_EL1 = APIAKEYLO_EL1 + 6,
+	APDBKEYHI_EL1 = APIAKEYLO_EL1 + 7,
+	APGAKEYLO_EL1 = APIAKEYLO_EL1 + 8,
+	APGAKEYHI_EL1 = APIAKEYLO_EL1 + 9,
+
 	/* 32bit specific registers. Keep them at the end of the range */
 	DACR32_EL2,	/* Domain Access Control Register */
 	IFSR32_EL2,	/* Instruction Fault Status Register */
@@ -529,6 +541,11 @@ static inline bool kvm_arch_requires_vhe(void)
 	return false;
 }
 
+void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu);
+void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu);
+void kvm_arm_vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu);
+void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu);
+
 static inline void kvm_arch_hardware_unsetup(void) {}
 static inline void kvm_arch_sync_events(struct kvm *kvm) {}
 static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
diff --git a/arch/arm64/include/asm/kvm_ptrauth_asm.h b/arch/arm64/include/asm/kvm_ptrauth_asm.h
new file mode 100644
index 0000000..8142521
--- /dev/null
+++ b/arch/arm64/include/asm/kvm_ptrauth_asm.h
@@ -0,0 +1,106 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* arch/arm64/include/asm/kvm_ptrauth_asm.h: Guest/host ptrauth save/restore
+ * Copyright 2019 Arm Limited
+ * Author: Mark Rutland <mark.rutland@arm.com>
+ *         Amit Daniel Kachhap <amit.kachhap@arm.com>
+ */
+
+#ifndef __ASM_KVM_PTRAUTH_ASM_H
+#define __ASM_KVM_PTRAUTH_ASM_H
+
+#ifndef __ASSEMBLY__
+
+#define __ptrauth_save_key(regs, key)						\
+({										\
+	regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1);	\
+	regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1);	\
+})
+
+#define __ptrauth_save_state(ctxt)						\
+({										\
+	__ptrauth_save_key(ctxt->sys_regs, APIA);				\
+	__ptrauth_save_key(ctxt->sys_regs, APIB);				\
+	__ptrauth_save_key(ctxt->sys_regs, APDA);				\
+	__ptrauth_save_key(ctxt->sys_regs, APDB);				\
+	__ptrauth_save_key(ctxt->sys_regs, APGA);				\
+})
+
+#else /* __ASSEMBLY__ */
+
+#include <asm/sysreg.h>
+
+#ifdef	CONFIG_ARM64_PTR_AUTH
+
+#define PTRAUTH_REG_OFFSET(x)	(x - CPU_APIAKEYLO_EL1)
+
+/*
+ * CPU_AP*_EL1 values exceed immediate offset range (512) for stp instruction
+ * so below macros takes CPU_APIAKEYLO_EL1 as base and calculates the offset of
+ * the keys from this base to avoid an extra add instruction. These macros
+ * assumes the keys offsets are aligned in a specific increasing order.
+ */
+.macro	ptrauth_save_state base, reg1, reg2
+	mrs_s	\reg1, SYS_APIAKEYLO_EL1
+	mrs_s	\reg2, SYS_APIAKEYHI_EL1
+	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIAKEYLO_EL1)]
+	mrs_s	\reg1, SYS_APIBKEYLO_EL1
+	mrs_s	\reg2, SYS_APIBKEYHI_EL1
+	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIBKEYLO_EL1)]
+	mrs_s	\reg1, SYS_APDAKEYLO_EL1
+	mrs_s	\reg2, SYS_APDAKEYHI_EL1
+	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDAKEYLO_EL1)]
+	mrs_s	\reg1, SYS_APDBKEYLO_EL1
+	mrs_s	\reg2, SYS_APDBKEYHI_EL1
+	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDBKEYLO_EL1)]
+	mrs_s	\reg1, SYS_APGAKEYLO_EL1
+	mrs_s	\reg2, SYS_APGAKEYHI_EL1
+	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APGAKEYLO_EL1)]
+.endm
+
+.macro	ptrauth_restore_state base, reg1, reg2
+	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIAKEYLO_EL1)]
+	msr_s	SYS_APIAKEYLO_EL1, \reg1
+	msr_s	SYS_APIAKEYHI_EL1, \reg2
+	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIBKEYLO_EL1)]
+	msr_s	SYS_APIBKEYLO_EL1, \reg1
+	msr_s	SYS_APIBKEYHI_EL1, \reg2
+	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDAKEYLO_EL1)]
+	msr_s	SYS_APDAKEYLO_EL1, \reg1
+	msr_s	SYS_APDAKEYHI_EL1, \reg2
+	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDBKEYLO_EL1)]
+	msr_s	SYS_APDBKEYLO_EL1, \reg1
+	msr_s	SYS_APDBKEYHI_EL1, \reg2
+	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APGAKEYLO_EL1)]
+	msr_s	SYS_APGAKEYLO_EL1, \reg1
+	msr_s	SYS_APGAKEYHI_EL1, \reg2
+.endm
+
+.macro ptrauth_switch_to_guest g_ctxt, reg1, reg2, reg3
+	ldr	\reg1, [\g_ctxt, #(VCPU_HCR_EL2 - VCPU_CONTEXT)]
+	and	\reg1, \reg1, #(HCR_API | HCR_APK)
+	cbz	\reg1, 1000f
+	add	\reg1, \g_ctxt, #CPU_APIAKEYLO_EL1
+	ptrauth_restore_state	\reg1, \reg2, \reg3
+1000:
+.endm
+
+.macro ptrauth_switch_to_host g_ctxt, h_ctxt, reg1, reg2, reg3
+	ldr	\reg1, [\g_ctxt, #(VCPU_HCR_EL2 - VCPU_CONTEXT)]
+	and	\reg1, \reg1, #(HCR_API | HCR_APK)
+	cbz	\reg1, 1001f
+	add	\reg1, \g_ctxt, #CPU_APIAKEYLO_EL1
+	ptrauth_save_state	\reg1, \reg2, \reg3
+	add	\reg1, \h_ctxt, #CPU_APIAKEYLO_EL1
+	ptrauth_restore_state	\reg1, \reg2, \reg3
+	isb
+1001:
+.endm
+
+#else /* !CONFIG_ARM64_PTR_AUTH */
+.macro ptrauth_switch_to_guest g_ctxt, reg1, reg2, reg3
+.endm
+.macro ptrauth_switch_to_host g_ctxt, h_ctxt, reg1, reg2, reg3
+.endm
+#endif /* CONFIG_ARM64_PTR_AUTH */
+#endif /* __ASSEMBLY__ */
+#endif /* __ASM_KVM_PTRAUTH_ASM_H */
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 7f40dcb..8178330 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -125,7 +125,13 @@ int main(void)
   DEFINE(VCPU_CONTEXT,		offsetof(struct kvm_vcpu, arch.ctxt));
   DEFINE(VCPU_FAULT_DISR,	offsetof(struct kvm_vcpu, arch.fault.disr_el1));
   DEFINE(VCPU_WORKAROUND_FLAGS,	offsetof(struct kvm_vcpu, arch.workaround_flags));
+  DEFINE(VCPU_HCR_EL2,		offsetof(struct kvm_vcpu, arch.hcr_el2));
   DEFINE(CPU_GP_REGS,		offsetof(struct kvm_cpu_context, gp_regs));
+  DEFINE(CPU_APIAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APIAKEYLO_EL1]));
+  DEFINE(CPU_APIBKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APIBKEYLO_EL1]));
+  DEFINE(CPU_APDAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APDAKEYLO_EL1]));
+  DEFINE(CPU_APDBKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APDBKEYLO_EL1]));
+  DEFINE(CPU_APGAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APGAKEYLO_EL1]));
   DEFINE(CPU_USER_PT_REGS,	offsetof(struct kvm_regs, regs));
   DEFINE(HOST_CONTEXT_VCPU,	offsetof(struct kvm_cpu_context, __hyp_running_vcpu));
 #endif
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 4f7b26b..e07f763 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -878,3 +878,17 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
 
 	return ret;
 }
+
+/**
+ * kvm_arm_vcpu_ptrauth_setup_lazy - setup lazy ptrauth for vcpu schedule
+ *
+ * @vcpu: The VCPU pointer
+ *
+ * This function may be used to disable ptrauth and use it in a lazy context
+ * via traps.
+ */
+void kvm_arm_vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu)
+{
+	if (vcpu_has_ptrauth(vcpu))
+		kvm_arm_vcpu_ptrauth_disable(vcpu);
+}
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 0b79834..5838ff9 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -30,6 +30,7 @@
 #include <asm/kvm_coproc.h>
 #include <asm/kvm_emulate.h>
 #include <asm/kvm_mmu.h>
+#include <asm/kvm_ptrauth_asm.h>
 #include <asm/debug-monitors.h>
 #include <asm/traps.h>
 
@@ -174,19 +175,26 @@ static int handle_sve(struct kvm_vcpu *vcpu, struct kvm_run *run)
 }
 
 /*
+ * Handle the guest trying to use a ptrauth instruction, or trying to access a
+ * ptrauth register.
+ */
+void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu)
+{
+	if (vcpu_has_ptrauth(vcpu)) {
+		kvm_arm_vcpu_ptrauth_enable(vcpu);
+		__ptrauth_save_state(vcpu->arch.host_cpu_context);
+	} else {
+		kvm_inject_undefined(vcpu);
+	}
+}
+
+/*
  * Guest usage of a ptrauth instruction (which the guest EL1 did not turn into
  * a NOP).
  */
 static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu, struct kvm_run *run)
 {
-	/*
-	 * We don't currently support ptrauth in a guest, and we mask the ID
-	 * registers to prevent well-behaved guests from trying to make use of
-	 * it.
-	 *
-	 * Inject an UNDEF, as if the feature really isn't present.
-	 */
-	kvm_inject_undefined(vcpu);
+	kvm_arm_vcpu_ptrauth_trap(vcpu);
 	return 1;
 }
 
diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
index 675fdc1..3a70213 100644
--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -24,6 +24,7 @@
 #include <asm/kvm_arm.h>
 #include <asm/kvm_asm.h>
 #include <asm/kvm_mmu.h>
+#include <asm/kvm_ptrauth_asm.h>
 
 #define CPU_GP_REG_OFFSET(x)	(CPU_GP_REGS + x)
 #define CPU_XREG_OFFSET(x)	CPU_GP_REG_OFFSET(CPU_USER_PT_REGS + 8*x)
@@ -64,6 +65,9 @@ ENTRY(__guest_enter)
 
 	add	x18, x0, #VCPU_CONTEXT
 
+	// Macro ptrauth_switch_to_guest(guest cxt, tmp1, tmp2, tmp3).
+	ptrauth_switch_to_guest x18, x0, x1, x2
+
 	// Restore guest regs x0-x17
 	ldp	x0, x1,   [x18, #CPU_XREG_OFFSET(0)]
 	ldp	x2, x3,   [x18, #CPU_XREG_OFFSET(2)]
@@ -118,6 +122,9 @@ ENTRY(__guest_exit)
 
 	get_host_ctxt	x2, x3
 
+	// Macro ptrauth_switch_to_host(guest cxt, host cxt, tmp1, tmp2, tmp3).
+	ptrauth_switch_to_host x1, x2, x3, x4, x5
+
 	// Now restore the host regs
 	restore_callee_saved_regs x2
 
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 09e9b06..4a98b5c 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1007,6 +1007,38 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	{ SYS_DESC(SYS_PMEVTYPERn_EL0(n)),					\
 	  access_pmu_evtyper, reset_unknown, (PMEVTYPER0_EL0 + n), }
 
+void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu)
+{
+	vcpu->arch.hcr_el2 |= (HCR_API | HCR_APK);
+}
+
+void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu)
+{
+	vcpu->arch.hcr_el2 &= ~(HCR_API | HCR_APK);
+}
+
+static bool trap_ptrauth(struct kvm_vcpu *vcpu,
+			 struct sys_reg_params *p,
+			 const struct sys_reg_desc *rd)
+{
+	kvm_arm_vcpu_ptrauth_trap(vcpu);
+	return false;
+}
+
+static unsigned int ptrauth_visibility(const struct kvm_vcpu *vcpu,
+			const struct sys_reg_desc *rd)
+{
+	return vcpu_has_ptrauth(vcpu) ? 0 : REG_HIDDEN_USER | REG_HIDDEN_GUEST;
+}
+
+#define __PTRAUTH_KEY(k)						\
+	{ SYS_DESC(SYS_## k), trap_ptrauth, reset_unknown, k,		\
+	.visibility = ptrauth_visibility}
+
+#define PTRAUTH_KEY(k)							\
+	__PTRAUTH_KEY(k ## KEYLO_EL1),					\
+	__PTRAUTH_KEY(k ## KEYHI_EL1)
+
 static bool access_arch_timer(struct kvm_vcpu *vcpu,
 			      struct sys_reg_params *p,
 			      const struct sys_reg_desc *r)
@@ -1058,9 +1090,11 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu,
 					 (0xfUL << ID_AA64ISAR1_API_SHIFT) |
 					 (0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
 					 (0xfUL << ID_AA64ISAR1_GPI_SHIFT);
-		if (val & ptrauth_mask)
-			kvm_debug("ptrauth unsupported for guests, suppressing\n");
-		val &= ~ptrauth_mask;
+		if (!vcpu_has_ptrauth(vcpu)) {
+			if (val & ptrauth_mask)
+				kvm_debug("ptrauth unsupported for guests, suppressing\n");
+			val &= ~ptrauth_mask;
+		}
 	}
 
 	return val;
@@ -1460,6 +1494,12 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	{ SYS_DESC(SYS_TTBR1_EL1), access_vm_reg, reset_unknown, TTBR1_EL1 },
 	{ SYS_DESC(SYS_TCR_EL1), access_vm_reg, reset_val, TCR_EL1, 0 },
 
+	PTRAUTH_KEY(APIA),
+	PTRAUTH_KEY(APIB),
+	PTRAUTH_KEY(APDA),
+	PTRAUTH_KEY(APDB),
+	PTRAUTH_KEY(APGA),
+
 	{ SYS_DESC(SYS_AFSR0_EL1), access_vm_reg, reset_unknown, AFSR0_EL1 },
 	{ SYS_DESC(SYS_AFSR1_EL1), access_vm_reg, reset_unknown, AFSR1_EL1 },
 	{ SYS_DESC(SYS_ESR_EL1), access_vm_reg, reset_unknown, ESR_EL1 },
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 9edbf0f..8d1b73c 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -385,6 +385,8 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 		vcpu_clear_wfe_traps(vcpu);
 	else
 		vcpu_set_wfe_traps(vcpu);
+
+	kvm_arm_vcpu_ptrauth_setup_lazy(vcpu);
 }
 
 void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 77+ messages in thread

* [PATCH v9 2/5] KVM: arm/arm64: context-switch ptrauth registers
@ 2019-04-12  3:20   ` Amit Daniel Kachhap
  0 siblings, 0 replies; 77+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-12  3:20 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Marc Zyngier, Catalin Marinas, Will Deacon, Kristina Martsenko,
	kvmarm, Ramana Radhakrishnan, Amit Daniel Kachhap, Dave Martin,
	linux-kernel

From: Mark Rutland <mark.rutland@arm.com>

When pointer authentication is supported, a guest may wish to use it.
This patch adds the necessary KVM infrastructure for this to work, with
a semi-lazy context switch of the pointer auth state.

Pointer authentication feature is only enabled when VHE is built
in the kernel and present in the CPU implementation so only VHE code
paths are modified.

When we schedule a vcpu, we disable guest usage of pointer
authentication instructions and accesses to the keys. While these are
disabled, we avoid context-switching the keys. When we trap the guest
trying to use pointer authentication functionality, we change to eagerly
context-switching the keys, and enable the feature. The next time the
vcpu is scheduled out/in, we start again. However the host key save is
optimized and implemented inside ptrauth instruction/register access
trap.

Pointer authentication consists of address authentication and generic
authentication, and CPUs in a system might have varied support for
either. Where support for either feature is not uniform, it is hidden
from guests via ID register emulation, as a result of the cpufeature
framework in the host.

Unfortunately, address authentication and generic authentication cannot
be trapped separately, as the architecture provides a single EL2 trap
covering both. If we wish to expose one without the other, we cannot
prevent a (badly-written) guest from intermittently using a feature
which is not uniformly supported (when scheduled on a physical CPU which
supports the relevant feature). Hence, this patch expects both type of
authentication to be present in a cpu.

This switch of key is done from guest enter/exit assembly as preparation
for the upcoming in-kernel pointer authentication support. Hence, these
key switching routines are not implemented in C code as they may cause
pointer authentication key signing error in some situations.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
[Only VHE, key switch in full assembly, vcpu_has_ptrauth checks
, save host key in ptrauth exception trap]
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---

Changes since v9:
* Used high order number for branching in assembly macros. [Kristina Martsenko]
* Taken care of different offset for hcr_el2 now.

 arch/arm/include/asm/kvm_host.h          |   1 +
 arch/arm64/Kconfig                       |   5 +-
 arch/arm64/include/asm/kvm_host.h        |  17 +++++
 arch/arm64/include/asm/kvm_ptrauth_asm.h | 106 +++++++++++++++++++++++++++++++
 arch/arm64/kernel/asm-offsets.c          |   6 ++
 arch/arm64/kvm/guest.c                   |  14 ++++
 arch/arm64/kvm/handle_exit.c             |  24 ++++---
 arch/arm64/kvm/hyp/entry.S               |   7 ++
 arch/arm64/kvm/sys_regs.c                |  46 +++++++++++++-
 virt/kvm/arm/arm.c                       |   2 +
 10 files changed, 215 insertions(+), 13 deletions(-)
 create mode 100644 arch/arm64/include/asm/kvm_ptrauth_asm.h

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index e80cfc1..7a5c7f8 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -363,6 +363,7 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
 static inline void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu) {}
+static inline void kvm_arm_vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu) {}
 
 static inline void kvm_arm_vhe_guest_enter(void) {}
 static inline void kvm_arm_vhe_guest_exit(void) {}
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 7e34b9e..9e8506e 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1301,8 +1301,9 @@ config ARM64_PTR_AUTH
 	  context-switched along with the process.
 
 	  The feature is detected at runtime. If the feature is not present in
-	  hardware it will not be advertised to userspace nor will it be
-	  enabled.
+	  hardware it will not be advertised to userspace/KVM guest nor will it
+	  be enabled. However, KVM guest also require CONFIG_ARM64_VHE=y to use
+	  this feature.
 
 endmenu
 
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 31dbc7c..a585d82 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -161,6 +161,18 @@ enum vcpu_sysreg {
 	PMSWINC_EL0,	/* Software Increment Register */
 	PMUSERENR_EL0,	/* User Enable Register */
 
+	/* Pointer Authentication Registers in a strict increasing order. */
+	APIAKEYLO_EL1,
+	APIAKEYHI_EL1 = APIAKEYLO_EL1 + 1,
+	APIBKEYLO_EL1 = APIAKEYLO_EL1 + 2,
+	APIBKEYHI_EL1 = APIAKEYLO_EL1 + 3,
+	APDAKEYLO_EL1 = APIAKEYLO_EL1 + 4,
+	APDAKEYHI_EL1 = APIAKEYLO_EL1 + 5,
+	APDBKEYLO_EL1 = APIAKEYLO_EL1 + 6,
+	APDBKEYHI_EL1 = APIAKEYLO_EL1 + 7,
+	APGAKEYLO_EL1 = APIAKEYLO_EL1 + 8,
+	APGAKEYHI_EL1 = APIAKEYLO_EL1 + 9,
+
 	/* 32bit specific registers. Keep them at the end of the range */
 	DACR32_EL2,	/* Domain Access Control Register */
 	IFSR32_EL2,	/* Instruction Fault Status Register */
@@ -529,6 +541,11 @@ static inline bool kvm_arch_requires_vhe(void)
 	return false;
 }
 
+void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu);
+void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu);
+void kvm_arm_vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu);
+void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu);
+
 static inline void kvm_arch_hardware_unsetup(void) {}
 static inline void kvm_arch_sync_events(struct kvm *kvm) {}
 static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
diff --git a/arch/arm64/include/asm/kvm_ptrauth_asm.h b/arch/arm64/include/asm/kvm_ptrauth_asm.h
new file mode 100644
index 0000000..8142521
--- /dev/null
+++ b/arch/arm64/include/asm/kvm_ptrauth_asm.h
@@ -0,0 +1,106 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* arch/arm64/include/asm/kvm_ptrauth_asm.h: Guest/host ptrauth save/restore
+ * Copyright 2019 Arm Limited
+ * Author: Mark Rutland <mark.rutland@arm.com>
+ *         Amit Daniel Kachhap <amit.kachhap@arm.com>
+ */
+
+#ifndef __ASM_KVM_PTRAUTH_ASM_H
+#define __ASM_KVM_PTRAUTH_ASM_H
+
+#ifndef __ASSEMBLY__
+
+#define __ptrauth_save_key(regs, key)						\
+({										\
+	regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1);	\
+	regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1);	\
+})
+
+#define __ptrauth_save_state(ctxt)						\
+({										\
+	__ptrauth_save_key(ctxt->sys_regs, APIA);				\
+	__ptrauth_save_key(ctxt->sys_regs, APIB);				\
+	__ptrauth_save_key(ctxt->sys_regs, APDA);				\
+	__ptrauth_save_key(ctxt->sys_regs, APDB);				\
+	__ptrauth_save_key(ctxt->sys_regs, APGA);				\
+})
+
+#else /* __ASSEMBLY__ */
+
+#include <asm/sysreg.h>
+
+#ifdef	CONFIG_ARM64_PTR_AUTH
+
+#define PTRAUTH_REG_OFFSET(x)	(x - CPU_APIAKEYLO_EL1)
+
+/*
+ * CPU_AP*_EL1 values exceed immediate offset range (512) for stp instruction
+ * so below macros takes CPU_APIAKEYLO_EL1 as base and calculates the offset of
+ * the keys from this base to avoid an extra add instruction. These macros
+ * assumes the keys offsets are aligned in a specific increasing order.
+ */
+.macro	ptrauth_save_state base, reg1, reg2
+	mrs_s	\reg1, SYS_APIAKEYLO_EL1
+	mrs_s	\reg2, SYS_APIAKEYHI_EL1
+	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIAKEYLO_EL1)]
+	mrs_s	\reg1, SYS_APIBKEYLO_EL1
+	mrs_s	\reg2, SYS_APIBKEYHI_EL1
+	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIBKEYLO_EL1)]
+	mrs_s	\reg1, SYS_APDAKEYLO_EL1
+	mrs_s	\reg2, SYS_APDAKEYHI_EL1
+	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDAKEYLO_EL1)]
+	mrs_s	\reg1, SYS_APDBKEYLO_EL1
+	mrs_s	\reg2, SYS_APDBKEYHI_EL1
+	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDBKEYLO_EL1)]
+	mrs_s	\reg1, SYS_APGAKEYLO_EL1
+	mrs_s	\reg2, SYS_APGAKEYHI_EL1
+	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APGAKEYLO_EL1)]
+.endm
+
+.macro	ptrauth_restore_state base, reg1, reg2
+	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIAKEYLO_EL1)]
+	msr_s	SYS_APIAKEYLO_EL1, \reg1
+	msr_s	SYS_APIAKEYHI_EL1, \reg2
+	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIBKEYLO_EL1)]
+	msr_s	SYS_APIBKEYLO_EL1, \reg1
+	msr_s	SYS_APIBKEYHI_EL1, \reg2
+	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDAKEYLO_EL1)]
+	msr_s	SYS_APDAKEYLO_EL1, \reg1
+	msr_s	SYS_APDAKEYHI_EL1, \reg2
+	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDBKEYLO_EL1)]
+	msr_s	SYS_APDBKEYLO_EL1, \reg1
+	msr_s	SYS_APDBKEYHI_EL1, \reg2
+	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APGAKEYLO_EL1)]
+	msr_s	SYS_APGAKEYLO_EL1, \reg1
+	msr_s	SYS_APGAKEYHI_EL1, \reg2
+.endm
+
+.macro ptrauth_switch_to_guest g_ctxt, reg1, reg2, reg3
+	ldr	\reg1, [\g_ctxt, #(VCPU_HCR_EL2 - VCPU_CONTEXT)]
+	and	\reg1, \reg1, #(HCR_API | HCR_APK)
+	cbz	\reg1, 1000f
+	add	\reg1, \g_ctxt, #CPU_APIAKEYLO_EL1
+	ptrauth_restore_state	\reg1, \reg2, \reg3
+1000:
+.endm
+
+.macro ptrauth_switch_to_host g_ctxt, h_ctxt, reg1, reg2, reg3
+	ldr	\reg1, [\g_ctxt, #(VCPU_HCR_EL2 - VCPU_CONTEXT)]
+	and	\reg1, \reg1, #(HCR_API | HCR_APK)
+	cbz	\reg1, 1001f
+	add	\reg1, \g_ctxt, #CPU_APIAKEYLO_EL1
+	ptrauth_save_state	\reg1, \reg2, \reg3
+	add	\reg1, \h_ctxt, #CPU_APIAKEYLO_EL1
+	ptrauth_restore_state	\reg1, \reg2, \reg3
+	isb
+1001:
+.endm
+
+#else /* !CONFIG_ARM64_PTR_AUTH */
+.macro ptrauth_switch_to_guest g_ctxt, reg1, reg2, reg3
+.endm
+.macro ptrauth_switch_to_host g_ctxt, h_ctxt, reg1, reg2, reg3
+.endm
+#endif /* CONFIG_ARM64_PTR_AUTH */
+#endif /* __ASSEMBLY__ */
+#endif /* __ASM_KVM_PTRAUTH_ASM_H */
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 7f40dcb..8178330 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -125,7 +125,13 @@ int main(void)
   DEFINE(VCPU_CONTEXT,		offsetof(struct kvm_vcpu, arch.ctxt));
   DEFINE(VCPU_FAULT_DISR,	offsetof(struct kvm_vcpu, arch.fault.disr_el1));
   DEFINE(VCPU_WORKAROUND_FLAGS,	offsetof(struct kvm_vcpu, arch.workaround_flags));
+  DEFINE(VCPU_HCR_EL2,		offsetof(struct kvm_vcpu, arch.hcr_el2));
   DEFINE(CPU_GP_REGS,		offsetof(struct kvm_cpu_context, gp_regs));
+  DEFINE(CPU_APIAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APIAKEYLO_EL1]));
+  DEFINE(CPU_APIBKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APIBKEYLO_EL1]));
+  DEFINE(CPU_APDAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APDAKEYLO_EL1]));
+  DEFINE(CPU_APDBKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APDBKEYLO_EL1]));
+  DEFINE(CPU_APGAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APGAKEYLO_EL1]));
   DEFINE(CPU_USER_PT_REGS,	offsetof(struct kvm_regs, regs));
   DEFINE(HOST_CONTEXT_VCPU,	offsetof(struct kvm_cpu_context, __hyp_running_vcpu));
 #endif
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 4f7b26b..e07f763 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -878,3 +878,17 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
 
 	return ret;
 }
+
+/**
+ * kvm_arm_vcpu_ptrauth_setup_lazy - setup lazy ptrauth for vcpu schedule
+ *
+ * @vcpu: The VCPU pointer
+ *
+ * This function may be used to disable ptrauth and use it in a lazy context
+ * via traps.
+ */
+void kvm_arm_vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu)
+{
+	if (vcpu_has_ptrauth(vcpu))
+		kvm_arm_vcpu_ptrauth_disable(vcpu);
+}
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 0b79834..5838ff9 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -30,6 +30,7 @@
 #include <asm/kvm_coproc.h>
 #include <asm/kvm_emulate.h>
 #include <asm/kvm_mmu.h>
+#include <asm/kvm_ptrauth_asm.h>
 #include <asm/debug-monitors.h>
 #include <asm/traps.h>
 
@@ -174,19 +175,26 @@ static int handle_sve(struct kvm_vcpu *vcpu, struct kvm_run *run)
 }
 
 /*
+ * Handle the guest trying to use a ptrauth instruction, or trying to access a
+ * ptrauth register.
+ */
+void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu)
+{
+	if (vcpu_has_ptrauth(vcpu)) {
+		kvm_arm_vcpu_ptrauth_enable(vcpu);
+		__ptrauth_save_state(vcpu->arch.host_cpu_context);
+	} else {
+		kvm_inject_undefined(vcpu);
+	}
+}
+
+/*
  * Guest usage of a ptrauth instruction (which the guest EL1 did not turn into
  * a NOP).
  */
 static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu, struct kvm_run *run)
 {
-	/*
-	 * We don't currently support ptrauth in a guest, and we mask the ID
-	 * registers to prevent well-behaved guests from trying to make use of
-	 * it.
-	 *
-	 * Inject an UNDEF, as if the feature really isn't present.
-	 */
-	kvm_inject_undefined(vcpu);
+	kvm_arm_vcpu_ptrauth_trap(vcpu);
 	return 1;
 }
 
diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
index 675fdc1..3a70213 100644
--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -24,6 +24,7 @@
 #include <asm/kvm_arm.h>
 #include <asm/kvm_asm.h>
 #include <asm/kvm_mmu.h>
+#include <asm/kvm_ptrauth_asm.h>
 
 #define CPU_GP_REG_OFFSET(x)	(CPU_GP_REGS + x)
 #define CPU_XREG_OFFSET(x)	CPU_GP_REG_OFFSET(CPU_USER_PT_REGS + 8*x)
@@ -64,6 +65,9 @@ ENTRY(__guest_enter)
 
 	add	x18, x0, #VCPU_CONTEXT
 
+	// Macro ptrauth_switch_to_guest(guest cxt, tmp1, tmp2, tmp3).
+	ptrauth_switch_to_guest x18, x0, x1, x2
+
 	// Restore guest regs x0-x17
 	ldp	x0, x1,   [x18, #CPU_XREG_OFFSET(0)]
 	ldp	x2, x3,   [x18, #CPU_XREG_OFFSET(2)]
@@ -118,6 +122,9 @@ ENTRY(__guest_exit)
 
 	get_host_ctxt	x2, x3
 
+	// Macro ptrauth_switch_to_host(guest cxt, host cxt, tmp1, tmp2, tmp3).
+	ptrauth_switch_to_host x1, x2, x3, x4, x5
+
 	// Now restore the host regs
 	restore_callee_saved_regs x2
 
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 09e9b06..4a98b5c 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1007,6 +1007,38 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	{ SYS_DESC(SYS_PMEVTYPERn_EL0(n)),					\
 	  access_pmu_evtyper, reset_unknown, (PMEVTYPER0_EL0 + n), }
 
+void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu)
+{
+	vcpu->arch.hcr_el2 |= (HCR_API | HCR_APK);
+}
+
+void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu)
+{
+	vcpu->arch.hcr_el2 &= ~(HCR_API | HCR_APK);
+}
+
+static bool trap_ptrauth(struct kvm_vcpu *vcpu,
+			 struct sys_reg_params *p,
+			 const struct sys_reg_desc *rd)
+{
+	kvm_arm_vcpu_ptrauth_trap(vcpu);
+	return false;
+}
+
+static unsigned int ptrauth_visibility(const struct kvm_vcpu *vcpu,
+			const struct sys_reg_desc *rd)
+{
+	return vcpu_has_ptrauth(vcpu) ? 0 : REG_HIDDEN_USER | REG_HIDDEN_GUEST;
+}
+
+#define __PTRAUTH_KEY(k)						\
+	{ SYS_DESC(SYS_## k), trap_ptrauth, reset_unknown, k,		\
+	.visibility = ptrauth_visibility}
+
+#define PTRAUTH_KEY(k)							\
+	__PTRAUTH_KEY(k ## KEYLO_EL1),					\
+	__PTRAUTH_KEY(k ## KEYHI_EL1)
+
 static bool access_arch_timer(struct kvm_vcpu *vcpu,
 			      struct sys_reg_params *p,
 			      const struct sys_reg_desc *r)
@@ -1058,9 +1090,11 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu,
 					 (0xfUL << ID_AA64ISAR1_API_SHIFT) |
 					 (0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
 					 (0xfUL << ID_AA64ISAR1_GPI_SHIFT);
-		if (val & ptrauth_mask)
-			kvm_debug("ptrauth unsupported for guests, suppressing\n");
-		val &= ~ptrauth_mask;
+		if (!vcpu_has_ptrauth(vcpu)) {
+			if (val & ptrauth_mask)
+				kvm_debug("ptrauth unsupported for guests, suppressing\n");
+			val &= ~ptrauth_mask;
+		}
 	}
 
 	return val;
@@ -1460,6 +1494,12 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	{ SYS_DESC(SYS_TTBR1_EL1), access_vm_reg, reset_unknown, TTBR1_EL1 },
 	{ SYS_DESC(SYS_TCR_EL1), access_vm_reg, reset_val, TCR_EL1, 0 },
 
+	PTRAUTH_KEY(APIA),
+	PTRAUTH_KEY(APIB),
+	PTRAUTH_KEY(APDA),
+	PTRAUTH_KEY(APDB),
+	PTRAUTH_KEY(APGA),
+
 	{ SYS_DESC(SYS_AFSR0_EL1), access_vm_reg, reset_unknown, AFSR0_EL1 },
 	{ SYS_DESC(SYS_AFSR1_EL1), access_vm_reg, reset_unknown, AFSR1_EL1 },
 	{ SYS_DESC(SYS_ESR_EL1), access_vm_reg, reset_unknown, ESR_EL1 },
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 9edbf0f..8d1b73c 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -385,6 +385,8 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 		vcpu_clear_wfe_traps(vcpu);
 	else
 		vcpu_set_wfe_traps(vcpu);
+
+	kvm_arm_vcpu_ptrauth_setup_lazy(vcpu);
 }
 
 void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
-- 
2.7.4

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 77+ messages in thread

* [PATCH v9 2/5] KVM: arm/arm64: context-switch ptrauth registers
@ 2019-04-12  3:20   ` Amit Daniel Kachhap
  0 siblings, 0 replies; 77+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-12  3:20 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Andrew Jones, Julien Thierry, Marc Zyngier,
	Catalin Marinas, Will Deacon, Christoffer Dall,
	Kristina Martsenko, kvmarm, James Morse, Ramana Radhakrishnan,
	Amit Daniel Kachhap, Dave Martin, linux-kernel

From: Mark Rutland <mark.rutland@arm.com>

When pointer authentication is supported, a guest may wish to use it.
This patch adds the necessary KVM infrastructure for this to work, with
a semi-lazy context switch of the pointer auth state.

Pointer authentication feature is only enabled when VHE is built
in the kernel and present in the CPU implementation so only VHE code
paths are modified.

When we schedule a vcpu, we disable guest usage of pointer
authentication instructions and accesses to the keys. While these are
disabled, we avoid context-switching the keys. When we trap the guest
trying to use pointer authentication functionality, we change to eagerly
context-switching the keys, and enable the feature. The next time the
vcpu is scheduled out/in, we start again. However the host key save is
optimized and implemented inside ptrauth instruction/register access
trap.

Pointer authentication consists of address authentication and generic
authentication, and CPUs in a system might have varied support for
either. Where support for either feature is not uniform, it is hidden
from guests via ID register emulation, as a result of the cpufeature
framework in the host.

Unfortunately, address authentication and generic authentication cannot
be trapped separately, as the architecture provides a single EL2 trap
covering both. If we wish to expose one without the other, we cannot
prevent a (badly-written) guest from intermittently using a feature
which is not uniformly supported (when scheduled on a physical CPU which
supports the relevant feature). Hence, this patch expects both type of
authentication to be present in a cpu.

This switch of key is done from guest enter/exit assembly as preparation
for the upcoming in-kernel pointer authentication support. Hence, these
key switching routines are not implemented in C code as they may cause
pointer authentication key signing error in some situations.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
[Only VHE, key switch in full assembly, vcpu_has_ptrauth checks
, save host key in ptrauth exception trap]
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---

Changes since v9:
* Used high order number for branching in assembly macros. [Kristina Martsenko]
* Taken care of different offset for hcr_el2 now.

 arch/arm/include/asm/kvm_host.h          |   1 +
 arch/arm64/Kconfig                       |   5 +-
 arch/arm64/include/asm/kvm_host.h        |  17 +++++
 arch/arm64/include/asm/kvm_ptrauth_asm.h | 106 +++++++++++++++++++++++++++++++
 arch/arm64/kernel/asm-offsets.c          |   6 ++
 arch/arm64/kvm/guest.c                   |  14 ++++
 arch/arm64/kvm/handle_exit.c             |  24 ++++---
 arch/arm64/kvm/hyp/entry.S               |   7 ++
 arch/arm64/kvm/sys_regs.c                |  46 +++++++++++++-
 virt/kvm/arm/arm.c                       |   2 +
 10 files changed, 215 insertions(+), 13 deletions(-)
 create mode 100644 arch/arm64/include/asm/kvm_ptrauth_asm.h

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index e80cfc1..7a5c7f8 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -363,6 +363,7 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
 static inline void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu) {}
+static inline void kvm_arm_vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu) {}
 
 static inline void kvm_arm_vhe_guest_enter(void) {}
 static inline void kvm_arm_vhe_guest_exit(void) {}
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 7e34b9e..9e8506e 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1301,8 +1301,9 @@ config ARM64_PTR_AUTH
 	  context-switched along with the process.
 
 	  The feature is detected at runtime. If the feature is not present in
-	  hardware it will not be advertised to userspace nor will it be
-	  enabled.
+	  hardware it will not be advertised to userspace/KVM guest nor will it
+	  be enabled. However, KVM guest also require CONFIG_ARM64_VHE=y to use
+	  this feature.
 
 endmenu
 
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 31dbc7c..a585d82 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -161,6 +161,18 @@ enum vcpu_sysreg {
 	PMSWINC_EL0,	/* Software Increment Register */
 	PMUSERENR_EL0,	/* User Enable Register */
 
+	/* Pointer Authentication Registers in a strict increasing order. */
+	APIAKEYLO_EL1,
+	APIAKEYHI_EL1 = APIAKEYLO_EL1 + 1,
+	APIBKEYLO_EL1 = APIAKEYLO_EL1 + 2,
+	APIBKEYHI_EL1 = APIAKEYLO_EL1 + 3,
+	APDAKEYLO_EL1 = APIAKEYLO_EL1 + 4,
+	APDAKEYHI_EL1 = APIAKEYLO_EL1 + 5,
+	APDBKEYLO_EL1 = APIAKEYLO_EL1 + 6,
+	APDBKEYHI_EL1 = APIAKEYLO_EL1 + 7,
+	APGAKEYLO_EL1 = APIAKEYLO_EL1 + 8,
+	APGAKEYHI_EL1 = APIAKEYLO_EL1 + 9,
+
 	/* 32bit specific registers. Keep them at the end of the range */
 	DACR32_EL2,	/* Domain Access Control Register */
 	IFSR32_EL2,	/* Instruction Fault Status Register */
@@ -529,6 +541,11 @@ static inline bool kvm_arch_requires_vhe(void)
 	return false;
 }
 
+void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu);
+void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu);
+void kvm_arm_vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu);
+void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu);
+
 static inline void kvm_arch_hardware_unsetup(void) {}
 static inline void kvm_arch_sync_events(struct kvm *kvm) {}
 static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
diff --git a/arch/arm64/include/asm/kvm_ptrauth_asm.h b/arch/arm64/include/asm/kvm_ptrauth_asm.h
new file mode 100644
index 0000000..8142521
--- /dev/null
+++ b/arch/arm64/include/asm/kvm_ptrauth_asm.h
@@ -0,0 +1,106 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* arch/arm64/include/asm/kvm_ptrauth_asm.h: Guest/host ptrauth save/restore
+ * Copyright 2019 Arm Limited
+ * Author: Mark Rutland <mark.rutland@arm.com>
+ *         Amit Daniel Kachhap <amit.kachhap@arm.com>
+ */
+
+#ifndef __ASM_KVM_PTRAUTH_ASM_H
+#define __ASM_KVM_PTRAUTH_ASM_H
+
+#ifndef __ASSEMBLY__
+
+#define __ptrauth_save_key(regs, key)						\
+({										\
+	regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1);	\
+	regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1);	\
+})
+
+#define __ptrauth_save_state(ctxt)						\
+({										\
+	__ptrauth_save_key(ctxt->sys_regs, APIA);				\
+	__ptrauth_save_key(ctxt->sys_regs, APIB);				\
+	__ptrauth_save_key(ctxt->sys_regs, APDA);				\
+	__ptrauth_save_key(ctxt->sys_regs, APDB);				\
+	__ptrauth_save_key(ctxt->sys_regs, APGA);				\
+})
+
+#else /* __ASSEMBLY__ */
+
+#include <asm/sysreg.h>
+
+#ifdef	CONFIG_ARM64_PTR_AUTH
+
+#define PTRAUTH_REG_OFFSET(x)	(x - CPU_APIAKEYLO_EL1)
+
+/*
+ * CPU_AP*_EL1 values exceed immediate offset range (512) for stp instruction
+ * so below macros takes CPU_APIAKEYLO_EL1 as base and calculates the offset of
+ * the keys from this base to avoid an extra add instruction. These macros
+ * assumes the keys offsets are aligned in a specific increasing order.
+ */
+.macro	ptrauth_save_state base, reg1, reg2
+	mrs_s	\reg1, SYS_APIAKEYLO_EL1
+	mrs_s	\reg2, SYS_APIAKEYHI_EL1
+	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIAKEYLO_EL1)]
+	mrs_s	\reg1, SYS_APIBKEYLO_EL1
+	mrs_s	\reg2, SYS_APIBKEYHI_EL1
+	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIBKEYLO_EL1)]
+	mrs_s	\reg1, SYS_APDAKEYLO_EL1
+	mrs_s	\reg2, SYS_APDAKEYHI_EL1
+	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDAKEYLO_EL1)]
+	mrs_s	\reg1, SYS_APDBKEYLO_EL1
+	mrs_s	\reg2, SYS_APDBKEYHI_EL1
+	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDBKEYLO_EL1)]
+	mrs_s	\reg1, SYS_APGAKEYLO_EL1
+	mrs_s	\reg2, SYS_APGAKEYHI_EL1
+	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APGAKEYLO_EL1)]
+.endm
+
+.macro	ptrauth_restore_state base, reg1, reg2
+	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIAKEYLO_EL1)]
+	msr_s	SYS_APIAKEYLO_EL1, \reg1
+	msr_s	SYS_APIAKEYHI_EL1, \reg2
+	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIBKEYLO_EL1)]
+	msr_s	SYS_APIBKEYLO_EL1, \reg1
+	msr_s	SYS_APIBKEYHI_EL1, \reg2
+	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDAKEYLO_EL1)]
+	msr_s	SYS_APDAKEYLO_EL1, \reg1
+	msr_s	SYS_APDAKEYHI_EL1, \reg2
+	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDBKEYLO_EL1)]
+	msr_s	SYS_APDBKEYLO_EL1, \reg1
+	msr_s	SYS_APDBKEYHI_EL1, \reg2
+	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APGAKEYLO_EL1)]
+	msr_s	SYS_APGAKEYLO_EL1, \reg1
+	msr_s	SYS_APGAKEYHI_EL1, \reg2
+.endm
+
+.macro ptrauth_switch_to_guest g_ctxt, reg1, reg2, reg3
+	ldr	\reg1, [\g_ctxt, #(VCPU_HCR_EL2 - VCPU_CONTEXT)]
+	and	\reg1, \reg1, #(HCR_API | HCR_APK)
+	cbz	\reg1, 1000f
+	add	\reg1, \g_ctxt, #CPU_APIAKEYLO_EL1
+	ptrauth_restore_state	\reg1, \reg2, \reg3
+1000:
+.endm
+
+.macro ptrauth_switch_to_host g_ctxt, h_ctxt, reg1, reg2, reg3
+	ldr	\reg1, [\g_ctxt, #(VCPU_HCR_EL2 - VCPU_CONTEXT)]
+	and	\reg1, \reg1, #(HCR_API | HCR_APK)
+	cbz	\reg1, 1001f
+	add	\reg1, \g_ctxt, #CPU_APIAKEYLO_EL1
+	ptrauth_save_state	\reg1, \reg2, \reg3
+	add	\reg1, \h_ctxt, #CPU_APIAKEYLO_EL1
+	ptrauth_restore_state	\reg1, \reg2, \reg3
+	isb
+1001:
+.endm
+
+#else /* !CONFIG_ARM64_PTR_AUTH */
+.macro ptrauth_switch_to_guest g_ctxt, reg1, reg2, reg3
+.endm
+.macro ptrauth_switch_to_host g_ctxt, h_ctxt, reg1, reg2, reg3
+.endm
+#endif /* CONFIG_ARM64_PTR_AUTH */
+#endif /* __ASSEMBLY__ */
+#endif /* __ASM_KVM_PTRAUTH_ASM_H */
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 7f40dcb..8178330 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -125,7 +125,13 @@ int main(void)
   DEFINE(VCPU_CONTEXT,		offsetof(struct kvm_vcpu, arch.ctxt));
   DEFINE(VCPU_FAULT_DISR,	offsetof(struct kvm_vcpu, arch.fault.disr_el1));
   DEFINE(VCPU_WORKAROUND_FLAGS,	offsetof(struct kvm_vcpu, arch.workaround_flags));
+  DEFINE(VCPU_HCR_EL2,		offsetof(struct kvm_vcpu, arch.hcr_el2));
   DEFINE(CPU_GP_REGS,		offsetof(struct kvm_cpu_context, gp_regs));
+  DEFINE(CPU_APIAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APIAKEYLO_EL1]));
+  DEFINE(CPU_APIBKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APIBKEYLO_EL1]));
+  DEFINE(CPU_APDAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APDAKEYLO_EL1]));
+  DEFINE(CPU_APDBKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APDBKEYLO_EL1]));
+  DEFINE(CPU_APGAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APGAKEYLO_EL1]));
   DEFINE(CPU_USER_PT_REGS,	offsetof(struct kvm_regs, regs));
   DEFINE(HOST_CONTEXT_VCPU,	offsetof(struct kvm_cpu_context, __hyp_running_vcpu));
 #endif
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 4f7b26b..e07f763 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -878,3 +878,17 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
 
 	return ret;
 }
+
+/**
+ * kvm_arm_vcpu_ptrauth_setup_lazy - setup lazy ptrauth for vcpu schedule
+ *
+ * @vcpu: The VCPU pointer
+ *
+ * This function may be used to disable ptrauth and use it in a lazy context
+ * via traps.
+ */
+void kvm_arm_vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu)
+{
+	if (vcpu_has_ptrauth(vcpu))
+		kvm_arm_vcpu_ptrauth_disable(vcpu);
+}
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 0b79834..5838ff9 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -30,6 +30,7 @@
 #include <asm/kvm_coproc.h>
 #include <asm/kvm_emulate.h>
 #include <asm/kvm_mmu.h>
+#include <asm/kvm_ptrauth_asm.h>
 #include <asm/debug-monitors.h>
 #include <asm/traps.h>
 
@@ -174,19 +175,26 @@ static int handle_sve(struct kvm_vcpu *vcpu, struct kvm_run *run)
 }
 
 /*
+ * Handle the guest trying to use a ptrauth instruction, or trying to access a
+ * ptrauth register.
+ */
+void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu)
+{
+	if (vcpu_has_ptrauth(vcpu)) {
+		kvm_arm_vcpu_ptrauth_enable(vcpu);
+		__ptrauth_save_state(vcpu->arch.host_cpu_context);
+	} else {
+		kvm_inject_undefined(vcpu);
+	}
+}
+
+/*
  * Guest usage of a ptrauth instruction (which the guest EL1 did not turn into
  * a NOP).
  */
 static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu, struct kvm_run *run)
 {
-	/*
-	 * We don't currently support ptrauth in a guest, and we mask the ID
-	 * registers to prevent well-behaved guests from trying to make use of
-	 * it.
-	 *
-	 * Inject an UNDEF, as if the feature really isn't present.
-	 */
-	kvm_inject_undefined(vcpu);
+	kvm_arm_vcpu_ptrauth_trap(vcpu);
 	return 1;
 }
 
diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
index 675fdc1..3a70213 100644
--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -24,6 +24,7 @@
 #include <asm/kvm_arm.h>
 #include <asm/kvm_asm.h>
 #include <asm/kvm_mmu.h>
+#include <asm/kvm_ptrauth_asm.h>
 
 #define CPU_GP_REG_OFFSET(x)	(CPU_GP_REGS + x)
 #define CPU_XREG_OFFSET(x)	CPU_GP_REG_OFFSET(CPU_USER_PT_REGS + 8*x)
@@ -64,6 +65,9 @@ ENTRY(__guest_enter)
 
 	add	x18, x0, #VCPU_CONTEXT
 
+	// Macro ptrauth_switch_to_guest(guest cxt, tmp1, tmp2, tmp3).
+	ptrauth_switch_to_guest x18, x0, x1, x2
+
 	// Restore guest regs x0-x17
 	ldp	x0, x1,   [x18, #CPU_XREG_OFFSET(0)]
 	ldp	x2, x3,   [x18, #CPU_XREG_OFFSET(2)]
@@ -118,6 +122,9 @@ ENTRY(__guest_exit)
 
 	get_host_ctxt	x2, x3
 
+	// Macro ptrauth_switch_to_host(guest cxt, host cxt, tmp1, tmp2, tmp3).
+	ptrauth_switch_to_host x1, x2, x3, x4, x5
+
 	// Now restore the host regs
 	restore_callee_saved_regs x2
 
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 09e9b06..4a98b5c 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1007,6 +1007,38 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	{ SYS_DESC(SYS_PMEVTYPERn_EL0(n)),					\
 	  access_pmu_evtyper, reset_unknown, (PMEVTYPER0_EL0 + n), }
 
+void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu)
+{
+	vcpu->arch.hcr_el2 |= (HCR_API | HCR_APK);
+}
+
+void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu)
+{
+	vcpu->arch.hcr_el2 &= ~(HCR_API | HCR_APK);
+}
+
+static bool trap_ptrauth(struct kvm_vcpu *vcpu,
+			 struct sys_reg_params *p,
+			 const struct sys_reg_desc *rd)
+{
+	kvm_arm_vcpu_ptrauth_trap(vcpu);
+	return false;
+}
+
+static unsigned int ptrauth_visibility(const struct kvm_vcpu *vcpu,
+			const struct sys_reg_desc *rd)
+{
+	return vcpu_has_ptrauth(vcpu) ? 0 : REG_HIDDEN_USER | REG_HIDDEN_GUEST;
+}
+
+#define __PTRAUTH_KEY(k)						\
+	{ SYS_DESC(SYS_## k), trap_ptrauth, reset_unknown, k,		\
+	.visibility = ptrauth_visibility}
+
+#define PTRAUTH_KEY(k)							\
+	__PTRAUTH_KEY(k ## KEYLO_EL1),					\
+	__PTRAUTH_KEY(k ## KEYHI_EL1)
+
 static bool access_arch_timer(struct kvm_vcpu *vcpu,
 			      struct sys_reg_params *p,
 			      const struct sys_reg_desc *r)
@@ -1058,9 +1090,11 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu,
 					 (0xfUL << ID_AA64ISAR1_API_SHIFT) |
 					 (0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
 					 (0xfUL << ID_AA64ISAR1_GPI_SHIFT);
-		if (val & ptrauth_mask)
-			kvm_debug("ptrauth unsupported for guests, suppressing\n");
-		val &= ~ptrauth_mask;
+		if (!vcpu_has_ptrauth(vcpu)) {
+			if (val & ptrauth_mask)
+				kvm_debug("ptrauth unsupported for guests, suppressing\n");
+			val &= ~ptrauth_mask;
+		}
 	}
 
 	return val;
@@ -1460,6 +1494,12 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	{ SYS_DESC(SYS_TTBR1_EL1), access_vm_reg, reset_unknown, TTBR1_EL1 },
 	{ SYS_DESC(SYS_TCR_EL1), access_vm_reg, reset_val, TCR_EL1, 0 },
 
+	PTRAUTH_KEY(APIA),
+	PTRAUTH_KEY(APIB),
+	PTRAUTH_KEY(APDA),
+	PTRAUTH_KEY(APDB),
+	PTRAUTH_KEY(APGA),
+
 	{ SYS_DESC(SYS_AFSR0_EL1), access_vm_reg, reset_unknown, AFSR0_EL1 },
 	{ SYS_DESC(SYS_AFSR1_EL1), access_vm_reg, reset_unknown, AFSR1_EL1 },
 	{ SYS_DESC(SYS_ESR_EL1), access_vm_reg, reset_unknown, ESR_EL1 },
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 9edbf0f..8d1b73c 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -385,6 +385,8 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 		vcpu_clear_wfe_traps(vcpu);
 	else
 		vcpu_set_wfe_traps(vcpu);
+
+	kvm_arm_vcpu_ptrauth_setup_lazy(vcpu);
 }
 
 void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
-- 
2.7.4


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 77+ messages in thread

* [PATCH v9 3/5] KVM: arm64: Add userspace flag to enable pointer authentication
@ 2019-04-12  3:20   ` Amit Daniel Kachhap
  0 siblings, 0 replies; 77+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-12  3:20 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	Kristina Martsenko, linux-kernel, Amit Daniel Kachhap,
	Mark Rutland, James Morse, Julien Thierry

Now that the building blocks of pointer authentication are present, lets
add userspace flags KVM_ARM_VCPU_PTRAUTH_ADDRESS and
KVM_ARM_VCPU_PTRAUTH_GENERIC. These flags will enable pointer
authentication for the KVM guest on a per-vcpu basis through the ioctl
KVM_ARM_VCPU_INIT.

This features will allow the KVM guest to allow the handling of
pointer authentication instructions or to treat them as undefined
if not set.

Necessary documentations are added to reflect the changes done.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---

Changes since v8:
*  Update vcpu->arch.flags to final enable state. [Dave Martin]
*  Change in Documentation to make clear the implementation of 2 vcpu
   feature flags. [Dave Martin]

 Documentation/arm64/pointer-authentication.txt | 22 ++++++++++++++++++----
 Documentation/virtual/kvm/api.txt              |  6 ++++++
 arch/arm64/include/asm/kvm_host.h              |  2 +-
 arch/arm64/include/uapi/asm/kvm.h              |  2 ++
 arch/arm64/kvm/reset.c                         | 24 ++++++++++++++++++++++++
 5 files changed, 51 insertions(+), 5 deletions(-)

diff --git a/Documentation/arm64/pointer-authentication.txt b/Documentation/arm64/pointer-authentication.txt
index 5baca42..fc71b33 100644
--- a/Documentation/arm64/pointer-authentication.txt
+++ b/Documentation/arm64/pointer-authentication.txt
@@ -87,7 +87,21 @@ used to get and set the keys for a thread.
 Virtualization
 --------------
 
-Pointer authentication is not currently supported in KVM guests. KVM
-will mask the feature bits from ID_AA64ISAR1_EL1, and attempted use of
-the feature will result in an UNDEFINED exception being injected into
-the guest.
+Pointer authentication is enabled in KVM guest when each virtual cpu is
+initialised by passing flags KVM_ARM_VCPU_PTRAUTH_[ADDRESS/GENERIC] and
+requesting these two separate cpu features to be enabled. The current KVM
+guest implementation works by enabling both features together, so both
+these userspace flags are checked before enabling pointer authentication.
+The separate userspace flag will allow to have no userspace ABI changes
+if support is added in the future to allow these two features to be
+enabled independently of one another.
+
+As Arm Architecture specifies that Pointer Authentication feature is
+implemented along with the VHE feature so KVM arm64 ptrauth code relies
+on VHE mode to be present.
+
+Additionally, when these vcpu feature flags are not set then KVM will
+filter out the Pointer Authentication system key registers from
+KVM_GET/SET_REG_* ioctls and mask those features from cpufeature ID
+register. Any attempt to use the Pointer Authentication instructions will
+result in an UNDEFINED exception being injected into the guest.
diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index 68509de..9d202f4 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -2753,6 +2753,12 @@ Possible features:
 	  Depends on KVM_CAP_ARM_PSCI_0_2.
 	- KVM_ARM_VCPU_PMU_V3: Emulate PMUv3 for the CPU.
 	  Depends on KVM_CAP_ARM_PMU_V3.
+	- KVM_ARM_VCPU_PTRAUTH_ADDRESS: Enables Address Pointer authentication
+	  for the CPU and supported only on arm64 architecture.
+	  Must be requested if KVM_ARM_VCPU_PTRAUTH_GENERIC is also requested.
+	- KVM_ARM_VCPU_PTRAUTH_GENERIC: Enables Generic Pointer authentication
+	  for the CPU and supported only on arm64 architecture.
+	  Must be requested if KVM_ARM_VCPU_PTRAUTH_ADDRESS is also requested.
 
 	- KVM_ARM_VCPU_SVE: Enables SVE for the CPU (arm64 only).
 	  Depends on KVM_CAP_ARM_SVE.
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index a585d82..25f2598 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -49,7 +49,7 @@
 
 #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS
 
-#define KVM_VCPU_MAX_FEATURES 5
+#define KVM_VCPU_MAX_FEATURES 7
 
 #define KVM_REQ_SLEEP \
 	KVM_ARCH_REQ_FLAGS(0, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index 6963b7e..fec2253 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -103,6 +103,8 @@ struct kvm_regs {
 #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
 #define KVM_ARM_VCPU_PMU_V3		3 /* Support guest PMUv3 */
 #define KVM_ARM_VCPU_SVE		4 /* enable SVE for this CPU */
+#define KVM_ARM_VCPU_PTRAUTH_ADDRESS	5 /* VCPU uses address authentication */
+#define KVM_ARM_VCPU_PTRAUTH_GENERIC	6 /* VCPU uses generic authentication */
 
 struct kvm_vcpu_init {
 	__u32 target;
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index f13378d..d13406b 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -221,6 +221,24 @@ static void kvm_vcpu_reset_sve(struct kvm_vcpu *vcpu)
 		memset(vcpu->arch.sve_state, 0, vcpu_sve_state_size(vcpu));
 }
 
+static int kvm_vcpu_enable_ptrauth(struct kvm_vcpu *vcpu)
+{
+	/* Support ptrauth only if the system supports these capabilities. */
+	if (!has_vhe() || !system_supports_address_auth() ||
+		!system_supports_generic_auth())
+		return -EINVAL;
+	/*
+	 * Make sure that both address/generic pointer authentication
+	 * features are requested by the userspace together.
+	 */
+	if (!test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) ||
+		!test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features))
+		return -EINVAL;
+
+	vcpu->arch.flags |= KVM_ARM64_GUEST_HAS_PTRAUTH;
+	return 0;
+}
+
 /**
  * kvm_reset_vcpu - sets core registers and sys_regs to reset value
  * @vcpu: The VCPU pointer
@@ -261,6 +279,12 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 		kvm_vcpu_reset_sve(vcpu);
 	}
 
+	if (test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) ||
+		test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features)) {
+		if (kvm_vcpu_enable_ptrauth(vcpu))
+			goto out;
+	}
+
 	switch (vcpu->arch.target) {
 	default:
 		if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features)) {
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 77+ messages in thread

* [PATCH v9 3/5] KVM: arm64: Add userspace flag to enable pointer authentication
@ 2019-04-12  3:20   ` Amit Daniel Kachhap
  0 siblings, 0 replies; 77+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-12  3:20 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Marc Zyngier, Catalin Marinas, Will Deacon, Kristina Martsenko,
	kvmarm, Ramana Radhakrishnan, Amit Daniel Kachhap, Dave Martin,
	linux-kernel

Now that the building blocks of pointer authentication are present, lets
add userspace flags KVM_ARM_VCPU_PTRAUTH_ADDRESS and
KVM_ARM_VCPU_PTRAUTH_GENERIC. These flags will enable pointer
authentication for the KVM guest on a per-vcpu basis through the ioctl
KVM_ARM_VCPU_INIT.

This features will allow the KVM guest to allow the handling of
pointer authentication instructions or to treat them as undefined
if not set.

Necessary documentations are added to reflect the changes done.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---

Changes since v8:
*  Update vcpu->arch.flags to final enable state. [Dave Martin]
*  Change in Documentation to make clear the implementation of 2 vcpu
   feature flags. [Dave Martin]

 Documentation/arm64/pointer-authentication.txt | 22 ++++++++++++++++++----
 Documentation/virtual/kvm/api.txt              |  6 ++++++
 arch/arm64/include/asm/kvm_host.h              |  2 +-
 arch/arm64/include/uapi/asm/kvm.h              |  2 ++
 arch/arm64/kvm/reset.c                         | 24 ++++++++++++++++++++++++
 5 files changed, 51 insertions(+), 5 deletions(-)

diff --git a/Documentation/arm64/pointer-authentication.txt b/Documentation/arm64/pointer-authentication.txt
index 5baca42..fc71b33 100644
--- a/Documentation/arm64/pointer-authentication.txt
+++ b/Documentation/arm64/pointer-authentication.txt
@@ -87,7 +87,21 @@ used to get and set the keys for a thread.
 Virtualization
 --------------
 
-Pointer authentication is not currently supported in KVM guests. KVM
-will mask the feature bits from ID_AA64ISAR1_EL1, and attempted use of
-the feature will result in an UNDEFINED exception being injected into
-the guest.
+Pointer authentication is enabled in KVM guest when each virtual cpu is
+initialised by passing flags KVM_ARM_VCPU_PTRAUTH_[ADDRESS/GENERIC] and
+requesting these two separate cpu features to be enabled. The current KVM
+guest implementation works by enabling both features together, so both
+these userspace flags are checked before enabling pointer authentication.
+The separate userspace flag will allow to have no userspace ABI changes
+if support is added in the future to allow these two features to be
+enabled independently of one another.
+
+As Arm Architecture specifies that Pointer Authentication feature is
+implemented along with the VHE feature so KVM arm64 ptrauth code relies
+on VHE mode to be present.
+
+Additionally, when these vcpu feature flags are not set then KVM will
+filter out the Pointer Authentication system key registers from
+KVM_GET/SET_REG_* ioctls and mask those features from cpufeature ID
+register. Any attempt to use the Pointer Authentication instructions will
+result in an UNDEFINED exception being injected into the guest.
diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index 68509de..9d202f4 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -2753,6 +2753,12 @@ Possible features:
 	  Depends on KVM_CAP_ARM_PSCI_0_2.
 	- KVM_ARM_VCPU_PMU_V3: Emulate PMUv3 for the CPU.
 	  Depends on KVM_CAP_ARM_PMU_V3.
+	- KVM_ARM_VCPU_PTRAUTH_ADDRESS: Enables Address Pointer authentication
+	  for the CPU and supported only on arm64 architecture.
+	  Must be requested if KVM_ARM_VCPU_PTRAUTH_GENERIC is also requested.
+	- KVM_ARM_VCPU_PTRAUTH_GENERIC: Enables Generic Pointer authentication
+	  for the CPU and supported only on arm64 architecture.
+	  Must be requested if KVM_ARM_VCPU_PTRAUTH_ADDRESS is also requested.
 
 	- KVM_ARM_VCPU_SVE: Enables SVE for the CPU (arm64 only).
 	  Depends on KVM_CAP_ARM_SVE.
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index a585d82..25f2598 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -49,7 +49,7 @@
 
 #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS
 
-#define KVM_VCPU_MAX_FEATURES 5
+#define KVM_VCPU_MAX_FEATURES 7
 
 #define KVM_REQ_SLEEP \
 	KVM_ARCH_REQ_FLAGS(0, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index 6963b7e..fec2253 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -103,6 +103,8 @@ struct kvm_regs {
 #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
 #define KVM_ARM_VCPU_PMU_V3		3 /* Support guest PMUv3 */
 #define KVM_ARM_VCPU_SVE		4 /* enable SVE for this CPU */
+#define KVM_ARM_VCPU_PTRAUTH_ADDRESS	5 /* VCPU uses address authentication */
+#define KVM_ARM_VCPU_PTRAUTH_GENERIC	6 /* VCPU uses generic authentication */
 
 struct kvm_vcpu_init {
 	__u32 target;
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index f13378d..d13406b 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -221,6 +221,24 @@ static void kvm_vcpu_reset_sve(struct kvm_vcpu *vcpu)
 		memset(vcpu->arch.sve_state, 0, vcpu_sve_state_size(vcpu));
 }
 
+static int kvm_vcpu_enable_ptrauth(struct kvm_vcpu *vcpu)
+{
+	/* Support ptrauth only if the system supports these capabilities. */
+	if (!has_vhe() || !system_supports_address_auth() ||
+		!system_supports_generic_auth())
+		return -EINVAL;
+	/*
+	 * Make sure that both address/generic pointer authentication
+	 * features are requested by the userspace together.
+	 */
+	if (!test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) ||
+		!test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features))
+		return -EINVAL;
+
+	vcpu->arch.flags |= KVM_ARM64_GUEST_HAS_PTRAUTH;
+	return 0;
+}
+
 /**
  * kvm_reset_vcpu - sets core registers and sys_regs to reset value
  * @vcpu: The VCPU pointer
@@ -261,6 +279,12 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 		kvm_vcpu_reset_sve(vcpu);
 	}
 
+	if (test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) ||
+		test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features)) {
+		if (kvm_vcpu_enable_ptrauth(vcpu))
+			goto out;
+	}
+
 	switch (vcpu->arch.target) {
 	default:
 		if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features)) {
-- 
2.7.4

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 77+ messages in thread

* [PATCH v9 3/5] KVM: arm64: Add userspace flag to enable pointer authentication
@ 2019-04-12  3:20   ` Amit Daniel Kachhap
  0 siblings, 0 replies; 77+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-12  3:20 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Andrew Jones, Julien Thierry, Marc Zyngier,
	Catalin Marinas, Will Deacon, Christoffer Dall,
	Kristina Martsenko, kvmarm, James Morse, Ramana Radhakrishnan,
	Amit Daniel Kachhap, Dave Martin, linux-kernel

Now that the building blocks of pointer authentication are present, lets
add userspace flags KVM_ARM_VCPU_PTRAUTH_ADDRESS and
KVM_ARM_VCPU_PTRAUTH_GENERIC. These flags will enable pointer
authentication for the KVM guest on a per-vcpu basis through the ioctl
KVM_ARM_VCPU_INIT.

This features will allow the KVM guest to allow the handling of
pointer authentication instructions or to treat them as undefined
if not set.

Necessary documentations are added to reflect the changes done.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---

Changes since v8:
*  Update vcpu->arch.flags to final enable state. [Dave Martin]
*  Change in Documentation to make clear the implementation of 2 vcpu
   feature flags. [Dave Martin]

 Documentation/arm64/pointer-authentication.txt | 22 ++++++++++++++++++----
 Documentation/virtual/kvm/api.txt              |  6 ++++++
 arch/arm64/include/asm/kvm_host.h              |  2 +-
 arch/arm64/include/uapi/asm/kvm.h              |  2 ++
 arch/arm64/kvm/reset.c                         | 24 ++++++++++++++++++++++++
 5 files changed, 51 insertions(+), 5 deletions(-)

diff --git a/Documentation/arm64/pointer-authentication.txt b/Documentation/arm64/pointer-authentication.txt
index 5baca42..fc71b33 100644
--- a/Documentation/arm64/pointer-authentication.txt
+++ b/Documentation/arm64/pointer-authentication.txt
@@ -87,7 +87,21 @@ used to get and set the keys for a thread.
 Virtualization
 --------------
 
-Pointer authentication is not currently supported in KVM guests. KVM
-will mask the feature bits from ID_AA64ISAR1_EL1, and attempted use of
-the feature will result in an UNDEFINED exception being injected into
-the guest.
+Pointer authentication is enabled in KVM guest when each virtual cpu is
+initialised by passing flags KVM_ARM_VCPU_PTRAUTH_[ADDRESS/GENERIC] and
+requesting these two separate cpu features to be enabled. The current KVM
+guest implementation works by enabling both features together, so both
+these userspace flags are checked before enabling pointer authentication.
+The separate userspace flag will allow to have no userspace ABI changes
+if support is added in the future to allow these two features to be
+enabled independently of one another.
+
+As Arm Architecture specifies that Pointer Authentication feature is
+implemented along with the VHE feature so KVM arm64 ptrauth code relies
+on VHE mode to be present.
+
+Additionally, when these vcpu feature flags are not set then KVM will
+filter out the Pointer Authentication system key registers from
+KVM_GET/SET_REG_* ioctls and mask those features from cpufeature ID
+register. Any attempt to use the Pointer Authentication instructions will
+result in an UNDEFINED exception being injected into the guest.
diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index 68509de..9d202f4 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -2753,6 +2753,12 @@ Possible features:
 	  Depends on KVM_CAP_ARM_PSCI_0_2.
 	- KVM_ARM_VCPU_PMU_V3: Emulate PMUv3 for the CPU.
 	  Depends on KVM_CAP_ARM_PMU_V3.
+	- KVM_ARM_VCPU_PTRAUTH_ADDRESS: Enables Address Pointer authentication
+	  for the CPU and supported only on arm64 architecture.
+	  Must be requested if KVM_ARM_VCPU_PTRAUTH_GENERIC is also requested.
+	- KVM_ARM_VCPU_PTRAUTH_GENERIC: Enables Generic Pointer authentication
+	  for the CPU and supported only on arm64 architecture.
+	  Must be requested if KVM_ARM_VCPU_PTRAUTH_ADDRESS is also requested.
 
 	- KVM_ARM_VCPU_SVE: Enables SVE for the CPU (arm64 only).
 	  Depends on KVM_CAP_ARM_SVE.
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index a585d82..25f2598 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -49,7 +49,7 @@
 
 #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS
 
-#define KVM_VCPU_MAX_FEATURES 5
+#define KVM_VCPU_MAX_FEATURES 7
 
 #define KVM_REQ_SLEEP \
 	KVM_ARCH_REQ_FLAGS(0, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index 6963b7e..fec2253 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -103,6 +103,8 @@ struct kvm_regs {
 #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
 #define KVM_ARM_VCPU_PMU_V3		3 /* Support guest PMUv3 */
 #define KVM_ARM_VCPU_SVE		4 /* enable SVE for this CPU */
+#define KVM_ARM_VCPU_PTRAUTH_ADDRESS	5 /* VCPU uses address authentication */
+#define KVM_ARM_VCPU_PTRAUTH_GENERIC	6 /* VCPU uses generic authentication */
 
 struct kvm_vcpu_init {
 	__u32 target;
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index f13378d..d13406b 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -221,6 +221,24 @@ static void kvm_vcpu_reset_sve(struct kvm_vcpu *vcpu)
 		memset(vcpu->arch.sve_state, 0, vcpu_sve_state_size(vcpu));
 }
 
+static int kvm_vcpu_enable_ptrauth(struct kvm_vcpu *vcpu)
+{
+	/* Support ptrauth only if the system supports these capabilities. */
+	if (!has_vhe() || !system_supports_address_auth() ||
+		!system_supports_generic_auth())
+		return -EINVAL;
+	/*
+	 * Make sure that both address/generic pointer authentication
+	 * features are requested by the userspace together.
+	 */
+	if (!test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) ||
+		!test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features))
+		return -EINVAL;
+
+	vcpu->arch.flags |= KVM_ARM64_GUEST_HAS_PTRAUTH;
+	return 0;
+}
+
 /**
  * kvm_reset_vcpu - sets core registers and sys_regs to reset value
  * @vcpu: The VCPU pointer
@@ -261,6 +279,12 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 		kvm_vcpu_reset_sve(vcpu);
 	}
 
+	if (test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) ||
+		test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features)) {
+		if (kvm_vcpu_enable_ptrauth(vcpu))
+			goto out;
+	}
+
 	switch (vcpu->arch.target) {
 	default:
 		if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features)) {
-- 
2.7.4


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 77+ messages in thread

* [PATCH v9 4/5] KVM: arm64: Add capability to advertise ptrauth for guest
@ 2019-04-12  3:20   ` Amit Daniel Kachhap
  0 siblings, 0 replies; 77+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-12  3:20 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	Kristina Martsenko, linux-kernel, Amit Daniel Kachhap,
	Mark Rutland, James Morse, Julien Thierry

This patch advertises the capability of two cpu feature called address
pointer authentication and generic pointer authentication. These
capabilities depend upon system support for pointer authentication and
VHE mode.

The current arm64 KVM partially implements pointer authentication and
support of address/generic authentication are tied together. However,
separate ABI requirements for both of them is added so that any future
isolated implementation will not require any ABI changes.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---
Changes since v8:
*  Keep the capability check same for the 2 vcpu ptrauth features. [Dave Martin]

 Documentation/virtual/kvm/api.txt | 2 ++
 arch/arm64/kvm/reset.c            | 5 +++++
 include/uapi/linux/kvm.h          | 2 ++
 3 files changed, 9 insertions(+)

diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index 9d202f4..56021d0 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -2756,9 +2756,11 @@ Possible features:
 	- KVM_ARM_VCPU_PTRAUTH_ADDRESS: Enables Address Pointer authentication
 	  for the CPU and supported only on arm64 architecture.
 	  Must be requested if KVM_ARM_VCPU_PTRAUTH_GENERIC is also requested.
+	  Depends on KVM_CAP_ARM_PTRAUTH_ADDRESS.
 	- KVM_ARM_VCPU_PTRAUTH_GENERIC: Enables Generic Pointer authentication
 	  for the CPU and supported only on arm64 architecture.
 	  Must be requested if KVM_ARM_VCPU_PTRAUTH_ADDRESS is also requested.
+	  Depends on KVM_CAP_ARM_PTRAUTH_GENERIC.
 
 	- KVM_ARM_VCPU_SVE: Enables SVE for the CPU (arm64 only).
 	  Depends on KVM_CAP_ARM_SVE.
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index d13406b..be657f6 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -101,6 +101,11 @@ int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_ARM_SVE:
 		r = system_supports_sve();
 		break;
+	case KVM_CAP_ARM_PTRAUTH_ADDRESS:
+	case KVM_CAP_ARM_PTRAUTH_GENERIC:
+		r = has_vhe() && system_supports_address_auth() &&
+				system_supports_generic_auth();
+		break;
 	default:
 		r = 0;
 	}
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 1d56444..4dc34f8 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -989,6 +989,8 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_MANUAL_DIRTY_LOG_PROTECT 166
 #define KVM_CAP_HYPERV_CPUID 167
 #define KVM_CAP_ARM_SVE 168
+#define KVM_CAP_ARM_PTRAUTH_ADDRESS 169
+#define KVM_CAP_ARM_PTRAUTH_GENERIC 170
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 77+ messages in thread

* [PATCH v9 4/5] KVM: arm64: Add capability to advertise ptrauth for guest
@ 2019-04-12  3:20   ` Amit Daniel Kachhap
  0 siblings, 0 replies; 77+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-12  3:20 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Marc Zyngier, Catalin Marinas, Will Deacon, Kristina Martsenko,
	kvmarm, Ramana Radhakrishnan, Amit Daniel Kachhap, Dave Martin,
	linux-kernel

This patch advertises the capability of two cpu feature called address
pointer authentication and generic pointer authentication. These
capabilities depend upon system support for pointer authentication and
VHE mode.

The current arm64 KVM partially implements pointer authentication and
support of address/generic authentication are tied together. However,
separate ABI requirements for both of them is added so that any future
isolated implementation will not require any ABI changes.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---
Changes since v8:
*  Keep the capability check same for the 2 vcpu ptrauth features. [Dave Martin]

 Documentation/virtual/kvm/api.txt | 2 ++
 arch/arm64/kvm/reset.c            | 5 +++++
 include/uapi/linux/kvm.h          | 2 ++
 3 files changed, 9 insertions(+)

diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index 9d202f4..56021d0 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -2756,9 +2756,11 @@ Possible features:
 	- KVM_ARM_VCPU_PTRAUTH_ADDRESS: Enables Address Pointer authentication
 	  for the CPU and supported only on arm64 architecture.
 	  Must be requested if KVM_ARM_VCPU_PTRAUTH_GENERIC is also requested.
+	  Depends on KVM_CAP_ARM_PTRAUTH_ADDRESS.
 	- KVM_ARM_VCPU_PTRAUTH_GENERIC: Enables Generic Pointer authentication
 	  for the CPU and supported only on arm64 architecture.
 	  Must be requested if KVM_ARM_VCPU_PTRAUTH_ADDRESS is also requested.
+	  Depends on KVM_CAP_ARM_PTRAUTH_GENERIC.
 
 	- KVM_ARM_VCPU_SVE: Enables SVE for the CPU (arm64 only).
 	  Depends on KVM_CAP_ARM_SVE.
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index d13406b..be657f6 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -101,6 +101,11 @@ int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_ARM_SVE:
 		r = system_supports_sve();
 		break;
+	case KVM_CAP_ARM_PTRAUTH_ADDRESS:
+	case KVM_CAP_ARM_PTRAUTH_GENERIC:
+		r = has_vhe() && system_supports_address_auth() &&
+				system_supports_generic_auth();
+		break;
 	default:
 		r = 0;
 	}
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 1d56444..4dc34f8 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -989,6 +989,8 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_MANUAL_DIRTY_LOG_PROTECT 166
 #define KVM_CAP_HYPERV_CPUID 167
 #define KVM_CAP_ARM_SVE 168
+#define KVM_CAP_ARM_PTRAUTH_ADDRESS 169
+#define KVM_CAP_ARM_PTRAUTH_GENERIC 170
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
-- 
2.7.4

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 77+ messages in thread

* [PATCH v9 4/5] KVM: arm64: Add capability to advertise ptrauth for guest
@ 2019-04-12  3:20   ` Amit Daniel Kachhap
  0 siblings, 0 replies; 77+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-12  3:20 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Andrew Jones, Julien Thierry, Marc Zyngier,
	Catalin Marinas, Will Deacon, Christoffer Dall,
	Kristina Martsenko, kvmarm, James Morse, Ramana Radhakrishnan,
	Amit Daniel Kachhap, Dave Martin, linux-kernel

This patch advertises the capability of two cpu feature called address
pointer authentication and generic pointer authentication. These
capabilities depend upon system support for pointer authentication and
VHE mode.

The current arm64 KVM partially implements pointer authentication and
support of address/generic authentication are tied together. However,
separate ABI requirements for both of them is added so that any future
isolated implementation will not require any ABI changes.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---
Changes since v8:
*  Keep the capability check same for the 2 vcpu ptrauth features. [Dave Martin]

 Documentation/virtual/kvm/api.txt | 2 ++
 arch/arm64/kvm/reset.c            | 5 +++++
 include/uapi/linux/kvm.h          | 2 ++
 3 files changed, 9 insertions(+)

diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index 9d202f4..56021d0 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -2756,9 +2756,11 @@ Possible features:
 	- KVM_ARM_VCPU_PTRAUTH_ADDRESS: Enables Address Pointer authentication
 	  for the CPU and supported only on arm64 architecture.
 	  Must be requested if KVM_ARM_VCPU_PTRAUTH_GENERIC is also requested.
+	  Depends on KVM_CAP_ARM_PTRAUTH_ADDRESS.
 	- KVM_ARM_VCPU_PTRAUTH_GENERIC: Enables Generic Pointer authentication
 	  for the CPU and supported only on arm64 architecture.
 	  Must be requested if KVM_ARM_VCPU_PTRAUTH_ADDRESS is also requested.
+	  Depends on KVM_CAP_ARM_PTRAUTH_GENERIC.
 
 	- KVM_ARM_VCPU_SVE: Enables SVE for the CPU (arm64 only).
 	  Depends on KVM_CAP_ARM_SVE.
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index d13406b..be657f6 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -101,6 +101,11 @@ int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_ARM_SVE:
 		r = system_supports_sve();
 		break;
+	case KVM_CAP_ARM_PTRAUTH_ADDRESS:
+	case KVM_CAP_ARM_PTRAUTH_GENERIC:
+		r = has_vhe() && system_supports_address_auth() &&
+				system_supports_generic_auth();
+		break;
 	default:
 		r = 0;
 	}
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 1d56444..4dc34f8 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -989,6 +989,8 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_MANUAL_DIRTY_LOG_PROTECT 166
 #define KVM_CAP_HYPERV_CPUID 167
 #define KVM_CAP_ARM_SVE 168
+#define KVM_CAP_ARM_PTRAUTH_ADDRESS 169
+#define KVM_CAP_ARM_PTRAUTH_GENERIC 170
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
-- 
2.7.4


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 77+ messages in thread

* [kvmtool PATCH v9 5/5] KVM: arm/arm64: Add a vcpu feature for pointer authentication
@ 2019-04-12  3:20   ` Amit Daniel Kachhap
  0 siblings, 0 replies; 77+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-12  3:20 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
	Andrew Jones, Dave Martin, Ramana Radhakrishnan, kvmarm,
	Kristina Martsenko, linux-kernel, Amit Daniel Kachhap,
	Mark Rutland, James Morse, Julien Thierry

This patch adds a runtime capabality for KVM tool to enable Arm64 8.3
Pointer Authentication in guest kernel. Two vcpu features
KVM_ARM_VCPU_PTRAUTH_[ADDRESS/GENERIC] are supplied together to enable
Pointer Authentication in KVM guest after checking the capability.

Command line options --enable-ptrauth and --disable-ptrauth are added
to use this feature. However, if those options are not provided then
also this feature is enabled if host supports this capability.

The macros defined in the headers are not in sync and should be replaced
from the upstream.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
Changes since v8:
*  Added option --enable-ptrauth and --disable-ptrauth to use ptrauth. Also
   enable ptrauth if no option provided and Host supports ptrauth. [Dave Martin]
* The macro definition are not linear as the kvmtool is not synchronised with the
  kernel changes present in kvmarm/next tree.

 arm/aarch32/include/kvm/kvm-cpu-arch.h    |  1 +
 arm/aarch64/include/asm/kvm.h             |  2 ++
 arm/aarch64/include/kvm/kvm-config-arch.h |  6 +++++-
 arm/aarch64/include/kvm/kvm-cpu-arch.h    |  2 ++
 arm/include/arm-common/kvm-config-arch.h  |  2 ++
 arm/kvm-cpu.c                             | 11 +++++++++++
 include/linux/kvm.h                       |  2 ++
 7 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/arm/aarch32/include/kvm/kvm-cpu-arch.h b/arm/aarch32/include/kvm/kvm-cpu-arch.h
index d28ea67..520ea76 100644
--- a/arm/aarch32/include/kvm/kvm-cpu-arch.h
+++ b/arm/aarch32/include/kvm/kvm-cpu-arch.h
@@ -13,4 +13,5 @@
 #define ARM_CPU_ID		0, 0, 0
 #define ARM_CPU_ID_MPIDR	5
 
+#define ARM_VCPU_PTRAUTH_FEATURE	0
 #endif /* KVM__KVM_CPU_ARCH_H */
diff --git a/arm/aarch64/include/asm/kvm.h b/arm/aarch64/include/asm/kvm.h
index 97c3478..a2546e6 100644
--- a/arm/aarch64/include/asm/kvm.h
+++ b/arm/aarch64/include/asm/kvm.h
@@ -102,6 +102,8 @@ struct kvm_regs {
 #define KVM_ARM_VCPU_EL1_32BIT		1 /* CPU running a 32bit VM */
 #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
 #define KVM_ARM_VCPU_PMU_V3		3 /* Support guest PMUv3 */
+#define KVM_ARM_VCPU_PTRAUTH_ADDRESS	5 /* CPU uses address pointer authentication */
+#define KVM_ARM_VCPU_PTRAUTH_GENERIC	6 /* CPU uses generic pointer authentication */
 
 struct kvm_vcpu_init {
 	__u32 target;
diff --git a/arm/aarch64/include/kvm/kvm-config-arch.h b/arm/aarch64/include/kvm/kvm-config-arch.h
index 04be43d..0279b13 100644
--- a/arm/aarch64/include/kvm/kvm-config-arch.h
+++ b/arm/aarch64/include/kvm/kvm-config-arch.h
@@ -8,7 +8,11 @@
 			"Create PMUv3 device"),				\
 	OPT_U64('\0', "kaslr-seed", &(cfg)->kaslr_seed,			\
 			"Specify random seed for Kernel Address Space "	\
-			"Layout Randomization (KASLR)"),
+			"Layout Randomization (KASLR)"),		\
+	OPT_BOOLEAN('\0', "enable-ptrauth", &(cfg)->enable_ptrauth,	\
+			"Enables pointer authentication"),		\
+	OPT_BOOLEAN('\0', "disable-ptrauth", &(cfg)->disable_ptrauth,	\
+			"Disables pointer authentication"),
 
 #include "arm-common/kvm-config-arch.h"
 
diff --git a/arm/aarch64/include/kvm/kvm-cpu-arch.h b/arm/aarch64/include/kvm/kvm-cpu-arch.h
index a9d8563..fcc2107 100644
--- a/arm/aarch64/include/kvm/kvm-cpu-arch.h
+++ b/arm/aarch64/include/kvm/kvm-cpu-arch.h
@@ -17,4 +17,6 @@
 #define ARM_CPU_CTRL		3, 0, 1, 0
 #define ARM_CPU_CTRL_SCTLR_EL1	0
 
+#define ARM_VCPU_PTRAUTH_FEATURE	((1UL << KVM_ARM_VCPU_PTRAUTH_ADDRESS) \
+					| (1UL << KVM_ARM_VCPU_PTRAUTH_GENERIC))
 #endif /* KVM__KVM_CPU_ARCH_H */
diff --git a/arm/include/arm-common/kvm-config-arch.h b/arm/include/arm-common/kvm-config-arch.h
index 5734c46..1b4287d 100644
--- a/arm/include/arm-common/kvm-config-arch.h
+++ b/arm/include/arm-common/kvm-config-arch.h
@@ -10,6 +10,8 @@ struct kvm_config_arch {
 	bool		aarch32_guest;
 	bool		has_pmuv3;
 	u64		kaslr_seed;
+	bool		enable_ptrauth;
+	bool		disable_ptrauth;
 	enum irqchip_type irqchip;
 	u64		fw_addr;
 };
diff --git a/arm/kvm-cpu.c b/arm/kvm-cpu.c
index 7780251..a45a649 100644
--- a/arm/kvm-cpu.c
+++ b/arm/kvm-cpu.c
@@ -69,6 +69,17 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned long cpu_id)
 	}
 
 	/*
+	 * Always enable Pointer Authentication if requested. If system supports
+	 * this extension then also enable it by default provided no disable
+	 * request present.
+	 */
+	if ((kvm->cfg.arch.enable_ptrauth) ||
+		(kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_ADDRESS) &&
+		kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_GENERIC) &&
+		!kvm->cfg.arch.disable_ptrauth))
+			vcpu_init.features[0] |= ARM_VCPU_PTRAUTH_FEATURE;
+
+	/*
 	 * If the preferred target ioctl is successful then
 	 * use preferred target else try each and every target type
 	 */
diff --git a/include/linux/kvm.h b/include/linux/kvm.h
index 6d4ea4b..de1033b 100644
--- a/include/linux/kvm.h
+++ b/include/linux/kvm.h
@@ -988,6 +988,8 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_ARM_VM_IPA_SIZE 165
 #define KVM_CAP_MANUAL_DIRTY_LOG_PROTECT 166
 #define KVM_CAP_HYPERV_CPUID 167
+#define KVM_CAP_ARM_PTRAUTH_ADDRESS 169
+#define KVM_CAP_ARM_PTRAUTH_GENERIC 170
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 77+ messages in thread

* [kvmtool PATCH v9 5/5] KVM: arm/arm64: Add a vcpu feature for pointer authentication
@ 2019-04-12  3:20   ` Amit Daniel Kachhap
  0 siblings, 0 replies; 77+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-12  3:20 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Marc Zyngier, Catalin Marinas, Will Deacon, Kristina Martsenko,
	kvmarm, Ramana Radhakrishnan, Amit Daniel Kachhap, Dave Martin,
	linux-kernel

This patch adds a runtime capabality for KVM tool to enable Arm64 8.3
Pointer Authentication in guest kernel. Two vcpu features
KVM_ARM_VCPU_PTRAUTH_[ADDRESS/GENERIC] are supplied together to enable
Pointer Authentication in KVM guest after checking the capability.

Command line options --enable-ptrauth and --disable-ptrauth are added
to use this feature. However, if those options are not provided then
also this feature is enabled if host supports this capability.

The macros defined in the headers are not in sync and should be replaced
from the upstream.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
Changes since v8:
*  Added option --enable-ptrauth and --disable-ptrauth to use ptrauth. Also
   enable ptrauth if no option provided and Host supports ptrauth. [Dave Martin]
* The macro definition are not linear as the kvmtool is not synchronised with the
  kernel changes present in kvmarm/next tree.

 arm/aarch32/include/kvm/kvm-cpu-arch.h    |  1 +
 arm/aarch64/include/asm/kvm.h             |  2 ++
 arm/aarch64/include/kvm/kvm-config-arch.h |  6 +++++-
 arm/aarch64/include/kvm/kvm-cpu-arch.h    |  2 ++
 arm/include/arm-common/kvm-config-arch.h  |  2 ++
 arm/kvm-cpu.c                             | 11 +++++++++++
 include/linux/kvm.h                       |  2 ++
 7 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/arm/aarch32/include/kvm/kvm-cpu-arch.h b/arm/aarch32/include/kvm/kvm-cpu-arch.h
index d28ea67..520ea76 100644
--- a/arm/aarch32/include/kvm/kvm-cpu-arch.h
+++ b/arm/aarch32/include/kvm/kvm-cpu-arch.h
@@ -13,4 +13,5 @@
 #define ARM_CPU_ID		0, 0, 0
 #define ARM_CPU_ID_MPIDR	5
 
+#define ARM_VCPU_PTRAUTH_FEATURE	0
 #endif /* KVM__KVM_CPU_ARCH_H */
diff --git a/arm/aarch64/include/asm/kvm.h b/arm/aarch64/include/asm/kvm.h
index 97c3478..a2546e6 100644
--- a/arm/aarch64/include/asm/kvm.h
+++ b/arm/aarch64/include/asm/kvm.h
@@ -102,6 +102,8 @@ struct kvm_regs {
 #define KVM_ARM_VCPU_EL1_32BIT		1 /* CPU running a 32bit VM */
 #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
 #define KVM_ARM_VCPU_PMU_V3		3 /* Support guest PMUv3 */
+#define KVM_ARM_VCPU_PTRAUTH_ADDRESS	5 /* CPU uses address pointer authentication */
+#define KVM_ARM_VCPU_PTRAUTH_GENERIC	6 /* CPU uses generic pointer authentication */
 
 struct kvm_vcpu_init {
 	__u32 target;
diff --git a/arm/aarch64/include/kvm/kvm-config-arch.h b/arm/aarch64/include/kvm/kvm-config-arch.h
index 04be43d..0279b13 100644
--- a/arm/aarch64/include/kvm/kvm-config-arch.h
+++ b/arm/aarch64/include/kvm/kvm-config-arch.h
@@ -8,7 +8,11 @@
 			"Create PMUv3 device"),				\
 	OPT_U64('\0', "kaslr-seed", &(cfg)->kaslr_seed,			\
 			"Specify random seed for Kernel Address Space "	\
-			"Layout Randomization (KASLR)"),
+			"Layout Randomization (KASLR)"),		\
+	OPT_BOOLEAN('\0', "enable-ptrauth", &(cfg)->enable_ptrauth,	\
+			"Enables pointer authentication"),		\
+	OPT_BOOLEAN('\0', "disable-ptrauth", &(cfg)->disable_ptrauth,	\
+			"Disables pointer authentication"),
 
 #include "arm-common/kvm-config-arch.h"
 
diff --git a/arm/aarch64/include/kvm/kvm-cpu-arch.h b/arm/aarch64/include/kvm/kvm-cpu-arch.h
index a9d8563..fcc2107 100644
--- a/arm/aarch64/include/kvm/kvm-cpu-arch.h
+++ b/arm/aarch64/include/kvm/kvm-cpu-arch.h
@@ -17,4 +17,6 @@
 #define ARM_CPU_CTRL		3, 0, 1, 0
 #define ARM_CPU_CTRL_SCTLR_EL1	0
 
+#define ARM_VCPU_PTRAUTH_FEATURE	((1UL << KVM_ARM_VCPU_PTRAUTH_ADDRESS) \
+					| (1UL << KVM_ARM_VCPU_PTRAUTH_GENERIC))
 #endif /* KVM__KVM_CPU_ARCH_H */
diff --git a/arm/include/arm-common/kvm-config-arch.h b/arm/include/arm-common/kvm-config-arch.h
index 5734c46..1b4287d 100644
--- a/arm/include/arm-common/kvm-config-arch.h
+++ b/arm/include/arm-common/kvm-config-arch.h
@@ -10,6 +10,8 @@ struct kvm_config_arch {
 	bool		aarch32_guest;
 	bool		has_pmuv3;
 	u64		kaslr_seed;
+	bool		enable_ptrauth;
+	bool		disable_ptrauth;
 	enum irqchip_type irqchip;
 	u64		fw_addr;
 };
diff --git a/arm/kvm-cpu.c b/arm/kvm-cpu.c
index 7780251..a45a649 100644
--- a/arm/kvm-cpu.c
+++ b/arm/kvm-cpu.c
@@ -69,6 +69,17 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned long cpu_id)
 	}
 
 	/*
+	 * Always enable Pointer Authentication if requested. If system supports
+	 * this extension then also enable it by default provided no disable
+	 * request present.
+	 */
+	if ((kvm->cfg.arch.enable_ptrauth) ||
+		(kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_ADDRESS) &&
+		kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_GENERIC) &&
+		!kvm->cfg.arch.disable_ptrauth))
+			vcpu_init.features[0] |= ARM_VCPU_PTRAUTH_FEATURE;
+
+	/*
 	 * If the preferred target ioctl is successful then
 	 * use preferred target else try each and every target type
 	 */
diff --git a/include/linux/kvm.h b/include/linux/kvm.h
index 6d4ea4b..de1033b 100644
--- a/include/linux/kvm.h
+++ b/include/linux/kvm.h
@@ -988,6 +988,8 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_ARM_VM_IPA_SIZE 165
 #define KVM_CAP_MANUAL_DIRTY_LOG_PROTECT 166
 #define KVM_CAP_HYPERV_CPUID 167
+#define KVM_CAP_ARM_PTRAUTH_ADDRESS 169
+#define KVM_CAP_ARM_PTRAUTH_GENERIC 170
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
-- 
2.7.4

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 77+ messages in thread

* [kvmtool PATCH v9 5/5] KVM: arm/arm64: Add a vcpu feature for pointer authentication
@ 2019-04-12  3:20   ` Amit Daniel Kachhap
  0 siblings, 0 replies; 77+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-12  3:20 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Andrew Jones, Julien Thierry, Marc Zyngier,
	Catalin Marinas, Will Deacon, Christoffer Dall,
	Kristina Martsenko, kvmarm, James Morse, Ramana Radhakrishnan,
	Amit Daniel Kachhap, Dave Martin, linux-kernel

This patch adds a runtime capabality for KVM tool to enable Arm64 8.3
Pointer Authentication in guest kernel. Two vcpu features
KVM_ARM_VCPU_PTRAUTH_[ADDRESS/GENERIC] are supplied together to enable
Pointer Authentication in KVM guest after checking the capability.

Command line options --enable-ptrauth and --disable-ptrauth are added
to use this feature. However, if those options are not provided then
also this feature is enabled if host supports this capability.

The macros defined in the headers are not in sync and should be replaced
from the upstream.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
Changes since v8:
*  Added option --enable-ptrauth and --disable-ptrauth to use ptrauth. Also
   enable ptrauth if no option provided and Host supports ptrauth. [Dave Martin]
* The macro definition are not linear as the kvmtool is not synchronised with the
  kernel changes present in kvmarm/next tree.

 arm/aarch32/include/kvm/kvm-cpu-arch.h    |  1 +
 arm/aarch64/include/asm/kvm.h             |  2 ++
 arm/aarch64/include/kvm/kvm-config-arch.h |  6 +++++-
 arm/aarch64/include/kvm/kvm-cpu-arch.h    |  2 ++
 arm/include/arm-common/kvm-config-arch.h  |  2 ++
 arm/kvm-cpu.c                             | 11 +++++++++++
 include/linux/kvm.h                       |  2 ++
 7 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/arm/aarch32/include/kvm/kvm-cpu-arch.h b/arm/aarch32/include/kvm/kvm-cpu-arch.h
index d28ea67..520ea76 100644
--- a/arm/aarch32/include/kvm/kvm-cpu-arch.h
+++ b/arm/aarch32/include/kvm/kvm-cpu-arch.h
@@ -13,4 +13,5 @@
 #define ARM_CPU_ID		0, 0, 0
 #define ARM_CPU_ID_MPIDR	5
 
+#define ARM_VCPU_PTRAUTH_FEATURE	0
 #endif /* KVM__KVM_CPU_ARCH_H */
diff --git a/arm/aarch64/include/asm/kvm.h b/arm/aarch64/include/asm/kvm.h
index 97c3478..a2546e6 100644
--- a/arm/aarch64/include/asm/kvm.h
+++ b/arm/aarch64/include/asm/kvm.h
@@ -102,6 +102,8 @@ struct kvm_regs {
 #define KVM_ARM_VCPU_EL1_32BIT		1 /* CPU running a 32bit VM */
 #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
 #define KVM_ARM_VCPU_PMU_V3		3 /* Support guest PMUv3 */
+#define KVM_ARM_VCPU_PTRAUTH_ADDRESS	5 /* CPU uses address pointer authentication */
+#define KVM_ARM_VCPU_PTRAUTH_GENERIC	6 /* CPU uses generic pointer authentication */
 
 struct kvm_vcpu_init {
 	__u32 target;
diff --git a/arm/aarch64/include/kvm/kvm-config-arch.h b/arm/aarch64/include/kvm/kvm-config-arch.h
index 04be43d..0279b13 100644
--- a/arm/aarch64/include/kvm/kvm-config-arch.h
+++ b/arm/aarch64/include/kvm/kvm-config-arch.h
@@ -8,7 +8,11 @@
 			"Create PMUv3 device"),				\
 	OPT_U64('\0', "kaslr-seed", &(cfg)->kaslr_seed,			\
 			"Specify random seed for Kernel Address Space "	\
-			"Layout Randomization (KASLR)"),
+			"Layout Randomization (KASLR)"),		\
+	OPT_BOOLEAN('\0', "enable-ptrauth", &(cfg)->enable_ptrauth,	\
+			"Enables pointer authentication"),		\
+	OPT_BOOLEAN('\0', "disable-ptrauth", &(cfg)->disable_ptrauth,	\
+			"Disables pointer authentication"),
 
 #include "arm-common/kvm-config-arch.h"
 
diff --git a/arm/aarch64/include/kvm/kvm-cpu-arch.h b/arm/aarch64/include/kvm/kvm-cpu-arch.h
index a9d8563..fcc2107 100644
--- a/arm/aarch64/include/kvm/kvm-cpu-arch.h
+++ b/arm/aarch64/include/kvm/kvm-cpu-arch.h
@@ -17,4 +17,6 @@
 #define ARM_CPU_CTRL		3, 0, 1, 0
 #define ARM_CPU_CTRL_SCTLR_EL1	0
 
+#define ARM_VCPU_PTRAUTH_FEATURE	((1UL << KVM_ARM_VCPU_PTRAUTH_ADDRESS) \
+					| (1UL << KVM_ARM_VCPU_PTRAUTH_GENERIC))
 #endif /* KVM__KVM_CPU_ARCH_H */
diff --git a/arm/include/arm-common/kvm-config-arch.h b/arm/include/arm-common/kvm-config-arch.h
index 5734c46..1b4287d 100644
--- a/arm/include/arm-common/kvm-config-arch.h
+++ b/arm/include/arm-common/kvm-config-arch.h
@@ -10,6 +10,8 @@ struct kvm_config_arch {
 	bool		aarch32_guest;
 	bool		has_pmuv3;
 	u64		kaslr_seed;
+	bool		enable_ptrauth;
+	bool		disable_ptrauth;
 	enum irqchip_type irqchip;
 	u64		fw_addr;
 };
diff --git a/arm/kvm-cpu.c b/arm/kvm-cpu.c
index 7780251..a45a649 100644
--- a/arm/kvm-cpu.c
+++ b/arm/kvm-cpu.c
@@ -69,6 +69,17 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned long cpu_id)
 	}
 
 	/*
+	 * Always enable Pointer Authentication if requested. If system supports
+	 * this extension then also enable it by default provided no disable
+	 * request present.
+	 */
+	if ((kvm->cfg.arch.enable_ptrauth) ||
+		(kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_ADDRESS) &&
+		kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_GENERIC) &&
+		!kvm->cfg.arch.disable_ptrauth))
+			vcpu_init.features[0] |= ARM_VCPU_PTRAUTH_FEATURE;
+
+	/*
 	 * If the preferred target ioctl is successful then
 	 * use preferred target else try each and every target type
 	 */
diff --git a/include/linux/kvm.h b/include/linux/kvm.h
index 6d4ea4b..de1033b 100644
--- a/include/linux/kvm.h
+++ b/include/linux/kvm.h
@@ -988,6 +988,8 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_ARM_VM_IPA_SIZE 165
 #define KVM_CAP_MANUAL_DIRTY_LOG_PROTECT 166
 #define KVM_CAP_HYPERV_CPUID 167
+#define KVM_CAP_ARM_PTRAUTH_ADDRESS 169
+#define KVM_CAP_ARM_PTRAUTH_GENERIC 170
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
-- 
2.7.4


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 77+ messages in thread

* Re: [PATCH v9 1/5] KVM: arm64: Add a vcpu flag to control ptrauth for guest
@ 2019-04-16 16:30     ` Dave Martin
  0 siblings, 0 replies; 77+ messages in thread
From: Dave Martin @ 2019-04-16 16:30 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: linux-arm-kernel, Marc Zyngier, Catalin Marinas, Will Deacon,
	Kristina Martsenko, kvmarm, Ramana Radhakrishnan, linux-kernel

On Fri, Apr 12, 2019 at 08:50:32AM +0530, Amit Daniel Kachhap wrote:
> A per vcpu flag is added to check if pointer authentication is
> enabled for the vcpu or not. This flag may be enabled according to
> the necessary user policies and host capabilities.
> 
> This patch also adds a helper to check the flag.
> 
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Christoffer Dall <christoffer.dall@arm.com>
> Cc: kvmarm@lists.cs.columbia.edu

Reviewed-by: Dave Martin <Dave.Martin@arm.com>

> ---
> 
> Changes since v8:
> * Added a new per vcpu flag which will store Pointer Authentication enable
>   status instead of checking them again. [Dave Martin]
> 
>  arch/arm64/include/asm/kvm_host.h | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 9d57cf8..31dbc7c 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -355,10 +355,14 @@ struct kvm_vcpu_arch {
>  #define KVM_ARM64_HOST_SVE_ENABLED	(1 << 4) /* SVE enabled for EL0 */
>  #define KVM_ARM64_GUEST_HAS_SVE		(1 << 5) /* SVE exposed to guest */
>  #define KVM_ARM64_VCPU_SVE_FINALIZED	(1 << 6) /* SVE config completed */
> +#define KVM_ARM64_GUEST_HAS_PTRAUTH	(1 << 7) /* PTRAUTH exposed to guest */
>  
>  #define vcpu_has_sve(vcpu) (system_supports_sve() && \
>  			    ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_SVE))
>  
> +#define vcpu_has_ptrauth(vcpu)	\
> +			((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH)
> +
>  #define vcpu_gp_regs(v)		(&(v)->arch.ctxt.gp_regs)
>  
>  /*
> -- 
> 2.7.4
> 
> _______________________________________________
> kvmarm mailing list
> kvmarm@lists.cs.columbia.edu
> https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v9 1/5] KVM: arm64: Add a vcpu flag to control ptrauth for guest
@ 2019-04-16 16:30     ` Dave Martin
  0 siblings, 0 replies; 77+ messages in thread
From: Dave Martin @ 2019-04-16 16:30 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: Marc Zyngier, Catalin Marinas, Will Deacon, linux-kernel,
	Kristina Martsenko, Ramana Radhakrishnan, kvmarm,
	linux-arm-kernel

On Fri, Apr 12, 2019 at 08:50:32AM +0530, Amit Daniel Kachhap wrote:
> A per vcpu flag is added to check if pointer authentication is
> enabled for the vcpu or not. This flag may be enabled according to
> the necessary user policies and host capabilities.
> 
> This patch also adds a helper to check the flag.
> 
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Christoffer Dall <christoffer.dall@arm.com>
> Cc: kvmarm@lists.cs.columbia.edu

Reviewed-by: Dave Martin <Dave.Martin@arm.com>

> ---
> 
> Changes since v8:
> * Added a new per vcpu flag which will store Pointer Authentication enable
>   status instead of checking them again. [Dave Martin]
> 
>  arch/arm64/include/asm/kvm_host.h | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 9d57cf8..31dbc7c 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -355,10 +355,14 @@ struct kvm_vcpu_arch {
>  #define KVM_ARM64_HOST_SVE_ENABLED	(1 << 4) /* SVE enabled for EL0 */
>  #define KVM_ARM64_GUEST_HAS_SVE		(1 << 5) /* SVE exposed to guest */
>  #define KVM_ARM64_VCPU_SVE_FINALIZED	(1 << 6) /* SVE config completed */
> +#define KVM_ARM64_GUEST_HAS_PTRAUTH	(1 << 7) /* PTRAUTH exposed to guest */
>  
>  #define vcpu_has_sve(vcpu) (system_supports_sve() && \
>  			    ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_SVE))
>  
> +#define vcpu_has_ptrauth(vcpu)	\
> +			((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH)
> +
>  #define vcpu_gp_regs(v)		(&(v)->arch.ctxt.gp_regs)
>  
>  /*
> -- 
> 2.7.4
> 
> _______________________________________________
> kvmarm mailing list
> kvmarm@lists.cs.columbia.edu
> https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v9 1/5] KVM: arm64: Add a vcpu flag to control ptrauth for guest
@ 2019-04-16 16:30     ` Dave Martin
  0 siblings, 0 replies; 77+ messages in thread
From: Dave Martin @ 2019-04-16 16:30 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: Marc Zyngier, Catalin Marinas, Will Deacon, linux-kernel,
	Kristina Martsenko, Ramana Radhakrishnan, kvmarm,
	linux-arm-kernel

On Fri, Apr 12, 2019 at 08:50:32AM +0530, Amit Daniel Kachhap wrote:
> A per vcpu flag is added to check if pointer authentication is
> enabled for the vcpu or not. This flag may be enabled according to
> the necessary user policies and host capabilities.
> 
> This patch also adds a helper to check the flag.
> 
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Christoffer Dall <christoffer.dall@arm.com>
> Cc: kvmarm@lists.cs.columbia.edu

Reviewed-by: Dave Martin <Dave.Martin@arm.com>

> ---
> 
> Changes since v8:
> * Added a new per vcpu flag which will store Pointer Authentication enable
>   status instead of checking them again. [Dave Martin]
> 
>  arch/arm64/include/asm/kvm_host.h | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 9d57cf8..31dbc7c 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -355,10 +355,14 @@ struct kvm_vcpu_arch {
>  #define KVM_ARM64_HOST_SVE_ENABLED	(1 << 4) /* SVE enabled for EL0 */
>  #define KVM_ARM64_GUEST_HAS_SVE		(1 << 5) /* SVE exposed to guest */
>  #define KVM_ARM64_VCPU_SVE_FINALIZED	(1 << 6) /* SVE config completed */
> +#define KVM_ARM64_GUEST_HAS_PTRAUTH	(1 << 7) /* PTRAUTH exposed to guest */
>  
>  #define vcpu_has_sve(vcpu) (system_supports_sve() && \
>  			    ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_SVE))
>  
> +#define vcpu_has_ptrauth(vcpu)	\
> +			((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH)
> +
>  #define vcpu_gp_regs(v)		(&(v)->arch.ctxt.gp_regs)
>  
>  /*
> -- 
> 2.7.4
> 
> _______________________________________________
> kvmarm mailing list
> kvmarm@lists.cs.columbia.edu
> https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v9 3/5] KVM: arm64: Add userspace flag to enable pointer authentication
@ 2019-04-16 16:31     ` Dave Martin
  0 siblings, 0 replies; 77+ messages in thread
From: Dave Martin @ 2019-04-16 16:31 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: linux-arm-kernel, Marc Zyngier, Catalin Marinas, Will Deacon,
	Kristina Martsenko, kvmarm, Ramana Radhakrishnan, linux-kernel

On Fri, Apr 12, 2019 at 08:50:34AM +0530, Amit Daniel Kachhap wrote:
> Now that the building blocks of pointer authentication are present, lets
> add userspace flags KVM_ARM_VCPU_PTRAUTH_ADDRESS and
> KVM_ARM_VCPU_PTRAUTH_GENERIC. These flags will enable pointer
> authentication for the KVM guest on a per-vcpu basis through the ioctl
> KVM_ARM_VCPU_INIT.
> 
> This features will allow the KVM guest to allow the handling of
> pointer authentication instructions or to treat them as undefined
> if not set.
> 
> Necessary documentations are added to reflect the changes done.
> 
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Christoffer Dall <christoffer.dall@arm.com>
> Cc: kvmarm@lists.cs.columbia.edu
> ---
> 
> Changes since v8:
> *  Update vcpu->arch.flags to final enable state. [Dave Martin]
> *  Change in Documentation to make clear the implementation of 2 vcpu
>    feature flags. [Dave Martin]
> 
>  Documentation/arm64/pointer-authentication.txt | 22 ++++++++++++++++++----
>  Documentation/virtual/kvm/api.txt              |  6 ++++++
>  arch/arm64/include/asm/kvm_host.h              |  2 +-
>  arch/arm64/include/uapi/asm/kvm.h              |  2 ++
>  arch/arm64/kvm/reset.c                         | 24 ++++++++++++++++++++++++
>  5 files changed, 51 insertions(+), 5 deletions(-)
> 
> diff --git a/Documentation/arm64/pointer-authentication.txt b/Documentation/arm64/pointer-authentication.txt
> index 5baca42..fc71b33 100644
> --- a/Documentation/arm64/pointer-authentication.txt
> +++ b/Documentation/arm64/pointer-authentication.txt
> @@ -87,7 +87,21 @@ used to get and set the keys for a thread.
>  Virtualization
>  --------------
>  
> -Pointer authentication is not currently supported in KVM guests. KVM
> -will mask the feature bits from ID_AA64ISAR1_EL1, and attempted use of
> -the feature will result in an UNDEFINED exception being injected into
> -the guest.
> +Pointer authentication is enabled in KVM guest when each virtual cpu is
> +initialised by passing flags KVM_ARM_VCPU_PTRAUTH_[ADDRESS/GENERIC] and
> +requesting these two separate cpu features to be enabled. The current KVM
> +guest implementation works by enabling both features together, so both
> +these userspace flags are checked before enabling pointer authentication.
> +The separate userspace flag will allow to have no userspace ABI changes
> +if support is added in the future to allow these two features to be
> +enabled independently of one another.
> +
> +As Arm Architecture specifies that Pointer Authentication feature is
> +implemented along with the VHE feature so KVM arm64 ptrauth code relies
> +on VHE mode to be present.
> +
> +Additionally, when these vcpu feature flags are not set then KVM will
> +filter out the Pointer Authentication system key registers from
> +KVM_GET/SET_REG_* ioctls and mask those features from cpufeature ID
> +register. Any attempt to use the Pointer Authentication instructions will
> +result in an UNDEFINED exception being injected into the guest.
> diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
> index 68509de..9d202f4 100644
> --- a/Documentation/virtual/kvm/api.txt
> +++ b/Documentation/virtual/kvm/api.txt
> @@ -2753,6 +2753,12 @@ Possible features:
>  	  Depends on KVM_CAP_ARM_PSCI_0_2.
>  	- KVM_ARM_VCPU_PMU_V3: Emulate PMUv3 for the CPU.
>  	  Depends on KVM_CAP_ARM_PMU_V3.
> +	- KVM_ARM_VCPU_PTRAUTH_ADDRESS: Enables Address Pointer authentication
> +	  for the CPU and supported only on arm64 architecture.

Nit: (arm64 only) would be less verbose.

> +	  Must be requested if KVM_ARM_VCPU_PTRAUTH_GENERIC is also requested.
> +	- KVM_ARM_VCPU_PTRAUTH_GENERIC: Enables Generic Pointer authentication
> +	  for the CPU and supported only on arm64 architecture.

Ditto.  (Not a big deal, though.)

> +	  Must be requested if KVM_ARM_VCPU_PTRAUTH_ADDRESS is also requested.
>  
>  	- KVM_ARM_VCPU_SVE: Enables SVE for the CPU (arm64 only).
>  	  Depends on KVM_CAP_ARM_SVE.
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index a585d82..25f2598 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -49,7 +49,7 @@
>  
>  #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS
>  
> -#define KVM_VCPU_MAX_FEATURES 5
> +#define KVM_VCPU_MAX_FEATURES 7
>  
>  #define KVM_REQ_SLEEP \
>  	KVM_ARCH_REQ_FLAGS(0, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
> diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
> index 6963b7e..fec2253 100644
> --- a/arch/arm64/include/uapi/asm/kvm.h
> +++ b/arch/arm64/include/uapi/asm/kvm.h
> @@ -103,6 +103,8 @@ struct kvm_regs {
>  #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
>  #define KVM_ARM_VCPU_PMU_V3		3 /* Support guest PMUv3 */
>  #define KVM_ARM_VCPU_SVE		4 /* enable SVE for this CPU */
> +#define KVM_ARM_VCPU_PTRAUTH_ADDRESS	5 /* VCPU uses address authentication */
> +#define KVM_ARM_VCPU_PTRAUTH_GENERIC	6 /* VCPU uses generic authentication */
>  
>  struct kvm_vcpu_init {
>  	__u32 target;
> diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
> index f13378d..d13406b 100644
> --- a/arch/arm64/kvm/reset.c
> +++ b/arch/arm64/kvm/reset.c
> @@ -221,6 +221,24 @@ static void kvm_vcpu_reset_sve(struct kvm_vcpu *vcpu)
>  		memset(vcpu->arch.sve_state, 0, vcpu_sve_state_size(vcpu));
>  }
>  
> +static int kvm_vcpu_enable_ptrauth(struct kvm_vcpu *vcpu)
> +{
> +	/* Support ptrauth only if the system supports these capabilities. */
> +	if (!has_vhe() || !system_supports_address_auth() ||
> +		!system_supports_generic_auth())
> +		return -EINVAL;

Nit: Funny indentation.  Please align with the "if (".  It's also
preferable to keep the two system_supports_xxx_auth() aligned or on the
same line, since they go together.  This might be clearer split up:

	if (!has_vhe())
		return -EINVAL;

	if (!system_supports_address_auth() ||
	    !system_supports_generic_auth())
		return -EINVAL;

> +	/*

I'd say "for now" here, since we might relax this rule later on.

> +	 * Make sure that both address/generic pointer authentication
> +	 * features are requested by the userspace together.
> +	 */
> +	if (!test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) ||
> +		!test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features))
> +		return -EINVAL;
> +
> +	vcpu->arch.flags |= KVM_ARM64_GUEST_HAS_PTRAUTH;
> +	return 0;
> +}
> +
>  /**
>   * kvm_reset_vcpu - sets core registers and sys_regs to reset value
>   * @vcpu: The VCPU pointer
> @@ -261,6 +279,12 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
>  		kvm_vcpu_reset_sve(vcpu);
>  	}
>  
> +	if (test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) ||
> +		test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features)) {

Nit: funny indentation.

> +		if (kvm_vcpu_enable_ptrauth(vcpu))
> +			goto out;
> +	}
> +

[...]

With those fixed,

Reviewed-by: Dave Martin <Dave.Martin@arm.com>

Cheers
---Dave

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v9 3/5] KVM: arm64: Add userspace flag to enable pointer authentication
@ 2019-04-16 16:31     ` Dave Martin
  0 siblings, 0 replies; 77+ messages in thread
From: Dave Martin @ 2019-04-16 16:31 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: Marc Zyngier, Catalin Marinas, Will Deacon, linux-kernel,
	Kristina Martsenko, Ramana Radhakrishnan, kvmarm,
	linux-arm-kernel

On Fri, Apr 12, 2019 at 08:50:34AM +0530, Amit Daniel Kachhap wrote:
> Now that the building blocks of pointer authentication are present, lets
> add userspace flags KVM_ARM_VCPU_PTRAUTH_ADDRESS and
> KVM_ARM_VCPU_PTRAUTH_GENERIC. These flags will enable pointer
> authentication for the KVM guest on a per-vcpu basis through the ioctl
> KVM_ARM_VCPU_INIT.
> 
> This features will allow the KVM guest to allow the handling of
> pointer authentication instructions or to treat them as undefined
> if not set.
> 
> Necessary documentations are added to reflect the changes done.
> 
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Christoffer Dall <christoffer.dall@arm.com>
> Cc: kvmarm@lists.cs.columbia.edu
> ---
> 
> Changes since v8:
> *  Update vcpu->arch.flags to final enable state. [Dave Martin]
> *  Change in Documentation to make clear the implementation of 2 vcpu
>    feature flags. [Dave Martin]
> 
>  Documentation/arm64/pointer-authentication.txt | 22 ++++++++++++++++++----
>  Documentation/virtual/kvm/api.txt              |  6 ++++++
>  arch/arm64/include/asm/kvm_host.h              |  2 +-
>  arch/arm64/include/uapi/asm/kvm.h              |  2 ++
>  arch/arm64/kvm/reset.c                         | 24 ++++++++++++++++++++++++
>  5 files changed, 51 insertions(+), 5 deletions(-)
> 
> diff --git a/Documentation/arm64/pointer-authentication.txt b/Documentation/arm64/pointer-authentication.txt
> index 5baca42..fc71b33 100644
> --- a/Documentation/arm64/pointer-authentication.txt
> +++ b/Documentation/arm64/pointer-authentication.txt
> @@ -87,7 +87,21 @@ used to get and set the keys for a thread.
>  Virtualization
>  --------------
>  
> -Pointer authentication is not currently supported in KVM guests. KVM
> -will mask the feature bits from ID_AA64ISAR1_EL1, and attempted use of
> -the feature will result in an UNDEFINED exception being injected into
> -the guest.
> +Pointer authentication is enabled in KVM guest when each virtual cpu is
> +initialised by passing flags KVM_ARM_VCPU_PTRAUTH_[ADDRESS/GENERIC] and
> +requesting these two separate cpu features to be enabled. The current KVM
> +guest implementation works by enabling both features together, so both
> +these userspace flags are checked before enabling pointer authentication.
> +The separate userspace flag will allow to have no userspace ABI changes
> +if support is added in the future to allow these two features to be
> +enabled independently of one another.
> +
> +As Arm Architecture specifies that Pointer Authentication feature is
> +implemented along with the VHE feature so KVM arm64 ptrauth code relies
> +on VHE mode to be present.
> +
> +Additionally, when these vcpu feature flags are not set then KVM will
> +filter out the Pointer Authentication system key registers from
> +KVM_GET/SET_REG_* ioctls and mask those features from cpufeature ID
> +register. Any attempt to use the Pointer Authentication instructions will
> +result in an UNDEFINED exception being injected into the guest.
> diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
> index 68509de..9d202f4 100644
> --- a/Documentation/virtual/kvm/api.txt
> +++ b/Documentation/virtual/kvm/api.txt
> @@ -2753,6 +2753,12 @@ Possible features:
>  	  Depends on KVM_CAP_ARM_PSCI_0_2.
>  	- KVM_ARM_VCPU_PMU_V3: Emulate PMUv3 for the CPU.
>  	  Depends on KVM_CAP_ARM_PMU_V3.
> +	- KVM_ARM_VCPU_PTRAUTH_ADDRESS: Enables Address Pointer authentication
> +	  for the CPU and supported only on arm64 architecture.

Nit: (arm64 only) would be less verbose.

> +	  Must be requested if KVM_ARM_VCPU_PTRAUTH_GENERIC is also requested.
> +	- KVM_ARM_VCPU_PTRAUTH_GENERIC: Enables Generic Pointer authentication
> +	  for the CPU and supported only on arm64 architecture.

Ditto.  (Not a big deal, though.)

> +	  Must be requested if KVM_ARM_VCPU_PTRAUTH_ADDRESS is also requested.
>  
>  	- KVM_ARM_VCPU_SVE: Enables SVE for the CPU (arm64 only).
>  	  Depends on KVM_CAP_ARM_SVE.
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index a585d82..25f2598 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -49,7 +49,7 @@
>  
>  #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS
>  
> -#define KVM_VCPU_MAX_FEATURES 5
> +#define KVM_VCPU_MAX_FEATURES 7
>  
>  #define KVM_REQ_SLEEP \
>  	KVM_ARCH_REQ_FLAGS(0, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
> diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
> index 6963b7e..fec2253 100644
> --- a/arch/arm64/include/uapi/asm/kvm.h
> +++ b/arch/arm64/include/uapi/asm/kvm.h
> @@ -103,6 +103,8 @@ struct kvm_regs {
>  #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
>  #define KVM_ARM_VCPU_PMU_V3		3 /* Support guest PMUv3 */
>  #define KVM_ARM_VCPU_SVE		4 /* enable SVE for this CPU */
> +#define KVM_ARM_VCPU_PTRAUTH_ADDRESS	5 /* VCPU uses address authentication */
> +#define KVM_ARM_VCPU_PTRAUTH_GENERIC	6 /* VCPU uses generic authentication */
>  
>  struct kvm_vcpu_init {
>  	__u32 target;
> diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
> index f13378d..d13406b 100644
> --- a/arch/arm64/kvm/reset.c
> +++ b/arch/arm64/kvm/reset.c
> @@ -221,6 +221,24 @@ static void kvm_vcpu_reset_sve(struct kvm_vcpu *vcpu)
>  		memset(vcpu->arch.sve_state, 0, vcpu_sve_state_size(vcpu));
>  }
>  
> +static int kvm_vcpu_enable_ptrauth(struct kvm_vcpu *vcpu)
> +{
> +	/* Support ptrauth only if the system supports these capabilities. */
> +	if (!has_vhe() || !system_supports_address_auth() ||
> +		!system_supports_generic_auth())
> +		return -EINVAL;

Nit: Funny indentation.  Please align with the "if (".  It's also
preferable to keep the two system_supports_xxx_auth() aligned or on the
same line, since they go together.  This might be clearer split up:

	if (!has_vhe())
		return -EINVAL;

	if (!system_supports_address_auth() ||
	    !system_supports_generic_auth())
		return -EINVAL;

> +	/*

I'd say "for now" here, since we might relax this rule later on.

> +	 * Make sure that both address/generic pointer authentication
> +	 * features are requested by the userspace together.
> +	 */
> +	if (!test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) ||
> +		!test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features))
> +		return -EINVAL;
> +
> +	vcpu->arch.flags |= KVM_ARM64_GUEST_HAS_PTRAUTH;
> +	return 0;
> +}
> +
>  /**
>   * kvm_reset_vcpu - sets core registers and sys_regs to reset value
>   * @vcpu: The VCPU pointer
> @@ -261,6 +279,12 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
>  		kvm_vcpu_reset_sve(vcpu);
>  	}
>  
> +	if (test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) ||
> +		test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features)) {

Nit: funny indentation.

> +		if (kvm_vcpu_enable_ptrauth(vcpu))
> +			goto out;
> +	}
> +

[...]

With those fixed,

Reviewed-by: Dave Martin <Dave.Martin@arm.com>

Cheers
---Dave
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v9 3/5] KVM: arm64: Add userspace flag to enable pointer authentication
@ 2019-04-16 16:31     ` Dave Martin
  0 siblings, 0 replies; 77+ messages in thread
From: Dave Martin @ 2019-04-16 16:31 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: Marc Zyngier, Catalin Marinas, Will Deacon, linux-kernel,
	Kristina Martsenko, Ramana Radhakrishnan, kvmarm,
	linux-arm-kernel

On Fri, Apr 12, 2019 at 08:50:34AM +0530, Amit Daniel Kachhap wrote:
> Now that the building blocks of pointer authentication are present, lets
> add userspace flags KVM_ARM_VCPU_PTRAUTH_ADDRESS and
> KVM_ARM_VCPU_PTRAUTH_GENERIC. These flags will enable pointer
> authentication for the KVM guest on a per-vcpu basis through the ioctl
> KVM_ARM_VCPU_INIT.
> 
> This features will allow the KVM guest to allow the handling of
> pointer authentication instructions or to treat them as undefined
> if not set.
> 
> Necessary documentations are added to reflect the changes done.
> 
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Christoffer Dall <christoffer.dall@arm.com>
> Cc: kvmarm@lists.cs.columbia.edu
> ---
> 
> Changes since v8:
> *  Update vcpu->arch.flags to final enable state. [Dave Martin]
> *  Change in Documentation to make clear the implementation of 2 vcpu
>    feature flags. [Dave Martin]
> 
>  Documentation/arm64/pointer-authentication.txt | 22 ++++++++++++++++++----
>  Documentation/virtual/kvm/api.txt              |  6 ++++++
>  arch/arm64/include/asm/kvm_host.h              |  2 +-
>  arch/arm64/include/uapi/asm/kvm.h              |  2 ++
>  arch/arm64/kvm/reset.c                         | 24 ++++++++++++++++++++++++
>  5 files changed, 51 insertions(+), 5 deletions(-)
> 
> diff --git a/Documentation/arm64/pointer-authentication.txt b/Documentation/arm64/pointer-authentication.txt
> index 5baca42..fc71b33 100644
> --- a/Documentation/arm64/pointer-authentication.txt
> +++ b/Documentation/arm64/pointer-authentication.txt
> @@ -87,7 +87,21 @@ used to get and set the keys for a thread.
>  Virtualization
>  --------------
>  
> -Pointer authentication is not currently supported in KVM guests. KVM
> -will mask the feature bits from ID_AA64ISAR1_EL1, and attempted use of
> -the feature will result in an UNDEFINED exception being injected into
> -the guest.
> +Pointer authentication is enabled in KVM guest when each virtual cpu is
> +initialised by passing flags KVM_ARM_VCPU_PTRAUTH_[ADDRESS/GENERIC] and
> +requesting these two separate cpu features to be enabled. The current KVM
> +guest implementation works by enabling both features together, so both
> +these userspace flags are checked before enabling pointer authentication.
> +The separate userspace flag will allow to have no userspace ABI changes
> +if support is added in the future to allow these two features to be
> +enabled independently of one another.
> +
> +As Arm Architecture specifies that Pointer Authentication feature is
> +implemented along with the VHE feature so KVM arm64 ptrauth code relies
> +on VHE mode to be present.
> +
> +Additionally, when these vcpu feature flags are not set then KVM will
> +filter out the Pointer Authentication system key registers from
> +KVM_GET/SET_REG_* ioctls and mask those features from cpufeature ID
> +register. Any attempt to use the Pointer Authentication instructions will
> +result in an UNDEFINED exception being injected into the guest.
> diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
> index 68509de..9d202f4 100644
> --- a/Documentation/virtual/kvm/api.txt
> +++ b/Documentation/virtual/kvm/api.txt
> @@ -2753,6 +2753,12 @@ Possible features:
>  	  Depends on KVM_CAP_ARM_PSCI_0_2.
>  	- KVM_ARM_VCPU_PMU_V3: Emulate PMUv3 for the CPU.
>  	  Depends on KVM_CAP_ARM_PMU_V3.
> +	- KVM_ARM_VCPU_PTRAUTH_ADDRESS: Enables Address Pointer authentication
> +	  for the CPU and supported only on arm64 architecture.

Nit: (arm64 only) would be less verbose.

> +	  Must be requested if KVM_ARM_VCPU_PTRAUTH_GENERIC is also requested.
> +	- KVM_ARM_VCPU_PTRAUTH_GENERIC: Enables Generic Pointer authentication
> +	  for the CPU and supported only on arm64 architecture.

Ditto.  (Not a big deal, though.)

> +	  Must be requested if KVM_ARM_VCPU_PTRAUTH_ADDRESS is also requested.
>  
>  	- KVM_ARM_VCPU_SVE: Enables SVE for the CPU (arm64 only).
>  	  Depends on KVM_CAP_ARM_SVE.
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index a585d82..25f2598 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -49,7 +49,7 @@
>  
>  #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS
>  
> -#define KVM_VCPU_MAX_FEATURES 5
> +#define KVM_VCPU_MAX_FEATURES 7
>  
>  #define KVM_REQ_SLEEP \
>  	KVM_ARCH_REQ_FLAGS(0, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
> diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
> index 6963b7e..fec2253 100644
> --- a/arch/arm64/include/uapi/asm/kvm.h
> +++ b/arch/arm64/include/uapi/asm/kvm.h
> @@ -103,6 +103,8 @@ struct kvm_regs {
>  #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
>  #define KVM_ARM_VCPU_PMU_V3		3 /* Support guest PMUv3 */
>  #define KVM_ARM_VCPU_SVE		4 /* enable SVE for this CPU */
> +#define KVM_ARM_VCPU_PTRAUTH_ADDRESS	5 /* VCPU uses address authentication */
> +#define KVM_ARM_VCPU_PTRAUTH_GENERIC	6 /* VCPU uses generic authentication */
>  
>  struct kvm_vcpu_init {
>  	__u32 target;
> diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
> index f13378d..d13406b 100644
> --- a/arch/arm64/kvm/reset.c
> +++ b/arch/arm64/kvm/reset.c
> @@ -221,6 +221,24 @@ static void kvm_vcpu_reset_sve(struct kvm_vcpu *vcpu)
>  		memset(vcpu->arch.sve_state, 0, vcpu_sve_state_size(vcpu));
>  }
>  
> +static int kvm_vcpu_enable_ptrauth(struct kvm_vcpu *vcpu)
> +{
> +	/* Support ptrauth only if the system supports these capabilities. */
> +	if (!has_vhe() || !system_supports_address_auth() ||
> +		!system_supports_generic_auth())
> +		return -EINVAL;

Nit: Funny indentation.  Please align with the "if (".  It's also
preferable to keep the two system_supports_xxx_auth() aligned or on the
same line, since they go together.  This might be clearer split up:

	if (!has_vhe())
		return -EINVAL;

	if (!system_supports_address_auth() ||
	    !system_supports_generic_auth())
		return -EINVAL;

> +	/*

I'd say "for now" here, since we might relax this rule later on.

> +	 * Make sure that both address/generic pointer authentication
> +	 * features are requested by the userspace together.
> +	 */
> +	if (!test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) ||
> +		!test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features))
> +		return -EINVAL;
> +
> +	vcpu->arch.flags |= KVM_ARM64_GUEST_HAS_PTRAUTH;
> +	return 0;
> +}
> +
>  /**
>   * kvm_reset_vcpu - sets core registers and sys_regs to reset value
>   * @vcpu: The VCPU pointer
> @@ -261,6 +279,12 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
>  		kvm_vcpu_reset_sve(vcpu);
>  	}
>  
> +	if (test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) ||
> +		test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features)) {

Nit: funny indentation.

> +		if (kvm_vcpu_enable_ptrauth(vcpu))
> +			goto out;
> +	}
> +

[...]

With those fixed,

Reviewed-by: Dave Martin <Dave.Martin@arm.com>

Cheers
---Dave

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v9 4/5] KVM: arm64: Add capability to advertise ptrauth for guest
@ 2019-04-16 16:32     ` Dave Martin
  0 siblings, 0 replies; 77+ messages in thread
From: Dave Martin @ 2019-04-16 16:32 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: linux-arm-kernel, Marc Zyngier, Catalin Marinas, Will Deacon,
	Kristina Martsenko, kvmarm, Ramana Radhakrishnan, linux-kernel

On Fri, Apr 12, 2019 at 08:50:35AM +0530, Amit Daniel Kachhap wrote:
> This patch advertises the capability of two cpu feature called address
> pointer authentication and generic pointer authentication. These
> capabilities depend upon system support for pointer authentication and
> VHE mode.
> 
> The current arm64 KVM partially implements pointer authentication and
> support of address/generic authentication are tied together. However,
> separate ABI requirements for both of them is added so that any future
> isolated implementation will not require any ABI changes.
> 
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Christoffer Dall <christoffer.dall@arm.com>
> Cc: kvmarm@lists.cs.columbia.edu
> ---
> Changes since v8:
> *  Keep the capability check same for the 2 vcpu ptrauth features. [Dave Martin]
> 
>  Documentation/virtual/kvm/api.txt | 2 ++
>  arch/arm64/kvm/reset.c            | 5 +++++
>  include/uapi/linux/kvm.h          | 2 ++
>  3 files changed, 9 insertions(+)
> 
> diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
> index 9d202f4..56021d0 100644
> --- a/Documentation/virtual/kvm/api.txt
> +++ b/Documentation/virtual/kvm/api.txt
> @@ -2756,9 +2756,11 @@ Possible features:
>  	- KVM_ARM_VCPU_PTRAUTH_ADDRESS: Enables Address Pointer authentication
>  	  for the CPU and supported only on arm64 architecture.
>  	  Must be requested if KVM_ARM_VCPU_PTRAUTH_GENERIC is also requested.
> +	  Depends on KVM_CAP_ARM_PTRAUTH_ADDRESS.

What if KVM_CAP_ARM_PTRAUTH_ADDRESS is absent and
KVM_ARM_VCPU_PTRAUTH_GENERIC is requested?  By these rules, we have a
contradiction: userspace both must request and must not request
KVM_ARM_VCPU_PTRAUTH_ADDRESS.

We could qualify as follows:

	Depends on KVM_CAP_ARM_PTRAUTH_ADDRESS.
	Must be requested if KVM_CAP_ARM_PTRAUTH_ADDRESS is present and
	KVM_ARM_VCPU_PTRAUTH_GENERIC is also requested.

>  	- KVM_ARM_VCPU_PTRAUTH_GENERIC: Enables Generic Pointer authentication
>  	  for the CPU and supported only on arm64 architecture.
>  	  Must be requested if KVM_ARM_VCPU_PTRAUTH_ADDRESS is also requested.
> +	  Depends on KVM_CAP_ARM_PTRAUTH_GENERIC.

Similarly.

Or, we go back to having a single cap and a single feature, and add
more caps/features later on if we decide it's possible to support
address/generic auth separately later on.

Otherwise, we end up with complex rules that can't be tested.  This is a
high price to pay for forwards compatibility: userspace's conformance to
the rules can't be fully tested, so there's a fair chance it won't work
properly anyway when hardware/KVM with just one auth type appears.

[...]

Thoughts?

Cheers
---Dave

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v9 4/5] KVM: arm64: Add capability to advertise ptrauth for guest
@ 2019-04-16 16:32     ` Dave Martin
  0 siblings, 0 replies; 77+ messages in thread
From: Dave Martin @ 2019-04-16 16:32 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: Marc Zyngier, Catalin Marinas, Will Deacon, linux-kernel,
	Kristina Martsenko, Ramana Radhakrishnan, kvmarm,
	linux-arm-kernel

On Fri, Apr 12, 2019 at 08:50:35AM +0530, Amit Daniel Kachhap wrote:
> This patch advertises the capability of two cpu feature called address
> pointer authentication and generic pointer authentication. These
> capabilities depend upon system support for pointer authentication and
> VHE mode.
> 
> The current arm64 KVM partially implements pointer authentication and
> support of address/generic authentication are tied together. However,
> separate ABI requirements for both of them is added so that any future
> isolated implementation will not require any ABI changes.
> 
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Christoffer Dall <christoffer.dall@arm.com>
> Cc: kvmarm@lists.cs.columbia.edu
> ---
> Changes since v8:
> *  Keep the capability check same for the 2 vcpu ptrauth features. [Dave Martin]
> 
>  Documentation/virtual/kvm/api.txt | 2 ++
>  arch/arm64/kvm/reset.c            | 5 +++++
>  include/uapi/linux/kvm.h          | 2 ++
>  3 files changed, 9 insertions(+)
> 
> diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
> index 9d202f4..56021d0 100644
> --- a/Documentation/virtual/kvm/api.txt
> +++ b/Documentation/virtual/kvm/api.txt
> @@ -2756,9 +2756,11 @@ Possible features:
>  	- KVM_ARM_VCPU_PTRAUTH_ADDRESS: Enables Address Pointer authentication
>  	  for the CPU and supported only on arm64 architecture.
>  	  Must be requested if KVM_ARM_VCPU_PTRAUTH_GENERIC is also requested.
> +	  Depends on KVM_CAP_ARM_PTRAUTH_ADDRESS.

What if KVM_CAP_ARM_PTRAUTH_ADDRESS is absent and
KVM_ARM_VCPU_PTRAUTH_GENERIC is requested?  By these rules, we have a
contradiction: userspace both must request and must not request
KVM_ARM_VCPU_PTRAUTH_ADDRESS.

We could qualify as follows:

	Depends on KVM_CAP_ARM_PTRAUTH_ADDRESS.
	Must be requested if KVM_CAP_ARM_PTRAUTH_ADDRESS is present and
	KVM_ARM_VCPU_PTRAUTH_GENERIC is also requested.

>  	- KVM_ARM_VCPU_PTRAUTH_GENERIC: Enables Generic Pointer authentication
>  	  for the CPU and supported only on arm64 architecture.
>  	  Must be requested if KVM_ARM_VCPU_PTRAUTH_ADDRESS is also requested.
> +	  Depends on KVM_CAP_ARM_PTRAUTH_GENERIC.

Similarly.

Or, we go back to having a single cap and a single feature, and add
more caps/features later on if we decide it's possible to support
address/generic auth separately later on.

Otherwise, we end up with complex rules that can't be tested.  This is a
high price to pay for forwards compatibility: userspace's conformance to
the rules can't be fully tested, so there's a fair chance it won't work
properly anyway when hardware/KVM with just one auth type appears.

[...]

Thoughts?

Cheers
---Dave
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v9 4/5] KVM: arm64: Add capability to advertise ptrauth for guest
@ 2019-04-16 16:32     ` Dave Martin
  0 siblings, 0 replies; 77+ messages in thread
From: Dave Martin @ 2019-04-16 16:32 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: Marc Zyngier, Catalin Marinas, Will Deacon, linux-kernel,
	Kristina Martsenko, Ramana Radhakrishnan, kvmarm,
	linux-arm-kernel

On Fri, Apr 12, 2019 at 08:50:35AM +0530, Amit Daniel Kachhap wrote:
> This patch advertises the capability of two cpu feature called address
> pointer authentication and generic pointer authentication. These
> capabilities depend upon system support for pointer authentication and
> VHE mode.
> 
> The current arm64 KVM partially implements pointer authentication and
> support of address/generic authentication are tied together. However,
> separate ABI requirements for both of them is added so that any future
> isolated implementation will not require any ABI changes.
> 
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Christoffer Dall <christoffer.dall@arm.com>
> Cc: kvmarm@lists.cs.columbia.edu
> ---
> Changes since v8:
> *  Keep the capability check same for the 2 vcpu ptrauth features. [Dave Martin]
> 
>  Documentation/virtual/kvm/api.txt | 2 ++
>  arch/arm64/kvm/reset.c            | 5 +++++
>  include/uapi/linux/kvm.h          | 2 ++
>  3 files changed, 9 insertions(+)
> 
> diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
> index 9d202f4..56021d0 100644
> --- a/Documentation/virtual/kvm/api.txt
> +++ b/Documentation/virtual/kvm/api.txt
> @@ -2756,9 +2756,11 @@ Possible features:
>  	- KVM_ARM_VCPU_PTRAUTH_ADDRESS: Enables Address Pointer authentication
>  	  for the CPU and supported only on arm64 architecture.
>  	  Must be requested if KVM_ARM_VCPU_PTRAUTH_GENERIC is also requested.
> +	  Depends on KVM_CAP_ARM_PTRAUTH_ADDRESS.

What if KVM_CAP_ARM_PTRAUTH_ADDRESS is absent and
KVM_ARM_VCPU_PTRAUTH_GENERIC is requested?  By these rules, we have a
contradiction: userspace both must request and must not request
KVM_ARM_VCPU_PTRAUTH_ADDRESS.

We could qualify as follows:

	Depends on KVM_CAP_ARM_PTRAUTH_ADDRESS.
	Must be requested if KVM_CAP_ARM_PTRAUTH_ADDRESS is present and
	KVM_ARM_VCPU_PTRAUTH_GENERIC is also requested.

>  	- KVM_ARM_VCPU_PTRAUTH_GENERIC: Enables Generic Pointer authentication
>  	  for the CPU and supported only on arm64 architecture.
>  	  Must be requested if KVM_ARM_VCPU_PTRAUTH_ADDRESS is also requested.
> +	  Depends on KVM_CAP_ARM_PTRAUTH_GENERIC.

Similarly.

Or, we go back to having a single cap and a single feature, and add
more caps/features later on if we decide it's possible to support
address/generic auth separately later on.

Otherwise, we end up with complex rules that can't be tested.  This is a
high price to pay for forwards compatibility: userspace's conformance to
the rules can't be fully tested, so there's a fair chance it won't work
properly anyway when hardware/KVM with just one auth type appears.

[...]

Thoughts?

Cheers
---Dave

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [kvmtool PATCH v9 5/5] KVM: arm/arm64: Add a vcpu feature for pointer authentication
@ 2019-04-16 16:32     ` Dave Martin
  0 siblings, 0 replies; 77+ messages in thread
From: Dave Martin @ 2019-04-16 16:32 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: linux-arm-kernel, Marc Zyngier, Catalin Marinas, Will Deacon,
	Kristina Martsenko, kvmarm, Ramana Radhakrishnan, linux-kernel

On Fri, Apr 12, 2019 at 08:50:36AM +0530, Amit Daniel Kachhap wrote:
> This patch adds a runtime capabality for KVM tool to enable Arm64 8.3
> Pointer Authentication in guest kernel. Two vcpu features
> KVM_ARM_VCPU_PTRAUTH_[ADDRESS/GENERIC] are supplied together to enable
> Pointer Authentication in KVM guest after checking the capability.
> 
> Command line options --enable-ptrauth and --disable-ptrauth are added
> to use this feature. However, if those options are not provided then
> also this feature is enabled if host supports this capability.
> 
> The macros defined in the headers are not in sync and should be replaced
> from the upstream.
> 
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> ---
> Changes since v8:
> *  Added option --enable-ptrauth and --disable-ptrauth to use ptrauth. Also
>    enable ptrauth if no option provided and Host supports ptrauth. [Dave Martin]
> * The macro definition are not linear as the kvmtool is not synchronised with the
>   kernel changes present in kvmarm/next tree.
> 
>  arm/aarch32/include/kvm/kvm-cpu-arch.h    |  1 +
>  arm/aarch64/include/asm/kvm.h             |  2 ++
>  arm/aarch64/include/kvm/kvm-config-arch.h |  6 +++++-
>  arm/aarch64/include/kvm/kvm-cpu-arch.h    |  2 ++
>  arm/include/arm-common/kvm-config-arch.h  |  2 ++
>  arm/kvm-cpu.c                             | 11 +++++++++++
>  include/linux/kvm.h                       |  2 ++
>  7 files changed, 25 insertions(+), 1 deletion(-)
> 
> diff --git a/arm/aarch32/include/kvm/kvm-cpu-arch.h b/arm/aarch32/include/kvm/kvm-cpu-arch.h
> index d28ea67..520ea76 100644
> --- a/arm/aarch32/include/kvm/kvm-cpu-arch.h
> +++ b/arm/aarch32/include/kvm/kvm-cpu-arch.h
> @@ -13,4 +13,5 @@
>  #define ARM_CPU_ID		0, 0, 0
>  #define ARM_CPU_ID_MPIDR	5
>  
> +#define ARM_VCPU_PTRAUTH_FEATURE	0
>  #endif /* KVM__KVM_CPU_ARCH_H */
> diff --git a/arm/aarch64/include/asm/kvm.h b/arm/aarch64/include/asm/kvm.h
> index 97c3478..a2546e6 100644
> --- a/arm/aarch64/include/asm/kvm.h
> +++ b/arm/aarch64/include/asm/kvm.h
> @@ -102,6 +102,8 @@ struct kvm_regs {
>  #define KVM_ARM_VCPU_EL1_32BIT		1 /* CPU running a 32bit VM */
>  #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
>  #define KVM_ARM_VCPU_PMU_V3		3 /* Support guest PMUv3 */
> +#define KVM_ARM_VCPU_PTRAUTH_ADDRESS	5 /* CPU uses address pointer authentication */
> +#define KVM_ARM_VCPU_PTRAUTH_GENERIC	6 /* CPU uses generic pointer authentication */
>  
>  struct kvm_vcpu_init {
>  	__u32 target;
> diff --git a/arm/aarch64/include/kvm/kvm-config-arch.h b/arm/aarch64/include/kvm/kvm-config-arch.h
> index 04be43d..0279b13 100644
> --- a/arm/aarch64/include/kvm/kvm-config-arch.h
> +++ b/arm/aarch64/include/kvm/kvm-config-arch.h
> @@ -8,7 +8,11 @@
>  			"Create PMUv3 device"),				\
>  	OPT_U64('\0', "kaslr-seed", &(cfg)->kaslr_seed,			\
>  			"Specify random seed for Kernel Address Space "	\
> -			"Layout Randomization (KASLR)"),
> +			"Layout Randomization (KASLR)"),		\
> +	OPT_BOOLEAN('\0', "enable-ptrauth", &(cfg)->enable_ptrauth,	\
> +			"Enables pointer authentication"),		\
> +	OPT_BOOLEAN('\0', "disable-ptrauth", &(cfg)->disable_ptrauth,	\
> +			"Disables pointer authentication"),
>  
>  #include "arm-common/kvm-config-arch.h"
>  
> diff --git a/arm/aarch64/include/kvm/kvm-cpu-arch.h b/arm/aarch64/include/kvm/kvm-cpu-arch.h
> index a9d8563..fcc2107 100644
> --- a/arm/aarch64/include/kvm/kvm-cpu-arch.h
> +++ b/arm/aarch64/include/kvm/kvm-cpu-arch.h
> @@ -17,4 +17,6 @@
>  #define ARM_CPU_CTRL		3, 0, 1, 0
>  #define ARM_CPU_CTRL_SCTLR_EL1	0
>  
> +#define ARM_VCPU_PTRAUTH_FEATURE	((1UL << KVM_ARM_VCPU_PTRAUTH_ADDRESS) \
> +					| (1UL << KVM_ARM_VCPU_PTRAUTH_GENERIC))
>  #endif /* KVM__KVM_CPU_ARCH_H */
> diff --git a/arm/include/arm-common/kvm-config-arch.h b/arm/include/arm-common/kvm-config-arch.h
> index 5734c46..1b4287d 100644
> --- a/arm/include/arm-common/kvm-config-arch.h
> +++ b/arm/include/arm-common/kvm-config-arch.h
> @@ -10,6 +10,8 @@ struct kvm_config_arch {
>  	bool		aarch32_guest;
>  	bool		has_pmuv3;
>  	u64		kaslr_seed;
> +	bool		enable_ptrauth;
> +	bool		disable_ptrauth;
>  	enum irqchip_type irqchip;
>  	u64		fw_addr;
>  };
> diff --git a/arm/kvm-cpu.c b/arm/kvm-cpu.c
> index 7780251..a45a649 100644
> --- a/arm/kvm-cpu.c
> +++ b/arm/kvm-cpu.c
> @@ -69,6 +69,17 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned long cpu_id)
>  	}
>  
>  	/*
> +	 * Always enable Pointer Authentication if requested. If system supports
> +	 * this extension then also enable it by default provided no disable
> +	 * request present.
> +	 */
> +	if ((kvm->cfg.arch.enable_ptrauth) ||

Nit: redundant ()

> +		(kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_ADDRESS) &&

Funny indentation?

> +		kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_GENERIC) &&
> +		!kvm->cfg.arch.disable_ptrauth))
> +			vcpu_init.features[0] |= ARM_VCPU_PTRAUTH_FEATURE;
> +

Hmm, we have some weird behaviours here: --enable-ptrauth
--disable-ptrauth will result in us trying to enable it, and
--enable-ptrauth without the required caps will result in an unhelpful
"Unable to initialise vcpu" error message.  I'm not sure this is a
whole lot worse than the way other options behave today, though.

You could try to be more explicit about what happens in these cases, but
I'm not sure it's worth it given the state of the existing code.

[...]

Cheers
---Dave

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [kvmtool PATCH v9 5/5] KVM: arm/arm64: Add a vcpu feature for pointer authentication
@ 2019-04-16 16:32     ` Dave Martin
  0 siblings, 0 replies; 77+ messages in thread
From: Dave Martin @ 2019-04-16 16:32 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: Marc Zyngier, Catalin Marinas, Will Deacon, linux-kernel,
	Kristina Martsenko, Ramana Radhakrishnan, kvmarm,
	linux-arm-kernel

On Fri, Apr 12, 2019 at 08:50:36AM +0530, Amit Daniel Kachhap wrote:
> This patch adds a runtime capabality for KVM tool to enable Arm64 8.3
> Pointer Authentication in guest kernel. Two vcpu features
> KVM_ARM_VCPU_PTRAUTH_[ADDRESS/GENERIC] are supplied together to enable
> Pointer Authentication in KVM guest after checking the capability.
> 
> Command line options --enable-ptrauth and --disable-ptrauth are added
> to use this feature. However, if those options are not provided then
> also this feature is enabled if host supports this capability.
> 
> The macros defined in the headers are not in sync and should be replaced
> from the upstream.
> 
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> ---
> Changes since v8:
> *  Added option --enable-ptrauth and --disable-ptrauth to use ptrauth. Also
>    enable ptrauth if no option provided and Host supports ptrauth. [Dave Martin]
> * The macro definition are not linear as the kvmtool is not synchronised with the
>   kernel changes present in kvmarm/next tree.
> 
>  arm/aarch32/include/kvm/kvm-cpu-arch.h    |  1 +
>  arm/aarch64/include/asm/kvm.h             |  2 ++
>  arm/aarch64/include/kvm/kvm-config-arch.h |  6 +++++-
>  arm/aarch64/include/kvm/kvm-cpu-arch.h    |  2 ++
>  arm/include/arm-common/kvm-config-arch.h  |  2 ++
>  arm/kvm-cpu.c                             | 11 +++++++++++
>  include/linux/kvm.h                       |  2 ++
>  7 files changed, 25 insertions(+), 1 deletion(-)
> 
> diff --git a/arm/aarch32/include/kvm/kvm-cpu-arch.h b/arm/aarch32/include/kvm/kvm-cpu-arch.h
> index d28ea67..520ea76 100644
> --- a/arm/aarch32/include/kvm/kvm-cpu-arch.h
> +++ b/arm/aarch32/include/kvm/kvm-cpu-arch.h
> @@ -13,4 +13,5 @@
>  #define ARM_CPU_ID		0, 0, 0
>  #define ARM_CPU_ID_MPIDR	5
>  
> +#define ARM_VCPU_PTRAUTH_FEATURE	0
>  #endif /* KVM__KVM_CPU_ARCH_H */
> diff --git a/arm/aarch64/include/asm/kvm.h b/arm/aarch64/include/asm/kvm.h
> index 97c3478..a2546e6 100644
> --- a/arm/aarch64/include/asm/kvm.h
> +++ b/arm/aarch64/include/asm/kvm.h
> @@ -102,6 +102,8 @@ struct kvm_regs {
>  #define KVM_ARM_VCPU_EL1_32BIT		1 /* CPU running a 32bit VM */
>  #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
>  #define KVM_ARM_VCPU_PMU_V3		3 /* Support guest PMUv3 */
> +#define KVM_ARM_VCPU_PTRAUTH_ADDRESS	5 /* CPU uses address pointer authentication */
> +#define KVM_ARM_VCPU_PTRAUTH_GENERIC	6 /* CPU uses generic pointer authentication */
>  
>  struct kvm_vcpu_init {
>  	__u32 target;
> diff --git a/arm/aarch64/include/kvm/kvm-config-arch.h b/arm/aarch64/include/kvm/kvm-config-arch.h
> index 04be43d..0279b13 100644
> --- a/arm/aarch64/include/kvm/kvm-config-arch.h
> +++ b/arm/aarch64/include/kvm/kvm-config-arch.h
> @@ -8,7 +8,11 @@
>  			"Create PMUv3 device"),				\
>  	OPT_U64('\0', "kaslr-seed", &(cfg)->kaslr_seed,			\
>  			"Specify random seed for Kernel Address Space "	\
> -			"Layout Randomization (KASLR)"),
> +			"Layout Randomization (KASLR)"),		\
> +	OPT_BOOLEAN('\0', "enable-ptrauth", &(cfg)->enable_ptrauth,	\
> +			"Enables pointer authentication"),		\
> +	OPT_BOOLEAN('\0', "disable-ptrauth", &(cfg)->disable_ptrauth,	\
> +			"Disables pointer authentication"),
>  
>  #include "arm-common/kvm-config-arch.h"
>  
> diff --git a/arm/aarch64/include/kvm/kvm-cpu-arch.h b/arm/aarch64/include/kvm/kvm-cpu-arch.h
> index a9d8563..fcc2107 100644
> --- a/arm/aarch64/include/kvm/kvm-cpu-arch.h
> +++ b/arm/aarch64/include/kvm/kvm-cpu-arch.h
> @@ -17,4 +17,6 @@
>  #define ARM_CPU_CTRL		3, 0, 1, 0
>  #define ARM_CPU_CTRL_SCTLR_EL1	0
>  
> +#define ARM_VCPU_PTRAUTH_FEATURE	((1UL << KVM_ARM_VCPU_PTRAUTH_ADDRESS) \
> +					| (1UL << KVM_ARM_VCPU_PTRAUTH_GENERIC))
>  #endif /* KVM__KVM_CPU_ARCH_H */
> diff --git a/arm/include/arm-common/kvm-config-arch.h b/arm/include/arm-common/kvm-config-arch.h
> index 5734c46..1b4287d 100644
> --- a/arm/include/arm-common/kvm-config-arch.h
> +++ b/arm/include/arm-common/kvm-config-arch.h
> @@ -10,6 +10,8 @@ struct kvm_config_arch {
>  	bool		aarch32_guest;
>  	bool		has_pmuv3;
>  	u64		kaslr_seed;
> +	bool		enable_ptrauth;
> +	bool		disable_ptrauth;
>  	enum irqchip_type irqchip;
>  	u64		fw_addr;
>  };
> diff --git a/arm/kvm-cpu.c b/arm/kvm-cpu.c
> index 7780251..a45a649 100644
> --- a/arm/kvm-cpu.c
> +++ b/arm/kvm-cpu.c
> @@ -69,6 +69,17 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned long cpu_id)
>  	}
>  
>  	/*
> +	 * Always enable Pointer Authentication if requested. If system supports
> +	 * this extension then also enable it by default provided no disable
> +	 * request present.
> +	 */
> +	if ((kvm->cfg.arch.enable_ptrauth) ||

Nit: redundant ()

> +		(kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_ADDRESS) &&

Funny indentation?

> +		kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_GENERIC) &&
> +		!kvm->cfg.arch.disable_ptrauth))
> +			vcpu_init.features[0] |= ARM_VCPU_PTRAUTH_FEATURE;
> +

Hmm, we have some weird behaviours here: --enable-ptrauth
--disable-ptrauth will result in us trying to enable it, and
--enable-ptrauth without the required caps will result in an unhelpful
"Unable to initialise vcpu" error message.  I'm not sure this is a
whole lot worse than the way other options behave today, though.

You could try to be more explicit about what happens in these cases, but
I'm not sure it's worth it given the state of the existing code.

[...]

Cheers
---Dave
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [kvmtool PATCH v9 5/5] KVM: arm/arm64: Add a vcpu feature for pointer authentication
@ 2019-04-16 16:32     ` Dave Martin
  0 siblings, 0 replies; 77+ messages in thread
From: Dave Martin @ 2019-04-16 16:32 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: Marc Zyngier, Catalin Marinas, Will Deacon, linux-kernel,
	Kristina Martsenko, Ramana Radhakrishnan, kvmarm,
	linux-arm-kernel

On Fri, Apr 12, 2019 at 08:50:36AM +0530, Amit Daniel Kachhap wrote:
> This patch adds a runtime capabality for KVM tool to enable Arm64 8.3
> Pointer Authentication in guest kernel. Two vcpu features
> KVM_ARM_VCPU_PTRAUTH_[ADDRESS/GENERIC] are supplied together to enable
> Pointer Authentication in KVM guest after checking the capability.
> 
> Command line options --enable-ptrauth and --disable-ptrauth are added
> to use this feature. However, if those options are not provided then
> also this feature is enabled if host supports this capability.
> 
> The macros defined in the headers are not in sync and should be replaced
> from the upstream.
> 
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> ---
> Changes since v8:
> *  Added option --enable-ptrauth and --disable-ptrauth to use ptrauth. Also
>    enable ptrauth if no option provided and Host supports ptrauth. [Dave Martin]
> * The macro definition are not linear as the kvmtool is not synchronised with the
>   kernel changes present in kvmarm/next tree.
> 
>  arm/aarch32/include/kvm/kvm-cpu-arch.h    |  1 +
>  arm/aarch64/include/asm/kvm.h             |  2 ++
>  arm/aarch64/include/kvm/kvm-config-arch.h |  6 +++++-
>  arm/aarch64/include/kvm/kvm-cpu-arch.h    |  2 ++
>  arm/include/arm-common/kvm-config-arch.h  |  2 ++
>  arm/kvm-cpu.c                             | 11 +++++++++++
>  include/linux/kvm.h                       |  2 ++
>  7 files changed, 25 insertions(+), 1 deletion(-)
> 
> diff --git a/arm/aarch32/include/kvm/kvm-cpu-arch.h b/arm/aarch32/include/kvm/kvm-cpu-arch.h
> index d28ea67..520ea76 100644
> --- a/arm/aarch32/include/kvm/kvm-cpu-arch.h
> +++ b/arm/aarch32/include/kvm/kvm-cpu-arch.h
> @@ -13,4 +13,5 @@
>  #define ARM_CPU_ID		0, 0, 0
>  #define ARM_CPU_ID_MPIDR	5
>  
> +#define ARM_VCPU_PTRAUTH_FEATURE	0
>  #endif /* KVM__KVM_CPU_ARCH_H */
> diff --git a/arm/aarch64/include/asm/kvm.h b/arm/aarch64/include/asm/kvm.h
> index 97c3478..a2546e6 100644
> --- a/arm/aarch64/include/asm/kvm.h
> +++ b/arm/aarch64/include/asm/kvm.h
> @@ -102,6 +102,8 @@ struct kvm_regs {
>  #define KVM_ARM_VCPU_EL1_32BIT		1 /* CPU running a 32bit VM */
>  #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
>  #define KVM_ARM_VCPU_PMU_V3		3 /* Support guest PMUv3 */
> +#define KVM_ARM_VCPU_PTRAUTH_ADDRESS	5 /* CPU uses address pointer authentication */
> +#define KVM_ARM_VCPU_PTRAUTH_GENERIC	6 /* CPU uses generic pointer authentication */
>  
>  struct kvm_vcpu_init {
>  	__u32 target;
> diff --git a/arm/aarch64/include/kvm/kvm-config-arch.h b/arm/aarch64/include/kvm/kvm-config-arch.h
> index 04be43d..0279b13 100644
> --- a/arm/aarch64/include/kvm/kvm-config-arch.h
> +++ b/arm/aarch64/include/kvm/kvm-config-arch.h
> @@ -8,7 +8,11 @@
>  			"Create PMUv3 device"),				\
>  	OPT_U64('\0', "kaslr-seed", &(cfg)->kaslr_seed,			\
>  			"Specify random seed for Kernel Address Space "	\
> -			"Layout Randomization (KASLR)"),
> +			"Layout Randomization (KASLR)"),		\
> +	OPT_BOOLEAN('\0', "enable-ptrauth", &(cfg)->enable_ptrauth,	\
> +			"Enables pointer authentication"),		\
> +	OPT_BOOLEAN('\0', "disable-ptrauth", &(cfg)->disable_ptrauth,	\
> +			"Disables pointer authentication"),
>  
>  #include "arm-common/kvm-config-arch.h"
>  
> diff --git a/arm/aarch64/include/kvm/kvm-cpu-arch.h b/arm/aarch64/include/kvm/kvm-cpu-arch.h
> index a9d8563..fcc2107 100644
> --- a/arm/aarch64/include/kvm/kvm-cpu-arch.h
> +++ b/arm/aarch64/include/kvm/kvm-cpu-arch.h
> @@ -17,4 +17,6 @@
>  #define ARM_CPU_CTRL		3, 0, 1, 0
>  #define ARM_CPU_CTRL_SCTLR_EL1	0
>  
> +#define ARM_VCPU_PTRAUTH_FEATURE	((1UL << KVM_ARM_VCPU_PTRAUTH_ADDRESS) \
> +					| (1UL << KVM_ARM_VCPU_PTRAUTH_GENERIC))
>  #endif /* KVM__KVM_CPU_ARCH_H */
> diff --git a/arm/include/arm-common/kvm-config-arch.h b/arm/include/arm-common/kvm-config-arch.h
> index 5734c46..1b4287d 100644
> --- a/arm/include/arm-common/kvm-config-arch.h
> +++ b/arm/include/arm-common/kvm-config-arch.h
> @@ -10,6 +10,8 @@ struct kvm_config_arch {
>  	bool		aarch32_guest;
>  	bool		has_pmuv3;
>  	u64		kaslr_seed;
> +	bool		enable_ptrauth;
> +	bool		disable_ptrauth;
>  	enum irqchip_type irqchip;
>  	u64		fw_addr;
>  };
> diff --git a/arm/kvm-cpu.c b/arm/kvm-cpu.c
> index 7780251..a45a649 100644
> --- a/arm/kvm-cpu.c
> +++ b/arm/kvm-cpu.c
> @@ -69,6 +69,17 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned long cpu_id)
>  	}
>  
>  	/*
> +	 * Always enable Pointer Authentication if requested. If system supports
> +	 * this extension then also enable it by default provided no disable
> +	 * request present.
> +	 */
> +	if ((kvm->cfg.arch.enable_ptrauth) ||

Nit: redundant ()

> +		(kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_ADDRESS) &&

Funny indentation?

> +		kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_GENERIC) &&
> +		!kvm->cfg.arch.disable_ptrauth))
> +			vcpu_init.features[0] |= ARM_VCPU_PTRAUTH_FEATURE;
> +

Hmm, we have some weird behaviours here: --enable-ptrauth
--disable-ptrauth will result in us trying to enable it, and
--enable-ptrauth without the required caps will result in an unhelpful
"Unable to initialise vcpu" error message.  I'm not sure this is a
whole lot worse than the way other options behave today, though.

You could try to be more explicit about what happens in these cases, but
I'm not sure it's worth it given the state of the existing code.

[...]

Cheers
---Dave

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v9 3/5] KVM: arm64: Add userspace flag to enable pointer authentication
@ 2019-04-17  8:17       ` Amit Daniel Kachhap
  0 siblings, 0 replies; 77+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-17  8:17 UTC (permalink / raw)
  To: Dave Martin
  Cc: linux-arm-kernel, Marc Zyngier, Catalin Marinas, Will Deacon,
	Kristina Martsenko, kvmarm, Ramana Radhakrishnan, linux-kernel


Hi,
On 4/16/19 10:01 PM, Dave Martin wrote:
> On Fri, Apr 12, 2019 at 08:50:34AM +0530, Amit Daniel Kachhap wrote:
>> Now that the building blocks of pointer authentication are present, lets
>> add userspace flags KVM_ARM_VCPU_PTRAUTH_ADDRESS and
>> KVM_ARM_VCPU_PTRAUTH_GENERIC. These flags will enable pointer
>> authentication for the KVM guest on a per-vcpu basis through the ioctl
>> KVM_ARM_VCPU_INIT.
>>
>> This features will allow the KVM guest to allow the handling of
>> pointer authentication instructions or to treat them as undefined
>> if not set.
>>
>> Necessary documentations are added to reflect the changes done.
>>
>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>> Cc: Mark Rutland <mark.rutland@arm.com>
>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>> Cc: Christoffer Dall <christoffer.dall@arm.com>
>> Cc: kvmarm@lists.cs.columbia.edu
>> ---
>>
>> Changes since v8:
>> *  Update vcpu->arch.flags to final enable state. [Dave Martin]
>> *  Change in Documentation to make clear the implementation of 2 vcpu
>>     feature flags. [Dave Martin]
>>
>>   Documentation/arm64/pointer-authentication.txt | 22 ++++++++++++++++++----
>>   Documentation/virtual/kvm/api.txt              |  6 ++++++
>>   arch/arm64/include/asm/kvm_host.h              |  2 +-
>>   arch/arm64/include/uapi/asm/kvm.h              |  2 ++
>>   arch/arm64/kvm/reset.c                         | 24 ++++++++++++++++++++++++
>>   5 files changed, 51 insertions(+), 5 deletions(-)
>>
>> diff --git a/Documentation/arm64/pointer-authentication.txt b/Documentation/arm64/pointer-authentication.txt
>> index 5baca42..fc71b33 100644
>> --- a/Documentation/arm64/pointer-authentication.txt
>> +++ b/Documentation/arm64/pointer-authentication.txt
>> @@ -87,7 +87,21 @@ used to get and set the keys for a thread.
>>   Virtualization
>>   --------------
>>   
>> -Pointer authentication is not currently supported in KVM guests. KVM
>> -will mask the feature bits from ID_AA64ISAR1_EL1, and attempted use of
>> -the feature will result in an UNDEFINED exception being injected into
>> -the guest.
>> +Pointer authentication is enabled in KVM guest when each virtual cpu is
>> +initialised by passing flags KVM_ARM_VCPU_PTRAUTH_[ADDRESS/GENERIC] and
>> +requesting these two separate cpu features to be enabled. The current KVM
>> +guest implementation works by enabling both features together, so both
>> +these userspace flags are checked before enabling pointer authentication.
>> +The separate userspace flag will allow to have no userspace ABI changes
>> +if support is added in the future to allow these two features to be
>> +enabled independently of one another.
>> +
>> +As Arm Architecture specifies that Pointer Authentication feature is
>> +implemented along with the VHE feature so KVM arm64 ptrauth code relies
>> +on VHE mode to be present.
>> +
>> +Additionally, when these vcpu feature flags are not set then KVM will
>> +filter out the Pointer Authentication system key registers from
>> +KVM_GET/SET_REG_* ioctls and mask those features from cpufeature ID
>> +register. Any attempt to use the Pointer Authentication instructions will
>> +result in an UNDEFINED exception being injected into the guest.
>> diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
>> index 68509de..9d202f4 100644
>> --- a/Documentation/virtual/kvm/api.txt
>> +++ b/Documentation/virtual/kvm/api.txt
>> @@ -2753,6 +2753,12 @@ Possible features:
>>   	  Depends on KVM_CAP_ARM_PSCI_0_2.
>>   	- KVM_ARM_VCPU_PMU_V3: Emulate PMUv3 for the CPU.
>>   	  Depends on KVM_CAP_ARM_PMU_V3.
>> +	- KVM_ARM_VCPU_PTRAUTH_ADDRESS: Enables Address Pointer authentication
>> +	  for the CPU and supported only on arm64 architecture.
> 
> Nit: (arm64 only) would be less verbose.
ok.
> 
>> +	  Must be requested if KVM_ARM_VCPU_PTRAUTH_GENERIC is also requested.
>> +	- KVM_ARM_VCPU_PTRAUTH_GENERIC: Enables Generic Pointer authentication
>> +	  for the CPU and supported only on arm64 architecture.
> 
> Ditto.  (Not a big deal, though.)
> 
>> +	  Must be requested if KVM_ARM_VCPU_PTRAUTH_ADDRESS is also requested.
>>   
>>   	- KVM_ARM_VCPU_SVE: Enables SVE for the CPU (arm64 only).
>>   	  Depends on KVM_CAP_ARM_SVE.
>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
>> index a585d82..25f2598 100644
>> --- a/arch/arm64/include/asm/kvm_host.h
>> +++ b/arch/arm64/include/asm/kvm_host.h
>> @@ -49,7 +49,7 @@
>>   
>>   #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS
>>   
>> -#define KVM_VCPU_MAX_FEATURES 5
>> +#define KVM_VCPU_MAX_FEATURES 7
>>   
>>   #define KVM_REQ_SLEEP \
>>   	KVM_ARCH_REQ_FLAGS(0, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
>> diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
>> index 6963b7e..fec2253 100644
>> --- a/arch/arm64/include/uapi/asm/kvm.h
>> +++ b/arch/arm64/include/uapi/asm/kvm.h
>> @@ -103,6 +103,8 @@ struct kvm_regs {
>>   #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
>>   #define KVM_ARM_VCPU_PMU_V3		3 /* Support guest PMUv3 */
>>   #define KVM_ARM_VCPU_SVE		4 /* enable SVE for this CPU */
>> +#define KVM_ARM_VCPU_PTRAUTH_ADDRESS	5 /* VCPU uses address authentication */
>> +#define KVM_ARM_VCPU_PTRAUTH_GENERIC	6 /* VCPU uses generic authentication */
>>   
>>   struct kvm_vcpu_init {
>>   	__u32 target;
>> diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
>> index f13378d..d13406b 100644
>> --- a/arch/arm64/kvm/reset.c
>> +++ b/arch/arm64/kvm/reset.c
>> @@ -221,6 +221,24 @@ static void kvm_vcpu_reset_sve(struct kvm_vcpu *vcpu)
>>   		memset(vcpu->arch.sve_state, 0, vcpu_sve_state_size(vcpu));
>>   }
>>   
>> +static int kvm_vcpu_enable_ptrauth(struct kvm_vcpu *vcpu)
>> +{
>> +	/* Support ptrauth only if the system supports these capabilities. */
>> +	if (!has_vhe() || !system_supports_address_auth() ||
>> +		!system_supports_generic_auth())
>> +		return -EINVAL;
> 
> Nit: Funny indentation.  Please align with the "if (".  It's also
> preferable to keep the two system_supports_xxx_auth() aligned or on the
> same line, since they go together.  This might be clearer split up:
> 
> 	if (!has_vhe())
> 		return -EINVAL;
> 
> 	if (!system_supports_address_auth() ||
> 	    !system_supports_generic_auth())
> 		return -EINVAL;
ok.
> 
>> +	/*
> 
> I'd say "for now" here, since we might relax this rule later on.
> 
>> +	 * Make sure that both address/generic pointer authentication
>> +	 * features are requested by the userspace together.
>> +	 */
>> +	if (!test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) ||
>> +		!test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features))
>> +		return -EINVAL;
>> +
>> +	vcpu->arch.flags |= KVM_ARM64_GUEST_HAS_PTRAUTH;
>> +	return 0;
>> +}
>> +
>>   /**
>>    * kvm_reset_vcpu - sets core registers and sys_regs to reset value
>>    * @vcpu: The VCPU pointer
>> @@ -261,6 +279,12 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
>>   		kvm_vcpu_reset_sve(vcpu);
>>   	}
>>   
>> +	if (test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) ||
>> +		test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features)) {
> 
> Nit: funny indentation.
> 
>> +		if (kvm_vcpu_enable_ptrauth(vcpu))
>> +			goto out;
>> +	}
>> +
> 
> [...]
> 
> With those fixed,
sure.
> 
> Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Thanks,
Amit
> 
> Cheers
> ---Dave
> 

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v9 3/5] KVM: arm64: Add userspace flag to enable pointer authentication
@ 2019-04-17  8:17       ` Amit Daniel Kachhap
  0 siblings, 0 replies; 77+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-17  8:17 UTC (permalink / raw)
  To: Dave Martin
  Cc: Marc Zyngier, Catalin Marinas, Will Deacon, linux-kernel,
	Kristina Martsenko, Ramana Radhakrishnan, kvmarm,
	linux-arm-kernel


Hi,
On 4/16/19 10:01 PM, Dave Martin wrote:
> On Fri, Apr 12, 2019 at 08:50:34AM +0530, Amit Daniel Kachhap wrote:
>> Now that the building blocks of pointer authentication are present, lets
>> add userspace flags KVM_ARM_VCPU_PTRAUTH_ADDRESS and
>> KVM_ARM_VCPU_PTRAUTH_GENERIC. These flags will enable pointer
>> authentication for the KVM guest on a per-vcpu basis through the ioctl
>> KVM_ARM_VCPU_INIT.
>>
>> This features will allow the KVM guest to allow the handling of
>> pointer authentication instructions or to treat them as undefined
>> if not set.
>>
>> Necessary documentations are added to reflect the changes done.
>>
>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>> Cc: Mark Rutland <mark.rutland@arm.com>
>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>> Cc: Christoffer Dall <christoffer.dall@arm.com>
>> Cc: kvmarm@lists.cs.columbia.edu
>> ---
>>
>> Changes since v8:
>> *  Update vcpu->arch.flags to final enable state. [Dave Martin]
>> *  Change in Documentation to make clear the implementation of 2 vcpu
>>     feature flags. [Dave Martin]
>>
>>   Documentation/arm64/pointer-authentication.txt | 22 ++++++++++++++++++----
>>   Documentation/virtual/kvm/api.txt              |  6 ++++++
>>   arch/arm64/include/asm/kvm_host.h              |  2 +-
>>   arch/arm64/include/uapi/asm/kvm.h              |  2 ++
>>   arch/arm64/kvm/reset.c                         | 24 ++++++++++++++++++++++++
>>   5 files changed, 51 insertions(+), 5 deletions(-)
>>
>> diff --git a/Documentation/arm64/pointer-authentication.txt b/Documentation/arm64/pointer-authentication.txt
>> index 5baca42..fc71b33 100644
>> --- a/Documentation/arm64/pointer-authentication.txt
>> +++ b/Documentation/arm64/pointer-authentication.txt
>> @@ -87,7 +87,21 @@ used to get and set the keys for a thread.
>>   Virtualization
>>   --------------
>>   
>> -Pointer authentication is not currently supported in KVM guests. KVM
>> -will mask the feature bits from ID_AA64ISAR1_EL1, and attempted use of
>> -the feature will result in an UNDEFINED exception being injected into
>> -the guest.
>> +Pointer authentication is enabled in KVM guest when each virtual cpu is
>> +initialised by passing flags KVM_ARM_VCPU_PTRAUTH_[ADDRESS/GENERIC] and
>> +requesting these two separate cpu features to be enabled. The current KVM
>> +guest implementation works by enabling both features together, so both
>> +these userspace flags are checked before enabling pointer authentication.
>> +The separate userspace flag will allow to have no userspace ABI changes
>> +if support is added in the future to allow these two features to be
>> +enabled independently of one another.
>> +
>> +As Arm Architecture specifies that Pointer Authentication feature is
>> +implemented along with the VHE feature so KVM arm64 ptrauth code relies
>> +on VHE mode to be present.
>> +
>> +Additionally, when these vcpu feature flags are not set then KVM will
>> +filter out the Pointer Authentication system key registers from
>> +KVM_GET/SET_REG_* ioctls and mask those features from cpufeature ID
>> +register. Any attempt to use the Pointer Authentication instructions will
>> +result in an UNDEFINED exception being injected into the guest.
>> diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
>> index 68509de..9d202f4 100644
>> --- a/Documentation/virtual/kvm/api.txt
>> +++ b/Documentation/virtual/kvm/api.txt
>> @@ -2753,6 +2753,12 @@ Possible features:
>>   	  Depends on KVM_CAP_ARM_PSCI_0_2.
>>   	- KVM_ARM_VCPU_PMU_V3: Emulate PMUv3 for the CPU.
>>   	  Depends on KVM_CAP_ARM_PMU_V3.
>> +	- KVM_ARM_VCPU_PTRAUTH_ADDRESS: Enables Address Pointer authentication
>> +	  for the CPU and supported only on arm64 architecture.
> 
> Nit: (arm64 only) would be less verbose.
ok.
> 
>> +	  Must be requested if KVM_ARM_VCPU_PTRAUTH_GENERIC is also requested.
>> +	- KVM_ARM_VCPU_PTRAUTH_GENERIC: Enables Generic Pointer authentication
>> +	  for the CPU and supported only on arm64 architecture.
> 
> Ditto.  (Not a big deal, though.)
> 
>> +	  Must be requested if KVM_ARM_VCPU_PTRAUTH_ADDRESS is also requested.
>>   
>>   	- KVM_ARM_VCPU_SVE: Enables SVE for the CPU (arm64 only).
>>   	  Depends on KVM_CAP_ARM_SVE.
>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
>> index a585d82..25f2598 100644
>> --- a/arch/arm64/include/asm/kvm_host.h
>> +++ b/arch/arm64/include/asm/kvm_host.h
>> @@ -49,7 +49,7 @@
>>   
>>   #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS
>>   
>> -#define KVM_VCPU_MAX_FEATURES 5
>> +#define KVM_VCPU_MAX_FEATURES 7
>>   
>>   #define KVM_REQ_SLEEP \
>>   	KVM_ARCH_REQ_FLAGS(0, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
>> diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
>> index 6963b7e..fec2253 100644
>> --- a/arch/arm64/include/uapi/asm/kvm.h
>> +++ b/arch/arm64/include/uapi/asm/kvm.h
>> @@ -103,6 +103,8 @@ struct kvm_regs {
>>   #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
>>   #define KVM_ARM_VCPU_PMU_V3		3 /* Support guest PMUv3 */
>>   #define KVM_ARM_VCPU_SVE		4 /* enable SVE for this CPU */
>> +#define KVM_ARM_VCPU_PTRAUTH_ADDRESS	5 /* VCPU uses address authentication */
>> +#define KVM_ARM_VCPU_PTRAUTH_GENERIC	6 /* VCPU uses generic authentication */
>>   
>>   struct kvm_vcpu_init {
>>   	__u32 target;
>> diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
>> index f13378d..d13406b 100644
>> --- a/arch/arm64/kvm/reset.c
>> +++ b/arch/arm64/kvm/reset.c
>> @@ -221,6 +221,24 @@ static void kvm_vcpu_reset_sve(struct kvm_vcpu *vcpu)
>>   		memset(vcpu->arch.sve_state, 0, vcpu_sve_state_size(vcpu));
>>   }
>>   
>> +static int kvm_vcpu_enable_ptrauth(struct kvm_vcpu *vcpu)
>> +{
>> +	/* Support ptrauth only if the system supports these capabilities. */
>> +	if (!has_vhe() || !system_supports_address_auth() ||
>> +		!system_supports_generic_auth())
>> +		return -EINVAL;
> 
> Nit: Funny indentation.  Please align with the "if (".  It's also
> preferable to keep the two system_supports_xxx_auth() aligned or on the
> same line, since they go together.  This might be clearer split up:
> 
> 	if (!has_vhe())
> 		return -EINVAL;
> 
> 	if (!system_supports_address_auth() ||
> 	    !system_supports_generic_auth())
> 		return -EINVAL;
ok.
> 
>> +	/*
> 
> I'd say "for now" here, since we might relax this rule later on.
> 
>> +	 * Make sure that both address/generic pointer authentication
>> +	 * features are requested by the userspace together.
>> +	 */
>> +	if (!test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) ||
>> +		!test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features))
>> +		return -EINVAL;
>> +
>> +	vcpu->arch.flags |= KVM_ARM64_GUEST_HAS_PTRAUTH;
>> +	return 0;
>> +}
>> +
>>   /**
>>    * kvm_reset_vcpu - sets core registers and sys_regs to reset value
>>    * @vcpu: The VCPU pointer
>> @@ -261,6 +279,12 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
>>   		kvm_vcpu_reset_sve(vcpu);
>>   	}
>>   
>> +	if (test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) ||
>> +		test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features)) {
> 
> Nit: funny indentation.
> 
>> +		if (kvm_vcpu_enable_ptrauth(vcpu))
>> +			goto out;
>> +	}
>> +
> 
> [...]
> 
> With those fixed,
sure.
> 
> Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Thanks,
Amit
> 
> Cheers
> ---Dave
> 
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v9 3/5] KVM: arm64: Add userspace flag to enable pointer authentication
@ 2019-04-17  8:17       ` Amit Daniel Kachhap
  0 siblings, 0 replies; 77+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-17  8:17 UTC (permalink / raw)
  To: Dave Martin
  Cc: Marc Zyngier, Catalin Marinas, Will Deacon, linux-kernel,
	Kristina Martsenko, Ramana Radhakrishnan, kvmarm,
	linux-arm-kernel


Hi,
On 4/16/19 10:01 PM, Dave Martin wrote:
> On Fri, Apr 12, 2019 at 08:50:34AM +0530, Amit Daniel Kachhap wrote:
>> Now that the building blocks of pointer authentication are present, lets
>> add userspace flags KVM_ARM_VCPU_PTRAUTH_ADDRESS and
>> KVM_ARM_VCPU_PTRAUTH_GENERIC. These flags will enable pointer
>> authentication for the KVM guest on a per-vcpu basis through the ioctl
>> KVM_ARM_VCPU_INIT.
>>
>> This features will allow the KVM guest to allow the handling of
>> pointer authentication instructions or to treat them as undefined
>> if not set.
>>
>> Necessary documentations are added to reflect the changes done.
>>
>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>> Cc: Mark Rutland <mark.rutland@arm.com>
>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>> Cc: Christoffer Dall <christoffer.dall@arm.com>
>> Cc: kvmarm@lists.cs.columbia.edu
>> ---
>>
>> Changes since v8:
>> *  Update vcpu->arch.flags to final enable state. [Dave Martin]
>> *  Change in Documentation to make clear the implementation of 2 vcpu
>>     feature flags. [Dave Martin]
>>
>>   Documentation/arm64/pointer-authentication.txt | 22 ++++++++++++++++++----
>>   Documentation/virtual/kvm/api.txt              |  6 ++++++
>>   arch/arm64/include/asm/kvm_host.h              |  2 +-
>>   arch/arm64/include/uapi/asm/kvm.h              |  2 ++
>>   arch/arm64/kvm/reset.c                         | 24 ++++++++++++++++++++++++
>>   5 files changed, 51 insertions(+), 5 deletions(-)
>>
>> diff --git a/Documentation/arm64/pointer-authentication.txt b/Documentation/arm64/pointer-authentication.txt
>> index 5baca42..fc71b33 100644
>> --- a/Documentation/arm64/pointer-authentication.txt
>> +++ b/Documentation/arm64/pointer-authentication.txt
>> @@ -87,7 +87,21 @@ used to get and set the keys for a thread.
>>   Virtualization
>>   --------------
>>   
>> -Pointer authentication is not currently supported in KVM guests. KVM
>> -will mask the feature bits from ID_AA64ISAR1_EL1, and attempted use of
>> -the feature will result in an UNDEFINED exception being injected into
>> -the guest.
>> +Pointer authentication is enabled in KVM guest when each virtual cpu is
>> +initialised by passing flags KVM_ARM_VCPU_PTRAUTH_[ADDRESS/GENERIC] and
>> +requesting these two separate cpu features to be enabled. The current KVM
>> +guest implementation works by enabling both features together, so both
>> +these userspace flags are checked before enabling pointer authentication.
>> +The separate userspace flag will allow to have no userspace ABI changes
>> +if support is added in the future to allow these two features to be
>> +enabled independently of one another.
>> +
>> +As Arm Architecture specifies that Pointer Authentication feature is
>> +implemented along with the VHE feature so KVM arm64 ptrauth code relies
>> +on VHE mode to be present.
>> +
>> +Additionally, when these vcpu feature flags are not set then KVM will
>> +filter out the Pointer Authentication system key registers from
>> +KVM_GET/SET_REG_* ioctls and mask those features from cpufeature ID
>> +register. Any attempt to use the Pointer Authentication instructions will
>> +result in an UNDEFINED exception being injected into the guest.
>> diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
>> index 68509de..9d202f4 100644
>> --- a/Documentation/virtual/kvm/api.txt
>> +++ b/Documentation/virtual/kvm/api.txt
>> @@ -2753,6 +2753,12 @@ Possible features:
>>   	  Depends on KVM_CAP_ARM_PSCI_0_2.
>>   	- KVM_ARM_VCPU_PMU_V3: Emulate PMUv3 for the CPU.
>>   	  Depends on KVM_CAP_ARM_PMU_V3.
>> +	- KVM_ARM_VCPU_PTRAUTH_ADDRESS: Enables Address Pointer authentication
>> +	  for the CPU and supported only on arm64 architecture.
> 
> Nit: (arm64 only) would be less verbose.
ok.
> 
>> +	  Must be requested if KVM_ARM_VCPU_PTRAUTH_GENERIC is also requested.
>> +	- KVM_ARM_VCPU_PTRAUTH_GENERIC: Enables Generic Pointer authentication
>> +	  for the CPU and supported only on arm64 architecture.
> 
> Ditto.  (Not a big deal, though.)
> 
>> +	  Must be requested if KVM_ARM_VCPU_PTRAUTH_ADDRESS is also requested.
>>   
>>   	- KVM_ARM_VCPU_SVE: Enables SVE for the CPU (arm64 only).
>>   	  Depends on KVM_CAP_ARM_SVE.
>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
>> index a585d82..25f2598 100644
>> --- a/arch/arm64/include/asm/kvm_host.h
>> +++ b/arch/arm64/include/asm/kvm_host.h
>> @@ -49,7 +49,7 @@
>>   
>>   #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS
>>   
>> -#define KVM_VCPU_MAX_FEATURES 5
>> +#define KVM_VCPU_MAX_FEATURES 7
>>   
>>   #define KVM_REQ_SLEEP \
>>   	KVM_ARCH_REQ_FLAGS(0, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
>> diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
>> index 6963b7e..fec2253 100644
>> --- a/arch/arm64/include/uapi/asm/kvm.h
>> +++ b/arch/arm64/include/uapi/asm/kvm.h
>> @@ -103,6 +103,8 @@ struct kvm_regs {
>>   #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
>>   #define KVM_ARM_VCPU_PMU_V3		3 /* Support guest PMUv3 */
>>   #define KVM_ARM_VCPU_SVE		4 /* enable SVE for this CPU */
>> +#define KVM_ARM_VCPU_PTRAUTH_ADDRESS	5 /* VCPU uses address authentication */
>> +#define KVM_ARM_VCPU_PTRAUTH_GENERIC	6 /* VCPU uses generic authentication */
>>   
>>   struct kvm_vcpu_init {
>>   	__u32 target;
>> diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
>> index f13378d..d13406b 100644
>> --- a/arch/arm64/kvm/reset.c
>> +++ b/arch/arm64/kvm/reset.c
>> @@ -221,6 +221,24 @@ static void kvm_vcpu_reset_sve(struct kvm_vcpu *vcpu)
>>   		memset(vcpu->arch.sve_state, 0, vcpu_sve_state_size(vcpu));
>>   }
>>   
>> +static int kvm_vcpu_enable_ptrauth(struct kvm_vcpu *vcpu)
>> +{
>> +	/* Support ptrauth only if the system supports these capabilities. */
>> +	if (!has_vhe() || !system_supports_address_auth() ||
>> +		!system_supports_generic_auth())
>> +		return -EINVAL;
> 
> Nit: Funny indentation.  Please align with the "if (".  It's also
> preferable to keep the two system_supports_xxx_auth() aligned or on the
> same line, since they go together.  This might be clearer split up:
> 
> 	if (!has_vhe())
> 		return -EINVAL;
> 
> 	if (!system_supports_address_auth() ||
> 	    !system_supports_generic_auth())
> 		return -EINVAL;
ok.
> 
>> +	/*
> 
> I'd say "for now" here, since we might relax this rule later on.
> 
>> +	 * Make sure that both address/generic pointer authentication
>> +	 * features are requested by the userspace together.
>> +	 */
>> +	if (!test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) ||
>> +		!test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features))
>> +		return -EINVAL;
>> +
>> +	vcpu->arch.flags |= KVM_ARM64_GUEST_HAS_PTRAUTH;
>> +	return 0;
>> +}
>> +
>>   /**
>>    * kvm_reset_vcpu - sets core registers and sys_regs to reset value
>>    * @vcpu: The VCPU pointer
>> @@ -261,6 +279,12 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
>>   		kvm_vcpu_reset_sve(vcpu);
>>   	}
>>   
>> +	if (test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) ||
>> +		test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features)) {
> 
> Nit: funny indentation.
> 
>> +		if (kvm_vcpu_enable_ptrauth(vcpu))
>> +			goto out;
>> +	}
>> +
> 
> [...]
> 
> With those fixed,
sure.
> 
> Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Thanks,
Amit
> 
> Cheers
> ---Dave
> 

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v9 1/5] KVM: arm64: Add a vcpu flag to control ptrauth for guest
@ 2019-04-17  8:35     ` Marc Zyngier
  0 siblings, 0 replies; 77+ messages in thread
From: Marc Zyngier @ 2019-04-17  8:35 UTC (permalink / raw)
  To: Amit Daniel Kachhap, linux-arm-kernel
  Cc: Christoffer Dall, Catalin Marinas, Will Deacon, Andrew Jones,
	Dave Martin, Ramana Radhakrishnan, kvmarm, Kristina Martsenko,
	linux-kernel, Mark Rutland, James Morse, Julien Thierry

On 12/04/2019 04:20, Amit Daniel Kachhap wrote:
> A per vcpu flag is added to check if pointer authentication is
> enabled for the vcpu or not. This flag may be enabled according to
> the necessary user policies and host capabilities.
> 
> This patch also adds a helper to check the flag.
> 
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Christoffer Dall <christoffer.dall@arm.com>
> Cc: kvmarm@lists.cs.columbia.edu
> ---
> 
> Changes since v8:
> * Added a new per vcpu flag which will store Pointer Authentication enable
>   status instead of checking them again. [Dave Martin]
> 
>  arch/arm64/include/asm/kvm_host.h | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 9d57cf8..31dbc7c 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -355,10 +355,14 @@ struct kvm_vcpu_arch {
>  #define KVM_ARM64_HOST_SVE_ENABLED	(1 << 4) /* SVE enabled for EL0 */
>  #define KVM_ARM64_GUEST_HAS_SVE		(1 << 5) /* SVE exposed to guest */
>  #define KVM_ARM64_VCPU_SVE_FINALIZED	(1 << 6) /* SVE config completed */
> +#define KVM_ARM64_GUEST_HAS_PTRAUTH	(1 << 7) /* PTRAUTH exposed to guest */
>  
>  #define vcpu_has_sve(vcpu) (system_supports_sve() && \
>  			    ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_SVE))
>  
> +#define vcpu_has_ptrauth(vcpu)	\
> +			((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH)
> +

Just as for SVE, please first check that the system has PTRAUTH.
Something like:

		(cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_ARCH) && \
		 ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH))

This will save an extra load on unsuspecting CPUs thanks to the static
key embedded in the capability structure.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v9 1/5] KVM: arm64: Add a vcpu flag to control ptrauth for guest
@ 2019-04-17  8:35     ` Marc Zyngier
  0 siblings, 0 replies; 77+ messages in thread
From: Marc Zyngier @ 2019-04-17  8:35 UTC (permalink / raw)
  To: Amit Daniel Kachhap, linux-arm-kernel
  Cc: Catalin Marinas, Will Deacon, Kristina Martsenko, kvmarm,
	Ramana Radhakrishnan, Dave Martin, linux-kernel

On 12/04/2019 04:20, Amit Daniel Kachhap wrote:
> A per vcpu flag is added to check if pointer authentication is
> enabled for the vcpu or not. This flag may be enabled according to
> the necessary user policies and host capabilities.
> 
> This patch also adds a helper to check the flag.
> 
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Christoffer Dall <christoffer.dall@arm.com>
> Cc: kvmarm@lists.cs.columbia.edu
> ---
> 
> Changes since v8:
> * Added a new per vcpu flag which will store Pointer Authentication enable
>   status instead of checking them again. [Dave Martin]
> 
>  arch/arm64/include/asm/kvm_host.h | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 9d57cf8..31dbc7c 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -355,10 +355,14 @@ struct kvm_vcpu_arch {
>  #define KVM_ARM64_HOST_SVE_ENABLED	(1 << 4) /* SVE enabled for EL0 */
>  #define KVM_ARM64_GUEST_HAS_SVE		(1 << 5) /* SVE exposed to guest */
>  #define KVM_ARM64_VCPU_SVE_FINALIZED	(1 << 6) /* SVE config completed */
> +#define KVM_ARM64_GUEST_HAS_PTRAUTH	(1 << 7) /* PTRAUTH exposed to guest */
>  
>  #define vcpu_has_sve(vcpu) (system_supports_sve() && \
>  			    ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_SVE))
>  
> +#define vcpu_has_ptrauth(vcpu)	\
> +			((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH)
> +

Just as for SVE, please first check that the system has PTRAUTH.
Something like:

		(cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_ARCH) && \
		 ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH))

This will save an extra load on unsuspecting CPUs thanks to the static
key embedded in the capability structure.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v9 1/5] KVM: arm64: Add a vcpu flag to control ptrauth for guest
@ 2019-04-17  8:35     ` Marc Zyngier
  0 siblings, 0 replies; 77+ messages in thread
From: Marc Zyngier @ 2019-04-17  8:35 UTC (permalink / raw)
  To: Amit Daniel Kachhap, linux-arm-kernel
  Cc: Mark Rutland, Andrew Jones, Julien Thierry, Catalin Marinas,
	Will Deacon, Christoffer Dall, Kristina Martsenko, kvmarm,
	James Morse, Ramana Radhakrishnan, Dave Martin, linux-kernel

On 12/04/2019 04:20, Amit Daniel Kachhap wrote:
> A per vcpu flag is added to check if pointer authentication is
> enabled for the vcpu or not. This flag may be enabled according to
> the necessary user policies and host capabilities.
> 
> This patch also adds a helper to check the flag.
> 
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Christoffer Dall <christoffer.dall@arm.com>
> Cc: kvmarm@lists.cs.columbia.edu
> ---
> 
> Changes since v8:
> * Added a new per vcpu flag which will store Pointer Authentication enable
>   status instead of checking them again. [Dave Martin]
> 
>  arch/arm64/include/asm/kvm_host.h | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 9d57cf8..31dbc7c 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -355,10 +355,14 @@ struct kvm_vcpu_arch {
>  #define KVM_ARM64_HOST_SVE_ENABLED	(1 << 4) /* SVE enabled for EL0 */
>  #define KVM_ARM64_GUEST_HAS_SVE		(1 << 5) /* SVE exposed to guest */
>  #define KVM_ARM64_VCPU_SVE_FINALIZED	(1 << 6) /* SVE config completed */
> +#define KVM_ARM64_GUEST_HAS_PTRAUTH	(1 << 7) /* PTRAUTH exposed to guest */
>  
>  #define vcpu_has_sve(vcpu) (system_supports_sve() && \
>  			    ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_SVE))
>  
> +#define vcpu_has_ptrauth(vcpu)	\
> +			((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH)
> +

Just as for SVE, please first check that the system has PTRAUTH.
Something like:

		(cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_ARCH) && \
		 ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH))

This will save an extra load on unsuspecting CPUs thanks to the static
key embedded in the capability structure.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [kvmtool PATCH v9 5/5] KVM: arm/arm64: Add a vcpu feature for pointer authentication
@ 2019-04-17  8:55     ` Alexandru Elisei
  0 siblings, 0 replies; 77+ messages in thread
From: Alexandru Elisei @ 2019-04-17  8:55 UTC (permalink / raw)
  To: kvmarm

Hello,

On 4/12/19 4:20 AM, Amit Daniel Kachhap wrote:
> This patch adds a runtime capabality for KVM tool to enable Arm64 8.3
> Pointer Authentication in guest kernel. Two vcpu features
> KVM_ARM_VCPU_PTRAUTH_[ADDRESS/GENERIC] are supplied together to enable
> Pointer Authentication in KVM guest after checking the capability.
>
> Command line options --enable-ptrauth and --disable-ptrauth are added
> to use this feature. However, if those options are not provided then
> also this feature is enabled if host supports this capability.
>
> The macros defined in the headers are not in sync and should be replaced
> from the upstream.
>
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> ---
> Changes since v8:
> *  Added option --enable-ptrauth and --disable-ptrauth to use ptrauth. Also
>    enable ptrauth if no option provided and Host supports ptrauth. [Dave Martin]
> * The macro definition are not linear as the kvmtool is not synchronised with the
>   kernel changes present in kvmarm/next tree.
>
>  arm/aarch32/include/kvm/kvm-cpu-arch.h    |  1 +
>  arm/aarch64/include/asm/kvm.h             |  2 ++
>  arm/aarch64/include/kvm/kvm-config-arch.h |  6 +++++-
>  arm/aarch64/include/kvm/kvm-cpu-arch.h    |  2 ++
>  arm/include/arm-common/kvm-config-arch.h  |  2 ++
>  arm/kvm-cpu.c                             | 11 +++++++++++
>  include/linux/kvm.h                       |  2 ++
>  7 files changed, 25 insertions(+), 1 deletion(-)
>
> diff --git a/arm/aarch32/include/kvm/kvm-cpu-arch.h b/arm/aarch32/include/kvm/kvm-cpu-arch.h
> index d28ea67..520ea76 100644
> --- a/arm/aarch32/include/kvm/kvm-cpu-arch.h
> +++ b/arm/aarch32/include/kvm/kvm-cpu-arch.h
> @@ -13,4 +13,5 @@
>  #define ARM_CPU_ID		0, 0, 0
>  #define ARM_CPU_ID_MPIDR	5
>  
> +#define ARM_VCPU_PTRAUTH_FEATURE	0
>  #endif /* KVM__KVM_CPU_ARCH_H */
> diff --git a/arm/aarch64/include/asm/kvm.h b/arm/aarch64/include/asm/kvm.h
> index 97c3478..a2546e6 100644
> --- a/arm/aarch64/include/asm/kvm.h
> +++ b/arm/aarch64/include/asm/kvm.h
> @@ -102,6 +102,8 @@ struct kvm_regs {
>  #define KVM_ARM_VCPU_EL1_32BIT		1 /* CPU running a 32bit VM */
>  #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
>  #define KVM_ARM_VCPU_PMU_V3		3 /* Support guest PMUv3 */
> +#define KVM_ARM_VCPU_PTRAUTH_ADDRESS	5 /* CPU uses address pointer authentication */
> +#define KVM_ARM_VCPU_PTRAUTH_GENERIC	6 /* CPU uses generic pointer authentication */
>  
>  struct kvm_vcpu_init {
>  	__u32 target;
> diff --git a/arm/aarch64/include/kvm/kvm-config-arch.h b/arm/aarch64/include/kvm/kvm-config-arch.h
> index 04be43d..0279b13 100644
> --- a/arm/aarch64/include/kvm/kvm-config-arch.h
> +++ b/arm/aarch64/include/kvm/kvm-config-arch.h
> @@ -8,7 +8,11 @@
>  			"Create PMUv3 device"),				\
>  	OPT_U64('\0', "kaslr-seed", &(cfg)->kaslr_seed,			\
>  			"Specify random seed for Kernel Address Space "	\
> -			"Layout Randomization (KASLR)"),
> +			"Layout Randomization (KASLR)"),		\
> +	OPT_BOOLEAN('\0', "enable-ptrauth", &(cfg)->enable_ptrauth,	\
> +			"Enables pointer authentication"),		\
> +	OPT_BOOLEAN('\0', "disable-ptrauth", &(cfg)->disable_ptrauth,	\
> +			"Disables pointer authentication"),
>  
>  #include "arm-common/kvm-config-arch.h"
>  
> diff --git a/arm/aarch64/include/kvm/kvm-cpu-arch.h b/arm/aarch64/include/kvm/kvm-cpu-arch.h
> index a9d8563..fcc2107 100644
> --- a/arm/aarch64/include/kvm/kvm-cpu-arch.h
> +++ b/arm/aarch64/include/kvm/kvm-cpu-arch.h
> @@ -17,4 +17,6 @@
>  #define ARM_CPU_CTRL		3, 0, 1, 0
>  #define ARM_CPU_CTRL_SCTLR_EL1	0
>  
> +#define ARM_VCPU_PTRAUTH_FEATURE	((1UL << KVM_ARM_VCPU_PTRAUTH_ADDRESS) \
> +					| (1UL << KVM_ARM_VCPU_PTRAUTH_GENERIC))
>  #endif /* KVM__KVM_CPU_ARCH_H */
> diff --git a/arm/include/arm-common/kvm-config-arch.h b/arm/include/arm-common/kvm-config-arch.h
> index 5734c46..1b4287d 100644
> --- a/arm/include/arm-common/kvm-config-arch.h
> +++ b/arm/include/arm-common/kvm-config-arch.h
> @@ -10,6 +10,8 @@ struct kvm_config_arch {
>  	bool		aarch32_guest;
>  	bool		has_pmuv3;
>  	u64		kaslr_seed;
> +	bool		enable_ptrauth;
> +	bool		disable_ptrauth;
>  	enum irqchip_type irqchip;
>  	u64		fw_addr;
>  };
> diff --git a/arm/kvm-cpu.c b/arm/kvm-cpu.c
> index 7780251..a45a649 100644
> --- a/arm/kvm-cpu.c
> +++ b/arm/kvm-cpu.c
> @@ -69,6 +69,17 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned long cpu_id)
>  	}
>  
>  	/*
> +	 * Always enable Pointer Authentication if requested. If system supports
> +	 * this extension then also enable it by default provided no disable
> +	 * request present.
> +	 */
> +	if ((kvm->cfg.arch.enable_ptrauth) ||
> +		(kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_ADDRESS) &&
> +		kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_GENERIC) &&
> +		!kvm->cfg.arch.disable_ptrauth))

I take it that:

(1) Having both --enable-ptrauth and --disable-ptrauth present on the kvmtools
command line is allowed

(2) --enable-ptrauth takes precedence over --disable-ptrauth.

Have you considered returning an error if both are present? Having
--enable-ptrauth take precedence over --disable-ptrauth looks arbitrary to me
(my expectations would have been for --disable-ptrauth to have precendece) and
probably the user is doing something wrong if kvmtools is invoked with both
arguments.

> +			vcpu_init.features[0] |= ARM_VCPU_PTRAUTH_FEATURE;
> +
> +	/*
>  	 * If the preferred target ioctl is successful then
>  	 * use preferred target else try each and every target type
>  	 */
> diff --git a/include/linux/kvm.h b/include/linux/kvm.h
> index 6d4ea4b..de1033b 100644
> --- a/include/linux/kvm.h
> +++ b/include/linux/kvm.h
> @@ -988,6 +988,8 @@ struct kvm_ppc_resize_hpt {
>  #define KVM_CAP_ARM_VM_IPA_SIZE 165
>  #define KVM_CAP_MANUAL_DIRTY_LOG_PROTECT 166
>  #define KVM_CAP_HYPERV_CPUID 167
> +#define KVM_CAP_ARM_PTRAUTH_ADDRESS 169
> +#define KVM_CAP_ARM_PTRAUTH_GENERIC 170
>  
>  #ifdef KVM_CAP_IRQ_ROUTING
>  

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [kvmtool PATCH v9 5/5] KVM: arm/arm64: Add a vcpu feature for pointer authentication
@ 2019-04-17  8:55     ` Alexandru Elisei
  0 siblings, 0 replies; 77+ messages in thread
From: Alexandru Elisei @ 2019-04-17  8:55 UTC (permalink / raw)
  To: kvmarm

Hello,

On 4/12/19 4:20 AM, Amit Daniel Kachhap wrote:
> This patch adds a runtime capabality for KVM tool to enable Arm64 8.3
> Pointer Authentication in guest kernel. Two vcpu features
> KVM_ARM_VCPU_PTRAUTH_[ADDRESS/GENERIC] are supplied together to enable
> Pointer Authentication in KVM guest after checking the capability.
>
> Command line options --enable-ptrauth and --disable-ptrauth are added
> to use this feature. However, if those options are not provided then
> also this feature is enabled if host supports this capability.
>
> The macros defined in the headers are not in sync and should be replaced
> from the upstream.
>
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> ---
> Changes since v8:
> *  Added option --enable-ptrauth and --disable-ptrauth to use ptrauth. Also
>    enable ptrauth if no option provided and Host supports ptrauth. [Dave Martin]
> * The macro definition are not linear as the kvmtool is not synchronised with the
>   kernel changes present in kvmarm/next tree.
>
>  arm/aarch32/include/kvm/kvm-cpu-arch.h    |  1 +
>  arm/aarch64/include/asm/kvm.h             |  2 ++
>  arm/aarch64/include/kvm/kvm-config-arch.h |  6 +++++-
>  arm/aarch64/include/kvm/kvm-cpu-arch.h    |  2 ++
>  arm/include/arm-common/kvm-config-arch.h  |  2 ++
>  arm/kvm-cpu.c                             | 11 +++++++++++
>  include/linux/kvm.h                       |  2 ++
>  7 files changed, 25 insertions(+), 1 deletion(-)
>
> diff --git a/arm/aarch32/include/kvm/kvm-cpu-arch.h b/arm/aarch32/include/kvm/kvm-cpu-arch.h
> index d28ea67..520ea76 100644
> --- a/arm/aarch32/include/kvm/kvm-cpu-arch.h
> +++ b/arm/aarch32/include/kvm/kvm-cpu-arch.h
> @@ -13,4 +13,5 @@
>  #define ARM_CPU_ID		0, 0, 0
>  #define ARM_CPU_ID_MPIDR	5
>  
> +#define ARM_VCPU_PTRAUTH_FEATURE	0
>  #endif /* KVM__KVM_CPU_ARCH_H */
> diff --git a/arm/aarch64/include/asm/kvm.h b/arm/aarch64/include/asm/kvm.h
> index 97c3478..a2546e6 100644
> --- a/arm/aarch64/include/asm/kvm.h
> +++ b/arm/aarch64/include/asm/kvm.h
> @@ -102,6 +102,8 @@ struct kvm_regs {
>  #define KVM_ARM_VCPU_EL1_32BIT		1 /* CPU running a 32bit VM */
>  #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
>  #define KVM_ARM_VCPU_PMU_V3		3 /* Support guest PMUv3 */
> +#define KVM_ARM_VCPU_PTRAUTH_ADDRESS	5 /* CPU uses address pointer authentication */
> +#define KVM_ARM_VCPU_PTRAUTH_GENERIC	6 /* CPU uses generic pointer authentication */
>  
>  struct kvm_vcpu_init {
>  	__u32 target;
> diff --git a/arm/aarch64/include/kvm/kvm-config-arch.h b/arm/aarch64/include/kvm/kvm-config-arch.h
> index 04be43d..0279b13 100644
> --- a/arm/aarch64/include/kvm/kvm-config-arch.h
> +++ b/arm/aarch64/include/kvm/kvm-config-arch.h
> @@ -8,7 +8,11 @@
>  			"Create PMUv3 device"),				\
>  	OPT_U64('\0', "kaslr-seed", &(cfg)->kaslr_seed,			\
>  			"Specify random seed for Kernel Address Space "	\
> -			"Layout Randomization (KASLR)"),
> +			"Layout Randomization (KASLR)"),		\
> +	OPT_BOOLEAN('\0', "enable-ptrauth", &(cfg)->enable_ptrauth,	\
> +			"Enables pointer authentication"),		\
> +	OPT_BOOLEAN('\0', "disable-ptrauth", &(cfg)->disable_ptrauth,	\
> +			"Disables pointer authentication"),
>  
>  #include "arm-common/kvm-config-arch.h"
>  
> diff --git a/arm/aarch64/include/kvm/kvm-cpu-arch.h b/arm/aarch64/include/kvm/kvm-cpu-arch.h
> index a9d8563..fcc2107 100644
> --- a/arm/aarch64/include/kvm/kvm-cpu-arch.h
> +++ b/arm/aarch64/include/kvm/kvm-cpu-arch.h
> @@ -17,4 +17,6 @@
>  #define ARM_CPU_CTRL		3, 0, 1, 0
>  #define ARM_CPU_CTRL_SCTLR_EL1	0
>  
> +#define ARM_VCPU_PTRAUTH_FEATURE	((1UL << KVM_ARM_VCPU_PTRAUTH_ADDRESS) \
> +					| (1UL << KVM_ARM_VCPU_PTRAUTH_GENERIC))
>  #endif /* KVM__KVM_CPU_ARCH_H */
> diff --git a/arm/include/arm-common/kvm-config-arch.h b/arm/include/arm-common/kvm-config-arch.h
> index 5734c46..1b4287d 100644
> --- a/arm/include/arm-common/kvm-config-arch.h
> +++ b/arm/include/arm-common/kvm-config-arch.h
> @@ -10,6 +10,8 @@ struct kvm_config_arch {
>  	bool		aarch32_guest;
>  	bool		has_pmuv3;
>  	u64		kaslr_seed;
> +	bool		enable_ptrauth;
> +	bool		disable_ptrauth;
>  	enum irqchip_type irqchip;
>  	u64		fw_addr;
>  };
> diff --git a/arm/kvm-cpu.c b/arm/kvm-cpu.c
> index 7780251..a45a649 100644
> --- a/arm/kvm-cpu.c
> +++ b/arm/kvm-cpu.c
> @@ -69,6 +69,17 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned long cpu_id)
>  	}
>  
>  	/*
> +	 * Always enable Pointer Authentication if requested. If system supports
> +	 * this extension then also enable it by default provided no disable
> +	 * request present.
> +	 */
> +	if ((kvm->cfg.arch.enable_ptrauth) ||
> +		(kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_ADDRESS) &&
> +		kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_GENERIC) &&
> +		!kvm->cfg.arch.disable_ptrauth))

I take it that:

(1) Having both --enable-ptrauth and --disable-ptrauth present on the kvmtools
command line is allowed

(2) --enable-ptrauth takes precedence over --disable-ptrauth.

Have you considered returning an error if both are present? Having
--enable-ptrauth take precedence over --disable-ptrauth looks arbitrary to me
(my expectations would have been for --disable-ptrauth to have precendece) and
probably the user is doing something wrong if kvmtools is invoked with both
arguments.

> +			vcpu_init.features[0] |= ARM_VCPU_PTRAUTH_FEATURE;
> +
> +	/*
>  	 * If the preferred target ioctl is successful then
>  	 * use preferred target else try each and every target type
>  	 */
> diff --git a/include/linux/kvm.h b/include/linux/kvm.h
> index 6d4ea4b..de1033b 100644
> --- a/include/linux/kvm.h
> +++ b/include/linux/kvm.h
> @@ -988,6 +988,8 @@ struct kvm_ppc_resize_hpt {
>  #define KVM_CAP_ARM_VM_IPA_SIZE 165
>  #define KVM_CAP_MANUAL_DIRTY_LOG_PROTECT 166
>  #define KVM_CAP_HYPERV_CPUID 167
> +#define KVM_CAP_ARM_PTRAUTH_ADDRESS 169
> +#define KVM_CAP_ARM_PTRAUTH_GENERIC 170
>  
>  #ifdef KVM_CAP_IRQ_ROUTING
>  
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v9 2/5] KVM: arm/arm64: context-switch ptrauth registers
@ 2019-04-17  9:09     ` Marc Zyngier
  0 siblings, 0 replies; 77+ messages in thread
From: Marc Zyngier @ 2019-04-17  9:09 UTC (permalink / raw)
  To: Amit Daniel Kachhap, linux-arm-kernel
  Cc: Christoffer Dall, Catalin Marinas, Will Deacon, Andrew Jones,
	Dave Martin, Ramana Radhakrishnan, kvmarm, Kristina Martsenko,
	linux-kernel, Mark Rutland, James Morse, Julien Thierry

Hi Amit,

On 12/04/2019 04:20, Amit Daniel Kachhap wrote:
> From: Mark Rutland <mark.rutland@arm.com>
> 
> When pointer authentication is supported, a guest may wish to use it.
> This patch adds the necessary KVM infrastructure for this to work, with
> a semi-lazy context switch of the pointer auth state.
> 
> Pointer authentication feature is only enabled when VHE is built
> in the kernel and present in the CPU implementation so only VHE code
> paths are modified.
> 
> When we schedule a vcpu, we disable guest usage of pointer
> authentication instructions and accesses to the keys. While these are
> disabled, we avoid context-switching the keys. When we trap the guest
> trying to use pointer authentication functionality, we change to eagerly
> context-switching the keys, and enable the feature. The next time the
> vcpu is scheduled out/in, we start again. However the host key save is
> optimized and implemented inside ptrauth instruction/register access
> trap.
> 
> Pointer authentication consists of address authentication and generic
> authentication, and CPUs in a system might have varied support for
> either. Where support for either feature is not uniform, it is hidden
> from guests via ID register emulation, as a result of the cpufeature
> framework in the host.
> 
> Unfortunately, address authentication and generic authentication cannot
> be trapped separately, as the architecture provides a single EL2 trap
> covering both. If we wish to expose one without the other, we cannot
> prevent a (badly-written) guest from intermittently using a feature
> which is not uniformly supported (when scheduled on a physical CPU which
> supports the relevant feature). Hence, this patch expects both type of
> authentication to be present in a cpu.
> 
> This switch of key is done from guest enter/exit assembly as preparation
> for the upcoming in-kernel pointer authentication support. Hence, these
> key switching routines are not implemented in C code as they may cause
> pointer authentication key signing error in some situations.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> [Only VHE, key switch in full assembly, vcpu_has_ptrauth checks
> , save host key in ptrauth exception trap]
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> Reviewed-by: Julien Thierry <julien.thierry@arm.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Christoffer Dall <christoffer.dall@arm.com>
> Cc: kvmarm@lists.cs.columbia.edu
> ---
> 
> Changes since v9:
> * Used high order number for branching in assembly macros. [Kristina Martsenko]
> * Taken care of different offset for hcr_el2 now.
> 
>  arch/arm/include/asm/kvm_host.h          |   1 +
>  arch/arm64/Kconfig                       |   5 +-
>  arch/arm64/include/asm/kvm_host.h        |  17 +++++
>  arch/arm64/include/asm/kvm_ptrauth_asm.h | 106 +++++++++++++++++++++++++++++++
>  arch/arm64/kernel/asm-offsets.c          |   6 ++
>  arch/arm64/kvm/guest.c                   |  14 ++++
>  arch/arm64/kvm/handle_exit.c             |  24 ++++---
>  arch/arm64/kvm/hyp/entry.S               |   7 ++
>  arch/arm64/kvm/sys_regs.c                |  46 +++++++++++++-
>  virt/kvm/arm/arm.c                       |   2 +
>  10 files changed, 215 insertions(+), 13 deletions(-)
>  create mode 100644 arch/arm64/include/asm/kvm_ptrauth_asm.h
> 
> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
> index e80cfc1..7a5c7f8 100644
> --- a/arch/arm/include/asm/kvm_host.h
> +++ b/arch/arm/include/asm/kvm_host.h
> @@ -363,6 +363,7 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
>  static inline void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu) {}
>  static inline void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu) {}
>  static inline void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu) {}
> +static inline void kvm_arm_vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu) {}
>  
>  static inline void kvm_arm_vhe_guest_enter(void) {}
>  static inline void kvm_arm_vhe_guest_exit(void) {}
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 7e34b9e..9e8506e 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -1301,8 +1301,9 @@ config ARM64_PTR_AUTH
>  	  context-switched along with the process.
>  
>  	  The feature is detected at runtime. If the feature is not present in
> -	  hardware it will not be advertised to userspace nor will it be
> -	  enabled.
> +	  hardware it will not be advertised to userspace/KVM guest nor will it
> +	  be enabled. However, KVM guest also require CONFIG_ARM64_VHE=y to use
> +	  this feature.

Not only does it require CONFIG_ARM64_VHE, but it more importantly
requires a VHE system!

>  
>  endmenu
>  
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 31dbc7c..a585d82 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -161,6 +161,18 @@ enum vcpu_sysreg {
>  	PMSWINC_EL0,	/* Software Increment Register */
>  	PMUSERENR_EL0,	/* User Enable Register */
>  
> +	/* Pointer Authentication Registers in a strict increasing order. */
> +	APIAKEYLO_EL1,
> +	APIAKEYHI_EL1 = APIAKEYLO_EL1 + 1,
> +	APIBKEYLO_EL1 = APIAKEYLO_EL1 + 2,
> +	APIBKEYHI_EL1 = APIAKEYLO_EL1 + 3,
> +	APDAKEYLO_EL1 = APIAKEYLO_EL1 + 4,
> +	APDAKEYHI_EL1 = APIAKEYLO_EL1 + 5,
> +	APDBKEYLO_EL1 = APIAKEYLO_EL1 + 6,
> +	APDBKEYHI_EL1 = APIAKEYLO_EL1 + 7,
> +	APGAKEYLO_EL1 = APIAKEYLO_EL1 + 8,
> +	APGAKEYHI_EL1 = APIAKEYLO_EL1 + 9,

Why do we need these explicit +1, +2...? Being an part of an enum
already guarantees this.

> +
>  	/* 32bit specific registers. Keep them at the end of the range */
>  	DACR32_EL2,	/* Domain Access Control Register */
>  	IFSR32_EL2,	/* Instruction Fault Status Register */
> @@ -529,6 +541,11 @@ static inline bool kvm_arch_requires_vhe(void)
>  	return false;
>  }
>  
> +void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu);
> +void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu);
> +void kvm_arm_vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu);
> +void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu);
> +
>  static inline void kvm_arch_hardware_unsetup(void) {}
>  static inline void kvm_arch_sync_events(struct kvm *kvm) {}
>  static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
> diff --git a/arch/arm64/include/asm/kvm_ptrauth_asm.h b/arch/arm64/include/asm/kvm_ptrauth_asm.h
> new file mode 100644
> index 0000000..8142521
> --- /dev/null
> +++ b/arch/arm64/include/asm/kvm_ptrauth_asm.h

nit: this should be named kvm_ptrauth.h. The asm suffix doesn't bring
anything to the game, and is somewhat misleading (there are C macros in
this file).

> @@ -0,0 +1,106 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/* arch/arm64/include/asm/kvm_ptrauth_asm.h: Guest/host ptrauth save/restore
> + * Copyright 2019 Arm Limited
> + * Author: Mark Rutland <mark.rutland@arm.com>

nit: Authors

> + *         Amit Daniel Kachhap <amit.kachhap@arm.com>
> + */
> +
> +#ifndef __ASM_KVM_PTRAUTH_ASM_H
> +#define __ASM_KVM_PTRAUTH_ASM_H
> +
> +#ifndef __ASSEMBLY__
> +
> +#define __ptrauth_save_key(regs, key)						\
> +({										\
> +	regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1);	\
> +	regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1);	\
> +})
> +
> +#define __ptrauth_save_state(ctxt)						\
> +({										\
> +	__ptrauth_save_key(ctxt->sys_regs, APIA);				\
> +	__ptrauth_save_key(ctxt->sys_regs, APIB);				\
> +	__ptrauth_save_key(ctxt->sys_regs, APDA);				\
> +	__ptrauth_save_key(ctxt->sys_regs, APDB);				\
> +	__ptrauth_save_key(ctxt->sys_regs, APGA);				\
> +})
> +
> +#else /* __ASSEMBLY__ */
> +
> +#include <asm/sysreg.h>
> +
> +#ifdef	CONFIG_ARM64_PTR_AUTH
> +
> +#define PTRAUTH_REG_OFFSET(x)	(x - CPU_APIAKEYLO_EL1)
> +
> +/*
> + * CPU_AP*_EL1 values exceed immediate offset range (512) for stp instruction
> + * so below macros takes CPU_APIAKEYLO_EL1 as base and calculates the offset of
> + * the keys from this base to avoid an extra add instruction. These macros
> + * assumes the keys offsets are aligned in a specific increasing order.
> + */
> +.macro	ptrauth_save_state base, reg1, reg2
> +	mrs_s	\reg1, SYS_APIAKEYLO_EL1
> +	mrs_s	\reg2, SYS_APIAKEYHI_EL1
> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIAKEYLO_EL1)]
> +	mrs_s	\reg1, SYS_APIBKEYLO_EL1
> +	mrs_s	\reg2, SYS_APIBKEYHI_EL1
> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIBKEYLO_EL1)]
> +	mrs_s	\reg1, SYS_APDAKEYLO_EL1
> +	mrs_s	\reg2, SYS_APDAKEYHI_EL1
> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDAKEYLO_EL1)]
> +	mrs_s	\reg1, SYS_APDBKEYLO_EL1
> +	mrs_s	\reg2, SYS_APDBKEYHI_EL1
> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDBKEYLO_EL1)]
> +	mrs_s	\reg1, SYS_APGAKEYLO_EL1
> +	mrs_s	\reg2, SYS_APGAKEYHI_EL1
> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APGAKEYLO_EL1)]
> +.endm
> +
> +.macro	ptrauth_restore_state base, reg1, reg2
> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIAKEYLO_EL1)]
> +	msr_s	SYS_APIAKEYLO_EL1, \reg1
> +	msr_s	SYS_APIAKEYHI_EL1, \reg2
> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIBKEYLO_EL1)]
> +	msr_s	SYS_APIBKEYLO_EL1, \reg1
> +	msr_s	SYS_APIBKEYHI_EL1, \reg2
> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDAKEYLO_EL1)]
> +	msr_s	SYS_APDAKEYLO_EL1, \reg1
> +	msr_s	SYS_APDAKEYHI_EL1, \reg2
> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDBKEYLO_EL1)]
> +	msr_s	SYS_APDBKEYLO_EL1, \reg1
> +	msr_s	SYS_APDBKEYHI_EL1, \reg2
> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APGAKEYLO_EL1)]
> +	msr_s	SYS_APGAKEYLO_EL1, \reg1
> +	msr_s	SYS_APGAKEYHI_EL1, \reg2
> +.endm
> +
> +.macro ptrauth_switch_to_guest g_ctxt, reg1, reg2, reg3
> +	ldr	\reg1, [\g_ctxt, #(VCPU_HCR_EL2 - VCPU_CONTEXT)]

Given that 100% of the current HW doesn't have ptrauth at all, this
becomes an instant and pointless overhead.

It could easily be avoided by turning this into:

alternative_if_not ARM64_HAS_GENERIC_AUTH_ARCH
	b	1000f
alternative_else
	ldr	\reg1, [\g_ctxt, #(VCPU_HCR_EL2 - VCPU_CONTEXT)]
alternative_endif

> +	and	\reg1, \reg1, #(HCR_API | HCR_APK)
> +	cbz	\reg1, 1000f
> +	add	\reg1, \g_ctxt, #CPU_APIAKEYLO_EL1
> +	ptrauth_restore_state	\reg1, \reg2, \reg3
> +1000:
> +.endm
> +
> +.macro ptrauth_switch_to_host g_ctxt, h_ctxt, reg1, reg2, reg3
> +	ldr	\reg1, [\g_ctxt, #(VCPU_HCR_EL2 - VCPU_CONTEXT)]

Same thing here.

> +	and	\reg1, \reg1, #(HCR_API | HCR_APK)
> +	cbz	\reg1, 1001f
> +	add	\reg1, \g_ctxt, #CPU_APIAKEYLO_EL1
> +	ptrauth_save_state	\reg1, \reg2, \reg3
> +	add	\reg1, \h_ctxt, #CPU_APIAKEYLO_EL1
> +	ptrauth_restore_state	\reg1, \reg2, \reg3
> +	isb
> +1001:
> +.endm
> +
> +#else /* !CONFIG_ARM64_PTR_AUTH */
> +.macro ptrauth_switch_to_guest g_ctxt, reg1, reg2, reg3
> +.endm
> +.macro ptrauth_switch_to_host g_ctxt, h_ctxt, reg1, reg2, reg3
> +.endm
> +#endif /* CONFIG_ARM64_PTR_AUTH */
> +#endif /* __ASSEMBLY__ */
> +#endif /* __ASM_KVM_PTRAUTH_ASM_H */
> diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
> index 7f40dcb..8178330 100644
> --- a/arch/arm64/kernel/asm-offsets.c
> +++ b/arch/arm64/kernel/asm-offsets.c
> @@ -125,7 +125,13 @@ int main(void)
>    DEFINE(VCPU_CONTEXT,		offsetof(struct kvm_vcpu, arch.ctxt));
>    DEFINE(VCPU_FAULT_DISR,	offsetof(struct kvm_vcpu, arch.fault.disr_el1));
>    DEFINE(VCPU_WORKAROUND_FLAGS,	offsetof(struct kvm_vcpu, arch.workaround_flags));
> +  DEFINE(VCPU_HCR_EL2,		offsetof(struct kvm_vcpu, arch.hcr_el2));
>    DEFINE(CPU_GP_REGS,		offsetof(struct kvm_cpu_context, gp_regs));
> +  DEFINE(CPU_APIAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APIAKEYLO_EL1]));
> +  DEFINE(CPU_APIBKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APIBKEYLO_EL1]));
> +  DEFINE(CPU_APDAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APDAKEYLO_EL1]));
> +  DEFINE(CPU_APDBKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APDBKEYLO_EL1]));
> +  DEFINE(CPU_APGAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APGAKEYLO_EL1]));
>    DEFINE(CPU_USER_PT_REGS,	offsetof(struct kvm_regs, regs));
>    DEFINE(HOST_CONTEXT_VCPU,	offsetof(struct kvm_cpu_context, __hyp_running_vcpu));
>  #endif
> diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
> index 4f7b26b..e07f763 100644
> --- a/arch/arm64/kvm/guest.c
> +++ b/arch/arm64/kvm/guest.c
> @@ -878,3 +878,17 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
>  
>  	return ret;
>  }
> +
> +/**
> + * kvm_arm_vcpu_ptrauth_setup_lazy - setup lazy ptrauth for vcpu schedule
> + *
> + * @vcpu: The VCPU pointer
> + *
> + * This function may be used to disable ptrauth and use it in a lazy context
> + * via traps.
> + */
> +void kvm_arm_vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu)
> +{
> +	if (vcpu_has_ptrauth(vcpu))
> +		kvm_arm_vcpu_ptrauth_disable(vcpu);
> +}

Why does this live in guest.c?

> diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
> index 0b79834..5838ff9 100644
> --- a/arch/arm64/kvm/handle_exit.c
> +++ b/arch/arm64/kvm/handle_exit.c
> @@ -30,6 +30,7 @@
>  #include <asm/kvm_coproc.h>
>  #include <asm/kvm_emulate.h>
>  #include <asm/kvm_mmu.h>
> +#include <asm/kvm_ptrauth_asm.h>
>  #include <asm/debug-monitors.h>
>  #include <asm/traps.h>
>  
> @@ -174,19 +175,26 @@ static int handle_sve(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  }
>  
>  /*
> + * Handle the guest trying to use a ptrauth instruction, or trying to access a
> + * ptrauth register.
> + */
> +void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu)
> +{
> +	if (vcpu_has_ptrauth(vcpu)) {
> +		kvm_arm_vcpu_ptrauth_enable(vcpu);

It is odd that the enable function is placed in sys_regs.c, and only
used here. You could either just inline it here, or make it a static
inline in kvm_host.h.

> +		__ptrauth_save_state(vcpu->arch.host_cpu_context);

You could expand the __ptrauth_save_state macro here. It is only used
once, and one less level of obfuscation will help grepping.

> +	} else {
> +		kvm_inject_undefined(vcpu);
> +	}
> +}
> +
> +/*
>   * Guest usage of a ptrauth instruction (which the guest EL1 did not turn into
>   * a NOP).
>   */
>  static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  {
> -	/*
> -	 * We don't currently support ptrauth in a guest, and we mask the ID
> -	 * registers to prevent well-behaved guests from trying to make use of
> -	 * it.
> -	 *
> -	 * Inject an UNDEF, as if the feature really isn't present.
> -	 */
> -	kvm_inject_undefined(vcpu);
> +	kvm_arm_vcpu_ptrauth_trap(vcpu);
>  	return 1;
>  }
>  
> diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
> index 675fdc1..3a70213 100644
> --- a/arch/arm64/kvm/hyp/entry.S
> +++ b/arch/arm64/kvm/hyp/entry.S
> @@ -24,6 +24,7 @@
>  #include <asm/kvm_arm.h>
>  #include <asm/kvm_asm.h>
>  #include <asm/kvm_mmu.h>
> +#include <asm/kvm_ptrauth_asm.h>
>  
>  #define CPU_GP_REG_OFFSET(x)	(CPU_GP_REGS + x)
>  #define CPU_XREG_OFFSET(x)	CPU_GP_REG_OFFSET(CPU_USER_PT_REGS + 8*x)
> @@ -64,6 +65,9 @@ ENTRY(__guest_enter)
>  
>  	add	x18, x0, #VCPU_CONTEXT
>  
> +	// Macro ptrauth_switch_to_guest(guest cxt, tmp1, tmp2, tmp3).
> +	ptrauth_switch_to_guest x18, x0, x1, x2
> +

This comment doesn't tell us much. What we really need is a comment
explaining *why* this needs to be an inline macro. Otherwise, someone
will one day move it back to some C code and things will randomly break.

>  	// Restore guest regs x0-x17
>  	ldp	x0, x1,   [x18, #CPU_XREG_OFFSET(0)]
>  	ldp	x2, x3,   [x18, #CPU_XREG_OFFSET(2)]
> @@ -118,6 +122,9 @@ ENTRY(__guest_exit)
>  
>  	get_host_ctxt	x2, x3
>  
> +	// Macro ptrauth_switch_to_host(guest cxt, host cxt, tmp1, tmp2, tmp3).
> +	ptrauth_switch_to_host x1, x2, x3, x4, x5
> +
>  	// Now restore the host regs
>  	restore_callee_saved_regs x2
>  
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 09e9b06..4a98b5c 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -1007,6 +1007,38 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  	{ SYS_DESC(SYS_PMEVTYPERn_EL0(n)),					\
>  	  access_pmu_evtyper, reset_unknown, (PMEVTYPER0_EL0 + n), }
>  
> +void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu)
> +{
> +	vcpu->arch.hcr_el2 |= (HCR_API | HCR_APK);
> +}
> +
> +void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu)
> +{
> +	vcpu->arch.hcr_el2 &= ~(HCR_API | HCR_APK);
> +}

As mentionned above, these could be moved as static inline to an include
file, of even directly inlined in the code that use it.

> +
> +static bool trap_ptrauth(struct kvm_vcpu *vcpu,
> +			 struct sys_reg_params *p,
> +			 const struct sys_reg_desc *rd)
> +{
> +	kvm_arm_vcpu_ptrauth_trap(vcpu);
> +	return false;

We need a comment explaining why we return false: Either ptrauth is on,
and we re-execute the same instruction, or it is off, and we have
injected an UNDEF. In both cases, we don't advance the guest's PC.

> +}
> +
> +static unsigned int ptrauth_visibility(const struct kvm_vcpu *vcpu,
> +			const struct sys_reg_desc *rd)
> +{
> +	return vcpu_has_ptrauth(vcpu) ? 0 : REG_HIDDEN_USER | REG_HIDDEN_GUEST;
> +}
> +
> +#define __PTRAUTH_KEY(k)						\
> +	{ SYS_DESC(SYS_## k), trap_ptrauth, reset_unknown, k,		\
> +	.visibility = ptrauth_visibility}
> +
> +#define PTRAUTH_KEY(k)							\
> +	__PTRAUTH_KEY(k ## KEYLO_EL1),					\
> +	__PTRAUTH_KEY(k ## KEYHI_EL1)
> +
>  static bool access_arch_timer(struct kvm_vcpu *vcpu,
>  			      struct sys_reg_params *p,
>  			      const struct sys_reg_desc *r)
> @@ -1058,9 +1090,11 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu,
>  					 (0xfUL << ID_AA64ISAR1_API_SHIFT) |
>  					 (0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
>  					 (0xfUL << ID_AA64ISAR1_GPI_SHIFT);
> -		if (val & ptrauth_mask)
> -			kvm_debug("ptrauth unsupported for guests, suppressing\n");
> -		val &= ~ptrauth_mask;
> +		if (!vcpu_has_ptrauth(vcpu)) {
> +			if (val & ptrauth_mask)
> +				kvm_debug("ptrauth unsupported for guests, suppressing\n");
> +			val &= ~ptrauth_mask;
> +		}
>  	}
>  
>  	return val;
> @@ -1460,6 +1494,12 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>  	{ SYS_DESC(SYS_TTBR1_EL1), access_vm_reg, reset_unknown, TTBR1_EL1 },
>  	{ SYS_DESC(SYS_TCR_EL1), access_vm_reg, reset_val, TCR_EL1, 0 },
>  
> +	PTRAUTH_KEY(APIA),
> +	PTRAUTH_KEY(APIB),
> +	PTRAUTH_KEY(APDA),
> +	PTRAUTH_KEY(APDB),
> +	PTRAUTH_KEY(APGA),
> +
>  	{ SYS_DESC(SYS_AFSR0_EL1), access_vm_reg, reset_unknown, AFSR0_EL1 },
>  	{ SYS_DESC(SYS_AFSR1_EL1), access_vm_reg, reset_unknown, AFSR1_EL1 },
>  	{ SYS_DESC(SYS_ESR_EL1), access_vm_reg, reset_unknown, ESR_EL1 },
> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> index 9edbf0f..8d1b73c 100644
> --- a/virt/kvm/arm/arm.c
> +++ b/virt/kvm/arm/arm.c
> @@ -385,6 +385,8 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>  		vcpu_clear_wfe_traps(vcpu);
>  	else
>  		vcpu_set_wfe_traps(vcpu);
> +
> +	kvm_arm_vcpu_ptrauth_setup_lazy(vcpu);
>  }
>  
>  void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
> 

Despite all the comments, the code looks in good shape, and I trust it
shouldn't take you long to refactor it, retest it and send an updated
version once we've settled on the ABI part which is the most contentious.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v9 2/5] KVM: arm/arm64: context-switch ptrauth registers
@ 2019-04-17  9:09     ` Marc Zyngier
  0 siblings, 0 replies; 77+ messages in thread
From: Marc Zyngier @ 2019-04-17  9:09 UTC (permalink / raw)
  To: Amit Daniel Kachhap, linux-arm-kernel
  Cc: Catalin Marinas, Will Deacon, Kristina Martsenko, kvmarm,
	Ramana Radhakrishnan, Dave Martin, linux-kernel

Hi Amit,

On 12/04/2019 04:20, Amit Daniel Kachhap wrote:
> From: Mark Rutland <mark.rutland@arm.com>
> 
> When pointer authentication is supported, a guest may wish to use it.
> This patch adds the necessary KVM infrastructure for this to work, with
> a semi-lazy context switch of the pointer auth state.
> 
> Pointer authentication feature is only enabled when VHE is built
> in the kernel and present in the CPU implementation so only VHE code
> paths are modified.
> 
> When we schedule a vcpu, we disable guest usage of pointer
> authentication instructions and accesses to the keys. While these are
> disabled, we avoid context-switching the keys. When we trap the guest
> trying to use pointer authentication functionality, we change to eagerly
> context-switching the keys, and enable the feature. The next time the
> vcpu is scheduled out/in, we start again. However the host key save is
> optimized and implemented inside ptrauth instruction/register access
> trap.
> 
> Pointer authentication consists of address authentication and generic
> authentication, and CPUs in a system might have varied support for
> either. Where support for either feature is not uniform, it is hidden
> from guests via ID register emulation, as a result of the cpufeature
> framework in the host.
> 
> Unfortunately, address authentication and generic authentication cannot
> be trapped separately, as the architecture provides a single EL2 trap
> covering both. If we wish to expose one without the other, we cannot
> prevent a (badly-written) guest from intermittently using a feature
> which is not uniformly supported (when scheduled on a physical CPU which
> supports the relevant feature). Hence, this patch expects both type of
> authentication to be present in a cpu.
> 
> This switch of key is done from guest enter/exit assembly as preparation
> for the upcoming in-kernel pointer authentication support. Hence, these
> key switching routines are not implemented in C code as they may cause
> pointer authentication key signing error in some situations.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> [Only VHE, key switch in full assembly, vcpu_has_ptrauth checks
> , save host key in ptrauth exception trap]
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> Reviewed-by: Julien Thierry <julien.thierry@arm.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Christoffer Dall <christoffer.dall@arm.com>
> Cc: kvmarm@lists.cs.columbia.edu
> ---
> 
> Changes since v9:
> * Used high order number for branching in assembly macros. [Kristina Martsenko]
> * Taken care of different offset for hcr_el2 now.
> 
>  arch/arm/include/asm/kvm_host.h          |   1 +
>  arch/arm64/Kconfig                       |   5 +-
>  arch/arm64/include/asm/kvm_host.h        |  17 +++++
>  arch/arm64/include/asm/kvm_ptrauth_asm.h | 106 +++++++++++++++++++++++++++++++
>  arch/arm64/kernel/asm-offsets.c          |   6 ++
>  arch/arm64/kvm/guest.c                   |  14 ++++
>  arch/arm64/kvm/handle_exit.c             |  24 ++++---
>  arch/arm64/kvm/hyp/entry.S               |   7 ++
>  arch/arm64/kvm/sys_regs.c                |  46 +++++++++++++-
>  virt/kvm/arm/arm.c                       |   2 +
>  10 files changed, 215 insertions(+), 13 deletions(-)
>  create mode 100644 arch/arm64/include/asm/kvm_ptrauth_asm.h
> 
> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
> index e80cfc1..7a5c7f8 100644
> --- a/arch/arm/include/asm/kvm_host.h
> +++ b/arch/arm/include/asm/kvm_host.h
> @@ -363,6 +363,7 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
>  static inline void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu) {}
>  static inline void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu) {}
>  static inline void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu) {}
> +static inline void kvm_arm_vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu) {}
>  
>  static inline void kvm_arm_vhe_guest_enter(void) {}
>  static inline void kvm_arm_vhe_guest_exit(void) {}
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 7e34b9e..9e8506e 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -1301,8 +1301,9 @@ config ARM64_PTR_AUTH
>  	  context-switched along with the process.
>  
>  	  The feature is detected at runtime. If the feature is not present in
> -	  hardware it will not be advertised to userspace nor will it be
> -	  enabled.
> +	  hardware it will not be advertised to userspace/KVM guest nor will it
> +	  be enabled. However, KVM guest also require CONFIG_ARM64_VHE=y to use
> +	  this feature.

Not only does it require CONFIG_ARM64_VHE, but it more importantly
requires a VHE system!

>  
>  endmenu
>  
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 31dbc7c..a585d82 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -161,6 +161,18 @@ enum vcpu_sysreg {
>  	PMSWINC_EL0,	/* Software Increment Register */
>  	PMUSERENR_EL0,	/* User Enable Register */
>  
> +	/* Pointer Authentication Registers in a strict increasing order. */
> +	APIAKEYLO_EL1,
> +	APIAKEYHI_EL1 = APIAKEYLO_EL1 + 1,
> +	APIBKEYLO_EL1 = APIAKEYLO_EL1 + 2,
> +	APIBKEYHI_EL1 = APIAKEYLO_EL1 + 3,
> +	APDAKEYLO_EL1 = APIAKEYLO_EL1 + 4,
> +	APDAKEYHI_EL1 = APIAKEYLO_EL1 + 5,
> +	APDBKEYLO_EL1 = APIAKEYLO_EL1 + 6,
> +	APDBKEYHI_EL1 = APIAKEYLO_EL1 + 7,
> +	APGAKEYLO_EL1 = APIAKEYLO_EL1 + 8,
> +	APGAKEYHI_EL1 = APIAKEYLO_EL1 + 9,

Why do we need these explicit +1, +2...? Being an part of an enum
already guarantees this.

> +
>  	/* 32bit specific registers. Keep them at the end of the range */
>  	DACR32_EL2,	/* Domain Access Control Register */
>  	IFSR32_EL2,	/* Instruction Fault Status Register */
> @@ -529,6 +541,11 @@ static inline bool kvm_arch_requires_vhe(void)
>  	return false;
>  }
>  
> +void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu);
> +void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu);
> +void kvm_arm_vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu);
> +void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu);
> +
>  static inline void kvm_arch_hardware_unsetup(void) {}
>  static inline void kvm_arch_sync_events(struct kvm *kvm) {}
>  static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
> diff --git a/arch/arm64/include/asm/kvm_ptrauth_asm.h b/arch/arm64/include/asm/kvm_ptrauth_asm.h
> new file mode 100644
> index 0000000..8142521
> --- /dev/null
> +++ b/arch/arm64/include/asm/kvm_ptrauth_asm.h

nit: this should be named kvm_ptrauth.h. The asm suffix doesn't bring
anything to the game, and is somewhat misleading (there are C macros in
this file).

> @@ -0,0 +1,106 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/* arch/arm64/include/asm/kvm_ptrauth_asm.h: Guest/host ptrauth save/restore
> + * Copyright 2019 Arm Limited
> + * Author: Mark Rutland <mark.rutland@arm.com>

nit: Authors

> + *         Amit Daniel Kachhap <amit.kachhap@arm.com>
> + */
> +
> +#ifndef __ASM_KVM_PTRAUTH_ASM_H
> +#define __ASM_KVM_PTRAUTH_ASM_H
> +
> +#ifndef __ASSEMBLY__
> +
> +#define __ptrauth_save_key(regs, key)						\
> +({										\
> +	regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1);	\
> +	regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1);	\
> +})
> +
> +#define __ptrauth_save_state(ctxt)						\
> +({										\
> +	__ptrauth_save_key(ctxt->sys_regs, APIA);				\
> +	__ptrauth_save_key(ctxt->sys_regs, APIB);				\
> +	__ptrauth_save_key(ctxt->sys_regs, APDA);				\
> +	__ptrauth_save_key(ctxt->sys_regs, APDB);				\
> +	__ptrauth_save_key(ctxt->sys_regs, APGA);				\
> +})
> +
> +#else /* __ASSEMBLY__ */
> +
> +#include <asm/sysreg.h>
> +
> +#ifdef	CONFIG_ARM64_PTR_AUTH
> +
> +#define PTRAUTH_REG_OFFSET(x)	(x - CPU_APIAKEYLO_EL1)
> +
> +/*
> + * CPU_AP*_EL1 values exceed immediate offset range (512) for stp instruction
> + * so below macros takes CPU_APIAKEYLO_EL1 as base and calculates the offset of
> + * the keys from this base to avoid an extra add instruction. These macros
> + * assumes the keys offsets are aligned in a specific increasing order.
> + */
> +.macro	ptrauth_save_state base, reg1, reg2
> +	mrs_s	\reg1, SYS_APIAKEYLO_EL1
> +	mrs_s	\reg2, SYS_APIAKEYHI_EL1
> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIAKEYLO_EL1)]
> +	mrs_s	\reg1, SYS_APIBKEYLO_EL1
> +	mrs_s	\reg2, SYS_APIBKEYHI_EL1
> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIBKEYLO_EL1)]
> +	mrs_s	\reg1, SYS_APDAKEYLO_EL1
> +	mrs_s	\reg2, SYS_APDAKEYHI_EL1
> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDAKEYLO_EL1)]
> +	mrs_s	\reg1, SYS_APDBKEYLO_EL1
> +	mrs_s	\reg2, SYS_APDBKEYHI_EL1
> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDBKEYLO_EL1)]
> +	mrs_s	\reg1, SYS_APGAKEYLO_EL1
> +	mrs_s	\reg2, SYS_APGAKEYHI_EL1
> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APGAKEYLO_EL1)]
> +.endm
> +
> +.macro	ptrauth_restore_state base, reg1, reg2
> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIAKEYLO_EL1)]
> +	msr_s	SYS_APIAKEYLO_EL1, \reg1
> +	msr_s	SYS_APIAKEYHI_EL1, \reg2
> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIBKEYLO_EL1)]
> +	msr_s	SYS_APIBKEYLO_EL1, \reg1
> +	msr_s	SYS_APIBKEYHI_EL1, \reg2
> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDAKEYLO_EL1)]
> +	msr_s	SYS_APDAKEYLO_EL1, \reg1
> +	msr_s	SYS_APDAKEYHI_EL1, \reg2
> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDBKEYLO_EL1)]
> +	msr_s	SYS_APDBKEYLO_EL1, \reg1
> +	msr_s	SYS_APDBKEYHI_EL1, \reg2
> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APGAKEYLO_EL1)]
> +	msr_s	SYS_APGAKEYLO_EL1, \reg1
> +	msr_s	SYS_APGAKEYHI_EL1, \reg2
> +.endm
> +
> +.macro ptrauth_switch_to_guest g_ctxt, reg1, reg2, reg3
> +	ldr	\reg1, [\g_ctxt, #(VCPU_HCR_EL2 - VCPU_CONTEXT)]

Given that 100% of the current HW doesn't have ptrauth at all, this
becomes an instant and pointless overhead.

It could easily be avoided by turning this into:

alternative_if_not ARM64_HAS_GENERIC_AUTH_ARCH
	b	1000f
alternative_else
	ldr	\reg1, [\g_ctxt, #(VCPU_HCR_EL2 - VCPU_CONTEXT)]
alternative_endif

> +	and	\reg1, \reg1, #(HCR_API | HCR_APK)
> +	cbz	\reg1, 1000f
> +	add	\reg1, \g_ctxt, #CPU_APIAKEYLO_EL1
> +	ptrauth_restore_state	\reg1, \reg2, \reg3
> +1000:
> +.endm
> +
> +.macro ptrauth_switch_to_host g_ctxt, h_ctxt, reg1, reg2, reg3
> +	ldr	\reg1, [\g_ctxt, #(VCPU_HCR_EL2 - VCPU_CONTEXT)]

Same thing here.

> +	and	\reg1, \reg1, #(HCR_API | HCR_APK)
> +	cbz	\reg1, 1001f
> +	add	\reg1, \g_ctxt, #CPU_APIAKEYLO_EL1
> +	ptrauth_save_state	\reg1, \reg2, \reg3
> +	add	\reg1, \h_ctxt, #CPU_APIAKEYLO_EL1
> +	ptrauth_restore_state	\reg1, \reg2, \reg3
> +	isb
> +1001:
> +.endm
> +
> +#else /* !CONFIG_ARM64_PTR_AUTH */
> +.macro ptrauth_switch_to_guest g_ctxt, reg1, reg2, reg3
> +.endm
> +.macro ptrauth_switch_to_host g_ctxt, h_ctxt, reg1, reg2, reg3
> +.endm
> +#endif /* CONFIG_ARM64_PTR_AUTH */
> +#endif /* __ASSEMBLY__ */
> +#endif /* __ASM_KVM_PTRAUTH_ASM_H */
> diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
> index 7f40dcb..8178330 100644
> --- a/arch/arm64/kernel/asm-offsets.c
> +++ b/arch/arm64/kernel/asm-offsets.c
> @@ -125,7 +125,13 @@ int main(void)
>    DEFINE(VCPU_CONTEXT,		offsetof(struct kvm_vcpu, arch.ctxt));
>    DEFINE(VCPU_FAULT_DISR,	offsetof(struct kvm_vcpu, arch.fault.disr_el1));
>    DEFINE(VCPU_WORKAROUND_FLAGS,	offsetof(struct kvm_vcpu, arch.workaround_flags));
> +  DEFINE(VCPU_HCR_EL2,		offsetof(struct kvm_vcpu, arch.hcr_el2));
>    DEFINE(CPU_GP_REGS,		offsetof(struct kvm_cpu_context, gp_regs));
> +  DEFINE(CPU_APIAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APIAKEYLO_EL1]));
> +  DEFINE(CPU_APIBKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APIBKEYLO_EL1]));
> +  DEFINE(CPU_APDAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APDAKEYLO_EL1]));
> +  DEFINE(CPU_APDBKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APDBKEYLO_EL1]));
> +  DEFINE(CPU_APGAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APGAKEYLO_EL1]));
>    DEFINE(CPU_USER_PT_REGS,	offsetof(struct kvm_regs, regs));
>    DEFINE(HOST_CONTEXT_VCPU,	offsetof(struct kvm_cpu_context, __hyp_running_vcpu));
>  #endif
> diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
> index 4f7b26b..e07f763 100644
> --- a/arch/arm64/kvm/guest.c
> +++ b/arch/arm64/kvm/guest.c
> @@ -878,3 +878,17 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
>  
>  	return ret;
>  }
> +
> +/**
> + * kvm_arm_vcpu_ptrauth_setup_lazy - setup lazy ptrauth for vcpu schedule
> + *
> + * @vcpu: The VCPU pointer
> + *
> + * This function may be used to disable ptrauth and use it in a lazy context
> + * via traps.
> + */
> +void kvm_arm_vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu)
> +{
> +	if (vcpu_has_ptrauth(vcpu))
> +		kvm_arm_vcpu_ptrauth_disable(vcpu);
> +}

Why does this live in guest.c?

> diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
> index 0b79834..5838ff9 100644
> --- a/arch/arm64/kvm/handle_exit.c
> +++ b/arch/arm64/kvm/handle_exit.c
> @@ -30,6 +30,7 @@
>  #include <asm/kvm_coproc.h>
>  #include <asm/kvm_emulate.h>
>  #include <asm/kvm_mmu.h>
> +#include <asm/kvm_ptrauth_asm.h>
>  #include <asm/debug-monitors.h>
>  #include <asm/traps.h>
>  
> @@ -174,19 +175,26 @@ static int handle_sve(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  }
>  
>  /*
> + * Handle the guest trying to use a ptrauth instruction, or trying to access a
> + * ptrauth register.
> + */
> +void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu)
> +{
> +	if (vcpu_has_ptrauth(vcpu)) {
> +		kvm_arm_vcpu_ptrauth_enable(vcpu);

It is odd that the enable function is placed in sys_regs.c, and only
used here. You could either just inline it here, or make it a static
inline in kvm_host.h.

> +		__ptrauth_save_state(vcpu->arch.host_cpu_context);

You could expand the __ptrauth_save_state macro here. It is only used
once, and one less level of obfuscation will help grepping.

> +	} else {
> +		kvm_inject_undefined(vcpu);
> +	}
> +}
> +
> +/*
>   * Guest usage of a ptrauth instruction (which the guest EL1 did not turn into
>   * a NOP).
>   */
>  static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  {
> -	/*
> -	 * We don't currently support ptrauth in a guest, and we mask the ID
> -	 * registers to prevent well-behaved guests from trying to make use of
> -	 * it.
> -	 *
> -	 * Inject an UNDEF, as if the feature really isn't present.
> -	 */
> -	kvm_inject_undefined(vcpu);
> +	kvm_arm_vcpu_ptrauth_trap(vcpu);
>  	return 1;
>  }
>  
> diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
> index 675fdc1..3a70213 100644
> --- a/arch/arm64/kvm/hyp/entry.S
> +++ b/arch/arm64/kvm/hyp/entry.S
> @@ -24,6 +24,7 @@
>  #include <asm/kvm_arm.h>
>  #include <asm/kvm_asm.h>
>  #include <asm/kvm_mmu.h>
> +#include <asm/kvm_ptrauth_asm.h>
>  
>  #define CPU_GP_REG_OFFSET(x)	(CPU_GP_REGS + x)
>  #define CPU_XREG_OFFSET(x)	CPU_GP_REG_OFFSET(CPU_USER_PT_REGS + 8*x)
> @@ -64,6 +65,9 @@ ENTRY(__guest_enter)
>  
>  	add	x18, x0, #VCPU_CONTEXT
>  
> +	// Macro ptrauth_switch_to_guest(guest cxt, tmp1, tmp2, tmp3).
> +	ptrauth_switch_to_guest x18, x0, x1, x2
> +

This comment doesn't tell us much. What we really need is a comment
explaining *why* this needs to be an inline macro. Otherwise, someone
will one day move it back to some C code and things will randomly break.

>  	// Restore guest regs x0-x17
>  	ldp	x0, x1,   [x18, #CPU_XREG_OFFSET(0)]
>  	ldp	x2, x3,   [x18, #CPU_XREG_OFFSET(2)]
> @@ -118,6 +122,9 @@ ENTRY(__guest_exit)
>  
>  	get_host_ctxt	x2, x3
>  
> +	// Macro ptrauth_switch_to_host(guest cxt, host cxt, tmp1, tmp2, tmp3).
> +	ptrauth_switch_to_host x1, x2, x3, x4, x5
> +
>  	// Now restore the host regs
>  	restore_callee_saved_regs x2
>  
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 09e9b06..4a98b5c 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -1007,6 +1007,38 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  	{ SYS_DESC(SYS_PMEVTYPERn_EL0(n)),					\
>  	  access_pmu_evtyper, reset_unknown, (PMEVTYPER0_EL0 + n), }
>  
> +void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu)
> +{
> +	vcpu->arch.hcr_el2 |= (HCR_API | HCR_APK);
> +}
> +
> +void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu)
> +{
> +	vcpu->arch.hcr_el2 &= ~(HCR_API | HCR_APK);
> +}

As mentionned above, these could be moved as static inline to an include
file, of even directly inlined in the code that use it.

> +
> +static bool trap_ptrauth(struct kvm_vcpu *vcpu,
> +			 struct sys_reg_params *p,
> +			 const struct sys_reg_desc *rd)
> +{
> +	kvm_arm_vcpu_ptrauth_trap(vcpu);
> +	return false;

We need a comment explaining why we return false: Either ptrauth is on,
and we re-execute the same instruction, or it is off, and we have
injected an UNDEF. In both cases, we don't advance the guest's PC.

> +}
> +
> +static unsigned int ptrauth_visibility(const struct kvm_vcpu *vcpu,
> +			const struct sys_reg_desc *rd)
> +{
> +	return vcpu_has_ptrauth(vcpu) ? 0 : REG_HIDDEN_USER | REG_HIDDEN_GUEST;
> +}
> +
> +#define __PTRAUTH_KEY(k)						\
> +	{ SYS_DESC(SYS_## k), trap_ptrauth, reset_unknown, k,		\
> +	.visibility = ptrauth_visibility}
> +
> +#define PTRAUTH_KEY(k)							\
> +	__PTRAUTH_KEY(k ## KEYLO_EL1),					\
> +	__PTRAUTH_KEY(k ## KEYHI_EL1)
> +
>  static bool access_arch_timer(struct kvm_vcpu *vcpu,
>  			      struct sys_reg_params *p,
>  			      const struct sys_reg_desc *r)
> @@ -1058,9 +1090,11 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu,
>  					 (0xfUL << ID_AA64ISAR1_API_SHIFT) |
>  					 (0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
>  					 (0xfUL << ID_AA64ISAR1_GPI_SHIFT);
> -		if (val & ptrauth_mask)
> -			kvm_debug("ptrauth unsupported for guests, suppressing\n");
> -		val &= ~ptrauth_mask;
> +		if (!vcpu_has_ptrauth(vcpu)) {
> +			if (val & ptrauth_mask)
> +				kvm_debug("ptrauth unsupported for guests, suppressing\n");
> +			val &= ~ptrauth_mask;
> +		}
>  	}
>  
>  	return val;
> @@ -1460,6 +1494,12 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>  	{ SYS_DESC(SYS_TTBR1_EL1), access_vm_reg, reset_unknown, TTBR1_EL1 },
>  	{ SYS_DESC(SYS_TCR_EL1), access_vm_reg, reset_val, TCR_EL1, 0 },
>  
> +	PTRAUTH_KEY(APIA),
> +	PTRAUTH_KEY(APIB),
> +	PTRAUTH_KEY(APDA),
> +	PTRAUTH_KEY(APDB),
> +	PTRAUTH_KEY(APGA),
> +
>  	{ SYS_DESC(SYS_AFSR0_EL1), access_vm_reg, reset_unknown, AFSR0_EL1 },
>  	{ SYS_DESC(SYS_AFSR1_EL1), access_vm_reg, reset_unknown, AFSR1_EL1 },
>  	{ SYS_DESC(SYS_ESR_EL1), access_vm_reg, reset_unknown, ESR_EL1 },
> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> index 9edbf0f..8d1b73c 100644
> --- a/virt/kvm/arm/arm.c
> +++ b/virt/kvm/arm/arm.c
> @@ -385,6 +385,8 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>  		vcpu_clear_wfe_traps(vcpu);
>  	else
>  		vcpu_set_wfe_traps(vcpu);
> +
> +	kvm_arm_vcpu_ptrauth_setup_lazy(vcpu);
>  }
>  
>  void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
> 

Despite all the comments, the code looks in good shape, and I trust it
shouldn't take you long to refactor it, retest it and send an updated
version once we've settled on the ABI part which is the most contentious.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v9 2/5] KVM: arm/arm64: context-switch ptrauth registers
@ 2019-04-17  9:09     ` Marc Zyngier
  0 siblings, 0 replies; 77+ messages in thread
From: Marc Zyngier @ 2019-04-17  9:09 UTC (permalink / raw)
  To: Amit Daniel Kachhap, linux-arm-kernel
  Cc: Mark Rutland, Andrew Jones, Julien Thierry, Catalin Marinas,
	Will Deacon, Christoffer Dall, Kristina Martsenko, kvmarm,
	James Morse, Ramana Radhakrishnan, Dave Martin, linux-kernel

Hi Amit,

On 12/04/2019 04:20, Amit Daniel Kachhap wrote:
> From: Mark Rutland <mark.rutland@arm.com>
> 
> When pointer authentication is supported, a guest may wish to use it.
> This patch adds the necessary KVM infrastructure for this to work, with
> a semi-lazy context switch of the pointer auth state.
> 
> Pointer authentication feature is only enabled when VHE is built
> in the kernel and present in the CPU implementation so only VHE code
> paths are modified.
> 
> When we schedule a vcpu, we disable guest usage of pointer
> authentication instructions and accesses to the keys. While these are
> disabled, we avoid context-switching the keys. When we trap the guest
> trying to use pointer authentication functionality, we change to eagerly
> context-switching the keys, and enable the feature. The next time the
> vcpu is scheduled out/in, we start again. However the host key save is
> optimized and implemented inside ptrauth instruction/register access
> trap.
> 
> Pointer authentication consists of address authentication and generic
> authentication, and CPUs in a system might have varied support for
> either. Where support for either feature is not uniform, it is hidden
> from guests via ID register emulation, as a result of the cpufeature
> framework in the host.
> 
> Unfortunately, address authentication and generic authentication cannot
> be trapped separately, as the architecture provides a single EL2 trap
> covering both. If we wish to expose one without the other, we cannot
> prevent a (badly-written) guest from intermittently using a feature
> which is not uniformly supported (when scheduled on a physical CPU which
> supports the relevant feature). Hence, this patch expects both type of
> authentication to be present in a cpu.
> 
> This switch of key is done from guest enter/exit assembly as preparation
> for the upcoming in-kernel pointer authentication support. Hence, these
> key switching routines are not implemented in C code as they may cause
> pointer authentication key signing error in some situations.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> [Only VHE, key switch in full assembly, vcpu_has_ptrauth checks
> , save host key in ptrauth exception trap]
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> Reviewed-by: Julien Thierry <julien.thierry@arm.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Christoffer Dall <christoffer.dall@arm.com>
> Cc: kvmarm@lists.cs.columbia.edu
> ---
> 
> Changes since v9:
> * Used high order number for branching in assembly macros. [Kristina Martsenko]
> * Taken care of different offset for hcr_el2 now.
> 
>  arch/arm/include/asm/kvm_host.h          |   1 +
>  arch/arm64/Kconfig                       |   5 +-
>  arch/arm64/include/asm/kvm_host.h        |  17 +++++
>  arch/arm64/include/asm/kvm_ptrauth_asm.h | 106 +++++++++++++++++++++++++++++++
>  arch/arm64/kernel/asm-offsets.c          |   6 ++
>  arch/arm64/kvm/guest.c                   |  14 ++++
>  arch/arm64/kvm/handle_exit.c             |  24 ++++---
>  arch/arm64/kvm/hyp/entry.S               |   7 ++
>  arch/arm64/kvm/sys_regs.c                |  46 +++++++++++++-
>  virt/kvm/arm/arm.c                       |   2 +
>  10 files changed, 215 insertions(+), 13 deletions(-)
>  create mode 100644 arch/arm64/include/asm/kvm_ptrauth_asm.h
> 
> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
> index e80cfc1..7a5c7f8 100644
> --- a/arch/arm/include/asm/kvm_host.h
> +++ b/arch/arm/include/asm/kvm_host.h
> @@ -363,6 +363,7 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
>  static inline void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu) {}
>  static inline void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu) {}
>  static inline void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu) {}
> +static inline void kvm_arm_vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu) {}
>  
>  static inline void kvm_arm_vhe_guest_enter(void) {}
>  static inline void kvm_arm_vhe_guest_exit(void) {}
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 7e34b9e..9e8506e 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -1301,8 +1301,9 @@ config ARM64_PTR_AUTH
>  	  context-switched along with the process.
>  
>  	  The feature is detected at runtime. If the feature is not present in
> -	  hardware it will not be advertised to userspace nor will it be
> -	  enabled.
> +	  hardware it will not be advertised to userspace/KVM guest nor will it
> +	  be enabled. However, KVM guest also require CONFIG_ARM64_VHE=y to use
> +	  this feature.

Not only does it require CONFIG_ARM64_VHE, but it more importantly
requires a VHE system!

>  
>  endmenu
>  
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 31dbc7c..a585d82 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -161,6 +161,18 @@ enum vcpu_sysreg {
>  	PMSWINC_EL0,	/* Software Increment Register */
>  	PMUSERENR_EL0,	/* User Enable Register */
>  
> +	/* Pointer Authentication Registers in a strict increasing order. */
> +	APIAKEYLO_EL1,
> +	APIAKEYHI_EL1 = APIAKEYLO_EL1 + 1,
> +	APIBKEYLO_EL1 = APIAKEYLO_EL1 + 2,
> +	APIBKEYHI_EL1 = APIAKEYLO_EL1 + 3,
> +	APDAKEYLO_EL1 = APIAKEYLO_EL1 + 4,
> +	APDAKEYHI_EL1 = APIAKEYLO_EL1 + 5,
> +	APDBKEYLO_EL1 = APIAKEYLO_EL1 + 6,
> +	APDBKEYHI_EL1 = APIAKEYLO_EL1 + 7,
> +	APGAKEYLO_EL1 = APIAKEYLO_EL1 + 8,
> +	APGAKEYHI_EL1 = APIAKEYLO_EL1 + 9,

Why do we need these explicit +1, +2...? Being an part of an enum
already guarantees this.

> +
>  	/* 32bit specific registers. Keep them at the end of the range */
>  	DACR32_EL2,	/* Domain Access Control Register */
>  	IFSR32_EL2,	/* Instruction Fault Status Register */
> @@ -529,6 +541,11 @@ static inline bool kvm_arch_requires_vhe(void)
>  	return false;
>  }
>  
> +void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu);
> +void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu);
> +void kvm_arm_vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu);
> +void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu);
> +
>  static inline void kvm_arch_hardware_unsetup(void) {}
>  static inline void kvm_arch_sync_events(struct kvm *kvm) {}
>  static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
> diff --git a/arch/arm64/include/asm/kvm_ptrauth_asm.h b/arch/arm64/include/asm/kvm_ptrauth_asm.h
> new file mode 100644
> index 0000000..8142521
> --- /dev/null
> +++ b/arch/arm64/include/asm/kvm_ptrauth_asm.h

nit: this should be named kvm_ptrauth.h. The asm suffix doesn't bring
anything to the game, and is somewhat misleading (there are C macros in
this file).

> @@ -0,0 +1,106 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/* arch/arm64/include/asm/kvm_ptrauth_asm.h: Guest/host ptrauth save/restore
> + * Copyright 2019 Arm Limited
> + * Author: Mark Rutland <mark.rutland@arm.com>

nit: Authors

> + *         Amit Daniel Kachhap <amit.kachhap@arm.com>
> + */
> +
> +#ifndef __ASM_KVM_PTRAUTH_ASM_H
> +#define __ASM_KVM_PTRAUTH_ASM_H
> +
> +#ifndef __ASSEMBLY__
> +
> +#define __ptrauth_save_key(regs, key)						\
> +({										\
> +	regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1);	\
> +	regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1);	\
> +})
> +
> +#define __ptrauth_save_state(ctxt)						\
> +({										\
> +	__ptrauth_save_key(ctxt->sys_regs, APIA);				\
> +	__ptrauth_save_key(ctxt->sys_regs, APIB);				\
> +	__ptrauth_save_key(ctxt->sys_regs, APDA);				\
> +	__ptrauth_save_key(ctxt->sys_regs, APDB);				\
> +	__ptrauth_save_key(ctxt->sys_regs, APGA);				\
> +})
> +
> +#else /* __ASSEMBLY__ */
> +
> +#include <asm/sysreg.h>
> +
> +#ifdef	CONFIG_ARM64_PTR_AUTH
> +
> +#define PTRAUTH_REG_OFFSET(x)	(x - CPU_APIAKEYLO_EL1)
> +
> +/*
> + * CPU_AP*_EL1 values exceed immediate offset range (512) for stp instruction
> + * so below macros takes CPU_APIAKEYLO_EL1 as base and calculates the offset of
> + * the keys from this base to avoid an extra add instruction. These macros
> + * assumes the keys offsets are aligned in a specific increasing order.
> + */
> +.macro	ptrauth_save_state base, reg1, reg2
> +	mrs_s	\reg1, SYS_APIAKEYLO_EL1
> +	mrs_s	\reg2, SYS_APIAKEYHI_EL1
> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIAKEYLO_EL1)]
> +	mrs_s	\reg1, SYS_APIBKEYLO_EL1
> +	mrs_s	\reg2, SYS_APIBKEYHI_EL1
> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIBKEYLO_EL1)]
> +	mrs_s	\reg1, SYS_APDAKEYLO_EL1
> +	mrs_s	\reg2, SYS_APDAKEYHI_EL1
> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDAKEYLO_EL1)]
> +	mrs_s	\reg1, SYS_APDBKEYLO_EL1
> +	mrs_s	\reg2, SYS_APDBKEYHI_EL1
> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDBKEYLO_EL1)]
> +	mrs_s	\reg1, SYS_APGAKEYLO_EL1
> +	mrs_s	\reg2, SYS_APGAKEYHI_EL1
> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APGAKEYLO_EL1)]
> +.endm
> +
> +.macro	ptrauth_restore_state base, reg1, reg2
> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIAKEYLO_EL1)]
> +	msr_s	SYS_APIAKEYLO_EL1, \reg1
> +	msr_s	SYS_APIAKEYHI_EL1, \reg2
> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIBKEYLO_EL1)]
> +	msr_s	SYS_APIBKEYLO_EL1, \reg1
> +	msr_s	SYS_APIBKEYHI_EL1, \reg2
> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDAKEYLO_EL1)]
> +	msr_s	SYS_APDAKEYLO_EL1, \reg1
> +	msr_s	SYS_APDAKEYHI_EL1, \reg2
> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDBKEYLO_EL1)]
> +	msr_s	SYS_APDBKEYLO_EL1, \reg1
> +	msr_s	SYS_APDBKEYHI_EL1, \reg2
> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APGAKEYLO_EL1)]
> +	msr_s	SYS_APGAKEYLO_EL1, \reg1
> +	msr_s	SYS_APGAKEYHI_EL1, \reg2
> +.endm
> +
> +.macro ptrauth_switch_to_guest g_ctxt, reg1, reg2, reg3
> +	ldr	\reg1, [\g_ctxt, #(VCPU_HCR_EL2 - VCPU_CONTEXT)]

Given that 100% of the current HW doesn't have ptrauth at all, this
becomes an instant and pointless overhead.

It could easily be avoided by turning this into:

alternative_if_not ARM64_HAS_GENERIC_AUTH_ARCH
	b	1000f
alternative_else
	ldr	\reg1, [\g_ctxt, #(VCPU_HCR_EL2 - VCPU_CONTEXT)]
alternative_endif

> +	and	\reg1, \reg1, #(HCR_API | HCR_APK)
> +	cbz	\reg1, 1000f
> +	add	\reg1, \g_ctxt, #CPU_APIAKEYLO_EL1
> +	ptrauth_restore_state	\reg1, \reg2, \reg3
> +1000:
> +.endm
> +
> +.macro ptrauth_switch_to_host g_ctxt, h_ctxt, reg1, reg2, reg3
> +	ldr	\reg1, [\g_ctxt, #(VCPU_HCR_EL2 - VCPU_CONTEXT)]

Same thing here.

> +	and	\reg1, \reg1, #(HCR_API | HCR_APK)
> +	cbz	\reg1, 1001f
> +	add	\reg1, \g_ctxt, #CPU_APIAKEYLO_EL1
> +	ptrauth_save_state	\reg1, \reg2, \reg3
> +	add	\reg1, \h_ctxt, #CPU_APIAKEYLO_EL1
> +	ptrauth_restore_state	\reg1, \reg2, \reg3
> +	isb
> +1001:
> +.endm
> +
> +#else /* !CONFIG_ARM64_PTR_AUTH */
> +.macro ptrauth_switch_to_guest g_ctxt, reg1, reg2, reg3
> +.endm
> +.macro ptrauth_switch_to_host g_ctxt, h_ctxt, reg1, reg2, reg3
> +.endm
> +#endif /* CONFIG_ARM64_PTR_AUTH */
> +#endif /* __ASSEMBLY__ */
> +#endif /* __ASM_KVM_PTRAUTH_ASM_H */
> diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
> index 7f40dcb..8178330 100644
> --- a/arch/arm64/kernel/asm-offsets.c
> +++ b/arch/arm64/kernel/asm-offsets.c
> @@ -125,7 +125,13 @@ int main(void)
>    DEFINE(VCPU_CONTEXT,		offsetof(struct kvm_vcpu, arch.ctxt));
>    DEFINE(VCPU_FAULT_DISR,	offsetof(struct kvm_vcpu, arch.fault.disr_el1));
>    DEFINE(VCPU_WORKAROUND_FLAGS,	offsetof(struct kvm_vcpu, arch.workaround_flags));
> +  DEFINE(VCPU_HCR_EL2,		offsetof(struct kvm_vcpu, arch.hcr_el2));
>    DEFINE(CPU_GP_REGS,		offsetof(struct kvm_cpu_context, gp_regs));
> +  DEFINE(CPU_APIAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APIAKEYLO_EL1]));
> +  DEFINE(CPU_APIBKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APIBKEYLO_EL1]));
> +  DEFINE(CPU_APDAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APDAKEYLO_EL1]));
> +  DEFINE(CPU_APDBKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APDBKEYLO_EL1]));
> +  DEFINE(CPU_APGAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APGAKEYLO_EL1]));
>    DEFINE(CPU_USER_PT_REGS,	offsetof(struct kvm_regs, regs));
>    DEFINE(HOST_CONTEXT_VCPU,	offsetof(struct kvm_cpu_context, __hyp_running_vcpu));
>  #endif
> diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
> index 4f7b26b..e07f763 100644
> --- a/arch/arm64/kvm/guest.c
> +++ b/arch/arm64/kvm/guest.c
> @@ -878,3 +878,17 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
>  
>  	return ret;
>  }
> +
> +/**
> + * kvm_arm_vcpu_ptrauth_setup_lazy - setup lazy ptrauth for vcpu schedule
> + *
> + * @vcpu: The VCPU pointer
> + *
> + * This function may be used to disable ptrauth and use it in a lazy context
> + * via traps.
> + */
> +void kvm_arm_vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu)
> +{
> +	if (vcpu_has_ptrauth(vcpu))
> +		kvm_arm_vcpu_ptrauth_disable(vcpu);
> +}

Why does this live in guest.c?

> diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
> index 0b79834..5838ff9 100644
> --- a/arch/arm64/kvm/handle_exit.c
> +++ b/arch/arm64/kvm/handle_exit.c
> @@ -30,6 +30,7 @@
>  #include <asm/kvm_coproc.h>
>  #include <asm/kvm_emulate.h>
>  #include <asm/kvm_mmu.h>
> +#include <asm/kvm_ptrauth_asm.h>
>  #include <asm/debug-monitors.h>
>  #include <asm/traps.h>
>  
> @@ -174,19 +175,26 @@ static int handle_sve(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  }
>  
>  /*
> + * Handle the guest trying to use a ptrauth instruction, or trying to access a
> + * ptrauth register.
> + */
> +void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu)
> +{
> +	if (vcpu_has_ptrauth(vcpu)) {
> +		kvm_arm_vcpu_ptrauth_enable(vcpu);

It is odd that the enable function is placed in sys_regs.c, and only
used here. You could either just inline it here, or make it a static
inline in kvm_host.h.

> +		__ptrauth_save_state(vcpu->arch.host_cpu_context);

You could expand the __ptrauth_save_state macro here. It is only used
once, and one less level of obfuscation will help grepping.

> +	} else {
> +		kvm_inject_undefined(vcpu);
> +	}
> +}
> +
> +/*
>   * Guest usage of a ptrauth instruction (which the guest EL1 did not turn into
>   * a NOP).
>   */
>  static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  {
> -	/*
> -	 * We don't currently support ptrauth in a guest, and we mask the ID
> -	 * registers to prevent well-behaved guests from trying to make use of
> -	 * it.
> -	 *
> -	 * Inject an UNDEF, as if the feature really isn't present.
> -	 */
> -	kvm_inject_undefined(vcpu);
> +	kvm_arm_vcpu_ptrauth_trap(vcpu);
>  	return 1;
>  }
>  
> diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
> index 675fdc1..3a70213 100644
> --- a/arch/arm64/kvm/hyp/entry.S
> +++ b/arch/arm64/kvm/hyp/entry.S
> @@ -24,6 +24,7 @@
>  #include <asm/kvm_arm.h>
>  #include <asm/kvm_asm.h>
>  #include <asm/kvm_mmu.h>
> +#include <asm/kvm_ptrauth_asm.h>
>  
>  #define CPU_GP_REG_OFFSET(x)	(CPU_GP_REGS + x)
>  #define CPU_XREG_OFFSET(x)	CPU_GP_REG_OFFSET(CPU_USER_PT_REGS + 8*x)
> @@ -64,6 +65,9 @@ ENTRY(__guest_enter)
>  
>  	add	x18, x0, #VCPU_CONTEXT
>  
> +	// Macro ptrauth_switch_to_guest(guest cxt, tmp1, tmp2, tmp3).
> +	ptrauth_switch_to_guest x18, x0, x1, x2
> +

This comment doesn't tell us much. What we really need is a comment
explaining *why* this needs to be an inline macro. Otherwise, someone
will one day move it back to some C code and things will randomly break.

>  	// Restore guest regs x0-x17
>  	ldp	x0, x1,   [x18, #CPU_XREG_OFFSET(0)]
>  	ldp	x2, x3,   [x18, #CPU_XREG_OFFSET(2)]
> @@ -118,6 +122,9 @@ ENTRY(__guest_exit)
>  
>  	get_host_ctxt	x2, x3
>  
> +	// Macro ptrauth_switch_to_host(guest cxt, host cxt, tmp1, tmp2, tmp3).
> +	ptrauth_switch_to_host x1, x2, x3, x4, x5
> +
>  	// Now restore the host regs
>  	restore_callee_saved_regs x2
>  
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 09e9b06..4a98b5c 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -1007,6 +1007,38 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  	{ SYS_DESC(SYS_PMEVTYPERn_EL0(n)),					\
>  	  access_pmu_evtyper, reset_unknown, (PMEVTYPER0_EL0 + n), }
>  
> +void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu)
> +{
> +	vcpu->arch.hcr_el2 |= (HCR_API | HCR_APK);
> +}
> +
> +void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu)
> +{
> +	vcpu->arch.hcr_el2 &= ~(HCR_API | HCR_APK);
> +}

As mentionned above, these could be moved as static inline to an include
file, of even directly inlined in the code that use it.

> +
> +static bool trap_ptrauth(struct kvm_vcpu *vcpu,
> +			 struct sys_reg_params *p,
> +			 const struct sys_reg_desc *rd)
> +{
> +	kvm_arm_vcpu_ptrauth_trap(vcpu);
> +	return false;

We need a comment explaining why we return false: Either ptrauth is on,
and we re-execute the same instruction, or it is off, and we have
injected an UNDEF. In both cases, we don't advance the guest's PC.

> +}
> +
> +static unsigned int ptrauth_visibility(const struct kvm_vcpu *vcpu,
> +			const struct sys_reg_desc *rd)
> +{
> +	return vcpu_has_ptrauth(vcpu) ? 0 : REG_HIDDEN_USER | REG_HIDDEN_GUEST;
> +}
> +
> +#define __PTRAUTH_KEY(k)						\
> +	{ SYS_DESC(SYS_## k), trap_ptrauth, reset_unknown, k,		\
> +	.visibility = ptrauth_visibility}
> +
> +#define PTRAUTH_KEY(k)							\
> +	__PTRAUTH_KEY(k ## KEYLO_EL1),					\
> +	__PTRAUTH_KEY(k ## KEYHI_EL1)
> +
>  static bool access_arch_timer(struct kvm_vcpu *vcpu,
>  			      struct sys_reg_params *p,
>  			      const struct sys_reg_desc *r)
> @@ -1058,9 +1090,11 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu,
>  					 (0xfUL << ID_AA64ISAR1_API_SHIFT) |
>  					 (0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
>  					 (0xfUL << ID_AA64ISAR1_GPI_SHIFT);
> -		if (val & ptrauth_mask)
> -			kvm_debug("ptrauth unsupported for guests, suppressing\n");
> -		val &= ~ptrauth_mask;
> +		if (!vcpu_has_ptrauth(vcpu)) {
> +			if (val & ptrauth_mask)
> +				kvm_debug("ptrauth unsupported for guests, suppressing\n");
> +			val &= ~ptrauth_mask;
> +		}
>  	}
>  
>  	return val;
> @@ -1460,6 +1494,12 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>  	{ SYS_DESC(SYS_TTBR1_EL1), access_vm_reg, reset_unknown, TTBR1_EL1 },
>  	{ SYS_DESC(SYS_TCR_EL1), access_vm_reg, reset_val, TCR_EL1, 0 },
>  
> +	PTRAUTH_KEY(APIA),
> +	PTRAUTH_KEY(APIB),
> +	PTRAUTH_KEY(APDA),
> +	PTRAUTH_KEY(APDB),
> +	PTRAUTH_KEY(APGA),
> +
>  	{ SYS_DESC(SYS_AFSR0_EL1), access_vm_reg, reset_unknown, AFSR0_EL1 },
>  	{ SYS_DESC(SYS_AFSR1_EL1), access_vm_reg, reset_unknown, AFSR1_EL1 },
>  	{ SYS_DESC(SYS_ESR_EL1), access_vm_reg, reset_unknown, ESR_EL1 },
> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> index 9edbf0f..8d1b73c 100644
> --- a/virt/kvm/arm/arm.c
> +++ b/virt/kvm/arm/arm.c
> @@ -385,6 +385,8 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>  		vcpu_clear_wfe_traps(vcpu);
>  	else
>  		vcpu_set_wfe_traps(vcpu);
> +
> +	kvm_arm_vcpu_ptrauth_setup_lazy(vcpu);
>  }
>  
>  void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
> 

Despite all the comments, the code looks in good shape, and I trust it
shouldn't take you long to refactor it, retest it and send an updated
version once we've settled on the ABI part which is the most contentious.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v9 4/5] KVM: arm64: Add capability to advertise ptrauth for guest
@ 2019-04-17  9:39       ` Amit Daniel Kachhap
  0 siblings, 0 replies; 77+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-17  9:39 UTC (permalink / raw)
  To: Dave Martin
  Cc: linux-arm-kernel, Marc Zyngier, Catalin Marinas, Will Deacon,
	Kristina Martsenko, kvmarm, Ramana Radhakrishnan, linux-kernel

Hi,

On 4/16/19 10:02 PM, Dave Martin wrote:
> On Fri, Apr 12, 2019 at 08:50:35AM +0530, Amit Daniel Kachhap wrote:
>> This patch advertises the capability of two cpu feature called address
>> pointer authentication and generic pointer authentication. These
>> capabilities depend upon system support for pointer authentication and
>> VHE mode.
>>
>> The current arm64 KVM partially implements pointer authentication and
>> support of address/generic authentication are tied together. However,
>> separate ABI requirements for both of them is added so that any future
>> isolated implementation will not require any ABI changes.
>>
>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>> Cc: Mark Rutland <mark.rutland@arm.com>
>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>> Cc: Christoffer Dall <christoffer.dall@arm.com>
>> Cc: kvmarm@lists.cs.columbia.edu
>> ---
>> Changes since v8:
>> *  Keep the capability check same for the 2 vcpu ptrauth features. [Dave Martin]
>>
>>   Documentation/virtual/kvm/api.txt | 2 ++
>>   arch/arm64/kvm/reset.c            | 5 +++++
>>   include/uapi/linux/kvm.h          | 2 ++
>>   3 files changed, 9 insertions(+)
>>
>> diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
>> index 9d202f4..56021d0 100644
>> --- a/Documentation/virtual/kvm/api.txt
>> +++ b/Documentation/virtual/kvm/api.txt
>> @@ -2756,9 +2756,11 @@ Possible features:
>>   	- KVM_ARM_VCPU_PTRAUTH_ADDRESS: Enables Address Pointer authentication
>>   	  for the CPU and supported only on arm64 architecture.
>>   	  Must be requested if KVM_ARM_VCPU_PTRAUTH_GENERIC is also requested.
>> +	  Depends on KVM_CAP_ARM_PTRAUTH_ADDRESS.
> 
> What if KVM_CAP_ARM_PTRAUTH_ADDRESS is absent and
> KVM_ARM_VCPU_PTRAUTH_GENERIC is requested?  By these rules, we have a
> contradiction: userspace both must request and must not request
> KVM_ARM_VCPU_PTRAUTH_ADDRESS.
> 
> We could qualify as follows:
> 
> 	Depends on KVM_CAP_ARM_PTRAUTH_ADDRESS.
> 	Must be requested if KVM_CAP_ARM_PTRAUTH_ADDRESS is present and
> 	KVM_ARM_VCPU_PTRAUTH_GENERIC is also requested.
ok agree. This makes it clear.
> 
>>   	- KVM_ARM_VCPU_PTRAUTH_GENERIC: Enables Generic Pointer authentication
>>   	  for the CPU and supported only on arm64 architecture.
>>   	  Must be requested if KVM_ARM_VCPU_PTRAUTH_ADDRESS is also requested.
>> +	  Depends on KVM_CAP_ARM_PTRAUTH_GENERIC.
> 
> Similarly.
> 
> Or, we go back to having a single cap and a single feature, and add
> more caps/features later on if we decide it's possible to support
> address/generic auth separately later on.
> 
> Otherwise, we end up with complex rules that can't be tested.  This is a
> high price to pay for forwards compatibility: userspace's conformance to
> the rules can't be fully tested, so there's a fair chance it won't work
> properly anyway when hardware/KVM with just one auth type appears.
> 
> [...]
> 
> Thoughts?
I agree that single cpufeature/capability is a simple solution to 
implement. The bifurcation of feature was done to reflect the different 
ID register split up.

But the h/w implementation provides a same EL2 exception trap for both 
the features and hence current implementation ties both of the features 
together. I guess in future if this is limitation goes away then one 
auth type is possible. Here I am not sure if the future h/w will retain 
this merged exception trap and add 2 new separate exception trap in 
addition to it.

I guess it will be probably simple split-up of this merged exception 
trap. In this case there won't be any ABI change required as per current 
implementation.

Thanks,
Amit Daniel


> 
> Cheers
> ---Dave
> 

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v9 4/5] KVM: arm64: Add capability to advertise ptrauth for guest
@ 2019-04-17  9:39       ` Amit Daniel Kachhap
  0 siblings, 0 replies; 77+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-17  9:39 UTC (permalink / raw)
  To: Dave Martin
  Cc: Marc Zyngier, Catalin Marinas, Will Deacon, linux-kernel,
	Kristina Martsenko, Ramana Radhakrishnan, kvmarm,
	linux-arm-kernel

Hi,

On 4/16/19 10:02 PM, Dave Martin wrote:
> On Fri, Apr 12, 2019 at 08:50:35AM +0530, Amit Daniel Kachhap wrote:
>> This patch advertises the capability of two cpu feature called address
>> pointer authentication and generic pointer authentication. These
>> capabilities depend upon system support for pointer authentication and
>> VHE mode.
>>
>> The current arm64 KVM partially implements pointer authentication and
>> support of address/generic authentication are tied together. However,
>> separate ABI requirements for both of them is added so that any future
>> isolated implementation will not require any ABI changes.
>>
>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>> Cc: Mark Rutland <mark.rutland@arm.com>
>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>> Cc: Christoffer Dall <christoffer.dall@arm.com>
>> Cc: kvmarm@lists.cs.columbia.edu
>> ---
>> Changes since v8:
>> *  Keep the capability check same for the 2 vcpu ptrauth features. [Dave Martin]
>>
>>   Documentation/virtual/kvm/api.txt | 2 ++
>>   arch/arm64/kvm/reset.c            | 5 +++++
>>   include/uapi/linux/kvm.h          | 2 ++
>>   3 files changed, 9 insertions(+)
>>
>> diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
>> index 9d202f4..56021d0 100644
>> --- a/Documentation/virtual/kvm/api.txt
>> +++ b/Documentation/virtual/kvm/api.txt
>> @@ -2756,9 +2756,11 @@ Possible features:
>>   	- KVM_ARM_VCPU_PTRAUTH_ADDRESS: Enables Address Pointer authentication
>>   	  for the CPU and supported only on arm64 architecture.
>>   	  Must be requested if KVM_ARM_VCPU_PTRAUTH_GENERIC is also requested.
>> +	  Depends on KVM_CAP_ARM_PTRAUTH_ADDRESS.
> 
> What if KVM_CAP_ARM_PTRAUTH_ADDRESS is absent and
> KVM_ARM_VCPU_PTRAUTH_GENERIC is requested?  By these rules, we have a
> contradiction: userspace both must request and must not request
> KVM_ARM_VCPU_PTRAUTH_ADDRESS.
> 
> We could qualify as follows:
> 
> 	Depends on KVM_CAP_ARM_PTRAUTH_ADDRESS.
> 	Must be requested if KVM_CAP_ARM_PTRAUTH_ADDRESS is present and
> 	KVM_ARM_VCPU_PTRAUTH_GENERIC is also requested.
ok agree. This makes it clear.
> 
>>   	- KVM_ARM_VCPU_PTRAUTH_GENERIC: Enables Generic Pointer authentication
>>   	  for the CPU and supported only on arm64 architecture.
>>   	  Must be requested if KVM_ARM_VCPU_PTRAUTH_ADDRESS is also requested.
>> +	  Depends on KVM_CAP_ARM_PTRAUTH_GENERIC.
> 
> Similarly.
> 
> Or, we go back to having a single cap and a single feature, and add
> more caps/features later on if we decide it's possible to support
> address/generic auth separately later on.
> 
> Otherwise, we end up with complex rules that can't be tested.  This is a
> high price to pay for forwards compatibility: userspace's conformance to
> the rules can't be fully tested, so there's a fair chance it won't work
> properly anyway when hardware/KVM with just one auth type appears.
> 
> [...]
> 
> Thoughts?
I agree that single cpufeature/capability is a simple solution to 
implement. The bifurcation of feature was done to reflect the different 
ID register split up.

But the h/w implementation provides a same EL2 exception trap for both 
the features and hence current implementation ties both of the features 
together. I guess in future if this is limitation goes away then one 
auth type is possible. Here I am not sure if the future h/w will retain 
this merged exception trap and add 2 new separate exception trap in 
addition to it.

I guess it will be probably simple split-up of this merged exception 
trap. In this case there won't be any ABI change required as per current 
implementation.

Thanks,
Amit Daniel


> 
> Cheers
> ---Dave
> 
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v9 4/5] KVM: arm64: Add capability to advertise ptrauth for guest
@ 2019-04-17  9:39       ` Amit Daniel Kachhap
  0 siblings, 0 replies; 77+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-17  9:39 UTC (permalink / raw)
  To: Dave Martin
  Cc: Marc Zyngier, Catalin Marinas, Will Deacon, linux-kernel,
	Kristina Martsenko, Ramana Radhakrishnan, kvmarm,
	linux-arm-kernel

Hi,

On 4/16/19 10:02 PM, Dave Martin wrote:
> On Fri, Apr 12, 2019 at 08:50:35AM +0530, Amit Daniel Kachhap wrote:
>> This patch advertises the capability of two cpu feature called address
>> pointer authentication and generic pointer authentication. These
>> capabilities depend upon system support for pointer authentication and
>> VHE mode.
>>
>> The current arm64 KVM partially implements pointer authentication and
>> support of address/generic authentication are tied together. However,
>> separate ABI requirements for both of them is added so that any future
>> isolated implementation will not require any ABI changes.
>>
>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>> Cc: Mark Rutland <mark.rutland@arm.com>
>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>> Cc: Christoffer Dall <christoffer.dall@arm.com>
>> Cc: kvmarm@lists.cs.columbia.edu
>> ---
>> Changes since v8:
>> *  Keep the capability check same for the 2 vcpu ptrauth features. [Dave Martin]
>>
>>   Documentation/virtual/kvm/api.txt | 2 ++
>>   arch/arm64/kvm/reset.c            | 5 +++++
>>   include/uapi/linux/kvm.h          | 2 ++
>>   3 files changed, 9 insertions(+)
>>
>> diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
>> index 9d202f4..56021d0 100644
>> --- a/Documentation/virtual/kvm/api.txt
>> +++ b/Documentation/virtual/kvm/api.txt
>> @@ -2756,9 +2756,11 @@ Possible features:
>>   	- KVM_ARM_VCPU_PTRAUTH_ADDRESS: Enables Address Pointer authentication
>>   	  for the CPU and supported only on arm64 architecture.
>>   	  Must be requested if KVM_ARM_VCPU_PTRAUTH_GENERIC is also requested.
>> +	  Depends on KVM_CAP_ARM_PTRAUTH_ADDRESS.
> 
> What if KVM_CAP_ARM_PTRAUTH_ADDRESS is absent and
> KVM_ARM_VCPU_PTRAUTH_GENERIC is requested?  By these rules, we have a
> contradiction: userspace both must request and must not request
> KVM_ARM_VCPU_PTRAUTH_ADDRESS.
> 
> We could qualify as follows:
> 
> 	Depends on KVM_CAP_ARM_PTRAUTH_ADDRESS.
> 	Must be requested if KVM_CAP_ARM_PTRAUTH_ADDRESS is present and
> 	KVM_ARM_VCPU_PTRAUTH_GENERIC is also requested.
ok agree. This makes it clear.
> 
>>   	- KVM_ARM_VCPU_PTRAUTH_GENERIC: Enables Generic Pointer authentication
>>   	  for the CPU and supported only on arm64 architecture.
>>   	  Must be requested if KVM_ARM_VCPU_PTRAUTH_ADDRESS is also requested.
>> +	  Depends on KVM_CAP_ARM_PTRAUTH_GENERIC.
> 
> Similarly.
> 
> Or, we go back to having a single cap and a single feature, and add
> more caps/features later on if we decide it's possible to support
> address/generic auth separately later on.
> 
> Otherwise, we end up with complex rules that can't be tested.  This is a
> high price to pay for forwards compatibility: userspace's conformance to
> the rules can't be fully tested, so there's a fair chance it won't work
> properly anyway when hardware/KVM with just one auth type appears.
> 
> [...]
> 
> Thoughts?
I agree that single cpufeature/capability is a simple solution to 
implement. The bifurcation of feature was done to reflect the different 
ID register split up.

But the h/w implementation provides a same EL2 exception trap for both 
the features and hence current implementation ties both of the features 
together. I guess in future if this is limitation goes away then one 
auth type is possible. Here I am not sure if the future h/w will retain 
this merged exception trap and add 2 new separate exception trap in 
addition to it.

I guess it will be probably simple split-up of this merged exception 
trap. In this case there won't be any ABI change required as per current 
implementation.

Thanks,
Amit Daniel


> 
> Cheers
> ---Dave
> 

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [kvmtool PATCH v9 5/5] KVM: arm/arm64: Add a vcpu feature for pointer authentication
@ 2019-04-17 12:36       ` Amit Daniel Kachhap
  0 siblings, 0 replies; 77+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-17 12:36 UTC (permalink / raw)
  To: Dave Martin
  Cc: linux-arm-kernel, Marc Zyngier, Catalin Marinas, Will Deacon,
	Kristina Martsenko, kvmarm, Ramana Radhakrishnan, linux-kernel

Hi,

On 4/16/19 10:02 PM, Dave Martin wrote:
> On Fri, Apr 12, 2019 at 08:50:36AM +0530, Amit Daniel Kachhap wrote:
>> This patch adds a runtime capabality for KVM tool to enable Arm64 8.3
>> Pointer Authentication in guest kernel. Two vcpu features
>> KVM_ARM_VCPU_PTRAUTH_[ADDRESS/GENERIC] are supplied together to enable
>> Pointer Authentication in KVM guest after checking the capability.
>>
>> Command line options --enable-ptrauth and --disable-ptrauth are added
>> to use this feature. However, if those options are not provided then
>> also this feature is enabled if host supports this capability.
>>
>> The macros defined in the headers are not in sync and should be replaced
>> from the upstream.
>>
>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>> ---
>> Changes since v8:
>> *  Added option --enable-ptrauth and --disable-ptrauth to use ptrauth. Also
>>     enable ptrauth if no option provided and Host supports ptrauth. [Dave Martin]
>> * The macro definition are not linear as the kvmtool is not synchronised with the
>>    kernel changes present in kvmarm/next tree.
>>
>>   arm/aarch32/include/kvm/kvm-cpu-arch.h    |  1 +
>>   arm/aarch64/include/asm/kvm.h             |  2 ++
>>   arm/aarch64/include/kvm/kvm-config-arch.h |  6 +++++-
>>   arm/aarch64/include/kvm/kvm-cpu-arch.h    |  2 ++
>>   arm/include/arm-common/kvm-config-arch.h  |  2 ++
>>   arm/kvm-cpu.c                             | 11 +++++++++++
>>   include/linux/kvm.h                       |  2 ++
>>   7 files changed, 25 insertions(+), 1 deletion(-)
>>
>> diff --git a/arm/aarch32/include/kvm/kvm-cpu-arch.h b/arm/aarch32/include/kvm/kvm-cpu-arch.h
>> index d28ea67..520ea76 100644
>> --- a/arm/aarch32/include/kvm/kvm-cpu-arch.h
>> +++ b/arm/aarch32/include/kvm/kvm-cpu-arch.h
>> @@ -13,4 +13,5 @@
>>   #define ARM_CPU_ID		0, 0, 0
>>   #define ARM_CPU_ID_MPIDR	5
>>   
>> +#define ARM_VCPU_PTRAUTH_FEATURE	0
>>   #endif /* KVM__KVM_CPU_ARCH_H */
>> diff --git a/arm/aarch64/include/asm/kvm.h b/arm/aarch64/include/asm/kvm.h
>> index 97c3478..a2546e6 100644
>> --- a/arm/aarch64/include/asm/kvm.h
>> +++ b/arm/aarch64/include/asm/kvm.h
>> @@ -102,6 +102,8 @@ struct kvm_regs {
>>   #define KVM_ARM_VCPU_EL1_32BIT		1 /* CPU running a 32bit VM */
>>   #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
>>   #define KVM_ARM_VCPU_PMU_V3		3 /* Support guest PMUv3 */
>> +#define KVM_ARM_VCPU_PTRAUTH_ADDRESS	5 /* CPU uses address pointer authentication */
>> +#define KVM_ARM_VCPU_PTRAUTH_GENERIC	6 /* CPU uses generic pointer authentication */
>>   
>>   struct kvm_vcpu_init {
>>   	__u32 target;
>> diff --git a/arm/aarch64/include/kvm/kvm-config-arch.h b/arm/aarch64/include/kvm/kvm-config-arch.h
>> index 04be43d..0279b13 100644
>> --- a/arm/aarch64/include/kvm/kvm-config-arch.h
>> +++ b/arm/aarch64/include/kvm/kvm-config-arch.h
>> @@ -8,7 +8,11 @@
>>   			"Create PMUv3 device"),				\
>>   	OPT_U64('\0', "kaslr-seed", &(cfg)->kaslr_seed,			\
>>   			"Specify random seed for Kernel Address Space "	\
>> -			"Layout Randomization (KASLR)"),
>> +			"Layout Randomization (KASLR)"),		\
>> +	OPT_BOOLEAN('\0', "enable-ptrauth", &(cfg)->enable_ptrauth,	\
>> +			"Enables pointer authentication"),		\
>> +	OPT_BOOLEAN('\0', "disable-ptrauth", &(cfg)->disable_ptrauth,	\
>> +			"Disables pointer authentication"),
>>   
>>   #include "arm-common/kvm-config-arch.h"
>>   
>> diff --git a/arm/aarch64/include/kvm/kvm-cpu-arch.h b/arm/aarch64/include/kvm/kvm-cpu-arch.h
>> index a9d8563..fcc2107 100644
>> --- a/arm/aarch64/include/kvm/kvm-cpu-arch.h
>> +++ b/arm/aarch64/include/kvm/kvm-cpu-arch.h
>> @@ -17,4 +17,6 @@
>>   #define ARM_CPU_CTRL		3, 0, 1, 0
>>   #define ARM_CPU_CTRL_SCTLR_EL1	0
>>   
>> +#define ARM_VCPU_PTRAUTH_FEATURE	((1UL << KVM_ARM_VCPU_PTRAUTH_ADDRESS) \
>> +					| (1UL << KVM_ARM_VCPU_PTRAUTH_GENERIC))
>>   #endif /* KVM__KVM_CPU_ARCH_H */
>> diff --git a/arm/include/arm-common/kvm-config-arch.h b/arm/include/arm-common/kvm-config-arch.h
>> index 5734c46..1b4287d 100644
>> --- a/arm/include/arm-common/kvm-config-arch.h
>> +++ b/arm/include/arm-common/kvm-config-arch.h
>> @@ -10,6 +10,8 @@ struct kvm_config_arch {
>>   	bool		aarch32_guest;
>>   	bool		has_pmuv3;
>>   	u64		kaslr_seed;
>> +	bool		enable_ptrauth;
>> +	bool		disable_ptrauth;
>>   	enum irqchip_type irqchip;
>>   	u64		fw_addr;
>>   };
>> diff --git a/arm/kvm-cpu.c b/arm/kvm-cpu.c
>> index 7780251..a45a649 100644
>> --- a/arm/kvm-cpu.c
>> +++ b/arm/kvm-cpu.c
>> @@ -69,6 +69,17 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned long cpu_id)
>>   	}
>>   
>>   	/*
>> +	 * Always enable Pointer Authentication if requested. If system supports
>> +	 * this extension then also enable it by default provided no disable
>> +	 * request present.
>> +	 */
>> +	if ((kvm->cfg.arch.enable_ptrauth) ||
> 
> Nit: redundant ()
ok.
> 
>> +		(kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_ADDRESS) &&
> 
> Funny indentation?
ok will align it.
> 
>> +		kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_GENERIC) &&
>> +		!kvm->cfg.arch.disable_ptrauth))
>> +			vcpu_init.features[0] |= ARM_VCPU_PTRAUTH_FEATURE;
>> +
> 
> Hmm, we have some weird behaviours here: --enable-ptrauth
> --disable-ptrauth will result in us trying to enable it, and
May be 1 more check can be added here like,

if (kvm->cfg.arch.enable_ptrauth && kvm->cfg.arch.disable_ptrauth) {
	print_err("Only 1 option should be supplied\n");
	ret -EINVAL;
}

> --enable-ptrauth without the required caps will result in an unhelpful
> "Unable to initialise vcpu" error message.  I'm not sure this is a
> whole lot worse than the way other options behave today, though.

Since now ptrauth is enabled by default if system supports it even 
though it is not explicitly requested. so I thought --enable-ptrauth
option has to now forcefully enable ptrauth and may cause some error 
message in failure.
Did I interpret something different from your last suggestion[1]?

Actually we can skip with --enable-ptrauth and have just 2 option,
* By default enable ptrauth if system supports it.
* --disable-ptrauth: useful to migrate non-ptrauth guests on ptrauth hosts

[1]:https://lkml.org/lkml/2019/4/5/171

Thanks,
Amit Daniel
> 
> You could try to be more explicit about what happens in these cases, but
> I'm not sure it's worth it given the state of the existing code.

> 
> [...]
> 
> Cheers
> ---Dave
> 

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [kvmtool PATCH v9 5/5] KVM: arm/arm64: Add a vcpu feature for pointer authentication
@ 2019-04-17 12:36       ` Amit Daniel Kachhap
  0 siblings, 0 replies; 77+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-17 12:36 UTC (permalink / raw)
  To: Dave Martin
  Cc: Marc Zyngier, Catalin Marinas, Will Deacon, linux-kernel,
	Kristina Martsenko, Ramana Radhakrishnan, kvmarm,
	linux-arm-kernel

Hi,

On 4/16/19 10:02 PM, Dave Martin wrote:
> On Fri, Apr 12, 2019 at 08:50:36AM +0530, Amit Daniel Kachhap wrote:
>> This patch adds a runtime capabality for KVM tool to enable Arm64 8.3
>> Pointer Authentication in guest kernel. Two vcpu features
>> KVM_ARM_VCPU_PTRAUTH_[ADDRESS/GENERIC] are supplied together to enable
>> Pointer Authentication in KVM guest after checking the capability.
>>
>> Command line options --enable-ptrauth and --disable-ptrauth are added
>> to use this feature. However, if those options are not provided then
>> also this feature is enabled if host supports this capability.
>>
>> The macros defined in the headers are not in sync and should be replaced
>> from the upstream.
>>
>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>> ---
>> Changes since v8:
>> *  Added option --enable-ptrauth and --disable-ptrauth to use ptrauth. Also
>>     enable ptrauth if no option provided and Host supports ptrauth. [Dave Martin]
>> * The macro definition are not linear as the kvmtool is not synchronised with the
>>    kernel changes present in kvmarm/next tree.
>>
>>   arm/aarch32/include/kvm/kvm-cpu-arch.h    |  1 +
>>   arm/aarch64/include/asm/kvm.h             |  2 ++
>>   arm/aarch64/include/kvm/kvm-config-arch.h |  6 +++++-
>>   arm/aarch64/include/kvm/kvm-cpu-arch.h    |  2 ++
>>   arm/include/arm-common/kvm-config-arch.h  |  2 ++
>>   arm/kvm-cpu.c                             | 11 +++++++++++
>>   include/linux/kvm.h                       |  2 ++
>>   7 files changed, 25 insertions(+), 1 deletion(-)
>>
>> diff --git a/arm/aarch32/include/kvm/kvm-cpu-arch.h b/arm/aarch32/include/kvm/kvm-cpu-arch.h
>> index d28ea67..520ea76 100644
>> --- a/arm/aarch32/include/kvm/kvm-cpu-arch.h
>> +++ b/arm/aarch32/include/kvm/kvm-cpu-arch.h
>> @@ -13,4 +13,5 @@
>>   #define ARM_CPU_ID		0, 0, 0
>>   #define ARM_CPU_ID_MPIDR	5
>>   
>> +#define ARM_VCPU_PTRAUTH_FEATURE	0
>>   #endif /* KVM__KVM_CPU_ARCH_H */
>> diff --git a/arm/aarch64/include/asm/kvm.h b/arm/aarch64/include/asm/kvm.h
>> index 97c3478..a2546e6 100644
>> --- a/arm/aarch64/include/asm/kvm.h
>> +++ b/arm/aarch64/include/asm/kvm.h
>> @@ -102,6 +102,8 @@ struct kvm_regs {
>>   #define KVM_ARM_VCPU_EL1_32BIT		1 /* CPU running a 32bit VM */
>>   #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
>>   #define KVM_ARM_VCPU_PMU_V3		3 /* Support guest PMUv3 */
>> +#define KVM_ARM_VCPU_PTRAUTH_ADDRESS	5 /* CPU uses address pointer authentication */
>> +#define KVM_ARM_VCPU_PTRAUTH_GENERIC	6 /* CPU uses generic pointer authentication */
>>   
>>   struct kvm_vcpu_init {
>>   	__u32 target;
>> diff --git a/arm/aarch64/include/kvm/kvm-config-arch.h b/arm/aarch64/include/kvm/kvm-config-arch.h
>> index 04be43d..0279b13 100644
>> --- a/arm/aarch64/include/kvm/kvm-config-arch.h
>> +++ b/arm/aarch64/include/kvm/kvm-config-arch.h
>> @@ -8,7 +8,11 @@
>>   			"Create PMUv3 device"),				\
>>   	OPT_U64('\0', "kaslr-seed", &(cfg)->kaslr_seed,			\
>>   			"Specify random seed for Kernel Address Space "	\
>> -			"Layout Randomization (KASLR)"),
>> +			"Layout Randomization (KASLR)"),		\
>> +	OPT_BOOLEAN('\0', "enable-ptrauth", &(cfg)->enable_ptrauth,	\
>> +			"Enables pointer authentication"),		\
>> +	OPT_BOOLEAN('\0', "disable-ptrauth", &(cfg)->disable_ptrauth,	\
>> +			"Disables pointer authentication"),
>>   
>>   #include "arm-common/kvm-config-arch.h"
>>   
>> diff --git a/arm/aarch64/include/kvm/kvm-cpu-arch.h b/arm/aarch64/include/kvm/kvm-cpu-arch.h
>> index a9d8563..fcc2107 100644
>> --- a/arm/aarch64/include/kvm/kvm-cpu-arch.h
>> +++ b/arm/aarch64/include/kvm/kvm-cpu-arch.h
>> @@ -17,4 +17,6 @@
>>   #define ARM_CPU_CTRL		3, 0, 1, 0
>>   #define ARM_CPU_CTRL_SCTLR_EL1	0
>>   
>> +#define ARM_VCPU_PTRAUTH_FEATURE	((1UL << KVM_ARM_VCPU_PTRAUTH_ADDRESS) \
>> +					| (1UL << KVM_ARM_VCPU_PTRAUTH_GENERIC))
>>   #endif /* KVM__KVM_CPU_ARCH_H */
>> diff --git a/arm/include/arm-common/kvm-config-arch.h b/arm/include/arm-common/kvm-config-arch.h
>> index 5734c46..1b4287d 100644
>> --- a/arm/include/arm-common/kvm-config-arch.h
>> +++ b/arm/include/arm-common/kvm-config-arch.h
>> @@ -10,6 +10,8 @@ struct kvm_config_arch {
>>   	bool		aarch32_guest;
>>   	bool		has_pmuv3;
>>   	u64		kaslr_seed;
>> +	bool		enable_ptrauth;
>> +	bool		disable_ptrauth;
>>   	enum irqchip_type irqchip;
>>   	u64		fw_addr;
>>   };
>> diff --git a/arm/kvm-cpu.c b/arm/kvm-cpu.c
>> index 7780251..a45a649 100644
>> --- a/arm/kvm-cpu.c
>> +++ b/arm/kvm-cpu.c
>> @@ -69,6 +69,17 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned long cpu_id)
>>   	}
>>   
>>   	/*
>> +	 * Always enable Pointer Authentication if requested. If system supports
>> +	 * this extension then also enable it by default provided no disable
>> +	 * request present.
>> +	 */
>> +	if ((kvm->cfg.arch.enable_ptrauth) ||
> 
> Nit: redundant ()
ok.
> 
>> +		(kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_ADDRESS) &&
> 
> Funny indentation?
ok will align it.
> 
>> +		kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_GENERIC) &&
>> +		!kvm->cfg.arch.disable_ptrauth))
>> +			vcpu_init.features[0] |= ARM_VCPU_PTRAUTH_FEATURE;
>> +
> 
> Hmm, we have some weird behaviours here: --enable-ptrauth
> --disable-ptrauth will result in us trying to enable it, and
May be 1 more check can be added here like,

if (kvm->cfg.arch.enable_ptrauth && kvm->cfg.arch.disable_ptrauth) {
	print_err("Only 1 option should be supplied\n");
	ret -EINVAL;
}

> --enable-ptrauth without the required caps will result in an unhelpful
> "Unable to initialise vcpu" error message.  I'm not sure this is a
> whole lot worse than the way other options behave today, though.

Since now ptrauth is enabled by default if system supports it even 
though it is not explicitly requested. so I thought --enable-ptrauth
option has to now forcefully enable ptrauth and may cause some error 
message in failure.
Did I interpret something different from your last suggestion[1]?

Actually we can skip with --enable-ptrauth and have just 2 option,
* By default enable ptrauth if system supports it.
* --disable-ptrauth: useful to migrate non-ptrauth guests on ptrauth hosts

[1]:https://lkml.org/lkml/2019/4/5/171

Thanks,
Amit Daniel
> 
> You could try to be more explicit about what happens in these cases, but
> I'm not sure it's worth it given the state of the existing code.

> 
> [...]
> 
> Cheers
> ---Dave
> 
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [kvmtool PATCH v9 5/5] KVM: arm/arm64: Add a vcpu feature for pointer authentication
@ 2019-04-17 12:36       ` Amit Daniel Kachhap
  0 siblings, 0 replies; 77+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-17 12:36 UTC (permalink / raw)
  To: Dave Martin
  Cc: Marc Zyngier, Catalin Marinas, Will Deacon, linux-kernel,
	Kristina Martsenko, Ramana Radhakrishnan, kvmarm,
	linux-arm-kernel

Hi,

On 4/16/19 10:02 PM, Dave Martin wrote:
> On Fri, Apr 12, 2019 at 08:50:36AM +0530, Amit Daniel Kachhap wrote:
>> This patch adds a runtime capabality for KVM tool to enable Arm64 8.3
>> Pointer Authentication in guest kernel. Two vcpu features
>> KVM_ARM_VCPU_PTRAUTH_[ADDRESS/GENERIC] are supplied together to enable
>> Pointer Authentication in KVM guest after checking the capability.
>>
>> Command line options --enable-ptrauth and --disable-ptrauth are added
>> to use this feature. However, if those options are not provided then
>> also this feature is enabled if host supports this capability.
>>
>> The macros defined in the headers are not in sync and should be replaced
>> from the upstream.
>>
>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>> ---
>> Changes since v8:
>> *  Added option --enable-ptrauth and --disable-ptrauth to use ptrauth. Also
>>     enable ptrauth if no option provided and Host supports ptrauth. [Dave Martin]
>> * The macro definition are not linear as the kvmtool is not synchronised with the
>>    kernel changes present in kvmarm/next tree.
>>
>>   arm/aarch32/include/kvm/kvm-cpu-arch.h    |  1 +
>>   arm/aarch64/include/asm/kvm.h             |  2 ++
>>   arm/aarch64/include/kvm/kvm-config-arch.h |  6 +++++-
>>   arm/aarch64/include/kvm/kvm-cpu-arch.h    |  2 ++
>>   arm/include/arm-common/kvm-config-arch.h  |  2 ++
>>   arm/kvm-cpu.c                             | 11 +++++++++++
>>   include/linux/kvm.h                       |  2 ++
>>   7 files changed, 25 insertions(+), 1 deletion(-)
>>
>> diff --git a/arm/aarch32/include/kvm/kvm-cpu-arch.h b/arm/aarch32/include/kvm/kvm-cpu-arch.h
>> index d28ea67..520ea76 100644
>> --- a/arm/aarch32/include/kvm/kvm-cpu-arch.h
>> +++ b/arm/aarch32/include/kvm/kvm-cpu-arch.h
>> @@ -13,4 +13,5 @@
>>   #define ARM_CPU_ID		0, 0, 0
>>   #define ARM_CPU_ID_MPIDR	5
>>   
>> +#define ARM_VCPU_PTRAUTH_FEATURE	0
>>   #endif /* KVM__KVM_CPU_ARCH_H */
>> diff --git a/arm/aarch64/include/asm/kvm.h b/arm/aarch64/include/asm/kvm.h
>> index 97c3478..a2546e6 100644
>> --- a/arm/aarch64/include/asm/kvm.h
>> +++ b/arm/aarch64/include/asm/kvm.h
>> @@ -102,6 +102,8 @@ struct kvm_regs {
>>   #define KVM_ARM_VCPU_EL1_32BIT		1 /* CPU running a 32bit VM */
>>   #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
>>   #define KVM_ARM_VCPU_PMU_V3		3 /* Support guest PMUv3 */
>> +#define KVM_ARM_VCPU_PTRAUTH_ADDRESS	5 /* CPU uses address pointer authentication */
>> +#define KVM_ARM_VCPU_PTRAUTH_GENERIC	6 /* CPU uses generic pointer authentication */
>>   
>>   struct kvm_vcpu_init {
>>   	__u32 target;
>> diff --git a/arm/aarch64/include/kvm/kvm-config-arch.h b/arm/aarch64/include/kvm/kvm-config-arch.h
>> index 04be43d..0279b13 100644
>> --- a/arm/aarch64/include/kvm/kvm-config-arch.h
>> +++ b/arm/aarch64/include/kvm/kvm-config-arch.h
>> @@ -8,7 +8,11 @@
>>   			"Create PMUv3 device"),				\
>>   	OPT_U64('\0', "kaslr-seed", &(cfg)->kaslr_seed,			\
>>   			"Specify random seed for Kernel Address Space "	\
>> -			"Layout Randomization (KASLR)"),
>> +			"Layout Randomization (KASLR)"),		\
>> +	OPT_BOOLEAN('\0', "enable-ptrauth", &(cfg)->enable_ptrauth,	\
>> +			"Enables pointer authentication"),		\
>> +	OPT_BOOLEAN('\0', "disable-ptrauth", &(cfg)->disable_ptrauth,	\
>> +			"Disables pointer authentication"),
>>   
>>   #include "arm-common/kvm-config-arch.h"
>>   
>> diff --git a/arm/aarch64/include/kvm/kvm-cpu-arch.h b/arm/aarch64/include/kvm/kvm-cpu-arch.h
>> index a9d8563..fcc2107 100644
>> --- a/arm/aarch64/include/kvm/kvm-cpu-arch.h
>> +++ b/arm/aarch64/include/kvm/kvm-cpu-arch.h
>> @@ -17,4 +17,6 @@
>>   #define ARM_CPU_CTRL		3, 0, 1, 0
>>   #define ARM_CPU_CTRL_SCTLR_EL1	0
>>   
>> +#define ARM_VCPU_PTRAUTH_FEATURE	((1UL << KVM_ARM_VCPU_PTRAUTH_ADDRESS) \
>> +					| (1UL << KVM_ARM_VCPU_PTRAUTH_GENERIC))
>>   #endif /* KVM__KVM_CPU_ARCH_H */
>> diff --git a/arm/include/arm-common/kvm-config-arch.h b/arm/include/arm-common/kvm-config-arch.h
>> index 5734c46..1b4287d 100644
>> --- a/arm/include/arm-common/kvm-config-arch.h
>> +++ b/arm/include/arm-common/kvm-config-arch.h
>> @@ -10,6 +10,8 @@ struct kvm_config_arch {
>>   	bool		aarch32_guest;
>>   	bool		has_pmuv3;
>>   	u64		kaslr_seed;
>> +	bool		enable_ptrauth;
>> +	bool		disable_ptrauth;
>>   	enum irqchip_type irqchip;
>>   	u64		fw_addr;
>>   };
>> diff --git a/arm/kvm-cpu.c b/arm/kvm-cpu.c
>> index 7780251..a45a649 100644
>> --- a/arm/kvm-cpu.c
>> +++ b/arm/kvm-cpu.c
>> @@ -69,6 +69,17 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned long cpu_id)
>>   	}
>>   
>>   	/*
>> +	 * Always enable Pointer Authentication if requested. If system supports
>> +	 * this extension then also enable it by default provided no disable
>> +	 * request present.
>> +	 */
>> +	if ((kvm->cfg.arch.enable_ptrauth) ||
> 
> Nit: redundant ()
ok.
> 
>> +		(kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_ADDRESS) &&
> 
> Funny indentation?
ok will align it.
> 
>> +		kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_GENERIC) &&
>> +		!kvm->cfg.arch.disable_ptrauth))
>> +			vcpu_init.features[0] |= ARM_VCPU_PTRAUTH_FEATURE;
>> +
> 
> Hmm, we have some weird behaviours here: --enable-ptrauth
> --disable-ptrauth will result in us trying to enable it, and
May be 1 more check can be added here like,

if (kvm->cfg.arch.enable_ptrauth && kvm->cfg.arch.disable_ptrauth) {
	print_err("Only 1 option should be supplied\n");
	ret -EINVAL;
}

> --enable-ptrauth without the required caps will result in an unhelpful
> "Unable to initialise vcpu" error message.  I'm not sure this is a
> whole lot worse than the way other options behave today, though.

Since now ptrauth is enabled by default if system supports it even 
though it is not explicitly requested. so I thought --enable-ptrauth
option has to now forcefully enable ptrauth and may cause some error 
message in failure.
Did I interpret something different from your last suggestion[1]?

Actually we can skip with --enable-ptrauth and have just 2 option,
* By default enable ptrauth if system supports it.
* --disable-ptrauth: useful to migrate non-ptrauth guests on ptrauth hosts

[1]:https://lkml.org/lkml/2019/4/5/171

Thanks,
Amit Daniel
> 
> You could try to be more explicit about what happens in these cases, but
> I'm not sure it's worth it given the state of the existing code.

> 
> [...]
> 
> Cheers
> ---Dave
> 

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v9 1/5] KVM: arm64: Add a vcpu flag to control ptrauth for guest
@ 2019-04-17 13:08       ` Amit Daniel Kachhap
  0 siblings, 0 replies; 77+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-17 13:08 UTC (permalink / raw)
  To: Marc Zyngier, linux-arm-kernel
  Cc: Christoffer Dall, Catalin Marinas, Will Deacon, Andrew Jones,
	Dave Martin, Ramana Radhakrishnan, kvmarm, Kristina Martsenko,
	linux-kernel, Mark Rutland, James Morse, Julien Thierry

Hi,

On 4/17/19 2:05 PM, Marc Zyngier wrote:
> On 12/04/2019 04:20, Amit Daniel Kachhap wrote:
>> A per vcpu flag is added to check if pointer authentication is
>> enabled for the vcpu or not. This flag may be enabled according to
>> the necessary user policies and host capabilities.
>>
>> This patch also adds a helper to check the flag.
>>
>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>> Cc: Mark Rutland <mark.rutland@arm.com>
>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>> Cc: Christoffer Dall <christoffer.dall@arm.com>
>> Cc: kvmarm@lists.cs.columbia.edu
>> ---
>>
>> Changes since v8:
>> * Added a new per vcpu flag which will store Pointer Authentication enable
>>    status instead of checking them again. [Dave Martin]
>>
>>   arch/arm64/include/asm/kvm_host.h | 4 ++++
>>   1 file changed, 4 insertions(+)
>>
>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
>> index 9d57cf8..31dbc7c 100644
>> --- a/arch/arm64/include/asm/kvm_host.h
>> +++ b/arch/arm64/include/asm/kvm_host.h
>> @@ -355,10 +355,14 @@ struct kvm_vcpu_arch {
>>   #define KVM_ARM64_HOST_SVE_ENABLED	(1 << 4) /* SVE enabled for EL0 */
>>   #define KVM_ARM64_GUEST_HAS_SVE		(1 << 5) /* SVE exposed to guest */
>>   #define KVM_ARM64_VCPU_SVE_FINALIZED	(1 << 6) /* SVE config completed */
>> +#define KVM_ARM64_GUEST_HAS_PTRAUTH	(1 << 7) /* PTRAUTH exposed to guest */
>>   
>>   #define vcpu_has_sve(vcpu) (system_supports_sve() && \
>>   			    ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_SVE))
>>   
>> +#define vcpu_has_ptrauth(vcpu)	\
>> +			((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH)
>> +
> 
> Just as for SVE, please first check that the system has PTRAUTH.
> Something like:
> 
> 		(cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_ARCH) && \
> 		 ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH))

In the subsequent patches, vcpu->arch.flags is only set to 
KVM_ARM64_GUEST_HAS_PTRAUTH when all host capability check conditions 
matches such as system_supports_address_auth(), 
system_supports_generic_auth() so doing them again is repetitive in my view.

Thanks,
Amit D

> 
> This will save an extra load on unsuspecting CPUs thanks to the static
> key embedded in the capability structure.
> 
> Thanks,
> 
> 	M.
> 

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v9 1/5] KVM: arm64: Add a vcpu flag to control ptrauth for guest
@ 2019-04-17 13:08       ` Amit Daniel Kachhap
  0 siblings, 0 replies; 77+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-17 13:08 UTC (permalink / raw)
  To: Marc Zyngier, linux-arm-kernel
  Cc: Catalin Marinas, Will Deacon, Kristina Martsenko, kvmarm,
	Ramana Radhakrishnan, Dave Martin, linux-kernel

Hi,

On 4/17/19 2:05 PM, Marc Zyngier wrote:
> On 12/04/2019 04:20, Amit Daniel Kachhap wrote:
>> A per vcpu flag is added to check if pointer authentication is
>> enabled for the vcpu or not. This flag may be enabled according to
>> the necessary user policies and host capabilities.
>>
>> This patch also adds a helper to check the flag.
>>
>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>> Cc: Mark Rutland <mark.rutland@arm.com>
>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>> Cc: Christoffer Dall <christoffer.dall@arm.com>
>> Cc: kvmarm@lists.cs.columbia.edu
>> ---
>>
>> Changes since v8:
>> * Added a new per vcpu flag which will store Pointer Authentication enable
>>    status instead of checking them again. [Dave Martin]
>>
>>   arch/arm64/include/asm/kvm_host.h | 4 ++++
>>   1 file changed, 4 insertions(+)
>>
>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
>> index 9d57cf8..31dbc7c 100644
>> --- a/arch/arm64/include/asm/kvm_host.h
>> +++ b/arch/arm64/include/asm/kvm_host.h
>> @@ -355,10 +355,14 @@ struct kvm_vcpu_arch {
>>   #define KVM_ARM64_HOST_SVE_ENABLED	(1 << 4) /* SVE enabled for EL0 */
>>   #define KVM_ARM64_GUEST_HAS_SVE		(1 << 5) /* SVE exposed to guest */
>>   #define KVM_ARM64_VCPU_SVE_FINALIZED	(1 << 6) /* SVE config completed */
>> +#define KVM_ARM64_GUEST_HAS_PTRAUTH	(1 << 7) /* PTRAUTH exposed to guest */
>>   
>>   #define vcpu_has_sve(vcpu) (system_supports_sve() && \
>>   			    ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_SVE))
>>   
>> +#define vcpu_has_ptrauth(vcpu)	\
>> +			((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH)
>> +
> 
> Just as for SVE, please first check that the system has PTRAUTH.
> Something like:
> 
> 		(cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_ARCH) && \
> 		 ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH))

In the subsequent patches, vcpu->arch.flags is only set to 
KVM_ARM64_GUEST_HAS_PTRAUTH when all host capability check conditions 
matches such as system_supports_address_auth(), 
system_supports_generic_auth() so doing them again is repetitive in my view.

Thanks,
Amit D

> 
> This will save an extra load on unsuspecting CPUs thanks to the static
> key embedded in the capability structure.
> 
> Thanks,
> 
> 	M.
> 
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v9 1/5] KVM: arm64: Add a vcpu flag to control ptrauth for guest
@ 2019-04-17 13:08       ` Amit Daniel Kachhap
  0 siblings, 0 replies; 77+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-17 13:08 UTC (permalink / raw)
  To: Marc Zyngier, linux-arm-kernel
  Cc: Mark Rutland, Andrew Jones, Julien Thierry, Catalin Marinas,
	Will Deacon, Christoffer Dall, Kristina Martsenko, kvmarm,
	James Morse, Ramana Radhakrishnan, Dave Martin, linux-kernel

Hi,

On 4/17/19 2:05 PM, Marc Zyngier wrote:
> On 12/04/2019 04:20, Amit Daniel Kachhap wrote:
>> A per vcpu flag is added to check if pointer authentication is
>> enabled for the vcpu or not. This flag may be enabled according to
>> the necessary user policies and host capabilities.
>>
>> This patch also adds a helper to check the flag.
>>
>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>> Cc: Mark Rutland <mark.rutland@arm.com>
>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>> Cc: Christoffer Dall <christoffer.dall@arm.com>
>> Cc: kvmarm@lists.cs.columbia.edu
>> ---
>>
>> Changes since v8:
>> * Added a new per vcpu flag which will store Pointer Authentication enable
>>    status instead of checking them again. [Dave Martin]
>>
>>   arch/arm64/include/asm/kvm_host.h | 4 ++++
>>   1 file changed, 4 insertions(+)
>>
>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
>> index 9d57cf8..31dbc7c 100644
>> --- a/arch/arm64/include/asm/kvm_host.h
>> +++ b/arch/arm64/include/asm/kvm_host.h
>> @@ -355,10 +355,14 @@ struct kvm_vcpu_arch {
>>   #define KVM_ARM64_HOST_SVE_ENABLED	(1 << 4) /* SVE enabled for EL0 */
>>   #define KVM_ARM64_GUEST_HAS_SVE		(1 << 5) /* SVE exposed to guest */
>>   #define KVM_ARM64_VCPU_SVE_FINALIZED	(1 << 6) /* SVE config completed */
>> +#define KVM_ARM64_GUEST_HAS_PTRAUTH	(1 << 7) /* PTRAUTH exposed to guest */
>>   
>>   #define vcpu_has_sve(vcpu) (system_supports_sve() && \
>>   			    ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_SVE))
>>   
>> +#define vcpu_has_ptrauth(vcpu)	\
>> +			((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH)
>> +
> 
> Just as for SVE, please first check that the system has PTRAUTH.
> Something like:
> 
> 		(cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_ARCH) && \
> 		 ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH))

In the subsequent patches, vcpu->arch.flags is only set to 
KVM_ARM64_GUEST_HAS_PTRAUTH when all host capability check conditions 
matches such as system_supports_address_auth(), 
system_supports_generic_auth() so doing them again is repetitive in my view.

Thanks,
Amit D

> 
> This will save an extra load on unsuspecting CPUs thanks to the static
> key embedded in the capability structure.
> 
> Thanks,
> 
> 	M.
> 

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v9 1/5] KVM: arm64: Add a vcpu flag to control ptrauth for guest
@ 2019-04-17 14:19         ` Marc Zyngier
  0 siblings, 0 replies; 77+ messages in thread
From: Marc Zyngier @ 2019-04-17 14:19 UTC (permalink / raw)
  To: Amit Daniel Kachhap, linux-arm-kernel
  Cc: Christoffer Dall, Catalin Marinas, Will Deacon, Andrew Jones,
	Dave Martin, Ramana Radhakrishnan, kvmarm, Kristina Martsenko,
	linux-kernel, Mark Rutland, James Morse, Julien Thierry

On 17/04/2019 14:08, Amit Daniel Kachhap wrote:
> Hi,
> 
> On 4/17/19 2:05 PM, Marc Zyngier wrote:
>> On 12/04/2019 04:20, Amit Daniel Kachhap wrote:
>>> A per vcpu flag is added to check if pointer authentication is
>>> enabled for the vcpu or not. This flag may be enabled according to
>>> the necessary user policies and host capabilities.
>>>
>>> This patch also adds a helper to check the flag.
>>>
>>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>>> Cc: Mark Rutland <mark.rutland@arm.com>
>>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>>> Cc: Christoffer Dall <christoffer.dall@arm.com>
>>> Cc: kvmarm@lists.cs.columbia.edu
>>> ---
>>>
>>> Changes since v8:
>>> * Added a new per vcpu flag which will store Pointer Authentication enable
>>>    status instead of checking them again. [Dave Martin]
>>>
>>>   arch/arm64/include/asm/kvm_host.h | 4 ++++
>>>   1 file changed, 4 insertions(+)
>>>
>>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
>>> index 9d57cf8..31dbc7c 100644
>>> --- a/arch/arm64/include/asm/kvm_host.h
>>> +++ b/arch/arm64/include/asm/kvm_host.h
>>> @@ -355,10 +355,14 @@ struct kvm_vcpu_arch {
>>>   #define KVM_ARM64_HOST_SVE_ENABLED	(1 << 4) /* SVE enabled for EL0 */
>>>   #define KVM_ARM64_GUEST_HAS_SVE		(1 << 5) /* SVE exposed to guest */
>>>   #define KVM_ARM64_VCPU_SVE_FINALIZED	(1 << 6) /* SVE config completed */
>>> +#define KVM_ARM64_GUEST_HAS_PTRAUTH	(1 << 7) /* PTRAUTH exposed to guest */
>>>   
>>>   #define vcpu_has_sve(vcpu) (system_supports_sve() && \
>>>   			    ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_SVE))
>>>   
>>> +#define vcpu_has_ptrauth(vcpu)	\
>>> +			((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH)
>>> +
>>
>> Just as for SVE, please first check that the system has PTRAUTH.
>> Something like:
>>
>> 		(cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_ARCH) && \
>> 		 ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH))
> 
> In the subsequent patches, vcpu->arch.flags is only set to 
> KVM_ARM64_GUEST_HAS_PTRAUTH when all host capability check conditions 
> matches such as system_supports_address_auth(), 
> system_supports_generic_auth() so doing them again is repetitive in my view.

It isn't the setting of the flag I care about, but the check of that
flag. Checking a flag for a feature that cannot be used on the running
system should have a zero cost, which isn't the case here.

Granted, the impact should be minimal and it looks like it mostly happen
on the slow path, but at the very least it would be consistent. So even
if you don't buy my argument about efficiency, please change it in the
name of consistency.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v9 1/5] KVM: arm64: Add a vcpu flag to control ptrauth for guest
@ 2019-04-17 14:19         ` Marc Zyngier
  0 siblings, 0 replies; 77+ messages in thread
From: Marc Zyngier @ 2019-04-17 14:19 UTC (permalink / raw)
  To: Amit Daniel Kachhap, linux-arm-kernel
  Cc: Catalin Marinas, Will Deacon, Kristina Martsenko, kvmarm,
	Ramana Radhakrishnan, Dave Martin, linux-kernel

On 17/04/2019 14:08, Amit Daniel Kachhap wrote:
> Hi,
> 
> On 4/17/19 2:05 PM, Marc Zyngier wrote:
>> On 12/04/2019 04:20, Amit Daniel Kachhap wrote:
>>> A per vcpu flag is added to check if pointer authentication is
>>> enabled for the vcpu or not. This flag may be enabled according to
>>> the necessary user policies and host capabilities.
>>>
>>> This patch also adds a helper to check the flag.
>>>
>>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>>> Cc: Mark Rutland <mark.rutland@arm.com>
>>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>>> Cc: Christoffer Dall <christoffer.dall@arm.com>
>>> Cc: kvmarm@lists.cs.columbia.edu
>>> ---
>>>
>>> Changes since v8:
>>> * Added a new per vcpu flag which will store Pointer Authentication enable
>>>    status instead of checking them again. [Dave Martin]
>>>
>>>   arch/arm64/include/asm/kvm_host.h | 4 ++++
>>>   1 file changed, 4 insertions(+)
>>>
>>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
>>> index 9d57cf8..31dbc7c 100644
>>> --- a/arch/arm64/include/asm/kvm_host.h
>>> +++ b/arch/arm64/include/asm/kvm_host.h
>>> @@ -355,10 +355,14 @@ struct kvm_vcpu_arch {
>>>   #define KVM_ARM64_HOST_SVE_ENABLED	(1 << 4) /* SVE enabled for EL0 */
>>>   #define KVM_ARM64_GUEST_HAS_SVE		(1 << 5) /* SVE exposed to guest */
>>>   #define KVM_ARM64_VCPU_SVE_FINALIZED	(1 << 6) /* SVE config completed */
>>> +#define KVM_ARM64_GUEST_HAS_PTRAUTH	(1 << 7) /* PTRAUTH exposed to guest */
>>>   
>>>   #define vcpu_has_sve(vcpu) (system_supports_sve() && \
>>>   			    ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_SVE))
>>>   
>>> +#define vcpu_has_ptrauth(vcpu)	\
>>> +			((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH)
>>> +
>>
>> Just as for SVE, please first check that the system has PTRAUTH.
>> Something like:
>>
>> 		(cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_ARCH) && \
>> 		 ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH))
> 
> In the subsequent patches, vcpu->arch.flags is only set to 
> KVM_ARM64_GUEST_HAS_PTRAUTH when all host capability check conditions 
> matches such as system_supports_address_auth(), 
> system_supports_generic_auth() so doing them again is repetitive in my view.

It isn't the setting of the flag I care about, but the check of that
flag. Checking a flag for a feature that cannot be used on the running
system should have a zero cost, which isn't the case here.

Granted, the impact should be minimal and it looks like it mostly happen
on the slow path, but at the very least it would be consistent. So even
if you don't buy my argument about efficiency, please change it in the
name of consistency.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v9 1/5] KVM: arm64: Add a vcpu flag to control ptrauth for guest
@ 2019-04-17 14:19         ` Marc Zyngier
  0 siblings, 0 replies; 77+ messages in thread
From: Marc Zyngier @ 2019-04-17 14:19 UTC (permalink / raw)
  To: Amit Daniel Kachhap, linux-arm-kernel
  Cc: Mark Rutland, Andrew Jones, Julien Thierry, Catalin Marinas,
	Will Deacon, Christoffer Dall, Kristina Martsenko, kvmarm,
	James Morse, Ramana Radhakrishnan, Dave Martin, linux-kernel

On 17/04/2019 14:08, Amit Daniel Kachhap wrote:
> Hi,
> 
> On 4/17/19 2:05 PM, Marc Zyngier wrote:
>> On 12/04/2019 04:20, Amit Daniel Kachhap wrote:
>>> A per vcpu flag is added to check if pointer authentication is
>>> enabled for the vcpu or not. This flag may be enabled according to
>>> the necessary user policies and host capabilities.
>>>
>>> This patch also adds a helper to check the flag.
>>>
>>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>>> Cc: Mark Rutland <mark.rutland@arm.com>
>>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>>> Cc: Christoffer Dall <christoffer.dall@arm.com>
>>> Cc: kvmarm@lists.cs.columbia.edu
>>> ---
>>>
>>> Changes since v8:
>>> * Added a new per vcpu flag which will store Pointer Authentication enable
>>>    status instead of checking them again. [Dave Martin]
>>>
>>>   arch/arm64/include/asm/kvm_host.h | 4 ++++
>>>   1 file changed, 4 insertions(+)
>>>
>>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
>>> index 9d57cf8..31dbc7c 100644
>>> --- a/arch/arm64/include/asm/kvm_host.h
>>> +++ b/arch/arm64/include/asm/kvm_host.h
>>> @@ -355,10 +355,14 @@ struct kvm_vcpu_arch {
>>>   #define KVM_ARM64_HOST_SVE_ENABLED	(1 << 4) /* SVE enabled for EL0 */
>>>   #define KVM_ARM64_GUEST_HAS_SVE		(1 << 5) /* SVE exposed to guest */
>>>   #define KVM_ARM64_VCPU_SVE_FINALIZED	(1 << 6) /* SVE config completed */
>>> +#define KVM_ARM64_GUEST_HAS_PTRAUTH	(1 << 7) /* PTRAUTH exposed to guest */
>>>   
>>>   #define vcpu_has_sve(vcpu) (system_supports_sve() && \
>>>   			    ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_SVE))
>>>   
>>> +#define vcpu_has_ptrauth(vcpu)	\
>>> +			((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH)
>>> +
>>
>> Just as for SVE, please first check that the system has PTRAUTH.
>> Something like:
>>
>> 		(cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_ARCH) && \
>> 		 ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH))
> 
> In the subsequent patches, vcpu->arch.flags is only set to 
> KVM_ARM64_GUEST_HAS_PTRAUTH when all host capability check conditions 
> matches such as system_supports_address_auth(), 
> system_supports_generic_auth() so doing them again is repetitive in my view.

It isn't the setting of the flag I care about, but the check of that
flag. Checking a flag for a feature that cannot be used on the running
system should have a zero cost, which isn't the case here.

Granted, the impact should be minimal and it looks like it mostly happen
on the slow path, but at the very least it would be consistent. So even
if you don't buy my argument about efficiency, please change it in the
name of consistency.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v9 2/5] KVM: arm/arm64: context-switch ptrauth registers
@ 2019-04-17 14:24       ` Amit Daniel Kachhap
  0 siblings, 0 replies; 77+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-17 14:24 UTC (permalink / raw)
  To: Marc Zyngier, linux-arm-kernel
  Cc: Christoffer Dall, Catalin Marinas, Will Deacon, Andrew Jones,
	Dave Martin, Ramana Radhakrishnan, kvmarm, Kristina Martsenko,
	linux-kernel, Mark Rutland, James Morse, Julien Thierry

Hi Marc,

On 4/17/19 2:39 PM, Marc Zyngier wrote:
> Hi Amit,
> 
> On 12/04/2019 04:20, Amit Daniel Kachhap wrote:
>> From: Mark Rutland <mark.rutland@arm.com>
>>
>> When pointer authentication is supported, a guest may wish to use it.
>> This patch adds the necessary KVM infrastructure for this to work, with
>> a semi-lazy context switch of the pointer auth state.
>>
>> Pointer authentication feature is only enabled when VHE is built
>> in the kernel and present in the CPU implementation so only VHE code
>> paths are modified.
>>
>> When we schedule a vcpu, we disable guest usage of pointer
>> authentication instructions and accesses to the keys. While these are
>> disabled, we avoid context-switching the keys. When we trap the guest
>> trying to use pointer authentication functionality, we change to eagerly
>> context-switching the keys, and enable the feature. The next time the
>> vcpu is scheduled out/in, we start again. However the host key save is
>> optimized and implemented inside ptrauth instruction/register access
>> trap.
>>
>> Pointer authentication consists of address authentication and generic
>> authentication, and CPUs in a system might have varied support for
>> either. Where support for either feature is not uniform, it is hidden
>> from guests via ID register emulation, as a result of the cpufeature
>> framework in the host.
>>
>> Unfortunately, address authentication and generic authentication cannot
>> be trapped separately, as the architecture provides a single EL2 trap
>> covering both. If we wish to expose one without the other, we cannot
>> prevent a (badly-written) guest from intermittently using a feature
>> which is not uniformly supported (when scheduled on a physical CPU which
>> supports the relevant feature). Hence, this patch expects both type of
>> authentication to be present in a cpu.
>>
>> This switch of key is done from guest enter/exit assembly as preparation
>> for the upcoming in-kernel pointer authentication support. Hence, these
>> key switching routines are not implemented in C code as they may cause
>> pointer authentication key signing error in some situations.
>>
>> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
>> [Only VHE, key switch in full assembly, vcpu_has_ptrauth checks
>> , save host key in ptrauth exception trap]
>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>> Reviewed-by: Julien Thierry <julien.thierry@arm.com>
>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>> Cc: Christoffer Dall <christoffer.dall@arm.com>
>> Cc: kvmarm@lists.cs.columbia.edu
>> ---
>>
>> Changes since v9:
>> * Used high order number for branching in assembly macros. [Kristina Martsenko]
>> * Taken care of different offset for hcr_el2 now.
>>
>>   arch/arm/include/asm/kvm_host.h          |   1 +
>>   arch/arm64/Kconfig                       |   5 +-
>>   arch/arm64/include/asm/kvm_host.h        |  17 +++++
>>   arch/arm64/include/asm/kvm_ptrauth_asm.h | 106 +++++++++++++++++++++++++++++++
>>   arch/arm64/kernel/asm-offsets.c          |   6 ++
>>   arch/arm64/kvm/guest.c                   |  14 ++++
>>   arch/arm64/kvm/handle_exit.c             |  24 ++++---
>>   arch/arm64/kvm/hyp/entry.S               |   7 ++
>>   arch/arm64/kvm/sys_regs.c                |  46 +++++++++++++-
>>   virt/kvm/arm/arm.c                       |   2 +
>>   10 files changed, 215 insertions(+), 13 deletions(-)
>>   create mode 100644 arch/arm64/include/asm/kvm_ptrauth_asm.h
>>
>> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
>> index e80cfc1..7a5c7f8 100644
>> --- a/arch/arm/include/asm/kvm_host.h
>> +++ b/arch/arm/include/asm/kvm_host.h
>> @@ -363,6 +363,7 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
>>   static inline void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu) {}
>>   static inline void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu) {}
>>   static inline void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu) {}
>> +static inline void kvm_arm_vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu) {}
>>   
>>   static inline void kvm_arm_vhe_guest_enter(void) {}
>>   static inline void kvm_arm_vhe_guest_exit(void) {}
>> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
>> index 7e34b9e..9e8506e 100644
>> --- a/arch/arm64/Kconfig
>> +++ b/arch/arm64/Kconfig
>> @@ -1301,8 +1301,9 @@ config ARM64_PTR_AUTH
>>   	  context-switched along with the process.
>>   
>>   	  The feature is detected at runtime. If the feature is not present in
>> -	  hardware it will not be advertised to userspace nor will it be
>> -	  enabled.
>> +	  hardware it will not be advertised to userspace/KVM guest nor will it
>> +	  be enabled. However, KVM guest also require CONFIG_ARM64_VHE=y to use
>> +	  this feature.
> 
> Not only does it require CONFIG_ARM64_VHE, but it more importantly
> requires a VHE system!
Yes will update.
> 
>>   
>>   endmenu
>>   
>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
>> index 31dbc7c..a585d82 100644
>> --- a/arch/arm64/include/asm/kvm_host.h
>> +++ b/arch/arm64/include/asm/kvm_host.h
>> @@ -161,6 +161,18 @@ enum vcpu_sysreg {
>>   	PMSWINC_EL0,	/* Software Increment Register */
>>   	PMUSERENR_EL0,	/* User Enable Register */
>>   
>> +	/* Pointer Authentication Registers in a strict increasing order. */
>> +	APIAKEYLO_EL1,
>> +	APIAKEYHI_EL1 = APIAKEYLO_EL1 + 1,
>> +	APIBKEYLO_EL1 = APIAKEYLO_EL1 + 2,
>> +	APIBKEYHI_EL1 = APIAKEYLO_EL1 + 3,
>> +	APDAKEYLO_EL1 = APIAKEYLO_EL1 + 4,
>> +	APDAKEYHI_EL1 = APIAKEYLO_EL1 + 5,
>> +	APDBKEYLO_EL1 = APIAKEYLO_EL1 + 6,
>> +	APDBKEYHI_EL1 = APIAKEYLO_EL1 + 7,
>> +	APGAKEYLO_EL1 = APIAKEYLO_EL1 + 8,
>> +	APGAKEYHI_EL1 = APIAKEYLO_EL1 + 9,
> 
> Why do we need these explicit +1, +2...? Being an part of an enum
> already guarantees this.
Yes enums are increasing. But upcoming struct/enums randomization stuffs 
may break the ptrauth register offset calculation logic in the later 
part so explicitly made this to increasing order.


> 
>> +
>>   	/* 32bit specific registers. Keep them at the end of the range */
>>   	DACR32_EL2,	/* Domain Access Control Register */
>>   	IFSR32_EL2,	/* Instruction Fault Status Register */
>> @@ -529,6 +541,11 @@ static inline bool kvm_arch_requires_vhe(void)
>>   	return false;
>>   }
>>   
>> +void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu);
>> +void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu);
>> +void kvm_arm_vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu);
>> +void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu);
>> +
>>   static inline void kvm_arch_hardware_unsetup(void) {}
>>   static inline void kvm_arch_sync_events(struct kvm *kvm) {}
>>   static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
>> diff --git a/arch/arm64/include/asm/kvm_ptrauth_asm.h b/arch/arm64/include/asm/kvm_ptrauth_asm.h
>> new file mode 100644
>> index 0000000..8142521
>> --- /dev/null
>> +++ b/arch/arm64/include/asm/kvm_ptrauth_asm.h
> 
> nit: this should be named kvm_ptrauth.h. The asm suffix doesn't bring
> anything to the game, and is somewhat misleading (there are C macros in
> this file).
> 
>> @@ -0,0 +1,106 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/* arch/arm64/include/asm/kvm_ptrauth_asm.h: Guest/host ptrauth save/restore
>> + * Copyright 2019 Arm Limited
>> + * Author: Mark Rutland <mark.rutland@arm.com>
> 
> nit: Authors
ok.
> 
>> + *         Amit Daniel Kachhap <amit.kachhap@arm.com>
>> + */
>> +
>> +#ifndef __ASM_KVM_PTRAUTH_ASM_H
>> +#define __ASM_KVM_PTRAUTH_ASM_H
>> +
>> +#ifndef __ASSEMBLY__
>> +
>> +#define __ptrauth_save_key(regs, key)						\
>> +({										\
>> +	regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1);	\
>> +	regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1);	\
>> +})
>> +
>> +#define __ptrauth_save_state(ctxt)						\
>> +({										\
>> +	__ptrauth_save_key(ctxt->sys_regs, APIA);				\
>> +	__ptrauth_save_key(ctxt->sys_regs, APIB);				\
>> +	__ptrauth_save_key(ctxt->sys_regs, APDA);				\
>> +	__ptrauth_save_key(ctxt->sys_regs, APDB);				\
>> +	__ptrauth_save_key(ctxt->sys_regs, APGA);				\
>> +})
>> +
>> +#else /* __ASSEMBLY__ */
>> +
>> +#include <asm/sysreg.h>
>> +
>> +#ifdef	CONFIG_ARM64_PTR_AUTH
>> +
>> +#define PTRAUTH_REG_OFFSET(x)	(x - CPU_APIAKEYLO_EL1)
>> +
>> +/*
>> + * CPU_AP*_EL1 values exceed immediate offset range (512) for stp instruction
>> + * so below macros takes CPU_APIAKEYLO_EL1 as base and calculates the offset of
>> + * the keys from this base to avoid an extra add instruction. These macros
>> + * assumes the keys offsets are aligned in a specific increasing order.
>> + */
>> +.macro	ptrauth_save_state base, reg1, reg2
>> +	mrs_s	\reg1, SYS_APIAKEYLO_EL1
>> +	mrs_s	\reg2, SYS_APIAKEYHI_EL1
>> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIAKEYLO_EL1)]
>> +	mrs_s	\reg1, SYS_APIBKEYLO_EL1
>> +	mrs_s	\reg2, SYS_APIBKEYHI_EL1
>> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIBKEYLO_EL1)]
>> +	mrs_s	\reg1, SYS_APDAKEYLO_EL1
>> +	mrs_s	\reg2, SYS_APDAKEYHI_EL1
>> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDAKEYLO_EL1)]
>> +	mrs_s	\reg1, SYS_APDBKEYLO_EL1
>> +	mrs_s	\reg2, SYS_APDBKEYHI_EL1
>> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDBKEYLO_EL1)]
>> +	mrs_s	\reg1, SYS_APGAKEYLO_EL1
>> +	mrs_s	\reg2, SYS_APGAKEYHI_EL1
>> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APGAKEYLO_EL1)]
>> +.endm
>> +
>> +.macro	ptrauth_restore_state base, reg1, reg2
>> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIAKEYLO_EL1)]
>> +	msr_s	SYS_APIAKEYLO_EL1, \reg1
>> +	msr_s	SYS_APIAKEYHI_EL1, \reg2
>> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIBKEYLO_EL1)]
>> +	msr_s	SYS_APIBKEYLO_EL1, \reg1
>> +	msr_s	SYS_APIBKEYHI_EL1, \reg2
>> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDAKEYLO_EL1)]
>> +	msr_s	SYS_APDAKEYLO_EL1, \reg1
>> +	msr_s	SYS_APDAKEYHI_EL1, \reg2
>> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDBKEYLO_EL1)]
>> +	msr_s	SYS_APDBKEYLO_EL1, \reg1
>> +	msr_s	SYS_APDBKEYHI_EL1, \reg2
>> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APGAKEYLO_EL1)]
>> +	msr_s	SYS_APGAKEYLO_EL1, \reg1
>> +	msr_s	SYS_APGAKEYHI_EL1, \reg2
>> +.endm
>> +
>> +.macro ptrauth_switch_to_guest g_ctxt, reg1, reg2, reg3
>> +	ldr	\reg1, [\g_ctxt, #(VCPU_HCR_EL2 - VCPU_CONTEXT)]
> 
> Given that 100% of the current HW doesn't have ptrauth at all, this
> becomes an instant and pointless overhead.
> 
> It could easily be avoided by turning this into:
> 
> alternative_if_not ARM64_HAS_GENERIC_AUTH_ARCH
> 	b	1000f
> alternative_else
> 	ldr	\reg1, [\g_ctxt, #(VCPU_HCR_EL2 - VCPU_CONTEXT)]
> alternative_endif
yes sure. will check.
> 
>> +	and	\reg1, \reg1, #(HCR_API | HCR_APK)
>> +	cbz	\reg1, 1000f
>> +	add	\reg1, \g_ctxt, #CPU_APIAKEYLO_EL1
>> +	ptrauth_restore_state	\reg1, \reg2, \reg3
>> +1000:
>> +.endm
>> +
>> +.macro ptrauth_switch_to_host g_ctxt, h_ctxt, reg1, reg2, reg3
>> +	ldr	\reg1, [\g_ctxt, #(VCPU_HCR_EL2 - VCPU_CONTEXT)]
> 
> Same thing here.
> 
>> +	and	\reg1, \reg1, #(HCR_API | HCR_APK)
>> +	cbz	\reg1, 1001f
>> +	add	\reg1, \g_ctxt, #CPU_APIAKEYLO_EL1
>> +	ptrauth_save_state	\reg1, \reg2, \reg3
>> +	add	\reg1, \h_ctxt, #CPU_APIAKEYLO_EL1
>> +	ptrauth_restore_state	\reg1, \reg2, \reg3
>> +	isb
>> +1001:
>> +.endm
>> +
>> +#else /* !CONFIG_ARM64_PTR_AUTH */
>> +.macro ptrauth_switch_to_guest g_ctxt, reg1, reg2, reg3
>> +.endm
>> +.macro ptrauth_switch_to_host g_ctxt, h_ctxt, reg1, reg2, reg3
>> +.endm
>> +#endif /* CONFIG_ARM64_PTR_AUTH */
>> +#endif /* __ASSEMBLY__ */
>> +#endif /* __ASM_KVM_PTRAUTH_ASM_H */
>> diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
>> index 7f40dcb..8178330 100644
>> --- a/arch/arm64/kernel/asm-offsets.c
>> +++ b/arch/arm64/kernel/asm-offsets.c
>> @@ -125,7 +125,13 @@ int main(void)
>>     DEFINE(VCPU_CONTEXT,		offsetof(struct kvm_vcpu, arch.ctxt));
>>     DEFINE(VCPU_FAULT_DISR,	offsetof(struct kvm_vcpu, arch.fault.disr_el1));
>>     DEFINE(VCPU_WORKAROUND_FLAGS,	offsetof(struct kvm_vcpu, arch.workaround_flags));
>> +  DEFINE(VCPU_HCR_EL2,		offsetof(struct kvm_vcpu, arch.hcr_el2));
>>     DEFINE(CPU_GP_REGS,		offsetof(struct kvm_cpu_context, gp_regs));
>> +  DEFINE(CPU_APIAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APIAKEYLO_EL1]));
>> +  DEFINE(CPU_APIBKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APIBKEYLO_EL1]));
>> +  DEFINE(CPU_APDAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APDAKEYLO_EL1]));
>> +  DEFINE(CPU_APDBKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APDBKEYLO_EL1]));
>> +  DEFINE(CPU_APGAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APGAKEYLO_EL1]));
>>     DEFINE(CPU_USER_PT_REGS,	offsetof(struct kvm_regs, regs));
>>     DEFINE(HOST_CONTEXT_VCPU,	offsetof(struct kvm_cpu_context, __hyp_running_vcpu));
>>   #endif
>> diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
>> index 4f7b26b..e07f763 100644
>> --- a/arch/arm64/kvm/guest.c
>> +++ b/arch/arm64/kvm/guest.c
>> @@ -878,3 +878,17 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
>>   
>>   	return ret;
>>   }
>> +
>> +/**
>> + * kvm_arm_vcpu_ptrauth_setup_lazy - setup lazy ptrauth for vcpu schedule
>> + *
>> + * @vcpu: The VCPU pointer
>> + *
>> + * This function may be used to disable ptrauth and use it in a lazy context
>> + * via traps.
>> + */
>> +void kvm_arm_vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu)
>> +{
>> +	if (vcpu_has_ptrauth(vcpu))
>> +		kvm_arm_vcpu_ptrauth_disable(vcpu);
>> +}
> 
> Why does this live in guest.c?
Many global functions used in virt/kvm/arm/arm.c are implemented here.

However some similar kinds of function are in asm/kvm_emulate.h so can 
be moved there as static inline.
> 
>> diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
>> index 0b79834..5838ff9 100644
>> --- a/arch/arm64/kvm/handle_exit.c
>> +++ b/arch/arm64/kvm/handle_exit.c
>> @@ -30,6 +30,7 @@
>>   #include <asm/kvm_coproc.h>
>>   #include <asm/kvm_emulate.h>
>>   #include <asm/kvm_mmu.h>
>> +#include <asm/kvm_ptrauth_asm.h>
>>   #include <asm/debug-monitors.h>
>>   #include <asm/traps.h>
>>   
>> @@ -174,19 +175,26 @@ static int handle_sve(struct kvm_vcpu *vcpu, struct kvm_run *run)
>>   }
>>   
>>   /*
>> + * Handle the guest trying to use a ptrauth instruction, or trying to access a
>> + * ptrauth register.
>> + */
>> +void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu)
>> +{
>> +	if (vcpu_has_ptrauth(vcpu)) {
>> +		kvm_arm_vcpu_ptrauth_enable(vcpu);
> 
> It is odd that the enable function is placed in sys_regs.c, and only
> used here. You could either just inline it here, or make it a static
> inline in kvm_host.h.

I tried moving it to kvm_host.h but some dependency error is coming,
   CC      arch/arm64/kernel/asm-offsets.s
In file included from ./include/linux/kvm_host.h:38:0,
                  from arch/arm64/kernel/asm-offsets.c:25:
./arch/arm64/include/asm/kvm_host.h: In function 
‘kvm_arm_vcpu_ptrauth_enable’:
./arch/arm64/include/asm/kvm_host.h:547:6: error: dereferencing pointer 
to incomplete type ‘struct kvm_vcpu’
   vcpu->arch.hcr_el2 |= (HCR_API | HCR_APK);

However some similar kinds of function are in asm/kvm_emulate.h so can 
be moved there.

> 
>> +		__ptrauth_save_state(vcpu->arch.host_cpu_context);
> 
> You could expand the __ptrauth_save_state macro here. It is only used
> once, and one less level of obfuscation will help grepping.
> 
>> +	} else {
>> +		kvm_inject_undefined(vcpu);
>> +	}
>> +}
>> +
>> +/*
>>    * Guest usage of a ptrauth instruction (which the guest EL1 did not turn into
>>    * a NOP).
>>    */
>>   static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu, struct kvm_run *run)
>>   {
>> -	/*
>> -	 * We don't currently support ptrauth in a guest, and we mask the ID
>> -	 * registers to prevent well-behaved guests from trying to make use of
>> -	 * it.
>> -	 *
>> -	 * Inject an UNDEF, as if the feature really isn't present.
>> -	 */
>> -	kvm_inject_undefined(vcpu);
>> +	kvm_arm_vcpu_ptrauth_trap(vcpu);
>>   	return 1;
>>   }
>>   
>> diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
>> index 675fdc1..3a70213 100644
>> --- a/arch/arm64/kvm/hyp/entry.S
>> +++ b/arch/arm64/kvm/hyp/entry.S
>> @@ -24,6 +24,7 @@
>>   #include <asm/kvm_arm.h>
>>   #include <asm/kvm_asm.h>
>>   #include <asm/kvm_mmu.h>
>> +#include <asm/kvm_ptrauth_asm.h>
>>   
>>   #define CPU_GP_REG_OFFSET(x)	(CPU_GP_REGS + x)
>>   #define CPU_XREG_OFFSET(x)	CPU_GP_REG_OFFSET(CPU_USER_PT_REGS + 8*x)
>> @@ -64,6 +65,9 @@ ENTRY(__guest_enter)
>>   
>>   	add	x18, x0, #VCPU_CONTEXT
>>   
>> +	// Macro ptrauth_switch_to_guest(guest cxt, tmp1, tmp2, tmp3).
>> +	ptrauth_switch_to_guest x18, x0, x1, x2
>> +
> 
> This comment doesn't tell us much. What we really need is a comment
> explaining *why* this needs to be an inline macro. Otherwise, someone
> will one day move it back to some C code and things will randomly break.
ok.
> 
>>   	// Restore guest regs x0-x17
>>   	ldp	x0, x1,   [x18, #CPU_XREG_OFFSET(0)]
>>   	ldp	x2, x3,   [x18, #CPU_XREG_OFFSET(2)]
>> @@ -118,6 +122,9 @@ ENTRY(__guest_exit)
>>   
>>   	get_host_ctxt	x2, x3
>>   
>> +	// Macro ptrauth_switch_to_host(guest cxt, host cxt, tmp1, tmp2, tmp3).
>> +	ptrauth_switch_to_host x1, x2, x3, x4, x5
>> +
>>   	// Now restore the host regs
>>   	restore_callee_saved_regs x2
>>   
>> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
>> index 09e9b06..4a98b5c 100644
>> --- a/arch/arm64/kvm/sys_regs.c
>> +++ b/arch/arm64/kvm/sys_regs.c
>> @@ -1007,6 +1007,38 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>>   	{ SYS_DESC(SYS_PMEVTYPERn_EL0(n)),					\
>>   	  access_pmu_evtyper, reset_unknown, (PMEVTYPER0_EL0 + n), }
>>   
>> +void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu)
>> +{
>> +	vcpu->arch.hcr_el2 |= (HCR_API | HCR_APK);
>> +}
>> +
>> +void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu)
>> +{
>> +	vcpu->arch.hcr_el2 &= ~(HCR_API | HCR_APK);
>> +}
> 
> As mentionned above, these could be moved as static inline to an include
> file, of even directly inlined in the code that use it.
ok
> 
>> +
>> +static bool trap_ptrauth(struct kvm_vcpu *vcpu,
>> +			 struct sys_reg_params *p,
>> +			 const struct sys_reg_desc *rd)
>> +{
>> +	kvm_arm_vcpu_ptrauth_trap(vcpu);
>> +	return false;
> 
> We need a comment explaining why we return false: Either ptrauth is on,
> and we re-execute the same instruction, or it is off, and we have
> injected an UNDEF. In both cases, we don't advance the guest's PC.
ok.
> 
>> +}
>> +
>> +static unsigned int ptrauth_visibility(const struct kvm_vcpu *vcpu,
>> +			const struct sys_reg_desc *rd)
>> +{
>> +	return vcpu_has_ptrauth(vcpu) ? 0 : REG_HIDDEN_USER | REG_HIDDEN_GUEST;
>> +}
>> +
>> +#define __PTRAUTH_KEY(k)						\
>> +	{ SYS_DESC(SYS_## k), trap_ptrauth, reset_unknown, k,		\
>> +	.visibility = ptrauth_visibility}
>> +
>> +#define PTRAUTH_KEY(k)							\
>> +	__PTRAUTH_KEY(k ## KEYLO_EL1),					\
>> +	__PTRAUTH_KEY(k ## KEYHI_EL1)
>> +
>>   static bool access_arch_timer(struct kvm_vcpu *vcpu,
>>   			      struct sys_reg_params *p,
>>   			      const struct sys_reg_desc *r)
>> @@ -1058,9 +1090,11 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu,
>>   					 (0xfUL << ID_AA64ISAR1_API_SHIFT) |
>>   					 (0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
>>   					 (0xfUL << ID_AA64ISAR1_GPI_SHIFT);
>> -		if (val & ptrauth_mask)
>> -			kvm_debug("ptrauth unsupported for guests, suppressing\n");
>> -		val &= ~ptrauth_mask;
>> +		if (!vcpu_has_ptrauth(vcpu)) {
>> +			if (val & ptrauth_mask)
>> +				kvm_debug("ptrauth unsupported for guests, suppressing\n");
>> +			val &= ~ptrauth_mask;
>> +		}
>>   	}
>>   
>>   	return val;
>> @@ -1460,6 +1494,12 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>>   	{ SYS_DESC(SYS_TTBR1_EL1), access_vm_reg, reset_unknown, TTBR1_EL1 },
>>   	{ SYS_DESC(SYS_TCR_EL1), access_vm_reg, reset_val, TCR_EL1, 0 },
>>   
>> +	PTRAUTH_KEY(APIA),
>> +	PTRAUTH_KEY(APIB),
>> +	PTRAUTH_KEY(APDA),
>> +	PTRAUTH_KEY(APDB),
>> +	PTRAUTH_KEY(APGA),
>> +
>>   	{ SYS_DESC(SYS_AFSR0_EL1), access_vm_reg, reset_unknown, AFSR0_EL1 },
>>   	{ SYS_DESC(SYS_AFSR1_EL1), access_vm_reg, reset_unknown, AFSR1_EL1 },
>>   	{ SYS_DESC(SYS_ESR_EL1), access_vm_reg, reset_unknown, ESR_EL1 },
>> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
>> index 9edbf0f..8d1b73c 100644
>> --- a/virt/kvm/arm/arm.c
>> +++ b/virt/kvm/arm/arm.c
>> @@ -385,6 +385,8 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>>   		vcpu_clear_wfe_traps(vcpu);
>>   	else
>>   		vcpu_set_wfe_traps(vcpu);
>> +
>> +	kvm_arm_vcpu_ptrauth_setup_lazy(vcpu);
>>   }
>>   
>>   void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
>>
> 
> Despite all the comments, the code looks in good shape, and I trust it
> shouldn't take you long to refactor it, retest it and send an updated
> version once we've settled on the ABI part which is the most contentious.
Sure will post next version soon.

Thanks,
Amit D
> 
> Thanks,
> 
> 	M.
> 

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v9 2/5] KVM: arm/arm64: context-switch ptrauth registers
@ 2019-04-17 14:24       ` Amit Daniel Kachhap
  0 siblings, 0 replies; 77+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-17 14:24 UTC (permalink / raw)
  To: Marc Zyngier, linux-arm-kernel
  Cc: Catalin Marinas, Will Deacon, Kristina Martsenko, kvmarm,
	Ramana Radhakrishnan, Dave Martin, linux-kernel

Hi Marc,

On 4/17/19 2:39 PM, Marc Zyngier wrote:
> Hi Amit,
> 
> On 12/04/2019 04:20, Amit Daniel Kachhap wrote:
>> From: Mark Rutland <mark.rutland@arm.com>
>>
>> When pointer authentication is supported, a guest may wish to use it.
>> This patch adds the necessary KVM infrastructure for this to work, with
>> a semi-lazy context switch of the pointer auth state.
>>
>> Pointer authentication feature is only enabled when VHE is built
>> in the kernel and present in the CPU implementation so only VHE code
>> paths are modified.
>>
>> When we schedule a vcpu, we disable guest usage of pointer
>> authentication instructions and accesses to the keys. While these are
>> disabled, we avoid context-switching the keys. When we trap the guest
>> trying to use pointer authentication functionality, we change to eagerly
>> context-switching the keys, and enable the feature. The next time the
>> vcpu is scheduled out/in, we start again. However the host key save is
>> optimized and implemented inside ptrauth instruction/register access
>> trap.
>>
>> Pointer authentication consists of address authentication and generic
>> authentication, and CPUs in a system might have varied support for
>> either. Where support for either feature is not uniform, it is hidden
>> from guests via ID register emulation, as a result of the cpufeature
>> framework in the host.
>>
>> Unfortunately, address authentication and generic authentication cannot
>> be trapped separately, as the architecture provides a single EL2 trap
>> covering both. If we wish to expose one without the other, we cannot
>> prevent a (badly-written) guest from intermittently using a feature
>> which is not uniformly supported (when scheduled on a physical CPU which
>> supports the relevant feature). Hence, this patch expects both type of
>> authentication to be present in a cpu.
>>
>> This switch of key is done from guest enter/exit assembly as preparation
>> for the upcoming in-kernel pointer authentication support. Hence, these
>> key switching routines are not implemented in C code as they may cause
>> pointer authentication key signing error in some situations.
>>
>> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
>> [Only VHE, key switch in full assembly, vcpu_has_ptrauth checks
>> , save host key in ptrauth exception trap]
>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>> Reviewed-by: Julien Thierry <julien.thierry@arm.com>
>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>> Cc: Christoffer Dall <christoffer.dall@arm.com>
>> Cc: kvmarm@lists.cs.columbia.edu
>> ---
>>
>> Changes since v9:
>> * Used high order number for branching in assembly macros. [Kristina Martsenko]
>> * Taken care of different offset for hcr_el2 now.
>>
>>   arch/arm/include/asm/kvm_host.h          |   1 +
>>   arch/arm64/Kconfig                       |   5 +-
>>   arch/arm64/include/asm/kvm_host.h        |  17 +++++
>>   arch/arm64/include/asm/kvm_ptrauth_asm.h | 106 +++++++++++++++++++++++++++++++
>>   arch/arm64/kernel/asm-offsets.c          |   6 ++
>>   arch/arm64/kvm/guest.c                   |  14 ++++
>>   arch/arm64/kvm/handle_exit.c             |  24 ++++---
>>   arch/arm64/kvm/hyp/entry.S               |   7 ++
>>   arch/arm64/kvm/sys_regs.c                |  46 +++++++++++++-
>>   virt/kvm/arm/arm.c                       |   2 +
>>   10 files changed, 215 insertions(+), 13 deletions(-)
>>   create mode 100644 arch/arm64/include/asm/kvm_ptrauth_asm.h
>>
>> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
>> index e80cfc1..7a5c7f8 100644
>> --- a/arch/arm/include/asm/kvm_host.h
>> +++ b/arch/arm/include/asm/kvm_host.h
>> @@ -363,6 +363,7 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
>>   static inline void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu) {}
>>   static inline void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu) {}
>>   static inline void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu) {}
>> +static inline void kvm_arm_vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu) {}
>>   
>>   static inline void kvm_arm_vhe_guest_enter(void) {}
>>   static inline void kvm_arm_vhe_guest_exit(void) {}
>> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
>> index 7e34b9e..9e8506e 100644
>> --- a/arch/arm64/Kconfig
>> +++ b/arch/arm64/Kconfig
>> @@ -1301,8 +1301,9 @@ config ARM64_PTR_AUTH
>>   	  context-switched along with the process.
>>   
>>   	  The feature is detected at runtime. If the feature is not present in
>> -	  hardware it will not be advertised to userspace nor will it be
>> -	  enabled.
>> +	  hardware it will not be advertised to userspace/KVM guest nor will it
>> +	  be enabled. However, KVM guest also require CONFIG_ARM64_VHE=y to use
>> +	  this feature.
> 
> Not only does it require CONFIG_ARM64_VHE, but it more importantly
> requires a VHE system!
Yes will update.
> 
>>   
>>   endmenu
>>   
>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
>> index 31dbc7c..a585d82 100644
>> --- a/arch/arm64/include/asm/kvm_host.h
>> +++ b/arch/arm64/include/asm/kvm_host.h
>> @@ -161,6 +161,18 @@ enum vcpu_sysreg {
>>   	PMSWINC_EL0,	/* Software Increment Register */
>>   	PMUSERENR_EL0,	/* User Enable Register */
>>   
>> +	/* Pointer Authentication Registers in a strict increasing order. */
>> +	APIAKEYLO_EL1,
>> +	APIAKEYHI_EL1 = APIAKEYLO_EL1 + 1,
>> +	APIBKEYLO_EL1 = APIAKEYLO_EL1 + 2,
>> +	APIBKEYHI_EL1 = APIAKEYLO_EL1 + 3,
>> +	APDAKEYLO_EL1 = APIAKEYLO_EL1 + 4,
>> +	APDAKEYHI_EL1 = APIAKEYLO_EL1 + 5,
>> +	APDBKEYLO_EL1 = APIAKEYLO_EL1 + 6,
>> +	APDBKEYHI_EL1 = APIAKEYLO_EL1 + 7,
>> +	APGAKEYLO_EL1 = APIAKEYLO_EL1 + 8,
>> +	APGAKEYHI_EL1 = APIAKEYLO_EL1 + 9,
> 
> Why do we need these explicit +1, +2...? Being an part of an enum
> already guarantees this.
Yes enums are increasing. But upcoming struct/enums randomization stuffs 
may break the ptrauth register offset calculation logic in the later 
part so explicitly made this to increasing order.


> 
>> +
>>   	/* 32bit specific registers. Keep them at the end of the range */
>>   	DACR32_EL2,	/* Domain Access Control Register */
>>   	IFSR32_EL2,	/* Instruction Fault Status Register */
>> @@ -529,6 +541,11 @@ static inline bool kvm_arch_requires_vhe(void)
>>   	return false;
>>   }
>>   
>> +void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu);
>> +void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu);
>> +void kvm_arm_vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu);
>> +void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu);
>> +
>>   static inline void kvm_arch_hardware_unsetup(void) {}
>>   static inline void kvm_arch_sync_events(struct kvm *kvm) {}
>>   static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
>> diff --git a/arch/arm64/include/asm/kvm_ptrauth_asm.h b/arch/arm64/include/asm/kvm_ptrauth_asm.h
>> new file mode 100644
>> index 0000000..8142521
>> --- /dev/null
>> +++ b/arch/arm64/include/asm/kvm_ptrauth_asm.h
> 
> nit: this should be named kvm_ptrauth.h. The asm suffix doesn't bring
> anything to the game, and is somewhat misleading (there are C macros in
> this file).
> 
>> @@ -0,0 +1,106 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/* arch/arm64/include/asm/kvm_ptrauth_asm.h: Guest/host ptrauth save/restore
>> + * Copyright 2019 Arm Limited
>> + * Author: Mark Rutland <mark.rutland@arm.com>
> 
> nit: Authors
ok.
> 
>> + *         Amit Daniel Kachhap <amit.kachhap@arm.com>
>> + */
>> +
>> +#ifndef __ASM_KVM_PTRAUTH_ASM_H
>> +#define __ASM_KVM_PTRAUTH_ASM_H
>> +
>> +#ifndef __ASSEMBLY__
>> +
>> +#define __ptrauth_save_key(regs, key)						\
>> +({										\
>> +	regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1);	\
>> +	regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1);	\
>> +})
>> +
>> +#define __ptrauth_save_state(ctxt)						\
>> +({										\
>> +	__ptrauth_save_key(ctxt->sys_regs, APIA);				\
>> +	__ptrauth_save_key(ctxt->sys_regs, APIB);				\
>> +	__ptrauth_save_key(ctxt->sys_regs, APDA);				\
>> +	__ptrauth_save_key(ctxt->sys_regs, APDB);				\
>> +	__ptrauth_save_key(ctxt->sys_regs, APGA);				\
>> +})
>> +
>> +#else /* __ASSEMBLY__ */
>> +
>> +#include <asm/sysreg.h>
>> +
>> +#ifdef	CONFIG_ARM64_PTR_AUTH
>> +
>> +#define PTRAUTH_REG_OFFSET(x)	(x - CPU_APIAKEYLO_EL1)
>> +
>> +/*
>> + * CPU_AP*_EL1 values exceed immediate offset range (512) for stp instruction
>> + * so below macros takes CPU_APIAKEYLO_EL1 as base and calculates the offset of
>> + * the keys from this base to avoid an extra add instruction. These macros
>> + * assumes the keys offsets are aligned in a specific increasing order.
>> + */
>> +.macro	ptrauth_save_state base, reg1, reg2
>> +	mrs_s	\reg1, SYS_APIAKEYLO_EL1
>> +	mrs_s	\reg2, SYS_APIAKEYHI_EL1
>> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIAKEYLO_EL1)]
>> +	mrs_s	\reg1, SYS_APIBKEYLO_EL1
>> +	mrs_s	\reg2, SYS_APIBKEYHI_EL1
>> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIBKEYLO_EL1)]
>> +	mrs_s	\reg1, SYS_APDAKEYLO_EL1
>> +	mrs_s	\reg2, SYS_APDAKEYHI_EL1
>> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDAKEYLO_EL1)]
>> +	mrs_s	\reg1, SYS_APDBKEYLO_EL1
>> +	mrs_s	\reg2, SYS_APDBKEYHI_EL1
>> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDBKEYLO_EL1)]
>> +	mrs_s	\reg1, SYS_APGAKEYLO_EL1
>> +	mrs_s	\reg2, SYS_APGAKEYHI_EL1
>> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APGAKEYLO_EL1)]
>> +.endm
>> +
>> +.macro	ptrauth_restore_state base, reg1, reg2
>> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIAKEYLO_EL1)]
>> +	msr_s	SYS_APIAKEYLO_EL1, \reg1
>> +	msr_s	SYS_APIAKEYHI_EL1, \reg2
>> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIBKEYLO_EL1)]
>> +	msr_s	SYS_APIBKEYLO_EL1, \reg1
>> +	msr_s	SYS_APIBKEYHI_EL1, \reg2
>> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDAKEYLO_EL1)]
>> +	msr_s	SYS_APDAKEYLO_EL1, \reg1
>> +	msr_s	SYS_APDAKEYHI_EL1, \reg2
>> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDBKEYLO_EL1)]
>> +	msr_s	SYS_APDBKEYLO_EL1, \reg1
>> +	msr_s	SYS_APDBKEYHI_EL1, \reg2
>> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APGAKEYLO_EL1)]
>> +	msr_s	SYS_APGAKEYLO_EL1, \reg1
>> +	msr_s	SYS_APGAKEYHI_EL1, \reg2
>> +.endm
>> +
>> +.macro ptrauth_switch_to_guest g_ctxt, reg1, reg2, reg3
>> +	ldr	\reg1, [\g_ctxt, #(VCPU_HCR_EL2 - VCPU_CONTEXT)]
> 
> Given that 100% of the current HW doesn't have ptrauth at all, this
> becomes an instant and pointless overhead.
> 
> It could easily be avoided by turning this into:
> 
> alternative_if_not ARM64_HAS_GENERIC_AUTH_ARCH
> 	b	1000f
> alternative_else
> 	ldr	\reg1, [\g_ctxt, #(VCPU_HCR_EL2 - VCPU_CONTEXT)]
> alternative_endif
yes sure. will check.
> 
>> +	and	\reg1, \reg1, #(HCR_API | HCR_APK)
>> +	cbz	\reg1, 1000f
>> +	add	\reg1, \g_ctxt, #CPU_APIAKEYLO_EL1
>> +	ptrauth_restore_state	\reg1, \reg2, \reg3
>> +1000:
>> +.endm
>> +
>> +.macro ptrauth_switch_to_host g_ctxt, h_ctxt, reg1, reg2, reg3
>> +	ldr	\reg1, [\g_ctxt, #(VCPU_HCR_EL2 - VCPU_CONTEXT)]
> 
> Same thing here.
> 
>> +	and	\reg1, \reg1, #(HCR_API | HCR_APK)
>> +	cbz	\reg1, 1001f
>> +	add	\reg1, \g_ctxt, #CPU_APIAKEYLO_EL1
>> +	ptrauth_save_state	\reg1, \reg2, \reg3
>> +	add	\reg1, \h_ctxt, #CPU_APIAKEYLO_EL1
>> +	ptrauth_restore_state	\reg1, \reg2, \reg3
>> +	isb
>> +1001:
>> +.endm
>> +
>> +#else /* !CONFIG_ARM64_PTR_AUTH */
>> +.macro ptrauth_switch_to_guest g_ctxt, reg1, reg2, reg3
>> +.endm
>> +.macro ptrauth_switch_to_host g_ctxt, h_ctxt, reg1, reg2, reg3
>> +.endm
>> +#endif /* CONFIG_ARM64_PTR_AUTH */
>> +#endif /* __ASSEMBLY__ */
>> +#endif /* __ASM_KVM_PTRAUTH_ASM_H */
>> diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
>> index 7f40dcb..8178330 100644
>> --- a/arch/arm64/kernel/asm-offsets.c
>> +++ b/arch/arm64/kernel/asm-offsets.c
>> @@ -125,7 +125,13 @@ int main(void)
>>     DEFINE(VCPU_CONTEXT,		offsetof(struct kvm_vcpu, arch.ctxt));
>>     DEFINE(VCPU_FAULT_DISR,	offsetof(struct kvm_vcpu, arch.fault.disr_el1));
>>     DEFINE(VCPU_WORKAROUND_FLAGS,	offsetof(struct kvm_vcpu, arch.workaround_flags));
>> +  DEFINE(VCPU_HCR_EL2,		offsetof(struct kvm_vcpu, arch.hcr_el2));
>>     DEFINE(CPU_GP_REGS,		offsetof(struct kvm_cpu_context, gp_regs));
>> +  DEFINE(CPU_APIAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APIAKEYLO_EL1]));
>> +  DEFINE(CPU_APIBKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APIBKEYLO_EL1]));
>> +  DEFINE(CPU_APDAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APDAKEYLO_EL1]));
>> +  DEFINE(CPU_APDBKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APDBKEYLO_EL1]));
>> +  DEFINE(CPU_APGAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APGAKEYLO_EL1]));
>>     DEFINE(CPU_USER_PT_REGS,	offsetof(struct kvm_regs, regs));
>>     DEFINE(HOST_CONTEXT_VCPU,	offsetof(struct kvm_cpu_context, __hyp_running_vcpu));
>>   #endif
>> diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
>> index 4f7b26b..e07f763 100644
>> --- a/arch/arm64/kvm/guest.c
>> +++ b/arch/arm64/kvm/guest.c
>> @@ -878,3 +878,17 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
>>   
>>   	return ret;
>>   }
>> +
>> +/**
>> + * kvm_arm_vcpu_ptrauth_setup_lazy - setup lazy ptrauth for vcpu schedule
>> + *
>> + * @vcpu: The VCPU pointer
>> + *
>> + * This function may be used to disable ptrauth and use it in a lazy context
>> + * via traps.
>> + */
>> +void kvm_arm_vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu)
>> +{
>> +	if (vcpu_has_ptrauth(vcpu))
>> +		kvm_arm_vcpu_ptrauth_disable(vcpu);
>> +}
> 
> Why does this live in guest.c?
Many global functions used in virt/kvm/arm/arm.c are implemented here.

However some similar kinds of function are in asm/kvm_emulate.h so can 
be moved there as static inline.
> 
>> diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
>> index 0b79834..5838ff9 100644
>> --- a/arch/arm64/kvm/handle_exit.c
>> +++ b/arch/arm64/kvm/handle_exit.c
>> @@ -30,6 +30,7 @@
>>   #include <asm/kvm_coproc.h>
>>   #include <asm/kvm_emulate.h>
>>   #include <asm/kvm_mmu.h>
>> +#include <asm/kvm_ptrauth_asm.h>
>>   #include <asm/debug-monitors.h>
>>   #include <asm/traps.h>
>>   
>> @@ -174,19 +175,26 @@ static int handle_sve(struct kvm_vcpu *vcpu, struct kvm_run *run)
>>   }
>>   
>>   /*
>> + * Handle the guest trying to use a ptrauth instruction, or trying to access a
>> + * ptrauth register.
>> + */
>> +void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu)
>> +{
>> +	if (vcpu_has_ptrauth(vcpu)) {
>> +		kvm_arm_vcpu_ptrauth_enable(vcpu);
> 
> It is odd that the enable function is placed in sys_regs.c, and only
> used here. You could either just inline it here, or make it a static
> inline in kvm_host.h.

I tried moving it to kvm_host.h but some dependency error is coming,
   CC      arch/arm64/kernel/asm-offsets.s
In file included from ./include/linux/kvm_host.h:38:0,
                  from arch/arm64/kernel/asm-offsets.c:25:
./arch/arm64/include/asm/kvm_host.h: In function 
‘kvm_arm_vcpu_ptrauth_enable’:
./arch/arm64/include/asm/kvm_host.h:547:6: error: dereferencing pointer 
to incomplete type ‘struct kvm_vcpu’
   vcpu->arch.hcr_el2 |= (HCR_API | HCR_APK);

However some similar kinds of function are in asm/kvm_emulate.h so can 
be moved there.

> 
>> +		__ptrauth_save_state(vcpu->arch.host_cpu_context);
> 
> You could expand the __ptrauth_save_state macro here. It is only used
> once, and one less level of obfuscation will help grepping.
> 
>> +	} else {
>> +		kvm_inject_undefined(vcpu);
>> +	}
>> +}
>> +
>> +/*
>>    * Guest usage of a ptrauth instruction (which the guest EL1 did not turn into
>>    * a NOP).
>>    */
>>   static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu, struct kvm_run *run)
>>   {
>> -	/*
>> -	 * We don't currently support ptrauth in a guest, and we mask the ID
>> -	 * registers to prevent well-behaved guests from trying to make use of
>> -	 * it.
>> -	 *
>> -	 * Inject an UNDEF, as if the feature really isn't present.
>> -	 */
>> -	kvm_inject_undefined(vcpu);
>> +	kvm_arm_vcpu_ptrauth_trap(vcpu);
>>   	return 1;
>>   }
>>   
>> diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
>> index 675fdc1..3a70213 100644
>> --- a/arch/arm64/kvm/hyp/entry.S
>> +++ b/arch/arm64/kvm/hyp/entry.S
>> @@ -24,6 +24,7 @@
>>   #include <asm/kvm_arm.h>
>>   #include <asm/kvm_asm.h>
>>   #include <asm/kvm_mmu.h>
>> +#include <asm/kvm_ptrauth_asm.h>
>>   
>>   #define CPU_GP_REG_OFFSET(x)	(CPU_GP_REGS + x)
>>   #define CPU_XREG_OFFSET(x)	CPU_GP_REG_OFFSET(CPU_USER_PT_REGS + 8*x)
>> @@ -64,6 +65,9 @@ ENTRY(__guest_enter)
>>   
>>   	add	x18, x0, #VCPU_CONTEXT
>>   
>> +	// Macro ptrauth_switch_to_guest(guest cxt, tmp1, tmp2, tmp3).
>> +	ptrauth_switch_to_guest x18, x0, x1, x2
>> +
> 
> This comment doesn't tell us much. What we really need is a comment
> explaining *why* this needs to be an inline macro. Otherwise, someone
> will one day move it back to some C code and things will randomly break.
ok.
> 
>>   	// Restore guest regs x0-x17
>>   	ldp	x0, x1,   [x18, #CPU_XREG_OFFSET(0)]
>>   	ldp	x2, x3,   [x18, #CPU_XREG_OFFSET(2)]
>> @@ -118,6 +122,9 @@ ENTRY(__guest_exit)
>>   
>>   	get_host_ctxt	x2, x3
>>   
>> +	// Macro ptrauth_switch_to_host(guest cxt, host cxt, tmp1, tmp2, tmp3).
>> +	ptrauth_switch_to_host x1, x2, x3, x4, x5
>> +
>>   	// Now restore the host regs
>>   	restore_callee_saved_regs x2
>>   
>> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
>> index 09e9b06..4a98b5c 100644
>> --- a/arch/arm64/kvm/sys_regs.c
>> +++ b/arch/arm64/kvm/sys_regs.c
>> @@ -1007,6 +1007,38 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>>   	{ SYS_DESC(SYS_PMEVTYPERn_EL0(n)),					\
>>   	  access_pmu_evtyper, reset_unknown, (PMEVTYPER0_EL0 + n), }
>>   
>> +void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu)
>> +{
>> +	vcpu->arch.hcr_el2 |= (HCR_API | HCR_APK);
>> +}
>> +
>> +void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu)
>> +{
>> +	vcpu->arch.hcr_el2 &= ~(HCR_API | HCR_APK);
>> +}
> 
> As mentionned above, these could be moved as static inline to an include
> file, of even directly inlined in the code that use it.
ok
> 
>> +
>> +static bool trap_ptrauth(struct kvm_vcpu *vcpu,
>> +			 struct sys_reg_params *p,
>> +			 const struct sys_reg_desc *rd)
>> +{
>> +	kvm_arm_vcpu_ptrauth_trap(vcpu);
>> +	return false;
> 
> We need a comment explaining why we return false: Either ptrauth is on,
> and we re-execute the same instruction, or it is off, and we have
> injected an UNDEF. In both cases, we don't advance the guest's PC.
ok.
> 
>> +}
>> +
>> +static unsigned int ptrauth_visibility(const struct kvm_vcpu *vcpu,
>> +			const struct sys_reg_desc *rd)
>> +{
>> +	return vcpu_has_ptrauth(vcpu) ? 0 : REG_HIDDEN_USER | REG_HIDDEN_GUEST;
>> +}
>> +
>> +#define __PTRAUTH_KEY(k)						\
>> +	{ SYS_DESC(SYS_## k), trap_ptrauth, reset_unknown, k,		\
>> +	.visibility = ptrauth_visibility}
>> +
>> +#define PTRAUTH_KEY(k)							\
>> +	__PTRAUTH_KEY(k ## KEYLO_EL1),					\
>> +	__PTRAUTH_KEY(k ## KEYHI_EL1)
>> +
>>   static bool access_arch_timer(struct kvm_vcpu *vcpu,
>>   			      struct sys_reg_params *p,
>>   			      const struct sys_reg_desc *r)
>> @@ -1058,9 +1090,11 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu,
>>   					 (0xfUL << ID_AA64ISAR1_API_SHIFT) |
>>   					 (0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
>>   					 (0xfUL << ID_AA64ISAR1_GPI_SHIFT);
>> -		if (val & ptrauth_mask)
>> -			kvm_debug("ptrauth unsupported for guests, suppressing\n");
>> -		val &= ~ptrauth_mask;
>> +		if (!vcpu_has_ptrauth(vcpu)) {
>> +			if (val & ptrauth_mask)
>> +				kvm_debug("ptrauth unsupported for guests, suppressing\n");
>> +			val &= ~ptrauth_mask;
>> +		}
>>   	}
>>   
>>   	return val;
>> @@ -1460,6 +1494,12 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>>   	{ SYS_DESC(SYS_TTBR1_EL1), access_vm_reg, reset_unknown, TTBR1_EL1 },
>>   	{ SYS_DESC(SYS_TCR_EL1), access_vm_reg, reset_val, TCR_EL1, 0 },
>>   
>> +	PTRAUTH_KEY(APIA),
>> +	PTRAUTH_KEY(APIB),
>> +	PTRAUTH_KEY(APDA),
>> +	PTRAUTH_KEY(APDB),
>> +	PTRAUTH_KEY(APGA),
>> +
>>   	{ SYS_DESC(SYS_AFSR0_EL1), access_vm_reg, reset_unknown, AFSR0_EL1 },
>>   	{ SYS_DESC(SYS_AFSR1_EL1), access_vm_reg, reset_unknown, AFSR1_EL1 },
>>   	{ SYS_DESC(SYS_ESR_EL1), access_vm_reg, reset_unknown, ESR_EL1 },
>> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
>> index 9edbf0f..8d1b73c 100644
>> --- a/virt/kvm/arm/arm.c
>> +++ b/virt/kvm/arm/arm.c
>> @@ -385,6 +385,8 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>>   		vcpu_clear_wfe_traps(vcpu);
>>   	else
>>   		vcpu_set_wfe_traps(vcpu);
>> +
>> +	kvm_arm_vcpu_ptrauth_setup_lazy(vcpu);
>>   }
>>   
>>   void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
>>
> 
> Despite all the comments, the code looks in good shape, and I trust it
> shouldn't take you long to refactor it, retest it and send an updated
> version once we've settled on the ABI part which is the most contentious.
Sure will post next version soon.

Thanks,
Amit D
> 
> Thanks,
> 
> 	M.
> 
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v9 2/5] KVM: arm/arm64: context-switch ptrauth registers
@ 2019-04-17 14:24       ` Amit Daniel Kachhap
  0 siblings, 0 replies; 77+ messages in thread
From: Amit Daniel Kachhap @ 2019-04-17 14:24 UTC (permalink / raw)
  To: Marc Zyngier, linux-arm-kernel
  Cc: Mark Rutland, Andrew Jones, Julien Thierry, Catalin Marinas,
	Will Deacon, Christoffer Dall, Kristina Martsenko, kvmarm,
	James Morse, Ramana Radhakrishnan, Dave Martin, linux-kernel

Hi Marc,

On 4/17/19 2:39 PM, Marc Zyngier wrote:
> Hi Amit,
> 
> On 12/04/2019 04:20, Amit Daniel Kachhap wrote:
>> From: Mark Rutland <mark.rutland@arm.com>
>>
>> When pointer authentication is supported, a guest may wish to use it.
>> This patch adds the necessary KVM infrastructure for this to work, with
>> a semi-lazy context switch of the pointer auth state.
>>
>> Pointer authentication feature is only enabled when VHE is built
>> in the kernel and present in the CPU implementation so only VHE code
>> paths are modified.
>>
>> When we schedule a vcpu, we disable guest usage of pointer
>> authentication instructions and accesses to the keys. While these are
>> disabled, we avoid context-switching the keys. When we trap the guest
>> trying to use pointer authentication functionality, we change to eagerly
>> context-switching the keys, and enable the feature. The next time the
>> vcpu is scheduled out/in, we start again. However the host key save is
>> optimized and implemented inside ptrauth instruction/register access
>> trap.
>>
>> Pointer authentication consists of address authentication and generic
>> authentication, and CPUs in a system might have varied support for
>> either. Where support for either feature is not uniform, it is hidden
>> from guests via ID register emulation, as a result of the cpufeature
>> framework in the host.
>>
>> Unfortunately, address authentication and generic authentication cannot
>> be trapped separately, as the architecture provides a single EL2 trap
>> covering both. If we wish to expose one without the other, we cannot
>> prevent a (badly-written) guest from intermittently using a feature
>> which is not uniformly supported (when scheduled on a physical CPU which
>> supports the relevant feature). Hence, this patch expects both type of
>> authentication to be present in a cpu.
>>
>> This switch of key is done from guest enter/exit assembly as preparation
>> for the upcoming in-kernel pointer authentication support. Hence, these
>> key switching routines are not implemented in C code as they may cause
>> pointer authentication key signing error in some situations.
>>
>> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
>> [Only VHE, key switch in full assembly, vcpu_has_ptrauth checks
>> , save host key in ptrauth exception trap]
>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>> Reviewed-by: Julien Thierry <julien.thierry@arm.com>
>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>> Cc: Christoffer Dall <christoffer.dall@arm.com>
>> Cc: kvmarm@lists.cs.columbia.edu
>> ---
>>
>> Changes since v9:
>> * Used high order number for branching in assembly macros. [Kristina Martsenko]
>> * Taken care of different offset for hcr_el2 now.
>>
>>   arch/arm/include/asm/kvm_host.h          |   1 +
>>   arch/arm64/Kconfig                       |   5 +-
>>   arch/arm64/include/asm/kvm_host.h        |  17 +++++
>>   arch/arm64/include/asm/kvm_ptrauth_asm.h | 106 +++++++++++++++++++++++++++++++
>>   arch/arm64/kernel/asm-offsets.c          |   6 ++
>>   arch/arm64/kvm/guest.c                   |  14 ++++
>>   arch/arm64/kvm/handle_exit.c             |  24 ++++---
>>   arch/arm64/kvm/hyp/entry.S               |   7 ++
>>   arch/arm64/kvm/sys_regs.c                |  46 +++++++++++++-
>>   virt/kvm/arm/arm.c                       |   2 +
>>   10 files changed, 215 insertions(+), 13 deletions(-)
>>   create mode 100644 arch/arm64/include/asm/kvm_ptrauth_asm.h
>>
>> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
>> index e80cfc1..7a5c7f8 100644
>> --- a/arch/arm/include/asm/kvm_host.h
>> +++ b/arch/arm/include/asm/kvm_host.h
>> @@ -363,6 +363,7 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
>>   static inline void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu) {}
>>   static inline void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu) {}
>>   static inline void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu) {}
>> +static inline void kvm_arm_vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu) {}
>>   
>>   static inline void kvm_arm_vhe_guest_enter(void) {}
>>   static inline void kvm_arm_vhe_guest_exit(void) {}
>> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
>> index 7e34b9e..9e8506e 100644
>> --- a/arch/arm64/Kconfig
>> +++ b/arch/arm64/Kconfig
>> @@ -1301,8 +1301,9 @@ config ARM64_PTR_AUTH
>>   	  context-switched along with the process.
>>   
>>   	  The feature is detected at runtime. If the feature is not present in
>> -	  hardware it will not be advertised to userspace nor will it be
>> -	  enabled.
>> +	  hardware it will not be advertised to userspace/KVM guest nor will it
>> +	  be enabled. However, KVM guest also require CONFIG_ARM64_VHE=y to use
>> +	  this feature.
> 
> Not only does it require CONFIG_ARM64_VHE, but it more importantly
> requires a VHE system!
Yes will update.
> 
>>   
>>   endmenu
>>   
>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
>> index 31dbc7c..a585d82 100644
>> --- a/arch/arm64/include/asm/kvm_host.h
>> +++ b/arch/arm64/include/asm/kvm_host.h
>> @@ -161,6 +161,18 @@ enum vcpu_sysreg {
>>   	PMSWINC_EL0,	/* Software Increment Register */
>>   	PMUSERENR_EL0,	/* User Enable Register */
>>   
>> +	/* Pointer Authentication Registers in a strict increasing order. */
>> +	APIAKEYLO_EL1,
>> +	APIAKEYHI_EL1 = APIAKEYLO_EL1 + 1,
>> +	APIBKEYLO_EL1 = APIAKEYLO_EL1 + 2,
>> +	APIBKEYHI_EL1 = APIAKEYLO_EL1 + 3,
>> +	APDAKEYLO_EL1 = APIAKEYLO_EL1 + 4,
>> +	APDAKEYHI_EL1 = APIAKEYLO_EL1 + 5,
>> +	APDBKEYLO_EL1 = APIAKEYLO_EL1 + 6,
>> +	APDBKEYHI_EL1 = APIAKEYLO_EL1 + 7,
>> +	APGAKEYLO_EL1 = APIAKEYLO_EL1 + 8,
>> +	APGAKEYHI_EL1 = APIAKEYLO_EL1 + 9,
> 
> Why do we need these explicit +1, +2...? Being an part of an enum
> already guarantees this.
Yes enums are increasing. But upcoming struct/enums randomization stuffs 
may break the ptrauth register offset calculation logic in the later 
part so explicitly made this to increasing order.


> 
>> +
>>   	/* 32bit specific registers. Keep them at the end of the range */
>>   	DACR32_EL2,	/* Domain Access Control Register */
>>   	IFSR32_EL2,	/* Instruction Fault Status Register */
>> @@ -529,6 +541,11 @@ static inline bool kvm_arch_requires_vhe(void)
>>   	return false;
>>   }
>>   
>> +void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu);
>> +void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu);
>> +void kvm_arm_vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu);
>> +void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu);
>> +
>>   static inline void kvm_arch_hardware_unsetup(void) {}
>>   static inline void kvm_arch_sync_events(struct kvm *kvm) {}
>>   static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
>> diff --git a/arch/arm64/include/asm/kvm_ptrauth_asm.h b/arch/arm64/include/asm/kvm_ptrauth_asm.h
>> new file mode 100644
>> index 0000000..8142521
>> --- /dev/null
>> +++ b/arch/arm64/include/asm/kvm_ptrauth_asm.h
> 
> nit: this should be named kvm_ptrauth.h. The asm suffix doesn't bring
> anything to the game, and is somewhat misleading (there are C macros in
> this file).
> 
>> @@ -0,0 +1,106 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/* arch/arm64/include/asm/kvm_ptrauth_asm.h: Guest/host ptrauth save/restore
>> + * Copyright 2019 Arm Limited
>> + * Author: Mark Rutland <mark.rutland@arm.com>
> 
> nit: Authors
ok.
> 
>> + *         Amit Daniel Kachhap <amit.kachhap@arm.com>
>> + */
>> +
>> +#ifndef __ASM_KVM_PTRAUTH_ASM_H
>> +#define __ASM_KVM_PTRAUTH_ASM_H
>> +
>> +#ifndef __ASSEMBLY__
>> +
>> +#define __ptrauth_save_key(regs, key)						\
>> +({										\
>> +	regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1);	\
>> +	regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1);	\
>> +})
>> +
>> +#define __ptrauth_save_state(ctxt)						\
>> +({										\
>> +	__ptrauth_save_key(ctxt->sys_regs, APIA);				\
>> +	__ptrauth_save_key(ctxt->sys_regs, APIB);				\
>> +	__ptrauth_save_key(ctxt->sys_regs, APDA);				\
>> +	__ptrauth_save_key(ctxt->sys_regs, APDB);				\
>> +	__ptrauth_save_key(ctxt->sys_regs, APGA);				\
>> +})
>> +
>> +#else /* __ASSEMBLY__ */
>> +
>> +#include <asm/sysreg.h>
>> +
>> +#ifdef	CONFIG_ARM64_PTR_AUTH
>> +
>> +#define PTRAUTH_REG_OFFSET(x)	(x - CPU_APIAKEYLO_EL1)
>> +
>> +/*
>> + * CPU_AP*_EL1 values exceed immediate offset range (512) for stp instruction
>> + * so below macros takes CPU_APIAKEYLO_EL1 as base and calculates the offset of
>> + * the keys from this base to avoid an extra add instruction. These macros
>> + * assumes the keys offsets are aligned in a specific increasing order.
>> + */
>> +.macro	ptrauth_save_state base, reg1, reg2
>> +	mrs_s	\reg1, SYS_APIAKEYLO_EL1
>> +	mrs_s	\reg2, SYS_APIAKEYHI_EL1
>> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIAKEYLO_EL1)]
>> +	mrs_s	\reg1, SYS_APIBKEYLO_EL1
>> +	mrs_s	\reg2, SYS_APIBKEYHI_EL1
>> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIBKEYLO_EL1)]
>> +	mrs_s	\reg1, SYS_APDAKEYLO_EL1
>> +	mrs_s	\reg2, SYS_APDAKEYHI_EL1
>> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDAKEYLO_EL1)]
>> +	mrs_s	\reg1, SYS_APDBKEYLO_EL1
>> +	mrs_s	\reg2, SYS_APDBKEYHI_EL1
>> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDBKEYLO_EL1)]
>> +	mrs_s	\reg1, SYS_APGAKEYLO_EL1
>> +	mrs_s	\reg2, SYS_APGAKEYHI_EL1
>> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APGAKEYLO_EL1)]
>> +.endm
>> +
>> +.macro	ptrauth_restore_state base, reg1, reg2
>> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIAKEYLO_EL1)]
>> +	msr_s	SYS_APIAKEYLO_EL1, \reg1
>> +	msr_s	SYS_APIAKEYHI_EL1, \reg2
>> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIBKEYLO_EL1)]
>> +	msr_s	SYS_APIBKEYLO_EL1, \reg1
>> +	msr_s	SYS_APIBKEYHI_EL1, \reg2
>> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDAKEYLO_EL1)]
>> +	msr_s	SYS_APDAKEYLO_EL1, \reg1
>> +	msr_s	SYS_APDAKEYHI_EL1, \reg2
>> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDBKEYLO_EL1)]
>> +	msr_s	SYS_APDBKEYLO_EL1, \reg1
>> +	msr_s	SYS_APDBKEYHI_EL1, \reg2
>> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APGAKEYLO_EL1)]
>> +	msr_s	SYS_APGAKEYLO_EL1, \reg1
>> +	msr_s	SYS_APGAKEYHI_EL1, \reg2
>> +.endm
>> +
>> +.macro ptrauth_switch_to_guest g_ctxt, reg1, reg2, reg3
>> +	ldr	\reg1, [\g_ctxt, #(VCPU_HCR_EL2 - VCPU_CONTEXT)]
> 
> Given that 100% of the current HW doesn't have ptrauth at all, this
> becomes an instant and pointless overhead.
> 
> It could easily be avoided by turning this into:
> 
> alternative_if_not ARM64_HAS_GENERIC_AUTH_ARCH
> 	b	1000f
> alternative_else
> 	ldr	\reg1, [\g_ctxt, #(VCPU_HCR_EL2 - VCPU_CONTEXT)]
> alternative_endif
yes sure. will check.
> 
>> +	and	\reg1, \reg1, #(HCR_API | HCR_APK)
>> +	cbz	\reg1, 1000f
>> +	add	\reg1, \g_ctxt, #CPU_APIAKEYLO_EL1
>> +	ptrauth_restore_state	\reg1, \reg2, \reg3
>> +1000:
>> +.endm
>> +
>> +.macro ptrauth_switch_to_host g_ctxt, h_ctxt, reg1, reg2, reg3
>> +	ldr	\reg1, [\g_ctxt, #(VCPU_HCR_EL2 - VCPU_CONTEXT)]
> 
> Same thing here.
> 
>> +	and	\reg1, \reg1, #(HCR_API | HCR_APK)
>> +	cbz	\reg1, 1001f
>> +	add	\reg1, \g_ctxt, #CPU_APIAKEYLO_EL1
>> +	ptrauth_save_state	\reg1, \reg2, \reg3
>> +	add	\reg1, \h_ctxt, #CPU_APIAKEYLO_EL1
>> +	ptrauth_restore_state	\reg1, \reg2, \reg3
>> +	isb
>> +1001:
>> +.endm
>> +
>> +#else /* !CONFIG_ARM64_PTR_AUTH */
>> +.macro ptrauth_switch_to_guest g_ctxt, reg1, reg2, reg3
>> +.endm
>> +.macro ptrauth_switch_to_host g_ctxt, h_ctxt, reg1, reg2, reg3
>> +.endm
>> +#endif /* CONFIG_ARM64_PTR_AUTH */
>> +#endif /* __ASSEMBLY__ */
>> +#endif /* __ASM_KVM_PTRAUTH_ASM_H */
>> diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
>> index 7f40dcb..8178330 100644
>> --- a/arch/arm64/kernel/asm-offsets.c
>> +++ b/arch/arm64/kernel/asm-offsets.c
>> @@ -125,7 +125,13 @@ int main(void)
>>     DEFINE(VCPU_CONTEXT,		offsetof(struct kvm_vcpu, arch.ctxt));
>>     DEFINE(VCPU_FAULT_DISR,	offsetof(struct kvm_vcpu, arch.fault.disr_el1));
>>     DEFINE(VCPU_WORKAROUND_FLAGS,	offsetof(struct kvm_vcpu, arch.workaround_flags));
>> +  DEFINE(VCPU_HCR_EL2,		offsetof(struct kvm_vcpu, arch.hcr_el2));
>>     DEFINE(CPU_GP_REGS,		offsetof(struct kvm_cpu_context, gp_regs));
>> +  DEFINE(CPU_APIAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APIAKEYLO_EL1]));
>> +  DEFINE(CPU_APIBKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APIBKEYLO_EL1]));
>> +  DEFINE(CPU_APDAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APDAKEYLO_EL1]));
>> +  DEFINE(CPU_APDBKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APDBKEYLO_EL1]));
>> +  DEFINE(CPU_APGAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APGAKEYLO_EL1]));
>>     DEFINE(CPU_USER_PT_REGS,	offsetof(struct kvm_regs, regs));
>>     DEFINE(HOST_CONTEXT_VCPU,	offsetof(struct kvm_cpu_context, __hyp_running_vcpu));
>>   #endif
>> diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
>> index 4f7b26b..e07f763 100644
>> --- a/arch/arm64/kvm/guest.c
>> +++ b/arch/arm64/kvm/guest.c
>> @@ -878,3 +878,17 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
>>   
>>   	return ret;
>>   }
>> +
>> +/**
>> + * kvm_arm_vcpu_ptrauth_setup_lazy - setup lazy ptrauth for vcpu schedule
>> + *
>> + * @vcpu: The VCPU pointer
>> + *
>> + * This function may be used to disable ptrauth and use it in a lazy context
>> + * via traps.
>> + */
>> +void kvm_arm_vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu)
>> +{
>> +	if (vcpu_has_ptrauth(vcpu))
>> +		kvm_arm_vcpu_ptrauth_disable(vcpu);
>> +}
> 
> Why does this live in guest.c?
Many global functions used in virt/kvm/arm/arm.c are implemented here.

However some similar kinds of function are in asm/kvm_emulate.h so can 
be moved there as static inline.
> 
>> diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
>> index 0b79834..5838ff9 100644
>> --- a/arch/arm64/kvm/handle_exit.c
>> +++ b/arch/arm64/kvm/handle_exit.c
>> @@ -30,6 +30,7 @@
>>   #include <asm/kvm_coproc.h>
>>   #include <asm/kvm_emulate.h>
>>   #include <asm/kvm_mmu.h>
>> +#include <asm/kvm_ptrauth_asm.h>
>>   #include <asm/debug-monitors.h>
>>   #include <asm/traps.h>
>>   
>> @@ -174,19 +175,26 @@ static int handle_sve(struct kvm_vcpu *vcpu, struct kvm_run *run)
>>   }
>>   
>>   /*
>> + * Handle the guest trying to use a ptrauth instruction, or trying to access a
>> + * ptrauth register.
>> + */
>> +void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu)
>> +{
>> +	if (vcpu_has_ptrauth(vcpu)) {
>> +		kvm_arm_vcpu_ptrauth_enable(vcpu);
> 
> It is odd that the enable function is placed in sys_regs.c, and only
> used here. You could either just inline it here, or make it a static
> inline in kvm_host.h.

I tried moving it to kvm_host.h but some dependency error is coming,
   CC      arch/arm64/kernel/asm-offsets.s
In file included from ./include/linux/kvm_host.h:38:0,
                  from arch/arm64/kernel/asm-offsets.c:25:
./arch/arm64/include/asm/kvm_host.h: In function 
‘kvm_arm_vcpu_ptrauth_enable’:
./arch/arm64/include/asm/kvm_host.h:547:6: error: dereferencing pointer 
to incomplete type ‘struct kvm_vcpu’
   vcpu->arch.hcr_el2 |= (HCR_API | HCR_APK);

However some similar kinds of function are in asm/kvm_emulate.h so can 
be moved there.

> 
>> +		__ptrauth_save_state(vcpu->arch.host_cpu_context);
> 
> You could expand the __ptrauth_save_state macro here. It is only used
> once, and one less level of obfuscation will help grepping.
> 
>> +	} else {
>> +		kvm_inject_undefined(vcpu);
>> +	}
>> +}
>> +
>> +/*
>>    * Guest usage of a ptrauth instruction (which the guest EL1 did not turn into
>>    * a NOP).
>>    */
>>   static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu, struct kvm_run *run)
>>   {
>> -	/*
>> -	 * We don't currently support ptrauth in a guest, and we mask the ID
>> -	 * registers to prevent well-behaved guests from trying to make use of
>> -	 * it.
>> -	 *
>> -	 * Inject an UNDEF, as if the feature really isn't present.
>> -	 */
>> -	kvm_inject_undefined(vcpu);
>> +	kvm_arm_vcpu_ptrauth_trap(vcpu);
>>   	return 1;
>>   }
>>   
>> diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
>> index 675fdc1..3a70213 100644
>> --- a/arch/arm64/kvm/hyp/entry.S
>> +++ b/arch/arm64/kvm/hyp/entry.S
>> @@ -24,6 +24,7 @@
>>   #include <asm/kvm_arm.h>
>>   #include <asm/kvm_asm.h>
>>   #include <asm/kvm_mmu.h>
>> +#include <asm/kvm_ptrauth_asm.h>
>>   
>>   #define CPU_GP_REG_OFFSET(x)	(CPU_GP_REGS + x)
>>   #define CPU_XREG_OFFSET(x)	CPU_GP_REG_OFFSET(CPU_USER_PT_REGS + 8*x)
>> @@ -64,6 +65,9 @@ ENTRY(__guest_enter)
>>   
>>   	add	x18, x0, #VCPU_CONTEXT
>>   
>> +	// Macro ptrauth_switch_to_guest(guest cxt, tmp1, tmp2, tmp3).
>> +	ptrauth_switch_to_guest x18, x0, x1, x2
>> +
> 
> This comment doesn't tell us much. What we really need is a comment
> explaining *why* this needs to be an inline macro. Otherwise, someone
> will one day move it back to some C code and things will randomly break.
ok.
> 
>>   	// Restore guest regs x0-x17
>>   	ldp	x0, x1,   [x18, #CPU_XREG_OFFSET(0)]
>>   	ldp	x2, x3,   [x18, #CPU_XREG_OFFSET(2)]
>> @@ -118,6 +122,9 @@ ENTRY(__guest_exit)
>>   
>>   	get_host_ctxt	x2, x3
>>   
>> +	// Macro ptrauth_switch_to_host(guest cxt, host cxt, tmp1, tmp2, tmp3).
>> +	ptrauth_switch_to_host x1, x2, x3, x4, x5
>> +
>>   	// Now restore the host regs
>>   	restore_callee_saved_regs x2
>>   
>> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
>> index 09e9b06..4a98b5c 100644
>> --- a/arch/arm64/kvm/sys_regs.c
>> +++ b/arch/arm64/kvm/sys_regs.c
>> @@ -1007,6 +1007,38 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>>   	{ SYS_DESC(SYS_PMEVTYPERn_EL0(n)),					\
>>   	  access_pmu_evtyper, reset_unknown, (PMEVTYPER0_EL0 + n), }
>>   
>> +void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu)
>> +{
>> +	vcpu->arch.hcr_el2 |= (HCR_API | HCR_APK);
>> +}
>> +
>> +void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu)
>> +{
>> +	vcpu->arch.hcr_el2 &= ~(HCR_API | HCR_APK);
>> +}
> 
> As mentionned above, these could be moved as static inline to an include
> file, of even directly inlined in the code that use it.
ok
> 
>> +
>> +static bool trap_ptrauth(struct kvm_vcpu *vcpu,
>> +			 struct sys_reg_params *p,
>> +			 const struct sys_reg_desc *rd)
>> +{
>> +	kvm_arm_vcpu_ptrauth_trap(vcpu);
>> +	return false;
> 
> We need a comment explaining why we return false: Either ptrauth is on,
> and we re-execute the same instruction, or it is off, and we have
> injected an UNDEF. In both cases, we don't advance the guest's PC.
ok.
> 
>> +}
>> +
>> +static unsigned int ptrauth_visibility(const struct kvm_vcpu *vcpu,
>> +			const struct sys_reg_desc *rd)
>> +{
>> +	return vcpu_has_ptrauth(vcpu) ? 0 : REG_HIDDEN_USER | REG_HIDDEN_GUEST;
>> +}
>> +
>> +#define __PTRAUTH_KEY(k)						\
>> +	{ SYS_DESC(SYS_## k), trap_ptrauth, reset_unknown, k,		\
>> +	.visibility = ptrauth_visibility}
>> +
>> +#define PTRAUTH_KEY(k)							\
>> +	__PTRAUTH_KEY(k ## KEYLO_EL1),					\
>> +	__PTRAUTH_KEY(k ## KEYHI_EL1)
>> +
>>   static bool access_arch_timer(struct kvm_vcpu *vcpu,
>>   			      struct sys_reg_params *p,
>>   			      const struct sys_reg_desc *r)
>> @@ -1058,9 +1090,11 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu,
>>   					 (0xfUL << ID_AA64ISAR1_API_SHIFT) |
>>   					 (0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
>>   					 (0xfUL << ID_AA64ISAR1_GPI_SHIFT);
>> -		if (val & ptrauth_mask)
>> -			kvm_debug("ptrauth unsupported for guests, suppressing\n");
>> -		val &= ~ptrauth_mask;
>> +		if (!vcpu_has_ptrauth(vcpu)) {
>> +			if (val & ptrauth_mask)
>> +				kvm_debug("ptrauth unsupported for guests, suppressing\n");
>> +			val &= ~ptrauth_mask;
>> +		}
>>   	}
>>   
>>   	return val;
>> @@ -1460,6 +1494,12 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>>   	{ SYS_DESC(SYS_TTBR1_EL1), access_vm_reg, reset_unknown, TTBR1_EL1 },
>>   	{ SYS_DESC(SYS_TCR_EL1), access_vm_reg, reset_val, TCR_EL1, 0 },
>>   
>> +	PTRAUTH_KEY(APIA),
>> +	PTRAUTH_KEY(APIB),
>> +	PTRAUTH_KEY(APDA),
>> +	PTRAUTH_KEY(APDB),
>> +	PTRAUTH_KEY(APGA),
>> +
>>   	{ SYS_DESC(SYS_AFSR0_EL1), access_vm_reg, reset_unknown, AFSR0_EL1 },
>>   	{ SYS_DESC(SYS_AFSR1_EL1), access_vm_reg, reset_unknown, AFSR1_EL1 },
>>   	{ SYS_DESC(SYS_ESR_EL1), access_vm_reg, reset_unknown, ESR_EL1 },
>> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
>> index 9edbf0f..8d1b73c 100644
>> --- a/virt/kvm/arm/arm.c
>> +++ b/virt/kvm/arm/arm.c
>> @@ -385,6 +385,8 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>>   		vcpu_clear_wfe_traps(vcpu);
>>   	else
>>   		vcpu_set_wfe_traps(vcpu);
>> +
>> +	kvm_arm_vcpu_ptrauth_setup_lazy(vcpu);
>>   }
>>   
>>   void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
>>
> 
> Despite all the comments, the code looks in good shape, and I trust it
> shouldn't take you long to refactor it, retest it and send an updated
> version once we've settled on the ABI part which is the most contentious.
Sure will post next version soon.

Thanks,
Amit D
> 
> Thanks,
> 
> 	M.
> 

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v9 2/5] KVM: arm/arm64: context-switch ptrauth registers
@ 2019-04-17 14:39         ` Marc Zyngier
  0 siblings, 0 replies; 77+ messages in thread
From: Marc Zyngier @ 2019-04-17 14:39 UTC (permalink / raw)
  To: Amit Daniel Kachhap, linux-arm-kernel
  Cc: Christoffer Dall, Catalin Marinas, Will Deacon, Andrew Jones,
	Dave Martin, Ramana Radhakrishnan, kvmarm, Kristina Martsenko,
	linux-kernel, Mark Rutland, James Morse, Julien Thierry

On 17/04/2019 15:24, Amit Daniel Kachhap wrote:
> Hi Marc,
> 
> On 4/17/19 2:39 PM, Marc Zyngier wrote:
>> Hi Amit,
>>
>> On 12/04/2019 04:20, Amit Daniel Kachhap wrote:
>>> From: Mark Rutland <mark.rutland@arm.com>
>>>
>>> When pointer authentication is supported, a guest may wish to use it.
>>> This patch adds the necessary KVM infrastructure for this to work, with
>>> a semi-lazy context switch of the pointer auth state.
>>>
>>> Pointer authentication feature is only enabled when VHE is built
>>> in the kernel and present in the CPU implementation so only VHE code
>>> paths are modified.
>>>
>>> When we schedule a vcpu, we disable guest usage of pointer
>>> authentication instructions and accesses to the keys. While these are
>>> disabled, we avoid context-switching the keys. When we trap the guest
>>> trying to use pointer authentication functionality, we change to eagerly
>>> context-switching the keys, and enable the feature. The next time the
>>> vcpu is scheduled out/in, we start again. However the host key save is
>>> optimized and implemented inside ptrauth instruction/register access
>>> trap.
>>>
>>> Pointer authentication consists of address authentication and generic
>>> authentication, and CPUs in a system might have varied support for
>>> either. Where support for either feature is not uniform, it is hidden
>>> from guests via ID register emulation, as a result of the cpufeature
>>> framework in the host.
>>>
>>> Unfortunately, address authentication and generic authentication cannot
>>> be trapped separately, as the architecture provides a single EL2 trap
>>> covering both. If we wish to expose one without the other, we cannot
>>> prevent a (badly-written) guest from intermittently using a feature
>>> which is not uniformly supported (when scheduled on a physical CPU which
>>> supports the relevant feature). Hence, this patch expects both type of
>>> authentication to be present in a cpu.
>>>
>>> This switch of key is done from guest enter/exit assembly as preparation
>>> for the upcoming in-kernel pointer authentication support. Hence, these
>>> key switching routines are not implemented in C code as they may cause
>>> pointer authentication key signing error in some situations.
>>>
>>> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
>>> [Only VHE, key switch in full assembly, vcpu_has_ptrauth checks
>>> , save host key in ptrauth exception trap]
>>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>>> Reviewed-by: Julien Thierry <julien.thierry@arm.com>
>>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>>> Cc: Christoffer Dall <christoffer.dall@arm.com>
>>> Cc: kvmarm@lists.cs.columbia.edu
>>> ---
>>>
>>> Changes since v9:
>>> * Used high order number for branching in assembly macros. [Kristina Martsenko]
>>> * Taken care of different offset for hcr_el2 now.
>>>
>>>   arch/arm/include/asm/kvm_host.h          |   1 +
>>>   arch/arm64/Kconfig                       |   5 +-
>>>   arch/arm64/include/asm/kvm_host.h        |  17 +++++
>>>   arch/arm64/include/asm/kvm_ptrauth_asm.h | 106 +++++++++++++++++++++++++++++++
>>>   arch/arm64/kernel/asm-offsets.c          |   6 ++
>>>   arch/arm64/kvm/guest.c                   |  14 ++++
>>>   arch/arm64/kvm/handle_exit.c             |  24 ++++---
>>>   arch/arm64/kvm/hyp/entry.S               |   7 ++
>>>   arch/arm64/kvm/sys_regs.c                |  46 +++++++++++++-
>>>   virt/kvm/arm/arm.c                       |   2 +
>>>   10 files changed, 215 insertions(+), 13 deletions(-)
>>>   create mode 100644 arch/arm64/include/asm/kvm_ptrauth_asm.h
>>>
>>> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
>>> index e80cfc1..7a5c7f8 100644
>>> --- a/arch/arm/include/asm/kvm_host.h
>>> +++ b/arch/arm/include/asm/kvm_host.h
>>> @@ -363,6 +363,7 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
>>>   static inline void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu) {}
>>>   static inline void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu) {}
>>>   static inline void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu) {}
>>> +static inline void kvm_arm_vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu) {}
>>>   
>>>   static inline void kvm_arm_vhe_guest_enter(void) {}
>>>   static inline void kvm_arm_vhe_guest_exit(void) {}
>>> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
>>> index 7e34b9e..9e8506e 100644
>>> --- a/arch/arm64/Kconfig
>>> +++ b/arch/arm64/Kconfig
>>> @@ -1301,8 +1301,9 @@ config ARM64_PTR_AUTH
>>>   	  context-switched along with the process.
>>>   
>>>   	  The feature is detected at runtime. If the feature is not present in
>>> -	  hardware it will not be advertised to userspace nor will it be
>>> -	  enabled.
>>> +	  hardware it will not be advertised to userspace/KVM guest nor will it
>>> +	  be enabled. However, KVM guest also require CONFIG_ARM64_VHE=y to use
>>> +	  this feature.
>>
>> Not only does it require CONFIG_ARM64_VHE, but it more importantly
>> requires a VHE system!
> Yes will update.
>>
>>>   
>>>   endmenu
>>>   
>>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
>>> index 31dbc7c..a585d82 100644
>>> --- a/arch/arm64/include/asm/kvm_host.h
>>> +++ b/arch/arm64/include/asm/kvm_host.h
>>> @@ -161,6 +161,18 @@ enum vcpu_sysreg {
>>>   	PMSWINC_EL0,	/* Software Increment Register */
>>>   	PMUSERENR_EL0,	/* User Enable Register */
>>>   
>>> +	/* Pointer Authentication Registers in a strict increasing order. */
>>> +	APIAKEYLO_EL1,
>>> +	APIAKEYHI_EL1 = APIAKEYLO_EL1 + 1,
>>> +	APIBKEYLO_EL1 = APIAKEYLO_EL1 + 2,
>>> +	APIBKEYHI_EL1 = APIAKEYLO_EL1 + 3,
>>> +	APDAKEYLO_EL1 = APIAKEYLO_EL1 + 4,
>>> +	APDAKEYHI_EL1 = APIAKEYLO_EL1 + 5,
>>> +	APDBKEYLO_EL1 = APIAKEYLO_EL1 + 6,
>>> +	APDBKEYHI_EL1 = APIAKEYLO_EL1 + 7,
>>> +	APGAKEYLO_EL1 = APIAKEYLO_EL1 + 8,
>>> +	APGAKEYHI_EL1 = APIAKEYLO_EL1 + 9,
>>
>> Why do we need these explicit +1, +2...? Being an part of an enum
>> already guarantees this.
> Yes enums are increasing. But upcoming struct/enums randomization stuffs 
> may break the ptrauth register offset calculation logic in the later 
> part so explicitly made this to increasing order.

Enum randomization? well, the whole of KVM would break spectacularly,
not to mention most of the kernel.

So no, this isn't a concern, please drop this.

> 
> 
>>
>>> +
>>>   	/* 32bit specific registers. Keep them at the end of the range */
>>>   	DACR32_EL2,	/* Domain Access Control Register */
>>>   	IFSR32_EL2,	/* Instruction Fault Status Register */
>>> @@ -529,6 +541,11 @@ static inline bool kvm_arch_requires_vhe(void)
>>>   	return false;
>>>   }
>>>   
>>> +void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu);
>>> +void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu);
>>> +void kvm_arm_vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu);
>>> +void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu);
>>> +
>>>   static inline void kvm_arch_hardware_unsetup(void) {}
>>>   static inline void kvm_arch_sync_events(struct kvm *kvm) {}
>>>   static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
>>> diff --git a/arch/arm64/include/asm/kvm_ptrauth_asm.h b/arch/arm64/include/asm/kvm_ptrauth_asm.h
>>> new file mode 100644
>>> index 0000000..8142521
>>> --- /dev/null
>>> +++ b/arch/arm64/include/asm/kvm_ptrauth_asm.h
>>
>> nit: this should be named kvm_ptrauth.h. The asm suffix doesn't bring
>> anything to the game, and is somewhat misleading (there are C macros in
>> this file).
>>
>>> @@ -0,0 +1,106 @@
>>> +/* SPDX-License-Identifier: GPL-2.0 */
>>> +/* arch/arm64/include/asm/kvm_ptrauth_asm.h: Guest/host ptrauth save/restore
>>> + * Copyright 2019 Arm Limited
>>> + * Author: Mark Rutland <mark.rutland@arm.com>
>>
>> nit: Authors
> ok.
>>
>>> + *         Amit Daniel Kachhap <amit.kachhap@arm.com>
>>> + */
>>> +
>>> +#ifndef __ASM_KVM_PTRAUTH_ASM_H
>>> +#define __ASM_KVM_PTRAUTH_ASM_H
>>> +
>>> +#ifndef __ASSEMBLY__
>>> +
>>> +#define __ptrauth_save_key(regs, key)						\
>>> +({										\
>>> +	regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1);	\
>>> +	regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1);	\
>>> +})
>>> +
>>> +#define __ptrauth_save_state(ctxt)						\
>>> +({										\
>>> +	__ptrauth_save_key(ctxt->sys_regs, APIA);				\
>>> +	__ptrauth_save_key(ctxt->sys_regs, APIB);				\
>>> +	__ptrauth_save_key(ctxt->sys_regs, APDA);				\
>>> +	__ptrauth_save_key(ctxt->sys_regs, APDB);				\
>>> +	__ptrauth_save_key(ctxt->sys_regs, APGA);				\
>>> +})
>>> +
>>> +#else /* __ASSEMBLY__ */
>>> +
>>> +#include <asm/sysreg.h>
>>> +
>>> +#ifdef	CONFIG_ARM64_PTR_AUTH
>>> +
>>> +#define PTRAUTH_REG_OFFSET(x)	(x - CPU_APIAKEYLO_EL1)
>>> +
>>> +/*
>>> + * CPU_AP*_EL1 values exceed immediate offset range (512) for stp instruction
>>> + * so below macros takes CPU_APIAKEYLO_EL1 as base and calculates the offset of
>>> + * the keys from this base to avoid an extra add instruction. These macros
>>> + * assumes the keys offsets are aligned in a specific increasing order.
>>> + */
>>> +.macro	ptrauth_save_state base, reg1, reg2
>>> +	mrs_s	\reg1, SYS_APIAKEYLO_EL1
>>> +	mrs_s	\reg2, SYS_APIAKEYHI_EL1
>>> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIAKEYLO_EL1)]
>>> +	mrs_s	\reg1, SYS_APIBKEYLO_EL1
>>> +	mrs_s	\reg2, SYS_APIBKEYHI_EL1
>>> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIBKEYLO_EL1)]
>>> +	mrs_s	\reg1, SYS_APDAKEYLO_EL1
>>> +	mrs_s	\reg2, SYS_APDAKEYHI_EL1
>>> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDAKEYLO_EL1)]
>>> +	mrs_s	\reg1, SYS_APDBKEYLO_EL1
>>> +	mrs_s	\reg2, SYS_APDBKEYHI_EL1
>>> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDBKEYLO_EL1)]
>>> +	mrs_s	\reg1, SYS_APGAKEYLO_EL1
>>> +	mrs_s	\reg2, SYS_APGAKEYHI_EL1
>>> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APGAKEYLO_EL1)]
>>> +.endm
>>> +
>>> +.macro	ptrauth_restore_state base, reg1, reg2
>>> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIAKEYLO_EL1)]
>>> +	msr_s	SYS_APIAKEYLO_EL1, \reg1
>>> +	msr_s	SYS_APIAKEYHI_EL1, \reg2
>>> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIBKEYLO_EL1)]
>>> +	msr_s	SYS_APIBKEYLO_EL1, \reg1
>>> +	msr_s	SYS_APIBKEYHI_EL1, \reg2
>>> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDAKEYLO_EL1)]
>>> +	msr_s	SYS_APDAKEYLO_EL1, \reg1
>>> +	msr_s	SYS_APDAKEYHI_EL1, \reg2
>>> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDBKEYLO_EL1)]
>>> +	msr_s	SYS_APDBKEYLO_EL1, \reg1
>>> +	msr_s	SYS_APDBKEYHI_EL1, \reg2
>>> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APGAKEYLO_EL1)]
>>> +	msr_s	SYS_APGAKEYLO_EL1, \reg1
>>> +	msr_s	SYS_APGAKEYHI_EL1, \reg2
>>> +.endm
>>> +
>>> +.macro ptrauth_switch_to_guest g_ctxt, reg1, reg2, reg3
>>> +	ldr	\reg1, [\g_ctxt, #(VCPU_HCR_EL2 - VCPU_CONTEXT)]
>>
>> Given that 100% of the current HW doesn't have ptrauth at all, this
>> becomes an instant and pointless overhead.
>>
>> It could easily be avoided by turning this into:
>>
>> alternative_if_not ARM64_HAS_GENERIC_AUTH_ARCH
>> 	b	1000f
>> alternative_else
>> 	ldr	\reg1, [\g_ctxt, #(VCPU_HCR_EL2 - VCPU_CONTEXT)]
>> alternative_endif
> yes sure. will check.
>>
>>> +	and	\reg1, \reg1, #(HCR_API | HCR_APK)
>>> +	cbz	\reg1, 1000f
>>> +	add	\reg1, \g_ctxt, #CPU_APIAKEYLO_EL1
>>> +	ptrauth_restore_state	\reg1, \reg2, \reg3
>>> +1000:
>>> +.endm
>>> +
>>> +.macro ptrauth_switch_to_host g_ctxt, h_ctxt, reg1, reg2, reg3
>>> +	ldr	\reg1, [\g_ctxt, #(VCPU_HCR_EL2 - VCPU_CONTEXT)]
>>
>> Same thing here.
>>
>>> +	and	\reg1, \reg1, #(HCR_API | HCR_APK)
>>> +	cbz	\reg1, 1001f
>>> +	add	\reg1, \g_ctxt, #CPU_APIAKEYLO_EL1
>>> +	ptrauth_save_state	\reg1, \reg2, \reg3
>>> +	add	\reg1, \h_ctxt, #CPU_APIAKEYLO_EL1
>>> +	ptrauth_restore_state	\reg1, \reg2, \reg3
>>> +	isb
>>> +1001:
>>> +.endm
>>> +
>>> +#else /* !CONFIG_ARM64_PTR_AUTH */
>>> +.macro ptrauth_switch_to_guest g_ctxt, reg1, reg2, reg3
>>> +.endm
>>> +.macro ptrauth_switch_to_host g_ctxt, h_ctxt, reg1, reg2, reg3
>>> +.endm
>>> +#endif /* CONFIG_ARM64_PTR_AUTH */
>>> +#endif /* __ASSEMBLY__ */
>>> +#endif /* __ASM_KVM_PTRAUTH_ASM_H */
>>> diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
>>> index 7f40dcb..8178330 100644
>>> --- a/arch/arm64/kernel/asm-offsets.c
>>> +++ b/arch/arm64/kernel/asm-offsets.c
>>> @@ -125,7 +125,13 @@ int main(void)
>>>     DEFINE(VCPU_CONTEXT,		offsetof(struct kvm_vcpu, arch.ctxt));
>>>     DEFINE(VCPU_FAULT_DISR,	offsetof(struct kvm_vcpu, arch.fault.disr_el1));
>>>     DEFINE(VCPU_WORKAROUND_FLAGS,	offsetof(struct kvm_vcpu, arch.workaround_flags));
>>> +  DEFINE(VCPU_HCR_EL2,		offsetof(struct kvm_vcpu, arch.hcr_el2));
>>>     DEFINE(CPU_GP_REGS,		offsetof(struct kvm_cpu_context, gp_regs));
>>> +  DEFINE(CPU_APIAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APIAKEYLO_EL1]));
>>> +  DEFINE(CPU_APIBKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APIBKEYLO_EL1]));
>>> +  DEFINE(CPU_APDAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APDAKEYLO_EL1]));
>>> +  DEFINE(CPU_APDBKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APDBKEYLO_EL1]));
>>> +  DEFINE(CPU_APGAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APGAKEYLO_EL1]));
>>>     DEFINE(CPU_USER_PT_REGS,	offsetof(struct kvm_regs, regs));
>>>     DEFINE(HOST_CONTEXT_VCPU,	offsetof(struct kvm_cpu_context, __hyp_running_vcpu));
>>>   #endif
>>> diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
>>> index 4f7b26b..e07f763 100644
>>> --- a/arch/arm64/kvm/guest.c
>>> +++ b/arch/arm64/kvm/guest.c
>>> @@ -878,3 +878,17 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
>>>   
>>>   	return ret;
>>>   }
>>> +
>>> +/**
>>> + * kvm_arm_vcpu_ptrauth_setup_lazy - setup lazy ptrauth for vcpu schedule
>>> + *
>>> + * @vcpu: The VCPU pointer
>>> + *
>>> + * This function may be used to disable ptrauth and use it in a lazy context
>>> + * via traps.
>>> + */
>>> +void kvm_arm_vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu)
>>> +{
>>> +	if (vcpu_has_ptrauth(vcpu))
>>> +		kvm_arm_vcpu_ptrauth_disable(vcpu);
>>> +}
>>
>> Why does this live in guest.c?
> Many global functions used in virt/kvm/arm/arm.c are implemented here.

None that are used on vcpu_load().

> 
> However some similar kinds of function are in asm/kvm_emulate.h so can 
> be moved there as static inline.

Exactly.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v9 2/5] KVM: arm/arm64: context-switch ptrauth registers
@ 2019-04-17 14:39         ` Marc Zyngier
  0 siblings, 0 replies; 77+ messages in thread
From: Marc Zyngier @ 2019-04-17 14:39 UTC (permalink / raw)
  To: Amit Daniel Kachhap, linux-arm-kernel
  Cc: Catalin Marinas, Will Deacon, Kristina Martsenko, kvmarm,
	Ramana Radhakrishnan, Dave Martin, linux-kernel

On 17/04/2019 15:24, Amit Daniel Kachhap wrote:
> Hi Marc,
> 
> On 4/17/19 2:39 PM, Marc Zyngier wrote:
>> Hi Amit,
>>
>> On 12/04/2019 04:20, Amit Daniel Kachhap wrote:
>>> From: Mark Rutland <mark.rutland@arm.com>
>>>
>>> When pointer authentication is supported, a guest may wish to use it.
>>> This patch adds the necessary KVM infrastructure for this to work, with
>>> a semi-lazy context switch of the pointer auth state.
>>>
>>> Pointer authentication feature is only enabled when VHE is built
>>> in the kernel and present in the CPU implementation so only VHE code
>>> paths are modified.
>>>
>>> When we schedule a vcpu, we disable guest usage of pointer
>>> authentication instructions and accesses to the keys. While these are
>>> disabled, we avoid context-switching the keys. When we trap the guest
>>> trying to use pointer authentication functionality, we change to eagerly
>>> context-switching the keys, and enable the feature. The next time the
>>> vcpu is scheduled out/in, we start again. However the host key save is
>>> optimized and implemented inside ptrauth instruction/register access
>>> trap.
>>>
>>> Pointer authentication consists of address authentication and generic
>>> authentication, and CPUs in a system might have varied support for
>>> either. Where support for either feature is not uniform, it is hidden
>>> from guests via ID register emulation, as a result of the cpufeature
>>> framework in the host.
>>>
>>> Unfortunately, address authentication and generic authentication cannot
>>> be trapped separately, as the architecture provides a single EL2 trap
>>> covering both. If we wish to expose one without the other, we cannot
>>> prevent a (badly-written) guest from intermittently using a feature
>>> which is not uniformly supported (when scheduled on a physical CPU which
>>> supports the relevant feature). Hence, this patch expects both type of
>>> authentication to be present in a cpu.
>>>
>>> This switch of key is done from guest enter/exit assembly as preparation
>>> for the upcoming in-kernel pointer authentication support. Hence, these
>>> key switching routines are not implemented in C code as they may cause
>>> pointer authentication key signing error in some situations.
>>>
>>> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
>>> [Only VHE, key switch in full assembly, vcpu_has_ptrauth checks
>>> , save host key in ptrauth exception trap]
>>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>>> Reviewed-by: Julien Thierry <julien.thierry@arm.com>
>>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>>> Cc: Christoffer Dall <christoffer.dall@arm.com>
>>> Cc: kvmarm@lists.cs.columbia.edu
>>> ---
>>>
>>> Changes since v9:
>>> * Used high order number for branching in assembly macros. [Kristina Martsenko]
>>> * Taken care of different offset for hcr_el2 now.
>>>
>>>   arch/arm/include/asm/kvm_host.h          |   1 +
>>>   arch/arm64/Kconfig                       |   5 +-
>>>   arch/arm64/include/asm/kvm_host.h        |  17 +++++
>>>   arch/arm64/include/asm/kvm_ptrauth_asm.h | 106 +++++++++++++++++++++++++++++++
>>>   arch/arm64/kernel/asm-offsets.c          |   6 ++
>>>   arch/arm64/kvm/guest.c                   |  14 ++++
>>>   arch/arm64/kvm/handle_exit.c             |  24 ++++---
>>>   arch/arm64/kvm/hyp/entry.S               |   7 ++
>>>   arch/arm64/kvm/sys_regs.c                |  46 +++++++++++++-
>>>   virt/kvm/arm/arm.c                       |   2 +
>>>   10 files changed, 215 insertions(+), 13 deletions(-)
>>>   create mode 100644 arch/arm64/include/asm/kvm_ptrauth_asm.h
>>>
>>> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
>>> index e80cfc1..7a5c7f8 100644
>>> --- a/arch/arm/include/asm/kvm_host.h
>>> +++ b/arch/arm/include/asm/kvm_host.h
>>> @@ -363,6 +363,7 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
>>>   static inline void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu) {}
>>>   static inline void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu) {}
>>>   static inline void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu) {}
>>> +static inline void kvm_arm_vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu) {}
>>>   
>>>   static inline void kvm_arm_vhe_guest_enter(void) {}
>>>   static inline void kvm_arm_vhe_guest_exit(void) {}
>>> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
>>> index 7e34b9e..9e8506e 100644
>>> --- a/arch/arm64/Kconfig
>>> +++ b/arch/arm64/Kconfig
>>> @@ -1301,8 +1301,9 @@ config ARM64_PTR_AUTH
>>>   	  context-switched along with the process.
>>>   
>>>   	  The feature is detected at runtime. If the feature is not present in
>>> -	  hardware it will not be advertised to userspace nor will it be
>>> -	  enabled.
>>> +	  hardware it will not be advertised to userspace/KVM guest nor will it
>>> +	  be enabled. However, KVM guest also require CONFIG_ARM64_VHE=y to use
>>> +	  this feature.
>>
>> Not only does it require CONFIG_ARM64_VHE, but it more importantly
>> requires a VHE system!
> Yes will update.
>>
>>>   
>>>   endmenu
>>>   
>>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
>>> index 31dbc7c..a585d82 100644
>>> --- a/arch/arm64/include/asm/kvm_host.h
>>> +++ b/arch/arm64/include/asm/kvm_host.h
>>> @@ -161,6 +161,18 @@ enum vcpu_sysreg {
>>>   	PMSWINC_EL0,	/* Software Increment Register */
>>>   	PMUSERENR_EL0,	/* User Enable Register */
>>>   
>>> +	/* Pointer Authentication Registers in a strict increasing order. */
>>> +	APIAKEYLO_EL1,
>>> +	APIAKEYHI_EL1 = APIAKEYLO_EL1 + 1,
>>> +	APIBKEYLO_EL1 = APIAKEYLO_EL1 + 2,
>>> +	APIBKEYHI_EL1 = APIAKEYLO_EL1 + 3,
>>> +	APDAKEYLO_EL1 = APIAKEYLO_EL1 + 4,
>>> +	APDAKEYHI_EL1 = APIAKEYLO_EL1 + 5,
>>> +	APDBKEYLO_EL1 = APIAKEYLO_EL1 + 6,
>>> +	APDBKEYHI_EL1 = APIAKEYLO_EL1 + 7,
>>> +	APGAKEYLO_EL1 = APIAKEYLO_EL1 + 8,
>>> +	APGAKEYHI_EL1 = APIAKEYLO_EL1 + 9,
>>
>> Why do we need these explicit +1, +2...? Being an part of an enum
>> already guarantees this.
> Yes enums are increasing. But upcoming struct/enums randomization stuffs 
> may break the ptrauth register offset calculation logic in the later 
> part so explicitly made this to increasing order.

Enum randomization? well, the whole of KVM would break spectacularly,
not to mention most of the kernel.

So no, this isn't a concern, please drop this.

> 
> 
>>
>>> +
>>>   	/* 32bit specific registers. Keep them at the end of the range */
>>>   	DACR32_EL2,	/* Domain Access Control Register */
>>>   	IFSR32_EL2,	/* Instruction Fault Status Register */
>>> @@ -529,6 +541,11 @@ static inline bool kvm_arch_requires_vhe(void)
>>>   	return false;
>>>   }
>>>   
>>> +void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu);
>>> +void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu);
>>> +void kvm_arm_vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu);
>>> +void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu);
>>> +
>>>   static inline void kvm_arch_hardware_unsetup(void) {}
>>>   static inline void kvm_arch_sync_events(struct kvm *kvm) {}
>>>   static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
>>> diff --git a/arch/arm64/include/asm/kvm_ptrauth_asm.h b/arch/arm64/include/asm/kvm_ptrauth_asm.h
>>> new file mode 100644
>>> index 0000000..8142521
>>> --- /dev/null
>>> +++ b/arch/arm64/include/asm/kvm_ptrauth_asm.h
>>
>> nit: this should be named kvm_ptrauth.h. The asm suffix doesn't bring
>> anything to the game, and is somewhat misleading (there are C macros in
>> this file).
>>
>>> @@ -0,0 +1,106 @@
>>> +/* SPDX-License-Identifier: GPL-2.0 */
>>> +/* arch/arm64/include/asm/kvm_ptrauth_asm.h: Guest/host ptrauth save/restore
>>> + * Copyright 2019 Arm Limited
>>> + * Author: Mark Rutland <mark.rutland@arm.com>
>>
>> nit: Authors
> ok.
>>
>>> + *         Amit Daniel Kachhap <amit.kachhap@arm.com>
>>> + */
>>> +
>>> +#ifndef __ASM_KVM_PTRAUTH_ASM_H
>>> +#define __ASM_KVM_PTRAUTH_ASM_H
>>> +
>>> +#ifndef __ASSEMBLY__
>>> +
>>> +#define __ptrauth_save_key(regs, key)						\
>>> +({										\
>>> +	regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1);	\
>>> +	regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1);	\
>>> +})
>>> +
>>> +#define __ptrauth_save_state(ctxt)						\
>>> +({										\
>>> +	__ptrauth_save_key(ctxt->sys_regs, APIA);				\
>>> +	__ptrauth_save_key(ctxt->sys_regs, APIB);				\
>>> +	__ptrauth_save_key(ctxt->sys_regs, APDA);				\
>>> +	__ptrauth_save_key(ctxt->sys_regs, APDB);				\
>>> +	__ptrauth_save_key(ctxt->sys_regs, APGA);				\
>>> +})
>>> +
>>> +#else /* __ASSEMBLY__ */
>>> +
>>> +#include <asm/sysreg.h>
>>> +
>>> +#ifdef	CONFIG_ARM64_PTR_AUTH
>>> +
>>> +#define PTRAUTH_REG_OFFSET(x)	(x - CPU_APIAKEYLO_EL1)
>>> +
>>> +/*
>>> + * CPU_AP*_EL1 values exceed immediate offset range (512) for stp instruction
>>> + * so below macros takes CPU_APIAKEYLO_EL1 as base and calculates the offset of
>>> + * the keys from this base to avoid an extra add instruction. These macros
>>> + * assumes the keys offsets are aligned in a specific increasing order.
>>> + */
>>> +.macro	ptrauth_save_state base, reg1, reg2
>>> +	mrs_s	\reg1, SYS_APIAKEYLO_EL1
>>> +	mrs_s	\reg2, SYS_APIAKEYHI_EL1
>>> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIAKEYLO_EL1)]
>>> +	mrs_s	\reg1, SYS_APIBKEYLO_EL1
>>> +	mrs_s	\reg2, SYS_APIBKEYHI_EL1
>>> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIBKEYLO_EL1)]
>>> +	mrs_s	\reg1, SYS_APDAKEYLO_EL1
>>> +	mrs_s	\reg2, SYS_APDAKEYHI_EL1
>>> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDAKEYLO_EL1)]
>>> +	mrs_s	\reg1, SYS_APDBKEYLO_EL1
>>> +	mrs_s	\reg2, SYS_APDBKEYHI_EL1
>>> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDBKEYLO_EL1)]
>>> +	mrs_s	\reg1, SYS_APGAKEYLO_EL1
>>> +	mrs_s	\reg2, SYS_APGAKEYHI_EL1
>>> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APGAKEYLO_EL1)]
>>> +.endm
>>> +
>>> +.macro	ptrauth_restore_state base, reg1, reg2
>>> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIAKEYLO_EL1)]
>>> +	msr_s	SYS_APIAKEYLO_EL1, \reg1
>>> +	msr_s	SYS_APIAKEYHI_EL1, \reg2
>>> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIBKEYLO_EL1)]
>>> +	msr_s	SYS_APIBKEYLO_EL1, \reg1
>>> +	msr_s	SYS_APIBKEYHI_EL1, \reg2
>>> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDAKEYLO_EL1)]
>>> +	msr_s	SYS_APDAKEYLO_EL1, \reg1
>>> +	msr_s	SYS_APDAKEYHI_EL1, \reg2
>>> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDBKEYLO_EL1)]
>>> +	msr_s	SYS_APDBKEYLO_EL1, \reg1
>>> +	msr_s	SYS_APDBKEYHI_EL1, \reg2
>>> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APGAKEYLO_EL1)]
>>> +	msr_s	SYS_APGAKEYLO_EL1, \reg1
>>> +	msr_s	SYS_APGAKEYHI_EL1, \reg2
>>> +.endm
>>> +
>>> +.macro ptrauth_switch_to_guest g_ctxt, reg1, reg2, reg3
>>> +	ldr	\reg1, [\g_ctxt, #(VCPU_HCR_EL2 - VCPU_CONTEXT)]
>>
>> Given that 100% of the current HW doesn't have ptrauth at all, this
>> becomes an instant and pointless overhead.
>>
>> It could easily be avoided by turning this into:
>>
>> alternative_if_not ARM64_HAS_GENERIC_AUTH_ARCH
>> 	b	1000f
>> alternative_else
>> 	ldr	\reg1, [\g_ctxt, #(VCPU_HCR_EL2 - VCPU_CONTEXT)]
>> alternative_endif
> yes sure. will check.
>>
>>> +	and	\reg1, \reg1, #(HCR_API | HCR_APK)
>>> +	cbz	\reg1, 1000f
>>> +	add	\reg1, \g_ctxt, #CPU_APIAKEYLO_EL1
>>> +	ptrauth_restore_state	\reg1, \reg2, \reg3
>>> +1000:
>>> +.endm
>>> +
>>> +.macro ptrauth_switch_to_host g_ctxt, h_ctxt, reg1, reg2, reg3
>>> +	ldr	\reg1, [\g_ctxt, #(VCPU_HCR_EL2 - VCPU_CONTEXT)]
>>
>> Same thing here.
>>
>>> +	and	\reg1, \reg1, #(HCR_API | HCR_APK)
>>> +	cbz	\reg1, 1001f
>>> +	add	\reg1, \g_ctxt, #CPU_APIAKEYLO_EL1
>>> +	ptrauth_save_state	\reg1, \reg2, \reg3
>>> +	add	\reg1, \h_ctxt, #CPU_APIAKEYLO_EL1
>>> +	ptrauth_restore_state	\reg1, \reg2, \reg3
>>> +	isb
>>> +1001:
>>> +.endm
>>> +
>>> +#else /* !CONFIG_ARM64_PTR_AUTH */
>>> +.macro ptrauth_switch_to_guest g_ctxt, reg1, reg2, reg3
>>> +.endm
>>> +.macro ptrauth_switch_to_host g_ctxt, h_ctxt, reg1, reg2, reg3
>>> +.endm
>>> +#endif /* CONFIG_ARM64_PTR_AUTH */
>>> +#endif /* __ASSEMBLY__ */
>>> +#endif /* __ASM_KVM_PTRAUTH_ASM_H */
>>> diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
>>> index 7f40dcb..8178330 100644
>>> --- a/arch/arm64/kernel/asm-offsets.c
>>> +++ b/arch/arm64/kernel/asm-offsets.c
>>> @@ -125,7 +125,13 @@ int main(void)
>>>     DEFINE(VCPU_CONTEXT,		offsetof(struct kvm_vcpu, arch.ctxt));
>>>     DEFINE(VCPU_FAULT_DISR,	offsetof(struct kvm_vcpu, arch.fault.disr_el1));
>>>     DEFINE(VCPU_WORKAROUND_FLAGS,	offsetof(struct kvm_vcpu, arch.workaround_flags));
>>> +  DEFINE(VCPU_HCR_EL2,		offsetof(struct kvm_vcpu, arch.hcr_el2));
>>>     DEFINE(CPU_GP_REGS,		offsetof(struct kvm_cpu_context, gp_regs));
>>> +  DEFINE(CPU_APIAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APIAKEYLO_EL1]));
>>> +  DEFINE(CPU_APIBKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APIBKEYLO_EL1]));
>>> +  DEFINE(CPU_APDAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APDAKEYLO_EL1]));
>>> +  DEFINE(CPU_APDBKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APDBKEYLO_EL1]));
>>> +  DEFINE(CPU_APGAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APGAKEYLO_EL1]));
>>>     DEFINE(CPU_USER_PT_REGS,	offsetof(struct kvm_regs, regs));
>>>     DEFINE(HOST_CONTEXT_VCPU,	offsetof(struct kvm_cpu_context, __hyp_running_vcpu));
>>>   #endif
>>> diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
>>> index 4f7b26b..e07f763 100644
>>> --- a/arch/arm64/kvm/guest.c
>>> +++ b/arch/arm64/kvm/guest.c
>>> @@ -878,3 +878,17 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
>>>   
>>>   	return ret;
>>>   }
>>> +
>>> +/**
>>> + * kvm_arm_vcpu_ptrauth_setup_lazy - setup lazy ptrauth for vcpu schedule
>>> + *
>>> + * @vcpu: The VCPU pointer
>>> + *
>>> + * This function may be used to disable ptrauth and use it in a lazy context
>>> + * via traps.
>>> + */
>>> +void kvm_arm_vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu)
>>> +{
>>> +	if (vcpu_has_ptrauth(vcpu))
>>> +		kvm_arm_vcpu_ptrauth_disable(vcpu);
>>> +}
>>
>> Why does this live in guest.c?
> Many global functions used in virt/kvm/arm/arm.c are implemented here.

None that are used on vcpu_load().

> 
> However some similar kinds of function are in asm/kvm_emulate.h so can 
> be moved there as static inline.

Exactly.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v9 2/5] KVM: arm/arm64: context-switch ptrauth registers
@ 2019-04-17 14:39         ` Marc Zyngier
  0 siblings, 0 replies; 77+ messages in thread
From: Marc Zyngier @ 2019-04-17 14:39 UTC (permalink / raw)
  To: Amit Daniel Kachhap, linux-arm-kernel
  Cc: Mark Rutland, Andrew Jones, Julien Thierry, Catalin Marinas,
	Will Deacon, Christoffer Dall, Kristina Martsenko, kvmarm,
	James Morse, Ramana Radhakrishnan, Dave Martin, linux-kernel

On 17/04/2019 15:24, Amit Daniel Kachhap wrote:
> Hi Marc,
> 
> On 4/17/19 2:39 PM, Marc Zyngier wrote:
>> Hi Amit,
>>
>> On 12/04/2019 04:20, Amit Daniel Kachhap wrote:
>>> From: Mark Rutland <mark.rutland@arm.com>
>>>
>>> When pointer authentication is supported, a guest may wish to use it.
>>> This patch adds the necessary KVM infrastructure for this to work, with
>>> a semi-lazy context switch of the pointer auth state.
>>>
>>> Pointer authentication feature is only enabled when VHE is built
>>> in the kernel and present in the CPU implementation so only VHE code
>>> paths are modified.
>>>
>>> When we schedule a vcpu, we disable guest usage of pointer
>>> authentication instructions and accesses to the keys. While these are
>>> disabled, we avoid context-switching the keys. When we trap the guest
>>> trying to use pointer authentication functionality, we change to eagerly
>>> context-switching the keys, and enable the feature. The next time the
>>> vcpu is scheduled out/in, we start again. However the host key save is
>>> optimized and implemented inside ptrauth instruction/register access
>>> trap.
>>>
>>> Pointer authentication consists of address authentication and generic
>>> authentication, and CPUs in a system might have varied support for
>>> either. Where support for either feature is not uniform, it is hidden
>>> from guests via ID register emulation, as a result of the cpufeature
>>> framework in the host.
>>>
>>> Unfortunately, address authentication and generic authentication cannot
>>> be trapped separately, as the architecture provides a single EL2 trap
>>> covering both. If we wish to expose one without the other, we cannot
>>> prevent a (badly-written) guest from intermittently using a feature
>>> which is not uniformly supported (when scheduled on a physical CPU which
>>> supports the relevant feature). Hence, this patch expects both type of
>>> authentication to be present in a cpu.
>>>
>>> This switch of key is done from guest enter/exit assembly as preparation
>>> for the upcoming in-kernel pointer authentication support. Hence, these
>>> key switching routines are not implemented in C code as they may cause
>>> pointer authentication key signing error in some situations.
>>>
>>> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
>>> [Only VHE, key switch in full assembly, vcpu_has_ptrauth checks
>>> , save host key in ptrauth exception trap]
>>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>>> Reviewed-by: Julien Thierry <julien.thierry@arm.com>
>>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>>> Cc: Christoffer Dall <christoffer.dall@arm.com>
>>> Cc: kvmarm@lists.cs.columbia.edu
>>> ---
>>>
>>> Changes since v9:
>>> * Used high order number for branching in assembly macros. [Kristina Martsenko]
>>> * Taken care of different offset for hcr_el2 now.
>>>
>>>   arch/arm/include/asm/kvm_host.h          |   1 +
>>>   arch/arm64/Kconfig                       |   5 +-
>>>   arch/arm64/include/asm/kvm_host.h        |  17 +++++
>>>   arch/arm64/include/asm/kvm_ptrauth_asm.h | 106 +++++++++++++++++++++++++++++++
>>>   arch/arm64/kernel/asm-offsets.c          |   6 ++
>>>   arch/arm64/kvm/guest.c                   |  14 ++++
>>>   arch/arm64/kvm/handle_exit.c             |  24 ++++---
>>>   arch/arm64/kvm/hyp/entry.S               |   7 ++
>>>   arch/arm64/kvm/sys_regs.c                |  46 +++++++++++++-
>>>   virt/kvm/arm/arm.c                       |   2 +
>>>   10 files changed, 215 insertions(+), 13 deletions(-)
>>>   create mode 100644 arch/arm64/include/asm/kvm_ptrauth_asm.h
>>>
>>> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
>>> index e80cfc1..7a5c7f8 100644
>>> --- a/arch/arm/include/asm/kvm_host.h
>>> +++ b/arch/arm/include/asm/kvm_host.h
>>> @@ -363,6 +363,7 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
>>>   static inline void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu) {}
>>>   static inline void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu) {}
>>>   static inline void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu) {}
>>> +static inline void kvm_arm_vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu) {}
>>>   
>>>   static inline void kvm_arm_vhe_guest_enter(void) {}
>>>   static inline void kvm_arm_vhe_guest_exit(void) {}
>>> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
>>> index 7e34b9e..9e8506e 100644
>>> --- a/arch/arm64/Kconfig
>>> +++ b/arch/arm64/Kconfig
>>> @@ -1301,8 +1301,9 @@ config ARM64_PTR_AUTH
>>>   	  context-switched along with the process.
>>>   
>>>   	  The feature is detected at runtime. If the feature is not present in
>>> -	  hardware it will not be advertised to userspace nor will it be
>>> -	  enabled.
>>> +	  hardware it will not be advertised to userspace/KVM guest nor will it
>>> +	  be enabled. However, KVM guest also require CONFIG_ARM64_VHE=y to use
>>> +	  this feature.
>>
>> Not only does it require CONFIG_ARM64_VHE, but it more importantly
>> requires a VHE system!
> Yes will update.
>>
>>>   
>>>   endmenu
>>>   
>>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
>>> index 31dbc7c..a585d82 100644
>>> --- a/arch/arm64/include/asm/kvm_host.h
>>> +++ b/arch/arm64/include/asm/kvm_host.h
>>> @@ -161,6 +161,18 @@ enum vcpu_sysreg {
>>>   	PMSWINC_EL0,	/* Software Increment Register */
>>>   	PMUSERENR_EL0,	/* User Enable Register */
>>>   
>>> +	/* Pointer Authentication Registers in a strict increasing order. */
>>> +	APIAKEYLO_EL1,
>>> +	APIAKEYHI_EL1 = APIAKEYLO_EL1 + 1,
>>> +	APIBKEYLO_EL1 = APIAKEYLO_EL1 + 2,
>>> +	APIBKEYHI_EL1 = APIAKEYLO_EL1 + 3,
>>> +	APDAKEYLO_EL1 = APIAKEYLO_EL1 + 4,
>>> +	APDAKEYHI_EL1 = APIAKEYLO_EL1 + 5,
>>> +	APDBKEYLO_EL1 = APIAKEYLO_EL1 + 6,
>>> +	APDBKEYHI_EL1 = APIAKEYLO_EL1 + 7,
>>> +	APGAKEYLO_EL1 = APIAKEYLO_EL1 + 8,
>>> +	APGAKEYHI_EL1 = APIAKEYLO_EL1 + 9,
>>
>> Why do we need these explicit +1, +2...? Being an part of an enum
>> already guarantees this.
> Yes enums are increasing. But upcoming struct/enums randomization stuffs 
> may break the ptrauth register offset calculation logic in the later 
> part so explicitly made this to increasing order.

Enum randomization? well, the whole of KVM would break spectacularly,
not to mention most of the kernel.

So no, this isn't a concern, please drop this.

> 
> 
>>
>>> +
>>>   	/* 32bit specific registers. Keep them at the end of the range */
>>>   	DACR32_EL2,	/* Domain Access Control Register */
>>>   	IFSR32_EL2,	/* Instruction Fault Status Register */
>>> @@ -529,6 +541,11 @@ static inline bool kvm_arch_requires_vhe(void)
>>>   	return false;
>>>   }
>>>   
>>> +void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu);
>>> +void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu);
>>> +void kvm_arm_vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu);
>>> +void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu);
>>> +
>>>   static inline void kvm_arch_hardware_unsetup(void) {}
>>>   static inline void kvm_arch_sync_events(struct kvm *kvm) {}
>>>   static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
>>> diff --git a/arch/arm64/include/asm/kvm_ptrauth_asm.h b/arch/arm64/include/asm/kvm_ptrauth_asm.h
>>> new file mode 100644
>>> index 0000000..8142521
>>> --- /dev/null
>>> +++ b/arch/arm64/include/asm/kvm_ptrauth_asm.h
>>
>> nit: this should be named kvm_ptrauth.h. The asm suffix doesn't bring
>> anything to the game, and is somewhat misleading (there are C macros in
>> this file).
>>
>>> @@ -0,0 +1,106 @@
>>> +/* SPDX-License-Identifier: GPL-2.0 */
>>> +/* arch/arm64/include/asm/kvm_ptrauth_asm.h: Guest/host ptrauth save/restore
>>> + * Copyright 2019 Arm Limited
>>> + * Author: Mark Rutland <mark.rutland@arm.com>
>>
>> nit: Authors
> ok.
>>
>>> + *         Amit Daniel Kachhap <amit.kachhap@arm.com>
>>> + */
>>> +
>>> +#ifndef __ASM_KVM_PTRAUTH_ASM_H
>>> +#define __ASM_KVM_PTRAUTH_ASM_H
>>> +
>>> +#ifndef __ASSEMBLY__
>>> +
>>> +#define __ptrauth_save_key(regs, key)						\
>>> +({										\
>>> +	regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1);	\
>>> +	regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1);	\
>>> +})
>>> +
>>> +#define __ptrauth_save_state(ctxt)						\
>>> +({										\
>>> +	__ptrauth_save_key(ctxt->sys_regs, APIA);				\
>>> +	__ptrauth_save_key(ctxt->sys_regs, APIB);				\
>>> +	__ptrauth_save_key(ctxt->sys_regs, APDA);				\
>>> +	__ptrauth_save_key(ctxt->sys_regs, APDB);				\
>>> +	__ptrauth_save_key(ctxt->sys_regs, APGA);				\
>>> +})
>>> +
>>> +#else /* __ASSEMBLY__ */
>>> +
>>> +#include <asm/sysreg.h>
>>> +
>>> +#ifdef	CONFIG_ARM64_PTR_AUTH
>>> +
>>> +#define PTRAUTH_REG_OFFSET(x)	(x - CPU_APIAKEYLO_EL1)
>>> +
>>> +/*
>>> + * CPU_AP*_EL1 values exceed immediate offset range (512) for stp instruction
>>> + * so below macros takes CPU_APIAKEYLO_EL1 as base and calculates the offset of
>>> + * the keys from this base to avoid an extra add instruction. These macros
>>> + * assumes the keys offsets are aligned in a specific increasing order.
>>> + */
>>> +.macro	ptrauth_save_state base, reg1, reg2
>>> +	mrs_s	\reg1, SYS_APIAKEYLO_EL1
>>> +	mrs_s	\reg2, SYS_APIAKEYHI_EL1
>>> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIAKEYLO_EL1)]
>>> +	mrs_s	\reg1, SYS_APIBKEYLO_EL1
>>> +	mrs_s	\reg2, SYS_APIBKEYHI_EL1
>>> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIBKEYLO_EL1)]
>>> +	mrs_s	\reg1, SYS_APDAKEYLO_EL1
>>> +	mrs_s	\reg2, SYS_APDAKEYHI_EL1
>>> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDAKEYLO_EL1)]
>>> +	mrs_s	\reg1, SYS_APDBKEYLO_EL1
>>> +	mrs_s	\reg2, SYS_APDBKEYHI_EL1
>>> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDBKEYLO_EL1)]
>>> +	mrs_s	\reg1, SYS_APGAKEYLO_EL1
>>> +	mrs_s	\reg2, SYS_APGAKEYHI_EL1
>>> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APGAKEYLO_EL1)]
>>> +.endm
>>> +
>>> +.macro	ptrauth_restore_state base, reg1, reg2
>>> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIAKEYLO_EL1)]
>>> +	msr_s	SYS_APIAKEYLO_EL1, \reg1
>>> +	msr_s	SYS_APIAKEYHI_EL1, \reg2
>>> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIBKEYLO_EL1)]
>>> +	msr_s	SYS_APIBKEYLO_EL1, \reg1
>>> +	msr_s	SYS_APIBKEYHI_EL1, \reg2
>>> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDAKEYLO_EL1)]
>>> +	msr_s	SYS_APDAKEYLO_EL1, \reg1
>>> +	msr_s	SYS_APDAKEYHI_EL1, \reg2
>>> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDBKEYLO_EL1)]
>>> +	msr_s	SYS_APDBKEYLO_EL1, \reg1
>>> +	msr_s	SYS_APDBKEYHI_EL1, \reg2
>>> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APGAKEYLO_EL1)]
>>> +	msr_s	SYS_APGAKEYLO_EL1, \reg1
>>> +	msr_s	SYS_APGAKEYHI_EL1, \reg2
>>> +.endm
>>> +
>>> +.macro ptrauth_switch_to_guest g_ctxt, reg1, reg2, reg3
>>> +	ldr	\reg1, [\g_ctxt, #(VCPU_HCR_EL2 - VCPU_CONTEXT)]
>>
>> Given that 100% of the current HW doesn't have ptrauth at all, this
>> becomes an instant and pointless overhead.
>>
>> It could easily be avoided by turning this into:
>>
>> alternative_if_not ARM64_HAS_GENERIC_AUTH_ARCH
>> 	b	1000f
>> alternative_else
>> 	ldr	\reg1, [\g_ctxt, #(VCPU_HCR_EL2 - VCPU_CONTEXT)]
>> alternative_endif
> yes sure. will check.
>>
>>> +	and	\reg1, \reg1, #(HCR_API | HCR_APK)
>>> +	cbz	\reg1, 1000f
>>> +	add	\reg1, \g_ctxt, #CPU_APIAKEYLO_EL1
>>> +	ptrauth_restore_state	\reg1, \reg2, \reg3
>>> +1000:
>>> +.endm
>>> +
>>> +.macro ptrauth_switch_to_host g_ctxt, h_ctxt, reg1, reg2, reg3
>>> +	ldr	\reg1, [\g_ctxt, #(VCPU_HCR_EL2 - VCPU_CONTEXT)]
>>
>> Same thing here.
>>
>>> +	and	\reg1, \reg1, #(HCR_API | HCR_APK)
>>> +	cbz	\reg1, 1001f
>>> +	add	\reg1, \g_ctxt, #CPU_APIAKEYLO_EL1
>>> +	ptrauth_save_state	\reg1, \reg2, \reg3
>>> +	add	\reg1, \h_ctxt, #CPU_APIAKEYLO_EL1
>>> +	ptrauth_restore_state	\reg1, \reg2, \reg3
>>> +	isb
>>> +1001:
>>> +.endm
>>> +
>>> +#else /* !CONFIG_ARM64_PTR_AUTH */
>>> +.macro ptrauth_switch_to_guest g_ctxt, reg1, reg2, reg3
>>> +.endm
>>> +.macro ptrauth_switch_to_host g_ctxt, h_ctxt, reg1, reg2, reg3
>>> +.endm
>>> +#endif /* CONFIG_ARM64_PTR_AUTH */
>>> +#endif /* __ASSEMBLY__ */
>>> +#endif /* __ASM_KVM_PTRAUTH_ASM_H */
>>> diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
>>> index 7f40dcb..8178330 100644
>>> --- a/arch/arm64/kernel/asm-offsets.c
>>> +++ b/arch/arm64/kernel/asm-offsets.c
>>> @@ -125,7 +125,13 @@ int main(void)
>>>     DEFINE(VCPU_CONTEXT,		offsetof(struct kvm_vcpu, arch.ctxt));
>>>     DEFINE(VCPU_FAULT_DISR,	offsetof(struct kvm_vcpu, arch.fault.disr_el1));
>>>     DEFINE(VCPU_WORKAROUND_FLAGS,	offsetof(struct kvm_vcpu, arch.workaround_flags));
>>> +  DEFINE(VCPU_HCR_EL2,		offsetof(struct kvm_vcpu, arch.hcr_el2));
>>>     DEFINE(CPU_GP_REGS,		offsetof(struct kvm_cpu_context, gp_regs));
>>> +  DEFINE(CPU_APIAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APIAKEYLO_EL1]));
>>> +  DEFINE(CPU_APIBKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APIBKEYLO_EL1]));
>>> +  DEFINE(CPU_APDAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APDAKEYLO_EL1]));
>>> +  DEFINE(CPU_APDBKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APDBKEYLO_EL1]));
>>> +  DEFINE(CPU_APGAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APGAKEYLO_EL1]));
>>>     DEFINE(CPU_USER_PT_REGS,	offsetof(struct kvm_regs, regs));
>>>     DEFINE(HOST_CONTEXT_VCPU,	offsetof(struct kvm_cpu_context, __hyp_running_vcpu));
>>>   #endif
>>> diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
>>> index 4f7b26b..e07f763 100644
>>> --- a/arch/arm64/kvm/guest.c
>>> +++ b/arch/arm64/kvm/guest.c
>>> @@ -878,3 +878,17 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
>>>   
>>>   	return ret;
>>>   }
>>> +
>>> +/**
>>> + * kvm_arm_vcpu_ptrauth_setup_lazy - setup lazy ptrauth for vcpu schedule
>>> + *
>>> + * @vcpu: The VCPU pointer
>>> + *
>>> + * This function may be used to disable ptrauth and use it in a lazy context
>>> + * via traps.
>>> + */
>>> +void kvm_arm_vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu)
>>> +{
>>> +	if (vcpu_has_ptrauth(vcpu))
>>> +		kvm_arm_vcpu_ptrauth_disable(vcpu);
>>> +}
>>
>> Why does this live in guest.c?
> Many global functions used in virt/kvm/arm/arm.c are implemented here.

None that are used on vcpu_load().

> 
> However some similar kinds of function are in asm/kvm_emulate.h so can 
> be moved there as static inline.

Exactly.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v9 1/5] KVM: arm64: Add a vcpu flag to control ptrauth for guest
@ 2019-04-17 14:52           ` Dave Martin
  0 siblings, 0 replies; 77+ messages in thread
From: Dave Martin @ 2019-04-17 14:52 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Amit Daniel Kachhap, linux-arm-kernel, Catalin Marinas,
	Will Deacon, Kristina Martsenko, kvmarm, Ramana Radhakrishnan,
	linux-kernel

On Wed, Apr 17, 2019 at 03:19:11PM +0100, Marc Zyngier wrote:
> On 17/04/2019 14:08, Amit Daniel Kachhap wrote:
> > Hi,
> > 
> > On 4/17/19 2:05 PM, Marc Zyngier wrote:
> >> On 12/04/2019 04:20, Amit Daniel Kachhap wrote:
> >>> A per vcpu flag is added to check if pointer authentication is
> >>> enabled for the vcpu or not. This flag may be enabled according to
> >>> the necessary user policies and host capabilities.
> >>>
> >>> This patch also adds a helper to check the flag.
> >>>
> >>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> >>> Cc: Mark Rutland <mark.rutland@arm.com>
> >>> Cc: Marc Zyngier <marc.zyngier@arm.com>
> >>> Cc: Christoffer Dall <christoffer.dall@arm.com>
> >>> Cc: kvmarm@lists.cs.columbia.edu
> >>> ---
> >>>
> >>> Changes since v8:
> >>> * Added a new per vcpu flag which will store Pointer Authentication enable
> >>>    status instead of checking them again. [Dave Martin]
> >>>
> >>>   arch/arm64/include/asm/kvm_host.h | 4 ++++
> >>>   1 file changed, 4 insertions(+)
> >>>
> >>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> >>> index 9d57cf8..31dbc7c 100644
> >>> --- a/arch/arm64/include/asm/kvm_host.h
> >>> +++ b/arch/arm64/include/asm/kvm_host.h
> >>> @@ -355,10 +355,14 @@ struct kvm_vcpu_arch {
> >>>   #define KVM_ARM64_HOST_SVE_ENABLED	(1 << 4) /* SVE enabled for EL0 */
> >>>   #define KVM_ARM64_GUEST_HAS_SVE		(1 << 5) /* SVE exposed to guest */
> >>>   #define KVM_ARM64_VCPU_SVE_FINALIZED	(1 << 6) /* SVE config completed */
> >>> +#define KVM_ARM64_GUEST_HAS_PTRAUTH	(1 << 7) /* PTRAUTH exposed to guest */
> >>>   
> >>>   #define vcpu_has_sve(vcpu) (system_supports_sve() && \
> >>>   			    ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_SVE))
> >>>   
> >>> +#define vcpu_has_ptrauth(vcpu)	\
> >>> +			((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH)
> >>> +
> >>
> >> Just as for SVE, please first check that the system has PTRAUTH.
> >> Something like:
> >>
> >> 		(cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_ARCH) && \
> >> 		 ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH))
> > 
> > In the subsequent patches, vcpu->arch.flags is only set to 
> > KVM_ARM64_GUEST_HAS_PTRAUTH when all host capability check conditions 
> > matches such as system_supports_address_auth(), 
> > system_supports_generic_auth() so doing them again is repetitive in my view.
> 
> It isn't the setting of the flag I care about, but the check of that
> flag. Checking a flag for a feature that cannot be used on the running
> system should have a zero cost, which isn't the case here.
> 
> Granted, the impact should be minimal and it looks like it mostly happen
> on the slow path, but at the very least it would be consistent. So even
> if you don't buy my argument about efficiency, please change it in the
> name of consistency.

One of the annoyances here is there is no single static key for ptrauth.

I'm assuming we don't want to check both static keys (for address and
generic auth) on hot paths.

Checking just one of the two possibilities is OK for now, but we need
to comment clearly somewhere that that will break if KVM is changed
later to expose ptrauth to guests when the host doesn't support both
types.

Cheers
---Dave

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v9 1/5] KVM: arm64: Add a vcpu flag to control ptrauth for guest
@ 2019-04-17 14:52           ` Dave Martin
  0 siblings, 0 replies; 77+ messages in thread
From: Dave Martin @ 2019-04-17 14:52 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Catalin Marinas, Will Deacon, linux-kernel, Kristina Martsenko,
	Ramana Radhakrishnan, Amit Daniel Kachhap, kvmarm,
	linux-arm-kernel

On Wed, Apr 17, 2019 at 03:19:11PM +0100, Marc Zyngier wrote:
> On 17/04/2019 14:08, Amit Daniel Kachhap wrote:
> > Hi,
> > 
> > On 4/17/19 2:05 PM, Marc Zyngier wrote:
> >> On 12/04/2019 04:20, Amit Daniel Kachhap wrote:
> >>> A per vcpu flag is added to check if pointer authentication is
> >>> enabled for the vcpu or not. This flag may be enabled according to
> >>> the necessary user policies and host capabilities.
> >>>
> >>> This patch also adds a helper to check the flag.
> >>>
> >>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> >>> Cc: Mark Rutland <mark.rutland@arm.com>
> >>> Cc: Marc Zyngier <marc.zyngier@arm.com>
> >>> Cc: Christoffer Dall <christoffer.dall@arm.com>
> >>> Cc: kvmarm@lists.cs.columbia.edu
> >>> ---
> >>>
> >>> Changes since v8:
> >>> * Added a new per vcpu flag which will store Pointer Authentication enable
> >>>    status instead of checking them again. [Dave Martin]
> >>>
> >>>   arch/arm64/include/asm/kvm_host.h | 4 ++++
> >>>   1 file changed, 4 insertions(+)
> >>>
> >>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> >>> index 9d57cf8..31dbc7c 100644
> >>> --- a/arch/arm64/include/asm/kvm_host.h
> >>> +++ b/arch/arm64/include/asm/kvm_host.h
> >>> @@ -355,10 +355,14 @@ struct kvm_vcpu_arch {
> >>>   #define KVM_ARM64_HOST_SVE_ENABLED	(1 << 4) /* SVE enabled for EL0 */
> >>>   #define KVM_ARM64_GUEST_HAS_SVE		(1 << 5) /* SVE exposed to guest */
> >>>   #define KVM_ARM64_VCPU_SVE_FINALIZED	(1 << 6) /* SVE config completed */
> >>> +#define KVM_ARM64_GUEST_HAS_PTRAUTH	(1 << 7) /* PTRAUTH exposed to guest */
> >>>   
> >>>   #define vcpu_has_sve(vcpu) (system_supports_sve() && \
> >>>   			    ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_SVE))
> >>>   
> >>> +#define vcpu_has_ptrauth(vcpu)	\
> >>> +			((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH)
> >>> +
> >>
> >> Just as for SVE, please first check that the system has PTRAUTH.
> >> Something like:
> >>
> >> 		(cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_ARCH) && \
> >> 		 ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH))
> > 
> > In the subsequent patches, vcpu->arch.flags is only set to 
> > KVM_ARM64_GUEST_HAS_PTRAUTH when all host capability check conditions 
> > matches such as system_supports_address_auth(), 
> > system_supports_generic_auth() so doing them again is repetitive in my view.
> 
> It isn't the setting of the flag I care about, but the check of that
> flag. Checking a flag for a feature that cannot be used on the running
> system should have a zero cost, which isn't the case here.
> 
> Granted, the impact should be minimal and it looks like it mostly happen
> on the slow path, but at the very least it would be consistent. So even
> if you don't buy my argument about efficiency, please change it in the
> name of consistency.

One of the annoyances here is there is no single static key for ptrauth.

I'm assuming we don't want to check both static keys (for address and
generic auth) on hot paths.

Checking just one of the two possibilities is OK for now, but we need
to comment clearly somewhere that that will break if KVM is changed
later to expose ptrauth to guests when the host doesn't support both
types.

Cheers
---Dave
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v9 1/5] KVM: arm64: Add a vcpu flag to control ptrauth for guest
@ 2019-04-17 14:52           ` Dave Martin
  0 siblings, 0 replies; 77+ messages in thread
From: Dave Martin @ 2019-04-17 14:52 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Catalin Marinas, Will Deacon, linux-kernel, Kristina Martsenko,
	Ramana Radhakrishnan, Amit Daniel Kachhap, kvmarm,
	linux-arm-kernel

On Wed, Apr 17, 2019 at 03:19:11PM +0100, Marc Zyngier wrote:
> On 17/04/2019 14:08, Amit Daniel Kachhap wrote:
> > Hi,
> > 
> > On 4/17/19 2:05 PM, Marc Zyngier wrote:
> >> On 12/04/2019 04:20, Amit Daniel Kachhap wrote:
> >>> A per vcpu flag is added to check if pointer authentication is
> >>> enabled for the vcpu or not. This flag may be enabled according to
> >>> the necessary user policies and host capabilities.
> >>>
> >>> This patch also adds a helper to check the flag.
> >>>
> >>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> >>> Cc: Mark Rutland <mark.rutland@arm.com>
> >>> Cc: Marc Zyngier <marc.zyngier@arm.com>
> >>> Cc: Christoffer Dall <christoffer.dall@arm.com>
> >>> Cc: kvmarm@lists.cs.columbia.edu
> >>> ---
> >>>
> >>> Changes since v8:
> >>> * Added a new per vcpu flag which will store Pointer Authentication enable
> >>>    status instead of checking them again. [Dave Martin]
> >>>
> >>>   arch/arm64/include/asm/kvm_host.h | 4 ++++
> >>>   1 file changed, 4 insertions(+)
> >>>
> >>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> >>> index 9d57cf8..31dbc7c 100644
> >>> --- a/arch/arm64/include/asm/kvm_host.h
> >>> +++ b/arch/arm64/include/asm/kvm_host.h
> >>> @@ -355,10 +355,14 @@ struct kvm_vcpu_arch {
> >>>   #define KVM_ARM64_HOST_SVE_ENABLED	(1 << 4) /* SVE enabled for EL0 */
> >>>   #define KVM_ARM64_GUEST_HAS_SVE		(1 << 5) /* SVE exposed to guest */
> >>>   #define KVM_ARM64_VCPU_SVE_FINALIZED	(1 << 6) /* SVE config completed */
> >>> +#define KVM_ARM64_GUEST_HAS_PTRAUTH	(1 << 7) /* PTRAUTH exposed to guest */
> >>>   
> >>>   #define vcpu_has_sve(vcpu) (system_supports_sve() && \
> >>>   			    ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_SVE))
> >>>   
> >>> +#define vcpu_has_ptrauth(vcpu)	\
> >>> +			((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH)
> >>> +
> >>
> >> Just as for SVE, please first check that the system has PTRAUTH.
> >> Something like:
> >>
> >> 		(cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_ARCH) && \
> >> 		 ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH))
> > 
> > In the subsequent patches, vcpu->arch.flags is only set to 
> > KVM_ARM64_GUEST_HAS_PTRAUTH when all host capability check conditions 
> > matches such as system_supports_address_auth(), 
> > system_supports_generic_auth() so doing them again is repetitive in my view.
> 
> It isn't the setting of the flag I care about, but the check of that
> flag. Checking a flag for a feature that cannot be used on the running
> system should have a zero cost, which isn't the case here.
> 
> Granted, the impact should be minimal and it looks like it mostly happen
> on the slow path, but at the very least it would be consistent. So even
> if you don't buy my argument about efficiency, please change it in the
> name of consistency.

One of the annoyances here is there is no single static key for ptrauth.

I'm assuming we don't want to check both static keys (for address and
generic auth) on hot paths.

Checking just one of the two possibilities is OK for now, but we need
to comment clearly somewhere that that will break if KVM is changed
later to expose ptrauth to guests when the host doesn't support both
types.

Cheers
---Dave

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v9 4/5] KVM: arm64: Add capability to advertise ptrauth for guest
@ 2019-04-17 15:22         ` Dave Martin
  0 siblings, 0 replies; 77+ messages in thread
From: Dave Martin @ 2019-04-17 15:22 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: Marc Zyngier, Catalin Marinas, Will Deacon, linux-kernel,
	Kristina Martsenko, Ramana Radhakrishnan, kvmarm,
	linux-arm-kernel

On Wed, Apr 17, 2019 at 03:09:02PM +0530, Amit Daniel Kachhap wrote:
> Hi,
> 
> On 4/16/19 10:02 PM, Dave Martin wrote:
> >On Fri, Apr 12, 2019 at 08:50:35AM +0530, Amit Daniel Kachhap wrote:
> >>This patch advertises the capability of two cpu feature called address
> >>pointer authentication and generic pointer authentication. These
> >>capabilities depend upon system support for pointer authentication and
> >>VHE mode.
> >>
> >>The current arm64 KVM partially implements pointer authentication and
> >>support of address/generic authentication are tied together. However,
> >>separate ABI requirements for both of them is added so that any future
> >>isolated implementation will not require any ABI changes.
> >>
> >>Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> >>Cc: Mark Rutland <mark.rutland@arm.com>
> >>Cc: Marc Zyngier <marc.zyngier@arm.com>
> >>Cc: Christoffer Dall <christoffer.dall@arm.com>
> >>Cc: kvmarm@lists.cs.columbia.edu
> >>---
> >>Changes since v8:
> >>*  Keep the capability check same for the 2 vcpu ptrauth features. [Dave Martin]
> >>
> >>  Documentation/virtual/kvm/api.txt | 2 ++
> >>  arch/arm64/kvm/reset.c            | 5 +++++
> >>  include/uapi/linux/kvm.h          | 2 ++
> >>  3 files changed, 9 insertions(+)
> >>
> >>diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
> >>index 9d202f4..56021d0 100644
> >>--- a/Documentation/virtual/kvm/api.txt
> >>+++ b/Documentation/virtual/kvm/api.txt
> >>@@ -2756,9 +2756,11 @@ Possible features:
> >>  	- KVM_ARM_VCPU_PTRAUTH_ADDRESS: Enables Address Pointer authentication
> >>  	  for the CPU and supported only on arm64 architecture.
> >>  	  Must be requested if KVM_ARM_VCPU_PTRAUTH_GENERIC is also requested.
> >>+	  Depends on KVM_CAP_ARM_PTRAUTH_ADDRESS.
> >
> >What if KVM_CAP_ARM_PTRAUTH_ADDRESS is absent and
> >KVM_ARM_VCPU_PTRAUTH_GENERIC is requested?  By these rules, we have a
> >contradiction: userspace both must request and must not request
> >KVM_ARM_VCPU_PTRAUTH_ADDRESS.
> >
> >We could qualify as follows:
> >
> >	Depends on KVM_CAP_ARM_PTRAUTH_ADDRESS.
> >	Must be requested if KVM_CAP_ARM_PTRAUTH_ADDRESS is present and
> >	KVM_ARM_VCPU_PTRAUTH_GENERIC is also requested.
> ok agree. This makes it clear.

[*]

> >>  	- KVM_ARM_VCPU_PTRAUTH_GENERIC: Enables Generic Pointer authentication
> >>  	  for the CPU and supported only on arm64 architecture.
> >>  	  Must be requested if KVM_ARM_VCPU_PTRAUTH_ADDRESS is also requested.
> >>+	  Depends on KVM_CAP_ARM_PTRAUTH_GENERIC.
> >
> >Similarly.
> >
> >Or, we go back to having a single cap and a single feature, and add
> >more caps/features later on if we decide it's possible to support
> >address/generic auth separately later on.
> >
> >Otherwise, we end up with complex rules that can't be tested.  This is a
> >high price to pay for forwards compatibility: userspace's conformance to
> >the rules can't be fully tested, so there's a fair chance it won't work
> >properly anyway when hardware/KVM with just one auth type appears.
> >
> >[...]
> >
> >Thoughts?
> I agree that single cpufeature/capability is a simple solution to implement.
> The bifurcation of feature was done to reflect the different ID register
> split up.
> 
> But the h/w implementation provides a same EL2 exception trap for both the
> features and hence current implementation ties both of the features
> together. I guess in future if this is limitation goes away then one auth
> type is possible. Here I am not sure if the future h/w will retain this
> merged exception trap and add 2 new separate exception trap in addition to
> it.
> 
> I guess it will be probably simple split-up of this merged exception trap.
> In this case there won't be any ABI change required as per current
> implementation.

OK, I'm not opposed to keeping the ABI as-is, with the above
clarification [*] spelled out appropriately for both cases.

Alternatively, or in addition, we could say something like:

"If KVM_CAP_ARM_PTRAUTH_ADDRESS and KVM_CAP_ARM_PTRAUTH_GENERIC are
both present, then both KVM_ARM_VCPU_PTRAUTH_ADDRESS and
KVM_ARM_VCPU_PTRAUTH_GENERIC must be requested or neither must be
requested."

Cheers
---Dave

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v9 4/5] KVM: arm64: Add capability to advertise ptrauth for guest
@ 2019-04-17 15:22         ` Dave Martin
  0 siblings, 0 replies; 77+ messages in thread
From: Dave Martin @ 2019-04-17 15:22 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: Marc Zyngier, Catalin Marinas, Will Deacon, linux-kernel,
	Kristina Martsenko, Ramana Radhakrishnan, kvmarm,
	linux-arm-kernel

On Wed, Apr 17, 2019 at 03:09:02PM +0530, Amit Daniel Kachhap wrote:
> Hi,
> 
> On 4/16/19 10:02 PM, Dave Martin wrote:
> >On Fri, Apr 12, 2019 at 08:50:35AM +0530, Amit Daniel Kachhap wrote:
> >>This patch advertises the capability of two cpu feature called address
> >>pointer authentication and generic pointer authentication. These
> >>capabilities depend upon system support for pointer authentication and
> >>VHE mode.
> >>
> >>The current arm64 KVM partially implements pointer authentication and
> >>support of address/generic authentication are tied together. However,
> >>separate ABI requirements for both of them is added so that any future
> >>isolated implementation will not require any ABI changes.
> >>
> >>Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> >>Cc: Mark Rutland <mark.rutland@arm.com>
> >>Cc: Marc Zyngier <marc.zyngier@arm.com>
> >>Cc: Christoffer Dall <christoffer.dall@arm.com>
> >>Cc: kvmarm@lists.cs.columbia.edu
> >>---
> >>Changes since v8:
> >>*  Keep the capability check same for the 2 vcpu ptrauth features. [Dave Martin]
> >>
> >>  Documentation/virtual/kvm/api.txt | 2 ++
> >>  arch/arm64/kvm/reset.c            | 5 +++++
> >>  include/uapi/linux/kvm.h          | 2 ++
> >>  3 files changed, 9 insertions(+)
> >>
> >>diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
> >>index 9d202f4..56021d0 100644
> >>--- a/Documentation/virtual/kvm/api.txt
> >>+++ b/Documentation/virtual/kvm/api.txt
> >>@@ -2756,9 +2756,11 @@ Possible features:
> >>  	- KVM_ARM_VCPU_PTRAUTH_ADDRESS: Enables Address Pointer authentication
> >>  	  for the CPU and supported only on arm64 architecture.
> >>  	  Must be requested if KVM_ARM_VCPU_PTRAUTH_GENERIC is also requested.
> >>+	  Depends on KVM_CAP_ARM_PTRAUTH_ADDRESS.
> >
> >What if KVM_CAP_ARM_PTRAUTH_ADDRESS is absent and
> >KVM_ARM_VCPU_PTRAUTH_GENERIC is requested?  By these rules, we have a
> >contradiction: userspace both must request and must not request
> >KVM_ARM_VCPU_PTRAUTH_ADDRESS.
> >
> >We could qualify as follows:
> >
> >	Depends on KVM_CAP_ARM_PTRAUTH_ADDRESS.
> >	Must be requested if KVM_CAP_ARM_PTRAUTH_ADDRESS is present and
> >	KVM_ARM_VCPU_PTRAUTH_GENERIC is also requested.
> ok agree. This makes it clear.

[*]

> >>  	- KVM_ARM_VCPU_PTRAUTH_GENERIC: Enables Generic Pointer authentication
> >>  	  for the CPU and supported only on arm64 architecture.
> >>  	  Must be requested if KVM_ARM_VCPU_PTRAUTH_ADDRESS is also requested.
> >>+	  Depends on KVM_CAP_ARM_PTRAUTH_GENERIC.
> >
> >Similarly.
> >
> >Or, we go back to having a single cap and a single feature, and add
> >more caps/features later on if we decide it's possible to support
> >address/generic auth separately later on.
> >
> >Otherwise, we end up with complex rules that can't be tested.  This is a
> >high price to pay for forwards compatibility: userspace's conformance to
> >the rules can't be fully tested, so there's a fair chance it won't work
> >properly anyway when hardware/KVM with just one auth type appears.
> >
> >[...]
> >
> >Thoughts?
> I agree that single cpufeature/capability is a simple solution to implement.
> The bifurcation of feature was done to reflect the different ID register
> split up.
> 
> But the h/w implementation provides a same EL2 exception trap for both the
> features and hence current implementation ties both of the features
> together. I guess in future if this is limitation goes away then one auth
> type is possible. Here I am not sure if the future h/w will retain this
> merged exception trap and add 2 new separate exception trap in addition to
> it.
> 
> I guess it will be probably simple split-up of this merged exception trap.
> In this case there won't be any ABI change required as per current
> implementation.

OK, I'm not opposed to keeping the ABI as-is, with the above
clarification [*] spelled out appropriately for both cases.

Alternatively, or in addition, we could say something like:

"If KVM_CAP_ARM_PTRAUTH_ADDRESS and KVM_CAP_ARM_PTRAUTH_GENERIC are
both present, then both KVM_ARM_VCPU_PTRAUTH_ADDRESS and
KVM_ARM_VCPU_PTRAUTH_GENERIC must be requested or neither must be
requested."

Cheers
---Dave
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v9 4/5] KVM: arm64: Add capability to advertise ptrauth for guest
@ 2019-04-17 15:22         ` Dave Martin
  0 siblings, 0 replies; 77+ messages in thread
From: Dave Martin @ 2019-04-17 15:22 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: Marc Zyngier, Catalin Marinas, Will Deacon, linux-kernel,
	Kristina Martsenko, Ramana Radhakrishnan, kvmarm,
	linux-arm-kernel

On Wed, Apr 17, 2019 at 03:09:02PM +0530, Amit Daniel Kachhap wrote:
> Hi,
> 
> On 4/16/19 10:02 PM, Dave Martin wrote:
> >On Fri, Apr 12, 2019 at 08:50:35AM +0530, Amit Daniel Kachhap wrote:
> >>This patch advertises the capability of two cpu feature called address
> >>pointer authentication and generic pointer authentication. These
> >>capabilities depend upon system support for pointer authentication and
> >>VHE mode.
> >>
> >>The current arm64 KVM partially implements pointer authentication and
> >>support of address/generic authentication are tied together. However,
> >>separate ABI requirements for both of them is added so that any future
> >>isolated implementation will not require any ABI changes.
> >>
> >>Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> >>Cc: Mark Rutland <mark.rutland@arm.com>
> >>Cc: Marc Zyngier <marc.zyngier@arm.com>
> >>Cc: Christoffer Dall <christoffer.dall@arm.com>
> >>Cc: kvmarm@lists.cs.columbia.edu
> >>---
> >>Changes since v8:
> >>*  Keep the capability check same for the 2 vcpu ptrauth features. [Dave Martin]
> >>
> >>  Documentation/virtual/kvm/api.txt | 2 ++
> >>  arch/arm64/kvm/reset.c            | 5 +++++
> >>  include/uapi/linux/kvm.h          | 2 ++
> >>  3 files changed, 9 insertions(+)
> >>
> >>diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
> >>index 9d202f4..56021d0 100644
> >>--- a/Documentation/virtual/kvm/api.txt
> >>+++ b/Documentation/virtual/kvm/api.txt
> >>@@ -2756,9 +2756,11 @@ Possible features:
> >>  	- KVM_ARM_VCPU_PTRAUTH_ADDRESS: Enables Address Pointer authentication
> >>  	  for the CPU and supported only on arm64 architecture.
> >>  	  Must be requested if KVM_ARM_VCPU_PTRAUTH_GENERIC is also requested.
> >>+	  Depends on KVM_CAP_ARM_PTRAUTH_ADDRESS.
> >
> >What if KVM_CAP_ARM_PTRAUTH_ADDRESS is absent and
> >KVM_ARM_VCPU_PTRAUTH_GENERIC is requested?  By these rules, we have a
> >contradiction: userspace both must request and must not request
> >KVM_ARM_VCPU_PTRAUTH_ADDRESS.
> >
> >We could qualify as follows:
> >
> >	Depends on KVM_CAP_ARM_PTRAUTH_ADDRESS.
> >	Must be requested if KVM_CAP_ARM_PTRAUTH_ADDRESS is present and
> >	KVM_ARM_VCPU_PTRAUTH_GENERIC is also requested.
> ok agree. This makes it clear.

[*]

> >>  	- KVM_ARM_VCPU_PTRAUTH_GENERIC: Enables Generic Pointer authentication
> >>  	  for the CPU and supported only on arm64 architecture.
> >>  	  Must be requested if KVM_ARM_VCPU_PTRAUTH_ADDRESS is also requested.
> >>+	  Depends on KVM_CAP_ARM_PTRAUTH_GENERIC.
> >
> >Similarly.
> >
> >Or, we go back to having a single cap and a single feature, and add
> >more caps/features later on if we decide it's possible to support
> >address/generic auth separately later on.
> >
> >Otherwise, we end up with complex rules that can't be tested.  This is a
> >high price to pay for forwards compatibility: userspace's conformance to
> >the rules can't be fully tested, so there's a fair chance it won't work
> >properly anyway when hardware/KVM with just one auth type appears.
> >
> >[...]
> >
> >Thoughts?
> I agree that single cpufeature/capability is a simple solution to implement.
> The bifurcation of feature was done to reflect the different ID register
> split up.
> 
> But the h/w implementation provides a same EL2 exception trap for both the
> features and hence current implementation ties both of the features
> together. I guess in future if this is limitation goes away then one auth
> type is possible. Here I am not sure if the future h/w will retain this
> merged exception trap and add 2 new separate exception trap in addition to
> it.
> 
> I guess it will be probably simple split-up of this merged exception trap.
> In this case there won't be any ABI change required as per current
> implementation.

OK, I'm not opposed to keeping the ABI as-is, with the above
clarification [*] spelled out appropriately for both cases.

Alternatively, or in addition, we could say something like:

"If KVM_CAP_ARM_PTRAUTH_ADDRESS and KVM_CAP_ARM_PTRAUTH_GENERIC are
both present, then both KVM_ARM_VCPU_PTRAUTH_ADDRESS and
KVM_ARM_VCPU_PTRAUTH_GENERIC must be requested or neither must be
requested."

Cheers
---Dave

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [kvmtool PATCH v9 5/5] KVM: arm/arm64: Add a vcpu feature for pointer authentication
@ 2019-04-17 15:38         ` Dave Martin
  0 siblings, 0 replies; 77+ messages in thread
From: Dave Martin @ 2019-04-17 15:38 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: Marc Zyngier, Catalin Marinas, Will Deacon, linux-kernel,
	Kristina Martsenko, Ramana Radhakrishnan, kvmarm,
	linux-arm-kernel

On Wed, Apr 17, 2019 at 06:06:11PM +0530, Amit Daniel Kachhap wrote:
> Hi,
> 
> On 4/16/19 10:02 PM, Dave Martin wrote:
> >On Fri, Apr 12, 2019 at 08:50:36AM +0530, Amit Daniel Kachhap wrote:
> >>This patch adds a runtime capabality for KVM tool to enable Arm64 8.3
> >>Pointer Authentication in guest kernel. Two vcpu features
> >>KVM_ARM_VCPU_PTRAUTH_[ADDRESS/GENERIC] are supplied together to enable
> >>Pointer Authentication in KVM guest after checking the capability.
> >>
> >>Command line options --enable-ptrauth and --disable-ptrauth are added
> >>to use this feature. However, if those options are not provided then
> >>also this feature is enabled if host supports this capability.
> >>
> >>The macros defined in the headers are not in sync and should be replaced
> >>from the upstream.
> >>
> >>Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> >>---
> >>Changes since v8:
> >>*  Added option --enable-ptrauth and --disable-ptrauth to use ptrauth. Also
> >>    enable ptrauth if no option provided and Host supports ptrauth. [Dave Martin]
> >>* The macro definition are not linear as the kvmtool is not synchronised with the
> >>   kernel changes present in kvmarm/next tree.
> >>
> >>  arm/aarch32/include/kvm/kvm-cpu-arch.h    |  1 +
> >>  arm/aarch64/include/asm/kvm.h             |  2 ++
> >>  arm/aarch64/include/kvm/kvm-config-arch.h |  6 +++++-
> >>  arm/aarch64/include/kvm/kvm-cpu-arch.h    |  2 ++
> >>  arm/include/arm-common/kvm-config-arch.h  |  2 ++
> >>  arm/kvm-cpu.c                             | 11 +++++++++++
> >>  include/linux/kvm.h                       |  2 ++
> >>  7 files changed, 25 insertions(+), 1 deletion(-)
> >>
> >>diff --git a/arm/aarch32/include/kvm/kvm-cpu-arch.h b/arm/aarch32/include/kvm/kvm-cpu-arch.h
> >>index d28ea67..520ea76 100644
> >>--- a/arm/aarch32/include/kvm/kvm-cpu-arch.h
> >>+++ b/arm/aarch32/include/kvm/kvm-cpu-arch.h
> >>@@ -13,4 +13,5 @@
> >>  #define ARM_CPU_ID		0, 0, 0
> >>  #define ARM_CPU_ID_MPIDR	5
> >>+#define ARM_VCPU_PTRAUTH_FEATURE	0
> >>  #endif /* KVM__KVM_CPU_ARCH_H */
> >>diff --git a/arm/aarch64/include/asm/kvm.h b/arm/aarch64/include/asm/kvm.h
> >>index 97c3478..a2546e6 100644
> >>--- a/arm/aarch64/include/asm/kvm.h
> >>+++ b/arm/aarch64/include/asm/kvm.h
> >>@@ -102,6 +102,8 @@ struct kvm_regs {
> >>  #define KVM_ARM_VCPU_EL1_32BIT		1 /* CPU running a 32bit VM */
> >>  #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
> >>  #define KVM_ARM_VCPU_PMU_V3		3 /* Support guest PMUv3 */
> >>+#define KVM_ARM_VCPU_PTRAUTH_ADDRESS	5 /* CPU uses address pointer authentication */
> >>+#define KVM_ARM_VCPU_PTRAUTH_GENERIC	6 /* CPU uses generic pointer authentication */
> >>  struct kvm_vcpu_init {
> >>  	__u32 target;
> >>diff --git a/arm/aarch64/include/kvm/kvm-config-arch.h b/arm/aarch64/include/kvm/kvm-config-arch.h
> >>index 04be43d..0279b13 100644
> >>--- a/arm/aarch64/include/kvm/kvm-config-arch.h
> >>+++ b/arm/aarch64/include/kvm/kvm-config-arch.h
> >>@@ -8,7 +8,11 @@
> >>  			"Create PMUv3 device"),				\
> >>  	OPT_U64('\0', "kaslr-seed", &(cfg)->kaslr_seed,			\
> >>  			"Specify random seed for Kernel Address Space "	\
> >>-			"Layout Randomization (KASLR)"),
> >>+			"Layout Randomization (KASLR)"),		\
> >>+	OPT_BOOLEAN('\0', "enable-ptrauth", &(cfg)->enable_ptrauth,	\
> >>+			"Enables pointer authentication"),		\
> >>+	OPT_BOOLEAN('\0', "disable-ptrauth", &(cfg)->disable_ptrauth,	\
> >>+			"Disables pointer authentication"),
> >>  #include "arm-common/kvm-config-arch.h"
> >>diff --git a/arm/aarch64/include/kvm/kvm-cpu-arch.h b/arm/aarch64/include/kvm/kvm-cpu-arch.h
> >>index a9d8563..fcc2107 100644
> >>--- a/arm/aarch64/include/kvm/kvm-cpu-arch.h
> >>+++ b/arm/aarch64/include/kvm/kvm-cpu-arch.h
> >>@@ -17,4 +17,6 @@
> >>  #define ARM_CPU_CTRL		3, 0, 1, 0
> >>  #define ARM_CPU_CTRL_SCTLR_EL1	0
> >>+#define ARM_VCPU_PTRAUTH_FEATURE	((1UL << KVM_ARM_VCPU_PTRAUTH_ADDRESS) \
> >>+					| (1UL << KVM_ARM_VCPU_PTRAUTH_GENERIC))
> >>  #endif /* KVM__KVM_CPU_ARCH_H */
> >>diff --git a/arm/include/arm-common/kvm-config-arch.h b/arm/include/arm-common/kvm-config-arch.h
> >>index 5734c46..1b4287d 100644
> >>--- a/arm/include/arm-common/kvm-config-arch.h
> >>+++ b/arm/include/arm-common/kvm-config-arch.h
> >>@@ -10,6 +10,8 @@ struct kvm_config_arch {
> >>  	bool		aarch32_guest;
> >>  	bool		has_pmuv3;
> >>  	u64		kaslr_seed;
> >>+	bool		enable_ptrauth;
> >>+	bool		disable_ptrauth;
> >>  	enum irqchip_type irqchip;
> >>  	u64		fw_addr;
> >>  };
> >>diff --git a/arm/kvm-cpu.c b/arm/kvm-cpu.c
> >>index 7780251..a45a649 100644
> >>--- a/arm/kvm-cpu.c
> >>+++ b/arm/kvm-cpu.c
> >>@@ -69,6 +69,17 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned long cpu_id)
> >>  	}
> >>  	/*
> >>+	 * Always enable Pointer Authentication if requested. If system supports
> >>+	 * this extension then also enable it by default provided no disable
> >>+	 * request present.
> >>+	 */
> >>+	if ((kvm->cfg.arch.enable_ptrauth) ||
> >
> >Nit: redundant ()
> ok.
> >
> >>+		(kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_ADDRESS) &&
> >
> >Funny indentation?
> ok will align it.
> >
> >>+		kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_GENERIC) &&
> >>+		!kvm->cfg.arch.disable_ptrauth))
> >>+			vcpu_init.features[0] |= ARM_VCPU_PTRAUTH_FEATURE;
> >>+
> >
> >Hmm, we have some weird behaviours here: --enable-ptrauth
> >--disable-ptrauth will result in us trying to enable it, and
> May be 1 more check can be added here like,
> 
> if (kvm->cfg.arch.enable_ptrauth && kvm->cfg.arch.disable_ptrauth) {
> 	print_err("Only 1 option should be supplied\n");

Sure, but we should indicate which actual options conflicted.

> 	ret -EINVAL;
> }
> 
> >--enable-ptrauth without the required caps will result in an unhelpful
> >"Unable to initialise vcpu" error message.  I'm not sure this is a
> >whole lot worse than the way other options behave today, though.
> 
> Since now ptrauth is enabled by default if system supports it even though it
> is not explicitly requested. so I thought --enable-ptrauth
> option has to now forcefully enable ptrauth and may cause some error message
> in failure.
> Did I interpret something different from your last suggestion[1]?

No, this is what I meant.

> Actually we can skip with --enable-ptrauth and have just 2 option,
> * By default enable ptrauth if system supports it.
> * --disable-ptrauth: useful to migrate non-ptrauth guests on ptrauth hosts

I think --enable-ptrauth is still useful: with --disable-ptrauth,
ptrauth is definitely turned off; with --enable-ptrauth, ptrauth is
definitely turned on (or we refuse to start the guest at all); with
neither, ptrauth is on for the guest if the host supports it.

Cheers
---Dave

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [kvmtool PATCH v9 5/5] KVM: arm/arm64: Add a vcpu feature for pointer authentication
@ 2019-04-17 15:38         ` Dave Martin
  0 siblings, 0 replies; 77+ messages in thread
From: Dave Martin @ 2019-04-17 15:38 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: Marc Zyngier, Catalin Marinas, Will Deacon, linux-kernel,
	Kristina Martsenko, Ramana Radhakrishnan, kvmarm,
	linux-arm-kernel

On Wed, Apr 17, 2019 at 06:06:11PM +0530, Amit Daniel Kachhap wrote:
> Hi,
> 
> On 4/16/19 10:02 PM, Dave Martin wrote:
> >On Fri, Apr 12, 2019 at 08:50:36AM +0530, Amit Daniel Kachhap wrote:
> >>This patch adds a runtime capabality for KVM tool to enable Arm64 8.3
> >>Pointer Authentication in guest kernel. Two vcpu features
> >>KVM_ARM_VCPU_PTRAUTH_[ADDRESS/GENERIC] are supplied together to enable
> >>Pointer Authentication in KVM guest after checking the capability.
> >>
> >>Command line options --enable-ptrauth and --disable-ptrauth are added
> >>to use this feature. However, if those options are not provided then
> >>also this feature is enabled if host supports this capability.
> >>
> >>The macros defined in the headers are not in sync and should be replaced
> >>from the upstream.
> >>
> >>Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> >>---
> >>Changes since v8:
> >>*  Added option --enable-ptrauth and --disable-ptrauth to use ptrauth. Also
> >>    enable ptrauth if no option provided and Host supports ptrauth. [Dave Martin]
> >>* The macro definition are not linear as the kvmtool is not synchronised with the
> >>   kernel changes present in kvmarm/next tree.
> >>
> >>  arm/aarch32/include/kvm/kvm-cpu-arch.h    |  1 +
> >>  arm/aarch64/include/asm/kvm.h             |  2 ++
> >>  arm/aarch64/include/kvm/kvm-config-arch.h |  6 +++++-
> >>  arm/aarch64/include/kvm/kvm-cpu-arch.h    |  2 ++
> >>  arm/include/arm-common/kvm-config-arch.h  |  2 ++
> >>  arm/kvm-cpu.c                             | 11 +++++++++++
> >>  include/linux/kvm.h                       |  2 ++
> >>  7 files changed, 25 insertions(+), 1 deletion(-)
> >>
> >>diff --git a/arm/aarch32/include/kvm/kvm-cpu-arch.h b/arm/aarch32/include/kvm/kvm-cpu-arch.h
> >>index d28ea67..520ea76 100644
> >>--- a/arm/aarch32/include/kvm/kvm-cpu-arch.h
> >>+++ b/arm/aarch32/include/kvm/kvm-cpu-arch.h
> >>@@ -13,4 +13,5 @@
> >>  #define ARM_CPU_ID		0, 0, 0
> >>  #define ARM_CPU_ID_MPIDR	5
> >>+#define ARM_VCPU_PTRAUTH_FEATURE	0
> >>  #endif /* KVM__KVM_CPU_ARCH_H */
> >>diff --git a/arm/aarch64/include/asm/kvm.h b/arm/aarch64/include/asm/kvm.h
> >>index 97c3478..a2546e6 100644
> >>--- a/arm/aarch64/include/asm/kvm.h
> >>+++ b/arm/aarch64/include/asm/kvm.h
> >>@@ -102,6 +102,8 @@ struct kvm_regs {
> >>  #define KVM_ARM_VCPU_EL1_32BIT		1 /* CPU running a 32bit VM */
> >>  #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
> >>  #define KVM_ARM_VCPU_PMU_V3		3 /* Support guest PMUv3 */
> >>+#define KVM_ARM_VCPU_PTRAUTH_ADDRESS	5 /* CPU uses address pointer authentication */
> >>+#define KVM_ARM_VCPU_PTRAUTH_GENERIC	6 /* CPU uses generic pointer authentication */
> >>  struct kvm_vcpu_init {
> >>  	__u32 target;
> >>diff --git a/arm/aarch64/include/kvm/kvm-config-arch.h b/arm/aarch64/include/kvm/kvm-config-arch.h
> >>index 04be43d..0279b13 100644
> >>--- a/arm/aarch64/include/kvm/kvm-config-arch.h
> >>+++ b/arm/aarch64/include/kvm/kvm-config-arch.h
> >>@@ -8,7 +8,11 @@
> >>  			"Create PMUv3 device"),				\
> >>  	OPT_U64('\0', "kaslr-seed", &(cfg)->kaslr_seed,			\
> >>  			"Specify random seed for Kernel Address Space "	\
> >>-			"Layout Randomization (KASLR)"),
> >>+			"Layout Randomization (KASLR)"),		\
> >>+	OPT_BOOLEAN('\0', "enable-ptrauth", &(cfg)->enable_ptrauth,	\
> >>+			"Enables pointer authentication"),		\
> >>+	OPT_BOOLEAN('\0', "disable-ptrauth", &(cfg)->disable_ptrauth,	\
> >>+			"Disables pointer authentication"),
> >>  #include "arm-common/kvm-config-arch.h"
> >>diff --git a/arm/aarch64/include/kvm/kvm-cpu-arch.h b/arm/aarch64/include/kvm/kvm-cpu-arch.h
> >>index a9d8563..fcc2107 100644
> >>--- a/arm/aarch64/include/kvm/kvm-cpu-arch.h
> >>+++ b/arm/aarch64/include/kvm/kvm-cpu-arch.h
> >>@@ -17,4 +17,6 @@
> >>  #define ARM_CPU_CTRL		3, 0, 1, 0
> >>  #define ARM_CPU_CTRL_SCTLR_EL1	0
> >>+#define ARM_VCPU_PTRAUTH_FEATURE	((1UL << KVM_ARM_VCPU_PTRAUTH_ADDRESS) \
> >>+					| (1UL << KVM_ARM_VCPU_PTRAUTH_GENERIC))
> >>  #endif /* KVM__KVM_CPU_ARCH_H */
> >>diff --git a/arm/include/arm-common/kvm-config-arch.h b/arm/include/arm-common/kvm-config-arch.h
> >>index 5734c46..1b4287d 100644
> >>--- a/arm/include/arm-common/kvm-config-arch.h
> >>+++ b/arm/include/arm-common/kvm-config-arch.h
> >>@@ -10,6 +10,8 @@ struct kvm_config_arch {
> >>  	bool		aarch32_guest;
> >>  	bool		has_pmuv3;
> >>  	u64		kaslr_seed;
> >>+	bool		enable_ptrauth;
> >>+	bool		disable_ptrauth;
> >>  	enum irqchip_type irqchip;
> >>  	u64		fw_addr;
> >>  };
> >>diff --git a/arm/kvm-cpu.c b/arm/kvm-cpu.c
> >>index 7780251..a45a649 100644
> >>--- a/arm/kvm-cpu.c
> >>+++ b/arm/kvm-cpu.c
> >>@@ -69,6 +69,17 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned long cpu_id)
> >>  	}
> >>  	/*
> >>+	 * Always enable Pointer Authentication if requested. If system supports
> >>+	 * this extension then also enable it by default provided no disable
> >>+	 * request present.
> >>+	 */
> >>+	if ((kvm->cfg.arch.enable_ptrauth) ||
> >
> >Nit: redundant ()
> ok.
> >
> >>+		(kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_ADDRESS) &&
> >
> >Funny indentation?
> ok will align it.
> >
> >>+		kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_GENERIC) &&
> >>+		!kvm->cfg.arch.disable_ptrauth))
> >>+			vcpu_init.features[0] |= ARM_VCPU_PTRAUTH_FEATURE;
> >>+
> >
> >Hmm, we have some weird behaviours here: --enable-ptrauth
> >--disable-ptrauth will result in us trying to enable it, and
> May be 1 more check can be added here like,
> 
> if (kvm->cfg.arch.enable_ptrauth && kvm->cfg.arch.disable_ptrauth) {
> 	print_err("Only 1 option should be supplied\n");

Sure, but we should indicate which actual options conflicted.

> 	ret -EINVAL;
> }
> 
> >--enable-ptrauth without the required caps will result in an unhelpful
> >"Unable to initialise vcpu" error message.  I'm not sure this is a
> >whole lot worse than the way other options behave today, though.
> 
> Since now ptrauth is enabled by default if system supports it even though it
> is not explicitly requested. so I thought --enable-ptrauth
> option has to now forcefully enable ptrauth and may cause some error message
> in failure.
> Did I interpret something different from your last suggestion[1]?

No, this is what I meant.

> Actually we can skip with --enable-ptrauth and have just 2 option,
> * By default enable ptrauth if system supports it.
> * --disable-ptrauth: useful to migrate non-ptrauth guests on ptrauth hosts

I think --enable-ptrauth is still useful: with --disable-ptrauth,
ptrauth is definitely turned off; with --enable-ptrauth, ptrauth is
definitely turned on (or we refuse to start the guest at all); with
neither, ptrauth is on for the guest if the host supports it.

Cheers
---Dave
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [kvmtool PATCH v9 5/5] KVM: arm/arm64: Add a vcpu feature for pointer authentication
@ 2019-04-17 15:38         ` Dave Martin
  0 siblings, 0 replies; 77+ messages in thread
From: Dave Martin @ 2019-04-17 15:38 UTC (permalink / raw)
  To: Amit Daniel Kachhap
  Cc: Marc Zyngier, Catalin Marinas, Will Deacon, linux-kernel,
	Kristina Martsenko, Ramana Radhakrishnan, kvmarm,
	linux-arm-kernel

On Wed, Apr 17, 2019 at 06:06:11PM +0530, Amit Daniel Kachhap wrote:
> Hi,
> 
> On 4/16/19 10:02 PM, Dave Martin wrote:
> >On Fri, Apr 12, 2019 at 08:50:36AM +0530, Amit Daniel Kachhap wrote:
> >>This patch adds a runtime capabality for KVM tool to enable Arm64 8.3
> >>Pointer Authentication in guest kernel. Two vcpu features
> >>KVM_ARM_VCPU_PTRAUTH_[ADDRESS/GENERIC] are supplied together to enable
> >>Pointer Authentication in KVM guest after checking the capability.
> >>
> >>Command line options --enable-ptrauth and --disable-ptrauth are added
> >>to use this feature. However, if those options are not provided then
> >>also this feature is enabled if host supports this capability.
> >>
> >>The macros defined in the headers are not in sync and should be replaced
> >>from the upstream.
> >>
> >>Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> >>---
> >>Changes since v8:
> >>*  Added option --enable-ptrauth and --disable-ptrauth to use ptrauth. Also
> >>    enable ptrauth if no option provided and Host supports ptrauth. [Dave Martin]
> >>* The macro definition are not linear as the kvmtool is not synchronised with the
> >>   kernel changes present in kvmarm/next tree.
> >>
> >>  arm/aarch32/include/kvm/kvm-cpu-arch.h    |  1 +
> >>  arm/aarch64/include/asm/kvm.h             |  2 ++
> >>  arm/aarch64/include/kvm/kvm-config-arch.h |  6 +++++-
> >>  arm/aarch64/include/kvm/kvm-cpu-arch.h    |  2 ++
> >>  arm/include/arm-common/kvm-config-arch.h  |  2 ++
> >>  arm/kvm-cpu.c                             | 11 +++++++++++
> >>  include/linux/kvm.h                       |  2 ++
> >>  7 files changed, 25 insertions(+), 1 deletion(-)
> >>
> >>diff --git a/arm/aarch32/include/kvm/kvm-cpu-arch.h b/arm/aarch32/include/kvm/kvm-cpu-arch.h
> >>index d28ea67..520ea76 100644
> >>--- a/arm/aarch32/include/kvm/kvm-cpu-arch.h
> >>+++ b/arm/aarch32/include/kvm/kvm-cpu-arch.h
> >>@@ -13,4 +13,5 @@
> >>  #define ARM_CPU_ID		0, 0, 0
> >>  #define ARM_CPU_ID_MPIDR	5
> >>+#define ARM_VCPU_PTRAUTH_FEATURE	0
> >>  #endif /* KVM__KVM_CPU_ARCH_H */
> >>diff --git a/arm/aarch64/include/asm/kvm.h b/arm/aarch64/include/asm/kvm.h
> >>index 97c3478..a2546e6 100644
> >>--- a/arm/aarch64/include/asm/kvm.h
> >>+++ b/arm/aarch64/include/asm/kvm.h
> >>@@ -102,6 +102,8 @@ struct kvm_regs {
> >>  #define KVM_ARM_VCPU_EL1_32BIT		1 /* CPU running a 32bit VM */
> >>  #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
> >>  #define KVM_ARM_VCPU_PMU_V3		3 /* Support guest PMUv3 */
> >>+#define KVM_ARM_VCPU_PTRAUTH_ADDRESS	5 /* CPU uses address pointer authentication */
> >>+#define KVM_ARM_VCPU_PTRAUTH_GENERIC	6 /* CPU uses generic pointer authentication */
> >>  struct kvm_vcpu_init {
> >>  	__u32 target;
> >>diff --git a/arm/aarch64/include/kvm/kvm-config-arch.h b/arm/aarch64/include/kvm/kvm-config-arch.h
> >>index 04be43d..0279b13 100644
> >>--- a/arm/aarch64/include/kvm/kvm-config-arch.h
> >>+++ b/arm/aarch64/include/kvm/kvm-config-arch.h
> >>@@ -8,7 +8,11 @@
> >>  			"Create PMUv3 device"),				\
> >>  	OPT_U64('\0', "kaslr-seed", &(cfg)->kaslr_seed,			\
> >>  			"Specify random seed for Kernel Address Space "	\
> >>-			"Layout Randomization (KASLR)"),
> >>+			"Layout Randomization (KASLR)"),		\
> >>+	OPT_BOOLEAN('\0', "enable-ptrauth", &(cfg)->enable_ptrauth,	\
> >>+			"Enables pointer authentication"),		\
> >>+	OPT_BOOLEAN('\0', "disable-ptrauth", &(cfg)->disable_ptrauth,	\
> >>+			"Disables pointer authentication"),
> >>  #include "arm-common/kvm-config-arch.h"
> >>diff --git a/arm/aarch64/include/kvm/kvm-cpu-arch.h b/arm/aarch64/include/kvm/kvm-cpu-arch.h
> >>index a9d8563..fcc2107 100644
> >>--- a/arm/aarch64/include/kvm/kvm-cpu-arch.h
> >>+++ b/arm/aarch64/include/kvm/kvm-cpu-arch.h
> >>@@ -17,4 +17,6 @@
> >>  #define ARM_CPU_CTRL		3, 0, 1, 0
> >>  #define ARM_CPU_CTRL_SCTLR_EL1	0
> >>+#define ARM_VCPU_PTRAUTH_FEATURE	((1UL << KVM_ARM_VCPU_PTRAUTH_ADDRESS) \
> >>+					| (1UL << KVM_ARM_VCPU_PTRAUTH_GENERIC))
> >>  #endif /* KVM__KVM_CPU_ARCH_H */
> >>diff --git a/arm/include/arm-common/kvm-config-arch.h b/arm/include/arm-common/kvm-config-arch.h
> >>index 5734c46..1b4287d 100644
> >>--- a/arm/include/arm-common/kvm-config-arch.h
> >>+++ b/arm/include/arm-common/kvm-config-arch.h
> >>@@ -10,6 +10,8 @@ struct kvm_config_arch {
> >>  	bool		aarch32_guest;
> >>  	bool		has_pmuv3;
> >>  	u64		kaslr_seed;
> >>+	bool		enable_ptrauth;
> >>+	bool		disable_ptrauth;
> >>  	enum irqchip_type irqchip;
> >>  	u64		fw_addr;
> >>  };
> >>diff --git a/arm/kvm-cpu.c b/arm/kvm-cpu.c
> >>index 7780251..a45a649 100644
> >>--- a/arm/kvm-cpu.c
> >>+++ b/arm/kvm-cpu.c
> >>@@ -69,6 +69,17 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned long cpu_id)
> >>  	}
> >>  	/*
> >>+	 * Always enable Pointer Authentication if requested. If system supports
> >>+	 * this extension then also enable it by default provided no disable
> >>+	 * request present.
> >>+	 */
> >>+	if ((kvm->cfg.arch.enable_ptrauth) ||
> >
> >Nit: redundant ()
> ok.
> >
> >>+		(kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_ADDRESS) &&
> >
> >Funny indentation?
> ok will align it.
> >
> >>+		kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_GENERIC) &&
> >>+		!kvm->cfg.arch.disable_ptrauth))
> >>+			vcpu_init.features[0] |= ARM_VCPU_PTRAUTH_FEATURE;
> >>+
> >
> >Hmm, we have some weird behaviours here: --enable-ptrauth
> >--disable-ptrauth will result in us trying to enable it, and
> May be 1 more check can be added here like,
> 
> if (kvm->cfg.arch.enable_ptrauth && kvm->cfg.arch.disable_ptrauth) {
> 	print_err("Only 1 option should be supplied\n");

Sure, but we should indicate which actual options conflicted.

> 	ret -EINVAL;
> }
> 
> >--enable-ptrauth without the required caps will result in an unhelpful
> >"Unable to initialise vcpu" error message.  I'm not sure this is a
> >whole lot worse than the way other options behave today, though.
> 
> Since now ptrauth is enabled by default if system supports it even though it
> is not explicitly requested. so I thought --enable-ptrauth
> option has to now forcefully enable ptrauth and may cause some error message
> in failure.
> Did I interpret something different from your last suggestion[1]?

No, this is what I meant.

> Actually we can skip with --enable-ptrauth and have just 2 option,
> * By default enable ptrauth if system supports it.
> * --disable-ptrauth: useful to migrate non-ptrauth guests on ptrauth hosts

I think --enable-ptrauth is still useful: with --disable-ptrauth,
ptrauth is definitely turned off; with --enable-ptrauth, ptrauth is
definitely turned on (or we refuse to start the guest at all); with
neither, ptrauth is on for the guest if the host supports it.

Cheers
---Dave

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v9 1/5] KVM: arm64: Add a vcpu flag to control ptrauth for guest
@ 2019-04-17 15:54             ` Marc Zyngier
  0 siblings, 0 replies; 77+ messages in thread
From: Marc Zyngier @ 2019-04-17 15:54 UTC (permalink / raw)
  To: Dave Martin
  Cc: Amit Daniel Kachhap, linux-arm-kernel, Catalin Marinas,
	Will Deacon, Kristina Martsenko, kvmarm, Ramana Radhakrishnan,
	linux-kernel

On 17/04/2019 15:52, Dave Martin wrote:
> On Wed, Apr 17, 2019 at 03:19:11PM +0100, Marc Zyngier wrote:
>> On 17/04/2019 14:08, Amit Daniel Kachhap wrote:
>>> Hi,
>>>
>>> On 4/17/19 2:05 PM, Marc Zyngier wrote:
>>>> On 12/04/2019 04:20, Amit Daniel Kachhap wrote:
>>>>> A per vcpu flag is added to check if pointer authentication is
>>>>> enabled for the vcpu or not. This flag may be enabled according to
>>>>> the necessary user policies and host capabilities.
>>>>>
>>>>> This patch also adds a helper to check the flag.
>>>>>
>>>>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>>>>> Cc: Mark Rutland <mark.rutland@arm.com>
>>>>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>>>>> Cc: Christoffer Dall <christoffer.dall@arm.com>
>>>>> Cc: kvmarm@lists.cs.columbia.edu
>>>>> ---
>>>>>
>>>>> Changes since v8:
>>>>> * Added a new per vcpu flag which will store Pointer Authentication enable
>>>>>    status instead of checking them again. [Dave Martin]
>>>>>
>>>>>   arch/arm64/include/asm/kvm_host.h | 4 ++++
>>>>>   1 file changed, 4 insertions(+)
>>>>>
>>>>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
>>>>> index 9d57cf8..31dbc7c 100644
>>>>> --- a/arch/arm64/include/asm/kvm_host.h
>>>>> +++ b/arch/arm64/include/asm/kvm_host.h
>>>>> @@ -355,10 +355,14 @@ struct kvm_vcpu_arch {
>>>>>   #define KVM_ARM64_HOST_SVE_ENABLED	(1 << 4) /* SVE enabled for EL0 */
>>>>>   #define KVM_ARM64_GUEST_HAS_SVE		(1 << 5) /* SVE exposed to guest */
>>>>>   #define KVM_ARM64_VCPU_SVE_FINALIZED	(1 << 6) /* SVE config completed */
>>>>> +#define KVM_ARM64_GUEST_HAS_PTRAUTH	(1 << 7) /* PTRAUTH exposed to guest */
>>>>>   
>>>>>   #define vcpu_has_sve(vcpu) (system_supports_sve() && \
>>>>>   			    ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_SVE))
>>>>>   
>>>>> +#define vcpu_has_ptrauth(vcpu)	\
>>>>> +			((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH)
>>>>> +
>>>>
>>>> Just as for SVE, please first check that the system has PTRAUTH.
>>>> Something like:
>>>>
>>>> 		(cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_ARCH) && \
>>>> 		 ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH))
>>>
>>> In the subsequent patches, vcpu->arch.flags is only set to 
>>> KVM_ARM64_GUEST_HAS_PTRAUTH when all host capability check conditions 
>>> matches such as system_supports_address_auth(), 
>>> system_supports_generic_auth() so doing them again is repetitive in my view.
>>
>> It isn't the setting of the flag I care about, but the check of that
>> flag. Checking a flag for a feature that cannot be used on the running
>> system should have a zero cost, which isn't the case here.
>>
>> Granted, the impact should be minimal and it looks like it mostly happen
>> on the slow path, but at the very least it would be consistent. So even
>> if you don't buy my argument about efficiency, please change it in the
>> name of consistency.
> 
> One of the annoyances here is there is no single static key for ptrauth.
> 
> I'm assuming we don't want to check both static keys (for address and
> generic auth) on hot paths.

They both just branches, so I don't see why not. Of course, for people
using a lesser compiler (gcc 4.8 or clang), things will suck. But they
got it coming anyway.

Thanks,

	M.

> Checking just one of the two possibilities is OK for now, but we need
> to comment clearly somewhere that that will break if KVM is changed
> later to expose ptrauth to guests when the host doesn't support both
> types.
> 
> Cheers
> ---Dave
> 


-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v9 1/5] KVM: arm64: Add a vcpu flag to control ptrauth for guest
@ 2019-04-17 15:54             ` Marc Zyngier
  0 siblings, 0 replies; 77+ messages in thread
From: Marc Zyngier @ 2019-04-17 15:54 UTC (permalink / raw)
  To: Dave Martin
  Cc: Catalin Marinas, Will Deacon, linux-kernel, Kristina Martsenko,
	Ramana Radhakrishnan, Amit Daniel Kachhap, kvmarm,
	linux-arm-kernel

On 17/04/2019 15:52, Dave Martin wrote:
> On Wed, Apr 17, 2019 at 03:19:11PM +0100, Marc Zyngier wrote:
>> On 17/04/2019 14:08, Amit Daniel Kachhap wrote:
>>> Hi,
>>>
>>> On 4/17/19 2:05 PM, Marc Zyngier wrote:
>>>> On 12/04/2019 04:20, Amit Daniel Kachhap wrote:
>>>>> A per vcpu flag is added to check if pointer authentication is
>>>>> enabled for the vcpu or not. This flag may be enabled according to
>>>>> the necessary user policies and host capabilities.
>>>>>
>>>>> This patch also adds a helper to check the flag.
>>>>>
>>>>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>>>>> Cc: Mark Rutland <mark.rutland@arm.com>
>>>>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>>>>> Cc: Christoffer Dall <christoffer.dall@arm.com>
>>>>> Cc: kvmarm@lists.cs.columbia.edu
>>>>> ---
>>>>>
>>>>> Changes since v8:
>>>>> * Added a new per vcpu flag which will store Pointer Authentication enable
>>>>>    status instead of checking them again. [Dave Martin]
>>>>>
>>>>>   arch/arm64/include/asm/kvm_host.h | 4 ++++
>>>>>   1 file changed, 4 insertions(+)
>>>>>
>>>>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
>>>>> index 9d57cf8..31dbc7c 100644
>>>>> --- a/arch/arm64/include/asm/kvm_host.h
>>>>> +++ b/arch/arm64/include/asm/kvm_host.h
>>>>> @@ -355,10 +355,14 @@ struct kvm_vcpu_arch {
>>>>>   #define KVM_ARM64_HOST_SVE_ENABLED	(1 << 4) /* SVE enabled for EL0 */
>>>>>   #define KVM_ARM64_GUEST_HAS_SVE		(1 << 5) /* SVE exposed to guest */
>>>>>   #define KVM_ARM64_VCPU_SVE_FINALIZED	(1 << 6) /* SVE config completed */
>>>>> +#define KVM_ARM64_GUEST_HAS_PTRAUTH	(1 << 7) /* PTRAUTH exposed to guest */
>>>>>   
>>>>>   #define vcpu_has_sve(vcpu) (system_supports_sve() && \
>>>>>   			    ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_SVE))
>>>>>   
>>>>> +#define vcpu_has_ptrauth(vcpu)	\
>>>>> +			((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH)
>>>>> +
>>>>
>>>> Just as for SVE, please first check that the system has PTRAUTH.
>>>> Something like:
>>>>
>>>> 		(cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_ARCH) && \
>>>> 		 ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH))
>>>
>>> In the subsequent patches, vcpu->arch.flags is only set to 
>>> KVM_ARM64_GUEST_HAS_PTRAUTH when all host capability check conditions 
>>> matches such as system_supports_address_auth(), 
>>> system_supports_generic_auth() so doing them again is repetitive in my view.
>>
>> It isn't the setting of the flag I care about, but the check of that
>> flag. Checking a flag for a feature that cannot be used on the running
>> system should have a zero cost, which isn't the case here.
>>
>> Granted, the impact should be minimal and it looks like it mostly happen
>> on the slow path, but at the very least it would be consistent. So even
>> if you don't buy my argument about efficiency, please change it in the
>> name of consistency.
> 
> One of the annoyances here is there is no single static key for ptrauth.
> 
> I'm assuming we don't want to check both static keys (for address and
> generic auth) on hot paths.

They both just branches, so I don't see why not. Of course, for people
using a lesser compiler (gcc 4.8 or clang), things will suck. But they
got it coming anyway.

Thanks,

	M.

> Checking just one of the two possibilities is OK for now, but we need
> to comment clearly somewhere that that will break if KVM is changed
> later to expose ptrauth to guests when the host doesn't support both
> types.
> 
> Cheers
> ---Dave
> 


-- 
Jazz is not dead. It just smells funny...
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v9 1/5] KVM: arm64: Add a vcpu flag to control ptrauth for guest
@ 2019-04-17 15:54             ` Marc Zyngier
  0 siblings, 0 replies; 77+ messages in thread
From: Marc Zyngier @ 2019-04-17 15:54 UTC (permalink / raw)
  To: Dave Martin
  Cc: Catalin Marinas, Will Deacon, linux-kernel, Kristina Martsenko,
	Ramana Radhakrishnan, Amit Daniel Kachhap, kvmarm,
	linux-arm-kernel

On 17/04/2019 15:52, Dave Martin wrote:
> On Wed, Apr 17, 2019 at 03:19:11PM +0100, Marc Zyngier wrote:
>> On 17/04/2019 14:08, Amit Daniel Kachhap wrote:
>>> Hi,
>>>
>>> On 4/17/19 2:05 PM, Marc Zyngier wrote:
>>>> On 12/04/2019 04:20, Amit Daniel Kachhap wrote:
>>>>> A per vcpu flag is added to check if pointer authentication is
>>>>> enabled for the vcpu or not. This flag may be enabled according to
>>>>> the necessary user policies and host capabilities.
>>>>>
>>>>> This patch also adds a helper to check the flag.
>>>>>
>>>>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>>>>> Cc: Mark Rutland <mark.rutland@arm.com>
>>>>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>>>>> Cc: Christoffer Dall <christoffer.dall@arm.com>
>>>>> Cc: kvmarm@lists.cs.columbia.edu
>>>>> ---
>>>>>
>>>>> Changes since v8:
>>>>> * Added a new per vcpu flag which will store Pointer Authentication enable
>>>>>    status instead of checking them again. [Dave Martin]
>>>>>
>>>>>   arch/arm64/include/asm/kvm_host.h | 4 ++++
>>>>>   1 file changed, 4 insertions(+)
>>>>>
>>>>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
>>>>> index 9d57cf8..31dbc7c 100644
>>>>> --- a/arch/arm64/include/asm/kvm_host.h
>>>>> +++ b/arch/arm64/include/asm/kvm_host.h
>>>>> @@ -355,10 +355,14 @@ struct kvm_vcpu_arch {
>>>>>   #define KVM_ARM64_HOST_SVE_ENABLED	(1 << 4) /* SVE enabled for EL0 */
>>>>>   #define KVM_ARM64_GUEST_HAS_SVE		(1 << 5) /* SVE exposed to guest */
>>>>>   #define KVM_ARM64_VCPU_SVE_FINALIZED	(1 << 6) /* SVE config completed */
>>>>> +#define KVM_ARM64_GUEST_HAS_PTRAUTH	(1 << 7) /* PTRAUTH exposed to guest */
>>>>>   
>>>>>   #define vcpu_has_sve(vcpu) (system_supports_sve() && \
>>>>>   			    ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_SVE))
>>>>>   
>>>>> +#define vcpu_has_ptrauth(vcpu)	\
>>>>> +			((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH)
>>>>> +
>>>>
>>>> Just as for SVE, please first check that the system has PTRAUTH.
>>>> Something like:
>>>>
>>>> 		(cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_ARCH) && \
>>>> 		 ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH))
>>>
>>> In the subsequent patches, vcpu->arch.flags is only set to 
>>> KVM_ARM64_GUEST_HAS_PTRAUTH when all host capability check conditions 
>>> matches such as system_supports_address_auth(), 
>>> system_supports_generic_auth() so doing them again is repetitive in my view.
>>
>> It isn't the setting of the flag I care about, but the check of that
>> flag. Checking a flag for a feature that cannot be used on the running
>> system should have a zero cost, which isn't the case here.
>>
>> Granted, the impact should be minimal and it looks like it mostly happen
>> on the slow path, but at the very least it would be consistent. So even
>> if you don't buy my argument about efficiency, please change it in the
>> name of consistency.
> 
> One of the annoyances here is there is no single static key for ptrauth.
> 
> I'm assuming we don't want to check both static keys (for address and
> generic auth) on hot paths.

They both just branches, so I don't see why not. Of course, for people
using a lesser compiler (gcc 4.8 or clang), things will suck. But they
got it coming anyway.

Thanks,

	M.

> Checking just one of the two possibilities is OK for now, but we need
> to comment clearly somewhere that that will break if KVM is changed
> later to expose ptrauth to guests when the host doesn't support both
> types.
> 
> Cheers
> ---Dave
> 


-- 
Jazz is not dead. It just smells funny...

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v9 1/5] KVM: arm64: Add a vcpu flag to control ptrauth for guest
@ 2019-04-17 17:20               ` Dave Martin
  0 siblings, 0 replies; 77+ messages in thread
From: Dave Martin @ 2019-04-17 17:20 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Catalin Marinas, Will Deacon, linux-kernel, Kristina Martsenko,
	Ramana Radhakrishnan, Amit Daniel Kachhap, kvmarm,
	linux-arm-kernel

On Wed, Apr 17, 2019 at 04:54:32PM +0100, Marc Zyngier wrote:
> On 17/04/2019 15:52, Dave Martin wrote:
> > On Wed, Apr 17, 2019 at 03:19:11PM +0100, Marc Zyngier wrote:
> >> On 17/04/2019 14:08, Amit Daniel Kachhap wrote:
> >>> Hi,
> >>>
> >>> On 4/17/19 2:05 PM, Marc Zyngier wrote:
> >>>> On 12/04/2019 04:20, Amit Daniel Kachhap wrote:
> >>>>> A per vcpu flag is added to check if pointer authentication is
> >>>>> enabled for the vcpu or not. This flag may be enabled according to
> >>>>> the necessary user policies and host capabilities.
> >>>>>
> >>>>> This patch also adds a helper to check the flag.
> >>>>>
> >>>>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> >>>>> Cc: Mark Rutland <mark.rutland@arm.com>
> >>>>> Cc: Marc Zyngier <marc.zyngier@arm.com>
> >>>>> Cc: Christoffer Dall <christoffer.dall@arm.com>
> >>>>> Cc: kvmarm@lists.cs.columbia.edu
> >>>>> ---
> >>>>>
> >>>>> Changes since v8:
> >>>>> * Added a new per vcpu flag which will store Pointer Authentication enable
> >>>>>    status instead of checking them again. [Dave Martin]
> >>>>>
> >>>>>   arch/arm64/include/asm/kvm_host.h | 4 ++++
> >>>>>   1 file changed, 4 insertions(+)
> >>>>>
> >>>>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> >>>>> index 9d57cf8..31dbc7c 100644
> >>>>> --- a/arch/arm64/include/asm/kvm_host.h
> >>>>> +++ b/arch/arm64/include/asm/kvm_host.h
> >>>>> @@ -355,10 +355,14 @@ struct kvm_vcpu_arch {
> >>>>>   #define KVM_ARM64_HOST_SVE_ENABLED	(1 << 4) /* SVE enabled for EL0 */
> >>>>>   #define KVM_ARM64_GUEST_HAS_SVE		(1 << 5) /* SVE exposed to guest */
> >>>>>   #define KVM_ARM64_VCPU_SVE_FINALIZED	(1 << 6) /* SVE config completed */
> >>>>> +#define KVM_ARM64_GUEST_HAS_PTRAUTH	(1 << 7) /* PTRAUTH exposed to guest */
> >>>>>   
> >>>>>   #define vcpu_has_sve(vcpu) (system_supports_sve() && \
> >>>>>   			    ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_SVE))
> >>>>>   
> >>>>> +#define vcpu_has_ptrauth(vcpu)	\
> >>>>> +			((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH)
> >>>>> +
> >>>>
> >>>> Just as for SVE, please first check that the system has PTRAUTH.
> >>>> Something like:
> >>>>
> >>>> 		(cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_ARCH) && \
> >>>> 		 ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH))
> >>>
> >>> In the subsequent patches, vcpu->arch.flags is only set to 
> >>> KVM_ARM64_GUEST_HAS_PTRAUTH when all host capability check conditions 
> >>> matches such as system_supports_address_auth(), 
> >>> system_supports_generic_auth() so doing them again is repetitive in my view.
> >>
> >> It isn't the setting of the flag I care about, but the check of that
> >> flag. Checking a flag for a feature that cannot be used on the running
> >> system should have a zero cost, which isn't the case here.
> >>
> >> Granted, the impact should be minimal and it looks like it mostly happen
> >> on the slow path, but at the very least it would be consistent. So even
> >> if you don't buy my argument about efficiency, please change it in the
> >> name of consistency.
> > 
> > One of the annoyances here is there is no single static key for ptrauth.
> > 
> > I'm assuming we don't want to check both static keys (for address and
> > generic auth) on hot paths.
> 
> They both just branches, so I don't see why not. Of course, for people
> using a lesser compiler (gcc 4.8 or clang), things will suck. But they
> got it coming anyway.

I seem to recall Christoffer expressing concerns about this at some
point: even unconditional branches branches to a fixed address are not
free (or even correctly predicted).

I don't think any compiler can elide static key checks of merge them
together.

Maybe I am misremembering.

Cheers
---Dave

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v9 1/5] KVM: arm64: Add a vcpu flag to control ptrauth for guest
@ 2019-04-17 17:20               ` Dave Martin
  0 siblings, 0 replies; 77+ messages in thread
From: Dave Martin @ 2019-04-17 17:20 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Catalin Marinas, Will Deacon, linux-kernel, Kristina Martsenko,
	Ramana Radhakrishnan, Amit Daniel Kachhap, kvmarm,
	linux-arm-kernel

On Wed, Apr 17, 2019 at 04:54:32PM +0100, Marc Zyngier wrote:
> On 17/04/2019 15:52, Dave Martin wrote:
> > On Wed, Apr 17, 2019 at 03:19:11PM +0100, Marc Zyngier wrote:
> >> On 17/04/2019 14:08, Amit Daniel Kachhap wrote:
> >>> Hi,
> >>>
> >>> On 4/17/19 2:05 PM, Marc Zyngier wrote:
> >>>> On 12/04/2019 04:20, Amit Daniel Kachhap wrote:
> >>>>> A per vcpu flag is added to check if pointer authentication is
> >>>>> enabled for the vcpu or not. This flag may be enabled according to
> >>>>> the necessary user policies and host capabilities.
> >>>>>
> >>>>> This patch also adds a helper to check the flag.
> >>>>>
> >>>>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> >>>>> Cc: Mark Rutland <mark.rutland@arm.com>
> >>>>> Cc: Marc Zyngier <marc.zyngier@arm.com>
> >>>>> Cc: Christoffer Dall <christoffer.dall@arm.com>
> >>>>> Cc: kvmarm@lists.cs.columbia.edu
> >>>>> ---
> >>>>>
> >>>>> Changes since v8:
> >>>>> * Added a new per vcpu flag which will store Pointer Authentication enable
> >>>>>    status instead of checking them again. [Dave Martin]
> >>>>>
> >>>>>   arch/arm64/include/asm/kvm_host.h | 4 ++++
> >>>>>   1 file changed, 4 insertions(+)
> >>>>>
> >>>>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> >>>>> index 9d57cf8..31dbc7c 100644
> >>>>> --- a/arch/arm64/include/asm/kvm_host.h
> >>>>> +++ b/arch/arm64/include/asm/kvm_host.h
> >>>>> @@ -355,10 +355,14 @@ struct kvm_vcpu_arch {
> >>>>>   #define KVM_ARM64_HOST_SVE_ENABLED	(1 << 4) /* SVE enabled for EL0 */
> >>>>>   #define KVM_ARM64_GUEST_HAS_SVE		(1 << 5) /* SVE exposed to guest */
> >>>>>   #define KVM_ARM64_VCPU_SVE_FINALIZED	(1 << 6) /* SVE config completed */
> >>>>> +#define KVM_ARM64_GUEST_HAS_PTRAUTH	(1 << 7) /* PTRAUTH exposed to guest */
> >>>>>   
> >>>>>   #define vcpu_has_sve(vcpu) (system_supports_sve() && \
> >>>>>   			    ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_SVE))
> >>>>>   
> >>>>> +#define vcpu_has_ptrauth(vcpu)	\
> >>>>> +			((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH)
> >>>>> +
> >>>>
> >>>> Just as for SVE, please first check that the system has PTRAUTH.
> >>>> Something like:
> >>>>
> >>>> 		(cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_ARCH) && \
> >>>> 		 ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH))
> >>>
> >>> In the subsequent patches, vcpu->arch.flags is only set to 
> >>> KVM_ARM64_GUEST_HAS_PTRAUTH when all host capability check conditions 
> >>> matches such as system_supports_address_auth(), 
> >>> system_supports_generic_auth() so doing them again is repetitive in my view.
> >>
> >> It isn't the setting of the flag I care about, but the check of that
> >> flag. Checking a flag for a feature that cannot be used on the running
> >> system should have a zero cost, which isn't the case here.
> >>
> >> Granted, the impact should be minimal and it looks like it mostly happen
> >> on the slow path, but at the very least it would be consistent. So even
> >> if you don't buy my argument about efficiency, please change it in the
> >> name of consistency.
> > 
> > One of the annoyances here is there is no single static key for ptrauth.
> > 
> > I'm assuming we don't want to check both static keys (for address and
> > generic auth) on hot paths.
> 
> They both just branches, so I don't see why not. Of course, for people
> using a lesser compiler (gcc 4.8 or clang), things will suck. But they
> got it coming anyway.

I seem to recall Christoffer expressing concerns about this at some
point: even unconditional branches branches to a fixed address are not
free (or even correctly predicted).

I don't think any compiler can elide static key checks of merge them
together.

Maybe I am misremembering.

Cheers
---Dave
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v9 1/5] KVM: arm64: Add a vcpu flag to control ptrauth for guest
@ 2019-04-17 17:20               ` Dave Martin
  0 siblings, 0 replies; 77+ messages in thread
From: Dave Martin @ 2019-04-17 17:20 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Catalin Marinas, Will Deacon, linux-kernel, Kristina Martsenko,
	Ramana Radhakrishnan, Amit Daniel Kachhap, kvmarm,
	linux-arm-kernel

On Wed, Apr 17, 2019 at 04:54:32PM +0100, Marc Zyngier wrote:
> On 17/04/2019 15:52, Dave Martin wrote:
> > On Wed, Apr 17, 2019 at 03:19:11PM +0100, Marc Zyngier wrote:
> >> On 17/04/2019 14:08, Amit Daniel Kachhap wrote:
> >>> Hi,
> >>>
> >>> On 4/17/19 2:05 PM, Marc Zyngier wrote:
> >>>> On 12/04/2019 04:20, Amit Daniel Kachhap wrote:
> >>>>> A per vcpu flag is added to check if pointer authentication is
> >>>>> enabled for the vcpu or not. This flag may be enabled according to
> >>>>> the necessary user policies and host capabilities.
> >>>>>
> >>>>> This patch also adds a helper to check the flag.
> >>>>>
> >>>>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> >>>>> Cc: Mark Rutland <mark.rutland@arm.com>
> >>>>> Cc: Marc Zyngier <marc.zyngier@arm.com>
> >>>>> Cc: Christoffer Dall <christoffer.dall@arm.com>
> >>>>> Cc: kvmarm@lists.cs.columbia.edu
> >>>>> ---
> >>>>>
> >>>>> Changes since v8:
> >>>>> * Added a new per vcpu flag which will store Pointer Authentication enable
> >>>>>    status instead of checking them again. [Dave Martin]
> >>>>>
> >>>>>   arch/arm64/include/asm/kvm_host.h | 4 ++++
> >>>>>   1 file changed, 4 insertions(+)
> >>>>>
> >>>>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> >>>>> index 9d57cf8..31dbc7c 100644
> >>>>> --- a/arch/arm64/include/asm/kvm_host.h
> >>>>> +++ b/arch/arm64/include/asm/kvm_host.h
> >>>>> @@ -355,10 +355,14 @@ struct kvm_vcpu_arch {
> >>>>>   #define KVM_ARM64_HOST_SVE_ENABLED	(1 << 4) /* SVE enabled for EL0 */
> >>>>>   #define KVM_ARM64_GUEST_HAS_SVE		(1 << 5) /* SVE exposed to guest */
> >>>>>   #define KVM_ARM64_VCPU_SVE_FINALIZED	(1 << 6) /* SVE config completed */
> >>>>> +#define KVM_ARM64_GUEST_HAS_PTRAUTH	(1 << 7) /* PTRAUTH exposed to guest */
> >>>>>   
> >>>>>   #define vcpu_has_sve(vcpu) (system_supports_sve() && \
> >>>>>   			    ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_SVE))
> >>>>>   
> >>>>> +#define vcpu_has_ptrauth(vcpu)	\
> >>>>> +			((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH)
> >>>>> +
> >>>>
> >>>> Just as for SVE, please first check that the system has PTRAUTH.
> >>>> Something like:
> >>>>
> >>>> 		(cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_ARCH) && \
> >>>> 		 ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH))
> >>>
> >>> In the subsequent patches, vcpu->arch.flags is only set to 
> >>> KVM_ARM64_GUEST_HAS_PTRAUTH when all host capability check conditions 
> >>> matches such as system_supports_address_auth(), 
> >>> system_supports_generic_auth() so doing them again is repetitive in my view.
> >>
> >> It isn't the setting of the flag I care about, but the check of that
> >> flag. Checking a flag for a feature that cannot be used on the running
> >> system should have a zero cost, which isn't the case here.
> >>
> >> Granted, the impact should be minimal and it looks like it mostly happen
> >> on the slow path, but at the very least it would be consistent. So even
> >> if you don't buy my argument about efficiency, please change it in the
> >> name of consistency.
> > 
> > One of the annoyances here is there is no single static key for ptrauth.
> > 
> > I'm assuming we don't want to check both static keys (for address and
> > generic auth) on hot paths.
> 
> They both just branches, so I don't see why not. Of course, for people
> using a lesser compiler (gcc 4.8 or clang), things will suck. But they
> got it coming anyway.

I seem to recall Christoffer expressing concerns about this at some
point: even unconditional branches branches to a fixed address are not
free (or even correctly predicted).

I don't think any compiler can elide static key checks of merge them
together.

Maybe I am misremembering.

Cheers
---Dave

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v9 1/5] KVM: arm64: Add a vcpu flag to control ptrauth for guest
@ 2019-04-18  8:48                 ` Marc Zyngier
  0 siblings, 0 replies; 77+ messages in thread
From: Marc Zyngier @ 2019-04-18  8:48 UTC (permalink / raw)
  To: Dave Martin
  Cc: Catalin Marinas, Will Deacon, linux-kernel, Kristina Martsenko,
	Ramana Radhakrishnan, Amit Daniel Kachhap, kvmarm,
	linux-arm-kernel

On 17/04/2019 18:20, Dave Martin wrote:
> On Wed, Apr 17, 2019 at 04:54:32PM +0100, Marc Zyngier wrote:
>> On 17/04/2019 15:52, Dave Martin wrote:
>>> On Wed, Apr 17, 2019 at 03:19:11PM +0100, Marc Zyngier wrote:
>>>> On 17/04/2019 14:08, Amit Daniel Kachhap wrote:
>>>>> Hi,
>>>>>
>>>>> On 4/17/19 2:05 PM, Marc Zyngier wrote:
>>>>>> On 12/04/2019 04:20, Amit Daniel Kachhap wrote:
>>>>>>> A per vcpu flag is added to check if pointer authentication is
>>>>>>> enabled for the vcpu or not. This flag may be enabled according to
>>>>>>> the necessary user policies and host capabilities.
>>>>>>>
>>>>>>> This patch also adds a helper to check the flag.
>>>>>>>
>>>>>>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>>>>>>> Cc: Mark Rutland <mark.rutland@arm.com>
>>>>>>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>>>>>>> Cc: Christoffer Dall <christoffer.dall@arm.com>
>>>>>>> Cc: kvmarm@lists.cs.columbia.edu
>>>>>>> ---
>>>>>>>
>>>>>>> Changes since v8:
>>>>>>> * Added a new per vcpu flag which will store Pointer Authentication enable
>>>>>>>    status instead of checking them again. [Dave Martin]
>>>>>>>
>>>>>>>   arch/arm64/include/asm/kvm_host.h | 4 ++++
>>>>>>>   1 file changed, 4 insertions(+)
>>>>>>>
>>>>>>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
>>>>>>> index 9d57cf8..31dbc7c 100644
>>>>>>> --- a/arch/arm64/include/asm/kvm_host.h
>>>>>>> +++ b/arch/arm64/include/asm/kvm_host.h
>>>>>>> @@ -355,10 +355,14 @@ struct kvm_vcpu_arch {
>>>>>>>   #define KVM_ARM64_HOST_SVE_ENABLED	(1 << 4) /* SVE enabled for EL0 */
>>>>>>>   #define KVM_ARM64_GUEST_HAS_SVE		(1 << 5) /* SVE exposed to guest */
>>>>>>>   #define KVM_ARM64_VCPU_SVE_FINALIZED	(1 << 6) /* SVE config completed */
>>>>>>> +#define KVM_ARM64_GUEST_HAS_PTRAUTH	(1 << 7) /* PTRAUTH exposed to guest */
>>>>>>>   
>>>>>>>   #define vcpu_has_sve(vcpu) (system_supports_sve() && \
>>>>>>>   			    ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_SVE))
>>>>>>>   
>>>>>>> +#define vcpu_has_ptrauth(vcpu)	\
>>>>>>> +			((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH)
>>>>>>> +
>>>>>>
>>>>>> Just as for SVE, please first check that the system has PTRAUTH.
>>>>>> Something like:
>>>>>>
>>>>>> 		(cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_ARCH) && \
>>>>>> 		 ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH))
>>>>>
>>>>> In the subsequent patches, vcpu->arch.flags is only set to 
>>>>> KVM_ARM64_GUEST_HAS_PTRAUTH when all host capability check conditions 
>>>>> matches such as system_supports_address_auth(), 
>>>>> system_supports_generic_auth() so doing them again is repetitive in my view.
>>>>
>>>> It isn't the setting of the flag I care about, but the check of that
>>>> flag. Checking a flag for a feature that cannot be used on the running
>>>> system should have a zero cost, which isn't the case here.
>>>>
>>>> Granted, the impact should be minimal and it looks like it mostly happen
>>>> on the slow path, but at the very least it would be consistent. So even
>>>> if you don't buy my argument about efficiency, please change it in the
>>>> name of consistency.
>>>
>>> One of the annoyances here is there is no single static key for ptrauth.
>>>
>>> I'm assuming we don't want to check both static keys (for address and
>>> generic auth) on hot paths.
>>
>> They both just branches, so I don't see why not. Of course, for people
>> using a lesser compiler (gcc 4.8 or clang), things will suck. But they
>> got it coming anyway.
> 
> I seem to recall Christoffer expressing concerns about this at some
> point: even unconditional branches branches to a fixed address are not
> free (or even correctly predicted).

Certainly not free, but likely less expensive than a load followed by a
conditional branch. And actually, this is not a comparison against a branch,
but against a nop.

> I don't think any compiler can elide static key checks of merge them
> together.

It is not about eliding them, it is about having a cheap fast path.

Compiling this:

bool kvm_hack_test_static_key(struct kvm_vcpu *vcpu)
{
	return ((system_supports_address_auth() ||
		 system_supports_generic_auth()) &&
		vcpu->arch.flags & (1 << 6));

}

I get:

[...]
ffff0000100db5c8:       1400000c        b       ffff0000100db5f8 <kvm_hack_test_static_key+0x48>
ffff0000100db5cc:       d503201f        nop
ffff0000100db5d0:       14000012        b       ffff0000100db618 <kvm_hack_test_static_key+0x68>
ffff0000100db5d4:       d503201f        nop
ffff0000100db5d8:       14000014        b       ffff0000100db628 <kvm_hack_test_static_key+0x78>
ffff0000100db5dc:       d503201f        nop
ffff0000100db5e0:       14000017        b       ffff0000100db63c <kvm_hack_test_static_key+0x8c>
ffff0000100db5e4:       d503201f        nop
ffff0000100db5e8:       52800000        mov     w0, #0x0                        // #0
ffff0000100db5ec:       f9400bf3        ldr     x19, [sp, #16]
ffff0000100db5f0:       a8c27bfd        ldp     x29, x30, [sp], #32
ffff0000100db5f4:       d65f03c0        ret
ffff0000100db5f8:       b000ac40        adrp    x0, ffff000011664000 <reset_devices>
ffff0000100db5fc:       f942a400        ldr     x0, [x0, #1352]
ffff0000100db600:       b637fe80        tbz     x0, #38, ffff0000100db5d0 <kvm_hack_test_static_key+0x20>
ffff0000100db604:       f9441660        ldr     x0, [x19, #2088]
ffff0000100db608:       f9400bf3        ldr     x19, [sp, #16]
ffff0000100db60c:       53061800        ubfx    w0, w0, #6, #1
ffff0000100db610:       a8c27bfd        ldp     x29, x30, [sp], #32
ffff0000100db614:       d65f03c0        ret
ffff0000100db618:       b000ac40        adrp    x0, ffff000011664000 <reset_devices>
ffff0000100db61c:       f942a400        ldr     x0, [x0, #1352]
ffff0000100db620:       b73fff20        tbnz    x0, #39, ffff0000100db604 <kvm_hack_test_static_key+0x54>
ffff0000100db624:       17ffffed        b       ffff0000100db5d8 <kvm_hack_test_static_key+0x28>
ffff0000100db628:       b000ac40        adrp    x0, ffff000011664000 <reset_devices>
ffff0000100db62c:       f942a400        ldr     x0, [x0, #1352]
ffff0000100db630:       b747fea0        tbnz    x0, #40, ffff0000100db604 <kvm_hack_test_static_key+0x54>
ffff0000100db634:       14000002        b       ffff0000100db63c <kvm_hack_test_static_key+0x8c>
ffff0000100db638:       17ffffeb        b       ffff0000100db5e4 <kvm_hack_test_static_key+0x34>
ffff0000100db63c:       b000ac40        adrp    x0, ffff000011664000 <reset_devices>
ffff0000100db640:       f942a400        ldr     x0, [x0, #1352]
ffff0000100db644:       b74ffe00        tbnz    x0, #41, ffff0000100db604 <kvm_hack_test_static_key+0x54>
ffff0000100db648:       52800000        mov     w0, #0x0                        // #0
ffff0000100db64c:       17ffffe8        b       ffff0000100db5ec <kvm_hack_test_static_key+0x3c>

Once the initial 4 branches that are there to deal with the pre static 
keys checks are nop-ed, everything is controlled by the remaining 4
nops which are turned into branches to ffff0000100db604 if any of the 
conditions become true.

Which is exactly what we want: a fall through to returning zero without
doing anything else.

Thanks,

	M.

> Maybe I am misremembering.
> 
> Cheers
> ---Dave
> 


-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v9 1/5] KVM: arm64: Add a vcpu flag to control ptrauth for guest
@ 2019-04-18  8:48                 ` Marc Zyngier
  0 siblings, 0 replies; 77+ messages in thread
From: Marc Zyngier @ 2019-04-18  8:48 UTC (permalink / raw)
  To: Dave Martin
  Cc: Catalin Marinas, Will Deacon, linux-kernel, Kristina Martsenko,
	Ramana Radhakrishnan, Amit Daniel Kachhap, kvmarm,
	linux-arm-kernel

On 17/04/2019 18:20, Dave Martin wrote:
> On Wed, Apr 17, 2019 at 04:54:32PM +0100, Marc Zyngier wrote:
>> On 17/04/2019 15:52, Dave Martin wrote:
>>> On Wed, Apr 17, 2019 at 03:19:11PM +0100, Marc Zyngier wrote:
>>>> On 17/04/2019 14:08, Amit Daniel Kachhap wrote:
>>>>> Hi,
>>>>>
>>>>> On 4/17/19 2:05 PM, Marc Zyngier wrote:
>>>>>> On 12/04/2019 04:20, Amit Daniel Kachhap wrote:
>>>>>>> A per vcpu flag is added to check if pointer authentication is
>>>>>>> enabled for the vcpu or not. This flag may be enabled according to
>>>>>>> the necessary user policies and host capabilities.
>>>>>>>
>>>>>>> This patch also adds a helper to check the flag.
>>>>>>>
>>>>>>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>>>>>>> Cc: Mark Rutland <mark.rutland@arm.com>
>>>>>>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>>>>>>> Cc: Christoffer Dall <christoffer.dall@arm.com>
>>>>>>> Cc: kvmarm@lists.cs.columbia.edu
>>>>>>> ---
>>>>>>>
>>>>>>> Changes since v8:
>>>>>>> * Added a new per vcpu flag which will store Pointer Authentication enable
>>>>>>>    status instead of checking them again. [Dave Martin]
>>>>>>>
>>>>>>>   arch/arm64/include/asm/kvm_host.h | 4 ++++
>>>>>>>   1 file changed, 4 insertions(+)
>>>>>>>
>>>>>>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
>>>>>>> index 9d57cf8..31dbc7c 100644
>>>>>>> --- a/arch/arm64/include/asm/kvm_host.h
>>>>>>> +++ b/arch/arm64/include/asm/kvm_host.h
>>>>>>> @@ -355,10 +355,14 @@ struct kvm_vcpu_arch {
>>>>>>>   #define KVM_ARM64_HOST_SVE_ENABLED	(1 << 4) /* SVE enabled for EL0 */
>>>>>>>   #define KVM_ARM64_GUEST_HAS_SVE		(1 << 5) /* SVE exposed to guest */
>>>>>>>   #define KVM_ARM64_VCPU_SVE_FINALIZED	(1 << 6) /* SVE config completed */
>>>>>>> +#define KVM_ARM64_GUEST_HAS_PTRAUTH	(1 << 7) /* PTRAUTH exposed to guest */
>>>>>>>   
>>>>>>>   #define vcpu_has_sve(vcpu) (system_supports_sve() && \
>>>>>>>   			    ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_SVE))
>>>>>>>   
>>>>>>> +#define vcpu_has_ptrauth(vcpu)	\
>>>>>>> +			((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH)
>>>>>>> +
>>>>>>
>>>>>> Just as for SVE, please first check that the system has PTRAUTH.
>>>>>> Something like:
>>>>>>
>>>>>> 		(cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_ARCH) && \
>>>>>> 		 ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH))
>>>>>
>>>>> In the subsequent patches, vcpu->arch.flags is only set to 
>>>>> KVM_ARM64_GUEST_HAS_PTRAUTH when all host capability check conditions 
>>>>> matches such as system_supports_address_auth(), 
>>>>> system_supports_generic_auth() so doing them again is repetitive in my view.
>>>>
>>>> It isn't the setting of the flag I care about, but the check of that
>>>> flag. Checking a flag for a feature that cannot be used on the running
>>>> system should have a zero cost, which isn't the case here.
>>>>
>>>> Granted, the impact should be minimal and it looks like it mostly happen
>>>> on the slow path, but at the very least it would be consistent. So even
>>>> if you don't buy my argument about efficiency, please change it in the
>>>> name of consistency.
>>>
>>> One of the annoyances here is there is no single static key for ptrauth.
>>>
>>> I'm assuming we don't want to check both static keys (for address and
>>> generic auth) on hot paths.
>>
>> They both just branches, so I don't see why not. Of course, for people
>> using a lesser compiler (gcc 4.8 or clang), things will suck. But they
>> got it coming anyway.
> 
> I seem to recall Christoffer expressing concerns about this at some
> point: even unconditional branches branches to a fixed address are not
> free (or even correctly predicted).

Certainly not free, but likely less expensive than a load followed by a
conditional branch. And actually, this is not a comparison against a branch,
but against a nop.

> I don't think any compiler can elide static key checks of merge them
> together.

It is not about eliding them, it is about having a cheap fast path.

Compiling this:

bool kvm_hack_test_static_key(struct kvm_vcpu *vcpu)
{
	return ((system_supports_address_auth() ||
		 system_supports_generic_auth()) &&
		vcpu->arch.flags & (1 << 6));

}

I get:

[...]
ffff0000100db5c8:       1400000c        b       ffff0000100db5f8 <kvm_hack_test_static_key+0x48>
ffff0000100db5cc:       d503201f        nop
ffff0000100db5d0:       14000012        b       ffff0000100db618 <kvm_hack_test_static_key+0x68>
ffff0000100db5d4:       d503201f        nop
ffff0000100db5d8:       14000014        b       ffff0000100db628 <kvm_hack_test_static_key+0x78>
ffff0000100db5dc:       d503201f        nop
ffff0000100db5e0:       14000017        b       ffff0000100db63c <kvm_hack_test_static_key+0x8c>
ffff0000100db5e4:       d503201f        nop
ffff0000100db5e8:       52800000        mov     w0, #0x0                        // #0
ffff0000100db5ec:       f9400bf3        ldr     x19, [sp, #16]
ffff0000100db5f0:       a8c27bfd        ldp     x29, x30, [sp], #32
ffff0000100db5f4:       d65f03c0        ret
ffff0000100db5f8:       b000ac40        adrp    x0, ffff000011664000 <reset_devices>
ffff0000100db5fc:       f942a400        ldr     x0, [x0, #1352]
ffff0000100db600:       b637fe80        tbz     x0, #38, ffff0000100db5d0 <kvm_hack_test_static_key+0x20>
ffff0000100db604:       f9441660        ldr     x0, [x19, #2088]
ffff0000100db608:       f9400bf3        ldr     x19, [sp, #16]
ffff0000100db60c:       53061800        ubfx    w0, w0, #6, #1
ffff0000100db610:       a8c27bfd        ldp     x29, x30, [sp], #32
ffff0000100db614:       d65f03c0        ret
ffff0000100db618:       b000ac40        adrp    x0, ffff000011664000 <reset_devices>
ffff0000100db61c:       f942a400        ldr     x0, [x0, #1352]
ffff0000100db620:       b73fff20        tbnz    x0, #39, ffff0000100db604 <kvm_hack_test_static_key+0x54>
ffff0000100db624:       17ffffed        b       ffff0000100db5d8 <kvm_hack_test_static_key+0x28>
ffff0000100db628:       b000ac40        adrp    x0, ffff000011664000 <reset_devices>
ffff0000100db62c:       f942a400        ldr     x0, [x0, #1352]
ffff0000100db630:       b747fea0        tbnz    x0, #40, ffff0000100db604 <kvm_hack_test_static_key+0x54>
ffff0000100db634:       14000002        b       ffff0000100db63c <kvm_hack_test_static_key+0x8c>
ffff0000100db638:       17ffffeb        b       ffff0000100db5e4 <kvm_hack_test_static_key+0x34>
ffff0000100db63c:       b000ac40        adrp    x0, ffff000011664000 <reset_devices>
ffff0000100db640:       f942a400        ldr     x0, [x0, #1352]
ffff0000100db644:       b74ffe00        tbnz    x0, #41, ffff0000100db604 <kvm_hack_test_static_key+0x54>
ffff0000100db648:       52800000        mov     w0, #0x0                        // #0
ffff0000100db64c:       17ffffe8        b       ffff0000100db5ec <kvm_hack_test_static_key+0x3c>

Once the initial 4 branches that are there to deal with the pre static 
keys checks are nop-ed, everything is controlled by the remaining 4
nops which are turned into branches to ffff0000100db604 if any of the 
conditions become true.

Which is exactly what we want: a fall through to returning zero without
doing anything else.

Thanks,

	M.

> Maybe I am misremembering.
> 
> Cheers
> ---Dave
> 


-- 
Jazz is not dead. It just smells funny...
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v9 1/5] KVM: arm64: Add a vcpu flag to control ptrauth for guest
@ 2019-04-18  8:48                 ` Marc Zyngier
  0 siblings, 0 replies; 77+ messages in thread
From: Marc Zyngier @ 2019-04-18  8:48 UTC (permalink / raw)
  To: Dave Martin
  Cc: Catalin Marinas, Will Deacon, linux-kernel, Kristina Martsenko,
	Ramana Radhakrishnan, Amit Daniel Kachhap, kvmarm,
	linux-arm-kernel

On 17/04/2019 18:20, Dave Martin wrote:
> On Wed, Apr 17, 2019 at 04:54:32PM +0100, Marc Zyngier wrote:
>> On 17/04/2019 15:52, Dave Martin wrote:
>>> On Wed, Apr 17, 2019 at 03:19:11PM +0100, Marc Zyngier wrote:
>>>> On 17/04/2019 14:08, Amit Daniel Kachhap wrote:
>>>>> Hi,
>>>>>
>>>>> On 4/17/19 2:05 PM, Marc Zyngier wrote:
>>>>>> On 12/04/2019 04:20, Amit Daniel Kachhap wrote:
>>>>>>> A per vcpu flag is added to check if pointer authentication is
>>>>>>> enabled for the vcpu or not. This flag may be enabled according to
>>>>>>> the necessary user policies and host capabilities.
>>>>>>>
>>>>>>> This patch also adds a helper to check the flag.
>>>>>>>
>>>>>>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>>>>>>> Cc: Mark Rutland <mark.rutland@arm.com>
>>>>>>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>>>>>>> Cc: Christoffer Dall <christoffer.dall@arm.com>
>>>>>>> Cc: kvmarm@lists.cs.columbia.edu
>>>>>>> ---
>>>>>>>
>>>>>>> Changes since v8:
>>>>>>> * Added a new per vcpu flag which will store Pointer Authentication enable
>>>>>>>    status instead of checking them again. [Dave Martin]
>>>>>>>
>>>>>>>   arch/arm64/include/asm/kvm_host.h | 4 ++++
>>>>>>>   1 file changed, 4 insertions(+)
>>>>>>>
>>>>>>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
>>>>>>> index 9d57cf8..31dbc7c 100644
>>>>>>> --- a/arch/arm64/include/asm/kvm_host.h
>>>>>>> +++ b/arch/arm64/include/asm/kvm_host.h
>>>>>>> @@ -355,10 +355,14 @@ struct kvm_vcpu_arch {
>>>>>>>   #define KVM_ARM64_HOST_SVE_ENABLED	(1 << 4) /* SVE enabled for EL0 */
>>>>>>>   #define KVM_ARM64_GUEST_HAS_SVE		(1 << 5) /* SVE exposed to guest */
>>>>>>>   #define KVM_ARM64_VCPU_SVE_FINALIZED	(1 << 6) /* SVE config completed */
>>>>>>> +#define KVM_ARM64_GUEST_HAS_PTRAUTH	(1 << 7) /* PTRAUTH exposed to guest */
>>>>>>>   
>>>>>>>   #define vcpu_has_sve(vcpu) (system_supports_sve() && \
>>>>>>>   			    ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_SVE))
>>>>>>>   
>>>>>>> +#define vcpu_has_ptrauth(vcpu)	\
>>>>>>> +			((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH)
>>>>>>> +
>>>>>>
>>>>>> Just as for SVE, please first check that the system has PTRAUTH.
>>>>>> Something like:
>>>>>>
>>>>>> 		(cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_ARCH) && \
>>>>>> 		 ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH))
>>>>>
>>>>> In the subsequent patches, vcpu->arch.flags is only set to 
>>>>> KVM_ARM64_GUEST_HAS_PTRAUTH when all host capability check conditions 
>>>>> matches such as system_supports_address_auth(), 
>>>>> system_supports_generic_auth() so doing them again is repetitive in my view.
>>>>
>>>> It isn't the setting of the flag I care about, but the check of that
>>>> flag. Checking a flag for a feature that cannot be used on the running
>>>> system should have a zero cost, which isn't the case here.
>>>>
>>>> Granted, the impact should be minimal and it looks like it mostly happen
>>>> on the slow path, but at the very least it would be consistent. So even
>>>> if you don't buy my argument about efficiency, please change it in the
>>>> name of consistency.
>>>
>>> One of the annoyances here is there is no single static key for ptrauth.
>>>
>>> I'm assuming we don't want to check both static keys (for address and
>>> generic auth) on hot paths.
>>
>> They both just branches, so I don't see why not. Of course, for people
>> using a lesser compiler (gcc 4.8 or clang), things will suck. But they
>> got it coming anyway.
> 
> I seem to recall Christoffer expressing concerns about this at some
> point: even unconditional branches branches to a fixed address are not
> free (or even correctly predicted).

Certainly not free, but likely less expensive than a load followed by a
conditional branch. And actually, this is not a comparison against a branch,
but against a nop.

> I don't think any compiler can elide static key checks of merge them
> together.

It is not about eliding them, it is about having a cheap fast path.

Compiling this:

bool kvm_hack_test_static_key(struct kvm_vcpu *vcpu)
{
	return ((system_supports_address_auth() ||
		 system_supports_generic_auth()) &&
		vcpu->arch.flags & (1 << 6));

}

I get:

[...]
ffff0000100db5c8:       1400000c        b       ffff0000100db5f8 <kvm_hack_test_static_key+0x48>
ffff0000100db5cc:       d503201f        nop
ffff0000100db5d0:       14000012        b       ffff0000100db618 <kvm_hack_test_static_key+0x68>
ffff0000100db5d4:       d503201f        nop
ffff0000100db5d8:       14000014        b       ffff0000100db628 <kvm_hack_test_static_key+0x78>
ffff0000100db5dc:       d503201f        nop
ffff0000100db5e0:       14000017        b       ffff0000100db63c <kvm_hack_test_static_key+0x8c>
ffff0000100db5e4:       d503201f        nop
ffff0000100db5e8:       52800000        mov     w0, #0x0                        // #0
ffff0000100db5ec:       f9400bf3        ldr     x19, [sp, #16]
ffff0000100db5f0:       a8c27bfd        ldp     x29, x30, [sp], #32
ffff0000100db5f4:       d65f03c0        ret
ffff0000100db5f8:       b000ac40        adrp    x0, ffff000011664000 <reset_devices>
ffff0000100db5fc:       f942a400        ldr     x0, [x0, #1352]
ffff0000100db600:       b637fe80        tbz     x0, #38, ffff0000100db5d0 <kvm_hack_test_static_key+0x20>
ffff0000100db604:       f9441660        ldr     x0, [x19, #2088]
ffff0000100db608:       f9400bf3        ldr     x19, [sp, #16]
ffff0000100db60c:       53061800        ubfx    w0, w0, #6, #1
ffff0000100db610:       a8c27bfd        ldp     x29, x30, [sp], #32
ffff0000100db614:       d65f03c0        ret
ffff0000100db618:       b000ac40        adrp    x0, ffff000011664000 <reset_devices>
ffff0000100db61c:       f942a400        ldr     x0, [x0, #1352]
ffff0000100db620:       b73fff20        tbnz    x0, #39, ffff0000100db604 <kvm_hack_test_static_key+0x54>
ffff0000100db624:       17ffffed        b       ffff0000100db5d8 <kvm_hack_test_static_key+0x28>
ffff0000100db628:       b000ac40        adrp    x0, ffff000011664000 <reset_devices>
ffff0000100db62c:       f942a400        ldr     x0, [x0, #1352]
ffff0000100db630:       b747fea0        tbnz    x0, #40, ffff0000100db604 <kvm_hack_test_static_key+0x54>
ffff0000100db634:       14000002        b       ffff0000100db63c <kvm_hack_test_static_key+0x8c>
ffff0000100db638:       17ffffeb        b       ffff0000100db5e4 <kvm_hack_test_static_key+0x34>
ffff0000100db63c:       b000ac40        adrp    x0, ffff000011664000 <reset_devices>
ffff0000100db640:       f942a400        ldr     x0, [x0, #1352]
ffff0000100db644:       b74ffe00        tbnz    x0, #41, ffff0000100db604 <kvm_hack_test_static_key+0x54>
ffff0000100db648:       52800000        mov     w0, #0x0                        // #0
ffff0000100db64c:       17ffffe8        b       ffff0000100db5ec <kvm_hack_test_static_key+0x3c>

Once the initial 4 branches that are there to deal with the pre static 
keys checks are nop-ed, everything is controlled by the remaining 4
nops which are turned into branches to ffff0000100db604 if any of the 
conditions become true.

Which is exactly what we want: a fall through to returning zero without
doing anything else.

Thanks,

	M.

> Maybe I am misremembering.
> 
> Cheers
> ---Dave
> 


-- 
Jazz is not dead. It just smells funny...

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 77+ messages in thread

end of thread, other threads:[~2019-04-18  8:48 UTC | newest]

Thread overview: 77+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-04-12  3:20 [PATCH v9 0/5] Add ARMv8.3 pointer authentication for kvm guest Amit Daniel Kachhap
2019-04-12  3:20 ` Amit Daniel Kachhap
2019-04-12  3:20 ` Amit Daniel Kachhap
2019-04-12  3:20 ` [PATCH v9 1/5] KVM: arm64: Add a vcpu flag to control ptrauth for guest Amit Daniel Kachhap
2019-04-12  3:20   ` Amit Daniel Kachhap
2019-04-12  3:20   ` Amit Daniel Kachhap
2019-04-16 16:30   ` Dave Martin
2019-04-16 16:30     ` Dave Martin
2019-04-16 16:30     ` Dave Martin
2019-04-17  8:35   ` Marc Zyngier
2019-04-17  8:35     ` Marc Zyngier
2019-04-17  8:35     ` Marc Zyngier
2019-04-17 13:08     ` Amit Daniel Kachhap
2019-04-17 13:08       ` Amit Daniel Kachhap
2019-04-17 13:08       ` Amit Daniel Kachhap
2019-04-17 14:19       ` Marc Zyngier
2019-04-17 14:19         ` Marc Zyngier
2019-04-17 14:19         ` Marc Zyngier
2019-04-17 14:52         ` Dave Martin
2019-04-17 14:52           ` Dave Martin
2019-04-17 14:52           ` Dave Martin
2019-04-17 15:54           ` Marc Zyngier
2019-04-17 15:54             ` Marc Zyngier
2019-04-17 15:54             ` Marc Zyngier
2019-04-17 17:20             ` Dave Martin
2019-04-17 17:20               ` Dave Martin
2019-04-17 17:20               ` Dave Martin
2019-04-18  8:48               ` Marc Zyngier
2019-04-18  8:48                 ` Marc Zyngier
2019-04-18  8:48                 ` Marc Zyngier
2019-04-12  3:20 ` [PATCH v9 2/5] KVM: arm/arm64: context-switch ptrauth registers Amit Daniel Kachhap
2019-04-12  3:20   ` Amit Daniel Kachhap
2019-04-12  3:20   ` Amit Daniel Kachhap
2019-04-17  9:09   ` Marc Zyngier
2019-04-17  9:09     ` Marc Zyngier
2019-04-17  9:09     ` Marc Zyngier
2019-04-17 14:24     ` Amit Daniel Kachhap
2019-04-17 14:24       ` Amit Daniel Kachhap
2019-04-17 14:24       ` Amit Daniel Kachhap
2019-04-17 14:39       ` Marc Zyngier
2019-04-17 14:39         ` Marc Zyngier
2019-04-17 14:39         ` Marc Zyngier
2019-04-12  3:20 ` [PATCH v9 3/5] KVM: arm64: Add userspace flag to enable pointer authentication Amit Daniel Kachhap
2019-04-12  3:20   ` Amit Daniel Kachhap
2019-04-12  3:20   ` Amit Daniel Kachhap
2019-04-16 16:31   ` Dave Martin
2019-04-16 16:31     ` Dave Martin
2019-04-16 16:31     ` Dave Martin
2019-04-17  8:17     ` Amit Daniel Kachhap
2019-04-17  8:17       ` Amit Daniel Kachhap
2019-04-17  8:17       ` Amit Daniel Kachhap
2019-04-12  3:20 ` [PATCH v9 4/5] KVM: arm64: Add capability to advertise ptrauth for guest Amit Daniel Kachhap
2019-04-12  3:20   ` Amit Daniel Kachhap
2019-04-12  3:20   ` Amit Daniel Kachhap
2019-04-16 16:32   ` Dave Martin
2019-04-16 16:32     ` Dave Martin
2019-04-16 16:32     ` Dave Martin
2019-04-17  9:39     ` Amit Daniel Kachhap
2019-04-17  9:39       ` Amit Daniel Kachhap
2019-04-17  9:39       ` Amit Daniel Kachhap
2019-04-17 15:22       ` Dave Martin
2019-04-17 15:22         ` Dave Martin
2019-04-17 15:22         ` Dave Martin
2019-04-12  3:20 ` [kvmtool PATCH v9 5/5] KVM: arm/arm64: Add a vcpu feature for pointer authentication Amit Daniel Kachhap
2019-04-12  3:20   ` Amit Daniel Kachhap
2019-04-12  3:20   ` Amit Daniel Kachhap
2019-04-16 16:32   ` Dave Martin
2019-04-16 16:32     ` Dave Martin
2019-04-16 16:32     ` Dave Martin
2019-04-17 12:36     ` Amit Daniel Kachhap
2019-04-17 12:36       ` Amit Daniel Kachhap
2019-04-17 12:36       ` Amit Daniel Kachhap
2019-04-17 15:38       ` Dave Martin
2019-04-17 15:38         ` Dave Martin
2019-04-17 15:38         ` Dave Martin
2019-04-17  8:55   ` Alexandru Elisei
2019-04-17  8:55     ` Alexandru Elisei

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.