* [PATCH v3 00/15] KVM: arm64: Fixed features for protected VMs
From: Fuad Tabba @ 2021-07-19 16:03 UTC
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team, tabba

Hi,

Changes since v2 [1]:
- Both trapping and setting of the feature ID registers are now driven by an
  allowed-features bitmap for those registers (Will)
- Documentation explaining the rationale behind allowed/blocked features (Drew)
- Restrict protected VM features by checking and restricting VM capabilities
- Misc small fixes and tidying up (mostly Will)
- Remove dependency on Will's protected VM user ABI series [2]
- Rebase on 5.14-rc2
- Carried Will's acks

Changes since v1 [3]:
- Restrict protected VM features based on allowed features rather than
  rejected ones (Drew)
- Add more background describing protected KVM to the cover letter (Alex)

This patch series adds support for restricting CPU features for protected VMs
in KVM (pKVM) [4].

Various VM feature configurations are allowed in KVM/arm64, each requiring
specific handling logic to deal with traps, context-switching and potentially
emulation. Achieving feature parity in pKVM therefore requires either elevating
this logic to EL2 (and substantially increasing the TCB) or continuing to trust
the host handlers at EL1. Since neither of these options is especially
appealing, pKVM instead limits the CPU features exposed to a guest to a
fixed configuration that is based on the underlying hardware and can mostly
be provided straightforwardly by EL2.

This series implements that approach. The features advertised to protected
guests through the feature ID registers are limited, and pKVM enforces these
limits by trapping guest accesses to those registers, as well as instructions
associated with the restricted features.
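
To make the enforcement concrete, here is a minimal sketch (with made-up
names, not code from this series) of sanitising a trapped feature ID
register read at EL2:

  /*
   * Sketch only: mask a trapped ID register read against a fixed
   * allow-list. PVM_ID_AA64PFR0_ALLOW and pvm_read_id_aa64pfr0() are
   * hypothetical names, not this series' actual symbols.
   */
  static u64 pvm_read_id_aa64pfr0(struct kvm_vcpu *vcpu)
  {
  	u64 val = read_sysreg_s(SYS_ID_AA64PFR0_EL1);

  	/* Hide every feature field that is not explicitly allowed. */
  	return val & PVM_ID_AA64PFR0_ALLOW;
  }

The guest then only ever observes the fixed feature set, and any attempt to
use a hidden feature traps to EL2, where it can be rejected.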

This series is based on 5.14-rc2. You can find the applied series here [5].

Cheers,
/fuad

[1] https://lore.kernel.org/kvmarm/20210615133950.693489-1-tabba@google.com/

[2] https://lore.kernel.org/kvmarm/20210603183347.1695-1-will@kernel.org/

[3] https://lore.kernel.org/kvmarm/20210608141141.997398-1-tabba@google.com/

[4] Once complete, protected KVM adds the ability to create protected VMs.
These protected VMs are protected from the host Linux kernel (and from other
VMs): the host has no access to guest memory, even if the host itself is
compromised. Normal (nVHE) guests can still be created and run in parallel
with protected VMs, and their functionality should not be affected.

For protected VMs, the host should not even have access to a protected
guest's state, or to anything that would enable it to manipulate that state
(e.g., vcpu register context and EL2 system registers); only hyp would have
that access. If the host could access that state, it might be able to get
around the protection provided. Therefore, anything sensitive that would
require such access needs to happen at hyp, hence the nVHE code that runs
only at hyp.

For more details about pKVM, please refer to Will's talk at KVM Forum 2020:
https://mirrors.edge.kernel.org/pub/linux/kernel/people/will/slides/kvmforum-2020-edited.pdf
https://www.youtube.com/watch?v=edqJSzsDRxk

[5] https://android-kvm.googlesource.com/linux/+/refs/heads/tabba/el2_fixed_feature_v3

Fuad Tabba (15):
  KVM: arm64: placeholder to check if VM is protected
  KVM: arm64: Remove trailing whitespace in comment
  KVM: arm64: MDCR_EL2 is a 64-bit register
  KVM: arm64: Fix names of config register fields
  KVM: arm64: Refactor sys_regs.h,c for nVHE reuse
  KVM: arm64: Restore mdcr_el2 from vcpu
  KVM: arm64: Track value of cptr_el2 in struct kvm_vcpu_arch
  KVM: arm64: Add feature register flag definitions
  KVM: arm64: Add config register bit definitions
  KVM: arm64: Guest exit handlers for nVHE hyp
  KVM: arm64: Add trap handlers for protected VMs
  KVM: arm64: Move sanitized copies of CPU features
  KVM: arm64: Trap access to pVM restricted features
  KVM: arm64: Handle protected guests at 32 bits
  KVM: arm64: Restrict protected VM capabilities

 arch/arm64/include/asm/cpufeature.h       |   4 +-
 arch/arm64/include/asm/kvm_arm.h          |  54 ++-
 arch/arm64/include/asm/kvm_asm.h          |   2 +-
 arch/arm64/include/asm/kvm_fixed_config.h | 188 +++++++++
 arch/arm64/include/asm/kvm_host.h         |  15 +-
 arch/arm64/include/asm/kvm_hyp.h          |   5 +-
 arch/arm64/include/asm/sysreg.h           |  15 +-
 arch/arm64/kernel/cpufeature.c            |   8 +-
 arch/arm64/kvm/Makefile                   |   2 +-
 arch/arm64/kvm/arm.c                      |  75 +++-
 arch/arm64/kvm/debug.c                    |   2 +-
 arch/arm64/kvm/hyp/include/hyp/switch.h   |  76 +++-
 arch/arm64/kvm/hyp/nvhe/Makefile          |   2 +-
 arch/arm64/kvm/hyp/nvhe/debug-sr.c        |   2 +-
 arch/arm64/kvm/hyp/nvhe/mem_protect.c     |   6 -
 arch/arm64/kvm/hyp/nvhe/switch.c          |  72 +++-
 arch/arm64/kvm/hyp/nvhe/sys_regs.c        | 445 ++++++++++++++++++++++
 arch/arm64/kvm/hyp/vhe/debug-sr.c         |   2 +-
 arch/arm64/kvm/hyp/vhe/switch.c           |  12 +-
 arch/arm64/kvm/hyp/vhe/sysreg-sr.c        |   2 +-
 arch/arm64/kvm/pkvm.c                     | 213 +++++++++++
 arch/arm64/kvm/sys_regs.c                 |  34 +-
 arch/arm64/kvm/sys_regs.h                 |  31 ++
 23 files changed, 1172 insertions(+), 95 deletions(-)
 create mode 100644 arch/arm64/include/asm/kvm_fixed_config.h
 create mode 100644 arch/arm64/kvm/hyp/nvhe/sys_regs.c
 create mode 100644 arch/arm64/kvm/pkvm.c


base-commit: 2734d6c1b1a089fb593ef6a23d4b70903526fe0c
-- 
2.32.0.402.g57bb445576-goog


* [PATCH v3 01/15] KVM: arm64: placeholder to check if VM is protected
From: Fuad Tabba @ 2021-07-19 16:03 UTC
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team, tabba

Add a function to check whether a VM is protected (under pKVM).
Since the creation of protected VMs isn't enabled yet, this is a
placeholder that always returns false. The intention is for this
to become a check for protected VMs in the future (see Will's RFC
[*]).

No functional change intended.
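
As an illustration of how such a predicate would eventually be consumed
(the call site below is hypothetical, not part of this patch):

  /* Hypothetical call site, for illustration only. */
  if (kvm_vm_is_protected(vcpu->kvm))
  	return -EPERM;	/* e.g., refuse host access to protected guest state */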

Signed-off-by: Fuad Tabba <tabba@google.com>

[*] https://lore.kernel.org/kvmarm/20210603183347.1695-1-will@kernel.org/
---
 arch/arm64/include/asm/kvm_host.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 41911585ae0c..347781f99b6a 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -771,6 +771,11 @@ void kvm_arch_free_vm(struct kvm *kvm);
 
 int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long type);
 
+static inline bool kvm_vm_is_protected(struct kvm *kvm)
+{
+	return false;
+}
+
 int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature);
 bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
 
-- 
2.32.0.402.g57bb445576-goog


* [PATCH v3 02/15] KVM: arm64: Remove trailing whitespace in comment
From: Fuad Tabba @ 2021-07-19 16:03 UTC
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team, tabba

Remove trailing whitespace from comment in trap_dbgauthstatus_el1().

No functional change intended.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kvm/sys_regs.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index f6f126eb6ac1..80a6e41cadad 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -318,14 +318,14 @@ static bool trap_dbgauthstatus_el1(struct kvm_vcpu *vcpu,
 /*
  * We want to avoid world-switching all the DBG registers all the
  * time:
- * 
+ *
  * - If we've touched any debug register, it is likely that we're
  *   going to touch more of them. It then makes sense to disable the
  *   traps and start doing the save/restore dance
  * - If debug is active (DBG_MDSCR_KDE or DBG_MDSCR_MDE set), it is
  *   then mandatory to save/restore the registers, as the guest
  *   depends on them.
- * 
+ *
  * For this, we use a DIRTY bit, indicating the guest has modified the
  * debug registers, used as follow:
  *
-- 
2.32.0.402.g57bb445576-goog


* [PATCH v3 03/15] KVM: arm64: MDCR_EL2 is a 64-bit register
From: Fuad Tabba @ 2021-07-19 16:03 UTC
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team, tabba

Fix the places in KVM that treat MDCR_EL2 as a 32-bit register.
More recent features (e.g., FEAT_SPEv1p2) use bits above 31.
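
For illustration (a sketch, not code from this patch) of why the UL()
wrapping below matters: with int-typed constants, any flag at bit 31 or
above is broken, and a u32 container silently drops the upper bits.

  /* Illustration only; HYPOTHETICAL_FLAG is a made-up name. */
  #define BAD_FLAG		(1 << 31)	/* undefined: signed int overflow */
  #define HYPOTHETICAL_FLAG	(UL(1) << 43)	/* fine: 64-bit unsigned constant */

  u64 mdcr = read_sysreg(mdcr_el2);
  mdcr |= HYPOTHETICAL_FLAG;	/* kept in a u64; a u32 would truncate it */
  write_sysreg(mdcr, mdcr_el2);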

No functional change intended.

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/kvm_arm.h   | 20 ++++++++++----------
 arch/arm64/include/asm/kvm_asm.h   |  2 +-
 arch/arm64/include/asm/kvm_host.h  |  2 +-
 arch/arm64/kvm/debug.c             |  2 +-
 arch/arm64/kvm/hyp/nvhe/debug-sr.c |  2 +-
 arch/arm64/kvm/hyp/vhe/debug-sr.c  |  2 +-
 6 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index d436831dd706..6a523ec83415 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -281,18 +281,18 @@
 /* Hyp Debug Configuration Register bits */
 #define MDCR_EL2_E2TB_MASK	(UL(0x3))
 #define MDCR_EL2_E2TB_SHIFT	(UL(24))
-#define MDCR_EL2_TTRF		(1 << 19)
-#define MDCR_EL2_TPMS		(1 << 14)
+#define MDCR_EL2_TTRF		(UL(1) << 19)
+#define MDCR_EL2_TPMS		(UL(1) << 14)
 #define MDCR_EL2_E2PB_MASK	(UL(0x3))
 #define MDCR_EL2_E2PB_SHIFT	(UL(12))
-#define MDCR_EL2_TDRA		(1 << 11)
-#define MDCR_EL2_TDOSA		(1 << 10)
-#define MDCR_EL2_TDA		(1 << 9)
-#define MDCR_EL2_TDE		(1 << 8)
-#define MDCR_EL2_HPME		(1 << 7)
-#define MDCR_EL2_TPM		(1 << 6)
-#define MDCR_EL2_TPMCR		(1 << 5)
-#define MDCR_EL2_HPMN_MASK	(0x1F)
+#define MDCR_EL2_TDRA		(UL(1) << 11)
+#define MDCR_EL2_TDOSA		(UL(1) << 10)
+#define MDCR_EL2_TDA		(UL(1) << 9)
+#define MDCR_EL2_TDE		(UL(1) << 8)
+#define MDCR_EL2_HPME		(UL(1) << 7)
+#define MDCR_EL2_TPM		(UL(1) << 6)
+#define MDCR_EL2_TPMCR		(UL(1) << 5)
+#define MDCR_EL2_HPMN_MASK	(UL(0x1F))
 
 /* For compatibility with fault code shared with 32-bit */
 #define FSC_FAULT	ESR_ELx_FSC_FAULT
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 9f0bf2109be7..63ead9060ab5 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -210,7 +210,7 @@ extern u64 __vgic_v3_read_vmcr(void);
 extern void __vgic_v3_write_vmcr(u32 vmcr);
 extern void __vgic_v3_init_lrs(void);
 
-extern u32 __kvm_get_mdcr_el2(void);
+extern u64 __kvm_get_mdcr_el2(void);
 
 #define __KVM_EXTABLE(from, to)						\
 	"	.pushsection	__kvm_ex_table, \"a\"\n"		\
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 347781f99b6a..4d2d974c1522 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -289,7 +289,7 @@ struct kvm_vcpu_arch {
 
 	/* HYP configuration */
 	u64 hcr_el2;
-	u32 mdcr_el2;
+	u64 mdcr_el2;
 
 	/* Exception Information */
 	struct kvm_vcpu_fault_info fault;
diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
index d5e79d7ee6e9..db9361338b2a 100644
--- a/arch/arm64/kvm/debug.c
+++ b/arch/arm64/kvm/debug.c
@@ -21,7 +21,7 @@
 				DBG_MDSCR_KDE | \
 				DBG_MDSCR_MDE)
 
-static DEFINE_PER_CPU(u32, mdcr_el2);
+static DEFINE_PER_CPU(u64, mdcr_el2);
 
 /**
  * save/restore_guest_debug_regs
diff --git a/arch/arm64/kvm/hyp/nvhe/debug-sr.c b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
index 7d3f25868cae..df361d839902 100644
--- a/arch/arm64/kvm/hyp/nvhe/debug-sr.c
+++ b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
@@ -109,7 +109,7 @@ void __debug_switch_to_host(struct kvm_vcpu *vcpu)
 	__debug_switch_to_host_common(vcpu);
 }
 
-u32 __kvm_get_mdcr_el2(void)
+u64 __kvm_get_mdcr_el2(void)
 {
 	return read_sysreg(mdcr_el2);
 }
diff --git a/arch/arm64/kvm/hyp/vhe/debug-sr.c b/arch/arm64/kvm/hyp/vhe/debug-sr.c
index f1e2e5a00933..289689b2682d 100644
--- a/arch/arm64/kvm/hyp/vhe/debug-sr.c
+++ b/arch/arm64/kvm/hyp/vhe/debug-sr.c
@@ -20,7 +20,7 @@ void __debug_switch_to_host(struct kvm_vcpu *vcpu)
 	__debug_switch_to_host_common(vcpu);
 }
 
-u32 __kvm_get_mdcr_el2(void)
+u64 __kvm_get_mdcr_el2(void)
 {
 	return read_sysreg(mdcr_el2);
 }
-- 
2.32.0.402.g57bb445576-goog


* [PATCH v3 04/15] KVM: arm64: Fix names of config register fields
From: Fuad Tabba @ 2021-07-19 16:03 UTC
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team, tabba

Change the names of hcr_el2 register fields to match the Arm
Architecture Reference Manual, which makes cross-referencing and
grepping easier.

Also, change the name of CPTR_EL2_RES1 to CPTR_NVHE_EL2_RES1,
because the RES1 bits of CPTR_EL2 differ between nVHE and VHE.

No functional change intended.

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/kvm_arm.h | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index 6a523ec83415..a928b2dc0b0f 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -32,9 +32,9 @@
 #define HCR_TVM		(UL(1) << 26)
 #define HCR_TTLB	(UL(1) << 25)
 #define HCR_TPU		(UL(1) << 24)
-#define HCR_TPC		(UL(1) << 23)
+#define HCR_TPC		(UL(1) << 23) /* HCR_TPCP if FEAT_DPB */
 #define HCR_TSW		(UL(1) << 22)
-#define HCR_TAC		(UL(1) << 21)
+#define HCR_TACR	(UL(1) << 21)
 #define HCR_TIDCP	(UL(1) << 20)
 #define HCR_TSC		(UL(1) << 19)
 #define HCR_TID3	(UL(1) << 18)
@@ -61,7 +61,7 @@
  * The bits we set in HCR:
  * TLOR:	Trap LORegion register accesses
  * RW:		64bit by default, can be overridden for 32bit VMs
- * TAC:		Trap ACTLR
+ * TACR:	Trap ACTLR
  * TSC:		Trap SMC
  * TSW:		Trap cache operations by set/way
  * TWE:		Trap WFE
@@ -76,7 +76,7 @@
  * PTW:		Take a stage2 fault if a stage1 walk steps in device memory
  */
 #define HCR_GUEST_FLAGS (HCR_TSC | HCR_TSW | HCR_TWE | HCR_TWI | HCR_VM | \
-			 HCR_BSU_IS | HCR_FB | HCR_TAC | \
+			 HCR_BSU_IS | HCR_FB | HCR_TACR | \
 			 HCR_AMO | HCR_SWIO | HCR_TIDCP | HCR_RW | HCR_TLOR | \
 			 HCR_FMO | HCR_IMO | HCR_PTW )
 #define HCR_VIRT_EXCP_MASK (HCR_VSE | HCR_VI | HCR_VF)
@@ -275,8 +275,8 @@
 #define CPTR_EL2_TTA	(1 << 20)
 #define CPTR_EL2_TFP	(1 << CPTR_EL2_TFP_SHIFT)
 #define CPTR_EL2_TZ	(1 << 8)
-#define CPTR_EL2_RES1	0x000032ff /* known RES1 bits in CPTR_EL2 */
-#define CPTR_EL2_DEFAULT	CPTR_EL2_RES1
+#define CPTR_NVHE_EL2_RES1	0x000032ff /* known RES1 bits in CPTR_EL2 (nVHE) */
+#define CPTR_EL2_DEFAULT	CPTR_NVHE_EL2_RES1
 
 /* Hyp Debug Configuration Register bits */
 #define MDCR_EL2_E2TB_MASK	(UL(0x3))
-- 
2.32.0.402.g57bb445576-goog


* [PATCH v3 05/15] KVM: arm64: Refactor sys_regs.h,c for nVHE reuse
From: Fuad Tabba @ 2021-07-19 16:03 UTC
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team, tabba

Refactor sys_regs.h and sys_regs.c to make it easier to reuse
common code, which will be used by nVHE in a later patch.

Note that the refactored code uses __inline_bsearch for find_reg
instead of bsearch to avoid copying the bsearch code for nVHE.

No functional change intended.
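
For illustration (a hypothetical caller, not code from this patch), the
refactored helpers compose as follows:

  /* Hypothetical EL2 trap-handler fragment, for illustration only;
   * esr is the syndrome value and regval setup is elided. */
  struct sys_reg_params params = esr_sys64_to_params(esr);
  const struct sys_reg_desc *r;

  /* some_table is a made-up name for a sorted sys_reg_desc array. */
  r = find_reg(&params, some_table, ARRAY_SIZE(some_table));
  if (r && r->access)
  	r->access(vcpu, &params, r);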

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/sysreg.h |  3 +++
 arch/arm64/kvm/sys_regs.c       | 30 +-----------------------------
 arch/arm64/kvm/sys_regs.h       | 31 +++++++++++++++++++++++++++++++
 3 files changed, 35 insertions(+), 29 deletions(-)

diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 7b9c3acba684..326f49e7bd42 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -1153,6 +1153,9 @@
 #define ICH_VTR_A3V_SHIFT	21
 #define ICH_VTR_A3V_MASK	(1 << ICH_VTR_A3V_SHIFT)
 
+/* Extract the feature specified from the feature id register. */
+#define FEATURE(x)	(GENMASK_ULL(x##_SHIFT + 3, x##_SHIFT))
+
 #ifdef __ASSEMBLY__
 
 	.irp	num,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 80a6e41cadad..1a939c464858 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -44,10 +44,6 @@
  * 64bit interface.
  */
 
-#define reg_to_encoding(x)						\
-	sys_reg((u32)(x)->Op0, (u32)(x)->Op1,				\
-		(u32)(x)->CRn, (u32)(x)->CRm, (u32)(x)->Op2)
-
 static bool read_from_write_only(struct kvm_vcpu *vcpu,
 				 struct sys_reg_params *params,
 				 const struct sys_reg_desc *r)
@@ -1026,8 +1022,6 @@ static bool access_arch_timer(struct kvm_vcpu *vcpu,
 	return true;
 }
 
-#define FEATURE(x)	(GENMASK_ULL(x##_SHIFT + 3, x##_SHIFT))
-
 /* Read a sanitised cpufeature ID register by sys_reg_desc */
 static u64 read_id_reg(const struct kvm_vcpu *vcpu,
 		struct sys_reg_desc const *r, bool raz)
@@ -2106,23 +2100,6 @@ static int check_sysreg_table(const struct sys_reg_desc *table, unsigned int n,
 	return 0;
 }
 
-static int match_sys_reg(const void *key, const void *elt)
-{
-	const unsigned long pval = (unsigned long)key;
-	const struct sys_reg_desc *r = elt;
-
-	return pval - reg_to_encoding(r);
-}
-
-static const struct sys_reg_desc *find_reg(const struct sys_reg_params *params,
-					 const struct sys_reg_desc table[],
-					 unsigned int num)
-{
-	unsigned long pval = reg_to_encoding(params);
-
-	return bsearch((void *)pval, table, num, sizeof(table[0]), match_sys_reg);
-}
-
 int kvm_handle_cp14_load_store(struct kvm_vcpu *vcpu)
 {
 	kvm_inject_undefined(vcpu);
@@ -2365,13 +2342,8 @@ int kvm_handle_sys_reg(struct kvm_vcpu *vcpu)
 
 	trace_kvm_handle_sys_reg(esr);
 
-	params.Op0 = (esr >> 20) & 3;
-	params.Op1 = (esr >> 14) & 0x7;
-	params.CRn = (esr >> 10) & 0xf;
-	params.CRm = (esr >> 1) & 0xf;
-	params.Op2 = (esr >> 17) & 0x7;
+	params = esr_sys64_to_params(esr);
 	params.regval = vcpu_get_reg(vcpu, Rt);
-	params.is_write = !(esr & 1);
 
 	ret = emulate_sys_reg(vcpu, &params);
 
diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h
index 9d0621417c2a..cc0cc95a0280 100644
--- a/arch/arm64/kvm/sys_regs.h
+++ b/arch/arm64/kvm/sys_regs.h
@@ -11,6 +11,12 @@
 #ifndef __ARM64_KVM_SYS_REGS_LOCAL_H__
 #define __ARM64_KVM_SYS_REGS_LOCAL_H__
 
+#include <linux/bsearch.h>
+
+#define reg_to_encoding(x)						\
+	sys_reg((u32)(x)->Op0, (u32)(x)->Op1,				\
+		(u32)(x)->CRn, (u32)(x)->CRm, (u32)(x)->Op2)
+
 struct sys_reg_params {
 	u8	Op0;
 	u8	Op1;
@@ -21,6 +27,14 @@ struct sys_reg_params {
 	bool	is_write;
 };
 
+#define esr_sys64_to_params(esr)                                               \
+	((struct sys_reg_params){ .Op0 = ((esr) >> 20) & 3,                    \
+				  .Op1 = ((esr) >> 14) & 0x7,                  \
+				  .CRn = ((esr) >> 10) & 0xf,                  \
+				  .CRm = ((esr) >> 1) & 0xf,                   \
+				  .Op2 = ((esr) >> 17) & 0x7,                  \
+				  .is_write = !((esr) & 1) })
+
 struct sys_reg_desc {
 	/* Sysreg string for debug */
 	const char *name;
@@ -152,6 +166,23 @@ static inline int cmp_sys_reg(const struct sys_reg_desc *i1,
 	return i1->Op2 - i2->Op2;
 }
 
+static inline int match_sys_reg(const void *key, const void *elt)
+{
+	const unsigned long pval = (unsigned long)key;
+	const struct sys_reg_desc *r = elt;
+
+	return pval - reg_to_encoding(r);
+}
+
+static inline const struct sys_reg_desc *
+find_reg(const struct sys_reg_params *params, const struct sys_reg_desc table[],
+	 unsigned int num)
+{
+	unsigned long pval = reg_to_encoding(params);
+
+	return __inline_bsearch((void *)pval, table, num, sizeof(table[0]), match_sys_reg);
+}
+
 const struct sys_reg_desc *find_reg_by_id(u64 id,
 					  struct sys_reg_params *params,
 					  const struct sys_reg_desc table[],
-- 
2.32.0.402.g57bb445576-goog


^ permalink raw reply related	[flat|nested] 126+ messages in thread

* [PATCH v3 06/15] KVM: arm64: Restore mdcr_el2 from vcpu
@ 2021-07-19 16:03   ` Fuad Tabba
  0 siblings, 0 replies; 126+ messages in thread
From: Fuad Tabba @ 2021-07-19 16:03 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team, tabba

On deactivating traps, restore the value of mdcr_el2 from the
newly created and preserved host value in the vcpu context,
rather than directly reading the hardware register.

Up to and including this patch the two values are the same, i.e.,
the hardware register and the vcpu copy. A future patch will
change the value of mdcr_el2 on activating traps, and this
ensures that the host value will be restored on exit.

No functional change intended.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/kvm_host.h       |  5 ++++-
 arch/arm64/include/asm/kvm_hyp.h        |  2 +-
 arch/arm64/kvm/hyp/include/hyp/switch.h |  6 +++++-
 arch/arm64/kvm/hyp/nvhe/switch.c        | 11 ++---------
 arch/arm64/kvm/hyp/vhe/switch.c         | 12 ++----------
 arch/arm64/kvm/hyp/vhe/sysreg-sr.c      |  2 +-
 6 files changed, 15 insertions(+), 23 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 4d2d974c1522..76462c6a91ee 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -287,10 +287,13 @@ struct kvm_vcpu_arch {
 	/* Stage 2 paging state used by the hardware on next switch */
 	struct kvm_s2_mmu *hw_mmu;
 
-	/* HYP configuration */
+	/* Values of trap registers for the guest. */
 	u64 hcr_el2;
 	u64 mdcr_el2;
 
+	/* Values of trap registers for the host before guest entry. */
+	u64 mdcr_el2_host;
+
 	/* Exception Information */
 	struct kvm_vcpu_fault_info fault;
 
diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 9d60b3006efc..657d0c94cf82 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -95,7 +95,7 @@ void __sve_restore_state(void *sve_pffr, u32 *fpsr);
 
 #ifndef __KVM_NVHE_HYPERVISOR__
 void activate_traps_vhe_load(struct kvm_vcpu *vcpu);
-void deactivate_traps_vhe_put(void);
+void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu);
 #endif
 
 u64 __guest_enter(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index e4a2f295a394..a0e78a6027be 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -92,11 +92,15 @@ static inline void __activate_traps_common(struct kvm_vcpu *vcpu)
 		write_sysreg(0, pmselr_el0);
 		write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0);
 	}
+
+	vcpu->arch.mdcr_el2_host = read_sysreg(mdcr_el2);
 	write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
 }
 
-static inline void __deactivate_traps_common(void)
+static inline void __deactivate_traps_common(struct kvm_vcpu *vcpu)
 {
+	write_sysreg(vcpu->arch.mdcr_el2_host, mdcr_el2);
+
 	write_sysreg(0, hstr_el2);
 	if (kvm_arm_support_pmu_v3())
 		write_sysreg(0, pmuserenr_el0);
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index f7af9688c1f7..1778593a08a9 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -69,12 +69,10 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
 static void __deactivate_traps(struct kvm_vcpu *vcpu)
 {
 	extern char __kvm_hyp_host_vector[];
-	u64 mdcr_el2, cptr;
+	u64 cptr;
 
 	___deactivate_traps(vcpu);
 
-	mdcr_el2 = read_sysreg(mdcr_el2);
-
 	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
 		u64 val;
 
@@ -92,13 +90,8 @@ static void __deactivate_traps(struct kvm_vcpu *vcpu)
 		isb();
 	}
 
-	__deactivate_traps_common();
-
-	mdcr_el2 &= MDCR_EL2_HPMN_MASK;
-	mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT;
-	mdcr_el2 |= MDCR_EL2_E2TB_MASK << MDCR_EL2_E2TB_SHIFT;
+	__deactivate_traps_common(vcpu);
 
-	write_sysreg(mdcr_el2, mdcr_el2);
 	write_sysreg(this_cpu_ptr(&kvm_init_params)->hcr_el2, hcr_el2);
 
 	cptr = CPTR_EL2_DEFAULT;
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index b3229924d243..0d0c9550fb08 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -91,17 +91,9 @@ void activate_traps_vhe_load(struct kvm_vcpu *vcpu)
 	__activate_traps_common(vcpu);
 }
 
-void deactivate_traps_vhe_put(void)
+void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu)
 {
-	u64 mdcr_el2 = read_sysreg(mdcr_el2);
-
-	mdcr_el2 &= MDCR_EL2_HPMN_MASK |
-		    MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT |
-		    MDCR_EL2_TPMS;
-
-	write_sysreg(mdcr_el2, mdcr_el2);
-
-	__deactivate_traps_common();
+	__deactivate_traps_common(vcpu);
 }
 
 /* Switch to the guest for VHE systems running in EL2 */
diff --git a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
index 2a0b8c88d74f..007a12dd4351 100644
--- a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
+++ b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
@@ -101,7 +101,7 @@ void kvm_vcpu_put_sysregs_vhe(struct kvm_vcpu *vcpu)
 	struct kvm_cpu_context *host_ctxt;
 
 	host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
-	deactivate_traps_vhe_put();
+	deactivate_traps_vhe_put(vcpu);
 
 	__sysreg_save_el1_state(guest_ctxt);
 	__sysreg_save_user_state(guest_ctxt);
-- 
2.32.0.402.g57bb445576-goog


^ permalink raw reply related	[flat|nested] 126+ messages in thread
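
The shape of the change is the classic save-on-entry/
restore-on-exit pattern, with the saved host value parked in the
vcpu instead of being re-derived from hardware at exit time. A
minimal runnable sketch, with a fake register and stand-in
accessors in place of read_sysreg()/write_sysreg():

#include <stdio.h>
#include <stdint.h>

static uint64_t hw_mdcr_el2 = 0xabc;	/* stand-in for the EL2 register */

static uint64_t read_mdcr_el2(void) { return hw_mdcr_el2; }
static void write_mdcr_el2(uint64_t v) { hw_mdcr_el2 = v; }

struct vcpu {
	uint64_t mdcr_el2;	/* guest trap configuration */
	uint64_t mdcr_el2_host;	/* host value saved at guest entry */
};

static void activate_traps(struct vcpu *vcpu)
{
	/* Snapshot the host's value *before* installing the guest's. */
	vcpu->mdcr_el2_host = read_mdcr_el2();
	write_mdcr_el2(vcpu->mdcr_el2);
}

static void deactivate_traps(struct vcpu *vcpu)
{
	/*
	 * Restore the snapshot. Reconstructing the host value from
	 * the hardware register stops working once activate_traps()
	 * rewrites it per-guest: we would read back the guest's trap
	 * configuration, not the host's.
	 */
	write_mdcr_el2(vcpu->mdcr_el2_host);
}

int main(void)
{
	struct vcpu v = { .mdcr_el2 = 0x123 };

	activate_traps(&v);	/* hardware now holds the guest value */
	deactivate_traps(&v);	/* and is back to 0xabc afterwards */
	printf("mdcr_el2 = %#llx\n", (unsigned long long)read_mdcr_el2());
	return 0;
}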

* [PATCH v3 07/15] KVM: arm64: Track value of cptr_el2 in struct kvm_vcpu_arch
@ 2021-07-19 16:03   ` Fuad Tabba
  0 siblings, 0 replies; 126+ messages in thread
From: Fuad Tabba @ 2021-07-19 16:03 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team, tabba

Track the baseline guest value for cptr_el2 in struct
kvm_vcpu_arch, similar to the other registers that control traps.
Use this value when setting cptr_el2 for the guest.

Currently this value is unchanged (CPTR_EL2_DEFAULT), but future
patches will set trapping bits based on features supported for
the guest.

No functional change intended.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/kvm_host.h | 1 +
 arch/arm64/kvm/arm.c              | 1 +
 arch/arm64/kvm/hyp/nvhe/switch.c  | 2 +-
 3 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 76462c6a91ee..ac67d5699c68 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -290,6 +290,7 @@ struct kvm_vcpu_arch {
 	/* Values of trap registers for the guest. */
 	u64 hcr_el2;
 	u64 mdcr_el2;
+	u64 cptr_el2;
 
 	/* Values of trap registers for the host before guest entry. */
 	u64 mdcr_el2_host;
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index e9a2b8f27792..14b12f2c08c0 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1104,6 +1104,7 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
 	}
 
 	vcpu_reset_hcr(vcpu);
+	vcpu->arch.cptr_el2 = CPTR_EL2_DEFAULT;
 
 	/*
 	 * Handle the "start in power-off" case.
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 1778593a08a9..86f3d6482935 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -41,7 +41,7 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
 	___activate_traps(vcpu);
 	__activate_traps_common(vcpu);
 
-	val = CPTR_EL2_DEFAULT;
+	val = vcpu->arch.cptr_el2;
 	val |= CPTR_EL2_TTA | CPTR_EL2_TAM;
 	if (!update_fp_enabled(vcpu)) {
 		val |= CPTR_EL2_TFP | CPTR_EL2_TZ;
-- 
2.32.0.402.g57bb445576-goog


^ permalink raw reply related	[flat|nested] 126+ messages in thread
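
As a rough sketch of where this is heading (the helper name and
the SVE check below are hypothetical; this patch only adds the
field and keeps it at CPTR_EL2_DEFAULT):

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define CPTR_EL2_DEFAULT	0x000032ffULL	/* CPTR_NVHE_EL2_RES1 */
#define CPTR_EL2_TZ		(1ULL << 8)	/* trap SVE accesses */

struct vcpu {
	uint64_t cptr_el2;
	bool has_sve;
};

/* Hypothetical helper; later patches in the series do this at hyp. */
static void vcpu_init_traps(struct vcpu *vcpu)
{
	/* Baseline, set once at vcpu init as this patch does. */
	vcpu->cptr_el2 = CPTR_EL2_DEFAULT;

	/* ...which later patches can tighten per missing feature: */
	if (!vcpu->has_sve)
		vcpu->cptr_el2 |= CPTR_EL2_TZ;
}

int main(void)
{
	struct vcpu v = { .has_sve = false };

	vcpu_init_traps(&v);
	printf("cptr_el2 = %#llx\n", (unsigned long long)v.cptr_el2);
	return 0;
}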

* [PATCH v3 08/15] KVM: arm64: Add feature register flag definitions
@ 2021-07-19 16:03   ` Fuad Tabba
  0 siblings, 0 replies; 126+ messages in thread
From: Fuad Tabba @ 2021-07-19 16:03 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team, tabba

Add feature register flag definitions to clarify which features
might be supported.

Consolidate the various ID_AA64PFR0_ELx flags for all ELs.

No functional change intended.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/cpufeature.h |  4 ++--
 arch/arm64/include/asm/sysreg.h     | 12 ++++++++----
 arch/arm64/kernel/cpufeature.c      |  8 ++++----
 3 files changed, 14 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 9bb9d11750d7..b7d9bb17908d 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -602,14 +602,14 @@ static inline bool id_aa64pfr0_32bit_el1(u64 pfr0)
 {
 	u32 val = cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_EL1_SHIFT);
 
-	return val == ID_AA64PFR0_EL1_32BIT_64BIT;
+	return val == ID_AA64PFR0_ELx_32BIT_64BIT;
 }
 
 static inline bool id_aa64pfr0_32bit_el0(u64 pfr0)
 {
 	u32 val = cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_EL0_SHIFT);
 
-	return val == ID_AA64PFR0_EL0_32BIT_64BIT;
+	return val == ID_AA64PFR0_ELx_32BIT_64BIT;
 }
 
 static inline bool id_aa64pfr0_sve(u64 pfr0)
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 326f49e7bd42..0b773037251c 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -784,14 +784,13 @@
 #define ID_AA64PFR0_AMU			0x1
 #define ID_AA64PFR0_SVE			0x1
 #define ID_AA64PFR0_RAS_V1		0x1
+#define ID_AA64PFR0_RAS_ANY		0xf
 #define ID_AA64PFR0_FP_NI		0xf
 #define ID_AA64PFR0_FP_SUPPORTED	0x0
 #define ID_AA64PFR0_ASIMD_NI		0xf
 #define ID_AA64PFR0_ASIMD_SUPPORTED	0x0
-#define ID_AA64PFR0_EL1_64BIT_ONLY	0x1
-#define ID_AA64PFR0_EL1_32BIT_64BIT	0x2
-#define ID_AA64PFR0_EL0_64BIT_ONLY	0x1
-#define ID_AA64PFR0_EL0_32BIT_64BIT	0x2
+#define ID_AA64PFR0_ELx_64BIT_ONLY	0x1
+#define ID_AA64PFR0_ELx_32BIT_64BIT	0x2
 
 /* id_aa64pfr1 */
 #define ID_AA64PFR1_MPAMFRAC_SHIFT	16
@@ -847,12 +846,16 @@
 #define ID_AA64MMFR0_ASID_SHIFT		4
 #define ID_AA64MMFR0_PARANGE_SHIFT	0
 
+#define ID_AA64MMFR0_ASID_8		0x0
+#define ID_AA64MMFR0_ASID_16		0x2
+
 #define ID_AA64MMFR0_TGRAN4_NI		0xf
 #define ID_AA64MMFR0_TGRAN4_SUPPORTED	0x0
 #define ID_AA64MMFR0_TGRAN64_NI		0xf
 #define ID_AA64MMFR0_TGRAN64_SUPPORTED	0x0
 #define ID_AA64MMFR0_TGRAN16_NI		0x0
 #define ID_AA64MMFR0_TGRAN16_SUPPORTED	0x1
+#define ID_AA64MMFR0_PARANGE_40		0x2
 #define ID_AA64MMFR0_PARANGE_48		0x5
 #define ID_AA64MMFR0_PARANGE_52		0x6
 
@@ -900,6 +903,7 @@
 #define ID_AA64MMFR2_CNP_SHIFT		0
 
 /* id_aa64dfr0 */
+#define ID_AA64DFR0_MTPMU_SHIFT		48
 #define ID_AA64DFR0_TRBE_SHIFT		44
 #define ID_AA64DFR0_TRACE_FILT_SHIFT	40
 #define ID_AA64DFR0_DOUBLELOCK_SHIFT	36
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 0ead8bfedf20..5b59fe5e26e4 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -239,8 +239,8 @@ static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
 	S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_FP_SHIFT, 4, ID_AA64PFR0_FP_NI),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL3_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL2_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_SHIFT, 4, ID_AA64PFR0_EL1_64BIT_ONLY),
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL0_SHIFT, 4, ID_AA64PFR0_EL0_64BIT_ONLY),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_SHIFT, 4, ID_AA64PFR0_ELx_64BIT_ONLY),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL0_SHIFT, 4, ID_AA64PFR0_ELx_64BIT_ONLY),
 	ARM64_FTR_END,
 };
 
@@ -1956,7 +1956,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.sys_reg = SYS_ID_AA64PFR0_EL1,
 		.sign = FTR_UNSIGNED,
 		.field_pos = ID_AA64PFR0_EL0_SHIFT,
-		.min_field_value = ID_AA64PFR0_EL0_32BIT_64BIT,
+		.min_field_value = ID_AA64PFR0_ELx_32BIT_64BIT,
 	},
 #ifdef CONFIG_KVM
 	{
@@ -1967,7 +1967,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.sys_reg = SYS_ID_AA64PFR0_EL1,
 		.sign = FTR_UNSIGNED,
 		.field_pos = ID_AA64PFR0_EL1_SHIFT,
-		.min_field_value = ID_AA64PFR0_EL1_32BIT_64BIT,
+		.min_field_value = ID_AA64PFR0_ELx_32BIT_64BIT,
 	},
 	{
 		.desc = "Protected KVM",
-- 
2.32.0.402.g57bb445576-goog


^ permalink raw reply related	[flat|nested] 126+ messages in thread
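
The FEATURE() macro that moves into sysreg.h builds a 4-bit mask
at a feature field's shift (ID register fields are 4 bits wide,
hence SHIFT + 3). A self-contained rendering of how it pairs with
field extraction; GENMASK_ULL() is re-derived locally rather than
taken from linux/bits.h, and the sample pfr0 value is made up:

#include <stdio.h>
#include <stdint.h>

/* Same shape as the kernel's GENMASK_ULL(h, l). */
#define GENMASK_ULL(h, l) \
	(((~0ULL) << (l)) & (~0ULL >> (63 - (h))))

#define FEATURE(x)	(GENMASK_ULL(x##_SHIFT + 3, x##_SHIFT))

#define ID_AA64PFR0_EL1_SHIFT	4

int main(void)
{
	uint64_t pfr0 = 0x22;	/* EL1 field = 0x2, EL0 field = 0x2 */
	uint64_t el1 = (pfr0 & FEATURE(ID_AA64PFR0_EL1)) >>
		       ID_AA64PFR0_EL1_SHIFT;

	/* 0x2 is ID_AA64PFR0_ELx_32BIT_64BIT after this patch. */
	printf("EL1 field: %#llx\n", (unsigned long long)el1);
	return 0;
}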

* [PATCH v3 09/15] KVM: arm64: Add config register bit definitions
@ 2021-07-19 16:03   ` Fuad Tabba
  0 siblings, 0 replies; 126+ messages in thread
From: Fuad Tabba @ 2021-07-19 16:03 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team, tabba

Add hardware configuration register bit definitions for HCR_EL2
and MDCR_EL2. Future patches toggle these hyp configuration
register bits to trap on certain accesses.

No functional change intended.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/kvm_arm.h | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index a928b2dc0b0f..327120c0089f 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -12,8 +12,13 @@
 #include <asm/types.h>
 
 /* Hyp Configuration Register (HCR) bits */
+
+#define HCR_TID5	(UL(1) << 58)
+#define HCR_DCT		(UL(1) << 57)
 #define HCR_ATA_SHIFT	56
 #define HCR_ATA		(UL(1) << HCR_ATA_SHIFT)
+#define HCR_AMVOFFEN	(UL(1) << 51)
+#define HCR_FIEN	(UL(1) << 47)
 #define HCR_FWB		(UL(1) << 46)
 #define HCR_API		(UL(1) << 41)
 #define HCR_APK		(UL(1) << 40)
@@ -56,6 +61,7 @@
 #define HCR_PTW		(UL(1) << 2)
 #define HCR_SWIO	(UL(1) << 1)
 #define HCR_VM		(UL(1) << 0)
+#define HCR_RES0	((UL(1) << 48) | (UL(1) << 39))
 
 /*
  * The bits we set in HCR:
@@ -277,11 +283,21 @@
 #define CPTR_EL2_TZ	(1 << 8)
 #define CPTR_NVHE_EL2_RES1	0x000032ff /* known RES1 bits in CPTR_EL2 (nVHE) */
 #define CPTR_EL2_DEFAULT	CPTR_NVHE_EL2_RES1
+#define CPTR_NVHE_EL2_RES0	(GENMASK(63, 32) |	\
+				 GENMASK(29, 21) |	\
+				 GENMASK(19, 14) |	\
+				 BIT(11))
 
 /* Hyp Debug Configuration Register bits */
 #define MDCR_EL2_E2TB_MASK	(UL(0x3))
 #define MDCR_EL2_E2TB_SHIFT	(UL(24))
+#define MDCR_EL2_HPMFZS		(UL(1) << 36)
+#define MDCR_EL2_HPMFZO		(UL(1) << 29)
+#define MDCR_EL2_MTPME		(UL(1) << 28)
+#define MDCR_EL2_TDCC		(UL(1) << 27)
+#define MDCR_EL2_HCCD		(UL(1) << 23)
 #define MDCR_EL2_TTRF		(UL(1) << 19)
+#define MDCR_EL2_HPMD		(UL(1) << 17)
 #define MDCR_EL2_TPMS		(UL(1) << 14)
 #define MDCR_EL2_E2PB_MASK	(UL(0x3))
 #define MDCR_EL2_E2PB_SHIFT	(UL(12))
@@ -293,6 +309,12 @@
 #define MDCR_EL2_TPM		(UL(1) << 6)
 #define MDCR_EL2_TPMCR		(UL(1) << 5)
 #define MDCR_EL2_HPMN_MASK	(UL(0x1F))
+#define MDCR_EL2_RES0		(GENMASK(63, 37) |	\
+				 GENMASK(35, 30) |	\
+				 GENMASK(25, 24) |	\
+				 GENMASK(22, 20) |	\
+				 BIT(18) |		\
+				 GENMASK(16, 15))
 
 /* For compatibility with fault code shared with 32-bit */
 #define FSC_FAULT	ESR_ELx_FSC_FAULT
-- 
2.32.0.402.g57bb445576-goog


^ permalink raw reply related	[flat|nested] 126+ messages in thread
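
One natural use for RES0 masks like these is sanity-checking a
value before it is programmed: any set bit that overlaps the mask
is either a bug or a feature the code does not know about. The
check below is illustrative -- this patch only adds the
definitions -- but the mask itself is the MDCR_EL2_RES0 value from
the patch:

#include <stdio.h>
#include <stdint.h>

#define GENMASK(h, l)	(((~0ULL) << (l)) & (~0ULL >> (63 - (h))))
#define BIT(n)		(1ULL << (n))

/* MDCR_EL2 RES0 bits, as defined above. */
#define MDCR_EL2_RES0	(GENMASK(63, 37) |	\
			 GENMASK(35, 30) |	\
			 GENMASK(25, 24) |	\
			 GENMASK(22, 20) |	\
			 BIT(18) |		\
			 GENMASK(16, 15))

/* Returns 0 if val is clean, else the offending RES0 bits. */
static uint64_t mdcr_check_res0(uint64_t val)
{
	return val & MDCR_EL2_RES0;
}

int main(void)
{
	uint64_t bad = mdcr_check_res0(BIT(20));	/* inside RES0 */
	uint64_t ok  = mdcr_check_res0(BIT(19));	/* MDCR_EL2_TTRF */

	printf("bad=%#llx ok=%#llx\n",
	       (unsigned long long)bad, (unsigned long long)ok);
	return 0;
}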

* [PATCH v3 10/15] KVM: arm64: Guest exit handlers for nVHE hyp
@ 2021-07-19 16:03   ` Fuad Tabba
  0 siblings, 0 replies; 126+ messages in thread
From: Fuad Tabba @ 2021-07-19 16:03 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team, tabba

Add an array of pointers to handlers for various trap reasons in
nVHE code.

The current code selects how to fix up a guest on exit based on a
series of if/else statements. Future patches will also require
different handling for guest exits. Create an array of handlers
to consolidate them.

No functional change intended as the array isn't populated yet.

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kvm/hyp/include/hyp/switch.h | 43 +++++++++++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/switch.c        | 35 ++++++++++++++++++++
 2 files changed, 78 insertions(+)

diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index a0e78a6027be..5a2b89b96c67 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -409,6 +409,46 @@ static inline bool __hyp_handle_ptrauth(struct kvm_vcpu *vcpu)
 	return true;
 }
 
+typedef int (*exit_handle_fn)(struct kvm_vcpu *);
+
+exit_handle_fn kvm_get_nvhe_exit_handler(struct kvm_vcpu *vcpu);
+
+static exit_handle_fn kvm_get_hyp_exit_handler(struct kvm_vcpu *vcpu)
+{
+	return is_nvhe_hyp_code() ? kvm_get_nvhe_exit_handler(vcpu) : NULL;
+}
+
+/*
+ * Allow the hypervisor to handle the exit with an exit handler if it has one.
+ *
+ * Returns true if the hypervisor handled the exit, and control should go back
+ * to the guest, or false if it didn't.
+ */
+static bool kvm_hyp_handle_exit(struct kvm_vcpu *vcpu)
+{
+	bool is_handled = false;
+	exit_handle_fn exit_handler = kvm_get_hyp_exit_handler(vcpu);
+
+	if (exit_handler) {
+		/*
+		 * There's limited vcpu context here since it's not synced yet.
+		 * Ensure that any vcpu context the exit handler might use is in
+		 * sync before it's called, and written back if it handles the exit.
+		 */
+		*vcpu_pc(vcpu) = read_sysreg_el2(SYS_ELR);
+		*vcpu_cpsr(vcpu) = read_sysreg_el2(SYS_SPSR);
+
+		is_handled = exit_handler(vcpu);
+
+		if (is_handled) {
+			write_sysreg_el2(*vcpu_pc(vcpu), SYS_ELR);
+			write_sysreg_el2(*vcpu_cpsr(vcpu), SYS_SPSR);
+		}
+	}
+
+	return is_handled;
+}
+
 /*
  * Return true when we were able to fixup the guest exit and should return to
  * the guest, false when we should restore the host state and return to the
@@ -496,6 +536,9 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 			goto guest;
 	}
 
+	/* Check if there's an exit handler and allow it to handle the exit. */
+	if (kvm_hyp_handle_exit(vcpu))
+		goto guest;
 exit:
 	/* Return to the host kernel and handle the exit */
 	return false;
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 86f3d6482935..36da423006bd 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -158,6 +158,41 @@ static void __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt)
 		write_sysreg(pmu->events_host, pmcntenset_el0);
 }
 
+typedef int (*exit_handle_fn)(struct kvm_vcpu *);
+
+static exit_handle_fn hyp_exit_handlers[] = {
+	[0 ... ESR_ELx_EC_MAX]		= NULL,
+	[ESR_ELx_EC_WFx]		= NULL,
+	[ESR_ELx_EC_CP15_32]		= NULL,
+	[ESR_ELx_EC_CP15_64]		= NULL,
+	[ESR_ELx_EC_CP14_MR]		= NULL,
+	[ESR_ELx_EC_CP14_LS]		= NULL,
+	[ESR_ELx_EC_CP14_64]		= NULL,
+	[ESR_ELx_EC_HVC32]		= NULL,
+	[ESR_ELx_EC_SMC32]		= NULL,
+	[ESR_ELx_EC_HVC64]		= NULL,
+	[ESR_ELx_EC_SMC64]		= NULL,
+	[ESR_ELx_EC_SYS64]		= NULL,
+	[ESR_ELx_EC_SVE]		= NULL,
+	[ESR_ELx_EC_IABT_LOW]		= NULL,
+	[ESR_ELx_EC_DABT_LOW]		= NULL,
+	[ESR_ELx_EC_SOFTSTP_LOW]	= NULL,
+	[ESR_ELx_EC_WATCHPT_LOW]	= NULL,
+	[ESR_ELx_EC_BREAKPT_LOW]	= NULL,
+	[ESR_ELx_EC_BKPT32]		= NULL,
+	[ESR_ELx_EC_BRK64]		= NULL,
+	[ESR_ELx_EC_FP_ASIMD]		= NULL,
+	[ESR_ELx_EC_PAC]		= NULL,
+};
+
+exit_handle_fn kvm_get_nvhe_exit_handler(struct kvm_vcpu *vcpu)
+{
+	u32 esr = kvm_vcpu_get_esr(vcpu);
+	u8 esr_ec = ESR_ELx_EC(esr);
+
+	return hyp_exit_handlers[esr_ec];
+}
+
 /* Switch to the guest for legacy non-VHE systems */
 int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 {
-- 
2.32.0.402.g57bb445576-goog



* [PATCH v3 11/15] KVM: arm64: Add trap handlers for protected VMs
  2021-07-19 16:03 ` Fuad Tabba
@ 2021-07-19 16:03   ` Fuad Tabba
  0 siblings, 0 replies; 126+ messages in thread
From: Fuad Tabba @ 2021-07-19 16:03 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team, tabba

Add trap handlers for protected VMs. These are mainly for Sys64
and debug traps.

No functional change intended as these are not yet hooked into
the guest exit handlers introduced earlier. So even when a trap
is triggered, the exit handlers let the host handle it, as
before.
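
The ID-register accessors added below all reduce to one mask computation.
As a rough sketch (names simplified), a protected guest's view of a
feature ID register is derived from the host's sanitized value, an ALLOW
mask of permitted fields, and a SET mask of fields forced to a fixed
value:

	/*
	 * Sketch: pvm_access_id_aa64mmfr0() in the diff below has exactly
	 * this shape, with PVM_ID_AA64MMFR0_ALLOW and PVM_ID_AA64MMFR0_SET.
	 */
	static u64 pvm_id_reg_view(u64 sys_val, u64 allow, u64 set)
	{
		return (sys_val & allow) | set;
	}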

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/kvm_fixed_config.h | 178 +++++++++
 arch/arm64/include/asm/kvm_host.h         |   2 +
 arch/arm64/include/asm/kvm_hyp.h          |   3 +
 arch/arm64/kvm/Makefile                   |   2 +-
 arch/arm64/kvm/arm.c                      |  11 +
 arch/arm64/kvm/hyp/nvhe/Makefile          |   2 +-
 arch/arm64/kvm/hyp/nvhe/sys_regs.c        | 443 ++++++++++++++++++++++
 arch/arm64/kvm/pkvm.c                     | 183 +++++++++
 8 files changed, 822 insertions(+), 2 deletions(-)
 create mode 100644 arch/arm64/include/asm/kvm_fixed_config.h
 create mode 100644 arch/arm64/kvm/hyp/nvhe/sys_regs.c
 create mode 100644 arch/arm64/kvm/pkvm.c

diff --git a/arch/arm64/include/asm/kvm_fixed_config.h b/arch/arm64/include/asm/kvm_fixed_config.h
new file mode 100644
index 000000000000..b39a5de2c4b9
--- /dev/null
+++ b/arch/arm64/include/asm/kvm_fixed_config.h
@@ -0,0 +1,178 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2021 Google LLC
+ * Author: Fuad Tabba <tabba@google.com>
+ */
+
+#ifndef __ARM64_KVM_FIXED_CONFIG_H__
+#define __ARM64_KVM_FIXED_CONFIG_H__
+
+#include <asm/sysreg.h>
+
+/*
+ * This file contains definitions for features to be allowed or restricted for
+ * guest virtual machines as a baseline, depending on what mode KVM is running
+ * in and on the type of guest that is running.
+ *
+ * The features are represented as the highest allowed value for a feature in
+ * the feature id registers. If the field is set to all ones (i.e., 0b1111),
+ * then it's only restricted by what the system allows. If the feature is set to
+ * another value, then that value would be the maximum value allowed and
+ * supported in pKVM, even if the system supports a higher value.
+ *
+ * Some features are forced to a certain value, in which case a SET bitmap is
+ * used to force these values.
+ */
+
+
+/*
+ * Allowed features for protected guests (Protected KVM)
+ *
+ * The approach taken here is to allow features that are:
+ * - needed by common Linux distributions (e.g., floating point)
+ * - trivial, e.g., supporting the feature doesn't introduce or require the
+ *   tracking of additional state
+ * - not trappable
+ */
+
+/*
+ * - Floating-point and Advanced SIMD:
+ *	Don't require much support other than maintaining the context, which KVM
+ *	already has.
+ * - AArch64 guests only (no support for AArch32 guests):
+ *	Simplify support in case of asymmetric AArch32 systems.
+ * - RAS (v1)
+ *	v1 doesn't require much additional support, but later versions do.
+ * - Data Independent Timing
+ *	Trivial
+ * Remaining features are not supported either because they require too much
+ * support from KVM, or risk leaking guest data.
+ */
+#define PVM_ID_AA64PFR0_ALLOW (\
+	FEATURE(ID_AA64PFR0_FP) | \
+	FIELD_PREP(FEATURE(ID_AA64PFR0_EL0), ID_AA64PFR0_ELx_64BIT_ONLY) | \
+	FIELD_PREP(FEATURE(ID_AA64PFR0_EL1), ID_AA64PFR0_ELx_64BIT_ONLY) | \
+	FIELD_PREP(FEATURE(ID_AA64PFR0_EL2), ID_AA64PFR0_ELx_64BIT_ONLY) | \
+	FIELD_PREP(FEATURE(ID_AA64PFR0_EL3), ID_AA64PFR0_ELx_64BIT_ONLY) | \
+	FIELD_PREP(FEATURE(ID_AA64PFR0_RAS), ID_AA64PFR0_RAS_V1) | \
+	FEATURE(ID_AA64PFR0_ASIMD) | \
+	FEATURE(ID_AA64PFR0_DIT) \
+	)
+
+/*
+ * - Branch Target Identification
+ * - Speculative Store Bypassing
+ *	These features are trivial to support
+ */
+#define PVM_ID_AA64PFR1_ALLOW (\
+	FEATURE(ID_AA64PFR1_BT) | \
+	FEATURE(ID_AA64PFR1_SSBS) \
+	)
+
+/*
+ * No support for Scalable Vectors:
+ *	Requires additional support from KVM
+ */
+#define PVM_ID_AA64ZFR0_ALLOW (0ULL)
+
+/*
+ * No support for debug, including breakpoints, and watchpoints:
+ *	Reduce complexity and avoid exposing/leaking guest data
+ *
+ * NOTE: The Arm architecture mandates support for at least the Armv8 debug
+ * architecture, which would include at least 2 hardware breakpoints and
+ * watchpoints. Providing that support to protected guests adds considerable
+ * state and complexity, and risks leaking guest data. Therefore, the reserved
+ * value of 0 is used for debug-related fields.
+ */
+#define PVM_ID_AA64DFR0_ALLOW (0ULL)
+
+/*
+ * These features are chosen because they are supported by KVM and to limit the
+ * configuration state space and make it more deterministic.
+ * - 40-bit IPA
+ * - 16-bit ASID
+ * - Mixed-endian
+ * - Distinction between Secure and Non-secure Memory
+ * - Mixed-endian at EL0 only
+ * - Non-context synchronizing exception entry and exit
+ */
+#define PVM_ID_AA64MMFR0_ALLOW (\
+	FIELD_PREP(FEATURE(ID_AA64MMFR0_PARANGE), ID_AA64MMFR0_PARANGE_40) | \
+	FIELD_PREP(FEATURE(ID_AA64MMFR0_ASID), ID_AA64MMFR0_ASID_16) | \
+	FEATURE(ID_AA64MMFR0_BIGENDEL) | \
+	FEATURE(ID_AA64MMFR0_SNSMEM) | \
+	FEATURE(ID_AA64MMFR0_BIGENDEL0) | \
+	FEATURE(ID_AA64MMFR0_EXS) \
+	)
+
+/*
+ * - 64KB granule not supported
+ */
+#define PVM_ID_AA64MMFR0_SET (\
+	FIELD_PREP(FEATURE(ID_AA64MMFR0_TGRAN64), ID_AA64MMFR0_TGRAN64_NI) \
+	)
+
+/*
+ * These features are chosen because they are supported by KVM and to limit the
+ * configuration state space and make it more deterministic.
+ * - Hardware translation table updates to Access flag and Dirty state
+ * - Number of VMID bits from CPU
+ * - Hierarchical Permission Disables
+ * - Privileged Access Never
+ * - SError interrupt exceptions from speculative reads
+ * - Enhanced Translation Synchronization
+ */
+#define PVM_ID_AA64MMFR1_ALLOW (\
+	FEATURE(ID_AA64MMFR1_HADBS) | \
+	FEATURE(ID_AA64MMFR1_VMIDBITS) | \
+	FEATURE(ID_AA64MMFR1_HPD) | \
+	FEATURE(ID_AA64MMFR1_PAN) | \
+	FEATURE(ID_AA64MMFR1_SPECSEI) | \
+	FEATURE(ID_AA64MMFR1_ETS) \
+	)
+
+/*
+ * These features are chosen because they are supported by KVM and to limit the
+ * configuration state space and make it more deterministic.
+ * - Common not Private translations
+ * - User Access Override
+ * - IESB bit in the SCTLR_ELx registers
+ * - Unaligned single-copy atomicity and atomic functions
+ * - ESR_ELx.EC value on an exception by read access to feature ID space
+ * - TTL field in address operations.
+ * - Break-before-make sequences when changing translation block size
+ * - E0PDx mechanism
+ */
+#define PVM_ID_AA64MMFR2_ALLOW (\
+	FEATURE(ID_AA64MMFR2_CNP) | \
+	FEATURE(ID_AA64MMFR2_UAO) | \
+	FEATURE(ID_AA64MMFR2_IESB) | \
+	FEATURE(ID_AA64MMFR2_AT) | \
+	FEATURE(ID_AA64MMFR2_IDS) | \
+	FEATURE(ID_AA64MMFR2_TTL) | \
+	FEATURE(ID_AA64MMFR2_BBM) | \
+	FEATURE(ID_AA64MMFR2_E0PD) \
+	)
+
+/*
+ * Allow all features in this register because they are trivial to support, or
+ * are already supported by KVM:
+ * - LS64
+ * - XS
+ * - I8MM
+ * - DGH
+ * - BF16
+ * - SPECRES
+ * - SB
+ * - FRINTTS
+ * - PAuth
+ * - FPAC
+ * - LRCPC
+ * - FCMA
+ * - JSCVT
+ * - DPB
+ */
+#define PVM_ID_AA64ISAR1_ALLOW (~0ULL)
+
+#endif /* __ARM64_KVM_FIXED_CONFIG_H__ */
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index ac67d5699c68..e1ceadd69575 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -780,6 +780,8 @@ static inline bool kvm_vm_is_protected(struct kvm *kvm)
 	return false;
 }
 
+void kvm_init_protected_traps(struct kvm_vcpu *vcpu);
+
 int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature);
 bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
 
diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 657d0c94cf82..3f4866322f85 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -115,7 +115,10 @@ int __pkvm_init(phys_addr_t phys, unsigned long size, unsigned long nr_cpus,
 void __noreturn __host_enter(struct kvm_cpu_context *host_ctxt);
 #endif
 
+extern u64 kvm_nvhe_sym(id_aa64pfr0_el1_sys_val);
+extern u64 kvm_nvhe_sym(id_aa64pfr1_el1_sys_val);
 extern u64 kvm_nvhe_sym(id_aa64mmfr0_el1_sys_val);
 extern u64 kvm_nvhe_sym(id_aa64mmfr1_el1_sys_val);
+extern u64 kvm_nvhe_sym(id_aa64mmfr2_el1_sys_val);
 
 #endif /* __ARM64_KVM_HYP_H__ */
diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index 989bb5dad2c8..0be63f5c495f 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -14,7 +14,7 @@ kvm-y := $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o $(KVM)/eventfd.o \
 	 $(KVM)/vfio.o $(KVM)/irqchip.o $(KVM)/binary_stats.o \
 	 arm.o mmu.o mmio.o psci.o perf.o hypercalls.o pvtime.o \
 	 inject_fault.o va_layout.o handle_exit.o \
-	 guest.o debug.o reset.o sys_regs.o \
+	 guest.o debug.o pkvm.o reset.o sys_regs.o \
 	 vgic-sys-reg-v3.o fpsimd.o pmu.o \
 	 arch_timer.o trng.o\
 	 vgic/vgic.o vgic/vgic-init.o \
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 14b12f2c08c0..3f28549aff0d 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -618,6 +618,14 @@ static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
 
 	ret = kvm_arm_pmu_v3_enable(vcpu);
 
+	/*
+	 * Initialize traps for protected VMs.
+	 * NOTE: Move trap initialization to EL2 once the code is in place for
+	 * maintaining protected VM state at EL2 instead of the host.
+	 */
+	if (kvm_vm_is_protected(kvm))
+		kvm_init_protected_traps(vcpu);
+
 	return ret;
 }
 
@@ -1781,8 +1789,11 @@ static int kvm_hyp_init_protection(u32 hyp_va_bits)
 	void *addr = phys_to_virt(hyp_mem_base);
 	int ret;
 
+	kvm_nvhe_sym(id_aa64pfr0_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
+	kvm_nvhe_sym(id_aa64pfr1_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64PFR1_EL1);
 	kvm_nvhe_sym(id_aa64mmfr0_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
 	kvm_nvhe_sym(id_aa64mmfr1_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
+	kvm_nvhe_sym(id_aa64mmfr2_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR2_EL1);
 
 	ret = create_hyp_mappings(addr, addr + hyp_mem_size, PAGE_HYP);
 	if (ret)
diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index 5df6193fc430..a23f417a0c20 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -14,7 +14,7 @@ lib-objs := $(addprefix ../../../lib/, $(lib-objs))
 
 obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o host.o \
 	 hyp-main.o hyp-smp.o psci-relay.o early_alloc.o stub.o page_alloc.o \
-	 cache.o setup.o mm.o mem_protect.o
+	 cache.o setup.o mm.o mem_protect.o sys_regs.o
 obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \
 	 ../fpsimd.o ../hyp-entry.o ../exception.o ../pgtable.o
 obj-y += $(lib-objs)
diff --git a/arch/arm64/kvm/hyp/nvhe/sys_regs.c b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
new file mode 100644
index 000000000000..6c7230aa70e9
--- /dev/null
+++ b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
@@ -0,0 +1,443 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2021 Google LLC
+ * Author: Fuad Tabba <tabba@google.com>
+ */
+
+#include <linux/kvm_host.h>
+
+#include <asm/kvm_asm.h>
+#include <asm/kvm_emulate.h>
+#include <asm/kvm_fixed_config.h>
+#include <asm/kvm_mmu.h>
+
+#include <hyp/adjust_pc.h>
+
+#include "../../sys_regs.h"
+
+/*
+ * Copies of the host's CPU features registers holding sanitized values.
+ */
+u64 id_aa64pfr0_el1_sys_val;
+u64 id_aa64pfr1_el1_sys_val;
+u64 id_aa64mmfr2_el1_sys_val;
+
+/*
+ * Inject an unknown/undefined exception to the guest.
+ */
+static void inject_undef(struct kvm_vcpu *vcpu)
+{
+	u32 esr = (ESR_ELx_EC_UNKNOWN << ESR_ELx_EC_SHIFT);
+
+	vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA64_EL1 |
+			     KVM_ARM64_EXCEPT_AA64_ELx_SYNC |
+			     KVM_ARM64_PENDING_EXCEPTION);
+
+	__kvm_adjust_pc(vcpu);
+
+	write_sysreg_el1(esr, SYS_ESR);
+	write_sysreg_el1(read_sysreg_el2(SYS_ELR), SYS_ELR);
+}
+
+/*
+ * Accessor for undefined accesses.
+ */
+static bool undef_access(struct kvm_vcpu *vcpu,
+			 struct sys_reg_params *p,
+			 const struct sys_reg_desc *r)
+{
+	inject_undef(vcpu);
+	return false;
+}
+
+/*
+ * Accessors for feature registers.
+ *
+ * If access is allowed, set the regval to the protected VM's view of the
+ * register and return true.
+ * Otherwise, inject an undefined exception and return false.
+ */
+
+/*
+ * Returns the minimum feature supported and allowed.
+ */
+static u64 get_min_feature(u64 feature, u64 allowed_features,
+			   u64 supported_features)
+{
+	const u64 allowed_feature = FIELD_GET(feature, allowed_features);
+	const u64 supported_feature = FIELD_GET(feature, supported_features);
+
+	return min(allowed_feature, supported_feature);
+}
+
+/* Accessor for ID_AA64PFR0_EL1. */
+static bool pvm_access_id_aa64pfr0(struct kvm_vcpu *vcpu,
+				   struct sys_reg_params *p,
+				   const struct sys_reg_desc *r)
+{
+	const struct kvm *kvm = (const struct kvm *) kern_hyp_va(vcpu->kvm);
+	const u64 feature_ids = PVM_ID_AA64PFR0_ALLOW;
+	u64 set_mask = 0;
+	u64 clear_mask = 0;
+
+	if (p->is_write)
+		return undef_access(vcpu, p, r);
+
+	/* Get the RAS version allowed and supported */
+	clear_mask |= FEATURE(ID_AA64PFR0_RAS);
+	set_mask |= FIELD_PREP(FEATURE(ID_AA64PFR0_RAS),
+			       get_min_feature(FEATURE(ID_AA64PFR0_RAS),
+					       feature_ids,
+					       id_aa64pfr0_el1_sys_val));
+
+	/* AArch32 guests: if not allowed then force guests to 64-bits only */
+	clear_mask |= FEATURE(ID_AA64PFR0_EL0) | FEATURE(ID_AA64PFR0_EL1) |
+		      FEATURE(ID_AA64PFR0_EL2) | FEATURE(ID_AA64PFR0_EL3);
+
+	set_mask |= FIELD_PREP(FEATURE(ID_AA64PFR0_EL0),
+			       get_min_feature(FEATURE(ID_AA64PFR0_EL0),
+					       feature_ids,
+			                       id_aa64pfr0_el1_sys_val));
+	set_mask |= FIELD_PREP(FEATURE(ID_AA64PFR0_EL1),
+			       get_min_feature(FEATURE(ID_AA64PFR0_EL1),
+					       feature_ids,
+			                       id_aa64pfr0_el1_sys_val));
+	set_mask |= FIELD_PREP(FEATURE(ID_AA64PFR0_EL2),
+			       get_min_feature(FEATURE(ID_AA64PFR0_EL2),
+					       feature_ids,
+			                       id_aa64pfr0_el1_sys_val));
+	set_mask |= FIELD_PREP(FEATURE(ID_AA64PFR0_EL3),
+			       get_min_feature(FEATURE(ID_AA64PFR0_EL3),
+					       feature_ids,
+			                       id_aa64pfr0_el1_sys_val));
+
+	/* Spectre and Meltdown mitigation */
+	set_mask |= FIELD_PREP(FEATURE(ID_AA64PFR0_CSV2),
+			       (u64)kvm->arch.pfr0_csv2);
+	set_mask |= FIELD_PREP(FEATURE(ID_AA64PFR0_CSV3),
+			       (u64)kvm->arch.pfr0_csv3);
+
+	p->regval = (id_aa64pfr0_el1_sys_val & feature_ids & ~clear_mask) |
+		    set_mask;
+	return true;
+}
+
+/* Accessor for ID_AA64PFR1_EL1. */
+static bool pvm_access_id_aa64pfr1(struct kvm_vcpu *vcpu,
+				   struct sys_reg_params *p,
+				   const struct sys_reg_desc *r)
+{
+	const u64 feature_ids = PVM_ID_AA64PFR1_ALLOW;
+
+	if (p->is_write)
+		return undef_access(vcpu, p, r);
+
+	p->regval = id_aa64pfr1_el1_sys_val & feature_ids;
+	return true;
+}
+
+/* Accessor for ID_AA64ZFR0_EL1. */
+static bool pvm_access_id_aa64zfr0(struct kvm_vcpu *vcpu,
+				   struct sys_reg_params *p,
+				   const struct sys_reg_desc *r)
+{
+	if (p->is_write)
+		return undef_access(vcpu, p, r);
+
+	/*
+	 * No support for Scalable Vectors, therefore, pKVM has no sanitized
+	 * copy of the feature id register.
+	 */
+	BUILD_BUG_ON(PVM_ID_AA64ZFR0_ALLOW != 0ULL);
+
+	p->regval = 0;
+	return true;
+}
+
+/* Accessor for ID_AA64DFR0_EL1. */
+static bool pvm_access_id_aa64dfr0(struct kvm_vcpu *vcpu,
+				   struct sys_reg_params *p,
+				   const struct sys_reg_desc *r)
+{
+	if (p->is_write)
+		return undef_access(vcpu, p, r);
+
+	/*
+	 * No support for debug, including breakpoints, and watchpoints,
+	 * therefore, pKVM has no sanitized copy of the feature id register.
+	 */
+	BUILD_BUG_ON(PVM_ID_AA64DFR0_ALLOW != 0ULL);
+
+	p->regval = 0;
+	return true;
+}
+
+/*
+ * No restrictions on ID_AA64ISAR1_EL1 features, therefore, pKVM has no
+ * sanitized copy of the feature id register and it's handled by the host.
+ */
+static_assert(PVM_ID_AA64ISAR1_ALLOW == ~0ULL);
+
+/* Accessor for ID_AA64MMFR0_EL1. */
+static bool pvm_access_id_aa64mmfr0(struct kvm_vcpu *vcpu,
+				    struct sys_reg_params *p,
+				    const struct sys_reg_desc *r)
+{
+	const u64 feature_ids = PVM_ID_AA64MMFR0_ALLOW;
+	u64 set_mask = PVM_ID_AA64MMFR0_SET;
+
+	if (p->is_write)
+		return undef_access(vcpu, p, r);
+
+	p->regval = (id_aa64mmfr0_el1_sys_val & feature_ids) | set_mask;
+	return true;
+}
+
+/* Accessor for ID_AA64MMFR1_EL1. */
+static bool pvm_access_id_aa64mmfr1(struct kvm_vcpu *vcpu,
+				    struct sys_reg_params *p,
+				    const struct sys_reg_desc *r)
+{
+	const u64 feature_ids = PVM_ID_AA64MMFR1_ALLOW;
+
+	if (p->is_write)
+		return undef_access(vcpu, p, r);
+
+	p->regval = id_aa64mmfr1_el1_sys_val & feature_ids;
+	return true;
+}
+
+/* Accessor for ID_AA64MMFR2_EL1. */
+static bool pvm_access_id_aa64mmfr2(struct kvm_vcpu *vcpu,
+				    struct sys_reg_params *p,
+				    const struct sys_reg_desc *r)
+{
+	const u64 feature_ids = PVM_ID_AA64MMFR2_ALLOW;
+
+	if (p->is_write)
+		return undef_access(vcpu, p, r);
+
+	p->regval = id_aa64mmfr2_el1_sys_val & feature_ids;
+	return true;
+}
+
+/*
+ * Accessor for AArch32 Processor Feature Registers.
+ *
+ * The value of these registers is "unknown" according to the spec if AArch32
+ * isn't supported.
+ */
+static bool pvm_access_id_aarch32(struct kvm_vcpu *vcpu,
+				  struct sys_reg_params *p,
+				  const struct sys_reg_desc *r)
+{
+	if (p->is_write)
+		return undef_access(vcpu, p, r);
+
+	/*
+	 * No support for AArch32 guests, therefore, pKVM has no sanitized copy
+	 * of AArch32 feature id registers.
+	 */
+	BUILD_BUG_ON(FIELD_GET(FEATURE(ID_AA64PFR0_EL1),
+		     PVM_ID_AA64PFR0_ALLOW) > ID_AA64PFR0_ELx_64BIT_ONLY);
+
+	/* Use 0 for architecturally "unknown" values. */
+	p->regval = 0;
+	return true;
+}
+
+/* Mark the specified system register as an AArch32 feature register. */
+#define AARCH32(REG) { SYS_DESC(REG), .access = pvm_access_id_aarch32 }
+
+/* Mark the specified system register as not being handled in hyp. */
+#define HOST_HANDLED(REG) { SYS_DESC(REG), .access = NULL }
+
+/*
+ * Architected system registers.
+ * Important: Must be sorted ascending by Op0, Op1, CRn, CRm, Op2
+ *
+ * NOTE: Anything not explicitly listed here will be *restricted by default*,
+ * i.e., it will lead to injecting an exception into the guest.
+ */
+static const struct sys_reg_desc pvm_sys_reg_descs[] = {
+	/* Cache maintenance by set/way operations are restricted. */
+
+	/* Debug and Trace Registers are all restricted */
+
+	/* AArch64 mappings of the AArch32 ID registers */
+	/* CRm=1 */
+	AARCH32(SYS_ID_PFR0_EL1),
+	AARCH32(SYS_ID_PFR1_EL1),
+	AARCH32(SYS_ID_DFR0_EL1),
+	AARCH32(SYS_ID_AFR0_EL1),
+	AARCH32(SYS_ID_MMFR0_EL1),
+	AARCH32(SYS_ID_MMFR1_EL1),
+	AARCH32(SYS_ID_MMFR2_EL1),
+	AARCH32(SYS_ID_MMFR3_EL1),
+
+	/* CRm=2 */
+	AARCH32(SYS_ID_ISAR0_EL1),
+	AARCH32(SYS_ID_ISAR1_EL1),
+	AARCH32(SYS_ID_ISAR2_EL1),
+	AARCH32(SYS_ID_ISAR3_EL1),
+	AARCH32(SYS_ID_ISAR4_EL1),
+	AARCH32(SYS_ID_ISAR5_EL1),
+	AARCH32(SYS_ID_MMFR4_EL1),
+	AARCH32(SYS_ID_ISAR6_EL1),
+
+	/* CRm=3 */
+	AARCH32(SYS_MVFR0_EL1),
+	AARCH32(SYS_MVFR1_EL1),
+	AARCH32(SYS_MVFR2_EL1),
+	AARCH32(SYS_ID_PFR2_EL1),
+	AARCH32(SYS_ID_DFR1_EL1),
+	AARCH32(SYS_ID_MMFR5_EL1),
+
+	/* AArch64 ID registers */
+	/* CRm=4 */
+	{ SYS_DESC(SYS_ID_AA64PFR0_EL1), .access = pvm_access_id_aa64pfr0 },
+	{ SYS_DESC(SYS_ID_AA64PFR1_EL1), .access = pvm_access_id_aa64pfr1 },
+	{ SYS_DESC(SYS_ID_AA64ZFR0_EL1), .access = pvm_access_id_aa64zfr0 },
+	{ SYS_DESC(SYS_ID_AA64DFR0_EL1), .access = pvm_access_id_aa64dfr0 },
+	HOST_HANDLED(SYS_ID_AA64DFR1_EL1),
+	HOST_HANDLED(SYS_ID_AA64AFR0_EL1),
+	HOST_HANDLED(SYS_ID_AA64AFR1_EL1),
+	HOST_HANDLED(SYS_ID_AA64ISAR0_EL1),
+	HOST_HANDLED(SYS_ID_AA64ISAR1_EL1),
+	{ SYS_DESC(SYS_ID_AA64MMFR0_EL1), .access = pvm_access_id_aa64mmfr0 },
+	{ SYS_DESC(SYS_ID_AA64MMFR1_EL1), .access = pvm_access_id_aa64mmfr1 },
+	{ SYS_DESC(SYS_ID_AA64MMFR2_EL1), .access = pvm_access_id_aa64mmfr2 },
+
+	HOST_HANDLED(SYS_SCTLR_EL1),
+	HOST_HANDLED(SYS_ACTLR_EL1),
+	HOST_HANDLED(SYS_CPACR_EL1),
+
+	HOST_HANDLED(SYS_RGSR_EL1),
+	HOST_HANDLED(SYS_GCR_EL1),
+
+	/* Scalable Vector Registers are restricted. */
+
+	HOST_HANDLED(SYS_TTBR0_EL1),
+	HOST_HANDLED(SYS_TTBR1_EL1),
+	HOST_HANDLED(SYS_TCR_EL1),
+
+	HOST_HANDLED(SYS_APIAKEYLO_EL1),
+	HOST_HANDLED(SYS_APIAKEYHI_EL1),
+	HOST_HANDLED(SYS_APIBKEYLO_EL1),
+	HOST_HANDLED(SYS_APIBKEYHI_EL1),
+	HOST_HANDLED(SYS_APDAKEYLO_EL1),
+	HOST_HANDLED(SYS_APDAKEYHI_EL1),
+	HOST_HANDLED(SYS_APDBKEYLO_EL1),
+	HOST_HANDLED(SYS_APDBKEYHI_EL1),
+	HOST_HANDLED(SYS_APGAKEYLO_EL1),
+	HOST_HANDLED(SYS_APGAKEYHI_EL1),
+
+	HOST_HANDLED(SYS_AFSR0_EL1),
+	HOST_HANDLED(SYS_AFSR1_EL1),
+	HOST_HANDLED(SYS_ESR_EL1),
+
+	HOST_HANDLED(SYS_ERRIDR_EL1),
+	HOST_HANDLED(SYS_ERRSELR_EL1),
+	HOST_HANDLED(SYS_ERXFR_EL1),
+	HOST_HANDLED(SYS_ERXCTLR_EL1),
+	HOST_HANDLED(SYS_ERXSTATUS_EL1),
+	HOST_HANDLED(SYS_ERXADDR_EL1),
+	HOST_HANDLED(SYS_ERXMISC0_EL1),
+	HOST_HANDLED(SYS_ERXMISC1_EL1),
+
+	HOST_HANDLED(SYS_TFSR_EL1),
+	HOST_HANDLED(SYS_TFSRE0_EL1),
+
+	HOST_HANDLED(SYS_FAR_EL1),
+	HOST_HANDLED(SYS_PAR_EL1),
+
+	/* Performance Monitoring Registers are restricted. */
+
+	HOST_HANDLED(SYS_MAIR_EL1),
+	HOST_HANDLED(SYS_AMAIR_EL1),
+
+	/* Limited Ordering Regions Registers are restricted. */
+
+	HOST_HANDLED(SYS_VBAR_EL1),
+	HOST_HANDLED(SYS_DISR_EL1),
+
+	/* GIC CPU Interface registers are restricted. */
+
+	HOST_HANDLED(SYS_CONTEXTIDR_EL1),
+	HOST_HANDLED(SYS_TPIDR_EL1),
+
+	HOST_HANDLED(SYS_SCXTNUM_EL1),
+
+	HOST_HANDLED(SYS_CNTKCTL_EL1),
+
+	HOST_HANDLED(SYS_CCSIDR_EL1),
+	HOST_HANDLED(SYS_CLIDR_EL1),
+	HOST_HANDLED(SYS_CSSELR_EL1),
+	HOST_HANDLED(SYS_CTR_EL0),
+
+	/* Performance Monitoring Registers are restricted. */
+
+	HOST_HANDLED(SYS_TPIDR_EL0),
+	HOST_HANDLED(SYS_TPIDRRO_EL0),
+
+	HOST_HANDLED(SYS_SCXTNUM_EL0),
+
+	/* Activity Monitoring Registers are restricted. */
+
+	HOST_HANDLED(SYS_CNTP_TVAL_EL0),
+	HOST_HANDLED(SYS_CNTP_CTL_EL0),
+	HOST_HANDLED(SYS_CNTP_CVAL_EL0),
+
+	/* Performance Monitoring Registers are restricted. */
+
+	HOST_HANDLED(SYS_DACR32_EL2),
+	HOST_HANDLED(SYS_IFSR32_EL2),
+	HOST_HANDLED(SYS_FPEXC32_EL2),
+};
+
+/*
+ * Handler for protected VM MSR, MRS or System instruction execution in AArch64.
+ *
+ * Return 1 if handled, or 0 if not.
+ */
+int kvm_handle_pvm_sys64(struct kvm_vcpu *vcpu)
+{
+	const struct sys_reg_desc *r;
+	struct sys_reg_params params;
+	unsigned long esr = kvm_vcpu_get_esr(vcpu);
+	int Rt = kvm_vcpu_sys_get_rt(vcpu);
+
+	params = esr_sys64_to_params(esr);
+	params.regval = vcpu_get_reg(vcpu, Rt);
+
+	r = find_reg(&params, pvm_sys_reg_descs, ARRAY_SIZE(pvm_sys_reg_descs));
+
+	/* Undefined access (RESTRICTED). */
+	if (r == NULL) {
+		inject_undef(vcpu);
+		return 1;
+	}
+
+	/* Handled by the host (HOST_HANDLED) */
+	if (r->access == NULL)
+		return 0;
+
+	/* Handled by hyp: skip instruction if instructed to do so. */
+	if (r->access(vcpu, &params, r))
+		__kvm_skip_instr(vcpu);
+
+	vcpu_set_reg(vcpu, Rt, params.regval);
+	return 1;
+}
+
+/*
+ * Handler for protected VM restricted exceptions.
+ *
+ * Inject an undefined exception into the guest and return 1 to indicate that
+ * it was handled.
+ */
+int kvm_handle_pvm_restricted(struct kvm_vcpu *vcpu)
+{
+	inject_undef(vcpu);
+	return 1;
+}
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
new file mode 100644
index 000000000000..b8430b3d97af
--- /dev/null
+++ b/arch/arm64/kvm/pkvm.c
@@ -0,0 +1,183 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * KVM host (EL1) interface to Protected KVM (pkvm) code at EL2.
+ *
+ * Copyright (C) 2021 Google LLC
+ * Author: Fuad Tabba <tabba@google.com>
+ */
+
+#include <linux/kvm_host.h>
+#include <linux/mm.h>
+
+#include <asm/kvm_fixed_config.h>
+
+/*
+ * Set trap register values for features not allowed in ID_AA64PFR0.
+ */
+static void pvm_init_traps_aa64pfr0(struct kvm_vcpu *vcpu)
+{
+	const u64 feature_ids = PVM_ID_AA64PFR0_ALLOW;
+	u64 hcr_set = 0;
+	u64 hcr_clear = 0;
+	u64 cptr_set = 0;
+
+	/* Trap AArch32 guests */
+	if (FIELD_GET(FEATURE(ID_AA64PFR0_EL0), feature_ids) <
+		    ID_AA64PFR0_ELx_32BIT_64BIT ||
+	    FIELD_GET(FEATURE(ID_AA64PFR0_EL1), feature_ids) <
+		    ID_AA64PFR0_ELx_32BIT_64BIT)
+		hcr_set |= HCR_RW | HCR_TID0;
+
+	/* Trap RAS unless all versions are supported */
+	if (FIELD_GET(FEATURE(ID_AA64PFR0_RAS), feature_ids) <
+	    ID_AA64PFR0_RAS_ANY) {
+		hcr_set |= HCR_TERR | HCR_TEA;
+		hcr_clear |= HCR_FIEN;
+	}
+
+	/* Trap AMU */
+	if (!FIELD_GET(FEATURE(ID_AA64PFR0_AMU), feature_ids)) {
+		hcr_clear |= HCR_AMVOFFEN;
+		cptr_set |= CPTR_EL2_TAM;
+	}
+
+	/* Trap ASIMD */
+	if (!FIELD_GET(FEATURE(ID_AA64PFR0_ASIMD), feature_ids))
+		cptr_set |= CPTR_EL2_TFP;
+
+	/* Trap SVE */
+	if (!FIELD_GET(FEATURE(ID_AA64PFR0_SVE), feature_ids))
+		cptr_set |= CPTR_EL2_TZ;
+
+	vcpu->arch.hcr_el2 |= hcr_set;
+	vcpu->arch.hcr_el2 &= ~hcr_clear;
+	vcpu->arch.cptr_el2 |= cptr_set;
+}
+
+/*
+ * Set trap register values for features not allowed in ID_AA64PFR1.
+ */
+static void pvm_init_traps_aa64pfr1(struct kvm_vcpu *vcpu)
+{
+	const u64 feature_ids = PVM_ID_AA64PFR1_ALLOW;
+	u64 hcr_set = 0;
+	u64 hcr_clear = 0;
+
+	/* Memory Tagging: Trap and Treat as Untagged if not allowed. */
+	if (!FIELD_GET(FEATURE(ID_AA64PFR1_MTE), feature_ids)) {
+		hcr_set |= HCR_TID5;
+		hcr_clear |= HCR_DCT | HCR_ATA;
+	}
+
+	vcpu->arch.hcr_el2 |= hcr_set;
+	vcpu->arch.hcr_el2 &= ~hcr_clear;
+}
+
+/*
+ * Set trap register values for features not allowed in ID_AA64DFR0.
+ */
+static void pvm_init_traps_aa64dfr0(struct kvm_vcpu *vcpu)
+{
+	const u64 feature_ids = PVM_ID_AA64DFR0_ALLOW;
+	u64 mdcr_set = 0;
+	u64 mdcr_clear = 0;
+	u64 cptr_set = 0;
+
+	/* Trap/constrain PMU */
+	if (!FIELD_GET(FEATURE(ID_AA64DFR0_PMUVER), feature_ids)) {
+		mdcr_set |= MDCR_EL2_TPM | MDCR_EL2_TPMCR;
+		mdcr_clear |= MDCR_EL2_HPME | MDCR_EL2_MTPME |
+			      MDCR_EL2_HPMN_MASK;
+	}
+
+	/* Trap Debug */
+	if (!FIELD_GET(FEATURE(ID_AA64DFR0_DEBUGVER), feature_ids))
+		mdcr_set |= MDCR_EL2_TDRA | MDCR_EL2_TDA | MDCR_EL2_TDE;
+
+	/* Trap OS Double Lock */
+	if (!FIELD_GET(FEATURE(ID_AA64DFR0_DOUBLELOCK), feature_ids))
+		mdcr_set |= MDCR_EL2_TDOSA;
+
+	/* Trap SPE */
+	if (!FIELD_GET(FEATURE(ID_AA64DFR0_PMSVER), feature_ids)) {
+		mdcr_set |= MDCR_EL2_TPMS;
+		mdcr_clear |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT;
+	}
+
+	/* Trap Trace Filter */
+	if (!FIELD_GET(FEATURE(ID_AA64DFR0_TRACE_FILT), feature_ids))
+		mdcr_set |= MDCR_EL2_TTRF;
+
+	/* Trap Trace */
+	if (!FIELD_GET(FEATURE(ID_AA64DFR0_TRACEVER), feature_ids))
+		cptr_set |= CPTR_EL2_TTA;
+
+	vcpu->arch.mdcr_el2 |= mdcr_set;
+	vcpu->arch.mdcr_el2 &= ~mdcr_clear;
+	vcpu->arch.cptr_el2 |= cptr_set;
+}
+
+/*
+ * Set trap register values for features not allowed in ID_AA64MMFR0.
+ */
+static void pvm_init_traps_aa64mmfr0(struct kvm_vcpu *vcpu)
+{
+	const u64 feature_ids = PVM_ID_AA64MMFR0_ALLOW;
+	u64 mdcr_set = 0;
+
+	/* Trap Debug Communications Channel registers */
+	if (!FIELD_GET(FEATURE(ID_AA64MMFR0_FGT), feature_ids))
+		mdcr_set |= MDCR_EL2_TDCC;
+
+	vcpu->arch.mdcr_el2 |= mdcr_set;
+}
+
+/*
+ * Set trap register values for features not allowed in ID_AA64MMFR1.
+ */
+static void pvm_init_traps_aa64mmfr1(struct kvm_vcpu *vcpu)
+{
+	const u64 feature_ids = PVM_ID_AA64MMFR1_ALLOW;
+	u64 hcr_set = 0;
+
+	/* Trap LOR */
+	if (!FIELD_GET(FEATURE(ID_AA64MMFR1_LOR), feature_ids))
+		hcr_set |= HCR_TLOR;
+
+	vcpu->arch.hcr_el2 |= hcr_set;
+}
+
+/*
+ * Set baseline trap register values.
+ */
+static void pvm_init_trap_regs(struct kvm_vcpu *vcpu)
+{
+	const u64 hcr_trap_feat_regs = HCR_TID3;
+	const u64 hcr_trap_impdef = HCR_TACR | HCR_TIDCP | HCR_TID1;
+
+	/*
+	 * Always trap:
+	 * - Feature id registers: to control features exposed to guests
+	 * - Implementation-defined features
+	 */
+	vcpu->arch.hcr_el2 |= hcr_trap_feat_regs | hcr_trap_impdef;
+
+	/* Clear res0 and set res1 bits to trap potential new features. */
+	vcpu->arch.hcr_el2 &= ~(HCR_RES0);
+	vcpu->arch.mdcr_el2 &= ~(MDCR_EL2_RES0);
+	vcpu->arch.cptr_el2 |= CPTR_NVHE_EL2_RES1;
+	vcpu->arch.cptr_el2 &= ~(CPTR_NVHE_EL2_RES0);
+}
+
+/*
+ * Initialize trap register values for protected VMs.
+ */
+void kvm_init_protected_traps(struct kvm_vcpu *vcpu)
+{
+	pvm_init_trap_regs(vcpu);
+	pvm_init_traps_aa64pfr0(vcpu);
+	pvm_init_traps_aa64pfr1(vcpu);
+	pvm_init_traps_aa64dfr0(vcpu);
+	pvm_init_traps_aa64mmfr0(vcpu);
+	pvm_init_traps_aa64mmfr1(vcpu);
+}
-- 
2.32.0.402.g57bb445576-goog


^ permalink raw reply related	[flat|nested] 126+ messages in thread

* [PATCH v3 11/15] KVM: arm64: Add trap handlers for protected VMs
@ 2021-07-19 16:03   ` Fuad Tabba
  0 siblings, 0 replies; 126+ messages in thread
From: Fuad Tabba @ 2021-07-19 16:03 UTC (permalink / raw)
  To: kvmarm; +Cc: kernel-team, kvm, maz, pbonzini, will, linux-arm-kernel

Add trap handlers for protected VMs. These are mainly for Sys64
and debug traps.

No functional change intended as these are not hooked in yet to
the guest exit handlers introduced earlier. So even when trapping
is triggered, the exit handlers would let the host handle it, as
before.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/kvm_fixed_config.h | 178 +++++++++
 arch/arm64/include/asm/kvm_host.h         |   2 +
 arch/arm64/include/asm/kvm_hyp.h          |   3 +
 arch/arm64/kvm/Makefile                   |   2 +-
 arch/arm64/kvm/arm.c                      |  11 +
 arch/arm64/kvm/hyp/nvhe/Makefile          |   2 +-
 arch/arm64/kvm/hyp/nvhe/sys_regs.c        | 443 ++++++++++++++++++++++
 arch/arm64/kvm/pkvm.c                     | 183 +++++++++
 8 files changed, 822 insertions(+), 2 deletions(-)
 create mode 100644 arch/arm64/include/asm/kvm_fixed_config.h
 create mode 100644 arch/arm64/kvm/hyp/nvhe/sys_regs.c
 create mode 100644 arch/arm64/kvm/pkvm.c

diff --git a/arch/arm64/include/asm/kvm_fixed_config.h b/arch/arm64/include/asm/kvm_fixed_config.h
new file mode 100644
index 000000000000..b39a5de2c4b9
--- /dev/null
+++ b/arch/arm64/include/asm/kvm_fixed_config.h
@@ -0,0 +1,178 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2021 Google LLC
+ * Author: Fuad Tabba <tabba@google.com>
+ */
+
+#ifndef __ARM64_KVM_FIXED_CONFIG_H__
+#define __ARM64_KVM_FIXED_CONFIG_H__
+
+#include <asm/sysreg.h>
+
+/*
+ * This file contains definitions for features to be allowed or restricted for
+ * guest virtual machines as a baseline, depending on what mode KVM is running
+ * in and on the type of guest is running.
+ *
+ * The features are represented as the highest allowed value for a feature in
+ * the feature id registers. If the field is set to all ones (i.e., 0b1111),
+ * then it's only restricted by what the system allows. If the feature is set to
+ * another value, then that value would be the maximum value allowed and
+ * supported in pKVM, even if the system supports a higher value.
+ *
+ * Some features are forced to a certain value, in which case a SET bitmap is
+ * used to force these values.
+ */
+
+
+/*
+ * Allowed features for protected guests (Protected KVM)
+ *
+ * The approach taken here is to allow features that are:
+ * - needed by common Linux distributions (e.g., flooating point)
+ * - are trivial, e.g., supporting the feature doesn't introduce or require the
+ * tracking of additional state
+ * - not trapable
+ */
+
+/*
+ * - Floating-point and Advanced SIMD:
+ *	Don't require much support other than maintaining the context, which KVM
+ *	already has.
+ * - AArch64 guests only (no support for AArch32 guests):
+ *	Simplify support in case of asymmetric AArch32 systems.
+ * - RAS (v1)
+ *	v1 doesn't require much additional support, but later versions do.
+ * - Data Independent Timing
+ *	Trivial
+ * Remaining features are not supported either because they require too much
+ * support from KVM, or risk leaking guest data.
+ */
+#define PVM_ID_AA64PFR0_ALLOW (\
+	FEATURE(ID_AA64PFR0_FP) | \
+	FIELD_PREP(FEATURE(ID_AA64PFR0_EL0), ID_AA64PFR0_ELx_64BIT_ONLY) | \
+	FIELD_PREP(FEATURE(ID_AA64PFR0_EL1), ID_AA64PFR0_ELx_64BIT_ONLY) | \
+	FIELD_PREP(FEATURE(ID_AA64PFR0_EL2), ID_AA64PFR0_ELx_64BIT_ONLY) | \
+	FIELD_PREP(FEATURE(ID_AA64PFR0_EL3), ID_AA64PFR0_ELx_64BIT_ONLY) | \
+	FIELD_PREP(FEATURE(ID_AA64PFR0_RAS), ID_AA64PFR0_RAS_V1) | \
+	FEATURE(ID_AA64PFR0_ASIMD) | \
+	FEATURE(ID_AA64PFR0_DIT) \
+	)
+
+/*
+ * - Branch Target Identification
+ * - Speculative Store Bypassing
+ *	These features are trivial to support
+ */
+#define PVM_ID_AA64PFR1_ALLOW (\
+	FEATURE(ID_AA64PFR1_BT) | \
+	FEATURE(ID_AA64PFR1_SSBS) \
+	)
+
+/*
+ * No support for Scalable Vectors:
+ *	Requires additional support from KVM
+ */
+#define PVM_ID_AA64ZFR0_ALLOW (0ULL)
+
+/*
+ * No support for debug, including breakpoints, and watchpoints:
+ *	Reduce complexity and avoid exposing/leaking guest data
+ *
+ * NOTE: The Arm architecture mandates support for at least the Armv8 debug
+ * architecture, which would include at least 2 hardware breakpoints and
+ * watchpoints. Providing that support to protected guests adds considerable
+ * state and complexity, and risks leaking guest data. Therefore, the reserved
+ * value of 0 is used for debug-related fields.
+ */
+#define PVM_ID_AA64DFR0_ALLOW (0ULL)
+
+/*
+ * These features are chosen because they are supported by KVM and to limit the
+ * confiruation state space and make it more deterministic.
+ * - 40-bit IPA
+ * - 16-bit ASID
+ * - Mixed-endian
+ * - Distinction between Secure and Non-secure Memory
+ * - Mixed-endian at EL0 only
+ * - Non-context synchronizing exception entry and exit
+ */
+#define PVM_ID_AA64MMFR0_ALLOW (\
+	FIELD_PREP(FEATURE(ID_AA64MMFR0_PARANGE), ID_AA64MMFR0_PARANGE_40) | \
+	FIELD_PREP(FEATURE(ID_AA64MMFR0_ASID), ID_AA64MMFR0_ASID_16) | \
+	FEATURE(ID_AA64MMFR0_BIGENDEL) | \
+	FEATURE(ID_AA64MMFR0_SNSMEM) | \
+	FEATURE(ID_AA64MMFR0_BIGENDEL0) | \
+	FEATURE(ID_AA64MMFR0_EXS) \
+	)
+
+/*
+ * - 64KB granule not supported
+ */
+#define PVM_ID_AA64MMFR0_SET (\
+	FIELD_PREP(FEATURE(ID_AA64MMFR0_TGRAN64), ID_AA64MMFR0_TGRAN64_NI) \
+	)
+
+/*
+ * These features are chosen because they are supported by KVM and to limit the
+ * confiruation state space and make it more deterministic.
+ * - Hardware translation table updates to Access flag and Dirty state
+ * - Number of VMID bits from CPU
+ * - Hierarchical Permission Disables
+ * - Privileged Access Never
+ * - SError interrupt exceptions from speculative reads
+ * - Enhanced Translation Synchronization
+ */
+#define PVM_ID_AA64MMFR1_ALLOW (\
+	FEATURE(ID_AA64MMFR1_HADBS) | \
+	FEATURE(ID_AA64MMFR1_VMIDBITS) | \
+	FEATURE(ID_AA64MMFR1_HPD) | \
+	FEATURE(ID_AA64MMFR1_PAN) | \
+	FEATURE(ID_AA64MMFR1_SPECSEI) | \
+	FEATURE(ID_AA64MMFR1_ETS) \
+	)
+
+/*
+ * These features are chosen because they are supported by KVM and to limit the
+ * confiruation state space and make it more deterministic.
+ * - Common not Private translations
+ * - User Access Override
+ * - IESB bit in the SCTLR_ELx registers
+ * - Unaligned single-copy atomicity and atomic functions
+ * - ESR_ELx.EC value on an exception by read access to feature ID space
+ * - TTL field in address operations.
+ * - Break-before-make sequences when changing translation block size
+ * - E0PDx mechanism
+ */
+#define PVM_ID_AA64MMFR2_ALLOW (\
+	FEATURE(ID_AA64MMFR2_CNP) | \
+	FEATURE(ID_AA64MMFR2_UAO) | \
+	FEATURE(ID_AA64MMFR2_IESB) | \
+	FEATURE(ID_AA64MMFR2_AT) | \
+	FEATURE(ID_AA64MMFR2_IDS) | \
+	FEATURE(ID_AA64MMFR2_TTL) | \
+	FEATURE(ID_AA64MMFR2_BBM) | \
+	FEATURE(ID_AA64MMFR2_E0PD) \
+	)
+
+/*
+ * Allow all features in this register because they are trivial to support, or
+ * are already supported by KVM:
+ * - LS64
+ * - XS
+ * - I8MM
+ * - DGB
+ * - BF16
+ * - SPECRES
+ * - SB
+ * - FRINTTS
+ * - PAuth
+ * - FPAC
+ * - LRCPC
+ * - FCMA
+ * - JSCVT
+ * - DPB
+ */
+#define PVM_ID_AA64ISAR1_ALLOW (~0ULL)
+
+#endif /* __ARM64_KVM_FIXED_CONFIG_H__ */
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index ac67d5699c68..e1ceadd69575 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -780,6 +780,8 @@ static inline bool kvm_vm_is_protected(struct kvm *kvm)
 	return false;
 }
 
+void kvm_init_protected_traps(struct kvm_vcpu *vcpu);
+
 int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature);
 bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
 
diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 657d0c94cf82..3f4866322f85 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -115,7 +115,10 @@ int __pkvm_init(phys_addr_t phys, unsigned long size, unsigned long nr_cpus,
 void __noreturn __host_enter(struct kvm_cpu_context *host_ctxt);
 #endif
 
+extern u64 kvm_nvhe_sym(id_aa64pfr0_el1_sys_val);
+extern u64 kvm_nvhe_sym(id_aa64pfr1_el1_sys_val);
 extern u64 kvm_nvhe_sym(id_aa64mmfr0_el1_sys_val);
 extern u64 kvm_nvhe_sym(id_aa64mmfr1_el1_sys_val);
+extern u64 kvm_nvhe_sym(id_aa64mmfr2_el1_sys_val);
 
 #endif /* __ARM64_KVM_HYP_H__ */
diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index 989bb5dad2c8..0be63f5c495f 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -14,7 +14,7 @@ kvm-y := $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o $(KVM)/eventfd.o \
 	 $(KVM)/vfio.o $(KVM)/irqchip.o $(KVM)/binary_stats.o \
 	 arm.o mmu.o mmio.o psci.o perf.o hypercalls.o pvtime.o \
 	 inject_fault.o va_layout.o handle_exit.o \
-	 guest.o debug.o reset.o sys_regs.o \
+	 guest.o debug.o pkvm.o reset.o sys_regs.o \
 	 vgic-sys-reg-v3.o fpsimd.o pmu.o \
 	 arch_timer.o trng.o\
 	 vgic/vgic.o vgic/vgic-init.o \
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 14b12f2c08c0..3f28549aff0d 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -618,6 +618,14 @@ static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
 
 	ret = kvm_arm_pmu_v3_enable(vcpu);
 
+	/*
+	 * Initialize traps for protected VMs.
+	 * NOTE: Move  trap initialization to EL2 once the code is in place for
+	 * maintaining protected VM state at EL2 instead of the host.
+	 */
+	if (kvm_vm_is_protected(kvm))
+		kvm_init_protected_traps(vcpu);
+
 	return ret;
 }
 
@@ -1781,8 +1789,11 @@ static int kvm_hyp_init_protection(u32 hyp_va_bits)
 	void *addr = phys_to_virt(hyp_mem_base);
 	int ret;
 
+	kvm_nvhe_sym(id_aa64pfr0_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
+	kvm_nvhe_sym(id_aa64pfr1_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64PFR1_EL1);
 	kvm_nvhe_sym(id_aa64mmfr0_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
 	kvm_nvhe_sym(id_aa64mmfr1_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
+	kvm_nvhe_sym(id_aa64mmfr2_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR2_EL1);
 
 	ret = create_hyp_mappings(addr, addr + hyp_mem_size, PAGE_HYP);
 	if (ret)
diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index 5df6193fc430..a23f417a0c20 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -14,7 +14,7 @@ lib-objs := $(addprefix ../../../lib/, $(lib-objs))
 
 obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o host.o \
 	 hyp-main.o hyp-smp.o psci-relay.o early_alloc.o stub.o page_alloc.o \
-	 cache.o setup.o mm.o mem_protect.o
+	 cache.o setup.o mm.o mem_protect.o sys_regs.o
 obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \
 	 ../fpsimd.o ../hyp-entry.o ../exception.o ../pgtable.o
 obj-y += $(lib-objs)
diff --git a/arch/arm64/kvm/hyp/nvhe/sys_regs.c b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
new file mode 100644
index 000000000000..6c7230aa70e9
--- /dev/null
+++ b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
@@ -0,0 +1,443 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2021 Google LLC
+ * Author: Fuad Tabba <tabba@google.com>
+ */
+
+#include <linux/kvm_host.h>
+
+#include <asm/kvm_asm.h>
+#include <asm/kvm_emulate.h>
+#include <asm/kvm_fixed_config.h>
+#include <asm/kvm_mmu.h>
+
+#include <hyp/adjust_pc.h>
+
+#include "../../sys_regs.h"
+
+/*
+ * Copies of the host's CPU features registers holding sanitized values.
+ */
+u64 id_aa64pfr0_el1_sys_val;
+u64 id_aa64pfr1_el1_sys_val;
+u64 id_aa64mmfr2_el1_sys_val;
+
+/*
+ * Inject an unknown/undefined exception to the guest.
+ */
+static void inject_undef(struct kvm_vcpu *vcpu)
+{
+	u32 esr = (ESR_ELx_EC_UNKNOWN << ESR_ELx_EC_SHIFT);
+
+	vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA64_EL1 |
+			     KVM_ARM64_EXCEPT_AA64_ELx_SYNC |
+			     KVM_ARM64_PENDING_EXCEPTION);
+
+	__kvm_adjust_pc(vcpu);
+
+	write_sysreg_el1(esr, SYS_ESR);
+	write_sysreg_el1(read_sysreg_el2(SYS_ELR), SYS_ELR);
+}
+
+/*
+ * Accessor for undefined accesses.
+ */
+static bool undef_access(struct kvm_vcpu *vcpu,
+			 struct sys_reg_params *p,
+			 const struct sys_reg_desc *r)
+{
+	inject_undef(vcpu);
+	return false;
+}
+
+/*
+ * Accessors for feature registers.
+ *
+ * If access is allowed, set the regval to the protected VM's view of the
+ * register and return true.
+ * Otherwise, inject an undefined exception and return false.
+ */
+
+/*
+ * Returns the minimum feature supported and allowed.
+ */
+static u64 get_min_feature(u64 feature, u64 allowed_features,
+			   u64 supported_features)
+{
+	const u64 allowed_feature = FIELD_GET(feature, allowed_features);
+	const u64 supported_feature = FIELD_GET(feature, supported_features);
+
+	return min(allowed_feature, supported_feature);
+}
+
+/* Accessor for ID_AA64PFR0_EL1. */
+static bool pvm_access_id_aa64pfr0(struct kvm_vcpu *vcpu,
+				   struct sys_reg_params *p,
+				   const struct sys_reg_desc *r)
+{
+	const struct kvm *kvm = (const struct kvm *) kern_hyp_va(vcpu->kvm);
+	const u64 feature_ids = PVM_ID_AA64PFR0_ALLOW;
+	u64 set_mask = 0;
+	u64 clear_mask = 0;
+
+	if (p->is_write)
+		return undef_access(vcpu, p, r);
+
+	/* Get the RAS version allowed and supported */
+	clear_mask |= FEATURE(ID_AA64PFR0_RAS);
+	set_mask |= FIELD_PREP(FEATURE(ID_AA64PFR0_RAS),
+			       get_min_feature(FEATURE(ID_AA64PFR0_RAS),
+					       feature_ids,
+					       id_aa64pfr0_el1_sys_val));
+
+	/* AArch32 guests: if not allowed then force guests to 64-bits only */
+	clear_mask |= FEATURE(ID_AA64PFR0_EL0) | FEATURE(ID_AA64PFR0_EL1) |
+		      FEATURE(ID_AA64PFR0_EL2) | FEATURE(ID_AA64PFR0_EL3);
+
+	set_mask |= FIELD_PREP(FEATURE(ID_AA64PFR0_EL0),
+			       get_min_feature(FEATURE(ID_AA64PFR0_EL0),
+					       feature_ids,
+			                       id_aa64pfr0_el1_sys_val));
+	set_mask |= FIELD_PREP(FEATURE(ID_AA64PFR0_EL1),
+			       get_min_feature(FEATURE(ID_AA64PFR0_EL1),
+					       feature_ids,
+			                       id_aa64pfr0_el1_sys_val));
+	set_mask |= FIELD_PREP(FEATURE(ID_AA64PFR0_EL2),
+			       get_min_feature(FEATURE(ID_AA64PFR0_EL2),
+					       feature_ids,
+			                       id_aa64pfr0_el1_sys_val));
+	set_mask |= FIELD_PREP(FEATURE(ID_AA64PFR0_EL3),
+			       get_min_feature(FEATURE(ID_AA64PFR0_EL3),
+					       feature_ids,
+			                       id_aa64pfr0_el1_sys_val));
+
+	/* Spectre and Meltdown mitigation */
+	set_mask |= FIELD_PREP(FEATURE(ID_AA64PFR0_CSV2),
+			       (u64)kvm->arch.pfr0_csv2);
+	set_mask |= FIELD_PREP(FEATURE(ID_AA64PFR0_CSV3),
+			       (u64)kvm->arch.pfr0_csv3);
+
+	p->regval = (id_aa64pfr0_el1_sys_val & feature_ids & ~clear_mask) |
+		    set_mask;
+	return true;
+}
+
+/* Accessor for ID_AA64PFR1_EL1. */
+static bool pvm_access_id_aa64pfr1(struct kvm_vcpu *vcpu,
+				   struct sys_reg_params *p,
+				   const struct sys_reg_desc *r)
+{
+	const u64 feature_ids = PVM_ID_AA64PFR1_ALLOW;
+
+	if (p->is_write)
+		return undef_access(vcpu, p, r);
+
+	p->regval = id_aa64pfr1_el1_sys_val & feature_ids;
+	return true;
+}
+
+/* Accessor for ID_AA64ZFR0_EL1. */
+static bool pvm_access_id_aa64zfr0(struct kvm_vcpu *vcpu,
+				   struct sys_reg_params *p,
+				   const struct sys_reg_desc *r)
+{
+	if (p->is_write)
+		return undef_access(vcpu, p, r);
+
+	/*
+	 * No support for Scalable Vectors; therefore, pKVM has no sanitized
+	 * copy of the feature id register.
+	 */
+	BUILD_BUG_ON(PVM_ID_AA64ZFR0_ALLOW != 0ULL);
+
+	p->regval = 0;
+	return true;
+}
+
+/* Accessor for ID_AA64DFR0_EL1. */
+static bool pvm_access_id_aa64dfr0(struct kvm_vcpu *vcpu,
+				   struct sys_reg_params *p,
+				   const struct sys_reg_desc *r)
+{
+	if (p->is_write)
+		return undef_access(vcpu, p, r);
+
+	/*
+	 * No support for debug, including breakpoints and watchpoints;
+	 * therefore, pKVM has no sanitized copy of the feature id register.
+	 */
+	BUILD_BUG_ON(PVM_ID_AA64DFR0_ALLOW != 0ULL);
+
+	p->regval = 0;
+	return true;
+}
+
+/*
+ * No restrictions on ID_AA64ISAR1_EL1 features; therefore, pKVM has no
+ * sanitized copy of the feature id register, and it is handled by the host.
+ */
+static_assert(PVM_ID_AA64ISAR1_ALLOW == ~0ULL);
+
+/* Accessor for ID_AA64MMFR0_EL1. */
+static bool pvm_access_id_aa64mmfr0(struct kvm_vcpu *vcpu,
+				    struct sys_reg_params *p,
+				    const struct sys_reg_desc *r)
+{
+	const u64 feature_ids = PVM_ID_AA64MMFR0_ALLOW;
+	u64 set_mask = PVM_ID_AA64MMFR0_SET;
+
+	if (p->is_write)
+		return undef_access(vcpu, p, r);
+
+	p->regval = (id_aa64mmfr0_el1_sys_val & feature_ids) | set_mask;
+	return true;
+}
+
+/* Accessor for ID_AA64MMFR1_EL1. */
+static bool pvm_access_id_aa64mmfr1(struct kvm_vcpu *vcpu,
+				    struct sys_reg_params *p,
+				    const struct sys_reg_desc *r)
+{
+	const u64 feature_ids = PVM_ID_AA64MMFR1_ALLOW;
+
+	if (p->is_write)
+		return undef_access(vcpu, p, r);
+
+	p->regval = id_aa64mmfr1_el1_sys_val & feature_ids;
+	return true;
+}
+
+/* Accessor for ID_AA64MMFR2_EL1. */
+static bool pvm_access_id_aa64mmfr2(struct kvm_vcpu *vcpu,
+				    struct sys_reg_params *p,
+				    const struct sys_reg_desc *r)
+{
+	const u64 feature_ids = PVM_ID_AA64MMFR2_ALLOW;
+
+	if (p->is_write)
+		return undef_access(vcpu, p, r);
+
+	p->regval = id_aa64mmfr2_el1_sys_val & feature_ids;
+	return true;
+}
+
+/*
+ * Accessor for AArch32 Processor Feature Registers.
+ *
+ * The value of these registers is "unknown" according to the spec if AArch32
+ * isn't supported.
+ */
+static bool pvm_access_id_aarch32(struct kvm_vcpu *vcpu,
+				  struct sys_reg_params *p,
+				  const struct sys_reg_desc *r)
+{
+	if (p->is_write)
+		return undef_access(vcpu, p, r);
+
+	/*
+	 * No support for AArch32 guests; therefore, pKVM has no sanitized copy
+	 * of the AArch32 feature id registers.
+	 */
+	BUILD_BUG_ON(FIELD_GET(FEATURE(ID_AA64PFR0_EL1),
+		     PVM_ID_AA64PFR0_ALLOW) > ID_AA64PFR0_ELx_64BIT_ONLY);
+
+	/* Use 0 for architecturally "unknown" values. */
+	p->regval = 0;
+	return true;
+}
+
+/* Mark the specified system register as an AArch32 feature register. */
+#define AARCH32(REG) { SYS_DESC(REG), .access = pvm_access_id_aarch32 }
+
+/* Mark the specified system register as not being handled in hyp. */
+#define HOST_HANDLED(REG) { SYS_DESC(REG), .access = NULL }
+
+/*
+ * Architected system registers.
+ * Important: Must be sorted ascending by Op0, Op1, CRn, CRm, Op2
+ *
+ * NOTE: Anything not explicitly listed here will be *restricted by default*,
+ * i.e., it will lead to injecting an exception into the guest.
+ */
+static const struct sys_reg_desc pvm_sys_reg_descs[] = {
+	/* Cache maintenance by set/way operations are restricted. */
+
+	/* Debug and Trace Registers are all restricted. */
+
+	/* AArch64 mappings of the AArch32 ID registers */
+	/* CRm=1 */
+	AARCH32(SYS_ID_PFR0_EL1),
+	AARCH32(SYS_ID_PFR1_EL1),
+	AARCH32(SYS_ID_DFR0_EL1),
+	AARCH32(SYS_ID_AFR0_EL1),
+	AARCH32(SYS_ID_MMFR0_EL1),
+	AARCH32(SYS_ID_MMFR1_EL1),
+	AARCH32(SYS_ID_MMFR2_EL1),
+	AARCH32(SYS_ID_MMFR3_EL1),
+
+	/* CRm=2 */
+	AARCH32(SYS_ID_ISAR0_EL1),
+	AARCH32(SYS_ID_ISAR1_EL1),
+	AARCH32(SYS_ID_ISAR2_EL1),
+	AARCH32(SYS_ID_ISAR3_EL1),
+	AARCH32(SYS_ID_ISAR4_EL1),
+	AARCH32(SYS_ID_ISAR5_EL1),
+	AARCH32(SYS_ID_MMFR4_EL1),
+	AARCH32(SYS_ID_ISAR6_EL1),
+
+	/* CRm=3 */
+	AARCH32(SYS_MVFR0_EL1),
+	AARCH32(SYS_MVFR1_EL1),
+	AARCH32(SYS_MVFR2_EL1),
+	AARCH32(SYS_ID_PFR2_EL1),
+	AARCH32(SYS_ID_DFR1_EL1),
+	AARCH32(SYS_ID_MMFR5_EL1),
+
+	/* AArch64 ID registers */
+	/* CRm=4 */
+	{ SYS_DESC(SYS_ID_AA64PFR0_EL1), .access = pvm_access_id_aa64pfr0 },
+	{ SYS_DESC(SYS_ID_AA64PFR1_EL1), .access = pvm_access_id_aa64pfr1 },
+	{ SYS_DESC(SYS_ID_AA64ZFR0_EL1), .access = pvm_access_id_aa64zfr0 },
+	{ SYS_DESC(SYS_ID_AA64DFR0_EL1), .access = pvm_access_id_aa64dfr0 },
+	HOST_HANDLED(SYS_ID_AA64DFR1_EL1),
+	HOST_HANDLED(SYS_ID_AA64AFR0_EL1),
+	HOST_HANDLED(SYS_ID_AA64AFR1_EL1),
+	HOST_HANDLED(SYS_ID_AA64ISAR0_EL1),
+	HOST_HANDLED(SYS_ID_AA64ISAR1_EL1),
+	{ SYS_DESC(SYS_ID_AA64MMFR0_EL1), .access = pvm_access_id_aa64mmfr0 },
+	{ SYS_DESC(SYS_ID_AA64MMFR1_EL1), .access = pvm_access_id_aa64mmfr1 },
+	{ SYS_DESC(SYS_ID_AA64MMFR2_EL1), .access = pvm_access_id_aa64mmfr2 },
+
+	HOST_HANDLED(SYS_SCTLR_EL1),
+	HOST_HANDLED(SYS_ACTLR_EL1),
+	HOST_HANDLED(SYS_CPACR_EL1),
+
+	HOST_HANDLED(SYS_RGSR_EL1),
+	HOST_HANDLED(SYS_GCR_EL1),
+
+	/* Scalable Vector Registers are restricted. */
+
+	HOST_HANDLED(SYS_TTBR0_EL1),
+	HOST_HANDLED(SYS_TTBR1_EL1),
+	HOST_HANDLED(SYS_TCR_EL1),
+
+	HOST_HANDLED(SYS_APIAKEYLO_EL1),
+	HOST_HANDLED(SYS_APIAKEYHI_EL1),
+	HOST_HANDLED(SYS_APIBKEYLO_EL1),
+	HOST_HANDLED(SYS_APIBKEYHI_EL1),
+	HOST_HANDLED(SYS_APDAKEYLO_EL1),
+	HOST_HANDLED(SYS_APDAKEYHI_EL1),
+	HOST_HANDLED(SYS_APDBKEYLO_EL1),
+	HOST_HANDLED(SYS_APDBKEYHI_EL1),
+	HOST_HANDLED(SYS_APGAKEYLO_EL1),
+	HOST_HANDLED(SYS_APGAKEYHI_EL1),
+
+	HOST_HANDLED(SYS_AFSR0_EL1),
+	HOST_HANDLED(SYS_AFSR1_EL1),
+	HOST_HANDLED(SYS_ESR_EL1),
+
+	HOST_HANDLED(SYS_ERRIDR_EL1),
+	HOST_HANDLED(SYS_ERRSELR_EL1),
+	HOST_HANDLED(SYS_ERXFR_EL1),
+	HOST_HANDLED(SYS_ERXCTLR_EL1),
+	HOST_HANDLED(SYS_ERXSTATUS_EL1),
+	HOST_HANDLED(SYS_ERXADDR_EL1),
+	HOST_HANDLED(SYS_ERXMISC0_EL1),
+	HOST_HANDLED(SYS_ERXMISC1_EL1),
+
+	HOST_HANDLED(SYS_TFSR_EL1),
+	HOST_HANDLED(SYS_TFSRE0_EL1),
+
+	HOST_HANDLED(SYS_FAR_EL1),
+	HOST_HANDLED(SYS_PAR_EL1),
+
+	/* Performance Monitoring Registers are restricted. */
+
+	HOST_HANDLED(SYS_MAIR_EL1),
+	HOST_HANDLED(SYS_AMAIR_EL1),
+
+	/* Limited Ordering Regions Registers are restricted. */
+
+	HOST_HANDLED(SYS_VBAR_EL1),
+	HOST_HANDLED(SYS_DISR_EL1),
+
+	/* GIC CPU Interface registers are restricted. */
+
+	HOST_HANDLED(SYS_CONTEXTIDR_EL1),
+	HOST_HANDLED(SYS_TPIDR_EL1),
+
+	HOST_HANDLED(SYS_SCXTNUM_EL1),
+
+	HOST_HANDLED(SYS_CNTKCTL_EL1),
+
+	HOST_HANDLED(SYS_CCSIDR_EL1),
+	HOST_HANDLED(SYS_CLIDR_EL1),
+	HOST_HANDLED(SYS_CSSELR_EL1),
+	HOST_HANDLED(SYS_CTR_EL0),
+
+	/* Performance Monitoring Registers are restricted. */
+
+	HOST_HANDLED(SYS_TPIDR_EL0),
+	HOST_HANDLED(SYS_TPIDRRO_EL0),
+
+	HOST_HANDLED(SYS_SCXTNUM_EL0),
+
+	/* Activity Monitoring Registers are restricted. */
+
+	HOST_HANDLED(SYS_CNTP_TVAL_EL0),
+	HOST_HANDLED(SYS_CNTP_CTL_EL0),
+	HOST_HANDLED(SYS_CNTP_CVAL_EL0),
+
+	/* Performance Monitoring Registers are restricted. */
+
+	HOST_HANDLED(SYS_DACR32_EL2),
+	HOST_HANDLED(SYS_IFSR32_EL2),
+	HOST_HANDLED(SYS_FPEXC32_EL2),
+};
+
+/*
+ * Handler for protected VM MSR, MRS or System instruction execution in AArch64.
+ *
+ * Return 1 if handled, or 0 if not.
+ */
+int kvm_handle_pvm_sys64(struct kvm_vcpu *vcpu)
+{
+	const struct sys_reg_desc *r;
+	struct sys_reg_params params;
+	unsigned long esr = kvm_vcpu_get_esr(vcpu);
+	int Rt = kvm_vcpu_sys_get_rt(vcpu);
+
+	params = esr_sys64_to_params(esr);
+	params.regval = vcpu_get_reg(vcpu, Rt);
+
+	r = find_reg(&params, pvm_sys_reg_descs, ARRAY_SIZE(pvm_sys_reg_descs));
+
+	/* Undefined access (RESTRICTED). */
+	if (r == NULL) {
+		inject_undef(vcpu);
+		return 1;
+	}
+
+	/* Handled by the host (HOST_HANDLED) */
+	if (r->access == NULL)
+		return 0;
+
+	/* Handled by hyp: skip instruction if instructed to do so. */
+	if (r->access(vcpu, &params, r))
+		__kvm_skip_instr(vcpu);
+
+	vcpu_set_reg(vcpu, Rt, params.regval);
+	return 1;
+}
+
+/*
+ * Handler for protected VM restricted exceptions.
+ *
+ * Inject an undefined exception into the guest and return 1 to indicate that
+ * it was handled.
+ */
+int kvm_handle_pvm_restricted(struct kvm_vcpu *vcpu)
+{
+	inject_undef(vcpu);
+	return 1;
+}
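
For illustration, a stand-alone sketch of how the accessors above compose
a protected guest's view of a feature id register: mask the sanitized host
value by the allow bitmap, clear the fields that need clamping, then OR in
the clamped values. The field position and the field_get()/field_prep()
helpers below are simplified stand-ins for the kernel's FIELD_GET() and
FIELD_PREP(), not the architected register layout:

#include <stdio.h>
#include <stdint.h>

/* Extract a bitfield; mask & -mask isolates the mask's lowest set bit. */
static uint64_t field_get(uint64_t mask, uint64_t reg)
{
	return (reg & mask) / (mask & -mask);
}

/* Insert a bitfield value at the mask's position. */
static uint64_t field_prep(uint64_t mask, uint64_t val)
{
	return (val * (mask & -mask)) & mask;
}

/* Minimum of what the policy allows and what the hardware supports. */
static uint64_t get_min_feature(uint64_t mask, uint64_t allowed,
				uint64_t supported)
{
	uint64_t a = field_get(mask, allowed);
	uint64_t s = field_get(mask, supported);

	return a < s ? a : s;
}

int main(void)
{
	const uint64_t ras = 0xfull << 28;	/* illustrative field position */
	uint64_t allowed = field_prep(ras, 1);	/* policy: at most RAS v1 */
	uint64_t sys_val = field_prep(ras, 2);	/* hardware: a later version */
	uint64_t regval;

	regval = (sys_val & allowed & ~ras) |
		 field_prep(ras, get_min_feature(ras, allowed, sys_val));

	/* The guest sees the lower of the two versions: 1 */
	printf("guest RAS = %llu\n", (unsigned long long)field_get(ras, regval));
	return 0;
}
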
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
new file mode 100644
index 000000000000..b8430b3d97af
--- /dev/null
+++ b/arch/arm64/kvm/pkvm.c
@@ -0,0 +1,183 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * KVM host (EL1) interface to Protected KVM (pkvm) code at EL2.
+ *
+ * Copyright (C) 2021 Google LLC
+ * Author: Fuad Tabba <tabba@google.com>
+ */
+
+#include <linux/kvm_host.h>
+#include <linux/mm.h>
+
+#include <asm/kvm_fixed_config.h>
+
+/*
+ * Set trap register values for features not allowed in ID_AA64PFR0.
+ */
+static void pvm_init_traps_aa64pfr0(struct kvm_vcpu *vcpu)
+{
+	const u64 feature_ids = PVM_ID_AA64PFR0_ALLOW;
+	u64 hcr_set = 0;
+	u64 hcr_clear = 0;
+	u64 cptr_set = 0;
+
+	/* Trap AArch32 guests */
+	if (FIELD_GET(FEATURE(ID_AA64PFR0_EL0), feature_ids) <
+		    ID_AA64PFR0_ELx_32BIT_64BIT ||
+	    FIELD_GET(FEATURE(ID_AA64PFR0_EL1), feature_ids) <
+		    ID_AA64PFR0_ELx_32BIT_64BIT)
+		hcr_set |= HCR_RW | HCR_TID0;
+
+	/* Trap RAS unless all versions are supported */
+	if (FIELD_GET(FEATURE(ID_AA64PFR0_RAS), feature_ids) <
+	    ID_AA64PFR0_RAS_ANY) {
+		hcr_set |= HCR_TERR | HCR_TEA;
+		hcr_clear |= HCR_FIEN;
+	}
+
+	/* Trap AMU */
+	if (!FIELD_GET(FEATURE(ID_AA64PFR0_AMU), feature_ids)) {
+		hcr_clear |= HCR_AMVOFFEN;
+		cptr_set |= CPTR_EL2_TAM;
+	}
+
+	/* Trap ASIMD */
+	if (!FIELD_GET(FEATURE(ID_AA64PFR0_ASIMD), feature_ids))
+		cptr_set |= CPTR_EL2_TFP;
+
+	/* Trap SVE */
+	if (!FIELD_GET(FEATURE(ID_AA64PFR0_SVE), feature_ids))
+		cptr_set |= CPTR_EL2_TZ;
+
+	vcpu->arch.hcr_el2 |= hcr_set;
+	vcpu->arch.hcr_el2 &= ~hcr_clear;
+	vcpu->arch.cptr_el2 |= cptr_set;
+}
+
+/*
+ * Set trap register values for features not allowed in ID_AA64PFR1.
+ */
+static void pvm_init_traps_aa64pfr1(struct kvm_vcpu *vcpu)
+{
+	const u64 feature_ids = PVM_ID_AA64PFR1_ALLOW;
+	u64 hcr_set = 0;
+	u64 hcr_clear = 0;
+
+	/* Memory Tagging: Trap and Treat as Untagged if not allowed. */
+	if (!FIELD_GET(FEATURE(ID_AA64PFR1_MTE), feature_ids)) {
+		hcr_set |= HCR_TID5;
+		hcr_clear |= HCR_DCT | HCR_ATA;
+	}
+
+	vcpu->arch.hcr_el2 |= hcr_set;
+	vcpu->arch.hcr_el2 &= ~hcr_clear;
+}
+
+/*
+ * Set trap register values for features not allowed in ID_AA64DFR0.
+ */
+static void pvm_init_traps_aa64dfr0(struct kvm_vcpu *vcpu)
+{
+	const u64 feature_ids = PVM_ID_AA64DFR0_ALLOW;
+	u64 mdcr_set = 0;
+	u64 mdcr_clear = 0;
+	u64 cptr_set = 0;
+
+	/* Trap/constrain PMU */
+	if (!FIELD_GET(FEATURE(ID_AA64DFR0_PMUVER), feature_ids)) {
+		mdcr_set |= MDCR_EL2_TPM | MDCR_EL2_TPMCR;
+		mdcr_clear |= MDCR_EL2_HPME | MDCR_EL2_MTPME |
+			      MDCR_EL2_HPMN_MASK;
+	}
+
+	/* Trap Debug */
+	if (!FIELD_GET(FEATURE(ID_AA64DFR0_DEBUGVER), feature_ids))
+		mdcr_set |= MDCR_EL2_TDRA | MDCR_EL2_TDA | MDCR_EL2_TDE;
+
+	/* Trap OS Double Lock */
+	if (!FIELD_GET(FEATURE(ID_AA64DFR0_DOUBLELOCK), feature_ids))
+		mdcr_set |= MDCR_EL2_TDOSA;
+
+	/* Trap SPE */
+	if (!FIELD_GET(FEATURE(ID_AA64DFR0_PMSVER), feature_ids)) {
+		mdcr_set |= MDCR_EL2_TPMS;
+		mdcr_clear |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT;
+	}
+
+	/* Trap Trace Filter */
+	if (!FIELD_GET(FEATURE(ID_AA64DFR0_TRACE_FILT), feature_ids))
+		mdcr_set |= MDCR_EL2_TTRF;
+
+	/* Trap Trace */
+	if (!FIELD_GET(FEATURE(ID_AA64DFR0_TRACEVER), feature_ids))
+		cptr_set |= CPTR_EL2_TTA;
+
+	vcpu->arch.mdcr_el2 |= mdcr_set;
+	vcpu->arch.mdcr_el2 &= ~mdcr_clear;
+	vcpu->arch.cptr_el2 |= cptr_set;
+}
+
+/*
+ * Set trap register values for features not allowed in ID_AA64MMFR0.
+ */
+static void pvm_init_traps_aa64mmfr0(struct kvm_vcpu *vcpu)
+{
+	const u64 feature_ids = PVM_ID_AA64MMFR0_ALLOW;
+	u64 mdcr_set = 0;
+
+	/* Trap Debug Communications Channel registers */
+	if (!FIELD_GET(FEATURE(ID_AA64MMFR0_FGT), feature_ids))
+		mdcr_set |= MDCR_EL2_TDCC;
+
+	vcpu->arch.mdcr_el2 |= mdcr_set;
+}
+
+/*
+ * Set trap register values for features not allowed in ID_AA64MMFR1.
+ */
+static void pvm_init_traps_aa64mmfr1(struct kvm_vcpu *vcpu)
+{
+	const u64 feature_ids = PVM_ID_AA64MMFR1_ALLOW;
+	u64 hcr_set = 0;
+
+	/* Trap LOR */
+	if (!FIELD_GET(FEATURE(ID_AA64MMFR1_LOR), feature_ids))
+		hcr_set |= HCR_TLOR;
+
+	vcpu->arch.hcr_el2 |= hcr_set;
+}
+
+/*
+ * Set baseline trap register values.
+ */
+static void pvm_init_trap_regs(struct kvm_vcpu *vcpu)
+{
+	const u64 hcr_trap_feat_regs = HCR_TID3;
+	const u64 hcr_trap_impdef = HCR_TACR | HCR_TIDCP | HCR_TID1;
+
+	/*
+	 * Always trap:
+	 * - Feature id registers: to control features exposed to guests
+	 * - Implementation-defined features
+	 */
+	vcpu->arch.hcr_el2 |= hcr_trap_feat_regs | hcr_trap_impdef;
+
+	/* Clear RES0 and set RES1 bits to trap potential new features. */
+	vcpu->arch.hcr_el2 &= ~(HCR_RES0);
+	vcpu->arch.mdcr_el2 &= ~(MDCR_EL2_RES0);
+	vcpu->arch.cptr_el2 |= CPTR_NVHE_EL2_RES1;
+	vcpu->arch.cptr_el2 &= ~(CPTR_NVHE_EL2_RES0);
+}
+
+/*
+ * Initialize trap register values for protected VMs.
+ */
+void kvm_init_protected_traps(struct kvm_vcpu *vcpu)
+{
+	pvm_init_trap_regs(vcpu);
+	pvm_init_traps_aa64pfr0(vcpu);
+	pvm_init_traps_aa64pfr1(vcpu);
+	pvm_init_traps_aa64dfr0(vcpu);
+	pvm_init_traps_aa64mmfr0(vcpu);
+	pvm_init_traps_aa64mmfr1(vcpu);
+}
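
The helpers above share one pattern worth noting: each policy check only
accumulates bits into local set/clear masks, and the vcpu's trap registers
are updated once at the end, which keeps the individual checks free of
side effects. A minimal stand-alone sketch of that pattern, using made-up
bit positions rather than the architected HCR_EL2 layout:

#include <stdio.h>
#include <stdint.h>

#define HCR_TERR (1ull << 0)		/* illustrative positions only */
#define HCR_TEA  (1ull << 1)
#define HCR_FIEN (1ull << 2)

int main(void)
{
	uint64_t hcr_el2 = HCR_FIEN;	/* assume FIEN comes in set */
	uint64_t hcr_set = 0, hcr_clear = 0;
	int ras_fully_allowed = 0;	/* decision from the allow bitmap */

	if (!ras_fully_allowed) {
		hcr_set |= HCR_TERR | HCR_TEA;	/* trap error records/aborts */
		hcr_clear |= HCR_FIEN;		/* hide RAS fault injection */
	}

	hcr_el2 |= hcr_set;
	hcr_el2 &= ~hcr_clear;

	printf("hcr_el2 = 0x%llx\n", (unsigned long long)hcr_el2); /* 0x3 */
	return 0;
}
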
-- 
2.32.0.402.g57bb445576-goog

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 126+ messages in thread

* [PATCH v3 12/15] KVM: arm64: Move sanitized copies of CPU features
@ 2021-07-19 16:03   ` Fuad Tabba
  -1 siblings, 0 replies; 126+ messages in thread
From: Fuad Tabba @ 2021-07-19 16:03 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team, tabba

Move the sanitized copies of the CPU feature registers to the
recently created sys_regs.c. This consolidates all copies in a
more relevant file.

No functional change intended.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kvm/hyp/nvhe/mem_protect.c | 6 ------
 arch/arm64/kvm/hyp/nvhe/sys_regs.c    | 2 ++
 2 files changed, 2 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index d938ce95d3bd..925c7db7fa34 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -25,12 +25,6 @@ struct host_kvm host_kvm;
 
 static struct hyp_pool host_s2_pool;
 
-/*
- * Copies of the host's CPU features registers holding sanitized values.
- */
-u64 id_aa64mmfr0_el1_sys_val;
-u64 id_aa64mmfr1_el1_sys_val;
-
 static const u8 pkvm_hyp_id = 1;
 
 static void *host_s2_zalloc_pages_exact(size_t size)
diff --git a/arch/arm64/kvm/hyp/nvhe/sys_regs.c b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
index 6c7230aa70e9..e928567430c1 100644
--- a/arch/arm64/kvm/hyp/nvhe/sys_regs.c
+++ b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
@@ -20,6 +20,8 @@
  */
 u64 id_aa64pfr0_el1_sys_val;
 u64 id_aa64pfr1_el1_sys_val;
+u64 id_aa64mmfr0_el1_sys_val;
+u64 id_aa64mmfr1_el1_sys_val;
 u64 id_aa64mmfr2_el1_sys_val;
 
 /*
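
These hyp-resident copies are populated by the host at initialization time
through the kvm_nvhe_sym() aliases, as done in kvm_hyp_init_protection()
earlier in the series, e.g.:

	kvm_nvhe_sym(id_aa64mmfr0_el1_sys_val) =
		read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
	kvm_nvhe_sym(id_aa64mmfr1_el1_sys_val) =
		read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
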
-- 
2.32.0.402.g57bb445576-goog


^ permalink raw reply related	[flat|nested] 126+ messages in thread

* [PATCH v3 13/15] KVM: arm64: Trap access to pVM restricted features
  2021-07-19 16:03 ` Fuad Tabba
  (?)
@ 2021-07-19 16:03   ` Fuad Tabba
  -1 siblings, 0 replies; 126+ messages in thread
From: Fuad Tabba @ 2021-07-19 16:03 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team, tabba

Trap accesses to restricted features for VMs running in protected
mode.

Access to feature registers are emulated, and only supported
features are exposed to protected VMs.

Accesses to restricted registers as well as restricted
instructions are trapped, and an undefined exception is injected
into the protected guests, i.e., with EC = 0x0 (unknown reason).
This EC is the one used, according to the Arm Architecture
Reference Manual, for unallocated or undefined system registers
or instructions.

Only affects the functionality of protected VMs. Otherwise,
should not affect non-protected VMs when KVM is running in
protected mode.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kvm/hyp/include/hyp/switch.h |  3 ++
 arch/arm64/kvm/hyp/nvhe/switch.c        | 52 ++++++++++++++++++-------
 2 files changed, 41 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 5a2b89b96c67..8431f1514280 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -33,6 +33,9 @@
 extern struct exception_table_entry __start___kvm_ex_table;
 extern struct exception_table_entry __stop___kvm_ex_table;
 
+int kvm_handle_pvm_sys64(struct kvm_vcpu *vcpu);
+int kvm_handle_pvm_restricted(struct kvm_vcpu *vcpu);
+
 /* Check whether the FP regs were dirtied while in the host-side run loop: */
 static inline bool update_fp_enabled(struct kvm_vcpu *vcpu)
 {
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 36da423006bd..99bbbba90094 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -158,30 +158,54 @@ static void __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt)
 		write_sysreg(pmu->events_host, pmcntenset_el0);
 }
 
+/**
+ * Handle system register accesses for protected VMs.
+ *
+ * Return 1 if handled, or 0 if not.
+ */
+static int handle_pvm_sys64(struct kvm_vcpu *vcpu)
+{
+	return kvm_vm_is_protected(kern_hyp_va(vcpu->kvm)) ?
+			     kvm_handle_pvm_sys64(vcpu) :
+			     0;
+}
+
+/**
+ * Handle restricted feature accesses for protected VMs.
+ *
+ * Return 1 if handled, or 0 if not.
+ */
+static int handle_pvm_restricted(struct kvm_vcpu *vcpu)
+{
+	return kvm_vm_is_protected(kern_hyp_va(vcpu->kvm)) ?
+			     kvm_handle_pvm_restricted(vcpu) :
+			     0;
+}
+
 typedef int (*exit_handle_fn)(struct kvm_vcpu *);
 
 static exit_handle_fn hyp_exit_handlers[] = {
-	[0 ... ESR_ELx_EC_MAX]		= NULL,
+	[0 ... ESR_ELx_EC_MAX]		= handle_pvm_restricted,
 	[ESR_ELx_EC_WFx]		= NULL,
-	[ESR_ELx_EC_CP15_32]		= NULL,
-	[ESR_ELx_EC_CP15_64]		= NULL,
-	[ESR_ELx_EC_CP14_MR]		= NULL,
-	[ESR_ELx_EC_CP14_LS]		= NULL,
-	[ESR_ELx_EC_CP14_64]		= NULL,
+	[ESR_ELx_EC_CP15_32]		= handle_pvm_restricted,
+	[ESR_ELx_EC_CP15_64]		= handle_pvm_restricted,
+	[ESR_ELx_EC_CP14_MR]		= handle_pvm_restricted,
+	[ESR_ELx_EC_CP14_LS]		= handle_pvm_restricted,
+	[ESR_ELx_EC_CP14_64]		= handle_pvm_restricted,
 	[ESR_ELx_EC_HVC32]		= NULL,
 	[ESR_ELx_EC_SMC32]		= NULL,
 	[ESR_ELx_EC_HVC64]		= NULL,
 	[ESR_ELx_EC_SMC64]		= NULL,
-	[ESR_ELx_EC_SYS64]		= NULL,
-	[ESR_ELx_EC_SVE]		= NULL,
+	[ESR_ELx_EC_SYS64]		= handle_pvm_sys64,
+	[ESR_ELx_EC_SVE]		= handle_pvm_restricted,
 	[ESR_ELx_EC_IABT_LOW]		= NULL,
 	[ESR_ELx_EC_DABT_LOW]		= NULL,
-	[ESR_ELx_EC_SOFTSTP_LOW]	= NULL,
-	[ESR_ELx_EC_WATCHPT_LOW]	= NULL,
-	[ESR_ELx_EC_BREAKPT_LOW]	= NULL,
-	[ESR_ELx_EC_BKPT32]		= NULL,
-	[ESR_ELx_EC_BRK64]		= NULL,
-	[ESR_ELx_EC_FP_ASIMD]		= NULL,
+	[ESR_ELx_EC_SOFTSTP_LOW]	= handle_pvm_restricted,
+	[ESR_ELx_EC_WATCHPT_LOW]	= handle_pvm_restricted,
+	[ESR_ELx_EC_BREAKPT_LOW]	= handle_pvm_restricted,
+	[ESR_ELx_EC_BKPT32]		= handle_pvm_restricted,
+	[ESR_ELx_EC_BRK64]		= handle_pvm_restricted,
+	[ESR_ELx_EC_FP_ASIMD]		= handle_pvm_restricted,
 	[ESR_ELx_EC_PAC]		= NULL,
 };
 
-- 
2.32.0.402.g57bb445576-goog


^ permalink raw reply related	[flat|nested] 126+ messages in thread

* [PATCH v3 13/15] KVM: arm64: Trap access to pVM restricted features
@ 2021-07-19 16:03   ` Fuad Tabba
  0 siblings, 0 replies; 126+ messages in thread
From: Fuad Tabba @ 2021-07-19 16:03 UTC (permalink / raw)
  To: kvmarm; +Cc: kernel-team, kvm, maz, pbonzini, will, linux-arm-kernel

Trap accesses to restricted features for VMs running in protected
mode.

Access to feature registers are emulated, and only supported
features are exposed to protected VMs.

Accesses to restricted registers as well as restricted
instructions are trapped, and an undefined exception is injected
into the protected guests, i.e., with EC = 0x0 (unknown reason).
This EC is the one used, according to the Arm Architecture
Reference Manual, for unallocated or undefined system registers
or instructions.

Only affects the functionality of protected VMs. Otherwise,
should not affect non-protected VMs when KVM is running in
protected mode.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kvm/hyp/include/hyp/switch.h |  3 ++
 arch/arm64/kvm/hyp/nvhe/switch.c        | 52 ++++++++++++++++++-------
 2 files changed, 41 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 5a2b89b96c67..8431f1514280 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -33,6 +33,9 @@
 extern struct exception_table_entry __start___kvm_ex_table;
 extern struct exception_table_entry __stop___kvm_ex_table;
 
+int kvm_handle_pvm_sys64(struct kvm_vcpu *vcpu);
+int kvm_handle_pvm_restricted(struct kvm_vcpu *vcpu);
+
 /* Check whether the FP regs were dirtied while in the host-side run loop: */
 static inline bool update_fp_enabled(struct kvm_vcpu *vcpu)
 {
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 36da423006bd..99bbbba90094 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -158,30 +158,54 @@ static void __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt)
 		write_sysreg(pmu->events_host, pmcntenset_el0);
 }
 
+/**
+ * Handle system register accesses for protected VMs.
+ *
+ * Return 1 if handled, or 0 if not.
+ */
+static int handle_pvm_sys64(struct kvm_vcpu *vcpu)
+{
+	return kvm_vm_is_protected(kern_hyp_va(vcpu->kvm)) ?
+			     kvm_handle_pvm_sys64(vcpu) :
+			     0;
+}
+
+/**
+ * Handle restricted feature accesses for protected VMs.
+ *
+ * Return 1 if handled, or 0 if not.
+ */
+static int handle_pvm_restricted(struct kvm_vcpu *vcpu)
+{
+	return kvm_vm_is_protected(kern_hyp_va(vcpu->kvm)) ?
+			     kvm_handle_pvm_restricted(vcpu) :
+			     0;
+}
+
 typedef int (*exit_handle_fn)(struct kvm_vcpu *);
 
 static exit_handle_fn hyp_exit_handlers[] = {
-	[0 ... ESR_ELx_EC_MAX]		= NULL,
+	[0 ... ESR_ELx_EC_MAX]		= handle_pvm_restricted,
 	[ESR_ELx_EC_WFx]		= NULL,
-	[ESR_ELx_EC_CP15_32]		= NULL,
-	[ESR_ELx_EC_CP15_64]		= NULL,
-	[ESR_ELx_EC_CP14_MR]		= NULL,
-	[ESR_ELx_EC_CP14_LS]		= NULL,
-	[ESR_ELx_EC_CP14_64]		= NULL,
+	[ESR_ELx_EC_CP15_32]		= handle_pvm_restricted,
+	[ESR_ELx_EC_CP15_64]		= handle_pvm_restricted,
+	[ESR_ELx_EC_CP14_MR]		= handle_pvm_restricted,
+	[ESR_ELx_EC_CP14_LS]		= handle_pvm_restricted,
+	[ESR_ELx_EC_CP14_64]		= handle_pvm_restricted,
 	[ESR_ELx_EC_HVC32]		= NULL,
 	[ESR_ELx_EC_SMC32]		= NULL,
 	[ESR_ELx_EC_HVC64]		= NULL,
 	[ESR_ELx_EC_SMC64]		= NULL,
-	[ESR_ELx_EC_SYS64]		= NULL,
-	[ESR_ELx_EC_SVE]		= NULL,
+	[ESR_ELx_EC_SYS64]		= handle_pvm_sys64,
+	[ESR_ELx_EC_SVE]		= handle_pvm_restricted,
 	[ESR_ELx_EC_IABT_LOW]		= NULL,
 	[ESR_ELx_EC_DABT_LOW]		= NULL,
-	[ESR_ELx_EC_SOFTSTP_LOW]	= NULL,
-	[ESR_ELx_EC_WATCHPT_LOW]	= NULL,
-	[ESR_ELx_EC_BREAKPT_LOW]	= NULL,
-	[ESR_ELx_EC_BKPT32]		= NULL,
-	[ESR_ELx_EC_BRK64]		= NULL,
-	[ESR_ELx_EC_FP_ASIMD]		= NULL,
+	[ESR_ELx_EC_SOFTSTP_LOW]	= handle_pvm_restricted,
+	[ESR_ELx_EC_WATCHPT_LOW]	= handle_pvm_restricted,
+	[ESR_ELx_EC_BREAKPT_LOW]	= handle_pvm_restricted,
+	[ESR_ELx_EC_BKPT32]		= handle_pvm_restricted,
+	[ESR_ELx_EC_BRK64]		= handle_pvm_restricted,
+	[ESR_ELx_EC_FP_ASIMD]		= handle_pvm_restricted,
 	[ESR_ELx_EC_PAC]		= NULL,
 };
 
-- 
2.32.0.402.g57bb445576-goog
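
As a reading aid for the hyp_exit_handlers table above, a minimal
sketch of how such an EC-indexed table is typically consulted on a
guest exit. The helper below is hypothetical (the series' actual
dispatch logic is not part of this hunk), but it only uses accessors
that exist in the arm64 KVM headers:

static exit_handle_fn get_pvm_exit_handler(struct kvm_vcpu *vcpu)
{
	/* ESR_ELx_EC() extracts the exception class used as the index. */
	u8 ec = ESR_ELx_EC(kvm_vcpu_get_esr(vcpu));

	return hyp_exit_handlers[ec];
}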

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 126+ messages in thread

* [PATCH v3 14/15] KVM: arm64: Handle protected guests at 32 bits
  2021-07-19 16:03 ` Fuad Tabba
  (?)
@ 2021-07-19 16:03   ` Fuad Tabba
  -1 siblings, 0 replies; 126+ messages in thread
From: Fuad Tabba @ 2021-07-19 16:03 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team, tabba

Protected KVM does not support protected AArch32 guests. However,
it is possible for the guest to force itself into AArch32,
potentially causing problems. Add an extra check so that if the
hypervisor catches the guest doing that, it can prevent the guest
from running again by resetting vcpu->arch.target and returning
ARM_EXCEPTION_IL.

Adapted from commit 22f553842b14 ("KVM: arm64: Handle Asymmetric
AArch32 systems")
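
For readability, the new condition added below can be thought of as
the following predicate, a sketch using the names from the patch and
ignoring the is_nvhe_hyp_code() compile-time guard (the series does
not actually split it out into a helper):

static bool pvm_caught_in_aarch32(struct kvm_vcpu *vcpu)
{
	u64 allow = FIELD_GET(FEATURE(ID_AA64PFR0_EL0), PVM_ID_AA64PFR0_ALLOW);

	/* True when the fixed config forbids AArch32 but the vcpu is in it. */
	return kvm_vm_is_protected(kern_hyp_va(vcpu->kvm)) &&
	       allow < ID_AA64PFR0_ELx_32BIT_64BIT &&
	       vcpu_mode_is_32bit(vcpu);
}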

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kvm/hyp/include/hyp/switch.h | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 8431f1514280..f09343e15a80 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -23,6 +23,7 @@
 #include <asm/kprobes.h>
 #include <asm/kvm_asm.h>
 #include <asm/kvm_emulate.h>
+#include <asm/kvm_fixed_config.h>
 #include <asm/kvm_hyp.h>
 #include <asm/kvm_mmu.h>
 #include <asm/fpsimd.h>
@@ -477,6 +478,29 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 			write_sysreg_el2(read_sysreg_el2(SYS_ELR) - 4, SYS_ELR);
 	}
 
+	/*
+	 * Protected VMs might not be allowed to run in AArch32. The check below
+	 * is based on the one in kvm_arch_vcpu_ioctl_run().
+	 * The ARMv8 architecture doesn't give the hypervisor a mechanism to
+	 * prevent a guest from dropping to AArch32 EL0 if implemented by the
+	 * CPU. If the hypervisor spots a guest in such a state ensure it is
+	 * handled, and don't trust the host to spot or fix it.
+	 */
+	if (unlikely(is_nvhe_hyp_code() &&
+		     kvm_vm_is_protected(kern_hyp_va(vcpu->kvm)) &&
+		     FIELD_GET(FEATURE(ID_AA64PFR0_EL0),
+			       PVM_ID_AA64PFR0_ALLOW) <
+			     ID_AA64PFR0_ELx_32BIT_64BIT &&
+		     vcpu_mode_is_32bit(vcpu))) {
+		/*
+		 * As we have caught the guest red-handed, decide that it isn't
+		 * fit for purpose anymore by making the vcpu invalid.
+		 */
+		vcpu->arch.target = -1;
+		*exit_code = ARM_EXCEPTION_IL;
+		goto exit;
+	}
+
 	/*
 	 * We're using the raw exception code in order to only process
 	 * the trap if no SError is pending. We will come back to the
-- 
2.32.0.402.g57bb445576-goog
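
On the host side, an invalidated vcpu is then refused entry to the
guest. A simplified sketch of the existing check in
arch/arm64/kvm/arm.c that this relies on:

static int kvm_vcpu_initialized(struct kvm_vcpu *vcpu)
{
	return vcpu->arch.target >= 0;
}

/* kvm_arch_vcpu_ioctl_run() fails with -ENOEXEC when this is false. */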


^ permalink raw reply related	[flat|nested] 126+ messages in thread

* [PATCH v3 15/15] KVM: arm64: Restrict protected VM capabilities
  2021-07-19 16:03 ` Fuad Tabba
  (?)
@ 2021-07-19 16:03   ` Fuad Tabba
  -1 siblings, 0 replies; 126+ messages in thread
From: Fuad Tabba @ 2021-07-19 16:03 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team, tabba

Restrict protected VM capabilities based on the fixed
configuration for protected VMs.

No functional change intended in current KVM-supported modes
(nVHE, VHE).
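
From userspace, the effect is observable through KVM_CHECK_EXTENSION
on the VM file descriptor. A minimal VMM-side sketch (vm_fd is
assumed to refer to a protected VM):

#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Returns 0 for a protected VM whose fixed config disallows AArch32 EL1. */
static int vm_has_el1_32bit(int vm_fd)
{
	return ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_ARM_EL1_32BIT);
}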

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/kvm_fixed_config.h | 10 ++++
 arch/arm64/kvm/arm.c                      | 63 ++++++++++++++++++++++-
 arch/arm64/kvm/pkvm.c                     | 30 +++++++++++
 3 files changed, 102 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kvm_fixed_config.h b/arch/arm64/include/asm/kvm_fixed_config.h
index b39a5de2c4b9..14310b035bf7 100644
--- a/arch/arm64/include/asm/kvm_fixed_config.h
+++ b/arch/arm64/include/asm/kvm_fixed_config.h
@@ -175,4 +175,14 @@
  */
 #define PVM_ID_AA64ISAR1_ALLOW (~0ULL)
 
+/*
+ * Returns the maximum number of breakpoints supported for protected VMs.
+ */
+int kvm_arm_pkvm_get_max_brps(void);
+
+/*
+ * Returns the maximum number of watchpoints supported for protected VMs.
+ */
+int kvm_arm_pkvm_get_max_wrps(void);
+
 #endif /* __ARM64_KVM_FIXED_CONFIG_H__ */
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 3f28549aff0d..bc41e3b71fab 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -34,6 +34,7 @@
 #include <asm/virt.h>
 #include <asm/kvm_arm.h>
 #include <asm/kvm_asm.h>
+#include <asm/kvm_fixed_config.h>
 #include <asm/kvm_mmu.h>
 #include <asm/kvm_emulate.h>
 #include <asm/sections.h>
@@ -188,9 +189,10 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
 	atomic_set(&kvm->online_vcpus, 0);
 }
 
-int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
+static int kvm_check_extension(struct kvm *kvm, long ext)
 {
 	int r;
+
 	switch (ext) {
 	case KVM_CAP_IRQCHIP:
 		r = vgic_present;
@@ -281,6 +283,65 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	return r;
 }
 
+static int pkvm_check_extension(struct kvm *kvm, long ext, int kvm_cap)
+{
+	int r;
+
+	switch (ext) {
+	case KVM_CAP_ARM_PSCI:
+	case KVM_CAP_ARM_PSCI_0_2:
+	case KVM_CAP_NR_VCPUS:
+	case KVM_CAP_MAX_VCPUS:
+	case KVM_CAP_MAX_VCPU_ID:
+		r = kvm_cap;
+		break;
+	case KVM_CAP_ARM_EL1_32BIT:
+		r = kvm_cap &&
+		    (FIELD_GET(FEATURE(ID_AA64PFR0_EL1), PVM_ID_AA64PFR0_ALLOW) >=
+		     ID_AA64PFR0_ELx_32BIT_64BIT);
+		break;
+	case KVM_CAP_GUEST_DEBUG_HW_BPS:
+		r = min(kvm_cap, kvm_arm_pkvm_get_max_brps());
+		break;
+	case KVM_CAP_GUEST_DEBUG_HW_WPS:
+		r = min(kvm_cap, kvm_arm_pkvm_get_max_wrps());
+		break;
+	case KVM_CAP_ARM_PMU_V3:
+		r = kvm_cap &&
+		    FIELD_GET(FEATURE(ID_AA64DFR0_PMUVER), PVM_ID_AA64DFR0_ALLOW);
+		break;
+	case KVM_CAP_ARM_SVE:
+		r = kvm_cap &&
+		    FIELD_GET(FEATURE(ID_AA64PFR0_SVE), PVM_ID_AA64PFR0_ALLOW);
+		break;
+	case KVM_CAP_ARM_PTRAUTH_ADDRESS:
+		r = kvm_cap &&
+		    FIELD_GET(FEATURE(ID_AA64ISAR1_API), PVM_ID_AA64ISAR1_ALLOW) &&
+		    FIELD_GET(FEATURE(ID_AA64ISAR1_APA), PVM_ID_AA64ISAR1_ALLOW);
+		break;
+	case KVM_CAP_ARM_PTRAUTH_GENERIC:
+		r = kvm_cap &&
+		    FIELD_GET(FEATURE(ID_AA64ISAR1_GPI), PVM_ID_AA64ISAR1_ALLOW) &&
+		    FIELD_GET(FEATURE(ID_AA64ISAR1_GPA), PVM_ID_AA64ISAR1_ALLOW);
+		break;
+	default:
+		r = 0;
+		break;
+	}
+
+	return r;
+}
+
+int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
+{
+	int r = kvm_check_extension(kvm, ext);
+
+	if (unlikely(kvm && kvm_vm_is_protected(kvm)))
+		r = pkvm_check_extension(kvm, ext, r);
+
+	return r;
+}
+
 long kvm_arch_dev_ioctl(struct file *filp,
 			unsigned int ioctl, unsigned long arg)
 {
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index b8430b3d97af..d41553594d08 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -181,3 +181,33 @@ void kvm_init_protected_traps(struct kvm_vcpu *vcpu)
 	pvm_init_traps_aa64mmfr0(vcpu);
 	pvm_init_traps_aa64mmfr1(vcpu);
 }
+
+int kvm_arm_pkvm_get_max_brps(void)
+{
+	int num = FIELD_GET(FEATURE(ID_AA64DFR0_BRPS), PVM_ID_AA64DFR0_ALLOW);
+
+	/*
+	 * If breakpoints are supported, the maximum number is 1 + the field.
+	 * Otherwise, return 0, which is not compliant with the architecture,
+	 * but is reserved and is used here to indicate no debug support.
+	 */
+	if (num)
+		return 1 + num;
+	else
+		return 0;
+}
+
+int kvm_arm_pkvm_get_max_wrps(void)
+{
+	int num = FIELD_GET(FEATURE(ID_AA64DFR0_WRPS), PVM_ID_AA64DFR0_ALLOW);
+
+	/*
+	 * If watchpoints are supported, the maximum number is 1 + the field.
+	 * Otherwise, return 0, which is not compliant with the architecture,
+	 * but is reserved and is used here to indicate no debug support.
+	 */
+	if (num)
+		return 1 + num;
+	else
+		return 0;
+}
-- 
2.32.0.402.g57bb445576-goog


^ permalink raw reply related	[flat|nested] 126+ messages in thread

* Re: [PATCH v3 14/15] KVM: arm64: Handle protected guests at 32 bits
  2021-07-19 16:03   ` Fuad Tabba
  (?)
@ 2021-07-19 19:43     ` Oliver Upton
  -1 siblings, 0 replies; 126+ messages in thread
From: Oliver Upton @ 2021-07-19 19:43 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: kvmarm, kernel-team, kvm, maz, pbonzini, will, linux-arm-kernel

On Mon, Jul 19, 2021 at 9:04 AM Fuad Tabba <tabba@google.com> wrote:
>
> Protected KVM does not support protected AArch32 guests. However,
> it is possible for the guest to force itself into AArch32,
> potentially causing problems. Add an extra check so that if the
> hypervisor catches the guest doing that, it can prevent the guest
> from running again by resetting vcpu->arch.target and returning
> ARM_EXCEPTION_IL.
>
> Adapted from commit 22f553842b14 ("KVM: arm64: Handle Asymmetric
> AArch32 systems")
>
> Signed-off-by: Fuad Tabba <tabba@google.com>

Would it make sense to document how we handle misbehaved guests, in
case a particular VMM wants to clean up the mess afterwards?

--
Thanks,
Oliver

> ---
>  arch/arm64/kvm/hyp/include/hyp/switch.h | 24 ++++++++++++++++++++++++
>  1 file changed, 24 insertions(+)
>
> diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
> index 8431f1514280..f09343e15a80 100644
> --- a/arch/arm64/kvm/hyp/include/hyp/switch.h
> +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
> @@ -23,6 +23,7 @@
>  #include <asm/kprobes.h>
>  #include <asm/kvm_asm.h>
>  #include <asm/kvm_emulate.h>
> +#include <asm/kvm_fixed_config.h>
>  #include <asm/kvm_hyp.h>
>  #include <asm/kvm_mmu.h>
>  #include <asm/fpsimd.h>
> @@ -477,6 +478,29 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
>                         write_sysreg_el2(read_sysreg_el2(SYS_ELR) - 4, SYS_ELR);
>         }
>
> +       /*
> +        * Protected VMs might not be allowed to run in AArch32. The check below
> +        * is based on the one in kvm_arch_vcpu_ioctl_run().
> +        * The ARMv8 architecture doesn't give the hypervisor a mechanism to
> +        * prevent a guest from dropping to AArch32 EL0 if implemented by the
> +        * CPU. If the hypervisor spots a guest in such a state ensure it is
> +        * handled, and don't trust the host to spot or fix it.
> +        */
> +       if (unlikely(is_nvhe_hyp_code() &&
> +                    kvm_vm_is_protected(kern_hyp_va(vcpu->kvm)) &&
> +                    FIELD_GET(FEATURE(ID_AA64PFR0_EL0),
> +                              PVM_ID_AA64PFR0_ALLOW) <
> +                            ID_AA64PFR0_ELx_32BIT_64BIT &&
> +                    vcpu_mode_is_32bit(vcpu))) {
> +               /*
> +                * As we have caught the guest red-handed, decide that it isn't
> +                * fit for purpose anymore by making the vcpu invalid.
> +                */
> +               vcpu->arch.target = -1;
> +               *exit_code = ARM_EXCEPTION_IL;
> +               goto exit;
> +       }
> +
>         /*
>          * We're using the raw exception code in order to only process
>          * the trap if no SError is pending. We will come back to the
> --
> 2.32.0.402.g57bb445576-goog
>
> _______________________________________________
> kvmarm mailing list
> kvmarm@lists.cs.columbia.edu
> https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 126+ messages in thread

* Re: [PATCH v3 05/15] KVM: arm64: Refactor sys_regs.h,c for nVHE reuse
  2021-07-19 16:03   ` Fuad Tabba
  (?)
@ 2021-07-20 13:38     ` Andrew Jones
  -1 siblings, 0 replies; 126+ messages in thread
From: Andrew Jones @ 2021-07-20 13:38 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: kvmarm, maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, qperret, kvm,
	linux-arm-kernel, kernel-team

On Mon, Jul 19, 2021 at 05:03:36PM +0100, Fuad Tabba wrote:
> Refactor sys_regs.h and sys_regs.c to make it easier to reuse
> common code. It will be used in nVHE in a later patch.
> 
> Note that the refactored code uses __inline_bsearch for find_reg
> instead of bsearch to avoid copying the bsearch code for nVHE.
> 
> No functional change intended.
> 
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
>  arch/arm64/include/asm/sysreg.h |  3 +++
>  arch/arm64/kvm/sys_regs.c       | 30 +-----------------------------
>  arch/arm64/kvm/sys_regs.h       | 31 +++++++++++++++++++++++++++++++
>  3 files changed, 35 insertions(+), 29 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> index 7b9c3acba684..326f49e7bd42 100644
> --- a/arch/arm64/include/asm/sysreg.h
> +++ b/arch/arm64/include/asm/sysreg.h
> @@ -1153,6 +1153,9 @@
>  #define ICH_VTR_A3V_SHIFT	21
>  #define ICH_VTR_A3V_MASK	(1 << ICH_VTR_A3V_SHIFT)
>  
> +/* Extract the feature specified from the feature id register. */
> +#define FEATURE(x)	(GENMASK_ULL(x##_SHIFT + 3, x##_SHIFT))

I think the comment would be better as

 Create a mask for the feature bits of the specified feature.

And, I think a more specific name than FEATURE would be better. Maybe
FEATURE_MASK or even ARM64_FEATURE_MASK?
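
For reference, a worked expansion of the macro under discussion,
taking the SVE field as an example (shift value as defined in the
arm64 sysreg headers, where ID_AA64PFR0_SVE_SHIFT is 32):

/*
 * FEATURE(ID_AA64PFR0_SVE)
 *   = GENMASK_ULL(ID_AA64PFR0_SVE_SHIFT + 3, ID_AA64PFR0_SVE_SHIFT)
 *   = GENMASK_ULL(35, 32)
 *   = 0x0000000f00000000ULL, i.e. the 4-bit SVE field of ID_AA64PFR0_EL1
 */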

> +
>  #ifdef __ASSEMBLY__
>  
>  	.irp	num,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 80a6e41cadad..1a939c464858 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -44,10 +44,6 @@
>   * 64bit interface.
>   */
>  
> -#define reg_to_encoding(x)						\
> -	sys_reg((u32)(x)->Op0, (u32)(x)->Op1,				\
> -		(u32)(x)->CRn, (u32)(x)->CRm, (u32)(x)->Op2)
> -
>  static bool read_from_write_only(struct kvm_vcpu *vcpu,
>  				 struct sys_reg_params *params,
>  				 const struct sys_reg_desc *r)
> @@ -1026,8 +1022,6 @@ static bool access_arch_timer(struct kvm_vcpu *vcpu,
>  	return true;
>  }
>  
> -#define FEATURE(x)	(GENMASK_ULL(x##_SHIFT + 3, x##_SHIFT))
> -
>  /* Read a sanitised cpufeature ID register by sys_reg_desc */
>  static u64 read_id_reg(const struct kvm_vcpu *vcpu,
>  		struct sys_reg_desc const *r, bool raz)
> @@ -2106,23 +2100,6 @@ static int check_sysreg_table(const struct sys_reg_desc *table, unsigned int n,
>  	return 0;
>  }
>  
> -static int match_sys_reg(const void *key, const void *elt)
> -{
> -	const unsigned long pval = (unsigned long)key;
> -	const struct sys_reg_desc *r = elt;
> -
> -	return pval - reg_to_encoding(r);
> -}
> -
> -static const struct sys_reg_desc *find_reg(const struct sys_reg_params *params,
> -					 const struct sys_reg_desc table[],
> -					 unsigned int num)
> -{
> -	unsigned long pval = reg_to_encoding(params);
> -
> -	return bsearch((void *)pval, table, num, sizeof(table[0]), match_sys_reg);
> -}
> -
>  int kvm_handle_cp14_load_store(struct kvm_vcpu *vcpu)
>  {
>  	kvm_inject_undefined(vcpu);
> @@ -2365,13 +2342,8 @@ int kvm_handle_sys_reg(struct kvm_vcpu *vcpu)
>  
>  	trace_kvm_handle_sys_reg(esr);
>  
> -	params.Op0 = (esr >> 20) & 3;
> -	params.Op1 = (esr >> 14) & 0x7;
> -	params.CRn = (esr >> 10) & 0xf;
> -	params.CRm = (esr >> 1) & 0xf;
> -	params.Op2 = (esr >> 17) & 0x7;
> +	params = esr_sys64_to_params(esr);
>  	params.regval = vcpu_get_reg(vcpu, Rt);
> -	params.is_write = !(esr & 1);
>  
>  	ret = emulate_sys_reg(vcpu, &params);
>  
> diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h
> index 9d0621417c2a..cc0cc95a0280 100644
> --- a/arch/arm64/kvm/sys_regs.h
> +++ b/arch/arm64/kvm/sys_regs.h
> @@ -11,6 +11,12 @@
>  #ifndef __ARM64_KVM_SYS_REGS_LOCAL_H__
>  #define __ARM64_KVM_SYS_REGS_LOCAL_H__
>  
> +#include <linux/bsearch.h>
> +
> +#define reg_to_encoding(x)						\
> +	sys_reg((u32)(x)->Op0, (u32)(x)->Op1,				\
> +		(u32)(x)->CRn, (u32)(x)->CRm, (u32)(x)->Op2)
> +
>  struct sys_reg_params {
>  	u8	Op0;
>  	u8	Op1;
> @@ -21,6 +27,14 @@ struct sys_reg_params {
>  	bool	is_write;
>  };
>  
> +#define esr_sys64_to_params(esr)                                               \
> +	((struct sys_reg_params){ .Op0 = ((esr) >> 20) & 3,                    \
> +				  .Op1 = ((esr) >> 14) & 0x7,                  \
> +				  .CRn = ((esr) >> 10) & 0xf,                  \
> +				  .CRm = ((esr) >> 1) & 0xf,                   \
> +				  .Op2 = ((esr) >> 17) & 0x7,                  \
> +				  .is_write = !((esr) & 1) })
> +
>  struct sys_reg_desc {
>  	/* Sysreg string for debug */
>  	const char *name;
> @@ -152,6 +166,23 @@ static inline int cmp_sys_reg(const struct sys_reg_desc *i1,
>  	return i1->Op2 - i2->Op2;
>  }
>  
> +static inline int match_sys_reg(const void *key, const void *elt)
> +{
> +	const unsigned long pval = (unsigned long)key;
> +	const struct sys_reg_desc *r = elt;
> +
> +	return pval - reg_to_encoding(r);
> +}
> +
> +static inline const struct sys_reg_desc *
> +find_reg(const struct sys_reg_params *params, const struct sys_reg_desc table[],
> +	 unsigned int num)
> +{
> +	unsigned long pval = reg_to_encoding(params);
> +
> +	return __inline_bsearch((void *)pval, table, num, sizeof(table[0]), match_sys_reg);
> +}
> +
>  const struct sys_reg_desc *find_reg_by_id(u64 id,
>  					  struct sys_reg_params *params,
>  					  const struct sys_reg_desc table[],
> -- 
> 2.32.0.402.g57bb445576-goog
>

Otherwise

Reviewed-by: Andrew Jones <drjones@redhat.com>


^ permalink raw reply	[flat|nested] 126+ messages in thread

* Re: [PATCH v3 05/15] KVM: arm64: Refactor sys_regs.h,c for nVHE reuse
  2021-07-20 13:38     ` Andrew Jones
@ 2021-07-20 14:03       ` Fuad Tabba
  -1 siblings, 0 replies; 126+ messages in thread
From: Fuad Tabba @ 2021-07-20 14:03 UTC (permalink / raw)
  To: Andrew Jones
  Cc: kvmarm, maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, qperret, kvm,
	linux-arm-kernel, kernel-team

Hi,

On Tue, Jul 20, 2021 at 2:38 PM Andrew Jones <drjones@redhat.com> wrote:
>
> On Mon, Jul 19, 2021 at 05:03:36PM +0100, Fuad Tabba wrote:
> > Refactor sys_regs.h and sys_regs.c to make it easier to reuse
> > common code. It will be used in nVHE in a later patch.
> >
> > Note that the refactored code uses __inline_bsearch for find_reg
> > instead of bsearch to avoid copying the bsearch code for nVHE.
> >
> > No functional change intended.
> >
> > Signed-off-by: Fuad Tabba <tabba@google.com>
> > ---
> >  arch/arm64/include/asm/sysreg.h |  3 +++
> >  arch/arm64/kvm/sys_regs.c       | 30 +-----------------------------
> >  arch/arm64/kvm/sys_regs.h       | 31 +++++++++++++++++++++++++++++++
> >  3 files changed, 35 insertions(+), 29 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> > index 7b9c3acba684..326f49e7bd42 100644
> > --- a/arch/arm64/include/asm/sysreg.h
> > +++ b/arch/arm64/include/asm/sysreg.h
> > @@ -1153,6 +1153,9 @@
> >  #define ICH_VTR_A3V_SHIFT    21
> >  #define ICH_VTR_A3V_MASK     (1 << ICH_VTR_A3V_SHIFT)
> >
> > +/* Extract the feature specified from the feature id register. */
> > +#define FEATURE(x)   (GENMASK_ULL(x##_SHIFT + 3, x##_SHIFT))
>
> I think the comment would be better as
>
>  Create a mask for the feature bits of the specified feature.

I agree. I'll use this instead.

> And, I think a more specific name than FEATURE would be better. Maybe
> FEATURE_MASK or even ARM64_FEATURE_MASK ?

I think so too. ARM64_FEATURE_MASK is more descriptive than just FEATURE.
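
Something like this minimal sketch, with the comment Drew suggested
(ID_AA64PFR0_FP below is used purely as an example field, not part of
the patch):

  /* Create a mask for the feature bits of the specified feature. */
  #define ARM64_FEATURE_MASK(x)	(GENMASK_ULL(x##_SHIFT + 3, x##_SHIFT))

  /* e.g., a mask covering the 4-bit FP field of ID_AA64PFR0_EL1 */
  u64 fp_mask = ARM64_FEATURE_MASK(ID_AA64PFR0_FP);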

Thanks,
/fuad

> > +
> >  #ifdef __ASSEMBLY__
> >
> >       .irp    num,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30
> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > index 80a6e41cadad..1a939c464858 100644
> > --- a/arch/arm64/kvm/sys_regs.c
> > +++ b/arch/arm64/kvm/sys_regs.c
> > @@ -44,10 +44,6 @@
> >   * 64bit interface.
> >   */
> >
> > -#define reg_to_encoding(x)                                           \
> > -     sys_reg((u32)(x)->Op0, (u32)(x)->Op1,                           \
> > -             (u32)(x)->CRn, (u32)(x)->CRm, (u32)(x)->Op2)
> > -
> >  static bool read_from_write_only(struct kvm_vcpu *vcpu,
> >                                struct sys_reg_params *params,
> >                                const struct sys_reg_desc *r)
> > @@ -1026,8 +1022,6 @@ static bool access_arch_timer(struct kvm_vcpu *vcpu,
> >       return true;
> >  }
> >
> > -#define FEATURE(x)   (GENMASK_ULL(x##_SHIFT + 3, x##_SHIFT))
> > -
> >  /* Read a sanitised cpufeature ID register by sys_reg_desc */
> >  static u64 read_id_reg(const struct kvm_vcpu *vcpu,
> >               struct sys_reg_desc const *r, bool raz)
> > @@ -2106,23 +2100,6 @@ static int check_sysreg_table(const struct sys_reg_desc *table, unsigned int n,
> >       return 0;
> >  }
> >
> > -static int match_sys_reg(const void *key, const void *elt)
> > -{
> > -     const unsigned long pval = (unsigned long)key;
> > -     const struct sys_reg_desc *r = elt;
> > -
> > -     return pval - reg_to_encoding(r);
> > -}
> > -
> > -static const struct sys_reg_desc *find_reg(const struct sys_reg_params *params,
> > -                                      const struct sys_reg_desc table[],
> > -                                      unsigned int num)
> > -{
> > -     unsigned long pval = reg_to_encoding(params);
> > -
> > -     return bsearch((void *)pval, table, num, sizeof(table[0]), match_sys_reg);
> > -}
> > -
> >  int kvm_handle_cp14_load_store(struct kvm_vcpu *vcpu)
> >  {
> >       kvm_inject_undefined(vcpu);
> > @@ -2365,13 +2342,8 @@ int kvm_handle_sys_reg(struct kvm_vcpu *vcpu)
> >
> >       trace_kvm_handle_sys_reg(esr);
> >
> > -     params.Op0 = (esr >> 20) & 3;
> > -     params.Op1 = (esr >> 14) & 0x7;
> > -     params.CRn = (esr >> 10) & 0xf;
> > -     params.CRm = (esr >> 1) & 0xf;
> > -     params.Op2 = (esr >> 17) & 0x7;
> > +     params = esr_sys64_to_params(esr);
> >       params.regval = vcpu_get_reg(vcpu, Rt);
> > -     params.is_write = !(esr & 1);
> >
> >       ret = emulate_sys_reg(vcpu, &params);
> >
> > diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h
> > index 9d0621417c2a..cc0cc95a0280 100644
> > --- a/arch/arm64/kvm/sys_regs.h
> > +++ b/arch/arm64/kvm/sys_regs.h
> > @@ -11,6 +11,12 @@
> >  #ifndef __ARM64_KVM_SYS_REGS_LOCAL_H__
> >  #define __ARM64_KVM_SYS_REGS_LOCAL_H__
> >
> > +#include <linux/bsearch.h>
> > +
> > +#define reg_to_encoding(x)                                           \
> > +     sys_reg((u32)(x)->Op0, (u32)(x)->Op1,                           \
> > +             (u32)(x)->CRn, (u32)(x)->CRm, (u32)(x)->Op2)
> > +
> >  struct sys_reg_params {
> >       u8      Op0;
> >       u8      Op1;
> > @@ -21,6 +27,14 @@ struct sys_reg_params {
> >       bool    is_write;
> >  };
> >
> > +#define esr_sys64_to_params(esr)                                               \
> > +     ((struct sys_reg_params){ .Op0 = ((esr) >> 20) & 3,                    \
> > +                               .Op1 = ((esr) >> 14) & 0x7,                  \
> > +                               .CRn = ((esr) >> 10) & 0xf,                  \
> > +                               .CRm = ((esr) >> 1) & 0xf,                   \
> > +                               .Op2 = ((esr) >> 17) & 0x7,                  \
> > +                               .is_write = !((esr) & 1) })
> > +
> >  struct sys_reg_desc {
> >       /* Sysreg string for debug */
> >       const char *name;
> > @@ -152,6 +166,23 @@ static inline int cmp_sys_reg(const struct sys_reg_desc *i1,
> >       return i1->Op2 - i2->Op2;
> >  }
> >
> > +static inline int match_sys_reg(const void *key, const void *elt)
> > +{
> > +     const unsigned long pval = (unsigned long)key;
> > +     const struct sys_reg_desc *r = elt;
> > +
> > +     return pval - reg_to_encoding(r);
> > +}
> > +
> > +static inline const struct sys_reg_desc *
> > +find_reg(const struct sys_reg_params *params, const struct sys_reg_desc table[],
> > +      unsigned int num)
> > +{
> > +     unsigned long pval = reg_to_encoding(params);
> > +
> > +     return __inline_bsearch((void *)pval, table, num, sizeof(table[0]), match_sys_reg);
> > +}
> > +
> >  const struct sys_reg_desc *find_reg_by_id(u64 id,
> >                                         struct sys_reg_params *params,
> >                                         const struct sys_reg_desc table[],
> > --
> > 2.32.0.402.g57bb445576-goog
> >
>
> Otherwise
>
> Reviewed-by: Andrew Jones <drjones@redhat.com>
>
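
As additional context, this is roughly how I expect the shared helpers
to be reused from the nVHE hyp code in the later patch. A sketch only:
the handler and the pvm_sys_reg_descs table are made-up names for
illustration, not code from this series.

  /* Hypothetical nVHE-side trap handler built on the shared helpers. */
  static bool pvm_handle_sys_reg(struct kvm_vcpu *vcpu, u64 esr)
  {
  	struct sys_reg_params params = esr_sys64_to_params(esr);
  	const struct sys_reg_desc *r;

  	/* Binary search over a sorted, illustrative descriptor table. */
  	r = find_reg(&params, pvm_sys_reg_descs,
  		     ARRAY_SIZE(pvm_sys_reg_descs));
  	if (!r)
  		return false;	/* defer to the host handler */

  	return r->access(vcpu, &params, r);
  }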

* Re: [PATCH v3 06/15] KVM: arm64: Restore mdcr_el2 from vcpu
  2021-07-19 16:03   ` Fuad Tabba
@ 2021-07-20 14:52     ` Andrew Jones
  -1 siblings, 0 replies; 126+ messages in thread
From: Andrew Jones @ 2021-07-20 14:52 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: kvmarm, maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, qperret, kvm,
	linux-arm-kernel, kernel-team

On Mon, Jul 19, 2021 at 05:03:37PM +0100, Fuad Tabba wrote:
> On deactivating traps, restore the value of mdcr_el2 from the
> newly created and preserved host value in the vcpu context, rather
> than directly reading the hardware register.
> 
> Up until and including this patch the two values are the same,
> i.e., the hardware register and the vcpu one. A future patch will
> be changing the value of mdcr_el2 on activating traps, and this
> ensures that its value will be restored.
> 
> No functional change intended.

I'm probably missing something, but I can't convince myself that the host
will end up with the same mdcr_el2 value after deactivating traps after
this patch as before. We clearly now restore whatever we had when
activating traps (presumably whatever we configured at init_el2_state
time), but is that equivalent to what we had before with the masking and
ORing that this patch drops?
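
Concretely, the two host-side sequences I'm comparing on the nVHE path,
lifted from the hunks quoted below (a sketch, not patch text):

  /* before: host mdcr_el2 rebuilt on every __deactivate_traps() */
  mdcr_el2 = read_sysreg(mdcr_el2);
  mdcr_el2 &= MDCR_EL2_HPMN_MASK;
  mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT;
  mdcr_el2 |= MDCR_EL2_E2TB_MASK << MDCR_EL2_E2TB_SHIFT;
  write_sysreg(mdcr_el2, mdcr_el2);

  /* after: whatever was live at __activate_traps_common() time */
  vcpu->arch.mdcr_el2_host = read_sysreg(mdcr_el2);
  /* ... guest runs ... */
  write_sysreg(vcpu->arch.mdcr_el2_host, mdcr_el2);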

Thanks,
drew

> 
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
>  arch/arm64/include/asm/kvm_host.h       |  5 ++++-
>  arch/arm64/include/asm/kvm_hyp.h        |  2 +-
>  arch/arm64/kvm/hyp/include/hyp/switch.h |  6 +++++-
>  arch/arm64/kvm/hyp/nvhe/switch.c        | 11 ++---------
>  arch/arm64/kvm/hyp/vhe/switch.c         | 12 ++----------
>  arch/arm64/kvm/hyp/vhe/sysreg-sr.c      |  2 +-
>  6 files changed, 15 insertions(+), 23 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 4d2d974c1522..76462c6a91ee 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -287,10 +287,13 @@ struct kvm_vcpu_arch {
>  	/* Stage 2 paging state used by the hardware on next switch */
>  	struct kvm_s2_mmu *hw_mmu;
>  
> -	/* HYP configuration */
> +	/* Values of trap registers for the guest. */
>  	u64 hcr_el2;
>  	u64 mdcr_el2;
>  
> +	/* Values of trap registers for the host before guest entry. */
> +	u64 mdcr_el2_host;
> +
>  	/* Exception Information */
>  	struct kvm_vcpu_fault_info fault;
>  
> diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
> index 9d60b3006efc..657d0c94cf82 100644
> --- a/arch/arm64/include/asm/kvm_hyp.h
> +++ b/arch/arm64/include/asm/kvm_hyp.h
> @@ -95,7 +95,7 @@ void __sve_restore_state(void *sve_pffr, u32 *fpsr);
>  
>  #ifndef __KVM_NVHE_HYPERVISOR__
>  void activate_traps_vhe_load(struct kvm_vcpu *vcpu);
> -void deactivate_traps_vhe_put(void);
> +void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu);
>  #endif
>  
>  u64 __guest_enter(struct kvm_vcpu *vcpu);
> diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
> index e4a2f295a394..a0e78a6027be 100644
> --- a/arch/arm64/kvm/hyp/include/hyp/switch.h
> +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
> @@ -92,11 +92,15 @@ static inline void __activate_traps_common(struct kvm_vcpu *vcpu)
>  		write_sysreg(0, pmselr_el0);
>  		write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0);
>  	}
> +
> +	vcpu->arch.mdcr_el2_host = read_sysreg(mdcr_el2);
>  	write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
>  }
>  
> -static inline void __deactivate_traps_common(void)
> +static inline void __deactivate_traps_common(struct kvm_vcpu *vcpu)
>  {
> +	write_sysreg(vcpu->arch.mdcr_el2_host, mdcr_el2);
> +
>  	write_sysreg(0, hstr_el2);
>  	if (kvm_arm_support_pmu_v3())
>  		write_sysreg(0, pmuserenr_el0);
> diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
> index f7af9688c1f7..1778593a08a9 100644
> --- a/arch/arm64/kvm/hyp/nvhe/switch.c
> +++ b/arch/arm64/kvm/hyp/nvhe/switch.c
> @@ -69,12 +69,10 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
>  static void __deactivate_traps(struct kvm_vcpu *vcpu)
>  {
>  	extern char __kvm_hyp_host_vector[];
> -	u64 mdcr_el2, cptr;
> +	u64 cptr;
>  
>  	___deactivate_traps(vcpu);
>  
> -	mdcr_el2 = read_sysreg(mdcr_el2);
> -
>  	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
>  		u64 val;
>  
> @@ -92,13 +90,8 @@ static void __deactivate_traps(struct kvm_vcpu *vcpu)
>  		isb();
>  	}
>  
> -	__deactivate_traps_common();
> -
> -	mdcr_el2 &= MDCR_EL2_HPMN_MASK;
> -	mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT;
> -	mdcr_el2 |= MDCR_EL2_E2TB_MASK << MDCR_EL2_E2TB_SHIFT;
> +	__deactivate_traps_common(vcpu);
>  
> -	write_sysreg(mdcr_el2, mdcr_el2);
>  	write_sysreg(this_cpu_ptr(&kvm_init_params)->hcr_el2, hcr_el2);
>  
>  	cptr = CPTR_EL2_DEFAULT;
> diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
> index b3229924d243..0d0c9550fb08 100644
> --- a/arch/arm64/kvm/hyp/vhe/switch.c
> +++ b/arch/arm64/kvm/hyp/vhe/switch.c
> @@ -91,17 +91,9 @@ void activate_traps_vhe_load(struct kvm_vcpu *vcpu)
>  	__activate_traps_common(vcpu);
>  }
>  
> -void deactivate_traps_vhe_put(void)
> +void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu)
>  {
> -	u64 mdcr_el2 = read_sysreg(mdcr_el2);
> -
> -	mdcr_el2 &= MDCR_EL2_HPMN_MASK |
> -		    MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT |
> -		    MDCR_EL2_TPMS;
> -
> -	write_sysreg(mdcr_el2, mdcr_el2);
> -
> -	__deactivate_traps_common();
> +	__deactivate_traps_common(vcpu);
>  }
>  
>  /* Switch to the guest for VHE systems running in EL2 */
> diff --git a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
> index 2a0b8c88d74f..007a12dd4351 100644
> --- a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
> +++ b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
> @@ -101,7 +101,7 @@ void kvm_vcpu_put_sysregs_vhe(struct kvm_vcpu *vcpu)
>  	struct kvm_cpu_context *host_ctxt;
>  
>  	host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
> -	deactivate_traps_vhe_put();
> +	deactivate_traps_vhe_put(vcpu);
>  
>  	__sysreg_save_el1_state(guest_ctxt);
>  	__sysreg_save_user_state(guest_ctxt);
> -- 
> 2.32.0.402.g57bb445576-goog
> 


* Re: [PATCH v3 06/15] KVM: arm64: Restore mdcr_el2 from vcpu
  2021-07-20 14:52     ` Andrew Jones
@ 2021-07-21  7:37       ` Fuad Tabba
  -1 siblings, 0 replies; 126+ messages in thread
From: Fuad Tabba @ 2021-07-21  7:37 UTC (permalink / raw)
  To: Andrew Jones
  Cc: kvmarm, maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, qperret, kvm,
	linux-arm-kernel, kernel-team

Hi Drew,

On Tue, Jul 20, 2021 at 3:53 PM Andrew Jones <drjones@redhat.com> wrote:
>
> On Mon, Jul 19, 2021 at 05:03:37PM +0100, Fuad Tabba wrote:
> > On deactivating traps, restore the value of mdcr_el2 from the
> > newly created and preserved host value in the vcpu context, rather
> > than directly reading the hardware register.
> >
> > Up until and including this patch the two values are the same,
> > i.e., the hardware register and the vcpu one. A future patch will
> > be changing the value of mdcr_el2 on activating traps, and this
> > ensures that its value will be restored.
> >
> > No functional change intended.
>
> I'm probably missing something, but I can't convince myself that the host
> will end up with the same mdcr_el2 value after deactivating traps after
> this patch as before. We clearly now restore whatever we had when
> activating traps (presumably whatever we configured at init_el2_state
> time), but is that equivalent to what we had before with the masking and
> ORing that this patch drops?

You're right. I thought that these were actually being initialized to
the same values, but having a closer look at the code, the mdcr_el2
values are not the same as pre-patch. I will fix this.
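
One possible shape for the fix (a sketch of the direction only, not the
actual follow-up patch): compute the host value explicitly, applying the
same E2PB/E2TB setup the old nVHE deactivation path did, instead of
snapshotting whatever happens to be live:

  static inline u64 host_mdcr_el2(void)
  {
  	u64 mdcr = read_sysreg(mdcr_el2) & MDCR_EL2_HPMN_MASK;

  	mdcr |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT;
  	mdcr |= MDCR_EL2_E2TB_MASK << MDCR_EL2_E2TB_SHIFT;
  	return mdcr;
  }

  /* then, in __activate_traps_common(): */
  vcpu->arch.mdcr_el2_host = host_mdcr_el2();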

Thanks!
/fuad

> Thanks,
> drew
>
> >
> > Signed-off-by: Fuad Tabba <tabba@google.com>
> > ---
> >  arch/arm64/include/asm/kvm_host.h       |  5 ++++-
> >  arch/arm64/include/asm/kvm_hyp.h        |  2 +-
> >  arch/arm64/kvm/hyp/include/hyp/switch.h |  6 +++++-
> >  arch/arm64/kvm/hyp/nvhe/switch.c        | 11 ++---------
> >  arch/arm64/kvm/hyp/vhe/switch.c         | 12 ++----------
> >  arch/arm64/kvm/hyp/vhe/sysreg-sr.c      |  2 +-
> >  6 files changed, 15 insertions(+), 23 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > index 4d2d974c1522..76462c6a91ee 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -287,10 +287,13 @@ struct kvm_vcpu_arch {
> >       /* Stage 2 paging state used by the hardware on next switch */
> >       struct kvm_s2_mmu *hw_mmu;
> >
> > -     /* HYP configuration */
> > +     /* Values of trap registers for the guest. */
> >       u64 hcr_el2;
> >       u64 mdcr_el2;
> >
> > +     /* Values of trap registers for the host before guest entry. */
> > +     u64 mdcr_el2_host;
> > +
> >       /* Exception Information */
> >       struct kvm_vcpu_fault_info fault;
> >
> > diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
> > index 9d60b3006efc..657d0c94cf82 100644
> > --- a/arch/arm64/include/asm/kvm_hyp.h
> > +++ b/arch/arm64/include/asm/kvm_hyp.h
> > @@ -95,7 +95,7 @@ void __sve_restore_state(void *sve_pffr, u32 *fpsr);
> >
> >  #ifndef __KVM_NVHE_HYPERVISOR__
> >  void activate_traps_vhe_load(struct kvm_vcpu *vcpu);
> > -void deactivate_traps_vhe_put(void);
> > +void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu);
> >  #endif
> >
> >  u64 __guest_enter(struct kvm_vcpu *vcpu);
> > diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
> > index e4a2f295a394..a0e78a6027be 100644
> > --- a/arch/arm64/kvm/hyp/include/hyp/switch.h
> > +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
> > @@ -92,11 +92,15 @@ static inline void __activate_traps_common(struct kvm_vcpu *vcpu)
> >               write_sysreg(0, pmselr_el0);
> >               write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0);
> >       }
> > +
> > +     vcpu->arch.mdcr_el2_host = read_sysreg(mdcr_el2);
> >       write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
> >  }
> >
> > -static inline void __deactivate_traps_common(void)
> > +static inline void __deactivate_traps_common(struct kvm_vcpu *vcpu)
> >  {
> > +     write_sysreg(vcpu->arch.mdcr_el2_host, mdcr_el2);
> > +
> >       write_sysreg(0, hstr_el2);
> >       if (kvm_arm_support_pmu_v3())
> >               write_sysreg(0, pmuserenr_el0);
> > diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
> > index f7af9688c1f7..1778593a08a9 100644
> > --- a/arch/arm64/kvm/hyp/nvhe/switch.c
> > +++ b/arch/arm64/kvm/hyp/nvhe/switch.c
> > @@ -69,12 +69,10 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
> >  static void __deactivate_traps(struct kvm_vcpu *vcpu)
> >  {
> >       extern char __kvm_hyp_host_vector[];
> > -     u64 mdcr_el2, cptr;
> > +     u64 cptr;
> >
> >       ___deactivate_traps(vcpu);
> >
> > -     mdcr_el2 = read_sysreg(mdcr_el2);
> > -
> >       if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
> >               u64 val;
> >
> > @@ -92,13 +90,8 @@ static void __deactivate_traps(struct kvm_vcpu *vcpu)
> >               isb();
> >       }
> >
> > -     __deactivate_traps_common();
> > -
> > -     mdcr_el2 &= MDCR_EL2_HPMN_MASK;
> > -     mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT;
> > -     mdcr_el2 |= MDCR_EL2_E2TB_MASK << MDCR_EL2_E2TB_SHIFT;
> > +     __deactivate_traps_common(vcpu);
> >
> > -     write_sysreg(mdcr_el2, mdcr_el2);
> >       write_sysreg(this_cpu_ptr(&kvm_init_params)->hcr_el2, hcr_el2);
> >
> >       cptr = CPTR_EL2_DEFAULT;
> > diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
> > index b3229924d243..0d0c9550fb08 100644
> > --- a/arch/arm64/kvm/hyp/vhe/switch.c
> > +++ b/arch/arm64/kvm/hyp/vhe/switch.c
> > @@ -91,17 +91,9 @@ void activate_traps_vhe_load(struct kvm_vcpu *vcpu)
> >       __activate_traps_common(vcpu);
> >  }
> >
> > -void deactivate_traps_vhe_put(void)
> > +void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu)
> >  {
> > -     u64 mdcr_el2 = read_sysreg(mdcr_el2);
> > -
> > -     mdcr_el2 &= MDCR_EL2_HPMN_MASK |
> > -                 MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT |
> > -                 MDCR_EL2_TPMS;
> > -
> > -     write_sysreg(mdcr_el2, mdcr_el2);
> > -
> > -     __deactivate_traps_common();
> > +     __deactivate_traps_common(vcpu);
> >  }
> >
> >  /* Switch to the guest for VHE systems running in EL2 */
> > diff --git a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
> > index 2a0b8c88d74f..007a12dd4351 100644
> > --- a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
> > +++ b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
> > @@ -101,7 +101,7 @@ void kvm_vcpu_put_sysregs_vhe(struct kvm_vcpu *vcpu)
> >       struct kvm_cpu_context *host_ctxt;
> >
> >       host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
> > -     deactivate_traps_vhe_put();
> > +     deactivate_traps_vhe_put(vcpu);
> >
> >       __sysreg_save_el1_state(guest_ctxt);
> >       __sysreg_save_user_state(guest_ctxt);
> > --
> > 2.32.0.402.g57bb445576-goog
> >
>

^ permalink raw reply	[flat|nested] 126+ messages in thread

* Re: [PATCH v3 06/15] KVM: arm64: Restore mdcr_el2 from vcpu
@ 2021-07-21  7:37       ` Fuad Tabba
  0 siblings, 0 replies; 126+ messages in thread
From: Fuad Tabba @ 2021-07-21  7:37 UTC (permalink / raw)
  To: Andrew Jones
  Cc: kernel-team, kvm, maz, pbonzini, will, kvmarm, linux-arm-kernel

Hi Drew,

On Tue, Jul 20, 2021 at 3:53 PM Andrew Jones <drjones@redhat.com> wrote:
>
> On Mon, Jul 19, 2021 at 05:03:37PM +0100, Fuad Tabba wrote:
> > On deactivating traps, restore the value of mdcr_el2 from the
> > newly created and preserved host value vcpu context, rather than
> > directly reading the hardware register.
> >
> > Up until and including this patch the two values are the same,
> > i.e., the hardware register and the vcpu one. A future patch will
> > be changing the value of mdcr_el2 on activating traps, and this
> > ensures that its value will be restored.
> >
> > No functional change intended.
>
> I'm probably missing something, but I can't convince myself that the host
> will end up with the same mdcr_el2 value after deactivating traps after
> this patch as before. We clearly now restore whatever we had when
> activating traps (presumably whatever we configured at init_el2_state
> time), but is that equivalent to what we had before with the masking and
> ORing that this patch drops?

You're right. I thought that these were actually being initialized to
the same values, but having a closer look at the code the mdcr values
are not the same as pre-patch. I will fix this.

Thanks!
/fuad

> Thanks,
> drew
>
> >
> > Signed-off-by: Fuad Tabba <tabba@google.com>
> > ---
> >  arch/arm64/include/asm/kvm_host.h       |  5 ++++-
> >  arch/arm64/include/asm/kvm_hyp.h        |  2 +-
> >  arch/arm64/kvm/hyp/include/hyp/switch.h |  6 +++++-
> >  arch/arm64/kvm/hyp/nvhe/switch.c        | 11 ++---------
> >  arch/arm64/kvm/hyp/vhe/switch.c         | 12 ++----------
> >  arch/arm64/kvm/hyp/vhe/sysreg-sr.c      |  2 +-
> >  6 files changed, 15 insertions(+), 23 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > index 4d2d974c1522..76462c6a91ee 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -287,10 +287,13 @@ struct kvm_vcpu_arch {
> >       /* Stage 2 paging state used by the hardware on next switch */
> >       struct kvm_s2_mmu *hw_mmu;
> >
> > -     /* HYP configuration */
> > +     /* Values of trap registers for the guest. */
> >       u64 hcr_el2;
> >       u64 mdcr_el2;
> >
> > +     /* Values of trap registers for the host before guest entry. */
> > +     u64 mdcr_el2_host;
> > +
> >       /* Exception Information */
> >       struct kvm_vcpu_fault_info fault;
> >
> > diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
> > index 9d60b3006efc..657d0c94cf82 100644
> > --- a/arch/arm64/include/asm/kvm_hyp.h
> > +++ b/arch/arm64/include/asm/kvm_hyp.h
> > @@ -95,7 +95,7 @@ void __sve_restore_state(void *sve_pffr, u32 *fpsr);
> >
> >  #ifndef __KVM_NVHE_HYPERVISOR__
> >  void activate_traps_vhe_load(struct kvm_vcpu *vcpu);
> > -void deactivate_traps_vhe_put(void);
> > +void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu);
> >  #endif
> >
> >  u64 __guest_enter(struct kvm_vcpu *vcpu);
> > diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
> > index e4a2f295a394..a0e78a6027be 100644
> > --- a/arch/arm64/kvm/hyp/include/hyp/switch.h
> > +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
> > @@ -92,11 +92,15 @@ static inline void __activate_traps_common(struct kvm_vcpu *vcpu)
> >               write_sysreg(0, pmselr_el0);
> >               write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0);
> >       }
> > +
> > +     vcpu->arch.mdcr_el2_host = read_sysreg(mdcr_el2);
> >       write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
> >  }
> >
> > -static inline void __deactivate_traps_common(void)
> > +static inline void __deactivate_traps_common(struct kvm_vcpu *vcpu)
> >  {
> > +     write_sysreg(vcpu->arch.mdcr_el2_host, mdcr_el2);
> > +
> >       write_sysreg(0, hstr_el2);
> >       if (kvm_arm_support_pmu_v3())
> >               write_sysreg(0, pmuserenr_el0);
> > diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
> > index f7af9688c1f7..1778593a08a9 100644
> > --- a/arch/arm64/kvm/hyp/nvhe/switch.c
> > +++ b/arch/arm64/kvm/hyp/nvhe/switch.c
> > @@ -69,12 +69,10 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
> >  static void __deactivate_traps(struct kvm_vcpu *vcpu)
> >  {
> >       extern char __kvm_hyp_host_vector[];
> > -     u64 mdcr_el2, cptr;
> > +     u64 cptr;
> >
> >       ___deactivate_traps(vcpu);
> >
> > -     mdcr_el2 = read_sysreg(mdcr_el2);
> > -
> >       if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
> >               u64 val;
> >
> > @@ -92,13 +90,8 @@ static void __deactivate_traps(struct kvm_vcpu *vcpu)
> >               isb();
> >       }
> >
> > -     __deactivate_traps_common();
> > -
> > -     mdcr_el2 &= MDCR_EL2_HPMN_MASK;
> > -     mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT;
> > -     mdcr_el2 |= MDCR_EL2_E2TB_MASK << MDCR_EL2_E2TB_SHIFT;
> > +     __deactivate_traps_common(vcpu);
> >
> > -     write_sysreg(mdcr_el2, mdcr_el2);
> >       write_sysreg(this_cpu_ptr(&kvm_init_params)->hcr_el2, hcr_el2);
> >
> >       cptr = CPTR_EL2_DEFAULT;
> > diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
> > index b3229924d243..0d0c9550fb08 100644
> > --- a/arch/arm64/kvm/hyp/vhe/switch.c
> > +++ b/arch/arm64/kvm/hyp/vhe/switch.c
> > @@ -91,17 +91,9 @@ void activate_traps_vhe_load(struct kvm_vcpu *vcpu)
> >       __activate_traps_common(vcpu);
> >  }
> >
> > -void deactivate_traps_vhe_put(void)
> > +void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu)
> >  {
> > -     u64 mdcr_el2 = read_sysreg(mdcr_el2);
> > -
> > -     mdcr_el2 &= MDCR_EL2_HPMN_MASK |
> > -                 MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT |
> > -                 MDCR_EL2_TPMS;
> > -
> > -     write_sysreg(mdcr_el2, mdcr_el2);
> > -
> > -     __deactivate_traps_common();
> > +     __deactivate_traps_common(vcpu);
> >  }
> >
> >  /* Switch to the guest for VHE systems running in EL2 */
> > diff --git a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
> > index 2a0b8c88d74f..007a12dd4351 100644
> > --- a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
> > +++ b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
> > @@ -101,7 +101,7 @@ void kvm_vcpu_put_sysregs_vhe(struct kvm_vcpu *vcpu)
> >       struct kvm_cpu_context *host_ctxt;
> >
> >       host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
> > -     deactivate_traps_vhe_put();
> > +     deactivate_traps_vhe_put(vcpu);
> >
> >       __sysreg_save_el1_state(guest_ctxt);
> >       __sysreg_save_user_state(guest_ctxt);
> > --
> > 2.32.0.402.g57bb445576-goog
> >
>

* Re: [PATCH v3 14/15] KVM: arm64: Handle protected guests at 32 bits
  2021-07-19 19:43     ` Oliver Upton
@ 2021-07-21  8:39       ` Fuad Tabba
  0 siblings, 0 replies; 126+ messages in thread
From: Fuad Tabba @ 2021-07-21  8:39 UTC (permalink / raw)
  To: Oliver Upton
  Cc: kvmarm, kernel-team, kvm, maz, pbonzini, will, linux-arm-kernel

Hi Oliver,

On Mon, Jul 19, 2021 at 8:43 PM Oliver Upton <oupton@google.com> wrote:
>
> On Mon, Jul 19, 2021 at 9:04 AM Fuad Tabba <tabba@google.com> wrote:
> >
> > Protected KVM does not support protected AArch32 guests. However,
> > it is possible for the guest to force itself to run in AArch32, potentially
> > causing problems. Add an extra check so that if the hypervisor
> > catches the guest doing that, it can prevent the guest from
> > running again by resetting vcpu->arch.target and returning
> > ARM_EXCEPTION_IL.
> >
> > Adapted from commit 22f553842b14 ("KVM: arm64: Handle Asymmetric
> > AArch32 systems")
> >
> > Signed-off-by: Fuad Tabba <tabba@google.com>
>
> Would it make sense to document how we handle misbehaved guests, in
> case a particular VMM wants to clean up the mess afterwards?

I agree, especially since with this patch this could happen in more
than one place.

Thanks,
/fuad

> --
> Thanks,
> Oliver
>
> > ---
> >  arch/arm64/kvm/hyp/include/hyp/switch.h | 24 ++++++++++++++++++++++++
> >  1 file changed, 24 insertions(+)
> >
> > diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
> > index 8431f1514280..f09343e15a80 100644
> > --- a/arch/arm64/kvm/hyp/include/hyp/switch.h
> > +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
> > @@ -23,6 +23,7 @@
> >  #include <asm/kprobes.h>
> >  #include <asm/kvm_asm.h>
> >  #include <asm/kvm_emulate.h>
> > +#include <asm/kvm_fixed_config.h>
> >  #include <asm/kvm_hyp.h>
> >  #include <asm/kvm_mmu.h>
> >  #include <asm/fpsimd.h>
> > @@ -477,6 +478,29 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
> >                         write_sysreg_el2(read_sysreg_el2(SYS_ELR) - 4, SYS_ELR);
> >         }
> >
> > +       /*
> > +        * Protected VMs might not be allowed to run in AArch32. The check below
> > +        * is based on the one in kvm_arch_vcpu_ioctl_run().
> > +        * The ARMv8 architecture doesn't give the hypervisor a mechanism to
> > +        * prevent a guest from dropping to AArch32 EL0 if implemented by the
> > +        * CPU. If the hypervisor spots a guest in such a state ensure it is
> > +        * handled, and don't trust the host to spot or fix it.
> > +        */
> > +       if (unlikely(is_nvhe_hyp_code() &&
> > +                    kvm_vm_is_protected(kern_hyp_va(vcpu->kvm)) &&
> > +                    FIELD_GET(FEATURE(ID_AA64PFR0_EL0),
> > +                              PVM_ID_AA64PFR0_ALLOW) <
> > +                            ID_AA64PFR0_ELx_32BIT_64BIT &&
> > +                    vcpu_mode_is_32bit(vcpu))) {
> > +               /*
> > +                * As we have caught the guest red-handed, decide that it isn't
> > +                * fit for purpose anymore by making the vcpu invalid.
> > +                */
> > +               vcpu->arch.target = -1;
> > +               *exit_code = ARM_EXCEPTION_IL;
> > +               goto exit;
> > +       }
> > +
> >         /*
> >          * We're using the raw exception code in order to only process
> >          * the trap if no SError is pending. We will come back to the
> > --
> > 2.32.0.402.g57bb445576-goog
> >
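
As a sketch, the gate the quoted hunk applies, pulled out into a
hypothetical helper (pvm_allows_aarch32_el0() does not exist in the
patch, which open-codes the expression in fixup_guest_exit()):

	static bool pvm_allows_aarch32_el0(void)
	{
		/* Does the fixed allowed-feature mask permit 32-bit EL0? */
		return FIELD_GET(FEATURE(ID_AA64PFR0_EL0),
				 PVM_ID_AA64PFR0_ALLOW) >=
		       ID_AA64PFR0_ELx_32BIT_64BIT;
	}

When this is false and the vcpu is caught in AArch32, the vcpu is
invalidated (target = -1), so the host cannot simply re-enter it.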

* Re: [PATCH v3 10/15] KVM: arm64: Guest exit handlers for nVHE hyp
  2021-07-19 16:03   ` Fuad Tabba
@ 2021-08-03 15:32     ` Will Deacon
  0 siblings, 0 replies; 126+ messages in thread
From: Will Deacon @ 2021-08-03 15:32 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: kvmarm, maz, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team

On Mon, Jul 19, 2021 at 05:03:41PM +0100, Fuad Tabba wrote:
> Add an array of pointers to handlers for various trap reasons in
> nVHE code.
> 
> The current code selects how to fix up a guest on exit based on a
> series of if/else statements. Future patches will also require
> different handling for guest exits. Create an array of handlers
> to consolidate them.
> 
> No functional change intended as the array isn't populated yet.
> 
> Acked-by: Will Deacon <will@kernel.org>
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
>  arch/arm64/kvm/hyp/include/hyp/switch.h | 43 +++++++++++++++++++++++++
>  arch/arm64/kvm/hyp/nvhe/switch.c        | 35 ++++++++++++++++++++
>  2 files changed, 78 insertions(+)

Definitely keep my Ack on this, but Clang just chucked out a warning due to:

> diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
> index a0e78a6027be..5a2b89b96c67 100644
> --- a/arch/arm64/kvm/hyp/include/hyp/switch.h
> +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
> @@ -409,6 +409,46 @@ static inline bool __hyp_handle_ptrauth(struct kvm_vcpu *vcpu)
>  	return true;
>  }
>  
> +typedef int (*exit_handle_fn)(struct kvm_vcpu *);
> +
> +exit_handle_fn kvm_get_nvhe_exit_handler(struct kvm_vcpu *vcpu);

and:

> diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
> index 86f3d6482935..36da423006bd 100644
> --- a/arch/arm64/kvm/hyp/nvhe/switch.c
> +++ b/arch/arm64/kvm/hyp/nvhe/switch.c
> @@ -158,6 +158,41 @@ static void __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt)
>  		write_sysreg(pmu->events_host, pmcntenset_el0);
>  }
>  
> +typedef int (*exit_handle_fn)(struct kvm_vcpu *);

Which leads to:

arch/arm64/kvm/hyp/nvhe/switch.c:189:15: warning: redefinition of typedef 'exit_handle_fn' is a C11 feature [-Wtypedef-redefinition]
typedef int (*exit_handle_fn)(struct kvm_vcpu *);
              ^
./arch/arm64/kvm/hyp/include/hyp/switch.h:416:15: note: previous definition is here
typedef int (*exit_handle_fn)(struct kvm_vcpu *);
              ^
1 warning generated.

So I guess just pick your favourite?

Will
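
One way to resolve the warning, sketched under the assumption that the
shared header stays the single definition site:

	/* arch/arm64/kvm/hyp/include/hyp/switch.h -- keep this copy */
	typedef int (*exit_handle_fn)(struct kvm_vcpu *);

	exit_handle_fn kvm_get_nvhe_exit_handler(struct kvm_vcpu *vcpu);

	/*
	 * arch/arm64/kvm/hyp/nvhe/switch.c -- drop the duplicate typedef
	 * and rely on the copy pulled in via <hyp/switch.h>, which makes
	 * the -Wtypedef-redefinition warning go away.
	 */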

* Re: [PATCH v3 06/15] KVM: arm64: Restore mdcr_el2 from vcpu
  2021-07-21  7:37       ` Fuad Tabba
@ 2021-08-12  8:46         ` Will Deacon
  0 siblings, 0 replies; 126+ messages in thread
From: Will Deacon @ 2021-08-12  8:46 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: Andrew Jones, kvmarm, maz, james.morse, alexandru.elisei,
	suzuki.poulose, mark.rutland, christoffer.dall, pbonzini,
	qperret, kvm, linux-arm-kernel, kernel-team

On Wed, Jul 21, 2021 at 08:37:21AM +0100, Fuad Tabba wrote:
> On Tue, Jul 20, 2021 at 3:53 PM Andrew Jones <drjones@redhat.com> wrote:
> >
> > On Mon, Jul 19, 2021 at 05:03:37PM +0100, Fuad Tabba wrote:
> > > On deactivating traps, restore the value of mdcr_el2 from the
> > > host value newly created and preserved in the vcpu context, rather
> > > than directly reading the hardware register.
> > >
> > > Up until and including this patch the two values are the same,
> > > i.e., the hardware register and the vcpu one. A future patch will
> > > be changing the value of mdcr_el2 on activating traps, and this
> > > ensures that its value will be restored.
> > >
> > > No functional change intended.
> >
> > I'm probably missing something, but I can't convince myself that the host
> > will end up with the same mdcr_el2 value after deactivating traps after
> > this patch as before. We clearly now restore whatever we had when
> > activating traps (presumably whatever we configured at init_el2_state
> > time), but is that equivalent to what we had before with the masking and
> > ORing that this patch drops?
> 
> You're right. I thought that these were actually being initialized to
> the same values, but having a closer look at the code the mdcr values
> are not the same as pre-patch. I will fix this.

Can you elaborate on the issue here, please? I was just looking at this
but aren't you now relying on __init_el2_debug to configure this, which
should be fine?

Will
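
For context, the two host values being compared, reconstructed from the
hunks quoted earlier in the thread (VHE put path shown; a sketch, not
the complete patch):

	/* Pre-patch: rebuilt from the live register on every put */
	u64 mdcr_el2 = read_sysreg(mdcr_el2);

	mdcr_el2 &= MDCR_EL2_HPMN_MASK |
		    MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT |
		    MDCR_EL2_TPMS;
	write_sysreg(mdcr_el2, mdcr_el2);

	/* Post-patch: restore whatever __activate_traps_common() saved,
	 * i.e. the value the host was running with before guest entry */
	write_sysreg(vcpu->arch.mdcr_el2_host, mdcr_el2);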

* Re: [PATCH v3 01/15] KVM: arm64: placeholder to check if VM is protected
  2021-07-19 16:03   ` Fuad Tabba
@ 2021-08-12  8:58     ` Will Deacon
  0 siblings, 0 replies; 126+ messages in thread
From: Will Deacon @ 2021-08-12  8:58 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: kvmarm, maz, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team

On Mon, Jul 19, 2021 at 05:03:32PM +0100, Fuad Tabba wrote:
> Add a function to check whether a VM is protected (under pKVM).
> Since the creation of protected VMs isn't enabled yet, this is a
> placeholder that always returns false. The intention is for this
> to become a check for protected VMs in the future (see Will's RFC
> [*]).
> 
> No functional change intended.
> 
> Signed-off-by: Fuad Tabba <tabba@google.com>
> 
> [*] https://lore.kernel.org/kvmarm/20210603183347.1695-1-will@kernel.org/

You can make this a Link: tag.

Anyway, I think it makes lots of sense to decouple this from the user-ABI
series:

Acked-by: Will Deacon <will@kernel.org>

Will
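
Per the commit message, the placeholder amounts to the following sketch
(unconditionally false until protected-VM creation is wired up):

	static inline bool kvm_vm_is_protected(struct kvm *kvm)
	{
		return false;
	}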

* Re: [PATCH v3 02/15] KVM: arm64: Remove trailing whitespace in comment
  2021-07-19 16:03   ` Fuad Tabba
@ 2021-08-12  8:59     ` Will Deacon
  0 siblings, 0 replies; 126+ messages in thread
From: Will Deacon @ 2021-08-12  8:59 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: kvmarm, maz, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team

On Mon, Jul 19, 2021 at 05:03:33PM +0100, Fuad Tabba wrote:
> Remove trailing whitespace from comment in trap_dbgauthstatus_el1().
> 
> No functional change intended.
> 
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
>  arch/arm64/kvm/sys_regs.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index f6f126eb6ac1..80a6e41cadad 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -318,14 +318,14 @@ static bool trap_dbgauthstatus_el1(struct kvm_vcpu *vcpu,
>  /*
>   * We want to avoid world-switching all the DBG registers all the
>   * time:
> - * 
> + *
>   * - If we've touched any debug register, it is likely that we're
>   *   going to touch more of them. It then makes sense to disable the
>   *   traps and start doing the save/restore dance
>   * - If debug is active (DBG_MDSCR_KDE or DBG_MDSCR_MDE set), it is
>   *   then mandatory to save/restore the registers, as the guest
>   *   depends on them.
> - * 
> + *
>   * For this, we use a DIRTY bit, indicating the guest has modified the
>   * debug registers, used as follow:
>   *

I'd usually be against these sorts of changes but given you're in the
area...

Acked-by: Will Deacon <will@kernel.org>

Will

* Re: [PATCH v3 05/15] KVM: arm64: Refactor sys_regs.h,c for nVHE reuse
  2021-07-19 16:03   ` Fuad Tabba
@ 2021-08-12  8:59     ` Will Deacon
  0 siblings, 0 replies; 126+ messages in thread
From: Will Deacon @ 2021-08-12  8:59 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: kvmarm, maz, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team

On Mon, Jul 19, 2021 at 05:03:36PM +0100, Fuad Tabba wrote:
> Refactor sys_regs.h and sys_regs.c to make it easier to reuse
> common code. It will be used in nVHE in a later patch.
> 
> Note that the refactored code uses __inline_bsearch for find_reg
> instead of bsearch to avoid copying the bsearch code for nVHE.
> 
> No functional change intended.
> 
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
>  arch/arm64/include/asm/sysreg.h |  3 +++
>  arch/arm64/kvm/sys_regs.c       | 30 +-----------------------------
>  arch/arm64/kvm/sys_regs.h       | 31 +++++++++++++++++++++++++++++++
>  3 files changed, 35 insertions(+), 29 deletions(-)

With the naming change suggested by Drew:

Acked-by: Will Deacon <will@kernel.org>

Will
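
A sketch of the refactored lookup the commit message describes, assuming
reg_to_encoding() and match_sys_reg() are the existing sys_regs.c
helpers (signature wrapped for width):

	static inline const struct sys_reg_desc *
	find_reg(const struct sys_reg_params *params,
		 const struct sys_reg_desc table[], unsigned int num)
	{
		unsigned long pval = reg_to_encoding(params);

		return __inline_bsearch((void *)pval, table, num,
					sizeof(table[0]), match_sys_reg);
	}

Since __inline_bsearch() is a static inline from <linux/bsearch.h>, the
nVHE object gets its own copy instead of linking against the kernel's
bsearch().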

* Re: [PATCH v3 07/15] KVM: arm64: Track value of cptr_el2 in struct kvm_vcpu_arch
  2021-07-19 16:03   ` Fuad Tabba
@ 2021-08-12  8:59     ` Will Deacon
  0 siblings, 0 replies; 126+ messages in thread
From: Will Deacon @ 2021-08-12  8:59 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: kvmarm, maz, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team

On Mon, Jul 19, 2021 at 05:03:38PM +0100, Fuad Tabba wrote:
> Track the baseline guest value for cptr_el2 in struct
> kvm_vcpu_arch, similar to the other registers that control traps.
> Use this value when setting cptr_el2 for the guest.
> 
> Currently this value is unchanged (CPTR_EL2_DEFAULT), but future
> patches will set trapping bits based on features supported for
> the guest.
> 
> No functional change intended.
> 
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
>  arch/arm64/include/asm/kvm_host.h | 1 +
>  arch/arm64/kvm/arm.c              | 1 +
>  arch/arm64/kvm/hyp/nvhe/switch.c  | 2 +-
>  3 files changed, 3 insertions(+), 1 deletion(-)

Acked-by: Will Deacon <will@kernel.org>

Will
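
Roughly, the change being acked (a simplified sketch of the hunks in
the diffstat above; exact placement is in the patch itself):

	/* arch/arm64/kvm/arm.c, at vcpu creation: baseline value */
	vcpu->arch.cptr_el2 = CPTR_EL2_DEFAULT;

	/* arch/arm64/kvm/hyp/nvhe/switch.c, __activate_traps(): start
	 * from the tracked value rather than the constant */
	u64 val = vcpu->arch.cptr_el2;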

* Re: [PATCH v3 08/15] KVM: arm64: Add feature register flag definitions
  2021-07-19 16:03   ` Fuad Tabba
@ 2021-08-12  8:59     ` Will Deacon
  0 siblings, 0 replies; 126+ messages in thread
From: Will Deacon @ 2021-08-12  8:59 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: kvmarm, maz, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team

On Mon, Jul 19, 2021 at 05:03:39PM +0100, Fuad Tabba wrote:
> Add feature register flag definitions to clarify which features
> might be supported.
> 
> Consolidate the various ID_AA64PFR0_ELx flags for all ELs.
> 
> No functional change intended.
> 
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
>  arch/arm64/include/asm/cpufeature.h |  4 ++--
>  arch/arm64/include/asm/sysreg.h     | 12 ++++++++----
>  arch/arm64/kernel/cpufeature.c      |  8 ++++----
>  3 files changed, 14 insertions(+), 10 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index 9bb9d11750d7..b7d9bb17908d 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -602,14 +602,14 @@ static inline bool id_aa64pfr0_32bit_el1(u64 pfr0)
>  {
>  	u32 val = cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_EL1_SHIFT);
>  
> -	return val == ID_AA64PFR0_EL1_32BIT_64BIT;
> +	return val == ID_AA64PFR0_ELx_32BIT_64BIT;
>  }
>  
>  static inline bool id_aa64pfr0_32bit_el0(u64 pfr0)
>  {
>  	u32 val = cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_EL0_SHIFT);
>  
> -	return val == ID_AA64PFR0_EL0_32BIT_64BIT;
> +	return val == ID_AA64PFR0_ELx_32BIT_64BIT;
>  }
>  
>  static inline bool id_aa64pfr0_sve(u64 pfr0)
> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> index 326f49e7bd42..0b773037251c 100644
> --- a/arch/arm64/include/asm/sysreg.h
> +++ b/arch/arm64/include/asm/sysreg.h
> @@ -784,14 +784,13 @@
>  #define ID_AA64PFR0_AMU			0x1
>  #define ID_AA64PFR0_SVE			0x1
>  #define ID_AA64PFR0_RAS_V1		0x1
> +#define ID_AA64PFR0_RAS_ANY		0xf

This doesn't correspond to an architectural definition afaict: the manual
says that any values other than 0, 1 or 2 are "reserved" so we should avoid
defining our own definitions here.

Will

* Re: [PATCH v3 09/15] KVM: arm64: Add config register bit definitions
  2021-07-19 16:03   ` Fuad Tabba
@ 2021-08-12  8:59     ` Will Deacon
  0 siblings, 0 replies; 126+ messages in thread
From: Will Deacon @ 2021-08-12  8:59 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: kvmarm, maz, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team

On Mon, Jul 19, 2021 at 05:03:40PM +0100, Fuad Tabba wrote:
> Add hardware configuration register bit definitions for HCR_EL2
> and MDCR_EL2. Future patches toggle these hyp configuration
> register bits to trap on certain accesses.
> 
> No functional change intended.
> 
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
>  arch/arm64/include/asm/kvm_arm.h | 22 ++++++++++++++++++++++
>  1 file changed, 22 insertions(+)

I checked all of these against the Arm ARM and they look correct to me:

Acked-by: Will Deacon <will@kernel.org>

Will
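
A hypothetical illustration of how such a bit ends up being used later
in the series (MDCR_EL2_TDRA is a pre-existing define, picked only as
an example; the new bits follow the same pattern):

	/* Trap guest debug ROM accesses by folding the bit into the
	 * baseline mdcr_el2 value the guest runs with */
	vcpu->arch.mdcr_el2 |= MDCR_EL2_TDRA;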

* Re: [PATCH v3 08/15] KVM: arm64: Add feature register flag definitions
  2021-08-12  8:59     ` Will Deacon
@ 2021-08-12  9:21       ` Fuad Tabba
  0 siblings, 0 replies; 126+ messages in thread
From: Fuad Tabba @ 2021-08-12  9:21 UTC (permalink / raw)
  To: Will Deacon
  Cc: kvmarm, maz, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team

Hi Will,

On Thu, Aug 12, 2021 at 10:59 AM Will Deacon <will@kernel.org> wrote:
>
> On Mon, Jul 19, 2021 at 05:03:39PM +0100, Fuad Tabba wrote:
> > Add feature register flag definitions to clarify which features
> > might be supported.
> >
> > Consolidate the various ID_AA64PFR0_ELx flags for all ELs.
> >
> > No functional change intended.
> >
> > Signed-off-by: Fuad Tabba <tabba@google.com>
> > ---
> >  arch/arm64/include/asm/cpufeature.h |  4 ++--
> >  arch/arm64/include/asm/sysreg.h     | 12 ++++++++----
> >  arch/arm64/kernel/cpufeature.c      |  8 ++++----
> >  3 files changed, 14 insertions(+), 10 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> > index 9bb9d11750d7..b7d9bb17908d 100644
> > --- a/arch/arm64/include/asm/cpufeature.h
> > +++ b/arch/arm64/include/asm/cpufeature.h
> > @@ -602,14 +602,14 @@ static inline bool id_aa64pfr0_32bit_el1(u64 pfr0)
> >  {
> >       u32 val = cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_EL1_SHIFT);
> >
> > -     return val == ID_AA64PFR0_EL1_32BIT_64BIT;
> > +     return val == ID_AA64PFR0_ELx_32BIT_64BIT;
> >  }
> >
> >  static inline bool id_aa64pfr0_32bit_el0(u64 pfr0)
> >  {
> >       u32 val = cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_EL0_SHIFT);
> >
> > -     return val == ID_AA64PFR0_EL0_32BIT_64BIT;
> > +     return val == ID_AA64PFR0_ELx_32BIT_64BIT;
> >  }
> >
> >  static inline bool id_aa64pfr0_sve(u64 pfr0)
> > diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> > index 326f49e7bd42..0b773037251c 100644
> > --- a/arch/arm64/include/asm/sysreg.h
> > +++ b/arch/arm64/include/asm/sysreg.h
> > @@ -784,14 +784,13 @@
> >  #define ID_AA64PFR0_AMU                      0x1
> >  #define ID_AA64PFR0_SVE                      0x1
> >  #define ID_AA64PFR0_RAS_V1           0x1
> > +#define ID_AA64PFR0_RAS_ANY          0xf
>
> This doesn't correspond to an architectural definition afaict: the manual
> says that any values other than 0, 1 or 2 are "reserved", so we should
> avoid adding our own definitions here.

I'll add an ID_AA64PFR0_RAS_V2 definition in that case and use it for
the checks later. That would achieve the same goal without adding
definitions to the reserved area.
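
i.e., something along the lines of (a sketch only; the exact name and
value still need to be checked against the manual):

#define ID_AA64PFR0_RAS_V2		0x2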

Cheers,
/fuad

* Re: [PATCH v3 01/15] KVM: arm64: placeholder to check if VM is protected
  2021-08-12  8:58     ` Will Deacon
@ 2021-08-12  9:22       ` Fuad Tabba
  0 siblings, 0 replies; 126+ messages in thread
From: Fuad Tabba @ 2021-08-12  9:22 UTC (permalink / raw)
  To: Will Deacon
  Cc: kvmarm, maz, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team

Hi Will,

On Thu, Aug 12, 2021 at 10:59 AM Will Deacon <will@kernel.org> wrote:
>
> On Mon, Jul 19, 2021 at 05:03:32PM +0100, Fuad Tabba wrote:
> > Add a function to check whether a VM is protected (under pKVM).
> > Since the creation of protected VMs isn't enabled yet, this is a
> > placeholder that always returns false. The intention is for this
> > to become a check for protected VMs in the future (see Will's RFC
> > [*]).
> >
> > No functional change intended.
> >
> > Signed-off-by: Fuad Tabba <tabba@google.com>
> >
> > [*] https://lore.kernel.org/kvmarm/20210603183347.1695-1-will@kernel.org/
>
> You can make this a Link: tag.

Of course. Thanks!
/fuad
> Anyway, I think it makes lots of sense to decouple this from the user-ABI
> series:
>
> Acked-by: Will Deacon <will@kernel.org>
>
> Will

* Re: [PATCH v3 06/15] KVM: arm64: Restore mdcr_el2 from vcpu
  2021-08-12  8:46         ` Will Deacon
@ 2021-08-12  9:28           ` Fuad Tabba
  0 siblings, 0 replies; 126+ messages in thread
From: Fuad Tabba @ 2021-08-12  9:28 UTC (permalink / raw)
  To: Will Deacon
  Cc: Andrew Jones, kvmarm, maz, james.morse, alexandru.elisei,
	suzuki.poulose, mark.rutland, christoffer.dall, pbonzini,
	qperret, kvm, linux-arm-kernel, kernel-team

Hi Will,

On Thu, Aug 12, 2021 at 10:46 AM Will Deacon <will@kernel.org> wrote:
>
> On Wed, Jul 21, 2021 at 08:37:21AM +0100, Fuad Tabba wrote:
> > On Tue, Jul 20, 2021 at 3:53 PM Andrew Jones <drjones@redhat.com> wrote:
> > >
> > > On Mon, Jul 19, 2021 at 05:03:37PM +0100, Fuad Tabba wrote:
> > > > On deactivating traps, restore the value of mdcr_el2 from the
> > > > newly created and preserved host value vcpu context, rather than
> > > > directly reading the hardware register.
> > > >
> > > > Up until and including this patch the two values are the same,
> > > > i.e., the hardware register and the vcpu one. A future patch will
> > > > be changing the value of mdcr_el2 on activating traps, and this
> > > > ensures that its value will be restored.
> > > >
> > > > No functional change intended.
> > >
> > > I'm probably missing something, but I can't convince myself that the host
> > > will end up with the same mdcr_el2 value after deactivating traps after
> > > this patch as before. We clearly now restore whatever we had when
> > > activating traps (presumably whatever we configured at init_el2_state
> > > time), but is that equivalent to what we had before with the masking and
> > > ORing that this patch drops?
> >
> > You're right. I thought that these were actually being initialized to
> > the same values, but having a closer look at the code the mdcr values
> > are not the same as pre-patch. I will fix this.
>
> Can you elaborate on the issue here, please? I was just looking at this
> but aren't you now relying on __init_el2_debug to configure this, which
> should be fine?

I *think* that it should be fine, but as Drew pointed out, the host
does not end up with the same mdcr_el2 value after deactivating traps
with this patch applied as it did before it. In my v4 (not sent out
yet), I have fixed that to ensure that the host ends up with the same
value as before this patch, which should make it easier to verify that
there's no functional change.

I'll look into it further, and if I can convince myself that there
aren't any issues and that the change really does make the code
cleaner, I will split it out as a separate patch to make reviewing
easier.
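
For reference, the delta in question is roughly the following (from
memory, so the exact masks and the mdcr_el2_host field name are
approximate):

	/* Before: rebuild the host value from the hardware register. */
	mdcr_el2 = read_sysreg(mdcr_el2);
	mdcr_el2 &= MDCR_EL2_HPMN_MASK;
	mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT;
	write_sysreg(mdcr_el2, mdcr_el2);

	/* After: restore the value preserved when traps were activated. */
	write_sysreg(vcpu->arch.mdcr_el2_host, mdcr_el2);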

Thanks,
/fuad

> Will

* Re: [PATCH v3 11/15] KVM: arm64: Add trap handlers for protected VMs
  2021-07-19 16:03   ` Fuad Tabba
@ 2021-08-12  9:45     ` Will Deacon
  0 siblings, 0 replies; 126+ messages in thread
From: Will Deacon @ 2021-08-12  9:45 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: kvmarm, maz, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team

On Mon, Jul 19, 2021 at 05:03:42PM +0100, Fuad Tabba wrote:
> Add trap handlers for protected VMs. These are mainly for Sys64
> and debug traps.
> 
> No functional change intended as these are not hooked in yet to
> the guest exit handlers introduced earlier. So even when trapping
> is triggered, the exit handlers would let the host handle it, as
> before.
> 
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
>  arch/arm64/include/asm/kvm_fixed_config.h | 178 +++++++++
>  arch/arm64/include/asm/kvm_host.h         |   2 +
>  arch/arm64/include/asm/kvm_hyp.h          |   3 +
>  arch/arm64/kvm/Makefile                   |   2 +-
>  arch/arm64/kvm/arm.c                      |  11 +
>  arch/arm64/kvm/hyp/nvhe/Makefile          |   2 +-
>  arch/arm64/kvm/hyp/nvhe/sys_regs.c        | 443 ++++++++++++++++++++++
>  arch/arm64/kvm/pkvm.c                     | 183 +++++++++
>  8 files changed, 822 insertions(+), 2 deletions(-)
>  create mode 100644 arch/arm64/include/asm/kvm_fixed_config.h
>  create mode 100644 arch/arm64/kvm/hyp/nvhe/sys_regs.c
>  create mode 100644 arch/arm64/kvm/pkvm.c
> 
> diff --git a/arch/arm64/include/asm/kvm_fixed_config.h b/arch/arm64/include/asm/kvm_fixed_config.h
> new file mode 100644
> index 000000000000..b39a5de2c4b9
> --- /dev/null
> +++ b/arch/arm64/include/asm/kvm_fixed_config.h
> @@ -0,0 +1,178 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/*
> + * Copyright (C) 2021 Google LLC
> + * Author: Fuad Tabba <tabba@google.com>
> + */
> +
> +#ifndef __ARM64_KVM_FIXED_CONFIG_H__
> +#define __ARM64_KVM_FIXED_CONFIG_H__
> +
> +#include <asm/sysreg.h>
> +
> +/*
> + * This file contains definitions for features to be allowed or restricted for
> + * guest virtual machines as a baseline, depending on what mode KVM is running
> + * in and on the type of guest is running.

s/is running/that is running/

> + *
> + * The features are represented as the highest allowed value for a feature in
> + * the feature id registers. If the field is set to all ones (i.e., 0b1111),
> + * then it's only restricted by what the system allows. If the feature is set to
> + * another value, then that value would be the maximum value allowed and
> + * supported in pKVM, even if the system supports a higher value.

Given that some fields are signed whereas others are unsigned, I think the
wording could be a bit tighter here when it refers to "maximum": for a
signed field, 0b1111 is -1 and so is the lowest value, not the highest.

> + *
> + * Some features are forced to a certain value, in which case a SET bitmap is
> + * used to force these values.
> + */
> +
> +
> +/*
> + * Allowed features for protected guests (Protected KVM)
> + *
> + * The approach taken here is to allow features that are:
> + * - needed by common Linux distributions (e.g., flooating point)

s/flooating/floating

> + * - are trivial, e.g., supporting the feature doesn't introduce or require the
> + * tracking of additional state

... in KVM.

> + * - not trapable

s/not trapable/cannot be trapped/

> + */
> +
> +/*
> + * - Floating-point and Advanced SIMD:
> + *	Don't require much support other than maintaining the context, which KVM
> + *	already has.

I'd rework this sentence. We have to support fpsimd because Linux guests
rely on it.

> + * - AArch64 guests only (no support for AArch32 guests):
> + *	Simplify support in case of asymmetric AArch32 systems.

I don't think asymmetric systems come into this really; AArch32 on its
own adds lots of complexity in trap handling, emulation, condition codes
etc. Restricting guests to AArch64 means we don't have to worry about the
AArch32 exception model or emulation of 32-bit instructions.

> + * - RAS (v1)
> + *	v1 doesn't require much additional support, but later versions do.

Be more specific?

> + * - Data Independent Timing
> + *	Trivial
> + * Remaining features are not supported either because they require too much
> + * support from KVM, or risk leaking guest data.

I think we should drop this sentence -- it makes it sound like we can't
be arsed :)

> + */
> +#define PVM_ID_AA64PFR0_ALLOW (\
> +	FEATURE(ID_AA64PFR0_FP) | \
> +	FIELD_PREP(FEATURE(ID_AA64PFR0_EL0), ID_AA64PFR0_ELx_64BIT_ONLY) | \
> +	FIELD_PREP(FEATURE(ID_AA64PFR0_EL1), ID_AA64PFR0_ELx_64BIT_ONLY) | \
> +	FIELD_PREP(FEATURE(ID_AA64PFR0_EL2), ID_AA64PFR0_ELx_64BIT_ONLY) | \
> +	FIELD_PREP(FEATURE(ID_AA64PFR0_EL3), ID_AA64PFR0_ELx_64BIT_ONLY) | \
> +	FIELD_PREP(FEATURE(ID_AA64PFR0_RAS), ID_AA64PFR0_RAS_V1) | \

I think having the FIELD_PREP entries in the ALLOW mask is quite confusing
here -- naively you would expect to be able to bitwise-and the host register
value with the ALLOW mask and get the sanitised version back, but with these
here you have to go field-by-field to compute the common value.

So perhaps move those into a PVM_ID_AA64PFR0_RESTRICT mask or something?
Then pvm_access_id_aa64pfr0() will become a little easier to read, I think.
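
Something like the below, perhaps (completely untested; the RESTRICT
name and the exact field split are only illustrative):

/* Fields that pass through from the host value unmodified. */
#define PVM_ID_AA64PFR0_ALLOW (\
	FEATURE(ID_AA64PFR0_FP) | \
	FEATURE(ID_AA64PFR0_ASIMD) | \
	FEATURE(ID_AA64PFR0_DIT) \
	)

/* Fields that are capped at a fixed maximum value. */
#define PVM_ID_AA64PFR0_RESTRICT (\
	FIELD_PREP(FEATURE(ID_AA64PFR0_EL1), ID_AA64PFR0_ELx_64BIT_ONLY) | \
	FIELD_PREP(FEATURE(ID_AA64PFR0_RAS), ID_AA64PFR0_RAS_V1) \
	)

The simple fields then become a plain bitwise-and:

	val = id_aa64pfr0_el1_sys_val & PVM_ID_AA64PFR0_ALLOW;

and only the RESTRICT fields need the field-by-field minimum.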

> +	FEATURE(ID_AA64PFR0_ASIMD) | \
> +	FEATURE(ID_AA64PFR0_DIT) \
> +	)
> +
> +/*
> + * - Branch Target Identification
> + * - Speculative Store Bypassing
> + *	These features are trivial to support
> + */
> +#define PVM_ID_AA64PFR1_ALLOW (\
> +	FEATURE(ID_AA64PFR1_BT) | \
> +	FEATURE(ID_AA64PFR1_SSBS) \
> +	)
> +
> +/*
> + * No support for Scalable Vectors:
> + *	Requires additional support from KVM

Perhaps expand on "support" here? E.g. "context-switching and trapping
support at EL2".

> + */
> +#define PVM_ID_AA64ZFR0_ALLOW (0ULL)
> +
> +/*
> + * No support for debug, including breakpoints, and watchpoints:
> + *	Reduce complexity and avoid exposing/leaking guest data
> + *
> + * NOTE: The Arm architecture mandates support for at least the Armv8 debug
> + * architecture, which would include at least 2 hardware breakpoints and
> + * watchpoints. Providing that support to protected guests adds considerable
> + * state and complexity, and risks leaking guest data. Therefore, the reserved
> + * value of 0 is used for debug-related fields.
> + */

I think the complexity of the debug architecture is a good reason to avoid
exposing it here, but I don't understand how providing breakpoints or
watchpoints to a guest could risk leaking guest data. What is the specific
threat here?

> +#define PVM_ID_AA64DFR0_ALLOW (0ULL)
> +
> +/*
> + * These features are chosen because they are supported by KVM and to limit the
> + * confiruation state space and make it more deterministic.

s/confiruation/configuration/

However, I don't agree that this provides determinism since we're not
forcing any particular values, but rather filtering the values from the
host.

> + * - 40-bit IPA

This seems more about not supporting KVM_CAP_ARM_VM_IPA_SIZE for now.

> + * - 16-bit ASID
> + * - Mixed-endian
> + * - Distinction between Secure and Non-secure Memory
> + * - Mixed-endian at EL0 only
> + * - Non-context synchronizing exception entry and exit

These all seem to fall into the "cannot trap" category, so we just advertise
whatever we've got.

> + */
> +#define PVM_ID_AA64MMFR0_ALLOW (\
> +	FIELD_PREP(FEATURE(ID_AA64MMFR0_PARANGE), ID_AA64MMFR0_PARANGE_40) | \
> +	FIELD_PREP(FEATURE(ID_AA64MMFR0_ASID), ID_AA64MMFR0_ASID_16) | \
> +	FEATURE(ID_AA64MMFR0_BIGENDEL) | \
> +	FEATURE(ID_AA64MMFR0_SNSMEM) | \
> +	FEATURE(ID_AA64MMFR0_BIGENDEL0) | \
> +	FEATURE(ID_AA64MMFR0_EXS) \
> +	)
> +
> +/*
> + * - 64KB granule not supported
> + */
> +#define PVM_ID_AA64MMFR0_SET (\
> +	FIELD_PREP(FEATURE(ID_AA64MMFR0_TGRAN64), ID_AA64MMFR0_TGRAN64_NI) \
> +	)

Why not, and can we actually prevent the guest from doing that?

> +/*
> + * These features are chosen because they are supported by KVM and to limit the
> + * confiruation state space and make it more deterministic.

It's that typo again ;) But my comment from before still applies -- I don't
think an ALLOW mask adds hugely to the determinism.

> + * - Hardware translation table updates to Access flag and Dirty state
> + * - Number of VMID bits from CPU
> + * - Hierarchical Permission Disables
> + * - Privileged Access Never
> + * - SError interrupt exceptions from speculative reads
> + * - Enhanced Translation Synchronization

As before, I think this is a mixture of "trivial" and "cannot trap"
features.

> + */
> +#define PVM_ID_AA64MMFR1_ALLOW (\
> +	FEATURE(ID_AA64MMFR1_HADBS) | \
> +	FEATURE(ID_AA64MMFR1_VMIDBITS) | \
> +	FEATURE(ID_AA64MMFR1_HPD) | \
> +	FEATURE(ID_AA64MMFR1_PAN) | \
> +	FEATURE(ID_AA64MMFR1_SPECSEI) | \
> +	FEATURE(ID_AA64MMFR1_ETS) \
> +	)
> +
> +/*
> + * These features are chosen because they are supported by KVM and to limit the
> + * confiruation state space and make it more deterministic.

<same comment>

> + * - Common not Private translations
> + * - User Access Override
> + * - IESB bit in the SCTLR_ELx registers
> + * - Unaligned single-copy atomicity and atomic functions
> + * - ESR_ELx.EC value on an exception by read access to feature ID space
> + * - TTL field in address operations.
> + * - Break-before-make sequences when changing translation block size
> + * - E0PDx mechanism
> + */
> +#define PVM_ID_AA64MMFR2_ALLOW (\
> +	FEATURE(ID_AA64MMFR2_CNP) | \
> +	FEATURE(ID_AA64MMFR2_UAO) | \
> +	FEATURE(ID_AA64MMFR2_IESB) | \
> +	FEATURE(ID_AA64MMFR2_AT) | \
> +	FEATURE(ID_AA64MMFR2_IDS) | \
> +	FEATURE(ID_AA64MMFR2_TTL) | \
> +	FEATURE(ID_AA64MMFR2_BBM) | \
> +	FEATURE(ID_AA64MMFR2_E0PD) \
> +	)
> +
> +/*
> + * Allow all features in this register because they are trivial to support, or
> + * are already supported by KVM:
> + * - LS64
> + * - XS
> + * - I8MM
> + * - DGB
> + * - BF16
> + * - SPECRES
> + * - SB
> + * - FRINTTS
> + * - PAuth
> + * - FPAC
> + * - LRCPC
> + * - FCMA
> + * - JSCVT
> + * - DPB
> + */
> +#define PVM_ID_AA64ISAR1_ALLOW (~0ULL)
> +
> +#endif /* __ARM64_KVM_FIXED_CONFIG_H__ */
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index ac67d5699c68..e1ceadd69575 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -780,6 +780,8 @@ static inline bool kvm_vm_is_protected(struct kvm *kvm)
>  	return false;
>  }
>  
> +void kvm_init_protected_traps(struct kvm_vcpu *vcpu);
> +
>  int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature);
>  bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
>  
> diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
> index 657d0c94cf82..3f4866322f85 100644
> --- a/arch/arm64/include/asm/kvm_hyp.h
> +++ b/arch/arm64/include/asm/kvm_hyp.h
> @@ -115,7 +115,10 @@ int __pkvm_init(phys_addr_t phys, unsigned long size, unsigned long nr_cpus,
>  void __noreturn __host_enter(struct kvm_cpu_context *host_ctxt);
>  #endif
>  
> +extern u64 kvm_nvhe_sym(id_aa64pfr0_el1_sys_val);
> +extern u64 kvm_nvhe_sym(id_aa64pfr1_el1_sys_val);
>  extern u64 kvm_nvhe_sym(id_aa64mmfr0_el1_sys_val);
>  extern u64 kvm_nvhe_sym(id_aa64mmfr1_el1_sys_val);
> +extern u64 kvm_nvhe_sym(id_aa64mmfr2_el1_sys_val);
>  
>  #endif /* __ARM64_KVM_HYP_H__ */
> diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
> index 989bb5dad2c8..0be63f5c495f 100644
> --- a/arch/arm64/kvm/Makefile
> +++ b/arch/arm64/kvm/Makefile
> @@ -14,7 +14,7 @@ kvm-y := $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o $(KVM)/eventfd.o \
>  	 $(KVM)/vfio.o $(KVM)/irqchip.o $(KVM)/binary_stats.o \
>  	 arm.o mmu.o mmio.o psci.o perf.o hypercalls.o pvtime.o \
>  	 inject_fault.o va_layout.o handle_exit.o \
> -	 guest.o debug.o reset.o sys_regs.o \
> +	 guest.o debug.o pkvm.o reset.o sys_regs.o \
>  	 vgic-sys-reg-v3.o fpsimd.o pmu.o \
>  	 arch_timer.o trng.o\
>  	 vgic/vgic.o vgic/vgic-init.o \
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index 14b12f2c08c0..3f28549aff0d 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -618,6 +618,14 @@ static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
>  
>  	ret = kvm_arm_pmu_v3_enable(vcpu);
>  
> +	/*
> +	 * Initialize traps for protected VMs.
> +	 * NOTE: Move  trap initialization to EL2 once the code is in place for
> +	 * maintaining protected VM state at EL2 instead of the host.
> +	 */
> +	if (kvm_vm_is_protected(kvm))
> +		kvm_init_protected_traps(vcpu);
> +
>  	return ret;
>  }
>  
> @@ -1781,8 +1789,11 @@ static int kvm_hyp_init_protection(u32 hyp_va_bits)
>  	void *addr = phys_to_virt(hyp_mem_base);
>  	int ret;
>  
> +	kvm_nvhe_sym(id_aa64pfr0_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
> +	kvm_nvhe_sym(id_aa64pfr1_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64PFR1_EL1);
>  	kvm_nvhe_sym(id_aa64mmfr0_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
>  	kvm_nvhe_sym(id_aa64mmfr1_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
> +	kvm_nvhe_sym(id_aa64mmfr2_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR2_EL1);
>  
>  	ret = create_hyp_mappings(addr, addr + hyp_mem_size, PAGE_HYP);
>  	if (ret)
> diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
> index 5df6193fc430..a23f417a0c20 100644
> --- a/arch/arm64/kvm/hyp/nvhe/Makefile
> +++ b/arch/arm64/kvm/hyp/nvhe/Makefile
> @@ -14,7 +14,7 @@ lib-objs := $(addprefix ../../../lib/, $(lib-objs))
>  
>  obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o host.o \
>  	 hyp-main.o hyp-smp.o psci-relay.o early_alloc.o stub.o page_alloc.o \
> -	 cache.o setup.o mm.o mem_protect.o
> +	 cache.o setup.o mm.o mem_protect.o sys_regs.o
>  obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \
>  	 ../fpsimd.o ../hyp-entry.o ../exception.o ../pgtable.o
>  obj-y += $(lib-objs)
> diff --git a/arch/arm64/kvm/hyp/nvhe/sys_regs.c b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
> new file mode 100644
> index 000000000000..6c7230aa70e9
> --- /dev/null
> +++ b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
> @@ -0,0 +1,443 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * Copyright (C) 2021 Google LLC
> + * Author: Fuad Tabba <tabba@google.com>
> + */
> +
> +#include <linux/kvm_host.h>
> +
> +#include <asm/kvm_asm.h>
> +#include <asm/kvm_emulate.h>
> +#include <asm/kvm_fixed_config.h>
> +#include <asm/kvm_mmu.h>
> +
> +#include <hyp/adjust_pc.h>
> +
> +#include "../../sys_regs.h"
> +
> +/*
> + * Copies of the host's CPU features registers holding sanitized values.
> + */
> +u64 id_aa64pfr0_el1_sys_val;
> +u64 id_aa64pfr1_el1_sys_val;
> +u64 id_aa64mmfr2_el1_sys_val;
> +
> +/*
> + * Inject an unknown/undefined exception to the guest.
> + */
> +static void inject_undef(struct kvm_vcpu *vcpu)
> +{
> +	u32 esr = (ESR_ELx_EC_UNKNOWN << ESR_ELx_EC_SHIFT);
> +
> +	vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA64_EL1 |
> +			     KVM_ARM64_EXCEPT_AA64_ELx_SYNC |
> +			     KVM_ARM64_PENDING_EXCEPTION);
> +
> +	__kvm_adjust_pc(vcpu);
> +
> +	write_sysreg_el1(esr, SYS_ESR);
> +	write_sysreg_el1(read_sysreg_el2(SYS_ELR), SYS_ELR);
> +}
> +
> +/*
> + * Accessor for undefined accesses.
> + */
> +static bool undef_access(struct kvm_vcpu *vcpu,
> +			 struct sys_reg_params *p,
> +			 const struct sys_reg_desc *r)
> +{
> +	inject_undef(vcpu);
> +	return false;
> +}
> +
> +/*
> + * Accessors for feature registers.
> + *
> + * If access is allowed, set the regval to the protected VM's view of the
> + * register and return true.
> + * Otherwise, inject an undefined exception and return false.
> + */
> +
> +/*
> + * Returns the minimum feature supported and allowed.
> + */
> +static u64 get_min_feature(u64 feature, u64 allowed_features,
> +			   u64 supported_features)
> +{
> +	const u64 allowed_feature = FIELD_GET(feature, allowed_features);
> +	const u64 supported_feature = FIELD_GET(feature, supported_features);
> +
> +	return min(allowed_feature, supported_feature);

Careful here: this is an unsigned comparison, yet some fields are signed.
cpufeature.c uses the S_ARM64_FTR_BITS and ARM64_FTR_BITS to declare signed
and unsigned fields respectively.
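
Something along these lines, perhaps (untested sketch; assumes we pass a
per-field signedness flag rather than reusing the ftr_bits machinery):

static u64 get_min_feature(u64 feature, bool is_signed,
			   u64 allowed_features, u64 supported_features)
{
	u64 allowed = FIELD_GET(feature, allowed_features);
	u64 supported = FIELD_GET(feature, supported_features);

	/* ID register fields are 4 bits wide, two's complement if signed. */
	if (is_signed)
		return sign_extend64(allowed, 3) < sign_extend64(supported, 3) ?
			allowed : supported;

	return min(allowed, supported);
}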

Will

^ permalink raw reply	[flat|nested] 126+ messages in thread

* Re: [PATCH v3 11/15] KVM: arm64: Add trap handlers for protected VMs
@ 2021-08-12  9:45     ` Will Deacon
  0 siblings, 0 replies; 126+ messages in thread
From: Will Deacon @ 2021-08-12  9:45 UTC (permalink / raw)
  To: Fuad Tabba; +Cc: kernel-team, kvm, maz, pbonzini, kvmarm, linux-arm-kernel

On Mon, Jul 19, 2021 at 05:03:42PM +0100, Fuad Tabba wrote:
> Add trap handlers for protected VMs. These are mainly for Sys64
> and debug traps.
> 
> No functional change intended as these are not hooked in yet to
> the guest exit handlers introduced earlier. So even when trapping
> is triggered, the exit handlers would let the host handle it, as
> before.
> 
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
>  arch/arm64/include/asm/kvm_fixed_config.h | 178 +++++++++
>  arch/arm64/include/asm/kvm_host.h         |   2 +
>  arch/arm64/include/asm/kvm_hyp.h          |   3 +
>  arch/arm64/kvm/Makefile                   |   2 +-
>  arch/arm64/kvm/arm.c                      |  11 +
>  arch/arm64/kvm/hyp/nvhe/Makefile          |   2 +-
>  arch/arm64/kvm/hyp/nvhe/sys_regs.c        | 443 ++++++++++++++++++++++
>  arch/arm64/kvm/pkvm.c                     | 183 +++++++++
>  8 files changed, 822 insertions(+), 2 deletions(-)
>  create mode 100644 arch/arm64/include/asm/kvm_fixed_config.h
>  create mode 100644 arch/arm64/kvm/hyp/nvhe/sys_regs.c
>  create mode 100644 arch/arm64/kvm/pkvm.c
> 
> diff --git a/arch/arm64/include/asm/kvm_fixed_config.h b/arch/arm64/include/asm/kvm_fixed_config.h
> new file mode 100644
> index 000000000000..b39a5de2c4b9
> --- /dev/null
> +++ b/arch/arm64/include/asm/kvm_fixed_config.h
> @@ -0,0 +1,178 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/*
> + * Copyright (C) 2021 Google LLC
> + * Author: Fuad Tabba <tabba@google.com>
> + */
> +
> +#ifndef __ARM64_KVM_FIXED_CONFIG_H__
> +#define __ARM64_KVM_FIXED_CONFIG_H__
> +
> +#include <asm/sysreg.h>
> +
> +/*
> + * This file contains definitions for features to be allowed or restricted for
> + * guest virtual machines as a baseline, depending on what mode KVM is running
> + * in and on the type of guest is running.

s/is running/that is running/

> + *
> + * The features are represented as the highest allowed value for a feature in
> + * the feature id registers. If the field is set to all ones (i.e., 0b1111),
> + * then it's only restricted by what the system allows. If the feature is set to
> + * another value, then that value would be the maximum value allowed and
> + * supported in pKVM, even if the system supports a higher value.

Given that some fields are signed whereas others are unsigned, I think the
wording could be a bit tighter here when it refers to "maximum".

> + *
> + * Some features are forced to a certain value, in which case a SET bitmap is
> + * used to force these values.
> + */
> +
> +
> +/*
> + * Allowed features for protected guests (Protected KVM)
> + *
> + * The approach taken here is to allow features that are:
> + * - needed by common Linux distributions (e.g., flooating point)

s/flooating/floating

> + * - are trivial, e.g., supporting the feature doesn't introduce or require the
> + * tracking of additional state

... in KVM.

> + * - not trapable

s/not trapable/cannot be trapped/

> + */
> +
> +/*
> + * - Floating-point and Advanced SIMD:
> + *	Don't require much support other than maintaining the context, which KVM
> + *	already has.

I'd rework this sentence. We have to support fpsimd because Linux guests
rely on it.

> + * - AArch64 guests only (no support for AArch32 guests):
> + *	Simplify support in case of asymmetric AArch32 systems.

I don't think asymmetric systems come into this really; AArch32 on its
own adds lots of complexity in trap handling, emulation, condition codes
etc. Restricting guests to AArch64 means we don't have to worry about the
AArch32 exception model or emulation of 32-bit instructions.

> + * - RAS (v1)
> + *	v1 doesn't require much additional support, but later versions do.

Be more specific?

> + * - Data Independent Timing
> + *	Trivial
> + * Remaining features are not supported either because they require too much
> + * support from KVM, or risk leaking guest data.

I think we should drop this sentence -- it makes it sounds like we can't
be arsed :)

> + */
> +#define PVM_ID_AA64PFR0_ALLOW (\
> +	FEATURE(ID_AA64PFR0_FP) | \
> +	FIELD_PREP(FEATURE(ID_AA64PFR0_EL0), ID_AA64PFR0_ELx_64BIT_ONLY) | \
> +	FIELD_PREP(FEATURE(ID_AA64PFR0_EL1), ID_AA64PFR0_ELx_64BIT_ONLY) | \
> +	FIELD_PREP(FEATURE(ID_AA64PFR0_EL2), ID_AA64PFR0_ELx_64BIT_ONLY) | \
> +	FIELD_PREP(FEATURE(ID_AA64PFR0_EL3), ID_AA64PFR0_ELx_64BIT_ONLY) | \
> +	FIELD_PREP(FEATURE(ID_AA64PFR0_RAS), ID_AA64PFR0_RAS_V1) | \

I think having the FIELD_PREP entries in the ALLOW mask is quite confusing
here -- naively you would expect to be able to bitwise-and the host register
value with the ALLOW mask and get the sanitised version back, but with these
here you have to go field-by-field to compute the common value.

So perhaps move those into a PVM_ID_AA64PFR0_RESTRICT mask or something?
Then pvm_access_id_aa64pfr0() will become a little easier to read, I think.

> +	FEATURE(ID_AA64PFR0_ASIMD) | \
> +	FEATURE(ID_AA64PFR0_DIT) \
> +	)
> +
> +/*
> + * - Branch Target Identification
> + * - Speculative Store Bypassing
> + *	These features are trivial to support
> + */
> +#define PVM_ID_AA64PFR1_ALLOW (\
> +	FEATURE(ID_AA64PFR1_BT) | \
> +	FEATURE(ID_AA64PFR1_SSBS) \
> +	)
> +
> +/*
> + * No support for Scalable Vectors:
> + *	Requires additional support from KVM

Perhaps expand on "support" here? E.g. "context-switching and trapping
support at EL2".

> + */
> +#define PVM_ID_AA64ZFR0_ALLOW (0ULL)
> +
> +/*
> + * No support for debug, including breakpoints, and watchpoints:
> + *	Reduce complexity and avoid exposing/leaking guest data
> + *
> + * NOTE: The Arm architecture mandates support for at least the Armv8 debug
> + * architecture, which would include at least 2 hardware breakpoints and
> + * watchpoints. Providing that support to protected guests adds considerable
> + * state and complexity, and risks leaking guest data. Therefore, the reserved
> + * value of 0 is used for debug-related fields.
> + */

I think the complexity of the debug architecture is a good reason to avoid
exposing it here, but I don't understand how providing breakpoints or
watchpoints to a guest could risk leaking guest data. What is the specific
threat here?

> +#define PVM_ID_AA64DFR0_ALLOW (0ULL)
> +
> +/*
> + * These features are chosen because they are supported by KVM and to limit the
> + * confiruation state space and make it more deterministic.

s/confiruation/configuration/

However, I don't agree that this provides determinism since we're not
forcing any particular values, but rather filtering the values from the
host.

> + * - 40-bit IPA

This seems more about not supporting KVM_CAP_ARM_VM_IPA_SIZE for now.

> + * - 16-bit ASID
> + * - Mixed-endian
> + * - Distinction between Secure and Non-secure Memory
> + * - Mixed-endian at EL0 only
> + * - Non-context synchronizing exception entry and exit

These all seem to fall into the "cannot trap" category, so we just advertise
whatever we've got.

> + */
> +#define PVM_ID_AA64MMFR0_ALLOW (\
> +	FIELD_PREP(FEATURE(ID_AA64MMFR0_PARANGE), ID_AA64MMFR0_PARANGE_40) | \
> +	FIELD_PREP(FEATURE(ID_AA64MMFR0_ASID), ID_AA64MMFR0_ASID_16) | \
> +	FEATURE(ID_AA64MMFR0_BIGENDEL) | \
> +	FEATURE(ID_AA64MMFR0_SNSMEM) | \
> +	FEATURE(ID_AA64MMFR0_BIGENDEL0) | \
> +	FEATURE(ID_AA64MMFR0_EXS) \
> +	)
> +
> +/*
> + * - 64KB granule not supported
> + */
> +#define PVM_ID_AA64MMFR0_SET (\
> +	FIELD_PREP(FEATURE(ID_AA64MMFR0_TGRAN64), ID_AA64MMFR0_TGRAN64_NI) \
> +	)

Why not, and can we actually prevent the guest from doing that?

> +/*
> + * These features are chosen because they are supported by KVM and to limit the
> + * confiruation state space and make it more deterministic.

It's that typo again ;) But my comment from before still applies -- I don't
think an ALLOW mask adds hugely to the determinism.

> + * - Hardware translation table updates to Access flag and Dirty state
> + * - Number of VMID bits from CPU
> + * - Hierarchical Permission Disables
> + * - Privileged Access Never
> + * - SError interrupt exceptions from speculative reads
> + * - Enhanced Translation Synchronization

As before, I think this is a mixture of "trivial" and "cannot trap"
features.

> + */
> +#define PVM_ID_AA64MMFR1_ALLOW (\
> +	FEATURE(ID_AA64MMFR1_HADBS) | \
> +	FEATURE(ID_AA64MMFR1_VMIDBITS) | \
> +	FEATURE(ID_AA64MMFR1_HPD) | \
> +	FEATURE(ID_AA64MMFR1_PAN) | \
> +	FEATURE(ID_AA64MMFR1_SPECSEI) | \
> +	FEATURE(ID_AA64MMFR1_ETS) \
> +	)
> +
> +/*
> + * These features are chosen because they are supported by KVM and to limit the
> + * confiruation state space and make it more deterministic.

<same comment>

> + * - Common not Private translations
> + * - User Access Override
> + * - IESB bit in the SCTLR_ELx registers
> + * - Unaligned single-copy atomicity and atomic functions
> + * - ESR_ELx.EC value on an exception by read access to feature ID space
> + * - TTL field in address operations.
> + * - Break-before-make sequences when changing translation block size
> + * - E0PDx mechanism
> + */
> +#define PVM_ID_AA64MMFR2_ALLOW (\
> +	FEATURE(ID_AA64MMFR2_CNP) | \
> +	FEATURE(ID_AA64MMFR2_UAO) | \
> +	FEATURE(ID_AA64MMFR2_IESB) | \
> +	FEATURE(ID_AA64MMFR2_AT) | \
> +	FEATURE(ID_AA64MMFR2_IDS) | \
> +	FEATURE(ID_AA64MMFR2_TTL) | \
> +	FEATURE(ID_AA64MMFR2_BBM) | \
> +	FEATURE(ID_AA64MMFR2_E0PD) \
> +	)
> +
> +/*
> + * Allow all features in this register because they are trivial to support, or
> + * are already supported by KVM:
> + * - LS64
> + * - XS
> + * - I8MM
> + * - DGB
> + * - BF16
> + * - SPECRES
> + * - SB
> + * - FRINTTS
> + * - PAuth
> + * - FPAC
> + * - LRCPC
> + * - FCMA
> + * - JSCVT
> + * - DPB
> + */
> +#define PVM_ID_AA64ISAR1_ALLOW (~0ULL)
> +
> +#endif /* __ARM64_KVM_FIXED_CONFIG_H__ */
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index ac67d5699c68..e1ceadd69575 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -780,6 +780,8 @@ static inline bool kvm_vm_is_protected(struct kvm *kvm)
>  	return false;
>  }
>  
> +void kvm_init_protected_traps(struct kvm_vcpu *vcpu);
> +
>  int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature);
>  bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
>  
> diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
> index 657d0c94cf82..3f4866322f85 100644
> --- a/arch/arm64/include/asm/kvm_hyp.h
> +++ b/arch/arm64/include/asm/kvm_hyp.h
> @@ -115,7 +115,10 @@ int __pkvm_init(phys_addr_t phys, unsigned long size, unsigned long nr_cpus,
>  void __noreturn __host_enter(struct kvm_cpu_context *host_ctxt);
>  #endif
>  
> +extern u64 kvm_nvhe_sym(id_aa64pfr0_el1_sys_val);
> +extern u64 kvm_nvhe_sym(id_aa64pfr1_el1_sys_val);
>  extern u64 kvm_nvhe_sym(id_aa64mmfr0_el1_sys_val);
>  extern u64 kvm_nvhe_sym(id_aa64mmfr1_el1_sys_val);
> +extern u64 kvm_nvhe_sym(id_aa64mmfr2_el1_sys_val);
>  
>  #endif /* __ARM64_KVM_HYP_H__ */
> diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
> index 989bb5dad2c8..0be63f5c495f 100644
> --- a/arch/arm64/kvm/Makefile
> +++ b/arch/arm64/kvm/Makefile
> @@ -14,7 +14,7 @@ kvm-y := $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o $(KVM)/eventfd.o \
>  	 $(KVM)/vfio.o $(KVM)/irqchip.o $(KVM)/binary_stats.o \
>  	 arm.o mmu.o mmio.o psci.o perf.o hypercalls.o pvtime.o \
>  	 inject_fault.o va_layout.o handle_exit.o \
> -	 guest.o debug.o reset.o sys_regs.o \
> +	 guest.o debug.o pkvm.o reset.o sys_regs.o \
>  	 vgic-sys-reg-v3.o fpsimd.o pmu.o \
>  	 arch_timer.o trng.o\
>  	 vgic/vgic.o vgic/vgic-init.o \
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index 14b12f2c08c0..3f28549aff0d 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -618,6 +618,14 @@ static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
>  
>  	ret = kvm_arm_pmu_v3_enable(vcpu);
>  
> +	/*
> +	 * Initialize traps for protected VMs.
> +	 * NOTE: Move  trap initialization to EL2 once the code is in place for
> +	 * maintaining protected VM state at EL2 instead of the host.
> +	 */
> +	if (kvm_vm_is_protected(kvm))
> +		kvm_init_protected_traps(vcpu);
> +
>  	return ret;
>  }
>  
> @@ -1781,8 +1789,11 @@ static int kvm_hyp_init_protection(u32 hyp_va_bits)
>  	void *addr = phys_to_virt(hyp_mem_base);
>  	int ret;
>  
> +	kvm_nvhe_sym(id_aa64pfr0_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
> +	kvm_nvhe_sym(id_aa64pfr1_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64PFR1_EL1);
>  	kvm_nvhe_sym(id_aa64mmfr0_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
>  	kvm_nvhe_sym(id_aa64mmfr1_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
> +	kvm_nvhe_sym(id_aa64mmfr2_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR2_EL1);
>  
>  	ret = create_hyp_mappings(addr, addr + hyp_mem_size, PAGE_HYP);
>  	if (ret)
> diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
> index 5df6193fc430..a23f417a0c20 100644
> --- a/arch/arm64/kvm/hyp/nvhe/Makefile
> +++ b/arch/arm64/kvm/hyp/nvhe/Makefile
> @@ -14,7 +14,7 @@ lib-objs := $(addprefix ../../../lib/, $(lib-objs))
>  
>  obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o host.o \
>  	 hyp-main.o hyp-smp.o psci-relay.o early_alloc.o stub.o page_alloc.o \
> -	 cache.o setup.o mm.o mem_protect.o
> +	 cache.o setup.o mm.o mem_protect.o sys_regs.o
>  obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \
>  	 ../fpsimd.o ../hyp-entry.o ../exception.o ../pgtable.o
>  obj-y += $(lib-objs)
> diff --git a/arch/arm64/kvm/hyp/nvhe/sys_regs.c b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
> new file mode 100644
> index 000000000000..6c7230aa70e9
> --- /dev/null
> +++ b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
> @@ -0,0 +1,443 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * Copyright (C) 2021 Google LLC
> + * Author: Fuad Tabba <tabba@google.com>
> + */
> +
> +#include <linux/kvm_host.h>
> +
> +#include <asm/kvm_asm.h>
> +#include <asm/kvm_emulate.h>
> +#include <asm/kvm_fixed_config.h>
> +#include <asm/kvm_mmu.h>
> +
> +#include <hyp/adjust_pc.h>
> +
> +#include "../../sys_regs.h"
> +
> +/*
> + * Copies of the host's CPU features registers holding sanitized values.
> + */
> +u64 id_aa64pfr0_el1_sys_val;
> +u64 id_aa64pfr1_el1_sys_val;
> +u64 id_aa64mmfr2_el1_sys_val;
> +
> +/*
> + * Inject an unknown/undefined exception to the guest.
> + */
> +static void inject_undef(struct kvm_vcpu *vcpu)
> +{
> +	u32 esr = (ESR_ELx_EC_UNKNOWN << ESR_ELx_EC_SHIFT);
> +
> +	vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA64_EL1 |
> +			     KVM_ARM64_EXCEPT_AA64_ELx_SYNC |
> +			     KVM_ARM64_PENDING_EXCEPTION);
> +
> +	__kvm_adjust_pc(vcpu);
> +
> +	write_sysreg_el1(esr, SYS_ESR);
> +	write_sysreg_el1(read_sysreg_el2(SYS_ELR), SYS_ELR);
> +}
> +
> +/*
> + * Accessor for undefined accesses.
> + */
> +static bool undef_access(struct kvm_vcpu *vcpu,
> +			 struct sys_reg_params *p,
> +			 const struct sys_reg_desc *r)
> +{
> +	inject_undef(vcpu);
> +	return false;
> +}
> +
> +/*
> + * Accessors for feature registers.
> + *
> + * If access is allowed, set the regval to the protected VM's view of the
> + * register and return true.
> + * Otherwise, inject an undefined exception and return false.
> + */
> +
> +/*
> + * Returns the minimum feature supported and allowed.
> + */
> +static u64 get_min_feature(u64 feature, u64 allowed_features,
> +			   u64 supported_features)
> +{
> +	const u64 allowed_feature = FIELD_GET(feature, allowed_features);
> +	const u64 supported_feature = FIELD_GET(feature, supported_features);
> +
> +	return min(allowed_feature, supported_feature);

Careful here: this is an unsigned comparison, yet some fields are signed.
cpufeature.c uses the S_ARM64_FTR_BITS and ARM64_FTR_BITS to declare signed
and unsigned fields respectively.

Will
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 126+ messages in thread


* Re: [PATCH v3 12/15] KVM: arm64: Move sanitized copies of CPU features
  2021-07-19 16:03   ` Fuad Tabba
  (?)
@ 2021-08-12  9:46     ` Will Deacon
  -1 siblings, 0 replies; 126+ messages in thread
From: Will Deacon @ 2021-08-12  9:46 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: kvmarm, maz, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team

On Mon, Jul 19, 2021 at 05:03:43PM +0100, Fuad Tabba wrote:
> Move the sanitized copies of the CPU feature registers to the
> recently created sys_regs.c. This consolidates all copies in a
> more relevant file.
> 
> No functional change intended.
> 
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
>  arch/arm64/kvm/hyp/nvhe/mem_protect.c | 6 ------
>  arch/arm64/kvm/hyp/nvhe/sys_regs.c    | 2 ++
>  2 files changed, 2 insertions(+), 6 deletions(-)

Acked-by: Will Deacon <will@kernel.org>

Will


* Re: [PATCH v3 06/15] KVM: arm64: Restore mdcr_el2 from vcpu
  2021-08-12  9:28           ` Fuad Tabba
  (?)
@ 2021-08-12  9:49             ` Will Deacon
  -1 siblings, 0 replies; 126+ messages in thread
From: Will Deacon @ 2021-08-12  9:49 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: Andrew Jones, kvmarm, maz, james.morse, alexandru.elisei,
	suzuki.poulose, mark.rutland, christoffer.dall, pbonzini,
	qperret, kvm, linux-arm-kernel, kernel-team

Hey Fuad,

On Thu, Aug 12, 2021 at 11:28:50AM +0200, Fuad Tabba wrote:
> On Thu, Aug 12, 2021 at 10:46 AM Will Deacon <will@kernel.org> wrote:
> >
> > On Wed, Jul 21, 2021 at 08:37:21AM +0100, Fuad Tabba wrote:
> > > On Tue, Jul 20, 2021 at 3:53 PM Andrew Jones <drjones@redhat.com> wrote:
> > > >
> > > > On Mon, Jul 19, 2021 at 05:03:37PM +0100, Fuad Tabba wrote:
> > > > > On deactivating traps, restore the value of mdcr_el2 from the
> > > > > newly created and preserved host value vcpu context, rather than
> > > > > directly reading the hardware register.
> > > > >
> > > > > Up until and including this patch the two values are the same,
> > > > > i.e., the hardware register and the vcpu one. A future patch will
> > > > > be changing the value of mdcr_el2 on activating traps, and this
> > > > > ensures that its value will be restored.
> > > > >
> > > > > No functional change intended.
> > > >
> > > > I'm probably missing something, but I can't convince myself that the host
> > > > will end up with the same mdcr_el2 value after deactivating traps after
> > > > this patch as before. We clearly now restore whatever we had when
> > > > activating traps (presumably whatever we configured at init_el2_state
> > > > time), but is that equivalent to what we had before with the masking and
> > > > ORing that this patch drops?
> > >
> > > You're right. I thought that these were actually being initialized to
> > > the same values, but having a closer look at the code the mdcr values
> > > are not the same as pre-patch. I will fix this.
> >
> > Can you elaborate on the issue here, please? I was just looking at this
> > but aren't you now relying on __init_el2_debug to configure this, which
> > should be fine?
> 
> I *think* that it should be fine, but as Drew pointed out, the host
> does not end up with the same mdcr_el2 value after the deactivation in
> this patch as it did after deactivation before this patch. In my v4
> (not sent out yet), I have fixed it to ensure that the host does end
> up with the same value as the one before this patch. That should make
> it easier to check that there's no functional change.
> 
> I'll look into it further, and if I can convince myself that there
> aren't any issues and that this patch makes the code cleaner, I will
> add it as a separate patch instead to make reviewing easier.

Cheers. I think the new code might actually be better, as things like
MDCR_EL2.E2PB are RES0 if SPE is not implemented. The init code takes care
to set those only if it probes SPE first, whereas the code you're removing
doesn't seem to check that.
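
For reference, the shape of the change being discussed is roughly the
following (sketch only; the vcpu field name is an assumption on my
part, not necessarily what the series uses):

static void __deactivate_traps_common(struct kvm_vcpu *vcpu)
{
	/*
	 * Old: rebuild the host's mdcr_el2 by read-modify-write of the
	 * live register:
	 *
	 *	mdcr_el2 = read_sysreg(mdcr_el2) & MDCR_EL2_HPMN_MASK;
	 *	...
	 *	write_sysreg(mdcr_el2, mdcr_el2);
	 *
	 * New: restore the host value saved when traps were activated,
	 * i.e. whatever init_el2_state originally configured.
	 */
	write_sysreg(vcpu->arch.mdcr_el2_host, mdcr_el2);
}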

Will


* Re: [PATCH v3 13/15] KVM: arm64: Trap access to pVM restricted features
  2021-07-19 16:03   ` Fuad Tabba
  (?)
@ 2021-08-12  9:53     ` Will Deacon
  -1 siblings, 0 replies; 126+ messages in thread
From: Will Deacon @ 2021-08-12  9:53 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: kvmarm, maz, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team

On Mon, Jul 19, 2021 at 05:03:44PM +0100, Fuad Tabba wrote:
> Trap accesses to restricted features for VMs running in protected
> mode.
> 
> Accesses to feature registers are emulated, and only supported
> features are exposed to protected VMs.
> 
> Accesses to restricted registers as well as restricted
> instructions are trapped, and an undefined exception is injected
> into the protected guests, i.e., with EC = 0x0 (unknown reason).
> This EC is the one used, according to the Arm Architecture
> Reference Manual, for unallocated or undefined system registers
> or instructions.
> 
> This only affects the functionality of protected VMs; it should
> not affect non-protected VMs when KVM is running in protected
> mode.
> 
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
>  arch/arm64/kvm/hyp/include/hyp/switch.h |  3 ++
>  arch/arm64/kvm/hyp/nvhe/switch.c        | 52 ++++++++++++++++++-------
>  2 files changed, 41 insertions(+), 14 deletions(-)
> 
> diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
> index 5a2b89b96c67..8431f1514280 100644
> --- a/arch/arm64/kvm/hyp/include/hyp/switch.h
> +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
> @@ -33,6 +33,9 @@
>  extern struct exception_table_entry __start___kvm_ex_table;
>  extern struct exception_table_entry __stop___kvm_ex_table;
>  
> +int kvm_handle_pvm_sys64(struct kvm_vcpu *vcpu);
> +int kvm_handle_pvm_restricted(struct kvm_vcpu *vcpu);
> +
>  /* Check whether the FP regs were dirtied while in the host-side run loop: */
>  static inline bool update_fp_enabled(struct kvm_vcpu *vcpu)
>  {
> diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
> index 36da423006bd..99bbbba90094 100644
> --- a/arch/arm64/kvm/hyp/nvhe/switch.c
> +++ b/arch/arm64/kvm/hyp/nvhe/switch.c
> @@ -158,30 +158,54 @@ static void __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt)
>  		write_sysreg(pmu->events_host, pmcntenset_el0);
>  }
>  
> +/**
> + * Handle system register accesses for protected VMs.
> + *
> + * Return 1 if handled, or 0 if not.
> + */
> +static int handle_pvm_sys64(struct kvm_vcpu *vcpu)
> +{
> +	return kvm_vm_is_protected(kern_hyp_va(vcpu->kvm)) ?
> +			     kvm_handle_pvm_sys64(vcpu) :
> +			     0;
> +}

Why don't we move the kvm_vm_is_protected() check into
kvm_get_hyp_exit_handler() so we can avoid adding it to each handler
instead?
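
Something like this, say (sketch only; the handler table and the lookup
names are assumed for illustration):

static exit_handler_fn kvm_get_hyp_exit_handler(struct kvm_vcpu *vcpu)
{
	/* Do the protected-VM check once here ... */
	if (!kvm_vm_is_protected(kern_hyp_va(vcpu->kvm)))
		return NULL;

	/* ... instead of repeating it in every handler. */
	return pvm_exit_handlers[kvm_vcpu_trap_get_class(vcpu)];
}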

Either way:

Acked-by: Will Deacon <will@kernel.org>

Will


* Re: [PATCH v3 14/15] KVM: arm64: Handle protected guests at 32 bits
  2021-07-19 16:03   ` Fuad Tabba
  (?)
@ 2021-08-12  9:57     ` Will Deacon
  -1 siblings, 0 replies; 126+ messages in thread
From: Will Deacon @ 2021-08-12  9:57 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: kvmarm, maz, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team

On Mon, Jul 19, 2021 at 05:03:45PM +0100, Fuad Tabba wrote:
> Protected KVM does not support protected AArch32 guests. However,
> it is possible for the guest to force itself to run in AArch32, potentially
> causing problems. Add an extra check so that if the hypervisor
> catches the guest doing that, it can prevent the guest from
> running again by resetting vcpu->arch.target and returning
> ARM_EXCEPTION_IL.
> 
> Adapted from commit 22f553842b14 ("KVM: arm64: Handle Asymmetric
> AArch32 systems")
> 
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
>  arch/arm64/kvm/hyp/include/hyp/switch.h | 24 ++++++++++++++++++++++++
>  1 file changed, 24 insertions(+)
> 
> diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
> index 8431f1514280..f09343e15a80 100644
> --- a/arch/arm64/kvm/hyp/include/hyp/switch.h
> +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
> @@ -23,6 +23,7 @@
>  #include <asm/kprobes.h>
>  #include <asm/kvm_asm.h>
>  #include <asm/kvm_emulate.h>
> +#include <asm/kvm_fixed_config.h>
>  #include <asm/kvm_hyp.h>
>  #include <asm/kvm_mmu.h>
>  #include <asm/fpsimd.h>
> @@ -477,6 +478,29 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
>  			write_sysreg_el2(read_sysreg_el2(SYS_ELR) - 4, SYS_ELR);
>  	}
>  
> +	/*
> +	 * Protected VMs might not be allowed to run in AArch32. The check below
> +	 * is based on the one in kvm_arch_vcpu_ioctl_run().
> +	 * The ARMv8 architecture doesn't give the hypervisor a mechanism to
> +	 * prevent a guest from dropping to AArch32 EL0 if implemented by the
> +	 * CPU. If the hypervisor spots a guest in such a state ensure it is
> +	 * handled, and don't trust the host to spot or fix it.
> +	 */
> +	if (unlikely(is_nvhe_hyp_code() &&
> +		     kvm_vm_is_protected(kern_hyp_va(vcpu->kvm)) &&
> +		     FIELD_GET(FEATURE(ID_AA64PFR0_EL0),
> +			       PVM_ID_AA64PFR0_ALLOW) <
> +			     ID_AA64PFR0_ELx_32BIT_64BIT &&
> +		     vcpu_mode_is_32bit(vcpu))) {
> +		/*
> +		 * As we have caught the guest red-handed, decide that it isn't
> +		 * fit for purpose anymore by making the vcpu invalid.
> +		 */
> +		vcpu->arch.target = -1;
> +		*exit_code = ARM_EXCEPTION_IL;
> +		goto exit;
> +	}

Would this be better off inside the nvhe-specific run loop? Seems like we
could elide fixup_guest_exit() altogether if we detect that we're in
AArch32 state when we shouldn't be, and it would keep the code off the shared
path.
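
Something like this in the nVHE __kvm_vcpu_run() loop, for example
(sketch only; pvm_aarch32_allowed() is an assumed wrapper around the
PVM_ID_AA64PFR0_ALLOW test above):

do {
	exit_code = __guest_enter(vcpu);

	/* Catch a protected guest red-handed in AArch32. */
	if (unlikely(kvm_vm_is_protected(kern_hyp_va(vcpu->kvm)) &&
		     vcpu_mode_is_32bit(vcpu) &&
		     !pvm_aarch32_allowed())) {
		vcpu->arch.target = -1;
		exit_code = ARM_EXCEPTION_IL;
		break;
	}
} while (fixup_guest_exit(vcpu, &exit_code));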

Will


* Re: [PATCH v3 15/15] KVM: arm64: Restrict protected VM capabilities
  2021-07-19 16:03   ` Fuad Tabba
  (?)
@ 2021-08-12  9:59     ` Will Deacon
  -1 siblings, 0 replies; 126+ messages in thread
From: Will Deacon @ 2021-08-12  9:59 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: kvmarm, maz, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team

On Mon, Jul 19, 2021 at 05:03:46PM +0100, Fuad Tabba wrote:
> Restrict protected VM capabilities based on the
> fixed configuration for protected VMs.
> 
> No functional change intended in current KVM-supported modes
> (nVHE, VHE).
> 
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
>  arch/arm64/include/asm/kvm_fixed_config.h | 10 ++++
>  arch/arm64/kvm/arm.c                      | 63 ++++++++++++++++++++++-
>  arch/arm64/kvm/pkvm.c                     | 30 +++++++++++
>  3 files changed, 102 insertions(+), 1 deletion(-)

This patch looks good to me, but I'd be inclined to add this to the user-ABI
series given that it's really all user-facing and, without a functional
kvm_vm_is_protected(), isn't serving much purpose.
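
For reference, the sort of clamping involved would be roughly (sketch
only; the 40-bit figure and the exact hook are assumptions on my part):

	case KVM_CAP_ARM_VM_IPA_SIZE:
		r = get_kvm_ipa_limit();
		/* Protected VMs use the fixed PARANGE from the config. */
		if (kvm && kvm_vm_is_protected(kvm))
			r = min_t(int, r, 40);
		break;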

Cheers,

Will


* Re: [PATCH v3 14/15] KVM: arm64: Handle protected guests at 32 bits
  2021-08-12  9:57     ` Will Deacon
  (?)
@ 2021-08-12 13:08       ` Fuad Tabba
  -1 siblings, 0 replies; 126+ messages in thread
From: Fuad Tabba @ 2021-08-12 13:08 UTC (permalink / raw)
  To: Will Deacon
  Cc: kvmarm, maz, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team

Hi Will,


On Thu, Aug 12, 2021 at 11:57 AM Will Deacon <will@kernel.org> wrote:
>
> On Mon, Jul 19, 2021 at 05:03:45PM +0100, Fuad Tabba wrote:
> > Protected KVM does not support protected AArch32 guests. However,
> > it is possible for the guest to force itself to run in AArch32, potentially
> > causing problems. Add an extra check so that if the hypervisor
> > catches the guest doing that, it can prevent the guest from
> > running again by resetting vcpu->arch.target and returning
> > ARM_EXCEPTION_IL.
> >
> > Adapted from commit 22f553842b14 ("KVM: arm64: Handle Asymmetric
> > AArch32 systems")
> >
> > Signed-off-by: Fuad Tabba <tabba@google.com>
> > ---
> >  arch/arm64/kvm/hyp/include/hyp/switch.h | 24 ++++++++++++++++++++++++
> >  1 file changed, 24 insertions(+)
> >
> > diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
> > index 8431f1514280..f09343e15a80 100644
> > --- a/arch/arm64/kvm/hyp/include/hyp/switch.h
> > +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
> > @@ -23,6 +23,7 @@
> >  #include <asm/kprobes.h>
> >  #include <asm/kvm_asm.h>
> >  #include <asm/kvm_emulate.h>
> > +#include <asm/kvm_fixed_config.h>
> >  #include <asm/kvm_hyp.h>
> >  #include <asm/kvm_mmu.h>
> >  #include <asm/fpsimd.h>
> > @@ -477,6 +478,29 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
> >                       write_sysreg_el2(read_sysreg_el2(SYS_ELR) - 4, SYS_ELR);
> >       }
> >
> > +     /*
> > +      * Protected VMs might not be allowed to run in AArch32. The check below
> > +      * is based on the one in kvm_arch_vcpu_ioctl_run().
> > +      * The ARMv8 architecture doesn't give the hypervisor a mechanism to
> > +      * prevent a guest from dropping to AArch32 EL0 if implemented by the
> > +      * CPU. If the hypervisor spots a guest in such a state ensure it is
> > +      * handled, and don't trust the host to spot or fix it.
> > +      */
> > +     if (unlikely(is_nvhe_hyp_code() &&
> > +                  kvm_vm_is_protected(kern_hyp_va(vcpu->kvm)) &&
> > +                  FIELD_GET(FEATURE(ID_AA64PFR0_EL0),
> > +                            PVM_ID_AA64PFR0_ALLOW) <
> > +                          ID_AA64PFR0_ELx_32BIT_64BIT &&
> > +                  vcpu_mode_is_32bit(vcpu))) {
> > +             /*
> > +              * As we have caught the guest red-handed, decide that it isn't
> > +              * fit for purpose anymore by making the vcpu invalid.
> > +              */
> > +             vcpu->arch.target = -1;
> > +             *exit_code = ARM_EXCEPTION_IL;
> > +             goto exit;
> > +     }
>
> Would this be better off inside the nvhe-specific run loop? Seems like we
> could elide fixup_guest_exit() altogether if we detect that we're in
> AArch32 state when we shouldn't be, and it would keep the code off the shared
> path.

Yes, it makes more sense and would result in cleaner code to have it
there, especially in the future where there's likely to be a separate
run loop for protected VMs. I'll move it.

Thanks,
/fuad
> Will


* Re: [PATCH v3 11/15] KVM: arm64: Add trap handlers for protected VMs
  2021-08-12  9:45     ` Will Deacon
@ 2021-08-16 14:39       ` Fuad Tabba
  0 siblings, 0 replies; 126+ messages in thread
From: Fuad Tabba @ 2021-08-16 14:39 UTC (permalink / raw)
  To: Will Deacon; +Cc: kernel-team, kvm, maz, pbonzini, kvmarm, linux-arm-kernel

Hi Will,

On Thu, Aug 12, 2021 at 11:46 AM Will Deacon <will@kernel.org> wrote:
>
> On Mon, Jul 19, 2021 at 05:03:42PM +0100, Fuad Tabba wrote:
> > Add trap handlers for protected VMs. These are mainly for Sys64
> > and debug traps.
> >
> > No functional change intended as these are not hooked in yet to
> > the guest exit handlers introduced earlier. So even when trapping
> > is triggered, the exit handlers would let the host handle it, as
> > before.
> >
> > Signed-off-by: Fuad Tabba <tabba@google.com>
> > ---
> >  arch/arm64/include/asm/kvm_fixed_config.h | 178 +++++++++
> >  arch/arm64/include/asm/kvm_host.h         |   2 +
> >  arch/arm64/include/asm/kvm_hyp.h          |   3 +
> >  arch/arm64/kvm/Makefile                   |   2 +-
> >  arch/arm64/kvm/arm.c                      |  11 +
> >  arch/arm64/kvm/hyp/nvhe/Makefile          |   2 +-
> >  arch/arm64/kvm/hyp/nvhe/sys_regs.c        | 443 ++++++++++++++++++++++
> >  arch/arm64/kvm/pkvm.c                     | 183 +++++++++
> >  8 files changed, 822 insertions(+), 2 deletions(-)
> >  create mode 100644 arch/arm64/include/asm/kvm_fixed_config.h
> >  create mode 100644 arch/arm64/kvm/hyp/nvhe/sys_regs.c
> >  create mode 100644 arch/arm64/kvm/pkvm.c
> >
> > diff --git a/arch/arm64/include/asm/kvm_fixed_config.h b/arch/arm64/include/asm/kvm_fixed_config.h
> > new file mode 100644
> > index 000000000000..b39a5de2c4b9
> > --- /dev/null
> > +++ b/arch/arm64/include/asm/kvm_fixed_config.h
> > @@ -0,0 +1,178 @@
> > +/* SPDX-License-Identifier: GPL-2.0-only */
> > +/*
> > + * Copyright (C) 2021 Google LLC
> > + * Author: Fuad Tabba <tabba@google.com>
> > + */
> > +
> > +#ifndef __ARM64_KVM_FIXED_CONFIG_H__
> > +#define __ARM64_KVM_FIXED_CONFIG_H__
> > +
> > +#include <asm/sysreg.h>
> > +
> > +/*
> > + * This file contains definitions for features to be allowed or restricted for
> > + * guest virtual machines as a baseline, depending on what mode KVM is running
> > + * in and on the type of guest is running.
>
> s/is running/that is running/

Ack.

> > + *
> > + * The features are represented as the highest allowed value for a feature in
> > + * the feature id registers. If the field is set to all ones (i.e., 0b1111),
> > + * then it's only restricted by what the system allows. If the feature is set to
> > + * another value, then that value would be the maximum value allowed and
> > + * supported in pKVM, even if the system supports a higher value.
>
> Given that some fields are signed whereas others are unsigned, I think the
> wording could be a bit tighter here when it refers to "maximum".
>
> > + *
> > + * Some features are forced to a certain value, in which case a SET bitmap is
> > + * used to force these values.
> > + */
> > +
> > +
> > +/*
> > + * Allowed features for protected guests (Protected KVM)
> > + *
> > + * The approach taken here is to allow features that are:
> > + * - needed by common Linux distributions (e.g., flooating point)
>
> s/flooating/floating
Ack.

> > + * - are trivial, e.g., supporting the feature doesn't introduce or require the
> > + * tracking of additional state
> ... in KVM.

Ack.

> > + * - not trapable
>
> s/not trapable/cannot be trapped/
Ack

> > + */
> > +
> > +/*
> > + * - Floating-point and Advanced SIMD:
> > + *   Don't require much support other than maintaining the context, which KVM
> > + *   already has.
>
> I'd rework this sentence. We have to support fpsimd because Linux guests
> rely on it.

Ack

> > + * - AArch64 guests only (no support for AArch32 guests):
> > + *   Simplify support in case of asymmetric AArch32 systems.
>
> I don't think asymmetric systems come into this really; AArch32 on its
> own adds lots of complexity in trap handling, emulation, condition codes
> etc. Restricting guests to AArch64 means we don't have to worry about the
> AArch32 exception model or emulation of 32-bit instructions.

Ack

> > + * - RAS (v1)
> > + *   v1 doesn't require much additional support, but later versions do.
>
> Be more specific?

Ack

> > + * - Data Independent Timing
> > + *   Trivial
> > + * Remaining features are not supported either because they require too much
> > + * support from KVM, or risk leaking guest data.
>
> I think we should drop this sentence -- it makes it sound like we can't
> be arsed :)

Ack.

> > + */
> > +#define PVM_ID_AA64PFR0_ALLOW (\
> > +     FEATURE(ID_AA64PFR0_FP) | \
> > +     FIELD_PREP(FEATURE(ID_AA64PFR0_EL0), ID_AA64PFR0_ELx_64BIT_ONLY) | \
> > +     FIELD_PREP(FEATURE(ID_AA64PFR0_EL1), ID_AA64PFR0_ELx_64BIT_ONLY) | \
> > +     FIELD_PREP(FEATURE(ID_AA64PFR0_EL2), ID_AA64PFR0_ELx_64BIT_ONLY) | \
> > +     FIELD_PREP(FEATURE(ID_AA64PFR0_EL3), ID_AA64PFR0_ELx_64BIT_ONLY) | \
> > +     FIELD_PREP(FEATURE(ID_AA64PFR0_RAS), ID_AA64PFR0_RAS_V1) | \
>
> I think having the FIELD_PREP entries in the ALLOW mask is quite confusing
> here -- naively you would expect to be able to bitwise-and the host register
> value with the ALLOW mask and get the sanitised version back, but with these
> here you have to go field-by-field to compute the common value.
>
> So perhaps move those into a PVM_ID_AA64PFR0_RESTRICT mask or something?
> Then pvm_access_id_aa64pfr0() will become a little easier to read, I think.

I agree. I've reworked it, and it simplifies the code and makes it
easier to read.
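
The reworked shape is roughly this (a sketch -- v4 may differ in the
details):

/*
 * Sketch: ALLOW stays a pure bitmask of fields passed through from
 * the host, while the capped fields move into a separate mask holding
 * the maximum allowed (unsigned) value for each field.
 */
#define PVM_ID_AA64PFR0_RESTRICT_UNSIGNED (\
	FIELD_PREP(FEATURE(ID_AA64PFR0_EL0), ID_AA64PFR0_ELx_64BIT_ONLY) | \
	FIELD_PREP(FEATURE(ID_AA64PFR0_EL1), ID_AA64PFR0_ELx_64BIT_ONLY) | \
	FIELD_PREP(FEATURE(ID_AA64PFR0_EL2), ID_AA64PFR0_ELx_64BIT_ONLY) | \
	FIELD_PREP(FEATURE(ID_AA64PFR0_EL3), ID_AA64PFR0_ELx_64BIT_ONLY) | \
	FIELD_PREP(FEATURE(ID_AA64PFR0_RAS), ID_AA64PFR0_RAS_V1) \
	)

pvm_access_id_aa64pfr0() then reduces to a bitwise-and with the ALLOW
mask followed by a per-field min() against the RESTRICT mask.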

> > +     FEATURE(ID_AA64PFR0_ASIMD) | \
> > +     FEATURE(ID_AA64PFR0_DIT) \
> > +     )
> > +
> > +/*
> > + * - Branch Target Identification
> > + * - Speculative Store Bypassing
> > + *   These features are trivial to support
> > + */
> > +#define PVM_ID_AA64PFR1_ALLOW (\
> > +     FEATURE(ID_AA64PFR1_BT) | \
> > +     FEATURE(ID_AA64PFR1_SSBS) \
> > +     )
> > +
> > +/*
> > + * No support for Scalable Vectors:
> > + *   Requires additional support from KVM
>
> Perhaps expand on "support" here? E.g. "context-switching and trapping
> support at EL2".

Ack.

> > + */
> > +#define PVM_ID_AA64ZFR0_ALLOW (0ULL)
> > +
> > +/*
> > + * No support for debug, including breakpoints, and watchpoints:
> > + *   Reduce complexity and avoid exposing/leaking guest data
> > + *
> > + * NOTE: The Arm architecture mandates support for at least the Armv8 debug
> > + * architecture, which would include at least 2 hardware breakpoints and
> > + * watchpoints. Providing that support to protected guests adds considerable
> > + * state and complexity, and risks leaking guest data. Therefore, the reserved
> > + * value of 0 is used for debug-related fields.
> > + */
>
> I think the complexity of the debug architecture is a good reason to avoid
> exposing it here, but I don't understand how providing breakpoints or
> watchpoints to a guest could risk leaking guest data. What is the specific
> threat here?

I mixed up the various debug and trace features here. Will fix the comment.

> > +#define PVM_ID_AA64DFR0_ALLOW (0ULL)
> > +
> > +/*
> > + * These features are chosen because they are supported by KVM and to limit the
> > + * confiruation state space and make it more deterministic.
>
> s/confiruation/configuration/
>
> However, I don't agree that this provides determinism since we're not
> forcing any particular values, but rather filtering the values from the
> host.

Ack

> > + * - 40-bit IPA
>
> This seems more about not supporting KVM_CAP_ARM_VM_IPA_SIZE for now.
>
> > + * - 16-bit ASID
> > + * - Mixed-endian
> > + * - Distinction between Secure and Non-secure Memory
> > + * - Mixed-endian at EL0 only
> > + * - Non-context synchronizing exception entry and exit
>
> These all seem to fall into the "cannot trap" category, so we just advertise
> whatever we've got.

Ack.


>
> > + */
> > +#define PVM_ID_AA64MMFR0_ALLOW (\
> > +     FIELD_PREP(FEATURE(ID_AA64MMFR0_PARANGE), ID_AA64MMFR0_PARANGE_40) | \
> > +     FIELD_PREP(FEATURE(ID_AA64MMFR0_ASID), ID_AA64MMFR0_ASID_16) | \
> > +     FEATURE(ID_AA64MMFR0_BIGENDEL) | \
> > +     FEATURE(ID_AA64MMFR0_SNSMEM) | \
> > +     FEATURE(ID_AA64MMFR0_BIGENDEL0) | \
> > +     FEATURE(ID_AA64MMFR0_EXS) \
> > +     )
> > +
> > +/*
> > + * - 64KB granule not supported
> > + */
> > +#define PVM_ID_AA64MMFR0_SET (\
> > +     FIELD_PREP(FEATURE(ID_AA64MMFR0_TGRAN64), ID_AA64MMFR0_TGRAN64_NI) \
> > +     )
>
> Why not, and can we actually prevent the guest from doing that?

We cannot prevent the guest from doing it. The initial reasoning was
that there isn't a clear use case for it, but since we cannot actually
prevent it, I'll unhide it.
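
Concretely -- and ignoring for the moment your separate point about
the FIELD_PREP entries in ALLOW masks -- the plan is to drop the SET
entry and let the host's TGRAN64 value through instead, i.e.
something like:

#define PVM_ID_AA64MMFR0_ALLOW (\
	FIELD_PREP(FEATURE(ID_AA64MMFR0_PARANGE), ID_AA64MMFR0_PARANGE_40) | \
	FIELD_PREP(FEATURE(ID_AA64MMFR0_ASID), ID_AA64MMFR0_ASID_16) | \
	FEATURE(ID_AA64MMFR0_TGRAN64) | \
	FEATURE(ID_AA64MMFR0_BIGENDEL) | \
	FEATURE(ID_AA64MMFR0_SNSMEM) | \
	FEATURE(ID_AA64MMFR0_BIGENDEL0) | \
	FEATURE(ID_AA64MMFR0_EXS) \
	)

with PVM_ID_AA64MMFR0_SET going away entirely (the final v4 form may
differ).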

> > +/*
> > + * These features are chosen because they are supported by KVM and to limit the
> > + * confiruation state space and make it more deterministic.
>
> It's that typo again ;) But my comment from before still applies -- I don't
> think an ALLOW mask adds hugely to the determinism.

Ack

> > + * - Hardware translation table updates to Access flag and Dirty state
> > + * - Number of VMID bits from CPU
> > + * - Hierarchical Permission Disables
> > + * - Privileged Access Never
> > + * - SError interrupt exceptions from speculative reads
> > + * - Enhanced Translation Synchronization
>
> As before, I think this is a mixture of "trivial" and "cannot trap"
> features.

Ack

> > + */
> > +#define PVM_ID_AA64MMFR1_ALLOW (\
> > +     FEATURE(ID_AA64MMFR1_HADBS) | \
> > +     FEATURE(ID_AA64MMFR1_VMIDBITS) | \
> > +     FEATURE(ID_AA64MMFR1_HPD) | \
> > +     FEATURE(ID_AA64MMFR1_PAN) | \
> > +     FEATURE(ID_AA64MMFR1_SPECSEI) | \
> > +     FEATURE(ID_AA64MMFR1_ETS) \
> > +     )
> > +
> > +/*
> > + * These features are chosen because they are supported by KVM and to limit the
> > + * confiruation state space and make it more deterministic.
>
> <same comment>

Ack
> > + * - Common not Private translations
> > + * - User Access Override
> > + * - IESB bit in the SCTLR_ELx registers
> > + * - Unaligned single-copy atomicity and atomic functions
> > + * - ESR_ELx.EC value on an exception by read access to feature ID space
> > + * - TTL field in address operations.
> > + * - Break-before-make sequences when changing translation block size
> > + * - E0PDx mechanism
> > + */
> > +#define PVM_ID_AA64MMFR2_ALLOW (\
> > +     FEATURE(ID_AA64MMFR2_CNP) | \
> > +     FEATURE(ID_AA64MMFR2_UAO) | \
> > +     FEATURE(ID_AA64MMFR2_IESB) | \
> > +     FEATURE(ID_AA64MMFR2_AT) | \
> > +     FEATURE(ID_AA64MMFR2_IDS) | \
> > +     FEATURE(ID_AA64MMFR2_TTL) | \
> > +     FEATURE(ID_AA64MMFR2_BBM) | \
> > +     FEATURE(ID_AA64MMFR2_E0PD) \
> > +     )
> > +
> > +/*
> > + * Allow all features in this register because they are trivial to support, or
> > + * are already supported by KVM:
> > + * - LS64
> > + * - XS
> > + * - I8MM
> > + * - DGB
> > + * - BF16
> > + * - SPECRES
> > + * - SB
> > + * - FRINTTS
> > + * - PAuth
> > + * - FPAC
> > + * - LRCPC
> > + * - FCMA
> > + * - JSCVT
> > + * - DPB
> > + */
> > +#define PVM_ID_AA64ISAR1_ALLOW (~0ULL)
> > +
> > +#endif /* __ARM64_KVM_FIXED_CONFIG_H__ */
> > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > index ac67d5699c68..e1ceadd69575 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -780,6 +780,8 @@ static inline bool kvm_vm_is_protected(struct kvm *kvm)
> >       return false;
> >  }
> >
> > +void kvm_init_protected_traps(struct kvm_vcpu *vcpu);
> > +
> >  int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature);
> >  bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
> >
> > diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
> > index 657d0c94cf82..3f4866322f85 100644
> > --- a/arch/arm64/include/asm/kvm_hyp.h
> > +++ b/arch/arm64/include/asm/kvm_hyp.h
> > @@ -115,7 +115,10 @@ int __pkvm_init(phys_addr_t phys, unsigned long size, unsigned long nr_cpus,
> >  void __noreturn __host_enter(struct kvm_cpu_context *host_ctxt);
> >  #endif
> >
> > +extern u64 kvm_nvhe_sym(id_aa64pfr0_el1_sys_val);
> > +extern u64 kvm_nvhe_sym(id_aa64pfr1_el1_sys_val);
> >  extern u64 kvm_nvhe_sym(id_aa64mmfr0_el1_sys_val);
> >  extern u64 kvm_nvhe_sym(id_aa64mmfr1_el1_sys_val);
> > +extern u64 kvm_nvhe_sym(id_aa64mmfr2_el1_sys_val);
> >
> >  #endif /* __ARM64_KVM_HYP_H__ */
> > diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
> > index 989bb5dad2c8..0be63f5c495f 100644
> > --- a/arch/arm64/kvm/Makefile
> > +++ b/arch/arm64/kvm/Makefile
> > @@ -14,7 +14,7 @@ kvm-y := $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o $(KVM)/eventfd.o \
> >        $(KVM)/vfio.o $(KVM)/irqchip.o $(KVM)/binary_stats.o \
> >        arm.o mmu.o mmio.o psci.o perf.o hypercalls.o pvtime.o \
> >        inject_fault.o va_layout.o handle_exit.o \
> > -      guest.o debug.o reset.o sys_regs.o \
> > +      guest.o debug.o pkvm.o reset.o sys_regs.o \
> >        vgic-sys-reg-v3.o fpsimd.o pmu.o \
> >        arch_timer.o trng.o\
> >        vgic/vgic.o vgic/vgic-init.o \
> > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > index 14b12f2c08c0..3f28549aff0d 100644
> > --- a/arch/arm64/kvm/arm.c
> > +++ b/arch/arm64/kvm/arm.c
> > @@ -618,6 +618,14 @@ static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
> >
> >       ret = kvm_arm_pmu_v3_enable(vcpu);
> >
> > +     /*
> > +      * Initialize traps for protected VMs.
> > +      * NOTE: Move  trap initialization to EL2 once the code is in place for
> > +      * maintaining protected VM state at EL2 instead of the host.
> > +      */
> > +     if (kvm_vm_is_protected(kvm))
> > +             kvm_init_protected_traps(vcpu);
> > +
> >       return ret;
> >  }
> >
> > @@ -1781,8 +1789,11 @@ static int kvm_hyp_init_protection(u32 hyp_va_bits)
> >       void *addr = phys_to_virt(hyp_mem_base);
> >       int ret;
> >
> > +     kvm_nvhe_sym(id_aa64pfr0_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
> > +     kvm_nvhe_sym(id_aa64pfr1_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64PFR1_EL1);
> >       kvm_nvhe_sym(id_aa64mmfr0_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
> >       kvm_nvhe_sym(id_aa64mmfr1_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
> > +     kvm_nvhe_sym(id_aa64mmfr2_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR2_EL1);
> >
> >       ret = create_hyp_mappings(addr, addr + hyp_mem_size, PAGE_HYP);
> >       if (ret)
> > diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
> > index 5df6193fc430..a23f417a0c20 100644
> > --- a/arch/arm64/kvm/hyp/nvhe/Makefile
> > +++ b/arch/arm64/kvm/hyp/nvhe/Makefile
> > @@ -14,7 +14,7 @@ lib-objs := $(addprefix ../../../lib/, $(lib-objs))
> >
> >  obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o host.o \
> >        hyp-main.o hyp-smp.o psci-relay.o early_alloc.o stub.o page_alloc.o \
> > -      cache.o setup.o mm.o mem_protect.o
> > +      cache.o setup.o mm.o mem_protect.o sys_regs.o
> >  obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \
> >        ../fpsimd.o ../hyp-entry.o ../exception.o ../pgtable.o
> >  obj-y += $(lib-objs)
> > diff --git a/arch/arm64/kvm/hyp/nvhe/sys_regs.c b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
> > new file mode 100644
> > index 000000000000..6c7230aa70e9
> > --- /dev/null
> > +++ b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
> > @@ -0,0 +1,443 @@
> > +// SPDX-License-Identifier: GPL-2.0-only
> > +/*
> > + * Copyright (C) 2021 Google LLC
> > + * Author: Fuad Tabba <tabba@google.com>
> > + */
> > +
> > +#include <linux/kvm_host.h>
> > +
> > +#include <asm/kvm_asm.h>
> > +#include <asm/kvm_emulate.h>
> > +#include <asm/kvm_fixed_config.h>
> > +#include <asm/kvm_mmu.h>
> > +
> > +#include <hyp/adjust_pc.h>
> > +
> > +#include "../../sys_regs.h"
> > +
> > +/*
> > + * Copies of the host's CPU features registers holding sanitized values.
> > + */
> > +u64 id_aa64pfr0_el1_sys_val;
> > +u64 id_aa64pfr1_el1_sys_val;
> > +u64 id_aa64mmfr2_el1_sys_val;
> > +
> > +/*
> > + * Inject an unknown/undefined exception to the guest.
> > + */
> > +static void inject_undef(struct kvm_vcpu *vcpu)
> > +{
> > +     u32 esr = (ESR_ELx_EC_UNKNOWN << ESR_ELx_EC_SHIFT);
> > +
> > +     vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA64_EL1 |
> > +                          KVM_ARM64_EXCEPT_AA64_ELx_SYNC |
> > +                          KVM_ARM64_PENDING_EXCEPTION);
> > +
> > +     __kvm_adjust_pc(vcpu);
> > +
> > +     write_sysreg_el1(esr, SYS_ESR);
> > +     write_sysreg_el1(read_sysreg_el2(SYS_ELR), SYS_ELR);
> > +}
> > +
> > +/*
> > + * Accessor for undefined accesses.
> > + */
> > +static bool undef_access(struct kvm_vcpu *vcpu,
> > +                      struct sys_reg_params *p,
> > +                      const struct sys_reg_desc *r)
> > +{
> > +     inject_undef(vcpu);
> > +     return false;
> > +}
> > +
> > +/*
> > + * Accessors for feature registers.
> > + *
> > + * If access is allowed, set the regval to the protected VM's view of the
> > + * register and return true.
> > + * Otherwise, inject an undefined exception and return false.
> > + */
> > +
> > +/*
> > + * Returns the minimum feature supported and allowed.
> > + */
> > +static u64 get_min_feature(u64 feature, u64 allowed_features,
> > +                        u64 supported_features)
"> > +{
> > +     const u64 allowed_feature = FIELD_GET(feature, allowed_features);
> > +     const u64 supported_feature = FIELD_GET(feature, supported_features);
> > +
> > +     return min(allowed_feature, supported_feature);
>
> Careful here: this is an unsigned comparison, yet some fields are signed.
> cpufeature.c uses the S_ARM64_FTR_BITS and ARM64_FTR_BITS to declare signed
> and unsigned fields respectively.

I completely missed that! It's described in "D13.1.3 Principles of the
ID scheme for fields in ID registers" of the Arm Architecture
Reference Manual. Fortunately, all of the features I'm working with
are unsigned. However, I will fix it in v4 so that, should we add a
signed feature later, it is clear that it needs to be handled
differently.
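
Something along these lines, using the existing cpufeature helpers,
should cover both cases. This is an untested sketch; the wrapper name
is just for illustration, and note that these helpers take the
field's shift rather than a FIELD_GET-style mask:

/*
 * Sketch: return the more restrictive of the allowed and supported
 * values of an ID register field, honouring the field's signedness.
 * ID register fields are 4 bits wide, so the unsigned value always
 * fits in the s64 return type.
 */
static s64 get_restricted_feature(int shift, bool is_signed,
				  u64 allowed, u64 supported)
{
	if (is_signed)
		return min(cpuid_feature_extract_signed_field(allowed, shift),
			   cpuid_feature_extract_signed_field(supported, shift));

	return min(cpuid_feature_extract_unsigned_field(allowed, shift),
		   cpuid_feature_extract_unsigned_field(supported, shift));
}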

Thanks!

/fuad

> Will
>
> --
> To unsubscribe from this group and stop receiving emails from it, send an email to kernel-team+unsubscribe@android.com.
>
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 126+ messages in thread

* Re: [PATCH v3 11/15] KVM: arm64: Add trap handlers for protected VMs
@ 2021-08-16 14:39       ` Fuad Tabba
  0 siblings, 0 replies; 126+ messages in thread
From: Fuad Tabba @ 2021-08-16 14:39 UTC (permalink / raw)
  To: Will Deacon
  Cc: kvmarm, maz, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team

Hi Will,

On Thu, Aug 12, 2021 at 11:46 AM Will Deacon <will@kernel.org> wrote:
>
> On Mon, Jul 19, 2021 at 05:03:42PM +0100, Fuad Tabba wrote:
> > Add trap handlers for protected VMs. These are mainly for Sys64
> > and debug traps.
> >
> > No functional change intended as these are not hooked in yet to
> > the guest exit handlers introduced earlier. So even when trapping
> > is triggered, the exit handlers would let the host handle it, as
> > before.
> >
> > Signed-off-by: Fuad Tabba <tabba@google.com>
> > ---
> >  arch/arm64/include/asm/kvm_fixed_config.h | 178 +++++++++
> >  arch/arm64/include/asm/kvm_host.h         |   2 +
> >  arch/arm64/include/asm/kvm_hyp.h          |   3 +
> >  arch/arm64/kvm/Makefile                   |   2 +-
> >  arch/arm64/kvm/arm.c                      |  11 +
> >  arch/arm64/kvm/hyp/nvhe/Makefile          |   2 +-
> >  arch/arm64/kvm/hyp/nvhe/sys_regs.c        | 443 ++++++++++++++++++++++
> >  arch/arm64/kvm/pkvm.c                     | 183 +++++++++
> >  8 files changed, 822 insertions(+), 2 deletions(-)
> >  create mode 100644 arch/arm64/include/asm/kvm_fixed_config.h
> >  create mode 100644 arch/arm64/kvm/hyp/nvhe/sys_regs.c
> >  create mode 100644 arch/arm64/kvm/pkvm.c
> >
> > diff --git a/arch/arm64/include/asm/kvm_fixed_config.h b/arch/arm64/include/asm/kvm_fixed_config.h
> > new file mode 100644
> > index 000000000000..b39a5de2c4b9
> > --- /dev/null
> > +++ b/arch/arm64/include/asm/kvm_fixed_config.h
> > @@ -0,0 +1,178 @@
> > +/* SPDX-License-Identifier: GPL-2.0-only */
> > +/*
> > + * Copyright (C) 2021 Google LLC
> > + * Author: Fuad Tabba <tabba@google.com>
> > + */
> > +
> > +#ifndef __ARM64_KVM_FIXED_CONFIG_H__
> > +#define __ARM64_KVM_FIXED_CONFIG_H__
> > +
> > +#include <asm/sysreg.h>
> > +
> > +/*
> > + * This file contains definitions for features to be allowed or restricted for
> > + * guest virtual machines as a baseline, depending on what mode KVM is running
> > + * in and on the type of guest is running.
>
> s/is running/that is running/

Ack.

> > + *
> > + * The features are represented as the highest allowed value for a feature in
> > + * the feature id registers. If the field is set to all ones (i.e., 0b1111),
> > + * then it's only restricted by what the system allows. If the feature is set to
> > + * another value, then that value would be the maximum value allowed and
> > + * supported in pKVM, even if the system supports a higher value.
>
> Given that some fields are signed whereas others are unsigned, I think the
> wording could be a bit tighter here when it refers to "maximum".
>
> > + *
> > + * Some features are forced to a certain value, in which case a SET bitmap is
> > + * used to force these values.
> > + */
> > +
> > +
> > +/*
> > + * Allowed features for protected guests (Protected KVM)
> > + *
> > + * The approach taken here is to allow features that are:
> > + * - needed by common Linux distributions (e.g., flooating point)
>
> s/flooating/floating
Ack.

> > + * - are trivial, e.g., supporting the feature doesn't introduce or require the
> > + * tracking of additional state
> ... in KVM.

Ack.

> > + * - not trapable
>
> s/not trapable/cannot be trapped/
Ack

> > + */
> > +
> > +/*
> > + * - Floating-point and Advanced SIMD:
> > + *   Don't require much support other than maintaining the context, which KVM
> > + *   already has.
>
> I'd rework this sentence. We have to support fpsimd because Linux guests
> rely on it.

Ack

> > + * - AArch64 guests only (no support for AArch32 guests):
> > + *   Simplify support in case of asymmetric AArch32 systems.
>
> I don't think asymmetric systems come into this really; AArch32 on its
> own adds lots of complexity in trap handling, emulation, condition codes
> etc. Restricting guests to AArch64 means we don't have to worry about the
> AArch32 exception model or emulation of 32-bit instructions.

Ack

> > + * - RAS (v1)
> > + *   v1 doesn't require much additional support, but later versions do.
>
> Be more specific?

Ack

> > + * - Data Independent Timing
> > + *   Trivial
> > + * Remaining features are not supported either because they require too much
> > + * support from KVM, or risk leaking guest data.
>
> I think we should drop this sentence -- it makes it sounds like we can't
> be arsed :)

Ack.

> > + */
> > +#define PVM_ID_AA64PFR0_ALLOW (\
> > +     FEATURE(ID_AA64PFR0_FP) | \
> > +     FIELD_PREP(FEATURE(ID_AA64PFR0_EL0), ID_AA64PFR0_ELx_64BIT_ONLY) | \
> > +     FIELD_PREP(FEATURE(ID_AA64PFR0_EL1), ID_AA64PFR0_ELx_64BIT_ONLY) | \
> > +     FIELD_PREP(FEATURE(ID_AA64PFR0_EL2), ID_AA64PFR0_ELx_64BIT_ONLY) | \
> > +     FIELD_PREP(FEATURE(ID_AA64PFR0_EL3), ID_AA64PFR0_ELx_64BIT_ONLY) | \
> > +     FIELD_PREP(FEATURE(ID_AA64PFR0_RAS), ID_AA64PFR0_RAS_V1) | \
>
> I think having the FIELD_PREP entries in the ALLOW mask is quite confusing
> here -- naively you would expect to be able to bitwise-and the host register
> value with the ALLOW mask and get the sanitised version back, but with these
> here you have to go field-by-field to compute the common value.
>
> So perhaps move those into a PVM_ID_AA64PFR0_RESTRICT mask or something?
> Then pvm_access_id_aa64pfr0() will become a little easier to read, I think.

I agree. I've reworked it, and it simplifies the code and makes it
easier to read.

> > +     FEATURE(ID_AA64PFR0_ASIMD) | \
> > +     FEATURE(ID_AA64PFR0_DIT) \
> > +     )
> > +
> > +/*
> > + * - Branch Target Identification
> > + * - Speculative Store Bypassing
> > + *   These features are trivial to support
> > + */
> > +#define PVM_ID_AA64PFR1_ALLOW (\
> > +     FEATURE(ID_AA64PFR1_BT) | \
> > +     FEATURE(ID_AA64PFR1_SSBS) \
> > +     )
> > +
> > +/*
> > + * No support for Scalable Vectors:
> > + *   Requires additional support from KVM
>
> Perhaps expand on "support" here? E.g. "context-switching and trapping
> support at EL2".

Ack.

> > + */
> > +#define PVM_ID_AA64ZFR0_ALLOW (0ULL)
> > +
> > +/*
> > + * No support for debug, including breakpoints, and watchpoints:
> > + *   Reduce complexity and avoid exposing/leaking guest data
> > + *
> > + * NOTE: The Arm architecture mandates support for at least the Armv8 debug
> > + * architecture, which would include at least 2 hardware breakpoints and
> > + * watchpoints. Providing that support to protected guests adds considerable
> > + * state and complexity, and risks leaking guest data. Therefore, the reserved
> > + * value of 0 is used for debug-related fields.
> > + */
>
> I think the complexity of the debug architecture is a good reason to avoid
> exposing it here, but I don't understand how providing breakpoints or
> watchpoints to a guest could risk leaking guest data. What is the specific
> threat here?

I mixed up the various debug and trace features here. Will fix the comment.

> > +#define PVM_ID_AA64DFR0_ALLOW (0ULL)
> > +
> > +/*
> > + * These features are chosen because they are supported by KVM and to limit the
> > + * confiruation state space and make it more deterministic.
>
> s/confiruation/configuration/
>
> However, I don't agree that this provides determinism since we're not
> forcing any particular values, but rather filtering the values from the
> host.

Ack

> > + * - 40-bit IPA
>
> This seems more about not supporting KVM_CAP_ARM_VM_IPA_SIZE for now.
>
> > + * - 16-bit ASID
> > + * - Mixed-endian
> > + * - Distinction between Secure and Non-secure Memory
> > + * - Mixed-endian at EL0 only
> > + * - Non-context synchronizing exception entry and exit
>
> These all seem to fall into the "cannot trap" category, so we just advertise
> whatever we've got.

Ack.


>
> > + */
> > +#define PVM_ID_AA64MMFR0_ALLOW (\
> > +     FIELD_PREP(FEATURE(ID_AA64MMFR0_PARANGE), ID_AA64MMFR0_PARANGE_40) | \
> > +     FIELD_PREP(FEATURE(ID_AA64MMFR0_ASID), ID_AA64MMFR0_ASID_16) | \
> > +     FEATURE(ID_AA64MMFR0_BIGENDEL) | \
> > +     FEATURE(ID_AA64MMFR0_SNSMEM) | \
> > +     FEATURE(ID_AA64MMFR0_BIGENDEL0) | \
> > +     FEATURE(ID_AA64MMFR0_EXS) \
> > +     )
> > +
> > +/*
> > + * - 64KB granule not supported
> > + */
> > +#define PVM_ID_AA64MMFR0_SET (\
> > +     FIELD_PREP(FEATURE(ID_AA64MMFR0_TGRAN64), ID_AA64MMFR0_TGRAN64_NI) \
> > +     )
>
> Why not, and can we actually prevent the guest from doing that?

We cannot prevent the guest from doing it. Initial reasoning was that
there isn't a clear use case for it, but since we cannot prevent the
guest from doing that, I'll unhide it.

> > +/*
> > + * These features are chosen because they are supported by KVM and to limit the
> > + * confiruation state space and make it more deterministic.
>
> It's that typo again ;) But my comment from before still applies -- I don't
> think an ALLOW mask adds hugely to the determinism.

Ack

> > + * - Hardware translation table updates to Access flag and Dirty state
> > + * - Number of VMID bits from CPU
> > + * - Hierarchical Permission Disables
> > + * - Privileged Access Never
> > + * - SError interrupt exceptions from speculative reads
> > + * - Enhanced Translation Synchronization
>
> As before, I think this is a mixture of "trivial" and "cannot trap"
> features.

Ack

> > + */
> > +#define PVM_ID_AA64MMFR1_ALLOW (\
> > +     FEATURE(ID_AA64MMFR1_HADBS) | \
> > +     FEATURE(ID_AA64MMFR1_VMIDBITS) | \
> > +     FEATURE(ID_AA64MMFR1_HPD) | \
> > +     FEATURE(ID_AA64MMFR1_PAN) | \
> > +     FEATURE(ID_AA64MMFR1_SPECSEI) | \
> > +     FEATURE(ID_AA64MMFR1_ETS) \
> > +     )
> > +
> > +/*
> > + * These features are chosen because they are supported by KVM and to limit the
> > + * confiruation state space and make it more deterministic.
>
> <same comment>

Ack
> > + * - Common not Private translations
> > + * - User Access Override
> > + * - IESB bit in the SCTLR_ELx registers
> > + * - Unaligned single-copy atomicity and atomic functions
> > + * - ESR_ELx.EC value on an exception by read access to feature ID space
> > + * - TTL field in address operations.
> > + * - Break-before-make sequences when changing translation block size
> > + * - E0PDx mechanism
> > + */
> > +#define PVM_ID_AA64MMFR2_ALLOW (\
> > +     FEATURE(ID_AA64MMFR2_CNP) | \
> > +     FEATURE(ID_AA64MMFR2_UAO) | \
> > +     FEATURE(ID_AA64MMFR2_IESB) | \
> > +     FEATURE(ID_AA64MMFR2_AT) | \
> > +     FEATURE(ID_AA64MMFR2_IDS) | \
> > +     FEATURE(ID_AA64MMFR2_TTL) | \
> > +     FEATURE(ID_AA64MMFR2_BBM) | \
> > +     FEATURE(ID_AA64MMFR2_E0PD) \
> > +     )
> > +
> > +/*
> > + * Allow all features in this register because they are trivial to support, or
> > + * are already supported by KVM:
> > + * - LS64
> > + * - XS
> > + * - I8MM
> > + * - DGB
> > + * - BF16
> > + * - SPECRES
> > + * - SB
> > + * - FRINTTS
> > + * - PAuth
> > + * - FPAC
> > + * - LRCPC
> > + * - FCMA
> > + * - JSCVT
> > + * - DPB
> > + */
> > +#define PVM_ID_AA64ISAR1_ALLOW (~0ULL)
> > +
> > +#endif /* __ARM64_KVM_FIXED_CONFIG_H__ */
> > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > index ac67d5699c68..e1ceadd69575 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -780,6 +780,8 @@ static inline bool kvm_vm_is_protected(struct kvm *kvm)
> >       return false;
> >  }
> >
> > +void kvm_init_protected_traps(struct kvm_vcpu *vcpu);
> > +
> >  int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature);
> >  bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
> >
> > diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
> > index 657d0c94cf82..3f4866322f85 100644
> > --- a/arch/arm64/include/asm/kvm_hyp.h
> > +++ b/arch/arm64/include/asm/kvm_hyp.h
> > @@ -115,7 +115,10 @@ int __pkvm_init(phys_addr_t phys, unsigned long size, unsigned long nr_cpus,
> >  void __noreturn __host_enter(struct kvm_cpu_context *host_ctxt);
> >  #endif
> >
> > +extern u64 kvm_nvhe_sym(id_aa64pfr0_el1_sys_val);
> > +extern u64 kvm_nvhe_sym(id_aa64pfr1_el1_sys_val);
> >  extern u64 kvm_nvhe_sym(id_aa64mmfr0_el1_sys_val);
> >  extern u64 kvm_nvhe_sym(id_aa64mmfr1_el1_sys_val);
> > +extern u64 kvm_nvhe_sym(id_aa64mmfr2_el1_sys_val);
> >
> >  #endif /* __ARM64_KVM_HYP_H__ */
> > diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
> > index 989bb5dad2c8..0be63f5c495f 100644
> > --- a/arch/arm64/kvm/Makefile
> > +++ b/arch/arm64/kvm/Makefile
> > @@ -14,7 +14,7 @@ kvm-y := $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o $(KVM)/eventfd.o \
> >        $(KVM)/vfio.o $(KVM)/irqchip.o $(KVM)/binary_stats.o \
> >        arm.o mmu.o mmio.o psci.o perf.o hypercalls.o pvtime.o \
> >        inject_fault.o va_layout.o handle_exit.o \
> > -      guest.o debug.o reset.o sys_regs.o \
> > +      guest.o debug.o pkvm.o reset.o sys_regs.o \
> >        vgic-sys-reg-v3.o fpsimd.o pmu.o \
> >        arch_timer.o trng.o\
> >        vgic/vgic.o vgic/vgic-init.o \
> > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > index 14b12f2c08c0..3f28549aff0d 100644
> > --- a/arch/arm64/kvm/arm.c
> > +++ b/arch/arm64/kvm/arm.c
> > @@ -618,6 +618,14 @@ static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
> >
> >       ret = kvm_arm_pmu_v3_enable(vcpu);
> >
> > +     /*
> > +      * Initialize traps for protected VMs.
> > +      * NOTE: Move  trap initialization to EL2 once the code is in place for
> > +      * maintaining protected VM state at EL2 instead of the host.
> > +      */
> > +     if (kvm_vm_is_protected(kvm))
> > +             kvm_init_protected_traps(vcpu);
> > +
> >       return ret;
> >  }
> >
> > @@ -1781,8 +1789,11 @@ static int kvm_hyp_init_protection(u32 hyp_va_bits)
> >       void *addr = phys_to_virt(hyp_mem_base);
> >       int ret;
> >
> > +     kvm_nvhe_sym(id_aa64pfr0_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
> > +     kvm_nvhe_sym(id_aa64pfr1_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64PFR1_EL1);
> >       kvm_nvhe_sym(id_aa64mmfr0_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
> >       kvm_nvhe_sym(id_aa64mmfr1_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
> > +     kvm_nvhe_sym(id_aa64mmfr2_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR2_EL1);
> >
> >       ret = create_hyp_mappings(addr, addr + hyp_mem_size, PAGE_HYP);
> >       if (ret)
> > diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
> > index 5df6193fc430..a23f417a0c20 100644
> > --- a/arch/arm64/kvm/hyp/nvhe/Makefile
> > +++ b/arch/arm64/kvm/hyp/nvhe/Makefile
> > @@ -14,7 +14,7 @@ lib-objs := $(addprefix ../../../lib/, $(lib-objs))
> >
> >  obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o host.o \
> >        hyp-main.o hyp-smp.o psci-relay.o early_alloc.o stub.o page_alloc.o \
> > -      cache.o setup.o mm.o mem_protect.o
> > +      cache.o setup.o mm.o mem_protect.o sys_regs.o
> >  obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \
> >        ../fpsimd.o ../hyp-entry.o ../exception.o ../pgtable.o
> >  obj-y += $(lib-objs)
> > diff --git a/arch/arm64/kvm/hyp/nvhe/sys_regs.c b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
> > new file mode 100644
> > index 000000000000..6c7230aa70e9
> > --- /dev/null
> > +++ b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
> > @@ -0,0 +1,443 @@
> > +// SPDX-License-Identifier: GPL-2.0-only
> > +/*
> > + * Copyright (C) 2021 Google LLC
> > + * Author: Fuad Tabba <tabba@google.com>
> > + */
> > +
> > +#include <linux/kvm_host.h>
> > +
> > +#include <asm/kvm_asm.h>
> > +#include <asm/kvm_emulate.h>
> > +#include <asm/kvm_fixed_config.h>
> > +#include <asm/kvm_mmu.h>
> > +
> > +#include <hyp/adjust_pc.h>
> > +
> > +#include "../../sys_regs.h"
> > +
> > +/*
> > + * Copies of the host's CPU features registers holding sanitized values.
> > + */
> > +u64 id_aa64pfr0_el1_sys_val;
> > +u64 id_aa64pfr1_el1_sys_val;
> > +u64 id_aa64mmfr2_el1_sys_val;
> > +
> > +/*
> > + * Inject an unknown/undefined exception to the guest.
> > + */
> > +static void inject_undef(struct kvm_vcpu *vcpu)
> > +{
> > +     u32 esr = (ESR_ELx_EC_UNKNOWN << ESR_ELx_EC_SHIFT);
> > +
> > +     vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA64_EL1 |
> > +                          KVM_ARM64_EXCEPT_AA64_ELx_SYNC |
> > +                          KVM_ARM64_PENDING_EXCEPTION);
> > +
> > +     __kvm_adjust_pc(vcpu);
> > +
> > +     write_sysreg_el1(esr, SYS_ESR);
> > +     write_sysreg_el1(read_sysreg_el2(SYS_ELR), SYS_ELR);
> > +}
> > +
> > +/*
> > + * Accessor for undefined accesses.
> > + */
> > +static bool undef_access(struct kvm_vcpu *vcpu,
> > +                      struct sys_reg_params *p,
> > +                      const struct sys_reg_desc *r)
> > +{
> > +     inject_undef(vcpu);
> > +     return false;
> > +}
> > +
> > +/*
> > + * Accessors for feature registers.
> > + *
> > + * If access is allowed, set the regval to the protected VM's view of the
> > + * register and return true.
> > + * Otherwise, inject an undefined exception and return false.
> > + */
> > +
> > +/*
> > + * Returns the minimum feature supported and allowed.
> > + */
> > +static u64 get_min_feature(u64 feature, u64 allowed_features,
> > +                        u64 supported_features)
"> > +{
> > +     const u64 allowed_feature = FIELD_GET(feature, allowed_features);
> > +     const u64 supported_feature = FIELD_GET(feature, supported_features);
> > +
> > +     return min(allowed_feature, supported_feature);
>
> Careful here: this is an unsigned comparison, yet some fields are signed.
> cpufeature.c uses the S_ARM64_FTR_BITS and ARM64_FTR_BITS to declare signed
> and unsigned fields respectively.

I completely missed that! It's described in "D13.1.3 Principles of the
ID scheme for fields in ID registers" or the Arm Architecture
Reference Manual. Fortunately, all of the features I'm working with
are unsigned. However, I will fix it in v4 to ensure that should we
add a signed feature we can clearly see that it needs to be handled
differently.

Thanks!

/fuad

> Will
>
> --
> To unsubscribe from this group and stop receiving emails from it, send an email to kernel-team+unsubscribe@android.com.
>

^ permalink raw reply	[flat|nested] 126+ messages in thread

* Re: [PATCH v3 11/15] KVM: arm64: Add trap handlers for protected VMs
@ 2021-08-16 14:39       ` Fuad Tabba
  0 siblings, 0 replies; 126+ messages in thread
From: Fuad Tabba @ 2021-08-16 14:39 UTC (permalink / raw)
  To: Will Deacon
  Cc: kvmarm, maz, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team

Hi Will,

On Thu, Aug 12, 2021 at 11:46 AM Will Deacon <will@kernel.org> wrote:
>
> On Mon, Jul 19, 2021 at 05:03:42PM +0100, Fuad Tabba wrote:
> > Add trap handlers for protected VMs. These are mainly for Sys64
> > and debug traps.
> >
> > No functional change intended as these are not hooked in yet to
> > the guest exit handlers introduced earlier. So even when trapping
> > is triggered, the exit handlers would let the host handle it, as
> > before.
> >
> > Signed-off-by: Fuad Tabba <tabba@google.com>
> > ---
> >  arch/arm64/include/asm/kvm_fixed_config.h | 178 +++++++++
> >  arch/arm64/include/asm/kvm_host.h         |   2 +
> >  arch/arm64/include/asm/kvm_hyp.h          |   3 +
> >  arch/arm64/kvm/Makefile                   |   2 +-
> >  arch/arm64/kvm/arm.c                      |  11 +
> >  arch/arm64/kvm/hyp/nvhe/Makefile          |   2 +-
> >  arch/arm64/kvm/hyp/nvhe/sys_regs.c        | 443 ++++++++++++++++++++++
> >  arch/arm64/kvm/pkvm.c                     | 183 +++++++++
> >  8 files changed, 822 insertions(+), 2 deletions(-)
> >  create mode 100644 arch/arm64/include/asm/kvm_fixed_config.h
> >  create mode 100644 arch/arm64/kvm/hyp/nvhe/sys_regs.c
> >  create mode 100644 arch/arm64/kvm/pkvm.c
> >
> > diff --git a/arch/arm64/include/asm/kvm_fixed_config.h b/arch/arm64/include/asm/kvm_fixed_config.h
> > new file mode 100644
> > index 000000000000..b39a5de2c4b9
> > --- /dev/null
> > +++ b/arch/arm64/include/asm/kvm_fixed_config.h
> > @@ -0,0 +1,178 @@
> > +/* SPDX-License-Identifier: GPL-2.0-only */
> > +/*
> > + * Copyright (C) 2021 Google LLC
> > + * Author: Fuad Tabba <tabba@google.com>
> > + */
> > +
> > +#ifndef __ARM64_KVM_FIXED_CONFIG_H__
> > +#define __ARM64_KVM_FIXED_CONFIG_H__
> > +
> > +#include <asm/sysreg.h>
> > +
> > +/*
> > + * This file contains definitions for features to be allowed or restricted for
> > + * guest virtual machines as a baseline, depending on what mode KVM is running
> > + * in and on the type of guest is running.
>
> s/is running/that is running/

Ack.

> > + *
> > + * The features are represented as the highest allowed value for a feature in
> > + * the feature id registers. If the field is set to all ones (i.e., 0b1111),
> > + * then it's only restricted by what the system allows. If the feature is set to
> > + * another value, then that value would be the maximum value allowed and
> > + * supported in pKVM, even if the system supports a higher value.
>
> Given that some fields are signed whereas others are unsigned, I think the
> wording could be a bit tighter here when it refers to "maximum".
>
> > + *
> > + * Some features are forced to a certain value, in which case a SET bitmap is
> > + * used to force these values.
> > + */
> > +
> > +
> > +/*
> > + * Allowed features for protected guests (Protected KVM)
> > + *
> > + * The approach taken here is to allow features that are:
> > + * - needed by common Linux distributions (e.g., flooating point)
>
> s/flooating/floating
Ack.

> > + * - are trivial, e.g., supporting the feature doesn't introduce or require the
> > + * tracking of additional state
> ... in KVM.

Ack.

> > + * - not trapable
>
> s/not trapable/cannot be trapped/
Ack

> > + */
> > +
> > +/*
> > + * - Floating-point and Advanced SIMD:
> > + *   Don't require much support other than maintaining the context, which KVM
> > + *   already has.
>
> I'd rework this sentence. We have to support fpsimd because Linux guests
> rely on it.

Ack

> > + * - AArch64 guests only (no support for AArch32 guests):
> > + *   Simplify support in case of asymmetric AArch32 systems.
>
> I don't think asymmetric systems come into this really; AArch32 on its
> own adds lots of complexity in trap handling, emulation, condition codes
> etc. Restricting guests to AArch64 means we don't have to worry about the
> AArch32 exception model or emulation of 32-bit instructions.

Ack

> > + * - RAS (v1)
> > + *   v1 doesn't require much additional support, but later versions do.
>
> Be more specific?

Ack

> > + * - Data Independent Timing
> > + *   Trivial
> > + * Remaining features are not supported either because they require too much
> > + * support from KVM, or risk leaking guest data.
>
> I think we should drop this sentence -- it makes it sounds like we can't
> be arsed :)

Ack.

> > + */
> > +#define PVM_ID_AA64PFR0_ALLOW (\
> > +     FEATURE(ID_AA64PFR0_FP) | \
> > +     FIELD_PREP(FEATURE(ID_AA64PFR0_EL0), ID_AA64PFR0_ELx_64BIT_ONLY) | \
> > +     FIELD_PREP(FEATURE(ID_AA64PFR0_EL1), ID_AA64PFR0_ELx_64BIT_ONLY) | \
> > +     FIELD_PREP(FEATURE(ID_AA64PFR0_EL2), ID_AA64PFR0_ELx_64BIT_ONLY) | \
> > +     FIELD_PREP(FEATURE(ID_AA64PFR0_EL3), ID_AA64PFR0_ELx_64BIT_ONLY) | \
> > +     FIELD_PREP(FEATURE(ID_AA64PFR0_RAS), ID_AA64PFR0_RAS_V1) | \
>
> I think having the FIELD_PREP entries in the ALLOW mask is quite confusing
> here -- naively you would expect to be able to bitwise-and the host register
> value with the ALLOW mask and get the sanitised version back, but with these
> here you have to go field-by-field to compute the common value.
>
> So perhaps move those into a PVM_ID_AA64PFR0_RESTRICT mask or something?
> Then pvm_access_id_aa64pfr0() will become a little easier to read, I think.

I agree. I've reworked it, and it simplifies the code and makes it
easier to read.

> > +     FEATURE(ID_AA64PFR0_ASIMD) | \
> > +     FEATURE(ID_AA64PFR0_DIT) \
> > +     )
> > +
> > +/*
> > + * - Branch Target Identification
> > + * - Speculative Store Bypassing
> > + *   These features are trivial to support
> > + */
> > +#define PVM_ID_AA64PFR1_ALLOW (\
> > +     FEATURE(ID_AA64PFR1_BT) | \
> > +     FEATURE(ID_AA64PFR1_SSBS) \
> > +     )
> > +
> > +/*
> > + * No support for Scalable Vectors:
> > + *   Requires additional support from KVM
>
> Perhaps expand on "support" here? E.g. "context-switching and trapping
> support at EL2".

Ack.

> > + */
> > +#define PVM_ID_AA64ZFR0_ALLOW (0ULL)
> > +
> > +/*
> > + * No support for debug, including breakpoints, and watchpoints:
> > + *   Reduce complexity and avoid exposing/leaking guest data
> > + *
> > + * NOTE: The Arm architecture mandates support for at least the Armv8 debug
> > + * architecture, which would include at least 2 hardware breakpoints and
> > + * watchpoints. Providing that support to protected guests adds considerable
> > + * state and complexity, and risks leaking guest data. Therefore, the reserved
> > + * value of 0 is used for debug-related fields.
> > + */
>
> I think the complexity of the debug architecture is a good reason to avoid
> exposing it here, but I don't understand how providing breakpoints or
> watchpoints to a guest could risk leaking guest data. What is the specific
> threat here?

I mixed up the various debug and trace features here. Will fix the comment.

> > +#define PVM_ID_AA64DFR0_ALLOW (0ULL)
> > +
> > +/*
> > + * These features are chosen because they are supported by KVM and to limit the
> > + * confiruation state space and make it more deterministic.
>
> s/confiruation/configuration/
>
> However, I don't agree that this provides determinism since we're not
> forcing any particular values, but rather filtering the values from the
> host.

Ack

> > + * - 40-bit IPA
>
> This seems more about not supporting KVM_CAP_ARM_VM_IPA_SIZE for now.
>
> > + * - 16-bit ASID
> > + * - Mixed-endian
> > + * - Distinction between Secure and Non-secure Memory
> > + * - Mixed-endian at EL0 only
> > + * - Non-context synchronizing exception entry and exit
>
> These all seem to fall into the "cannot trap" category, so we just advertise
> whatever we've got.

Ack.


>
> > + */
> > +#define PVM_ID_AA64MMFR0_ALLOW (\
> > +     FIELD_PREP(FEATURE(ID_AA64MMFR0_PARANGE), ID_AA64MMFR0_PARANGE_40) | \
> > +     FIELD_PREP(FEATURE(ID_AA64MMFR0_ASID), ID_AA64MMFR0_ASID_16) | \
> > +     FEATURE(ID_AA64MMFR0_BIGENDEL) | \
> > +     FEATURE(ID_AA64MMFR0_SNSMEM) | \
> > +     FEATURE(ID_AA64MMFR0_BIGENDEL0) | \
> > +     FEATURE(ID_AA64MMFR0_EXS) \
> > +     )
> > +
> > +/*
> > + * - 64KB granule not supported
> > + */
> > +#define PVM_ID_AA64MMFR0_SET (\
> > +     FIELD_PREP(FEATURE(ID_AA64MMFR0_TGRAN64), ID_AA64MMFR0_TGRAN64_NI) \
> > +     )
>
> Why not, and can we actually prevent the guest from doing that?

We cannot prevent the guest from doing it. Initial reasoning was that
there isn't a clear use case for it, but since we cannot prevent the
guest from doing that, I'll unhide it.

> > +/*
> > + * These features are chosen because they are supported by KVM and to limit the
> > + * confiruation state space and make it more deterministic.
>
> It's that typo again ;) But my comment from before still applies -- I don't
> think an ALLOW mask adds hugely to the determinism.

Ack

> > + * - Hardware translation table updates to Access flag and Dirty state
> > + * - Number of VMID bits from CPU
> > + * - Hierarchical Permission Disables
> > + * - Privileged Access Never
> > + * - SError interrupt exceptions from speculative reads
> > + * - Enhanced Translation Synchronization
>
> As before, I think this is a mixture of "trivial" and "cannot trap"
> features.

Ack

> > + */
> > +#define PVM_ID_AA64MMFR1_ALLOW (\
> > +     FEATURE(ID_AA64MMFR1_HADBS) | \
> > +     FEATURE(ID_AA64MMFR1_VMIDBITS) | \
> > +     FEATURE(ID_AA64MMFR1_HPD) | \
> > +     FEATURE(ID_AA64MMFR1_PAN) | \
> > +     FEATURE(ID_AA64MMFR1_SPECSEI) | \
> > +     FEATURE(ID_AA64MMFR1_ETS) \
> > +     )
> > +
> > +/*
> > + * These features are chosen because they are supported by KVM and to limit the
> > + * confiruation state space and make it more deterministic.
>
> <same comment>

Ack
> > + * - Common not Private translations
> > + * - User Access Override
> > + * - IESB bit in the SCTLR_ELx registers
> > + * - Unaligned single-copy atomicity and atomic functions
> > + * - ESR_ELx.EC value on an exception by read access to feature ID space
> > + * - TTL field in address operations.
> > + * - Break-before-make sequences when changing translation block size
> > + * - E0PDx mechanism
> > + */
> > +#define PVM_ID_AA64MMFR2_ALLOW (\
> > +     FEATURE(ID_AA64MMFR2_CNP) | \
> > +     FEATURE(ID_AA64MMFR2_UAO) | \
> > +     FEATURE(ID_AA64MMFR2_IESB) | \
> > +     FEATURE(ID_AA64MMFR2_AT) | \
> > +     FEATURE(ID_AA64MMFR2_IDS) | \
> > +     FEATURE(ID_AA64MMFR2_TTL) | \
> > +     FEATURE(ID_AA64MMFR2_BBM) | \
> > +     FEATURE(ID_AA64MMFR2_E0PD) \
> > +     )
> > +
> > +/*
> > + * Allow all features in this register because they are trivial to support, or
> > + * are already supported by KVM:
> > + * - LS64
> > + * - XS
> > + * - I8MM
> > + * - DGB
> > + * - BF16
> > + * - SPECRES
> > + * - SB
> > + * - FRINTTS
> > + * - PAuth
> > + * - FPAC
> > + * - LRCPC
> > + * - FCMA
> > + * - JSCVT
> > + * - DPB
> > + */
> > +#define PVM_ID_AA64ISAR1_ALLOW (~0ULL)
> > +
> > +#endif /* __ARM64_KVM_FIXED_CONFIG_H__ */
> > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > index ac67d5699c68..e1ceadd69575 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -780,6 +780,8 @@ static inline bool kvm_vm_is_protected(struct kvm *kvm)
> >       return false;
> >  }
> >
> > +void kvm_init_protected_traps(struct kvm_vcpu *vcpu);
> > +
> >  int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature);
> >  bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
> >
> > diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
> > index 657d0c94cf82..3f4866322f85 100644
> > --- a/arch/arm64/include/asm/kvm_hyp.h
> > +++ b/arch/arm64/include/asm/kvm_hyp.h
> > @@ -115,7 +115,10 @@ int __pkvm_init(phys_addr_t phys, unsigned long size, unsigned long nr_cpus,
> >  void __noreturn __host_enter(struct kvm_cpu_context *host_ctxt);
> >  #endif
> >
> > +extern u64 kvm_nvhe_sym(id_aa64pfr0_el1_sys_val);
> > +extern u64 kvm_nvhe_sym(id_aa64pfr1_el1_sys_val);
> >  extern u64 kvm_nvhe_sym(id_aa64mmfr0_el1_sys_val);
> >  extern u64 kvm_nvhe_sym(id_aa64mmfr1_el1_sys_val);
> > +extern u64 kvm_nvhe_sym(id_aa64mmfr2_el1_sys_val);
> >
> >  #endif /* __ARM64_KVM_HYP_H__ */
> > diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
> > index 989bb5dad2c8..0be63f5c495f 100644
> > --- a/arch/arm64/kvm/Makefile
> > +++ b/arch/arm64/kvm/Makefile
> > @@ -14,7 +14,7 @@ kvm-y := $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o $(KVM)/eventfd.o \
> >        $(KVM)/vfio.o $(KVM)/irqchip.o $(KVM)/binary_stats.o \
> >        arm.o mmu.o mmio.o psci.o perf.o hypercalls.o pvtime.o \
> >        inject_fault.o va_layout.o handle_exit.o \
> > -      guest.o debug.o reset.o sys_regs.o \
> > +      guest.o debug.o pkvm.o reset.o sys_regs.o \
> >        vgic-sys-reg-v3.o fpsimd.o pmu.o \
> >        arch_timer.o trng.o\
> >        vgic/vgic.o vgic/vgic-init.o \
> > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > index 14b12f2c08c0..3f28549aff0d 100644
> > --- a/arch/arm64/kvm/arm.c
> > +++ b/arch/arm64/kvm/arm.c
> > @@ -618,6 +618,14 @@ static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
> >
> >       ret = kvm_arm_pmu_v3_enable(vcpu);
> >
> > +     /*
> > +      * Initialize traps for protected VMs.
> > +      * NOTE: Move  trap initialization to EL2 once the code is in place for
> > +      * maintaining protected VM state at EL2 instead of the host.
> > +      */
> > +     if (kvm_vm_is_protected(kvm))
> > +             kvm_init_protected_traps(vcpu);
> > +
> >       return ret;
> >  }
> >
> > @@ -1781,8 +1789,11 @@ static int kvm_hyp_init_protection(u32 hyp_va_bits)
> >       void *addr = phys_to_virt(hyp_mem_base);
> >       int ret;
> >
> > +     kvm_nvhe_sym(id_aa64pfr0_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
> > +     kvm_nvhe_sym(id_aa64pfr1_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64PFR1_EL1);
> >       kvm_nvhe_sym(id_aa64mmfr0_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
> >       kvm_nvhe_sym(id_aa64mmfr1_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
> > +     kvm_nvhe_sym(id_aa64mmfr2_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR2_EL1);
> >
> >       ret = create_hyp_mappings(addr, addr + hyp_mem_size, PAGE_HYP);
> >       if (ret)
> > diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
> > index 5df6193fc430..a23f417a0c20 100644
> > --- a/arch/arm64/kvm/hyp/nvhe/Makefile
> > +++ b/arch/arm64/kvm/hyp/nvhe/Makefile
> > @@ -14,7 +14,7 @@ lib-objs := $(addprefix ../../../lib/, $(lib-objs))
> >
> >  obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o host.o \
> >        hyp-main.o hyp-smp.o psci-relay.o early_alloc.o stub.o page_alloc.o \
> > -      cache.o setup.o mm.o mem_protect.o
> > +      cache.o setup.o mm.o mem_protect.o sys_regs.o
> >  obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \
> >        ../fpsimd.o ../hyp-entry.o ../exception.o ../pgtable.o
> >  obj-y += $(lib-objs)
> > diff --git a/arch/arm64/kvm/hyp/nvhe/sys_regs.c b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
> > new file mode 100644
> > index 000000000000..6c7230aa70e9
> > --- /dev/null
> > +++ b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
> > @@ -0,0 +1,443 @@
> > +// SPDX-License-Identifier: GPL-2.0-only
> > +/*
> > + * Copyright (C) 2021 Google LLC
> > + * Author: Fuad Tabba <tabba@google.com>
> > + */
> > +
> > +#include <linux/kvm_host.h>
> > +
> > +#include <asm/kvm_asm.h>
> > +#include <asm/kvm_emulate.h>
> > +#include <asm/kvm_fixed_config.h>
> > +#include <asm/kvm_mmu.h>
> > +
> > +#include <hyp/adjust_pc.h>
> > +
> > +#include "../../sys_regs.h"
> > +
> > +/*
> > + * Copies of the host's CPU features registers holding sanitized values.
> > + */
> > +u64 id_aa64pfr0_el1_sys_val;
> > +u64 id_aa64pfr1_el1_sys_val;
> > +u64 id_aa64mmfr2_el1_sys_val;
> > +
> > +/*
> > + * Inject an unknown/undefined exception to the guest.
> > + */
> > +static void inject_undef(struct kvm_vcpu *vcpu)
> > +{
> > +     u32 esr = (ESR_ELx_EC_UNKNOWN << ESR_ELx_EC_SHIFT);
> > +
> > +     vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA64_EL1 |
> > +                          KVM_ARM64_EXCEPT_AA64_ELx_SYNC |
> > +                          KVM_ARM64_PENDING_EXCEPTION);
> > +
> > +     __kvm_adjust_pc(vcpu);
> > +
> > +     write_sysreg_el1(esr, SYS_ESR);
> > +     write_sysreg_el1(read_sysreg_el2(SYS_ELR), SYS_ELR);
> > +}
> > +
> > +/*
> > + * Accessor for undefined accesses.
> > + */
> > +static bool undef_access(struct kvm_vcpu *vcpu,
> > +                      struct sys_reg_params *p,
> > +                      const struct sys_reg_desc *r)
> > +{
> > +     inject_undef(vcpu);
> > +     return false;
> > +}
> > +
> > +/*
> > + * Accessors for feature registers.
> > + *
> > + * If access is allowed, set the regval to the protected VM's view of the
> > + * register and return true.
> > + * Otherwise, inject an undefined exception and return false.
> > + */
> > +
> > +/*
> > + * Returns the minimum feature supported and allowed.
> > + */
> > +static u64 get_min_feature(u64 feature, u64 allowed_features,
> > +                        u64 supported_features)
"> > +{
> > +     const u64 allowed_feature = FIELD_GET(feature, allowed_features);
> > +     const u64 supported_feature = FIELD_GET(feature, supported_features);
> > +
> > +     return min(allowed_feature, supported_feature);
>
> Careful here: this is an unsigned comparison, yet some fields are signed.
> cpufeature.c uses the S_ARM64_FTR_BITS and ARM64_FTR_BITS to declare signed
> and unsigned fields respectively.

I completely missed that! It's described in "D13.1.3 Principles of the
ID scheme for fields in ID registers" of the Arm Architecture
Reference Manual. Fortunately, all of the features I'm working with
are unsigned. However, I will fix it in v4 so that, should we ever
add a signed feature, it is clear that it needs to be handled
differently.
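
For reference, a signed-aware variant could look something like the
sketch below (illustrative only; the helper is hypothetical and not
part of this series). It assumes a 4-bit ID register field and uses
sign_extend64() from <linux/bitops.h> so that a field value of 0xf
compares as -1, matching the signed-field convention in cpufeature.c:

static s64 get_min_feature_signed(u64 feature, u64 allowed_features,
				  u64 supported_features)
{
	/* Sign-extend the 4-bit field so that, e.g., 0xf compares as -1. */
	const s64 allowed = sign_extend64(FIELD_GET(feature, allowed_features), 3);
	const s64 supported = sign_extend64(FIELD_GET(feature, supported_features), 3);

	return min(allowed, supported);
}

That way a host field of 0xf ("not implemented") correctly wins over
any implemented value, where an unsigned min() would pick the
implemented one instead.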

Thanks!

/fuad

> Will

^ permalink raw reply	[flat|nested] 126+ messages in thread

* Re: [PATCH v3 15/15] KVM: arm64: Restrict protected VM capabilities
  2021-08-12  9:59     ` Will Deacon
  (?)
@ 2021-08-16 14:40       ` Fuad Tabba
  -1 siblings, 0 replies; 126+ messages in thread
From: Fuad Tabba @ 2021-08-16 14:40 UTC (permalink / raw)
  To: Will Deacon; +Cc: kernel-team, kvm, maz, pbonzini, kvmarm, linux-arm-kernel

Hi Will,

On Thu, Aug 12, 2021 at 11:59 AM Will Deacon <will@kernel.org> wrote:
>
> On Mon, Jul 19, 2021 at 05:03:46PM +0100, Fuad Tabba wrote:
> > Restrict protected VM capabilities based on the
> > fixed configuration for protected VMs.
> >
> > No functional change intended in current KVM-supported modes
> > (nVHE, VHE).
> >
> > Signed-off-by: Fuad Tabba <tabba@google.com>
> > ---
> >  arch/arm64/include/asm/kvm_fixed_config.h | 10 ++++
> >  arch/arm64/kvm/arm.c                      | 63 ++++++++++++++++++++++-
> >  arch/arm64/kvm/pkvm.c                     | 30 +++++++++++
> >  3 files changed, 102 insertions(+), 1 deletion(-)
>
> This patch looks good to me, but I'd be inclined to add this to the user-ABI
> series given that it's really all user-facing and, without a functional
> kvm_vm_is_protected(), isn't serving much purpose.

Sure.
/fuad

> Cheers,
>
> Will

^ permalink raw reply	[flat|nested] 126+ messages in thread

end of thread, other threads:[~2021-08-16 14:51 UTC | newest]

Thread overview: 126+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-07-19 16:03 [PATCH v3 00/15] KVM: arm64: Fixed features for protected VMs Fuad Tabba
2021-07-19 16:03 ` [PATCH v3 01/15] KVM: arm64: placeholder to check if VM is protected Fuad Tabba
2021-08-12  8:58   ` Will Deacon
2021-08-12  9:22     ` Fuad Tabba
2021-07-19 16:03 ` [PATCH v3 02/15] KVM: arm64: Remove trailing whitespace in comment Fuad Tabba
2021-08-12  8:59   ` Will Deacon
2021-07-19 16:03 ` [PATCH v3 03/15] KVM: arm64: MDCR_EL2 is a 64-bit register Fuad Tabba
2021-07-19 16:03 ` [PATCH v3 04/15] KVM: arm64: Fix names of config register fields Fuad Tabba
2021-07-19 16:03 ` [PATCH v3 05/15] KVM: arm64: Refactor sys_regs.h,c for nVHE reuse Fuad Tabba
2021-07-20 13:38   ` Andrew Jones
2021-07-20 14:03     ` Fuad Tabba
2021-08-12  8:59   ` Will Deacon
2021-07-19 16:03 ` [PATCH v3 06/15] KVM: arm64: Restore mdcr_el2 from vcpu Fuad Tabba
2021-07-20 14:52   ` Andrew Jones
2021-07-21  7:37     ` Fuad Tabba
2021-08-12  8:46       ` Will Deacon
2021-08-12  9:28         ` Fuad Tabba
2021-08-12  9:49           ` Will Deacon
2021-07-19 16:03 ` [PATCH v3 07/15] KVM: arm64: Track value of cptr_el2 in struct kvm_vcpu_arch Fuad Tabba
2021-08-12  8:59   ` Will Deacon
2021-07-19 16:03 ` [PATCH v3 08/15] KVM: arm64: Add feature register flag definitions Fuad Tabba
2021-08-12  8:59   ` Will Deacon
2021-08-12  9:21     ` Fuad Tabba
2021-07-19 16:03 ` [PATCH v3 09/15] KVM: arm64: Add config register bit definitions Fuad Tabba
2021-08-12  8:59   ` Will Deacon
2021-07-19 16:03 ` [PATCH v3 10/15] KVM: arm64: Guest exit handlers for nVHE hyp Fuad Tabba
2021-08-03 15:32   ` Will Deacon
2021-07-19 16:03 ` [PATCH v3 11/15] KVM: arm64: Add trap handlers for protected VMs Fuad Tabba
2021-08-12  9:45   ` Will Deacon
2021-08-16 14:39     ` Fuad Tabba
2021-07-19 16:03 ` [PATCH v3 12/15] KVM: arm64: Move sanitized copies of CPU features Fuad Tabba
2021-08-12  9:46   ` Will Deacon
2021-07-19 16:03 ` [PATCH v3 13/15] KVM: arm64: Trap access to pVM restricted features Fuad Tabba
2021-08-12  9:53   ` Will Deacon
2021-07-19 16:03 ` [PATCH v3 14/15] KVM: arm64: Handle protected guests at 32 bits Fuad Tabba
2021-07-19 19:43   ` Oliver Upton
2021-07-21  8:39     ` Fuad Tabba
2021-08-12  9:57   ` Will Deacon
2021-08-12 13:08     ` Fuad Tabba
2021-07-19 16:03 ` [PATCH v3 15/15] KVM: arm64: Restrict protected VM capabilities Fuad Tabba
2021-08-12  9:59   ` Will Deacon
2021-08-16 14:40     ` Fuad Tabba
