kvmarm.lists.cs.columbia.edu archive mirror
* [PATCH v5 0/3] KVM: arm/arm64: Add VCPU workarounds firmware register
@ 2019-04-15 11:15 Andre Przywara
From: Andre Przywara @ 2019-04-15 11:15 UTC (permalink / raw)
  To: Marc Zyngier, Christoffer Dall
  Cc: kvm, Jeremy Linton, kvmarm, Steven Price, Dave Martin, linux-arm-kernel

Hi,

another update, mostly based on Jeremy's sysfs vulnerabilities
series[1], which adds the new UNAFFECTED state for WORKAROUND_1.
The new patch 1/3 propagates this state to the guest, so a guest benefits
from the host kernel's knowledge about whether this workaround is needed.
This triggers some changes in patch 2/3, where we properly export this
new state to userland as well, so we can deny migration from NOT_NEEDED to
AVAILABLE, while allowing it the other way round.
The documentation in patch 3/3 has been reworded to be more precise, so
I dropped Steven's R-b: tag on this one.

Cheers,
Andre

-----------------------------
Workarounds for Spectre variant 2 or 4 vulnerabilities require some help
from the firmware, so KVM implements an interface to provide that for
guests. When such a guest is migrated, we want to make sure we don't
lose the protection the guest relies on.

This series introduces two new firmware registers in KVM's GET/SET_ONE_REG
interface, so userland can save the level of protection implemented by
the hypervisor and used by the guest. Upon restoring these registers,
we make sure we don't downgrade, rejecting any values that would mean
weaker protection.
The protection level is encoded in the lower 4 bits, with smaller
values indicating weaker protection.

ARM(32) is a bit of a pain (again), as the firmware register interface
is shared, but 32-bit does not implement all the workarounds.
For now I stuffed two wrappers into kvm_emulate.h, which doesn't feel
like the best solution. Happy to hear about better approaches.

This has been tested with migration between two Juno systems. Out of the
box they advertise identical workaround levels, and migration succeeds.
However, when disabling the A57 cluster on one system, WORKAROUND_1 is
not needed and the host kernel propagates this. Migration now only
succeeds in one direction (from the big/LITTLE configuration to the
A53-only setup).

Please have a look and comment!

This is based upon v7 of the "add system vulnerability sysfs entries"
series[1], applied on top of 5.1-rc4. Let me know if you want a different
base.

Cheers,
Andre

[1] http://lists.infradead.org/pipermail/linux-arm-kernel/2019-April/645382.html

Changelog:
v4 .. v5:
- add patch to advertise ARM64_BP_HARDEN_MITIGATED state to a guest
- allow migration from KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_AVAIL to
  (new) KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_UNAFFECTED
- reword API documentation
- return -EINVAL on querying invalid firmware register
- add some comments
- minor fixes according to Eric's review

v3 .. v4:
- clarify API documentation for WORKAROUND_1
- check for unknown bits in userland provided register values
- report proper -ENOENT when register ID is unknown

v2 .. v3:
- rebase against latest kvm-arm/next
- introduce UNAFFECTED value for WORKAROUND_1
- require exact match for WORKAROUND_1 levels

v1 .. v2:
- complete rework of WORKAROUND_2 presentation to use a linear scale,
  dropping the complicated comparison routine

Andre Przywara (3):
  arm64: KVM: Propagate full Spectre v2 workaround state to KVM guests
  KVM: arm/arm64: Add save/restore support for firmware workaround state
  KVM: doc: add API documentation on the KVM_REG_ARM_WORKAROUNDS
    register

 Documentation/virtual/kvm/arm/psci.txt |  31 +++++
 arch/arm/include/asm/kvm_emulate.h     |  10 ++
 arch/arm/include/asm/kvm_host.h        |  12 +-
 arch/arm/include/uapi/asm/kvm.h        |  12 ++
 arch/arm64/include/asm/cpufeature.h    |   6 +
 arch/arm64/include/asm/kvm_emulate.h   |  14 +++
 arch/arm64/include/asm/kvm_host.h      |  16 ++-
 arch/arm64/include/uapi/asm/kvm.h      |  10 ++
 arch/arm64/kernel/cpu_errata.c         |  23 +++-
 virt/kvm/arm/psci.c                    | 149 ++++++++++++++++++++++---
 10 files changed, 257 insertions(+), 26 deletions(-)

-- 
2.17.1

* [PATCH v5 1/3] arm64: KVM: Propagate full Spectre v2 workaround state to KVM guests
From: Andre Przywara @ 2019-04-15 11:15 UTC (permalink / raw)
  To: Marc Zyngier, Christoffer Dall
  Cc: kvm, Jeremy Linton, kvmarm, Steven Price, Dave Martin, linux-arm-kernel

Recent commits added the explicit notion of "Not affected" to the state
of the Spectre v2 (aka BP_HARDENING) workaround, where we just had
"needed" and "unknown" before.

Export this knowledge to the rest of the kernel and enhance the existing
kvm_arm_harden_branch_predictor() to report this new state as well.
Export this new state to guests when they use KVM's firmware interface
emulation.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 arch/arm/include/asm/kvm_host.h     | 12 +++++++++---
 arch/arm64/include/asm/cpufeature.h |  6 ++++++
 arch/arm64/include/asm/kvm_host.h   | 16 ++++++++++++++--
 arch/arm64/kernel/cpu_errata.c      | 23 ++++++++++++++++++-----
 virt/kvm/arm/psci.c                 | 10 +++++++++-
 5 files changed, 56 insertions(+), 11 deletions(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 770d73257ad9..836479e4b340 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -364,7 +364,11 @@ static inline void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arm_vhe_guest_enter(void) {}
 static inline void kvm_arm_vhe_guest_exit(void) {}
 
-static inline bool kvm_arm_harden_branch_predictor(void)
+#define KVM_BP_HARDEN_UNKNOWN		-1
+#define KVM_BP_HARDEN_NEEDED		0
+#define KVM_BP_HARDEN_MITIGATED		1
+
+static inline int kvm_arm_harden_branch_predictor(void)
 {
 	switch(read_cpuid_part()) {
 #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
@@ -372,10 +376,12 @@ static inline bool kvm_arm_harden_branch_predictor(void)
 	case ARM_CPU_PART_CORTEX_A12:
 	case ARM_CPU_PART_CORTEX_A15:
 	case ARM_CPU_PART_CORTEX_A17:
-		return true;
+		return KVM_BP_HARDEN_NEEDED;
 #endif
+	case ARM_CPU_PART_CORTEX_A7:
+		return KVM_BP_HARDEN_MITIGATED;
 	default:
-		return false;
+		return KVM_BP_HARDEN_UNKNOWN;
 	}
 }
 
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 6ccdc97e5d6a..3c5b25c1bda1 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -622,6 +622,12 @@ static inline bool system_uses_irq_prio_masking(void)
 	       cpus_have_const_cap(ARM64_HAS_IRQ_PRIO_MASKING);
 }
 
+#define ARM64_BP_HARDEN_UNKNOWN		-1
+#define ARM64_BP_HARDEN_NEEDED		0
+#define ARM64_BP_HARDEN_MITIGATED	1
+
+int get_spectre_v2_workaround_state(void);
+
 #define ARM64_SSBD_UNKNOWN		-1
 #define ARM64_SSBD_FORCE_DISABLE	0
 #define ARM64_SSBD_KERNEL		1
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index a01fe087e022..bf9a59b7d1ce 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -555,9 +555,21 @@ static inline void kvm_arm_vhe_guest_exit(void)
 	isb();
 }
 
-static inline bool kvm_arm_harden_branch_predictor(void)
+#define KVM_BP_HARDEN_UNKNOWN		-1
+#define KVM_BP_HARDEN_NEEDED		0
+#define KVM_BP_HARDEN_MITIGATED		1
+
+static inline int kvm_arm_harden_branch_predictor(void)
 {
-	return cpus_have_const_cap(ARM64_HARDEN_BRANCH_PREDICTOR);
+	switch (get_spectre_v2_workaround_state()) {
+	case ARM64_BP_HARDEN_NEEDED:
+		return KVM_BP_HARDEN_NEEDED;
+	case ARM64_BP_HARDEN_MITIGATED:
+		return KVM_BP_HARDEN_MITIGATED;
+	case ARM64_BP_HARDEN_UNKNOWN:
+	default:
+		return KVM_BP_HARDEN_UNKNOWN;
+	}
 }
 
 #define KVM_SSBD_UNKNOWN		-1
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index a1f3188c7be0..7fa23ab09d4e 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -555,6 +555,17 @@ cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
 static bool __hardenbp_enab = true;
 static bool __spectrev2_safe = true;
 
+int get_spectre_v2_workaround_state(void)
+{
+	if (__spectrev2_safe)
+		return ARM64_BP_HARDEN_MITIGATED;
+
+	if (!__hardenbp_enab)
+		return ARM64_BP_HARDEN_UNKNOWN;
+
+	return ARM64_BP_HARDEN_NEEDED;
+}
+
 /*
  * List of CPUs that do not need any Spectre-v2 mitigation at all.
  */
@@ -834,13 +845,15 @@ ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr,
 ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,
 		char *buf)
 {
-	if (__spectrev2_safe)
+	switch (get_spectre_v2_workaround_state()) {
+	case ARM64_BP_HARDEN_MITIGATED:
 		return sprintf(buf, "Not affected\n");
-
-	if (__hardenbp_enab)
+	case ARM64_BP_HARDEN_NEEDED:
 		return sprintf(buf, "Mitigation: Branch predictor hardening\n");
-
-	return sprintf(buf, "Vulnerable\n");
+	case ARM64_BP_HARDEN_UNKNOWN:
+	default:
+		return sprintf(buf, "Vulnerable\n");
+	}
 }
 
 ssize_t cpu_show_spec_store_bypass(struct device *dev,
diff --git a/virt/kvm/arm/psci.c b/virt/kvm/arm/psci.c
index 34d08ee63747..1da53e0e38f7 100644
--- a/virt/kvm/arm/psci.c
+++ b/virt/kvm/arm/psci.c
@@ -412,8 +412,16 @@ int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
 		feature = smccc_get_arg1(vcpu);
 		switch(feature) {
 		case ARM_SMCCC_ARCH_WORKAROUND_1:
-			if (kvm_arm_harden_branch_predictor())
+			switch (kvm_arm_harden_branch_predictor()) {
+			case KVM_BP_HARDEN_UNKNOWN:
+				break;
+			case KVM_BP_HARDEN_NEEDED:
 				val = SMCCC_RET_SUCCESS;
+				break;
+			case KVM_BP_HARDEN_MITIGATED:
+				val = SMCCC_RET_NOT_REQUIRED;
+				break;
+			}
 			break;
 		case ARM_SMCCC_ARCH_WORKAROUND_2:
 			switch (kvm_arm_have_ssbd()) {
-- 
2.17.1

* [PATCH v5 2/3] KVM: arm/arm64: Add save/restore support for firmware workaround state
From: Andre Przywara @ 2019-04-15 11:15 UTC (permalink / raw)
  To: Marc Zyngier, Christoffer Dall
  Cc: kvm, Jeremy Linton, kvmarm, Steven Price, Dave Martin, linux-arm-kernel

KVM implements the firmware interface for mitigating cache speculation
vulnerabilities. Guests may use this interface to ensure mitigation is
active.
If we want to migrate such a guest to a host with a different support
level for those workarounds, migration might need to fail, to ensure that
critical guests don't lose their protection.

Introduce a way for userland to save and restore the workarounds state.
On restore we perform checks that make sure we don't downgrade our
mitigation level.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 arch/arm/include/asm/kvm_emulate.h   |  10 ++
 arch/arm/include/uapi/asm/kvm.h      |  12 +++
 arch/arm64/include/asm/kvm_emulate.h |  14 +++
 arch/arm64/include/uapi/asm/kvm.h    |  10 ++
 virt/kvm/arm/psci.c                  | 139 ++++++++++++++++++++++++---
 5 files changed, 170 insertions(+), 15 deletions(-)

diff --git a/arch/arm/include/asm/kvm_emulate.h b/arch/arm/include/asm/kvm_emulate.h
index 8927cae7c966..663a02d7e6f4 100644
--- a/arch/arm/include/asm/kvm_emulate.h
+++ b/arch/arm/include/asm/kvm_emulate.h
@@ -283,6 +283,16 @@ static inline unsigned long kvm_vcpu_get_mpidr_aff(struct kvm_vcpu *vcpu)
 	return vcpu_cp15(vcpu, c0_MPIDR) & MPIDR_HWID_BITMASK;
 }
 
+static inline bool kvm_arm_get_vcpu_workaround_2_flag(struct kvm_vcpu *vcpu)
+{
+	return false;
+}
+
+static inline void kvm_arm_set_vcpu_workaround_2_flag(struct kvm_vcpu *vcpu,
+						      bool flag)
+{
+}
+
 static inline void kvm_vcpu_set_be(struct kvm_vcpu *vcpu)
 {
 	*vcpu_cpsr(vcpu) |= PSR_E_BIT;
diff --git a/arch/arm/include/uapi/asm/kvm.h b/arch/arm/include/uapi/asm/kvm.h
index 4602464ebdfb..dfce6fdbd03b 100644
--- a/arch/arm/include/uapi/asm/kvm.h
+++ b/arch/arm/include/uapi/asm/kvm.h
@@ -214,6 +214,18 @@ struct kvm_vcpu_events {
 #define KVM_REG_ARM_FW_REG(r)		(KVM_REG_ARM | KVM_REG_SIZE_U64 | \
 					 KVM_REG_ARM_FW | ((r) & 0xffff))
 #define KVM_REG_ARM_PSCI_VERSION	KVM_REG_ARM_FW_REG(0)
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1	KVM_REG_ARM_FW_REG(1)
+	/* Higher values mean better protection. */
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_NOT_AVAIL	0
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_AVAIL	1
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_UNAFFECTED	2
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2	KVM_REG_ARM_FW_REG(2)
+	/* Higher values mean better protection. */
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_AVAIL	0
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNKNOWN	1
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_AVAIL	2
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNAFFECTED	3
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_ENABLED	(1U << 4)
 
 /* Device Control API: ARM VGIC */
 #define KVM_DEV_ARM_VGIC_GRP_ADDR	0
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index d3842791e1c4..c00c17c9adb6 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -348,6 +348,20 @@ static inline unsigned long kvm_vcpu_get_mpidr_aff(struct kvm_vcpu *vcpu)
 	return vcpu_read_sys_reg(vcpu, MPIDR_EL1) & MPIDR_HWID_BITMASK;
 }
 
+static inline bool kvm_arm_get_vcpu_workaround_2_flag(struct kvm_vcpu *vcpu)
+{
+	return vcpu->arch.workaround_flags & VCPU_WORKAROUND_2_FLAG;
+}
+
+static inline void kvm_arm_set_vcpu_workaround_2_flag(struct kvm_vcpu *vcpu,
+						      bool flag)
+{
+	if (flag)
+		vcpu->arch.workaround_flags |= VCPU_WORKAROUND_2_FLAG;
+	else
+		vcpu->arch.workaround_flags &= ~VCPU_WORKAROUND_2_FLAG;
+}
+
 static inline void kvm_vcpu_set_be(struct kvm_vcpu *vcpu)
 {
 	if (vcpu_mode_is_32bit(vcpu)) {
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index 97c3478ee6e7..e75a81a7716f 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -225,6 +225,16 @@ struct kvm_vcpu_events {
 #define KVM_REG_ARM_FW_REG(r)		(KVM_REG_ARM64 | KVM_REG_SIZE_U64 | \
 					 KVM_REG_ARM_FW | ((r) & 0xffff))
 #define KVM_REG_ARM_PSCI_VERSION	KVM_REG_ARM_FW_REG(0)
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1	KVM_REG_ARM_FW_REG(1)
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_NOT_AVAIL	0
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_AVAIL	1
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_UNAFFECTED	2
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2	KVM_REG_ARM_FW_REG(2)
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_AVAIL	0
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNKNOWN	1
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_AVAIL	2
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNAFFECTED	3
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_ENABLED     (1U << 4)
 
 /* Device Control API: ARM VGIC */
 #define KVM_DEV_ARM_VGIC_GRP_ADDR	0
diff --git a/virt/kvm/arm/psci.c b/virt/kvm/arm/psci.c
index 1da53e0e38f7..c82d90aad3ea 100644
--- a/virt/kvm/arm/psci.c
+++ b/virt/kvm/arm/psci.c
@@ -449,42 +449,103 @@ int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
 
 int kvm_arm_get_fw_num_regs(struct kvm_vcpu *vcpu)
 {
-	return 1;		/* PSCI version */
+	return 3;		/* PSCI version and two workaround registers */
 }
 
 int kvm_arm_copy_fw_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
 {
-	if (put_user(KVM_REG_ARM_PSCI_VERSION, uindices))
+	if (put_user(KVM_REG_ARM_PSCI_VERSION, uindices++))
+		return -EFAULT;
+
+	if (put_user(KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1, uindices++))
+		return -EFAULT;
+
+	if (put_user(KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2, uindices++))
 		return -EFAULT;
 
 	return 0;
 }
 
+#define KVM_REG_FEATURE_LEVEL_WIDTH	4
+#define KVM_REG_FEATURE_LEVEL_MASK	(BIT(KVM_REG_FEATURE_LEVEL_WIDTH) - 1)
+
+/*
+ * Convert the workaround level into an easy-to-compare number, where higher
+ * values mean better protection.
+ */
+static int get_kernel_wa_level(u64 regid)
+{
+	switch (regid) {
+	case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1:
+		switch (kvm_arm_harden_branch_predictor()) {
+		case KVM_BP_HARDEN_UNKNOWN:
+			return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_NOT_AVAIL;
+		case KVM_BP_HARDEN_NEEDED:
+			return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_AVAIL;
+		case KVM_BP_HARDEN_MITIGATED:
+			return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_UNAFFECTED;
+		}
+		return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_NOT_AVAIL;
+	case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2:
+		switch (kvm_arm_have_ssbd()) {
+		case KVM_SSBD_FORCE_DISABLE:
+			return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_AVAIL;
+		case KVM_SSBD_KERNEL:
+			return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_AVAIL;
+		case KVM_SSBD_FORCE_ENABLE:
+		case KVM_SSBD_MITIGATED:
+			return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNAFFECTED;
+		case KVM_SSBD_UNKNOWN:
+		default:
+			return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNKNOWN;
+		}
+	}
+
+	return -EINVAL;
+}
+
 int kvm_arm_get_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 {
-	if (reg->id == KVM_REG_ARM_PSCI_VERSION) {
-		void __user *uaddr = (void __user *)(long)reg->addr;
-		u64 val;
+	void __user *uaddr = (void __user *)(long)reg->addr;
+	u64 val;
 
+	switch (reg->id) {
+	case KVM_REG_ARM_PSCI_VERSION:
 		val = kvm_psci_version(vcpu, vcpu->kvm);
-		if (copy_to_user(uaddr, &val, KVM_REG_SIZE(reg->id)))
-			return -EFAULT;
+		break;
+	case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1:
+		val = get_kernel_wa_level(reg->id) & KVM_REG_FEATURE_LEVEL_MASK;
+		break;
+	case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2:
+		val = get_kernel_wa_level(reg->id) & KVM_REG_FEATURE_LEVEL_MASK;
 
-		return 0;
+		if (val == KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_AVAIL &&
+		    kvm_arm_get_vcpu_workaround_2_flag(vcpu))
+			val |= KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_ENABLED;
+		break;
+	default:
+		return -ENOENT;
 	}
 
-	return -EINVAL;
+	if (copy_to_user(uaddr, &val, KVM_REG_SIZE(reg->id)))
+		return -EFAULT;
+
+	return 0;
 }
 
 int kvm_arm_set_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 {
-	if (reg->id == KVM_REG_ARM_PSCI_VERSION) {
-		void __user *uaddr = (void __user *)(long)reg->addr;
-		bool wants_02;
-		u64 val;
+	void __user *uaddr = (void __user *)(long)reg->addr;
+	u64 val;
+	int wa_level;
+
+	if (copy_from_user(&val, uaddr, KVM_REG_SIZE(reg->id)))
+		return -EFAULT;
 
-		if (copy_from_user(&val, uaddr, KVM_REG_SIZE(reg->id)))
-			return -EFAULT;
+	switch (reg->id) {
+	case KVM_REG_ARM_PSCI_VERSION:
+	{
+		bool wants_02;
 
 		wants_02 = test_bit(KVM_ARM_VCPU_PSCI_0_2, vcpu->arch.features);
 
@@ -501,6 +562,54 @@ int kvm_arm_set_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 			vcpu->kvm->arch.psci_version = val;
 			return 0;
 		}
+		break;
+	}
+
+	case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1:
+		if (val & ~KVM_REG_FEATURE_LEVEL_MASK)
+			return -EINVAL;
+
+		if (get_kernel_wa_level(reg->id) < val)
+			return -EINVAL;
+
+		return 0;
+
+	case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2:
+		if (val & ~(KVM_REG_FEATURE_LEVEL_MASK |
+			    KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_ENABLED))
+			return -EINVAL;
+
+		wa_level = val & KVM_REG_FEATURE_LEVEL_MASK;
+
+		if (get_kernel_wa_level(reg->id) < wa_level)
+			return -EINVAL;
+
+		/* The enabled bit must not be set unless the level is AVAIL. */
+		if (wa_level != KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_AVAIL &&
+		    wa_level != val)
+			return -EINVAL;
+
+		/* Are we finished, or do we need to check the enable bit? */
+		if (kvm_arm_have_ssbd() != KVM_SSBD_KERNEL)
+			return 0;
+
+		/*
+		 * If this kernel supports the workaround to be switched on
+		 * or off, make sure it matches the requested setting.
+		 */
+		switch (wa_level) {
+		case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_AVAIL:
+			kvm_arm_set_vcpu_workaround_2_flag(vcpu,
+			    val & KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_ENABLED);
+			break;
+		case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNAFFECTED:
+			kvm_arm_set_vcpu_workaround_2_flag(vcpu, true);
+			break;
+		}
+
+		return 0;
+	default:
+		return -ENOENT;
 	}
 
 	return -EINVAL;
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH v5 2/3] KVM: arm/arm64: Add save/restore support for firmware workaround state
  2019-04-15 11:15 ` [PATCH v5 2/3] KVM: arm/arm64: Add save/restore support for firmware workaround state Andre Przywara
@ 2019-04-15 11:15   ` Andre Przywara
  2019-04-15 14:04   ` Steven Price
  2019-04-26 15:07   ` Auger Eric
  2 siblings, 0 replies; 22+ messages in thread
From: Andre Przywara @ 2019-04-15 11:15 UTC (permalink / raw)
  To: Marc Zyngier, Christoffer Dall
  Cc: kvm, Jeremy Linton, kvmarm, Steven Price, Dave Martin, linux-arm-kernel

KVM implements the firmware interface for mitigating cache speculation
vulnerabilities. Guests may use this interface to ensure mitigation is
active.
If we want to migrate such a guest to a host with a different support
level for those workarounds, migration might need to fail, to ensure that
critical guests don't lose their protection.

Introduce a way for userland to save and restore the workaround state.
On restore, we perform checks to make sure we don't downgrade our
mitigation level.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 arch/arm/include/asm/kvm_emulate.h   |  10 ++
 arch/arm/include/uapi/asm/kvm.h      |  12 +++
 arch/arm64/include/asm/kvm_emulate.h |  14 +++
 arch/arm64/include/uapi/asm/kvm.h    |  10 ++
 virt/kvm/arm/psci.c                  | 139 ++++++++++++++++++++++++---
 5 files changed, 170 insertions(+), 15 deletions(-)

diff --git a/arch/arm/include/asm/kvm_emulate.h b/arch/arm/include/asm/kvm_emulate.h
index 8927cae7c966..663a02d7e6f4 100644
--- a/arch/arm/include/asm/kvm_emulate.h
+++ b/arch/arm/include/asm/kvm_emulate.h
@@ -283,6 +283,16 @@ static inline unsigned long kvm_vcpu_get_mpidr_aff(struct kvm_vcpu *vcpu)
 	return vcpu_cp15(vcpu, c0_MPIDR) & MPIDR_HWID_BITMASK;
 }
 
+static inline bool kvm_arm_get_vcpu_workaround_2_flag(struct kvm_vcpu *vcpu)
+{
+	return false;
+}
+
+static inline void kvm_arm_set_vcpu_workaround_2_flag(struct kvm_vcpu *vcpu,
+						      bool flag)
+{
+}
+
 static inline void kvm_vcpu_set_be(struct kvm_vcpu *vcpu)
 {
 	*vcpu_cpsr(vcpu) |= PSR_E_BIT;
diff --git a/arch/arm/include/uapi/asm/kvm.h b/arch/arm/include/uapi/asm/kvm.h
index 4602464ebdfb..dfce6fdbd03b 100644
--- a/arch/arm/include/uapi/asm/kvm.h
+++ b/arch/arm/include/uapi/asm/kvm.h
@@ -214,6 +214,18 @@ struct kvm_vcpu_events {
 #define KVM_REG_ARM_FW_REG(r)		(KVM_REG_ARM | KVM_REG_SIZE_U64 | \
 					 KVM_REG_ARM_FW | ((r) & 0xffff))
 #define KVM_REG_ARM_PSCI_VERSION	KVM_REG_ARM_FW_REG(0)
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1	KVM_REG_ARM_FW_REG(1)
+	/* Higher values mean better protection. */
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_NOT_AVAIL	0
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_AVAIL	1
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_UNAFFECTED	2
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2	KVM_REG_ARM_FW_REG(2)
+	/* Higher values mean better protection. */
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_AVAIL	0
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNKNOWN	1
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_AVAIL	2
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNAFFECTED	3
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_ENABLED	(1U << 4)
 
 /* Device Control API: ARM VGIC */
 #define KVM_DEV_ARM_VGIC_GRP_ADDR	0
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index d3842791e1c4..c00c17c9adb6 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -348,6 +348,20 @@ static inline unsigned long kvm_vcpu_get_mpidr_aff(struct kvm_vcpu *vcpu)
 	return vcpu_read_sys_reg(vcpu, MPIDR_EL1) & MPIDR_HWID_BITMASK;
 }
 
+static inline bool kvm_arm_get_vcpu_workaround_2_flag(struct kvm_vcpu *vcpu)
+{
+	return vcpu->arch.workaround_flags & VCPU_WORKAROUND_2_FLAG;
+}
+
+static inline void kvm_arm_set_vcpu_workaround_2_flag(struct kvm_vcpu *vcpu,
+						      bool flag)
+{
+	if (flag)
+		vcpu->arch.workaround_flags |= VCPU_WORKAROUND_2_FLAG;
+	else
+		vcpu->arch.workaround_flags &= ~VCPU_WORKAROUND_2_FLAG;
+}
+
 static inline void kvm_vcpu_set_be(struct kvm_vcpu *vcpu)
 {
 	if (vcpu_mode_is_32bit(vcpu)) {
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index 97c3478ee6e7..e75a81a7716f 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -225,6 +225,16 @@ struct kvm_vcpu_events {
 #define KVM_REG_ARM_FW_REG(r)		(KVM_REG_ARM64 | KVM_REG_SIZE_U64 | \
 					 KVM_REG_ARM_FW | ((r) & 0xffff))
 #define KVM_REG_ARM_PSCI_VERSION	KVM_REG_ARM_FW_REG(0)
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1	KVM_REG_ARM_FW_REG(1)
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_NOT_AVAIL	0
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_AVAIL	1
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_UNAFFECTED	2
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2	KVM_REG_ARM_FW_REG(2)
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_AVAIL	0
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNKNOWN	1
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_AVAIL	2
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNAFFECTED	3
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_ENABLED     (1U << 4)
 
 /* Device Control API: ARM VGIC */
 #define KVM_DEV_ARM_VGIC_GRP_ADDR	0
diff --git a/virt/kvm/arm/psci.c b/virt/kvm/arm/psci.c
index 1da53e0e38f7..c82d90aad3ea 100644
--- a/virt/kvm/arm/psci.c
+++ b/virt/kvm/arm/psci.c
@@ -449,42 +449,103 @@ int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
 
 int kvm_arm_get_fw_num_regs(struct kvm_vcpu *vcpu)
 {
-	return 1;		/* PSCI version */
+	return 3;		/* PSCI version and two workaround registers */
 }
 
 int kvm_arm_copy_fw_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
 {
-	if (put_user(KVM_REG_ARM_PSCI_VERSION, uindices))
+	if (put_user(KVM_REG_ARM_PSCI_VERSION, uindices++))
+		return -EFAULT;
+
+	if (put_user(KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1, uindices++))
+		return -EFAULT;
+
+	if (put_user(KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2, uindices++))
 		return -EFAULT;
 
 	return 0;
 }
 
+#define KVM_REG_FEATURE_LEVEL_WIDTH	4
+#define KVM_REG_FEATURE_LEVEL_MASK	(BIT(KVM_REG_FEATURE_LEVEL_WIDTH) - 1)
+
+/*
+ * Convert the workaround level into an easy-to-compare number, where higher
+ * values mean better protection.
+ */
+static int get_kernel_wa_level(u64 regid)
+{
+	switch (regid) {
+	case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1:
+		switch (kvm_arm_harden_branch_predictor()) {
+		case KVM_BP_HARDEN_UNKNOWN:
+			return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_NOT_AVAIL;
+		case KVM_BP_HARDEN_NEEDED:
+			return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_AVAIL;
+		case KVM_BP_HARDEN_MITIGATED:
+			return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_UNAFFECTED;
+		}
+		return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_NOT_AVAIL;
+	case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2:
+		switch (kvm_arm_have_ssbd()) {
+		case KVM_SSBD_FORCE_DISABLE:
+			return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_AVAIL;
+		case KVM_SSBD_KERNEL:
+			return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_AVAIL;
+		case KVM_SSBD_FORCE_ENABLE:
+		case KVM_SSBD_MITIGATED:
+			return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNAFFECTED;
+		case KVM_SSBD_UNKNOWN:
+		default:
+			return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNKNOWN;
+		}
+	}
+
+	return -EINVAL;
+}
+
 int kvm_arm_get_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 {
-	if (reg->id == KVM_REG_ARM_PSCI_VERSION) {
-		void __user *uaddr = (void __user *)(long)reg->addr;
-		u64 val;
+	void __user *uaddr = (void __user *)(long)reg->addr;
+	u64 val;
 
+	switch (reg->id) {
+	case KVM_REG_ARM_PSCI_VERSION:
 		val = kvm_psci_version(vcpu, vcpu->kvm);
-		if (copy_to_user(uaddr, &val, KVM_REG_SIZE(reg->id)))
-			return -EFAULT;
+		break;
+	case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1:
+		val = get_kernel_wa_level(reg->id) & KVM_REG_FEATURE_LEVEL_MASK;
+		break;
+	case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2:
+		val = get_kernel_wa_level(reg->id) & KVM_REG_FEATURE_LEVEL_MASK;
 
-		return 0;
+		if (val == KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_AVAIL &&
+		    kvm_arm_get_vcpu_workaround_2_flag(vcpu))
+			val |= KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_ENABLED;
+		break;
+	default:
+		return -ENOENT;
 	}
 
-	return -EINVAL;
+	if (copy_to_user(uaddr, &val, KVM_REG_SIZE(reg->id)))
+		return -EFAULT;
+
+	return 0;
 }
 
 int kvm_arm_set_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 {
-	if (reg->id == KVM_REG_ARM_PSCI_VERSION) {
-		void __user *uaddr = (void __user *)(long)reg->addr;
-		bool wants_02;
-		u64 val;
+	void __user *uaddr = (void __user *)(long)reg->addr;
+	u64 val;
+	int wa_level;
+
+	if (copy_from_user(&val, uaddr, KVM_REG_SIZE(reg->id)))
+		return -EFAULT;
 
-		if (copy_from_user(&val, uaddr, KVM_REG_SIZE(reg->id)))
-			return -EFAULT;
+	switch (reg->id) {
+	case KVM_REG_ARM_PSCI_VERSION:
+	{
+		bool wants_02;
 
 		wants_02 = test_bit(KVM_ARM_VCPU_PSCI_0_2, vcpu->arch.features);
 
@@ -501,6 +562,54 @@ int kvm_arm_set_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 			vcpu->kvm->arch.psci_version = val;
 			return 0;
 		}
+		break;
+	}
+
+	case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1:
+		if (val & ~KVM_REG_FEATURE_LEVEL_MASK)
+			return -EINVAL;
+
+		if (get_kernel_wa_level(reg->id) < val)
+			return -EINVAL;
+
+		return 0;
+
+	case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2:
+		if (val & ~(KVM_REG_FEATURE_LEVEL_MASK |
+			    KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_ENABLED))
+			return -EINVAL;
+
+		wa_level = val & KVM_REG_FEATURE_LEVEL_MASK;
+
+		if (get_kernel_wa_level(reg->id) < wa_level)
+			return -EINVAL;
+
+		/* The enabled bit must not be set unless the level is AVAIL. */
+		if (wa_level != KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_AVAIL &&
+		    wa_level != val)
+			return -EINVAL;
+
+		/* Are we finished, or do we need to check the enable bit? */
+		if (kvm_arm_have_ssbd() != KVM_SSBD_KERNEL)
+			return 0;
+
+		/*
+		 * If this kernel supports the workaround to be switched on
+		 * or off, make sure it matches the requested setting.
+		 */
+		switch (wa_level) {
+		case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_AVAIL:
+			kvm_arm_set_vcpu_workaround_2_flag(vcpu,
+			    val & KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_ENABLED);
+			break;
+		case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNAFFECTED:
+			kvm_arm_set_vcpu_workaround_2_flag(vcpu, true);
+			break;
+		}
+
+		return 0;
+	default:
+		return -ENOENT;
 	}
 
 	return -EINVAL;
-- 
2.17.1

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH v5 3/3] KVM: doc: add API documentation on the KVM_REG_ARM_WORKAROUNDS register
  2019-04-15 11:15 [PATCH v5 0/3] KVM: arm/arm64: Add VCPU workarounds firmware register Andre Przywara
                   ` (2 preceding siblings ...)
  2019-04-15 11:15 ` [PATCH v5 2/3] KVM: arm/arm64: Add save/restore support for firmware workaround state Andre Przywara
@ 2019-04-15 11:15 ` Andre Przywara
  2019-04-15 11:15   ` Andre Przywara
                     ` (2 more replies)
  3 siblings, 3 replies; 22+ messages in thread
From: Andre Przywara @ 2019-04-15 11:15 UTC (permalink / raw)
  To: Marc Zyngier, Christoffer Dall
  Cc: kvm, Jeremy Linton, kvmarm, Steven Price, Dave Martin, linux-arm-kernel

Add documentation for the newly defined firmware registers to save and
restore any vulnerability mitigation status.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 Documentation/virtual/kvm/arm/psci.txt | 31 ++++++++++++++++++++++++++
 1 file changed, 31 insertions(+)

diff --git a/Documentation/virtual/kvm/arm/psci.txt b/Documentation/virtual/kvm/arm/psci.txt
index aafdab887b04..a876c1baa56e 100644
--- a/Documentation/virtual/kvm/arm/psci.txt
+++ b/Documentation/virtual/kvm/arm/psci.txt
@@ -28,3 +28,34 @@ The following register is defined:
   - Allows any PSCI version implemented by KVM and compatible with
     v0.2 to be set with SET_ONE_REG
   - Affects the whole VM (even if the register view is per-vcpu)
+
+* KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1:
+  Holds the state of the firmware support to mitigate CVE-2017-5715, as
+  offered by KVM to the guest via an HVC call. The workaround is described
+  under SMCCC_ARCH_WORKAROUND_1 in [1].
+  Accepted values are:
+    KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_NOT_AVAIL: KVM does not offer
+      firmware support for the workaround. The mitigation status for the
+      guest is unknown.
+    KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_AVAIL: The workaround HVC call is
+      available to the guest and required for the mitigation.
+    KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_UNAFFECTED: The workaround HVC call
+      is available to the guest, but it is not needed on this vCPU.
+
+* KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2:
+  Holds the state of the firmware support to mitigate CVE-2018-3639, as
+  offered by KVM to the guest via an HVC call. The workaround is described
+  under SMCCC_ARCH_WORKAROUND_2 in [1].
+  Accepted values are:
+    KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_AVAIL: A workaround is not
+      available. KVM does not offer firmware support for the workaround.
+    KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNKNOWN: The workaround state is
+      unknown. KVM does not offer firmware support for the workaround.
+    KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_AVAIL: The workaround is available,
+      and can be disabled by a vCPU. If
+      KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_ENABLED is set, it is active for
+      this vCPU.
+    KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNAFFECTED: The workaround is always
+      active on this vCPU, or it is not needed.
+
+[1] https://developer.arm.com/-/media/developer/pdf/ARM_DEN_0070A_Firmware_interfaces_for_mitigating_CVE-2017-5715.pdf
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* Re: [PATCH v5 1/3] arm64: KVM: Propagate full Spectre v2 workaround state to KVM guests
  2019-04-15 11:15 ` [PATCH v5 1/3] arm64: KVM: Propagate full Spectre v2 workaround state to KVM guests Andre Przywara
  2019-04-15 11:15   ` Andre Przywara
@ 2019-04-15 14:03   ` Steven Price
  2019-04-15 14:03     ` Steven Price
  2019-04-26 15:37     ` Auger Eric
  1 sibling, 2 replies; 22+ messages in thread
From: Steven Price @ 2019-04-15 14:03 UTC (permalink / raw)
  To: Andre Przywara, Marc Zyngier, Christoffer Dall
  Cc: kvm, Jeremy Linton, kvmarm, Dave Martin, linux-arm-kernel

On 15/04/2019 12:15, Andre Przywara wrote:
> Recent commits added the explicit notion of "Not affected" to the state
> of the Spectre v2 (aka. BP_HARDENING) workaround, where we just had
> "needed" and "unknown" before.
> 
> Export this knowledge to the rest of the kernel and enhance the existing
> kvm_arm_harden_branch_predictor() to report this new state as well.
> Export this new state to guests when they use KVM's firmware interface
> emulation.
> 
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> ---
>  arch/arm/include/asm/kvm_host.h     | 12 +++++++++---
>  arch/arm64/include/asm/cpufeature.h |  6 ++++++
>  arch/arm64/include/asm/kvm_host.h   | 16 ++++++++++++++--
>  arch/arm64/kernel/cpu_errata.c      | 23 ++++++++++++++++++-----
>  virt/kvm/arm/psci.c                 | 10 +++++++++-
>  5 files changed, 56 insertions(+), 11 deletions(-)
> 
> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
> index 770d73257ad9..836479e4b340 100644
> --- a/arch/arm/include/asm/kvm_host.h
> +++ b/arch/arm/include/asm/kvm_host.h
> @@ -364,7 +364,11 @@ static inline void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu) {}
>  static inline void kvm_arm_vhe_guest_enter(void) {}
>  static inline void kvm_arm_vhe_guest_exit(void) {}
>  
> -static inline bool kvm_arm_harden_branch_predictor(void)
> +#define KVM_BP_HARDEN_UNKNOWN		-1
> +#define KVM_BP_HARDEN_NEEDED		0
> +#define KVM_BP_HARDEN_MITIGATED		1

I find the naming here a little confusing - it's not really clear what
"mitigated" means (see below).

> +
> +static inline int kvm_arm_harden_branch_predictor(void)
>  {
>  	switch(read_cpuid_part()) {
>  #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
> @@ -372,10 +376,12 @@ static inline bool kvm_arm_harden_branch_predictor(void)
>  	case ARM_CPU_PART_CORTEX_A12:
>  	case ARM_CPU_PART_CORTEX_A15:
>  	case ARM_CPU_PART_CORTEX_A17:
> -		return true;
> +		return KVM_BP_HARDEN_NEEDED;
>  #endif
> +	case ARM_CPU_PART_CORTEX_A7:
> +		return KVM_BP_HARDEN_MITIGATED;
>  	default:
> -		return false;
> +		return KVM_BP_HARDEN_UNKNOWN;
>  	}
>  }
>  
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index 6ccdc97e5d6a..3c5b25c1bda1 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -622,6 +622,12 @@ static inline bool system_uses_irq_prio_masking(void)
>  	       cpus_have_const_cap(ARM64_HAS_IRQ_PRIO_MASKING);
>  }
>  
> +#define ARM64_BP_HARDEN_UNKNOWN		-1
> +#define ARM64_BP_HARDEN_NEEDED		0
> +#define ARM64_BP_HARDEN_MITIGATED	1
> +
> +int get_spectre_v2_workaround_state(void);
> +
>  #define ARM64_SSBD_UNKNOWN		-1
>  #define ARM64_SSBD_FORCE_DISABLE	0
>  #define ARM64_SSBD_KERNEL		1
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index a01fe087e022..bf9a59b7d1ce 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -555,9 +555,21 @@ static inline void kvm_arm_vhe_guest_exit(void)
>  	isb();
>  }
>  
> -static inline bool kvm_arm_harden_branch_predictor(void)
> +#define KVM_BP_HARDEN_UNKNOWN		-1
> +#define KVM_BP_HARDEN_NEEDED		0
> +#define KVM_BP_HARDEN_MITIGATED		1
> +
> +static inline int kvm_arm_harden_branch_predictor(void)
>  {
> -	return cpus_have_const_cap(ARM64_HARDEN_BRANCH_PREDICTOR);
> +	switch (get_spectre_v2_workaround_state()) {
> +	case ARM64_BP_HARDEN_NEEDED:
> +		return KVM_BP_HARDEN_NEEDED;
> +	case ARM64_BP_HARDEN_MITIGATED:
> +		return KVM_BP_HARDEN_MITIGATED;
> +	case ARM64_BP_HARDEN_UNKNOWN:
> +	default:
> +		return KVM_BP_HARDEN_UNKNOWN;
> +	}
>  }
>  
>  #define KVM_SSBD_UNKNOWN		-1
> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> index a1f3188c7be0..7fa23ab09d4e 100644
> --- a/arch/arm64/kernel/cpu_errata.c
> +++ b/arch/arm64/kernel/cpu_errata.c
> @@ -555,6 +555,17 @@ cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
>  static bool __hardenbp_enab = true;
>  static bool __spectrev2_safe = true;
>  
> +int get_spectre_v2_workaround_state(void)
> +{
> +	if (__spectrev2_safe)
> +		return ARM64_BP_HARDEN_MITIGATED;
> +
> +	if (!__hardenbp_enab)
> +		return ARM64_BP_HARDEN_UNKNOWN;
> +
> +	return ARM64_BP_HARDEN_NEEDED;
> +}
> +
>  /*
>   * List of CPUs that do not need any Spectre-v2 mitigation at all.
>   */
> @@ -834,13 +845,15 @@ ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr,
>  ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,
>  		char *buf)
>  {
> -	if (__spectrev2_safe)
> +	switch (get_spectre_v2_workaround_state()) {
> +	case ARM64_BP_HARDEN_MITIGATED:
>  		return sprintf(buf, "Not affected\n");

Here "mitigated" means "not affected".

> -
> -	if (__hardenbp_enab)
> +        case ARM64_BP_HARDEN_NEEDED:
>  		return sprintf(buf, "Mitigation: Branch predictor hardening\n");

And "harden needed" means mitigated.

> -
> -	return sprintf(buf, "Vulnerable\n");
> +        case ARM64_BP_HARDEN_UNKNOWN:
> +	default:
> +		return sprintf(buf, "Vulnerable\n");
> +	}
>  }
>  
>  ssize_t cpu_show_spec_store_bypass(struct device *dev,
> diff --git a/virt/kvm/arm/psci.c b/virt/kvm/arm/psci.c
> index 34d08ee63747..1da53e0e38f7 100644
> --- a/virt/kvm/arm/psci.c
> +++ b/virt/kvm/arm/psci.c
> @@ -412,8 +412,16 @@ int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
>  		feature = smccc_get_arg1(vcpu);
>  		switch(feature) {
>  		case ARM_SMCCC_ARCH_WORKAROUND_1:
> -			if (kvm_arm_harden_branch_predictor())
> +			switch (kvm_arm_harden_branch_predictor()) {
> +			case KVM_BP_HARDEN_UNKNOWN:
> +				break;
> +			case KVM_BP_HARDEN_NEEDED:
>  				val = SMCCC_RET_SUCCESS;
> +				break;
> +			case KVM_BP_HARDEN_MITIGATED:
> +				val = SMCCC_RET_NOT_REQUIRED;

Would KVM_BP_HARDEN_NOT_REQUIRED be a more logical name?

Steve

> +				break;
> +			}
>  			break;
>  		case ARM_SMCCC_ARCH_WORKAROUND_2:
>  			switch (kvm_arm_have_ssbd()) {
> 

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v5 2/3] KVM: arm/arm64: Add save/restore support for firmware workaround state
  2019-04-15 11:15 ` [PATCH v5 2/3] KVM: arm/arm64: Add save/restore support for firmware workaround state Andre Przywara
  2019-04-15 11:15   ` Andre Przywara
@ 2019-04-15 14:04   ` Steven Price
  2019-04-15 14:04     ` Steven Price
  2019-04-26 15:07   ` Auger Eric
  2 siblings, 1 reply; 22+ messages in thread
From: Steven Price @ 2019-04-15 14:04 UTC (permalink / raw)
  To: Andre Przywara, Marc Zyngier, Christoffer Dall
  Cc: kvm, Jeremy Linton, kvmarm, Dave Martin, linux-arm-kernel

On 15/04/2019 12:15, Andre Przywara wrote:
> KVM implements the firmware interface for mitigating cache speculation
> vulnerabilities. Guests may use this interface to ensure mitigation is
> active.
> If we want to migrate such a guest to a host with a different support
> level for those workarounds, migration might need to fail, to ensure that
> critical guests don't lose their protection.
> 
> Introduce a way for userland to save and restore the workarounds state.
> On restoring we do checks that make sure we don't downgrade our
> mitigation level.
> 
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>

Reviewed-by: Steven Price <steven.price@arm.com>

Thanks,

Steve

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v5 3/3] KVM: doc: add API documentation on the KVM_REG_ARM_WORKAROUNDS register
  2019-04-15 11:15 ` [PATCH v5 3/3] KVM: doc: add API documentation on the KVM_REG_ARM_WORKAROUNDS register Andre Przywara
  2019-04-15 11:15   ` Andre Przywara
@ 2019-04-15 14:06   ` Steven Price
  2019-04-15 14:06     ` Steven Price
  2019-04-26 15:25   ` Auger Eric
  2 siblings, 1 reply; 22+ messages in thread
From: Steven Price @ 2019-04-15 14:06 UTC (permalink / raw)
  To: Andre Przywara, Marc Zyngier, Christoffer Dall
  Cc: kvm, Jeremy Linton, kvmarm, Dave Martin, linux-arm-kernel

On 15/04/2019 12:15, Andre Przywara wrote:
> Add documentation for the newly defined firmware registers to save and
> restore any vulnerability mitigation status.
> 
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>

Reviewed-by: Steven Price <steven.price@arm.com>

Thanks,

Steve

> ---
>  Documentation/virtual/kvm/arm/psci.txt | 31 ++++++++++++++++++++++++++
>  1 file changed, 31 insertions(+)
> 
> diff --git a/Documentation/virtual/kvm/arm/psci.txt b/Documentation/virtual/kvm/arm/psci.txt
> index aafdab887b04..a876c1baa56e 100644
> --- a/Documentation/virtual/kvm/arm/psci.txt
> +++ b/Documentation/virtual/kvm/arm/psci.txt
> @@ -28,3 +28,34 @@ The following register is defined:
>    - Allows any PSCI version implemented by KVM and compatible with
>      v0.2 to be set with SET_ONE_REG
>    - Affects the whole VM (even if the register view is per-vcpu)
> +
> +* KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1:
> +  Holds the state of the firmware support to mitigate CVE-2017-5715, as
> +  offered by KVM to the guest via an HVC call. The workaround is described
> +  under SMCCC_ARCH_WORKAROUND_1 in [1].
> +  Accepted values are:
> +    KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_NOT_AVAIL: KVM does not offer
> +      firmware support for the workaround. The mitigation status for the
> +      guest is unknown.
> +    KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_AVAIL: The workaround HVC call is
> +      available to the guest and required for the mitigation.
> +    KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_UNAFFECTED: The workaround HVC call
> +      is available to the guest, but it is not needed on this vCPU.
> +
> +* KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2:
> +  Holds the state of the firmware support to mitigate CVE-2018-3639, as
> +  offered by KVM to the guest via an HVC call. The workaround is described
> +  under SMCCC_ARCH_WORKAROUND_2 in [1].
> +  Accepted values are:
> +    KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_AVAIL: A workaround is not
> +      available. KVM does not offer firmware support for the workaround.
> +    KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNKNOWN: The workaround state is
> +      unknown. KVM does not offer firmware support for the workaround.
> +    KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_AVAIL: The workaround is available,
> +      and can be disabled by a vCPU. If
> +      KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_ENABLED is set, it is active for
> +      this vCPU.
> +    KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNAFFECTED: The workaround is always
> +      active on this vCPU or it is not needed.
> +
> +[1] https://developer.arm.com/-/media/developer/pdf/ARM_DEN_0070A_Firmware_interfaces_for_mitigating_CVE-2017-5715.pdf
> 

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v5 2/3] KVM: arm/arm64: Add save/restore support for firmware workaround state
  2019-04-15 11:15 ` [PATCH v5 2/3] KVM: arm/arm64: Add save/restore support for firmware workaround state Andre Przywara
  2019-04-15 11:15   ` Andre Przywara
  2019-04-15 14:04   ` Steven Price
@ 2019-04-26 15:07   ` Auger Eric
  2019-04-26 15:07     ` Auger Eric
  2 siblings, 1 reply; 22+ messages in thread
From: Auger Eric @ 2019-04-26 15:07 UTC (permalink / raw)
  To: Andre Przywara, Marc Zyngier, Christoffer Dall
  Cc: kvm, Jeremy Linton, kvmarm, Steven Price, Dave Martin, linux-arm-kernel

Hi Andre,

On 4/15/19 1:15 PM, Andre Przywara wrote:
> KVM implements the firmware interface for mitigating cache speculation
> vulnerabilities. Guests may use this interface to ensure mitigation is
> active.
> If we want to migrate such a guest to a host with a different support
> level for those workarounds, migration might need to fail, to ensure that
> critical guests don't lose their protection.
> 
> Introduce a way for userland to save and restore the workarounds state.
> On restoring we do checks that make sure we don't downgrade our
> mitigation level.
Reviewed-by: Eric Auger <eric.auger@redhat.com>

Thanks

Eric
> 
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> ---
>  arch/arm/include/asm/kvm_emulate.h   |  10 ++
>  arch/arm/include/uapi/asm/kvm.h      |  12 +++
>  arch/arm64/include/asm/kvm_emulate.h |  14 +++
>  arch/arm64/include/uapi/asm/kvm.h    |  10 ++
>  virt/kvm/arm/psci.c                  | 139 ++++++++++++++++++++++++---
>  5 files changed, 170 insertions(+), 15 deletions(-)
> 
> diff --git a/arch/arm/include/asm/kvm_emulate.h b/arch/arm/include/asm/kvm_emulate.h
> index 8927cae7c966..663a02d7e6f4 100644
> --- a/arch/arm/include/asm/kvm_emulate.h
> +++ b/arch/arm/include/asm/kvm_emulate.h
> @@ -283,6 +283,16 @@ static inline unsigned long kvm_vcpu_get_mpidr_aff(struct kvm_vcpu *vcpu)
>  	return vcpu_cp15(vcpu, c0_MPIDR) & MPIDR_HWID_BITMASK;
>  }
>  
> +static inline bool kvm_arm_get_vcpu_workaround_2_flag(struct kvm_vcpu *vcpu)
> +{
> +	return false;
> +}
> +
> +static inline void kvm_arm_set_vcpu_workaround_2_flag(struct kvm_vcpu *vcpu,
> +						      bool flag)
> +{
> +}
> +
>  static inline void kvm_vcpu_set_be(struct kvm_vcpu *vcpu)
>  {
>  	*vcpu_cpsr(vcpu) |= PSR_E_BIT;
> diff --git a/arch/arm/include/uapi/asm/kvm.h b/arch/arm/include/uapi/asm/kvm.h
> index 4602464ebdfb..dfce6fdbd03b 100644
> --- a/arch/arm/include/uapi/asm/kvm.h
> +++ b/arch/arm/include/uapi/asm/kvm.h
> @@ -214,6 +214,18 @@ struct kvm_vcpu_events {
>  #define KVM_REG_ARM_FW_REG(r)		(KVM_REG_ARM | KVM_REG_SIZE_U64 | \
>  					 KVM_REG_ARM_FW | ((r) & 0xffff))
>  #define KVM_REG_ARM_PSCI_VERSION	KVM_REG_ARM_FW_REG(0)
> +#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1	KVM_REG_ARM_FW_REG(1)
> +	/* Higher values mean better protection. */
> +#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_NOT_AVAIL	0
> +#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_AVAIL	1
> +#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_UNAFFECTED	2
> +#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2	KVM_REG_ARM_FW_REG(2)
> +	/* Higher values mean better protection. */
> +#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_AVAIL	0
> +#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNKNOWN	1
> +#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_AVAIL	2
> +#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNAFFECTED	3
> +#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_ENABLED	(1U << 4)
>  
>  /* Device Control API: ARM VGIC */
>  #define KVM_DEV_ARM_VGIC_GRP_ADDR	0
> diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
> index d3842791e1c4..c00c17c9adb6 100644
> --- a/arch/arm64/include/asm/kvm_emulate.h
> +++ b/arch/arm64/include/asm/kvm_emulate.h
> @@ -348,6 +348,20 @@ static inline unsigned long kvm_vcpu_get_mpidr_aff(struct kvm_vcpu *vcpu)
>  	return vcpu_read_sys_reg(vcpu, MPIDR_EL1) & MPIDR_HWID_BITMASK;
>  }
>  
> +static inline bool kvm_arm_get_vcpu_workaround_2_flag(struct kvm_vcpu *vcpu)
> +{
> +	return vcpu->arch.workaround_flags & VCPU_WORKAROUND_2_FLAG;
> +}
> +
> +static inline void kvm_arm_set_vcpu_workaround_2_flag(struct kvm_vcpu *vcpu,
> +						      bool flag)
> +{
> +	if (flag)
> +		vcpu->arch.workaround_flags |= VCPU_WORKAROUND_2_FLAG;
> +	else
> +		vcpu->arch.workaround_flags &= ~VCPU_WORKAROUND_2_FLAG;
> +}
> +
>  static inline void kvm_vcpu_set_be(struct kvm_vcpu *vcpu)
>  {
>  	if (vcpu_mode_is_32bit(vcpu)) {
> diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
> index 97c3478ee6e7..e75a81a7716f 100644
> --- a/arch/arm64/include/uapi/asm/kvm.h
> +++ b/arch/arm64/include/uapi/asm/kvm.h
> @@ -225,6 +225,16 @@ struct kvm_vcpu_events {
>  #define KVM_REG_ARM_FW_REG(r)		(KVM_REG_ARM64 | KVM_REG_SIZE_U64 | \
>  					 KVM_REG_ARM_FW | ((r) & 0xffff))
>  #define KVM_REG_ARM_PSCI_VERSION	KVM_REG_ARM_FW_REG(0)
> +#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1	KVM_REG_ARM_FW_REG(1)
> +#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_NOT_AVAIL	0
> +#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_AVAIL	1
> +#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_UNAFFECTED	2
> +#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2	KVM_REG_ARM_FW_REG(2)
> +#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_AVAIL	0
> +#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNKNOWN	1
> +#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_AVAIL	2
> +#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNAFFECTED	3
> +#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_ENABLED     (1U << 4)
>  
>  /* Device Control API: ARM VGIC */
>  #define KVM_DEV_ARM_VGIC_GRP_ADDR	0
> diff --git a/virt/kvm/arm/psci.c b/virt/kvm/arm/psci.c
> index 1da53e0e38f7..c82d90aad3ea 100644
> --- a/virt/kvm/arm/psci.c
> +++ b/virt/kvm/arm/psci.c
> @@ -449,42 +449,103 @@ int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
>  
>  int kvm_arm_get_fw_num_regs(struct kvm_vcpu *vcpu)
>  {
> -	return 1;		/* PSCI version */
> +	return 3;		/* PSCI version and two workaround registers */
>  }
>  
>  int kvm_arm_copy_fw_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
>  {
> -	if (put_user(KVM_REG_ARM_PSCI_VERSION, uindices))
> +	if (put_user(KVM_REG_ARM_PSCI_VERSION, uindices++))
> +		return -EFAULT;
> +
> +	if (put_user(KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1, uindices++))
> +		return -EFAULT;
> +
> +	if (put_user(KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2, uindices++))
>  		return -EFAULT;
>  
>  	return 0;
>  }
>  
> +#define KVM_REG_FEATURE_LEVEL_WIDTH	4
> +#define KVM_REG_FEATURE_LEVEL_MASK	(BIT(KVM_REG_FEATURE_LEVEL_WIDTH) - 1)
> +
> +/*
> + * Convert the workaround level into an easy-to-compare number, where higher
> + * values mean better protection.
> + */
> +static int get_kernel_wa_level(u64 regid)
> +{
> +	switch (regid) {
> +	case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1:
> +		switch (kvm_arm_harden_branch_predictor()) {
> +		case KVM_BP_HARDEN_UNKNOWN:
> +			return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_NOT_AVAIL;
> +		case KVM_BP_HARDEN_NEEDED:
> +			return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_AVAIL;
> +		case KVM_BP_HARDEN_MITIGATED:
> +			return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_UNAFFECTED;
> +		}
> +		return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_NOT_AVAIL;
> +	case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2:
> +		switch (kvm_arm_have_ssbd()) {
> +		case KVM_SSBD_FORCE_DISABLE:
> +			return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_AVAIL;
> +		case KVM_SSBD_KERNEL:
> +			return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_AVAIL;
> +		case KVM_SSBD_FORCE_ENABLE:
> +		case KVM_SSBD_MITIGATED:
> +			return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNAFFECTED;
> +		case KVM_SSBD_UNKNOWN:
> +		default:
> +			return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNKNOWN;
> +		}
> +	}
> +
> +	return -EINVAL;
> +}
> +
>  int kvm_arm_get_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
>  {
> -	if (reg->id == KVM_REG_ARM_PSCI_VERSION) {
> -		void __user *uaddr = (void __user *)(long)reg->addr;
> -		u64 val;
> +	void __user *uaddr = (void __user *)(long)reg->addr;
> +	u64 val;
>  
> +	switch (reg->id) {
> +	case KVM_REG_ARM_PSCI_VERSION:
>  		val = kvm_psci_version(vcpu, vcpu->kvm);
> -		if (copy_to_user(uaddr, &val, KVM_REG_SIZE(reg->id)))
> -			return -EFAULT;
> +		break;
> +	case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1:
> +		val = get_kernel_wa_level(reg->id) & KVM_REG_FEATURE_LEVEL_MASK;
> +		break;
> +	case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2:
> +		val = get_kernel_wa_level(reg->id) & KVM_REG_FEATURE_LEVEL_MASK;
>  
> -		return 0;
> +		if (val == KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_AVAIL &&
> +		    kvm_arm_get_vcpu_workaround_2_flag(vcpu))
> +			val |= KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_ENABLED;
> +		break;
> +	default:
> +		return -ENOENT;
>  	}
>  
> -	return -EINVAL;
> +	if (copy_to_user(uaddr, &val, KVM_REG_SIZE(reg->id)))
> +		return -EFAULT;
> +
> +	return 0;
>  }
>  
>  int kvm_arm_set_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
>  {
> -	if (reg->id == KVM_REG_ARM_PSCI_VERSION) {
> -		void __user *uaddr = (void __user *)(long)reg->addr;
> -		bool wants_02;
> -		u64 val;
> +	void __user *uaddr = (void __user *)(long)reg->addr;
> +	u64 val;
> +	int wa_level;
> +
> +	if (copy_from_user(&val, uaddr, KVM_REG_SIZE(reg->id)))
> +		return -EFAULT;
>  
> -		if (copy_from_user(&val, uaddr, KVM_REG_SIZE(reg->id)))
> -			return -EFAULT;
> +	switch (reg->id) {
> +	case KVM_REG_ARM_PSCI_VERSION:
> +	{
> +		bool wants_02;
>  
>  		wants_02 = test_bit(KVM_ARM_VCPU_PSCI_0_2, vcpu->arch.features);
>  
> @@ -501,6 +562,54 @@ int kvm_arm_set_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
>  			vcpu->kvm->arch.psci_version = val;
>  			return 0;
>  		}
> +		break;
> +	}
> +
> +	case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1:
> +		if (val & ~KVM_REG_FEATURE_LEVEL_MASK)
> +			return -EINVAL;
> +
> +		if (get_kernel_wa_level(reg->id) < val)
> +			return -EINVAL;
> +
> +		return 0;
> +
> +	case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2:
> +		if (val & ~(KVM_REG_FEATURE_LEVEL_MASK |
> +			    KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_ENABLED))
> +			return -EINVAL;
> +
> +		wa_level = val & KVM_REG_FEATURE_LEVEL_MASK;
> +
> +		if (get_kernel_wa_level(reg->id) < wa_level)
> +			return -EINVAL;
> +
> +		/* The enabled bit must not be set unless the level is AVAIL. */
> +		if (wa_level != KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_AVAIL &&
> +		    wa_level != val)
> +			return -EINVAL;
> +
> +		/* Are we finished, or do we need to check the enable bit? */
> +		if (kvm_arm_have_ssbd() != KVM_SSBD_KERNEL)
> +			return 0;
> +
> +		/*
> +		 * If this kernel supports the workaround to be switched on
> +		 * or off, make sure it matches the requested setting.
> +		 */
> +		switch (wa_level) {
> +		case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_AVAIL:
> +			kvm_arm_set_vcpu_workaround_2_flag(vcpu,
> +			    val & KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_ENABLED);
> +			break;
> +		case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNAFFECTED:
> +			kvm_arm_set_vcpu_workaround_2_flag(vcpu, true);
> +			break;
> +		}
> +
> +		return 0;
> +	default:
> +		return -ENOENT;
>  	}
>  
>  	return -EINVAL;
> 

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v5 3/3] KVM: doc: add API documentation on the KVM_REG_ARM_WORKAROUNDS register
  2019-04-15 11:15 ` [PATCH v5 3/3] KVM: doc: add API documentation on the KVM_REG_ARM_WORKAROUNDS register Andre Przywara
  2019-04-15 11:15   ` Andre Przywara
  2019-04-15 14:06   ` Steven Price
@ 2019-04-26 15:25   ` Auger Eric
  2019-04-26 15:25     ` Auger Eric
  2 siblings, 1 reply; 22+ messages in thread
From: Auger Eric @ 2019-04-26 15:25 UTC (permalink / raw)
  To: Andre Przywara, Marc Zyngier, Christoffer Dall
  Cc: kvm, Jeremy Linton, kvmarm, Steven Price, Dave Martin, linux-arm-kernel

Hi Andre,

On 4/15/19 1:15 PM, Andre Przywara wrote:
> Add documentation for the newly defined firmware registers to save and
> restore any vulnerability mitigation status.
> 
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> ---
>  Documentation/virtual/kvm/arm/psci.txt | 31 ++++++++++++++++++++++++++
>  1 file changed, 31 insertions(+)
> 
> diff --git a/Documentation/virtual/kvm/arm/psci.txt b/Documentation/virtual/kvm/arm/psci.txt
> index aafdab887b04..a876c1baa56e 100644
> --- a/Documentation/virtual/kvm/arm/psci.txt
> +++ b/Documentation/virtual/kvm/arm/psci.txt
> @@ -28,3 +28,34 @@ The following register is defined:
>    - Allows any PSCI version implemented by KVM and compatible with
>      v0.2 to be set with SET_ONE_REG
>    - Affects the whole VM (even if the register view is per-vcpu)
> +
> +* KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1:
> +  Holds the state of the firmware support to mitigate CVE-2017-5715, as
> +  offered by KVM to the guest via an HVC call. The workaround is described
> +  under SMCCC_ARCH_WORKAROUND_1 in [1].
> +  Accepted values are:
Why not simplify it overall:
> +    KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_NOT_AVAIL: KVM does not offer
> +      firmware support for the workaround. The mitigation status for the
> +      guest is unknown.
The workaround is not supported by KVM
> +    KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_AVAIL: The workaround HVC call is
> +      available to the guest and required for the mitigation.
Mitigation is needed for this vCPU and the workaround is supported by KVM
> +    KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_UNAFFECTED: The workaround HVC call
> +      is available to the guest, but it is not needed on this vCPU.
Mitigation is not needed for this vCPU(. The workaround is supported
though.)
> +
> +* KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2:
> +  Holds the state of the firmware support to mitigate CVE-2018-3639, as
> +  offered by KVM to the guest via an HVC call. The workaround is described
> +  under SMCCC_ARCH_WORKAROUND_2 in [1].
> +  Accepted values are:
> +    KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_AVAIL: A workaround is not
> +      available. KVM does not offer firmware support for the workaround.
The workaround is not supported by KVM
> +    KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNKNOWN: The workaround state is
> +      unknown. KVM does not offer firmware support for the workaround.
The workaround state is unknown
> +    KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_AVAIL: The workaround is available,
> +      and can be disabled by a vCPU. If
s/by a vCPU/per vCPU?
The workaround is available and can be set per vCPU
> +      KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_ENABLED is set, it is active for
> +      this vCPU.
> +    KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNAFFECTED: The workaround is always
> +      active on this vCPU or it is not needed.

Thanks

Eric
> +
> +[1] https://developer.arm.com/-/media/developer/pdf/ARM_DEN_0070A_Firmware_interfaces_for_mitigating_CVE-2017-5715.pdf
> 

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v5 3/3] KVM: doc: add API documentation on the KVM_REG_ARM_WORKAROUNDS register
  2019-04-26 15:25   ` Auger Eric
@ 2019-04-26 15:25     ` Auger Eric
  0 siblings, 0 replies; 22+ messages in thread
From: Auger Eric @ 2019-04-26 15:25 UTC (permalink / raw)
  To: Andre Przywara, Marc Zyngier, Christoffer Dall
  Cc: kvm, Jeremy Linton, kvmarm, Steven Price, Dave Martin, linux-arm-kernel

Hi Andre,

On 4/15/19 1:15 PM, Andre Przywara wrote:
> Add documentation for the newly defined firmware registers to save and
> restore any vulnerability mitigation status.
> 
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> ---
>  Documentation/virtual/kvm/arm/psci.txt | 31 ++++++++++++++++++++++++++
>  1 file changed, 31 insertions(+)
> 
> diff --git a/Documentation/virtual/kvm/arm/psci.txt b/Documentation/virtual/kvm/arm/psci.txt
> index aafdab887b04..a876c1baa56e 100644
> --- a/Documentation/virtual/kvm/arm/psci.txt
> +++ b/Documentation/virtual/kvm/arm/psci.txt
> @@ -28,3 +28,34 @@ The following register is defined:
>    - Allows any PSCI version implemented by KVM and compatible with
>      v0.2 to be set with SET_ONE_REG
>    - Affects the whole VM (even if the register view is per-vcpu)
> +
> +* KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1:
> +  Holds the state of the firmware support to mitigate CVE-2017-5715, as
> +  offered by KVM to the guest via a HVC call. The workaround is described
> +  under SMCCC_ARCH_WORKAROUND_1 in [1].
> +  Accepted values are:
Why not simplify this overall:
> +    KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_NOT_AVAIL: KVM does not offer
> +      firmware support for the workaround. The mitigation status for the
> +      guest is unknown.
The workaround is not supported by KVM
> +    KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_AVAIL: The workaround HVC call is
> +      available to the guest and required for the mitigation.
Mitigation is needed for this vCPU and the workaround is supported by KVM
> +    KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_UNAFFECTED: The workaround HVC call
> +      is available to the guest, but it is not needed on this VCPU.
Mitigation is not needed for this vCPU(. The workaround is supported
though.)
> +
> +* KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2:
> +  Holds the state of the firmware support to mitigate CVE-2018-3639, as
> +  offered by KVM to the guest via a HVC call. The workaround is described
> +  under SMCCC_ARCH_WORKAROUND_2 in [1].
> +  Accepted values are:
> +    KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_AVAIL: A workaround is not
> +      available. KVM does not offer firmware support for the workaround.
The workaround is not supported by KVM
> +    KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNKNOWN: The workaround state is
> +      unknown. KVM does not offer firmware support for the workaround.
The workaround state is unknown
> +    KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_AVAIL: The workaround is available,
> +      and can be disabled by a vCPU. If
s/by a vCPU/per vCPU?
The workaround is available and can be set per vCPU
> +      KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_ENABLED is set, it is active for
> +      this vCPU.
> +    KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNAFFECTED: The workaround is always
> +      active on this vCPU or it is not needed.

Thanks

Eric
> +
> +[1] https://developer.arm.com/-/media/developer/pdf/ARM_DEN_0070A_Firmware_interfaces_for_mitigating_CVE-2017-5715.pdf
> 
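
Stepping back to the cover letter for a moment: the protection level in these registers lives in the lower 4 bits, with smaller values meaning weaker protection, and SET_ONE_REG must reject any value that would downgrade the guest. A minimal sketch of that restore-time check (the mask and level names here are illustrative assumptions, not the final uapi):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Assumed encoding from the cover letter: protection level in the low
 * 4 bits, smaller value == weaker protection.  Names are hypothetical.
 */
#define WA_LEVEL_MASK       0xfULL

#define WA1_NOT_AVAIL       0ULL   /* no firmware support              */
#define WA1_AVAIL           1ULL   /* HVC offered and needed           */
#define WA1_NOT_REQUIRED    2ULL   /* HVC offered, mitigation unneeded */

/*
 * Restore-side check: accept the saved value only if the destination
 * host offers at least that protection level, so the guest never
 * loses a mitigation it relied on.
 */
static int wa_set_allowed(uint64_t host_val, uint64_t user_val)
{
	return (user_val & WA_LEVEL_MASK) <= (host_val & WA_LEVEL_MASK);
}
```

With this, restoring NOT_REQUIRED on a destination that only offers AVAIL is denied, while migrating the other way round is allowed, matching the rules the cover letter describes.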
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


* Re: [PATCH v5 1/3] arm64: KVM: Propagate full Spectre v2 workaround state to KVM guests
  2019-04-15 14:03   ` Steven Price
  2019-04-15 14:03     ` Steven Price
@ 2019-04-26 15:37     ` Auger Eric
  2019-04-26 15:37       ` Auger Eric
  2019-05-03  9:33       ` Andre Przywara
  1 sibling, 2 replies; 22+ messages in thread
From: Auger Eric @ 2019-04-26 15:37 UTC (permalink / raw)
  To: Steven Price, Andre Przywara, Marc Zyngier, Christoffer Dall
  Cc: Dave Martin, kvm, kvmarm, linux-arm-kernel, Jeremy Linton

Hi Andre,

On 4/15/19 4:03 PM, Steven Price wrote:
> On 15/04/2019 12:15, Andre Przywara wrote:
>> Recent commits added the explicit notion of "Not affected" to the state
>> of the Spectre v2 (aka. BP_HARDENING) workaround, where we just had
>> "needed" and "unknown" before.
>>
>> Export this knowledge to the rest of the kernel and enhance the existing
>> kvm_arm_harden_branch_predictor() to report this new state as well.
>> Export this new state to guests when they use KVM's firmware interface
>> emulation.
>>
>> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
>> ---
>>  arch/arm/include/asm/kvm_host.h     | 12 +++++++++---
>>  arch/arm64/include/asm/cpufeature.h |  6 ++++++
>>  arch/arm64/include/asm/kvm_host.h   | 16 ++++++++++++++--
>>  arch/arm64/kernel/cpu_errata.c      | 23 ++++++++++++++++++-----
>>  virt/kvm/arm/psci.c                 | 10 +++++++++-
>>  5 files changed, 56 insertions(+), 11 deletions(-)
>>
>> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
>> index 770d73257ad9..836479e4b340 100644
>> --- a/arch/arm/include/asm/kvm_host.h
>> +++ b/arch/arm/include/asm/kvm_host.h
>> @@ -364,7 +364,11 @@ static inline void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu) {}
>>  static inline void kvm_arm_vhe_guest_enter(void) {}
>>  static inline void kvm_arm_vhe_guest_exit(void) {}
>>  
>> -static inline bool kvm_arm_harden_branch_predictor(void)
>> +#define KVM_BP_HARDEN_UNKNOWN		-1
>> +#define KVM_BP_HARDEN_NEEDED		0
>> +#define KVM_BP_HARDEN_MITIGATED		1
> 
> I find the naming here a little confusing - it's not really clear what
> "mitigated" means (see below).
> 
>> +
>> +static inline int kvm_arm_harden_branch_predictor(void)
>>  {
>>  	switch(read_cpuid_part()) {
>>  #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
>> @@ -372,10 +376,12 @@ static inline bool kvm_arm_harden_branch_predictor(void)
>>  	case ARM_CPU_PART_CORTEX_A12:
>>  	case ARM_CPU_PART_CORTEX_A15:
>>  	case ARM_CPU_PART_CORTEX_A17:
>> -		return true;
>> +		return KVM_BP_HARDEN_NEEDED;
>>  #endif
>> +	case ARM_CPU_PART_CORTEX_A7:
>> +		return KVM_BP_HARDEN_MITIGATED;
>>  	default:
>> -		return false;
>> +		return KVM_BP_HARDEN_UNKNOWN;
>>  	}
>>  }
>>  
>> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
>> index 6ccdc97e5d6a..3c5b25c1bda1 100644
>> --- a/arch/arm64/include/asm/cpufeature.h
>> +++ b/arch/arm64/include/asm/cpufeature.h
>> @@ -622,6 +622,12 @@ static inline bool system_uses_irq_prio_masking(void)
>>  	       cpus_have_const_cap(ARM64_HAS_IRQ_PRIO_MASKING);
>>  }
>>  
>> +#define ARM64_BP_HARDEN_UNKNOWN		-1
>> +#define ARM64_BP_HARDEN_NEEDED		0
>> +#define ARM64_BP_HARDEN_MITIGATED	1
>> +
>> +int get_spectre_v2_workaround_state(void);
>> +
>>  #define ARM64_SSBD_UNKNOWN		-1
>>  #define ARM64_SSBD_FORCE_DISABLE	0
>>  #define ARM64_SSBD_KERNEL		1
>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
>> index a01fe087e022..bf9a59b7d1ce 100644
>> --- a/arch/arm64/include/asm/kvm_host.h
>> +++ b/arch/arm64/include/asm/kvm_host.h
>> @@ -555,9 +555,21 @@ static inline void kvm_arm_vhe_guest_exit(void)
>>  	isb();
>>  }
>>  
>> -static inline bool kvm_arm_harden_branch_predictor(void)
>> +#define KVM_BP_HARDEN_UNKNOWN		-1
>> +#define KVM_BP_HARDEN_NEEDED		0
>> +#define KVM_BP_HARDEN_MITIGATED		1
>> +
>> +static inline int kvm_arm_harden_branch_predictor(void)
>>  {
>> -	return cpus_have_const_cap(ARM64_HARDEN_BRANCH_PREDICTOR);
>> +	switch (get_spectre_v2_workaround_state()) {
>> +	case ARM64_BP_HARDEN_NEEDED:
>> +		return KVM_BP_HARDEN_NEEDED;
>> +	case ARM64_BP_HARDEN_MITIGATED:
>> +		return KVM_BP_HARDEN_MITIGATED;
>> +	case ARM64_BP_HARDEN_UNKNOWN:
>> +	default:
>> +		return KVM_BP_HARDEN_UNKNOWN;
>> +	}
>>  }
>>  
>>  #define KVM_SSBD_UNKNOWN		-1
>> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
>> index a1f3188c7be0..7fa23ab09d4e 100644
>> --- a/arch/arm64/kernel/cpu_errata.c
>> +++ b/arch/arm64/kernel/cpu_errata.c
>> @@ -555,6 +555,17 @@ cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
>>  static bool __hardenbp_enab = true;
>>  static bool __spectrev2_safe = true;
>>  
>> +int get_spectre_v2_workaround_state(void)
>> +{
>> +	if (__spectrev2_safe)
>> +		return ARM64_BP_HARDEN_MITIGATED;
>> +
>> +	if (!__hardenbp_enab)
>> +		return ARM64_BP_HARDEN_UNKNOWN;
>> +
>> +	return ARM64_BP_HARDEN_NEEDED;
>> +}
>> +
>>  /*
>>   * List of CPUs that do not need any Spectre-v2 mitigation at all.
>>   */
>> @@ -834,13 +845,15 @@ ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr,
>>  ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,
>>  		char *buf)
>>  {
>> -	if (__spectrev2_safe)
>> +	switch (get_spectre_v2_workaround_state()) {
>> +	case ARM64_BP_HARDEN_MITIGATED:
>>  		return sprintf(buf, "Not affected\n");
> 
> Here "mitigated" means "not affected".
> 
>> -
>> -	if (__hardenbp_enab)
>> +        case ARM64_BP_HARDEN_NEEDED:
>>  		return sprintf(buf, "Mitigation: Branch predictor hardening\n");
> 
> And "harden needed" means mitigated.
> 
>> -
>> -	return sprintf(buf, "Vulnerable\n");
>> +        case ARM64_BP_HARDEN_UNKNOWN:
>> +	default:
>> +		return sprintf(buf, "Vulnerable\n");
>> +	}
>>  }
>>  
>>  ssize_t cpu_show_spec_store_bypass(struct device *dev,
>> diff --git a/virt/kvm/arm/psci.c b/virt/kvm/arm/psci.c
>> index 34d08ee63747..1da53e0e38f7 100644
>> --- a/virt/kvm/arm/psci.c
>> +++ b/virt/kvm/arm/psci.c
>> @@ -412,8 +412,16 @@ int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
>>  		feature = smccc_get_arg1(vcpu);
>>  		switch(feature) {
>>  		case ARM_SMCCC_ARCH_WORKAROUND_1:
>> -			if (kvm_arm_harden_branch_predictor())
>> +			switch (kvm_arm_harden_branch_predictor()) {
>> +			case KVM_BP_HARDEN_UNKNOWN:
>> +				break;
>> +			case KVM_BP_HARDEN_NEEDED:
>>  				val = SMCCC_RET_SUCCESS;
>> +				break;
>> +			case KVM_BP_HARDEN_MITIGATED:
>> +				val = SMCCC_RET_NOT_REQUIRED;
> 
> Would KVM_BP_HARDEN_NOT_REQUIRED be a more logical name?
I tend to agree with Steven's comment

But then why not also choose the same terminology for the uapi:
KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_UNAFFECTED?

For the same case we seem to have 3 different terminologies. But maybe I
am missing something.

In the uapi doc, in case the workaround is not needed, do we actually
care to mention that the workaround is supported?

Thanks

Eric
> 
> Steve
> 
>> +				break;
>> +			}
>>  			break;
>>  		case ARM_SMCCC_ARCH_WORKAROUND_2:
>>  			switch (kvm_arm_have_ssbd()) {
>>
> 
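
For reference, the tri-state returned by kvm_arm_harden_branch_predictor() maps onto the ARCH_FEATURES reply in the hunk quoted above roughly like this (a standalone sketch; the state names come from this patch, and the SMCCC return values follow the usual SMCCC v1.1 convention):

```c
#include <assert.h>

/* Tri-state from the patch under discussion. */
#define KVM_BP_HARDEN_UNKNOWN     (-1)
#define KVM_BP_HARDEN_NEEDED        0
#define KVM_BP_HARDEN_MITIGATED     1

/* SMCCC v1.1 return codes. */
#define SMCCC_RET_SUCCESS           0
#define SMCCC_RET_NOT_SUPPORTED   (-1)
#define SMCCC_RET_NOT_REQUIRED    (-2)

/* Reply to ARM_SMCCC_ARCH_FEATURES for ARCH_WORKAROUND_1. */
static long wa1_features_reply(int bp_state)
{
	long val = SMCCC_RET_NOT_SUPPORTED;  /* UNKNOWN keeps the default */

	switch (bp_state) {
	case KVM_BP_HARDEN_NEEDED:
		val = SMCCC_RET_SUCCESS;        /* call the HVC, it helps */
		break;
	case KVM_BP_HARDEN_MITIGATED:
		val = SMCCC_RET_NOT_REQUIRED;   /* HVC exists, not needed */
		break;
	default:
		break;                          /* no firmware support advertised */
	}
	return val;
}
```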



* Re: [PATCH v5 1/3] arm64: KVM: Propagate full Spectre v2 workaround state to KVM guests
  2019-04-26 15:37     ` Auger Eric
  2019-04-26 15:37       ` Auger Eric
@ 2019-05-03  9:33       ` Andre Przywara
  2019-05-03 11:46         ` Auger Eric
  1 sibling, 1 reply; 22+ messages in thread
From: Andre Przywara @ 2019-05-03  9:33 UTC (permalink / raw)
  To: Auger Eric
  Cc: kvm, Marc Zyngier, Dave Martin, Jeremy Linton, Steven Price,
	kvmarm, linux-arm-kernel

On Fri, 26 Apr 2019 17:37:47 +0200
Auger Eric <eric.auger@redhat.com> wrote:

Hi,

> Hi Andre,
> 
> On 4/15/19 4:03 PM, Steven Price wrote:
> > On 15/04/2019 12:15, Andre Przywara wrote:  
> >> Recent commits added the explicit notion of "Not affected" to the state
> >> of the Spectre v2 (aka. BP_HARDENING) workaround, where we just had
> >> "needed" and "unknown" before.
> >>
> >> Export this knowledge to the rest of the kernel and enhance the existing
> >> kvm_arm_harden_branch_predictor() to report this new state as well.
> >> Export this new state to guests when they use KVM's firmware interface
> >> emulation.
> >>
> >> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> >> ---
> >>  arch/arm/include/asm/kvm_host.h     | 12 +++++++++---
> >>  arch/arm64/include/asm/cpufeature.h |  6 ++++++
> >>  arch/arm64/include/asm/kvm_host.h   | 16 ++++++++++++++--
> >>  arch/arm64/kernel/cpu_errata.c      | 23 ++++++++++++++++++-----
> >>  virt/kvm/arm/psci.c                 | 10 +++++++++-
> >>  5 files changed, 56 insertions(+), 11 deletions(-)
> >>
> >> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
> >> index 770d73257ad9..836479e4b340 100644
> >> --- a/arch/arm/include/asm/kvm_host.h
> >> +++ b/arch/arm/include/asm/kvm_host.h
> >> @@ -364,7 +364,11 @@ static inline void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu) {}
> >>  static inline void kvm_arm_vhe_guest_enter(void) {}
> >>  static inline void kvm_arm_vhe_guest_exit(void) {}
> >>  
> >> -static inline bool kvm_arm_harden_branch_predictor(void)
> >> +#define KVM_BP_HARDEN_UNKNOWN		-1
> >> +#define KVM_BP_HARDEN_NEEDED		0
> >> +#define KVM_BP_HARDEN_MITIGATED		1  
> > 
> > I find the naming here a little confusing - it's not really clear what
> > "mitigated" means (see below).

That's indeed slightly confusing, but was modelled after the SSBD
workaround below, which reads:
#define KVM_SSBD_UNKNOWN                -1
#define KVM_SSBD_FORCE_DISABLE          0
#define KVM_SSBD_KERNEL         1
#define KVM_SSBD_FORCE_ENABLE           2
#define KVM_SSBD_MITIGATED              3

I changed the naming (for this and the other derived definitions) to:
#define KVM_BP_HARDEN_UNKNOWN           -1
#define KVM_BP_HARDEN_WA_NEEDED         0
#define KVM_BP_HARDEN_NOT_REQUIRED      1

> >   
> >> +
> >> +static inline int kvm_arm_harden_branch_predictor(void)
> >>  {
> >>  	switch(read_cpuid_part()) {
> >>  #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
> >> @@ -372,10 +376,12 @@ static inline bool kvm_arm_harden_branch_predictor(void)
> >>  	case ARM_CPU_PART_CORTEX_A12:
> >>  	case ARM_CPU_PART_CORTEX_A15:
> >>  	case ARM_CPU_PART_CORTEX_A17:
> >> -		return true;
> >> +		return KVM_BP_HARDEN_NEEDED;
> >>  #endif
> >> +	case ARM_CPU_PART_CORTEX_A7:
> >> +		return KVM_BP_HARDEN_MITIGATED;
> >>  	default:
> >> -		return false;
> >> +		return KVM_BP_HARDEN_UNKNOWN;
> >>  	}
> >>  }
> >>  
> >> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> >> index 6ccdc97e5d6a..3c5b25c1bda1 100644
> >> --- a/arch/arm64/include/asm/cpufeature.h
> >> +++ b/arch/arm64/include/asm/cpufeature.h
> >> @@ -622,6 +622,12 @@ static inline bool system_uses_irq_prio_masking(void)
> >>  	       cpus_have_const_cap(ARM64_HAS_IRQ_PRIO_MASKING);
> >>  }
> >>  
> >> +#define ARM64_BP_HARDEN_UNKNOWN		-1
> >> +#define ARM64_BP_HARDEN_NEEDED		0
> >> +#define ARM64_BP_HARDEN_MITIGATED	1
> >> +
> >> +int get_spectre_v2_workaround_state(void);
> >> +
> >>  #define ARM64_SSBD_UNKNOWN		-1
> >>  #define ARM64_SSBD_FORCE_DISABLE	0
> >>  #define ARM64_SSBD_KERNEL		1
> >> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> >> index a01fe087e022..bf9a59b7d1ce 100644
> >> --- a/arch/arm64/include/asm/kvm_host.h
> >> +++ b/arch/arm64/include/asm/kvm_host.h
> >> @@ -555,9 +555,21 @@ static inline void kvm_arm_vhe_guest_exit(void)
> >>  	isb();
> >>  }
> >>  
> >> -static inline bool kvm_arm_harden_branch_predictor(void)
> >> +#define KVM_BP_HARDEN_UNKNOWN		-1
> >> +#define KVM_BP_HARDEN_NEEDED		0
> >> +#define KVM_BP_HARDEN_MITIGATED		1
> >> +
> >> +static inline int kvm_arm_harden_branch_predictor(void)
> >>  {
> >> -	return cpus_have_const_cap(ARM64_HARDEN_BRANCH_PREDICTOR);
> >> +	switch (get_spectre_v2_workaround_state()) {
> >> +	case ARM64_BP_HARDEN_NEEDED:
> >> +		return KVM_BP_HARDEN_NEEDED;
> >> +	case ARM64_BP_HARDEN_MITIGATED:
> >> +		return KVM_BP_HARDEN_MITIGATED;
> >> +	case ARM64_BP_HARDEN_UNKNOWN:
> >> +	default:
> >> +		return KVM_BP_HARDEN_UNKNOWN;
> >> +	}
> >>  }
> >>  
> >>  #define KVM_SSBD_UNKNOWN		-1
> >> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> >> index a1f3188c7be0..7fa23ab09d4e 100644
> >> --- a/arch/arm64/kernel/cpu_errata.c
> >> +++ b/arch/arm64/kernel/cpu_errata.c
> >> @@ -555,6 +555,17 @@ cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
> >>  static bool __hardenbp_enab = true;
> >>  static bool __spectrev2_safe = true;
> >>  
> >> +int get_spectre_v2_workaround_state(void)
> >> +{
> >> +	if (__spectrev2_safe)
> >> +		return ARM64_BP_HARDEN_MITIGATED;
> >> +
> >> +	if (!__hardenbp_enab)
> >> +		return ARM64_BP_HARDEN_UNKNOWN;
> >> +
> >> +	return ARM64_BP_HARDEN_NEEDED;
> >> +}
> >> +
> >>  /*
> >>   * List of CPUs that do not need any Spectre-v2 mitigation at all.
> >>   */
> >> @@ -834,13 +845,15 @@ ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr,
> >>  ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,
> >>  		char *buf)
> >>  {
> >> -	if (__spectrev2_safe)
> >> +	switch (get_spectre_v2_workaround_state()) {
> >> +	case ARM64_BP_HARDEN_MITIGATED:
> >>  		return sprintf(buf, "Not affected\n");  
> > 
> > Here "mitigated" means "not affected".
> >   
> >> -
> >> -	if (__hardenbp_enab)
> >> +        case ARM64_BP_HARDEN_NEEDED:
> >>  		return sprintf(buf, "Mitigation: Branch predictor hardening\n");  
> > 
> > And "harden needed" means mitigated.
> >   
> >> -
> >> -	return sprintf(buf, "Vulnerable\n");
> >> +        case ARM64_BP_HARDEN_UNKNOWN:
> >> +	default:
> >> +		return sprintf(buf, "Vulnerable\n");
> >> +	}
> >>  }
> >>  
> >>  ssize_t cpu_show_spec_store_bypass(struct device *dev,
> >> diff --git a/virt/kvm/arm/psci.c b/virt/kvm/arm/psci.c
> >> index 34d08ee63747..1da53e0e38f7 100644
> >> --- a/virt/kvm/arm/psci.c
> >> +++ b/virt/kvm/arm/psci.c
> >> @@ -412,8 +412,16 @@ int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
> >>  		feature = smccc_get_arg1(vcpu);
> >>  		switch(feature) {
> >>  		case ARM_SMCCC_ARCH_WORKAROUND_1:
> >> -			if (kvm_arm_harden_branch_predictor())
> >> +			switch (kvm_arm_harden_branch_predictor()) {
> >> +			case KVM_BP_HARDEN_UNKNOWN:
> >> +				break;
> >> +			case KVM_BP_HARDEN_NEEDED:
> >>  				val = SMCCC_RET_SUCCESS;
> >> +				break;
> >> +			case KVM_BP_HARDEN_MITIGATED:
> >> +				val = SMCCC_RET_NOT_REQUIRED;  
> > 
> > Would KVM_BP_HARDEN_NOT_REQUIRED be a more logical name?  
> I tend to agree with Steven's comment
> 
> But then why not also choosing the same terminology for the uapi:
> KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_UNAFFECTED?

You mean using ..._NOT_REQUIRED here?
Makes sense, as "unaffected" is different from "not required", and we
cannot guarantee the former.

> For the same case we seem to have 3 different terminologies. But maybe I
> miss something.
> 
> In the uapi doc, in case the workaround is not needed do we actually
> care to mention the wa is supported?

I think yes, as it's important to know that a guest could call into the
"firmware", regardless of the effect.

Cheers,
Andre.

> Thanks
> 
> Eric
> > 
> > Steve
> >   
> >> +				break;
> >> +			}
> >>  			break;
> >>  		case ARM_SMCCC_ARCH_WORKAROUND_2:
> >>  			switch (kvm_arm_have_ssbd()) {
> >>  
> >   
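
Aside from the naming, patch 3/3 also documents that the WORKAROUND_2 register carries a per-vCPU ENABLED flag on top of the state value. How userland might interpret a saved value can be sketched as follows (the bit positions and state values are assumptions modelled on this series, not the final uapi):

```c
#include <assert.h>
#include <stdint.h>

/* Assumed encoding: state in the low 4 bits, flags above them. */
#define WA2_STATE_MASK     0xfULL
#define WA2_NOT_AVAIL      0ULL
#define WA2_UNKNOWN        1ULL
#define WA2_AVAIL          2ULL
#define WA2_UNAFFECTED     3ULL
#define WA2_ENABLED        (1ULL << 4)  /* only meaningful with WA2_AVAIL */

/* Is the SSBD mitigation in effect for this vCPU, given a register value? */
static int wa2_active(uint64_t val)
{
	switch (val & WA2_STATE_MASK) {
	case WA2_UNAFFECTED:
		return 1;                     /* always active, or not needed */
	case WA2_AVAIL:
		return !!(val & WA2_ENABLED); /* the per-vCPU switch          */
	default:
		return 0;                     /* NOT_AVAIL / UNKNOWN          */
	}
}
```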



* Re: [PATCH v5 1/3] arm64: KVM: Propagate full Spectre v2 workaround state to KVM guests
  2019-05-03  9:33       ` Andre Przywara
@ 2019-05-03 11:46         ` Auger Eric
  0 siblings, 0 replies; 22+ messages in thread
From: Auger Eric @ 2019-05-03 11:46 UTC (permalink / raw)
  To: Andre Przywara
  Cc: kvm, Marc Zyngier, Dave Martin, Jeremy Linton, Steven Price,
	kvmarm, linux-arm-kernel

Hi Andre,

On 5/3/19 11:33 AM, Andre Przywara wrote:
> On Fri, 26 Apr 2019 17:37:47 +0200
> Auger Eric <eric.auger@redhat.com> wrote:
> 
> Hi,
> 
>> Hi Andre,
>>
>> On 4/15/19 4:03 PM, Steven Price wrote:
>>> On 15/04/2019 12:15, Andre Przywara wrote:  
>>>> Recent commits added the explicit notion of "Not affected" to the state
>>>> of the Spectre v2 (aka. BP_HARDENING) workaround, where we just had
>>>> "needed" and "unknown" before.
>>>>
>>>> Export this knowledge to the rest of the kernel and enhance the existing
>>>> kvm_arm_harden_branch_predictor() to report this new state as well.
>>>> Export this new state to guests when they use KVM's firmware interface
>>>> emulation.
>>>>
>>>> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
>>>> ---
>>>>  arch/arm/include/asm/kvm_host.h     | 12 +++++++++---
>>>>  arch/arm64/include/asm/cpufeature.h |  6 ++++++
>>>>  arch/arm64/include/asm/kvm_host.h   | 16 ++++++++++++++--
>>>>  arch/arm64/kernel/cpu_errata.c      | 23 ++++++++++++++++++-----
>>>>  virt/kvm/arm/psci.c                 | 10 +++++++++-
>>>>  5 files changed, 56 insertions(+), 11 deletions(-)
>>>>
>>>> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
>>>> index 770d73257ad9..836479e4b340 100644
>>>> --- a/arch/arm/include/asm/kvm_host.h
>>>> +++ b/arch/arm/include/asm/kvm_host.h
>>>> @@ -364,7 +364,11 @@ static inline void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu) {}
>>>>  static inline void kvm_arm_vhe_guest_enter(void) {}
>>>>  static inline void kvm_arm_vhe_guest_exit(void) {}
>>>>  
>>>> -static inline bool kvm_arm_harden_branch_predictor(void)
>>>> +#define KVM_BP_HARDEN_UNKNOWN		-1
>>>> +#define KVM_BP_HARDEN_NEEDED		0
>>>> +#define KVM_BP_HARDEN_MITIGATED		1  
>>>
>>> I find the naming here a little confusing - it's not really clear what
>>> "mitigated" means (see below).
> 
> That's indeed slightly confusing, but was modelled after the SSBD
> workaround below, which reads:
> #define KVM_SSBD_UNKNOWN                -1
> #define KVM_SSBD_FORCE_DISABLE          0
> #define KVM_SSBD_KERNEL         1
> #define KVM_SSBD_FORCE_ENABLE           2
> #define KVM_SSBD_MITIGATED              3
> 
> I changed the naming (for this and the other derived definitions) to:
> #define KVM_BP_HARDEN_UNKNOWN           -1
> #define KVM_BP_HARDEN_WA_NEEDED         0
> #define KVM_BP_HARDEN_NOT_REQUIRED      1
> 
>>>   
>>>> +
>>>> +static inline int kvm_arm_harden_branch_predictor(void)
>>>>  {
>>>>  	switch(read_cpuid_part()) {
>>>>  #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
>>>> @@ -372,10 +376,12 @@ static inline bool kvm_arm_harden_branch_predictor(void)
>>>>  	case ARM_CPU_PART_CORTEX_A12:
>>>>  	case ARM_CPU_PART_CORTEX_A15:
>>>>  	case ARM_CPU_PART_CORTEX_A17:
>>>> -		return true;
>>>> +		return KVM_BP_HARDEN_NEEDED;
>>>>  #endif
>>>> +	case ARM_CPU_PART_CORTEX_A7:
>>>> +		return KVM_BP_HARDEN_MITIGATED;
>>>>  	default:
>>>> -		return false;
>>>> +		return KVM_BP_HARDEN_UNKNOWN;
>>>>  	}
>>>>  }
>>>>  
>>>> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
>>>> index 6ccdc97e5d6a..3c5b25c1bda1 100644
>>>> --- a/arch/arm64/include/asm/cpufeature.h
>>>> +++ b/arch/arm64/include/asm/cpufeature.h
>>>> @@ -622,6 +622,12 @@ static inline bool system_uses_irq_prio_masking(void)
>>>>  	       cpus_have_const_cap(ARM64_HAS_IRQ_PRIO_MASKING);
>>>>  }
>>>>  
>>>> +#define ARM64_BP_HARDEN_UNKNOWN		-1
>>>> +#define ARM64_BP_HARDEN_NEEDED		0
>>>> +#define ARM64_BP_HARDEN_MITIGATED	1
>>>> +
>>>> +int get_spectre_v2_workaround_state(void);
>>>> +
>>>>  #define ARM64_SSBD_UNKNOWN		-1
>>>>  #define ARM64_SSBD_FORCE_DISABLE	0
>>>>  #define ARM64_SSBD_KERNEL		1
>>>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
>>>> index a01fe087e022..bf9a59b7d1ce 100644
>>>> --- a/arch/arm64/include/asm/kvm_host.h
>>>> +++ b/arch/arm64/include/asm/kvm_host.h
>>>> @@ -555,9 +555,21 @@ static inline void kvm_arm_vhe_guest_exit(void)
>>>>  	isb();
>>>>  }
>>>>  
>>>> -static inline bool kvm_arm_harden_branch_predictor(void)
>>>> +#define KVM_BP_HARDEN_UNKNOWN		-1
>>>> +#define KVM_BP_HARDEN_NEEDED		0
>>>> +#define KVM_BP_HARDEN_MITIGATED		1
>>>> +
>>>> +static inline int kvm_arm_harden_branch_predictor(void)
>>>>  {
>>>> -	return cpus_have_const_cap(ARM64_HARDEN_BRANCH_PREDICTOR);
>>>> +	switch (get_spectre_v2_workaround_state()) {
>>>> +	case ARM64_BP_HARDEN_NEEDED:
>>>> +		return KVM_BP_HARDEN_NEEDED;
>>>> +	case ARM64_BP_HARDEN_MITIGATED:
>>>> +		return KVM_BP_HARDEN_MITIGATED;
>>>> +	case ARM64_BP_HARDEN_UNKNOWN:
>>>> +	default:
>>>> +		return KVM_BP_HARDEN_UNKNOWN;
>>>> +	}
>>>>  }
>>>>  
>>>>  #define KVM_SSBD_UNKNOWN		-1
>>>> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
>>>> index a1f3188c7be0..7fa23ab09d4e 100644
>>>> --- a/arch/arm64/kernel/cpu_errata.c
>>>> +++ b/arch/arm64/kernel/cpu_errata.c
>>>> @@ -555,6 +555,17 @@ cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
>>>>  static bool __hardenbp_enab = true;
>>>>  static bool __spectrev2_safe = true;
>>>>  
>>>> +int get_spectre_v2_workaround_state(void)
>>>> +{
>>>> +	if (__spectrev2_safe)
>>>> +		return ARM64_BP_HARDEN_MITIGATED;
>>>> +
>>>> +	if (!__hardenbp_enab)
>>>> +		return ARM64_BP_HARDEN_UNKNOWN;
>>>> +
>>>> +	return ARM64_BP_HARDEN_NEEDED;
>>>> +}
>>>> +
>>>>  /*
>>>>   * List of CPUs that do not need any Spectre-v2 mitigation at all.
>>>>   */
>>>> @@ -834,13 +845,15 @@ ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr,
>>>>  ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,
>>>>  		char *buf)
>>>>  {
>>>> -	if (__spectrev2_safe)
>>>> +	switch (get_spectre_v2_workaround_state()) {
>>>> +	case ARM64_BP_HARDEN_MITIGATED:
>>>>  		return sprintf(buf, "Not affected\n");  
>>>
>>> Here "mitigated" means "not affected".
>>>   
>>>> -
>>>> -	if (__hardenbp_enab)
>>>> +	case ARM64_BP_HARDEN_NEEDED:
>>>>  		return sprintf(buf, "Mitigation: Branch predictor hardening\n");  
>>>
>>> And "harden needed" means "mitigated".
>>>   
>>>> -
>>>> -	return sprintf(buf, "Vulnerable\n");
>>>> +	case ARM64_BP_HARDEN_UNKNOWN:
>>>> +	default:
>>>> +		return sprintf(buf, "Vulnerable\n");
>>>> +	}
>>>>  }
>>>>  
>>>>  ssize_t cpu_show_spec_store_bypass(struct device *dev,
>>>> diff --git a/virt/kvm/arm/psci.c b/virt/kvm/arm/psci.c
>>>> index 34d08ee63747..1da53e0e38f7 100644
>>>> --- a/virt/kvm/arm/psci.c
>>>> +++ b/virt/kvm/arm/psci.c
>>>> @@ -412,8 +412,16 @@ int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
>>>>  		feature = smccc_get_arg1(vcpu);
>>>>  		switch(feature) {
>>>>  		case ARM_SMCCC_ARCH_WORKAROUND_1:
>>>> -			if (kvm_arm_harden_branch_predictor())
>>>> +			switch (kvm_arm_harden_branch_predictor()) {
>>>> +			case KVM_BP_HARDEN_UNKNOWN:
>>>> +				break;
>>>> +			case KVM_BP_HARDEN_NEEDED:
>>>>  				val = SMCCC_RET_SUCCESS;
>>>> +				break;
>>>> +			case KVM_BP_HARDEN_MITIGATED:
>>>> +				val = SMCCC_RET_NOT_REQUIRED;  
>>>
>>> Would KVM_BP_HARDEN_NOT_REQUIRED be a more logical name?  
>> I tend to agree with Steven's comment.
>>
>> But then why not also choose the same terminology for the uapi:
>> KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_UNAFFECTED?
> 
> You mean using ..._NOT_REQUIRED here?
Yes, sorry.
> Makes sense, as "unaffected" is different from "not required", and we
> cannot guarantee the first.
> 
>> For the same case we seem to have three different terminologies. But
>> maybe I'm missing something.
>>
>> In the uapi doc, in case the workaround is not needed, do we actually
>> care to mention that the workaround is supported?
> 
> I think yes, as it's important to know that a guest could call into the
> "firmware", regardless of the effect.
OK

Thanks

Eric
> 
> Cheers,
> Andre.
> 
>> Thanks
>>
>> Eric
>>>
>>> Steve
>>>   
>>>> +				break;
>>>> +			}
>>>>  			break;
>>>>  		case ARM_SMCCC_ARCH_WORKAROUND_2:
>>>>  			switch (kvm_arm_have_ssbd()) {
>>>>  
>>>   
> 
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


Thread overview: 22+ messages
2019-04-15 11:15 [PATCH v5 0/3] KVM: arm/arm64: Add VCPU workarounds firmware register Andre Przywara
2019-04-15 11:15 ` [PATCH v5 1/3] arm64: KVM: Propagate full Spectre v2 workaround state to KVM guests Andre Przywara
2019-04-15 14:03   ` Steven Price
2019-04-26 15:37     ` Auger Eric
2019-05-03  9:33       ` Andre Przywara
2019-05-03 11:46         ` Auger Eric
2019-04-15 11:15 ` [PATCH v5 2/3] KVM: arm/arm64: Add save/restore support for firmware workaround state Andre Przywara
2019-04-15 14:04   ` Steven Price
2019-04-26 15:07   ` Auger Eric
2019-04-15 11:15 ` [PATCH v5 3/3] KVM: doc: add API documentation on the KVM_REG_ARM_WORKAROUNDS register Andre Przywara
2019-04-15 14:06   ` Steven Price
2019-04-26 15:25   ` Auger Eric
