* [PATCH 0/5] KVM: arm64: vcpu preempted check support
@ 2019-12-17 13:55 yezengruan
  2019-12-17 13:55 ` [PATCH 1/5] KVM: arm64: Document PV-lock interface yezengruan
                   ` (4 more replies)
  0 siblings, 5 replies; 17+ messages in thread
From: yezengruan @ 2019-12-17 13:55 UTC (permalink / raw)
  To: yezengruan, linux-kernel, linux-arm-kernel, kvmarm, kvm,
	linux-doc, virtualization
  Cc: maz, james.morse, linux, suzuki.poulose, julien.thierry.kdev,
	catalin.marinas, mark.rutland, will, steven.price,
	daniel.lezcano

From: Zengruan Ye <yezengruan@huawei.com>

This patch set aims to support the vcpu_is_preempted() functionality
under KVM/arm64, which allows the guest to check whether its vcpu is
currently running or not. This will enhance lock performance on
overcommitted hosts (more runnable vcpus than physical cpus in the
system) as doing busy waits for preempted vcpus will hurt system
performance far worse than early yielding.
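
As a concrete example of the kind of spin loop this helps, the optimistic
spin in kernel/locking/osq_lock.c already bails out early via
vcpu_is_preempted() (a simplified sketch, not part of this series):

  /*
   * Wait for the lock to be handed over, but stop spinning if the
   * previous node's vcpu has been preempted - it cannot make progress.
   */
  while (!READ_ONCE(node->locked)) {
          if (need_resched() || vcpu_is_preempted(node_cpu(node->prev)))
                  goto unqueue;
          cpu_relax();
  }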

We have observed some performance improvements in unix benchmark tests.

unix benchmark result:
  host:  kernel 5.5.0-rc1, HiSilicon Kunpeng920, 8 cpus
  guest: kernel 5.5.0-rc1, 16 vcpus

               test-case                |    after-patch    |   before-patch
----------------------------------------+-------------------+------------------
 Dhrystone 2 using register variables   | 334600751.0 lps   | 335319028.3 lps
 Double-Precision Whetstone             |     32856.1 MWIPS |     32849.6 MWIPS
 Execl Throughput                       |      3662.1 lps   |      2718.0 lps
 File Copy 1024 bufsize 2000 maxblocks  |    432906.4 KBps  |    158011.8 KBps
 File Copy 256 bufsize 500 maxblocks    |    116023.0 KBps  |     37664.0 KBps
 File Copy 4096 bufsize 8000 maxblocks  |   1432769.8 KBps  |    441108.8 KBps
 Pipe Throughput                        |   6405029.6 lps   |   6021457.6 lps
 Pipe-based Context Switching           |    185872.7 lps   |    184255.3 lps
 Process Creation                       |      4025.7 lps   |      3706.6 lps
 Shell Scripts (1 concurrent)           |      6745.6 lpm   |      6436.1 lpm
 Shell Scripts (8 concurrent)           |       998.7 lpm   |       931.1 lpm
 System Call Overhead                   |   3913363.1 lps   |   3883287.8 lps
----------------------------------------+-------------------+------------------
 System Benchmarks Index Score          |      1835.1       |      1327.6

Zengruan Ye (5):
  KVM: arm64: Document PV-lock interface
  KVM: arm64: Implement PV_LOCK_FEATURES call
  KVM: arm64: Support pvlock preempted via shared structure
  KVM: arm64: Add interface to support vcpu preempted check
  KVM: arm64: Support the vcpu preemption check

 Documentation/virt/kvm/arm/pvlock.rst  | 31 +++++++++
 arch/arm/include/asm/kvm_host.h        | 13 ++++
 arch/arm64/include/asm/kvm_host.h      | 17 +++++
 arch/arm64/include/asm/paravirt.h      | 15 ++++
 arch/arm64/include/asm/pvlock-abi.h    | 16 +++++
 arch/arm64/include/asm/spinlock.h      |  7 ++
 arch/arm64/kernel/Makefile             |  2 +-
 arch/arm64/kernel/paravirt-spinlocks.c | 13 ++++
 arch/arm64/kernel/paravirt.c           | 95 +++++++++++++++++++++++++-
 arch/arm64/kernel/setup.c              |  2 +
 arch/arm64/kvm/Makefile                |  1 +
 include/linux/arm-smccc.h              | 13 ++++
 include/linux/cpuhotplug.h             |  1 +
 virt/kvm/arm/arm.c                     |  8 +++
 virt/kvm/arm/hypercalls.c              |  7 ++
 virt/kvm/arm/pvlock.c                  | 21 ++++++
 16 files changed, 260 insertions(+), 2 deletions(-)
 create mode 100644 Documentation/virt/kvm/arm/pvlock.rst
 create mode 100644 arch/arm64/include/asm/pvlock-abi.h
 create mode 100644 arch/arm64/kernel/paravirt-spinlocks.c
 create mode 100644 virt/kvm/arm/pvlock.c

-- 
2.19.1




* [PATCH 1/5] KVM: arm64: Document PV-lock interface
  2019-12-17 13:55 [PATCH 0/5] KVM: arm64: vcpu preempted check support yezengruan
@ 2019-12-17 13:55 ` yezengruan
  2019-12-17 14:21   ` Steven Price
  2019-12-20 14:32   ` Markus Elfring
  2019-12-17 13:55 ` [PATCH 2/5] KVM: arm64: Implement PV_LOCK_FEATURES call yezengruan
                   ` (3 subsequent siblings)
  4 siblings, 2 replies; 17+ messages in thread
From: yezengruan @ 2019-12-17 13:55 UTC (permalink / raw)
  To: yezengruan, linux-kernel, linux-arm-kernel, kvmarm, kvm,
	linux-doc, virtualization
  Cc: maz, james.morse, linux, suzuki.poulose, julien.thierry.kdev,
	catalin.marinas, mark.rutland, will, steven.price,
	daniel.lezcano

From: Zengruan Ye <yezengruan@huawei.com>

Introduce a paravirtualization interface for KVM/arm64 that allows the
guest to check whether its vcpu is currently running or not.

A hypercall interface is provided for the guest to interrogate the
hypervisor's support for this interface and the location of the shared
memory structures.
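
For illustration, guest-side probing is expected to look roughly like the
following sketch (using the standard SMCCC 1.1 helpers; the actual guest
code is added later in this series):

  struct arm_smccc_res res;

  arm_smccc_1_1_invoke(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
                       ARM_SMCCC_HV_PV_LOCK_FEATURES, &res);
  if (res.a0 == SMCCC_RET_SUCCESS)
          /* PV_LOCK_PREEMPTED may now be invoked */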

Signed-off-by: Zengruan Ye <yezengruan@huawei.com>
---
 Documentation/virt/kvm/arm/pvlock.rst | 31 +++++++++++++++++++++++++++
 1 file changed, 31 insertions(+)
 create mode 100644 Documentation/virt/kvm/arm/pvlock.rst

diff --git a/Documentation/virt/kvm/arm/pvlock.rst b/Documentation/virt/kvm/arm/pvlock.rst
new file mode 100644
index 000000000000..eec0c36edf17
--- /dev/null
+++ b/Documentation/virt/kvm/arm/pvlock.rst
@@ -0,0 +1,31 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+Paravirtualized lock support for arm64
+======================================
+
+KVM/arm64 provides some hypervisor service calls to support a paravirtualized
+guest checking whether the vcpu is currently running or not.
+
+Two new SMCCC compatible hypercalls are defined:
+
+* PV_LOCK_FEATURES:   0xC5000040
+* PV_LOCK_PREEMPTED:  0xC5000041
+
+The existence of the PV_LOCK hypercall should be probed using the SMCCC 1.1
+ARCH_FEATURES mechanism before calling it.
+
+PV_LOCK_FEATURES
+    ============= ========    ==========
+    Function ID:  (uint32)    0xC5000040
+    PV_call_id:   (uint32)    The function to query for support.
+    Return value: (int64)     NOT_SUPPORTED (-1) or SUCCESS (0) if the relevant
+                              PV-lock feature is supported by the hypervisor.
+    ============= ========    ==========
+
+PV_LOCK_PREEMPTED
+    ============= ========    ==========
+    Function ID:  (uint32)    0xC5000041
+    Return value: (int64)     NOT_SUPPORTED (-1) or SUCCESS (0) if the IPA of
+                              this vcpu's pv data structure is configured by
+                              the hypervisor.
+    ============= ========    ==========
-- 
2.19.1




* [PATCH 2/5] KVM: arm64: Implement PV_LOCK_FEATURES call
  2019-12-17 13:55 [PATCH 0/5] KVM: arm64: vcpu preempted check support yezengruan
  2019-12-17 13:55 ` [PATCH 1/5] KVM: arm64: Document PV-lock interface yezengruan
@ 2019-12-17 13:55 ` yezengruan
  2019-12-17 14:28   ` Steven Price
  2019-12-17 13:55 ` [PATCH 3/5] KVM: arm64: Support pvlock preempted via shared structure yezengruan
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 17+ messages in thread
From: yezengruan @ 2019-12-17 13:55 UTC (permalink / raw)
  To: yezengruan, linux-kernel, linux-arm-kernel, kvmarm, kvm,
	linux-doc, virtualization
  Cc: maz, james.morse, linux, suzuki.poulose, julien.thierry.kdev,
	catalin.marinas, mark.rutland, will, steven.price,
	daniel.lezcano

From: Zengruan Ye <yezengruan@huawei.com>

This provides a mechanism for querying which paravirtualized lock
features are available in this hypervisor.

Also add the header file which defines the ABI for the paravirtualized
lock features we're about to add.
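
Since struct pvlock_vcpu_state is part of the ABI and must stay exactly
64 bytes, a compile-time check along these lines could guard it
(illustrative only, not part of this patch):

  BUILD_BUG_ON(sizeof(struct pvlock_vcpu_state) != 64);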

Signed-off-by: Zengruan Ye <yezengruan@huawei.com>
---
 arch/arm64/include/asm/pvlock-abi.h | 16 ++++++++++++++++
 include/linux/arm-smccc.h           | 13 +++++++++++++
 virt/kvm/arm/hypercalls.c           |  3 +++
 3 files changed, 32 insertions(+)
 create mode 100644 arch/arm64/include/asm/pvlock-abi.h

diff --git a/arch/arm64/include/asm/pvlock-abi.h b/arch/arm64/include/asm/pvlock-abi.h
new file mode 100644
index 000000000000..06e0c3d7710a
--- /dev/null
+++ b/arch/arm64/include/asm/pvlock-abi.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright(c) 2019 Huawei Technologies Co., Ltd
+ * Author: Zengruan Ye <yezengruan@huawei.com>
+ */
+
+#ifndef __ASM_PVLOCK_ABI_H
+#define __ASM_PVLOCK_ABI_H
+
+struct pvlock_vcpu_state {
+	__le64 preempted;
+	/* Structure must be 64 byte aligned, pad to that size */
+	u8 padding[56];
+} __packed;
+
+#endif
diff --git a/include/linux/arm-smccc.h b/include/linux/arm-smccc.h
index 59494df0f55b..59e65a951959 100644
--- a/include/linux/arm-smccc.h
+++ b/include/linux/arm-smccc.h
@@ -377,5 +377,18 @@ asmlinkage void __arm_smccc_hvc(unsigned long a0, unsigned long a1,
 			   ARM_SMCCC_OWNER_STANDARD_HYP,	\
 			   0x21)
 
+/* Paravirtualised lock calls */
+#define ARM_SMCCC_HV_PV_LOCK_FEATURES				\
+	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,			\
+			   ARM_SMCCC_SMC_64,			\
+			   ARM_SMCCC_OWNER_STANDARD_HYP,	\
+			   0x40)
+
+#define ARM_SMCCC_HV_PV_LOCK_PREEMPTED				\
+	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,			\
+			   ARM_SMCCC_SMC_64,			\
+			   ARM_SMCCC_OWNER_STANDARD_HYP,	\
+			   0x41)
+
 #endif /*__ASSEMBLY__*/
 #endif /*__LINUX_ARM_SMCCC_H*/
diff --git a/virt/kvm/arm/hypercalls.c b/virt/kvm/arm/hypercalls.c
index 550dfa3e53cd..ff13871fd85a 100644
--- a/virt/kvm/arm/hypercalls.c
+++ b/virt/kvm/arm/hypercalls.c
@@ -52,6 +52,9 @@ int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
 		case ARM_SMCCC_HV_PV_TIME_FEATURES:
 			val = SMCCC_RET_SUCCESS;
 			break;
+		case ARM_SMCCC_HV_PV_LOCK_FEATURES:
+			val = SMCCC_RET_SUCCESS;
+			break;
 		}
 		break;
 	case ARM_SMCCC_HV_PV_TIME_FEATURES:
-- 
2.19.1




* [PATCH 3/5] KVM: arm64: Support pvlock preempted via shared structure
  2019-12-17 13:55 [PATCH 0/5] KVM: arm64: vcpu preempted check support yezengruan
  2019-12-17 13:55 ` [PATCH 1/5] KVM: arm64: Document PV-lock interface yezengruan
  2019-12-17 13:55 ` [PATCH 2/5] KVM: arm64: Implement PV_LOCK_FEATURES call yezengruan
@ 2019-12-17 13:55 ` yezengruan
  2019-12-17 14:33   ` Steven Price
  2019-12-17 13:55 ` [PATCH 4/5] KVM: arm64: Add interface to support vcpu preempted check yezengruan
  2019-12-17 13:55 ` [PATCH 5/5] KVM: arm64: Support the vcpu preemption check yezengruan
  4 siblings, 1 reply; 17+ messages in thread
From: yezengruan @ 2019-12-17 13:55 UTC (permalink / raw)
  To: yezengruan, linux-kernel, linux-arm-kernel, kvmarm, kvm,
	linux-doc, virtualization
  Cc: maz, james.morse, linux, suzuki.poulose, julien.thierry.kdev,
	catalin.marinas, mark.rutland, will, steven.price,
	daniel.lezcano

From: Zengruan Ye <yezengruan@huawei.com>

Implement the service call for configuring a shared structure between a
vcpu and the hypervisor in which the hypervisor can tell whether the vcpu
is running or not.

The preempted field is zero if 1) some old KVM does not support this field,
or 2) the vcpu is not preempted. Any other value means the vcpu has been
preempted.
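
Seen from the guest, the field is read roughly as follows (a sketch; the
actual reader is wired up in a later patch of this series):

  struct pvlock_vcpu_state *reg = &per_cpu(pvlock_vcpu_region, cpu);

  /* The low bit set means the vcpu is currently preempted. */
  bool preempted = !!(reg->preempted & 1);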

Signed-off-by: Zengruan Ye <yezengruan@huawei.com>
---
 arch/arm/include/asm/kvm_host.h   | 13 +++++++++++++
 arch/arm64/include/asm/kvm_host.h | 17 +++++++++++++++++
 arch/arm64/kvm/Makefile           |  1 +
 virt/kvm/arm/arm.c                |  8 ++++++++
 virt/kvm/arm/hypercalls.c         |  4 ++++
 virt/kvm/arm/pvlock.c             | 21 +++++++++++++++++++++
 6 files changed, 64 insertions(+)
 create mode 100644 virt/kvm/arm/pvlock.c

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 556cd818eccf..098375f1c89e 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -356,6 +356,19 @@ static inline bool kvm_arm_is_pvtime_enabled(struct kvm_vcpu_arch *vcpu_arch)
 	return false;
 }
 
+static inline void kvm_arm_pvlock_preempted_init(struct kvm_vcpu_arch *vcpu_arch)
+{
+}
+
+static inline bool kvm_arm_is_pvlock_preempted_ready(struct kvm_vcpu_arch *vcpu_arch)
+{
+	return false;
+}
+
+static inline void kvm_update_pvlock_preempted(struct kvm_vcpu *vcpu, u64 preempted)
+{
+}
+
 void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot);
 
 struct kvm_vcpu *kvm_mpidr_to_vcpu(struct kvm *kvm, unsigned long mpidr);
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index c61260cf63c5..d9b2a21a87ac 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -354,6 +354,11 @@ struct kvm_vcpu_arch {
 		u64 last_steal;
 		gpa_t base;
 	} steal;
+
+	/* Guest PV lock state */
+	struct {
+		gpa_t base;
+	} pv;
 };
 
 /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
@@ -515,6 +520,18 @@ static inline bool kvm_arm_is_pvtime_enabled(struct kvm_vcpu_arch *vcpu_arch)
 	return (vcpu_arch->steal.base != GPA_INVALID);
 }
 
+static inline void kvm_arm_pvlock_preempted_init(struct kvm_vcpu_arch *vcpu_arch)
+{
+	vcpu_arch->pv.base = GPA_INVALID;
+}
+
+static inline bool kvm_arm_is_pvlock_preempted_ready(struct kvm_vcpu_arch *vcpu_arch)
+{
+	return (vcpu_arch->pv.base != GPA_INVALID);
+}
+
+void kvm_update_pvlock_preempted(struct kvm_vcpu *vcpu, u64 preempted);
+
 void kvm_set_sei_esr(struct kvm_vcpu *vcpu, u64 syndrome);
 
 struct kvm_vcpu *kvm_mpidr_to_vcpu(struct kvm *kvm, unsigned long mpidr);
diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index 5ffbdc39e780..e4591f56d5f1 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -15,6 +15,7 @@ kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/arm.o $(KVM)/arm/mmu.o $(KVM)/arm/mmio.
 kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/psci.o $(KVM)/arm/perf.o
 kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/hypercalls.o
 kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/pvtime.o
+kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/pvlock.o
 
 kvm-$(CONFIG_KVM_ARM_HOST) += inject_fault.o regmap.o va_layout.o
 kvm-$(CONFIG_KVM_ARM_HOST) += hyp.o hyp-init.o handle_exit.o
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 12e0280291ce..c562f62fdd45 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -383,6 +383,8 @@ int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu)
 
 	kvm_arm_pvtime_vcpu_init(&vcpu->arch);
 
+	kvm_arm_pvlock_preempted_init(&vcpu->arch);
+
 	return kvm_vgic_vcpu_init(vcpu);
 }
 
@@ -421,6 +423,9 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 		vcpu_set_wfx_traps(vcpu);
 
 	vcpu_ptrauth_setup_lazy(vcpu);
+
+	if (kvm_arm_is_pvlock_preempted_ready(&vcpu->arch))
+		kvm_update_pvlock_preempted(vcpu, 0);
 }
 
 void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
@@ -434,6 +439,9 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 	vcpu->cpu = -1;
 
 	kvm_arm_set_running_vcpu(NULL);
+
+	if (kvm_arm_is_pvlock_preempted_ready(&vcpu->arch))
+		kvm_update_pvlock_preempted(vcpu, 1);
 }
 
 static void vcpu_power_off(struct kvm_vcpu *vcpu)
diff --git a/virt/kvm/arm/hypercalls.c b/virt/kvm/arm/hypercalls.c
index ff13871fd85a..5964982ccd05 100644
--- a/virt/kvm/arm/hypercalls.c
+++ b/virt/kvm/arm/hypercalls.c
@@ -65,6 +65,10 @@ int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
 		if (gpa != GPA_INVALID)
 			val = gpa;
 		break;
+	case ARM_SMCCC_HV_PV_LOCK_PREEMPTED:
+		vcpu->arch.pv.base = smccc_get_arg1(vcpu);
+		val = SMCCC_RET_SUCCESS;
+		break;
 	default:
 		return kvm_psci_call(vcpu);
 	}
diff --git a/virt/kvm/arm/pvlock.c b/virt/kvm/arm/pvlock.c
new file mode 100644
index 000000000000..c3464958b0f5
--- /dev/null
+++ b/virt/kvm/arm/pvlock.c
@@ -0,0 +1,21 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright(c) 2019 Huawei Technologies Co., Ltd
+ * Author: Zengruan Ye <yezengruan@huawei.com>
+ */
+
+#include <linux/arm-smccc.h>
+#include <linux/kvm_host.h>
+
+#include <kvm/arm_hypercalls.h>
+
+void kvm_update_pvlock_preempted(struct kvm_vcpu *vcpu, u64 preempted)
+{
+	u64 preempted_le;
+	u64 base;
+	struct kvm *kvm = vcpu->kvm;
+
+	base = vcpu->arch.pv.base;
+	preempted_le = cpu_to_le64(preempted);
+	kvm_put_guest(kvm, base, preempted_le, u64);
+}
-- 
2.19.1




* [PATCH 4/5] KVM: arm64: Add interface to support vcpu preempted check
  2019-12-17 13:55 [PATCH 0/5] KVM: arm64: vcpu preempted check support yezengruan
                   ` (2 preceding siblings ...)
  2019-12-17 13:55 ` [PATCH 3/5] KVM: arm64: Support pvlock preempted via shared structure yezengruan
@ 2019-12-17 13:55 ` yezengruan
  2019-12-17 13:55 ` [PATCH 5/5] KVM: arm64: Support the vcpu preemption check yezengruan
  4 siblings, 0 replies; 17+ messages in thread
From: yezengruan @ 2019-12-17 13:55 UTC (permalink / raw)
  To: yezengruan, linux-kernel, linux-arm-kernel, kvmarm, kvm,
	linux-doc, virtualization
  Cc: maz, james.morse, linux, suzuki.poulose, julien.thierry.kdev,
	catalin.marinas, mark.rutland, will, steven.price,
	daniel.lezcano

From: Zengruan Ye <yezengruan@huawei.com>

This is to fix some lock holder preemption issues. Some lock
implementations do a spin loop before acquiring the lock itself. The
kernel already has an interface, bool vcpu_is_preempted(int cpu), which
takes a cpu as its parameter and returns true if that cpu is preempted.
The kernel can then break out of such spin loops based on the return
value of vcpu_is_preempted().

Since the kernel already uses this interface, let's support it on arm64.
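
One existing user is the mutex optimistic-spin path, which gives up
spinning on the lock owner once the owner's vcpu is preempted
(simplified from mutex_spin_on_owner() in kernel/locking/mutex.c):

  if (!owner->on_cpu || need_resched() ||
      vcpu_is_preempted(task_cpu(owner)))
          break;   /* stop spinning and take the slow path */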

Signed-off-by: Zengruan Ye <yezengruan@huawei.com>
---
 arch/arm64/include/asm/paravirt.h      | 12 ++++++++++++
 arch/arm64/include/asm/spinlock.h      |  7 +++++++
 arch/arm64/kernel/Makefile             |  2 +-
 arch/arm64/kernel/paravirt-spinlocks.c | 13 +++++++++++++
 arch/arm64/kernel/paravirt.c           |  4 +++-
 5 files changed, 36 insertions(+), 2 deletions(-)
 create mode 100644 arch/arm64/kernel/paravirt-spinlocks.c

diff --git a/arch/arm64/include/asm/paravirt.h b/arch/arm64/include/asm/paravirt.h
index cf3a0fd7c1a7..7b1c81b544bb 100644
--- a/arch/arm64/include/asm/paravirt.h
+++ b/arch/arm64/include/asm/paravirt.h
@@ -11,8 +11,13 @@ struct pv_time_ops {
 	unsigned long long (*steal_clock)(int cpu);
 };
 
+struct pv_lock_ops {
+	bool (*vcpu_is_preempted)(int cpu);
+};
+
 struct paravirt_patch_template {
 	struct pv_time_ops time;
+	struct pv_lock_ops lock;
 };
 
 extern struct paravirt_patch_template pv_ops;
@@ -24,6 +29,13 @@ static inline u64 paravirt_steal_clock(int cpu)
 
 int __init pv_time_init(void);
 
+__visible bool __native_vcpu_is_preempted(int cpu);
+
+static inline bool pv_vcpu_is_preempted(int cpu)
+{
+	return pv_ops.lock.vcpu_is_preempted(cpu);
+}
+
 #else
 
 #define pv_time_init() do {} while (0)
diff --git a/arch/arm64/include/asm/spinlock.h b/arch/arm64/include/asm/spinlock.h
index b093b287babf..45ff1b2949a6 100644
--- a/arch/arm64/include/asm/spinlock.h
+++ b/arch/arm64/include/asm/spinlock.h
@@ -7,8 +7,15 @@
 
 #include <asm/qrwlock.h>
 #include <asm/qspinlock.h>
+#include <asm/paravirt.h>
 
 /* See include/linux/spinlock.h */
 #define smp_mb__after_spinlock()	smp_mb()
 
+#define vcpu_is_preempted vcpu_is_preempted
+static inline bool vcpu_is_preempted(long cpu)
+{
+	return pv_vcpu_is_preempted(cpu);
+}
+
 #endif /* __ASM_SPINLOCK_H */
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index fc6488660f64..b23cdae433a4 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -50,7 +50,7 @@ obj-$(CONFIG_ARMV8_DEPRECATED)		+= armv8_deprecated.o
 obj-$(CONFIG_ACPI)			+= acpi.o
 obj-$(CONFIG_ACPI_NUMA)			+= acpi_numa.o
 obj-$(CONFIG_ARM64_ACPI_PARKING_PROTOCOL)	+= acpi_parking_protocol.o
-obj-$(CONFIG_PARAVIRT)			+= paravirt.o
+obj-$(CONFIG_PARAVIRT)			+= paravirt.o paravirt-spinlocks.o
 obj-$(CONFIG_RANDOMIZE_BASE)		+= kaslr.o
 obj-$(CONFIG_HIBERNATION)		+= hibernate.o hibernate-asm.o
 obj-$(CONFIG_KEXEC_CORE)		+= machine_kexec.o relocate_kernel.o	\
diff --git a/arch/arm64/kernel/paravirt-spinlocks.c b/arch/arm64/kernel/paravirt-spinlocks.c
new file mode 100644
index 000000000000..718aa773d45c
--- /dev/null
+++ b/arch/arm64/kernel/paravirt-spinlocks.c
@@ -0,0 +1,13 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright(c) 2019 Huawei Technologies Co., Ltd
+ * Author: Zengruan Ye <yezengruan@huawei.com>
+ */
+
+#include <linux/spinlock.h>
+#include <asm/paravirt.h>
+
+__visible bool __native_vcpu_is_preempted(int cpu)
+{
+	return false;
+}
diff --git a/arch/arm64/kernel/paravirt.c b/arch/arm64/kernel/paravirt.c
index 1ef702b0be2d..d8f1ba8c22ce 100644
--- a/arch/arm64/kernel/paravirt.c
+++ b/arch/arm64/kernel/paravirt.c
@@ -26,7 +26,9 @@
 struct static_key paravirt_steal_enabled;
 struct static_key paravirt_steal_rq_enabled;
 
-struct paravirt_patch_template pv_ops;
+struct paravirt_patch_template pv_ops = {
+	.lock.vcpu_is_preempted		= __native_vcpu_is_preempted,
+};
 EXPORT_SYMBOL_GPL(pv_ops);
 
 struct pv_time_stolen_time_region {
-- 
2.19.1




* [PATCH 5/5] KVM: arm64: Support the vcpu preemption check
  2019-12-17 13:55 [PATCH 0/5] KVM: arm64: vcpu preempted check support yezengruan
                   ` (3 preceding siblings ...)
  2019-12-17 13:55 ` [PATCH 4/5] KVM: arm64: Add interface to support vcpu preempted check yezengruan
@ 2019-12-17 13:55 ` yezengruan
  2019-12-17 14:40   ` Steven Price
  4 siblings, 1 reply; 17+ messages in thread
From: yezengruan @ 2019-12-17 13:55 UTC (permalink / raw)
  To: yezengruan, linux-kernel, linux-arm-kernel, kvmarm, kvm,
	linux-doc, virtualization
  Cc: maz, james.morse, linux, suzuki.poulose, julien.thierry.kdev,
	catalin.marinas, mark.rutland, will, steven.price,
	daniel.lezcano

From: Zengruan Ye <yezengruan@huawei.com>

Support the vcpu_is_preempted() functionality under KVM/arm64. This will
enhance lock performance on overcommitted hosts (more runnable vcpus
than physical cpus in the system) as doing busy waits for preempted
vcpus will hurt system performance far worse than early yielding.

unix benchmark result:
  host:  kernel 5.5.0-rc1, HiSilicon Kunpeng920, 8 cpus
  guest: kernel 5.5.0-rc1, 16 vcpus

               test-case                |    after-patch    |   before-patch
----------------------------------------+-------------------+------------------
 Dhrystone 2 using register variables   | 334600751.0 lps   | 335319028.3 lps
 Double-Precision Whetstone             |     32856.1 MWIPS |     32849.6 MWIPS
 Execl Throughput                       |      3662.1 lps   |      2718.0 lps
 File Copy 1024 bufsize 2000 maxblocks  |    432906.4 KBps  |    158011.8 KBps
 File Copy 256 bufsize 500 maxblocks    |    116023.0 KBps  |     37664.0 KBps
 File Copy 4096 bufsize 8000 maxblocks  |   1432769.8 KBps  |    441108.8 KBps
 Pipe Throughput                        |   6405029.6 lps   |   6021457.6 lps
 Pipe-based Context Switching           |    185872.7 lps   |    184255.3 lps
 Process Creation                       |      4025.7 lps   |      3706.6 lps
 Shell Scripts (1 concurrent)           |      6745.6 lpm   |      6436.1 lpm
 Shell Scripts (8 concurrent)           |       998.7 lpm   |       931.1 lpm
 System Call Overhead                   |   3913363.1 lps   |   3883287.8 lps
----------------------------------------+-------------------+------------------
 System Benchmarks Index Score          |      1835.1       |      1327.6

Signed-off-by: Zengruan Ye <yezengruan@huawei.com>
---
 arch/arm64/include/asm/paravirt.h |  3 +
 arch/arm64/kernel/paravirt.c      | 91 +++++++++++++++++++++++++++++++
 arch/arm64/kernel/setup.c         |  2 +
 include/linux/cpuhotplug.h        |  1 +
 4 files changed, 97 insertions(+)

diff --git a/arch/arm64/include/asm/paravirt.h b/arch/arm64/include/asm/paravirt.h
index 7b1c81b544bb..a2cd0183bbef 100644
--- a/arch/arm64/include/asm/paravirt.h
+++ b/arch/arm64/include/asm/paravirt.h
@@ -29,6 +29,8 @@ static inline u64 paravirt_steal_clock(int cpu)
 
 int __init pv_time_init(void);
 
+int __init kvm_guest_init(void);
+
 __visible bool __native_vcpu_is_preempted(int cpu);
 
 static inline bool pv_vcpu_is_preempted(int cpu)
@@ -39,6 +41,7 @@ static inline bool pv_vcpu_is_preempted(int cpu)
 #else
 
 #define pv_time_init() do {} while (0)
+#define kvm_guest_init() do {} while (0)
 
 #endif // CONFIG_PARAVIRT
 
diff --git a/arch/arm64/kernel/paravirt.c b/arch/arm64/kernel/paravirt.c
index d8f1ba8c22ce..a86dead40473 100644
--- a/arch/arm64/kernel/paravirt.c
+++ b/arch/arm64/kernel/paravirt.c
@@ -22,6 +22,7 @@
 #include <asm/paravirt.h>
 #include <asm/pvclock-abi.h>
 #include <asm/smp_plat.h>
+#include <asm/pvlock-abi.h>
 
 struct static_key paravirt_steal_enabled;
 struct static_key paravirt_steal_rq_enabled;
@@ -158,3 +159,93 @@ int __init pv_time_init(void)
 
 	return 0;
 }
+
+DEFINE_PER_CPU(struct pvlock_vcpu_state, pvlock_vcpu_region) __aligned(64);
+EXPORT_PER_CPU_SYMBOL(pvlock_vcpu_region);
+
+static int pvlock_vcpu_state_dying_cpu(unsigned int cpu)
+{
+	struct pvlock_vcpu_state *reg;
+
+	reg = this_cpu_ptr(&pvlock_vcpu_region);
+	if (!reg)
+		return -EFAULT;
+
+	memset(reg, 0, sizeof(*reg));
+
+	return 0;
+}
+
+static int init_pvlock_vcpu_state(unsigned int cpu)
+{
+	struct pvlock_vcpu_state *reg;
+	struct arm_smccc_res res;
+
+	reg = this_cpu_ptr(&pvlock_vcpu_region);
+	if (!reg)
+		return -EFAULT;
+
+	/* Pass the memory address to host via hypercall */
+	arm_smccc_1_1_invoke(ARM_SMCCC_HV_PV_LOCK_PREEMPTED,
+			     virt_to_phys(reg), &res);
+
+	return 0;
+}
+
+static bool kvm_vcpu_is_preempted(int cpu)
+{
+	struct pvlock_vcpu_state *reg = &per_cpu(pvlock_vcpu_region, cpu);
+
+	if (reg)
+		return !!(reg->preempted & 1);
+
+	return false;
+}
+
+static int kvm_arm_init_pvlock(void)
+{
+	int ret;
+
+	ret = cpuhp_setup_state(CPUHP_AP_ARM_KVM_PVLOCK_STARTING,
+				"hypervisor/arm/pvlock:starting",
+				init_pvlock_vcpu_state,
+				pvlock_vcpu_state_dying_cpu);
+	if (ret < 0)
+		return ret;
+
+	pv_ops.lock.vcpu_is_preempted = kvm_vcpu_is_preempted;
+
+	pr_info("using PV-lock preempted\n");
+
+	return 0;
+}
+
+static bool has_kvm_pvlock(void)
+{
+	struct arm_smccc_res res;
+
+	/* To detect the presence of PV lock support we require SMCCC 1.1+ */
+	if (psci_ops.smccc_version < SMCCC_VERSION_1_1)
+		return false;
+
+	arm_smccc_1_1_invoke(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
+			     ARM_SMCCC_HV_PV_LOCK_FEATURES, &res);
+
+	if (res.a0 != SMCCC_RET_SUCCESS)
+		return false;
+
+	return true;
+}
+
+int __init kvm_guest_init(void)
+{
+	if (is_hyp_mode_available())
+		return 0;
+
+	if (!has_kvm_pvlock())
+		return 0;
+
+	kvm_arm_init_pvlock();
+
+	return 0;
+}
diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index 56f664561754..64c4d515ba2d 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -341,6 +341,8 @@ void __init setup_arch(char **cmdline_p)
 	smp_init_cpus();
 	smp_build_mpidr_hash();
 
+	kvm_guest_init();
+
 	/* Init percpu seeds for random tags after cpus are set up. */
 	kasan_init_tags();
 
diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h
index e51ee772b9f5..f72ff95ab63a 100644
--- a/include/linux/cpuhotplug.h
+++ b/include/linux/cpuhotplug.h
@@ -138,6 +138,7 @@ enum cpuhp_state {
 	CPUHP_AP_DUMMY_TIMER_STARTING,
 	CPUHP_AP_ARM_XEN_STARTING,
 	CPUHP_AP_ARM_KVMPV_STARTING,
+	CPUHP_AP_ARM_KVM_PVLOCK_STARTING,
 	CPUHP_AP_ARM_CORESIGHT_STARTING,
 	CPUHP_AP_ARM64_ISNDEP_STARTING,
 	CPUHP_AP_SMPCFD_DYING,
-- 
2.19.1




* Re: [PATCH 1/5] KVM: arm64: Document PV-lock interface
  2019-12-17 13:55 ` [PATCH 1/5] KVM: arm64: Document PV-lock interface yezengruan
@ 2019-12-17 14:21   ` Steven Price
  2019-12-19 11:45     ` yezengruan
  2019-12-20 14:32   ` Markus Elfring
  1 sibling, 1 reply; 17+ messages in thread
From: Steven Price @ 2019-12-17 14:21 UTC (permalink / raw)
  To: yezengruan
  Cc: linux-kernel, linux-arm-kernel, kvmarm, kvm, linux-doc,
	virtualization, maz, James Morse, linux, Suzuki Poulose,
	julien.thierry.kdev, Catalin Marinas, Mark Rutland, will,
	daniel.lezcano

On Tue, Dec 17, 2019 at 01:55:45PM +0000, yezengruan@huawei.com wrote:
> From: Zengruan Ye <yezengruan@huawei.com>
> 
> Introduce a paravirtualization interface for KVM/arm64 that allows the
> guest to check whether its vcpu is currently running or not.
> 
> A hypercall interface is provided for the guest to interrogate the
> hypervisor's support for this interface and the location of the shared
> memory structures.
> 
> Signed-off-by: Zengruan Ye <yezengruan@huawei.com>
> ---
>  Documentation/virt/kvm/arm/pvlock.rst | 31 +++++++++++++++++++++++++++
>  1 file changed, 31 insertions(+)
>  create mode 100644 Documentation/virt/kvm/arm/pvlock.rst
> 
> diff --git a/Documentation/virt/kvm/arm/pvlock.rst b/Documentation/virt/kvm/arm/pvlock.rst
> new file mode 100644
> index 000000000000..eec0c36edf17
> --- /dev/null
> +++ b/Documentation/virt/kvm/arm/pvlock.rst
> @@ -0,0 +1,31 @@
> +.. SPDX-License-Identifier: GPL-2.0
> +
> +Paravirtualized lock support for arm64
> +======================================
> +
> +KVM/arm64 provides some hypervisor service calls to support a paravirtualized
> +guest checking whether the vcpu is currently running or not.
> +
> +Two new SMCCC compatible hypercalls are defined:
> +
> +* PV_LOCK_FEATURES:   0xC5000040
> +* PV_LOCK_PREEMPTED:  0xC5000041

These values are in the "Standard Hypervisor Service Calls" section of
SMCCC - so is there a document that describes these features such that
other OSes or hypervisors can implement it? I'm also not entirely sure
of the process of ensuring that the IDs picked are non-conflicting.

Otherwise if this is a KVM specific interface this should probably
belong within the "Vendor Specific Hypervisor Service Calls" section
along with some probing that the hypervisor is actually KVM. Although I
don't see anything KVM specific.

> +
> +The existence of the PV_LOCK hypercall should be probed using the SMCCC 1.1
> +ARCH_FEATURES mechanism before calling it.
> +
> +PV_LOCK_FEATURES
> +    ============= ========    ==========
> +    Function ID:  (uint32)    0xC5000040
> +    PV_call_id:   (uint32)    The function to query for support.
> +    Return value: (int64)     NOT_SUPPORTED (-1) or SUCCESS (0) if the relevant
> +                              PV-lock feature is supported by the hypervisor.
> +    ============= ========    ==========
> +
> +PV_LOCK_PREEMPTED
> +    ============= ========    ==========
> +    Function ID:  (uint32)    0xC5000041
> +    Return value: (int64)     NOT_SUPPORTED (-1) or SUCCESS (0) if the IPA of
> +                              this vcpu's pv data structure is configured by
> +                              the hypervisor.
> +    ============= ========    ==========

From the code it looks like there's another argument for this SMC - the
physical address (or IPA) of a struct pvlock_vcpu_state. This structure
also needs to be described as it is part of the ABI.

Steve


* Re: [PATCH 2/5] KVM: arm64: Implement PV_LOCK_FEATURES call
  2019-12-17 13:55 ` [PATCH 2/5] KVM: arm64: Implement PV_LOCK_FEATURES call yezengruan
@ 2019-12-17 14:28   ` Steven Price
  2019-12-19 11:59     ` yezengruan
  0 siblings, 1 reply; 17+ messages in thread
From: Steven Price @ 2019-12-17 14:28 UTC (permalink / raw)
  To: yezengruan
  Cc: linux-kernel, linux-arm-kernel, kvmarm, kvm, linux-doc,
	virtualization, maz, James Morse, linux, Suzuki Poulose,
	julien.thierry.kdev, Catalin Marinas, Mark Rutland, will,
	daniel.lezcano

On Tue, Dec 17, 2019 at 01:55:46PM +0000, yezengruan@huawei.com wrote:
> From: Zengruan Ye <yezengruan@huawei.com>
> 
> This provides a mechanism for querying which paravirtualized lock
> features are available in this hypervisor.
> 
> Also add the header file which defines the ABI for the paravirtualized
> lock features we're about to add.
> 
> Signed-off-by: Zengruan Ye <yezengruan@huawei.com>
> ---
>  arch/arm64/include/asm/pvlock-abi.h | 16 ++++++++++++++++
>  include/linux/arm-smccc.h           | 13 +++++++++++++
>  virt/kvm/arm/hypercalls.c           |  3 +++
>  3 files changed, 32 insertions(+)
>  create mode 100644 arch/arm64/include/asm/pvlock-abi.h
> 
> diff --git a/arch/arm64/include/asm/pvlock-abi.h b/arch/arm64/include/asm/pvlock-abi.h
> new file mode 100644
> index 000000000000..06e0c3d7710a
> --- /dev/null
> +++ b/arch/arm64/include/asm/pvlock-abi.h
> @@ -0,0 +1,16 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright(c) 2019 Huawei Technologies Co., Ltd
> + * Author: Zengruan Ye <yezengruan@huawei.com>
> + */
> +
> +#ifndef __ASM_PVLOCK_ABI_H
> +#define __ASM_PVLOCK_ABI_H
> +
> +struct pvlock_vcpu_state {
> +	__le64 preempted;

Somewhere we need to document when 'preempted' is set. It looks like it's a
1-bit field from the later patches.

> +	/* Structure must be 64 byte aligned, pad to that size */
> +	u8 padding[56];
> +} __packed;
> +
> +#endif
> diff --git a/include/linux/arm-smccc.h b/include/linux/arm-smccc.h
> index 59494df0f55b..59e65a951959 100644
> --- a/include/linux/arm-smccc.h
> +++ b/include/linux/arm-smccc.h
> @@ -377,5 +377,18 @@ asmlinkage void __arm_smccc_hvc(unsigned long a0, unsigned long a1,
>  			   ARM_SMCCC_OWNER_STANDARD_HYP,	\
>  			   0x21)
>  
> +/* Paravirtualised lock calls */
> +#define ARM_SMCCC_HV_PV_LOCK_FEATURES				\
> +	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,			\
> +			   ARM_SMCCC_SMC_64,			\
> +			   ARM_SMCCC_OWNER_STANDARD_HYP,	\
> +			   0x40)
> +
> +#define ARM_SMCCC_HV_PV_LOCK_PREEMPTED				\
> +	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,			\
> +			   ARM_SMCCC_SMC_64,			\
> +			   ARM_SMCCC_OWNER_STANDARD_HYP,	\
> +			   0x41)
> +
>  #endif /*__ASSEMBLY__*/
>  #endif /*__LINUX_ARM_SMCCC_H*/
> diff --git a/virt/kvm/arm/hypercalls.c b/virt/kvm/arm/hypercalls.c
> index 550dfa3e53cd..ff13871fd85a 100644
> --- a/virt/kvm/arm/hypercalls.c
> +++ b/virt/kvm/arm/hypercalls.c
> @@ -52,6 +52,9 @@ int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
>  		case ARM_SMCCC_HV_PV_TIME_FEATURES:
>  			val = SMCCC_RET_SUCCESS;
>  			break;
> +		case ARM_SMCCC_HV_PV_LOCK_FEATURES:
> +			val = SMCCC_RET_SUCCESS;
> +			break;

Ideally you wouldn't report that PV_LOCK_FEATURES exists until the
actual hypercalls are wired up to avoid breaking a bisect.

Steve

>  		}
>  		break;
>  	case ARM_SMCCC_HV_PV_TIME_FEATURES:
> -- 
> 2.19.1
> 
> 


* Re: [PATCH 3/5] KVM: arm64: Support pvlock preempted via shared structure
  2019-12-17 13:55 ` [PATCH 3/5] KVM: arm64: Support pvlock preempted via shared structure yezengruan
@ 2019-12-17 14:33   ` Steven Price
  2019-12-26  8:33     ` yezengruan
  0 siblings, 1 reply; 17+ messages in thread
From: Steven Price @ 2019-12-17 14:33 UTC (permalink / raw)
  To: yezengruan
  Cc: linux-kernel, linux-arm-kernel, kvmarm, kvm, linux-doc,
	virtualization, maz, James Morse, linux, Suzuki Poulose,
	julien.thierry.kdev, Catalin Marinas, Mark Rutland, will,
	daniel.lezcano

On Tue, Dec 17, 2019 at 01:55:47PM +0000, yezengruan@huawei.com wrote:
> From: Zengruan Ye <yezengruan@huawei.com>
> 
> Implement the service call for configuring a shared structure between a
> vcpu and the hypervisor in which the hypervisor can tell whether the vcpu
> is running or not.
> 
> The preempted field is zero if 1) some old KVM does not support this field,
> or 2) the vcpu is not preempted. Any other value means the vcpu has been
> preempted.
> 
> Signed-off-by: Zengruan Ye <yezengruan@huawei.com>
> ---
>  arch/arm/include/asm/kvm_host.h   | 13 +++++++++++++
>  arch/arm64/include/asm/kvm_host.h | 17 +++++++++++++++++
>  arch/arm64/kvm/Makefile           |  1 +
>  virt/kvm/arm/arm.c                |  8 ++++++++
>  virt/kvm/arm/hypercalls.c         |  4 ++++
>  virt/kvm/arm/pvlock.c             | 21 +++++++++++++++++++++
>  6 files changed, 64 insertions(+)
>  create mode 100644 virt/kvm/arm/pvlock.c
> 
> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
> index 556cd818eccf..098375f1c89e 100644
> --- a/arch/arm/include/asm/kvm_host.h
> +++ b/arch/arm/include/asm/kvm_host.h
> @@ -356,6 +356,19 @@ static inline bool kvm_arm_is_pvtime_enabled(struct kvm_vcpu_arch *vcpu_arch)
>  	return false;
>  }
>  
> +static inline void kvm_arm_pvlock_preempted_init(struct kvm_vcpu_arch *vcpu_arch)
> +{
> +}
> +
> +static inline bool kvm_arm_is_pvlock_preempted_ready(struct kvm_vcpu_arch *vcpu_arch)
> +{
> +	return false;
> +}
> +
> +static inline void kvm_update_pvlock_preempted(struct kvm_vcpu *vcpu, u64 preempted)
> +{
> +}
> +
>  void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot);
>  
>  struct kvm_vcpu *kvm_mpidr_to_vcpu(struct kvm *kvm, unsigned long mpidr);
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index c61260cf63c5..d9b2a21a87ac 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -354,6 +354,11 @@ struct kvm_vcpu_arch {
>  		u64 last_steal;
>  		gpa_t base;
>  	} steal;
> +
> +	/* Guest PV lock state */
> +	struct {
> +		gpa_t base;
> +	} pv;
>  };
>  
>  /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
> @@ -515,6 +520,18 @@ static inline bool kvm_arm_is_pvtime_enabled(struct kvm_vcpu_arch *vcpu_arch)
>  	return (vcpu_arch->steal.base != GPA_INVALID);
>  }
>  
> +static inline void kvm_arm_pvlock_preempted_init(struct kvm_vcpu_arch *vcpu_arch)
> +{
> +	vcpu_arch->pv.base = GPA_INVALID;
> +}
> +
> +static inline bool kvm_arm_is_pvlock_preempted_ready(struct kvm_vcpu_arch *vcpu_arch)
> +{
> +	return (vcpu_arch->pv.base != GPA_INVALID);
> +}
> +
> +void kvm_update_pvlock_preempted(struct kvm_vcpu *vcpu, u64 preempted);
> +
>  void kvm_set_sei_esr(struct kvm_vcpu *vcpu, u64 syndrome);
>  
>  struct kvm_vcpu *kvm_mpidr_to_vcpu(struct kvm *kvm, unsigned long mpidr);
> diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
> index 5ffbdc39e780..e4591f56d5f1 100644
> --- a/arch/arm64/kvm/Makefile
> +++ b/arch/arm64/kvm/Makefile
> @@ -15,6 +15,7 @@ kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/arm.o $(KVM)/arm/mmu.o $(KVM)/arm/mmio.
>  kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/psci.o $(KVM)/arm/perf.o
>  kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/hypercalls.o
>  kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/pvtime.o
> +kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/pvlock.o
>  
>  kvm-$(CONFIG_KVM_ARM_HOST) += inject_fault.o regmap.o va_layout.o
>  kvm-$(CONFIG_KVM_ARM_HOST) += hyp.o hyp-init.o handle_exit.o
> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> index 12e0280291ce..c562f62fdd45 100644
> --- a/virt/kvm/arm/arm.c
> +++ b/virt/kvm/arm/arm.c
> @@ -383,6 +383,8 @@ int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu)
>  
>  	kvm_arm_pvtime_vcpu_init(&vcpu->arch);
>  
> +	kvm_arm_pvlock_preempted_init(&vcpu->arch);
> +
>  	return kvm_vgic_vcpu_init(vcpu);
>  }
>  
> @@ -421,6 +423,9 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>  		vcpu_set_wfx_traps(vcpu);
>  
>  	vcpu_ptrauth_setup_lazy(vcpu);
> +
> +	if (kvm_arm_is_pvlock_preempted_ready(&vcpu->arch))
> +		kvm_update_pvlock_preempted(vcpu, 0);
>  }
>  
>  void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
> @@ -434,6 +439,9 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
>  	vcpu->cpu = -1;
>  
>  	kvm_arm_set_running_vcpu(NULL);
> +
> +	if (kvm_arm_is_pvlock_preempted_ready(&vcpu->arch))
> +		kvm_update_pvlock_preempted(vcpu, 1);
>  }
>  
>  static void vcpu_power_off(struct kvm_vcpu *vcpu)
> diff --git a/virt/kvm/arm/hypercalls.c b/virt/kvm/arm/hypercalls.c
> index ff13871fd85a..5964982ccd05 100644
> --- a/virt/kvm/arm/hypercalls.c
> +++ b/virt/kvm/arm/hypercalls.c
> @@ -65,6 +65,10 @@ int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
>  		if (gpa != GPA_INVALID)
>  			val = gpa;
>  		break;
> +	case ARM_SMCCC_HV_PV_LOCK_PREEMPTED:
> +		vcpu->arch.pv.base = smccc_get_arg1(vcpu);
> +		val = SMCCC_RET_SUCCESS;

It would be useful to at least do some basic validation that the address
passed in is valid. Debugging problems with this interface will be hard
if it always returns success even if the address cannot be used.

The second patch also states that the structure should be 64 byte
aligned, but there's nothing here to enforce that.

Steve

> +		break;
>  	default:
>  		return kvm_psci_call(vcpu);
>  	}
> diff --git a/virt/kvm/arm/pvlock.c b/virt/kvm/arm/pvlock.c
> new file mode 100644
> index 000000000000..c3464958b0f5
> --- /dev/null
> +++ b/virt/kvm/arm/pvlock.c
> @@ -0,0 +1,21 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * Copyright(c) 2019 Huawei Technologies Co., Ltd
> + * Author: Zengruan Ye <yezengruan@huawei.com>
> + */
> +
> +#include <linux/arm-smccc.h>
> +#include <linux/kvm_host.h>
> +
> +#include <kvm/arm_hypercalls.h>
> +
> +void kvm_update_pvlock_preempted(struct kvm_vcpu *vcpu, u64 preempted)
> +{
> +	u64 preempted_le;
> +	u64 base;
> +	struct kvm *kvm = vcpu->kvm;
> +
> +	base = vcpu->arch.pv.base;
> +	preempted_le = cpu_to_le64(preempted);
> +	kvm_put_guest(kvm, base, preempted_le, u64);
> +}
> -- 
> 2.19.1
> 
> 


* Re: [PATCH 5/5] KVM: arm64: Support the vcpu preemption check
  2019-12-17 13:55 ` [PATCH 5/5] KVM: arm64: Support the vcpu preemption check yezengruan
@ 2019-12-17 14:40   ` Steven Price
  2019-12-26  8:34     ` yezengruan
  0 siblings, 1 reply; 17+ messages in thread
From: Steven Price @ 2019-12-17 14:40 UTC (permalink / raw)
  To: yezengruan
  Cc: linux-kernel, linux-arm-kernel, kvmarm, kvm, linux-doc,
	virtualization, maz, James Morse, linux, Suzuki Poulose,
	julien.thierry.kdev, Catalin Marinas, Mark Rutland, will,
	daniel.lezcano

On Tue, Dec 17, 2019 at 01:55:49PM +0000, yezengruan@huawei.com wrote:
> From: Zengruan Ye <yezengruan@huawei.com>
> 
> Support the vcpu_is_preempted() functionality under KVM/arm64. This will
> enhance lock performance on overcommitted hosts (more runnable vcpus
> than physical cpus in the system) as doing busy waits for preempted
> vcpus will hurt system performance far worse than early yielding.
> 
> unix benchmark result:
>   host:  kernel 5.5.0-rc1, HiSilicon Kunpeng920, 8 cpus
>   guest: kernel 5.5.0-rc1, 16 vcpus
> 
>                test-case                |    after-patch    |   before-patch
> ----------------------------------------+-------------------+------------------
>  Dhrystone 2 using register variables   | 334600751.0 lps   | 335319028.3 lps
>  Double-Precision Whetstone             |     32856.1 MWIPS |     32849.6 MWIPS
>  Execl Throughput                       |      3662.1 lps   |      2718.0 lps
>  File Copy 1024 bufsize 2000 maxblocks  |    432906.4 KBps  |    158011.8 KBps
>  File Copy 256 bufsize 500 maxblocks    |    116023.0 KBps  |     37664.0 KBps
>  File Copy 4096 bufsize 8000 maxblocks  |   1432769.8 KBps  |    441108.8 KBps
>  Pipe Throughput                        |   6405029.6 lps   |   6021457.6 lps
>  Pipe-based Context Switching           |    185872.7 lps   |    184255.3 lps
>  Process Creation                       |      4025.7 lps   |      3706.6 lps
>  Shell Scripts (1 concurrent)           |      6745.6 lpm   |      6436.1 lpm
>  Shell Scripts (8 concurrent)           |       998.7 lpm   |       931.1 lpm
>  System Call Overhead                   |   3913363.1 lps   |   3883287.8 lps
> ----------------------------------------+-------------------+------------------
>  System Benchmarks Index Score          |      1835.1       |      1327.6
> 
> Signed-off-by: Zengruan Ye <yezengruan@huawei.com>
> ---
>  arch/arm64/include/asm/paravirt.h |  3 +
>  arch/arm64/kernel/paravirt.c      | 91 +++++++++++++++++++++++++++++++
>  arch/arm64/kernel/setup.c         |  2 +
>  include/linux/cpuhotplug.h        |  1 +
>  4 files changed, 97 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/paravirt.h b/arch/arm64/include/asm/paravirt.h
> index 7b1c81b544bb..a2cd0183bbef 100644
> --- a/arch/arm64/include/asm/paravirt.h
> +++ b/arch/arm64/include/asm/paravirt.h
> @@ -29,6 +29,8 @@ static inline u64 paravirt_steal_clock(int cpu)
>  
>  int __init pv_time_init(void);
>  
> +int __init kvm_guest_init(void);
> +

This is a *very* generic name - I suggest something like pv_lock_init()
so it's clear what the function actually does.

>  __visible bool __native_vcpu_is_preempted(int cpu);
>  
>  static inline bool pv_vcpu_is_preempted(int cpu)
> @@ -39,6 +41,7 @@ static inline bool pv_vcpu_is_preempted(int cpu)
>  #else
>  
>  #define pv_time_init() do {} while (0)
> +#define kvm_guest_init() do {} while (0)
>  
>  #endif // CONFIG_PARAVIRT
>  
> diff --git a/arch/arm64/kernel/paravirt.c b/arch/arm64/kernel/paravirt.c
> index d8f1ba8c22ce..a86dead40473 100644
> --- a/arch/arm64/kernel/paravirt.c
> +++ b/arch/arm64/kernel/paravirt.c
> @@ -22,6 +22,7 @@
>  #include <asm/paravirt.h>
>  #include <asm/pvclock-abi.h>
>  #include <asm/smp_plat.h>
> +#include <asm/pvlock-abi.h>
>  
>  struct static_key paravirt_steal_enabled;
>  struct static_key paravirt_steal_rq_enabled;
> @@ -158,3 +159,93 @@ int __init pv_time_init(void)
>  
>  	return 0;
>  }
> +
> +DEFINE_PER_CPU(struct pvlock_vcpu_state, pvlock_vcpu_region) __aligned(64);
> +EXPORT_PER_CPU_SYMBOL(pvlock_vcpu_region);
> +
> +static int pvlock_vcpu_state_dying_cpu(unsigned int cpu)
> +{
> +	struct pvlock_vcpu_state *reg;
> +
> +	reg = this_cpu_ptr(&pvlock_vcpu_region);
> +	if (!reg)
> +		return -EFAULT;
> +
> +	memset(reg, 0, sizeof(*reg));

I might be missing something obvious here - but I don't see the point of
this. The hypervisor might immediately overwrite the structure again.
Indeed you should consider a mechanism for the guest to "unregister" the
region - otherwise you will face issues with the likes of kexec.

For pv_time the memory is allocated by the hypervisor, not the guest, to
avoid lifetime issues around kexec.

> +
> +	return 0;
> +}
> +
> +static int init_pvlock_vcpu_state(unsigned int cpu)
> +{
> +	struct pvlock_vcpu_state *reg;
> +	struct arm_smccc_res res;
> +
> +	reg = this_cpu_ptr(&pvlock_vcpu_region);
> +	if (!reg)
> +		return -EFAULT;
> +
> +	/* Pass the memory address to host via hypercall */
> +	arm_smccc_1_1_invoke(ARM_SMCCC_HV_PV_LOCK_PREEMPTED,
> +			     virt_to_phys(reg), &res);
> +
> +	return 0;
> +}
> +
> +static bool kvm_vcpu_is_preempted(int cpu)
> +{
> +	struct pvlock_vcpu_state *reg = &per_cpu(pvlock_vcpu_region, cpu);
> +
> +	if (reg)
> +		return !!(reg->preempted & 1);
> +
> +	return false;
> +}
> +
> +static int kvm_arm_init_pvlock(void)
> +{
> +	int ret;
> +
> +	ret = cpuhp_setup_state(CPUHP_AP_ARM_KVM_PVLOCK_STARTING,
> +				"hypervisor/arm/pvlock:starting",
> +				init_pvlock_vcpu_state,
> +				pvlock_vcpu_state_dying_cpu);
> +	if (ret < 0)
> +		return ret;
> +
> +	pv_ops.lock.vcpu_is_preempted = kvm_vcpu_is_preempted;
> +
> +	pr_info("using PV-lock preempted\n");
> +
> +	return 0;
> +}
> +
> +static bool has_kvm_pvlock(void)
> +{
> +	struct arm_smccc_res res;
> +
> +	/* To detect the presence of PV lock support we require SMCCC 1.1+ */
> +	if (psci_ops.smccc_version < SMCCC_VERSION_1_1)
> +		return false;
> +
> +	arm_smccc_1_1_invoke(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
> +			     ARM_SMCCC_HV_PV_LOCK_FEATURES, &res);
> +
> +	if (res.a0 != SMCCC_RET_SUCCESS)
> +		return false;
> +
> +	return true;
> +}
> +
> +int __init kvm_guest_init(void)
> +{
> +	if (is_hyp_mode_available())
> +		return 0;
> +
> +	if (!has_kvm_pvlock())
> +		return 0;
> +
> +	kvm_arm_init_pvlock();

Consider reporting errors from kvm_arm_init_pvlock()? At the moment
it's impossible to tell the difference between pvlock not being
supported and something failing in the setup.

Steve

> +
> +	return 0;
> +}
> diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
> index 56f664561754..64c4d515ba2d 100644
> --- a/arch/arm64/kernel/setup.c
> +++ b/arch/arm64/kernel/setup.c
> @@ -341,6 +341,8 @@ void __init setup_arch(char **cmdline_p)
>  	smp_init_cpus();
>  	smp_build_mpidr_hash();
>  
> +	kvm_guest_init();
> +
>  	/* Init percpu seeds for random tags after cpus are set up. */
>  	kasan_init_tags();
>  
> diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h
> index e51ee772b9f5..f72ff95ab63a 100644
> --- a/include/linux/cpuhotplug.h
> +++ b/include/linux/cpuhotplug.h
> @@ -138,6 +138,7 @@ enum cpuhp_state {
>  	CPUHP_AP_DUMMY_TIMER_STARTING,
>  	CPUHP_AP_ARM_XEN_STARTING,
>  	CPUHP_AP_ARM_KVMPV_STARTING,
> +	CPUHP_AP_ARM_KVM_PVLOCK_STARTING,
>  	CPUHP_AP_ARM_CORESIGHT_STARTING,
>  	CPUHP_AP_ARM64_ISNDEP_STARTING,
>  	CPUHP_AP_SMPCFD_DYING,
> -- 
> 2.19.1
> 
> 


* Re: [PATCH 1/5] KVM: arm64: Document PV-lock interface
  2019-12-17 14:21   ` Steven Price
@ 2019-12-19 11:45     ` yezengruan
  2019-12-20 11:43       ` Steven Price
  0 siblings, 1 reply; 17+ messages in thread
From: yezengruan @ 2019-12-19 11:45 UTC (permalink / raw)
  To: Steven Price
  Cc: linux-kernel, linux-arm-kernel, kvmarm, kvm, linux-doc,
	virtualization, maz, James Morse, linux, Suzuki Poulose,
	julien.thierry.kdev, Catalin Marinas, Mark Rutland, will,
	daniel.lezcano

Hi Steve,

On 2019/12/17 22:21, Steven Price wrote:
> On Tue, Dec 17, 2019 at 01:55:45PM +0000, yezengruan@huawei.com wrote:
>> From: Zengruan Ye <yezengruan@huawei.com>
>>
>> Introduce a paravirtualization interface for KVM/arm64 that allows the
>> guest to check whether its vcpu is currently running or not.
>>
>> A hypercall interface is provided for the guest to interrogate the
>> hypervisor's support for this interface and the location of the shared
>> memory structures.
>>
>> Signed-off-by: Zengruan Ye <yezengruan@huawei.com>
>> ---
>>  Documentation/virt/kvm/arm/pvlock.rst | 31 +++++++++++++++++++++++++++
>>  1 file changed, 31 insertions(+)
>>  create mode 100644 Documentation/virt/kvm/arm/pvlock.rst
>>
>> diff --git a/Documentation/virt/kvm/arm/pvlock.rst b/Documentation/virt/kvm/arm/pvlock.rst
>> new file mode 100644
>> index 000000000000..eec0c36edf17
>> --- /dev/null
>> +++ b/Documentation/virt/kvm/arm/pvlock.rst
>> @@ -0,0 +1,31 @@
>> +.. SPDX-License-Identifier: GPL-2.0
>> +
>> +Paravirtualized lock support for arm64
>> +======================================
>> +
>> +KVM/arm64 provides some hypervisor service calls to support a paravirtualized
>> +guest checking whether the vcpu is currently running or not.
>> +
>> +Two new SMCCC compatible hypercalls are defined:
>> +
>> +* PV_LOCK_FEATURES:   0xC5000040
>> +* PV_LOCK_PREEMPTED:  0xC5000041
> 
> These values are in the "Standard Hypervisor Service Calls" section of
> SMCCC - so is there a document that describes this features such that
> other OSes or hypervisors can implement it? I'm also not entirely sure
> of the process of ensuring that the IDs picked are non-conflicting.
> 
> Otherwise if this is a KVM specific interface this should probably
> belong within the "Vendor Specific Hypervisor Service Calls" section
> along with some probing that the hypervisor is actually KVM. Although I
> don't see anything KVM specific.

Thanks for pointing it out to me! Actually, I also don't see any document
or anything KVM specific that describes these features. The values in the
"Vendor Specific Hypervisor Service Calls" section may be more appropriate,
such as the following:

* PV_LOCK_FEATURES:   0xC6000020
* PV_LOCK_PREEMPTED:  0xC6000021
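
Encoded with the existing SMCCC macros, that would look roughly like the
following (an illustrative sketch; the macro name here is made up):

  #define ARM_SMCCC_VENDOR_HYP_PV_LOCK_FEATURES                     \
          ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, ARM_SMCCC_SMC_64, \
                             ARM_SMCCC_OWNER_VENDOR_HYP, 0x20)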

Please let me know if you have any suggestions.

> 
>> +
>> +The existence of the PV_LOCK hypercall should be probed using the SMCCC 1.1
>> +ARCH_FEATURES mechanism before calling it.
>> +
>> +PV_LOCK_FEATURES
>> +    ============= ========    ==========
>> +    Function ID:  (uint32)    0xC5000040
>> +    PV_call_id:   (uint32)    The function to query for support.
>> +    Return value: (int64)     NOT_SUPPORTED (-1) or SUCCESS (0) if the relevant
>> +                              PV-lock feature is supported by the hypervisor.
>> +    ============= ========    ==========
>> +
>> +PV_LOCK_PREEMPTED
>> +    ============= ========    ==========
>> +    Function ID:  (uint32)    0xC5000041
>> +    Return value: (int64)     NOT_SUPPORTED (-1) or SUCCESS (0) if the IPA of
>> +                              this vcpu's pv data structure is configured by
>> +                              the hypervisor.
>> +    ============= ========    ==========
> 
> From the code it looks like there's another argument for this SMC - the
> physical address (or IPA) of a struct pvlock_vcpu_state. This structure
> also needs to be described as it is part of the ABI.

Will update.

> 
> Steve
> 
> .
> 

Thanks,

Zengruan




* Re: [PATCH 2/5] KVM: arm64: Implement PV_LOCK_FEATURES call
  2019-12-17 14:28   ` Steven Price
@ 2019-12-19 11:59     ` yezengruan
  0 siblings, 0 replies; 17+ messages in thread
From: yezengruan @ 2019-12-19 11:59 UTC (permalink / raw)
  To: Steven Price
  Cc: linux-kernel, linux-arm-kernel, kvmarm, kvm, linux-doc,
	virtualization, maz, James Morse, linux, Suzuki Poulose,
	julien.thierry.kdev, Catalin Marinas, Mark Rutland, will,
	daniel.lezcano

Hi Steve,

On 2019/12/17 22:28, Steven Price wrote:
> On Tue, Dec 17, 2019 at 01:55:46PM +0000, yezengruan@huawei.com wrote:
>> From: Zengruan Ye <yezengruan@huawei.com>
>>
>> This provides a mechanism for querying which paravirtualized lock
>> features are available in this hypervisor.
>>
>> Also add the header file which defines the ABI for the paravirtualized
>> lock features we're about to add.
>>
>> Signed-off-by: Zengruan Ye <yezengruan@huawei.com>
>> ---
>>  arch/arm64/include/asm/pvlock-abi.h | 16 ++++++++++++++++
>>  include/linux/arm-smccc.h           | 13 +++++++++++++
>>  virt/kvm/arm/hypercalls.c           |  3 +++
>>  3 files changed, 32 insertions(+)
>>  create mode 100644 arch/arm64/include/asm/pvlock-abi.h
>>
>> diff --git a/arch/arm64/include/asm/pvlock-abi.h b/arch/arm64/include/asm/pvlock-abi.h
>> new file mode 100644
>> index 000000000000..06e0c3d7710a
>> --- /dev/null
>> +++ b/arch/arm64/include/asm/pvlock-abi.h
>> @@ -0,0 +1,16 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright(c) 2019 Huawei Technologies Co., Ltd
>> + * Author: Zengruan Ye <yezengruan@huawei.com>
>> + */
>> +
>> +#ifndef __ASM_PVLOCK_ABI_H
>> +#define __ASM_PVLOCK_ABI_H
>> +
>> +struct pvlock_vcpu_state {
>> +	__le64 preempted;
> 
> Somewhere we need to document when 'preempted' is set. It looks like it's a
> 1-bit field from the later patches.

Good point, I'll document this in the pvlock doc.

> 
>> +	/* Structure must be 64 byte aligned, pad to that size */
>> +	u8 padding[56];
>> +} __packed;
>> +
>> +#endif
>> diff --git a/include/linux/arm-smccc.h b/include/linux/arm-smccc.h
>> index 59494df0f55b..59e65a951959 100644
>> --- a/include/linux/arm-smccc.h
>> +++ b/include/linux/arm-smccc.h
>> @@ -377,5 +377,18 @@ asmlinkage void __arm_smccc_hvc(unsigned long a0, unsigned long a1,
>>  			   ARM_SMCCC_OWNER_STANDARD_HYP,	\
>>  			   0x21)
>>  
>> +/* Paravirtualised lock calls */
>> +#define ARM_SMCCC_HV_PV_LOCK_FEATURES				\
>> +	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,			\
>> +			   ARM_SMCCC_SMC_64,			\
>> +			   ARM_SMCCC_OWNER_STANDARD_HYP,	\
>> +			   0x40)
>> +
>> +#define ARM_SMCCC_HV_PV_LOCK_PREEMPTED				\
>> +	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,			\
>> +			   ARM_SMCCC_SMC_64,			\
>> +			   ARM_SMCCC_OWNER_STANDARD_HYP,	\
>> +			   0x41)
>> +
>>  #endif /*__ASSEMBLY__*/
>>  #endif /*__LINUX_ARM_SMCCC_H*/
>> diff --git a/virt/kvm/arm/hypercalls.c b/virt/kvm/arm/hypercalls.c
>> index 550dfa3e53cd..ff13871fd85a 100644
>> --- a/virt/kvm/arm/hypercalls.c
>> +++ b/virt/kvm/arm/hypercalls.c
>> @@ -52,6 +52,9 @@ int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
>>  		case ARM_SMCCC_HV_PV_TIME_FEATURES:
>>  			val = SMCCC_RET_SUCCESS;
>>  			break;
>> +		case ARM_SMCCC_HV_PV_LOCK_FEATURES:
>> +			val = SMCCC_RET_SUCCESS;
>> +			break;
> 
> Ideally you wouldn't report that PV_LOCK_FEATURES exists until the
> actual hypercalls are wired up to avoid breaking a bisect.

Thanks for pointing it out to me! I'll update the code.

> 
> Steve
> 
>>  		}
>>  		break;
>>  	case ARM_SMCCC_HV_PV_TIME_FEATURES:
>> -- 
>> 2.19.1
>>
>>
> 
> .
> 

Thanks,

Zengruan




* Re: [PATCH 1/5] KVM: arm64: Document PV-lock interface
  2019-12-19 11:45     ` yezengruan
@ 2019-12-20 11:43       ` Steven Price
  0 siblings, 0 replies; 17+ messages in thread
From: Steven Price @ 2019-12-20 11:43 UTC (permalink / raw)
  To: yezengruan
  Cc: Mark Rutland, daniel.lezcano, kvm, linux-doc, maz,
	Suzuki Poulose, linux-kernel, virtualization, James Morse,
	julien.thierry.kdev, Catalin Marinas, linux, will, kvmarm,
	linux-arm-kernel

On 19/12/2019 11:45, yezengruan wrote:
> Hi Steve,
> 
> On 2019/12/17 22:21, Steven Price wrote:
>> On Tue, Dec 17, 2019 at 01:55:45PM +0000, yezengruan@huawei.com wrote:
>>> From: Zengruan Ye <yezengruan@huawei.com>
>>>
>>> Introduce a paravirtualization interface for KVM/arm64 that allows the
>>> guest to check whether its vcpu is currently running or not.
>>>
>>> A hypercall interface is provided for the guest to interrogate the
>>> hypervisor's support for this interface and the location of the shared
>>> memory structures.
>>>
>>> Signed-off-by: Zengruan Ye <yezengruan@huawei.com>
>>> ---
>>>  Documentation/virt/kvm/arm/pvlock.rst | 31 +++++++++++++++++++++++++++
>>>  1 file changed, 31 insertions(+)
>>>  create mode 100644 Documentation/virt/kvm/arm/pvlock.rst
>>>
>>> diff --git a/Documentation/virt/kvm/arm/pvlock.rst b/Documentation/virt/kvm/arm/pvlock.rst
>>> new file mode 100644
>>> index 000000000000..eec0c36edf17
>>> --- /dev/null
>>> +++ b/Documentation/virt/kvm/arm/pvlock.rst
>>> @@ -0,0 +1,31 @@
>>> +.. SPDX-License-Identifier: GPL-2.0
>>> +
>>> +Paravirtualized lock support for arm64
>>> +======================================
>>> +
>> +KVM/arm64 provids some hypervisor service calls to support a paravirtualized
>> +guest in discovering whether its vcpu is currently running or not.
>>> +
>>> +Two new SMCCC compatible hypercalls are defined:
>>> +
>>> +* PV_LOCK_FEATURES:   0xC5000040
>>> +* PV_LOCK_PREEMPTED:  0xC5000041
>>
>> These values are in the "Standard Hypervisor Service Calls" section of
>> SMCCC - so is there a document that describes this feature such that
>> other OSes or hypervisors can implement it? I'm also not entirely sure
>> of the process of ensuring that the IDs picked are non-conflicting.
>>
>> Otherwise if this is a KVM specific interface this should probably
>> belong within the "Vendor Specific Hypervisor Service Calls" section
>> along with some probing that the hypervisor is actually KVM. Although I
>> don't see anything KVM specific.
> 
> Thanks for pointing it out to me! Actually, I also don't see any document
> that describes this feature, and nothing about it is KVM specific. The
> values in the "Vendor Specific Hypervisor Service Calls" section may be
> more appropriate, such as the following:
> 
> * PV_LOCK_FEATURES:   0xC6000020
> * PV_LOCK_PREEMPTED:  0xC6000021
> 
> Please let me know if you have any suggestions.

I don't have strong feelings on whether this should be KVM-specific or
generic. I'm not familiar with whether there are competing solutions to
this problem - it's obviously ideal if all hypervisors can make use of
the same interface if possible, but maybe that ship has sailed already?

> However if this is going to be KVM-specific then you'll need to add the
probing logic for checking whether the hypervisor is KVM or not. Will
has a couple of patches on a branch which do this [1] and [2]. Then you
can use kvm_arm_hyp_services_available() as the first step to probe
whether the hypervisor is KVM.

[1]
https://git.kernel.org/pub/scm/linux/kernel/git/will/linux.git/commit/?h=kvm/hvc&id=464f5a1741e5959c3e4d2be1966ae0093b4dce06

[2]
https://git.kernel.org/pub/scm/linux/kernel/git/will/linux.git/commit/?h=kvm/hvc&id=6597490e005d0eeca8ed8c1c1d7b4318ee014681
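
With those applied, the guest-side probe could look roughly like the
following (untested sketch, using the kvm_arm_hyp_services_available()
helper from that branch):

	static bool has_kvm_pvlock(void)
	{
		struct arm_smccc_res res;

		/* Only probe further if the hypervisor identifies as KVM */
		if (!kvm_arm_hyp_services_available())
			return false;

		arm_smccc_1_1_invoke(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
				     ARM_SMCCC_HV_PV_LOCK_FEATURES, &res);
		return res.a0 == SMCCC_RET_SUCCESS;
	}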

Steve

>>
>>> +
>>> +The existence of the PV_LOCK hypercall should be probed using the SMCCC 1.1
>>> +ARCH_FEATURES mechanism before calling it.
>>> +
>>> +PV_LOCK_FEATURES
>>> +    ============= ========    ==========
>>> +    Function ID:  (uint32)    0xC5000040
>>> +    PV_call_id:   (uint32)    The function to query for support.
>>> +    Return value: (int64)     NOT_SUPPORTED (-1) or SUCCESS (0) if the relevant
>>> +                              PV-lock feature is supported by the hypervisor.
>>> +    ============= ========    ==========
>>> +
>>> +PV_LOCK_PREEMPTED
>>> +    ============= ========    ==========
>>> +    Function ID:  (uint32)    0xC5000041
>>> +    Return value: (int64)     NOT_SUPPORTED (-1) or SUCCESS (0) if the IPA of
>>> +                              this vcpu's pv data structure is configured by
>>> +                              the hypervisor.
>>> +    ============= ========    ==========
>>
>> From the code it looks like there's another argument for this SMC - the
>> physical address (or IPA) of a struct pvlock_vcpu_state. This structure
>> also needs to be described as it is part of the ABI.
> 
> Will update.
> 
>>
>> Steve
>>
>> .
>>
> 
> Thanks,
> 
> Zengruan
> 
> 
> 
> _______________________________________________
> linux-arm-kernel mailing list
> linux-arm-kernel@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
> 


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 1/5] KVM: arm64: Document PV-lock interface
  2019-12-17 13:55 ` [PATCH 1/5] KVM: arm64: Document PV-lock interface yezengruan
  2019-12-17 14:21   ` Steven Price
@ 2019-12-20 14:32   ` Markus Elfring
  2019-12-23  2:06     ` yezengruan
  1 sibling, 1 reply; 17+ messages in thread
From: Markus Elfring @ 2019-12-20 14:32 UTC (permalink / raw)
  To: Zengruan Ye, kvm, kvmarm, linux-arm-kernel, linux-doc, virtualization
  Cc: linux-kernel, Catalin Marinas, Daniel Lezcano, James Morse,
	Julien Thierry, Marc Zyngier, Mark Rutland, Russell King,
	Steven Price, Suzuki K. Poulose, Will Deacon

…
> +++ b/Documentation/virt/kvm/arm/pvlock.rst
> +Paravirtualized lock support for arm64
> +======================================
> +
> +KVM/arm64 provids some …
…

I suggest avoiding the typo here.

Regards,
Markus

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 1/5] KVM: arm64: Document PV-lock interface
  2019-12-20 14:32   ` Markus Elfring
@ 2019-12-23  2:06     ` yezengruan
  0 siblings, 0 replies; 17+ messages in thread
From: yezengruan @ 2019-12-23  2:06 UTC (permalink / raw)
  To: Markus Elfring, kvm, kvmarm, linux-arm-kernel, linux-doc, virtualization
  Cc: linux-kernel, Catalin Marinas, Daniel Lezcano, James Morse,
	Julien Thierry, Marc Zyngier, Mark Rutland, Russell King,
	Steven Price, Suzuki K. Poulose, Will Deacon

Hi Markus,

On 2019/12/20 22:32, Markus Elfring wrote:
> …
>> +++ b/Documentation/virt/kvm/arm/pvlock.rst
> …
>> +Paravirtualized lock support for arm64
>> +======================================
>> +
>> +KVM/arm64 provids some …
> …
> 
> I suggest avoiding the typo here.

Thanks for posting this.

> 
> Regards,
> Markus
> 

Thanks,

Zengruan


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 3/5] KVM: arm64: Support pvlock preempted via shared structure
  2019-12-17 14:33   ` Steven Price
@ 2019-12-26  8:33     ` yezengruan
  0 siblings, 0 replies; 17+ messages in thread
From: yezengruan @ 2019-12-26  8:33 UTC (permalink / raw)
  To: Steven Price
  Cc: linux-kernel, linux-arm-kernel, kvmarm, kvm, linux-doc,
	virtualization, maz, James Morse, linux, Suzuki Poulose,
	julien.thierry.kdev, Catalin Marinas, Mark Rutland, will,
	daniel.lezcano

Hi Steve,

On 2019/12/17 22:33, Steven Price wrote:
> On Tue, Dec 17, 2019 at 01:55:47PM +0000, yezengruan@huawei.com wrote:
>> From: Zengruan Ye <yezengruan@huawei.com>
>>
>> Implement the service call for configuring a shared structure between a
>> vcpu and the hypervisor in which the hypervisor can tell whether the vcpu
>> is running or not.
>>
>> The preempted field is zero if 1) some old KVM does not support this field,
>> or 2) the vcpu is not preempted. Any other value means the vcpu has been
>> preempted.
>>
>> Signed-off-by: Zengruan Ye <yezengruan@huawei.com>
>> ---
>>  arch/arm/include/asm/kvm_host.h   | 13 +++++++++++++
>>  arch/arm64/include/asm/kvm_host.h | 17 +++++++++++++++++
>>  arch/arm64/kvm/Makefile           |  1 +
>>  virt/kvm/arm/arm.c                |  8 ++++++++
>>  virt/kvm/arm/hypercalls.c         |  4 ++++
>>  virt/kvm/arm/pvlock.c             | 21 +++++++++++++++++++++
>>  6 files changed, 64 insertions(+)
>>  create mode 100644 virt/kvm/arm/pvlock.c
>>
>> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
>> index 556cd818eccf..098375f1c89e 100644
>> --- a/arch/arm/include/asm/kvm_host.h
>> +++ b/arch/arm/include/asm/kvm_host.h
>> @@ -356,6 +356,19 @@ static inline bool kvm_arm_is_pvtime_enabled(struct kvm_vcpu_arch *vcpu_arch)
>>  	return false;
>>  }
>>  
>> +static inline void kvm_arm_pvlock_preempted_init(struct kvm_vcpu_arch *vcpu_arch)
>> +{
>> +}
>> +
>> +static inline bool kvm_arm_is_pvlock_preempted_ready(struct kvm_vcpu_arch *vcpu_arch)
>> +{
>> +	return false;
>> +}
>> +
>> +static inline void kvm_update_pvlock_preempted(struct kvm_vcpu *vcpu, u64 preempted)
>> +{
>> +}
>> +
>>  void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot);
>>  
>>  struct kvm_vcpu *kvm_mpidr_to_vcpu(struct kvm *kvm, unsigned long mpidr);
>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
>> index c61260cf63c5..d9b2a21a87ac 100644
>> --- a/arch/arm64/include/asm/kvm_host.h
>> +++ b/arch/arm64/include/asm/kvm_host.h
>> @@ -354,6 +354,11 @@ struct kvm_vcpu_arch {
>>  		u64 last_steal;
>>  		gpa_t base;
>>  	} steal;
>> +
>> +	/* Guest PV lock state */
>> +	struct {
>> +		gpa_t base;
>> +	} pv;
>>  };
>>  
>>  /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
>> @@ -515,6 +520,18 @@ static inline bool kvm_arm_is_pvtime_enabled(struct kvm_vcpu_arch *vcpu_arch)
>>  	return (vcpu_arch->steal.base != GPA_INVALID);
>>  }
>>  
>> +static inline void kvm_arm_pvlock_preempted_init(struct kvm_vcpu_arch *vcpu_arch)
>> +{
>> +	vcpu_arch->pv.base = GPA_INVALID;
>> +}
>> +
>> +static inline bool kvm_arm_is_pvlock_preempted_ready(struct kvm_vcpu_arch *vcpu_arch)
>> +{
>> +	return (vcpu_arch->pv.base != GPA_INVALID);
>> +}
>> +
>> +void kvm_update_pvlock_preempted(struct kvm_vcpu *vcpu, u64 preempted);
>> +
>>  void kvm_set_sei_esr(struct kvm_vcpu *vcpu, u64 syndrome);
>>  
>>  struct kvm_vcpu *kvm_mpidr_to_vcpu(struct kvm *kvm, unsigned long mpidr);
>> diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
>> index 5ffbdc39e780..e4591f56d5f1 100644
>> --- a/arch/arm64/kvm/Makefile
>> +++ b/arch/arm64/kvm/Makefile
>> @@ -15,6 +15,7 @@ kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/arm.o $(KVM)/arm/mmu.o $(KVM)/arm/mmio.
>>  kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/psci.o $(KVM)/arm/perf.o
>>  kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/hypercalls.o
>>  kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/pvtime.o
>> +kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/pvlock.o
>>  
>>  kvm-$(CONFIG_KVM_ARM_HOST) += inject_fault.o regmap.o va_layout.o
>>  kvm-$(CONFIG_KVM_ARM_HOST) += hyp.o hyp-init.o handle_exit.o
>> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
>> index 12e0280291ce..c562f62fdd45 100644
>> --- a/virt/kvm/arm/arm.c
>> +++ b/virt/kvm/arm/arm.c
>> @@ -383,6 +383,8 @@ int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu)
>>  
>>  	kvm_arm_pvtime_vcpu_init(&vcpu->arch);
>>  
>> +	kvm_arm_pvlock_preempted_init(&vcpu->arch);
>> +
>>  	return kvm_vgic_vcpu_init(vcpu);
>>  }
>>  
>> @@ -421,6 +423,9 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>>  		vcpu_set_wfx_traps(vcpu);
>>  
>>  	vcpu_ptrauth_setup_lazy(vcpu);
>> +
>> +	if (kvm_arm_is_pvlock_preempted_ready(&vcpu->arch))
>> +		kvm_update_pvlock_preempted(vcpu, 0);
>>  }
>>  
>>  void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
>> @@ -434,6 +439,9 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
>>  	vcpu->cpu = -1;
>>  
>>  	kvm_arm_set_running_vcpu(NULL);
>> +
>> +	if (kvm_arm_is_pvlock_preempted_ready(&vcpu->arch))
>> +		kvm_update_pvlock_preempted(vcpu, 1);
>>  }
>>  
>>  static void vcpu_power_off(struct kvm_vcpu *vcpu)
>> diff --git a/virt/kvm/arm/hypercalls.c b/virt/kvm/arm/hypercalls.c
>> index ff13871fd85a..5964982ccd05 100644
>> --- a/virt/kvm/arm/hypercalls.c
>> +++ b/virt/kvm/arm/hypercalls.c
>> @@ -65,6 +65,10 @@ int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
>>  		if (gpa != GPA_INVALID)
>>  			val = gpa;
>>  		break;
>> +	case ARM_SMCCC_HV_PV_LOCK_PREEMPTED:
>> +		vcpu->arch.pv.base = smccc_get_arg1(vcpu);
>> +		val = SMCCC_RET_SUCCESS;
> 
> It would be useful to at least do some basic validation that the address
> passed in is valid. Debugging problems with this interface will be hard
> if it always returns success even if the address cannot be used.
> 
> The second patch also states that the structure should be 64 byte
> aligned, but there's nothing here to enforce that.

Thanks for posting this. I'll update the code.
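
Something along these lines is what I have in mind for the
ARM_SMCCC_HV_PV_LOCK_PREEMPTED case in kvm_hvc_call_handler() (sketch
only; the exact error value for a rejected address is still to be
decided):

	case ARM_SMCCC_HV_PV_LOCK_PREEMPTED:
		gpa = smccc_get_arg1(vcpu);
		/* Enforce the documented 64 byte alignment and check
		 * that the IPA maps to guest memory before accepting
		 * it; val keeps its SMCCC_RET_NOT_SUPPORTED default
		 * otherwise.
		 */
		if ((gpa & 63) || kvm_is_error_gpa(vcpu->kvm, gpa))
			break;
		vcpu->arch.pv.base = gpa;
		val = SMCCC_RET_SUCCESS;
		break;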

> 
> Steve
> 
>> +		break;
>>  	default:
>>  		return kvm_psci_call(vcpu);
>>  	}
>> diff --git a/virt/kvm/arm/pvlock.c b/virt/kvm/arm/pvlock.c
>> new file mode 100644
>> index 000000000000..c3464958b0f5
>> --- /dev/null
>> +++ b/virt/kvm/arm/pvlock.c
>> @@ -0,0 +1,21 @@
>> +/* SPDX-License-Identifier: GPL-2.0-only */
>> +/*
>> + * Copyright(c) 2019 Huawei Technologies Co., Ltd
>> + * Author: Zengruan Ye <yezengruan@huawei.com>
>> + */
>> +
>> +#include <linux/arm-smccc.h>
>> +#include <linux/kvm_host.h>
>> +
>> +#include <kvm/arm_hypercalls.h>
>> +
>> +void kvm_update_pvlock_preempted(struct kvm_vcpu *vcpu, u64 preempted)
>> +{
>> +	u64 preempted_le;
>> +	u64 base;
>> +	struct kvm *kvm = vcpu->kvm;
>> +
>> +	base = vcpu->arch.pv.base;
>> +	preempted_le = cpu_to_le64(preempted);
>> +	kvm_put_guest(kvm, base, preempted_le, u64);
>> +}
>> -- 
>> 2.19.1
>>
>>
> 
> .
> 

Thanks,

Zengruan


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 5/5] KVM: arm64: Support the vcpu preemption check
  2019-12-17 14:40   ` Steven Price
@ 2019-12-26  8:34     ` yezengruan
  0 siblings, 0 replies; 17+ messages in thread
From: yezengruan @ 2019-12-26  8:34 UTC (permalink / raw)
  To: Steven Price
  Cc: linux-kernel, linux-arm-kernel, kvmarm, kvm, linux-doc,
	virtualization, maz, James Morse, linux, Suzuki Poulose,
	julien.thierry.kdev, Catalin Marinas, Mark Rutland, will,
	daniel.lezcano

Hi Steve,

On 2019/12/17 22:40, Steven Price wrote:
> On Tue, Dec 17, 2019 at 01:55:49PM +0000, yezengruan@huawei.com wrote:
>> From: Zengruan Ye <yezengruan@huawei.com>
>>
>> Support the vcpu_is_preempted() functionality under KVM/arm64. This will
>> enhance lock performance on overcommitted hosts (more runnable vcpus
>> than physical cpus in the system) as doing busy waits for preempted
>> vcpus will hurt system performance far worse than early yielding.
>>
>> unix benchmark result:
>>   host:  kernel 5.5.0-rc1, HiSilicon Kunpeng920, 8 cpus
>>   guest: kernel 5.5.0-rc1, 16 vcpus
>>
>>                test-case                |    after-patch    |   before-patch
>> ----------------------------------------+-------------------+------------------
>>  Dhrystone 2 using register variables   | 334600751.0 lps   | 335319028.3 lps
>>  Double-Precision Whetstone             |     32856.1 MWIPS |     32849.6 MWIPS
>>  Execl Throughput                       |      3662.1 lps   |      2718.0 lps
>>  File Copy 1024 bufsize 2000 maxblocks  |    432906.4 KBps  |    158011.8 KBps
>>  File Copy 256 bufsize 500 maxblocks    |    116023.0 KBps  |     37664.0 KBps
>>  File Copy 4096 bufsize 8000 maxblocks  |   1432769.8 KBps  |    441108.8 KBps
>>  Pipe Throughput                        |   6405029.6 lps   |   6021457.6 lps
>>  Pipe-based Context Switching           |    185872.7 lps   |    184255.3 lps
>>  Process Creation                       |      4025.7 lps   |      3706.6 lps
>>  Shell Scripts (1 concurrent)           |      6745.6 lpm   |      6436.1 lpm
>>  Shell Scripts (8 concurrent)           |       998.7 lpm   |       931.1 lpm
>>  System Call Overhead                   |   3913363.1 lps   |   3883287.8 lps
>> ----------------------------------------+-------------------+------------------
>>  System Benchmarks Index Score          |      1835.1       |      1327.6
>>
>> Signed-off-by: Zengruan Ye <yezengruan@huawei.com>
>> ---
>>  arch/arm64/include/asm/paravirt.h |  3 +
>>  arch/arm64/kernel/paravirt.c      | 91 +++++++++++++++++++++++++++++++
>>  arch/arm64/kernel/setup.c         |  2 +
>>  include/linux/cpuhotplug.h        |  1 +
>>  4 files changed, 97 insertions(+)
>>
>> diff --git a/arch/arm64/include/asm/paravirt.h b/arch/arm64/include/asm/paravirt.h
>> index 7b1c81b544bb..a2cd0183bbef 100644
>> --- a/arch/arm64/include/asm/paravirt.h
>> +++ b/arch/arm64/include/asm/paravirt.h
>> @@ -29,6 +29,8 @@ static inline u64 paravirt_steal_clock(int cpu)
>>  
>>  int __init pv_time_init(void);
>>  
>> +int __init kvm_guest_init(void);
>> +
> 
> This is a *very* generic name - I suggest something like pv_lock_init()
> so it's clear what the function actually does.
> 
>>  __visible bool __native_vcpu_is_preempted(int cpu);
>>  
>>  static inline bool pv_vcpu_is_preempted(int cpu)
>> @@ -39,6 +41,7 @@ static inline bool pv_vcpu_is_preempted(int cpu)
>>  #else
>>  
>>  #define pv_time_init() do {} while (0)
>> +#define kvm_guest_init() do {} while (0)
>>  
>>  #endif // CONFIG_PARAVIRT
>>  
>> diff --git a/arch/arm64/kernel/paravirt.c b/arch/arm64/kernel/paravirt.c
>> index d8f1ba8c22ce..a86dead40473 100644
>> --- a/arch/arm64/kernel/paravirt.c
>> +++ b/arch/arm64/kernel/paravirt.c
>> @@ -22,6 +22,7 @@
>>  #include <asm/paravirt.h>
>>  #include <asm/pvclock-abi.h>
>>  #include <asm/smp_plat.h>
>> +#include <asm/pvlock-abi.h>
>>  
>>  struct static_key paravirt_steal_enabled;
>>  struct static_key paravirt_steal_rq_enabled;
>> @@ -158,3 +159,93 @@ int __init pv_time_init(void)
>>  
>>  	return 0;
>>  }
>> +
>> +DEFINE_PER_CPU(struct pvlock_vcpu_state, pvlock_vcpu_region) __aligned(64);
>> +EXPORT_PER_CPU_SYMBOL(pvlock_vcpu_region);
>> +
>> +static int pvlock_vcpu_state_dying_cpu(unsigned int cpu)
>> +{
>> +	struct pvlock_vcpu_state *reg;
>> +
>> +	reg = this_cpu_ptr(&pvlock_vcpu_region);
>> +	if (!reg)
>> +		return -EFAULT;
>> +
>> +	memset(reg, 0, sizeof(*reg));
> 
> I might be missing something obvious here - but I don't see the point of
> this. The hypervisor might immediately overwrite the structure again.
> Indeed you should consider a mechanism for the guest to "unregister" the
> region - otherwise you will face issues with the likes of kexec.
> 
> For pv_time the memory is allocated by the hypervisor, not the guest, to
> avoid lifetime issues around kexec.


Thanks for pointing it out to me! I'll update the memory allocation
mechanism of the PV lock structure to avoid lifetime issues around
kexec.
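
As an interim step, the dying-cpu callback could at least ask the
hypervisor to stop updating the region instead of just zeroing it.
A rough sketch (this assumes a new ABI convention that passing an
all-ones IPA unregisters the structure, which would also need to be
added to the pvlock document):

	static int pvlock_vcpu_state_dying_cpu(unsigned int cpu)
	{
		struct arm_smccc_res res;

		/* Assumed convention: an invalid (all-ones) IPA tells
		 * the hypervisor to stop writing this vcpu's region.
		 */
		arm_smccc_1_1_invoke(ARM_SMCCC_HV_PV_LOCK_PREEMPTED,
				     ~0UL, &res);
		return 0;
	}

That still doesn't help a kexec'd kernel that never runs the callback,
so having the hypervisor own the allocation as pv_time does looks like
the better long-term fix.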

> 
>> +
>> +	return 0;
>> +}
>> +
>> +static int init_pvlock_vcpu_state(unsigned int cpu)
>> +{
>> +	struct pvlock_vcpu_state *reg;
>> +	struct arm_smccc_res res;
>> +
>> +	reg = this_cpu_ptr(&pvlock_vcpu_region);
>> +	if (!reg)
>> +		return -EFAULT;
>> +
>> +	/* Pass the memory address to host via hypercall */
>> +	arm_smccc_1_1_invoke(ARM_SMCCC_HV_PV_LOCK_PREEMPTED,
>> +			     virt_to_phys(reg), &res);
>> +
>> +	return 0;
>> +}
>> +
>> +static bool kvm_vcpu_is_preempted(int cpu)
>> +{
>> +	struct pvlock_vcpu_state *reg = &per_cpu(pvlock_vcpu_region, cpu);
>> +
>> +	if (reg)
>> +		return !!(reg->preempted & 1);
>> +
>> +	return false;
>> +}
>> +
>> +static int kvm_arm_init_pvlock(void)
>> +{
>> +	int ret;
>> +
>> +	ret = cpuhp_setup_state(CPUHP_AP_ARM_KVM_PVLOCK_STARTING,
>> +				"hypervisor/arm/pvlock:starting",
>> +				init_pvlock_vcpu_state,
>> +				pvlock_vcpu_state_dying_cpu);
>> +	if (ret < 0)
>> +		return ret;
>> +
>> +	pv_ops.lock.vcpu_is_preempted = kvm_vcpu_is_preempted;
>> +
>> +	pr_info("using PV-lock preempted\n");
>> +
>> +	return 0;
>> +}
>> +
>> +static bool has_kvm_pvlock(void)
>> +{
>> +	struct arm_smccc_res res;
>> +
>> +	/* To detect the presence of PV lock support we require SMCCC 1.1+ */
>> +	if (psci_ops.smccc_version < SMCCC_VERSION_1_1)
>> +		return false;
>> +
>> +	arm_smccc_1_1_invoke(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
>> +			     ARM_SMCCC_HV_PV_LOCK_FEATURES, &res);
>> +
>> +	if (res.a0 != SMCCC_RET_SUCCESS)
>> +		return false;
>> +
>> +	return true;
>> +}
>> +
>> +int __init kvm_guest_init(void)
>> +{
>> +	if (is_hyp_mode_available())
>> +		return 0;
>> +
>> +	if (!has_kvm_pvlock())
>> +		return 0;
>> +
>> +	kvm_arm_init_pvlock();
> 
> Consider reporting errors from kvm_arm_init_pvlock()? At the moment
> it's impossible to tell the difference between pvlock not being
> supported and something failing in the setup.

Good point, I'll update the code.
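
For example, something like this (sketch, also picking up the
pv_lock_init() rename suggested earlier):

	int __init pv_lock_init(void)
	{
		int ret;

		if (is_hyp_mode_available() || !has_kvm_pvlock())
			return 0;

		ret = kvm_arm_init_pvlock();
		if (ret)
			pr_warn("PV-lock init failed: %d\n", ret);

		return ret;
	}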

> 
> Steve
> 
>> +
>> +	return 0;
>> +}
>> diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
>> index 56f664561754..64c4d515ba2d 100644
>> --- a/arch/arm64/kernel/setup.c
>> +++ b/arch/arm64/kernel/setup.c
>> @@ -341,6 +341,8 @@ void __init setup_arch(char **cmdline_p)
>>  	smp_init_cpus();
>>  	smp_build_mpidr_hash();
>>  
>> +	kvm_guest_init();
>> +
>>  	/* Init percpu seeds for random tags after cpus are set up. */
>>  	kasan_init_tags();
>>  
>> diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h
>> index e51ee772b9f5..f72ff95ab63a 100644
>> --- a/include/linux/cpuhotplug.h
>> +++ b/include/linux/cpuhotplug.h
>> @@ -138,6 +138,7 @@ enum cpuhp_state {
>>  	CPUHP_AP_DUMMY_TIMER_STARTING,
>>  	CPUHP_AP_ARM_XEN_STARTING,
>>  	CPUHP_AP_ARM_KVMPV_STARTING,
>> +	CPUHP_AP_ARM_KVM_PVLOCK_STARTING,
>>  	CPUHP_AP_ARM_CORESIGHT_STARTING,
>>  	CPUHP_AP_ARM64_ISNDEP_STARTING,
>>  	CPUHP_AP_SMPCFD_DYING,
>> -- 
>> 2.19.1
>>
>>
> 
> .
> 

Thanks,

Zengruan


^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2019-12-26  8:34 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-12-17 13:55 [PATCH 0/5] KVM: arm64: vcpu preempted check support yezengruan
2019-12-17 13:55 ` [PATCH 1/5] KVM: arm64: Document PV-lock interface yezengruan
2019-12-17 14:21   ` Steven Price
2019-12-19 11:45     ` yezengruan
2019-12-20 11:43       ` Steven Price
2019-12-20 14:32   ` Markus Elfring
2019-12-23  2:06     ` yezengruan
2019-12-17 13:55 ` [PATCH 2/5] KVM: arm64: Implement PV_LOCK_FEATURES call yezengruan
2019-12-17 14:28   ` Steven Price
2019-12-19 11:59     ` yezengruan
2019-12-17 13:55 ` [PATCH 3/5] KVM: arm64: Support pvlock preempted via shared structure yezengruan
2019-12-17 14:33   ` Steven Price
2019-12-26  8:33     ` yezengruan
2019-12-17 13:55 ` [PATCH 4/5] KVM: arm64: Add interface to support vcpu preempted check yezengruan
2019-12-17 13:55 ` [PATCH 5/5] KVM: arm64: Support the vcpu preemption check yezengruan
2019-12-17 14:40   ` Steven Price
2019-12-26  8:34     ` yezengruan
