* [PATCH v2 0/4] Add support for XMM fast hypercalls
@ 2021-04-12 17:00 Siddharth Chandrasekaran
  2021-04-12 17:00 ` [PATCH v2 1/4] KVM: x86: Move FPU register accessors into fpu.h Siddharth Chandrasekaran
                   ` (3 more replies)
  0 siblings, 4 replies; 16+ messages in thread
From: Siddharth Chandrasekaran @ 2021-04-12 17:00 UTC (permalink / raw)
  To: K. Y. Srinivasan, Haiyang Zhang, Stephen Hemminger, Wei Liu,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86,
	H. Peter Anvin, Paolo Bonzini, Sean Christopherson,
	Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel
  Cc: Siddharth Chandrasekaran, Alexander Graf, Evgeny Iakovlev,
	Liran Alon, Ioannis Aslanidis, linux-hyperv, linux-kernel, kvm

Hyper-V supports the use of XMM registers to perform fast hypercalls.
This allows guests to take advantage of the improved performance of the
fast hypercall interface even though a hypercall may require more than
(the current maximum of) two general purpose registers.

The XMM fast hypercall interface uses an additional six XMM registers
(XMM0 to XMM5) to allow the caller to pass an input parameter block of
up to 112 bytes. Hyper-V can also return data back to the guest in the
remaining XMM registers that are not used by the current hypercall.
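
For reference, the 112 bytes are the two fast-call GPRs (RDX and R8 on
x64; 16 bytes) plus the six XMM registers (96 bytes). A purely
illustrative C sketch of that layout (the struct name is hypothetical
and not part of this series):

        struct hv_xmm_fast_input {      /* hypothetical, for illustration */
                u64      gpr[2];        /* RDX, R8:    bytes   0..15  */
                sse128_t xmm[6];        /* XMM0..XMM5: bytes  16..111 */
        };                              /* total: 112 bytes           */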

Although the Hyper-V TLFS says that a guest cannot use this feature
unless the hypervisor advertises support for it, some hypercalls that
we plan to upstream later use them anyway. This patchset adds the
necessary infrastructure for handling input/output via XMM registers
and patches kvm_hv_flush_tlb() to use XMM input arguments.

~ Sid.

v2:
- Add hc.fast to is_xmm_fast_hypercall() check
- Split CPUID feature bits for input and output

Siddharth Chandrasekaran (4):
  KVM: x86: Move FPU register accessors into fpu.h
  KVM: hyper-v: Collect hypercall params into struct
  KVM: x86: kvm_hv_flush_tlb use inputs from XMM registers
  KVM: hyper-v: Advertise support for fast XMM hypercalls

 arch/x86/include/asm/hyperv-tlfs.h |   7 +-
 arch/x86/kvm/emulate.c             | 138 +++-------------
 arch/x86/kvm/fpu.h                 | 140 +++++++++++++++++
 arch/x86/kvm/hyperv.c              | 242 +++++++++++++++++++----------
 4 files changed, 327 insertions(+), 200 deletions(-)
 create mode 100644 arch/x86/kvm/fpu.h

-- 
2.17.1




* [PATCH v2 1/4] KVM: x86: Move FPU register accessors into fpu.h
  2021-04-12 17:00 [PATCH v2 0/4] Add support for XMM fast hypercalls Siddharth Chandrasekaran
@ 2021-04-12 17:00 ` Siddharth Chandrasekaran
  2021-04-13 13:40   ` Vitaly Kuznetsov
  2021-04-12 17:00 ` [PATCH v2 2/4] KVM: hyper-v: Collect hypercall params into struct Siddharth Chandrasekaran
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 16+ messages in thread
From: Siddharth Chandrasekaran @ 2021-04-12 17:00 UTC (permalink / raw)
  To: K. Y. Srinivasan, Haiyang Zhang, Stephen Hemminger, Wei Liu,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86,
	H. Peter Anvin, Paolo Bonzini, Sean Christopherson,
	Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel
  Cc: Siddharth Chandrasekaran, Alexander Graf, Evgeny Iakovlev,
	Liran Alon, Ioannis Aslanidis, linux-hyperv, linux-kernel, kvm

Hyper-V XMM fast hypercalls use XMM registers to pass input/output
parameters. To access these, hyperv.c can reuse some FPU register
accessors defined in emulate.c. Move them to a common location so that
both files can access them.

While at it, reorder the parameters of these accessor methods to make
them more readable.
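
For illustration, the call-site change this implies (mirroring the
diff below; "read register into data" now reads left to right):

        read_sse_reg(&op->vec_val, reg);        /* before: data, reg */
        kvm_read_sse_reg(reg, &op->vec_val);    /* after:  reg, data */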

Cc: Alexander Graf <graf@amazon.com>
Cc: Evgeny Iakovlev <eyakovl@amazon.de>
Signed-off-by: Siddharth Chandrasekaran <sidcha@amazon.de>
---
 arch/x86/kvm/emulate.c | 138 ++++++----------------------------------
 arch/x86/kvm/fpu.h     | 140 +++++++++++++++++++++++++++++++++++++++++
 2 files changed, 158 insertions(+), 120 deletions(-)
 create mode 100644 arch/x86/kvm/fpu.h

diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index f7970ba6219f..296f8f3ce988 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -22,7 +22,6 @@
 #include "kvm_cache_regs.h"
 #include "kvm_emulate.h"
 #include <linux/stringify.h>
-#include <asm/fpu/api.h>
 #include <asm/debugreg.h>
 #include <asm/nospec-branch.h>
 
@@ -30,6 +29,7 @@
 #include "tss.h"
 #include "mmu.h"
 #include "pmu.h"
+#include "fpu.h"
 
 /*
  * Operand types
@@ -1081,116 +1081,14 @@ static void fetch_register_operand(struct operand *op)
 	}
 }
 
-static void emulator_get_fpu(void)
-{
-	fpregs_lock();
-
-	fpregs_assert_state_consistent();
-	if (test_thread_flag(TIF_NEED_FPU_LOAD))
-		switch_fpu_return();
-}
-
-static void emulator_put_fpu(void)
-{
-	fpregs_unlock();
-}
-
-static void read_sse_reg(sse128_t *data, int reg)
-{
-	emulator_get_fpu();
-	switch (reg) {
-	case 0: asm("movdqa %%xmm0, %0" : "=m"(*data)); break;
-	case 1: asm("movdqa %%xmm1, %0" : "=m"(*data)); break;
-	case 2: asm("movdqa %%xmm2, %0" : "=m"(*data)); break;
-	case 3: asm("movdqa %%xmm3, %0" : "=m"(*data)); break;
-	case 4: asm("movdqa %%xmm4, %0" : "=m"(*data)); break;
-	case 5: asm("movdqa %%xmm5, %0" : "=m"(*data)); break;
-	case 6: asm("movdqa %%xmm6, %0" : "=m"(*data)); break;
-	case 7: asm("movdqa %%xmm7, %0" : "=m"(*data)); break;
-#ifdef CONFIG_X86_64
-	case 8: asm("movdqa %%xmm8, %0" : "=m"(*data)); break;
-	case 9: asm("movdqa %%xmm9, %0" : "=m"(*data)); break;
-	case 10: asm("movdqa %%xmm10, %0" : "=m"(*data)); break;
-	case 11: asm("movdqa %%xmm11, %0" : "=m"(*data)); break;
-	case 12: asm("movdqa %%xmm12, %0" : "=m"(*data)); break;
-	case 13: asm("movdqa %%xmm13, %0" : "=m"(*data)); break;
-	case 14: asm("movdqa %%xmm14, %0" : "=m"(*data)); break;
-	case 15: asm("movdqa %%xmm15, %0" : "=m"(*data)); break;
-#endif
-	default: BUG();
-	}
-	emulator_put_fpu();
-}
-
-static void write_sse_reg(sse128_t *data, int reg)
-{
-	emulator_get_fpu();
-	switch (reg) {
-	case 0: asm("movdqa %0, %%xmm0" : : "m"(*data)); break;
-	case 1: asm("movdqa %0, %%xmm1" : : "m"(*data)); break;
-	case 2: asm("movdqa %0, %%xmm2" : : "m"(*data)); break;
-	case 3: asm("movdqa %0, %%xmm3" : : "m"(*data)); break;
-	case 4: asm("movdqa %0, %%xmm4" : : "m"(*data)); break;
-	case 5: asm("movdqa %0, %%xmm5" : : "m"(*data)); break;
-	case 6: asm("movdqa %0, %%xmm6" : : "m"(*data)); break;
-	case 7: asm("movdqa %0, %%xmm7" : : "m"(*data)); break;
-#ifdef CONFIG_X86_64
-	case 8: asm("movdqa %0, %%xmm8" : : "m"(*data)); break;
-	case 9: asm("movdqa %0, %%xmm9" : : "m"(*data)); break;
-	case 10: asm("movdqa %0, %%xmm10" : : "m"(*data)); break;
-	case 11: asm("movdqa %0, %%xmm11" : : "m"(*data)); break;
-	case 12: asm("movdqa %0, %%xmm12" : : "m"(*data)); break;
-	case 13: asm("movdqa %0, %%xmm13" : : "m"(*data)); break;
-	case 14: asm("movdqa %0, %%xmm14" : : "m"(*data)); break;
-	case 15: asm("movdqa %0, %%xmm15" : : "m"(*data)); break;
-#endif
-	default: BUG();
-	}
-	emulator_put_fpu();
-}
-
-static void read_mmx_reg(u64 *data, int reg)
-{
-	emulator_get_fpu();
-	switch (reg) {
-	case 0: asm("movq %%mm0, %0" : "=m"(*data)); break;
-	case 1: asm("movq %%mm1, %0" : "=m"(*data)); break;
-	case 2: asm("movq %%mm2, %0" : "=m"(*data)); break;
-	case 3: asm("movq %%mm3, %0" : "=m"(*data)); break;
-	case 4: asm("movq %%mm4, %0" : "=m"(*data)); break;
-	case 5: asm("movq %%mm5, %0" : "=m"(*data)); break;
-	case 6: asm("movq %%mm6, %0" : "=m"(*data)); break;
-	case 7: asm("movq %%mm7, %0" : "=m"(*data)); break;
-	default: BUG();
-	}
-	emulator_put_fpu();
-}
-
-static void write_mmx_reg(u64 *data, int reg)
-{
-	emulator_get_fpu();
-	switch (reg) {
-	case 0: asm("movq %0, %%mm0" : : "m"(*data)); break;
-	case 1: asm("movq %0, %%mm1" : : "m"(*data)); break;
-	case 2: asm("movq %0, %%mm2" : : "m"(*data)); break;
-	case 3: asm("movq %0, %%mm3" : : "m"(*data)); break;
-	case 4: asm("movq %0, %%mm4" : : "m"(*data)); break;
-	case 5: asm("movq %0, %%mm5" : : "m"(*data)); break;
-	case 6: asm("movq %0, %%mm6" : : "m"(*data)); break;
-	case 7: asm("movq %0, %%mm7" : : "m"(*data)); break;
-	default: BUG();
-	}
-	emulator_put_fpu();
-}
-
 static int em_fninit(struct x86_emulate_ctxt *ctxt)
 {
 	if (ctxt->ops->get_cr(ctxt, 0) & (X86_CR0_TS | X86_CR0_EM))
 		return emulate_nm(ctxt);
 
-	emulator_get_fpu();
+	kvm_fpu_get();
 	asm volatile("fninit");
-	emulator_put_fpu();
+	kvm_fpu_put();
 	return X86EMUL_CONTINUE;
 }
 
@@ -1201,9 +1099,9 @@ static int em_fnstcw(struct x86_emulate_ctxt *ctxt)
 	if (ctxt->ops->get_cr(ctxt, 0) & (X86_CR0_TS | X86_CR0_EM))
 		return emulate_nm(ctxt);
 
-	emulator_get_fpu();
+	kvm_fpu_get();
 	asm volatile("fnstcw %0": "+m"(fcw));
-	emulator_put_fpu();
+	kvm_fpu_put();
 
 	ctxt->dst.val = fcw;
 
@@ -1217,9 +1115,9 @@ static int em_fnstsw(struct x86_emulate_ctxt *ctxt)
 	if (ctxt->ops->get_cr(ctxt, 0) & (X86_CR0_TS | X86_CR0_EM))
 		return emulate_nm(ctxt);
 
-	emulator_get_fpu();
+	kvm_fpu_get();
 	asm volatile("fnstsw %0": "+m"(fsw));
-	emulator_put_fpu();
+	kvm_fpu_put();
 
 	ctxt->dst.val = fsw;
 
@@ -1238,7 +1136,7 @@ static void decode_register_operand(struct x86_emulate_ctxt *ctxt,
 		op->type = OP_XMM;
 		op->bytes = 16;
 		op->addr.xmm = reg;
-		read_sse_reg(&op->vec_val, reg);
+		kvm_read_sse_reg(reg, &op->vec_val);
 		return;
 	}
 	if (ctxt->d & Mmx) {
@@ -1289,7 +1187,7 @@ static int decode_modrm(struct x86_emulate_ctxt *ctxt,
 			op->type = OP_XMM;
 			op->bytes = 16;
 			op->addr.xmm = ctxt->modrm_rm;
-			read_sse_reg(&op->vec_val, ctxt->modrm_rm);
+			kvm_read_sse_reg(ctxt->modrm_rm, &op->vec_val);
 			return rc;
 		}
 		if (ctxt->d & Mmx) {
@@ -1866,10 +1764,10 @@ static int writeback(struct x86_emulate_ctxt *ctxt, struct operand *op)
 				       op->bytes * op->count);
 		break;
 	case OP_XMM:
-		write_sse_reg(&op->vec_val, op->addr.xmm);
+		kvm_write_sse_reg(op->addr.xmm, &op->vec_val);
 		break;
 	case OP_MM:
-		write_mmx_reg(&op->mm_val, op->addr.mm);
+		kvm_write_mmx_reg(op->addr.mm, &op->mm_val);
 		break;
 	case OP_NONE:
 		/* no writeback */
@@ -4124,11 +4022,11 @@ static int em_fxsave(struct x86_emulate_ctxt *ctxt)
 	if (rc != X86EMUL_CONTINUE)
 		return rc;
 
-	emulator_get_fpu();
+	kvm_fpu_get();
 
 	rc = asm_safe("fxsave %[fx]", , [fx] "+m"(fx_state));
 
-	emulator_put_fpu();
+	kvm_fpu_put();
 
 	if (rc != X86EMUL_CONTINUE)
 		return rc;
@@ -4172,7 +4070,7 @@ static int em_fxrstor(struct x86_emulate_ctxt *ctxt)
 	if (rc != X86EMUL_CONTINUE)
 		return rc;
 
-	emulator_get_fpu();
+	kvm_fpu_get();
 
 	if (size < __fxstate_size(16)) {
 		rc = fxregs_fixup(&fx_state, size);
@@ -4189,7 +4087,7 @@ static int em_fxrstor(struct x86_emulate_ctxt *ctxt)
 		rc = asm_safe("fxrstor %[fx]", : [fx] "m"(fx_state));
 
 out:
-	emulator_put_fpu();
+	kvm_fpu_put();
 
 	return rc;
 }
@@ -5510,9 +5408,9 @@ static int flush_pending_x87_faults(struct x86_emulate_ctxt *ctxt)
 {
 	int rc;
 
-	emulator_get_fpu();
+	kvm_fpu_get();
 	rc = asm_safe("fwait");
-	emulator_put_fpu();
+	kvm_fpu_put();
 
 	if (unlikely(rc != X86EMUL_CONTINUE))
 		return emulate_exception(ctxt, MF_VECTOR, 0, false);
@@ -5523,7 +5421,7 @@ static int flush_pending_x87_faults(struct x86_emulate_ctxt *ctxt)
 static void fetch_possible_mmx_operand(struct operand *op)
 {
 	if (op->type == OP_MM)
-		read_mmx_reg(&op->mm_val, op->addr.mm);
+		kvm_read_mmx_reg(op->addr.mm, &op->mm_val);
 }
 
 static int fastop(struct x86_emulate_ctxt *ctxt, fastop_t fop)
diff --git a/arch/x86/kvm/fpu.h b/arch/x86/kvm/fpu.h
new file mode 100644
index 000000000000..3ba12888bf66
--- /dev/null
+++ b/arch/x86/kvm/fpu.h
@@ -0,0 +1,140 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef __KVM_FPU_H_
+#define __KVM_FPU_H_
+
+#include <asm/fpu/api.h>
+
+typedef u32		__attribute__((vector_size(16))) sse128_t;
+#define __sse128_u	union { sse128_t vec; u64 as_u64[2]; u32 as_u32[4]; }
+#define sse128_lo(x)	({ __sse128_u t; t.vec = x; t.as_u64[0]; })
+#define sse128_hi(x)	({ __sse128_u t; t.vec = x; t.as_u64[1]; })
+#define sse128_l0(x)	({ __sse128_u t; t.vec = x; t.as_u32[0]; })
+#define sse128_l1(x)	({ __sse128_u t; t.vec = x; t.as_u32[1]; })
+#define sse128_l2(x)	({ __sse128_u t; t.vec = x; t.as_u32[2]; })
+#define sse128_l3(x)	({ __sse128_u t; t.vec = x; t.as_u32[3]; })
+#define sse128(lo, hi)	({ __sse128_u t; t.as_u64[0] = lo; t.as_u64[1] = hi; t.vec; })
+
+static inline void _kvm_read_sse_reg(int reg, sse128_t *data)
+{
+	switch (reg) {
+	case 0: asm("movdqa %%xmm0, %0" : "=m"(*data)); break;
+	case 1: asm("movdqa %%xmm1, %0" : "=m"(*data)); break;
+	case 2: asm("movdqa %%xmm2, %0" : "=m"(*data)); break;
+	case 3: asm("movdqa %%xmm3, %0" : "=m"(*data)); break;
+	case 4: asm("movdqa %%xmm4, %0" : "=m"(*data)); break;
+	case 5: asm("movdqa %%xmm5, %0" : "=m"(*data)); break;
+	case 6: asm("movdqa %%xmm6, %0" : "=m"(*data)); break;
+	case 7: asm("movdqa %%xmm7, %0" : "=m"(*data)); break;
+#ifdef CONFIG_X86_64
+	case 8: asm("movdqa %%xmm8, %0" : "=m"(*data)); break;
+	case 9: asm("movdqa %%xmm9, %0" : "=m"(*data)); break;
+	case 10: asm("movdqa %%xmm10, %0" : "=m"(*data)); break;
+	case 11: asm("movdqa %%xmm11, %0" : "=m"(*data)); break;
+	case 12: asm("movdqa %%xmm12, %0" : "=m"(*data)); break;
+	case 13: asm("movdqa %%xmm13, %0" : "=m"(*data)); break;
+	case 14: asm("movdqa %%xmm14, %0" : "=m"(*data)); break;
+	case 15: asm("movdqa %%xmm15, %0" : "=m"(*data)); break;
+#endif
+	default: BUG();
+	}
+}
+
+static inline void _kvm_write_sse_reg(int reg, const sse128_t *data)
+{
+	switch (reg) {
+	case 0: asm("movdqa %0, %%xmm0" : : "m"(*data)); break;
+	case 1: asm("movdqa %0, %%xmm1" : : "m"(*data)); break;
+	case 2: asm("movdqa %0, %%xmm2" : : "m"(*data)); break;
+	case 3: asm("movdqa %0, %%xmm3" : : "m"(*data)); break;
+	case 4: asm("movdqa %0, %%xmm4" : : "m"(*data)); break;
+	case 5: asm("movdqa %0, %%xmm5" : : "m"(*data)); break;
+	case 6: asm("movdqa %0, %%xmm6" : : "m"(*data)); break;
+	case 7: asm("movdqa %0, %%xmm7" : : "m"(*data)); break;
+#ifdef CONFIG_X86_64
+	case 8: asm("movdqa %0, %%xmm8" : : "m"(*data)); break;
+	case 9: asm("movdqa %0, %%xmm9" : : "m"(*data)); break;
+	case 10: asm("movdqa %0, %%xmm10" : : "m"(*data)); break;
+	case 11: asm("movdqa %0, %%xmm11" : : "m"(*data)); break;
+	case 12: asm("movdqa %0, %%xmm12" : : "m"(*data)); break;
+	case 13: asm("movdqa %0, %%xmm13" : : "m"(*data)); break;
+	case 14: asm("movdqa %0, %%xmm14" : : "m"(*data)); break;
+	case 15: asm("movdqa %0, %%xmm15" : : "m"(*data)); break;
+#endif
+	default: BUG();
+	}
+}
+
+static inline void _kvm_read_mmx_reg(int reg, u64 *data)
+{
+	switch (reg) {
+	case 0: asm("movq %%mm0, %0" : "=m"(*data)); break;
+	case 1: asm("movq %%mm1, %0" : "=m"(*data)); break;
+	case 2: asm("movq %%mm2, %0" : "=m"(*data)); break;
+	case 3: asm("movq %%mm3, %0" : "=m"(*data)); break;
+	case 4: asm("movq %%mm4, %0" : "=m"(*data)); break;
+	case 5: asm("movq %%mm5, %0" : "=m"(*data)); break;
+	case 6: asm("movq %%mm6, %0" : "=m"(*data)); break;
+	case 7: asm("movq %%mm7, %0" : "=m"(*data)); break;
+	default: BUG();
+	}
+}
+
+static inline void _kvm_write_mmx_reg(int reg, const u64 *data)
+{
+	switch (reg) {
+	case 0: asm("movq %0, %%mm0" : : "m"(*data)); break;
+	case 1: asm("movq %0, %%mm1" : : "m"(*data)); break;
+	case 2: asm("movq %0, %%mm2" : : "m"(*data)); break;
+	case 3: asm("movq %0, %%mm3" : : "m"(*data)); break;
+	case 4: asm("movq %0, %%mm4" : : "m"(*data)); break;
+	case 5: asm("movq %0, %%mm5" : : "m"(*data)); break;
+	case 6: asm("movq %0, %%mm6" : : "m"(*data)); break;
+	case 7: asm("movq %0, %%mm7" : : "m"(*data)); break;
+	default: BUG();
+	}
+}
+
+static inline void kvm_fpu_get(void)
+{
+	fpregs_lock();
+
+	fpregs_assert_state_consistent();
+	if (test_thread_flag(TIF_NEED_FPU_LOAD))
+		switch_fpu_return();
+}
+
+static inline void kvm_fpu_put(void)
+{
+	fpregs_unlock();
+}
+
+static inline void kvm_read_sse_reg(int reg, sse128_t *data)
+{
+	kvm_fpu_get();
+	_kvm_read_sse_reg(reg, data);
+	kvm_fpu_put();
+}
+
+static inline void kvm_write_sse_reg(int reg, const sse128_t *data)
+{
+	kvm_fpu_get();
+	_kvm_write_sse_reg(reg, data);
+	kvm_fpu_put();
+}
+
+static inline void kvm_read_mmx_reg(int reg, u64 *data)
+{
+	kvm_fpu_get();
+	_kvm_read_mmx_reg(reg, data);
+	kvm_fpu_put();
+}
+
+static inline void kvm_write_mmx_reg(int reg, const u64 *data)
+{
+	kvm_fpu_get();
+	_kvm_write_mmx_reg(reg, data);
+	kvm_fpu_put();
+}
+
+#endif
-- 
2.17.1





* [PATCH v2 2/4] KVM: hyper-v: Collect hypercall params into struct
  2021-04-12 17:00 [PATCH v2 0/4] Add support for XMM fast hypercalls Siddharth Chandrasekaran
  2021-04-12 17:00 ` [PATCH v2 1/4] KVM: x86: Move FPU register accessors into fpu.h Siddharth Chandrasekaran
@ 2021-04-12 17:00 ` Siddharth Chandrasekaran
  2021-04-13 13:53   ` Vitaly Kuznetsov
  2021-04-12 17:00 ` [PATCH v2 3/4] KVM: x86: kvm_hv_flush_tlb use inputs from XMM registers Siddharth Chandrasekaran
  2021-04-12 17:00 ` [PATCH v2 4/4] KVM: hyper-v: Advertise support for fast XMM hypercalls Siddharth Chandrasekaran
  3 siblings, 1 reply; 16+ messages in thread
From: Siddharth Chandrasekaran @ 2021-04-12 17:00 UTC (permalink / raw)
  To: K. Y. Srinivasan, Haiyang Zhang, Stephen Hemminger, Wei Liu,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86,
	H. Peter Anvin, Paolo Bonzini, Sean Christopherson,
	Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel
  Cc: Siddharth Chandrasekaran, Alexander Graf, Evgeny Iakovlev,
	Liran Alon, Ioannis Aslanidis, linux-hyperv, linux-kernel, kvm

As of now, there are 7 parameters (and flags) that are used in the
various Hyper-V hypercall handlers. An upcoming patch adds 6 more
input/output parameters that are passed via XMM registers.

To make passing arguments to the handlers more readable, capture all
these parameters into a single structure.
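
As a sketch, the handler signatures change along these lines (taken
from the diff below):

        /* before: loose parameters threaded through every handler */
        static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, u64 ingpa,
                                    u16 rep_cnt, bool ex);
        /* after: one struct carries param/ingpa/outgpa/code/rep_*, etc. */
        static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu,
                                    struct kvm_hv_hcall *hc, bool ex);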

Cc: Alexander Graf <graf@amazon.com>
Cc: Evgeny Iakovlev <eyakovl@amazon.de>
Signed-off-by: Siddharth Chandrasekaran <sidcha@amazon.de>
---
 arch/x86/kvm/hyperv.c | 147 +++++++++++++++++++++++-------------------
 1 file changed, 79 insertions(+), 68 deletions(-)

diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index f98370a39936..8f6babd1ea0d 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -1623,7 +1623,18 @@ static __always_inline unsigned long *sparse_set_to_vcpu_mask(
 	return vcpu_bitmap;
 }
 
-static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, u64 ingpa, u16 rep_cnt, bool ex)
+struct kvm_hv_hcall {
+	u64 param;
+	u64 ingpa;
+	u64 outgpa;
+	u16 code;
+	u16 rep_cnt;
+	u16 rep_idx;
+	bool fast;
+	bool rep;
+};
+
+static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc, bool ex)
 {
 	struct kvm *kvm = vcpu->kvm;
 	struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
@@ -1638,7 +1649,7 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, u64 ingpa, u16 rep_cnt, bool
 	bool all_cpus;
 
 	if (!ex) {
-		if (unlikely(kvm_read_guest(kvm, ingpa, &flush, sizeof(flush))))
+		if (unlikely(kvm_read_guest(kvm, hc->ingpa, &flush, sizeof(flush))))
 			return HV_STATUS_INVALID_HYPERCALL_INPUT;
 
 		trace_kvm_hv_flush_tlb(flush.processor_mask,
@@ -1657,7 +1668,7 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, u64 ingpa, u16 rep_cnt, bool
 		all_cpus = (flush.flags & HV_FLUSH_ALL_PROCESSORS) ||
 			flush.processor_mask == 0;
 	} else {
-		if (unlikely(kvm_read_guest(kvm, ingpa, &flush_ex,
+		if (unlikely(kvm_read_guest(kvm, hc->ingpa, &flush_ex,
 					    sizeof(flush_ex))))
 			return HV_STATUS_INVALID_HYPERCALL_INPUT;
 
@@ -1679,8 +1690,8 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, u64 ingpa, u16 rep_cnt, bool
 
 		if (!all_cpus &&
 		    kvm_read_guest(kvm,
-				   ingpa + offsetof(struct hv_tlb_flush_ex,
-						    hv_vp_set.bank_contents),
+				   hc->ingpa + offsetof(struct hv_tlb_flush_ex,
+							hv_vp_set.bank_contents),
 				   sparse_banks,
 				   sparse_banks_len))
 			return HV_STATUS_INVALID_HYPERCALL_INPUT;
@@ -1700,9 +1711,9 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, u64 ingpa, u16 rep_cnt, bool
 				    NULL, vcpu_mask, &hv_vcpu->tlb_flush);
 
 ret_success:
-	/* We always do full TLB flush, set rep_done = rep_cnt. */
+	/* We always do full TLB flush, set rep_done = hc->rep_cnt. */
 	return (u64)HV_STATUS_SUCCESS |
-		((u64)rep_cnt << HV_HYPERCALL_REP_COMP_OFFSET);
+		((u64)hc->rep_cnt << HV_HYPERCALL_REP_COMP_OFFSET);
 }
 
 static void kvm_send_ipi_to_many(struct kvm *kvm, u32 vector,
@@ -1724,8 +1735,7 @@ static void kvm_send_ipi_to_many(struct kvm *kvm, u32 vector,
 	}
 }
 
-static u64 kvm_hv_send_ipi(struct kvm_vcpu *vcpu, u64 ingpa, u64 outgpa,
-			   bool ex, bool fast)
+static u64 kvm_hv_send_ipi(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc, bool ex)
 {
 	struct kvm *kvm = vcpu->kvm;
 	struct hv_send_ipi_ex send_ipi_ex;
@@ -1740,25 +1750,25 @@ static u64 kvm_hv_send_ipi(struct kvm_vcpu *vcpu, u64 ingpa, u64 outgpa,
 	bool all_cpus;
 
 	if (!ex) {
-		if (!fast) {
-			if (unlikely(kvm_read_guest(kvm, ingpa, &send_ipi,
+		if (!hc->fast) {
+			if (unlikely(kvm_read_guest(kvm, hc->ingpa, &send_ipi,
 						    sizeof(send_ipi))))
 				return HV_STATUS_INVALID_HYPERCALL_INPUT;
 			sparse_banks[0] = send_ipi.cpu_mask;
 			vector = send_ipi.vector;
 		} else {
 			/* 'reserved' part of hv_send_ipi should be 0 */
-			if (unlikely(ingpa >> 32 != 0))
+			if (unlikely(hc->ingpa >> 32 != 0))
 				return HV_STATUS_INVALID_HYPERCALL_INPUT;
-			sparse_banks[0] = outgpa;
-			vector = (u32)ingpa;
+			sparse_banks[0] = hc->outgpa;
+			vector = (u32)hc->ingpa;
 		}
 		all_cpus = false;
 		valid_bank_mask = BIT_ULL(0);
 
 		trace_kvm_hv_send_ipi(vector, sparse_banks[0]);
 	} else {
-		if (unlikely(kvm_read_guest(kvm, ingpa, &send_ipi_ex,
+		if (unlikely(kvm_read_guest(kvm, hc->ingpa, &send_ipi_ex,
 					    sizeof(send_ipi_ex))))
 			return HV_STATUS_INVALID_HYPERCALL_INPUT;
 
@@ -1778,8 +1788,8 @@ static u64 kvm_hv_send_ipi(struct kvm_vcpu *vcpu, u64 ingpa, u64 outgpa,
 
 		if (!all_cpus &&
 		    kvm_read_guest(kvm,
-				   ingpa + offsetof(struct hv_send_ipi_ex,
-						    vp_set.bank_contents),
+				   hc->ingpa + offsetof(struct hv_send_ipi_ex,
+							vp_set.bank_contents),
 				   sparse_banks,
 				   sparse_banks_len))
 			return HV_STATUS_INVALID_HYPERCALL_INPUT;
@@ -1839,20 +1849,21 @@ static int kvm_hv_hypercall_complete_userspace(struct kvm_vcpu *vcpu)
 	return kvm_hv_hypercall_complete(vcpu, vcpu->run->hyperv.u.hcall.result);
 }
 
-static u16 kvm_hvcall_signal_event(struct kvm_vcpu *vcpu, bool fast, u64 param)
+static u16 kvm_hvcall_signal_event(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
 {
 	struct kvm_hv *hv = to_kvm_hv(vcpu->kvm);
 	struct eventfd_ctx *eventfd;
 
-	if (unlikely(!fast)) {
+	if (unlikely(!hc->fast)) {
 		int ret;
-		gpa_t gpa = param;
+		gpa_t gpa = hc->ingpa;
 
-		if ((gpa & (__alignof__(param) - 1)) ||
-		    offset_in_page(gpa) + sizeof(param) > PAGE_SIZE)
+		if ((gpa & (__alignof__(hc->ingpa) - 1)) ||
+		    offset_in_page(gpa) + sizeof(hc->ingpa) > PAGE_SIZE)
 			return HV_STATUS_INVALID_ALIGNMENT;
 
-		ret = kvm_vcpu_read_guest(vcpu, gpa, &param, sizeof(param));
+		ret = kvm_vcpu_read_guest(vcpu, gpa,
+					  &hc->ingpa, sizeof(hc->ingpa));
 		if (ret < 0)
 			return HV_STATUS_INVALID_ALIGNMENT;
 	}
@@ -1862,15 +1873,15 @@ static u16 kvm_hvcall_signal_event(struct kvm_vcpu *vcpu, bool fast, u64 param)
 	 * have no use for it, and in all known usecases it is zero, so just
 	 * report lookup failure if it isn't.
 	 */
-	if (param & 0xffff00000000ULL)
+	if (hc->ingpa & 0xffff00000000ULL)
 		return HV_STATUS_INVALID_PORT_ID;
 	/* remaining bits are reserved-zero */
-	if (param & ~KVM_HYPERV_CONN_ID_MASK)
+	if (hc->ingpa & ~KVM_HYPERV_CONN_ID_MASK)
 		return HV_STATUS_INVALID_HYPERCALL_INPUT;
 
 	/* the eventfd is protected by vcpu->kvm->srcu, but conn_to_evt isn't */
 	rcu_read_lock();
-	eventfd = idr_find(&hv->conn_to_evt, param);
+	eventfd = idr_find(&hv->conn_to_evt, hc->ingpa);
 	rcu_read_unlock();
 	if (!eventfd)
 		return HV_STATUS_INVALID_PORT_ID;
@@ -1881,9 +1892,8 @@ static u16 kvm_hvcall_signal_event(struct kvm_vcpu *vcpu, bool fast, u64 param)
 
 int kvm_hv_hypercall(struct kvm_vcpu *vcpu)
 {
-	u64 param, ingpa, outgpa, ret = HV_STATUS_SUCCESS;
-	uint16_t code, rep_idx, rep_cnt;
-	bool fast, rep;
+	struct kvm_hv_hcall hc;
+	u64 ret = HV_STATUS_SUCCESS;
 
 	/*
 	 * hypercall generates UD from non zero cpl and real mode
@@ -1896,104 +1906,105 @@ int kvm_hv_hypercall(struct kvm_vcpu *vcpu)
 
 #ifdef CONFIG_X86_64
 	if (is_64_bit_mode(vcpu)) {
-		param = kvm_rcx_read(vcpu);
-		ingpa = kvm_rdx_read(vcpu);
-		outgpa = kvm_r8_read(vcpu);
+		hc.param = kvm_rcx_read(vcpu);
+		hc.ingpa = kvm_rdx_read(vcpu);
+		hc.outgpa = kvm_r8_read(vcpu);
 	} else
 #endif
 	{
-		param = ((u64)kvm_rdx_read(vcpu) << 32) |
-			(kvm_rax_read(vcpu) & 0xffffffff);
-		ingpa = ((u64)kvm_rbx_read(vcpu) << 32) |
-			(kvm_rcx_read(vcpu) & 0xffffffff);
-		outgpa = ((u64)kvm_rdi_read(vcpu) << 32) |
-			(kvm_rsi_read(vcpu) & 0xffffffff);
+		hc.param = ((u64)kvm_rdx_read(vcpu) << 32) |
+			    (kvm_rax_read(vcpu) & 0xffffffff);
+		hc.ingpa = ((u64)kvm_rbx_read(vcpu) << 32) |
+			    (kvm_rcx_read(vcpu) & 0xffffffff);
+		hc.outgpa = ((u64)kvm_rdi_read(vcpu) << 32) |
+			     (kvm_rsi_read(vcpu) & 0xffffffff);
 	}
 
-	code = param & 0xffff;
-	fast = !!(param & HV_HYPERCALL_FAST_BIT);
-	rep_cnt = (param >> HV_HYPERCALL_REP_COMP_OFFSET) & 0xfff;
-	rep_idx = (param >> HV_HYPERCALL_REP_START_OFFSET) & 0xfff;
-	rep = !!(rep_cnt || rep_idx);
+	hc.code = hc.param & 0xffff;
+	hc.fast = !!(hc.param & HV_HYPERCALL_FAST_BIT);
+	hc.rep_cnt = (hc.param >> HV_HYPERCALL_REP_COMP_OFFSET) & 0xfff;
+	hc.rep_idx = (hc.param >> HV_HYPERCALL_REP_START_OFFSET) & 0xfff;
+	hc.rep = !!(hc.rep_cnt || hc.rep_idx);
 
-	trace_kvm_hv_hypercall(code, fast, rep_cnt, rep_idx, ingpa, outgpa);
+	trace_kvm_hv_hypercall(hc.code, hc.fast, hc.rep_cnt, hc.rep_idx,
+			       hc.ingpa, hc.outgpa);
 
-	switch (code) {
+	switch (hc.code) {
 	case HVCALL_NOTIFY_LONG_SPIN_WAIT:
-		if (unlikely(rep)) {
+		if (unlikely(hc.rep)) {
 			ret = HV_STATUS_INVALID_HYPERCALL_INPUT;
 			break;
 		}
 		kvm_vcpu_on_spin(vcpu, true);
 		break;
 	case HVCALL_SIGNAL_EVENT:
-		if (unlikely(rep)) {
+		if (unlikely(hc.rep)) {
 			ret = HV_STATUS_INVALID_HYPERCALL_INPUT;
 			break;
 		}
-		ret = kvm_hvcall_signal_event(vcpu, fast, ingpa);
+		ret = kvm_hvcall_signal_event(vcpu, &hc);
 		if (ret != HV_STATUS_INVALID_PORT_ID)
 			break;
 		fallthrough;	/* maybe userspace knows this conn_id */
 	case HVCALL_POST_MESSAGE:
 		/* don't bother userspace if it has no way to handle it */
-		if (unlikely(rep || !to_hv_synic(vcpu)->active)) {
+		if (unlikely(hc.rep || !to_hv_synic(vcpu)->active)) {
 			ret = HV_STATUS_INVALID_HYPERCALL_INPUT;
 			break;
 		}
 		vcpu->run->exit_reason = KVM_EXIT_HYPERV;
 		vcpu->run->hyperv.type = KVM_EXIT_HYPERV_HCALL;
-		vcpu->run->hyperv.u.hcall.input = param;
-		vcpu->run->hyperv.u.hcall.params[0] = ingpa;
-		vcpu->run->hyperv.u.hcall.params[1] = outgpa;
+		vcpu->run->hyperv.u.hcall.input = hc.param;
+		vcpu->run->hyperv.u.hcall.params[0] = hc.ingpa;
+		vcpu->run->hyperv.u.hcall.params[1] = hc.outgpa;
 		vcpu->arch.complete_userspace_io =
 				kvm_hv_hypercall_complete_userspace;
 		return 0;
 	case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST:
-		if (unlikely(fast || !rep_cnt || rep_idx)) {
+		if (unlikely(hc.fast || !hc.rep_cnt || hc.rep_idx)) {
 			ret = HV_STATUS_INVALID_HYPERCALL_INPUT;
 			break;
 		}
-		ret = kvm_hv_flush_tlb(vcpu, ingpa, rep_cnt, false);
+		ret = kvm_hv_flush_tlb(vcpu, &hc, false);
 		break;
 	case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE:
-		if (unlikely(fast || rep)) {
+		if (unlikely(hc.fast || hc.rep)) {
 			ret = HV_STATUS_INVALID_HYPERCALL_INPUT;
 			break;
 		}
-		ret = kvm_hv_flush_tlb(vcpu, ingpa, rep_cnt, false);
+		ret = kvm_hv_flush_tlb(vcpu, &hc, false);
 		break;
 	case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX:
-		if (unlikely(fast || !rep_cnt || rep_idx)) {
+		if (unlikely(hc.fast || !hc.rep_cnt || hc.rep_idx)) {
 			ret = HV_STATUS_INVALID_HYPERCALL_INPUT;
 			break;
 		}
-		ret = kvm_hv_flush_tlb(vcpu, ingpa, rep_cnt, true);
+		ret = kvm_hv_flush_tlb(vcpu, &hc, true);
 		break;
 	case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX:
-		if (unlikely(fast || rep)) {
+		if (unlikely(hc.fast || hc.rep)) {
 			ret = HV_STATUS_INVALID_HYPERCALL_INPUT;
 			break;
 		}
-		ret = kvm_hv_flush_tlb(vcpu, ingpa, rep_cnt, true);
+		ret = kvm_hv_flush_tlb(vcpu, &hc, true);
 		break;
 	case HVCALL_SEND_IPI:
-		if (unlikely(rep)) {
+		if (unlikely(hc.rep)) {
 			ret = HV_STATUS_INVALID_HYPERCALL_INPUT;
 			break;
 		}
-		ret = kvm_hv_send_ipi(vcpu, ingpa, outgpa, false, fast);
+		ret = kvm_hv_send_ipi(vcpu, &hc, false);
 		break;
 	case HVCALL_SEND_IPI_EX:
-		if (unlikely(fast || rep)) {
+		if (unlikely(hc.fast || hc.rep)) {
 			ret = HV_STATUS_INVALID_HYPERCALL_INPUT;
 			break;
 		}
-		ret = kvm_hv_send_ipi(vcpu, ingpa, outgpa, true, false);
+		ret = kvm_hv_send_ipi(vcpu, &hc, true);
 		break;
 	case HVCALL_POST_DEBUG_DATA:
 	case HVCALL_RETRIEVE_DEBUG_DATA:
-		if (unlikely(fast)) {
+		if (unlikely(hc.fast)) {
 			ret = HV_STATUS_INVALID_PARAMETER;
 			break;
 		}
@@ -2012,9 +2023,9 @@ int kvm_hv_hypercall(struct kvm_vcpu *vcpu)
 		}
 		vcpu->run->exit_reason = KVM_EXIT_HYPERV;
 		vcpu->run->hyperv.type = KVM_EXIT_HYPERV_HCALL;
-		vcpu->run->hyperv.u.hcall.input = param;
-		vcpu->run->hyperv.u.hcall.params[0] = ingpa;
-		vcpu->run->hyperv.u.hcall.params[1] = outgpa;
+		vcpu->run->hyperv.u.hcall.input = hc.param;
+		vcpu->run->hyperv.u.hcall.params[0] = hc.ingpa;
+		vcpu->run->hyperv.u.hcall.params[1] = hc.outgpa;
 		vcpu->arch.complete_userspace_io =
 				kvm_hv_hypercall_complete_userspace;
 		return 0;
-- 
2.17.1





* [PATCH v2 3/4] KVM: x86: kvm_hv_flush_tlb use inputs from XMM registers
  2021-04-12 17:00 [PATCH v2 0/4] Add support for XMM fast hypercalls Siddharth Chandrasekaran
  2021-04-12 17:00 ` [PATCH v2 1/4] KVM: x86: Move FPU register accessors into fpu.h Siddharth Chandrasekaran
  2021-04-12 17:00 ` [PATCH v2 2/4] KVM: hyper-v: Collect hypercall params into struct Siddharth Chandrasekaran
@ 2021-04-12 17:00 ` Siddharth Chandrasekaran
  2021-04-12 20:13   ` Wei Liu
  2021-04-13 14:09   ` Vitaly Kuznetsov
  2021-04-12 17:00 ` [PATCH v2 4/4] KVM: hyper-v: Advertise support for fast XMM hypercalls Siddharth Chandrasekaran
  3 siblings, 2 replies; 16+ messages in thread
From: Siddharth Chandrasekaran @ 2021-04-12 17:00 UTC (permalink / raw)
  To: K. Y. Srinivasan, Haiyang Zhang, Stephen Hemminger, Wei Liu,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86,
	H. Peter Anvin, Paolo Bonzini, Sean Christopherson,
	Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel
  Cc: Siddharth Chandrasekaran, Alexander Graf, Evgeny Iakovlev,
	Liran Alon, Ioannis Aslanidis, linux-hyperv, linux-kernel, kvm

Hyper-V supports the use of XMM registers to perform fast hypercalls.
This allows guests to take advantage of the improved performance of the
fast hypercall interface even though a hypercall may require more than
(the current maximum of) two input registers.

The XMM fast hypercall interface uses six additional XMM registers (XMM0
to XMM5) to allow the guest to pass an input parameter block of up to
112 bytes. Hyper-V can also return data back to the guest in the
remaining XMM registers that are not used by the current hypercall.

Add a framework to read from and write to the XMM registers in
kvm_hv_hypercall(), and use the additional hypercall inputs from XMM
registers in kvm_hv_flush_tlb() when possible.
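
For the non-ex fast case, the register-to-field mapping ends up as in
the hunk below (sketch only; on the 64-bit fast path hc->ingpa and
hc->outgpa carry RDX and R8 as data, not guest addresses):

        flush.address_space  = hc->ingpa;             /* bytes  0..7   */
        flush.flags          = hc->outgpa;            /* bytes  8..15  */
        flush.processor_mask = sse128_lo(hc->xmm[0]); /* XMM0, low u64 */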

Cc: Alexander Graf <graf@amazon.com>
Co-developed-by: Evgeny Iakovlev <eyakovl@amazon.de>
Signed-off-by: Evgeny Iakovlev <eyakovl@amazon.de>
Signed-off-by: Siddharth Chandrasekaran <sidcha@amazon.de>
---
 arch/x86/kvm/hyperv.c | 109 ++++++++++++++++++++++++++++++++++--------
 1 file changed, 90 insertions(+), 19 deletions(-)

diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index 8f6babd1ea0d..1f9959aba70d 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -36,6 +36,7 @@
 
 #include "trace.h"
 #include "irq.h"
+#include "fpu.h"
 
 /* "Hv#1" signature */
 #define HYPERV_CPUID_SIGNATURE_EAX 0x31237648
@@ -1623,6 +1624,8 @@ static __always_inline unsigned long *sparse_set_to_vcpu_mask(
 	return vcpu_bitmap;
 }
 
+#define KVM_HV_HYPERCALL_MAX_XMM_REGISTERS  6
+
 struct kvm_hv_hcall {
 	u64 param;
 	u64 ingpa;
@@ -1632,10 +1635,14 @@ struct kvm_hv_hcall {
 	u16 rep_idx;
 	bool fast;
 	bool rep;
+	sse128_t xmm[KVM_HV_HYPERCALL_MAX_XMM_REGISTERS];
+	bool xmm_dirty;
 };
 
 static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc, bool ex)
 {
+	int i, j;
+	gpa_t gpa;
 	struct kvm *kvm = vcpu->kvm;
 	struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
 	struct hv_tlb_flush_ex flush_ex;
@@ -1649,8 +1656,15 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc, bool
 	bool all_cpus;
 
 	if (!ex) {
-		if (unlikely(kvm_read_guest(kvm, hc->ingpa, &flush, sizeof(flush))))
-			return HV_STATUS_INVALID_HYPERCALL_INPUT;
+		if (hc->fast) {
+			flush.address_space = hc->ingpa;
+			flush.flags = hc->outgpa;
+			flush.processor_mask = sse128_lo(hc->xmm[0]);
+		} else {
+			if (unlikely(kvm_read_guest(kvm, hc->ingpa,
+						    &flush, sizeof(flush))))
+				return HV_STATUS_INVALID_HYPERCALL_INPUT;
+		}
 
 		trace_kvm_hv_flush_tlb(flush.processor_mask,
 				       flush.address_space, flush.flags);
@@ -1668,9 +1682,16 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc, bool
 		all_cpus = (flush.flags & HV_FLUSH_ALL_PROCESSORS) ||
 			flush.processor_mask == 0;
 	} else {
-		if (unlikely(kvm_read_guest(kvm, hc->ingpa, &flush_ex,
-					    sizeof(flush_ex))))
-			return HV_STATUS_INVALID_HYPERCALL_INPUT;
+		if (hc->fast) {
+			flush_ex.address_space = hc->ingpa;
+			flush_ex.flags = hc->outgpa;
+			memcpy(&flush_ex.hv_vp_set,
+			       &hc->xmm[0], sizeof(hc->xmm[0]));
+		} else {
+			if (unlikely(kvm_read_guest(kvm, hc->ingpa, &flush_ex,
+						    sizeof(flush_ex))))
+				return HV_STATUS_INVALID_HYPERCALL_INPUT;
+		}
 
 		trace_kvm_hv_flush_tlb_ex(flush_ex.hv_vp_set.valid_bank_mask,
 					  flush_ex.hv_vp_set.format,
@@ -1681,20 +1702,29 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc, bool
 		all_cpus = flush_ex.hv_vp_set.format !=
 			HV_GENERIC_SET_SPARSE_4K;
 
-		sparse_banks_len =
-			bitmap_weight((unsigned long *)&valid_bank_mask, 64) *
-			sizeof(sparse_banks[0]);
+		sparse_banks_len = bitmap_weight((unsigned long *)&valid_bank_mask, 64);
 
 		if (!sparse_banks_len && !all_cpus)
 			goto ret_success;
 
-		if (!all_cpus &&
-		    kvm_read_guest(kvm,
-				   hc->ingpa + offsetof(struct hv_tlb_flush_ex,
-							hv_vp_set.bank_contents),
-				   sparse_banks,
-				   sparse_banks_len))
-			return HV_STATUS_INVALID_HYPERCALL_INPUT;
+		if (!all_cpus) {
+			if (hc->fast) {
+				if (sparse_banks_len > KVM_HV_HYPERCALL_MAX_XMM_REGISTERS - 1)
+					return HV_STATUS_INVALID_HYPERCALL_INPUT;
+				for (i = 0, j = 1; i < sparse_banks_len; i += 2, j++) {
+					sparse_banks[i + 0] = sse128_lo(hc->xmm[j]);
+					sparse_banks[i + 1] = sse128_hi(hc->xmm[j]);
+				}
+			} else {
+				gpa = hc->ingpa;
+				gpa += offsetof(struct hv_tlb_flush_ex,
+						hv_vp_set.bank_contents);
+				if (unlikely(kvm_read_guest(kvm, gpa, sparse_banks,
+							    sparse_banks_len *
+							    sizeof(sparse_banks[0]))))
+					return HV_STATUS_INVALID_HYPERCALL_INPUT;
+			}
+		}
 	}
 
 	cpumask_clear(&hv_vcpu->tlb_flush);
@@ -1890,6 +1920,41 @@ static u16 kvm_hvcall_signal_event(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *h
 	return HV_STATUS_SUCCESS;
 }
 
+static bool is_xmm_fast_hypercall(struct kvm_hv_hcall *hc)
+{
+	switch (hc->code) {
+	case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST:
+	case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE:
+	case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX:
+	case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX:
+		return true;
+	}
+
+	return false;
+}
+
+static inline void kvm_hv_hypercall_read_xmm(struct kvm_hv_hcall *hc)
+{
+	int reg;
+
+	kvm_fpu_get();
+	for (reg = 0; reg < KVM_HV_HYPERCALL_MAX_XMM_REGISTERS; reg++)
+		_kvm_read_sse_reg(reg, &hc->xmm[reg]);
+	kvm_fpu_put();
+	hc->xmm_dirty = false;
+}
+
+static inline void kvm_hv_hypercall_write_xmm(struct kvm_hv_hcall *hc)
+{
+	int reg;
+
+	kvm_fpu_get();
+	for (reg = 0; reg < KVM_HV_HYPERCALL_MAX_XMM_REGISTERS; reg++)
+		_kvm_write_sse_reg(reg, &hc->xmm[reg]);
+	kvm_fpu_put();
+	hc->xmm_dirty = false;
+}
+
 int kvm_hv_hypercall(struct kvm_vcpu *vcpu)
 {
 	struct kvm_hv_hcall hc;
@@ -1926,6 +1991,9 @@ int kvm_hv_hypercall(struct kvm_vcpu *vcpu)
 	hc.rep_idx = (hc.param >> HV_HYPERCALL_REP_START_OFFSET) & 0xfff;
 	hc.rep = !!(hc.rep_cnt || hc.rep_idx);
 
+	if (hc.fast && is_xmm_fast_hypercall(&hc))
+		kvm_hv_hypercall_read_xmm(&hc);
+
 	trace_kvm_hv_hypercall(hc.code, hc.fast, hc.rep_cnt, hc.rep_idx,
 			       hc.ingpa, hc.outgpa);
 
@@ -1961,28 +2029,28 @@ int kvm_hv_hypercall(struct kvm_vcpu *vcpu)
 				kvm_hv_hypercall_complete_userspace;
 		return 0;
 	case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST:
-		if (unlikely(hc.fast || !hc.rep_cnt || hc.rep_idx)) {
+		if (unlikely(!hc.rep_cnt || hc.rep_idx)) {
 			ret = HV_STATUS_INVALID_HYPERCALL_INPUT;
 			break;
 		}
 		ret = kvm_hv_flush_tlb(vcpu, &hc, false);
 		break;
 	case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE:
-		if (unlikely(hc.fast || hc.rep)) {
+		if (unlikely(hc.rep)) {
 			ret = HV_STATUS_INVALID_HYPERCALL_INPUT;
 			break;
 		}
 		ret = kvm_hv_flush_tlb(vcpu, &hc, false);
 		break;
 	case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX:
-		if (unlikely(hc.fast || !hc.rep_cnt || hc.rep_idx)) {
+		if (unlikely(!hc.rep_cnt || hc.rep_idx)) {
 			ret = HV_STATUS_INVALID_HYPERCALL_INPUT;
 			break;
 		}
 		ret = kvm_hv_flush_tlb(vcpu, &hc, true);
 		break;
 	case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX:
-		if (unlikely(hc.fast || hc.rep)) {
+		if (unlikely(hc.rep)) {
 			ret = HV_STATUS_INVALID_HYPERCALL_INPUT;
 			break;
 		}
@@ -2035,6 +2103,9 @@ int kvm_hv_hypercall(struct kvm_vcpu *vcpu)
 		break;
 	}
 
+	if (hc.xmm_dirty)
+		kvm_hv_hypercall_write_xmm(&hc);
+
 	return kvm_hv_hypercall_complete(vcpu, ret);
 }
 
-- 
2.17.1





* [PATCH v2 4/4] KVM: hyper-v: Advertise support for fast XMM hypercalls
  2021-04-12 17:00 [PATCH v2 0/4] Add support for XMM fast hypercalls Siddharth Chandrasekaran
                   ` (2 preceding siblings ...)
  2021-04-12 17:00 ` [PATCH v2 3/4] KVM: x86: kvm_hv_flush_tlb use inputs from XMM registers Siddharth Chandrasekaran
@ 2021-04-12 17:00 ` Siddharth Chandrasekaran
  2021-04-12 20:14   ` Wei Liu
  2021-04-13 14:26   ` Vitaly Kuznetsov
  3 siblings, 2 replies; 16+ messages in thread
From: Siddharth Chandrasekaran @ 2021-04-12 17:00 UTC (permalink / raw)
  To: K. Y. Srinivasan, Haiyang Zhang, Stephen Hemminger, Wei Liu,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86,
	H. Peter Anvin, Paolo Bonzini, Sean Christopherson,
	Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel
  Cc: Siddharth Chandrasekaran, Alexander Graf, Evgeny Iakovlev,
	Liran Alon, Ioannis Aslanidis, linux-hyperv, linux-kernel, kvm

Now that all extant hypercalls that can use XMM registers (based on
spec) for input/outputs are patched to support them, we can start
advertising this feature to guests.
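
A guest would discover this roughly as follows (minimal sketch,
assuming Linux-style CPUID helpers; HYPERV_CPUID_FEATURES is leaf
0x40000003):

        u32 edx = cpuid_edx(HYPERV_CPUID_FEATURES);

        bool xmm_input  = edx & HV_X64_HYPERCALL_XMM_INPUT_AVAILABLE;  /* BIT(4)  */
        bool xmm_output = edx & HV_X64_HYPERCALL_XMM_OUTPUT_AVAILABLE; /* BIT(15) */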

Cc: Alexander Graf <graf@amazon.com>
Cc: Evgeny Iakovlev <eyakovl@amazon.de>
Signed-off-by: Siddharth Chandrasekaran <sidcha@amazon.de>
---
 arch/x86/include/asm/hyperv-tlfs.h | 7 ++++++-
 arch/x86/kvm/hyperv.c              | 2 ++
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/hyperv-tlfs.h b/arch/x86/include/asm/hyperv-tlfs.h
index e6cd3fee562b..716f12be411e 100644
--- a/arch/x86/include/asm/hyperv-tlfs.h
+++ b/arch/x86/include/asm/hyperv-tlfs.h
@@ -52,7 +52,7 @@
  * Support for passing hypercall input parameter block via XMM
  * registers is available
  */
-#define HV_X64_HYPERCALL_PARAMS_XMM_AVAILABLE		BIT(4)
+#define HV_X64_HYPERCALL_XMM_INPUT_AVAILABLE		BIT(4)
 /* Support for a virtual guest idle state is available */
 #define HV_X64_GUEST_IDLE_STATE_AVAILABLE		BIT(5)
 /* Frequency MSRs available */
@@ -61,6 +61,11 @@
 #define HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE		BIT(10)
 /* Support for debug MSRs available */
 #define HV_FEATURE_DEBUG_MSRS_AVAILABLE			BIT(11)
+/*
+ * Support for returning hypercall ouput block via XMM
+ * registers is available
+ */
+#define HV_X64_HYPERCALL_XMM_OUTPUT_AVAILABLE		BIT(15)
 /* stimer Direct Mode is available */
 #define HV_STIMER_DIRECT_MODE_AVAILABLE			BIT(19)
 
diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index 1f9959aba70d..55838c266bcd 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -2254,6 +2254,8 @@ int kvm_get_hv_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid2 *cpuid,
 			ent->ebx |= HV_POST_MESSAGES;
 			ent->ebx |= HV_SIGNAL_EVENTS;
 
+			ent->edx |= HV_X64_HYPERCALL_XMM_INPUT_AVAILABLE;
+			ent->edx |= HV_X64_HYPERCALL_XMM_OUTPUT_AVAILABLE;
 			ent->edx |= HV_FEATURE_FREQUENCY_MSRS_AVAILABLE;
 			ent->edx |= HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE;
 
-- 
2.17.1





* Re: [PATCH v2 3/4] KVM: x86: kvm_hv_flush_tlb use inputs from XMM registers
  2021-04-12 17:00 ` [PATCH v2 3/4] KVM: x86: kvm_hv_flush_tlb use inputs from XMM registers Siddharth Chandrasekaran
@ 2021-04-12 20:13   ` Wei Liu
  2021-04-13  9:09     ` Siddharth Chandrasekaran
  2021-04-13 14:09   ` Vitaly Kuznetsov
  1 sibling, 1 reply; 16+ messages in thread
From: Wei Liu @ 2021-04-12 20:13 UTC (permalink / raw)
  To: Siddharth Chandrasekaran
  Cc: K. Y. Srinivasan, Haiyang Zhang, Stephen Hemminger, Wei Liu,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86,
	H. Peter Anvin, Paolo Bonzini, Sean Christopherson,
	Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel,
	Alexander Graf, Evgeny Iakovlev, Liran Alon, Ioannis Aslanidis,
	linux-hyperv, linux-kernel, kvm

On Mon, Apr 12, 2021 at 07:00:16PM +0200, Siddharth Chandrasekaran wrote:
> +
> +static inline void kvm_hv_hypercall_read_xmm(struct kvm_hv_hcall *hc)

Do you really need inline here? The compiler should be smart enough to
inline this function if necessary.

> +{
> +	int reg;
> +
> +	kvm_fpu_get();
> +	for (reg = 0; reg < KVM_HV_HYPERCALL_MAX_XMM_REGISTERS; reg++)
> +		_kvm_read_sse_reg(reg, &hc->xmm[reg]);
> +	kvm_fpu_put();
> +	hc->xmm_dirty = false;

There is no code that sets xmm_dirty to true? What am I missing? I guess
that's because you haven't implemented the hypercalls that need to write
back to the guest?

Wei.


* Re: [PATCH v2 4/4] KVM: hyper-v: Advertise support for fast XMM hypercalls
  2021-04-12 17:00 ` [PATCH v2 4/4] KVM: hyper-v: Advertise support for fast XMM hypercalls Siddharth Chandrasekaran
@ 2021-04-12 20:14   ` Wei Liu
  2021-04-13  9:11     ` Siddharth Chandrasekaran
  2021-04-13 14:26   ` Vitaly Kuznetsov
  1 sibling, 1 reply; 16+ messages in thread
From: Wei Liu @ 2021-04-12 20:14 UTC (permalink / raw)
  To: Siddharth Chandrasekaran
  Cc: K. Y. Srinivasan, Haiyang Zhang, Stephen Hemminger, Wei Liu,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86,
	H. Peter Anvin, Paolo Bonzini, Sean Christopherson,
	Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel,
	Alexander Graf, Evgeny Iakovlev, Liran Alon, Ioannis Aslanidis,
	linux-hyperv, linux-kernel, kvm

On Mon, Apr 12, 2021 at 07:00:17PM +0200, Siddharth Chandrasekaran wrote:
> Now that all extant hypercalls that can use XMM registers (based on
> spec) for input/outputs are patched to support them, we can start
> advertising this feature to guests.
> 
> Cc: Alexander Graf <graf@amazon.com>
> Cc: Evgeny Iakovlev <eyakovl@amazon.de>
> Signed-off-by: Siddharth Chandrasekaran <sidcha@amazon.de>
> ---
>  arch/x86/include/asm/hyperv-tlfs.h | 7 ++++++-
>  arch/x86/kvm/hyperv.c              | 2 ++
>  2 files changed, 8 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/include/asm/hyperv-tlfs.h b/arch/x86/include/asm/hyperv-tlfs.h
> index e6cd3fee562b..716f12be411e 100644
> --- a/arch/x86/include/asm/hyperv-tlfs.h
> +++ b/arch/x86/include/asm/hyperv-tlfs.h
> @@ -52,7 +52,7 @@
>   * Support for passing hypercall input parameter block via XMM
>   * registers is available
>   */
> -#define HV_X64_HYPERCALL_PARAMS_XMM_AVAILABLE		BIT(4)
> +#define HV_X64_HYPERCALL_XMM_INPUT_AVAILABLE		BIT(4)
>  /* Support for a virtual guest idle state is available */
>  #define HV_X64_GUEST_IDLE_STATE_AVAILABLE		BIT(5)
>  /* Frequency MSRs available */
> @@ -61,6 +61,11 @@
>  #define HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE		BIT(10)
>  /* Support for debug MSRs available */
>  #define HV_FEATURE_DEBUG_MSRS_AVAILABLE			BIT(11)
> +/*
> + * Support for returning hypercall ouput block via XMM

"output"

Wei.


* Re: [PATCH v2 3/4] KVM: x86: kvm_hv_flush_tlb use inputs from XMM registers
  2021-04-12 20:13   ` Wei Liu
@ 2021-04-13  9:09     ` Siddharth Chandrasekaran
  0 siblings, 0 replies; 16+ messages in thread
From: Siddharth Chandrasekaran @ 2021-04-13  9:09 UTC (permalink / raw)
  To: Wei Liu
  Cc: K. Y. Srinivasan, Haiyang Zhang, Stephen Hemminger,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86,
	H. Peter Anvin, Paolo Bonzini, Sean Christopherson,
	Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel,
	Alexander Graf, Evgeny Iakovlev, Liran Alon, Ioannis Aslanidis,
	linux-hyperv, linux-kernel, kvm

On Mon, Apr 12, 2021 at 08:13:19PM +0000, Wei Liu wrote:
> On Mon, Apr 12, 2021 at 07:00:16PM +0200, Siddharth Chandrasekaran wrote:
> > +
> > +static inline void kvm_hv_hypercall_read_xmm(struct kvm_hv_hcall *hc)
> 
> Do you really need inline here? The compiler should be smart enough to
> inline this function if necessary.

Removed.

> > +{
> > +     int reg;
> > +
> > +     kvm_fpu_get();
> > +     for (reg = 0; reg < KVM_HV_HYPERCALL_MAX_XMM_REGISTERS; reg++)
> > +             _kvm_read_sse_reg(reg, &hc->xmm[reg]);
> > +     kvm_fpu_put();
> > +     hc->xmm_dirty = false;
> 
> There is no code that sets xmm_dirty to true? What am I missing? I guess
> that's because you haven't implemented the hypercalls that need to write
> back to the guest?

Yes, when a hypercall wants to return data via XMM registers, it should
update hc->xmm[] and set hc->xmm_dirty (I plan on using this in a future
patch). The reason why I didn't defer this change to the patch that
actually needs it is that it pairs nicely with the read/write xmm_reg()
calls in kvm_hv_hypercall().
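
Roughly, such a future handler would do (hypothetical sketch, not from
this series; result_lo/result_hi stand in for computed output values):

        static u64 kvm_hv_some_output_hcall(struct kvm_vcpu *vcpu,
                                            struct kvm_hv_hcall *hc)
        {
                hc->xmm[0] = sse128(result_lo, result_hi); /* fill output block */
                hc->xmm_dirty = true;   /* written back to XMM regs on return */
                return HV_STATUS_SUCCESS;
        }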

~ Sid.




* Re: [PATCH v2 4/4] KVM: hyper-v: Advertise support for fast XMM hypercalls
  2021-04-12 20:14   ` Wei Liu
@ 2021-04-13  9:11     ` Siddharth Chandrasekaran
  0 siblings, 0 replies; 16+ messages in thread
From: Siddharth Chandrasekaran @ 2021-04-13  9:11 UTC (permalink / raw)
  To: Wei Liu
  Cc: K. Y. Srinivasan, Haiyang Zhang, Stephen Hemminger,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86,
	H. Peter Anvin, Paolo Bonzini, Sean Christopherson,
	Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel,
	Alexander Graf, Evgeny Iakovlev, Liran Alon, Ioannis Aslanidis,
	linux-hyperv, linux-kernel, kvm

On Mon, Apr 12, 2021 at 08:14:23PM +0000, Wei Liu wrote:
> On Mon, Apr 12, 2021 at 07:00:17PM +0200, Siddharth Chandrasekaran wrote:
> > Now that all extant hypercalls that can use XMM registers (based on
> > spec) for input/outputs are patched to support them, we can start
> > advertising this feature to guests.
> >
> > Cc: Alexander Graf <graf@amazon.com>
> > Cc: Evgeny Iakovlev <eyakovl@amazon.de>
> > Signed-off-by: Siddharth Chandrasekaran <sidcha@amazon.de>
> > ---
> >  arch/x86/include/asm/hyperv-tlfs.h | 7 ++++++-
> >  arch/x86/kvm/hyperv.c              | 2 ++
> >  2 files changed, 8 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/x86/include/asm/hyperv-tlfs.h b/arch/x86/include/asm/hyperv-tlfs.h
> > index e6cd3fee562b..716f12be411e 100644
> > --- a/arch/x86/include/asm/hyperv-tlfs.h
> > +++ b/arch/x86/include/asm/hyperv-tlfs.h
> > @@ -52,7 +52,7 @@
> >   * Support for passing hypercall input parameter block via XMM
> >   * registers is available
> >   */
> > -#define HV_X64_HYPERCALL_PARAMS_XMM_AVAILABLE                BIT(4)
> > +#define HV_X64_HYPERCALL_XMM_INPUT_AVAILABLE         BIT(4)
> >  /* Support for a virtual guest idle state is available */
> >  #define HV_X64_GUEST_IDLE_STATE_AVAILABLE            BIT(5)
> >  /* Frequency MSRs available */
> > @@ -61,6 +61,11 @@
> >  #define HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE         BIT(10)
> >  /* Support for debug MSRs available */
> >  #define HV_FEATURE_DEBUG_MSRS_AVAILABLE                      BIT(11)
> > +/*
> > + * Support for returning hypercall ouput block via XMM
> 
> "output"

Fixed. Thanks.

~ Sid.




* Re: [PATCH v2 1/4] KVM: x86: Move FPU register accessors into fpu.h
  2021-04-12 17:00 ` [PATCH v2 1/4] KVM: x86: Move FPU register accessors into fpu.h Siddharth Chandrasekaran
@ 2021-04-13 13:40   ` Vitaly Kuznetsov
  2021-04-13 13:46     ` Siddharth Chandrasekaran
  0 siblings, 1 reply; 16+ messages in thread
From: Vitaly Kuznetsov @ 2021-04-13 13:40 UTC (permalink / raw)
  To: Siddharth Chandrasekaran
  Cc: Alexander Graf, Evgeny Iakovlev, Liran Alon, Ioannis Aslanidis,
	linux-hyperv, linux-kernel, kvm, K. Y. Srinivasan, Haiyang Zhang,
	Stephen Hemminger, Wei Liu, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, x86, H. Peter Anvin, Paolo Bonzini,
	Sean Christopherson, Wanpeng Li, Jim Mattson, Joerg Roedel

Siddharth Chandrasekaran <sidcha@amazon.de> writes:

> Hyper-V XMM fast hypercalls use XMM registers to pass input/output
> parameters. To access these, hyperv.c can reuse some FPU register
> accessors defined in emulate.c. Move them to a common location so that
> both files can access them.
>
> While at it, reorder the parameters of these accessor methods to make
> them more readable.
>
> Cc: Alexander Graf <graf@amazon.com>
> Cc: Evgeny Iakovlev <eyakovl@amazon.de>
> Signed-off-by: Siddharth Chandrasekaran <sidcha@amazon.de>
> ---
>  arch/x86/kvm/emulate.c | 138 ++++++----------------------------------
>  arch/x86/kvm/fpu.h     | 140 +++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 158 insertions(+), 120 deletions(-)
>  create mode 100644 arch/x86/kvm/fpu.h
>
> diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
> index f7970ba6219f..296f8f3ce988 100644
> --- a/arch/x86/kvm/emulate.c
> +++ b/arch/x86/kvm/emulate.c
> @@ -22,7 +22,6 @@
>  #include "kvm_cache_regs.h"
>  #include "kvm_emulate.h"
>  #include <linux/stringify.h>
> -#include <asm/fpu/api.h>
>  #include <asm/debugreg.h>
>  #include <asm/nospec-branch.h>
>  
> @@ -30,6 +29,7 @@
>  #include "tss.h"
>  #include "mmu.h"
>  #include "pmu.h"
> +#include "fpu.h"
>  
>  /*
>   * Operand types
> @@ -1081,116 +1081,14 @@ static void fetch_register_operand(struct operand *op)
>  	}
>  }
>  
> -static void emulator_get_fpu(void)
> -{
> -	fpregs_lock();
> -
> -	fpregs_assert_state_consistent();
> -	if (test_thread_flag(TIF_NEED_FPU_LOAD))
> -		switch_fpu_return();
> -}
> -
> -static void emulator_put_fpu(void)
> -{
> -	fpregs_unlock();
> -}
> -
> -static void read_sse_reg(sse128_t *data, int reg)
> -{
> -	emulator_get_fpu();
> -	switch (reg) {
> -	case 0: asm("movdqa %%xmm0, %0" : "=m"(*data)); break;
> -	case 1: asm("movdqa %%xmm1, %0" : "=m"(*data)); break;
> -	case 2: asm("movdqa %%xmm2, %0" : "=m"(*data)); break;
> -	case 3: asm("movdqa %%xmm3, %0" : "=m"(*data)); break;
> -	case 4: asm("movdqa %%xmm4, %0" : "=m"(*data)); break;
> -	case 5: asm("movdqa %%xmm5, %0" : "=m"(*data)); break;
> -	case 6: asm("movdqa %%xmm6, %0" : "=m"(*data)); break;
> -	case 7: asm("movdqa %%xmm7, %0" : "=m"(*data)); break;
> -#ifdef CONFIG_X86_64
> -	case 8: asm("movdqa %%xmm8, %0" : "=m"(*data)); break;
> -	case 9: asm("movdqa %%xmm9, %0" : "=m"(*data)); break;
> -	case 10: asm("movdqa %%xmm10, %0" : "=m"(*data)); break;
> -	case 11: asm("movdqa %%xmm11, %0" : "=m"(*data)); break;
> -	case 12: asm("movdqa %%xmm12, %0" : "=m"(*data)); break;
> -	case 13: asm("movdqa %%xmm13, %0" : "=m"(*data)); break;
> -	case 14: asm("movdqa %%xmm14, %0" : "=m"(*data)); break;
> -	case 15: asm("movdqa %%xmm15, %0" : "=m"(*data)); break;
> -#endif
> -	default: BUG();
> -	}
> -	emulator_put_fpu();
> -}
> -
> -static void write_sse_reg(sse128_t *data, int reg)
> -{
> -	emulator_get_fpu();
> -	switch (reg) {
> -	case 0: asm("movdqa %0, %%xmm0" : : "m"(*data)); break;
> -	case 1: asm("movdqa %0, %%xmm1" : : "m"(*data)); break;
> -	case 2: asm("movdqa %0, %%xmm2" : : "m"(*data)); break;
> -	case 3: asm("movdqa %0, %%xmm3" : : "m"(*data)); break;
> -	case 4: asm("movdqa %0, %%xmm4" : : "m"(*data)); break;
> -	case 5: asm("movdqa %0, %%xmm5" : : "m"(*data)); break;
> -	case 6: asm("movdqa %0, %%xmm6" : : "m"(*data)); break;
> -	case 7: asm("movdqa %0, %%xmm7" : : "m"(*data)); break;
> -#ifdef CONFIG_X86_64
> -	case 8: asm("movdqa %0, %%xmm8" : : "m"(*data)); break;
> -	case 9: asm("movdqa %0, %%xmm9" : : "m"(*data)); break;
> -	case 10: asm("movdqa %0, %%xmm10" : : "m"(*data)); break;
> -	case 11: asm("movdqa %0, %%xmm11" : : "m"(*data)); break;
> -	case 12: asm("movdqa %0, %%xmm12" : : "m"(*data)); break;
> -	case 13: asm("movdqa %0, %%xmm13" : : "m"(*data)); break;
> -	case 14: asm("movdqa %0, %%xmm14" : : "m"(*data)); break;
> -	case 15: asm("movdqa %0, %%xmm15" : : "m"(*data)); break;
> -#endif
> -	default: BUG();
> -	}
> -	emulator_put_fpu();
> -}
> -
> -static void read_mmx_reg(u64 *data, int reg)
> -{
> -	emulator_get_fpu();
> -	switch (reg) {
> -	case 0: asm("movq %%mm0, %0" : "=m"(*data)); break;
> -	case 1: asm("movq %%mm1, %0" : "=m"(*data)); break;
> -	case 2: asm("movq %%mm2, %0" : "=m"(*data)); break;
> -	case 3: asm("movq %%mm3, %0" : "=m"(*data)); break;
> -	case 4: asm("movq %%mm4, %0" : "=m"(*data)); break;
> -	case 5: asm("movq %%mm5, %0" : "=m"(*data)); break;
> -	case 6: asm("movq %%mm6, %0" : "=m"(*data)); break;
> -	case 7: asm("movq %%mm7, %0" : "=m"(*data)); break;
> -	default: BUG();
> -	}
> -	emulator_put_fpu();
> -}
> -
> -static void write_mmx_reg(u64 *data, int reg)
> -{
> -	emulator_get_fpu();
> -	switch (reg) {
> -	case 0: asm("movq %0, %%mm0" : : "m"(*data)); break;
> -	case 1: asm("movq %0, %%mm1" : : "m"(*data)); break;
> -	case 2: asm("movq %0, %%mm2" : : "m"(*data)); break;
> -	case 3: asm("movq %0, %%mm3" : : "m"(*data)); break;
> -	case 4: asm("movq %0, %%mm4" : : "m"(*data)); break;
> -	case 5: asm("movq %0, %%mm5" : : "m"(*data)); break;
> -	case 6: asm("movq %0, %%mm6" : : "m"(*data)); break;
> -	case 7: asm("movq %0, %%mm7" : : "m"(*data)); break;
> -	default: BUG();
> -	}
> -	emulator_put_fpu();
> -}
> -
>  static int em_fninit(struct x86_emulate_ctxt *ctxt)
>  {
>  	if (ctxt->ops->get_cr(ctxt, 0) & (X86_CR0_TS | X86_CR0_EM))
>  		return emulate_nm(ctxt);
>  
> -	emulator_get_fpu();
> +	kvm_fpu_get();
>  	asm volatile("fninit");
> -	emulator_put_fpu();
> +	kvm_fpu_put();
>  	return X86EMUL_CONTINUE;
>  }
>  
> @@ -1201,9 +1099,9 @@ static int em_fnstcw(struct x86_emulate_ctxt *ctxt)
>  	if (ctxt->ops->get_cr(ctxt, 0) & (X86_CR0_TS | X86_CR0_EM))
>  		return emulate_nm(ctxt);
>  
> -	emulator_get_fpu();
> +	kvm_fpu_get();
>  	asm volatile("fnstcw %0": "+m"(fcw));
> -	emulator_put_fpu();
> +	kvm_fpu_put();
>  
>  	ctxt->dst.val = fcw;
>  
> @@ -1217,9 +1115,9 @@ static int em_fnstsw(struct x86_emulate_ctxt *ctxt)
>  	if (ctxt->ops->get_cr(ctxt, 0) & (X86_CR0_TS | X86_CR0_EM))
>  		return emulate_nm(ctxt);
>  
> -	emulator_get_fpu();
> +	kvm_fpu_get();
>  	asm volatile("fnstsw %0": "+m"(fsw));
> -	emulator_put_fpu();
> +	kvm_fpu_put();
>  
>  	ctxt->dst.val = fsw;
>  
> @@ -1238,7 +1136,7 @@ static void decode_register_operand(struct x86_emulate_ctxt *ctxt,
>  		op->type = OP_XMM;
>  		op->bytes = 16;
>  		op->addr.xmm = reg;
> -		read_sse_reg(&op->vec_val, reg);
> +		kvm_read_sse_reg(reg, &op->vec_val);
>  		return;
>  	}
>  	if (ctxt->d & Mmx) {
> @@ -1289,7 +1187,7 @@ static int decode_modrm(struct x86_emulate_ctxt *ctxt,
>  			op->type = OP_XMM;
>  			op->bytes = 16;
>  			op->addr.xmm = ctxt->modrm_rm;
> -			read_sse_reg(&op->vec_val, ctxt->modrm_rm);
> +			kvm_read_sse_reg(ctxt->modrm_rm, &op->vec_val);
>  			return rc;
>  		}
>  		if (ctxt->d & Mmx) {
> @@ -1866,10 +1764,10 @@ static int writeback(struct x86_emulate_ctxt *ctxt, struct operand *op)
>  				       op->bytes * op->count);
>  		break;
>  	case OP_XMM:
> -		write_sse_reg(&op->vec_val, op->addr.xmm);
> +		kvm_write_sse_reg(op->addr.xmm, &op->vec_val);
>  		break;
>  	case OP_MM:
> -		write_mmx_reg(&op->mm_val, op->addr.mm);
> +		kvm_write_mmx_reg(op->addr.mm, &op->mm_val);
>  		break;
>  	case OP_NONE:
>  		/* no writeback */
> @@ -4124,11 +4022,11 @@ static int em_fxsave(struct x86_emulate_ctxt *ctxt)
>  	if (rc != X86EMUL_CONTINUE)
>  		return rc;
>  
> -	emulator_get_fpu();
> +	kvm_fpu_get();
>  
>  	rc = asm_safe("fxsave %[fx]", , [fx] "+m"(fx_state));
>  
> -	emulator_put_fpu();
> +	kvm_fpu_put();
>  
>  	if (rc != X86EMUL_CONTINUE)
>  		return rc;
> @@ -4172,7 +4070,7 @@ static int em_fxrstor(struct x86_emulate_ctxt *ctxt)
>  	if (rc != X86EMUL_CONTINUE)
>  		return rc;
>  
> -	emulator_get_fpu();
> +	kvm_fpu_get();
>  
>  	if (size < __fxstate_size(16)) {
>  		rc = fxregs_fixup(&fx_state, size);
> @@ -4189,7 +4087,7 @@ static int em_fxrstor(struct x86_emulate_ctxt *ctxt)
>  		rc = asm_safe("fxrstor %[fx]", : [fx] "m"(fx_state));
>  
>  out:
> -	emulator_put_fpu();
> +	kvm_fpu_put();
>  
>  	return rc;
>  }
> @@ -5510,9 +5408,9 @@ static int flush_pending_x87_faults(struct x86_emulate_ctxt *ctxt)
>  {
>  	int rc;
>  
> -	emulator_get_fpu();
> +	kvm_fpu_get();
>  	rc = asm_safe("fwait");
> -	emulator_put_fpu();
> +	kvm_fpu_put();
>  
>  	if (unlikely(rc != X86EMUL_CONTINUE))
>  		return emulate_exception(ctxt, MF_VECTOR, 0, false);
> @@ -5523,7 +5421,7 @@ static int flush_pending_x87_faults(struct x86_emulate_ctxt *ctxt)
>  static void fetch_possible_mmx_operand(struct operand *op)
>  {
>  	if (op->type == OP_MM)
> -		read_mmx_reg(&op->mm_val, op->addr.mm);
> +		kvm_read_mmx_reg(op->addr.mm, &op->mm_val);
>  }
>  
>  static int fastop(struct x86_emulate_ctxt *ctxt, fastop_t fop)
> diff --git a/arch/x86/kvm/fpu.h b/arch/x86/kvm/fpu.h
> new file mode 100644
> index 000000000000..3ba12888bf66
> --- /dev/null
> +++ b/arch/x86/kvm/fpu.h
> @@ -0,0 +1,140 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +#ifndef __KVM_FPU_H_
> +#define __KVM_FPU_H_
> +
> +#include <asm/fpu/api.h>
> +
> +typedef u32		__attribute__((vector_size(16))) sse128_t;

Post-patch we seem to have two definitions of 'sse128_t':

$ git grep sse128_t
HEAD~3:arch/x86/kvm/fpu.h:typedef u32           __attribute__((vector_size(16))) sse128_t;
HEAD~3:arch/x86/kvm/fpu.h:#define __sse128_u    union { sse128_t vec; u64 as_u64[2]; u32 as_u32[4]; }
HEAD~3:arch/x86/kvm/fpu.h:static inline void _kvm_read_sse_reg(int reg, sse128_t *data)
HEAD~3:arch/x86/kvm/fpu.h:static inline void _kvm_write_sse_reg(int reg, const sse128_t *data)
HEAD~3:arch/x86/kvm/fpu.h:static inline void kvm_read_sse_reg(int reg, sse128_t *data)
HEAD~3:arch/x86/kvm/fpu.h:static inline void kvm_write_sse_reg(int reg, const sse128_t *data)
HEAD~3:arch/x86/kvm/kvm_emulate.h:typedef u32 __attribute__((vector_size(16))) sse128_t;
HEAD~3:arch/x86/kvm/kvm_emulate.h:              char valptr[sizeof(sse128_t)];
HEAD~3:arch/x86/kvm/kvm_emulate.h:              sse128_t vec_val;

Should the one from kvm_emulate.h go away?

> +#define __sse128_u	union { sse128_t vec; u64 as_u64[2]; u32 as_u32[4]; }
> +#define sse128_lo(x)	({ __sse128_u t; t.vec = x; t.as_u64[0]; })
> +#define sse128_hi(x)	({ __sse128_u t; t.vec = x; t.as_u64[1]; })
> +#define sse128_l0(x)	({ __sse128_u t; t.vec = x; t.as_u32[0]; })
> +#define sse128_l1(x)	({ __sse128_u t; t.vec = x; t.as_u32[1]; })
> +#define sse128_l2(x)	({ __sse128_u t; t.vec = x; t.as_u32[2]; })
> +#define sse128_l3(x)	({ __sse128_u t; t.vec = x; t.as_u32[3]; })
> +#define sse128(lo, hi)	({ __sse128_u t; t.as_u64[0] = lo; t.as_u64[1] = hi; t.vec; })
> +
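For orientation, a minimal standalone sketch of how these accessors
compose and split a 128-bit value (hypothetical constants; lane order
is little-endian x86):

	sse128_t v = sse128(0x1111222233334444ULL, 0x5555666677778888ULL);

	u64 lo = sse128_lo(v);	/* == 0x1111222233334444 */
	u64 hi = sse128_hi(v);	/* == 0x5555666677778888 */
	u32 l0 = sse128_l0(v);	/* == 0x33334444, lowest 32-bit lane */
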
> +static inline void _kvm_read_sse_reg(int reg, sse128_t *data)
> +{
> +	switch (reg) {
> +	case 0: asm("movdqa %%xmm0, %0" : "=m"(*data)); break;
> +	case 1: asm("movdqa %%xmm1, %0" : "=m"(*data)); break;
> +	case 2: asm("movdqa %%xmm2, %0" : "=m"(*data)); break;
> +	case 3: asm("movdqa %%xmm3, %0" : "=m"(*data)); break;
> +	case 4: asm("movdqa %%xmm4, %0" : "=m"(*data)); break;
> +	case 5: asm("movdqa %%xmm5, %0" : "=m"(*data)); break;
> +	case 6: asm("movdqa %%xmm6, %0" : "=m"(*data)); break;
> +	case 7: asm("movdqa %%xmm7, %0" : "=m"(*data)); break;
> +#ifdef CONFIG_X86_64
> +	case 8: asm("movdqa %%xmm8, %0" : "=m"(*data)); break;
> +	case 9: asm("movdqa %%xmm9, %0" : "=m"(*data)); break;
> +	case 10: asm("movdqa %%xmm10, %0" : "=m"(*data)); break;
> +	case 11: asm("movdqa %%xmm11, %0" : "=m"(*data)); break;
> +	case 12: asm("movdqa %%xmm12, %0" : "=m"(*data)); break;
> +	case 13: asm("movdqa %%xmm13, %0" : "=m"(*data)); break;
> +	case 14: asm("movdqa %%xmm14, %0" : "=m"(*data)); break;
> +	case 15: asm("movdqa %%xmm15, %0" : "=m"(*data)); break;
> +#endif
> +	default: BUG();
> +	}
> +}
> +
> +static inline void _kvm_write_sse_reg(int reg, const sse128_t *data)
> +{
> +	switch (reg) {
> +	case 0: asm("movdqa %0, %%xmm0" : : "m"(*data)); break;
> +	case 1: asm("movdqa %0, %%xmm1" : : "m"(*data)); break;
> +	case 2: asm("movdqa %0, %%xmm2" : : "m"(*data)); break;
> +	case 3: asm("movdqa %0, %%xmm3" : : "m"(*data)); break;
> +	case 4: asm("movdqa %0, %%xmm4" : : "m"(*data)); break;
> +	case 5: asm("movdqa %0, %%xmm5" : : "m"(*data)); break;
> +	case 6: asm("movdqa %0, %%xmm6" : : "m"(*data)); break;
> +	case 7: asm("movdqa %0, %%xmm7" : : "m"(*data)); break;
> +#ifdef CONFIG_X86_64
> +	case 8: asm("movdqa %0, %%xmm8" : : "m"(*data)); break;
> +	case 9: asm("movdqa %0, %%xmm9" : : "m"(*data)); break;
> +	case 10: asm("movdqa %0, %%xmm10" : : "m"(*data)); break;
> +	case 11: asm("movdqa %0, %%xmm11" : : "m"(*data)); break;
> +	case 12: asm("movdqa %0, %%xmm12" : : "m"(*data)); break;
> +	case 13: asm("movdqa %0, %%xmm13" : : "m"(*data)); break;
> +	case 14: asm("movdqa %0, %%xmm14" : : "m"(*data)); break;
> +	case 15: asm("movdqa %0, %%xmm15" : : "m"(*data)); break;
> +#endif
> +	default: BUG();
> +	}
> +}
> +
> +static inline void _kvm_read_mmx_reg(int reg, u64 *data)
> +{
> +	switch (reg) {
> +	case 0: asm("movq %%mm0, %0" : "=m"(*data)); break;
> +	case 1: asm("movq %%mm1, %0" : "=m"(*data)); break;
> +	case 2: asm("movq %%mm2, %0" : "=m"(*data)); break;
> +	case 3: asm("movq %%mm3, %0" : "=m"(*data)); break;
> +	case 4: asm("movq %%mm4, %0" : "=m"(*data)); break;
> +	case 5: asm("movq %%mm5, %0" : "=m"(*data)); break;
> +	case 6: asm("movq %%mm6, %0" : "=m"(*data)); break;
> +	case 7: asm("movq %%mm7, %0" : "=m"(*data)); break;
> +	default: BUG();
> +	}
> +}
> +
> +static inline void _kvm_write_mmx_reg(int reg, const u64 *data)
> +{
> +	switch (reg) {
> +	case 0: asm("movq %0, %%mm0" : : "m"(*data)); break;
> +	case 1: asm("movq %0, %%mm1" : : "m"(*data)); break;
> +	case 2: asm("movq %0, %%mm2" : : "m"(*data)); break;
> +	case 3: asm("movq %0, %%mm3" : : "m"(*data)); break;
> +	case 4: asm("movq %0, %%mm4" : : "m"(*data)); break;
> +	case 5: asm("movq %0, %%mm5" : : "m"(*data)); break;
> +	case 6: asm("movq %0, %%mm6" : : "m"(*data)); break;
> +	case 7: asm("movq %0, %%mm7" : : "m"(*data)); break;
> +	default: BUG();
> +	}
> +}
> +
> +static inline void kvm_fpu_get(void)
> +{
> +	fpregs_lock();
> +
> +	fpregs_assert_state_consistent();
> +	if (test_thread_flag(TIF_NEED_FPU_LOAD))
> +		switch_fpu_return();
> +}
> +
> +static inline void kvm_fpu_put(void)
> +{
> +	fpregs_unlock();
> +}
> +
> +static inline void kvm_read_sse_reg(int reg, sse128_t *data)
> +{
> +	kvm_fpu_get();
> +	_kvm_read_sse_reg(reg, data);
> +	kvm_fpu_put();
> +}
> +
> +static inline void kvm_write_sse_reg(int reg, const sse128_t *data)
> +{
> +	kvm_fpu_get();
> +	_kvm_write_sse_reg(reg, data);
> +	kvm_fpu_put();
> +}
> +
> +static inline void kvm_read_mmx_reg(int reg, u64 *data)
> +{
> +	kvm_fpu_get();
> +	_kvm_read_mmx_reg(reg, data);
> +	kvm_fpu_put();
> +}
> +
> +static inline void kvm_write_mmx_reg(int reg, const u64 *data)
> +{
> +	kvm_fpu_get();
> +	_kvm_write_mmx_reg(reg, data);
> +	kvm_fpu_put();
> +}
> +
> +#endif
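
Worth noting: the raw _kvm_*() accessors let a caller batch several
register reads/writes under a single fpregs critical section instead of
paying a kvm_fpu_get()/kvm_fpu_put() pair per register -- exactly the
pattern patch 3 adopts for the XMM hypercall arguments. A minimal
sketch (hypothetical helper, not part of the patch):

	/* Read XMM0..XMM5 under one fpregs critical section. */
	static void read_xmm_block(sse128_t regs[6])
	{
		int reg;

		kvm_fpu_get();
		for (reg = 0; reg < 6; reg++)
			_kvm_read_sse_reg(reg, &regs[reg]);
		kvm_fpu_put();
	}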

-- 
Vitaly


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v2 1/4] KVM: x86: Move FPU register accessors into fpu.h
  2021-04-13 13:40   ` Vitaly Kuznetsov
@ 2021-04-13 13:46     ` Siddharth Chandrasekaran
  0 siblings, 0 replies; 16+ messages in thread
From: Siddharth Chandrasekaran @ 2021-04-13 13:46 UTC (permalink / raw)
  To: Vitaly Kuznetsov
  Cc: Alexander Graf, Evgeny Iakovlev, Liran Alon, Ioannis Aslanidis,
	linux-hyperv, linux-kernel, kvm, K. Y. Srinivasan, Haiyang Zhang,
	Stephen Hemminger, Wei Liu, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, x86, H. Peter Anvin, Paolo Bonzini,
	Sean Christopherson, Wanpeng Li, Jim Mattson, Joerg Roedel

On Tue, Apr 13, 2021 at 03:40:56PM +0200, Vitaly Kuznetsov wrote:
> Siddharth Chandrasekaran <sidcha@amazon.de> writes:
> > @@ -0,0 +1,140 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +
> > +#ifndef __KVM_FPU_H_
> > +#define __KVM_FPU_H_
> > +
> > +#include <asm/fpu/api.h>
> > +
> > +typedef u32          __attribute__((vector_size(16))) sse128_t;
> 
> Post-patch we seem to have two definitions of 'sse128_t':
> 
> $ git grep sse128_t
> HEAD~3:arch/x86/kvm/fpu.h:typedef u32           __attribute__((vector_size(16))) sse128_t;
> HEAD~3:arch/x86/kvm/fpu.h:#define __sse128_u    union { sse128_t vec; u64 as_u64[2]; u32 as_u32[4]; }
> HEAD~3:arch/x86/kvm/fpu.h:static inline void _kvm_read_sse_reg(int reg, sse128_t *data)
> HEAD~3:arch/x86/kvm/fpu.h:static inline void _kvm_write_sse_reg(int reg, const sse128_t *data)
> HEAD~3:arch/x86/kvm/fpu.h:static inline void kvm_read_sse_reg(int reg, sse128_t *data)
> HEAD~3:arch/x86/kvm/fpu.h:static inline void kvm_write_sse_reg(int reg, const sse128_t *data)
> HEAD~3:arch/x86/kvm/kvm_emulate.h:typedef u32 __attribute__((vector_size(16))) sse128_t;
> HEAD~3:arch/x86/kvm/kvm_emulate.h:              char valptr[sizeof(sse128_t)];
> HEAD~3:arch/x86/kvm/kvm_emulate.h:              sse128_t vec_val;
> 
> Should the one from kvm_emulate.h go away?

Yes, removed.

~ Sid.



^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v2 2/4] KVM: hyper-v: Collect hypercall params into struct
  2021-04-12 17:00 ` [PATCH v2 2/4] KVM: hyper-v: Collect hypercall params into struct Siddharth Chandrasekaran
@ 2021-04-13 13:53   ` Vitaly Kuznetsov
  2021-04-13 14:11     ` Siddharth Chandrasekaran
  0 siblings, 1 reply; 16+ messages in thread
From: Vitaly Kuznetsov @ 2021-04-13 13:53 UTC (permalink / raw)
  To: Siddharth Chandrasekaran
  Cc: Alexander Graf, Evgeny Iakovlev, Liran Alon, Ioannis Aslanidis,
	linux-hyperv, linux-kernel, kvm, K. Y. Srinivasan, Haiyang Zhang,
	Stephen Hemminger, Wei Liu, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, x86, H. Peter Anvin, Paolo Bonzini,
	Sean Christopherson, Wanpeng Li, Jim Mattson, Joerg Roedel

Siddharth Chandrasekaran <sidcha@amazon.de> writes:

> As of now there are 7 parameters (and flags) that are used in various
> hyper-v hypercall handlers. There are 6 more input/output parameters
> passed from XMM registers which are to be added in an upcoming patch.
>
> To make passing arguments to the handlers more readable, capture all
> these parameters into a single structure.
>
> Cc: Alexander Graf <graf@amazon.com>
> Cc: Evgeny Iakovlev <eyakovl@amazon.de>
> Signed-off-by: Siddharth Chandrasekaran <sidcha@amazon.de>
> ---
>  arch/x86/kvm/hyperv.c | 147 +++++++++++++++++++++++-------------------
>  1 file changed, 79 insertions(+), 68 deletions(-)
>
> diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
> index f98370a39936..8f6babd1ea0d 100644
> --- a/arch/x86/kvm/hyperv.c
> +++ b/arch/x86/kvm/hyperv.c
> @@ -1623,7 +1623,18 @@ static __always_inline unsigned long *sparse_set_to_vcpu_mask(
>  	return vcpu_bitmap;
>  }
>  
> -static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, u64 ingpa, u16 rep_cnt, bool ex)
> +struct kvm_hv_hcall {
> +	u64 param;
> +	u64 ingpa;
> +	u64 outgpa;
> +	u16 code;
> +	u16 rep_cnt;
> +	u16 rep_idx;
> +	bool fast;
> +	bool rep;
> +};
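
(Worth noting when reading this struct: for fast hypercalls, 'ingpa'
and 'outgpa' carry immediate input/output data in the GPR pair rather
than guest physical addresses -- see the hc->fast path in
kvm_hv_send_ipi() below.)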
> +
> +static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc, bool ex)

Nitpick: Would it make sense to also pack the fact that we're dealing
with a hypercall using ExProcessorMasks into 'struct kvm_hv_hcall' and
get rid of 'bool ex' parameter for both kvm_hv_flush_tlb() and
kvm_hv_send_ipi()? 'struct kvm_hv_hcall' is already a synthetic
aggregator for input and output so adding some other information there
may not be that big of a stretch...

>  {
>  	struct kvm *kvm = vcpu->kvm;
>  	struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
> @@ -1638,7 +1649,7 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, u64 ingpa, u16 rep_cnt, bool
>  	bool all_cpus;
>  
>  	if (!ex) {
> -		if (unlikely(kvm_read_guest(kvm, ingpa, &flush, sizeof(flush))))
> +		if (unlikely(kvm_read_guest(kvm, hc->ingpa, &flush, sizeof(flush))))
>  			return HV_STATUS_INVALID_HYPERCALL_INPUT;
>  
>  		trace_kvm_hv_flush_tlb(flush.processor_mask,
> @@ -1657,7 +1668,7 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, u64 ingpa, u16 rep_cnt, bool
>  		all_cpus = (flush.flags & HV_FLUSH_ALL_PROCESSORS) ||
>  			flush.processor_mask == 0;
>  	} else {
> -		if (unlikely(kvm_read_guest(kvm, ingpa, &flush_ex,
> +		if (unlikely(kvm_read_guest(kvm, hc->ingpa, &flush_ex,
>  					    sizeof(flush_ex))))
>  			return HV_STATUS_INVALID_HYPERCALL_INPUT;
>  
> @@ -1679,8 +1690,8 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, u64 ingpa, u16 rep_cnt, bool
>  
>  		if (!all_cpus &&
>  		    kvm_read_guest(kvm,
> -				   ingpa + offsetof(struct hv_tlb_flush_ex,
> -						    hv_vp_set.bank_contents),
> +				   hc->ingpa + offsetof(struct hv_tlb_flush_ex,
> +							hv_vp_set.bank_contents),
>  				   sparse_banks,
>  				   sparse_banks_len))
>  			return HV_STATUS_INVALID_HYPERCALL_INPUT;
> @@ -1700,9 +1711,9 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, u64 ingpa, u16 rep_cnt, bool
>  				    NULL, vcpu_mask, &hv_vcpu->tlb_flush);
>  
>  ret_success:
> -	/* We always do full TLB flush, set rep_done = rep_cnt. */
> +	/* We always do full TLB flush, set rep_done = hc->rep_cnt. */

Nitpicking: I'd suggest we word it a bit differently:

"We always do full TLB flush, set 'Reps completed' = 'Rep Count'."

so it matches TLFS rather than KVM internals.

>  	return (u64)HV_STATUS_SUCCESS |
> -		((u64)rep_cnt << HV_HYPERCALL_REP_COMP_OFFSET);
> +		((u64)hc->rep_cnt << HV_HYPERCALL_REP_COMP_OFFSET);
>  }
>  
>  static void kvm_send_ipi_to_many(struct kvm *kvm, u32 vector,
> @@ -1724,8 +1735,7 @@ static void kvm_send_ipi_to_many(struct kvm *kvm, u32 vector,
>  	}
>  }
>  
> -static u64 kvm_hv_send_ipi(struct kvm_vcpu *vcpu, u64 ingpa, u64 outgpa,
> -			   bool ex, bool fast)
> +static u64 kvm_hv_send_ipi(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc, bool ex)
>  {
>  	struct kvm *kvm = vcpu->kvm;
>  	struct hv_send_ipi_ex send_ipi_ex;
> @@ -1740,25 +1750,25 @@ static u64 kvm_hv_send_ipi(struct kvm_vcpu *vcpu, u64 ingpa, u64 outgpa,
>  	bool all_cpus;
>  
>  	if (!ex) {
> -		if (!fast) {
> -			if (unlikely(kvm_read_guest(kvm, ingpa, &send_ipi,
> +		if (!hc->fast) {
> +			if (unlikely(kvm_read_guest(kvm, hc->ingpa, &send_ipi,
>  						    sizeof(send_ipi))))
>  				return HV_STATUS_INVALID_HYPERCALL_INPUT;
>  			sparse_banks[0] = send_ipi.cpu_mask;
>  			vector = send_ipi.vector;
>  		} else {
>  			/* 'reserved' part of hv_send_ipi should be 0 */
> -			if (unlikely(ingpa >> 32 != 0))
> +			if (unlikely(hc->ingpa >> 32 != 0))
>  				return HV_STATUS_INVALID_HYPERCALL_INPUT;
> -			sparse_banks[0] = outgpa;
> -			vector = (u32)ingpa;
> +			sparse_banks[0] = hc->outgpa;
> +			vector = (u32)hc->ingpa;
>  		}
>  		all_cpus = false;
>  		valid_bank_mask = BIT_ULL(0);
>  
>  		trace_kvm_hv_send_ipi(vector, sparse_banks[0]);
>  	} else {
> -		if (unlikely(kvm_read_guest(kvm, ingpa, &send_ipi_ex,
> +		if (unlikely(kvm_read_guest(kvm, hc->ingpa, &send_ipi_ex,
>  					    sizeof(send_ipi_ex))))
>  			return HV_STATUS_INVALID_HYPERCALL_INPUT;
>  
> @@ -1778,8 +1788,8 @@ static u64 kvm_hv_send_ipi(struct kvm_vcpu *vcpu, u64 ingpa, u64 outgpa,
>  
>  		if (!all_cpus &&
>  		    kvm_read_guest(kvm,
> -				   ingpa + offsetof(struct hv_send_ipi_ex,
> -						    vp_set.bank_contents),
> +				   hc->ingpa + offsetof(struct hv_send_ipi_ex,
> +							vp_set.bank_contents),
>  				   sparse_banks,
>  				   sparse_banks_len))
>  			return HV_STATUS_INVALID_HYPERCALL_INPUT;
> @@ -1839,20 +1849,21 @@ static int kvm_hv_hypercall_complete_userspace(struct kvm_vcpu *vcpu)
>  	return kvm_hv_hypercall_complete(vcpu, vcpu->run->hyperv.u.hcall.result);
>  }
>  
> -static u16 kvm_hvcall_signal_event(struct kvm_vcpu *vcpu, bool fast, u64 param)
> +static u16 kvm_hvcall_signal_event(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
>  {
>  	struct kvm_hv *hv = to_kvm_hv(vcpu->kvm);
>  	struct eventfd_ctx *eventfd;
>  
> -	if (unlikely(!fast)) {
> +	if (unlikely(!hc->fast)) {
>  		int ret;
> -		gpa_t gpa = param;
> +		gpa_t gpa = hc->ingpa;
>  
> -		if ((gpa & (__alignof__(param) - 1)) ||
> -		    offset_in_page(gpa) + sizeof(param) > PAGE_SIZE)
> +		if ((gpa & (__alignof__(hc->ingpa) - 1)) ||
> +		    offset_in_page(gpa) + sizeof(hc->ingpa) > PAGE_SIZE)
>  			return HV_STATUS_INVALID_ALIGNMENT;
>  
> -		ret = kvm_vcpu_read_guest(vcpu, gpa, &param, sizeof(param));
> +		ret = kvm_vcpu_read_guest(vcpu, gpa,
> +					  &hc->ingpa, sizeof(hc->ingpa));
>  		if (ret < 0)
>  			return HV_STATUS_INVALID_ALIGNMENT;
>  	}
> @@ -1862,15 +1873,15 @@ static u16 kvm_hvcall_signal_event(struct kvm_vcpu *vcpu, bool fast, u64 param)
>  	 * have no use for it, and in all known usecases it is zero, so just
>  	 * report lookup failure if it isn't.
>  	 */
> -	if (param & 0xffff00000000ULL)
> +	if (hc->ingpa & 0xffff00000000ULL)
>  		return HV_STATUS_INVALID_PORT_ID;
>  	/* remaining bits are reserved-zero */
> -	if (param & ~KVM_HYPERV_CONN_ID_MASK)
> +	if (hc->ingpa & ~KVM_HYPERV_CONN_ID_MASK)
>  		return HV_STATUS_INVALID_HYPERCALL_INPUT;
>  
>  	/* the eventfd is protected by vcpu->kvm->srcu, but conn_to_evt isn't */
>  	rcu_read_lock();
> -	eventfd = idr_find(&hv->conn_to_evt, param);
> +	eventfd = idr_find(&hv->conn_to_evt, hc->ingpa);
>  	rcu_read_unlock();
>  	if (!eventfd)
>  		return HV_STATUS_INVALID_PORT_ID;
> @@ -1881,9 +1892,8 @@ static u16 kvm_hvcall_signal_event(struct kvm_vcpu *vcpu, bool fast, u64 param)
>  
>  int kvm_hv_hypercall(struct kvm_vcpu *vcpu)
>  {
> -	u64 param, ingpa, outgpa, ret = HV_STATUS_SUCCESS;
> -	uint16_t code, rep_idx, rep_cnt;
> -	bool fast, rep;
> +	struct kvm_hv_hcall hc;
> +	u64 ret = HV_STATUS_SUCCESS;
>  
>  	/*
>  	 * hypercall generates UD from non zero cpl and real mode
> @@ -1896,104 +1906,105 @@ int kvm_hv_hypercall(struct kvm_vcpu *vcpu)
>  
>  #ifdef CONFIG_X86_64
>  	if (is_64_bit_mode(vcpu)) {
> -		param = kvm_rcx_read(vcpu);
> -		ingpa = kvm_rdx_read(vcpu);
> -		outgpa = kvm_r8_read(vcpu);
> +		hc.param = kvm_rcx_read(vcpu);
> +		hc.ingpa = kvm_rdx_read(vcpu);
> +		hc.outgpa = kvm_r8_read(vcpu);
>  	} else
>  #endif
>  	{
> -		param = ((u64)kvm_rdx_read(vcpu) << 32) |
> -			(kvm_rax_read(vcpu) & 0xffffffff);
> -		ingpa = ((u64)kvm_rbx_read(vcpu) << 32) |
> -			(kvm_rcx_read(vcpu) & 0xffffffff);
> -		outgpa = ((u64)kvm_rdi_read(vcpu) << 32) |
> -			(kvm_rsi_read(vcpu) & 0xffffffff);
> +		hc.param = ((u64)kvm_rdx_read(vcpu) << 32) |
> +			    (kvm_rax_read(vcpu) & 0xffffffff);
> +		hc.ingpa = ((u64)kvm_rbx_read(vcpu) << 32) |
> +			    (kvm_rcx_read(vcpu) & 0xffffffff);
> +		hc.outgpa = ((u64)kvm_rdi_read(vcpu) << 32) |
> +			     (kvm_rsi_read(vcpu) & 0xffffffff);
>  	}
>  
> -	code = param & 0xffff;
> -	fast = !!(param & HV_HYPERCALL_FAST_BIT);
> -	rep_cnt = (param >> HV_HYPERCALL_REP_COMP_OFFSET) & 0xfff;
> -	rep_idx = (param >> HV_HYPERCALL_REP_START_OFFSET) & 0xfff;
> -	rep = !!(rep_cnt || rep_idx);
> +	hc.code = hc.param & 0xffff;
> +	hc.fast = !!(hc.param & HV_HYPERCALL_FAST_BIT);
> +	hc.rep_cnt = (hc.param >> HV_HYPERCALL_REP_COMP_OFFSET) & 0xfff;
> +	hc.rep_idx = (hc.param >> HV_HYPERCALL_REP_START_OFFSET) & 0xfff;
> +	hc.rep = !!(hc.rep_cnt || hc.rep_idx);
>  
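As an aside, the bit layout these masks decode -- field positions
assuming the usual TLFS values for the named constants (fast bit 16,
rep-count offset 32, rep-start offset 48); the 12-bit widths follow
from the 0xfff masks:

	/*
	 * Hypercall input value (RCX in 64-bit mode, EDX:EAX otherwise):
	 *   bits 15:0   call code
	 *   bit  16     fast-call flag     (HV_HYPERCALL_FAST_BIT)
	 *   bits 43:32  rep count          (HV_HYPERCALL_REP_COMP_OFFSET)
	 *   bits 59:48  rep start index    (HV_HYPERCALL_REP_START_OFFSET)
	 */
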
> -	trace_kvm_hv_hypercall(code, fast, rep_cnt, rep_idx, ingpa, outgpa);
> +	trace_kvm_hv_hypercall(hc.code, hc.fast, hc.rep_cnt, hc.rep_idx,
> +			       hc.ingpa, hc.outgpa);
>  
> -	switch (code) {
> +	switch (hc.code) {
>  	case HVCALL_NOTIFY_LONG_SPIN_WAIT:
> -		if (unlikely(rep)) {
> +		if (unlikely(hc.rep)) {
>  			ret = HV_STATUS_INVALID_HYPERCALL_INPUT;
>  			break;
>  		}
>  		kvm_vcpu_on_spin(vcpu, true);
>  		break;
>  	case HVCALL_SIGNAL_EVENT:
> -		if (unlikely(rep)) {
> +		if (unlikely(hc.rep)) {
>  			ret = HV_STATUS_INVALID_HYPERCALL_INPUT;
>  			break;
>  		}
> -		ret = kvm_hvcall_signal_event(vcpu, fast, ingpa);
> +		ret = kvm_hvcall_signal_event(vcpu, &hc);
>  		if (ret != HV_STATUS_INVALID_PORT_ID)
>  			break;
>  		fallthrough;	/* maybe userspace knows this conn_id */
>  	case HVCALL_POST_MESSAGE:
>  		/* don't bother userspace if it has no way to handle it */
> -		if (unlikely(rep || !to_hv_synic(vcpu)->active)) {
> +		if (unlikely(hc.rep || !to_hv_synic(vcpu)->active)) {
>  			ret = HV_STATUS_INVALID_HYPERCALL_INPUT;
>  			break;
>  		}
>  		vcpu->run->exit_reason = KVM_EXIT_HYPERV;
>  		vcpu->run->hyperv.type = KVM_EXIT_HYPERV_HCALL;
> -		vcpu->run->hyperv.u.hcall.input = param;
> -		vcpu->run->hyperv.u.hcall.params[0] = ingpa;
> -		vcpu->run->hyperv.u.hcall.params[1] = outgpa;
> +		vcpu->run->hyperv.u.hcall.input = hc.param;
> +		vcpu->run->hyperv.u.hcall.params[0] = hc.ingpa;
> +		vcpu->run->hyperv.u.hcall.params[1] = hc.outgpa;
>  		vcpu->arch.complete_userspace_io =
>  				kvm_hv_hypercall_complete_userspace;
>  		return 0;
>  	case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST:
> -		if (unlikely(fast || !rep_cnt || rep_idx)) {
> +		if (unlikely(hc.fast || !hc.rep_cnt || hc.rep_idx)) {
>  			ret = HV_STATUS_INVALID_HYPERCALL_INPUT;
>  			break;
>  		}
> -		ret = kvm_hv_flush_tlb(vcpu, ingpa, rep_cnt, false);
> +		ret = kvm_hv_flush_tlb(vcpu, &hc, false);
>  		break;
>  	case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE:
> -		if (unlikely(fast || rep)) {
> +		if (unlikely(hc.fast || hc.rep)) {
>  			ret = HV_STATUS_INVALID_HYPERCALL_INPUT;
>  			break;
>  		}
> -		ret = kvm_hv_flush_tlb(vcpu, ingpa, rep_cnt, false);
> +		ret = kvm_hv_flush_tlb(vcpu, &hc, false);
>  		break;
>  	case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX:
> -		if (unlikely(fast || !rep_cnt || rep_idx)) {
> +		if (unlikely(hc.fast || !hc.rep_cnt || hc.rep_idx)) {
>  			ret = HV_STATUS_INVALID_HYPERCALL_INPUT;
>  			break;
>  		}
> -		ret = kvm_hv_flush_tlb(vcpu, ingpa, rep_cnt, true);
> +		ret = kvm_hv_flush_tlb(vcpu, &hc, true);
>  		break;
>  	case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX:
> -		if (unlikely(fast || rep)) {
> +		if (unlikely(hc.fast || hc.rep)) {
>  			ret = HV_STATUS_INVALID_HYPERCALL_INPUT;
>  			break;
>  		}
> -		ret = kvm_hv_flush_tlb(vcpu, ingpa, rep_cnt, true);
> +		ret = kvm_hv_flush_tlb(vcpu, &hc, true);
>  		break;
>  	case HVCALL_SEND_IPI:
> -		if (unlikely(rep)) {
> +		if (unlikely(hc.rep)) {
>  			ret = HV_STATUS_INVALID_HYPERCALL_INPUT;
>  			break;
>  		}
> -		ret = kvm_hv_send_ipi(vcpu, ingpa, outgpa, false, fast);
> +		ret = kvm_hv_send_ipi(vcpu, &hc, false);
>  		break;
>  	case HVCALL_SEND_IPI_EX:
> -		if (unlikely(fast || rep)) {
> +		if (unlikely(hc.fast || hc.rep)) {
>  			ret = HV_STATUS_INVALID_HYPERCALL_INPUT;
>  			break;
>  		}
> -		ret = kvm_hv_send_ipi(vcpu, ingpa, outgpa, true, false);
> +		ret = kvm_hv_send_ipi(vcpu, &hc, true);
>  		break;
>  	case HVCALL_POST_DEBUG_DATA:
>  	case HVCALL_RETRIEVE_DEBUG_DATA:
> -		if (unlikely(fast)) {
> +		if (unlikely(hc.fast)) {
>  			ret = HV_STATUS_INVALID_PARAMETER;
>  			break;
>  		}
> @@ -2012,9 +2023,9 @@ int kvm_hv_hypercall(struct kvm_vcpu *vcpu)
>  		}
>  		vcpu->run->exit_reason = KVM_EXIT_HYPERV;
>  		vcpu->run->hyperv.type = KVM_EXIT_HYPERV_HCALL;
> -		vcpu->run->hyperv.u.hcall.input = param;
> -		vcpu->run->hyperv.u.hcall.params[0] = ingpa;
> -		vcpu->run->hyperv.u.hcall.params[1] = outgpa;
> +		vcpu->run->hyperv.u.hcall.input = hc.param;
> +		vcpu->run->hyperv.u.hcall.params[0] = hc.ingpa;
> +		vcpu->run->hyperv.u.hcall.params[1] = hc.outgpa;
>  		vcpu->arch.complete_userspace_io =
>  				kvm_hv_hypercall_complete_userspace;
>  		return 0;

With or without the nitpicks from above addressed,

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v2 3/4] KVM: x86: kvm_hv_flush_tlb use inputs from XMM registers
  2021-04-12 17:00 ` [PATCH v2 3/4] KVM: x86: kvm_hv_flush_tlb use inputs from XMM registers Siddharth Chandrasekaran
  2021-04-12 20:13   ` Wei Liu
@ 2021-04-13 14:09   ` Vitaly Kuznetsov
  2021-04-13 21:07     ` Siddharth Chandrasekaran
  1 sibling, 1 reply; 16+ messages in thread
From: Vitaly Kuznetsov @ 2021-04-13 14:09 UTC (permalink / raw)
  To: Siddharth Chandrasekaran
  Cc: Alexander Graf, Evgeny Iakovlev, Liran Alon, Ioannis Aslanidis,
	linux-hyperv, linux-kernel, kvm, K. Y. Srinivasan, Haiyang Zhang,
	Stephen Hemminger, Wei Liu, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, x86, H. Peter Anvin, Paolo Bonzini,
	Sean Christopherson, Wanpeng Li, Jim Mattson, Joerg Roedel

Siddharth Chandrasekaran <sidcha@amazon.de> writes:

> Hyper-V supports the use of XMM registers to perform fast hypercalls.
> This allows guests to take advantage of the improved performance of the
> fast hypercall interface even though a hypercall may require more than
> (the current maximum of) two input registers.
>
> The XMM fast hypercall interface uses six additional XMM registers (XMM0
> to XMM5) to allow the guest to pass an input parameter block of up to
> 112 bytes. Hyper-V can also return data back to the guest in the
> remaining XMM registers that are not used by the current hypercall.
>
> Add framework to read/write to XMM registers in kvm_hv_hypercall() and
> use the additional hypercall inputs from XMM registers in
> kvm_hv_flush_tlb() when possible.
>
> Cc: Alexander Graf <graf@amazon.com>
> Co-developed-by: Evgeny Iakovlev <eyakovl@amazon.de>
> Signed-off-by: Evgeny Iakovlev <eyakovl@amazon.de>
> Signed-off-by: Siddharth Chandrasekaran <sidcha@amazon.de>
> ---
>  arch/x86/kvm/hyperv.c | 109 ++++++++++++++++++++++++++++++++++--------
>  1 file changed, 90 insertions(+), 19 deletions(-)
>
> diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
> index 8f6babd1ea0d..1f9959aba70d 100644
> --- a/arch/x86/kvm/hyperv.c
> +++ b/arch/x86/kvm/hyperv.c
> @@ -36,6 +36,7 @@
>  
>  #include "trace.h"
>  #include "irq.h"
> +#include "fpu.h"
>  
>  /* "Hv#1" signature */
>  #define HYPERV_CPUID_SIGNATURE_EAX 0x31237648
> @@ -1623,6 +1624,8 @@ static __always_inline unsigned long *sparse_set_to_vcpu_mask(
>  	return vcpu_bitmap;
>  }
>  
> +#define KVM_HV_HYPERCALL_MAX_XMM_REGISTERS  6

Nitpick: this is not KVM-specific so could probably go to arch/x86/include/asm/hyperv-tlfs.h

> +
>  struct kvm_hv_hcall {
>  	u64 param;
>  	u64 ingpa;
> @@ -1632,10 +1635,14 @@ struct kvm_hv_hcall {
>  	u16 rep_idx;
>  	bool fast;
>  	bool rep;
> +	sse128_t xmm[KVM_HV_HYPERCALL_MAX_XMM_REGISTERS];
> +	bool xmm_dirty;
>  };
>  
>  static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc, bool ex)
>  {
> +	int i, j;
> +	gpa_t gpa;
>  	struct kvm *kvm = vcpu->kvm;
>  	struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
>  	struct hv_tlb_flush_ex flush_ex;
> @@ -1649,8 +1656,15 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc, bool
>  	bool all_cpus;
>  
>  	if (!ex) {
> -		if (unlikely(kvm_read_guest(kvm, hc->ingpa, &flush, sizeof(flush))))
> -			return HV_STATUS_INVALID_HYPERCALL_INPUT;
> +		if (hc->fast) {
> +			flush.address_space = hc->ingpa;
> +			flush.flags = hc->outgpa;
> +			flush.processor_mask = sse128_lo(hc->xmm[0]);
> +		} else {
> +			if (unlikely(kvm_read_guest(kvm, hc->ingpa,
> +						    &flush, sizeof(flush))))
> +				return HV_STATUS_INVALID_HYPERCALL_INPUT;
> +		}
>  
>  		trace_kvm_hv_flush_tlb(flush.processor_mask,
>  				       flush.address_space, flush.flags);
> @@ -1668,9 +1682,16 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc, bool
>  		all_cpus = (flush.flags & HV_FLUSH_ALL_PROCESSORS) ||
>  			flush.processor_mask == 0;
>  	} else {
> -		if (unlikely(kvm_read_guest(kvm, hc->ingpa, &flush_ex,
> -					    sizeof(flush_ex))))
> -			return HV_STATUS_INVALID_HYPERCALL_INPUT;
> +		if (hc->fast) {
> +			flush_ex.address_space = hc->ingpa;
> +			flush_ex.flags = hc->outgpa;
> +			memcpy(&flush_ex.hv_vp_set,
> +			       &hc->xmm[0], sizeof(hc->xmm[0]));
> +		} else {
> +			if (unlikely(kvm_read_guest(kvm, hc->ingpa, &flush_ex,
> +						    sizeof(flush_ex))))
> +				return HV_STATUS_INVALID_HYPERCALL_INPUT;
> +		}
>  
>  		trace_kvm_hv_flush_tlb_ex(flush_ex.hv_vp_set.valid_bank_mask,
>  					  flush_ex.hv_vp_set.format,
> @@ -1681,20 +1702,29 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc, bool
>  		all_cpus = flush_ex.hv_vp_set.format !=
>  			HV_GENERIC_SET_SPARSE_4K;
>  
> -		sparse_banks_len =
> -			bitmap_weight((unsigned long *)&valid_bank_mask, 64) *
> -			sizeof(sparse_banks[0]);
> +		sparse_banks_len = bitmap_weight((unsigned long *)&valid_bank_mask, 64);
>  
>  		if (!sparse_banks_len && !all_cpus)
>  			goto ret_success;
>  
> -		if (!all_cpus &&
> -		    kvm_read_guest(kvm,
> -				   hc->ingpa + offsetof(struct hv_tlb_flush_ex,
> -							hv_vp_set.bank_contents),
> -				   sparse_banks,
> -				   sparse_banks_len))
> -			return HV_STATUS_INVALID_HYPERCALL_INPUT;
> +		if (!all_cpus) {
> +			if (hc->fast) {
> +				if (sparse_banks_len > KVM_HV_HYPERCALL_MAX_XMM_REGISTERS - 1)
> +					return HV_STATUS_INVALID_HYPERCALL_INPUT;
> +				for (i = 0, j = 1; i < sparse_banks_len; i += 2, j++) {

Nitpick: you don't really need 'j' here as 'j == i/2 + 1', right?

> +					sparse_banks[i + 0] = sse128_lo(hc->xmm[j]);

Using ' + 0' for indentation is ... unusual :-) I'm not opposed to it
here though.

> +					sparse_banks[i + 1] = sse128_hi(hc->xmm[j]);
> +				}
> +			} else {
> +				gpa = hc->ingpa;
> +				gpa += offsetof(struct hv_tlb_flush_ex,
> +						hv_vp_set.bank_contents);

Nitpick: if splitting these into two lines is only done to fit into 80
chars, then I'd say the requirement is no more, so we can be a bit wider:

 gpa = hc->ingpa + offsetof(...) 

> +				if (unlikely(kvm_read_guest(kvm, gpa, sparse_banks,
> +							    sparse_banks_len *
> +							    sizeof(sparse_banks[0]))))
> +					return HV_STATUS_INVALID_HYPERCALL_INPUT;
> +			}
> +		}
>  	}
>  
>  	cpumask_clear(&hv_vcpu->tlb_flush);
> @@ -1890,6 +1920,41 @@ static u16 kvm_hvcall_signal_event(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *h
>  	return HV_STATUS_SUCCESS;
>  }
>  
> +static bool is_xmm_fast_hypercall(struct kvm_hv_hcall *hc)
> +{
> +	switch (hc->code) {
> +	case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST:
> +	case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE:
> +	case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX:
> +	case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX:
> +		return true;
> +	}
> +
> +	return false;
> +}
> +
> +static inline void kvm_hv_hypercall_read_xmm(struct kvm_hv_hcall *hc)
> +{
> +	int reg;
> +
> +	kvm_fpu_get();
> +	for (reg = 0; reg < KVM_HV_HYPERCALL_MAX_XMM_REGISTERS; reg++)
> +		_kvm_read_sse_reg(reg, &hc->xmm[reg]);
> +	kvm_fpu_put();
> +	hc->xmm_dirty = false;
> +}
> +
> +static inline void kvm_hv_hypercall_write_xmm(struct kvm_hv_hcall *hc)
> +{
> +	int reg;
> +
> +	kvm_fpu_get();
> +	for (reg = 0; reg < KVM_HV_HYPERCALL_MAX_XMM_REGISTERS; reg++)
> +		_kvm_write_sse_reg(reg, &hc->xmm[reg]);
> +	kvm_fpu_put();
> +	hc->xmm_dirty = false;
> +}
> +
>  int kvm_hv_hypercall(struct kvm_vcpu *vcpu)
>  {
>  	struct kvm_hv_hcall hc;
> @@ -1926,6 +1991,9 @@ int kvm_hv_hypercall(struct kvm_vcpu *vcpu)
>  	hc.rep_idx = (hc.param >> HV_HYPERCALL_REP_START_OFFSET) & 0xfff;
>  	hc.rep = !!(hc.rep_cnt || hc.rep_idx);
>  
> +	if (hc.fast && is_xmm_fast_hypercall(&hc))
> +		kvm_hv_hypercall_read_xmm(&hc);
> +
>  	trace_kvm_hv_hypercall(hc.code, hc.fast, hc.rep_cnt, hc.rep_idx,
>  			       hc.ingpa, hc.outgpa);
>  
> @@ -1961,28 +2029,28 @@ int kvm_hv_hypercall(struct kvm_vcpu *vcpu)
>  				kvm_hv_hypercall_complete_userspace;
>  		return 0;
>  	case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST:
> -		if (unlikely(hc.fast || !hc.rep_cnt || hc.rep_idx)) {
> +		if (unlikely(!hc.rep_cnt || hc.rep_idx)) {
>  			ret = HV_STATUS_INVALID_HYPERCALL_INPUT;
>  			break;
>  		}
>  		ret = kvm_hv_flush_tlb(vcpu, &hc, false);
>  		break;
>  	case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE:
> -		if (unlikely(hc.fast || hc.rep)) {
> +		if (unlikely(hc.rep)) {
>  			ret = HV_STATUS_INVALID_HYPERCALL_INPUT;
>  			break;
>  		}
>  		ret = kvm_hv_flush_tlb(vcpu, &hc, false);
>  		break;
>  	case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX:
> -		if (unlikely(hc.fast || !hc.rep_cnt || hc.rep_idx)) {
> +		if (unlikely(!hc.rep_cnt || hc.rep_idx)) {
>  			ret = HV_STATUS_INVALID_HYPERCALL_INPUT;
>  			break;
>  		}
>  		ret = kvm_hv_flush_tlb(vcpu, &hc, true);
>  		break;
>  	case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX:
> -		if (unlikely(hc.fast || hc.rep)) {
> +		if (unlikely(hc.rep)) {
>  			ret = HV_STATUS_INVALID_HYPERCALL_INPUT;
>  			break;
>  		}
> @@ -2035,6 +2103,9 @@ int kvm_hv_hypercall(struct kvm_vcpu *vcpu)
>  		break;
>  	}
>  
> +	if (hc.xmm_dirty)
> +		kvm_hv_hypercall_write_xmm(&hc);
> +

Wei already mentioned that, but as 'xmm_dirty' is not being used in this
patch I'd suggest we move it out too.

>  	return kvm_hv_hypercall_complete(vcpu, ret);
>  }

-- 
Vitaly


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v2 2/4] KVM: hyper-v: Collect hypercall params into struct
  2021-04-13 13:53   ` Vitaly Kuznetsov
@ 2021-04-13 14:11     ` Siddharth Chandrasekaran
  0 siblings, 0 replies; 16+ messages in thread
From: Siddharth Chandrasekaran @ 2021-04-13 14:11 UTC (permalink / raw)
  To: Vitaly Kuznetsov
  Cc: Alexander Graf, Evgeny Iakovlev, Liran Alon, Ioannis Aslanidis,
	linux-hyperv, linux-kernel, kvm, K. Y. Srinivasan, Haiyang Zhang,
	Stephen Hemminger, Wei Liu, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, x86, H. Peter Anvin, Paolo Bonzini,
	Sean Christopherson, Wanpeng Li, Jim Mattson, Joerg Roedel

On Tue, Apr 13, 2021 at 03:53:01PM +0200, Vitaly Kuznetsov wrote:
> Siddharth Chandrasekaran <sidcha@amazon.de> writes:
> > As of now there are 7 parameters (and flags) that are used in various
> > hyper-v hypercall handlers. There are 6 more input/output parameters
> > passed from XMM registers which are to be added in an upcoming patch.
> >
> > To make passing arguments to the handlers more readable, capture all
> > these parameters into a single structure.
> >
> > Cc: Alexander Graf <graf@amazon.com>
> > Cc: Evgeny Iakovlev <eyakovl@amazon.de>
> > Signed-off-by: Siddharth Chandrasekaran <sidcha@amazon.de>
> > ---
> >  arch/x86/kvm/hyperv.c | 147 +++++++++++++++++++++++-------------------
> >  1 file changed, 79 insertions(+), 68 deletions(-)
> >
> > diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
> > index f98370a39936..8f6babd1ea0d 100644
> > --- a/arch/x86/kvm/hyperv.c
> > +++ b/arch/x86/kvm/hyperv.c
> > @@ -1623,7 +1623,18 @@ static __always_inline unsigned long *sparse_set_to_vcpu_mask(
> >       return vcpu_bitmap;
> >  }
> >
> > -static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, u64 ingpa, u16 rep_cnt, bool ex)
> > +struct kvm_hv_hcall {
> > +     u64 param;
> > +     u64 ingpa;
> > +     u64 outgpa;
> > +     u16 code;
> > +     u16 rep_cnt;
> > +     u16 rep_idx;
> > +     bool fast;
> > +     bool rep;
> > +};
> > +
> > +static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc, bool ex)
> 
> Nitpick: Would it make sense to also pack the fact that we're dealing
> with a hypercall using ExProcessorMasks into 'struct kvm_hv_hcall' and
> get rid of 'bool ex' parameter for both kvm_hv_flush_tlb() and
> kvm_hv_send_ipi()? 'struct kvm_hv_hcall' is already a synthetic
> aggregator for input and output so adding some other information there
> may not be that big of a stretch...

The other members of the struct are all hypercall parameters (or flags)
while the 'bool ex' is our way of handling ExProcessorMasks within the
same method.

Besides, in kvm_hv_hypercall() passing it as a 3rd argument looks
better than setting 'hc.ex = true' and then immediately calling the
method :-).

> >  {
> >       struct kvm *kvm = vcpu->kvm;
> >       struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
> > @@ -1638,7 +1649,7 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, u64 ingpa, u16 rep_cnt, bool
> >       bool all_cpus;
> >
> >       if (!ex) {
> > -             if (unlikely(kvm_read_guest(kvm, ingpa, &flush, sizeof(flush))))
> > +             if (unlikely(kvm_read_guest(kvm, hc->ingpa, &flush, sizeof(flush))))
> >                       return HV_STATUS_INVALID_HYPERCALL_INPUT;
> >
> >               trace_kvm_hv_flush_tlb(flush.processor_mask,
> > @@ -1657,7 +1668,7 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, u64 ingpa, u16 rep_cnt, bool
> >               all_cpus = (flush.flags & HV_FLUSH_ALL_PROCESSORS) ||
> >                       flush.processor_mask == 0;
> >       } else {
> > -             if (unlikely(kvm_read_guest(kvm, ingpa, &flush_ex,
> > +             if (unlikely(kvm_read_guest(kvm, hc->ingpa, &flush_ex,
> >                                           sizeof(flush_ex))))
> >                       return HV_STATUS_INVALID_HYPERCALL_INPUT;
> >
> > @@ -1679,8 +1690,8 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, u64 ingpa, u16 rep_cnt, bool
> >
> >               if (!all_cpus &&
> >                   kvm_read_guest(kvm,
> > -                                ingpa + offsetof(struct hv_tlb_flush_ex,
> > -                                                 hv_vp_set.bank_contents),
> > +                                hc->ingpa + offsetof(struct hv_tlb_flush_ex,
> > +                                                     hv_vp_set.bank_contents),
> >                                  sparse_banks,
> >                                  sparse_banks_len))
> >                       return HV_STATUS_INVALID_HYPERCALL_INPUT;
> > @@ -1700,9 +1711,9 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, u64 ingpa, u16 rep_cnt, bool
> >                                   NULL, vcpu_mask, &hv_vcpu->tlb_flush);
> >
> >  ret_success:
> > -     /* We always do full TLB flush, set rep_done = rep_cnt. */
> > +     /* We always do full TLB flush, set rep_done = hc->rep_cnt. */
> 
> Nitpicking: I'd suggest we word it a bit differently:
> 
> "We always do full TLB flush, set 'Reps completed' = 'Rep Count'."
> 
> so it matches TLFS rather than KVM internals.

Makes sense. Changed.

Thanks for your reviews.

~ Sid.



^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v2 4/4] KVM: hyper-v: Advertise support for fast XMM hypercalls
  2021-04-12 17:00 ` [PATCH v2 4/4] KVM: hyper-v: Advertise support for fast XMM hypercalls Siddharth Chandrasekaran
  2021-04-12 20:14   ` Wei Liu
@ 2021-04-13 14:26   ` Vitaly Kuznetsov
  1 sibling, 0 replies; 16+ messages in thread
From: Vitaly Kuznetsov @ 2021-04-13 14:26 UTC (permalink / raw)
  To: Siddharth Chandrasekaran
  Cc: Alexander Graf, Evgeny Iakovlev, Liran Alon, Ioannis Aslanidis,
	linux-hyperv, linux-kernel, kvm, K. Y. Srinivasan, Haiyang Zhang,
	Stephen Hemminger, Wei Liu, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, x86, H. Peter Anvin, Paolo Bonzini,
	Sean Christopherson, Wanpeng Li, Jim Mattson, Joerg Roedel

Siddharth Chandrasekaran <sidcha@amazon.de> writes:

> Now that all extant hypercalls that can use XMM registers (based on
> spec) for input/outputs are patched to support them, we can start
> advertising this feature to guests.
>
> Cc: Alexander Graf <graf@amazon.com>
> Cc: Evgeny Iakovlev <eyakovl@amazon.de>
> Signed-off-by: Siddharth Chandrasekaran <sidcha@amazon.de>
> ---
>  arch/x86/include/asm/hyperv-tlfs.h | 7 ++++++-
>  arch/x86/kvm/hyperv.c              | 2 ++
>  2 files changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/include/asm/hyperv-tlfs.h b/arch/x86/include/asm/hyperv-tlfs.h
> index e6cd3fee562b..716f12be411e 100644
> --- a/arch/x86/include/asm/hyperv-tlfs.h
> +++ b/arch/x86/include/asm/hyperv-tlfs.h
> @@ -52,7 +52,7 @@
>   * Support for passing hypercall input parameter block via XMM
>   * registers is available
>   */
> -#define HV_X64_HYPERCALL_PARAMS_XMM_AVAILABLE		BIT(4)
> +#define HV_X64_HYPERCALL_XMM_INPUT_AVAILABLE		BIT(4)
>  /* Support for a virtual guest idle state is available */
>  #define HV_X64_GUEST_IDLE_STATE_AVAILABLE		BIT(5)
>  /* Frequency MSRs available */
> @@ -61,6 +61,11 @@
>  #define HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE		BIT(10)
>  /* Support for debug MSRs available */
>  #define HV_FEATURE_DEBUG_MSRS_AVAILABLE			BIT(11)
> +/*
> + * Support for returning hypercall ouput block via XMM
> + * registers is available
> + */
> +#define HV_X64_HYPERCALL_XMM_OUTPUT_AVAILABLE		BIT(15)
>  /* stimer Direct Mode is available */
>  #define HV_STIMER_DIRECT_MODE_AVAILABLE			BIT(19)
>  
> diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
> index 1f9959aba70d..55838c266bcd 100644
> --- a/arch/x86/kvm/hyperv.c
> +++ b/arch/x86/kvm/hyperv.c
> @@ -2254,6 +2254,8 @@ int kvm_get_hv_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid2 *cpuid,
>  			ent->ebx |= HV_POST_MESSAGES;
>  			ent->ebx |= HV_SIGNAL_EVENTS;
>  
> +			ent->edx |= HV_X64_HYPERCALL_XMM_INPUT_AVAILABLE;
> +			ent->edx |= HV_X64_HYPERCALL_XMM_OUTPUT_AVAILABLE;
>  			ent->edx |= HV_FEATURE_FREQUENCY_MSRS_AVAILABLE;
>  			ent->edx |= HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE;

With 'ouput' typo fixed,

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v2 3/4] KVM: x86: kvm_hv_flush_tlb use inputs from XMM registers
  2021-04-13 14:09   ` Vitaly Kuznetsov
@ 2021-04-13 21:07     ` Siddharth Chandrasekaran
  0 siblings, 0 replies; 16+ messages in thread
From: Siddharth Chandrasekaran @ 2021-04-13 21:07 UTC (permalink / raw)
  To: Vitaly Kuznetsov
  Cc: Alexander Graf, Evgeny Iakovlev, Liran Alon, Ioannis Aslanidis,
	linux-hyperv, linux-kernel, kvm, K. Y. Srinivasan, Haiyang Zhang,
	Stephen Hemminger, Wei Liu, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, x86, H. Peter Anvin, Paolo Bonzini,
	Sean Christopherson, Wanpeng Li, Jim Mattson, Joerg Roedel

On Tue, Apr 13, 2021 at 04:09:48PM +0200, Vitaly Kuznetsov wrote:
> Siddharth Chandrasekaran <sidcha@amazon.de> writes:
> > Hyper-V supports the use of XMM registers to perform fast hypercalls.
> > This allows guests to take advantage of the improved performance of the
> > fast hypercall interface even though a hypercall may require more than
> > (the current maximum of) two input registers.
> >
> > The XMM fast hypercall interface uses six additional XMM registers (XMM0
> > to XMM5) to allow the guest to pass an input parameter block of up to
> > 112 bytes. Hyper-V can also return data back to the guest in the
> > remaining XMM registers that are not used by the current hypercall.
> >
> > Add framework to read/write to XMM registers in kvm_hv_hypercall() and
> > use the additional hypercall inputs from XMM registers in
> > kvm_hv_flush_tlb() when possible.
> >
> > Cc: Alexander Graf <graf@amazon.com>
> > Co-developed-by: Evgeny Iakovlev <eyakovl@amazon.de>
> > Signed-off-by: Evgeny Iakovlev <eyakovl@amazon.de>
> > Signed-off-by: Siddharth Chandrasekaran <sidcha@amazon.de>
> > ---
> >  arch/x86/kvm/hyperv.c | 109 ++++++++++++++++++++++++++++++++++--------
> >  1 file changed, 90 insertions(+), 19 deletions(-)
> >
> > diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
> > index 8f6babd1ea0d..1f9959aba70d 100644
> > --- a/arch/x86/kvm/hyperv.c
> > +++ b/arch/x86/kvm/hyperv.c
> > @@ -36,6 +36,7 @@
> >
> >  #include "trace.h"
> >  #include "irq.h"
> > +#include "fpu.h"
> >
> >  /* "Hv#1" signature */
> >  #define HYPERV_CPUID_SIGNATURE_EAX 0x31237648
> > @@ -1623,6 +1624,8 @@ static __always_inline unsigned long *sparse_set_to_vcpu_mask(
> >       return vcpu_bitmap;
> >  }
> >
> > +#define KVM_HV_HYPERCALL_MAX_XMM_REGISTERS  6
> 
> Nitpick: this is not KVM-specific so could probably go to arch/x86/include/asm/hyperv-tlfs.h

Ack.

> > +
> >  struct kvm_hv_hcall {
> >       u64 param;
> >       u64 ingpa;
> > @@ -1632,10 +1635,14 @@ struct kvm_hv_hcall {
> >       u16 rep_idx;
> >       bool fast;
> >       bool rep;
> > +     sse128_t xmm[KVM_HV_HYPERCALL_MAX_XMM_REGISTERS];
> > +     bool xmm_dirty;
> >  };
> >
> >  static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc, bool ex)
> >  {
> > +     int i, j;
> > +     gpa_t gpa;
> >       struct kvm *kvm = vcpu->kvm;
> >       struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
> >       struct hv_tlb_flush_ex flush_ex;
> > @@ -1649,8 +1656,15 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc, bool
> >       bool all_cpus;
> >
> >       if (!ex) {
> > -             if (unlikely(kvm_read_guest(kvm, hc->ingpa, &flush, sizeof(flush))))
> > -                     return HV_STATUS_INVALID_HYPERCALL_INPUT;
> > +             if (hc->fast) {
> > +                     flush.address_space = hc->ingpa;
> > +                     flush.flags = hc->outgpa;
> > +                     flush.processor_mask = sse128_lo(hc->xmm[0]);
> > +             } else {
> > +                     if (unlikely(kvm_read_guest(kvm, hc->ingpa,
> > +                                                 &flush, sizeof(flush))))
> > +                             return HV_STATUS_INVALID_HYPERCALL_INPUT;
> > +             }
> >
> >               trace_kvm_hv_flush_tlb(flush.processor_mask,
> >                                      flush.address_space, flush.flags);
> > @@ -1668,9 +1682,16 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc, bool
> >               all_cpus = (flush.flags & HV_FLUSH_ALL_PROCESSORS) ||
> >                       flush.processor_mask == 0;
> >       } else {
> > -             if (unlikely(kvm_read_guest(kvm, hc->ingpa, &flush_ex,
> > -                                         sizeof(flush_ex))))
> > -                     return HV_STATUS_INVALID_HYPERCALL_INPUT;
> > +             if (hc->fast) {
> > +                     flush_ex.address_space = hc->ingpa;
> > +                     flush_ex.flags = hc->outgpa;
> > +                     memcpy(&flush_ex.hv_vp_set,
> > +                            &hc->xmm[0], sizeof(hc->xmm[0]));
> > +             } else {
> > +                     if (unlikely(kvm_read_guest(kvm, hc->ingpa, &flush_ex,
> > +                                                 sizeof(flush_ex))))
> > +                             return HV_STATUS_INVALID_HYPERCALL_INPUT;
> > +             }
> >
> >               trace_kvm_hv_flush_tlb_ex(flush_ex.hv_vp_set.valid_bank_mask,
> >                                         flush_ex.hv_vp_set.format,
> > @@ -1681,20 +1702,29 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc, bool
> >               all_cpus = flush_ex.hv_vp_set.format !=
> >                       HV_GENERIC_SET_SPARSE_4K;
> >
> > -             sparse_banks_len =
> > -                     bitmap_weight((unsigned long *)&valid_bank_mask, 64) *
> > -                     sizeof(sparse_banks[0]);
> > +             sparse_banks_len = bitmap_weight((unsigned long *)&valid_bank_mask, 64);
> >
> >               if (!sparse_banks_len && !all_cpus)
> >                       goto ret_success;
> >
> > -             if (!all_cpus &&
> > -                 kvm_read_guest(kvm,
> > -                                hc->ingpa + offsetof(struct hv_tlb_flush_ex,
> > -                                                     hv_vp_set.bank_contents),
> > -                                sparse_banks,
> > -                                sparse_banks_len))
> > -                     return HV_STATUS_INVALID_HYPERCALL_INPUT;
> > +             if (!all_cpus) {
> > +                     if (hc->fast) {
> > +                             if (sparse_banks_len > KVM_HV_HYPERCALL_MAX_XMM_REGISTERS - 1)
> > +                                     return HV_STATUS_INVALID_HYPERCALL_INPUT;
> > +                             for (i = 0, j = 1; i < sparse_banks_len; i += 2, j++) {
> 
> Nitpick: you don't really need 'j' here as 'j == i/2 + 1', right?

Yes, you are right. Changed.

> > +                                     sparse_banks[i + 0] = sse128_lo(hc->xmm[j]);
> 
> Using ' + 0' for indentation is ... unusual :-) I'm not opposed to it
> here though.

Old habit :-).

> > +                                     sparse_banks[i + 1] = sse128_hi(hc->xmm[j]);
> > +                             }
> > +                     } else {
> > +                             gpa = hc->ingpa;
> > +                             gpa += offsetof(struct hv_tlb_flush_ex,
> > +                                             hv_vp_set.bank_contents);
> 
> Nitpick: if splitting these into two lines is only done to fit into 80
> chars, then I'd say the requirement is no more, so we can be a bit wider:
> 
>  gpa = hc->ingpa + offsetof(...)

Ack.

> > +                             if (unlikely(kvm_read_guest(kvm, gpa, sparse_banks,
> > +                                                         sparse_banks_len *
> > +                                                         sizeof(sparse_banks[0]))))
> > +                                     return HV_STATUS_INVALID_HYPERCALL_INPUT;
> > +                     }
> > +             }
> >       }
> >
> >       cpumask_clear(&hv_vcpu->tlb_flush);
> > @@ -1890,6 +1920,41 @@ static u16 kvm_hvcall_signal_event(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *h
> >       return HV_STATUS_SUCCESS;
> >  }
> >
> > +static bool is_xmm_fast_hypercall(struct kvm_hv_hcall *hc)
> > +{
> > +     switch (hc->code) {
> > +     case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST:
> > +     case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE:
> > +     case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX:
> > +     case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX:
> > +             return true;
> > +     }
> > +
> > +     return false;
> > +}
> > +
> > +static inline void kvm_hv_hypercall_read_xmm(struct kvm_hv_hcall *hc)
> > +{
> > +     int reg;
> > +
> > +     kvm_fpu_get();
> > +     for (reg = 0; reg < KVM_HV_HYPERCALL_MAX_XMM_REGISTERS; reg++)
> > +             _kvm_read_sse_reg(reg, &hc->xmm[reg]);
> > +     kvm_fpu_put();
> > +     hc->xmm_dirty = false;
> > +}
> > +
> > +static inline void kvm_hv_hypercall_write_xmm(struct kvm_hv_hcall *hc)
> > +{
> > +     int reg;
> > +
> > +     kvm_fpu_get();
> > +     for (reg = 0; reg < KVM_HV_HYPERCALL_MAX_XMM_REGISTERS; reg++)
> > +             _kvm_write_sse_reg(reg, &hc->xmm[reg]);
> > +     kvm_fpu_put();
> > +     hc->xmm_dirty = false;
> > +}
> > +
> >  int kvm_hv_hypercall(struct kvm_vcpu *vcpu)
> >  {
> >       struct kvm_hv_hcall hc;
> > @@ -1926,6 +1991,9 @@ int kvm_hv_hypercall(struct kvm_vcpu *vcpu)
> >       hc.rep_idx = (hc.param >> HV_HYPERCALL_REP_START_OFFSET) & 0xfff;
> >       hc.rep = !!(hc.rep_cnt || hc.rep_idx);
> >
> > +     if (hc.fast && is_xmm_fast_hypercall(&hc))
> > +             kvm_hv_hypercall_read_xmm(&hc);
> > +
> >       trace_kvm_hv_hypercall(hc.code, hc.fast, hc.rep_cnt, hc.rep_idx,
> >                              hc.ingpa, hc.outgpa);
> >
> > @@ -1961,28 +2029,28 @@ int kvm_hv_hypercall(struct kvm_vcpu *vcpu)
> >                               kvm_hv_hypercall_complete_userspace;
> >               return 0;
> >       case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST:
> > -             if (unlikely(hc.fast || !hc.rep_cnt || hc.rep_idx)) {
> > +             if (unlikely(!hc.rep_cnt || hc.rep_idx)) {
> >                       ret = HV_STATUS_INVALID_HYPERCALL_INPUT;
> >                       break;
> >               }
> >               ret = kvm_hv_flush_tlb(vcpu, &hc, false);
> >               break;
> >       case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE:
> > -             if (unlikely(hc.fast || hc.rep)) {
> > +             if (unlikely(hc.rep)) {
> >                       ret = HV_STATUS_INVALID_HYPERCALL_INPUT;
> >                       break;
> >               }
> >               ret = kvm_hv_flush_tlb(vcpu, &hc, false);
> >               break;
> >       case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX:
> > -             if (unlikely(hc.fast || !hc.rep_cnt || hc.rep_idx)) {
> > +             if (unlikely(!hc.rep_cnt || hc.rep_idx)) {
> >                       ret = HV_STATUS_INVALID_HYPERCALL_INPUT;
> >                       break;
> >               }
> >               ret = kvm_hv_flush_tlb(vcpu, &hc, true);
> >               break;
> >       case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX:
> > -             if (unlikely(hc.fast || hc.rep)) {
> > +             if (unlikely(hc.rep)) {
> >                       ret = HV_STATUS_INVALID_HYPERCALL_INPUT;
> >                       break;
> >               }
> > @@ -2035,6 +2103,9 @@ int kvm_hv_hypercall(struct kvm_vcpu *vcpu)
> >               break;
> >       }
> >
> > +     if (hc.xmm_dirty)
> > +             kvm_hv_hypercall_write_xmm(&hc);
> > +
> 
> Wei already mentioned that, but as 'xmm_dirty' is not being used in this
> patch I'd suggest we move it out too.

Okay, I should remove HV_X64_HYPERCALL_XMM_OUTPUT_AVAILABLE for now then.

~ Sid.



^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2021-04-13 21:08 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-04-12 17:00 [PATCH v2 0/4] Add support for XMM fast hypercalls Siddharth Chandrasekaran
2021-04-12 17:00 ` [PATCH v2 1/4] KVM: x86: Move FPU register accessors into fpu.h Siddharth Chandrasekaran
2021-04-13 13:40   ` Vitaly Kuznetsov
2021-04-13 13:46     ` Siddharth Chandrasekaran
2021-04-12 17:00 ` [PATCH v2 2/4] KVM: hyper-v: Collect hypercall params into struct Siddharth Chandrasekaran
2021-04-13 13:53   ` Vitaly Kuznetsov
2021-04-13 14:11     ` Siddharth Chandrasekaran
2021-04-12 17:00 ` [PATCH v2 3/4] KVM: x86: kvm_hv_flush_tlb use inputs from XMM registers Siddharth Chandrasekaran
2021-04-12 20:13   ` Wei Liu
2021-04-13  9:09     ` Siddharth Chandrasekaran
2021-04-13 14:09   ` Vitaly Kuznetsov
2021-04-13 21:07     ` Siddharth Chandrasekaran
2021-04-12 17:00 ` [PATCH v2 4/4] KVM: hyper-v: Advertise support for fast XMM hypercalls Siddharth Chandrasekaran
2021-04-12 20:14   ` Wei Liu
2021-04-13  9:11     ` Siddharth Chandrasekaran
2021-04-13 14:26   ` Vitaly Kuznetsov
