* [PATCH 00/22] KVM: x86: Fill *_x86_ops via kvm-x86-ops.h
From: Sean Christopherson @ 2022-01-28  0:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, Like Xu

Use kvm-x86-ops.h to fill vmx_x86_ops and svm_x86_ops, with a bunch of
cleanup along the way to make that happen.  Aside from removing a lot of
boilerplate code, the part I like most about filling via kvm-x86-ops.h
is that it enforces that (a) new kvm_x86_ops hooks get added to
kvm-x86-ops.h and (b) vendor code has to implement _something_, even if
it's a redirect to NULL, which documents what's going on.
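
As a rough sketch (macro spelling illustrative, not the exact form used
in the patches), the fill pattern boils down to an X-macro include:

	/* in vmx.c, assuming every hook is implemented as vmx_<hook>: */
	#define KVM_X86_OP(func) .func = vmx_##func,

	static struct kvm_x86_ops vmx_x86_ops __initdata = {
	#include <asm/kvm-x86-ops.h>
	};

Common code generates its static_call() declarations from the same
header, so a hook without an entry fails to compile at its call sites,
and because the vendor expansion references vmx_<hook> by name, vendor
code must provide _something_ for every entry.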

Patch 01 isn't strictly necessary for this series; my hope is that
Maxim's bug fix for the AVIC race can go on top without introducing too
much conflict for either of us.

Sean Christopherson (22):
  KVM: x86: Drop unnecessary and confusing KVM_X86_OP_NULL macro
  KVM: x86: Move delivery of non-APICv interrupt into vendor code
  KVM: x86: Drop export for .tlb_flush_current() static_call key
  KVM: x86: Rename kvm_x86_ops pointers to align w/ preferred vendor
    names
  KVM: x86: Use static_call() for .vcpu_deliver_sipi_vector()
  KVM: VMX: Call vmx_get_cpl() directly in handle_dr()
  KVM: xen: Use static_call() for invoking kvm_x86_ops hooks
  KVM: nVMX: Refactor PMU refresh to avoid referencing
    kvm_x86_ops.pmu_ops
  KVM: x86: Uninline and export hv_track_root_tdp()
  KVM: x86: Unexport kvm_x86_ops
  KVM: x86: Use static_call() for copy/move encryption context ioctls()
  KVM: x86: Allow different macros for APICv, CVM, and Hyper-V
    kvm_x86_ops
  KVM: VMX: Rename VMX functions to conform to kvm_x86_ops names
  KVM: VMX: Use kvm-x86-ops.h to fill vmx_x86_ops
  KVM: x86: Move get_cs_db_l_bits() helper to SVM
  KVM: SVM: Rename svm_flush_tlb() to svm_flush_tlb_current()
  KVM: SVM: Remove unused MAX_INST_SIZE #define
  KVM: SVM: Rename AVIC helpers to use "avic" prefix instead of "svm"
  KVM: x86: Use more verbose names for mem encrypt kvm_x86_ops hooks
  KVM: SVM: Rename SEV implementations to conform to kvm_x86_ops hooks
  KVM: SVM: Rename hook implementations to conform to kvm_x86_ops' names
  KVM: SVM: Use kvm-x86-ops.h to fill svm_x86_ops

 arch/x86/include/asm/kvm-x86-ops.h | 131 ++++++++++--------
 arch/x86/include/asm/kvm_host.h    |  32 ++---
 arch/x86/kvm/kvm_onhyperv.c        |  14 ++
 arch/x86/kvm/kvm_onhyperv.h        |  14 +-
 arch/x86/kvm/lapic.c               |  12 +-
 arch/x86/kvm/mmu/mmu.c             |   6 +-
 arch/x86/kvm/svm/avic.c            |  28 ++--
 arch/x86/kvm/svm/sev.c             |  18 +--
 arch/x86/kvm/svm/svm.c             | 214 ++++++++++-------------------
 arch/x86/kvm/svm/svm.h             |  41 +++---
 arch/x86/kvm/vmx/nested.c          |   5 +-
 arch/x86/kvm/vmx/nested.h          |   3 +-
 arch/x86/kvm/vmx/pmu_intel.c       |   3 +-
 arch/x86/kvm/vmx/posted_intr.c     |   6 +-
 arch/x86/kvm/vmx/posted_intr.h     |   4 +-
 arch/x86/kvm/vmx/vmx.c             | 178 +++++++-----------------
 arch/x86/kvm/x86.c                 |  79 +++++------
 arch/x86/kvm/xen.c                 |   4 +-
 18 files changed, 324 insertions(+), 468 deletions(-)


base-commit: b029c138e8f090f5cb9ba77ef20509f903ef0004
-- 
2.35.0.rc0.227.g00780c9af4-goog


* [PATCH 01/22] KVM: x86: Drop unnecessary and confusing KVM_X86_OP_NULL macro
From: Sean Christopherson @ 2022-01-28  0:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, Like Xu

Drop KVM_X86_OP_NULL, which is superfluous and confusing.  The macro is
just a "pass-through" to KVM_X86_OP; it was added with the intent of
actually using it in the future, but that obviously never happened.  The
name is confusing because its intended use was to provide a way for
vendor implementations to specify a NULL pointer, and even if it were
used, it wouldn't necessarily be synonymous with declaring a kvm_x86_op
via DEFINE_STATIC_CALL_NULL().

Lastly, actually using KVM_X86_OP_NULL as intended isn't a maintainable
approach: it would bleed vendor details into common x86 code, and would
either be prone to bit rot or require modifying common x86 code whenever
a vendor implementation changes.
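
For example, a hypothetical sketch of what using it "as intended" would
have looked like, and why that bleeds vendor details into common code:

	/* in the common kvm-x86-ops.h (hypothetical): */
	KVM_X86_OP(vcpu_create)
	KVM_X86_OP_NULL(vm_destroy)	/* NULL for one vendor... today */

Common x86 would have to track exactly which hooks each vendor leaves
NULL, and flip macros every time an implementation is added or dropped.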

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/include/asm/kvm-x86-ops.h | 76 ++++++++++++++----------------
 arch/x86/include/asm/kvm_host.h    |  2 -
 arch/x86/kvm/x86.c                 |  1 -
 3 files changed, 35 insertions(+), 44 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 631d5040b31e..e07151b2d1f6 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -1,25 +1,20 @@
 /* SPDX-License-Identifier: GPL-2.0 */
-#if !defined(KVM_X86_OP) || !defined(KVM_X86_OP_NULL)
+#ifndef KVM_X86_OP
 BUILD_BUG_ON(1)
 #endif
 
 /*
- * KVM_X86_OP() and KVM_X86_OP_NULL() are used to help generate
- * "static_call()"s. They are also intended for use when defining
- * the vmx/svm kvm_x86_ops. KVM_X86_OP() can be used for those
- * functions that follow the [svm|vmx]_func_name convention.
- * KVM_X86_OP_NULL() can leave a NULL definition for the
- * case where there is no definition or a function name that
- * doesn't match the typical naming convention is supplied.
+ * Invoke KVM_X86_OP() on all functions in struct kvm_x86_ops, e.g. to generate
+ * static_call declarations, definitions and updates.
  */
-KVM_X86_OP_NULL(hardware_enable)
-KVM_X86_OP_NULL(hardware_disable)
-KVM_X86_OP_NULL(hardware_unsetup)
-KVM_X86_OP_NULL(cpu_has_accelerated_tpr)
+KVM_X86_OP(hardware_enable)
+KVM_X86_OP(hardware_disable)
+KVM_X86_OP(hardware_unsetup)
+KVM_X86_OP(cpu_has_accelerated_tpr)
 KVM_X86_OP(has_emulated_msr)
 KVM_X86_OP(vcpu_after_set_cpuid)
 KVM_X86_OP(vm_init)
-KVM_X86_OP_NULL(vm_destroy)
+KVM_X86_OP(vm_destroy)
 KVM_X86_OP(vcpu_create)
 KVM_X86_OP(vcpu_free)
 KVM_X86_OP(vcpu_reset)
@@ -33,9 +28,9 @@ KVM_X86_OP(get_segment_base)
 KVM_X86_OP(get_segment)
 KVM_X86_OP(get_cpl)
 KVM_X86_OP(set_segment)
-KVM_X86_OP_NULL(get_cs_db_l_bits)
+KVM_X86_OP(get_cs_db_l_bits)
 KVM_X86_OP(set_cr0)
-KVM_X86_OP_NULL(post_set_cr3)
+KVM_X86_OP(post_set_cr3)
 KVM_X86_OP(is_valid_cr4)
 KVM_X86_OP(set_cr4)
 KVM_X86_OP(set_efer)
@@ -51,15 +46,15 @@ KVM_X86_OP(set_rflags)
 KVM_X86_OP(get_if_flag)
 KVM_X86_OP(tlb_flush_all)
 KVM_X86_OP(tlb_flush_current)
-KVM_X86_OP_NULL(tlb_remote_flush)
-KVM_X86_OP_NULL(tlb_remote_flush_with_range)
+KVM_X86_OP(tlb_remote_flush)
+KVM_X86_OP(tlb_remote_flush_with_range)
 KVM_X86_OP(tlb_flush_gva)
 KVM_X86_OP(tlb_flush_guest)
 KVM_X86_OP(vcpu_pre_run)
 KVM_X86_OP(run)
-KVM_X86_OP_NULL(handle_exit)
-KVM_X86_OP_NULL(skip_emulated_instruction)
-KVM_X86_OP_NULL(update_emulated_instruction)
+KVM_X86_OP(handle_exit)
+KVM_X86_OP(skip_emulated_instruction)
+KVM_X86_OP(update_emulated_instruction)
 KVM_X86_OP(set_interrupt_shadow)
 KVM_X86_OP(get_interrupt_shadow)
 KVM_X86_OP(patch_hypercall)
@@ -78,17 +73,17 @@ KVM_X86_OP(check_apicv_inhibit_reasons)
 KVM_X86_OP(refresh_apicv_exec_ctrl)
 KVM_X86_OP(hwapic_irr_update)
 KVM_X86_OP(hwapic_isr_update)
-KVM_X86_OP_NULL(guest_apic_has_interrupt)
+KVM_X86_OP(guest_apic_has_interrupt)
 KVM_X86_OP(load_eoi_exitmap)
 KVM_X86_OP(set_virtual_apic_mode)
-KVM_X86_OP_NULL(set_apic_access_page_addr)
+KVM_X86_OP(set_apic_access_page_addr)
 KVM_X86_OP(deliver_posted_interrupt)
-KVM_X86_OP_NULL(sync_pir_to_irr)
+KVM_X86_OP(sync_pir_to_irr)
 KVM_X86_OP(set_tss_addr)
 KVM_X86_OP(set_identity_map_addr)
 KVM_X86_OP(get_mt_mask)
 KVM_X86_OP(load_mmu_pgd)
-KVM_X86_OP_NULL(has_wbinvd_exit)
+KVM_X86_OP(has_wbinvd_exit)
 KVM_X86_OP(get_l2_tsc_offset)
 KVM_X86_OP(get_l2_tsc_multiplier)
 KVM_X86_OP(write_tsc_offset)
@@ -96,32 +91,31 @@ KVM_X86_OP(write_tsc_multiplier)
 KVM_X86_OP(get_exit_info)
 KVM_X86_OP(check_intercept)
 KVM_X86_OP(handle_exit_irqoff)
-KVM_X86_OP_NULL(request_immediate_exit)
+KVM_X86_OP(request_immediate_exit)
 KVM_X86_OP(sched_in)
-KVM_X86_OP_NULL(update_cpu_dirty_logging)
-KVM_X86_OP_NULL(vcpu_blocking)
-KVM_X86_OP_NULL(vcpu_unblocking)
-KVM_X86_OP_NULL(update_pi_irte)
-KVM_X86_OP_NULL(start_assignment)
-KVM_X86_OP_NULL(apicv_post_state_restore)
-KVM_X86_OP_NULL(dy_apicv_has_pending_interrupt)
-KVM_X86_OP_NULL(set_hv_timer)
-KVM_X86_OP_NULL(cancel_hv_timer)
+KVM_X86_OP(update_cpu_dirty_logging)
+KVM_X86_OP(vcpu_blocking)
+KVM_X86_OP(vcpu_unblocking)
+KVM_X86_OP(update_pi_irte)
+KVM_X86_OP(start_assignment)
+KVM_X86_OP(apicv_post_state_restore)
+KVM_X86_OP(dy_apicv_has_pending_interrupt)
+KVM_X86_OP(set_hv_timer)
+KVM_X86_OP(cancel_hv_timer)
 KVM_X86_OP(setup_mce)
 KVM_X86_OP(smi_allowed)
 KVM_X86_OP(enter_smm)
 KVM_X86_OP(leave_smm)
 KVM_X86_OP(enable_smi_window)
-KVM_X86_OP_NULL(mem_enc_op)
-KVM_X86_OP_NULL(mem_enc_reg_region)
-KVM_X86_OP_NULL(mem_enc_unreg_region)
+KVM_X86_OP(mem_enc_op)
+KVM_X86_OP(mem_enc_reg_region)
+KVM_X86_OP(mem_enc_unreg_region)
 KVM_X86_OP(get_msr_feature)
 KVM_X86_OP(can_emulate_instruction)
 KVM_X86_OP(apic_init_signal_blocked)
-KVM_X86_OP_NULL(enable_direct_tlbflush)
-KVM_X86_OP_NULL(migrate_timers)
+KVM_X86_OP(enable_direct_tlbflush)
+KVM_X86_OP(migrate_timers)
 KVM_X86_OP(msr_filter_changed)
-KVM_X86_OP_NULL(complete_emulated_msr)
+KVM_X86_OP(complete_emulated_msr)
 
 #undef KVM_X86_OP
-#undef KVM_X86_OP_NULL
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index b2c3721b1c98..756806d2e801 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1538,14 +1538,12 @@ extern struct kvm_x86_ops kvm_x86_ops;
 
 #define KVM_X86_OP(func) \
 	DECLARE_STATIC_CALL(kvm_x86_##func, *(((struct kvm_x86_ops *)0)->func));
-#define KVM_X86_OP_NULL KVM_X86_OP
 #include <asm/kvm-x86-ops.h>
 
 static inline void kvm_ops_static_call_update(void)
 {
 #define KVM_X86_OP(func) \
 	static_call_update(kvm_x86_##func, kvm_x86_ops.func);
-#define KVM_X86_OP_NULL KVM_X86_OP
 #include <asm/kvm-x86-ops.h>
 }
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 8033eca6f3a1..ebab514ec82a 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -129,7 +129,6 @@ EXPORT_SYMBOL_GPL(kvm_x86_ops);
 #define KVM_X86_OP(func)					     \
 	DEFINE_STATIC_CALL_NULL(kvm_x86_##func,			     \
 				*(((struct kvm_x86_ops *)0)->func));
-#define KVM_X86_OP_NULL KVM_X86_OP
 #include <asm/kvm-x86-ops.h>
 EXPORT_STATIC_CALL_GPL(kvm_x86_get_cs_db_l_bits);
 EXPORT_STATIC_CALL_GPL(kvm_x86_cache_reg);
-- 
2.35.0.rc0.227.g00780c9af4-goog


* [PATCH 02/22] KVM: x86: Move delivery of non-APICv interrupt into vendor code
From: Sean Christopherson @ 2022-01-28  0:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, Like Xu

Handle non-APICv interrupt delivery in vendor code, even though it means
VMX and SVM will temporarily have duplicate code.  SVM's AVIC has a race
condition that requires KVM to fall back to legacy interrupt injection
_after_ the interrupt has been logged in the vIRR, i.e. to fix the race,
SVM will need to open code the full flow anyway[*].  Refactor the code
so that the SVM bug can be fixed without introducing other issues, e.g.
SVM would otherwise return "success" and thus invoke
trace_kvm_apicv_accept_irq() even when delivery through the AVIC failed,
and to opportunistically prepare for using KVM_X86_OP to fill each
vendor's kvm_x86_ops struct, which will rely on the vendor function
matching the kvm_x86_ops pointer name.

No functional change intended.

[*] https://lore.kernel.org/all/20211213104634.199141-4-mlevitsk@redhat.com

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/include/asm/kvm-x86-ops.h |  2 +-
 arch/x86/include/asm/kvm_host.h    |  3 ++-
 arch/x86/kvm/lapic.c               | 10 ++--------
 arch/x86/kvm/svm/svm.c             | 17 ++++++++++++++++-
 arch/x86/kvm/vmx/vmx.c             | 17 ++++++++++++++++-
 5 files changed, 37 insertions(+), 12 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index e07151b2d1f6..fd134c436029 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -77,7 +77,7 @@ KVM_X86_OP(guest_apic_has_interrupt)
 KVM_X86_OP(load_eoi_exitmap)
 KVM_X86_OP(set_virtual_apic_mode)
 KVM_X86_OP(set_apic_access_page_addr)
-KVM_X86_OP(deliver_posted_interrupt)
+KVM_X86_OP(deliver_interrupt)
 KVM_X86_OP(sync_pir_to_irr)
 KVM_X86_OP(set_tss_addr)
 KVM_X86_OP(set_identity_map_addr)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 756806d2e801..c895e94ffb80 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1409,7 +1409,8 @@ struct kvm_x86_ops {
 	void (*load_eoi_exitmap)(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap);
 	void (*set_virtual_apic_mode)(struct kvm_vcpu *vcpu);
 	void (*set_apic_access_page_addr)(struct kvm_vcpu *vcpu);
-	int (*deliver_posted_interrupt)(struct kvm_vcpu *vcpu, int vector);
+	void (*deliver_interrupt)(struct kvm_lapic *apic, int delivery_mode,
+				  int trig_mode, int vector);
 	int (*sync_pir_to_irr)(struct kvm_vcpu *vcpu);
 	int (*set_tss_addr)(struct kvm *kvm, unsigned int addr);
 	int (*set_identity_map_addr)(struct kvm *kvm, u64 ident_addr);
diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index 4662469240bc..d7e6fde82d25 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -1096,14 +1096,8 @@ static int __apic_accept_irq(struct kvm_lapic *apic, int delivery_mode,
 						       apic->regs + APIC_TMR);
 		}
 
-		if (static_call(kvm_x86_deliver_posted_interrupt)(vcpu, vector)) {
-			kvm_lapic_set_irr(vector, apic);
-			kvm_make_request(KVM_REQ_EVENT, vcpu);
-			kvm_vcpu_kick(vcpu);
-		} else {
-			trace_kvm_apicv_accept_irq(vcpu->vcpu_id, delivery_mode,
-						   trig_mode, vector);
-		}
+		static_call(kvm_x86_deliver_interrupt)(apic, delivery_mode,
+						       trig_mode, vector);
 		break;
 
 	case APIC_DM_REMRD:
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index d73bff4f9e86..75d277067141 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3293,6 +3293,21 @@ static void svm_set_irq(struct kvm_vcpu *vcpu)
 		SVM_EVTINJ_VALID | SVM_EVTINJ_TYPE_INTR;
 }
 
+static void svm_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,
+				  int trig_mode, int vector)
+{
+	struct kvm_vcpu *vcpu = apic->vcpu;
+
+	if (svm_deliver_avic_intr(vcpu, vector)) {
+		kvm_lapic_set_irr(vector, apic);
+		kvm_make_request(KVM_REQ_EVENT, vcpu);
+		kvm_vcpu_kick(vcpu);
+	} else {
+		trace_kvm_apicv_accept_irq(vcpu->vcpu_id, delivery_mode,
+					   trig_mode, vector);
+	}
+}
+
 static void svm_update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
@@ -4547,7 +4562,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 	.pmu_ops = &amd_pmu_ops,
 	.nested_ops = &svm_nested_ops,
 
-	.deliver_posted_interrupt = svm_deliver_avic_intr,
+	.deliver_interrupt = svm_deliver_interrupt,
 	.dy_apicv_has_pending_interrupt = svm_dy_apicv_has_pending_interrupt,
 	.update_pi_irte = svm_update_pi_irte,
 	.setup_mce = svm_setup_mce,
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 92e30bfdf785..97d6edbd25a0 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -4041,6 +4041,21 @@ static int vmx_deliver_posted_interrupt(struct kvm_vcpu *vcpu, int vector)
 	return 0;
 }
 
+static void vmx_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,
+				  int trig_mode, int vector)
+{
+	struct kvm_vcpu *vcpu = apic->vcpu;
+
+	if (vmx_deliver_posted_interrupt(vcpu, vector)) {
+		kvm_lapic_set_irr(vector, apic);
+		kvm_make_request(KVM_REQ_EVENT, vcpu);
+		kvm_vcpu_kick(vcpu);
+	} else {
+		trace_kvm_apicv_accept_irq(vcpu->vcpu_id, delivery_mode,
+					   trig_mode, vector);
+	}
+}
+
 /*
  * Set up the vmcs's constant host-state fields, i.e., host-state fields that
  * will not change in the lifetime of the guest.
@@ -7766,7 +7781,7 @@ static struct kvm_x86_ops vmx_x86_ops __initdata = {
 	.hwapic_isr_update = vmx_hwapic_isr_update,
 	.guest_apic_has_interrupt = vmx_guest_apic_has_interrupt,
 	.sync_pir_to_irr = vmx_sync_pir_to_irr,
-	.deliver_posted_interrupt = vmx_deliver_posted_interrupt,
+	.deliver_interrupt = vmx_deliver_interrupt,
 	.dy_apicv_has_pending_interrupt = pi_has_pending_interrupt,
 
 	.set_tss_addr = vmx_set_tss_addr,
-- 
2.35.0.rc0.227.g00780c9af4-goog


* [PATCH 03/22] KVM: x86: Drop export for .tlb_flush_current() static_call key
From: Sean Christopherson @ 2022-01-28  0:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, Like Xu

Remove the export of kvm_x86_tlb_flush_current() as there are no longer
any users outside of common x86 code.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/x86.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index ebab514ec82a..a2821c46dfa4 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -132,7 +132,6 @@ EXPORT_SYMBOL_GPL(kvm_x86_ops);
 #include <asm/kvm-x86-ops.h>
 EXPORT_STATIC_CALL_GPL(kvm_x86_get_cs_db_l_bits);
 EXPORT_STATIC_CALL_GPL(kvm_x86_cache_reg);
-EXPORT_STATIC_CALL_GPL(kvm_x86_tlb_flush_current);
 
 static bool __read_mostly ignore_msrs = 0;
 module_param(ignore_msrs, bool, S_IRUGO | S_IWUSR);
-- 
2.35.0.rc0.227.g00780c9af4-goog


* [PATCH 04/22] KVM: x86: Rename kvm_x86_ops pointers to align w/ preferred vendor names
From: Sean Christopherson @ 2022-01-28  0:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, Like Xu

Rename a variety of kvm_x86_ops function pointers so that the preferred
name for vendor implementations follows the pattern <vendor>_<function>,
e.g. rename .run() to .vcpu_run() to match {svm,vmx}_vcpu_run().  This
will allow vendor implementations to be wired up via the KVM_X86_OP
macro.

In many cases, VMX and SVM "disagree" on the preferred name, though in
reality it's VMX and x86 that disagree as SVM blindly prepended "svm_"
to the kvm_x86_ops name.  Justification for using the VMX nomenclature:

  - set_{irq,nmi} => inject_{irq,nmi} because the helper is injecting an
    event that has already been "set" in e.g. the vIRR.  SVM's relevant
    VMCB field is even named event_inj, and KVM's stat is irq_injections.

  - prepare_guest_switch => prepare_switch_to_guest because the former is
    ambiguous, e.g. it could mean switching between multiple guests,
    switching from the guest to host, etc...

  - update_pi_irte => pi_update_irte to match the rest of VMX's posted
    interrupt naming scheme, which is vmx_pi_<blah>().

  - start_assignment => pi_start_assignment to again follow VMX's posted
    interrupt naming scheme, and to provide context for what bit of code
    might care about an otherwise undescribed "assignment".

The "tlb_flush" => "flush_tlb" creates an inconsistency with respect to
Hyper-V's "tlb_remote_flush" hooks, but Hyper-V really is the one that's
wrong.  x86, VMX, and SVM all use flush_tlb, and even common KVM is on a
variant of the bandwagon with "kvm_flush_remote_tlbs", e.g. a more
appropriate name for the Hyper-V hooks would be flush_remote_tlbs.  Leave
that change for another time as the Hyper-V hooks always start as NULL,
i.e. the name doesn't matter for using kvm-x86-ops.h, and changing all
names requires an astounding amount of churn.

VMX and SVM function names are intentionally left as is to minimize the
diff.  Both VMX and SVM will need to rename even more functions in order
to fully utilize kvm-x86-ops.h, i.e. an additional patch for each is
inevitable.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/include/asm/kvm-x86-ops.h | 20 +++++++++----------
 arch/x86/include/asm/kvm_host.h    | 20 +++++++++----------
 arch/x86/kvm/mmu/mmu.c             |  6 +++---
 arch/x86/kvm/svm/svm.c             | 18 ++++++++---------
 arch/x86/kvm/vmx/vmx.c             | 20 +++++++++----------
 arch/x86/kvm/x86.c                 | 31 ++++++++++++++----------------
 6 files changed, 56 insertions(+), 59 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index fd134c436029..a87632641a13 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -18,7 +18,7 @@ KVM_X86_OP(vm_destroy)
 KVM_X86_OP(vcpu_create)
 KVM_X86_OP(vcpu_free)
 KVM_X86_OP(vcpu_reset)
-KVM_X86_OP(prepare_guest_switch)
+KVM_X86_OP(prepare_switch_to_guest)
 KVM_X86_OP(vcpu_load)
 KVM_X86_OP(vcpu_put)
 KVM_X86_OP(update_exception_bitmap)
@@ -44,22 +44,22 @@ KVM_X86_OP(cache_reg)
 KVM_X86_OP(get_rflags)
 KVM_X86_OP(set_rflags)
 KVM_X86_OP(get_if_flag)
-KVM_X86_OP(tlb_flush_all)
-KVM_X86_OP(tlb_flush_current)
+KVM_X86_OP(flush_tlb_all)
+KVM_X86_OP(flush_tlb_current)
 KVM_X86_OP(tlb_remote_flush)
 KVM_X86_OP(tlb_remote_flush_with_range)
-KVM_X86_OP(tlb_flush_gva)
-KVM_X86_OP(tlb_flush_guest)
+KVM_X86_OP(flush_tlb_gva)
+KVM_X86_OP(flush_tlb_guest)
 KVM_X86_OP(vcpu_pre_run)
-KVM_X86_OP(run)
+KVM_X86_OP(vcpu_run)
 KVM_X86_OP(handle_exit)
 KVM_X86_OP(skip_emulated_instruction)
 KVM_X86_OP(update_emulated_instruction)
 KVM_X86_OP(set_interrupt_shadow)
 KVM_X86_OP(get_interrupt_shadow)
 KVM_X86_OP(patch_hypercall)
-KVM_X86_OP(set_irq)
-KVM_X86_OP(set_nmi)
+KVM_X86_OP(inject_irq)
+KVM_X86_OP(inject_nmi)
 KVM_X86_OP(queue_exception)
 KVM_X86_OP(cancel_injection)
 KVM_X86_OP(interrupt_allowed)
@@ -96,8 +96,8 @@ KVM_X86_OP(sched_in)
 KVM_X86_OP(update_cpu_dirty_logging)
 KVM_X86_OP(vcpu_blocking)
 KVM_X86_OP(vcpu_unblocking)
-KVM_X86_OP(update_pi_irte)
-KVM_X86_OP(start_assignment)
+KVM_X86_OP(pi_update_irte)
+KVM_X86_OP(pi_start_assignment)
 KVM_X86_OP(apicv_post_state_restore)
 KVM_X86_OP(dy_apicv_has_pending_interrupt)
 KVM_X86_OP(set_hv_timer)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index c895e94ffb80..91c0e4957bd0 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1330,7 +1330,7 @@ struct kvm_x86_ops {
 	void (*vcpu_free)(struct kvm_vcpu *vcpu);
 	void (*vcpu_reset)(struct kvm_vcpu *vcpu, bool init_event);
 
-	void (*prepare_guest_switch)(struct kvm_vcpu *vcpu);
+	void (*prepare_switch_to_guest)(struct kvm_vcpu *vcpu);
 	void (*vcpu_load)(struct kvm_vcpu *vcpu, int cpu);
 	void (*vcpu_put)(struct kvm_vcpu *vcpu);
 
@@ -1360,8 +1360,8 @@ struct kvm_x86_ops {
 	void (*set_rflags)(struct kvm_vcpu *vcpu, unsigned long rflags);
 	bool (*get_if_flag)(struct kvm_vcpu *vcpu);
 
-	void (*tlb_flush_all)(struct kvm_vcpu *vcpu);
-	void (*tlb_flush_current)(struct kvm_vcpu *vcpu);
+	void (*flush_tlb_all)(struct kvm_vcpu *vcpu);
+	void (*flush_tlb_current)(struct kvm_vcpu *vcpu);
 	int  (*tlb_remote_flush)(struct kvm *kvm);
 	int  (*tlb_remote_flush_with_range)(struct kvm *kvm,
 			struct kvm_tlb_range *range);
@@ -1372,16 +1372,16 @@ struct kvm_x86_ops {
 	 * Can potentially get non-canonical addresses through INVLPGs, which
 	 * the implementation may choose to ignore if appropriate.
 	 */
-	void (*tlb_flush_gva)(struct kvm_vcpu *vcpu, gva_t addr);
+	void (*flush_tlb_gva)(struct kvm_vcpu *vcpu, gva_t addr);
 
 	/*
 	 * Flush any TLB entries created by the guest.  Like tlb_flush_gva(),
 	 * does not need to flush GPA->HPA mappings.
 	 */
-	void (*tlb_flush_guest)(struct kvm_vcpu *vcpu);
+	void (*flush_tlb_guest)(struct kvm_vcpu *vcpu);
 
 	int (*vcpu_pre_run)(struct kvm_vcpu *vcpu);
-	enum exit_fastpath_completion (*run)(struct kvm_vcpu *vcpu);
+	enum exit_fastpath_completion (*vcpu_run)(struct kvm_vcpu *vcpu);
 	int (*handle_exit)(struct kvm_vcpu *vcpu,
 		enum exit_fastpath_completion exit_fastpath);
 	int (*skip_emulated_instruction)(struct kvm_vcpu *vcpu);
@@ -1390,8 +1390,8 @@ struct kvm_x86_ops {
 	u32 (*get_interrupt_shadow)(struct kvm_vcpu *vcpu);
 	void (*patch_hypercall)(struct kvm_vcpu *vcpu,
 				unsigned char *hypercall_addr);
-	void (*set_irq)(struct kvm_vcpu *vcpu);
-	void (*set_nmi)(struct kvm_vcpu *vcpu);
+	void (*inject_irq)(struct kvm_vcpu *vcpu);
+	void (*inject_nmi)(struct kvm_vcpu *vcpu);
 	void (*queue_exception)(struct kvm_vcpu *vcpu);
 	void (*cancel_injection)(struct kvm_vcpu *vcpu);
 	int (*interrupt_allowed)(struct kvm_vcpu *vcpu, bool for_injection);
@@ -1458,9 +1458,9 @@ struct kvm_x86_ops {
 	void (*vcpu_blocking)(struct kvm_vcpu *vcpu);
 	void (*vcpu_unblocking)(struct kvm_vcpu *vcpu);
 
-	int (*update_pi_irte)(struct kvm *kvm, unsigned int host_irq,
+	int (*pi_update_irte)(struct kvm *kvm, unsigned int host_irq,
 			      uint32_t guest_irq, bool set);
-	void (*start_assignment)(struct kvm *kvm);
+	void (*pi_start_assignment)(struct kvm *kvm);
 	void (*apicv_post_state_restore)(struct kvm_vcpu *vcpu);
 	bool (*dy_apicv_has_pending_interrupt)(struct kvm_vcpu *vcpu);
 
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index b29fc88b51b4..9f1b4711d5ea 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5097,7 +5097,7 @@ int kvm_mmu_load(struct kvm_vcpu *vcpu)
 	kvm_mmu_sync_roots(vcpu);
 
 	kvm_mmu_load_pgd(vcpu);
-	static_call(kvm_x86_tlb_flush_current)(vcpu);
+	static_call(kvm_x86_flush_tlb_current)(vcpu);
 out:
 	return r;
 }
@@ -5357,7 +5357,7 @@ void kvm_mmu_invalidate_gva(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 		if (is_noncanonical_address(gva, vcpu))
 			return;
 
-		static_call(kvm_x86_tlb_flush_gva)(vcpu, gva);
+		static_call(kvm_x86_flush_tlb_gva)(vcpu, gva);
 	}
 
 	if (!mmu->invlpg)
@@ -5413,7 +5413,7 @@ void kvm_mmu_invpcid_gva(struct kvm_vcpu *vcpu, gva_t gva, unsigned long pcid)
 	}
 
 	if (tlb_flush)
-		static_call(kvm_x86_tlb_flush_gva)(vcpu, gva);
+		static_call(kvm_x86_flush_tlb_gva)(vcpu, gva);
 
 	++vcpu->stat.invlpg;
 
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 75d277067141..991d3e628c60 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4472,7 +4472,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 	.vm_init = svm_vm_init,
 	.vm_destroy = svm_vm_destroy,
 
-	.prepare_guest_switch = svm_prepare_guest_switch,
+	.prepare_switch_to_guest = svm_prepare_guest_switch,
 	.vcpu_load = svm_vcpu_load,
 	.vcpu_put = svm_vcpu_put,
 	.vcpu_blocking = avic_vcpu_blocking,
@@ -4503,21 +4503,21 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 	.set_rflags = svm_set_rflags,
 	.get_if_flag = svm_get_if_flag,
 
-	.tlb_flush_all = svm_flush_tlb,
-	.tlb_flush_current = svm_flush_tlb,
-	.tlb_flush_gva = svm_flush_tlb_gva,
-	.tlb_flush_guest = svm_flush_tlb,
+	.flush_tlb_all = svm_flush_tlb,
+	.flush_tlb_current = svm_flush_tlb,
+	.flush_tlb_gva = svm_flush_tlb_gva,
+	.flush_tlb_guest = svm_flush_tlb,
 
 	.vcpu_pre_run = svm_vcpu_pre_run,
-	.run = svm_vcpu_run,
+	.vcpu_run = svm_vcpu_run,
 	.handle_exit = handle_exit,
 	.skip_emulated_instruction = skip_emulated_instruction,
 	.update_emulated_instruction = NULL,
 	.set_interrupt_shadow = svm_set_interrupt_shadow,
 	.get_interrupt_shadow = svm_get_interrupt_shadow,
 	.patch_hypercall = svm_patch_hypercall,
-	.set_irq = svm_set_irq,
-	.set_nmi = svm_inject_nmi,
+	.inject_irq = svm_set_irq,
+	.inject_nmi = svm_inject_nmi,
 	.queue_exception = svm_queue_exception,
 	.cancel_injection = svm_cancel_injection,
 	.interrupt_allowed = svm_interrupt_allowed,
@@ -4564,7 +4564,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 
 	.deliver_interrupt = svm_deliver_interrupt,
 	.dy_apicv_has_pending_interrupt = svm_dy_apicv_has_pending_interrupt,
-	.update_pi_irte = svm_update_pi_irte,
+	.pi_update_irte = svm_update_pi_irte,
 	.setup_mce = svm_setup_mce,
 
 	.smi_allowed = svm_smi_allowed,
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 97d6edbd25a0..1d2d850b124b 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7719,7 +7719,7 @@ static struct kvm_x86_ops vmx_x86_ops __initdata = {
 	.vcpu_free = vmx_free_vcpu,
 	.vcpu_reset = vmx_vcpu_reset,
 
-	.prepare_guest_switch = vmx_prepare_switch_to_guest,
+	.prepare_switch_to_guest = vmx_prepare_switch_to_guest,
 	.vcpu_load = vmx_vcpu_load,
 	.vcpu_put = vmx_vcpu_put,
 
@@ -7747,21 +7747,21 @@ static struct kvm_x86_ops vmx_x86_ops __initdata = {
 	.set_rflags = vmx_set_rflags,
 	.get_if_flag = vmx_get_if_flag,
 
-	.tlb_flush_all = vmx_flush_tlb_all,
-	.tlb_flush_current = vmx_flush_tlb_current,
-	.tlb_flush_gva = vmx_flush_tlb_gva,
-	.tlb_flush_guest = vmx_flush_tlb_guest,
+	.flush_tlb_all = vmx_flush_tlb_all,
+	.flush_tlb_current = vmx_flush_tlb_current,
+	.flush_tlb_gva = vmx_flush_tlb_gva,
+	.flush_tlb_guest = vmx_flush_tlb_guest,
 
 	.vcpu_pre_run = vmx_vcpu_pre_run,
-	.run = vmx_vcpu_run,
+	.vcpu_run = vmx_vcpu_run,
 	.handle_exit = vmx_handle_exit,
 	.skip_emulated_instruction = vmx_skip_emulated_instruction,
 	.update_emulated_instruction = vmx_update_emulated_instruction,
 	.set_interrupt_shadow = vmx_set_interrupt_shadow,
 	.get_interrupt_shadow = vmx_get_interrupt_shadow,
 	.patch_hypercall = vmx_patch_hypercall,
-	.set_irq = vmx_inject_irq,
-	.set_nmi = vmx_inject_nmi,
+	.inject_irq = vmx_inject_irq,
+	.inject_nmi = vmx_inject_nmi,
 	.queue_exception = vmx_queue_exception,
 	.cancel_injection = vmx_cancel_injection,
 	.interrupt_allowed = vmx_interrupt_allowed,
@@ -7814,8 +7814,8 @@ static struct kvm_x86_ops vmx_x86_ops __initdata = {
 	.pmu_ops = &intel_pmu_ops,
 	.nested_ops = &vmx_nested_ops,
 
-	.update_pi_irte = pi_update_irte,
-	.start_assignment = vmx_pi_start_assignment,
+	.pi_update_irte = pi_update_irte,
+	.pi_start_assignment = vmx_pi_start_assignment,
 
 #ifdef CONFIG_X86_64
 	.set_hv_timer = vmx_set_hv_timer,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index a2821c46dfa4..cc14f79c446c 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3264,7 +3264,7 @@ static void kvmclock_reset(struct kvm_vcpu *vcpu)
 static void kvm_vcpu_flush_tlb_all(struct kvm_vcpu *vcpu)
 {
 	++vcpu->stat.tlb_flush;
-	static_call(kvm_x86_tlb_flush_all)(vcpu);
+	static_call(kvm_x86_flush_tlb_all)(vcpu);
 }
 
 static void kvm_vcpu_flush_tlb_guest(struct kvm_vcpu *vcpu)
@@ -3282,14 +3282,14 @@ static void kvm_vcpu_flush_tlb_guest(struct kvm_vcpu *vcpu)
 		kvm_mmu_sync_prev_roots(vcpu);
 	}
 
-	static_call(kvm_x86_tlb_flush_guest)(vcpu);
+	static_call(kvm_x86_flush_tlb_guest)(vcpu);
 }
 
 
 static inline void kvm_vcpu_flush_tlb_current(struct kvm_vcpu *vcpu)
 {
 	++vcpu->stat.tlb_flush;
-	static_call(kvm_x86_tlb_flush_current)(vcpu);
+	static_call(kvm_x86_flush_tlb_current)(vcpu);
 }
 
 /*
@@ -9283,10 +9283,10 @@ static int inject_pending_event(struct kvm_vcpu *vcpu, bool *req_immediate_exit)
 	 */
 	else if (!vcpu->arch.exception.pending) {
 		if (vcpu->arch.nmi_injected) {
-			static_call(kvm_x86_set_nmi)(vcpu);
+			static_call(kvm_x86_inject_nmi)(vcpu);
 			can_inject = false;
 		} else if (vcpu->arch.interrupt.injected) {
-			static_call(kvm_x86_set_irq)(vcpu);
+			static_call(kvm_x86_inject_irq)(vcpu);
 			can_inject = false;
 		}
 	}
@@ -9366,7 +9366,7 @@ static int inject_pending_event(struct kvm_vcpu *vcpu, bool *req_immediate_exit)
 		if (r) {
 			--vcpu->arch.nmi_pending;
 			vcpu->arch.nmi_injected = true;
-			static_call(kvm_x86_set_nmi)(vcpu);
+			static_call(kvm_x86_inject_nmi)(vcpu);
 			can_inject = false;
 			WARN_ON(static_call(kvm_x86_nmi_allowed)(vcpu, true) < 0);
 		}
@@ -9380,7 +9380,7 @@ static int inject_pending_event(struct kvm_vcpu *vcpu, bool *req_immediate_exit)
 			goto out;
 		if (r) {
 			kvm_queue_interrupt(vcpu, kvm_cpu_get_interrupt(vcpu), false);
-			static_call(kvm_x86_set_irq)(vcpu);
+			static_call(kvm_x86_inject_irq)(vcpu);
 			WARN_ON(static_call(kvm_x86_interrupt_allowed)(vcpu, true) < 0);
 		}
 		if (kvm_cpu_has_injectable_intr(vcpu))
@@ -10005,7 +10005,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 
 	preempt_disable();
 
-	static_call(kvm_x86_prepare_guest_switch)(vcpu);
+	static_call(kvm_x86_prepare_switch_to_guest)(vcpu);
 
 	/*
 	 * Disable IRQs before setting IN_GUEST_MODE.  Posted interrupt
@@ -10082,7 +10082,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 		 */
 		WARN_ON_ONCE(kvm_apicv_activated(vcpu->kvm) != kvm_vcpu_apicv_active(vcpu));
 
-		exit_fastpath = static_call(kvm_x86_run)(vcpu);
+		exit_fastpath = static_call(kvm_x86_vcpu_run)(vcpu);
 		if (likely(exit_fastpath != EXIT_FASTPATH_REENTER_GUEST))
 			break;
 
@@ -10385,10 +10385,7 @@ static int complete_emulated_mmio(struct kvm_vcpu *vcpu)
 /* Swap (qemu) user FPU context for the guest FPU context. */
 static void kvm_load_guest_fpu(struct kvm_vcpu *vcpu)
 {
-	/*
-	 * Exclude PKRU from restore as restored separately in
-	 * kvm_x86_ops.run().
-	 */
+	/* Exclude PKRU, it's restored separately immediately after VM-Exit. */
 	fpu_swap_kvm_fpstate(&vcpu->arch.guest_fpu, true);
 	trace_kvm_fpu(1);
 }
@@ -12396,7 +12393,7 @@ bool kvm_arch_can_dequeue_async_page_present(struct kvm_vcpu *vcpu)
 void kvm_arch_start_assignment(struct kvm *kvm)
 {
 	if (atomic_inc_return(&kvm->arch.assigned_device_count) == 1)
-		static_call_cond(kvm_x86_start_assignment)(kvm);
+		static_call_cond(kvm_x86_pi_start_assignment)(kvm);
 }
 EXPORT_SYMBOL_GPL(kvm_arch_start_assignment);
 
@@ -12444,7 +12441,7 @@ int kvm_arch_irq_bypass_add_producer(struct irq_bypass_consumer *cons,
 
 	irqfd->producer = prod;
 	kvm_arch_start_assignment(irqfd->kvm);
-	ret = static_call(kvm_x86_update_pi_irte)(irqfd->kvm,
+	ret = static_call(kvm_x86_pi_update_irte)(irqfd->kvm,
 					 prod->irq, irqfd->gsi, 1);
 
 	if (ret)
@@ -12469,7 +12466,7 @@ void kvm_arch_irq_bypass_del_producer(struct irq_bypass_consumer *cons,
 	 * when the irq is masked/disabled or the consumer side (KVM
 	 * int this case doesn't want to receive the interrupts.
 	*/
-	ret = static_call(kvm_x86_update_pi_irte)(irqfd->kvm, prod->irq, irqfd->gsi, 0);
+	ret = static_call(kvm_x86_pi_update_irte)(irqfd->kvm, prod->irq, irqfd->gsi, 0);
 	if (ret)
 		printk(KERN_INFO "irq bypass consumer (token %p) unregistration"
 		       " fails: %d\n", irqfd->consumer.token, ret);
@@ -12480,7 +12477,7 @@ void kvm_arch_irq_bypass_del_producer(struct irq_bypass_consumer *cons,
 int kvm_arch_update_irqfd_routing(struct kvm *kvm, unsigned int host_irq,
 				   uint32_t guest_irq, bool set)
 {
-	return static_call(kvm_x86_update_pi_irte)(kvm, host_irq, guest_irq, set);
+	return static_call(kvm_x86_pi_update_irte)(kvm, host_irq, guest_irq, set);
 }
 
 bool kvm_arch_irqfd_route_changed(struct kvm_kernel_irq_routing_entry *old,
-- 
2.35.0.rc0.227.g00780c9af4-goog


* [PATCH 05/22] KVM: x86: Use static_call() for .vcpu_deliver_sipi_vector()
From: Sean Christopherson @ 2022-01-28  0:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, Like Xu

Define and use a static_call() for kvm_x86_ops.vcpu_deliver_sipi_vector(),
mostly so that the op is defined in kvm-x86-ops.h.  This will allow using
KVM_X86_OP in vendor code to wire up the implementation.  Any performance
gain eked out by using static_call() is a happy bonus and not the
primary motivation.
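
For context, recapping the existing pattern from kvm_host.h and x86.c:
each entry in kvm-x86-ops.h is expanded three ways, so adding the op to
the header is what makes the static_call() usable in the first place.

	/* kvm_host.h: declare the static call */
	#define KVM_X86_OP(func) \
		DECLARE_STATIC_CALL(kvm_x86_##func, *(((struct kvm_x86_ops *)0)->func));

	/* x86.c: define the key, initially NULL */
	#define KVM_X86_OP(func) \
		DEFINE_STATIC_CALL_NULL(kvm_x86_##func, \
					*(((struct kvm_x86_ops *)0)->func));

	/* kvm_host.h: patch in the vendor implementation at setup */
	#define KVM_X86_OP(func) \
		static_call_update(kvm_x86_##func, kvm_x86_ops.func);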

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/include/asm/kvm-x86-ops.h | 1 +
 arch/x86/kvm/lapic.c               | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index a87632641a13..eb93aa439d61 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -117,5 +117,6 @@ KVM_X86_OP(enable_direct_tlbflush)
 KVM_X86_OP(migrate_timers)
 KVM_X86_OP(msr_filter_changed)
 KVM_X86_OP(complete_emulated_msr)
+KVM_X86_OP(vcpu_deliver_sipi_vector)
 
 #undef KVM_X86_OP
diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index d7e6fde82d25..dc4bc9eea81c 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -2928,7 +2928,7 @@ int kvm_apic_accept_events(struct kvm_vcpu *vcpu)
 			/* evaluate pending_events before reading the vector */
 			smp_rmb();
 			sipi_vector = apic->sipi_vector;
-			kvm_x86_ops.vcpu_deliver_sipi_vector(vcpu, sipi_vector);
+			static_call(kvm_x86_vcpu_deliver_sipi_vector)(vcpu, sipi_vector);
 			vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
 		}
 	}
-- 
2.35.0.rc0.227.g00780c9af4-goog


* [PATCH 06/22] KVM: VMX: Call vmx_get_cpl() directly in handle_dr()
From: Sean Christopherson @ 2022-01-28  0:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, Like Xu

Use vmx_get_cpl() instead of bouncing through kvm_x86_ops.get_cpl() when
performing a CPL check on MOV DR accesses.  This avoids a RETPOLINE (when
enabled), and more importantly removes a vendor reference to kvm_x86_ops
and helps pave the way for unexporting kvm_x86_ops.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/vmx/vmx.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 1d2d850b124b..de66786396bd 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -5184,7 +5184,7 @@ static int handle_dr(struct kvm_vcpu *vcpu)
 	if (!kvm_require_dr(vcpu, dr))
 		return 1;
 
-	if (kvm_x86_ops.get_cpl(vcpu) > 0)
+	if (vmx_get_cpl(vcpu) > 0)
 		goto out;
 
 	dr7 = vmcs_readl(GUEST_DR7);
-- 
2.35.0.rc0.227.g00780c9af4-goog


* [PATCH 07/22] KVM: xen: Use static_call() for invoking kvm_x86_ops hooks
From: Sean Christopherson @ 2022-01-28  0:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, Like Xu

Use static_call() to invoke the kvm_x86_ops functions that already have
a defined static call, mostly as a step toward having _all_ calls to
kvm_x86_ops route through a static_call() in order to simplify auditing,
e.g. via grep, that all functions have an entry in kvm-x86-ops.h, but
also because there's no reason not to use a static_call().

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/xen.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
index bad57535fad0..419bae180930 100644
--- a/arch/x86/kvm/xen.c
+++ b/arch/x86/kvm/xen.c
@@ -695,7 +695,7 @@ int kvm_xen_write_hypercall_page(struct kvm_vcpu *vcpu, u64 data)
 		instructions[0] = 0xb8;
 
 		/* vmcall / vmmcall */
-		kvm_x86_ops.patch_hypercall(vcpu, instructions + 5);
+		static_call(kvm_x86_patch_hypercall)(vcpu, instructions + 5);
 
 		/* ret */
 		instructions[8] = 0xc3;
@@ -830,7 +830,7 @@ int kvm_xen_hypercall(struct kvm_vcpu *vcpu)
 	vcpu->run->exit_reason = KVM_EXIT_XEN;
 	vcpu->run->xen.type = KVM_EXIT_XEN_HCALL;
 	vcpu->run->xen.u.hcall.longmode = longmode;
-	vcpu->run->xen.u.hcall.cpl = kvm_x86_ops.get_cpl(vcpu);
+	vcpu->run->xen.u.hcall.cpl = static_call(kvm_x86_get_cpl)(vcpu);
 	vcpu->run->xen.u.hcall.input = input;
 	vcpu->run->xen.u.hcall.params[0] = params[0];
 	vcpu->run->xen.u.hcall.params[1] = params[1];
-- 
2.35.0.rc0.227.g00780c9af4-goog


* [PATCH 08/22] KVM: nVMX: Refactor PMU refresh to avoid referencing kvm_x86_ops.pmu_ops
From: Sean Christopherson @ 2022-01-28  0:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, Like Xu

Refactor the nested VMX PMU refresh helper to pass it a flag stating
whether or not the vCPU has PERF_GLOBAL_CTRL instead of having the nVMX
helper query the information by bouncing through kvm_x86_ops.pmu_ops.
This will allow a future patch to use static_call() for the PMU ops
without having to export any static call definitions from common x86, and
it is also a step toward unexporting kvm_x86_ops.

Alternatively, nVMX could call kvm_pmu_is_valid_msr() to indirectly use
kvm_x86_ops.pmu_ops, but that would incur an extra layer of indirection
and would require exporting kvm_pmu_is_valid_msr().

Opportunistically rename the helper to keep line lengths somewhat
reasonable, and to better capture its high-level role.

No functional change intended.

Cc: Like Xu <like.xu.linux@gmail.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/vmx/nested.c    | 5 +++--
 arch/x86/kvm/vmx/nested.h    | 3 ++-
 arch/x86/kvm/vmx/pmu_intel.c | 3 ++-
 3 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 2777cea05cc0..fdae31db640c 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -4796,7 +4796,8 @@ int get_vmx_mem_address(struct kvm_vcpu *vcpu, unsigned long exit_qualification,
 	return 0;
 }
 
-void nested_vmx_pmu_entry_exit_ctls_update(struct kvm_vcpu *vcpu)
+void nested_vmx_pmu_refresh(struct kvm_vcpu *vcpu,
+			    bool vcpu_has_perf_global_ctrl)
 {
 	struct vcpu_vmx *vmx;
 
@@ -4804,7 +4805,7 @@ void nested_vmx_pmu_entry_exit_ctls_update(struct kvm_vcpu *vcpu)
 		return;
 
 	vmx = to_vmx(vcpu);
-	if (kvm_x86_ops.pmu_ops->is_valid_msr(vcpu, MSR_CORE_PERF_GLOBAL_CTRL)) {
+	if (vcpu_has_perf_global_ctrl) {
 		vmx->nested.msrs.entry_ctls_high |=
 				VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL;
 		vmx->nested.msrs.exit_ctls_high |=
diff --git a/arch/x86/kvm/vmx/nested.h b/arch/x86/kvm/vmx/nested.h
index b69a80f43b37..c92cea0b8ccc 100644
--- a/arch/x86/kvm/vmx/nested.h
+++ b/arch/x86/kvm/vmx/nested.h
@@ -32,7 +32,8 @@ int vmx_set_vmx_msr(struct kvm_vcpu *vcpu, u32 msr_index, u64 data);
 int vmx_get_vmx_msr(struct nested_vmx_msrs *msrs, u32 msr_index, u64 *pdata);
 int get_vmx_mem_address(struct kvm_vcpu *vcpu, unsigned long exit_qualification,
 			u32 vmx_instruction_info, bool wr, int len, gva_t *ret);
-void nested_vmx_pmu_entry_exit_ctls_update(struct kvm_vcpu *vcpu);
+void nested_vmx_pmu_refresh(struct kvm_vcpu *vcpu,
+			    bool vcpu_has_perf_global_ctrl);
 void nested_mark_vmcs12_pages_dirty(struct kvm_vcpu *vcpu);
 bool nested_vmx_check_io_bitmaps(struct kvm_vcpu *vcpu, unsigned int port,
 				 int size);
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 466d18fc0c5d..03fab48b149c 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -541,7 +541,8 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
 	bitmap_set(pmu->all_valid_pmc_idx,
 		INTEL_PMC_MAX_GENERIC, pmu->nr_arch_fixed_counters);
 
-	nested_vmx_pmu_entry_exit_ctls_update(vcpu);
+	nested_vmx_pmu_refresh(vcpu,
+			       intel_is_valid_msr(vcpu, MSR_CORE_PERF_GLOBAL_CTRL));
 
 	if (intel_pmu_lbr_is_compatible(vcpu))
 		x86_perf_get_lbr(&lbr_desc->records);
-- 
2.35.0.rc0.227.g00780c9af4-goog


* [PATCH 09/22] KVM: x86: Uninline and export hv_track_root_tdp()
From: Sean Christopherson @ 2022-01-28  0:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, Like Xu

Uninline and export Hyper-V's hv_track_root_tdp(), which is (somewhat
indirectly) the last remaining reference to kvm_x86_ops from vendor
modules, i.e. removing it will allow unexporting kvm_x86_ops.  Reloading
the TDP PGD
isn't the fastest of paths, hv_track_root_tdp() isn't exactly tiny, and
disallowing vendor code from accessing kvm_x86_ops provides nice-to-have
encapsulation of common x86 code (and of Hyper-V code for that matter).

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/kvm_onhyperv.c | 14 ++++++++++++++
 arch/x86/kvm/kvm_onhyperv.h | 14 +-------------
 2 files changed, 15 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kvm/kvm_onhyperv.c b/arch/x86/kvm/kvm_onhyperv.c
index b469f45e3fe4..ee4f696a0782 100644
--- a/arch/x86/kvm/kvm_onhyperv.c
+++ b/arch/x86/kvm/kvm_onhyperv.c
@@ -92,3 +92,17 @@ int hv_remote_flush_tlb(struct kvm *kvm)
 	return hv_remote_flush_tlb_with_range(kvm, NULL);
 }
 EXPORT_SYMBOL_GPL(hv_remote_flush_tlb);
+
+void hv_track_root_tdp(struct kvm_vcpu *vcpu, hpa_t root_tdp)
+{
+	struct kvm_arch *kvm_arch = &vcpu->kvm->arch;
+
+	if (kvm_x86_ops.tlb_remote_flush == hv_remote_flush_tlb) {
+		spin_lock(&kvm_arch->hv_root_tdp_lock);
+		vcpu->arch.hv_root_tdp = root_tdp;
+		if (root_tdp != kvm_arch->hv_root_tdp)
+			kvm_arch->hv_root_tdp = INVALID_PAGE;
+		spin_unlock(&kvm_arch->hv_root_tdp_lock);
+	}
+}
+EXPORT_SYMBOL_GPL(hv_track_root_tdp);
diff --git a/arch/x86/kvm/kvm_onhyperv.h b/arch/x86/kvm/kvm_onhyperv.h
index 1c67abf2eba9..287e98ef9df3 100644
--- a/arch/x86/kvm/kvm_onhyperv.h
+++ b/arch/x86/kvm/kvm_onhyperv.h
@@ -10,19 +10,7 @@
 int hv_remote_flush_tlb_with_range(struct kvm *kvm,
 		struct kvm_tlb_range *range);
 int hv_remote_flush_tlb(struct kvm *kvm);
-
-static inline void hv_track_root_tdp(struct kvm_vcpu *vcpu, hpa_t root_tdp)
-{
-	struct kvm_arch *kvm_arch = &vcpu->kvm->arch;
-
-	if (kvm_x86_ops.tlb_remote_flush == hv_remote_flush_tlb) {
-		spin_lock(&kvm_arch->hv_root_tdp_lock);
-		vcpu->arch.hv_root_tdp = root_tdp;
-		if (root_tdp != kvm_arch->hv_root_tdp)
-			kvm_arch->hv_root_tdp = INVALID_PAGE;
-		spin_unlock(&kvm_arch->hv_root_tdp_lock);
-	}
-}
+void hv_track_root_tdp(struct kvm_vcpu *vcpu, hpa_t root_tdp);
 #else /* !CONFIG_HYPERV */
 static inline void hv_track_root_tdp(struct kvm_vcpu *vcpu, hpa_t root_tdp)
 {
-- 
2.35.0.rc0.227.g00780c9af4-goog


* [PATCH 10/22] KVM: x86: Unexport kvm_x86_ops
From: Sean Christopherson @ 2022-01-28  0:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, Like Xu

Drop the export of kvm_x86_ops now that it is no longer referenced by SVM
or VMX.  Disallowing access to kvm_x86_ops is very desirable as it
prevents
vendor code from incorrectly modifying hooks after they have been set by
kvm_arch_hardware_setup(), and more importantly after each function's
associated static_call key has been updated.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/x86.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index cc14f79c446c..a8ea1b212267 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -124,7 +124,6 @@ static int __set_sregs2(struct kvm_vcpu *vcpu, struct kvm_sregs2 *sregs2);
 static void __get_sregs2(struct kvm_vcpu *vcpu, struct kvm_sregs2 *sregs2);
 
 struct kvm_x86_ops kvm_x86_ops __read_mostly;
-EXPORT_SYMBOL_GPL(kvm_x86_ops);
 
 #define KVM_X86_OP(func)					     \
 	DEFINE_STATIC_CALL_NULL(kvm_x86_##func,			     \
-- 
2.35.0.rc0.227.g00780c9af4-goog


* [PATCH 11/22] KVM: x86: Use static_call() for copy/move encryption context ioctls()
From: Sean Christopherson @ 2022-01-28  0:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, Like Xu

Define and use static_call()s for .vm_{copy,move}_enc_context_from(),
mostly so that the ops are defined in kvm-x86-ops.h.  This will allow
using KVM_X86_OP in vendor code to wire up the implementations.  Any
performance gain eked out by using static_call() is a happy bonus and
not the primary motivation.

Opportunistically refactor the code to reduce indentation and keep line
lengths reasonable, and to be consistent when wrapping versus running
a bit over the 80 char soft limit.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/include/asm/kvm-x86-ops.h |  2 ++
 arch/x86/kvm/x86.c                 | 17 ++++++++++-------
 2 files changed, 12 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index eb93aa439d61..4ee046e60c34 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -110,6 +110,8 @@ KVM_X86_OP(enable_smi_window)
 KVM_X86_OP(mem_enc_op)
 KVM_X86_OP(mem_enc_reg_region)
 KVM_X86_OP(mem_enc_unreg_region)
+KVM_X86_OP(vm_copy_enc_context_from)
+KVM_X86_OP(vm_move_enc_context_from)
 KVM_X86_OP(get_msr_feature)
 KVM_X86_OP(can_emulate_instruction)
 KVM_X86_OP(apic_init_signal_blocked)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index a8ea1b212267..580a2adaec7c 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5958,15 +5958,18 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
 #endif
 	case KVM_CAP_VM_COPY_ENC_CONTEXT_FROM:
 		r = -EINVAL;
-		if (kvm_x86_ops.vm_copy_enc_context_from)
-			r = kvm_x86_ops.vm_copy_enc_context_from(kvm, cap->args[0]);
-		return r;
+		if (!kvm_x86_ops.vm_copy_enc_context_from)
+			break;
+
+		r = static_call(kvm_x86_vm_copy_enc_context_from)(kvm, cap->args[0]);
+		break;
 	case KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM:
 		r = -EINVAL;
-		if (kvm_x86_ops.vm_move_enc_context_from)
-			r = kvm_x86_ops.vm_move_enc_context_from(
-				kvm, cap->args[0]);
-		return r;
+		if (!kvm_x86_ops.vm_move_enc_context_from)
+			break;
+
+		r = static_call(kvm_x86_vm_move_enc_context_from)(kvm, cap->args[0]);
+		break;
 	case KVM_CAP_EXIT_HYPERCALL:
 		if (cap->args[0] & ~KVM_EXIT_HYPERCALL_VALID_MASK) {
 			r = -EINVAL;
-- 
2.35.0.rc0.227.g00780c9af4-goog


* [PATCH 12/22] KVM: x86: Allow different macros for APICv, CVM, and Hyper-V kvm_x86_ops
From: Sean Christopherson @ 2022-01-28  0:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, Like Xu

Introduce optional macros for defining APICv, Confidential VM (a.k.a.
so-called memory encryption), and Hyper-V kvm_x86_ops.  Specialized macros
will allow vendor code to easily apply a single pattern when wiring up
implementations, e.g. SVM uses "sev" for Confidential VM hooks and AVIC
for APICv, while VMX currently doesn't support any Confidential VM hooks.
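
As a sketch of the intended vendor-side usage (hypothetical here; the
actual wiring lands in patches 14 and 22), a vendor defines only the
variants it cares about before the include, and anything left undefined
falls back to KVM_X86_OP via the #ifndef defaults below:

  #define KVM_X86_OP(func)        .func = svm_##func,
  #define KVM_X86_APICV_OP(func)  .func = avic_##func,
  #define KVM_X86_CVM_OP(func)    .func = sev_##func,
  #include <asm/kvm-x86-ops.h>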

Bundling also adds a small amount of self-documentation to the various
hooks in kvm-x86-ops.h.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/include/asm/kvm-x86-ops.h | 74 +++++++++++++++++++-----------
 1 file changed, 48 insertions(+), 26 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 4ee046e60c34..cb3af3a55317 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -4,8 +4,24 @@ BUILD_BUG_ON(1)
 #endif
 
 /*
- * Invoke KVM_X86_OP() on all functions in struct kvm_x86_ops, e.g. to generate
- * static_call declarations, definitions and updates.
+ * APICv, Hyper-V, and Confidential VM macros are optional; redirect to the
+ * standard ops macro if the caller didn't define a type-specific variant.
+ */
+#ifndef KVM_X86_APICV_OP
+#define KVM_X86_APICV_OP KVM_X86_OP
+#endif
+
+#ifndef KVM_X86_HYPERV_OP
+#define KVM_X86_HYPERV_OP KVM_X86_OP
+#endif
+
+#ifndef KVM_X86_CVM_OP
+#define KVM_X86_CVM_OP KVM_X86_OP
+#endif
+
+/*
+ * Invoke the appropriate macro on all functions in struct kvm_x86_ops, e.g. to
+ * generate static_call declarations, definitions and updates.
  */
 KVM_X86_OP(hardware_enable)
 KVM_X86_OP(hardware_disable)
@@ -30,7 +46,6 @@ KVM_X86_OP(get_cpl)
 KVM_X86_OP(set_segment)
 KVM_X86_OP(get_cs_db_l_bits)
 KVM_X86_OP(set_cr0)
-KVM_X86_OP(post_set_cr3)
 KVM_X86_OP(is_valid_cr4)
 KVM_X86_OP(set_cr4)
 KVM_X86_OP(set_efer)
@@ -46,8 +61,6 @@ KVM_X86_OP(set_rflags)
 KVM_X86_OP(get_if_flag)
 KVM_X86_OP(flush_tlb_all)
 KVM_X86_OP(flush_tlb_current)
-KVM_X86_OP(tlb_remote_flush)
-KVM_X86_OP(tlb_remote_flush_with_range)
 KVM_X86_OP(flush_tlb_gva)
 KVM_X86_OP(flush_tlb_guest)
 KVM_X86_OP(vcpu_pre_run)
@@ -69,16 +82,7 @@ KVM_X86_OP(set_nmi_mask)
 KVM_X86_OP(enable_nmi_window)
 KVM_X86_OP(enable_irq_window)
 KVM_X86_OP(update_cr8_intercept)
-KVM_X86_OP(check_apicv_inhibit_reasons)
-KVM_X86_OP(refresh_apicv_exec_ctrl)
-KVM_X86_OP(hwapic_irr_update)
-KVM_X86_OP(hwapic_isr_update)
-KVM_X86_OP(guest_apic_has_interrupt)
-KVM_X86_OP(load_eoi_exitmap)
-KVM_X86_OP(set_virtual_apic_mode)
-KVM_X86_OP(set_apic_access_page_addr)
 KVM_X86_OP(deliver_interrupt)
-KVM_X86_OP(sync_pir_to_irr)
 KVM_X86_OP(set_tss_addr)
 KVM_X86_OP(set_identity_map_addr)
 KVM_X86_OP(get_mt_mask)
@@ -94,12 +98,6 @@ KVM_X86_OP(handle_exit_irqoff)
 KVM_X86_OP(request_immediate_exit)
 KVM_X86_OP(sched_in)
 KVM_X86_OP(update_cpu_dirty_logging)
-KVM_X86_OP(vcpu_blocking)
-KVM_X86_OP(vcpu_unblocking)
-KVM_X86_OP(pi_update_irte)
-KVM_X86_OP(pi_start_assignment)
-KVM_X86_OP(apicv_post_state_restore)
-KVM_X86_OP(dy_apicv_has_pending_interrupt)
 KVM_X86_OP(set_hv_timer)
 KVM_X86_OP(cancel_hv_timer)
 KVM_X86_OP(setup_mce)
@@ -107,18 +105,42 @@ KVM_X86_OP(smi_allowed)
 KVM_X86_OP(enter_smm)
 KVM_X86_OP(leave_smm)
 KVM_X86_OP(enable_smi_window)
-KVM_X86_OP(mem_enc_op)
-KVM_X86_OP(mem_enc_reg_region)
-KVM_X86_OP(mem_enc_unreg_region)
-KVM_X86_OP(vm_copy_enc_context_from)
-KVM_X86_OP(vm_move_enc_context_from)
 KVM_X86_OP(get_msr_feature)
 KVM_X86_OP(can_emulate_instruction)
 KVM_X86_OP(apic_init_signal_blocked)
-KVM_X86_OP(enable_direct_tlbflush)
 KVM_X86_OP(migrate_timers)
 KVM_X86_OP(msr_filter_changed)
 KVM_X86_OP(complete_emulated_msr)
 KVM_X86_OP(vcpu_deliver_sipi_vector)
 
+KVM_X86_APICV_OP(check_apicv_inhibit_reasons)
+KVM_X86_APICV_OP(refresh_apicv_exec_ctrl)
+KVM_X86_APICV_OP(load_eoi_exitmap)
+KVM_X86_APICV_OP(set_virtual_apic_mode)
+KVM_X86_APICV_OP(set_apic_access_page_addr)
+KVM_X86_APICV_OP(sync_pir_to_irr)
+KVM_X86_APICV_OP(hwapic_irr_update)
+KVM_X86_APICV_OP(hwapic_isr_update)
+KVM_X86_APICV_OP(guest_apic_has_interrupt)
+KVM_X86_APICV_OP(vcpu_blocking)
+KVM_X86_APICV_OP(vcpu_unblocking)
+KVM_X86_APICV_OP(pi_update_irte)
+KVM_X86_APICV_OP(pi_start_assignment)
+KVM_X86_APICV_OP(apicv_post_state_restore)
+KVM_X86_APICV_OP(dy_apicv_has_pending_interrupt)
+
+KVM_X86_HYPERV_OP(tlb_remote_flush)
+KVM_X86_HYPERV_OP(tlb_remote_flush_with_range)
+KVM_X86_HYPERV_OP(enable_direct_tlbflush)
+
+KVM_X86_CVM_OP(mem_enc_op)
+KVM_X86_CVM_OP(mem_enc_reg_region)
+KVM_X86_CVM_OP(mem_enc_unreg_region)
+KVM_X86_CVM_OP(vm_copy_enc_context_from)
+KVM_X86_CVM_OP(vm_move_enc_context_from)
+KVM_X86_CVM_OP(post_set_cr3)
+
+#undef KVM_X86_APICV_OP
+#undef KVM_X86_HYPERV_OP
+#undef KVM_X86_CVM_OP
 #undef KVM_X86_OP
-- 
2.35.0.rc0.227.g00780c9af4-goog


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH 13/22] KVM: VMX: Rename VMX functions to conform to kvm_x86_ops names
  2022-01-28  0:51 [PATCH 00/22] KVM: x86: Fill *_x86_ops via kvm-x86-ops.h Sean Christopherson
                   ` (11 preceding siblings ...)
  2022-01-28  0:51 ` [PATCH 12/22] KVM: x86: Allow different macros for APICv, CVM, and Hyper-V kvm_x86_ops Sean Christopherson
@ 2022-01-28  0:51 ` Sean Christopherson
  2022-01-28  0:52 ` [PATCH 14/22] KVM: VMX: Use kvm-x86-ops.h to fill vmx_x86_ops Sean Christopherson
                   ` (8 subsequent siblings)
  21 siblings, 0 replies; 29+ messages in thread
From: Sean Christopherson @ 2022-01-28  0:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, Like Xu

Massage VMX's implementation names for kvm_x86_ops to maximize use of
kvm-x86-ops.h.  Leave cpu_has_vmx_wbinvd_exit() as-is to preserve the
cpu_has_vmx_*() pattern used for querying VMCS capabilities.  Keep
pi_has_pending_interrupt() as-is, as the conforming name,
vmx_dy_apicv_has_pending_interrupt(), does a poor job of describing
exactly what is being checked in VMX land.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/vmx/posted_intr.c |  6 +++---
 arch/x86/kvm/vmx/posted_intr.h |  4 ++--
 arch/x86/kvm/vmx/vmx.c         | 26 +++++++++++++-------------
 3 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/arch/x86/kvm/vmx/posted_intr.c b/arch/x86/kvm/vmx/posted_intr.c
index aa1fe9085d77..3834bb30ce54 100644
--- a/arch/x86/kvm/vmx/posted_intr.c
+++ b/arch/x86/kvm/vmx/posted_intr.c
@@ -244,7 +244,7 @@ void vmx_pi_start_assignment(struct kvm *kvm)
 }
 
 /*
- * pi_update_irte - set IRTE for Posted-Interrupts
+ * vmx_pi_update_irte - set IRTE for Posted-Interrupts
  *
  * @kvm: kvm
  * @host_irq: host irq of the interrupt
@@ -252,8 +252,8 @@ void vmx_pi_start_assignment(struct kvm *kvm)
  * @set: set or unset PI
  * returns 0 on success, < 0 on failure
  */
-int pi_update_irte(struct kvm *kvm, unsigned int host_irq, uint32_t guest_irq,
-		   bool set)
+int vmx_pi_update_irte(struct kvm *kvm, unsigned int host_irq,
+		       uint32_t guest_irq, bool set)
 {
 	struct kvm_kernel_irq_routing_entry *e;
 	struct kvm_irq_routing_table *irq_rt;
diff --git a/arch/x86/kvm/vmx/posted_intr.h b/arch/x86/kvm/vmx/posted_intr.h
index eb14e76b84ef..9a45d5c9f116 100644
--- a/arch/x86/kvm/vmx/posted_intr.h
+++ b/arch/x86/kvm/vmx/posted_intr.h
@@ -97,8 +97,8 @@ void vmx_vcpu_pi_put(struct kvm_vcpu *vcpu);
 void pi_wakeup_handler(void);
 void __init pi_init_cpu(int cpu);
 bool pi_has_pending_interrupt(struct kvm_vcpu *vcpu);
-int pi_update_irte(struct kvm *kvm, unsigned int host_irq, uint32_t guest_irq,
-		   bool set);
+int vmx_pi_update_irte(struct kvm *kvm, unsigned int host_irq,
+		       uint32_t guest_irq, bool set);
 void vmx_pi_start_assignment(struct kvm *kvm);
 
 #endif /* __KVM_X86_VMX_POSTED_INTR_H */
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index de66786396bd..2138f7439a19 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -541,7 +541,7 @@ static inline bool cpu_need_virtualize_apic_accesses(struct kvm_vcpu *vcpu)
 	return flexpriority_enabled && lapic_in_kernel(vcpu);
 }
 
-static inline bool report_flexpriority(void)
+static inline bool vmx_cpu_has_accelerated_tpr(void)
 {
 	return flexpriority_enabled;
 }
@@ -2341,7 +2341,7 @@ static int kvm_cpu_vmxon(u64 vmxon_pointer)
 	return -EFAULT;
 }
 
-static int hardware_enable(void)
+static int vmx_hardware_enable(void)
 {
 	int cpu = raw_smp_processor_id();
 	u64 phys_addr = __pa(per_cpu(vmxarea, cpu));
@@ -2382,7 +2382,7 @@ static void vmclear_local_loaded_vmcss(void)
 		__loaded_vmcs_clear(v);
 }
 
-static void hardware_disable(void)
+static void vmx_hardware_disable(void)
 {
 	vmclear_local_loaded_vmcss();
 
@@ -6967,7 +6967,7 @@ static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu)
 	return vmx_exit_handlers_fastpath(vcpu);
 }
 
-static void vmx_free_vcpu(struct kvm_vcpu *vcpu)
+static void vmx_vcpu_free(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
@@ -6978,7 +6978,7 @@ static void vmx_free_vcpu(struct kvm_vcpu *vcpu)
 	free_loaded_vmcs(vmx->loaded_vmcs);
 }
 
-static int vmx_create_vcpu(struct kvm_vcpu *vcpu)
+static int vmx_vcpu_create(struct kvm_vcpu *vcpu)
 {
 	struct vmx_uret_msr *tsx_ctrl;
 	struct vcpu_vmx *vmx;
@@ -7682,7 +7682,7 @@ static void vmx_migrate_timers(struct kvm_vcpu *vcpu)
 	}
 }
 
-static void hardware_unsetup(void)
+static void vmx_hardware_unsetup(void)
 {
 	kvm_set_posted_intr_wakeup_handler(NULL);
 
@@ -7705,18 +7705,18 @@ static bool vmx_check_apicv_inhibit_reasons(ulong bit)
 static struct kvm_x86_ops vmx_x86_ops __initdata = {
 	.name = "kvm_intel",
 
-	.hardware_unsetup = hardware_unsetup,
+	.hardware_unsetup = vmx_hardware_unsetup,
 
-	.hardware_enable = hardware_enable,
-	.hardware_disable = hardware_disable,
-	.cpu_has_accelerated_tpr = report_flexpriority,
+	.hardware_enable = vmx_hardware_enable,
+	.hardware_disable = vmx_hardware_disable,
+	.cpu_has_accelerated_tpr = vmx_cpu_has_accelerated_tpr,
 	.has_emulated_msr = vmx_has_emulated_msr,
 
 	.vm_size = sizeof(struct kvm_vmx),
 	.vm_init = vmx_vm_init,
 
-	.vcpu_create = vmx_create_vcpu,
-	.vcpu_free = vmx_free_vcpu,
+	.vcpu_create = vmx_vcpu_create,
+	.vcpu_free = vmx_vcpu_free,
 	.vcpu_reset = vmx_vcpu_reset,
 
 	.prepare_switch_to_guest = vmx_prepare_switch_to_guest,
@@ -7814,7 +7814,7 @@ static struct kvm_x86_ops vmx_x86_ops __initdata = {
 	.pmu_ops = &intel_pmu_ops,
 	.nested_ops = &vmx_nested_ops,
 
-	.pi_update_irte = pi_update_irte,
+	.pi_update_irte = vmx_pi_update_irte,
 	.pi_start_assignment = vmx_pi_start_assignment,
 
 #ifdef CONFIG_X86_64
-- 
2.35.0.rc0.227.g00780c9af4-goog


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH 14/22] KVM: VMX: Use kvm-x86-ops.h to fill vmx_x86_ops
  2022-01-28  0:51 [PATCH 00/22] KVM: x86: Fill *_x86_ops via kvm-x86-ops.h Sean Christopherson
                   ` (12 preceding siblings ...)
  2022-01-28  0:51 ` [PATCH 13/22] KVM: VMX: Rename VMX functions to conform to kvm_x86_ops names Sean Christopherson
@ 2022-01-28  0:52 ` Sean Christopherson
  2022-01-28  0:52 ` [PATCH 15/22] KVM: x86: Move get_cs_db_l_bits() helper to SVM Sean Christopherson
                   ` (7 subsequent siblings)
  21 siblings, 0 replies; 29+ messages in thread
From: Sean Christopherson @ 2022-01-28  0:52 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, Like Xu

Fill vmx_x86_ops by including kvm-x86-ops.h and defining the appropriate
macros.  Use the default for KVM_X86_APICV_OP as VMX doesn't have a
single prefix for all APICv ops, and the majority of APICv ops that do
conform to the kvm_x86_ops names do so with the standard vmx_ prefix.

Document the handful of exceptions where vmx_x86_ops deviates from the
"default".

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/vmx/vmx.c | 149 +++++++----------------------------------
 1 file changed, 25 insertions(+), 124 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 2138f7439a19..f22d02fe4df3 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7702,141 +7702,42 @@ static bool vmx_check_apicv_inhibit_reasons(ulong bit)
 	return supported & BIT(bit);
 }
 
+/* Not currently implemented for VMX. */
+#define vmx_vm_destroy NULL
+#define vmx_vcpu_blocking NULL
+#define vmx_vcpu_unblocking NULL
+
+/* Redirects to common KVM helpers (hooks provided for SEV-ES). */
+#define vmx_complete_emulated_msr kvm_complete_insn_gp
+#define vmx_vcpu_deliver_sipi_vector kvm_vcpu_deliver_sipi_vector
+
+/* Redirects to preserve VMX's preferred nomenclature. */
+#define vmx_has_wbinvd_exit cpu_has_vmx_wbinvd_exit
+#define vmx_dy_apicv_has_pending_interrupt pi_has_pending_interrupt
+
+/* VMX preemption timer support is 64-bit only as it uses 64-bit division. */
+#ifndef CONFIG_X86_64
+#define vmx_set_hv_timer NULL
+#define vmx_cancel_hv_timer NULL
+#endif
+
 static struct kvm_x86_ops vmx_x86_ops __initdata = {
 	.name = "kvm_intel",
-
-	.hardware_unsetup = vmx_hardware_unsetup,
-
-	.hardware_enable = vmx_hardware_enable,
-	.hardware_disable = vmx_hardware_disable,
-	.cpu_has_accelerated_tpr = vmx_cpu_has_accelerated_tpr,
-	.has_emulated_msr = vmx_has_emulated_msr,
-
 	.vm_size = sizeof(struct kvm_vmx),
-	.vm_init = vmx_vm_init,
-
-	.vcpu_create = vmx_vcpu_create,
-	.vcpu_free = vmx_vcpu_free,
-	.vcpu_reset = vmx_vcpu_reset,
-
-	.prepare_switch_to_guest = vmx_prepare_switch_to_guest,
-	.vcpu_load = vmx_vcpu_load,
-	.vcpu_put = vmx_vcpu_put,
-
-	.update_exception_bitmap = vmx_update_exception_bitmap,
-	.get_msr_feature = vmx_get_msr_feature,
-	.get_msr = vmx_get_msr,
-	.set_msr = vmx_set_msr,
-	.get_segment_base = vmx_get_segment_base,
-	.get_segment = vmx_get_segment,
-	.set_segment = vmx_set_segment,
-	.get_cpl = vmx_get_cpl,
-	.get_cs_db_l_bits = vmx_get_cs_db_l_bits,
-	.set_cr0 = vmx_set_cr0,
-	.is_valid_cr4 = vmx_is_valid_cr4,
-	.set_cr4 = vmx_set_cr4,
-	.set_efer = vmx_set_efer,
-	.get_idt = vmx_get_idt,
-	.set_idt = vmx_set_idt,
-	.get_gdt = vmx_get_gdt,
-	.set_gdt = vmx_set_gdt,
-	.set_dr7 = vmx_set_dr7,
-	.sync_dirty_debug_regs = vmx_sync_dirty_debug_regs,
-	.cache_reg = vmx_cache_reg,
-	.get_rflags = vmx_get_rflags,
-	.set_rflags = vmx_set_rflags,
-	.get_if_flag = vmx_get_if_flag,
-
-	.flush_tlb_all = vmx_flush_tlb_all,
-	.flush_tlb_current = vmx_flush_tlb_current,
-	.flush_tlb_gva = vmx_flush_tlb_gva,
-	.flush_tlb_guest = vmx_flush_tlb_guest,
-
-	.vcpu_pre_run = vmx_vcpu_pre_run,
-	.vcpu_run = vmx_vcpu_run,
-	.handle_exit = vmx_handle_exit,
-	.skip_emulated_instruction = vmx_skip_emulated_instruction,
-	.update_emulated_instruction = vmx_update_emulated_instruction,
-	.set_interrupt_shadow = vmx_set_interrupt_shadow,
-	.get_interrupt_shadow = vmx_get_interrupt_shadow,
-	.patch_hypercall = vmx_patch_hypercall,
-	.inject_irq = vmx_inject_irq,
-	.inject_nmi = vmx_inject_nmi,
-	.queue_exception = vmx_queue_exception,
-	.cancel_injection = vmx_cancel_injection,
-	.interrupt_allowed = vmx_interrupt_allowed,
-	.nmi_allowed = vmx_nmi_allowed,
-	.get_nmi_mask = vmx_get_nmi_mask,
-	.set_nmi_mask = vmx_set_nmi_mask,
-	.enable_nmi_window = vmx_enable_nmi_window,
-	.enable_irq_window = vmx_enable_irq_window,
-	.update_cr8_intercept = vmx_update_cr8_intercept,
-	.set_virtual_apic_mode = vmx_set_virtual_apic_mode,
-	.set_apic_access_page_addr = vmx_set_apic_access_page_addr,
-	.refresh_apicv_exec_ctrl = vmx_refresh_apicv_exec_ctrl,
-	.load_eoi_exitmap = vmx_load_eoi_exitmap,
-	.apicv_post_state_restore = vmx_apicv_post_state_restore,
-	.check_apicv_inhibit_reasons = vmx_check_apicv_inhibit_reasons,
-	.hwapic_irr_update = vmx_hwapic_irr_update,
-	.hwapic_isr_update = vmx_hwapic_isr_update,
-	.guest_apic_has_interrupt = vmx_guest_apic_has_interrupt,
-	.sync_pir_to_irr = vmx_sync_pir_to_irr,
-	.deliver_interrupt = vmx_deliver_interrupt,
-	.dy_apicv_has_pending_interrupt = pi_has_pending_interrupt,
-
-	.set_tss_addr = vmx_set_tss_addr,
-	.set_identity_map_addr = vmx_set_identity_map_addr,
-	.get_mt_mask = vmx_get_mt_mask,
-
-	.get_exit_info = vmx_get_exit_info,
-
-	.vcpu_after_set_cpuid = vmx_vcpu_after_set_cpuid,
-
-	.has_wbinvd_exit = cpu_has_vmx_wbinvd_exit,
-
-	.get_l2_tsc_offset = vmx_get_l2_tsc_offset,
-	.get_l2_tsc_multiplier = vmx_get_l2_tsc_multiplier,
-	.write_tsc_offset = vmx_write_tsc_offset,
-	.write_tsc_multiplier = vmx_write_tsc_multiplier,
-
-	.load_mmu_pgd = vmx_load_mmu_pgd,
-
-	.check_intercept = vmx_check_intercept,
-	.handle_exit_irqoff = vmx_handle_exit_irqoff,
-
-	.request_immediate_exit = vmx_request_immediate_exit,
-
-	.sched_in = vmx_sched_in,
-
 	.cpu_dirty_log_size = PML_ENTITY_NUM,
-	.update_cpu_dirty_logging = vmx_update_cpu_dirty_logging,
 
 	.pmu_ops = &intel_pmu_ops,
 	.nested_ops = &vmx_nested_ops,
 
-	.pi_update_irte = vmx_pi_update_irte,
-	.pi_start_assignment = vmx_pi_start_assignment,
+#define KVM_X86_OP(func) .func = vmx_##func,
 
-#ifdef CONFIG_X86_64
-	.set_hv_timer = vmx_set_hv_timer,
-	.cancel_hv_timer = vmx_cancel_hv_timer,
-#endif
+/* VMX doesn't yet support confidential VMs. */
+#define KVM_X86_CVM_OP(func) .func = NULL,
 
-	.setup_mce = vmx_setup_mce,
+/* Hyper-V hooks are filled at runtime. */
+#define KVM_X86_HYPERV_OP(func) .func = NULL,
 
-	.smi_allowed = vmx_smi_allowed,
-	.enter_smm = vmx_enter_smm,
-	.leave_smm = vmx_leave_smm,
-	.enable_smi_window = vmx_enable_smi_window,
-
-	.can_emulate_instruction = vmx_can_emulate_instruction,
-	.apic_init_signal_blocked = vmx_apic_init_signal_blocked,
-	.migrate_timers = vmx_migrate_timers,
-
-	.msr_filter_changed = vmx_msr_filter_changed,
-	.complete_emulated_msr = kvm_complete_insn_gp,
-
-	.vcpu_deliver_sipi_vector = kvm_vcpu_deliver_sipi_vector,
+#include <asm/kvm-x86-ops.h>
 };
 
 static __init void vmx_setup_user_return_msrs(void)
-- 
2.35.0.rc0.227.g00780c9af4-goog


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH 15/22] KVM: x86: Move get_cs_db_l_bits() helper to SVM
  2022-01-28  0:51 [PATCH 00/22] KVM: x86: Fill *_x86_ops via kvm-x86-ops.h Sean Christopherson
                   ` (13 preceding siblings ...)
  2022-01-28  0:52 ` [PATCH 14/22] KVM: VMX: Use kvm-x86-ops.h to fill vmx_x86_ops Sean Christopherson
@ 2022-01-28  0:52 ` Sean Christopherson
  2022-01-28  0:52 ` [PATCH 16/22] KVM: SVM: Rename svm_flush_tlb() to svm_flush_tlb_current() Sean Christopherson
                   ` (6 subsequent siblings)
  21 siblings, 0 replies; 29+ messages in thread
From: Sean Christopherson @ 2022-01-28  0:52 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, Like Xu

Move kvm_get_cs_db_l_bits() to SVM and rename it appropriately so that
its svm_x86_ops entry can be filled via kvm-x86-ops.h, and to eliminate a
superfluous export from KVM x86.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/include/asm/kvm_host.h |  1 -
 arch/x86/kvm/svm/svm.c          | 11 ++++++++++-
 arch/x86/kvm/x86.c              | 10 ----------
 3 files changed, 10 insertions(+), 12 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 91c0e4957bd0..f97d155810ac 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1717,7 +1717,6 @@ int kvm_set_dr(struct kvm_vcpu *vcpu, int dr, unsigned long val);
 void kvm_get_dr(struct kvm_vcpu *vcpu, int dr, unsigned long *val);
 unsigned long kvm_get_cr8(struct kvm_vcpu *vcpu);
 void kvm_lmsw(struct kvm_vcpu *vcpu, unsigned long msw);
-void kvm_get_cs_db_l_bits(struct kvm_vcpu *vcpu, int *db, int *l);
 int kvm_emulate_xsetbv(struct kvm_vcpu *vcpu);
 
 int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 991d3e628c60..fda09a6ea3ba 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1531,6 +1531,15 @@ static int svm_get_cpl(struct kvm_vcpu *vcpu)
 	return save->cpl;
 }
 
+static void svm_get_cs_db_l_bits(struct kvm_vcpu *vcpu, int *db, int *l)
+{
+	struct kvm_segment cs;
+
+	svm_get_segment(vcpu, &cs, VCPU_SREG_CS);
+	*db = cs.db;
+	*l = cs.l;
+}
+
 static void svm_get_idt(struct kvm_vcpu *vcpu, struct desc_ptr *dt)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
@@ -4486,7 +4495,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 	.get_segment = svm_get_segment,
 	.set_segment = svm_set_segment,
 	.get_cpl = svm_get_cpl,
-	.get_cs_db_l_bits = kvm_get_cs_db_l_bits,
+	.get_cs_db_l_bits = svm_get_cs_db_l_bits,
 	.set_cr0 = svm_set_cr0,
 	.post_set_cr3 = svm_post_set_cr3,
 	.is_valid_cr4 = svm_is_valid_cr4,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 580a2adaec7c..b151db419590 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10570,16 +10570,6 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
 	return 0;
 }
 
-void kvm_get_cs_db_l_bits(struct kvm_vcpu *vcpu, int *db, int *l)
-{
-	struct kvm_segment cs;
-
-	kvm_get_segment(vcpu, &cs, VCPU_SREG_CS);
-	*db = cs.db;
-	*l = cs.l;
-}
-EXPORT_SYMBOL_GPL(kvm_get_cs_db_l_bits);
-
 static void __get_sregs_common(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
 {
 	struct desc_ptr dt;
-- 
2.35.0.rc0.227.g00780c9af4-goog


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH 16/22] KVM: SVM: Rename svm_flush_tlb() to svm_flush_tlb_current()
  2022-01-28  0:51 [PATCH 00/22] KVM: x86: Fill *_x86_ops via kvm-x86-ops.h Sean Christopherson
                   ` (14 preceding siblings ...)
  2022-01-28  0:52 ` [PATCH 15/22] KVM: x86: Move get_cs_db_l_bits() helper to SVM Sean Christopherson
@ 2022-01-28  0:52 ` Sean Christopherson
  2022-01-28  0:52 ` [PATCH 17/22] KVM: SVM: Remove unused MAX_INST_SIZE #define Sean Christopherson
                   ` (5 subsequent siblings)
  21 siblings, 0 replies; 29+ messages in thread
From: Sean Christopherson @ 2022-01-28  0:52 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, Like Xu

Rename svm_flush_tlb() to svm_flush_tlb_current() so that at least one of
the flushing operations in svm_x86_ops can be filled via kvm-x86-ops.h,
and to document the scope of the flush (specifically that it doesn't
flush "all").

Opportunistically make svm_flush_tlb_current(), formerly svm_flush_tlb(),
static.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/svm/svm.c | 12 +++++++-----
 arch/x86/kvm/svm/svm.h |  1 -
 2 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index fda09a6ea3ba..5382710ba106 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -265,6 +265,8 @@ u32 svm_msrpm_offset(u32 msr)
 
 #define MAX_INST_SIZE 15
 
+static void svm_flush_tlb_current(struct kvm_vcpu *vcpu);
+
 static int get_npt_level(void)
 {
 #ifdef CONFIG_X86_64
@@ -1654,7 +1656,7 @@ void svm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
 	unsigned long old_cr4 = vcpu->arch.cr4;
 
 	if (npt_enabled && ((old_cr4 ^ cr4) & X86_CR4_PGE))
-		svm_flush_tlb(vcpu);
+		svm_flush_tlb_current(vcpu);
 
 	vcpu->arch.cr4 = cr4;
 	if (!npt_enabled)
@@ -3489,7 +3491,7 @@ static int svm_set_identity_map_addr(struct kvm *kvm, u64 ident_addr)
 	return 0;
 }
 
-void svm_flush_tlb(struct kvm_vcpu *vcpu)
+static void svm_flush_tlb_current(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
@@ -4512,10 +4514,10 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 	.set_rflags = svm_set_rflags,
 	.get_if_flag = svm_get_if_flag,
 
-	.flush_tlb_all = svm_flush_tlb,
-	.flush_tlb_current = svm_flush_tlb,
+	.flush_tlb_all = svm_flush_tlb_current,
+	.flush_tlb_current = svm_flush_tlb_current,
 	.flush_tlb_gva = svm_flush_tlb_gva,
-	.flush_tlb_guest = svm_flush_tlb,
+	.flush_tlb_guest = svm_flush_tlb_current,
 
 	.vcpu_pre_run = svm_vcpu_pre_run,
 	.vcpu_run = svm_vcpu_run,
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index baa5435f1bde..16ad5fa128f4 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -480,7 +480,6 @@ void svm_vcpu_free_msrpm(u32 *msrpm);
 int svm_set_efer(struct kvm_vcpu *vcpu, u64 efer);
 void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0);
 void svm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4);
-void svm_flush_tlb(struct kvm_vcpu *vcpu);
 void disable_nmi_singlestep(struct vcpu_svm *svm);
 bool svm_smi_blocked(struct kvm_vcpu *vcpu);
 bool svm_nmi_blocked(struct kvm_vcpu *vcpu);
-- 
2.35.0.rc0.227.g00780c9af4-goog


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH 17/22] KVM: SVM: Remove unused MAX_INST_SIZE #define
  2022-01-28  0:51 [PATCH 00/22] KVM: x86: Fill *_x86_ops via kvm-x86-ops.h Sean Christopherson
                   ` (15 preceding siblings ...)
  2022-01-28  0:52 ` [PATCH 16/22] KVM: SVM: Rename svm_flush_tlb() to svm_flush_tlb_current() Sean Christopherson
@ 2022-01-28  0:52 ` Sean Christopherson
  2022-01-28  0:52 ` [PATCH 18/22] KVM: SVM: Rename AVIC helpers to use "avic" prefix instead of "svm" Sean Christopherson
                   ` (4 subsequent siblings)
  21 siblings, 0 replies; 29+ messages in thread
From: Sean Christopherson @ 2022-01-28  0:52 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, Like Xu

Remove SVM's MAX_INST_SIZE, which has long since been obsoleted by the
common MAX_INSN_SIZE.  Note, the latter's "insn" is also the generally
preferred abbreviation of instruction.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/svm/svm.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 5382710ba106..87e136b81991 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -263,8 +263,6 @@ u32 svm_msrpm_offset(u32 msr)
 	return MSR_INVALID;
 }
 
-#define MAX_INST_SIZE 15
-
 static void svm_flush_tlb_current(struct kvm_vcpu *vcpu);
 
 static int get_npt_level(void)
-- 
2.35.0.rc0.227.g00780c9af4-goog


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH 18/22] KVM: SVM: Rename AVIC helpers to use "avic" prefix instead of "svm"
  2022-01-28  0:51 [PATCH 00/22] KVM: x86: Fill *_x86_ops via kvm-x86-ops.h Sean Christopherson
                   ` (16 preceding siblings ...)
  2022-01-28  0:52 ` [PATCH 17/22] KVM: SVM: Remove unused MAX_INST_SIZE #define Sean Christopherson
@ 2022-01-28  0:52 ` Sean Christopherson
  2022-01-28  0:52 ` [PATCH 19/22] KVM: x86: Use more verbose names for mem encrypt kvm_x86_ops hooks Sean Christopherson
                   ` (3 subsequent siblings)
  21 siblings, 0 replies; 29+ messages in thread
From: Sean Christopherson @ 2022-01-28  0:52 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, Like Xu

Use "avic" instead of "svm" for SVM's all of APICv hooks and make a few
additional funciton name tweaks so that the AVIC functions conform to
their associated kvm_x86_ops hooks.  This will allow using kvm-x86-ops.h
with a custom KVM_X86_APICV_OP() macro to fill all AVIC hooks in one fell
swoop.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/svm/avic.c | 28 ++++++++++++++--------------
 arch/x86/kvm/svm/svm.c  | 18 +++++++++---------
 arch/x86/kvm/svm/svm.h  | 20 ++++++++++----------
 3 files changed, 33 insertions(+), 33 deletions(-)

diff --git a/arch/x86/kvm/svm/avic.c b/arch/x86/kvm/svm/avic.c
index 90364d02f22a..99f907ec5aa8 100644
--- a/arch/x86/kvm/svm/avic.c
+++ b/arch/x86/kvm/svm/avic.c
@@ -579,7 +579,7 @@ int avic_init_vcpu(struct vcpu_svm *svm)
 	return ret;
 }
 
-void avic_post_state_restore(struct kvm_vcpu *vcpu)
+void avic_apicv_post_state_restore(struct kvm_vcpu *vcpu)
 {
 	if (avic_handle_apic_id_update(vcpu) != 0)
 		return;
@@ -587,20 +587,20 @@ void avic_post_state_restore(struct kvm_vcpu *vcpu)
 	avic_handle_ldr_update(vcpu);
 }
 
-void svm_set_virtual_apic_mode(struct kvm_vcpu *vcpu)
+void avic_set_virtual_apic_mode(struct kvm_vcpu *vcpu)
 {
 	return;
 }
 
-void svm_hwapic_irr_update(struct kvm_vcpu *vcpu, int max_irr)
+void avic_hwapic_irr_update(struct kvm_vcpu *vcpu, int max_irr)
 {
 }
 
-void svm_hwapic_isr_update(struct kvm_vcpu *vcpu, int max_isr)
+void avic_hwapic_isr_update(struct kvm_vcpu *vcpu, int max_isr)
 {
 }
 
-static int svm_set_pi_irte_mode(struct kvm_vcpu *vcpu, bool activate)
+static int avic_set_pi_irte_mode(struct kvm_vcpu *vcpu, bool activate)
 {
 	int ret = 0;
 	unsigned long flags;
@@ -632,7 +632,7 @@ static int svm_set_pi_irte_mode(struct kvm_vcpu *vcpu, bool activate)
 	return ret;
 }
 
-void svm_refresh_apicv_exec_ctrl(struct kvm_vcpu *vcpu)
+void avic_refresh_apicv_exec_ctrl(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 	struct vmcb *vmcb = svm->vmcb01.ptr;
@@ -649,7 +649,7 @@ void svm_refresh_apicv_exec_ctrl(struct kvm_vcpu *vcpu)
 		 * we need to check and update the AVIC logical APIC ID table
 		 * accordingly before re-activating.
 		 */
-		avic_post_state_restore(vcpu);
+		avic_apicv_post_state_restore(vcpu);
 		vmcb->control.int_ctl |= AVIC_ENABLE_MASK;
 	} else {
 		vmcb->control.int_ctl &= ~AVIC_ENABLE_MASK;
@@ -661,10 +661,10 @@ void svm_refresh_apicv_exec_ctrl(struct kvm_vcpu *vcpu)
 	else
 		avic_vcpu_put(vcpu);
 
-	svm_set_pi_irte_mode(vcpu, activated);
+	avic_set_pi_irte_mode(vcpu, activated);
 }
 
-void svm_load_eoi_exitmap(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap)
+void avic_load_eoi_exitmap(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap)
 {
 	return;
 }
@@ -715,7 +715,7 @@ int svm_deliver_avic_intr(struct kvm_vcpu *vcpu, int vec)
 	return 0;
 }
 
-bool svm_dy_apicv_has_pending_interrupt(struct kvm_vcpu *vcpu)
+bool avic_dy_apicv_has_pending_interrupt(struct kvm_vcpu *vcpu)
 {
 	return false;
 }
@@ -817,7 +817,7 @@ get_pi_vcpu_info(struct kvm *kvm, struct kvm_kernel_irq_routing_entry *e,
 }
 
 /*
- * svm_update_pi_irte - set IRTE for Posted-Interrupts
+ * avic_pi_update_irte - set IRTE for Posted-Interrupts
  *
  * @kvm: kvm
  * @host_irq: host irq of the interrupt
@@ -825,8 +825,8 @@ get_pi_vcpu_info(struct kvm *kvm, struct kvm_kernel_irq_routing_entry *e,
  * @set: set or unset PI
  * returns 0 on success, < 0 on failure
  */
-int svm_update_pi_irte(struct kvm *kvm, unsigned int host_irq,
-		       uint32_t guest_irq, bool set)
+int avic_pi_update_irte(struct kvm *kvm, unsigned int host_irq,
+			uint32_t guest_irq, bool set)
 {
 	struct kvm_kernel_irq_routing_entry *e;
 	struct kvm_irq_routing_table *irq_rt;
@@ -926,7 +926,7 @@ int svm_update_pi_irte(struct kvm *kvm, unsigned int host_irq,
 	return ret;
 }
 
-bool svm_check_apicv_inhibit_reasons(ulong bit)
+bool avic_check_apicv_inhibit_reasons(ulong bit)
 {
 	ulong supported = BIT(APICV_INHIBIT_REASON_DISABLE) |
 			  BIT(APICV_INHIBIT_REASON_ABSENT) |
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 87e136b81991..a6ddc8b7c63b 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4536,13 +4536,13 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 	.enable_nmi_window = svm_enable_nmi_window,
 	.enable_irq_window = svm_enable_irq_window,
 	.update_cr8_intercept = svm_update_cr8_intercept,
-	.set_virtual_apic_mode = svm_set_virtual_apic_mode,
-	.refresh_apicv_exec_ctrl = svm_refresh_apicv_exec_ctrl,
-	.check_apicv_inhibit_reasons = svm_check_apicv_inhibit_reasons,
-	.load_eoi_exitmap = svm_load_eoi_exitmap,
-	.hwapic_irr_update = svm_hwapic_irr_update,
-	.hwapic_isr_update = svm_hwapic_isr_update,
-	.apicv_post_state_restore = avic_post_state_restore,
+	.set_virtual_apic_mode = avic_set_virtual_apic_mode,
+	.refresh_apicv_exec_ctrl = avic_refresh_apicv_exec_ctrl,
+	.check_apicv_inhibit_reasons = avic_check_apicv_inhibit_reasons,
+	.load_eoi_exitmap = avic_load_eoi_exitmap,
+	.hwapic_irr_update = avic_hwapic_irr_update,
+	.hwapic_isr_update = avic_hwapic_isr_update,
+	.apicv_post_state_restore = avic_apicv_post_state_restore,
 
 	.set_tss_addr = svm_set_tss_addr,
 	.set_identity_map_addr = svm_set_identity_map_addr,
@@ -4572,8 +4572,8 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 	.nested_ops = &svm_nested_ops,
 
 	.deliver_interrupt = svm_deliver_interrupt,
-	.dy_apicv_has_pending_interrupt = svm_dy_apicv_has_pending_interrupt,
-	.pi_update_irte = svm_update_pi_irte,
+	.dy_apicv_has_pending_interrupt = avic_dy_apicv_has_pending_interrupt,
+	.pi_update_irte = avic_pi_update_irte,
 	.setup_mce = svm_setup_mce,
 
 	.smi_allowed = svm_smi_allowed,
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 16ad5fa128f4..096abbf01969 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -575,17 +575,17 @@ int avic_unaccelerated_access_interception(struct kvm_vcpu *vcpu);
 int avic_init_vcpu(struct vcpu_svm *svm);
 void avic_vcpu_load(struct kvm_vcpu *vcpu, int cpu);
 void avic_vcpu_put(struct kvm_vcpu *vcpu);
-void avic_post_state_restore(struct kvm_vcpu *vcpu);
-void svm_set_virtual_apic_mode(struct kvm_vcpu *vcpu);
-void svm_refresh_apicv_exec_ctrl(struct kvm_vcpu *vcpu);
-bool svm_check_apicv_inhibit_reasons(ulong bit);
-void svm_load_eoi_exitmap(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap);
-void svm_hwapic_irr_update(struct kvm_vcpu *vcpu, int max_irr);
-void svm_hwapic_isr_update(struct kvm_vcpu *vcpu, int max_isr);
+void avic_apicv_post_state_restore(struct kvm_vcpu *vcpu);
+void avic_set_virtual_apic_mode(struct kvm_vcpu *vcpu);
+void avic_refresh_apicv_exec_ctrl(struct kvm_vcpu *vcpu);
+bool avic_check_apicv_inhibit_reasons(ulong bit);
+void avic_load_eoi_exitmap(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap);
+void avic_hwapic_irr_update(struct kvm_vcpu *vcpu, int max_irr);
+void avic_hwapic_isr_update(struct kvm_vcpu *vcpu, int max_isr);
 int svm_deliver_avic_intr(struct kvm_vcpu *vcpu, int vec);
-bool svm_dy_apicv_has_pending_interrupt(struct kvm_vcpu *vcpu);
-int svm_update_pi_irte(struct kvm *kvm, unsigned int host_irq,
-		       uint32_t guest_irq, bool set);
+bool avic_dy_apicv_has_pending_interrupt(struct kvm_vcpu *vcpu);
+int avic_pi_update_irte(struct kvm *kvm, unsigned int host_irq,
+			uint32_t guest_irq, bool set);
 void avic_vcpu_blocking(struct kvm_vcpu *vcpu);
 void avic_vcpu_unblocking(struct kvm_vcpu *vcpu);
 
-- 
2.35.0.rc0.227.g00780c9af4-goog


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH 19/22] KVM: x86: Use more verbose names for mem encrypt kvm_x86_ops hooks
  2022-01-28  0:51 [PATCH 00/22] KVM: x86: Fill *_x86_ops via kvm-x86-ops.h Sean Christopherson
                   ` (17 preceding siblings ...)
  2022-01-28  0:52 ` [PATCH 18/22] KVM: SVM: Rename AVIC helpers to use "avic" prefix instead of "svm" Sean Christopherson
@ 2022-01-28  0:52 ` Sean Christopherson
  2022-01-28  0:52 ` [PATCH 20/22] KVM: SVM: Rename SEV implementations to conform to " Sean Christopherson
                   ` (2 subsequent siblings)
  21 siblings, 0 replies; 29+ messages in thread
From: Sean Christopherson @ 2022-01-28  0:52 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, Like Xu

Use slightly more verbose names for the so-called "memory encrypt",
a.k.a. "mem enc", kvm_x86_ops hooks to bridge the gap between the current
super short kvm_x86_ops names and SVM's more verbose, but non-conforming
names.  This is a step toward using kvm-x86-ops.h with KVM_X86_CVM_OP()
to fill svm_x86_ops.

Opportunistically rename mem_enc_op() to mem_enc_ioctl() to better
reflect its true nature, as it really is a full fledged ioctl() of its
own.  Ideally, the hook would be named confidential_vm_ioctl() or so, as
the ioctl() is a gateway to more than just memory encryption, and because
its underlying purpose to support Confidential VMs, which can be provided
without memory encryption, e.g. if the TCB of the guest includes the host
kernel but not host userspace, or by isolation in hardware without
encrypting memory.  But, diverging from KVM_MEMORY_ENCRYPT_OP even
further is undeseriable, and short of creating alises for all related
ioctl()s, which introduces a different flavor of divergence, KVM is stuck
with the nomenclature.
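
For context, a sketch of the dispatch convention the x86.c hunks below
follow (the wrapper name here is hypothetical; the real code open-codes
the check in kvm_arch_vm_ioctl()): a hook left NULL by the vendor makes
the ioctl fail with -ENOTTY, as if it weren't implemented at all.

  static long kvm_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
  {
          /* No vendor implementation => behave as an unknown ioctl. */
          if (!kvm_x86_ops.mem_enc_ioctl)
                  return -ENOTTY;

          return static_call(kvm_x86_mem_enc_ioctl)(kvm, argp);
  }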

Defer renaming SVM's functions to a future commit as there are additional
changes needed to make SVM fully conforming and to match reality (looking
at you, svm_vm_copy_asid_from()).

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/include/asm/kvm-x86-ops.h |  6 +++---
 arch/x86/include/asm/kvm_host.h    |  6 +++---
 arch/x86/kvm/svm/sev.c             |  2 +-
 arch/x86/kvm/svm/svm.c             |  6 +++---
 arch/x86/kvm/svm/svm.h             |  2 +-
 arch/x86/kvm/x86.c                 | 18 ++++++++++++------
 6 files changed, 23 insertions(+), 17 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index cb3af3a55317..efc4d5da45ad 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -133,9 +133,9 @@ KVM_X86_HYPERV_OP(tlb_remote_flush)
 KVM_X86_HYPERV_OP(tlb_remote_flush_with_range)
 KVM_X86_HYPERV_OP(enable_direct_tlbflush)
 
-KVM_X86_CVM_OP(mem_enc_op)
-KVM_X86_CVM_OP(mem_enc_reg_region)
-KVM_X86_CVM_OP(mem_enc_unreg_region)
+KVM_X86_CVM_OP(mem_enc_ioctl)
+KVM_X86_CVM_OP(mem_enc_register_region)
+KVM_X86_CVM_OP(mem_enc_unregister_region)
 KVM_X86_CVM_OP(vm_copy_enc_context_from)
 KVM_X86_CVM_OP(vm_move_enc_context_from)
 KVM_X86_CVM_OP(post_set_cr3)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index f97d155810ac..6228c12fc6c3 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1475,9 +1475,9 @@ struct kvm_x86_ops {
 	int (*leave_smm)(struct kvm_vcpu *vcpu, const char *smstate);
 	void (*enable_smi_window)(struct kvm_vcpu *vcpu);
 
-	int (*mem_enc_op)(struct kvm *kvm, void __user *argp);
-	int (*mem_enc_reg_region)(struct kvm *kvm, struct kvm_enc_region *argp);
-	int (*mem_enc_unreg_region)(struct kvm *kvm, struct kvm_enc_region *argp);
+	int (*mem_enc_ioctl)(struct kvm *kvm, void __user *argp);
+	int (*mem_enc_register_region)(struct kvm *kvm, struct kvm_enc_region *argp);
+	int (*mem_enc_unregister_region)(struct kvm *kvm, struct kvm_enc_region *argp);
 	int (*vm_copy_enc_context_from)(struct kvm *kvm, unsigned int source_fd);
 	int (*vm_move_enc_context_from)(struct kvm *kvm, unsigned int source_fd);
 
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index b82eeef89a3e..7f346ddcae0a 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -1761,7 +1761,7 @@ int svm_vm_migrate_from(struct kvm *kvm, unsigned int source_fd)
 	return ret;
 }
 
-int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
+int svm_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
 {
 	struct kvm_sev_cmd sev_cmd;
 	int r;
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index a6ddc8b7c63b..4b9041e931a8 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4581,9 +4581,9 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 	.leave_smm = svm_leave_smm,
 	.enable_smi_window = svm_enable_smi_window,
 
-	.mem_enc_op = svm_mem_enc_op,
-	.mem_enc_reg_region = svm_register_enc_region,
-	.mem_enc_unreg_region = svm_unregister_enc_region,
+	.mem_enc_ioctl = svm_mem_enc_ioctl,
+	.mem_enc_register_region = svm_register_enc_region,
+	.mem_enc_unregister_region = svm_unregister_enc_region,
 
 	.vm_copy_enc_context_from = svm_vm_copy_asid_from,
 	.vm_move_enc_context_from = svm_vm_migrate_from,
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 096abbf01969..7cf81e029f9c 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -598,7 +598,7 @@ void avic_vcpu_unblocking(struct kvm_vcpu *vcpu);
 extern unsigned int max_sev_asid;
 
 void sev_vm_destroy(struct kvm *kvm);
-int svm_mem_enc_op(struct kvm *kvm, void __user *argp);
+int svm_mem_enc_ioctl(struct kvm *kvm, void __user *argp);
 int svm_register_enc_region(struct kvm *kvm,
 			    struct kvm_enc_region *range);
 int svm_unregister_enc_region(struct kvm *kvm,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index b151db419590..01f68b3da5ee 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -6444,8 +6444,10 @@ long kvm_arch_vm_ioctl(struct file *filp,
 		break;
 	case KVM_MEMORY_ENCRYPT_OP: {
 		r = -ENOTTY;
-		if (kvm_x86_ops.mem_enc_op)
-			r = static_call(kvm_x86_mem_enc_op)(kvm, argp);
+		if (!kvm_x86_ops.mem_enc_ioctl)
+			goto out;
+
+		r = static_call(kvm_x86_mem_enc_ioctl)(kvm, argp);
 		break;
 	}
 	case KVM_MEMORY_ENCRYPT_REG_REGION: {
@@ -6456,8 +6458,10 @@ long kvm_arch_vm_ioctl(struct file *filp,
 			goto out;
 
 		r = -ENOTTY;
-		if (kvm_x86_ops.mem_enc_reg_region)
-			r = static_call(kvm_x86_mem_enc_reg_region)(kvm, &region);
+		if (!kvm_x86_ops.mem_enc_register_region)
+			goto out;
+
+		r = static_call(kvm_x86_mem_enc_register_region)(kvm, &region);
 		break;
 	}
 	case KVM_MEMORY_ENCRYPT_UNREG_REGION: {
@@ -6468,8 +6472,10 @@ long kvm_arch_vm_ioctl(struct file *filp,
 			goto out;
 
 		r = -ENOTTY;
-		if (kvm_x86_ops.mem_enc_unreg_region)
-			r = static_call(kvm_x86_mem_enc_unreg_region)(kvm, &region);
+		if (!kvm_x86_ops.mem_enc_unregister_region)
+			goto out;
+
+		r = static_call(kvm_x86_mem_enc_unregister_region)(kvm, &region);
 		break;
 	}
 	case KVM_HYPERV_EVENTFD: {
-- 
2.35.0.rc0.227.g00780c9af4-goog


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH 20/22] KVM: SVM: Rename SEV implementations to conform to kvm_x86_ops hooks
  2022-01-28  0:51 [PATCH 00/22] KVM: x86: Fill *_x86_ops via kvm-x86-ops.h Sean Christopherson
                   ` (18 preceding siblings ...)
  2022-01-28  0:52 ` [PATCH 19/22] KVM: x86: Use more verbose names for mem encrypt kvm_x86_ops hooks Sean Christopherson
@ 2022-01-28  0:52 ` Sean Christopherson
  2022-01-28  0:52 ` [PATCH 21/22] KVM: SVM: Rename hook implementations to conform to kvm_x86_ops' names Sean Christopherson
  2022-01-28  0:52 ` [PATCH 22/22] KVM: SVM: Use kvm-x86-ops.h to fill svm_x86_ops Sean Christopherson
  21 siblings, 0 replies; 29+ messages in thread
From: Sean Christopherson @ 2022-01-28  0:52 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, Like Xu

Rename svm_vm_copy_asid_from() and svm_vm_migrate_from() to conform to
the names used by kvm_x86_ops, and opportunistically use "sev" instead of
"svm" to more precisely identify the role of the hooks.

svm_vm_copy_asid_from() in particular was poorly named as the function
does much more than simply copy the ASID.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/svm/sev.c | 14 +++++++-------
 arch/x86/kvm/svm/svm.c | 14 +++++++-------
 arch/x86/kvm/svm/svm.h | 14 +++++++-------
 3 files changed, 21 insertions(+), 21 deletions(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 7f346ddcae0a..4662e5fd7559 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -1681,7 +1681,7 @@ static int sev_es_migrate_from(struct kvm *dst, struct kvm *src)
 	return 0;
 }
 
-int svm_vm_migrate_from(struct kvm *kvm, unsigned int source_fd)
+int sev_vm_move_enc_context_from(struct kvm *kvm, unsigned int source_fd)
 {
 	struct kvm_sev_info *dst_sev = &to_kvm_svm(kvm)->sev_info;
 	struct kvm_sev_info *src_sev, *cg_cleanup_sev;
@@ -1761,7 +1761,7 @@ int svm_vm_migrate_from(struct kvm *kvm, unsigned int source_fd)
 	return ret;
 }
 
-int svm_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
+int sev_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
 {
 	struct kvm_sev_cmd sev_cmd;
 	int r;
@@ -1858,8 +1858,8 @@ int svm_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
 	return r;
 }
 
-int svm_register_enc_region(struct kvm *kvm,
-			    struct kvm_enc_region *range)
+int sev_mem_enc_register_region(struct kvm *kvm,
+				struct kvm_enc_region *range)
 {
 	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
 	struct enc_region *region;
@@ -1932,8 +1932,8 @@ static void __unregister_enc_region_locked(struct kvm *kvm,
 	kfree(region);
 }
 
-int svm_unregister_enc_region(struct kvm *kvm,
-			      struct kvm_enc_region *range)
+int sev_mem_enc_unregister_region(struct kvm *kvm,
+				  struct kvm_enc_region *range)
 {
 	struct enc_region *region;
 	int ret;
@@ -1972,7 +1972,7 @@ int svm_unregister_enc_region(struct kvm *kvm,
 	return ret;
 }
 
-int svm_vm_copy_asid_from(struct kvm *kvm, unsigned int source_fd)
+int sev_vm_copy_enc_context_from(struct kvm *kvm, unsigned int source_fd)
 {
 	struct file *source_kvm_file;
 	struct kvm *source_kvm;
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 4b9041e931a8..a075c6458a27 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1574,7 +1574,7 @@ static void svm_set_gdt(struct kvm_vcpu *vcpu, struct desc_ptr *dt)
 	vmcb_mark_dirty(svm->vmcb, VMCB_DT);
 }
 
-static void svm_post_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
+static void sev_post_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
@@ -4497,7 +4497,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 	.get_cpl = svm_get_cpl,
 	.get_cs_db_l_bits = svm_get_cs_db_l_bits,
 	.set_cr0 = svm_set_cr0,
-	.post_set_cr3 = svm_post_set_cr3,
+	.post_set_cr3 = sev_post_set_cr3,
 	.is_valid_cr4 = svm_is_valid_cr4,
 	.set_cr4 = svm_set_cr4,
 	.set_efer = svm_set_efer,
@@ -4581,12 +4581,12 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 	.leave_smm = svm_leave_smm,
 	.enable_smi_window = svm_enable_smi_window,
 
-	.mem_enc_ioctl = svm_mem_enc_ioctl,
-	.mem_enc_register_region = svm_register_enc_region,
-	.mem_enc_unregister_region = svm_unregister_enc_region,
+	.mem_enc_ioctl = sev_mem_enc_ioctl,
+	.mem_enc_register_region = sev_mem_enc_register_region,
+	.mem_enc_unregister_region = sev_mem_enc_unregister_region,
 
-	.vm_copy_enc_context_from = svm_vm_copy_asid_from,
-	.vm_move_enc_context_from = svm_vm_migrate_from,
+	.vm_copy_enc_context_from = sev_vm_copy_enc_context_from,
+	.vm_move_enc_context_from = sev_vm_move_enc_context_from,
 
 	.can_emulate_instruction = svm_can_emulate_instruction,
 
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 7cf81e029f9c..67c17509c4c0 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -598,13 +598,13 @@ void avic_vcpu_unblocking(struct kvm_vcpu *vcpu);
 extern unsigned int max_sev_asid;
 
 void sev_vm_destroy(struct kvm *kvm);
-int svm_mem_enc_ioctl(struct kvm *kvm, void __user *argp);
-int svm_register_enc_region(struct kvm *kvm,
-			    struct kvm_enc_region *range);
-int svm_unregister_enc_region(struct kvm *kvm,
-			      struct kvm_enc_region *range);
-int svm_vm_copy_asid_from(struct kvm *kvm, unsigned int source_fd);
-int svm_vm_migrate_from(struct kvm *kvm, unsigned int source_fd);
+int sev_mem_enc_ioctl(struct kvm *kvm, void __user *argp);
+int sev_mem_enc_register_region(struct kvm *kvm,
+				struct kvm_enc_region *range);
+int sev_mem_enc_unregister_region(struct kvm *kvm,
+				  struct kvm_enc_region *range);
+int sev_vm_copy_enc_context_from(struct kvm *kvm, unsigned int source_fd);
+int sev_vm_move_enc_context_from(struct kvm *kvm, unsigned int source_fd);
 void pre_sev_run(struct vcpu_svm *svm, int cpu);
 void __init sev_set_cpu_caps(void);
 void __init sev_hardware_setup(void);
-- 
2.35.0.rc0.227.g00780c9af4-goog


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH 21/22] KVM: SVM: Rename hook implementations to conform to kvm_x86_ops' names
  2022-01-28  0:51 [PATCH 00/22] KVM: x86: Fill *_x86_ops via kvm-x86-ops.h Sean Christopherson
                   ` (19 preceding siblings ...)
  2022-01-28  0:52 ` [PATCH 20/22] KVM: SVM: Rename SEV implementations to conform to " Sean Christopherson
@ 2022-01-28  0:52 ` Sean Christopherson
  2022-01-28  0:52 ` [PATCH 22/22] KVM: SVM: Use kvm-x86-ops.h to fill svm_x86_ops Sean Christopherson
  21 siblings, 0 replies; 29+ messages in thread
From: Sean Christopherson @ 2022-01-28  0:52 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, Like Xu

Massage SVM's implementation names that still diverge from kvm_x86_ops to
allow for wiring up all SVM-defined functions via kvm-x86-ops.h.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/svm/sev.c |  4 ++--
 arch/x86/kvm/svm/svm.c | 40 ++++++++++++++++++++--------------------
 arch/x86/kvm/svm/svm.h |  6 +++---
 3 files changed, 25 insertions(+), 25 deletions(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 4662e5fd7559..f4d88292f337 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -2173,7 +2173,7 @@ void __init sev_hardware_setup(void)
 #endif
 }
 
-void sev_hardware_teardown(void)
+void sev_hardware_unsetup(void)
 {
 	if (!sev_enabled)
 		return;
@@ -2907,7 +2907,7 @@ void sev_es_vcpu_reset(struct vcpu_svm *svm)
 					    sev_enc_bit));
 }
 
-void sev_es_prepare_guest_switch(struct vmcb_save_area *hostsa)
+void sev_es_prepare_switch_to_guest(struct vmcb_save_area *hostsa)
 {
 	/*
 	 * As an SEV-ES guest, hardware will restore the host state on VMEXIT,
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index a075c6458a27..7f70f456a5a5 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -353,7 +353,7 @@ static void svm_set_interrupt_shadow(struct kvm_vcpu *vcpu, int mask)
 
 }
 
-static int skip_emulated_instruction(struct kvm_vcpu *vcpu)
+static int svm_skip_emulated_instruction(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
@@ -401,7 +401,7 @@ static void svm_queue_exception(struct kvm_vcpu *vcpu)
 		 * raises a fault that is not intercepted. Still better than
 		 * failing in all cases.
 		 */
-		(void)skip_emulated_instruction(vcpu);
+		(void)svm_skip_emulated_instruction(vcpu);
 		rip = kvm_rip_read(vcpu);
 		svm->int3_rip = rip + svm->vmcb->save.cs.base;
 		svm->int3_injected = rip - old_rip;
@@ -873,11 +873,11 @@ static void shrink_ple_window(struct kvm_vcpu *vcpu)
 	}
 }
 
-static void svm_hardware_teardown(void)
+static void svm_hardware_unsetup(void)
 {
 	int cpu;
 
-	sev_hardware_teardown();
+	sev_hardware_unsetup();
 
 	for_each_possible_cpu(cpu)
 		svm_cpu_uninit(cpu);
@@ -1175,7 +1175,7 @@ void svm_switch_vmcb(struct vcpu_svm *svm, struct kvm_vmcb_info *target_vmcb)
 	svm->vmcb = target_vmcb->ptr;
 }
 
-static int svm_create_vcpu(struct kvm_vcpu *vcpu)
+static int svm_vcpu_create(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm;
 	struct page *vmcb01_page;
@@ -1246,7 +1246,7 @@ static void svm_clear_current_vmcb(struct vmcb *vmcb)
 		cmpxchg(&per_cpu(svm_data, i)->current_vmcb, vmcb, NULL);
 }
 
-static void svm_free_vcpu(struct kvm_vcpu *vcpu)
+static void svm_vcpu_free(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
@@ -1265,7 +1265,7 @@ static void svm_free_vcpu(struct kvm_vcpu *vcpu)
 	__free_pages(virt_to_page(svm->msrpm), get_order(MSRPM_SIZE));
 }
 
-static void svm_prepare_guest_switch(struct kvm_vcpu *vcpu)
+static void svm_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 	struct svm_cpu_data *sd = per_cpu(svm_data, vcpu->cpu);
@@ -1285,7 +1285,7 @@ static void svm_prepare_guest_switch(struct kvm_vcpu *vcpu)
 		struct vmcb_save_area *hostsa;
 		hostsa = (struct vmcb_save_area *)(page_address(sd->save_area) + 0x400);
 
-		sev_es_prepare_guest_switch(hostsa);
+		sev_es_prepare_switch_to_guest(hostsa);
 	}
 
 	if (tsc_scaling) {
@@ -2272,7 +2272,7 @@ static int task_switch_interception(struct kvm_vcpu *vcpu)
 	    int_type == SVM_EXITINTINFO_TYPE_SOFT ||
 	    (int_type == SVM_EXITINTINFO_TYPE_EXEPT &&
 	     (int_vec == OF_VECTOR || int_vec == BP_VECTOR))) {
-		if (!skip_emulated_instruction(vcpu))
+		if (!svm_skip_emulated_instruction(vcpu))
 			return 0;
 	}
 
@@ -3192,7 +3192,7 @@ static void svm_get_exit_info(struct kvm_vcpu *vcpu, u32 *reason,
 		*error_code = 0;
 }
 
-static int handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
+static int svm_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 	struct kvm_run *kvm_run = vcpu->run;
@@ -3289,7 +3289,7 @@ static void svm_inject_nmi(struct kvm_vcpu *vcpu)
 	++vcpu->stat.nmi_injections;
 }
 
-static void svm_set_irq(struct kvm_vcpu *vcpu)
+static void svm_inject_irq(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
@@ -4199,7 +4199,7 @@ static int svm_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
 	 * by 0x400 (matches the offset of 'struct vmcb_save_area'
 	 * within 'struct vmcb'). Note: HSAVE area may also be used by
 	 * L1 hypervisor to save additional host context (e.g. KVM does
-	 * that, see svm_prepare_guest_switch()) which must be
+	 * that, see svm_prepare_switch_to_guest()) which must be
 	 * preserved.
 	 */
 	if (kvm_vcpu_map(vcpu, gpa_to_gfn(svm->nested.hsave_msr),
@@ -4467,21 +4467,21 @@ static int svm_vm_init(struct kvm *kvm)
 static struct kvm_x86_ops svm_x86_ops __initdata = {
 	.name = "kvm_amd",
 
-	.hardware_unsetup = svm_hardware_teardown,
+	.hardware_unsetup = svm_hardware_unsetup,
 	.hardware_enable = svm_hardware_enable,
 	.hardware_disable = svm_hardware_disable,
 	.cpu_has_accelerated_tpr = svm_cpu_has_accelerated_tpr,
 	.has_emulated_msr = svm_has_emulated_msr,
 
-	.vcpu_create = svm_create_vcpu,
-	.vcpu_free = svm_free_vcpu,
+	.vcpu_create = svm_vcpu_create,
+	.vcpu_free = svm_vcpu_free,
 	.vcpu_reset = svm_vcpu_reset,
 
 	.vm_size = sizeof(struct kvm_svm),
 	.vm_init = svm_vm_init,
 	.vm_destroy = svm_vm_destroy,
 
-	.prepare_switch_to_guest = svm_prepare_guest_switch,
+	.prepare_switch_to_guest = svm_prepare_switch_to_guest,
 	.vcpu_load = svm_vcpu_load,
 	.vcpu_put = svm_vcpu_put,
 	.vcpu_blocking = avic_vcpu_blocking,
@@ -4519,13 +4519,13 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 
 	.vcpu_pre_run = svm_vcpu_pre_run,
 	.vcpu_run = svm_vcpu_run,
-	.handle_exit = handle_exit,
-	.skip_emulated_instruction = skip_emulated_instruction,
+	.handle_exit = svm_handle_exit,
+	.skip_emulated_instruction = svm_skip_emulated_instruction,
 	.update_emulated_instruction = NULL,
 	.set_interrupt_shadow = svm_set_interrupt_shadow,
 	.get_interrupt_shadow = svm_get_interrupt_shadow,
 	.patch_hypercall = svm_patch_hypercall,
-	.inject_irq = svm_set_irq,
+	.inject_irq = svm_inject_irq,
 	.inject_nmi = svm_inject_nmi,
 	.queue_exception = svm_queue_exception,
 	.cancel_injection = svm_cancel_injection,
@@ -4830,7 +4830,7 @@ static __init int svm_hardware_setup(void)
 	return 0;
 
 err:
-	svm_hardware_teardown();
+	svm_hardware_unsetup();
 	return r;
 }
 
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 67c17509c4c0..852b12aee03d 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -321,7 +321,7 @@ static __always_inline struct vcpu_svm *to_svm(struct kvm_vcpu *vcpu)
 
 /*
  * Only the PDPTRs are loaded on demand into the shadow MMU.  All other
- * fields are synchronized in handle_exit, because accessing the VMCB is cheap.
+ * fields are synchronized on VM-Exit, because accessing the VMCB is cheap.
  *
  * CR3 might be out of date in the VMCB but it is not marked dirty; instead,
  * KVM_REQ_LOAD_MMU_PGD is always requested when the cached vcpu->arch.cr3
@@ -608,7 +608,7 @@ int sev_vm_move_enc_context_from(struct kvm *kvm, unsigned int source_fd);
 void pre_sev_run(struct vcpu_svm *svm, int cpu);
 void __init sev_set_cpu_caps(void);
 void __init sev_hardware_setup(void);
-void sev_hardware_teardown(void);
+void sev_hardware_unsetup(void);
 int sev_cpu_init(struct svm_cpu_data *sd);
 void sev_free_vcpu(struct kvm_vcpu *vcpu);
 int sev_handle_vmgexit(struct kvm_vcpu *vcpu);
@@ -616,7 +616,7 @@ int sev_es_string_io(struct vcpu_svm *svm, int size, unsigned int port, int in);
 void sev_es_init_vmcb(struct vcpu_svm *svm);
 void sev_es_vcpu_reset(struct vcpu_svm *svm);
 void sev_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector);
-void sev_es_prepare_guest_switch(struct vmcb_save_area *hostsa);
+void sev_es_prepare_switch_to_guest(struct vmcb_save_area *hostsa);
 void sev_es_unmap_ghcb(struct vcpu_svm *svm);
 
 /* vmenter.S */
-- 
2.35.0.rc0.227.g00780c9af4-goog


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH 22/22] KVM: SVM: Use kvm-x86-ops.h to fill svm_x86_ops
  2022-01-28  0:51 [PATCH 00/22] KVM: x86: Fill *_x86_ops via kvm-x86-ops.h Sean Christopherson
                   ` (20 preceding siblings ...)
  2022-01-28  0:52 ` [PATCH 21/22] KVM: SVM: Rename hook implementations to conform to kvm_x86_ops' names Sean Christopherson
@ 2022-01-28  0:52 ` Sean Christopherson
  21 siblings, 0 replies; 29+ messages in thread
From: Sean Christopherson @ 2022-01-28  0:52 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, Like Xu

Fill svm_x86_ops by including kvm-x86-ops.h and defining the appropriate
macros.  Document the handful of exceptions where svm_x86_ops deviates
from the "default" (mostly due to lack of hardware support for a related
feature).

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/svm/svm.c | 156 +++++++++--------------------------------
 1 file changed, 32 insertions(+), 124 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 7f70f456a5a5..b3761073fa81 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4464,138 +4464,46 @@ static int svm_vm_init(struct kvm *kvm)
 	return 0;
 }
 
+/*
+ * SVM unconditionally flushes the TLB on nested transitions and so doesn't provide
+ * a "flush all" variant, and a guest's ASID is tied to both guest and NPT
+ * translations, thus there's no "flush guest" variant.
+ */
+#define svm_flush_tlb_all svm_flush_tlb_current
+#define svm_flush_tlb_guest svm_flush_tlb_current
+
+/* APICv hooks not needed/implemented for AVIC. */
+#define avic_guest_apic_has_interrupt NULL
+#define avic_set_apic_access_page_addr NULL
+#define avic_sync_pir_to_irr NULL
+#define avic_pi_start_assignment NULL
+
+/* SVM has no hypervisor debug trap (VMX's Monitor Trap Flag). */
+#define svm_update_emulated_instruction NULL
+
+/* SVM has no CPU-assisted dirty logging (VMX's Page Modification Logging). */
+#define svm_update_cpu_dirty_logging NULL
+
+/* SVM has no hypervisor timer (VMX's preemption timer). */
+#define svm_set_hv_timer NULL
+#define svm_cancel_hv_timer NULL
+#define svm_migrate_timers NULL
+#define svm_request_immediate_exit __kvm_request_immediate_exit
+
 static struct kvm_x86_ops svm_x86_ops __initdata = {
 	.name = "kvm_amd",
-
-	.hardware_unsetup = svm_hardware_unsetup,
-	.hardware_enable = svm_hardware_enable,
-	.hardware_disable = svm_hardware_disable,
-	.cpu_has_accelerated_tpr = svm_cpu_has_accelerated_tpr,
-	.has_emulated_msr = svm_has_emulated_msr,
-
-	.vcpu_create = svm_vcpu_create,
-	.vcpu_free = svm_vcpu_free,
-	.vcpu_reset = svm_vcpu_reset,
-
 	.vm_size = sizeof(struct kvm_svm),
-	.vm_init = svm_vm_init,
-	.vm_destroy = svm_vm_destroy,
-
-	.prepare_switch_to_guest = svm_prepare_switch_to_guest,
-	.vcpu_load = svm_vcpu_load,
-	.vcpu_put = svm_vcpu_put,
-	.vcpu_blocking = avic_vcpu_blocking,
-	.vcpu_unblocking = avic_vcpu_unblocking,
-
-	.update_exception_bitmap = svm_update_exception_bitmap,
-	.get_msr_feature = svm_get_msr_feature,
-	.get_msr = svm_get_msr,
-	.set_msr = svm_set_msr,
-	.get_segment_base = svm_get_segment_base,
-	.get_segment = svm_get_segment,
-	.set_segment = svm_set_segment,
-	.get_cpl = svm_get_cpl,
-	.get_cs_db_l_bits = svm_get_cs_db_l_bits,
-	.set_cr0 = svm_set_cr0,
-	.post_set_cr3 = sev_post_set_cr3,
-	.is_valid_cr4 = svm_is_valid_cr4,
-	.set_cr4 = svm_set_cr4,
-	.set_efer = svm_set_efer,
-	.get_idt = svm_get_idt,
-	.set_idt = svm_set_idt,
-	.get_gdt = svm_get_gdt,
-	.set_gdt = svm_set_gdt,
-	.set_dr7 = svm_set_dr7,
-	.sync_dirty_debug_regs = svm_sync_dirty_debug_regs,
-	.cache_reg = svm_cache_reg,
-	.get_rflags = svm_get_rflags,
-	.set_rflags = svm_set_rflags,
-	.get_if_flag = svm_get_if_flag,
-
-	.flush_tlb_all = svm_flush_tlb_current,
-	.flush_tlb_current = svm_flush_tlb_current,
-	.flush_tlb_gva = svm_flush_tlb_gva,
-	.flush_tlb_guest = svm_flush_tlb_current,
-
-	.vcpu_pre_run = svm_vcpu_pre_run,
-	.vcpu_run = svm_vcpu_run,
-	.handle_exit = svm_handle_exit,
-	.skip_emulated_instruction = svm_skip_emulated_instruction,
-	.update_emulated_instruction = NULL,
-	.set_interrupt_shadow = svm_set_interrupt_shadow,
-	.get_interrupt_shadow = svm_get_interrupt_shadow,
-	.patch_hypercall = svm_patch_hypercall,
-	.inject_irq = svm_inject_irq,
-	.inject_nmi = svm_inject_nmi,
-	.queue_exception = svm_queue_exception,
-	.cancel_injection = svm_cancel_injection,
-	.interrupt_allowed = svm_interrupt_allowed,
-	.nmi_allowed = svm_nmi_allowed,
-	.get_nmi_mask = svm_get_nmi_mask,
-	.set_nmi_mask = svm_set_nmi_mask,
-	.enable_nmi_window = svm_enable_nmi_window,
-	.enable_irq_window = svm_enable_irq_window,
-	.update_cr8_intercept = svm_update_cr8_intercept,
-	.set_virtual_apic_mode = avic_set_virtual_apic_mode,
-	.refresh_apicv_exec_ctrl = avic_refresh_apicv_exec_ctrl,
-	.check_apicv_inhibit_reasons = avic_check_apicv_inhibit_reasons,
-	.load_eoi_exitmap = avic_load_eoi_exitmap,
-	.hwapic_irr_update = avic_hwapic_irr_update,
-	.hwapic_isr_update = avic_hwapic_isr_update,
-	.apicv_post_state_restore = avic_apicv_post_state_restore,
-
-	.set_tss_addr = svm_set_tss_addr,
-	.set_identity_map_addr = svm_set_identity_map_addr,
-	.get_mt_mask = svm_get_mt_mask,
-
-	.get_exit_info = svm_get_exit_info,
-
-	.vcpu_after_set_cpuid = svm_vcpu_after_set_cpuid,
-
-	.has_wbinvd_exit = svm_has_wbinvd_exit,
-
-	.get_l2_tsc_offset = svm_get_l2_tsc_offset,
-	.get_l2_tsc_multiplier = svm_get_l2_tsc_multiplier,
-	.write_tsc_offset = svm_write_tsc_offset,
-	.write_tsc_multiplier = svm_write_tsc_multiplier,
-
-	.load_mmu_pgd = svm_load_mmu_pgd,
-
-	.check_intercept = svm_check_intercept,
-	.handle_exit_irqoff = svm_handle_exit_irqoff,
-
-	.request_immediate_exit = __kvm_request_immediate_exit,
-
-	.sched_in = svm_sched_in,
 
 	.pmu_ops = &amd_pmu_ops,
 	.nested_ops = &svm_nested_ops,
 
-	.deliver_interrupt = svm_deliver_interrupt,
-	.dy_apicv_has_pending_interrupt = avic_dy_apicv_has_pending_interrupt,
-	.pi_update_irte = avic_pi_update_irte,
-	.setup_mce = svm_setup_mce,
+#define KVM_X86_OP(func) .func = svm_##func,
+#define KVM_X86_APICV_OP(func) .func = avic_##func,
+#define KVM_X86_CVM_OP(func) .func = sev_##func,
 
-	.smi_allowed = svm_smi_allowed,
-	.enter_smm = svm_enter_smm,
-	.leave_smm = svm_leave_smm,
-	.enable_smi_window = svm_enable_smi_window,
-
-	.mem_enc_ioctl = sev_mem_enc_ioctl,
-	.mem_enc_register_region = sev_mem_enc_register_region,
-	.mem_enc_unregister_region = sev_mem_enc_unregister_region,
-
-	.vm_copy_enc_context_from = sev_vm_copy_enc_context_from,
-	.vm_move_enc_context_from = sev_vm_move_enc_context_from,
-
-	.can_emulate_instruction = svm_can_emulate_instruction,
-
-	.apic_init_signal_blocked = svm_apic_init_signal_blocked,
-
-	.msr_filter_changed = svm_msr_filter_changed,
-	.complete_emulated_msr = svm_complete_emulated_msr,
-
-	.vcpu_deliver_sipi_vector = svm_vcpu_deliver_sipi_vector,
+/* Hyper-V hooks are filled at runtime. */
+#define KVM_X86_HYPERV_OP(func) .func = NULL,
+#include <asm/kvm-x86-ops.h>
 };
 
 /*
-- 
2.35.0.rc0.227.g00780c9af4-goog


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* Re: [PATCH 01/22] KVM: x86: Drop unnecessary and confusing KVM_X86_OP_NULL macro
  2022-01-28  0:51 ` [PATCH 01/22] KVM: x86: Drop unnecessary and confusing KVM_X86_OP_NULL macro Sean Christopherson
@ 2022-01-28 10:11   ` Paolo Bonzini
  2022-01-28 15:42     ` Sean Christopherson
  0 siblings, 1 reply; 29+ messages in thread
From: Paolo Bonzini @ 2022-01-28 10:11 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm,
	linux-kernel, Like Xu

On 1/28/22 01:51, Sean Christopherson wrote:
> Drop KVM_X86_OP_NULL, which is superfluous and confusing.  The macro is
> just a "pass-through" to KVM_X86_OP; it was added with the intent of
> actually using it in the future, but that obviously never happened.  The
> name is confusing because its intended use was to provide a way for
> vendor implementations to specify a NULL pointer, and even if it were
> used, wouldn't necessarily be synonymous with declaring a kvm_x86_op as
> DEFINE_STATIC_CALL_NULL.
> 
> Lastly, actually using KVM_X86_OP_NULL as intended isn't a maintainable
> approach, e.g. bleeds vendor details into common x86 code, and would
> either be prone to bit rot or would require modifying common x86 code
> when modifying a vendor implementation.

I have some patches that redefine KVM_X86_OP_NULL as "must be used with 
static_call_cond".  That's a more interesting definition, as it can be 
used to WARN if KVM_X86_OP is used with a NULL function pointer.
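
Something along these lines in kvm_ops_static_call_update(), as an
untested sketch (__KVM_X86_OP is a helper name made up for illustration):

#define __KVM_X86_OP(func) \
	static_call_update(kvm_x86_##func, kvm_x86_ops.func);
#define KVM_X86_OP(func) \
	WARN_ON(!kvm_x86_ops.func); __KVM_X86_OP(func)
#define KVM_X86_OP_NULL __KVM_X86_OP
#include <asm/kvm-x86-ops.h>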

Paolo

> No functional change intended.
> 
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
>   arch/x86/include/asm/kvm-x86-ops.h | 76 ++++++++++++++----------------
>   arch/x86/include/asm/kvm_host.h    |  2 -
>   arch/x86/kvm/x86.c                 |  1 -
>   3 files changed, 35 insertions(+), 44 deletions(-)
> 
> diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
> index 631d5040b31e..e07151b2d1f6 100644
> --- a/arch/x86/include/asm/kvm-x86-ops.h
> +++ b/arch/x86/include/asm/kvm-x86-ops.h
> @@ -1,25 +1,20 @@
>   /* SPDX-License-Identifier: GPL-2.0 */
> -#if !defined(KVM_X86_OP) || !defined(KVM_X86_OP_NULL)
> +#ifndef KVM_X86_OP
>   BUILD_BUG_ON(1)
>   #endif
>   
>   /*
> - * KVM_X86_OP() and KVM_X86_OP_NULL() are used to help generate
> - * "static_call()"s. They are also intended for use when defining
> - * the vmx/svm kvm_x86_ops. KVM_X86_OP() can be used for those
> - * functions that follow the [svm|vmx]_func_name convention.
> - * KVM_X86_OP_NULL() can leave a NULL definition for the
> - * case where there is no definition or a function name that
> - * doesn't match the typical naming convention is supplied.
> + * Invoke KVM_X86_OP() on all functions in struct kvm_x86_ops, e.g. to generate
> + * static_call declarations, definitions and updates.
>    */
> -KVM_X86_OP_NULL(hardware_enable)
> -KVM_X86_OP_NULL(hardware_disable)
> -KVM_X86_OP_NULL(hardware_unsetup)
> -KVM_X86_OP_NULL(cpu_has_accelerated_tpr)
> +KVM_X86_OP(hardware_enable)
> +KVM_X86_OP(hardware_disable)
> +KVM_X86_OP(hardware_unsetup)
> +KVM_X86_OP(cpu_has_accelerated_tpr)
>   KVM_X86_OP(has_emulated_msr)
>   KVM_X86_OP(vcpu_after_set_cpuid)
>   KVM_X86_OP(vm_init)
> -KVM_X86_OP_NULL(vm_destroy)
> +KVM_X86_OP(vm_destroy)
>   KVM_X86_OP(vcpu_create)
>   KVM_X86_OP(vcpu_free)
>   KVM_X86_OP(vcpu_reset)
> @@ -33,9 +28,9 @@ KVM_X86_OP(get_segment_base)
>   KVM_X86_OP(get_segment)
>   KVM_X86_OP(get_cpl)
>   KVM_X86_OP(set_segment)
> -KVM_X86_OP_NULL(get_cs_db_l_bits)
> +KVM_X86_OP(get_cs_db_l_bits)
>   KVM_X86_OP(set_cr0)
> -KVM_X86_OP_NULL(post_set_cr3)
> +KVM_X86_OP(post_set_cr3)
>   KVM_X86_OP(is_valid_cr4)
>   KVM_X86_OP(set_cr4)
>   KVM_X86_OP(set_efer)
> @@ -51,15 +46,15 @@ KVM_X86_OP(set_rflags)
>   KVM_X86_OP(get_if_flag)
>   KVM_X86_OP(tlb_flush_all)
>   KVM_X86_OP(tlb_flush_current)
> -KVM_X86_OP_NULL(tlb_remote_flush)
> -KVM_X86_OP_NULL(tlb_remote_flush_with_range)
> +KVM_X86_OP(tlb_remote_flush)
> +KVM_X86_OP(tlb_remote_flush_with_range)
>   KVM_X86_OP(tlb_flush_gva)
>   KVM_X86_OP(tlb_flush_guest)
>   KVM_X86_OP(vcpu_pre_run)
>   KVM_X86_OP(run)
> -KVM_X86_OP_NULL(handle_exit)
> -KVM_X86_OP_NULL(skip_emulated_instruction)
> -KVM_X86_OP_NULL(update_emulated_instruction)
> +KVM_X86_OP(handle_exit)
> +KVM_X86_OP(skip_emulated_instruction)
> +KVM_X86_OP(update_emulated_instruction)
>   KVM_X86_OP(set_interrupt_shadow)
>   KVM_X86_OP(get_interrupt_shadow)
>   KVM_X86_OP(patch_hypercall)
> @@ -78,17 +73,17 @@ KVM_X86_OP(check_apicv_inhibit_reasons)
>   KVM_X86_OP(refresh_apicv_exec_ctrl)
>   KVM_X86_OP(hwapic_irr_update)
>   KVM_X86_OP(hwapic_isr_update)
> -KVM_X86_OP_NULL(guest_apic_has_interrupt)
> +KVM_X86_OP(guest_apic_has_interrupt)
>   KVM_X86_OP(load_eoi_exitmap)
>   KVM_X86_OP(set_virtual_apic_mode)
> -KVM_X86_OP_NULL(set_apic_access_page_addr)
> +KVM_X86_OP(set_apic_access_page_addr)
>   KVM_X86_OP(deliver_posted_interrupt)
> -KVM_X86_OP_NULL(sync_pir_to_irr)
> +KVM_X86_OP(sync_pir_to_irr)
>   KVM_X86_OP(set_tss_addr)
>   KVM_X86_OP(set_identity_map_addr)
>   KVM_X86_OP(get_mt_mask)
>   KVM_X86_OP(load_mmu_pgd)
> -KVM_X86_OP_NULL(has_wbinvd_exit)
> +KVM_X86_OP(has_wbinvd_exit)
>   KVM_X86_OP(get_l2_tsc_offset)
>   KVM_X86_OP(get_l2_tsc_multiplier)
>   KVM_X86_OP(write_tsc_offset)
> @@ -96,32 +91,31 @@ KVM_X86_OP(write_tsc_multiplier)
>   KVM_X86_OP(get_exit_info)
>   KVM_X86_OP(check_intercept)
>   KVM_X86_OP(handle_exit_irqoff)
> -KVM_X86_OP_NULL(request_immediate_exit)
> +KVM_X86_OP(request_immediate_exit)
>   KVM_X86_OP(sched_in)
> -KVM_X86_OP_NULL(update_cpu_dirty_logging)
> -KVM_X86_OP_NULL(vcpu_blocking)
> -KVM_X86_OP_NULL(vcpu_unblocking)
> -KVM_X86_OP_NULL(update_pi_irte)
> -KVM_X86_OP_NULL(start_assignment)
> -KVM_X86_OP_NULL(apicv_post_state_restore)
> -KVM_X86_OP_NULL(dy_apicv_has_pending_interrupt)
> -KVM_X86_OP_NULL(set_hv_timer)
> -KVM_X86_OP_NULL(cancel_hv_timer)
> +KVM_X86_OP(update_cpu_dirty_logging)
> +KVM_X86_OP(vcpu_blocking)
> +KVM_X86_OP(vcpu_unblocking)
> +KVM_X86_OP(update_pi_irte)
> +KVM_X86_OP(start_assignment)
> +KVM_X86_OP(apicv_post_state_restore)
> +KVM_X86_OP(dy_apicv_has_pending_interrupt)
> +KVM_X86_OP(set_hv_timer)
> +KVM_X86_OP(cancel_hv_timer)
>   KVM_X86_OP(setup_mce)
>   KVM_X86_OP(smi_allowed)
>   KVM_X86_OP(enter_smm)
>   KVM_X86_OP(leave_smm)
>   KVM_X86_OP(enable_smi_window)
> -KVM_X86_OP_NULL(mem_enc_op)
> -KVM_X86_OP_NULL(mem_enc_reg_region)
> -KVM_X86_OP_NULL(mem_enc_unreg_region)
> +KVM_X86_OP(mem_enc_op)
> +KVM_X86_OP(mem_enc_reg_region)
> +KVM_X86_OP(mem_enc_unreg_region)
>   KVM_X86_OP(get_msr_feature)
>   KVM_X86_OP(can_emulate_instruction)
>   KVM_X86_OP(apic_init_signal_blocked)
> -KVM_X86_OP_NULL(enable_direct_tlbflush)
> -KVM_X86_OP_NULL(migrate_timers)
> +KVM_X86_OP(enable_direct_tlbflush)
> +KVM_X86_OP(migrate_timers)
>   KVM_X86_OP(msr_filter_changed)
> -KVM_X86_OP_NULL(complete_emulated_msr)
> +KVM_X86_OP(complete_emulated_msr)
>   
>   #undef KVM_X86_OP
> -#undef KVM_X86_OP_NULL
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index b2c3721b1c98..756806d2e801 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1538,14 +1538,12 @@ extern struct kvm_x86_ops kvm_x86_ops;
>   
>   #define KVM_X86_OP(func) \
>   	DECLARE_STATIC_CALL(kvm_x86_##func, *(((struct kvm_x86_ops *)0)->func));
> -#define KVM_X86_OP_NULL KVM_X86_OP
>   #include <asm/kvm-x86-ops.h>
>   
>   static inline void kvm_ops_static_call_update(void)
>   {
>   #define KVM_X86_OP(func) \
>   	static_call_update(kvm_x86_##func, kvm_x86_ops.func);
> -#define KVM_X86_OP_NULL KVM_X86_OP
>   #include <asm/kvm-x86-ops.h>
>   }
>   
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 8033eca6f3a1..ebab514ec82a 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -129,7 +129,6 @@ EXPORT_SYMBOL_GPL(kvm_x86_ops);
>   #define KVM_X86_OP(func)					     \
>   	DEFINE_STATIC_CALL_NULL(kvm_x86_##func,			     \
>   				*(((struct kvm_x86_ops *)0)->func));
> -#define KVM_X86_OP_NULL KVM_X86_OP
>   #include <asm/kvm-x86-ops.h>
>   EXPORT_STATIC_CALL_GPL(kvm_x86_get_cs_db_l_bits);
>   EXPORT_STATIC_CALL_GPL(kvm_x86_cache_reg);


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 01/22] KVM: x86: Drop unnecessary and confusing KVM_X86_OP_NULL macro
  2022-01-28 10:11   ` Paolo Bonzini
@ 2022-01-28 15:42     ` Sean Christopherson
  2022-01-31 14:56       ` Paolo Bonzini
  0 siblings, 1 reply; 29+ messages in thread
From: Sean Christopherson @ 2022-01-28 15:42 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm,
	linux-kernel, Like Xu

On Fri, Jan 28, 2022, Paolo Bonzini wrote:
> On 1/28/22 01:51, Sean Christopherson wrote:
> > Drop KVM_X86_OP_NULL, which is superfluous and confusing.  The macro is
> > just a "pass-through" to KVM_X86_OP; it was added with the intent of
> > actually using it in the future, but that obviously never happened.  The
> > name is confusing because its intended use was to provide a way for
> > vendor implementations to specify a NULL pointer, and even if it were
> > used, wouldn't necessarily be synonymous with declaring a kvm_x86_op as
> > DEFINE_STATIC_CALL_NULL.
> > 
> > Lastly, actually using KVM_X86_OP_NULL as intended isn't a maintainable
> > approach, e.g. bleeds vendor details into common x86 code, and would
> > either be prone to bit rot or would require modifying common x86 code
> > when modifying a vendor implementation.
> 
> I have some patches that redefine KVM_X86_OP_NULL as "must be used with
> static_call_cond".  That's a more interesting definition, as it can be used
> to WARN if KVM_X86_OP is used with a NULL function pointer.

I'm skeptical that will actually work well and be maintainable.  E.g. sync_pir_to_irr()
must be explicitly checked for NULL in apic_has_interrupt_for_ppr(), forcing that path
to do static_call_cond() will be odd.  Ditto for ops that are wired up to ioctl()s,
e.g. the confidential VM stuff, and for ops that are guarded by other stuff, e.g. the
hypervisor timer.

Actually, it won't just be odd, it will be impossible to disallow a NULL pointer
for KVM_X86_OP and require static_call_cond() for KVM_X86_OP_NULL.  static_call_cond()
forces the return to "void", so any path that returns a value needs to be manually
guarded and can't use static_call_cond(), e.g.

arch/x86/kvm/x86.c: In function ‘kvm_arch_vm_ioctl’:
arch/x86/kvm/x86.c:6450:19: error: void value not ignored as it ought to be
 6450 |                 r = static_call_cond(kvm_x86_mem_enc_ioctl)(kvm, argp);
      |                   ^
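
I.e. a value-returning hook would still need a manual NULL check plus a
plain static_call(), e.g. something like (hypothetical):

	if (kvm_x86_ops.mem_enc_ioctl)
		r = static_call(kvm_x86_mem_enc_ioctl)(kvm, argp);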

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 01/22] KVM: x86: Drop unnecessary and confusing KVM_X86_OP_NULL macro
  2022-01-28 15:42     ` Sean Christopherson
@ 2022-01-31 14:56       ` Paolo Bonzini
  2022-01-31 15:19         ` Maxim Levitsky
  2022-01-31 16:48         ` Sean Christopherson
  0 siblings, 2 replies; 29+ messages in thread
From: Paolo Bonzini @ 2022-01-31 14:56 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm,
	linux-kernel, Like Xu

On 1/28/22 16:42, Sean Christopherson wrote:
> On Fri, Jan 28, 2022, Paolo Bonzini wrote:
>> On 1/28/22 01:51, Sean Christopherson wrote:
>>> Drop KVM_X86_OP_NULL, which is superfluous and confusing.  The macro is
>>> just a "pass-through" to KVM_X86_OP; it was added with the intent of
>>> actually using it in the future, but that obviously never happened.  The
>>> name is confusing because its intended use was to provide a way for
>>> vendor implementations to specify a NULL pointer, and even if it were
>>> used, wouldn't necessarily be synonymous with declaring a kvm_x86_op as
>>> DEFINE_STATIC_CALL_NULL.
>>>
>>> Lastly, actually using KVM_X86_OP_NULL as intended isn't a maintainable
>>> approach, e.g. bleeds vendor details into common x86 code, and would
>>> either be prone to bit rot or would require modifying common x86 code
>>> when modifying a vendor implementation.
>>
>> I have some patches that redefine KVM_X86_OP_NULL as "must be used with
>> static_call_cond".  That's a more interesting definition, as it can be used
>> to WARN if KVM_X86_OP is used with a NULL function pointer.
> 
> I'm skeptical that will actually work well and be maintainable.  E.g. sync_pir_to_irr()
> must be explicitly checked for NULL in apic_has_interrupt_for_ppr(), forcing that path
> to do static_call_cond() will be odd.  Ditto for ops that are wired up to ioctl()s,
> e.g. the confidential VM stuff, and for ops that are guarded by other stuff, e.g. the
> hypervisor timer.
> 
> Actually, it won't just be odd, it will be impossible to disallow a NULL pointer
> for KVM_X86_OP and require static_call_cond() for KVM_X86_OP_NULL.  static_call_cond()
> forces the return to "void", so any path that returns a value needs to be manually
> guarded and can't use static_call_cond(), e.g.

You're right and I should have looked up the series instead of going by 
memory.  What I did was mostly WARNing on KVM_X86_OP that sets NULL, as 
non-NULL ops are the common case.  I also added KVM_X86_OP_RET0 to 
remove some checks on kvm_x86_ops for ops that return a value.
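
Roughly, as a sketch of the idea (not the actual patch):

#define KVM_X86_OP_RET0(func) \
	static_call_update(kvm_x86_##func, (void *)kvm_x86_ops.func ? : \
			   (void *)__static_call_return0);

so a vendor implementation that is left NULL transparently becomes
"return 0".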

All in all I totally agree with patches 2-11 and will apply them (patch 
2 to 5.17 even, as a prerequisite to fix the AVIC race).  Several of 
patches 13-21 are also mostly useful as they clarify the code, and the 
others I guess are okay in the context of a coherent series though 
probably they would have been rejected as one-offs.  However, patches 12 
and 22 are unnecessary uses of the C preprocessor in my opinion.

Paolo

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 01/22] KVM: x86: Drop unnecessary and confusing KVM_X86_OP_NULL macro
  2022-01-31 14:56       ` Paolo Bonzini
@ 2022-01-31 15:19         ` Maxim Levitsky
  2022-01-31 16:48         ` Sean Christopherson
  1 sibling, 0 replies; 29+ messages in thread
From: Maxim Levitsky @ 2022-01-31 15:19 UTC (permalink / raw)
  To: Paolo Bonzini, Sean Christopherson
  Cc: Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm,
	linux-kernel, Like Xu

On Mon, 2022-01-31 at 15:56 +0100, Paolo Bonzini wrote:
> On 1/28/22 16:42, Sean Christopherson wrote:
> > On Fri, Jan 28, 2022, Paolo Bonzini wrote:
> > > On 1/28/22 01:51, Sean Christopherson wrote:
> > > > Drop KVM_X86_OP_NULL, which is superfluous and confusing.  The macro is
> > > > just a "pass-through" to KVM_X86_OP; it was added with the intent of
> > > > actually using it in the future, but that obviously never happened.  The
> > > > name is confusing because its intended use was to provide a way for
> > > > vendor implementations to specify a NULL pointer, and even if it were
> > > > used, wouldn't necessarily be synonymous with declaring a kvm_x86_op as
> > > > DEFINE_STATIC_CALL_NULL.
> > > > 
> > > > Lastly, actually using KVM_X86_OP_NULL as intended isn't a maintainable
> > > > approach, e.g. bleeds vendor details into common x86 code, and would
> > > > either be prone to bit rot or would require modifying common x86 code
> > > > when modifying a vendor implementation.
> > > 
> > > I have some patches that redefine KVM_X86_OP_NULL as "must be used with
> > > static_call_cond".  That's a more interesting definition, as it can be used
> > > to WARN if KVM_X86_OP is used with a NULL function pointer.
> > 
> > I'm skeptical that will actually work well and be maintainable.  E.g. sync_pir_to_irr()
> > must be explicitly checked for NULL in apic_has_interrupt_for_ppr(), forcing that path
> > to do static_call_cond() will be odd.  Ditto for ops that are wired up to ioctl()s,
> > e.g. the confidential VM stuff, and for ops that are guarded by other stuff, e.g. the
> > hypervisor timer.
> > 
> > Actually, it won't just be odd, it will be impossible to disallow a NULL pointer
> > for KVM_X86_OP and require static_call_cond() for KVM_X86_OP_NULL.  static_call_cond()
> > forces the return to "void", so any path that returns a value needs to be manually
> > guarded and can't use static_call_cond(), e.g.
> 
> You're right and I should have looked up the series instead of going by 
> memory.  What I did was mostly WARNing on KVM_X86_OP that sets NULL, as 
> non-NULL ops are the common case.  I also added KVM_X86_OP_RET0 to 
> remove some checks on kvm_x86_ops for ops that return a value.
> 
> All in all I totally agree with patches 2-11 and will apply them (patch 
> 2 to 5.17 even, as a prerequisite to fix the AVIC race).  Several of 
> patches 13-21 are also mostly useful as they clarify the code, and the 
> others I guess are okay in the context of a coherent series though 
> probably they would have been rejected as one-offs.  However, patches 12 
> and 22 are unnecessary uses of the C preprocessor in my opinion.
> 

I will send my patches very very soon - I'll rebase on top of this,
and review this patch series soon as well.

Best regards,
	Maxim Levitsky

> Paolo
> 



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 09/22] KVM: x86: Uninline and export hv_track_root_tdp()
  2022-01-28  0:51 ` [PATCH 09/22] KVM: x86: Uninline and export hv_track_root_tdp() Sean Christopherson
@ 2022-01-31 16:19   ` Vitaly Kuznetsov
  0 siblings, 0 replies; 29+ messages in thread
From: Vitaly Kuznetsov @ 2022-01-31 16:19 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm,
	linux-kernel, Like Xu

Sean Christopherson <seanjc@google.com> writes:

> Uninline and export Hyper-V's hv_track_root_tdp(), which is (somewhat
> indirectly) the last remaining reference to kvm_x86_ops from vendor
> modules, i.e. will allow unexporting kvm_x86_ops.  Reloading the TDP PGD
> isn't the fastest of paths, hv_track_root_tdp() isn't exactly tiny, and
> disallowing vendor code from accessing kvm_x86_ops provides nice-to-have
> encapsulation of common x86 code (and of Hyper-V code for that
> matter).

We can add a static branch for the "kvm_x86_ops.tlb_remote_flush ==
hv_remote_flush_tlb" condition and check it in vendor modules prior to
calling into hv_track_root_tdp(), but I seriously doubt it'll bring us a
noticeable performance gain.
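
Something like the below, purely as a sketch (the key name is invented):

DEFINE_STATIC_KEY_FALSE(kvm_hv_remote_flush_tlb);

with the vendor code doing:

	if (static_branch_unlikely(&kvm_hv_remote_flush_tlb))
		hv_track_root_tdp(vcpu, root_tdp);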

>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
>  arch/x86/kvm/kvm_onhyperv.c | 14 ++++++++++++++
>  arch/x86/kvm/kvm_onhyperv.h | 14 +-------------
>  2 files changed, 15 insertions(+), 13 deletions(-)
>
> diff --git a/arch/x86/kvm/kvm_onhyperv.c b/arch/x86/kvm/kvm_onhyperv.c
> index b469f45e3fe4..ee4f696a0782 100644
> --- a/arch/x86/kvm/kvm_onhyperv.c
> +++ b/arch/x86/kvm/kvm_onhyperv.c
> @@ -92,3 +92,17 @@ int hv_remote_flush_tlb(struct kvm *kvm)
>  	return hv_remote_flush_tlb_with_range(kvm, NULL);
>  }
>  EXPORT_SYMBOL_GPL(hv_remote_flush_tlb);
> +
> +void hv_track_root_tdp(struct kvm_vcpu *vcpu, hpa_t root_tdp)
> +{
> +	struct kvm_arch *kvm_arch = &vcpu->kvm->arch;
> +
> +	if (kvm_x86_ops.tlb_remote_flush == hv_remote_flush_tlb) {
> +		spin_lock(&kvm_arch->hv_root_tdp_lock);
> +		vcpu->arch.hv_root_tdp = root_tdp;
> +		if (root_tdp != kvm_arch->hv_root_tdp)
> +			kvm_arch->hv_root_tdp = INVALID_PAGE;
> +		spin_unlock(&kvm_arch->hv_root_tdp_lock);
> +	}
> +}
> +EXPORT_SYMBOL_GPL(hv_track_root_tdp);
> diff --git a/arch/x86/kvm/kvm_onhyperv.h b/arch/x86/kvm/kvm_onhyperv.h
> index 1c67abf2eba9..287e98ef9df3 100644
> --- a/arch/x86/kvm/kvm_onhyperv.h
> +++ b/arch/x86/kvm/kvm_onhyperv.h
> @@ -10,19 +10,7 @@
>  int hv_remote_flush_tlb_with_range(struct kvm *kvm,
>  		struct kvm_tlb_range *range);
>  int hv_remote_flush_tlb(struct kvm *kvm);
> -
> -static inline void hv_track_root_tdp(struct kvm_vcpu *vcpu, hpa_t root_tdp)
> -{
> -	struct kvm_arch *kvm_arch = &vcpu->kvm->arch;
> -
> -	if (kvm_x86_ops.tlb_remote_flush == hv_remote_flush_tlb) {
> -		spin_lock(&kvm_arch->hv_root_tdp_lock);
> -		vcpu->arch.hv_root_tdp = root_tdp;
> -		if (root_tdp != kvm_arch->hv_root_tdp)
> -			kvm_arch->hv_root_tdp = INVALID_PAGE;
> -		spin_unlock(&kvm_arch->hv_root_tdp_lock);
> -	}
> -}
> +void hv_track_root_tdp(struct kvm_vcpu *vcpu, hpa_t root_tdp);
>  #else /* !CONFIG_HYPERV */
>  static inline void hv_track_root_tdp(struct kvm_vcpu *vcpu, hpa_t root_tdp)
>  {

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 01/22] KVM: x86: Drop unnecessary and confusing KVM_X86_OP_NULL macro
  2022-01-31 14:56       ` Paolo Bonzini
  2022-01-31 15:19         ` Maxim Levitsky
@ 2022-01-31 16:48         ` Sean Christopherson
  1 sibling, 0 replies; 29+ messages in thread
From: Sean Christopherson @ 2022-01-31 16:48 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm,
	linux-kernel, Like Xu

On Mon, Jan 31, 2022, Paolo Bonzini wrote:
> On 1/28/22 16:42, Sean Christopherson wrote:
> All in all I totally agree with patches 2-11 and will apply them (patch 2 to
> 5.17 even, as a prerequisite to fix the AVIC race).  Several of patches
> 13-21 are also mostly useful as they clarify the code, and the others I
> guess are okay in the context of a coherent series though probably they
> would have been rejected as one-offs.

Yeah, the SEV changes in particular are a bit forced.  The only one I care deeply
about is mem_enc_op() => mem_enc_ioctl().  If the macro shenanigans are rejected,
I'd say drop patches 20 and 21, drop most of 19, and maybe give 18 (svm=>avic) the
boot as well.  I'd prefer to keep patch 17 (TLB tweak) to clarify the scope of
SVM's TLB flush.  Many of the changelogs would need to be tweaked as well, i.e. a
v2 is in order.

> However, patches 12 and 22 are unnecessary uses of the C preprocessor in my
> opinion.  

And 14 :-)

I don't have a super strong opinion.  I mostly worked on this because the idea
had been discussed multiple times in the past.  And because I wanted an excuse to
rename vmx_free_vcpu => vmx_vcpu_free, which for some reason I can never find :-)

I was/am concerned that the macro approach will make it more difficult to find a
vendor's implementation, though forcing a conforming name will mitigate that to
some degree.

The pros, in order of importance (IMO)

  1. Mostly forces vendor implementation name to match hook name
  2. Forces new hooks to get an entry in kvm-x86-ops.h
  3. Provides a bit of documentation for specialized hooks (APICv, etc...)
  4. Forces vendors to explicitly define something for non-conforming hooks

^ permalink raw reply	[flat|nested] 29+ messages in thread

end of thread, other threads:[~2022-01-31 16:49 UTC | newest]

Thread overview: 29+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-01-28  0:51 [PATCH 00/22] KVM: x86: Fill *_x86_ops via kvm-x86-ops.h Sean Christopherson
2022-01-28  0:51 ` [PATCH 01/22] KVM: x86: Drop unnecessary and confusing KVM_X86_OP_NULL macro Sean Christopherson
2022-01-28 10:11   ` Paolo Bonzini
2022-01-28 15:42     ` Sean Christopherson
2022-01-31 14:56       ` Paolo Bonzini
2022-01-31 15:19         ` Maxim Levitsky
2022-01-31 16:48         ` Sean Christopherson
2022-01-28  0:51 ` [PATCH 02/22] KVM: x86: Move delivery of non-APICv interrupt into vendor code Sean Christopherson
2022-01-28  0:51 ` [PATCH 03/22] KVM: x86: Drop export for .tlb_flush_current() static_call key Sean Christopherson
2022-01-28  0:51 ` [PATCH 04/22] KVM: x86: Rename kvm_x86_ops pointers to align w/ preferred vendor names Sean Christopherson
2022-01-28  0:51 ` [PATCH 05/22] KVM: x86: Use static_call() for .vcpu_deliver_sipi_vector() Sean Christopherson
2022-01-28  0:51 ` [PATCH 06/22] KVM: VMX: Call vmx_get_cpl() directly in handle_dr() Sean Christopherson
2022-01-28  0:51 ` [PATCH 07/22] KVM: xen: Use static_call() for invoking kvm_x86_ops hooks Sean Christopherson
2022-01-28  0:51 ` [PATCH 08/22] KVM: nVMX: Refactor PMU refresh to avoid referencing kvm_x86_ops.pmu_ops Sean Christopherson
2022-01-28  0:51 ` [PATCH 09/22] KVM: x86: Uninline and export hv_track_root_tdp() Sean Christopherson
2022-01-31 16:19   ` Vitaly Kuznetsov
2022-01-28  0:51 ` [PATCH 10/22] KVM: x86: Unexport kvm_x86_ops Sean Christopherson
2022-01-28  0:51 ` [PATCH 11/22] KVM: x86: Use static_call() for copy/move encryption context ioctls() Sean Christopherson
2022-01-28  0:51 ` [PATCH 12/22] KVM: x86: Allow different macros for APICv, CVM, and Hyper-V kvm_x86_ops Sean Christopherson
2022-01-28  0:51 ` [PATCH 13/22] KVM: VMX: Rename VMX functions to conform to kvm_x86_ops names Sean Christopherson
2022-01-28  0:52 ` [PATCH 14/22] KVM: VMX: Use kvm-x86-ops.h to fill vmx_x86_ops Sean Christopherson
2022-01-28  0:52 ` [PATCH 15/22] KVM: x86: Move get_cs_db_l_bits() helper to SVM Sean Christopherson
2022-01-28  0:52 ` [PATCH 16/22] KVM: SVM: Rename svm_flush_tlb() to svm_flush_tlb_current() Sean Christopherson
2022-01-28  0:52 ` [PATCH 17/22] KVM: SVM: Remove unused MAX_INST_SIZE #define Sean Christopherson
2022-01-28  0:52 ` [PATCH 18/22] KVM: SVM: Rename AVIC helpers to use "avic" prefix instead of "svm" Sean Christopherson
2022-01-28  0:52 ` [PATCH 19/22] KVM: x86: Use more verbose names for mem encrypt kvm_x86_ops hooks Sean Christopherson
2022-01-28  0:52 ` [PATCH 20/22] KVM: SVM: Rename SEV implemenations to conform to " Sean Christopherson
2022-01-28  0:52 ` [PATCH 21/22] KVM: SVM: Rename hook implementations to conform to kvm_x86_ops' names Sean Christopherson
2022-01-28  0:52 ` [PATCH 22/22] KVM: SVM: Use kvm-x86-ops.h to fill svm_x86_ops Sean Christopherson

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).