* [PATCH 00/13] KVM monolithic v3
@ 2019-11-04 22:59 Andrea Arcangeli
  2019-11-04 22:59 ` [PATCH 01/13] KVM: monolithic: x86: remove kvm.ko Andrea Arcangeli
                   ` (12 more replies)
  0 siblings, 13 replies; 34+ messages in thread
From: Andrea Arcangeli @ 2019-11-04 22:59 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: Paolo Bonzini, Vitaly Kuznetsov, Sean Christopherson

Hello,

Here is the KVM monolithic series rebased on 5.4-rc6, with the Kbuild
reworked as suggested in the v2 review.

This should also have fixed the non-x86 KVM builds (not fully
verified, so it may not have full coverage yet, but all issues
reported by kbot so far should be addressed).

The effect of the patchset is visible at the end of the PDF below. I
dropped CPUID, so the results are an exact match only for the last
two benchmark slides, which are the only ones that matter in real
life.

https://people.redhat.com/~aarcange/slides/2019-KVM-monolithic.pdf

https://git.kernel.org/pub/scm/linux/kernel/git/andrea/aa.git/log/?h=kvm-mono3

Andrea Arcangeli (13):
  KVM: monolithic: x86: remove kvm.ko
  KVM: monolithic: x86: convert the kvm_x86_ops and kvm_pmu_ops methods
    to external functions
  kvm: monolithic: fixup x86-32 build
  KVM: monolithic: x86: handle the request_immediate_exit variation
  KVM: monolithic: add more section prefixes
  KVM: monolithic: x86: remove __exit section prefix from
    machine_unsetup
  KVM: monolithic: x86: remove __init section prefix from
    kvm_x86_cpu_has_kvm_support
  KVM: monolithic: remove exports
  KVM: monolithic: x86: drop the kvm_pmu_ops structure
  KVM: x86: optimize more exit handlers in vmx.c
  KVM: retpolines: x86: eliminate retpoline from vmx.c exit handlers
  KVM: retpolines: x86: eliminate retpoline from svm.c exit handlers
  x86: retpolines: eliminate retpoline from msr event handlers

 arch/powerpc/kvm/book3s.c       |   2 +-
 arch/x86/events/intel/core.c    |  11 +
 arch/x86/include/asm/kvm_host.h | 210 ++++++++-
 arch/x86/kvm/Kconfig            |  30 +-
 arch/x86/kvm/Makefile           |   5 +-
 arch/x86/kvm/cpuid.c            |  27 +-
 arch/x86/kvm/hyperv.c           |   8 +-
 arch/x86/kvm/irq.c              |   4 -
 arch/x86/kvm/irq_comm.c         |   2 -
 arch/x86/kvm/kvm_cache_regs.h   |  10 +-
 arch/x86/kvm/lapic.c            |  46 +-
 arch/x86/kvm/mmu.c              |  50 +-
 arch/x86/kvm/mmu.h              |   4 +-
 arch/x86/kvm/mtrr.c             |   2 -
 arch/x86/kvm/pmu.c              |  27 +-
 arch/x86/kvm/pmu.h              |  37 +-
 arch/x86/kvm/pmu_amd.c          |  43 +-
 arch/x86/kvm/svm.c              | 683 ++++++++++++++++-----------
 arch/x86/kvm/trace.h            |   4 +-
 arch/x86/kvm/vmx/nested.c       |  84 ++--
 arch/x86/kvm/vmx/pmu_intel.c    |  46 +-
 arch/x86/kvm/vmx/vmx.c          | 807 ++++++++++++++++++--------------
 arch/x86/kvm/vmx/vmx.h          |  39 +-
 arch/x86/kvm/x86.c              | 418 ++++++-----------
 arch/x86/kvm/x86.h              |   2 +-
 include/linux/kvm_host.h        |   8 +-
 virt/kvm/arm/arm.c              |   2 +-
 virt/kvm/eventfd.c              |   1 -
 virt/kvm/kvm_main.c             |  70 +--
 29 files changed, 1440 insertions(+), 1242 deletions(-)



* [PATCH 01/13] KVM: monolithic: x86: remove kvm.ko
  2019-11-04 22:59 [PATCH 00/13] KVM monolithic v3 Andrea Arcangeli
@ 2019-11-04 22:59 ` Andrea Arcangeli
  2019-11-04 22:59 ` [PATCH 02/13] KVM: monolithic: x86: convert the kvm_x86_ops and kvm_pmu_ops methods to external functions Andrea Arcangeli
                   ` (11 subsequent siblings)
  12 siblings, 0 replies; 34+ messages in thread
From: Andrea Arcangeli @ 2019-11-04 22:59 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: Paolo Bonzini, Vitaly Kuznetsov, Sean Christopherson

This is the first commit of a patch series that aims to replace the
modular kvm.ko kernel module with a monolithic kvm-intel/kvm-amd
model. The only downside of this change is that it wastes some disk
space in /lib/modules/. The upside is that it saves CPU cycles and a
small amount of iTLB and RAM, which are scarcer resources than disk
space.

The pointer-to-function "virtual template" model cannot provide any
runtime benefit, because kvm-intel and kvm-amd can never be loaded at
the same time anyway.

This removes kvm.ko and instead links (and thus duplicates) all
former kvm.ko objects into both kvm-amd and kvm-intel.

Linking both vmx and svm into the kernel at the same time isn't
possible anymore, because the kvm_x86_*/kvm_x86_pmu_* external
function names would collide.
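
To illustrate why: with the monolithic naming scheme each vendor
module ends up providing its own definition of the same external
symbol (a minimal sketch, the function name is taken from the later
patches in this series, the bodies are just placeholders):

	/* vmx.c (kvm-intel) */
	int kvm_x86_get_cpl(struct kvm_vcpu *vcpu)
	{
		return vmx_get_cpl(vcpu);	/* placeholder body */
	}

	/* svm.c (kvm-amd) */
	int kvm_x86_get_cpl(struct kvm_vcpu *vcpu)
	{
		return svm_get_cpl(vcpu);	/* placeholder body */
	}

Linking both objects into the same vmlinux would fail at link time
with a multiple definition error for every such symbol, hence the
Kconfig choice below that allows only one vendor to be built in.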

Explanation of Kbuild from Paolo Bonzini follows:

===
The left side of the "||" ensures that, if KVM=m, you can only choose
module build for both KVM_INTEL and KVM_AMD.  Having just "depends on
KVM" would allow a pre-existing .config to choose the now-invalid
combination

        CONFIG_KVM=y
        CONFIG_KVM_INTEL=y
        CONFIG_KVM_AMD=y

The right side of the "||" is just for documentation, to avoid the
case where a selected symbol does not satisfy its dependencies.
===
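
To make the resulting combinations explicit (an illustrative summary
of the Kconfig hunk below, not part of the original explanation):

	CONFIG_KVM=m  -> KVM_INTEL/KVM_AMD are prompted, can only be =m,
	                 and both may be enabled at once (left side of
	                 the "||": "KVM=m && m")
	CONFIG_KVM=y  -> the "KVM built-in support" choice picks exactly
	                 one of KVM_INTEL_STATIC / KVM_AMD_STATIC, which
	                 in turn selects KVM_INTEL=y or KVM_AMD=y (right
	                 side of the "||")
	CONFIG_KVM=y with KVM_INTEL=y and KVM_AMD=y -> no longer possible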

Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
---
 arch/x86/kvm/Kconfig  | 30 ++++++++++++++++++++++++++----
 arch/x86/kvm/Makefile |  5 ++---
 2 files changed, 28 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
index 840e12583b85..0d6e8809e359 100644
--- a/arch/x86/kvm/Kconfig
+++ b/arch/x86/kvm/Kconfig
@@ -59,9 +59,30 @@ config KVM
 
 	  If unsure, say N.
 
+if KVM=y
+
+choice
+	prompt "KVM built-in support"
+	help
+	  In order to build a kernel with support for both AMD and Intel
+	  CPUs, you need to set CONFIG_KVM=m instead.
+
+config KVM_AMD_STATIC
+	select KVM_AMD
+	bool "AMD"
+
+config KVM_INTEL_STATIC
+	select KVM_INTEL
+	bool "Intel"
+
+endchoice
+
+endif
+
 config KVM_INTEL
-	tristate "KVM for Intel processors support"
-	depends on KVM
+	tristate
+	prompt "KVM for Intel processors support" if KVM=m
+	depends on (KVM=m && m) || KVM_INTEL_STATIC
 	# for perf_guest_get_msrs():
 	depends on CPU_SUP_INTEL
 	---help---
@@ -72,8 +93,9 @@ config KVM_INTEL
 	  will be called kvm-intel.
 
 config KVM_AMD
-	tristate "KVM for AMD processors support"
-	depends on KVM
+	tristate
+	prompt "KVM for AMD processors support" if KVM=m
+	depends on (KVM=m && m) || KVM_AMD_STATIC
 	---help---
 	  Provides support for KVM on AMD processors equipped with the AMD-V
 	  (SVM) extensions.
diff --git a/arch/x86/kvm/Makefile b/arch/x86/kvm/Makefile
index 31ecf7a76d5a..68b81f381369 100644
--- a/arch/x86/kvm/Makefile
+++ b/arch/x86/kvm/Makefile
@@ -12,9 +12,8 @@ kvm-y			+= x86.o mmu.o emulate.o i8259.o irq.o lapic.o \
 			   i8254.o ioapic.o irq_comm.o cpuid.o pmu.o mtrr.o \
 			   hyperv.o page_track.o debugfs.o
 
-kvm-intel-y		+= vmx/vmx.o vmx/vmenter.o vmx/pmu_intel.o vmx/vmcs12.o vmx/evmcs.o vmx/nested.o
-kvm-amd-y		+= svm.o pmu_amd.o
+kvm-intel-y		+= vmx/vmx.o vmx/vmenter.o vmx/pmu_intel.o vmx/vmcs12.o vmx/evmcs.o vmx/nested.o $(kvm-y)
+kvm-amd-y		+= svm.o pmu_amd.o $(kvm-y)
 
-obj-$(CONFIG_KVM)	+= kvm.o
 obj-$(CONFIG_KVM_INTEL)	+= kvm-intel.o
 obj-$(CONFIG_KVM_AMD)	+= kvm-amd.o



* [PATCH 02/13] KVM: monolithic: x86: convert the kvm_x86_ops and kvm_pmu_ops methods to external functions
  2019-11-04 22:59 [PATCH 00/13] KVM monolithic v3 Andrea Arcangeli
  2019-11-04 22:59 ` [PATCH 01/13] KVM: monolithic: x86: remove kvm.ko Andrea Arcangeli
@ 2019-11-04 22:59 ` Andrea Arcangeli
  2019-11-04 22:59 ` [PATCH 03/13] kvm: monolithic: fixup x86-32 build Andrea Arcangeli
                   ` (10 subsequent siblings)
  12 siblings, 0 replies; 34+ messages in thread
From: Andrea Arcangeli @ 2019-11-04 22:59 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: Paolo Bonzini, Vitaly Kuznetsov, Sean Christopherson

This replaces all kvm_x86_ops and kvm_pmu_ops function pointers with
regular external functions that don't require indirect calls.
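
The conversion itself is mechanical; a minimal sketch of the
call-site pattern (this is the kind of change visible throughout the
diffs below, e.g. in mmu.h):

	/* before: indirect call through the ops struct, hits a retpoline */
	int cpl = kvm_x86_ops->get_cpl(vcpu);

	/* after: direct call to the vendor-provided external function */
	int cpl = kvm_x86_get_cpl(vcpu);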

In practice this optimization results in a double-digit percentage
reduction of the vmexit latency with the default retpoline Spectre v2
mitigation enabled in the host.

When the host is booted with spectre_v2=off, this still results in a
measurable best-case improvement of up to 2% under high-frequency
timer guest workloads, although the gain varies widely depending on
the CPU vendor implementation. Guest userland workloads stressing the
BTB might possibly see an improvement even on CPUs where the vmexit
latency isn't reduced with all mitigations disabled.

To reduce rejects while tracking upstream, this doesn't attempt to
remove the kvm_x86_ops structure entirely yet; that is left for a
later cleanup. The pmu ops structure is already dropped in this
patchset, because it was left completely unused right after the
conversion from function pointers to external functions.

Further minor incremental optimizations that weren't possible before
are now enabled by the monolithic model. For example, it is now
possible to convert some of the small kvm_x86_* external methods to
inline functions. However, that will require more Makefile tweaks, so
it is left for later.
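
As a purely hypothetical illustration of that later step (not part of
this series), a trivial hook like the AMD kvm_x86_pmu_pmc_is_enabled()
in the diff below, which unconditionally returns true, could become a
static inline in a vendor-specific header once the vendor is fixed at
build time:

	/* hypothetical later cleanup, AMD build only -- not in this series */
	static inline bool kvm_x86_pmu_pmc_is_enabled(struct kvm_pmc *pmc)
	{
		/* AMD has no global_ctrl MSR, all PMCs are always enabled */
		return true;
	}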

Below is a list of the most common retpoline call stacks executed in
KVM on VMX, under a guest workload triggering a high-resolution timer
SIGALRM flood, before the monolithic KVM patchset is applied.

[..]
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_rax+33
    cancel_hv_timer.isra.46+44
    restart_apic_timer+295
    kvm_set_msr_common+1435
    vmx_set_msr+478
    handle_wrmsr+85
    vcpu_enter_guest+772
    kvm_arch_vcpu_ioctl_run+263
    kvm_vcpu_ioctl+559
    do_vfs_ioctl+164
    ksys_ioctl+96
    __x64_sys_ioctl+22
    do_syscall_64+89
    entry_SYSCALL_64_after_hwframe+68
]: 65382
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_rax+33
    vcpu_enter_guest+1646
    kvm_arch_vcpu_ioctl_run+263
    kvm_vcpu_ioctl+559
    do_vfs_ioctl+164
    ksys_ioctl+96
    __x64_sys_ioctl+22
    do_syscall_64+89
    entry_SYSCALL_64_after_hwframe+68
]: 66164
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_rax+33
    kvm_read_l1_tsc+41
    __kvm_wait_lapic_expire+60
    vmx_vcpu_run.part.88+1091
    vcpu_enter_guest+423
    kvm_arch_vcpu_ioctl_run+263
    kvm_vcpu_ioctl+559
    do_vfs_ioctl+164
    ksys_ioctl+96
    __x64_sys_ioctl+22
    do_syscall_64+89
    entry_SYSCALL_64_after_hwframe+68
]: 66199
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_rax+33
    vcpu_enter_guest+4958
    kvm_arch_vcpu_ioctl_run+263
    kvm_vcpu_ioctl+559
    do_vfs_ioctl+164
    ksys_ioctl+96
    __x64_sys_ioctl+22
    do_syscall_64+89
    entry_SYSCALL_64_after_hwframe+68
]: 66227
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_rax+33
    restart_apic_timer+99
    kvm_set_msr_common+1435
    vmx_set_msr+478
    handle_wrmsr+85
    vcpu_enter_guest+772
    kvm_arch_vcpu_ioctl_run+263
    kvm_vcpu_ioctl+559
    do_vfs_ioctl+164
    ksys_ioctl+96
    __x64_sys_ioctl+22
    do_syscall_64+89
    entry_SYSCALL_64_after_hwframe+68
]: 130619
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_rax+33
    kvm_read_l1_tsc+41
    vmx_set_hv_timer+81
    restart_apic_timer+99
    kvm_set_msr_common+1435
    vmx_set_msr+478
    handle_wrmsr+85
    vcpu_enter_guest+772
    kvm_arch_vcpu_ioctl_run+263
    kvm_vcpu_ioctl+559
    do_vfs_ioctl+164
    ksys_ioctl+96
    __x64_sys_ioctl+22
    do_syscall_64+89
    entry_SYSCALL_64_after_hwframe+68
]: 130665
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_rax+33
    kvm_skip_emulated_instruction+49
    handle_wrmsr+102
    vcpu_enter_guest+772
    kvm_arch_vcpu_ioctl_run+263
    kvm_vcpu_ioctl+559
    do_vfs_ioctl+164
    ksys_ioctl+96
    __x64_sys_ioctl+22
    do_syscall_64+89
    entry_SYSCALL_64_after_hwframe+68
]: 131020
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_rax+33
    kvm_skip_emulated_instruction+82
    handle_wrmsr+102
    vcpu_enter_guest+772
    kvm_arch_vcpu_ioctl_run+263
    kvm_vcpu_ioctl+559
    do_vfs_ioctl+164
    ksys_ioctl+96
    __x64_sys_ioctl+22
    do_syscall_64+89
    entry_SYSCALL_64_after_hwframe+68
]: 131025
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_rax+33
    handle_wrmsr+85
    vcpu_enter_guest+772
    kvm_arch_vcpu_ioctl_run+263
    kvm_vcpu_ioctl+559
    do_vfs_ioctl+164
    ksys_ioctl+96
    __x64_sys_ioctl+22
    do_syscall_64+89
    entry_SYSCALL_64_after_hwframe+68
]: 131043
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_rax+33
    skip_emulated_instruction+48
    kvm_skip_emulated_instruction+82
    handle_wrmsr+102
    vcpu_enter_guest+772
    kvm_arch_vcpu_ioctl_run+263
    kvm_vcpu_ioctl+559
    do_vfs_ioctl+164
    ksys_ioctl+96
    __x64_sys_ioctl+22
    do_syscall_64+89
    entry_SYSCALL_64_after_hwframe+68
]: 131046
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_rax+33
    vcpu_enter_guest+4009
    kvm_arch_vcpu_ioctl_run+263
    kvm_vcpu_ioctl+559
    do_vfs_ioctl+164
    ksys_ioctl+96
    __x64_sys_ioctl+22
    do_syscall_64+89
    entry_SYSCALL_64_after_hwframe+68
]: 132405
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_rcx+33
    vcpu_enter_guest+1689
    kvm_arch_vcpu_ioctl_run+263
    kvm_vcpu_ioctl+559
    do_vfs_ioctl+164
    ksys_ioctl+96
    __x64_sys_ioctl+22
    do_syscall_64+89
    entry_SYSCALL_64_after_hwframe+68
]: 197697
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_rax+33
    vmx_vcpu_run.part.88+358
    vcpu_enter_guest+423
    kvm_arch_vcpu_ioctl_run+263
    kvm_vcpu_ioctl+559
    do_vfs_ioctl+164
    ksys_ioctl+96
    __x64_sys_ioctl+22
    do_syscall_64+89
    entry_SYSCALL_64_after_hwframe+68
]: 198736
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_rax+33
    vcpu_enter_guest+575
    kvm_arch_vcpu_ioctl_run+263
    kvm_vcpu_ioctl+559
    do_vfs_ioctl+164
    ksys_ioctl+96
    __x64_sys_ioctl+22
    do_syscall_64+89
    entry_SYSCALL_64_after_hwframe+68
]: 198771
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_rax+33
    vcpu_enter_guest+423
    kvm_arch_vcpu_ioctl_run+263
    kvm_vcpu_ioctl+559
    do_vfs_ioctl+164
    ksys_ioctl+96
    __x64_sys_ioctl+22
    do_syscall_64+89
    entry_SYSCALL_64_after_hwframe+68
]: 198793
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_rax+33
    vcpu_enter_guest+486
    kvm_arch_vcpu_ioctl_run+263
    kvm_vcpu_ioctl+559
    do_vfs_ioctl+164
    ksys_ioctl+96
    __x64_sys_ioctl+22
    do_syscall_64+89
    entry_SYSCALL_64_after_hwframe+68
]: 198801
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_rax+33
    vcpu_enter_guest+168
    kvm_arch_vcpu_ioctl_run+263
    kvm_vcpu_ioctl+559
    do_vfs_ioctl+164
    ksys_ioctl+96
    __x64_sys_ioctl+22
    do_syscall_64+89
    entry_SYSCALL_64_after_hwframe+68
]: 198848
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_rax+33
    vcpu_enter_guest+772
    kvm_arch_vcpu_ioctl_run+263
    kvm_vcpu_ioctl+559
    do_vfs_ioctl+164
    ksys_ioctl+96
    __x64_sys_ioctl+22
    do_syscall_64+89
    entry_SYSCALL_64_after_hwframe+68
]: 397680

@total: 3816655

Here is the same list, but on SVM:

[..]
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_rax+33
    clockevents_program_event+148
    hrtimer_start_range_ns+528
    start_sw_timer+356
    restart_apic_timer+111
    kvm_set_msr_common+1435
    msr_interception+138
    vcpu_enter_guest+772
    kvm_arch_vcpu_ioctl_run+263
    kvm_vcpu_ioctl+559
    do_vfs_ioctl+164
    ksys_ioctl+96
    __x64_sys_ioctl+22
    do_syscall_64+89
    entry_SYSCALL_64_after_hwframe+68
]: 36031
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_rax+33
    lapic_next_event+28
    clockevents_program_event+148
    hrtimer_start_range_ns+528
    start_sw_timer+356
    restart_apic_timer+111
    kvm_set_msr_common+1435
    msr_interception+138
    vcpu_enter_guest+772
    kvm_arch_vcpu_ioctl_run+263
    kvm_vcpu_ioctl+559
    do_vfs_ioctl+164
    ksys_ioctl+96
    __x64_sys_ioctl+22
    do_syscall_64+89
    entry_SYSCALL_64_after_hwframe+68
]: 36063
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_rax+33
    ktime_get+58
    clockevents_program_event+84
    hrtimer_try_to_cancel+168
    hrtimer_cancel+21
    kvm_set_lapic_tscdeadline_msr+43
    kvm_set_msr_common+1435
    msr_interception+138
    vcpu_enter_guest+772
    kvm_arch_vcpu_ioctl_run+263
    kvm_vcpu_ioctl+559
    do_vfs_ioctl+164
    ksys_ioctl+96
    __x64_sys_ioctl+22
    do_syscall_64+89
    entry_SYSCALL_64_after_hwframe+68
]: 36134
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_rax+33
    lapic_next_event+28
    clockevents_program_event+148
    hrtimer_try_to_cancel+168
    hrtimer_cancel+21
    kvm_set_lapic_tscdeadline_msr+43
    kvm_set_msr_common+1435
    msr_interception+138
    vcpu_enter_guest+772
    kvm_arch_vcpu_ioctl_run+263
    kvm_vcpu_ioctl+559
    do_vfs_ioctl+164
    ksys_ioctl+96
    __x64_sys_ioctl+22
    do_syscall_64+89
    entry_SYSCALL_64_after_hwframe+68
]: 36146
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_rax+33
    clockevents_program_event+148
    hrtimer_try_to_cancel+168
    hrtimer_cancel+21
    kvm_set_lapic_tscdeadline_msr+43
    kvm_set_msr_common+1435
    msr_interception+138
    vcpu_enter_guest+772
    kvm_arch_vcpu_ioctl_run+263
    kvm_vcpu_ioctl+559
    do_vfs_ioctl+164
    ksys_ioctl+96
    __x64_sys_ioctl+22
    do_syscall_64+89
    entry_SYSCALL_64_after_hwframe+68
]: 36190
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_rax+33
    ktime_get+58
    clockevents_program_event+84
    hrtimer_start_range_ns+528
    start_sw_timer+356
    restart_apic_timer+111
    kvm_set_msr_common+1435
    msr_interception+138
    vcpu_enter_guest+772
    kvm_arch_vcpu_ioctl_run+263
    kvm_vcpu_ioctl+559
    do_vfs_ioctl+164
    ksys_ioctl+96
    __x64_sys_ioctl+22
    do_syscall_64+89
    entry_SYSCALL_64_after_hwframe+68
]: 36281
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_rax+33
    vcpu_enter_guest+1646
    kvm_arch_vcpu_ioctl_run+263
    kvm_vcpu_ioctl+559
    do_vfs_ioctl+164
    ksys_ioctl+96
    __x64_sys_ioctl+22
    do_syscall_64+89
    entry_SYSCALL_64_after_hwframe+68
]: 37752
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_rax+33
    kvm_read_l1_tsc+41
    __kvm_wait_lapic_expire+60
    svm_vcpu_run+1276
]: 37886
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_rax+33
    vcpu_enter_guest+4958
    kvm_arch_vcpu_ioctl_run+263
    kvm_vcpu_ioctl+559
    do_vfs_ioctl+164
    ksys_ioctl+96
    __x64_sys_ioctl+22
    do_syscall_64+89
    entry_SYSCALL_64_after_hwframe+68
]: 37957
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_rax+33
    kvm_read_l1_tsc+41
    start_sw_timer+302
    restart_apic_timer+111
    kvm_set_msr_common+1435
    msr_interception+138
    vcpu_enter_guest+772
    kvm_arch_vcpu_ioctl_run+263
    kvm_vcpu_ioctl+559
    do_vfs_ioctl+164
    ksys_ioctl+96
    __x64_sys_ioctl+22
    do_syscall_64+89
    entry_SYSCALL_64_after_hwframe+68
]: 74358
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_rax+33
    ktime_get+58
    start_sw_timer+279
    restart_apic_timer+111
    kvm_set_msr_common+1435
    msr_interception+138
    vcpu_enter_guest+772
    kvm_arch_vcpu_ioctl_run+263
    kvm_vcpu_ioctl+559
    do_vfs_ioctl+164
    ksys_ioctl+96
    __x64_sys_ioctl+22
    do_syscall_64+89
    entry_SYSCALL_64_after_hwframe+68
]: 74558
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_rax+33
    kvm_skip_emulated_instruction+82
    msr_interception+356
    vcpu_enter_guest+772
    kvm_arch_vcpu_ioctl_run+263
    kvm_vcpu_ioctl+559
    do_vfs_ioctl+164
    ksys_ioctl+96
    __x64_sys_ioctl+22
    do_syscall_64+89
    entry_SYSCALL_64_after_hwframe+68
]: 74713
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_rax+33
    kvm_skip_emulated_instruction+49
    msr_interception+356
    vcpu_enter_guest+772
    kvm_arch_vcpu_ioctl_run+263
    kvm_vcpu_ioctl+559
    do_vfs_ioctl+164
    ksys_ioctl+96
    __x64_sys_ioctl+22
    do_syscall_64+89
    entry_SYSCALL_64_after_hwframe+68
]: 74757
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_rax+33
    msr_interception+138
    vcpu_enter_guest+772
    kvm_arch_vcpu_ioctl_run+263
    kvm_vcpu_ioctl+559
    do_vfs_ioctl+164
    ksys_ioctl+96
    __x64_sys_ioctl+22
    do_syscall_64+89
    entry_SYSCALL_64_after_hwframe+68
]: 74795
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_rax+33
    kvm_get_rflags+28
    svm_interrupt_allowed+50
    vcpu_enter_guest+4009
    kvm_arch_vcpu_ioctl_run+263
    kvm_vcpu_ioctl+559
    do_vfs_ioctl+164
    ksys_ioctl+96
    __x64_sys_ioctl+22
    do_syscall_64+89
    entry_SYSCALL_64_after_hwframe+68
]: 75647
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_rax+33
    vcpu_enter_guest+4009
    kvm_arch_vcpu_ioctl_run+263
    kvm_vcpu_ioctl+559
    do_vfs_ioctl+164
    ksys_ioctl+96
    __x64_sys_ioctl+22
    do_syscall_64+89
    entry_SYSCALL_64_after_hwframe+68
]: 75812
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_rcx+33
    vcpu_enter_guest+1689
    kvm_arch_vcpu_ioctl_run+263
    kvm_vcpu_ioctl+559
    do_vfs_ioctl+164
    ksys_ioctl+96
    __x64_sys_ioctl+22
    do_syscall_64+89
    entry_SYSCALL_64_after_hwframe+68
]: 112579
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_rax+33
    vcpu_enter_guest+575
    kvm_arch_vcpu_ioctl_run+263
    kvm_vcpu_ioctl+559
    do_vfs_ioctl+164
    ksys_ioctl+96
    __x64_sys_ioctl+22
    do_syscall_64+89
    entry_SYSCALL_64_after_hwframe+68
]: 113371
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_rax+33
    vcpu_enter_guest+423
    kvm_arch_vcpu_ioctl_run+263
    kvm_vcpu_ioctl+559
    do_vfs_ioctl+164
    ksys_ioctl+96
    __x64_sys_ioctl+22
    do_syscall_64+89
    entry_SYSCALL_64_after_hwframe+68
]: 113386
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_rax+33
    vcpu_enter_guest+486
    kvm_arch_vcpu_ioctl_run+263
    kvm_vcpu_ioctl+559
    do_vfs_ioctl+164
    ksys_ioctl+96
    __x64_sys_ioctl+22
    do_syscall_64+89
    entry_SYSCALL_64_after_hwframe+68
]: 113414
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_rax+33
    vcpu_enter_guest+168
    kvm_arch_vcpu_ioctl_run+263
    kvm_vcpu_ioctl+559
    do_vfs_ioctl+164
    ksys_ioctl+96
    __x64_sys_ioctl+22
    do_syscall_64+89
    entry_SYSCALL_64_after_hwframe+68
]: 113601
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_rax+33
    vcpu_enter_guest+772
    kvm_arch_vcpu_ioctl_run+263
    kvm_vcpu_ioctl+559
    do_vfs_ioctl+164
    ksys_ioctl+96
    __x64_sys_ioctl+22
    do_syscall_64+89
    entry_SYSCALL_64_after_hwframe+68
]: 227076

@total: 3829460

Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
---
 arch/x86/include/asm/kvm_host.h | 203 ++++++++-
 arch/x86/kvm/cpuid.c            |  22 +-
 arch/x86/kvm/hyperv.c           |   6 +-
 arch/x86/kvm/kvm_cache_regs.h   |  10 +-
 arch/x86/kvm/lapic.c            |  30 +-
 arch/x86/kvm/mmu.c              |  26 +-
 arch/x86/kvm/mmu.h              |   4 +-
 arch/x86/kvm/pmu.c              |  24 +-
 arch/x86/kvm/pmu.h              |  19 +-
 arch/x86/kvm/pmu_amd.c          |  52 +--
 arch/x86/kvm/svm.c              | 664 ++++++++++++++++------------
 arch/x86/kvm/trace.h            |   4 +-
 arch/x86/kvm/vmx/nested.c       |  84 ++--
 arch/x86/kvm/vmx/pmu_intel.c    |  55 ++-
 arch/x86/kvm/vmx/vmx.c          | 739 ++++++++++++++++++--------------
 arch/x86/kvm/vmx/vmx.h          |  39 +-
 arch/x86/kvm/x86.c              | 308 ++++++-------
 arch/x86/kvm/x86.h              |   2 +-
 18 files changed, 1361 insertions(+), 930 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 24d6598dea29..b36dd3265036 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -998,6 +998,197 @@ struct kvm_lapic_irq {
 	bool msi_redir_hint;
 };
 
+extern __init int kvm_x86_cpu_has_kvm_support(void);
+extern __init int kvm_x86_disabled_by_bios(void);
+extern int kvm_x86_hardware_enable(void);
+extern void kvm_x86_hardware_disable(void);
+extern __init int kvm_x86_check_processor_compatibility(void);
+extern __init int kvm_x86_hardware_setup(void);
+extern __exit void kvm_x86_hardware_unsetup(void);
+extern bool kvm_x86_cpu_has_accelerated_tpr(void);
+extern bool kvm_x86_has_emulated_msr(int index);
+extern void kvm_x86_cpuid_update(struct kvm_vcpu *vcpu);
+extern struct kvm *kvm_x86_vm_alloc(void);
+extern void kvm_x86_vm_free(struct kvm *kvm);
+extern int kvm_x86_vm_init(struct kvm *kvm);
+extern void kvm_x86_vm_destroy(struct kvm *kvm);
+/* Create, but do not attach this VCPU */
+extern struct kvm_vcpu *kvm_x86_vcpu_create(struct kvm *kvm, unsigned id);
+extern void kvm_x86_vcpu_free(struct kvm_vcpu *vcpu);
+extern void kvm_x86_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event);
+extern void kvm_x86_prepare_guest_switch(struct kvm_vcpu *vcpu);
+extern void kvm_x86_vcpu_load(struct kvm_vcpu *vcpu, int cpu);
+extern void kvm_x86_vcpu_put(struct kvm_vcpu *vcpu);
+extern void kvm_x86_update_bp_intercept(struct kvm_vcpu *vcpu);
+extern int kvm_x86_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr);
+extern int kvm_x86_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr);
+extern u64 kvm_x86_get_segment_base(struct kvm_vcpu *vcpu, int seg);
+extern void kvm_x86_get_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var,
+				int seg);
+extern int kvm_x86_get_cpl(struct kvm_vcpu *vcpu);
+extern void kvm_x86_set_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var,
+				int seg);
+extern void kvm_x86_get_cs_db_l_bits(struct kvm_vcpu *vcpu, int *db, int *l);
+extern void kvm_x86_decache_cr0_guest_bits(struct kvm_vcpu *vcpu);
+extern void kvm_x86_decache_cr3(struct kvm_vcpu *vcpu);
+extern void kvm_x86_decache_cr4_guest_bits(struct kvm_vcpu *vcpu);
+extern void kvm_x86_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0);
+extern void kvm_x86_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3);
+extern int kvm_x86_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4);
+extern void kvm_x86_set_efer(struct kvm_vcpu *vcpu, u64 efer);
+extern void kvm_x86_get_idt(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
+extern void kvm_x86_set_idt(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
+extern void kvm_x86_get_gdt(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
+extern void kvm_x86_set_gdt(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
+extern u64 kvm_x86_get_dr6(struct kvm_vcpu *vcpu);
+extern void kvm_x86_set_dr6(struct kvm_vcpu *vcpu, unsigned long value);
+extern void kvm_x86_sync_dirty_debug_regs(struct kvm_vcpu *vcpu);
+extern void kvm_x86_set_dr7(struct kvm_vcpu *vcpu, unsigned long value);
+extern void kvm_x86_cache_reg(struct kvm_vcpu *vcpu, enum kvm_reg reg);
+extern unsigned long kvm_x86_get_rflags(struct kvm_vcpu *vcpu);
+extern void kvm_x86_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags);
+extern void kvm_x86_tlb_flush(struct kvm_vcpu *vcpu, bool invalidate_gpa);
+extern int kvm_x86_tlb_remote_flush(struct kvm *kvm);
+extern int kvm_x86_tlb_remote_flush_with_range(struct kvm *kvm,
+					       struct kvm_tlb_range *range);
+/*
+ * Flush any TLB entries associated with the given GVA.
+ * Does not need to flush GPA->HPA mappings.
+ * Can potentially get non-canonical addresses through INVLPGs, which
+ * the implementation may choose to ignore if appropriate.
+ */
+extern void kvm_x86_tlb_flush_gva(struct kvm_vcpu *vcpu, gva_t addr);
+extern void kvm_x86_run(struct kvm_vcpu *vcpu);
+extern int kvm_x86_handle_exit(struct kvm_vcpu *vcpu);
+extern int kvm_x86_skip_emulated_instruction(struct kvm_vcpu *vcpu);
+extern void kvm_x86_set_interrupt_shadow(struct kvm_vcpu *vcpu, int mask);
+extern u32 kvm_x86_get_interrupt_shadow(struct kvm_vcpu *vcpu);
+extern void kvm_x86_patch_hypercall(struct kvm_vcpu *vcpu,
+				    unsigned char *hypercall_addr);
+extern void kvm_x86_set_irq(struct kvm_vcpu *vcpu);
+extern void kvm_x86_set_nmi(struct kvm_vcpu *vcpu);
+extern void kvm_x86_queue_exception(struct kvm_vcpu *vcpu);
+extern void kvm_x86_cancel_injection(struct kvm_vcpu *vcpu);
+extern int kvm_x86_interrupt_allowed(struct kvm_vcpu *vcpu);
+extern int kvm_x86_nmi_allowed(struct kvm_vcpu *vcpu);
+extern bool kvm_x86_get_nmi_mask(struct kvm_vcpu *vcpu);
+extern void kvm_x86_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked);
+extern void kvm_x86_enable_nmi_window(struct kvm_vcpu *vcpu);
+extern void kvm_x86_enable_irq_window(struct kvm_vcpu *vcpu);
+extern void kvm_x86_update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr,
+					 int irr);
+extern bool kvm_x86_get_enable_apicv(struct kvm_vcpu *vcpu);
+extern void kvm_x86_refresh_apicv_exec_ctrl(struct kvm_vcpu *vcpu);
+extern void kvm_x86_hwapic_irr_update(struct kvm_vcpu *vcpu, int max_irr);
+extern void kvm_x86_hwapic_isr_update(struct kvm_vcpu *vcpu, int isr);
+extern bool kvm_x86_guest_apic_has_interrupt(struct kvm_vcpu *vcpu);
+extern void kvm_x86_load_eoi_exitmap(struct kvm_vcpu *vcpu,
+				     u64 *eoi_exit_bitmap);
+extern void kvm_x86_set_virtual_apic_mode(struct kvm_vcpu *vcpu);
+extern void kvm_x86_set_apic_access_page_addr(struct kvm_vcpu *vcpu,
+					      hpa_t hpa);
+extern void kvm_x86_deliver_posted_interrupt(struct kvm_vcpu *vcpu,
+					     int vector);
+extern int kvm_x86_sync_pir_to_irr(struct kvm_vcpu *vcpu);
+extern int kvm_x86_set_tss_addr(struct kvm *kvm, unsigned int addr);
+extern int kvm_x86_set_identity_map_addr(struct kvm *kvm, u64 ident_addr);
+extern int kvm_x86_get_tdp_level(struct kvm_vcpu *vcpu);
+extern u64 kvm_x86_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio);
+extern int kvm_x86_get_lpage_level(void);
+extern bool kvm_x86_rdtscp_supported(void);
+extern bool kvm_x86_invpcid_supported(void);
+extern void kvm_x86_set_tdp_cr3(struct kvm_vcpu *vcpu, unsigned long cr3);
+extern void kvm_x86_set_supported_cpuid(u32 func,
+					struct kvm_cpuid_entry2 *entry);
+extern bool kvm_x86_has_wbinvd_exit(void);
+extern u64 kvm_x86_read_l1_tsc_offset(struct kvm_vcpu *vcpu);
+/* Returns actual tsc_offset set in active VMCS */
+extern u64 kvm_x86_write_l1_tsc_offset(struct kvm_vcpu *vcpu, u64 offset);
+extern void kvm_x86_get_exit_info(struct kvm_vcpu *vcpu, u64 *info1,
+				  u64 *info2);
+extern int kvm_x86_check_intercept(struct kvm_vcpu *vcpu,
+				   struct x86_instruction_info *info,
+				   enum x86_intercept_stage stage);
+extern void kvm_x86_handle_exit_irqoff(struct kvm_vcpu *vcpu);
+extern bool kvm_x86_mpx_supported(void);
+extern bool kvm_x86_xsaves_supported(void);
+extern bool kvm_x86_umip_emulated(void);
+extern bool kvm_x86_pt_supported(void);
+extern int kvm_x86_check_nested_events(struct kvm_vcpu *vcpu,
+				       bool external_intr);
+extern void kvm_x86_request_immediate_exit(struct kvm_vcpu *vcpu);
+extern void kvm_x86_sched_in(struct kvm_vcpu *kvm, int cpu);
+/*
+ * Arch-specific dirty logging hooks. These hooks are only supposed to
+ * be valid if the specific arch has hardware-accelerated dirty logging
+ * mechanism. Currently only for PML on VMX.
+ *
+ *  - slot_enable_log_dirty:
+ *	called when enabling log dirty mode for the slot.
+ *  - slot_disable_log_dirty:
+ *	called when disabling log dirty mode for the slot.
+ *	also called when slot is created with log dirty disabled.
+ *  - flush_log_dirty:
+ *	called before reporting dirty_bitmap to userspace.
+ *  - enable_log_dirty_pt_masked:
+ *	called when reenabling log dirty for the GFNs in the mask after
+ *	corresponding bits are cleared in slot->dirty_bitmap.
+ */
+extern void kvm_x86_slot_enable_log_dirty(struct kvm *kvm,
+					  struct kvm_memory_slot *slot);
+extern void kvm_x86_slot_disable_log_dirty(struct kvm *kvm,
+					   struct kvm_memory_slot *slot);
+extern void kvm_x86_flush_log_dirty(struct kvm *kvm);
+extern void kvm_x86_enable_log_dirty_pt_masked(struct kvm *kvm,
+					       struct kvm_memory_slot *slot,
+					       gfn_t offset,
+					       unsigned long mask);
+extern int kvm_x86_write_log_dirty(struct kvm_vcpu *vcpu);
+/*
+ * Architecture specific hooks for vCPU blocking due to
+ * HLT instruction.
+ * Returns for .pre_block():
+ *    - 0 means continue to block the vCPU.
+ *    - 1 means we cannot block the vCPU since some event
+ *        happens during this period, such as, 'ON' bit in
+ *        posted-interrupts descriptor is set.
+ */
+extern int kvm_x86_pre_block(struct kvm_vcpu *vcpu);
+extern void kvm_x86_post_block(struct kvm_vcpu *vcpu);
+extern void kvm_x86_vcpu_blocking(struct kvm_vcpu *vcpu);
+extern void kvm_x86_vcpu_unblocking(struct kvm_vcpu *vcpu);
+extern int kvm_x86_update_pi_irte(struct kvm *kvm, unsigned int host_irq,
+				  uint32_t guest_irq, bool set);
+extern void kvm_x86_apicv_post_state_restore(struct kvm_vcpu *vcpu);
+extern bool kvm_x86_dy_apicv_has_pending_interrupt(struct kvm_vcpu *vcpu);
+extern int kvm_x86_set_hv_timer(struct kvm_vcpu *vcpu, u64 guest_deadline_tsc,
+				bool *expired);
+extern void kvm_x86_cancel_hv_timer(struct kvm_vcpu *vcpu);
+extern void kvm_x86_setup_mce(struct kvm_vcpu *vcpu);
+extern int kvm_x86_get_nested_state(struct kvm_vcpu *vcpu,
+				    struct kvm_nested_state __user *user_kvm_nested_state,
+				    unsigned user_data_size);
+extern int kvm_x86_set_nested_state(struct kvm_vcpu *vcpu,
+				    struct kvm_nested_state __user *user_kvm_nested_state,
+				    struct kvm_nested_state *kvm_state);
+extern bool kvm_x86_get_vmcs12_pages(struct kvm_vcpu *vcpu);
+extern int kvm_x86_smi_allowed(struct kvm_vcpu *vcpu);
+extern int kvm_x86_pre_enter_smm(struct kvm_vcpu *vcpu, char *smstate);
+extern int kvm_x86_pre_leave_smm(struct kvm_vcpu *vcpu, const char *smstate);
+extern int kvm_x86_enable_smi_window(struct kvm_vcpu *vcpu);
+extern int kvm_x86_mem_enc_op(struct kvm *kvm, void __user *argp);
+extern int kvm_x86_mem_enc_reg_region(struct kvm *kvm,
+				      struct kvm_enc_region *argp);
+extern int kvm_x86_mem_enc_unreg_region(struct kvm *kvm,
+					struct kvm_enc_region *argp);
+extern int kvm_x86_get_msr_feature(struct kvm_msr_entry *entry);
+extern int kvm_x86_nested_enable_evmcs(struct kvm_vcpu *vcpu,
+				       uint16_t *vmcs_version);
+extern uint16_t kvm_x86_nested_get_evmcs_version(struct kvm_vcpu *vcpu);
+extern bool kvm_x86_need_emulation_on_page_fault(struct kvm_vcpu *vcpu);
+extern bool kvm_x86_apic_init_signal_blocked(struct kvm_vcpu *vcpu);
+extern int kvm_x86_enable_direct_tlbflush(struct kvm_vcpu *vcpu);
+
 struct kvm_x86_ops {
 	int (*cpu_has_kvm_support)(void);          /* __init */
 	int (*disabled_by_bios)(void);             /* __init */
@@ -1225,19 +1416,19 @@ extern struct kmem_cache *x86_fpu_cache;
 #define __KVM_HAVE_ARCH_VM_ALLOC
 static inline struct kvm *kvm_arch_alloc_vm(void)
 {
-	return kvm_x86_ops->vm_alloc();
+	return kvm_x86_vm_alloc();
 }
 
 static inline void kvm_arch_free_vm(struct kvm *kvm)
 {
-	return kvm_x86_ops->vm_free(kvm);
+	return kvm_x86_vm_free(kvm);
 }
 
 #define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLB
 static inline int kvm_arch_flush_remote_tlb(struct kvm *kvm)
 {
 	if (kvm_x86_ops->tlb_remote_flush &&
-	    !kvm_x86_ops->tlb_remote_flush(kvm))
+	    !kvm_x86_tlb_remote_flush(kvm))
 		return 0;
 	else
 		return -ENOTSUPP;
@@ -1322,7 +1513,7 @@ extern u64 kvm_mce_cap_supported;
  *
  * EMULTYPE_SKIP - Set when emulating solely to skip an instruction, i.e. to
  *		   decode the instruction length.  For use *only* by
- *		   kvm_x86_ops->skip_emulated_instruction() implementations.
+ *		   kvm_x86_skip_emulated_instruction() implementations.
  *
  * EMULTYPE_ALLOW_RETRY - Set when the emulator should resume the guest to
  *			  retry native execution under certain conditions.
@@ -1608,13 +1799,13 @@ static inline bool kvm_irq_is_postable(struct kvm_lapic_irq *irq)
 static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu)
 {
 	if (kvm_x86_ops->vcpu_blocking)
-		kvm_x86_ops->vcpu_blocking(vcpu);
+		kvm_x86_vcpu_blocking(vcpu);
 }
 
 static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu)
 {
 	if (kvm_x86_ops->vcpu_unblocking)
-		kvm_x86_ops->vcpu_unblocking(vcpu);
+		kvm_x86_vcpu_unblocking(vcpu);
 }
 
 static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index f68c0c753c38..d156d27d83bb 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -48,7 +48,7 @@ static u32 xstate_required_size(u64 xstate_bv, bool compacted)
 bool kvm_mpx_supported(void)
 {
 	return ((host_xcr0 & (XFEATURE_MASK_BNDREGS | XFEATURE_MASK_BNDCSR))
-		 && kvm_x86_ops->mpx_supported());
+		 && kvm_x86_mpx_supported());
 }
 EXPORT_SYMBOL_GPL(kvm_mpx_supported);
 
@@ -232,7 +232,7 @@ int kvm_vcpu_ioctl_set_cpuid(struct kvm_vcpu *vcpu,
 	vcpu->arch.cpuid_nent = cpuid->nent;
 	cpuid_fix_nx_cap(vcpu);
 	kvm_apic_set_version(vcpu);
-	kvm_x86_ops->cpuid_update(vcpu);
+	kvm_x86_cpuid_update(vcpu);
 	r = kvm_update_cpuid(vcpu);
 
 out:
@@ -255,7 +255,7 @@ int kvm_vcpu_ioctl_set_cpuid2(struct kvm_vcpu *vcpu,
 		goto out;
 	vcpu->arch.cpuid_nent = cpuid->nent;
 	kvm_apic_set_version(vcpu);
-	kvm_x86_ops->cpuid_update(vcpu);
+	kvm_x86_cpuid_update(vcpu);
 	r = kvm_update_cpuid(vcpu);
 out:
 	return r;
@@ -347,10 +347,10 @@ static int __do_cpuid_func_emulated(struct kvm_cpuid_entry2 *entry,
 
 static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry, int index)
 {
-	unsigned f_invpcid = kvm_x86_ops->invpcid_supported() ? F(INVPCID) : 0;
+	unsigned f_invpcid = kvm_x86_invpcid_supported() ? F(INVPCID) : 0;
 	unsigned f_mpx = kvm_mpx_supported() ? F(MPX) : 0;
-	unsigned f_umip = kvm_x86_ops->umip_emulated() ? F(UMIP) : 0;
-	unsigned f_intel_pt = kvm_x86_ops->pt_supported() ? F(INTEL_PT) : 0;
+	unsigned f_umip = kvm_x86_umip_emulated() ? F(UMIP) : 0;
+	unsigned f_intel_pt = kvm_x86_pt_supported() ? F(INTEL_PT) : 0;
 	unsigned f_la57;
 
 	/* cpuid 7.0.ebx */
@@ -432,16 +432,16 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
 	int r;
 	unsigned f_nx = is_efer_nx() ? F(NX) : 0;
 #ifdef CONFIG_X86_64
-	unsigned f_gbpages = (kvm_x86_ops->get_lpage_level() == PT_PDPE_LEVEL)
+	unsigned f_gbpages = (kvm_x86_get_lpage_level() == PT_PDPE_LEVEL)
 				? F(GBPAGES) : 0;
 	unsigned f_lm = F(LM);
 #else
 	unsigned f_gbpages = 0;
 	unsigned f_lm = 0;
 #endif
-	unsigned f_rdtscp = kvm_x86_ops->rdtscp_supported() ? F(RDTSCP) : 0;
-	unsigned f_xsaves = kvm_x86_ops->xsaves_supported() ? F(XSAVES) : 0;
-	unsigned f_intel_pt = kvm_x86_ops->pt_supported() ? F(INTEL_PT) : 0;
+	unsigned f_rdtscp = kvm_x86_rdtscp_supported() ? F(RDTSCP) : 0;
+	unsigned f_xsaves = kvm_x86_xsaves_supported() ? F(XSAVES) : 0;
+	unsigned f_intel_pt = kvm_x86_pt_supported() ? F(INTEL_PT) : 0;
 
 	/* cpuid 1.edx */
 	const u32 kvm_cpuid_1_edx_x86_features =
@@ -797,7 +797,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
 		break;
 	}
 
-	kvm_x86_ops->set_supported_cpuid(function, entry);
+	kvm_x86_set_supported_cpuid(function, entry);
 
 	r = 0;
 
diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index 23ff65504d7e..a345e48a7a24 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -1018,7 +1018,7 @@ static int kvm_hv_set_msr_pw(struct kvm_vcpu *vcpu, u32 msr, u64 data,
 		addr = gfn_to_hva(kvm, gfn);
 		if (kvm_is_error_hva(addr))
 			return 1;
-		kvm_x86_ops->patch_hypercall(vcpu, instructions);
+		kvm_x86_patch_hypercall(vcpu, instructions);
 		((unsigned char *)instructions)[3] = 0xc3; /* ret */
 		if (__copy_to_user((void __user *)addr, instructions, 4))
 			return 1;
@@ -1603,7 +1603,7 @@ int kvm_hv_hypercall(struct kvm_vcpu *vcpu)
 	 * hypercall generates UD from non zero cpl and real mode
 	 * per HYPER-V spec
 	 */
-	if (kvm_x86_ops->get_cpl(vcpu) != 0 || !is_protmode(vcpu)) {
+	if (kvm_x86_get_cpl(vcpu) != 0 || !is_protmode(vcpu)) {
 		kvm_queue_exception(vcpu, UD_VECTOR);
 		return 1;
 	}
@@ -1797,7 +1797,7 @@ int kvm_vcpu_ioctl_get_hv_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid2 *cpuid,
 	int i, nent = ARRAY_SIZE(cpuid_entries);
 
 	if (kvm_x86_ops->nested_get_evmcs_version)
-		evmcs_ver = kvm_x86_ops->nested_get_evmcs_version(vcpu);
+		evmcs_ver = kvm_x86_nested_get_evmcs_version(vcpu);
 
 	/* Skip NESTED_FEATURES if eVMCS is not supported */
 	if (!evmcs_ver)
diff --git a/arch/x86/kvm/kvm_cache_regs.h b/arch/x86/kvm/kvm_cache_regs.h
index 1cc6c47dc77e..5904f4fd1e15 100644
--- a/arch/x86/kvm/kvm_cache_regs.h
+++ b/arch/x86/kvm/kvm_cache_regs.h
@@ -41,7 +41,7 @@ static inline unsigned long kvm_register_read(struct kvm_vcpu *vcpu,
 					      enum kvm_reg reg)
 {
 	if (!test_bit(reg, (unsigned long *)&vcpu->arch.regs_avail))
-		kvm_x86_ops->cache_reg(vcpu, reg);
+		kvm_x86_cache_reg(vcpu, reg);
 
 	return vcpu->arch.regs[reg];
 }
@@ -81,7 +81,7 @@ static inline u64 kvm_pdptr_read(struct kvm_vcpu *vcpu, int index)
 
 	if (!test_bit(VCPU_EXREG_PDPTR,
 		      (unsigned long *)&vcpu->arch.regs_avail))
-		kvm_x86_ops->cache_reg(vcpu, (enum kvm_reg)VCPU_EXREG_PDPTR);
+		kvm_x86_cache_reg(vcpu, (enum kvm_reg)VCPU_EXREG_PDPTR);
 
 	return vcpu->arch.walk_mmu->pdptrs[index];
 }
@@ -90,7 +90,7 @@ static inline ulong kvm_read_cr0_bits(struct kvm_vcpu *vcpu, ulong mask)
 {
 	ulong tmask = mask & KVM_POSSIBLE_CR0_GUEST_BITS;
 	if (tmask & vcpu->arch.cr0_guest_owned_bits)
-		kvm_x86_ops->decache_cr0_guest_bits(vcpu);
+		kvm_x86_decache_cr0_guest_bits(vcpu);
 	return vcpu->arch.cr0 & mask;
 }
 
@@ -103,14 +103,14 @@ static inline ulong kvm_read_cr4_bits(struct kvm_vcpu *vcpu, ulong mask)
 {
 	ulong tmask = mask & KVM_POSSIBLE_CR4_GUEST_BITS;
 	if (tmask & vcpu->arch.cr4_guest_owned_bits)
-		kvm_x86_ops->decache_cr4_guest_bits(vcpu);
+		kvm_x86_decache_cr4_guest_bits(vcpu);
 	return vcpu->arch.cr4 & mask;
 }
 
 static inline ulong kvm_read_cr3(struct kvm_vcpu *vcpu)
 {
 	if (!test_bit(VCPU_EXREG_CR3, (ulong *)&vcpu->arch.regs_avail))
-		kvm_x86_ops->decache_cr3(vcpu);
+		kvm_x86_decache_cr3(vcpu);
 	return vcpu->arch.cr3;
 }
 
diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index b29d00b661ff..22c1079fadaa 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -451,7 +451,7 @@ static inline void apic_clear_irr(int vec, struct kvm_lapic *apic)
 	if (unlikely(vcpu->arch.apicv_active)) {
 		/* need to update RVI */
 		kvm_lapic_clear_vector(vec, apic->regs + APIC_IRR);
-		kvm_x86_ops->hwapic_irr_update(vcpu,
+		kvm_x86_hwapic_irr_update(vcpu,
 				apic_find_highest_irr(apic));
 	} else {
 		apic->irr_pending = false;
@@ -476,7 +476,7 @@ static inline void apic_set_isr(int vec, struct kvm_lapic *apic)
 	 * just set SVI.
 	 */
 	if (unlikely(vcpu->arch.apicv_active))
-		kvm_x86_ops->hwapic_isr_update(vcpu, vec);
+		kvm_x86_hwapic_isr_update(vcpu, vec);
 	else {
 		++apic->isr_count;
 		BUG_ON(apic->isr_count > MAX_APIC_VECTOR);
@@ -524,7 +524,7 @@ static inline void apic_clear_isr(int vec, struct kvm_lapic *apic)
 	 * and must be left alone.
 	 */
 	if (unlikely(vcpu->arch.apicv_active))
-		kvm_x86_ops->hwapic_isr_update(vcpu,
+		kvm_x86_hwapic_isr_update(vcpu,
 					       apic_find_highest_isr(apic));
 	else {
 		--apic->isr_count;
@@ -667,7 +667,7 @@ static int apic_has_interrupt_for_ppr(struct kvm_lapic *apic, u32 ppr)
 {
 	int highest_irr;
 	if (apic->vcpu->arch.apicv_active)
-		highest_irr = kvm_x86_ops->sync_pir_to_irr(apic->vcpu);
+		highest_irr = kvm_x86_sync_pir_to_irr(apic->vcpu);
 	else
 		highest_irr = apic_find_highest_irr(apic);
 	if (highest_irr == -1 || (highest_irr & 0xF0) <= ppr)
@@ -1057,7 +1057,7 @@ static int __apic_accept_irq(struct kvm_lapic *apic, int delivery_mode,
 		}
 
 		if (vcpu->arch.apicv_active)
-			kvm_x86_ops->deliver_posted_interrupt(vcpu, vector);
+			kvm_x86_deliver_posted_interrupt(vcpu, vector);
 		else {
 			kvm_lapic_set_irr(vector, apic);
 
@@ -1701,7 +1701,7 @@ static void cancel_hv_timer(struct kvm_lapic *apic)
 {
 	WARN_ON(preemptible());
 	WARN_ON(!apic->lapic_timer.hv_timer_in_use);
-	kvm_x86_ops->cancel_hv_timer(apic->vcpu);
+	kvm_x86_cancel_hv_timer(apic->vcpu);
 	apic->lapic_timer.hv_timer_in_use = false;
 }
 
@@ -1718,7 +1718,7 @@ static bool start_hv_timer(struct kvm_lapic *apic)
 	if (!ktimer->tscdeadline)
 		return false;
 
-	if (kvm_x86_ops->set_hv_timer(vcpu, ktimer->tscdeadline, &expired))
+	if (kvm_x86_set_hv_timer(vcpu, ktimer->tscdeadline, &expired))
 		return false;
 
 	ktimer->hv_timer_in_use = true;
@@ -2138,7 +2138,7 @@ void kvm_lapic_set_base(struct kvm_vcpu *vcpu, u64 value)
 		kvm_apic_set_x2apic_id(apic, vcpu->vcpu_id);
 
 	if ((old_value ^ value) & (MSR_IA32_APICBASE_ENABLE | X2APIC_ENABLE))
-		kvm_x86_ops->set_virtual_apic_mode(vcpu);
+		kvm_x86_set_virtual_apic_mode(vcpu);
 
 	apic->base_address = apic->vcpu->arch.apic_base &
 			     MSR_IA32_APICBASE_BASE;
@@ -2201,9 +2201,9 @@ void kvm_lapic_reset(struct kvm_vcpu *vcpu, bool init_event)
 	vcpu->arch.pv_eoi.msr_val = 0;
 	apic_update_ppr(apic);
 	if (vcpu->arch.apicv_active) {
-		kvm_x86_ops->apicv_post_state_restore(vcpu);
-		kvm_x86_ops->hwapic_irr_update(vcpu, -1);
-		kvm_x86_ops->hwapic_isr_update(vcpu, -1);
+		kvm_x86_apicv_post_state_restore(vcpu);
+		kvm_x86_hwapic_irr_update(vcpu, -1);
+		kvm_x86_hwapic_isr_update(vcpu, -1);
 	}
 
 	vcpu->arch.apic_arb_prio = 0;
@@ -2454,10 +2454,10 @@ int kvm_apic_set_state(struct kvm_vcpu *vcpu, struct kvm_lapic_state *s)
 				1 : count_vectors(apic->regs + APIC_ISR);
 	apic->highest_isr_cache = -1;
 	if (vcpu->arch.apicv_active) {
-		kvm_x86_ops->apicv_post_state_restore(vcpu);
-		kvm_x86_ops->hwapic_irr_update(vcpu,
+		kvm_x86_apicv_post_state_restore(vcpu);
+		kvm_x86_hwapic_irr_update(vcpu,
 				apic_find_highest_irr(apic));
-		kvm_x86_ops->hwapic_isr_update(vcpu,
+		kvm_x86_hwapic_isr_update(vcpu,
 				apic_find_highest_isr(apic));
 	}
 	kvm_make_request(KVM_REQ_EVENT, vcpu);
@@ -2709,7 +2709,7 @@ void kvm_apic_accept_events(struct kvm_vcpu *vcpu)
 	 * KVM_MP_STATE_INIT_RECEIVED state), just eat SIPIs
 	 * and leave the INIT pending.
 	 */
-	if (is_smm(vcpu) || kvm_x86_ops->apic_init_signal_blocked(vcpu)) {
+	if (is_smm(vcpu) || kvm_x86_apic_init_signal_blocked(vcpu)) {
 		WARN_ON_ONCE(vcpu->arch.mp_state == KVM_MP_STATE_INIT_RECEIVED);
 		if (test_bit(KVM_APIC_SIPI, &apic->pending_events))
 			clear_bit(KVM_APIC_SIPI, &apic->pending_events);
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 24c23c66b226..29d930470db9 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -292,7 +292,7 @@ static void kvm_flush_remote_tlbs_with_range(struct kvm *kvm,
 	int ret = -ENOTSUPP;
 
 	if (range && kvm_x86_ops->tlb_remote_flush_with_range)
-		ret = kvm_x86_ops->tlb_remote_flush_with_range(kvm, range);
+		ret = kvm_x86_tlb_remote_flush_with_range(kvm, range);
 
 	if (ret)
 		kvm_flush_remote_tlbs(kvm);
@@ -1289,7 +1289,7 @@ static int mapping_level(struct kvm_vcpu *vcpu, gfn_t large_gfn,
 	if (host_level == PT_PAGE_TABLE_LEVEL)
 		return host_level;
 
-	max_level = min(kvm_x86_ops->get_lpage_level(), host_level);
+	max_level = min(kvm_x86_get_lpage_level(), host_level);
 
 	for (level = PT_DIRECTORY_LEVEL; level <= max_level; ++level)
 		if (__mmu_gfn_lpage_is_disallowed(large_gfn, level, slot))
@@ -1748,7 +1748,7 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 				gfn_t gfn_offset, unsigned long mask)
 {
 	if (kvm_x86_ops->enable_log_dirty_pt_masked)
-		kvm_x86_ops->enable_log_dirty_pt_masked(kvm, slot, gfn_offset,
+		kvm_x86_enable_log_dirty_pt_masked(kvm, slot, gfn_offset,
 				mask);
 	else
 		kvm_mmu_write_protect_pt_masked(kvm, slot, gfn_offset, mask);
@@ -1764,7 +1764,7 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 int kvm_arch_write_log_dirty(struct kvm_vcpu *vcpu)
 {
 	if (kvm_x86_ops->write_log_dirty)
-		return kvm_x86_ops->write_log_dirty(vcpu);
+		return kvm_x86_write_log_dirty(vcpu);
 
 	return 0;
 }
@@ -3024,7 +3024,7 @@ static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 	if (level > PT_PAGE_TABLE_LEVEL)
 		spte |= PT_PAGE_SIZE_MASK;
 	if (tdp_enabled)
-		spte |= kvm_x86_ops->get_mt_mask(vcpu, gfn,
+		spte |= kvm_x86_get_mt_mask(vcpu, gfn,
 			kvm_is_mmio_pfn(pfn));
 
 	if (host_writable)
@@ -4295,7 +4295,7 @@ static bool fast_cr3_switch(struct kvm_vcpu *vcpu, gpa_t new_cr3,
 			kvm_make_request(KVM_REQ_LOAD_CR3, vcpu);
 			if (!skip_tlb_flush) {
 				kvm_make_request(KVM_REQ_MMU_SYNC, vcpu);
-				kvm_x86_ops->tlb_flush(vcpu, true);
+				kvm_x86_tlb_flush(vcpu, true);
 			}
 
 			/*
@@ -4890,7 +4890,7 @@ kvm_calc_tdp_mmu_root_page_role(struct kvm_vcpu *vcpu, bool base_only)
 	union kvm_mmu_role role = kvm_calc_mmu_role_common(vcpu, base_only);
 
 	role.base.ad_disabled = (shadow_accessed_mask == 0);
-	role.base.level = kvm_x86_ops->get_tdp_level(vcpu);
+	role.base.level = kvm_x86_get_tdp_level(vcpu);
 	role.base.direct = true;
 	role.base.gpte_is_8_bytes = true;
 
@@ -4912,7 +4912,7 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
 	context->sync_page = nonpaging_sync_page;
 	context->invlpg = nonpaging_invlpg;
 	context->update_pte = nonpaging_update_pte;
-	context->shadow_root_level = kvm_x86_ops->get_tdp_level(vcpu);
+	context->shadow_root_level = kvm_x86_get_tdp_level(vcpu);
 	context->direct_map = true;
 	context->set_cr3 = kvm_x86_ops->set_tdp_cr3;
 	context->get_cr3 = get_cr3;
@@ -5169,7 +5169,7 @@ int kvm_mmu_load(struct kvm_vcpu *vcpu)
 	if (r)
 		goto out;
 	kvm_mmu_load_cr3(vcpu);
-	kvm_x86_ops->tlb_flush(vcpu, true);
+	kvm_x86_tlb_flush(vcpu, true);
 out:
 	return r;
 }
@@ -5482,7 +5482,7 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u64 error_code,
 	 * guest, with the exception of AMD Erratum 1096 which is unrecoverable.
 	 */
 	if (unlikely(insn && !insn_len)) {
-		if (!kvm_x86_ops->need_emulation_on_page_fault(vcpu))
+		if (!kvm_x86_need_emulation_on_page_fault(vcpu))
 			return 1;
 	}
 
@@ -5517,7 +5517,7 @@ void kvm_mmu_invlpg(struct kvm_vcpu *vcpu, gva_t gva)
 		if (VALID_PAGE(mmu->prev_roots[i].hpa))
 			mmu->invlpg(vcpu, gva, mmu->prev_roots[i].hpa);
 
-	kvm_x86_ops->tlb_flush_gva(vcpu, gva);
+	kvm_x86_tlb_flush_gva(vcpu, gva);
 	++vcpu->stat.invlpg;
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_invlpg);
@@ -5542,7 +5542,7 @@ void kvm_mmu_invpcid_gva(struct kvm_vcpu *vcpu, gva_t gva, unsigned long pcid)
 	}
 
 	if (tlb_flush)
-		kvm_x86_ops->tlb_flush_gva(vcpu, gva);
+		kvm_x86_tlb_flush_gva(vcpu, gva);
 
 	++vcpu->stat.invlpg;
 
@@ -5659,7 +5659,7 @@ static int alloc_mmu_pages(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu)
 	 * SVM's 32-bit NPT support, TDP paging doesn't use PAE paging and can
 	 * skip allocating the PDP table.
 	 */
-	if (tdp_enabled && kvm_x86_ops->get_tdp_level(vcpu) > PT32E_ROOT_LEVEL)
+	if (tdp_enabled && kvm_x86_get_tdp_level(vcpu) > PT32E_ROOT_LEVEL)
 		return 0;
 
 	page = alloc_page(GFP_KERNEL_ACCOUNT | __GFP_DMA32);
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 11f8ec89433b..8ac288bc42eb 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -157,8 +157,8 @@ static inline u8 permission_fault(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 				  unsigned pte_access, unsigned pte_pkey,
 				  unsigned pfec)
 {
-	int cpl = kvm_x86_ops->get_cpl(vcpu);
-	unsigned long rflags = kvm_x86_ops->get_rflags(vcpu);
+	int cpl = kvm_x86_get_cpl(vcpu);
+	unsigned long rflags = kvm_x86_get_rflags(vcpu);
 
 	/*
 	 * If CPL < 3, SMAP prevention are disabled if EFLAGS.AC = 1.
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 46875bbd0419..144e5d0c25ff 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -183,7 +183,7 @@ void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
 			  ARCH_PERFMON_EVENTSEL_CMASK |
 			  HSW_IN_TX |
 			  HSW_IN_TX_CHECKPOINTED))) {
-		config = kvm_x86_ops->pmu_ops->find_arch_event(pmc_to_pmu(pmc),
+		config = kvm_x86_pmu_find_arch_event(pmc_to_pmu(pmc),
 						      event_select,
 						      unit_mask);
 		if (config != PERF_COUNT_HW_MAX)
@@ -225,7 +225,7 @@ void reprogram_fixed_counter(struct kvm_pmc *pmc, u8 ctrl, int idx)
 	}
 
 	pmc_reprogram_counter(pmc, PERF_TYPE_HARDWARE,
-			      kvm_x86_ops->pmu_ops->find_fixed_event(idx),
+			      kvm_x86_pmu_find_fixed_event(idx),
 			      !(en_field & 0x2), /* exclude user */
 			      !(en_field & 0x1), /* exclude kernel */
 			      pmi, false, false);
@@ -234,7 +234,7 @@ EXPORT_SYMBOL_GPL(reprogram_fixed_counter);
 
 void reprogram_counter(struct kvm_pmu *pmu, int pmc_idx)
 {
-	struct kvm_pmc *pmc = kvm_x86_ops->pmu_ops->pmc_idx_to_pmc(pmu, pmc_idx);
+	struct kvm_pmc *pmc = kvm_x86_pmu_pmc_idx_to_pmc(pmu, pmc_idx);
 
 	if (!pmc)
 		return;
@@ -259,7 +259,7 @@ void kvm_pmu_handle_event(struct kvm_vcpu *vcpu)
 	bitmask = pmu->reprogram_pmi;
 
 	for_each_set_bit(bit, (unsigned long *)&bitmask, X86_PMC_IDX_MAX) {
-		struct kvm_pmc *pmc = kvm_x86_ops->pmu_ops->pmc_idx_to_pmc(pmu, bit);
+		struct kvm_pmc *pmc = kvm_x86_pmu_pmc_idx_to_pmc(pmu, bit);
 
 		if (unlikely(!pmc || !pmc->perf_event)) {
 			clear_bit(bit, (unsigned long *)&pmu->reprogram_pmi);
@@ -273,7 +273,7 @@ void kvm_pmu_handle_event(struct kvm_vcpu *vcpu)
 /* check if idx is a valid index to access PMU */
 int kvm_pmu_is_valid_msr_idx(struct kvm_vcpu *vcpu, unsigned idx)
 {
-	return kvm_x86_ops->pmu_ops->is_valid_msr_idx(vcpu, idx);
+	return kvm_x86_pmu_is_valid_msr_idx(vcpu, idx);
 }
 
 bool is_vmware_backdoor_pmc(u32 pmc_idx)
@@ -323,7 +323,7 @@ int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned idx, u64 *data)
 	if (is_vmware_backdoor_pmc(idx))
 		return kvm_pmu_rdpmc_vmware(vcpu, idx, data);
 
-	pmc = kvm_x86_ops->pmu_ops->msr_idx_to_pmc(vcpu, idx, &mask);
+	pmc = kvm_x86_pmu_msr_idx_to_pmc(vcpu, idx, &mask);
 	if (!pmc)
 		return 1;
 
@@ -339,17 +339,17 @@ void kvm_pmu_deliver_pmi(struct kvm_vcpu *vcpu)
 
 bool kvm_pmu_is_valid_msr(struct kvm_vcpu *vcpu, u32 msr)
 {
-	return kvm_x86_ops->pmu_ops->is_valid_msr(vcpu, msr);
+	return kvm_x86_pmu_is_valid_msr(vcpu, msr);
 }
 
 int kvm_pmu_get_msr(struct kvm_vcpu *vcpu, u32 msr, u64 *data)
 {
-	return kvm_x86_ops->pmu_ops->get_msr(vcpu, msr, data);
+	return kvm_x86_pmu_get_msr(vcpu, msr, data);
 }
 
 int kvm_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 {
-	return kvm_x86_ops->pmu_ops->set_msr(vcpu, msr_info);
+	return kvm_x86_pmu_set_msr(vcpu, msr_info);
 }
 
 /* refresh PMU settings. This function generally is called when underlying
@@ -358,7 +358,7 @@ int kvm_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
  */
 void kvm_pmu_refresh(struct kvm_vcpu *vcpu)
 {
-	kvm_x86_ops->pmu_ops->refresh(vcpu);
+	kvm_x86_pmu_refresh(vcpu);
 }
 
 void kvm_pmu_reset(struct kvm_vcpu *vcpu)
@@ -366,7 +366,7 @@ void kvm_pmu_reset(struct kvm_vcpu *vcpu)
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 
 	irq_work_sync(&pmu->irq_work);
-	kvm_x86_ops->pmu_ops->reset(vcpu);
+	kvm_x86_pmu_reset(vcpu);
 }
 
 void kvm_pmu_init(struct kvm_vcpu *vcpu)
@@ -374,7 +374,7 @@ void kvm_pmu_init(struct kvm_vcpu *vcpu)
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 
 	memset(pmu, 0, sizeof(*pmu));
-	kvm_x86_ops->pmu_ops->init(vcpu);
+	kvm_x86_pmu_init(vcpu);
 	init_irq_work(&pmu->irq_work, kvm_pmi_trigger_fn);
 	kvm_pmu_refresh(vcpu);
 }
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 58265f761c3b..82f07e3492df 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -19,6 +19,23 @@ struct kvm_event_hw_type_mapping {
 	unsigned event_type;
 };
 
+extern unsigned kvm_x86_pmu_find_arch_event(struct kvm_pmu *pmu,
+					    u8 event_select, u8 unit_mask);
+extern unsigned kvm_x86_pmu_find_fixed_event(int idx);
+extern bool kvm_x86_pmu_pmc_is_enabled(struct kvm_pmc *pmc);
+extern struct kvm_pmc *kvm_x86_pmu_pmc_idx_to_pmc(struct kvm_pmu *pmu,
+						  int pmc_idx);
+extern struct kvm_pmc *kvm_x86_pmu_msr_idx_to_pmc(struct kvm_vcpu *vcpu,
+						  unsigned idx, u64 *mask);
+extern int kvm_x86_pmu_is_valid_msr_idx(struct kvm_vcpu *vcpu, unsigned idx);
+extern bool kvm_x86_pmu_is_valid_msr(struct kvm_vcpu *vcpu, u32 msr);
+extern int kvm_x86_pmu_get_msr(struct kvm_vcpu *vcpu, u32 msr, u64 *data);
+extern int kvm_x86_pmu_set_msr(struct kvm_vcpu *vcpu,
+			       struct msr_data *msr_info);
+extern void kvm_x86_pmu_refresh(struct kvm_vcpu *vcpu);
+extern void kvm_x86_pmu_init(struct kvm_vcpu *vcpu);
+extern void kvm_x86_pmu_reset(struct kvm_vcpu *vcpu);
+
 struct kvm_pmu_ops {
 	unsigned (*find_arch_event)(struct kvm_pmu *pmu, u8 event_select,
 				    u8 unit_mask);
@@ -76,7 +93,7 @@ static inline bool pmc_is_fixed(struct kvm_pmc *pmc)
 
 static inline bool pmc_is_enabled(struct kvm_pmc *pmc)
 {
-	return kvm_x86_ops->pmu_ops->pmc_is_enabled(pmc);
+	return kvm_x86_pmu_pmc_is_enabled(pmc);
 }
 
 /* returns general purpose PMC with the specified MSR. Note that it can be
diff --git a/arch/x86/kvm/pmu_amd.c b/arch/x86/kvm/pmu_amd.c
index c8388389a3b0..7ea588023949 100644
--- a/arch/x86/kvm/pmu_amd.c
+++ b/arch/x86/kvm/pmu_amd.c
@@ -126,9 +126,8 @@ static inline struct kvm_pmc *get_gp_pmc_amd(struct kvm_pmu *pmu, u32 msr,
 	return &pmu->gp_counters[msr_to_index(msr)];
 }
 
-static unsigned amd_find_arch_event(struct kvm_pmu *pmu,
-				    u8 event_select,
-				    u8 unit_mask)
+unsigned kvm_x86_pmu_find_arch_event(struct kvm_pmu *pmu, u8 event_select,
+				     u8 unit_mask)
 {
 	int i;
 
@@ -144,7 +143,7 @@ static unsigned amd_find_arch_event(struct kvm_pmu *pmu,
 }
 
 /* return PERF_COUNT_HW_MAX as AMD doesn't have fixed events */
-static unsigned amd_find_fixed_event(int idx)
+unsigned kvm_x86_pmu_find_fixed_event(int idx)
 {
 	return PERF_COUNT_HW_MAX;
 }
@@ -152,12 +151,12 @@ static unsigned amd_find_fixed_event(int idx)
 /* check if a PMC is enabled by comparing it against global_ctrl bits. Because
  * AMD CPU doesn't have global_ctrl MSR, all PMCs are enabled (return TRUE).
  */
-static bool amd_pmc_is_enabled(struct kvm_pmc *pmc)
+bool kvm_x86_pmu_pmc_is_enabled(struct kvm_pmc *pmc)
 {
 	return true;
 }
 
-static struct kvm_pmc *amd_pmc_idx_to_pmc(struct kvm_pmu *pmu, int pmc_idx)
+struct kvm_pmc *kvm_x86_pmu_pmc_idx_to_pmc(struct kvm_pmu *pmu, int pmc_idx)
 {
 	unsigned int base = get_msr_base(pmu, PMU_TYPE_COUNTER);
 	struct kvm_vcpu *vcpu = pmu_to_vcpu(pmu);
@@ -174,7 +173,7 @@ static struct kvm_pmc *amd_pmc_idx_to_pmc(struct kvm_pmu *pmu, int pmc_idx)
 }
 
 /* returns 0 if idx's corresponding MSR exists; otherwise returns 1. */
-static int amd_is_valid_msr_idx(struct kvm_vcpu *vcpu, unsigned idx)
+int kvm_x86_pmu_is_valid_msr_idx(struct kvm_vcpu *vcpu, unsigned idx)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 
@@ -184,7 +183,8 @@ static int amd_is_valid_msr_idx(struct kvm_vcpu *vcpu, unsigned idx)
 }
 
 /* idx is the ECX register of RDPMC instruction */
-static struct kvm_pmc *amd_msr_idx_to_pmc(struct kvm_vcpu *vcpu, unsigned idx, u64 *mask)
+struct kvm_pmc *kvm_x86_pmu_msr_idx_to_pmc(struct kvm_vcpu *vcpu, unsigned idx,
+					   u64 *mask)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 	struct kvm_pmc *counters;
@@ -197,7 +197,7 @@ static struct kvm_pmc *amd_msr_idx_to_pmc(struct kvm_vcpu *vcpu, unsigned idx, u
 	return &counters[idx];
 }
 
-static bool amd_is_valid_msr(struct kvm_vcpu *vcpu, u32 msr)
+bool kvm_x86_pmu_is_valid_msr(struct kvm_vcpu *vcpu, u32 msr)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 	int ret = false;
@@ -208,7 +208,7 @@ static bool amd_is_valid_msr(struct kvm_vcpu *vcpu, u32 msr)
 	return ret;
 }
 
-static int amd_pmu_get_msr(struct kvm_vcpu *vcpu, u32 msr, u64 *data)
+int kvm_x86_pmu_get_msr(struct kvm_vcpu *vcpu, u32 msr, u64 *data)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 	struct kvm_pmc *pmc;
@@ -229,7 +229,7 @@ static int amd_pmu_get_msr(struct kvm_vcpu *vcpu, u32 msr, u64 *data)
 	return 1;
 }
 
-static int amd_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+int kvm_x86_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 	struct kvm_pmc *pmc;
@@ -256,7 +256,7 @@ static int amd_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	return 1;
 }
 
-static void amd_pmu_refresh(struct kvm_vcpu *vcpu)
+void kvm_x86_pmu_refresh(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 
@@ -274,7 +274,7 @@ static void amd_pmu_refresh(struct kvm_vcpu *vcpu)
 	pmu->global_status = 0;
 }
 
-static void amd_pmu_init(struct kvm_vcpu *vcpu)
+void kvm_x86_pmu_init(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 	int i;
@@ -288,7 +288,7 @@ static void amd_pmu_init(struct kvm_vcpu *vcpu)
 	}
 }
 
-static void amd_pmu_reset(struct kvm_vcpu *vcpu)
+void kvm_x86_pmu_reset(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 	int i;
@@ -302,16 +302,16 @@ static void amd_pmu_reset(struct kvm_vcpu *vcpu)
 }
 
 struct kvm_pmu_ops amd_pmu_ops = {
-	.find_arch_event = amd_find_arch_event,
-	.find_fixed_event = amd_find_fixed_event,
-	.pmc_is_enabled = amd_pmc_is_enabled,
-	.pmc_idx_to_pmc = amd_pmc_idx_to_pmc,
-	.msr_idx_to_pmc = amd_msr_idx_to_pmc,
-	.is_valid_msr_idx = amd_is_valid_msr_idx,
-	.is_valid_msr = amd_is_valid_msr,
-	.get_msr = amd_pmu_get_msr,
-	.set_msr = amd_pmu_set_msr,
-	.refresh = amd_pmu_refresh,
-	.init = amd_pmu_init,
-	.reset = amd_pmu_reset,
+	.find_arch_event = kvm_x86_pmu_find_arch_event,
+	.find_fixed_event = kvm_x86_pmu_find_fixed_event,
+	.pmc_is_enabled = kvm_x86_pmu_pmc_is_enabled,
+	.pmc_idx_to_pmc = kvm_x86_pmu_pmc_idx_to_pmc,
+	.msr_idx_to_pmc = kvm_x86_pmu_msr_idx_to_pmc,
+	.is_valid_msr_idx = kvm_x86_pmu_is_valid_msr_idx,
+	.is_valid_msr = kvm_x86_pmu_is_valid_msr,
+	.get_msr = kvm_x86_pmu_get_msr,
+	.set_msr = kvm_x86_pmu_set_msr,
+	.refresh = kvm_x86_pmu_refresh,
+	.init = kvm_x86_pmu_init,
+	.reset = kvm_x86_pmu_reset,
 };
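
Every pmu.c/pmu.h/pmu_amd.c hunk above is the same transformation: the
double indirection kvm_x86_ops->pmu_ops->hook() becomes a call to an
ordinary external function, so with CONFIG_RETPOLINE the compiler can
emit a direct call instead of a retpoline thunk.  The standalone sketch
below illustrates that shape with invented, simplified names (the
_sketch suffix and toy types are not part of the patch); it only links
because a given build contains a single vendor implementation of each
such symbol, which is what the rest of the series arranges.

	/* pmu_indirection_sketch.c -- illustration only, not kernel code */
	#include <stdio.h>

	struct pmu;				/* stand-in for struct kvm_pmu */

	struct pmu_ops {			/* stand-in for struct kvm_pmu_ops */
		int (*get_msr)(struct pmu *pmu, unsigned msr, unsigned long *data);
	};

	static int amd_like_get_msr(struct pmu *pmu, unsigned msr, unsigned long *data)
	{
		*data = 42;			/* pretend we read a counter */
		return 0;
	}

	static const struct pmu_ops amd_like_ops = {
		.get_msr = amd_like_get_msr,
	};

	static const struct pmu_ops *pmu_ops = &amd_like_ops;

	/* before: two pointer loads and an indirect call */
	static int get_msr_indirect(struct pmu *pmu, unsigned msr, unsigned long *data)
	{
		return pmu_ops->get_msr(pmu, msr, data);
	}

	/* after: one extern function per hook, resolved at link time */
	int kvm_x86_pmu_get_msr_sketch(struct pmu *pmu, unsigned msr, unsigned long *data)
	{
		return amd_like_get_msr(pmu, msr, data);
	}

	int main(void)
	{
		unsigned long data;

		get_msr_indirect(NULL, 0, &data);
		kvm_x86_pmu_get_msr_sketch(NULL, 0, &data);
		printf("data=%lu\n", data);
		return 0;
	}
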
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index c5673bda4b66..1705608246fb 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -383,8 +383,6 @@ module_param(dump_invalid_vmcb, bool, 0644);
 
 static u8 rsm_ins_bytes[] = "\x0f\xaa";
 
-static void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0);
-static void svm_flush_tlb(struct kvm_vcpu *vcpu, bool invalidate_gpa);
 static void svm_complete_interrupts(struct vcpu_svm *svm);
 
 static int nested_svm_exit_handled(struct vcpu_svm *svm);
@@ -722,7 +720,7 @@ static inline void invlpga(unsigned long addr, u32 asid)
 	asm volatile (__ex("invlpga %1, %0") : : "c"(asid), "a"(addr));
 }
 
-static int get_npt_level(struct kvm_vcpu *vcpu)
+int kvm_x86_get_tdp_level(struct kvm_vcpu *vcpu)
 {
 #ifdef CONFIG_X86_64
 	return PT64_ROOT_4LEVEL;
@@ -731,7 +729,7 @@ static int get_npt_level(struct kvm_vcpu *vcpu)
 #endif
 }
 
-static void svm_set_efer(struct kvm_vcpu *vcpu, u64 efer)
+void kvm_x86_set_efer(struct kvm_vcpu *vcpu, u64 efer)
 {
 	vcpu->arch.efer = efer;
 
@@ -753,7 +751,7 @@ static int is_external_interrupt(u32 info)
 	return info == (SVM_EVTINJ_VALID | SVM_EVTINJ_TYPE_INTR);
 }
 
-static u32 svm_get_interrupt_shadow(struct kvm_vcpu *vcpu)
+u32 kvm_x86_get_interrupt_shadow(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 	u32 ret = 0;
@@ -763,7 +761,7 @@ static u32 svm_get_interrupt_shadow(struct kvm_vcpu *vcpu)
 	return ret;
 }
 
-static void svm_set_interrupt_shadow(struct kvm_vcpu *vcpu, int mask)
+void kvm_x86_set_interrupt_shadow(struct kvm_vcpu *vcpu, int mask)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
@@ -774,7 +772,7 @@ static void svm_set_interrupt_shadow(struct kvm_vcpu *vcpu, int mask)
 
 }
 
-static int skip_emulated_instruction(struct kvm_vcpu *vcpu)
+int kvm_x86_skip_emulated_instruction(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
@@ -792,12 +790,12 @@ static int skip_emulated_instruction(struct kvm_vcpu *vcpu)
 			       __func__, kvm_rip_read(vcpu), svm->next_rip);
 		kvm_rip_write(vcpu, svm->next_rip);
 	}
-	svm_set_interrupt_shadow(vcpu, 0);
+	kvm_x86_set_interrupt_shadow(vcpu, 0);
 
 	return 1;
 }
 
-static void svm_queue_exception(struct kvm_vcpu *vcpu)
+void kvm_x86_queue_exception(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 	unsigned nr = vcpu->arch.exception.nr;
@@ -825,7 +823,7 @@ static void svm_queue_exception(struct kvm_vcpu *vcpu)
 		 * raises a fault that is not intercepted. Still better than
 		 * failing in all cases.
 		 */
-		(void)skip_emulated_instruction(&svm->vcpu);
+		(void)kvm_x86_skip_emulated_instruction(&svm->vcpu);
 		rip = kvm_rip_read(&svm->vcpu);
 		svm->int3_rip = rip + svm->vmcb->save.cs.base;
 		svm->int3_injected = rip - old_rip;
@@ -883,7 +881,7 @@ static void svm_init_osvw(struct kvm_vcpu *vcpu)
 		vcpu->arch.osvw.status |= 1;
 }
 
-static int has_svm(void)
+int kvm_x86_cpu_has_kvm_support(void)
 {
 	const char *msg;
 
@@ -895,7 +893,7 @@ static int has_svm(void)
 	return 1;
 }
 
-static void svm_hardware_disable(void)
+void kvm_x86_hardware_disable(void)
 {
 	/* Make sure we clean up behind us */
 	if (static_cpu_has(X86_FEATURE_TSCRATEMSR))
@@ -906,7 +904,7 @@ static void svm_hardware_disable(void)
 	amd_pmu_disable_virt();
 }
 
-static int svm_hardware_enable(void)
+int kvm_x86_hardware_enable(void)
 {
 
 	struct svm_cpu_data *sd;
@@ -918,7 +916,7 @@ static int svm_hardware_enable(void)
 	if (efer & EFER_SVME)
 		return -EBUSY;
 
-	if (!has_svm()) {
+	if (!kvm_x86_cpu_has_kvm_support()) {
 		pr_err("%s: err EOPNOTSUPP on %d\n", __func__, me);
 		return -EINVAL;
 	}
@@ -1298,7 +1296,7 @@ static void shrink_ple_window(struct kvm_vcpu *vcpu)
 	}
 }
 
-static __init int svm_hardware_setup(void)
+__init int kvm_x86_hardware_setup(void)
 {
 	int cpu;
 	struct page *iopm_pages;
@@ -1414,7 +1412,7 @@ static __init int svm_hardware_setup(void)
 	return r;
 }
 
-static __exit void svm_hardware_unsetup(void)
+__exit void kvm_x86_hardware_unsetup(void)
 {
 	int cpu;
 
@@ -1445,7 +1443,7 @@ static void init_sys_seg(struct vmcb_seg *seg, uint32_t type)
 	seg->base = 0;
 }
 
-static u64 svm_read_l1_tsc_offset(struct kvm_vcpu *vcpu)
+u64 kvm_x86_read_l1_tsc_offset(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
@@ -1455,7 +1453,7 @@ static u64 svm_read_l1_tsc_offset(struct kvm_vcpu *vcpu)
 	return vcpu->arch.tsc_offset;
 }
 
-static u64 svm_write_l1_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
+u64 kvm_x86_write_l1_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 	u64 g_tsc_offset = 0;
@@ -1580,17 +1578,17 @@ static void init_vmcb(struct vcpu_svm *svm)
 	init_sys_seg(&save->ldtr, SEG_TYPE_LDT);
 	init_sys_seg(&save->tr, SEG_TYPE_BUSY_TSS16);
 
-	svm_set_efer(&svm->vcpu, 0);
+	kvm_x86_set_efer(&svm->vcpu, 0);
 	save->dr6 = 0xffff0ff0;
 	kvm_set_rflags(&svm->vcpu, 2);
 	save->rip = 0x0000fff0;
 	svm->vcpu.arch.regs[VCPU_REGS_RIP] = save->rip;
 
 	/*
-	 * svm_set_cr0() sets PG and WP and clears NW and CD on save->cr0.
+	 * kvm_x86_set_cr0() sets PG and WP and clears NW and CD on save->cr0.
 	 * It also updates the guest-visible cr0 value.
 	 */
-	svm_set_cr0(&svm->vcpu, X86_CR0_NW | X86_CR0_CD | X86_CR0_ET);
+	kvm_x86_set_cr0(&svm->vcpu, X86_CR0_NW | X86_CR0_CD | X86_CR0_ET);
 	kvm_mmu_reset_context(&svm->vcpu);
 
 	save->cr4 = X86_CR4_PAE;
@@ -1878,7 +1876,7 @@ static void __unregister_enc_region_locked(struct kvm *kvm,
 	kfree(region);
 }
 
-static struct kvm *svm_vm_alloc(void)
+struct kvm *kvm_x86_vm_alloc(void)
 {
 	struct kvm_svm *kvm_svm = __vmalloc(sizeof(struct kvm_svm),
 					    GFP_KERNEL_ACCOUNT | __GFP_ZERO,
@@ -1886,7 +1884,7 @@ static struct kvm *svm_vm_alloc(void)
 	return &kvm_svm->kvm;
 }
 
-static void svm_vm_free(struct kvm *kvm)
+void kvm_x86_vm_free(struct kvm *kvm)
 {
 	vfree(to_kvm_svm(kvm));
 }
@@ -1937,13 +1935,13 @@ static void avic_vm_destroy(struct kvm *kvm)
 	spin_unlock_irqrestore(&svm_vm_data_hash_lock, flags);
 }
 
-static void svm_vm_destroy(struct kvm *kvm)
+void kvm_x86_vm_destroy(struct kvm *kvm)
 {
 	avic_vm_destroy(kvm);
 	sev_vm_destroy(kvm);
 }
 
-static int avic_vm_init(struct kvm *kvm)
+int kvm_x86_vm_init(struct kvm *kvm)
 {
 	unsigned long flags;
 	int err = -ENOMEM;
@@ -2089,7 +2087,7 @@ static void avic_set_running(struct kvm_vcpu *vcpu, bool is_run)
 		avic_vcpu_put(vcpu);
 }
 
-static void svm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
+void kvm_x86_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 	u32 dummy;
@@ -2132,7 +2130,7 @@ static int avic_init_vcpu(struct vcpu_svm *svm)
 	return ret;
 }
 
-static struct kvm_vcpu *svm_create_vcpu(struct kvm *kvm, unsigned int id)
+struct kvm_vcpu *kvm_x86_vcpu_create(struct kvm *kvm, unsigned int id)
 {
 	struct vcpu_svm *svm;
 	struct page *page;
@@ -2242,13 +2240,13 @@ static void svm_clear_current_vmcb(struct vmcb *vmcb)
 		cmpxchg(&per_cpu(svm_data, i)->current_vmcb, vmcb, NULL);
 }
 
-static void svm_free_vcpu(struct kvm_vcpu *vcpu)
+void kvm_x86_vcpu_free(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
 	/*
 	 * The vmcb page can be recycled, causing a false negative in
-	 * svm_vcpu_load(). So, ensure that no logical CPU has this
+	 * kvm_x86_vcpu_load(). So, ensure that no logical CPU has this
 	 * vmcb page recorded as its current vmcb.
 	 */
 	svm_clear_current_vmcb(svm->vmcb);
@@ -2263,7 +2261,7 @@ static void svm_free_vcpu(struct kvm_vcpu *vcpu)
 	kmem_cache_free(kvm_vcpu_cache, svm);
 }
 
-static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+void kvm_x86_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 	struct svm_cpu_data *sd = per_cpu(svm_data, cpu);
@@ -2302,7 +2300,7 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	avic_vcpu_load(vcpu, cpu);
 }
 
-static void svm_vcpu_put(struct kvm_vcpu *vcpu)
+void kvm_x86_vcpu_put(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 	int i;
@@ -2324,17 +2322,17 @@ static void svm_vcpu_put(struct kvm_vcpu *vcpu)
 		wrmsrl(host_save_user_msrs[i], svm->host_user_msrs[i]);
 }
 
-static void svm_vcpu_blocking(struct kvm_vcpu *vcpu)
+void kvm_x86_vcpu_blocking(struct kvm_vcpu *vcpu)
 {
 	avic_set_running(vcpu, false);
 }
 
-static void svm_vcpu_unblocking(struct kvm_vcpu *vcpu)
+void kvm_x86_vcpu_unblocking(struct kvm_vcpu *vcpu)
 {
 	avic_set_running(vcpu, true);
 }
 
-static unsigned long svm_get_rflags(struct kvm_vcpu *vcpu)
+unsigned long kvm_x86_get_rflags(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 	unsigned long rflags = svm->vmcb->save.rflags;
@@ -2349,7 +2347,7 @@ static unsigned long svm_get_rflags(struct kvm_vcpu *vcpu)
 	return rflags;
 }
 
-static void svm_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags)
+void kvm_x86_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags)
 {
 	if (to_svm(vcpu)->nmi_singlestep)
 		rflags |= (X86_EFLAGS_TF | X86_EFLAGS_RF);
@@ -2362,7 +2360,7 @@ static void svm_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags)
 	to_svm(vcpu)->vmcb->save.rflags = rflags;
 }
 
-static void svm_cache_reg(struct kvm_vcpu *vcpu, enum kvm_reg reg)
+void kvm_x86_cache_reg(struct kvm_vcpu *vcpu, enum kvm_reg reg)
 {
 	switch (reg) {
 	case VCPU_EXREG_PDPTR:
@@ -2402,15 +2400,15 @@ static struct vmcb_seg *svm_seg(struct kvm_vcpu *vcpu, int seg)
 	return NULL;
 }
 
-static u64 svm_get_segment_base(struct kvm_vcpu *vcpu, int seg)
+u64 kvm_x86_get_segment_base(struct kvm_vcpu *vcpu, int seg)
 {
 	struct vmcb_seg *s = svm_seg(vcpu, seg);
 
 	return s->base;
 }
 
-static void svm_get_segment(struct kvm_vcpu *vcpu,
-			    struct kvm_segment *var, int seg)
+void kvm_x86_get_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var,
+			 int seg)
 {
 	struct vmcb_seg *s = svm_seg(vcpu, seg);
 
@@ -2472,20 +2470,20 @@ static void svm_get_segment(struct kvm_vcpu *vcpu,
 		 */
 		if (var->unusable)
 			var->db = 0;
-		/* This is symmetric with svm_set_segment() */
+		/* This is symmetric with kvm_x86_set_segment() */
 		var->dpl = to_svm(vcpu)->vmcb->save.cpl;
 		break;
 	}
 }
 
-static int svm_get_cpl(struct kvm_vcpu *vcpu)
+int kvm_x86_get_cpl(struct kvm_vcpu *vcpu)
 {
 	struct vmcb_save_area *save = &to_svm(vcpu)->vmcb->save;
 
 	return save->cpl;
 }
 
-static void svm_get_idt(struct kvm_vcpu *vcpu, struct desc_ptr *dt)
+void kvm_x86_get_idt(struct kvm_vcpu *vcpu, struct desc_ptr *dt)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
@@ -2493,7 +2491,7 @@ static void svm_get_idt(struct kvm_vcpu *vcpu, struct desc_ptr *dt)
 	dt->address = svm->vmcb->save.idtr.base;
 }
 
-static void svm_set_idt(struct kvm_vcpu *vcpu, struct desc_ptr *dt)
+void kvm_x86_set_idt(struct kvm_vcpu *vcpu, struct desc_ptr *dt)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
@@ -2502,7 +2500,7 @@ static void svm_set_idt(struct kvm_vcpu *vcpu, struct desc_ptr *dt)
 	mark_dirty(svm->vmcb, VMCB_DT);
 }
 
-static void svm_get_gdt(struct kvm_vcpu *vcpu, struct desc_ptr *dt)
+void kvm_x86_get_gdt(struct kvm_vcpu *vcpu, struct desc_ptr *dt)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
@@ -2510,7 +2508,7 @@ static void svm_get_gdt(struct kvm_vcpu *vcpu, struct desc_ptr *dt)
 	dt->address = svm->vmcb->save.gdtr.base;
 }
 
-static void svm_set_gdt(struct kvm_vcpu *vcpu, struct desc_ptr *dt)
+void kvm_x86_set_gdt(struct kvm_vcpu *vcpu, struct desc_ptr *dt)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
@@ -2519,15 +2517,15 @@ static void svm_set_gdt(struct kvm_vcpu *vcpu, struct desc_ptr *dt)
 	mark_dirty(svm->vmcb, VMCB_DT);
 }
 
-static void svm_decache_cr0_guest_bits(struct kvm_vcpu *vcpu)
+void kvm_x86_decache_cr0_guest_bits(struct kvm_vcpu *vcpu)
 {
 }
 
-static void svm_decache_cr3(struct kvm_vcpu *vcpu)
+void kvm_x86_decache_cr3(struct kvm_vcpu *vcpu)
 {
 }
 
-static void svm_decache_cr4_guest_bits(struct kvm_vcpu *vcpu)
+void kvm_x86_decache_cr4_guest_bits(struct kvm_vcpu *vcpu)
 {
 }
 
@@ -2550,7 +2548,7 @@ static void update_cr0_intercept(struct vcpu_svm *svm)
 	}
 }
 
-static void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
+void kvm_x86_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
@@ -2584,7 +2582,7 @@ static void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
 	update_cr0_intercept(svm);
 }
 
-static int svm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
+int kvm_x86_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
 {
 	unsigned long host_cr4_mce = cr4_read_shadow() & X86_CR4_MCE;
 	unsigned long old_cr4 = to_svm(vcpu)->vmcb->save.cr4;
@@ -2593,7 +2591,7 @@ static int svm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
 		return 1;
 
 	if (npt_enabled && ((old_cr4 ^ cr4) & X86_CR4_PGE))
-		svm_flush_tlb(vcpu, true);
+		kvm_x86_tlb_flush(vcpu, true);
 
 	vcpu->arch.cr4 = cr4;
 	if (!npt_enabled)
@@ -2604,8 +2602,8 @@ static int svm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
 	return 0;
 }
 
-static void svm_set_segment(struct kvm_vcpu *vcpu,
-			    struct kvm_segment *var, int seg)
+void kvm_x86_set_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var,
+			 int seg)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 	struct vmcb_seg *s = svm_seg(vcpu, seg);
@@ -2629,13 +2627,13 @@ static void svm_set_segment(struct kvm_vcpu *vcpu,
 	 * would entail passing the CPL to userspace and back.
 	 */
 	if (seg == VCPU_SREG_SS)
-		/* This is symmetric with svm_get_segment() */
+		/* This is symmetric with kvm_x86_get_segment() */
 		svm->vmcb->save.cpl = (var->dpl & 3);
 
 	mark_dirty(svm->vmcb, VMCB_SEG);
 }
 
-static void update_bp_intercept(struct kvm_vcpu *vcpu)
+void kvm_x86_update_bp_intercept(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
@@ -2662,12 +2660,12 @@ static void new_asid(struct vcpu_svm *svm, struct svm_cpu_data *sd)
 	mark_dirty(svm->vmcb, VMCB_ASID);
 }
 
-static u64 svm_get_dr6(struct kvm_vcpu *vcpu)
+u64 kvm_x86_get_dr6(struct kvm_vcpu *vcpu)
 {
 	return to_svm(vcpu)->vmcb->save.dr6;
 }
 
-static void svm_set_dr6(struct kvm_vcpu *vcpu, unsigned long value)
+void kvm_x86_set_dr6(struct kvm_vcpu *vcpu, unsigned long value)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
@@ -2675,7 +2673,7 @@ static void svm_set_dr6(struct kvm_vcpu *vcpu, unsigned long value)
 	mark_dirty(svm->vmcb, VMCB_DR);
 }
 
-static void svm_sync_dirty_debug_regs(struct kvm_vcpu *vcpu)
+void kvm_x86_sync_dirty_debug_regs(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
@@ -2683,14 +2681,14 @@ static void svm_sync_dirty_debug_regs(struct kvm_vcpu *vcpu)
 	get_debugreg(vcpu->arch.db[1], 1);
 	get_debugreg(vcpu->arch.db[2], 2);
 	get_debugreg(vcpu->arch.db[3], 3);
-	vcpu->arch.dr6 = svm_get_dr6(vcpu);
+	vcpu->arch.dr6 = kvm_x86_get_dr6(vcpu);
 	vcpu->arch.dr7 = svm->vmcb->save.dr7;
 
 	vcpu->arch.switch_db_regs &= ~KVM_DEBUGREG_WONT_EXIT;
 	set_dr_intercepts(svm);
 }
 
-static void svm_set_dr7(struct kvm_vcpu *vcpu, unsigned long value)
+void kvm_x86_set_dr7(struct kvm_vcpu *vcpu, unsigned long value)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
@@ -2989,7 +2987,7 @@ static void nested_svm_init_mmu_context(struct kvm_vcpu *vcpu)
 	vcpu->arch.mmu->get_cr3           = nested_svm_get_tdp_cr3;
 	vcpu->arch.mmu->get_pdptr         = nested_svm_get_tdp_pdptr;
 	vcpu->arch.mmu->inject_page_fault = nested_svm_inject_npf_exit;
-	vcpu->arch.mmu->shadow_root_level = get_npt_level(vcpu);
+	vcpu->arch.mmu->shadow_root_level = kvm_x86_get_tdp_level(vcpu);
 	reset_shadow_zero_bits_mask(vcpu, vcpu->arch.mmu);
 	vcpu->arch.walk_mmu              = &vcpu->arch.nested_mmu;
 }
@@ -3409,9 +3407,9 @@ static int nested_svm_vmexit(struct vcpu_svm *svm)
 	svm->vmcb->save.gdtr = hsave->save.gdtr;
 	svm->vmcb->save.idtr = hsave->save.idtr;
 	kvm_set_rflags(&svm->vcpu, hsave->save.rflags);
-	svm_set_efer(&svm->vcpu, hsave->save.efer);
-	svm_set_cr0(&svm->vcpu, hsave->save.cr0 | X86_CR0_PE);
-	svm_set_cr4(&svm->vcpu, hsave->save.cr4);
+	kvm_x86_set_efer(&svm->vcpu, hsave->save.efer);
+	kvm_x86_set_cr0(&svm->vcpu, hsave->save.cr0 | X86_CR0_PE);
+	kvm_x86_set_cr4(&svm->vcpu, hsave->save.cr4);
 	if (npt_enabled) {
 		svm->vmcb->save.cr3 = hsave->save.cr3;
 		svm->vcpu.arch.cr3 = hsave->save.cr3;
@@ -3513,9 +3511,9 @@ static void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa,
 	svm->vmcb->save.gdtr = nested_vmcb->save.gdtr;
 	svm->vmcb->save.idtr = nested_vmcb->save.idtr;
 	kvm_set_rflags(&svm->vcpu, nested_vmcb->save.rflags);
-	svm_set_efer(&svm->vcpu, nested_vmcb->save.efer);
-	svm_set_cr0(&svm->vcpu, nested_vmcb->save.cr0);
-	svm_set_cr4(&svm->vcpu, nested_vmcb->save.cr4);
+	kvm_x86_set_efer(&svm->vcpu, nested_vmcb->save.efer);
+	kvm_x86_set_cr0(&svm->vcpu, nested_vmcb->save.cr0);
+	kvm_x86_set_cr4(&svm->vcpu, nested_vmcb->save.cr4);
 	if (npt_enabled) {
 		svm->vmcb->save.cr3 = nested_vmcb->save.cr3;
 		svm->vcpu.arch.cr3 = nested_vmcb->save.cr3;
@@ -3547,7 +3545,7 @@ static void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa,
 	svm->nested.intercept_exceptions = nested_vmcb->control.intercept_exceptions;
 	svm->nested.intercept            = nested_vmcb->control.intercept;
 
-	svm_flush_tlb(&svm->vcpu, true);
+	kvm_x86_tlb_flush(&svm->vcpu, true);
 	svm->vmcb->control.int_ctl = nested_vmcb->control.int_ctl | V_INTR_MASKING_MASK;
 	if (nested_vmcb->control.int_ctl & V_INTR_MASKING_MASK)
 		svm->vcpu.arch.hflags |= HF_VINTR_MASK;
@@ -3898,7 +3896,7 @@ static int task_switch_interception(struct vcpu_svm *svm)
 	    int_type == SVM_EXITINTINFO_TYPE_SOFT ||
 	    (int_type == SVM_EXITINTINFO_TYPE_EXEPT &&
 	     (int_vec == OF_VECTOR || int_vec == BP_VECTOR))) {
-		if (!skip_emulated_instruction(&svm->vcpu))
+		if (!kvm_x86_skip_emulated_instruction(&svm->vcpu))
 			return 0;
 	}
 
@@ -4104,7 +4102,7 @@ static int cr8_write_interception(struct vcpu_svm *svm)
 	return 0;
 }
 
-static int svm_get_msr_feature(struct kvm_msr_entry *msr)
+int kvm_x86_get_msr_feature(struct kvm_msr_entry *msr)
 {
 	msr->data = 0;
 
@@ -4120,7 +4118,7 @@ static int svm_get_msr_feature(struct kvm_msr_entry *msr)
 	return 0;
 }
 
-static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+int kvm_x86_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
@@ -4253,7 +4251,7 @@ static int svm_set_vm_cr(struct kvm_vcpu *vcpu, u64 data)
 	return 0;
 }
 
-static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
+int kvm_x86_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
@@ -4389,7 +4387,7 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
 		struct kvm_msr_entry msr_entry;
 
 		msr_entry.index = msr->index;
-		if (svm_get_msr_feature(&msr_entry))
+		if (kvm_x86_get_msr_feature(&msr_entry))
 			return 1;
 
 		/* Check the supported bits */
@@ -4439,7 +4437,7 @@ static int interrupt_window_interception(struct vcpu_svm *svm)
 static int pause_interception(struct vcpu_svm *svm)
 {
 	struct kvm_vcpu *vcpu = &svm->vcpu;
-	bool in_kernel = (svm_get_cpl(vcpu) == 0);
+	bool in_kernel = (kvm_x86_get_cpl(vcpu) == 0);
 
 	if (pause_filter_thresh)
 		grow_ple_window(vcpu);
@@ -4919,7 +4917,7 @@ static void dump_vmcb(struct kvm_vcpu *vcpu)
 	       "excp_to:", save->last_excp_to);
 }
 
-static void svm_get_exit_info(struct kvm_vcpu *vcpu, u64 *info1, u64 *info2)
+void kvm_x86_get_exit_info(struct kvm_vcpu *vcpu, u64 *info1, u64 *info2)
 {
 	struct vmcb_control_area *control = &to_svm(vcpu)->vmcb->control;
 
@@ -4927,7 +4925,7 @@ static void svm_get_exit_info(struct kvm_vcpu *vcpu, u64 *info1, u64 *info2)
 	*info2 = control->exit_info_2;
 }
 
-static int handle_exit(struct kvm_vcpu *vcpu)
+int kvm_x86_handle_exit(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 	struct kvm_run *kvm_run = vcpu->run;
@@ -5047,7 +5045,7 @@ static void pre_svm_run(struct vcpu_svm *svm)
 		new_asid(svm, sd);
 }
 
-static void svm_inject_nmi(struct kvm_vcpu *vcpu)
+void kvm_x86_set_nmi(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
@@ -5070,7 +5068,7 @@ static inline void svm_inject_irq(struct vcpu_svm *svm, int irq)
 	mark_dirty(svm->vmcb, VMCB_INTR);
 }
 
-static void svm_set_irq(struct kvm_vcpu *vcpu)
+void kvm_x86_set_irq(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
@@ -5088,7 +5086,7 @@ static inline bool svm_nested_virtualize_tpr(struct kvm_vcpu *vcpu)
 	return is_guest_mode(vcpu) && (vcpu->arch.hflags & HF_VINTR_MASK);
 }
 
-static void update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr)
+void kvm_x86_update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
@@ -5105,26 +5103,26 @@ static void update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr)
 		set_cr_intercept(svm, INTERCEPT_CR8_WRITE);
 }
 
-static void svm_set_virtual_apic_mode(struct kvm_vcpu *vcpu)
+void kvm_x86_set_virtual_apic_mode(struct kvm_vcpu *vcpu)
 {
 	return;
 }
 
-static bool svm_get_enable_apicv(struct kvm_vcpu *vcpu)
+bool kvm_x86_get_enable_apicv(struct kvm_vcpu *vcpu)
 {
 	return avic && irqchip_split(vcpu->kvm);
 }
 
-static void svm_hwapic_irr_update(struct kvm_vcpu *vcpu, int max_irr)
+void kvm_x86_hwapic_irr_update(struct kvm_vcpu *vcpu, int max_irr)
 {
 }
 
-static void svm_hwapic_isr_update(struct kvm_vcpu *vcpu, int max_isr)
+void kvm_x86_hwapic_isr_update(struct kvm_vcpu *vcpu, int max_isr)
 {
 }
 
 /* Note: Currently only used by Hyper-V. */
-static void svm_refresh_apicv_exec_ctrl(struct kvm_vcpu *vcpu)
+void kvm_x86_refresh_apicv_exec_ctrl(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 	struct vmcb *vmcb = svm->vmcb;
@@ -5136,12 +5134,12 @@ static void svm_refresh_apicv_exec_ctrl(struct kvm_vcpu *vcpu)
 	mark_dirty(vmcb, VMCB_AVIC);
 }
 
-static void svm_load_eoi_exitmap(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap)
+void kvm_x86_load_eoi_exitmap(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap)
 {
 	return;
 }
 
-static void svm_deliver_avic_intr(struct kvm_vcpu *vcpu, int vec)
+void kvm_x86_deliver_posted_interrupt(struct kvm_vcpu *vcpu, int vec)
 {
 	kvm_lapic_set_irr(vec, vcpu->arch.apic);
 	smp_mb__after_atomic();
@@ -5156,7 +5154,7 @@ static void svm_deliver_avic_intr(struct kvm_vcpu *vcpu, int vec)
 		kvm_vcpu_wake_up(vcpu);
 }
 
-static bool svm_dy_apicv_has_pending_interrupt(struct kvm_vcpu *vcpu)
+bool kvm_x86_dy_apicv_has_pending_interrupt(struct kvm_vcpu *vcpu)
 {
 	return false;
 }
@@ -5266,8 +5264,8 @@ get_pi_vcpu_info(struct kvm *kvm, struct kvm_kernel_irq_routing_entry *e,
  * @set: set or unset PI
  * returns 0 on success, < 0 on failure
  */
-static int svm_update_pi_irte(struct kvm *kvm, unsigned int host_irq,
-			      uint32_t guest_irq, bool set)
+int kvm_x86_update_pi_irte(struct kvm *kvm, unsigned int host_irq,
+			   uint32_t guest_irq, bool set)
 {
 	struct kvm_kernel_irq_routing_entry *e;
 	struct kvm_irq_routing_table *irq_rt;
@@ -5366,7 +5364,7 @@ static int svm_update_pi_irte(struct kvm *kvm, unsigned int host_irq,
 	return ret;
 }
 
-static int svm_nmi_allowed(struct kvm_vcpu *vcpu)
+int kvm_x86_nmi_allowed(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 	struct vmcb *vmcb = svm->vmcb;
@@ -5378,14 +5376,14 @@ static int svm_nmi_allowed(struct kvm_vcpu *vcpu)
 	return ret;
 }
 
-static bool svm_get_nmi_mask(struct kvm_vcpu *vcpu)
+bool kvm_x86_get_nmi_mask(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
 	return !!(svm->vcpu.arch.hflags & HF_NMI_MASK);
 }
 
-static void svm_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked)
+void kvm_x86_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
@@ -5398,7 +5396,7 @@ static void svm_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked)
 	}
 }
 
-static int svm_interrupt_allowed(struct kvm_vcpu *vcpu)
+int kvm_x86_interrupt_allowed(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 	struct vmcb *vmcb = svm->vmcb;
@@ -5416,7 +5414,7 @@ static int svm_interrupt_allowed(struct kvm_vcpu *vcpu)
 	return ret;
 }
 
-static void enable_irq_window(struct kvm_vcpu *vcpu)
+void kvm_x86_enable_irq_window(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
@@ -5437,7 +5435,7 @@ static void enable_irq_window(struct kvm_vcpu *vcpu)
 	}
 }
 
-static void enable_nmi_window(struct kvm_vcpu *vcpu)
+void kvm_x86_enable_nmi_window(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
@@ -5458,22 +5456,22 @@ static void enable_nmi_window(struct kvm_vcpu *vcpu)
 	 * Something prevents NMI from been injected. Single step over possible
 	 * problem (IRET or exception injection or interrupt shadow)
 	 */
-	svm->nmi_singlestep_guest_rflags = svm_get_rflags(vcpu);
+	svm->nmi_singlestep_guest_rflags = kvm_x86_get_rflags(vcpu);
 	svm->nmi_singlestep = true;
 	svm->vmcb->save.rflags |= (X86_EFLAGS_TF | X86_EFLAGS_RF);
 }
 
-static int svm_set_tss_addr(struct kvm *kvm, unsigned int addr)
+int kvm_x86_set_tss_addr(struct kvm *kvm, unsigned int addr)
 {
 	return 0;
 }
 
-static int svm_set_identity_map_addr(struct kvm *kvm, u64 ident_addr)
+int kvm_x86_set_identity_map_addr(struct kvm *kvm, u64 ident_addr)
 {
 	return 0;
 }
 
-static void svm_flush_tlb(struct kvm_vcpu *vcpu, bool invalidate_gpa)
+void kvm_x86_tlb_flush(struct kvm_vcpu *vcpu, bool invalidate_gpa)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
@@ -5483,14 +5481,14 @@ static void svm_flush_tlb(struct kvm_vcpu *vcpu, bool invalidate_gpa)
 		svm->asid_generation--;
 }
 
-static void svm_flush_tlb_gva(struct kvm_vcpu *vcpu, gva_t gva)
+void kvm_x86_tlb_flush_gva(struct kvm_vcpu *vcpu, gva_t gva)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
 	invlpga(gva, svm->vmcb->control.asid);
 }
 
-static void svm_prepare_guest_switch(struct kvm_vcpu *vcpu)
+void kvm_x86_prepare_guest_switch(struct kvm_vcpu *vcpu)
 {
 }
 
@@ -5585,7 +5583,7 @@ static void svm_complete_interrupts(struct vcpu_svm *svm)
 	}
 }
 
-static void svm_cancel_injection(struct kvm_vcpu *vcpu)
+void kvm_x86_cancel_injection(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 	struct vmcb_control_area *control = &svm->vmcb->control;
@@ -5596,7 +5594,7 @@ static void svm_cancel_injection(struct kvm_vcpu *vcpu)
 	svm_complete_interrupts(svm);
 }
 
-static void svm_vcpu_run(struct kvm_vcpu *vcpu)
+void kvm_x86_run(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
@@ -5817,9 +5815,9 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
 
 	mark_all_clean(svm->vmcb);
 }
-STACK_FRAME_NON_STANDARD(svm_vcpu_run);
+STACK_FRAME_NON_STANDARD(kvm_x86_run);
 
-static void svm_set_cr3(struct kvm_vcpu *vcpu, unsigned long root)
+void kvm_x86_set_cr3(struct kvm_vcpu *vcpu, unsigned long root)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
@@ -5827,7 +5825,7 @@ static void svm_set_cr3(struct kvm_vcpu *vcpu, unsigned long root)
 	mark_dirty(svm->vmcb, VMCB_CR);
 }
 
-static void set_tdp_cr3(struct kvm_vcpu *vcpu, unsigned long root)
+void kvm_x86_set_tdp_cr3(struct kvm_vcpu *vcpu, unsigned long root)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
@@ -5839,7 +5837,7 @@ static void set_tdp_cr3(struct kvm_vcpu *vcpu, unsigned long root)
 	mark_dirty(svm->vmcb, VMCB_CR);
 }
 
-static int is_disabled(void)
+int kvm_x86_disabled_by_bios(void)
 {
 	u64 vm_cr;
 
@@ -5850,8 +5848,7 @@ static int is_disabled(void)
 	return 0;
 }
 
-static void
-svm_patch_hypercall(struct kvm_vcpu *vcpu, unsigned char *hypercall)
+void kvm_x86_patch_hypercall(struct kvm_vcpu *vcpu, unsigned char *hypercall)
 {
 	/*
 	 * Patch in the VMMCALL instruction:
@@ -5861,17 +5858,17 @@ svm_patch_hypercall(struct kvm_vcpu *vcpu, unsigned char *hypercall)
 	hypercall[2] = 0xd9;
 }
 
-static int __init svm_check_processor_compat(void)
+__init int kvm_x86_check_processor_compatibility(void)
 {
 	return 0;
 }
 
-static bool svm_cpu_has_accelerated_tpr(void)
+bool kvm_x86_cpu_has_accelerated_tpr(void)
 {
 	return false;
 }
 
-static bool svm_has_emulated_msr(int index)
+bool kvm_x86_has_emulated_msr(int index)
 {
 	switch (index) {
 	case MSR_IA32_MCG_EXT_CTL:
@@ -5884,12 +5881,12 @@ static bool svm_has_emulated_msr(int index)
 	return true;
 }
 
-static u64 svm_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
+u64 kvm_x86_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
 {
 	return 0;
 }
 
-static void svm_cpuid_update(struct kvm_vcpu *vcpu)
+void kvm_x86_cpuid_update(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
@@ -5904,7 +5901,7 @@ static void svm_cpuid_update(struct kvm_vcpu *vcpu)
 
 #define F(x) bit(X86_FEATURE_##x)
 
-static void svm_set_supported_cpuid(u32 func, struct kvm_cpuid_entry2 *entry)
+void kvm_x86_set_supported_cpuid(u32 func, struct kvm_cpuid_entry2 *entry)
 {
 	switch (func) {
 	case 0x1:
@@ -5946,42 +5943,42 @@ static void svm_set_supported_cpuid(u32 func, struct kvm_cpuid_entry2 *entry)
 	}
 }
 
-static int svm_get_lpage_level(void)
+int kvm_x86_get_lpage_level(void)
 {
 	return PT_PDPE_LEVEL;
 }
 
-static bool svm_rdtscp_supported(void)
+bool kvm_x86_rdtscp_supported(void)
 {
 	return boot_cpu_has(X86_FEATURE_RDTSCP);
 }
 
-static bool svm_invpcid_supported(void)
+bool kvm_x86_invpcid_supported(void)
 {
 	return false;
 }
 
-static bool svm_mpx_supported(void)
+bool kvm_x86_mpx_supported(void)
 {
 	return false;
 }
 
-static bool svm_xsaves_supported(void)
+bool kvm_x86_xsaves_supported(void)
 {
 	return false;
 }
 
-static bool svm_umip_emulated(void)
+bool kvm_x86_umip_emulated(void)
 {
 	return false;
 }
 
-static bool svm_pt_supported(void)
+bool kvm_x86_pt_supported(void)
 {
 	return false;
 }
 
-static bool svm_has_wbinvd_exit(void)
+bool kvm_x86_has_wbinvd_exit(void)
 {
 	return true;
 }
@@ -6050,9 +6047,9 @@ static const struct __x86_intercept {
 #undef POST_EX
 #undef POST_MEM
 
-static int svm_check_intercept(struct kvm_vcpu *vcpu,
-			       struct x86_instruction_info *info,
-			       enum x86_intercept_stage stage)
+int kvm_x86_check_intercept(struct kvm_vcpu *vcpu,
+			    struct x86_instruction_info *info,
+			    enum x86_intercept_stage stage)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 	int vmexit, ret = X86EMUL_CONTINUE;
@@ -6171,18 +6168,18 @@ static int svm_check_intercept(struct kvm_vcpu *vcpu,
 	return ret;
 }
 
-static void svm_handle_exit_irqoff(struct kvm_vcpu *vcpu)
+void kvm_x86_handle_exit_irqoff(struct kvm_vcpu *vcpu)
 {
 
 }
 
-static void svm_sched_in(struct kvm_vcpu *vcpu, int cpu)
+void kvm_x86_sched_in(struct kvm_vcpu *vcpu, int cpu)
 {
 	if (pause_filter_thresh)
 		shrink_ple_window(vcpu);
 }
 
-static inline void avic_post_state_restore(struct kvm_vcpu *vcpu)
+void kvm_x86_apicv_post_state_restore(struct kvm_vcpu *vcpu)
 {
 	if (avic_handle_apic_id_update(vcpu) != 0)
 		return;
@@ -6190,13 +6187,13 @@ static inline void avic_post_state_restore(struct kvm_vcpu *vcpu)
 	avic_handle_ldr_update(vcpu);
 }
 
-static void svm_setup_mce(struct kvm_vcpu *vcpu)
+void kvm_x86_setup_mce(struct kvm_vcpu *vcpu)
 {
 	/* [63:9] are reserved. */
 	vcpu->arch.mcg_cap &= 0x1ff;
 }
 
-static int svm_smi_allowed(struct kvm_vcpu *vcpu)
+int kvm_x86_smi_allowed(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
@@ -6215,7 +6212,7 @@ static int svm_smi_allowed(struct kvm_vcpu *vcpu)
 	return 1;
 }
 
-static int svm_pre_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
+int kvm_x86_pre_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 	int ret;
@@ -6237,7 +6234,7 @@ static int svm_pre_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
 	return 0;
 }
 
-static int svm_pre_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
+int kvm_x86_pre_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 	struct vmcb *nested_vmcb;
@@ -6257,7 +6254,7 @@ static int svm_pre_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
 	return 0;
 }
 
-static int enable_smi_window(struct kvm_vcpu *vcpu)
+int kvm_x86_enable_smi_window(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
@@ -6960,7 +6957,7 @@ static int sev_launch_secret(struct kvm *kvm, struct kvm_sev_cmd *argp)
 	return ret;
 }
 
-static int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
+int kvm_x86_mem_enc_op(struct kvm *kvm, void __user *argp)
 {
 	struct kvm_sev_cmd sev_cmd;
 	int r;
@@ -7014,8 +7011,7 @@ static int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
 	return r;
 }
 
-static int svm_register_enc_region(struct kvm *kvm,
-				   struct kvm_enc_region *range)
+int kvm_x86_mem_enc_reg_region(struct kvm *kvm, struct kvm_enc_region *range)
 {
 	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
 	struct enc_region *region;
@@ -7076,8 +7072,7 @@ find_enc_region(struct kvm *kvm, struct kvm_enc_region *range)
 }
 
 
-static int svm_unregister_enc_region(struct kvm *kvm,
-				     struct kvm_enc_region *range)
+int kvm_x86_mem_enc_unreg_region(struct kvm *kvm, struct kvm_enc_region *range)
 {
 	struct enc_region *region;
 	int ret;
@@ -7105,12 +7100,12 @@ static int svm_unregister_enc_region(struct kvm *kvm,
 	return ret;
 }
 
-static bool svm_need_emulation_on_page_fault(struct kvm_vcpu *vcpu)
+bool kvm_x86_need_emulation_on_page_fault(struct kvm_vcpu *vcpu)
 {
 	unsigned long cr4 = kvm_read_cr4(vcpu);
 	bool smep = cr4 & X86_CR4_SMEP;
 	bool smap = cr4 & X86_CR4_SMAP;
-	bool is_user = svm_get_cpl(vcpu) == 3;
+	bool is_user = kvm_x86_get_cpl(vcpu) == 3;
 
 	/*
 	 * Detect and workaround Errata 1096 Fam_17h_00_0Fh.
@@ -7163,7 +7158,7 @@ static bool svm_need_emulation_on_page_fault(struct kvm_vcpu *vcpu)
 	return false;
 }
 
-static bool svm_apic_init_signal_blocked(struct kvm_vcpu *vcpu)
+bool kvm_x86_apic_init_signal_blocked(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
@@ -7171,7 +7166,7 @@ static bool svm_apic_init_signal_blocked(struct kvm_vcpu *vcpu)
 	 * TODO: Last condition latch INIT signals on vCPU when
 	 * vCPU is in guest-mode and vmcb12 defines intercept on INIT.
 	 * To properly emulate the INIT intercept, SVM should implement
-	 * kvm_x86_ops->check_nested_events() and call nested_svm_vmexit()
+	 * kvm_x86_check_nested_events() and call nested_svm_vmexit()
 	 * there if an INIT signal is pending.
 	 */
 	return !gif_set(svm) ||
@@ -7179,143 +7174,143 @@ static bool svm_apic_init_signal_blocked(struct kvm_vcpu *vcpu)
 }
 
 static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
-	.cpu_has_kvm_support = has_svm,
-	.disabled_by_bios = is_disabled,
-	.hardware_setup = svm_hardware_setup,
-	.hardware_unsetup = svm_hardware_unsetup,
-	.check_processor_compatibility = svm_check_processor_compat,
-	.hardware_enable = svm_hardware_enable,
-	.hardware_disable = svm_hardware_disable,
-	.cpu_has_accelerated_tpr = svm_cpu_has_accelerated_tpr,
-	.has_emulated_msr = svm_has_emulated_msr,
-
-	.vcpu_create = svm_create_vcpu,
-	.vcpu_free = svm_free_vcpu,
-	.vcpu_reset = svm_vcpu_reset,
-
-	.vm_alloc = svm_vm_alloc,
-	.vm_free = svm_vm_free,
-	.vm_init = avic_vm_init,
-	.vm_destroy = svm_vm_destroy,
-
-	.prepare_guest_switch = svm_prepare_guest_switch,
-	.vcpu_load = svm_vcpu_load,
-	.vcpu_put = svm_vcpu_put,
-	.vcpu_blocking = svm_vcpu_blocking,
-	.vcpu_unblocking = svm_vcpu_unblocking,
-
-	.update_bp_intercept = update_bp_intercept,
-	.get_msr_feature = svm_get_msr_feature,
-	.get_msr = svm_get_msr,
-	.set_msr = svm_set_msr,
-	.get_segment_base = svm_get_segment_base,
-	.get_segment = svm_get_segment,
-	.set_segment = svm_set_segment,
-	.get_cpl = svm_get_cpl,
-	.get_cs_db_l_bits = kvm_get_cs_db_l_bits,
-	.decache_cr0_guest_bits = svm_decache_cr0_guest_bits,
-	.decache_cr3 = svm_decache_cr3,
-	.decache_cr4_guest_bits = svm_decache_cr4_guest_bits,
-	.set_cr0 = svm_set_cr0,
-	.set_cr3 = svm_set_cr3,
-	.set_cr4 = svm_set_cr4,
-	.set_efer = svm_set_efer,
-	.get_idt = svm_get_idt,
-	.set_idt = svm_set_idt,
-	.get_gdt = svm_get_gdt,
-	.set_gdt = svm_set_gdt,
-	.get_dr6 = svm_get_dr6,
-	.set_dr6 = svm_set_dr6,
-	.set_dr7 = svm_set_dr7,
-	.sync_dirty_debug_regs = svm_sync_dirty_debug_regs,
-	.cache_reg = svm_cache_reg,
-	.get_rflags = svm_get_rflags,
-	.set_rflags = svm_set_rflags,
-
-	.tlb_flush = svm_flush_tlb,
-	.tlb_flush_gva = svm_flush_tlb_gva,
-
-	.run = svm_vcpu_run,
-	.handle_exit = handle_exit,
-	.skip_emulated_instruction = skip_emulated_instruction,
-	.set_interrupt_shadow = svm_set_interrupt_shadow,
-	.get_interrupt_shadow = svm_get_interrupt_shadow,
-	.patch_hypercall = svm_patch_hypercall,
-	.set_irq = svm_set_irq,
-	.set_nmi = svm_inject_nmi,
-	.queue_exception = svm_queue_exception,
-	.cancel_injection = svm_cancel_injection,
-	.interrupt_allowed = svm_interrupt_allowed,
-	.nmi_allowed = svm_nmi_allowed,
-	.get_nmi_mask = svm_get_nmi_mask,
-	.set_nmi_mask = svm_set_nmi_mask,
-	.enable_nmi_window = enable_nmi_window,
-	.enable_irq_window = enable_irq_window,
-	.update_cr8_intercept = update_cr8_intercept,
-	.set_virtual_apic_mode = svm_set_virtual_apic_mode,
-	.get_enable_apicv = svm_get_enable_apicv,
-	.refresh_apicv_exec_ctrl = svm_refresh_apicv_exec_ctrl,
-	.load_eoi_exitmap = svm_load_eoi_exitmap,
-	.hwapic_irr_update = svm_hwapic_irr_update,
-	.hwapic_isr_update = svm_hwapic_isr_update,
-	.sync_pir_to_irr = kvm_lapic_find_highest_irr,
-	.apicv_post_state_restore = avic_post_state_restore,
-
-	.set_tss_addr = svm_set_tss_addr,
-	.set_identity_map_addr = svm_set_identity_map_addr,
-	.get_tdp_level = get_npt_level,
-	.get_mt_mask = svm_get_mt_mask,
-
-	.get_exit_info = svm_get_exit_info,
-
-	.get_lpage_level = svm_get_lpage_level,
-
-	.cpuid_update = svm_cpuid_update,
-
-	.rdtscp_supported = svm_rdtscp_supported,
-	.invpcid_supported = svm_invpcid_supported,
-	.mpx_supported = svm_mpx_supported,
-	.xsaves_supported = svm_xsaves_supported,
-	.umip_emulated = svm_umip_emulated,
-	.pt_supported = svm_pt_supported,
-
-	.set_supported_cpuid = svm_set_supported_cpuid,
-
-	.has_wbinvd_exit = svm_has_wbinvd_exit,
-
-	.read_l1_tsc_offset = svm_read_l1_tsc_offset,
-	.write_l1_tsc_offset = svm_write_l1_tsc_offset,
-
-	.set_tdp_cr3 = set_tdp_cr3,
-
-	.check_intercept = svm_check_intercept,
-	.handle_exit_irqoff = svm_handle_exit_irqoff,
-
-	.request_immediate_exit = __kvm_request_immediate_exit,
-
-	.sched_in = svm_sched_in,
+	.cpu_has_kvm_support = kvm_x86_cpu_has_kvm_support,
+	.disabled_by_bios = kvm_x86_disabled_by_bios,
+	.hardware_setup = kvm_x86_hardware_setup,
+	.hardware_unsetup = kvm_x86_hardware_unsetup,
+	.check_processor_compatibility = kvm_x86_check_processor_compatibility,
+	.hardware_enable = kvm_x86_hardware_enable,
+	.hardware_disable = kvm_x86_hardware_disable,
+	.cpu_has_accelerated_tpr = kvm_x86_cpu_has_accelerated_tpr,
+	.has_emulated_msr = kvm_x86_has_emulated_msr,
+
+	.vcpu_create = kvm_x86_vcpu_create,
+	.vcpu_free = kvm_x86_vcpu_free,
+	.vcpu_reset = kvm_x86_vcpu_reset,
+
+	.vm_alloc = kvm_x86_vm_alloc,
+	.vm_free = kvm_x86_vm_free,
+	.vm_init = kvm_x86_vm_init,
+	.vm_destroy = kvm_x86_vm_destroy,
+
+	.prepare_guest_switch = kvm_x86_prepare_guest_switch,
+	.vcpu_load = kvm_x86_vcpu_load,
+	.vcpu_put = kvm_x86_vcpu_put,
+	.vcpu_blocking = kvm_x86_vcpu_blocking,
+	.vcpu_unblocking = kvm_x86_vcpu_unblocking,
+
+	.update_bp_intercept = kvm_x86_update_bp_intercept,
+	.get_msr_feature = kvm_x86_get_msr_feature,
+	.get_msr = kvm_x86_get_msr,
+	.set_msr = kvm_x86_set_msr,
+	.get_segment_base = kvm_x86_get_segment_base,
+	.get_segment = kvm_x86_get_segment,
+	.set_segment = kvm_x86_set_segment,
+	.get_cpl = kvm_x86_get_cpl,
+	.get_cs_db_l_bits = kvm_x86_get_cs_db_l_bits,
+	.decache_cr0_guest_bits = kvm_x86_decache_cr0_guest_bits,
+	.decache_cr3 = kvm_x86_decache_cr3,
+	.decache_cr4_guest_bits = kvm_x86_decache_cr4_guest_bits,
+	.set_cr0 = kvm_x86_set_cr0,
+	.set_cr3 = kvm_x86_set_cr3,
+	.set_cr4 = kvm_x86_set_cr4,
+	.set_efer = kvm_x86_set_efer,
+	.get_idt = kvm_x86_get_idt,
+	.set_idt = kvm_x86_set_idt,
+	.get_gdt = kvm_x86_get_gdt,
+	.set_gdt = kvm_x86_set_gdt,
+	.get_dr6 = kvm_x86_get_dr6,
+	.set_dr6 = kvm_x86_set_dr6,
+	.set_dr7 = kvm_x86_set_dr7,
+	.sync_dirty_debug_regs = kvm_x86_sync_dirty_debug_regs,
+	.cache_reg = kvm_x86_cache_reg,
+	.get_rflags = kvm_x86_get_rflags,
+	.set_rflags = kvm_x86_set_rflags,
+
+	.tlb_flush = kvm_x86_tlb_flush,
+	.tlb_flush_gva = kvm_x86_tlb_flush_gva,
+
+	.run = kvm_x86_run,
+	.handle_exit = kvm_x86_handle_exit,
+	.skip_emulated_instruction = kvm_x86_skip_emulated_instruction,
+	.set_interrupt_shadow = kvm_x86_set_interrupt_shadow,
+	.get_interrupt_shadow = kvm_x86_get_interrupt_shadow,
+	.patch_hypercall = kvm_x86_patch_hypercall,
+	.set_irq = kvm_x86_set_irq,
+	.set_nmi = kvm_x86_set_nmi,
+	.queue_exception = kvm_x86_queue_exception,
+	.cancel_injection = kvm_x86_cancel_injection,
+	.interrupt_allowed = kvm_x86_interrupt_allowed,
+	.nmi_allowed = kvm_x86_nmi_allowed,
+	.get_nmi_mask = kvm_x86_get_nmi_mask,
+	.set_nmi_mask = kvm_x86_set_nmi_mask,
+	.enable_nmi_window = kvm_x86_enable_nmi_window,
+	.enable_irq_window = kvm_x86_enable_irq_window,
+	.update_cr8_intercept = kvm_x86_update_cr8_intercept,
+	.set_virtual_apic_mode = kvm_x86_set_virtual_apic_mode,
+	.get_enable_apicv = kvm_x86_get_enable_apicv,
+	.refresh_apicv_exec_ctrl = kvm_x86_refresh_apicv_exec_ctrl,
+	.load_eoi_exitmap = kvm_x86_load_eoi_exitmap,
+	.hwapic_irr_update = kvm_x86_hwapic_irr_update,
+	.hwapic_isr_update = kvm_x86_hwapic_isr_update,
+	.sync_pir_to_irr = kvm_x86_sync_pir_to_irr,
+	.apicv_post_state_restore = kvm_x86_apicv_post_state_restore,
+
+	.set_tss_addr = kvm_x86_set_tss_addr,
+	.set_identity_map_addr = kvm_x86_set_identity_map_addr,
+	.get_tdp_level = kvm_x86_get_tdp_level,
+	.get_mt_mask = kvm_x86_get_mt_mask,
+
+	.get_exit_info = kvm_x86_get_exit_info,
+
+	.get_lpage_level = kvm_x86_get_lpage_level,
+
+	.cpuid_update = kvm_x86_cpuid_update,
+
+	.rdtscp_supported = kvm_x86_rdtscp_supported,
+	.invpcid_supported = kvm_x86_invpcid_supported,
+	.mpx_supported = kvm_x86_mpx_supported,
+	.xsaves_supported = kvm_x86_xsaves_supported,
+	.umip_emulated = kvm_x86_umip_emulated,
+	.pt_supported = kvm_x86_pt_supported,
+
+	.set_supported_cpuid = kvm_x86_set_supported_cpuid,
+
+	.has_wbinvd_exit = kvm_x86_has_wbinvd_exit,
+
+	.read_l1_tsc_offset = kvm_x86_read_l1_tsc_offset,
+	.write_l1_tsc_offset = kvm_x86_write_l1_tsc_offset,
+
+	.set_tdp_cr3 = kvm_x86_set_tdp_cr3,
+
+	.check_intercept = kvm_x86_check_intercept,
+	.handle_exit_irqoff = kvm_x86_handle_exit_irqoff,
+
+	.request_immediate_exit = kvm_x86_request_immediate_exit,
+
+	.sched_in = kvm_x86_sched_in,
 
 	.pmu_ops = &amd_pmu_ops,
-	.deliver_posted_interrupt = svm_deliver_avic_intr,
-	.dy_apicv_has_pending_interrupt = svm_dy_apicv_has_pending_interrupt,
-	.update_pi_irte = svm_update_pi_irte,
-	.setup_mce = svm_setup_mce,
+	.deliver_posted_interrupt = kvm_x86_deliver_posted_interrupt,
+	.dy_apicv_has_pending_interrupt = kvm_x86_dy_apicv_has_pending_interrupt,
+	.update_pi_irte = kvm_x86_update_pi_irte,
+	.setup_mce = kvm_x86_setup_mce,
 
-	.smi_allowed = svm_smi_allowed,
-	.pre_enter_smm = svm_pre_enter_smm,
-	.pre_leave_smm = svm_pre_leave_smm,
-	.enable_smi_window = enable_smi_window,
+	.smi_allowed = kvm_x86_smi_allowed,
+	.pre_enter_smm = kvm_x86_pre_enter_smm,
+	.pre_leave_smm = kvm_x86_pre_leave_smm,
+	.enable_smi_window = kvm_x86_enable_smi_window,
 
-	.mem_enc_op = svm_mem_enc_op,
-	.mem_enc_reg_region = svm_register_enc_region,
-	.mem_enc_unreg_region = svm_unregister_enc_region,
+	.mem_enc_op = kvm_x86_mem_enc_op,
+	.mem_enc_reg_region = kvm_x86_mem_enc_reg_region,
+	.mem_enc_unreg_region = kvm_x86_mem_enc_unreg_region,
 
 	.nested_enable_evmcs = NULL,
 	.nested_get_evmcs_version = NULL,
 
-	.need_emulation_on_page_fault = svm_need_emulation_on_page_fault,
+	.need_emulation_on_page_fault = kvm_x86_need_emulation_on_page_fault,
 
-	.apic_init_signal_blocked = svm_apic_init_signal_blocked,
+	.apic_init_signal_blocked = kvm_x86_apic_init_signal_blocked,
 };
 
 static int __init svm_init(void)
@@ -7331,3 +7326,130 @@ static void __exit svm_exit(void)
 
 module_init(svm_init)
 module_exit(svm_exit)
+
+void kvm_x86_get_cs_db_l_bits(struct kvm_vcpu *vcpu, int *db, int *l)
+{
+	kvm_get_cs_db_l_bits(vcpu, db, l);
+}
+
+int kvm_x86_tlb_remote_flush(struct kvm *kvm)
+{
+	return kvm_x86_ops->tlb_remote_flush(kvm);
+}
+
+int kvm_x86_tlb_remote_flush_with_range(struct kvm *kvm,
+					struct kvm_tlb_range *range)
+{
+	return kvm_x86_ops->tlb_remote_flush_with_range(kvm, range);
+}
+
+bool kvm_x86_guest_apic_has_interrupt(struct kvm_vcpu *vcpu)
+{
+	return kvm_x86_ops->guest_apic_has_interrupt(vcpu);
+}
+
+void kvm_x86_set_apic_access_page_addr(struct kvm_vcpu *vcpu, hpa_t hpa)
+{
+	kvm_x86_ops->set_apic_access_page_addr(vcpu, hpa);
+}
+
+int kvm_x86_sync_pir_to_irr(struct kvm_vcpu *vcpu)
+{
+	return kvm_lapic_find_highest_irr(vcpu);
+}
+
+int kvm_x86_check_nested_events(struct kvm_vcpu *vcpu, bool external_intr)
+{
+	return kvm_x86_ops->check_nested_events(vcpu, external_intr);
+}
+
+void kvm_x86_request_immediate_exit(struct kvm_vcpu *vcpu)
+{
+	__kvm_request_immediate_exit(vcpu);
+}
+
+void kvm_x86_slot_enable_log_dirty(struct kvm *kvm,
+				   struct kvm_memory_slot *slot)
+{
+	kvm_x86_ops->slot_enable_log_dirty(kvm, slot);
+}
+
+void kvm_x86_slot_disable_log_dirty(struct kvm *kvm,
+				    struct kvm_memory_slot *slot)
+{
+	kvm_x86_ops->slot_disable_log_dirty(kvm, slot);
+}
+
+void kvm_x86_flush_log_dirty(struct kvm *kvm)
+{
+	kvm_x86_ops->flush_log_dirty(kvm);
+}
+
+void kvm_x86_enable_log_dirty_pt_masked(struct kvm *kvm,
+					struct kvm_memory_slot *slot,
+					gfn_t offset, unsigned long mask)
+{
+	kvm_x86_ops->enable_log_dirty_pt_masked(kvm, slot, offset, mask);
+}
+
+int kvm_x86_write_log_dirty(struct kvm_vcpu *vcpu)
+{
+	return kvm_x86_ops->write_log_dirty(vcpu);
+}
+
+int kvm_x86_pre_block(struct kvm_vcpu *vcpu)
+{
+	return kvm_x86_ops->pre_block(vcpu);
+}
+
+void kvm_x86_post_block(struct kvm_vcpu *vcpu)
+{
+	kvm_x86_ops->post_block(vcpu);
+}
+
+int kvm_x86_set_hv_timer(struct kvm_vcpu *vcpu, u64 guest_deadline_tsc,
+			 bool *expired)
+{
+	return kvm_x86_ops->set_hv_timer(vcpu, guest_deadline_tsc, expired);
+}
+
+void kvm_x86_cancel_hv_timer(struct kvm_vcpu *vcpu)
+{
+	kvm_x86_ops->cancel_hv_timer(vcpu);
+}
+
+int kvm_x86_get_nested_state(struct kvm_vcpu *vcpu,
+			     struct kvm_nested_state __user *user_kvm_nested_state,
+			     unsigned user_data_size)
+{
+	return kvm_x86_ops->get_nested_state(vcpu, user_kvm_nested_state,
+					     user_data_size);
+}
+
+int kvm_x86_set_nested_state(struct kvm_vcpu *vcpu,
+			     struct kvm_nested_state __user *user_kvm_nested_state,
+			     struct kvm_nested_state *kvm_state)
+{
+	return kvm_x86_ops->set_nested_state(vcpu, user_kvm_nested_state,
+					     kvm_state);
+}
+
+bool kvm_x86_get_vmcs12_pages(struct kvm_vcpu *vcpu)
+{
+	return kvm_x86_ops->get_vmcs12_pages(vcpu);
+}
+
+int kvm_x86_nested_enable_evmcs(struct kvm_vcpu *vcpu, uint16_t *vmcs_version)
+{
+	return kvm_x86_ops->nested_enable_evmcs(vcpu, vmcs_version);
+}
+
+uint16_t kvm_x86_nested_get_evmcs_version(struct kvm_vcpu *vcpu)
+{
+	return kvm_x86_ops->nested_get_evmcs_version(vcpu);
+}
+
+int kvm_x86_enable_direct_tlbflush(struct kvm_vcpu *vcpu)
+{
+	return kvm_x86_ops->enable_direct_tlbflush(vcpu);
+}
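+
The block of wrappers appended above is the other half of the
conversion: hooks that svm.c does not provide natively (remote TLB
flush, dirty-log acceleration, the preemption timer, nested state,
enlightened VMCS) either call a generic helper directly or keep
dispatching through the remaining kvm_x86_ops table, so common code can
always call one kvm_x86_* symbol regardless of vendor.  A minimal
sketch of that forwarding shape, again with invented names:

	/* fallback_wrapper_sketch.c -- illustration only, not kernel code */
	#include <stdio.h>

	struct vm;				/* stand-in for struct kvm */

	struct vendor_ops {
		int (*tlb_remote_flush)(struct vm *vm);	/* optional hook */
	};

	static int vmx_like_tlb_remote_flush(struct vm *vm)
	{
		return 0;			/* pretend the flush succeeded */
	}

	static const struct vendor_ops vendor = {
		.tlb_remote_flush = vmx_like_tlb_remote_flush,
	};

	static const struct vendor_ops *cur_ops = &vendor;

	/* direct-call symbol; forwards through the ops table just like
	 * the wrappers above do */
	int kvm_x86_tlb_remote_flush_sketch(struct vm *vm)
	{
		return cur_ops->tlb_remote_flush(vm);
	}

	int main(void)
	{
		printf("flush=%d\n", kvm_x86_tlb_remote_flush_sketch(NULL));
		return 0;
	}
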
diff --git a/arch/x86/kvm/trace.h b/arch/x86/kvm/trace.h
index 7c741a0c5f80..c52bfb96ce40 100644
--- a/arch/x86/kvm/trace.h
+++ b/arch/x86/kvm/trace.h
@@ -240,7 +240,7 @@ TRACE_EVENT(kvm_exit,
 		__entry->guest_rip	= kvm_rip_read(vcpu);
 		__entry->isa            = isa;
 		__entry->vcpu_id        = vcpu->vcpu_id;
-		kvm_x86_ops->get_exit_info(vcpu, &__entry->info1,
+		kvm_x86_get_exit_info(vcpu, &__entry->info1,
 					   &__entry->info2);
 	),
 
@@ -744,7 +744,7 @@ TRACE_EVENT(kvm_emulate_insn,
 		),
 
 	TP_fast_assign(
-		__entry->csbase = kvm_x86_ops->get_segment_base(vcpu, VCPU_SREG_CS);
+		__entry->csbase = kvm_x86_get_segment_base(vcpu, VCPU_SREG_CS);
 		__entry->len = vcpu->arch.emulate_ctxt.fetch.ptr
 			       - vcpu->arch.emulate_ctxt.fetch.data;
 		__entry->rip = vcpu->arch.emulate_ctxt._eip - __entry->len;
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 0e7c9301fe86..bb58d323da16 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -152,7 +152,7 @@ static void init_vmcs_shadow_fields(void)
  */
 static int nested_vmx_succeed(struct kvm_vcpu *vcpu)
 {
-	vmx_set_rflags(vcpu, vmx_get_rflags(vcpu)
+	kvm_x86_set_rflags(vcpu, kvm_x86_get_rflags(vcpu)
 			& ~(X86_EFLAGS_CF | X86_EFLAGS_PF | X86_EFLAGS_AF |
 			    X86_EFLAGS_ZF | X86_EFLAGS_SF | X86_EFLAGS_OF));
 	return kvm_skip_emulated_instruction(vcpu);
@@ -160,7 +160,7 @@ static int nested_vmx_succeed(struct kvm_vcpu *vcpu)
 
 static int nested_vmx_failInvalid(struct kvm_vcpu *vcpu)
 {
-	vmx_set_rflags(vcpu, (vmx_get_rflags(vcpu)
+	kvm_x86_set_rflags(vcpu, (kvm_x86_get_rflags(vcpu)
 			& ~(X86_EFLAGS_PF | X86_EFLAGS_AF | X86_EFLAGS_ZF |
 			    X86_EFLAGS_SF | X86_EFLAGS_OF))
 			| X86_EFLAGS_CF);
@@ -179,7 +179,7 @@ static int nested_vmx_failValid(struct kvm_vcpu *vcpu,
 	if (vmx->nested.current_vmptr == -1ull && !vmx->nested.hv_evmcs)
 		return nested_vmx_failInvalid(vcpu);
 
-	vmx_set_rflags(vcpu, (vmx_get_rflags(vcpu)
+	kvm_x86_set_rflags(vcpu, (kvm_x86_get_rflags(vcpu)
 			& ~(X86_EFLAGS_CF | X86_EFLAGS_PF | X86_EFLAGS_AF |
 			    X86_EFLAGS_SF | X86_EFLAGS_OF))
 			| X86_EFLAGS_ZF);
@@ -353,7 +353,7 @@ static void nested_ept_init_mmu_context(struct kvm_vcpu *vcpu)
 			VMX_EPT_EXECUTE_ONLY_BIT,
 			nested_ept_ad_enabled(vcpu),
 			nested_ept_get_cr3(vcpu));
-	vcpu->arch.mmu->set_cr3           = vmx_set_cr3;
+	vcpu->arch.mmu->set_cr3           = kvm_x86_set_cr3;
 	vcpu->arch.mmu->get_cr3           = nested_ept_get_cr3;
 	vcpu->arch.mmu->inject_page_fault = nested_ept_inject_page_fault;
 	vcpu->arch.mmu->get_pdptr         = kvm_pdptr_read;
@@ -2125,10 +2125,10 @@ static void prepare_vmcs02_early(struct vcpu_vmx *vmx, struct vmcs12 *vmcs12)
 		exec_control &= ~SECONDARY_EXEC_SHADOW_VMCS;
 
 		/*
-		 * Preset *DT exiting when emulating UMIP, so that vmx_set_cr4()
+		 * Preset *DT exiting when emulating UMIP, so that kvm_x86_set_cr4()
 		 * will not have to rewrite the controls just for this bit.
 		 */
-		if (!boot_cpu_has(X86_FEATURE_UMIP) && vmx_umip_emulated() &&
+		if (!boot_cpu_has(X86_FEATURE_UMIP) && kvm_x86_umip_emulated() &&
 		    (vmcs12->guest_cr4 & X86_CR4_UMIP))
 			exec_control |= SECONDARY_EXEC_DESC;
 
@@ -2143,9 +2143,9 @@ static void prepare_vmcs02_early(struct vcpu_vmx *vmx, struct vmcs12 *vmcs12)
 	 * ENTRY CONTROLS
 	 *
 	 * vmcs12's VM_{ENTRY,EXIT}_LOAD_IA32_EFER and VM_ENTRY_IA32E_MODE
-	 * are emulated by vmx_set_efer() in prepare_vmcs02(), but speculate
+	 * are emulated by kvm_x86_set_efer() in prepare_vmcs02(), but speculate
 	 * on the related bits (if supported by the CPU) in the hope that
-	 * we can avoid VMWrites during vmx_set_efer().
+	 * we can avoid VMWrites during kvm_x86_set_efer().
 	 */
 	exec_control = (vmcs12->vm_entry_controls | vmx_vmentry_ctrl()) &
 			~VM_ENTRY_IA32E_MODE & ~VM_ENTRY_LOAD_IA32_EFER;
@@ -2162,7 +2162,7 @@ static void prepare_vmcs02_early(struct vcpu_vmx *vmx, struct vmcs12 *vmcs12)
 	 *
 	 * L2->L1 exit controls are emulated - the hardware exit is to L0 so
 	 * we should use its exit controls. Note that VM_EXIT_LOAD_IA32_EFER
-	 * bits may be modified by vmx_set_efer() in prepare_vmcs02().
+	 * bits may be modified by kvm_x86_set_efer() in prepare_vmcs02().
 	 */
 	exec_control = vmx_vmexit_ctrl();
 	if (cpu_has_load_ia32_efer() && guest_efer != host_efer)
@@ -2329,13 +2329,13 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
 	if (kvm_mpx_supported() && (!vmx->nested.nested_run_pending ||
 	    !(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_BNDCFGS)))
 		vmcs_write64(GUEST_BNDCFGS, vmx->nested.vmcs01_guest_bndcfgs);
-	vmx_set_rflags(vcpu, vmcs12->guest_rflags);
+	kvm_x86_set_rflags(vcpu, vmcs12->guest_rflags);
 
 	/* EXCEPTION_BITMAP and CR0_GUEST_HOST_MASK should basically be the
 	 * bitwise-or of what L1 wants to trap for L2, and what we want to
 	 * trap. Note that CR0.TS also needs updating - we do this later.
 	 */
-	update_exception_bitmap(vcpu);
+	kvm_x86_update_bp_intercept(vcpu);
 	vcpu->arch.cr0_guest_owned_bits &= ~vmcs12->cr0_guest_host_mask;
 	vmcs_writel(CR0_GUEST_HOST_MASK, ~vcpu->arch.cr0_guest_owned_bits);
 
@@ -2383,7 +2383,7 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
 		nested_ept_init_mmu_context(vcpu);
 	else if (nested_cpu_has2(vmcs12,
 				 SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES))
-		vmx_flush_tlb(vcpu, true);
+		kvm_x86_tlb_flush(vcpu, true);
 
 	/*
 	 * This sets GUEST_CR0 to vmcs12->guest_cr0, possibly modifying those
@@ -2393,15 +2393,15 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
 	 * vmcs12->cr0_read_shadow because on our cr0_guest_host_mask we
 	 * have more bits than L1 expected.
 	 */
-	vmx_set_cr0(vcpu, vmcs12->guest_cr0);
+	kvm_x86_set_cr0(vcpu, vmcs12->guest_cr0);
 	vmcs_writel(CR0_READ_SHADOW, nested_read_cr0(vmcs12));
 
-	vmx_set_cr4(vcpu, vmcs12->guest_cr4);
+	kvm_x86_set_cr4(vcpu, vmcs12->guest_cr4);
 	vmcs_writel(CR4_READ_SHADOW, nested_read_cr4(vmcs12));
 
 	vcpu->arch.efer = nested_vmx_calc_efer(vmx, vmcs12);
 	/* Note: may modify VM_ENTRY/EXIT_CONTROLS and GUEST/HOST_IA32_EFER */
-	vmx_set_efer(vcpu, vcpu->arch.efer);
+	kvm_x86_set_efer(vcpu, vcpu->arch.efer);
 
 	/*
 	 * Guest state is invalid and unrestricted guest is disabled,
@@ -2825,7 +2825,7 @@ static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu)
 
 	preempt_disable();
 
-	vmx_prepare_switch_to_guest(vcpu);
+	kvm_x86_prepare_guest_switch(vcpu);
 
 	/*
 	 * Induce a consistency check VMExit by clearing bit 1 in GUEST_RFLAGS,
@@ -3010,7 +3010,7 @@ static int nested_vmx_check_permission(struct kvm_vcpu *vcpu)
 		return 0;
 	}
 
-	if (vmx_get_cpl(vcpu)) {
+	if (kvm_x86_get_cpl(vcpu)) {
 		kvm_inject_gp(vcpu, 0);
 		return 0;
 	}
@@ -3187,7 +3187,7 @@ static int nested_vmx_run(struct kvm_vcpu *vcpu, bool launch)
 	struct vmcs12 *vmcs12;
 	enum nvmx_vmentry_status status;
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
-	u32 interrupt_shadow = vmx_get_interrupt_shadow(vcpu);
+	u32 interrupt_shadow = kvm_x86_get_interrupt_shadow(vcpu);
 
 	if (!nested_vmx_check_permission(vcpu))
 		return 1;
@@ -3443,7 +3443,7 @@ static void nested_vmx_inject_exception_vmexit(struct kvm_vcpu *vcpu,
 		intr_info |= INTR_TYPE_HARD_EXCEPTION;
 
 	if (!(vmcs12->idt_vectoring_info_field & VECTORING_INFO_VALID_MASK) &&
-	    vmx_get_nmi_mask(vcpu))
+	    kvm_x86_get_nmi_mask(vcpu))
 		intr_info |= INTR_INFO_UNBLOCK_NMI;
 
 	nested_vmx_vmexit(vcpu, EXIT_REASON_EXCEPTION_NMI, intr_info, exit_qual);
@@ -3492,7 +3492,7 @@ static int vmx_check_nested_events(struct kvm_vcpu *vcpu, bool external_intr)
 		 * clear this one and block further NMIs.
 		 */
 		vcpu->arch.nmi_pending = 0;
-		vmx_set_nmi_mask(vcpu, true);
+		kvm_x86_set_nmi_mask(vcpu, true);
 		return 0;
 	}
 
@@ -3630,12 +3630,12 @@ static void copy_vmcs02_to_vmcs12_rare(struct kvm_vcpu *vcpu,
 
 	cpu = get_cpu();
 	vmx->loaded_vmcs = &vmx->nested.vmcs02;
-	vmx_vcpu_load(&vmx->vcpu, cpu);
+	kvm_x86_vcpu_load(&vmx->vcpu, cpu);
 
 	sync_vmcs02_to_vmcs12_rare(vcpu, vmcs12);
 
 	vmx->loaded_vmcs = &vmx->vmcs01;
-	vmx_vcpu_load(&vmx->vcpu, cpu);
+	kvm_x86_vcpu_load(&vmx->vcpu, cpu);
 	put_cpu();
 }
 
@@ -3795,12 +3795,12 @@ static void load_vmcs12_host_state(struct kvm_vcpu *vcpu,
 		vcpu->arch.efer |= (EFER_LMA | EFER_LME);
 	else
 		vcpu->arch.efer &= ~(EFER_LMA | EFER_LME);
-	vmx_set_efer(vcpu, vcpu->arch.efer);
+	kvm_x86_set_efer(vcpu, vcpu->arch.efer);
 
 	kvm_rsp_write(vcpu, vmcs12->host_rsp);
 	kvm_rip_write(vcpu, vmcs12->host_rip);
-	vmx_set_rflags(vcpu, X86_EFLAGS_FIXED);
-	vmx_set_interrupt_shadow(vcpu, 0);
+	kvm_x86_set_rflags(vcpu, X86_EFLAGS_FIXED);
+	kvm_x86_set_interrupt_shadow(vcpu, 0);
 
 	/*
 	 * Note that calling vmx_set_cr0 is important, even if cr0 hasn't
@@ -3810,11 +3810,11 @@ static void load_vmcs12_host_state(struct kvm_vcpu *vcpu,
 	 * (KVM doesn't change it);
 	 */
 	vcpu->arch.cr0_guest_owned_bits = X86_CR0_TS;
-	vmx_set_cr0(vcpu, vmcs12->host_cr0);
+	kvm_x86_set_cr0(vcpu, vmcs12->host_cr0);
 
 	/* Same as above - no reason to call set_cr4_guest_host_mask().  */
 	vcpu->arch.cr4_guest_owned_bits = ~vmcs_readl(CR4_GUEST_HOST_MASK);
-	vmx_set_cr4(vcpu, vmcs12->host_cr4);
+	kvm_x86_set_cr4(vcpu, vmcs12->host_cr4);
 
 	nested_ept_uninit_mmu_context(vcpu);
 
@@ -3882,7 +3882,7 @@ static void load_vmcs12_host_state(struct kvm_vcpu *vcpu,
 		seg.l = 1;
 	else
 		seg.db = 1;
-	vmx_set_segment(vcpu, &seg, VCPU_SREG_CS);
+	kvm_x86_set_segment(vcpu, &seg, VCPU_SREG_CS);
 	seg = (struct kvm_segment) {
 		.base = 0,
 		.limit = 0xFFFFFFFF,
@@ -3893,17 +3893,17 @@ static void load_vmcs12_host_state(struct kvm_vcpu *vcpu,
 		.g = 1
 	};
 	seg.selector = vmcs12->host_ds_selector;
-	vmx_set_segment(vcpu, &seg, VCPU_SREG_DS);
+	kvm_x86_set_segment(vcpu, &seg, VCPU_SREG_DS);
 	seg.selector = vmcs12->host_es_selector;
-	vmx_set_segment(vcpu, &seg, VCPU_SREG_ES);
+	kvm_x86_set_segment(vcpu, &seg, VCPU_SREG_ES);
 	seg.selector = vmcs12->host_ss_selector;
-	vmx_set_segment(vcpu, &seg, VCPU_SREG_SS);
+	kvm_x86_set_segment(vcpu, &seg, VCPU_SREG_SS);
 	seg.selector = vmcs12->host_fs_selector;
 	seg.base = vmcs12->host_fs_base;
-	vmx_set_segment(vcpu, &seg, VCPU_SREG_FS);
+	kvm_x86_set_segment(vcpu, &seg, VCPU_SREG_FS);
 	seg.selector = vmcs12->host_gs_selector;
 	seg.base = vmcs12->host_gs_base;
-	vmx_set_segment(vcpu, &seg, VCPU_SREG_GS);
+	kvm_x86_set_segment(vcpu, &seg, VCPU_SREG_GS);
 	seg = (struct kvm_segment) {
 		.base = vmcs12->host_tr_base,
 		.limit = 0x67,
@@ -3911,7 +3911,7 @@ static void load_vmcs12_host_state(struct kvm_vcpu *vcpu,
 		.type = 11,
 		.present = 1
 	};
-	vmx_set_segment(vcpu, &seg, VCPU_SREG_TR);
+	kvm_x86_set_segment(vcpu, &seg, VCPU_SREG_TR);
 
 	kvm_set_dr(vcpu, 7, 0x400);
 	vmcs_write64(GUEST_IA32_DEBUGCTL, 0);
@@ -3974,13 +3974,13 @@ static void nested_vmx_restore_host_state(struct kvm_vcpu *vcpu)
 	 * Note that calling vmx_set_{efer,cr0,cr4} is important as they
 	 * handle a variety of side effects to KVM's software model.
 	 */
-	vmx_set_efer(vcpu, nested_vmx_get_vmcs01_guest_efer(vmx));
+	kvm_x86_set_efer(vcpu, nested_vmx_get_vmcs01_guest_efer(vmx));
 
 	vcpu->arch.cr0_guest_owned_bits = X86_CR0_TS;
-	vmx_set_cr0(vcpu, vmcs_readl(CR0_READ_SHADOW));
+	kvm_x86_set_cr0(vcpu, vmcs_readl(CR0_READ_SHADOW));
 
 	vcpu->arch.cr4_guest_owned_bits = ~vmcs_readl(CR4_GUEST_HOST_MASK);
-	vmx_set_cr4(vcpu, vmcs_readl(CR4_READ_SHADOW));
+	kvm_x86_set_cr4(vcpu, vmcs_readl(CR4_READ_SHADOW));
 
 	nested_ept_uninit_mmu_context(vcpu);
 	vcpu->arch.cr3 = vmcs_readl(GUEST_CR3);
@@ -4118,11 +4118,11 @@ void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 exit_reason,
 
 	if (vmx->nested.change_vmcs01_virtual_apic_mode) {
 		vmx->nested.change_vmcs01_virtual_apic_mode = false;
-		vmx_set_virtual_apic_mode(vcpu);
+		kvm_x86_set_virtual_apic_mode(vcpu);
 	} else if (!nested_cpu_has_ept(vmcs12) &&
 		   nested_cpu_has2(vmcs12,
 				   SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES)) {
-		vmx_flush_tlb(vcpu, true);
+		kvm_x86_tlb_flush(vcpu, true);
 	}
 
 	/* Unpin physical memory we referred to in vmcs02 */
@@ -4243,7 +4243,7 @@ int get_vmx_mem_address(struct kvm_vcpu *vcpu, unsigned long exit_qualification,
 		off += kvm_register_read(vcpu, base_reg);
 	if (index_is_valid)
 		off += kvm_register_read(vcpu, index_reg)<<scaling;
-	vmx_get_segment(vcpu, &s, seg_reg);
+	kvm_x86_get_segment(vcpu, &s, seg_reg);
 
 	/*
 	 * The effective address, i.e. @off, of a memory operand is truncated
@@ -4440,7 +4440,7 @@ static int handle_vmon(struct kvm_vcpu *vcpu)
 	/*
 	 * The Intel VMX Instruction Reference lists a bunch of bits that are
 	 * prerequisite to running VMXON, most notably cr4.VMXE must be set to
-	 * 1 (see vmx_set_cr4() for when we allow the guest to set this).
+	 * 1 (see kvm_x86_set_cr4() for when we allow the guest to set this).
 	 * Otherwise, we should fail with #UD.  But most faulting conditions
 	 * have already been checked by hardware, prior to the VM-exit for
 	 * VMXON.  We do test guest cr4.VMXE because processor CR4 always has
@@ -4452,7 +4452,7 @@ static int handle_vmon(struct kvm_vcpu *vcpu)
 	}
 
 	/* CPL=0 must be checked manually. */
-	if (vmx_get_cpl(vcpu)) {
+	if (kvm_x86_get_cpl(vcpu)) {
 		kvm_inject_gp(vcpu, 0);
 		return 1;
 	}
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 3e9c059099e9..2fa9ae5acde1 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -64,9 +64,8 @@ static void global_ctrl_changed(struct kvm_pmu *pmu, u64 data)
 		reprogram_counter(pmu, bit);
 }
 
-static unsigned intel_find_arch_event(struct kvm_pmu *pmu,
-				      u8 event_select,
-				      u8 unit_mask)
+unsigned kvm_x86_pmu_find_arch_event(struct kvm_pmu *pmu, u8 event_select,
+				     u8 unit_mask)
 {
 	int i;
 
@@ -82,7 +81,7 @@ static unsigned intel_find_arch_event(struct kvm_pmu *pmu,
 	return intel_arch_events[i].event_type;
 }
 
-static unsigned intel_find_fixed_event(int idx)
+unsigned kvm_x86_pmu_find_fixed_event(int idx)
 {
 	if (idx >= ARRAY_SIZE(fixed_pmc_events))
 		return PERF_COUNT_HW_MAX;
@@ -91,14 +90,14 @@ static unsigned intel_find_fixed_event(int idx)
 }
 
 /* check if a PMC is enabled by comparing it with global_ctrl bits. */
-static bool intel_pmc_is_enabled(struct kvm_pmc *pmc)
+bool kvm_x86_pmu_pmc_is_enabled(struct kvm_pmc *pmc)
 {
 	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
 
 	return test_bit(pmc->idx, (unsigned long *)&pmu->global_ctrl);
 }
 
-static struct kvm_pmc *intel_pmc_idx_to_pmc(struct kvm_pmu *pmu, int pmc_idx)
+struct kvm_pmc *kvm_x86_pmu_pmc_idx_to_pmc(struct kvm_pmu *pmu, int pmc_idx)
 {
 	if (pmc_idx < INTEL_PMC_IDX_FIXED)
 		return get_gp_pmc(pmu, MSR_P6_EVNTSEL0 + pmc_idx,
@@ -111,7 +110,7 @@ static struct kvm_pmc *intel_pmc_idx_to_pmc(struct kvm_pmu *pmu, int pmc_idx)
 }
 
 /* returns 0 if idx's corresponding MSR exists; otherwise returns 1. */
-static int intel_is_valid_msr_idx(struct kvm_vcpu *vcpu, unsigned idx)
+int kvm_x86_pmu_is_valid_msr_idx(struct kvm_vcpu *vcpu, unsigned idx)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 	bool fixed = idx & (1u << 30);
@@ -122,8 +121,8 @@ static int intel_is_valid_msr_idx(struct kvm_vcpu *vcpu, unsigned idx)
 		(fixed && idx >= pmu->nr_arch_fixed_counters);
 }
 
-static struct kvm_pmc *intel_msr_idx_to_pmc(struct kvm_vcpu *vcpu,
-					    unsigned idx, u64 *mask)
+struct kvm_pmc *kvm_x86_pmu_msr_idx_to_pmc(struct kvm_vcpu *vcpu, unsigned idx,
+					   u64 *mask)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 	bool fixed = idx & (1u << 30);
@@ -140,7 +139,7 @@ static struct kvm_pmc *intel_msr_idx_to_pmc(struct kvm_vcpu *vcpu,
 	return &counters[idx];
 }
 
-static bool intel_is_valid_msr(struct kvm_vcpu *vcpu, u32 msr)
+bool kvm_x86_pmu_is_valid_msr(struct kvm_vcpu *vcpu, u32 msr)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 	int ret;
@@ -162,7 +161,7 @@ static bool intel_is_valid_msr(struct kvm_vcpu *vcpu, u32 msr)
 	return ret;
 }
 
-static int intel_pmu_get_msr(struct kvm_vcpu *vcpu, u32 msr, u64 *data)
+int kvm_x86_pmu_get_msr(struct kvm_vcpu *vcpu, u32 msr, u64 *data)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 	struct kvm_pmc *pmc;
@@ -198,7 +197,7 @@ static int intel_pmu_get_msr(struct kvm_vcpu *vcpu, u32 msr, u64 *data)
 	return 1;
 }
 
-static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+int kvm_x86_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 	struct kvm_pmc *pmc;
@@ -259,7 +258,7 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	return 1;
 }
 
-static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
+void kvm_x86_pmu_refresh(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 	struct x86_pmu_capability x86_pmu;
@@ -308,7 +307,7 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
 	pmu->global_ovf_ctrl_mask = pmu->global_ctrl_mask
 			& ~(MSR_CORE_PERF_GLOBAL_OVF_CTRL_OVF_BUF |
 			    MSR_CORE_PERF_GLOBAL_OVF_CTRL_COND_CHGD);
-	if (kvm_x86_ops->pt_supported())
+	if (kvm_x86_pt_supported())
 		pmu->global_ovf_ctrl_mask &=
 				~MSR_CORE_PERF_GLOBAL_OVF_CTRL_TRACE_TOPA_PMI;
 
@@ -319,7 +318,7 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
 		pmu->reserved_bits ^= HSW_IN_TX|HSW_IN_TX_CHECKPOINTED;
 }
 
-static void intel_pmu_init(struct kvm_vcpu *vcpu)
+void kvm_x86_pmu_init(struct kvm_vcpu *vcpu)
 {
 	int i;
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
@@ -337,7 +336,7 @@ static void intel_pmu_init(struct kvm_vcpu *vcpu)
 	}
 }
 
-static void intel_pmu_reset(struct kvm_vcpu *vcpu)
+void kvm_x86_pmu_reset(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 	struct kvm_pmc *pmc = NULL;
@@ -362,16 +361,16 @@ static void intel_pmu_reset(struct kvm_vcpu *vcpu)
 }
 
 struct kvm_pmu_ops intel_pmu_ops = {
-	.find_arch_event = intel_find_arch_event,
-	.find_fixed_event = intel_find_fixed_event,
-	.pmc_is_enabled = intel_pmc_is_enabled,
-	.pmc_idx_to_pmc = intel_pmc_idx_to_pmc,
-	.msr_idx_to_pmc = intel_msr_idx_to_pmc,
-	.is_valid_msr_idx = intel_is_valid_msr_idx,
-	.is_valid_msr = intel_is_valid_msr,
-	.get_msr = intel_pmu_get_msr,
-	.set_msr = intel_pmu_set_msr,
-	.refresh = intel_pmu_refresh,
-	.init = intel_pmu_init,
-	.reset = intel_pmu_reset,
+	.find_arch_event = kvm_x86_pmu_find_arch_event,
+	.find_fixed_event = kvm_x86_pmu_find_fixed_event,
+	.pmc_is_enabled = kvm_x86_pmu_pmc_is_enabled,
+	.pmc_idx_to_pmc = kvm_x86_pmu_pmc_idx_to_pmc,
+	.msr_idx_to_pmc = kvm_x86_pmu_msr_idx_to_pmc,
+	.is_valid_msr_idx = kvm_x86_pmu_is_valid_msr_idx,
+	.is_valid_msr = kvm_x86_pmu_is_valid_msr,
+	.get_msr = kvm_x86_pmu_get_msr,
+	.set_msr = kvm_x86_pmu_set_msr,
+	.refresh = kvm_x86_pmu_refresh,
+	.init = kvm_x86_pmu_init,
+	.reset = kvm_x86_pmu_reset,
 };
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 5d21a4ab28cf..bd17ad61f7e3 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -613,7 +613,7 @@ static inline bool cpu_need_virtualize_apic_accesses(struct kvm_vcpu *vcpu)
 	return flexpriority_enabled && lapic_in_kernel(vcpu);
 }
 
-static inline bool report_flexpriority(void)
+bool kvm_x86_cpu_has_accelerated_tpr(void)
 {
 	return flexpriority_enabled;
 }
@@ -771,7 +771,7 @@ static u32 vmx_read_guest_seg_ar(struct vcpu_vmx *vmx, unsigned seg)
 	return *p;
 }
 
-void update_exception_bitmap(struct kvm_vcpu *vcpu)
+void kvm_x86_update_bp_intercept(struct kvm_vcpu *vcpu)
 {
 	u32 eb;
 
@@ -1127,7 +1127,7 @@ void vmx_set_host_fs_gs(struct vmcs_host_state *host, u16 fs_sel, u16 gs_sel,
 	}
 }
 
-void vmx_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
+void kvm_x86_prepare_guest_switch(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	struct vmcs_host_state *host_state;
@@ -1362,7 +1362,7 @@ void vmx_vcpu_load_vmcs(struct kvm_vcpu *vcpu, int cpu)
  * Switches to specified vcpu, until a matching vcpu_put(), but assumes
  * vcpu mutex is already taken.
  */
-void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+void kvm_x86_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
@@ -1388,7 +1388,7 @@ static void vmx_vcpu_pi_put(struct kvm_vcpu *vcpu)
 		pi_set_sn(pi_desc);
 }
 
-static void vmx_vcpu_put(struct kvm_vcpu *vcpu)
+void kvm_x86_vcpu_put(struct kvm_vcpu *vcpu)
 {
 	vmx_vcpu_pi_put(vcpu);
 
@@ -1400,9 +1400,8 @@ static bool emulation_required(struct kvm_vcpu *vcpu)
 	return emulate_invalid_guest_state && !guest_state_valid(vcpu);
 }
 
-static void vmx_decache_cr0_guest_bits(struct kvm_vcpu *vcpu);
 
-unsigned long vmx_get_rflags(struct kvm_vcpu *vcpu)
+unsigned long kvm_x86_get_rflags(struct kvm_vcpu *vcpu)
 {
 	unsigned long rflags, save_rflags;
 
@@ -1419,9 +1418,9 @@ unsigned long vmx_get_rflags(struct kvm_vcpu *vcpu)
 	return to_vmx(vcpu)->rflags;
 }
 
-void vmx_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags)
+void kvm_x86_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags)
 {
-	unsigned long old_rflags = vmx_get_rflags(vcpu);
+	unsigned long old_rflags = kvm_x86_get_rflags(vcpu);
 
 	__set_bit(VCPU_EXREG_RFLAGS, (ulong *)&vcpu->arch.regs_avail);
 	to_vmx(vcpu)->rflags = rflags;
@@ -1435,7 +1434,7 @@ void vmx_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags)
 		to_vmx(vcpu)->emulation_required = emulation_required(vcpu);
 }
 
-u32 vmx_get_interrupt_shadow(struct kvm_vcpu *vcpu)
+u32 kvm_x86_get_interrupt_shadow(struct kvm_vcpu *vcpu)
 {
 	u32 interruptibility = vmcs_read32(GUEST_INTERRUPTIBILITY_INFO);
 	int ret = 0;
@@ -1448,7 +1447,7 @@ u32 vmx_get_interrupt_shadow(struct kvm_vcpu *vcpu)
 	return ret;
 }
 
-void vmx_set_interrupt_shadow(struct kvm_vcpu *vcpu, int mask)
+void kvm_x86_set_interrupt_shadow(struct kvm_vcpu *vcpu, int mask)
 {
 	u32 interruptibility_old = vmcs_read32(GUEST_INTERRUPTIBILITY_INFO);
 	u32 interruptibility = interruptibility_old;
@@ -1536,7 +1535,7 @@ static int vmx_rtit_ctl_check(struct kvm_vcpu *vcpu, u64 data)
 	return 0;
 }
 
-static int skip_emulated_instruction(struct kvm_vcpu *vcpu)
+int kvm_x86_skip_emulated_instruction(struct kvm_vcpu *vcpu)
 {
 	unsigned long rip;
 
@@ -1559,7 +1558,7 @@ static int skip_emulated_instruction(struct kvm_vcpu *vcpu)
 	}
 
 	/* skipping an emulated instruction also counts */
-	vmx_set_interrupt_shadow(vcpu, 0);
+	kvm_x86_set_interrupt_shadow(vcpu, 0);
 
 	return 1;
 }
@@ -1577,7 +1576,7 @@ static void vmx_clear_hlt(struct kvm_vcpu *vcpu)
 		vmcs_write32(GUEST_ACTIVITY_STATE, GUEST_ACTIVITY_ACTIVE);
 }
 
-static void vmx_queue_exception(struct kvm_vcpu *vcpu)
+void kvm_x86_queue_exception(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	unsigned nr = vcpu->arch.exception.nr;
@@ -1614,12 +1613,12 @@ static void vmx_queue_exception(struct kvm_vcpu *vcpu)
 	vmx_clear_hlt(vcpu);
 }
 
-static bool vmx_rdtscp_supported(void)
+bool kvm_x86_rdtscp_supported(void)
 {
 	return cpu_has_vmx_rdtscp();
 }
 
-static bool vmx_invpcid_supported(void)
+bool kvm_x86_invpcid_supported(void)
 {
 	return cpu_has_vmx_invpcid();
 }
@@ -1677,7 +1676,7 @@ static void setup_msrs(struct vcpu_vmx *vmx)
 		vmx_update_msr_bitmap(&vmx->vcpu);
 }
 
-static u64 vmx_read_l1_tsc_offset(struct kvm_vcpu *vcpu)
+u64 kvm_x86_read_l1_tsc_offset(struct kvm_vcpu *vcpu)
 {
 	struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
 
@@ -1688,7 +1687,7 @@ static u64 vmx_read_l1_tsc_offset(struct kvm_vcpu *vcpu)
 	return vcpu->arch.tsc_offset;
 }
 
-static u64 vmx_write_l1_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
+u64 kvm_x86_write_l1_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
 {
 	struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
 	u64 g_tsc_offset = 0;
@@ -1729,7 +1728,7 @@ static inline bool vmx_feature_control_msr_valid(struct kvm_vcpu *vcpu,
 	return !(val & ~valid_bits);
 }
 
-static int vmx_get_msr_feature(struct kvm_msr_entry *msr)
+int kvm_x86_get_msr_feature(struct kvm_msr_entry *msr)
 {
 	switch (msr->index) {
 	case MSR_IA32_VMX_BASIC ... MSR_IA32_VMX_VMFUNC:
@@ -1748,7 +1747,7 @@ static int vmx_get_msr_feature(struct kvm_msr_entry *msr)
  * Returns 0 on success, non-0 otherwise.
  * Assumes vcpu_load() was already called.
  */
-static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+int kvm_x86_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	struct shared_msr_entry *msr;
@@ -1813,7 +1812,7 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		return vmx_get_vmx_msr(&vmx->nested.msrs, msr_info->index,
 				       &msr_info->data);
 	case MSR_IA32_XSS:
-		if (!vmx_xsaves_supported() ||
+		if (!kvm_x86_xsaves_supported() ||
 		    (!msr_info->host_initiated &&
 		     !(guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
 		       guest_cpuid_has(vcpu, X86_FEATURE_XSAVES))))
@@ -1888,7 +1887,7 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
  * Returns 0 on success, non-0 otherwise.
  * Assumes vcpu_load() was already called.
  */
-static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+int kvm_x86_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	struct shared_msr_entry *msr;
@@ -2056,7 +2055,7 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 			return 1;
 		return vmx_set_vmx_msr(vcpu, msr_index, data);
 	case MSR_IA32_XSS:
-		if (!vmx_xsaves_supported() ||
+		if (!kvm_x86_xsaves_supported() ||
 		    (!msr_info->host_initiated &&
 		     !(guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
 		       guest_cpuid_has(vcpu, X86_FEATURE_XSAVES))))
@@ -2160,7 +2159,7 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	return ret;
 }
 
-static void vmx_cache_reg(struct kvm_vcpu *vcpu, enum kvm_reg reg)
+void kvm_x86_cache_reg(struct kvm_vcpu *vcpu, enum kvm_reg reg)
 {
 	__set_bit(reg, (unsigned long *)&vcpu->arch.regs_avail);
 	switch (reg) {
@@ -2179,12 +2178,12 @@ static void vmx_cache_reg(struct kvm_vcpu *vcpu, enum kvm_reg reg)
 	}
 }
 
-static __init int cpu_has_kvm_support(void)
+__init int kvm_x86_cpu_has_kvm_support(void)
 {
 	return cpu_has_vmx();
 }
 
-static __init int vmx_disabled_by_bios(void)
+__init int kvm_x86_disabled_by_bios(void)
 {
 	u64 msr;
 
@@ -2219,7 +2218,7 @@ static void kvm_cpu_vmxon(u64 addr)
 	asm volatile ("vmxon %0" : : "m"(addr));
 }
 
-static int hardware_enable(void)
+int kvm_x86_hardware_enable(void)
 {
 	int cpu = raw_smp_processor_id();
 	u64 phys_addr = __pa(per_cpu(vmxarea, cpu));
@@ -2291,7 +2290,7 @@ static void kvm_cpu_vmxoff(void)
 	cr4_clear_bits(X86_CR4_VMXE);
 }
 
-static void hardware_disable(void)
+void kvm_x86_hardware_disable(void)
 {
 	vmclear_local_loaded_vmcss();
 	kvm_cpu_vmxoff();
@@ -2651,7 +2650,7 @@ static void fix_pmode_seg(struct kvm_vcpu *vcpu, int seg,
 		save->dpl = save->selector & SEGMENT_RPL_MASK;
 		save->s = 1;
 	}
-	vmx_set_segment(vcpu, save, seg);
+	kvm_x86_set_segment(vcpu, save, seg);
 }
 
 static void enter_pmode(struct kvm_vcpu *vcpu)
@@ -2663,18 +2662,18 @@ static void enter_pmode(struct kvm_vcpu *vcpu)
 	 * Update real mode segment cache. It may not be up-to-date if a segment
 	 * register was written while the vcpu was in guest mode.
 	 */
-	vmx_get_segment(vcpu, &vmx->rmode.segs[VCPU_SREG_ES], VCPU_SREG_ES);
-	vmx_get_segment(vcpu, &vmx->rmode.segs[VCPU_SREG_DS], VCPU_SREG_DS);
-	vmx_get_segment(vcpu, &vmx->rmode.segs[VCPU_SREG_FS], VCPU_SREG_FS);
-	vmx_get_segment(vcpu, &vmx->rmode.segs[VCPU_SREG_GS], VCPU_SREG_GS);
-	vmx_get_segment(vcpu, &vmx->rmode.segs[VCPU_SREG_SS], VCPU_SREG_SS);
-	vmx_get_segment(vcpu, &vmx->rmode.segs[VCPU_SREG_CS], VCPU_SREG_CS);
+	kvm_x86_get_segment(vcpu, &vmx->rmode.segs[VCPU_SREG_ES], VCPU_SREG_ES);
+	kvm_x86_get_segment(vcpu, &vmx->rmode.segs[VCPU_SREG_DS], VCPU_SREG_DS);
+	kvm_x86_get_segment(vcpu, &vmx->rmode.segs[VCPU_SREG_FS], VCPU_SREG_FS);
+	kvm_x86_get_segment(vcpu, &vmx->rmode.segs[VCPU_SREG_GS], VCPU_SREG_GS);
+	kvm_x86_get_segment(vcpu, &vmx->rmode.segs[VCPU_SREG_SS], VCPU_SREG_SS);
+	kvm_x86_get_segment(vcpu, &vmx->rmode.segs[VCPU_SREG_CS], VCPU_SREG_CS);
 
 	vmx->rmode.vm86_active = 0;
 
 	vmx_segment_cache_clear(vmx);
 
-	vmx_set_segment(vcpu, &vmx->rmode.segs[VCPU_SREG_TR], VCPU_SREG_TR);
+	kvm_x86_set_segment(vcpu, &vmx->rmode.segs[VCPU_SREG_TR], VCPU_SREG_TR);
 
 	flags = vmcs_readl(GUEST_RFLAGS);
 	flags &= RMODE_GUEST_OWNED_EFLAGS_BITS;
@@ -2684,7 +2683,7 @@ static void enter_pmode(struct kvm_vcpu *vcpu)
 	vmcs_writel(GUEST_CR4, (vmcs_readl(GUEST_CR4) & ~X86_CR4_VME) |
 			(vmcs_readl(CR4_READ_SHADOW) & X86_CR4_VME));
 
-	update_exception_bitmap(vcpu);
+	kvm_x86_update_bp_intercept(vcpu);
 
 	fix_pmode_seg(vcpu, VCPU_SREG_CS, &vmx->rmode.segs[VCPU_SREG_CS]);
 	fix_pmode_seg(vcpu, VCPU_SREG_SS, &vmx->rmode.segs[VCPU_SREG_SS]);
@@ -2733,13 +2732,13 @@ static void enter_rmode(struct kvm_vcpu *vcpu)
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	struct kvm_vmx *kvm_vmx = to_kvm_vmx(vcpu->kvm);
 
-	vmx_get_segment(vcpu, &vmx->rmode.segs[VCPU_SREG_TR], VCPU_SREG_TR);
-	vmx_get_segment(vcpu, &vmx->rmode.segs[VCPU_SREG_ES], VCPU_SREG_ES);
-	vmx_get_segment(vcpu, &vmx->rmode.segs[VCPU_SREG_DS], VCPU_SREG_DS);
-	vmx_get_segment(vcpu, &vmx->rmode.segs[VCPU_SREG_FS], VCPU_SREG_FS);
-	vmx_get_segment(vcpu, &vmx->rmode.segs[VCPU_SREG_GS], VCPU_SREG_GS);
-	vmx_get_segment(vcpu, &vmx->rmode.segs[VCPU_SREG_SS], VCPU_SREG_SS);
-	vmx_get_segment(vcpu, &vmx->rmode.segs[VCPU_SREG_CS], VCPU_SREG_CS);
+	kvm_x86_get_segment(vcpu, &vmx->rmode.segs[VCPU_SREG_TR], VCPU_SREG_TR);
+	kvm_x86_get_segment(vcpu, &vmx->rmode.segs[VCPU_SREG_ES], VCPU_SREG_ES);
+	kvm_x86_get_segment(vcpu, &vmx->rmode.segs[VCPU_SREG_DS], VCPU_SREG_DS);
+	kvm_x86_get_segment(vcpu, &vmx->rmode.segs[VCPU_SREG_FS], VCPU_SREG_FS);
+	kvm_x86_get_segment(vcpu, &vmx->rmode.segs[VCPU_SREG_GS], VCPU_SREG_GS);
+	kvm_x86_get_segment(vcpu, &vmx->rmode.segs[VCPU_SREG_SS], VCPU_SREG_SS);
+	kvm_x86_get_segment(vcpu, &vmx->rmode.segs[VCPU_SREG_CS], VCPU_SREG_CS);
 
 	vmx->rmode.vm86_active = 1;
 
@@ -2764,7 +2763,7 @@ static void enter_rmode(struct kvm_vcpu *vcpu)
 
 	vmcs_writel(GUEST_RFLAGS, flags);
 	vmcs_writel(GUEST_CR4, vmcs_readl(GUEST_CR4) | X86_CR4_VME);
-	update_exception_bitmap(vcpu);
+	kvm_x86_update_bp_intercept(vcpu);
 
 	fix_rmode_seg(VCPU_SREG_SS, &vmx->rmode.segs[VCPU_SREG_SS]);
 	fix_rmode_seg(VCPU_SREG_CS, &vmx->rmode.segs[VCPU_SREG_CS]);
@@ -2776,7 +2775,7 @@ static void enter_rmode(struct kvm_vcpu *vcpu)
 	kvm_mmu_reset_context(vcpu);
 }
 
-void vmx_set_efer(struct kvm_vcpu *vcpu, u64 efer)
+void kvm_x86_set_efer(struct kvm_vcpu *vcpu, u64 efer)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	struct shared_msr_entry *msr = find_msr_entry(vmx, MSR_EFER);
@@ -2812,18 +2811,18 @@ static void enter_lmode(struct kvm_vcpu *vcpu)
 			     (guest_tr_ar & ~VMX_AR_TYPE_MASK)
 			     | VMX_AR_TYPE_BUSY_64_TSS);
 	}
-	vmx_set_efer(vcpu, vcpu->arch.efer | EFER_LMA);
+	kvm_x86_set_efer(vcpu, vcpu->arch.efer | EFER_LMA);
 }
 
 static void exit_lmode(struct kvm_vcpu *vcpu)
 {
 	vm_entry_controls_clearbit(to_vmx(vcpu), VM_ENTRY_IA32E_MODE);
-	vmx_set_efer(vcpu, vcpu->arch.efer & ~EFER_LMA);
+	kvm_x86_set_efer(vcpu, vcpu->arch.efer & ~EFER_LMA);
 }
 
 #endif
 
-static void vmx_flush_tlb_gva(struct kvm_vcpu *vcpu, gva_t addr)
+void kvm_x86_tlb_flush_gva(struct kvm_vcpu *vcpu, gva_t addr)
 {
 	int vpid = to_vmx(vcpu)->vpid;
 
@@ -2837,7 +2836,7 @@ static void vmx_flush_tlb_gva(struct kvm_vcpu *vcpu, gva_t addr)
 	 */
 }
 
-static void vmx_decache_cr0_guest_bits(struct kvm_vcpu *vcpu)
+void kvm_x86_decache_cr0_guest_bits(struct kvm_vcpu *vcpu)
 {
 	ulong cr0_guest_owned_bits = vcpu->arch.cr0_guest_owned_bits;
 
@@ -2845,14 +2844,14 @@ static void vmx_decache_cr0_guest_bits(struct kvm_vcpu *vcpu)
 	vcpu->arch.cr0 |= vmcs_readl(GUEST_CR0) & cr0_guest_owned_bits;
 }
 
-static void vmx_decache_cr3(struct kvm_vcpu *vcpu)
+void kvm_x86_decache_cr3(struct kvm_vcpu *vcpu)
 {
 	if (enable_unrestricted_guest || (enable_ept && is_paging(vcpu)))
 		vcpu->arch.cr3 = vmcs_readl(GUEST_CR3);
 	__set_bit(VCPU_EXREG_CR3, (ulong *)&vcpu->arch.regs_avail);
 }
 
-static void vmx_decache_cr4_guest_bits(struct kvm_vcpu *vcpu)
+void kvm_x86_decache_cr4_guest_bits(struct kvm_vcpu *vcpu)
 {
 	ulong cr4_guest_owned_bits = vcpu->arch.cr4_guest_owned_bits;
 
@@ -2900,26 +2899,26 @@ static void ept_update_paging_mode_cr0(unsigned long *hw_cr0,
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
 	if (!test_bit(VCPU_EXREG_CR3, (ulong *)&vcpu->arch.regs_avail))
-		vmx_decache_cr3(vcpu);
+		kvm_x86_decache_cr3(vcpu);
 	if (!(cr0 & X86_CR0_PG)) {
 		/* From paging/starting to nonpaging */
 		exec_controls_setbit(vmx, CPU_BASED_CR3_LOAD_EXITING |
 					  CPU_BASED_CR3_STORE_EXITING);
 		vcpu->arch.cr0 = cr0;
-		vmx_set_cr4(vcpu, kvm_read_cr4(vcpu));
+		kvm_x86_set_cr4(vcpu, kvm_read_cr4(vcpu));
 	} else if (!is_paging(vcpu)) {
 		/* From nonpaging to paging */
 		exec_controls_clearbit(vmx, CPU_BASED_CR3_LOAD_EXITING |
 					    CPU_BASED_CR3_STORE_EXITING);
 		vcpu->arch.cr0 = cr0;
-		vmx_set_cr4(vcpu, kvm_read_cr4(vcpu));
+		kvm_x86_set_cr4(vcpu, kvm_read_cr4(vcpu));
 	}
 
 	if (!(cr0 & X86_CR0_WP))
 		*hw_cr0 &= ~X86_CR0_WP;
 }
 
-void vmx_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
+void kvm_x86_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	unsigned long hw_cr0;
@@ -2957,7 +2956,7 @@ void vmx_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
 	vmx->emulation_required = emulation_required(vcpu);
 }
 
-static int get_ept_level(struct kvm_vcpu *vcpu)
+int kvm_x86_get_tdp_level(struct kvm_vcpu *vcpu)
 {
 	if (cpu_has_vmx_ept_5levels() && (cpuid_maxphyaddr(vcpu) > 48))
 		return 5;
@@ -2968,7 +2967,7 @@ u64 construct_eptp(struct kvm_vcpu *vcpu, unsigned long root_hpa)
 {
 	u64 eptp = VMX_EPTP_MT_WB;
 
-	eptp |= (get_ept_level(vcpu) == 5) ? VMX_EPTP_PWL_5 : VMX_EPTP_PWL_4;
+	eptp |= (kvm_x86_get_tdp_level(vcpu) == 5) ? VMX_EPTP_PWL_5 : VMX_EPTP_PWL_4;
 
 	if (enable_ept_ad_bits &&
 	    (!is_guest_mode(vcpu) || nested_ept_ad_enabled(vcpu)))
@@ -2978,7 +2977,7 @@ u64 construct_eptp(struct kvm_vcpu *vcpu, unsigned long root_hpa)
 	return eptp;
 }
 
-void vmx_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
+void kvm_x86_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
 {
 	struct kvm *kvm = vcpu->kvm;
 	unsigned long guest_cr3;
@@ -3008,7 +3007,7 @@ void vmx_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
 	vmcs_writel(GUEST_CR3, guest_cr3);
 }
 
-int vmx_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
+int kvm_x86_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	/*
@@ -3026,7 +3025,7 @@ int vmx_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
 	else
 		hw_cr4 |= KVM_PMODE_VM_CR4_ALWAYS_ON;
 
-	if (!boot_cpu_has(X86_FEATURE_UMIP) && vmx_umip_emulated()) {
+	if (!boot_cpu_has(X86_FEATURE_UMIP) && kvm_x86_umip_emulated()) {
 		if (cr4 & X86_CR4_UMIP) {
 			secondary_exec_controls_setbit(vmx, SECONDARY_EXEC_DESC);
 			hw_cr4 &= ~X86_CR4_UMIP;
@@ -3083,7 +3082,8 @@ int vmx_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
 	return 0;
 }
 
-void vmx_get_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg)
+void kvm_x86_get_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var,
+			 int seg)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	u32 ar;
@@ -3119,18 +3119,18 @@ void vmx_get_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg)
 	var->g = (ar >> 15) & 1;
 }
 
-static u64 vmx_get_segment_base(struct kvm_vcpu *vcpu, int seg)
+u64 kvm_x86_get_segment_base(struct kvm_vcpu *vcpu, int seg)
 {
 	struct kvm_segment s;
 
 	if (to_vmx(vcpu)->rmode.vm86_active) {
-		vmx_get_segment(vcpu, &s, seg);
+		kvm_x86_get_segment(vcpu, &s, seg);
 		return s.base;
 	}
 	return vmx_read_guest_seg_base(to_vmx(vcpu), seg);
 }
 
-int vmx_get_cpl(struct kvm_vcpu *vcpu)
+int kvm_x86_get_cpl(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
@@ -3162,7 +3162,8 @@ static u32 vmx_segment_access_rights(struct kvm_segment *var)
 	return ar;
 }
 
-void vmx_set_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg)
+void kvm_x86_set_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var,
+			 int seg)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	const struct kvm_vmx_segment_field *sf = &kvm_vmx_segment_fields[seg];
@@ -3202,7 +3203,7 @@ void vmx_set_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg)
 	vmx->emulation_required = emulation_required(vcpu);
 }
 
-static void vmx_get_cs_db_l_bits(struct kvm_vcpu *vcpu, int *db, int *l)
+void kvm_x86_get_cs_db_l_bits(struct kvm_vcpu *vcpu, int *db, int *l)
 {
 	u32 ar = vmx_read_guest_seg_ar(to_vmx(vcpu), VCPU_SREG_CS);
 
@@ -3210,25 +3211,25 @@ static void vmx_get_cs_db_l_bits(struct kvm_vcpu *vcpu, int *db, int *l)
 	*l = (ar >> 13) & 1;
 }
 
-static void vmx_get_idt(struct kvm_vcpu *vcpu, struct desc_ptr *dt)
+void kvm_x86_get_idt(struct kvm_vcpu *vcpu, struct desc_ptr *dt)
 {
 	dt->size = vmcs_read32(GUEST_IDTR_LIMIT);
 	dt->address = vmcs_readl(GUEST_IDTR_BASE);
 }
 
-static void vmx_set_idt(struct kvm_vcpu *vcpu, struct desc_ptr *dt)
+void kvm_x86_set_idt(struct kvm_vcpu *vcpu, struct desc_ptr *dt)
 {
 	vmcs_write32(GUEST_IDTR_LIMIT, dt->size);
 	vmcs_writel(GUEST_IDTR_BASE, dt->address);
 }
 
-static void vmx_get_gdt(struct kvm_vcpu *vcpu, struct desc_ptr *dt)
+void kvm_x86_get_gdt(struct kvm_vcpu *vcpu, struct desc_ptr *dt)
 {
 	dt->size = vmcs_read32(GUEST_GDTR_LIMIT);
 	dt->address = vmcs_readl(GUEST_GDTR_BASE);
 }
 
-static void vmx_set_gdt(struct kvm_vcpu *vcpu, struct desc_ptr *dt)
+void kvm_x86_set_gdt(struct kvm_vcpu *vcpu, struct desc_ptr *dt)
 {
 	vmcs_write32(GUEST_GDTR_LIMIT, dt->size);
 	vmcs_writel(GUEST_GDTR_BASE, dt->address);
@@ -3239,7 +3240,7 @@ static bool rmode_segment_valid(struct kvm_vcpu *vcpu, int seg)
 	struct kvm_segment var;
 	u32 ar;
 
-	vmx_get_segment(vcpu, &var, seg);
+	kvm_x86_get_segment(vcpu, &var, seg);
 	var.dpl = 0x3;
 	if (seg == VCPU_SREG_CS)
 		var.type = 0x3;
@@ -3260,7 +3261,7 @@ static bool code_segment_valid(struct kvm_vcpu *vcpu)
 	struct kvm_segment cs;
 	unsigned int cs_rpl;
 
-	vmx_get_segment(vcpu, &cs, VCPU_SREG_CS);
+	kvm_x86_get_segment(vcpu, &cs, VCPU_SREG_CS);
 	cs_rpl = cs.selector & SEGMENT_RPL_MASK;
 
 	if (cs.unusable)
@@ -3288,7 +3289,7 @@ static bool stack_segment_valid(struct kvm_vcpu *vcpu)
 	struct kvm_segment ss;
 	unsigned int ss_rpl;
 
-	vmx_get_segment(vcpu, &ss, VCPU_SREG_SS);
+	kvm_x86_get_segment(vcpu, &ss, VCPU_SREG_SS);
 	ss_rpl = ss.selector & SEGMENT_RPL_MASK;
 
 	if (ss.unusable)
@@ -3310,7 +3311,7 @@ static bool data_segment_valid(struct kvm_vcpu *vcpu, int seg)
 	struct kvm_segment var;
 	unsigned int rpl;
 
-	vmx_get_segment(vcpu, &var, seg);
+	kvm_x86_get_segment(vcpu, &var, seg);
 	rpl = var.selector & SEGMENT_RPL_MASK;
 
 	if (var.unusable)
@@ -3334,7 +3335,7 @@ static bool tr_valid(struct kvm_vcpu *vcpu)
 {
 	struct kvm_segment tr;
 
-	vmx_get_segment(vcpu, &tr, VCPU_SREG_TR);
+	kvm_x86_get_segment(vcpu, &tr, VCPU_SREG_TR);
 
 	if (tr.unusable)
 		return false;
@@ -3352,7 +3353,7 @@ static bool ldtr_valid(struct kvm_vcpu *vcpu)
 {
 	struct kvm_segment ldtr;
 
-	vmx_get_segment(vcpu, &ldtr, VCPU_SREG_LDTR);
+	kvm_x86_get_segment(vcpu, &ldtr, VCPU_SREG_LDTR);
 
 	if (ldtr.unusable)
 		return true;
@@ -3370,8 +3371,8 @@ static bool cs_ss_rpl_check(struct kvm_vcpu *vcpu)
 {
 	struct kvm_segment cs, ss;
 
-	vmx_get_segment(vcpu, &cs, VCPU_SREG_CS);
-	vmx_get_segment(vcpu, &ss, VCPU_SREG_SS);
+	kvm_x86_get_segment(vcpu, &cs, VCPU_SREG_CS);
+	kvm_x86_get_segment(vcpu, &ss, VCPU_SREG_SS);
 
 	return ((cs.selector & SEGMENT_RPL_MASK) ==
 		 (ss.selector & SEGMENT_RPL_MASK));
@@ -3388,7 +3389,7 @@ static bool guest_state_valid(struct kvm_vcpu *vcpu)
 		return true;
 
 	/* real mode guest state checks */
-	if (!is_protmode(vcpu) || (vmx_get_rflags(vcpu) & X86_EFLAGS_VM)) {
+	if (!is_protmode(vcpu) || (kvm_x86_get_rflags(vcpu) & X86_EFLAGS_VM)) {
 		if (!rmode_segment_valid(vcpu, VCPU_SREG_CS))
 			return false;
 		if (!rmode_segment_valid(vcpu, VCPU_SREG_SS))
@@ -3739,12 +3740,12 @@ void pt_update_intercept_for_msr(struct vcpu_vmx *vmx)
 	}
 }
 
-static bool vmx_get_enable_apicv(struct kvm_vcpu *vcpu)
+bool kvm_x86_get_enable_apicv(struct kvm_vcpu *vcpu)
 {
 	return enable_apicv;
 }
 
-static bool vmx_guest_apic_has_interrupt(struct kvm_vcpu *vcpu)
+bool kvm_x86_guest_apic_has_interrupt(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	void *vapic_page;
@@ -3830,7 +3831,7 @@ static int vmx_deliver_nested_posted_interrupt(struct kvm_vcpu *vcpu,
  * 2. If target vcpu isn't running(root mode), kick it to pick up the
  * interrupt from PIR in next vmentry.
  */
-static void vmx_deliver_posted_interrupt(struct kvm_vcpu *vcpu, int vector)
+void kvm_x86_deliver_posted_interrupt(struct kvm_vcpu *vcpu, int vector)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	int r;
@@ -3854,7 +3855,7 @@ static void vmx_deliver_posted_interrupt(struct kvm_vcpu *vcpu, int vector)
  * Set up the vmcs's constant host-state fields, i.e., host-state fields that
  * will not change in the lifetime of the guest.
  * Note that host-state that does change is set elsewhere. E.g., host-state
- * that is set differently for each CPU is set in vmx_vcpu_load(), not here.
+ * that is set differently for each CPU is set in kvm_x86_vcpu_load(), not here.
  */
 void vmx_set_constant_host_state(struct vcpu_vmx *vmx)
 {
@@ -3940,7 +3941,7 @@ u32 vmx_pin_based_exec_ctrl(struct vcpu_vmx *vmx)
 	return pin_based_exec_ctrl;
 }
 
-static void vmx_refresh_apicv_exec_ctrl(struct kvm_vcpu *vcpu)
+void kvm_x86_refresh_apicv_exec_ctrl(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
@@ -4026,7 +4027,7 @@ static void vmx_compute_secondary_exec_control(struct vcpu_vmx *vmx)
 	if (!enable_pml)
 		exec_control &= ~SECONDARY_EXEC_ENABLE_PML;
 
-	if (vmx_xsaves_supported()) {
+	if (kvm_x86_xsaves_supported()) {
 		/* Exposing XSAVES only when XSAVE is exposed */
 		bool xsaves_enabled =
 			guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
@@ -4045,7 +4046,7 @@ static void vmx_compute_secondary_exec_control(struct vcpu_vmx *vmx)
 		}
 	}
 
-	if (vmx_rdtscp_supported()) {
+	if (kvm_x86_rdtscp_supported()) {
 		bool rdtscp_enabled = guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP);
 		if (!rdtscp_enabled)
 			exec_control &= ~SECONDARY_EXEC_RDTSCP;
@@ -4060,7 +4061,7 @@ static void vmx_compute_secondary_exec_control(struct vcpu_vmx *vmx)
 		}
 	}
 
-	if (vmx_invpcid_supported()) {
+	if (kvm_x86_invpcid_supported()) {
 		/* Exposing INVPCID only when PCID is exposed */
 		bool invpcid_enabled =
 			guest_cpuid_has(vcpu, X86_FEATURE_INVPCID) &&
@@ -4234,7 +4235,7 @@ static void vmx_vcpu_setup(struct vcpu_vmx *vmx)
 
 	set_cr4_guest_host_mask(vmx);
 
-	if (vmx_xsaves_supported())
+	if (kvm_x86_xsaves_supported())
 		vmcs_write64(XSS_EXIT_BITMAP, VMX_XSS_EXIT_BITMAP);
 
 	if (enable_pml) {
@@ -4253,7 +4254,7 @@ static void vmx_vcpu_setup(struct vcpu_vmx *vmx)
 	}
 }
 
-static void vmx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
+void kvm_x86_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	struct msr_data apic_base_msr;
@@ -4341,34 +4342,34 @@ static void vmx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 
 	cr0 = X86_CR0_NW | X86_CR0_CD | X86_CR0_ET;
 	vmx->vcpu.arch.cr0 = cr0;
-	vmx_set_cr0(vcpu, cr0); /* enter rmode */
-	vmx_set_cr4(vcpu, 0);
-	vmx_set_efer(vcpu, 0);
+	kvm_x86_set_cr0(vcpu, cr0); /* enter rmode */
+	kvm_x86_set_cr4(vcpu, 0);
+	kvm_x86_set_efer(vcpu, 0);
 
-	update_exception_bitmap(vcpu);
+	kvm_x86_update_bp_intercept(vcpu);
 
 	vpid_sync_context(vmx->vpid);
 	if (init_event)
 		vmx_clear_hlt(vcpu);
 }
 
-static void enable_irq_window(struct kvm_vcpu *vcpu)
+void kvm_x86_enable_irq_window(struct kvm_vcpu *vcpu)
 {
 	exec_controls_setbit(to_vmx(vcpu), CPU_BASED_VIRTUAL_INTR_PENDING);
 }
 
-static void enable_nmi_window(struct kvm_vcpu *vcpu)
+void kvm_x86_enable_nmi_window(struct kvm_vcpu *vcpu)
 {
 	if (!enable_vnmi ||
 	    vmcs_read32(GUEST_INTERRUPTIBILITY_INFO) & GUEST_INTR_STATE_STI) {
-		enable_irq_window(vcpu);
+		kvm_x86_enable_irq_window(vcpu);
 		return;
 	}
 
 	exec_controls_setbit(to_vmx(vcpu), CPU_BASED_VIRTUAL_NMI_PENDING);
 }
 
-static void vmx_inject_irq(struct kvm_vcpu *vcpu)
+void kvm_x86_set_irq(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	uint32_t intr;
@@ -4396,7 +4397,7 @@ static void vmx_inject_irq(struct kvm_vcpu *vcpu)
 	vmx_clear_hlt(vcpu);
 }
 
-static void vmx_inject_nmi(struct kvm_vcpu *vcpu)
+void kvm_x86_set_nmi(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
@@ -4427,7 +4428,7 @@ static void vmx_inject_nmi(struct kvm_vcpu *vcpu)
 	vmx_clear_hlt(vcpu);
 }
 
-bool vmx_get_nmi_mask(struct kvm_vcpu *vcpu)
+bool kvm_x86_get_nmi_mask(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	bool masked;
@@ -4441,7 +4442,7 @@ bool vmx_get_nmi_mask(struct kvm_vcpu *vcpu)
 	return masked;
 }
 
-void vmx_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked)
+void kvm_x86_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
@@ -4461,7 +4462,7 @@ void vmx_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked)
 	}
 }
 
-static int vmx_nmi_allowed(struct kvm_vcpu *vcpu)
+int kvm_x86_nmi_allowed(struct kvm_vcpu *vcpu)
 {
 	if (to_vmx(vcpu)->nested.nested_run_pending)
 		return 0;
@@ -4475,7 +4476,7 @@ static int vmx_nmi_allowed(struct kvm_vcpu *vcpu)
 		   | GUEST_INTR_STATE_NMI));
 }
 
-static int vmx_interrupt_allowed(struct kvm_vcpu *vcpu)
+int kvm_x86_interrupt_allowed(struct kvm_vcpu *vcpu)
 {
 	return (!to_vmx(vcpu)->nested.nested_run_pending &&
 		vmcs_readl(GUEST_RFLAGS) & X86_EFLAGS_IF) &&
@@ -4483,7 +4484,7 @@ static int vmx_interrupt_allowed(struct kvm_vcpu *vcpu)
 			(GUEST_INTR_STATE_STI | GUEST_INTR_STATE_MOV_SS));
 }
 
-static int vmx_set_tss_addr(struct kvm *kvm, unsigned int addr)
+int kvm_x86_set_tss_addr(struct kvm *kvm, unsigned int addr)
 {
 	int ret;
 
@@ -4498,7 +4499,7 @@ static int vmx_set_tss_addr(struct kvm *kvm, unsigned int addr)
 	return init_rmode_tss(kvm);
 }
 
-static int vmx_set_identity_map_addr(struct kvm *kvm, u64 ident_addr)
+int kvm_x86_set_identity_map_addr(struct kvm *kvm, u64 ident_addr)
 {
 	to_kvm_vmx(kvm)->ept_identity_map_addr = ident_addr;
 	return 0;
@@ -4584,7 +4585,7 @@ static void kvm_machine_check(void)
 
 static int handle_machine_check(struct kvm_vcpu *vcpu)
 {
-	/* handled by vmx_vcpu_run() */
+	/* handled by kvm_x86_run() */
 	return 1;
 }
 
@@ -4663,7 +4664,7 @@ static int handle_exception_nmi(struct kvm_vcpu *vcpu)
 			vcpu->arch.dr6 &= ~DR_TRAP_BITS;
 			vcpu->arch.dr6 |= dr6 | DR6_RTM;
 			if (is_icebp(intr_info))
-				WARN_ON(!skip_emulated_instruction(vcpu));
+				WARN_ON(!kvm_x86_skip_emulated_instruction(vcpu));
 
 			kvm_queue_exception(vcpu, DB_VECTOR);
 			return 1;
@@ -4727,8 +4728,7 @@ static int handle_io(struct kvm_vcpu *vcpu)
 	return kvm_fast_pio(vcpu, size, port, in);
 }
 
-static void
-vmx_patch_hypercall(struct kvm_vcpu *vcpu, unsigned char *hypercall)
+void kvm_x86_patch_hypercall(struct kvm_vcpu *vcpu, unsigned char *hypercall)
 {
 	/*
 	 * Patch in the VMCALL instruction:
@@ -4842,7 +4842,7 @@ static int handle_cr(struct kvm_vcpu *vcpu)
 		break;
 	case 2: /* clts */
 		WARN_ONCE(1, "Guest should always own CR0.TS");
-		vmx_set_cr0(vcpu, kvm_read_cr0_bits(vcpu, ~X86_CR0_TS));
+		kvm_x86_set_cr0(vcpu, kvm_read_cr0_bits(vcpu, ~X86_CR0_TS));
 		trace_kvm_cr_write(0, kvm_read_cr0(vcpu));
 		return kvm_skip_emulated_instruction(vcpu);
 	case 1: /*mov from cr*/
@@ -4938,16 +4938,16 @@ static int handle_dr(struct kvm_vcpu *vcpu)
 	return kvm_skip_emulated_instruction(vcpu);
 }
 
-static u64 vmx_get_dr6(struct kvm_vcpu *vcpu)
+u64 kvm_x86_get_dr6(struct kvm_vcpu *vcpu)
 {
 	return vcpu->arch.dr6;
 }
 
-static void vmx_set_dr6(struct kvm_vcpu *vcpu, unsigned long val)
+void kvm_x86_set_dr6(struct kvm_vcpu *vcpu, unsigned long val)
 {
 }
 
-static void vmx_sync_dirty_debug_regs(struct kvm_vcpu *vcpu)
+void kvm_x86_sync_dirty_debug_regs(struct kvm_vcpu *vcpu)
 {
 	get_debugreg(vcpu->arch.db[0], 0);
 	get_debugreg(vcpu->arch.db[1], 1);
@@ -4960,7 +4960,7 @@ static void vmx_sync_dirty_debug_regs(struct kvm_vcpu *vcpu)
 	exec_controls_setbit(to_vmx(vcpu), CPU_BASED_MOV_DR_EXITING);
 }
 
-static void vmx_set_dr7(struct kvm_vcpu *vcpu, unsigned long val)
+void kvm_x86_set_dr7(struct kvm_vcpu *vcpu, unsigned long val)
 {
 	vmcs_writel(GUEST_DR7, val);
 }
@@ -5104,7 +5104,7 @@ static int handle_task_switch(struct kvm_vcpu *vcpu)
 		switch (type) {
 		case INTR_TYPE_NMI_INTR:
 			vcpu->arch.nmi_injected = false;
-			vmx_set_nmi_mask(vcpu, true);
+			kvm_x86_set_nmi_mask(vcpu, true);
 			break;
 		case INTR_TYPE_EXT_INTR:
 		case INTR_TYPE_SOFT_INTR:
@@ -5130,7 +5130,7 @@ static int handle_task_switch(struct kvm_vcpu *vcpu)
 	if (!idt_v || (type != INTR_TYPE_HARD_EXCEPTION &&
 		       type != INTR_TYPE_EXT_INTR &&
 		       type != INTR_TYPE_NMI_INTR))
-		WARN_ON(!skip_emulated_instruction(vcpu));
+		WARN_ON(!kvm_x86_skip_emulated_instruction(vcpu));
 
 	/*
 	 * TODO: What about debug traps on tss switch?
@@ -5230,7 +5230,7 @@ static int handle_invalid_guest_state(struct kvm_vcpu *vcpu)
 				CPU_BASED_VIRTUAL_INTR_PENDING;
 
 	while (vmx->emulation_required && count-- != 0) {
-		if (intr_window_requested && vmx_interrupt_allowed(vcpu))
+		if (intr_window_requested && kvm_x86_interrupt_allowed(vcpu))
 			return handle_interrupt_window(&vmx->vcpu);
 
 		if (kvm_test_request(KVM_REQ_EVENT, vcpu))
@@ -5596,7 +5596,7 @@ static int (*kvm_vmx_exit_handlers[])(struct kvm_vcpu *vcpu) = {
 static const int kvm_vmx_max_exit_handlers =
 	ARRAY_SIZE(kvm_vmx_exit_handlers);
 
-static void vmx_get_exit_info(struct kvm_vcpu *vcpu, u64 *info1, u64 *info2)
+void kvm_x86_get_exit_info(struct kvm_vcpu *vcpu, u64 *info1, u64 *info2)
 {
 	*info1 = vmcs_readl(EXIT_QUALIFICATION);
 	*info2 = vmcs_read32(VM_EXIT_INTR_INFO);
@@ -5838,7 +5838,7 @@ void dump_vmcs(void)
  * The guest has exited.  See if we can fix it or if we need userspace
  * assistance.
  */
-static int vmx_handle_exit(struct kvm_vcpu *vcpu)
+int kvm_x86_handle_exit(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	u32 exit_reason = vmx->exit_reason;
@@ -5907,7 +5907,7 @@ static int vmx_handle_exit(struct kvm_vcpu *vcpu)
 
 	if (unlikely(!enable_vnmi &&
 		     vmx->loaded_vmcs->soft_vnmi_blocked)) {
-		if (vmx_interrupt_allowed(vcpu)) {
+		if (kvm_x86_interrupt_allowed(vcpu)) {
 			vmx->loaded_vmcs->soft_vnmi_blocked = 0;
 		} else if (vmx->loaded_vmcs->vnmi_blocked_time > 1000000000LL &&
 			   vcpu->arch.nmi_pending) {
@@ -6010,7 +6010,7 @@ static void vmx_l1d_flush(struct kvm_vcpu *vcpu)
 		: "eax", "ebx", "ecx", "edx");
 }
 
-static void update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr)
+void kvm_x86_update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr)
 {
 	struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
 
@@ -6026,7 +6026,7 @@ static void update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr)
 	vmcs_write32(TPR_THRESHOLD, irr);
 }
 
-void vmx_set_virtual_apic_mode(struct kvm_vcpu *vcpu)
+void kvm_x86_set_virtual_apic_mode(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	u32 sec_exec_control;
@@ -6057,7 +6057,7 @@ void vmx_set_virtual_apic_mode(struct kvm_vcpu *vcpu)
 		if (flexpriority_enabled) {
 			sec_exec_control |=
 				SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
-			vmx_flush_tlb(vcpu, true);
+			kvm_x86_tlb_flush(vcpu, true);
 		}
 		break;
 	case LAPIC_MODE_X2APIC:
@@ -6071,15 +6071,15 @@ void vmx_set_virtual_apic_mode(struct kvm_vcpu *vcpu)
 	vmx_update_msr_bitmap(vcpu);
 }
 
-static void vmx_set_apic_access_page_addr(struct kvm_vcpu *vcpu, hpa_t hpa)
+void kvm_x86_set_apic_access_page_addr(struct kvm_vcpu *vcpu, hpa_t hpa)
 {
 	if (!is_guest_mode(vcpu)) {
 		vmcs_write64(APIC_ACCESS_ADDR, hpa);
-		vmx_flush_tlb(vcpu, true);
+		kvm_x86_tlb_flush(vcpu, true);
 	}
 }
 
-static void vmx_hwapic_isr_update(struct kvm_vcpu *vcpu, int max_isr)
+void kvm_x86_hwapic_isr_update(struct kvm_vcpu *vcpu, int max_isr)
 {
 	u16 status;
 	u8 old;
@@ -6113,7 +6113,7 @@ static void vmx_set_rvi(int vector)
 	}
 }
 
-static void vmx_hwapic_irr_update(struct kvm_vcpu *vcpu, int max_irr)
+void kvm_x86_hwapic_irr_update(struct kvm_vcpu *vcpu, int max_irr)
 {
 	/*
 	 * When running L2, updating RVI is only relevant when
@@ -6127,7 +6127,7 @@ static void vmx_hwapic_irr_update(struct kvm_vcpu *vcpu, int max_irr)
 		vmx_set_rvi(max_irr);
 }
 
-static int vmx_sync_pir_to_irr(struct kvm_vcpu *vcpu)
+int kvm_x86_sync_pir_to_irr(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	int max_irr;
@@ -6161,16 +6161,16 @@ static int vmx_sync_pir_to_irr(struct kvm_vcpu *vcpu)
 	} else {
 		max_irr = kvm_lapic_find_highest_irr(vcpu);
 	}
-	vmx_hwapic_irr_update(vcpu, max_irr);
+	kvm_x86_hwapic_irr_update(vcpu, max_irr);
 	return max_irr;
 }
 
-static bool vmx_dy_apicv_has_pending_interrupt(struct kvm_vcpu *vcpu)
+bool kvm_x86_dy_apicv_has_pending_interrupt(struct kvm_vcpu *vcpu)
 {
 	return pi_test_on(vcpu_to_pi_desc(vcpu));
 }
 
-static void vmx_load_eoi_exitmap(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap)
+void kvm_x86_load_eoi_exitmap(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap)
 {
 	if (!kvm_vcpu_apicv_active(vcpu))
 		return;
@@ -6181,7 +6181,7 @@ static void vmx_load_eoi_exitmap(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap)
 	vmcs_write64(EOI_EXIT_BITMAP3, eoi_exit_bitmap[3]);
 }
 
-static void vmx_apicv_post_state_restore(struct kvm_vcpu *vcpu)
+void kvm_x86_apicv_post_state_restore(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
@@ -6255,7 +6255,7 @@ static void handle_external_interrupt_irqoff(struct kvm_vcpu *vcpu)
 }
 STACK_FRAME_NON_STANDARD(handle_external_interrupt_irqoff);
 
-static void vmx_handle_exit_irqoff(struct kvm_vcpu *vcpu)
+void kvm_x86_handle_exit_irqoff(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
@@ -6265,7 +6265,7 @@ static void vmx_handle_exit_irqoff(struct kvm_vcpu *vcpu)
 		handle_exception_nmi_irqoff(vmx);
 }
 
-static bool vmx_has_emulated_msr(int index)
+bool kvm_x86_has_emulated_msr(int index)
 {
 	switch (index) {
 	case MSR_IA32_SMBASE:
@@ -6284,7 +6284,7 @@ static bool vmx_has_emulated_msr(int index)
 	}
 }
 
-static bool vmx_pt_supported(void)
+bool kvm_x86_pt_supported(void)
 {
 	return pt_mode == PT_MODE_HOST_GUEST;
 }
@@ -6363,7 +6363,7 @@ static void __vmx_complete_interrupts(struct kvm_vcpu *vcpu,
 		 * Clear bit "block by NMI" before VM entry if a NMI
 		 * delivery faulted.
 		 */
-		vmx_set_nmi_mask(vcpu, false);
+		kvm_x86_set_nmi_mask(vcpu, false);
 		break;
 	case INTR_TYPE_SOFT_EXCEPTION:
 		vcpu->arch.event_exit_inst_len = vmcs_read32(instr_len_field);
@@ -6393,7 +6393,7 @@ static void vmx_complete_interrupts(struct vcpu_vmx *vmx)
 				  IDT_VECTORING_ERROR_CODE);
 }
 
-static void vmx_cancel_injection(struct kvm_vcpu *vcpu)
+void kvm_x86_cancel_injection(struct kvm_vcpu *vcpu)
 {
 	__vmx_complete_interrupts(vcpu,
 				  vmcs_read32(VM_ENTRY_INTR_INFO_FIELD),
@@ -6474,7 +6474,7 @@ void vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp)
 
 bool __vmx_vcpu_run(struct vcpu_vmx *vmx, unsigned long *regs, bool launched);
 
-static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
+void kvm_x86_run(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	unsigned long cr3, cr4;
@@ -6520,7 +6520,7 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
 	 * exceptions being set, but that's not correct for the guest debugging
 	 * case. */
 	if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP)
-		vmx_set_interrupt_shadow(vcpu, 0);
+		kvm_x86_set_interrupt_shadow(vcpu, 0);
 
 	kvm_load_guest_xcr0(vcpu);
 
@@ -6648,7 +6648,7 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
 	vmx_complete_interrupts(vmx);
 }
 
-static struct kvm *vmx_vm_alloc(void)
+struct kvm *kvm_x86_vm_alloc(void)
 {
 	struct kvm_vmx *kvm_vmx = __vmalloc(sizeof(struct kvm_vmx),
 					    GFP_KERNEL_ACCOUNT | __GFP_ZERO,
@@ -6656,13 +6656,13 @@ static struct kvm *vmx_vm_alloc(void)
 	return &kvm_vmx->kvm;
 }
 
-static void vmx_vm_free(struct kvm *kvm)
+void kvm_x86_vm_free(struct kvm *kvm)
 {
 	kfree(kvm->arch.hyperv.hv_pa_pg);
 	vfree(to_kvm_vmx(kvm));
 }
 
-static void vmx_free_vcpu(struct kvm_vcpu *vcpu)
+void kvm_x86_vcpu_free(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
@@ -6678,7 +6678,7 @@ static void vmx_free_vcpu(struct kvm_vcpu *vcpu)
 	kmem_cache_free(kvm_vcpu_cache, vmx);
 }
 
-static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id)
+struct kvm_vcpu *kvm_x86_vcpu_create(struct kvm *kvm, unsigned int id)
 {
 	int err;
 	struct vcpu_vmx *vmx;
@@ -6757,10 +6757,10 @@ static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id)
 
 	vmx->loaded_vmcs = &vmx->vmcs01;
 	cpu = get_cpu();
-	vmx_vcpu_load(&vmx->vcpu, cpu);
+	kvm_x86_vcpu_load(&vmx->vcpu, cpu);
 	vmx->vcpu.cpu = cpu;
 	vmx_vcpu_setup(vmx);
-	vmx_vcpu_put(&vmx->vcpu);
+	kvm_x86_vcpu_put(&vmx->vcpu);
 	put_cpu();
 	if (cpu_need_virtualize_apic_accesses(&vmx->vcpu)) {
 		err = alloc_apic_access_page(kvm);
@@ -6818,7 +6818,7 @@ static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id)
 #define L1TF_MSG_SMT "L1TF CPU bug present and SMT on, data leak possible. See CVE-2018-3646 and https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/l1tf.html for details.\n"
 #define L1TF_MSG_L1D "L1TF CPU bug present and virtualization mitigation disabled, data leak possible. See CVE-2018-3646 and https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/l1tf.html for details.\n"
 
-static int vmx_vm_init(struct kvm *kvm)
+int kvm_x86_vm_init(struct kvm *kvm)
 {
 	spin_lock_init(&to_kvm_vmx(kvm)->ept_pointer_lock);
 
@@ -6851,7 +6851,7 @@ static int vmx_vm_init(struct kvm *kvm)
 	return 0;
 }
 
-static int __init vmx_check_processor_compat(void)
+__init int kvm_x86_check_processor_compatibility(void)
 {
 	struct vmcs_config vmcs_conf;
 	struct vmx_capability vmx_cap;
@@ -6869,7 +6869,7 @@ static int __init vmx_check_processor_compat(void)
 	return 0;
 }
 
-static u64 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
+u64 kvm_x86_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
 {
 	u8 cache;
 	u64 ipat = 0;
@@ -6911,7 +6911,7 @@ static u64 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
 	return (cache << VMX_EPT_MT_EPTE_SHIFT) | ipat;
 }
 
-static int vmx_get_lpage_level(void)
+int kvm_x86_get_lpage_level(void)
 {
 	if (enable_ept && !cpu_has_vmx_ept_1g_page())
 		return PT_DIRECTORY_LEVEL;
@@ -7069,7 +7069,7 @@ static void update_intel_pt_cfg(struct kvm_vcpu *vcpu)
 		vmx->pt_desc.ctl_bitmask &= ~(0xfULL << (32 + i * 4));
 }
 
-static void vmx_cpuid_update(struct kvm_vcpu *vcpu)
+void kvm_x86_cpuid_update(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
@@ -7095,20 +7095,20 @@ static void vmx_cpuid_update(struct kvm_vcpu *vcpu)
 		update_intel_pt_cfg(vcpu);
 }
 
-static void vmx_set_supported_cpuid(u32 func, struct kvm_cpuid_entry2 *entry)
+void kvm_x86_set_supported_cpuid(u32 func, struct kvm_cpuid_entry2 *entry)
 {
 	if (func == 1 && nested)
 		entry->ecx |= bit(X86_FEATURE_VMX);
 }
 
-static void vmx_request_immediate_exit(struct kvm_vcpu *vcpu)
+void kvm_x86_request_immediate_exit(struct kvm_vcpu *vcpu)
 {
 	to_vmx(vcpu)->req_immediate_exit = true;
 }
 
-static int vmx_check_intercept(struct kvm_vcpu *vcpu,
-			       struct x86_instruction_info *info,
-			       enum x86_intercept_stage stage)
+int kvm_x86_check_intercept(struct kvm_vcpu *vcpu,
+			    struct x86_instruction_info *info,
+			    enum x86_intercept_stage stage)
 {
 	struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
 	struct x86_emulate_ctxt *ctxt = &vcpu->arch.emulate_ctxt;
@@ -7147,8 +7147,8 @@ static inline int u64_shl_div_u64(u64 a, unsigned int shift,
 	return 0;
 }
 
-static int vmx_set_hv_timer(struct kvm_vcpu *vcpu, u64 guest_deadline_tsc,
-			    bool *expired)
+int kvm_x86_set_hv_timer(struct kvm_vcpu *vcpu, u64 guest_deadline_tsc,
+			 bool *expired)
 {
 	struct vcpu_vmx *vmx;
 	u64 tscl, guest_tscl, delta_tsc, lapic_timer_advance_cycles;
@@ -7191,37 +7191,37 @@ static int vmx_set_hv_timer(struct kvm_vcpu *vcpu, u64 guest_deadline_tsc,
 	return 0;
 }
 
-static void vmx_cancel_hv_timer(struct kvm_vcpu *vcpu)
+void kvm_x86_cancel_hv_timer(struct kvm_vcpu *vcpu)
 {
 	to_vmx(vcpu)->hv_deadline_tsc = -1;
 }
 #endif
 
-static void vmx_sched_in(struct kvm_vcpu *vcpu, int cpu)
+void kvm_x86_sched_in(struct kvm_vcpu *vcpu, int cpu)
 {
 	if (!kvm_pause_in_guest(vcpu->kvm))
 		shrink_ple_window(vcpu);
 }
 
-static void vmx_slot_enable_log_dirty(struct kvm *kvm,
-				     struct kvm_memory_slot *slot)
+void kvm_x86_slot_enable_log_dirty(struct kvm *kvm,
+				   struct kvm_memory_slot *slot)
 {
 	kvm_mmu_slot_leaf_clear_dirty(kvm, slot);
 	kvm_mmu_slot_largepage_remove_write_access(kvm, slot);
 }
 
-static void vmx_slot_disable_log_dirty(struct kvm *kvm,
-				       struct kvm_memory_slot *slot)
+void kvm_x86_slot_disable_log_dirty(struct kvm *kvm,
+				    struct kvm_memory_slot *slot)
 {
 	kvm_mmu_slot_set_dirty(kvm, slot);
 }
 
-static void vmx_flush_log_dirty(struct kvm *kvm)
+void kvm_x86_flush_log_dirty(struct kvm *kvm)
 {
 	kvm_flush_pml_buffers(kvm);
 }
 
-static int vmx_write_pml_buffer(struct kvm_vcpu *vcpu)
+int kvm_x86_write_log_dirty(struct kvm_vcpu *vcpu)
 {
 	struct vmcs12 *vmcs12;
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
@@ -7257,9 +7257,9 @@ static int vmx_write_pml_buffer(struct kvm_vcpu *vcpu)
 	return 0;
 }
 
-static void vmx_enable_log_dirty_pt_masked(struct kvm *kvm,
-					   struct kvm_memory_slot *memslot,
-					   gfn_t offset, unsigned long mask)
+void kvm_x86_enable_log_dirty_pt_masked(struct kvm *kvm,
+					struct kvm_memory_slot *memslot,
+					gfn_t offset, unsigned long mask)
 {
 	kvm_mmu_clear_dirty_pt_masked(kvm, memslot, offset, mask);
 }
@@ -7365,7 +7365,7 @@ static int pi_pre_block(struct kvm_vcpu *vcpu)
 	return (vcpu->pre_pcpu == -1);
 }
 
-static int vmx_pre_block(struct kvm_vcpu *vcpu)
+int kvm_x86_pre_block(struct kvm_vcpu *vcpu)
 {
 	if (pi_pre_block(vcpu))
 		return 1;
@@ -7387,7 +7387,7 @@ static void pi_post_block(struct kvm_vcpu *vcpu)
 	local_irq_enable();
 }
 
-static void vmx_post_block(struct kvm_vcpu *vcpu)
+void kvm_x86_post_block(struct kvm_vcpu *vcpu)
 {
 	if (kvm_x86_ops->set_hv_timer)
 		kvm_lapic_switch_to_hv_timer(vcpu);
@@ -7404,8 +7404,8 @@ static void vmx_post_block(struct kvm_vcpu *vcpu)
  * @set: set or unset PI
  * returns 0 on success, < 0 on failure
  */
-static int vmx_update_pi_irte(struct kvm *kvm, unsigned int host_irq,
-			      uint32_t guest_irq, bool set)
+int kvm_x86_update_pi_irte(struct kvm *kvm, unsigned int host_irq,
+			   uint32_t guest_irq, bool set)
 {
 	struct kvm_kernel_irq_routing_entry *e;
 	struct kvm_irq_routing_table *irq_rt;
@@ -7489,7 +7489,7 @@ static int vmx_update_pi_irte(struct kvm *kvm, unsigned int host_irq,
 	return ret;
 }
 
-static void vmx_setup_mce(struct kvm_vcpu *vcpu)
+void kvm_x86_setup_mce(struct kvm_vcpu *vcpu)
 {
 	if (vcpu->arch.mcg_cap & MCG_LMCE_P)
 		to_vmx(vcpu)->msr_ia32_feature_control_valid_bits |=
@@ -7499,7 +7499,7 @@ static void vmx_setup_mce(struct kvm_vcpu *vcpu)
 			~FEATURE_CONTROL_LMCE;
 }
 
-static int vmx_smi_allowed(struct kvm_vcpu *vcpu)
+int kvm_x86_smi_allowed(struct kvm_vcpu *vcpu)
 {
 	/* we need a nested vmexit to enter SMM, postpone if run is pending */
 	if (to_vmx(vcpu)->nested.nested_run_pending)
@@ -7507,7 +7507,7 @@ static int vmx_smi_allowed(struct kvm_vcpu *vcpu)
 	return 1;
 }
 
-static int vmx_pre_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
+int kvm_x86_pre_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
@@ -7521,7 +7521,7 @@ static int vmx_pre_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
 	return 0;
 }
 
-static int vmx_pre_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
+int kvm_x86_pre_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	int ret;
@@ -7541,22 +7541,22 @@ static int vmx_pre_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
 	return 0;
 }
 
-static int enable_smi_window(struct kvm_vcpu *vcpu)
+int kvm_x86_enable_smi_window(struct kvm_vcpu *vcpu)
 {
 	return 0;
 }
 
-static bool vmx_need_emulation_on_page_fault(struct kvm_vcpu *vcpu)
+bool kvm_x86_need_emulation_on_page_fault(struct kvm_vcpu *vcpu)
 {
 	return false;
 }
 
-static bool vmx_apic_init_signal_blocked(struct kvm_vcpu *vcpu)
+bool kvm_x86_apic_init_signal_blocked(struct kvm_vcpu *vcpu)
 {
 	return to_vmx(vcpu)->nested.vmxon;
 }
 
-static __init int hardware_setup(void)
+__init int kvm_x86_hardware_setup(void)
 {
 	unsigned long host_bndcfgs;
 	struct desc_ptr dt;
@@ -7723,7 +7723,7 @@ static __init int hardware_setup(void)
 	return r;
 }
 
-static __exit void hardware_unsetup(void)
+__exit void kvm_x86_hardware_unsetup(void)
 {
 	if (nested)
 		nested_vmx_hardware_unsetup();
@@ -7731,148 +7731,173 @@ static __exit void hardware_unsetup(void)
 	free_kvm_area();
 }
 
+void kvm_x86_tlb_flush(struct kvm_vcpu *vcpu, bool invalidate_gpa)
+{
+	__vmx_flush_tlb(vcpu, to_vmx(vcpu)->vpid, invalidate_gpa);
+}
+
+bool kvm_x86_has_wbinvd_exit(void)
+{
+	return cpu_has_vmx_wbinvd_exit();
+}
+
+bool kvm_x86_mpx_supported(void)
+{
+	return vmx_mpx_supported();
+}
+
+bool kvm_x86_xsaves_supported(void)
+{
+	return vmx_xsaves_supported();
+}
+
+bool kvm_x86_umip_emulated(void)
+{
+	return vmx_umip_emulated();
+}
+
 static struct kvm_x86_ops vmx_x86_ops __ro_after_init = {
-	.cpu_has_kvm_support = cpu_has_kvm_support,
-	.disabled_by_bios = vmx_disabled_by_bios,
-	.hardware_setup = hardware_setup,
-	.hardware_unsetup = hardware_unsetup,
-	.check_processor_compatibility = vmx_check_processor_compat,
-	.hardware_enable = hardware_enable,
-	.hardware_disable = hardware_disable,
-	.cpu_has_accelerated_tpr = report_flexpriority,
-	.has_emulated_msr = vmx_has_emulated_msr,
-
-	.vm_init = vmx_vm_init,
-	.vm_alloc = vmx_vm_alloc,
-	.vm_free = vmx_vm_free,
-
-	.vcpu_create = vmx_create_vcpu,
-	.vcpu_free = vmx_free_vcpu,
-	.vcpu_reset = vmx_vcpu_reset,
-
-	.prepare_guest_switch = vmx_prepare_switch_to_guest,
-	.vcpu_load = vmx_vcpu_load,
-	.vcpu_put = vmx_vcpu_put,
-
-	.update_bp_intercept = update_exception_bitmap,
-	.get_msr_feature = vmx_get_msr_feature,
-	.get_msr = vmx_get_msr,
-	.set_msr = vmx_set_msr,
-	.get_segment_base = vmx_get_segment_base,
-	.get_segment = vmx_get_segment,
-	.set_segment = vmx_set_segment,
-	.get_cpl = vmx_get_cpl,
-	.get_cs_db_l_bits = vmx_get_cs_db_l_bits,
-	.decache_cr0_guest_bits = vmx_decache_cr0_guest_bits,
-	.decache_cr3 = vmx_decache_cr3,
-	.decache_cr4_guest_bits = vmx_decache_cr4_guest_bits,
-	.set_cr0 = vmx_set_cr0,
-	.set_cr3 = vmx_set_cr3,
-	.set_cr4 = vmx_set_cr4,
-	.set_efer = vmx_set_efer,
-	.get_idt = vmx_get_idt,
-	.set_idt = vmx_set_idt,
-	.get_gdt = vmx_get_gdt,
-	.set_gdt = vmx_set_gdt,
-	.get_dr6 = vmx_get_dr6,
-	.set_dr6 = vmx_set_dr6,
-	.set_dr7 = vmx_set_dr7,
-	.sync_dirty_debug_regs = vmx_sync_dirty_debug_regs,
-	.cache_reg = vmx_cache_reg,
-	.get_rflags = vmx_get_rflags,
-	.set_rflags = vmx_set_rflags,
-
-	.tlb_flush = vmx_flush_tlb,
-	.tlb_flush_gva = vmx_flush_tlb_gva,
-
-	.run = vmx_vcpu_run,
-	.handle_exit = vmx_handle_exit,
-	.skip_emulated_instruction = skip_emulated_instruction,
-	.set_interrupt_shadow = vmx_set_interrupt_shadow,
-	.get_interrupt_shadow = vmx_get_interrupt_shadow,
-	.patch_hypercall = vmx_patch_hypercall,
-	.set_irq = vmx_inject_irq,
-	.set_nmi = vmx_inject_nmi,
-	.queue_exception = vmx_queue_exception,
-	.cancel_injection = vmx_cancel_injection,
-	.interrupt_allowed = vmx_interrupt_allowed,
-	.nmi_allowed = vmx_nmi_allowed,
-	.get_nmi_mask = vmx_get_nmi_mask,
-	.set_nmi_mask = vmx_set_nmi_mask,
-	.enable_nmi_window = enable_nmi_window,
-	.enable_irq_window = enable_irq_window,
-	.update_cr8_intercept = update_cr8_intercept,
-	.set_virtual_apic_mode = vmx_set_virtual_apic_mode,
-	.set_apic_access_page_addr = vmx_set_apic_access_page_addr,
-	.get_enable_apicv = vmx_get_enable_apicv,
-	.refresh_apicv_exec_ctrl = vmx_refresh_apicv_exec_ctrl,
-	.load_eoi_exitmap = vmx_load_eoi_exitmap,
-	.apicv_post_state_restore = vmx_apicv_post_state_restore,
-	.hwapic_irr_update = vmx_hwapic_irr_update,
-	.hwapic_isr_update = vmx_hwapic_isr_update,
-	.guest_apic_has_interrupt = vmx_guest_apic_has_interrupt,
-	.sync_pir_to_irr = vmx_sync_pir_to_irr,
-	.deliver_posted_interrupt = vmx_deliver_posted_interrupt,
-	.dy_apicv_has_pending_interrupt = vmx_dy_apicv_has_pending_interrupt,
-
-	.set_tss_addr = vmx_set_tss_addr,
-	.set_identity_map_addr = vmx_set_identity_map_addr,
-	.get_tdp_level = get_ept_level,
-	.get_mt_mask = vmx_get_mt_mask,
-
-	.get_exit_info = vmx_get_exit_info,
-
-	.get_lpage_level = vmx_get_lpage_level,
-
-	.cpuid_update = vmx_cpuid_update,
-
-	.rdtscp_supported = vmx_rdtscp_supported,
-	.invpcid_supported = vmx_invpcid_supported,
-
-	.set_supported_cpuid = vmx_set_supported_cpuid,
-
-	.has_wbinvd_exit = cpu_has_vmx_wbinvd_exit,
-
-	.read_l1_tsc_offset = vmx_read_l1_tsc_offset,
-	.write_l1_tsc_offset = vmx_write_l1_tsc_offset,
-
-	.set_tdp_cr3 = vmx_set_cr3,
-
-	.check_intercept = vmx_check_intercept,
-	.handle_exit_irqoff = vmx_handle_exit_irqoff,
-	.mpx_supported = vmx_mpx_supported,
-	.xsaves_supported = vmx_xsaves_supported,
-	.umip_emulated = vmx_umip_emulated,
-	.pt_supported = vmx_pt_supported,
-
-	.request_immediate_exit = vmx_request_immediate_exit,
-
-	.sched_in = vmx_sched_in,
-
-	.slot_enable_log_dirty = vmx_slot_enable_log_dirty,
-	.slot_disable_log_dirty = vmx_slot_disable_log_dirty,
-	.flush_log_dirty = vmx_flush_log_dirty,
-	.enable_log_dirty_pt_masked = vmx_enable_log_dirty_pt_masked,
-	.write_log_dirty = vmx_write_pml_buffer,
-
-	.pre_block = vmx_pre_block,
-	.post_block = vmx_post_block,
+	.cpu_has_kvm_support = kvm_x86_cpu_has_kvm_support,
+	.disabled_by_bios = kvm_x86_disabled_by_bios,
+	.hardware_setup = kvm_x86_hardware_setup,
+	.hardware_unsetup = kvm_x86_hardware_unsetup,
+	.check_processor_compatibility = kvm_x86_check_processor_compatibility,
+	.hardware_enable = kvm_x86_hardware_enable,
+	.hardware_disable = kvm_x86_hardware_disable,
+	.cpu_has_accelerated_tpr = kvm_x86_cpu_has_accelerated_tpr,
+	.has_emulated_msr = kvm_x86_has_emulated_msr,
+
+	.vm_init = kvm_x86_vm_init,
+	.vm_alloc = kvm_x86_vm_alloc,
+	.vm_free = kvm_x86_vm_free,
+
+	.vcpu_create = kvm_x86_vcpu_create,
+	.vcpu_free = kvm_x86_vcpu_free,
+	.vcpu_reset = kvm_x86_vcpu_reset,
+
+	.prepare_guest_switch = kvm_x86_prepare_guest_switch,
+	.vcpu_load = kvm_x86_vcpu_load,
+	.vcpu_put = kvm_x86_vcpu_put,
+
+	.update_bp_intercept = kvm_x86_update_bp_intercept,
+	.get_msr_feature = kvm_x86_get_msr_feature,
+	.get_msr = kvm_x86_get_msr,
+	.set_msr = kvm_x86_set_msr,
+	.get_segment_base = kvm_x86_get_segment_base,
+	.get_segment = kvm_x86_get_segment,
+	.set_segment = kvm_x86_set_segment,
+	.get_cpl = kvm_x86_get_cpl,
+	.get_cs_db_l_bits = kvm_x86_get_cs_db_l_bits,
+	.decache_cr0_guest_bits = kvm_x86_decache_cr0_guest_bits,
+	.decache_cr3 = kvm_x86_decache_cr3,
+	.decache_cr4_guest_bits = kvm_x86_decache_cr4_guest_bits,
+	.set_cr0 = kvm_x86_set_cr0,
+	.set_cr3 = kvm_x86_set_cr3,
+	.set_cr4 = kvm_x86_set_cr4,
+	.set_efer = kvm_x86_set_efer,
+	.get_idt = kvm_x86_get_idt,
+	.set_idt = kvm_x86_set_idt,
+	.get_gdt = kvm_x86_get_gdt,
+	.set_gdt = kvm_x86_set_gdt,
+	.get_dr6 = kvm_x86_get_dr6,
+	.set_dr6 = kvm_x86_set_dr6,
+	.set_dr7 = kvm_x86_set_dr7,
+	.sync_dirty_debug_regs = kvm_x86_sync_dirty_debug_regs,
+	.cache_reg = kvm_x86_cache_reg,
+	.get_rflags = kvm_x86_get_rflags,
+	.set_rflags = kvm_x86_set_rflags,
+
+	.tlb_flush = kvm_x86_tlb_flush,
+	.tlb_flush_gva = kvm_x86_tlb_flush_gva,
+
+	.run = kvm_x86_run,
+	.handle_exit = kvm_x86_handle_exit,
+	.skip_emulated_instruction = kvm_x86_skip_emulated_instruction,
+	.set_interrupt_shadow = kvm_x86_set_interrupt_shadow,
+	.get_interrupt_shadow = kvm_x86_get_interrupt_shadow,
+	.patch_hypercall = kvm_x86_patch_hypercall,
+	.set_irq = kvm_x86_set_irq,
+	.set_nmi = kvm_x86_set_nmi,
+	.queue_exception = kvm_x86_queue_exception,
+	.cancel_injection = kvm_x86_cancel_injection,
+	.interrupt_allowed = kvm_x86_interrupt_allowed,
+	.nmi_allowed = kvm_x86_nmi_allowed,
+	.get_nmi_mask = kvm_x86_get_nmi_mask,
+	.set_nmi_mask = kvm_x86_set_nmi_mask,
+	.enable_nmi_window = kvm_x86_enable_nmi_window,
+	.enable_irq_window = kvm_x86_enable_irq_window,
+	.update_cr8_intercept = kvm_x86_update_cr8_intercept,
+	.set_virtual_apic_mode = kvm_x86_set_virtual_apic_mode,
+	.set_apic_access_page_addr = kvm_x86_set_apic_access_page_addr,
+	.get_enable_apicv = kvm_x86_get_enable_apicv,
+	.refresh_apicv_exec_ctrl = kvm_x86_refresh_apicv_exec_ctrl,
+	.load_eoi_exitmap = kvm_x86_load_eoi_exitmap,
+	.apicv_post_state_restore = kvm_x86_apicv_post_state_restore,
+	.hwapic_irr_update = kvm_x86_hwapic_irr_update,
+	.hwapic_isr_update = kvm_x86_hwapic_isr_update,
+	.guest_apic_has_interrupt = kvm_x86_guest_apic_has_interrupt,
+	.sync_pir_to_irr = kvm_x86_sync_pir_to_irr,
+	.deliver_posted_interrupt = kvm_x86_deliver_posted_interrupt,
+	.dy_apicv_has_pending_interrupt = kvm_x86_dy_apicv_has_pending_interrupt,
+
+	.set_tss_addr = kvm_x86_set_tss_addr,
+	.set_identity_map_addr = kvm_x86_set_identity_map_addr,
+	.get_tdp_level = kvm_x86_get_tdp_level,
+	.get_mt_mask = kvm_x86_get_mt_mask,
+
+	.get_exit_info = kvm_x86_get_exit_info,
+
+	.get_lpage_level = kvm_x86_get_lpage_level,
+
+	.cpuid_update = kvm_x86_cpuid_update,
+
+	.rdtscp_supported = kvm_x86_rdtscp_supported,
+	.invpcid_supported = kvm_x86_invpcid_supported,
+
+	.set_supported_cpuid = kvm_x86_set_supported_cpuid,
+
+	.has_wbinvd_exit = kvm_x86_has_wbinvd_exit,
+
+	.read_l1_tsc_offset = kvm_x86_read_l1_tsc_offset,
+	.write_l1_tsc_offset = kvm_x86_write_l1_tsc_offset,
+
+	.set_tdp_cr3 = kvm_x86_set_cr3,
+
+	.check_intercept = kvm_x86_check_intercept,
+	.handle_exit_irqoff = kvm_x86_handle_exit_irqoff,
+	.mpx_supported = kvm_x86_mpx_supported,
+	.xsaves_supported = kvm_x86_xsaves_supported,
+	.umip_emulated = kvm_x86_umip_emulated,
+	.pt_supported = kvm_x86_pt_supported,
+
+	.request_immediate_exit = kvm_x86_request_immediate_exit,
+
+	.sched_in = kvm_x86_sched_in,
+
+	.slot_enable_log_dirty = kvm_x86_slot_enable_log_dirty,
+	.slot_disable_log_dirty = kvm_x86_slot_disable_log_dirty,
+	.flush_log_dirty = kvm_x86_flush_log_dirty,
+	.enable_log_dirty_pt_masked = kvm_x86_enable_log_dirty_pt_masked,
+	.write_log_dirty = kvm_x86_write_log_dirty,
+
+	.pre_block = kvm_x86_pre_block,
+	.post_block = kvm_x86_post_block,
 
 	.pmu_ops = &intel_pmu_ops,
 
-	.update_pi_irte = vmx_update_pi_irte,
+	.update_pi_irte = kvm_x86_update_pi_irte,
 
 #ifdef CONFIG_X86_64
-	.set_hv_timer = vmx_set_hv_timer,
-	.cancel_hv_timer = vmx_cancel_hv_timer,
+	.set_hv_timer = kvm_x86_set_hv_timer,
+	.cancel_hv_timer = kvm_x86_cancel_hv_timer,
 #endif
 
-	.setup_mce = vmx_setup_mce,
+	.setup_mce = kvm_x86_setup_mce,
 
-	.smi_allowed = vmx_smi_allowed,
-	.pre_enter_smm = vmx_pre_enter_smm,
-	.pre_leave_smm = vmx_pre_leave_smm,
-	.enable_smi_window = enable_smi_window,
+	.smi_allowed = kvm_x86_smi_allowed,
+	.pre_enter_smm = kvm_x86_pre_enter_smm,
+	.pre_leave_smm = kvm_x86_pre_leave_smm,
+	.enable_smi_window = kvm_x86_enable_smi_window,
 
 	.check_nested_events = NULL,
 	.get_nested_state = NULL,
@@ -7880,8 +7905,8 @@ static struct kvm_x86_ops vmx_x86_ops __ro_after_init = {
 	.get_vmcs12_pages = NULL,
 	.nested_enable_evmcs = NULL,
 	.nested_get_evmcs_version = NULL,
-	.need_emulation_on_page_fault = vmx_need_emulation_on_page_fault,
-	.apic_init_signal_blocked = vmx_apic_init_signal_blocked,
+	.need_emulation_on_page_fault = kvm_x86_need_emulation_on_page_fault,
+	.apic_init_signal_blocked = kvm_x86_apic_init_signal_blocked,
 };
 
 static void vmx_cleanup_l1d_flush(void)
@@ -7995,3 +8020,85 @@ static int __init vmx_init(void)
 	return 0;
 }
 module_init(vmx_init);
+
+void kvm_x86_vm_destroy(struct kvm *kvm)
+{
+	kvm_x86_ops->vm_destroy(kvm);
+}
+
+int kvm_x86_tlb_remote_flush(struct kvm *kvm)
+{
+	return kvm_x86_ops->tlb_remote_flush(kvm);
+}
+
+int kvm_x86_tlb_remote_flush_with_range(struct kvm *kvm,
+					struct kvm_tlb_range *range)
+{
+	return kvm_x86_ops->tlb_remote_flush_with_range(kvm, range);
+}
+
+int kvm_x86_check_nested_events(struct kvm_vcpu *vcpu, bool external_intr)
+{
+	return kvm_x86_ops->check_nested_events(vcpu, external_intr);
+}
+
+void kvm_x86_vcpu_blocking(struct kvm_vcpu *vcpu)
+{
+	kvm_x86_ops->vcpu_blocking(vcpu);
+}
+
+void kvm_x86_vcpu_unblocking(struct kvm_vcpu *vcpu)
+{
+	kvm_x86_ops->vcpu_unblocking(vcpu);
+}
+
+int kvm_x86_get_nested_state(struct kvm_vcpu *vcpu,
+			     struct kvm_nested_state __user *user_kvm_nested_state,
+			     unsigned user_data_size)
+{
+	return kvm_x86_ops->get_nested_state(vcpu, user_kvm_nested_state,
+					     user_data_size);
+}
+
+int kvm_x86_set_nested_state(struct kvm_vcpu *vcpu,
+			     struct kvm_nested_state __user *user_kvm_nested_state,
+			     struct kvm_nested_state *kvm_state)
+{
+	return kvm_x86_ops->set_nested_state(vcpu, user_kvm_nested_state,
+					     kvm_state);
+}
+
+bool kvm_x86_get_vmcs12_pages(struct kvm_vcpu *vcpu)
+{
+	return kvm_x86_ops->get_vmcs12_pages(vcpu);
+}
+
+int kvm_x86_mem_enc_op(struct kvm *kvm, void __user *argp)
+{
+	return kvm_x86_ops->mem_enc_op(kvm, argp);
+}
+
+int kvm_x86_mem_enc_reg_region(struct kvm *kvm, struct kvm_enc_region *argp)
+{
+	return kvm_x86_ops->mem_enc_reg_region(kvm, argp);
+}
+
+int kvm_x86_mem_enc_unreg_region(struct kvm *kvm, struct kvm_enc_region *argp)
+{
+	return kvm_x86_ops->mem_enc_unreg_region(kvm, argp);
+}
+
+int kvm_x86_nested_enable_evmcs(struct kvm_vcpu *vcpu, uint16_t *vmcs_version)
+{
+	return kvm_x86_ops->nested_enable_evmcs(vcpu, vmcs_version);
+}
+
+uint16_t kvm_x86_nested_get_evmcs_version(struct kvm_vcpu *vcpu)
+{
+	return kvm_x86_ops->nested_get_evmcs_version(vcpu);
+}
+
+int kvm_x86_enable_direct_tlbflush(struct kvm_vcpu *vcpu)
+{
+	return kvm_x86_ops->enable_direct_tlbflush(vcpu);
+}
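
The kvm_x86_*() functions added just above (kvm_x86_vm_destroy() through
kvm_x86_enable_direct_tlbflush()) are thin dispatch wrappers rather than
VMX implementations: these callbacks are optional and vmx_x86_ops leaves
several of them NULL, so the wrappers still bounce through the
kvm_x86_ops pointer. Callers in common code keep their NULL check and
only then take the direct call. A minimal sketch of that caller pattern,
purely illustrative and not part of the patch (the caller name is
invented, the wrapper name is the real one added above):

	void example_arch_destroy_vm(struct kvm *kvm)
	{
		/* presence test still goes through the ops pointer ... */
		if (kvm_x86_ops->vm_destroy)
			/*
			 * ... but the invocation is a direct call into the
			 * wrapper above, which re-dispatches for this
			 * optional callback.
			 */
			kvm_x86_vm_destroy(kvm);
	}

The same pattern appears unchanged in the x86.c hunks further down for
flush_log_dirty, the mem_enc_* ioctls and the nested state hooks.
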
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index bee16687dc0b..4e5dca97dec4 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -305,32 +305,32 @@ struct kvm_vmx {
 
 bool nested_vmx_allowed(struct kvm_vcpu *vcpu);
 void vmx_vcpu_load_vmcs(struct kvm_vcpu *vcpu, int cpu);
-void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu);
+void kvm_x86_vcpu_load(struct kvm_vcpu *vcpu, int cpu);
 int allocate_vpid(void);
 void free_vpid(int vpid);
 void vmx_set_constant_host_state(struct vcpu_vmx *vmx);
-void vmx_prepare_switch_to_guest(struct kvm_vcpu *vcpu);
+void kvm_x86_prepare_guest_switch(struct kvm_vcpu *vcpu);
 void vmx_set_host_fs_gs(struct vmcs_host_state *host, u16 fs_sel, u16 gs_sel,
 			unsigned long fs_base, unsigned long gs_base);
-int vmx_get_cpl(struct kvm_vcpu *vcpu);
-unsigned long vmx_get_rflags(struct kvm_vcpu *vcpu);
-void vmx_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags);
-u32 vmx_get_interrupt_shadow(struct kvm_vcpu *vcpu);
-void vmx_set_interrupt_shadow(struct kvm_vcpu *vcpu, int mask);
-void vmx_set_efer(struct kvm_vcpu *vcpu, u64 efer);
-void vmx_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0);
-void vmx_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3);
-int vmx_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4);
+int kvm_x86_get_cpl(struct kvm_vcpu *vcpu);
+unsigned long kvm_x86_get_rflags(struct kvm_vcpu *vcpu);
+void kvm_x86_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags);
+u32 kvm_x86_get_interrupt_shadow(struct kvm_vcpu *vcpu);
+void kvm_x86_set_interrupt_shadow(struct kvm_vcpu *vcpu, int mask);
+void kvm_x86_set_efer(struct kvm_vcpu *vcpu, u64 efer);
+void kvm_x86_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0);
+void kvm_x86_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3);
+int kvm_x86_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4);
 void set_cr4_guest_host_mask(struct vcpu_vmx *vmx);
 void ept_save_pdptrs(struct kvm_vcpu *vcpu);
-void vmx_get_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg);
-void vmx_set_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg);
+void kvm_x86_get_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg);
+void kvm_x86_set_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg);
 u64 construct_eptp(struct kvm_vcpu *vcpu, unsigned long root_hpa);
-void update_exception_bitmap(struct kvm_vcpu *vcpu);
+void kvm_x86_update_bp_intercept(struct kvm_vcpu *vcpu);
 void vmx_update_msr_bitmap(struct kvm_vcpu *vcpu);
-bool vmx_get_nmi_mask(struct kvm_vcpu *vcpu);
-void vmx_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked);
-void vmx_set_virtual_apic_mode(struct kvm_vcpu *vcpu);
+bool kvm_x86_get_nmi_mask(struct kvm_vcpu *vcpu);
+void kvm_x86_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked);
+void kvm_x86_set_virtual_apic_mode(struct kvm_vcpu *vcpu);
 struct shared_msr_entry *find_msr_entry(struct vcpu_vmx *vmx, u32 msr);
 void pt_update_intercept_for_msr(struct vcpu_vmx *vmx);
 void vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp);
@@ -489,11 +489,6 @@ static inline void __vmx_flush_tlb(struct kvm_vcpu *vcpu, int vpid,
 	}
 }
 
-static inline void vmx_flush_tlb(struct kvm_vcpu *vcpu, bool invalidate_gpa)
-{
-	__vmx_flush_tlb(vcpu, to_vmx(vcpu)->vpid, invalidate_gpa);
-}
-
 static inline void decache_tsc_multiplier(struct vcpu_vmx *vmx)
 {
 	vmx->current_tsc_ratio = vmx->vcpu.arch.tsc_scaling_ratio;
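
The prototype renames above are what let arch/x86/kvm/x86.c, in the
hunks that follow, call straight into the vendor code instead of going
through the kvm_x86_ops table for these hooks. Roughly, the difference
at a call site is the one sketched below; this is an illustration only,
where the example_ops table is an invented stand-in, while struct
kvm_vcpu and kvm_x86_get_cpl() are the real names declared in the header
above:

	/* Sketch only, not code from the patch. */
	struct kvm_vcpu;

	struct example_ops {
		int (*get_cpl)(struct kvm_vcpu *vcpu);
	};
	extern struct example_ops *example_ops;

	int kvm_x86_get_cpl(struct kvm_vcpu *vcpu);	/* as in vmx.h above */

	static inline int cpl_via_table(struct kvm_vcpu *vcpu)
	{
		/* indirect call through a function pointer */
		return example_ops->get_cpl(vcpu);
	}

	static inline int cpl_direct(struct kvm_vcpu *vcpu)
	{
		/* ordinary direct call to an external symbol */
		return kvm_x86_get_cpl(vcpu);
	}

With caller and callee linked into the same image, the second form is a
plain relocated call, so no retpoline thunk is emitted for it when
CONFIG_RETPOLINE is enabled.
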
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index ff395f812719..fb963e6b2e54 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -626,7 +626,7 @@ EXPORT_SYMBOL_GPL(kvm_requeue_exception_e);
  */
 bool kvm_require_cpl(struct kvm_vcpu *vcpu, int required_cpl)
 {
-	if (kvm_x86_ops->get_cpl(vcpu) <= required_cpl)
+	if (kvm_x86_get_cpl(vcpu) <= required_cpl)
 		return true;
 	kvm_queue_exception_e(vcpu, GP_VECTOR, 0);
 	return false;
@@ -773,7 +773,7 @@ int kvm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
 
 			if (!is_pae(vcpu))
 				return 1;
-			kvm_x86_ops->get_cs_db_l_bits(vcpu, &cs_db, &cs_l);
+			kvm_x86_get_cs_db_l_bits(vcpu, &cs_db, &cs_l);
 			if (cs_l)
 				return 1;
 		} else
@@ -786,7 +786,7 @@ int kvm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
 	if (!(cr0 & X86_CR0_PG) && kvm_read_cr4_bits(vcpu, X86_CR4_PCIDE))
 		return 1;
 
-	kvm_x86_ops->set_cr0(vcpu, cr0);
+	kvm_x86_set_cr0(vcpu, cr0);
 
 	if ((cr0 ^ old_cr0) & X86_CR0_PG) {
 		kvm_clear_async_pf_completion_queue(vcpu);
@@ -875,7 +875,7 @@ static int __kvm_set_xcr(struct kvm_vcpu *vcpu, u32 index, u64 xcr)
 
 int kvm_set_xcr(struct kvm_vcpu *vcpu, u32 index, u64 xcr)
 {
-	if (kvm_x86_ops->get_cpl(vcpu) != 0 ||
+	if (kvm_x86_get_cpl(vcpu) != 0 ||
 	    __kvm_set_xcr(vcpu, index, xcr)) {
 		kvm_inject_gp(vcpu, 0);
 		return 1;
@@ -940,7 +940,7 @@ int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
 			return 1;
 	}
 
-	if (kvm_x86_ops->set_cr4(vcpu, cr4))
+	if (kvm_x86_set_cr4(vcpu, cr4))
 		return 1;
 
 	if (((cr4 ^ old_cr4) & pdptr_bits) ||
@@ -1024,7 +1024,7 @@ static void kvm_update_dr0123(struct kvm_vcpu *vcpu)
 static void kvm_update_dr6(struct kvm_vcpu *vcpu)
 {
 	if (!(vcpu->guest_debug & KVM_GUESTDBG_USE_HW_BP))
-		kvm_x86_ops->set_dr6(vcpu, vcpu->arch.dr6);
+		kvm_x86_set_dr6(vcpu, vcpu->arch.dr6);
 }
 
 static void kvm_update_dr7(struct kvm_vcpu *vcpu)
@@ -1035,7 +1035,7 @@ static void kvm_update_dr7(struct kvm_vcpu *vcpu)
 		dr7 = vcpu->arch.guest_debug_dr7;
 	else
 		dr7 = vcpu->arch.dr7;
-	kvm_x86_ops->set_dr7(vcpu, dr7);
+	kvm_x86_set_dr7(vcpu, dr7);
 	vcpu->arch.switch_db_regs &= ~KVM_DEBUGREG_BP_ENABLED;
 	if (dr7 & DR7_BP_EN_MASK)
 		vcpu->arch.switch_db_regs |= KVM_DEBUGREG_BP_ENABLED;
@@ -1101,7 +1101,7 @@ int kvm_get_dr(struct kvm_vcpu *vcpu, int dr, unsigned long *val)
 		if (vcpu->guest_debug & KVM_GUESTDBG_USE_HW_BP)
 			*val = vcpu->arch.dr6;
 		else
-			*val = kvm_x86_ops->get_dr6(vcpu);
+			*val = kvm_x86_get_dr6(vcpu);
 		break;
 	case 5:
 		/* fall through */
@@ -1311,7 +1311,7 @@ static int kvm_get_msr_feature(struct kvm_msr_entry *msr)
 		rdmsrl_safe(msr->index, &msr->data);
 		break;
 	default:
-		if (kvm_x86_ops->get_msr_feature(msr))
+		if (kvm_x86_get_msr_feature(msr))
 			return 1;
 	}
 	return 0;
@@ -1379,7 +1379,7 @@ static int set_efer(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	efer &= ~EFER_LMA;
 	efer |= vcpu->arch.efer & EFER_LMA;
 
-	kvm_x86_ops->set_efer(vcpu, efer);
+	kvm_x86_set_efer(vcpu, efer);
 
 	/* Update reserved bits */
 	if ((efer ^ old_efer) & EFER_NX)
@@ -1435,7 +1435,7 @@ static int __kvm_set_msr(struct kvm_vcpu *vcpu, u32 index, u64 data,
 	msr.index = index;
 	msr.host_initiated = host_initiated;
 
-	return kvm_x86_ops->set_msr(vcpu, &msr);
+	return kvm_x86_set_msr(vcpu, &msr);
 }
 
 /*
@@ -1453,7 +1453,7 @@ static int __kvm_get_msr(struct kvm_vcpu *vcpu, u32 index, u64 *data,
 	msr.index = index;
 	msr.host_initiated = host_initiated;
 
-	ret = kvm_x86_ops->get_msr(vcpu, &msr);
+	ret = kvm_x86_get_msr(vcpu, &msr);
 	if (!ret)
 		*data = msr.data;
 	return ret;
@@ -1774,7 +1774,7 @@ static void kvm_track_tsc_matching(struct kvm_vcpu *vcpu)
 
 static void update_ia32_tsc_adjust_msr(struct kvm_vcpu *vcpu, s64 offset)
 {
-	u64 curr_offset = kvm_x86_ops->read_l1_tsc_offset(vcpu);
+	u64 curr_offset = kvm_x86_read_l1_tsc_offset(vcpu);
 	vcpu->arch.ia32_tsc_adjust_msr += offset - curr_offset;
 }
 
@@ -1816,7 +1816,7 @@ static u64 kvm_compute_tsc_offset(struct kvm_vcpu *vcpu, u64 target_tsc)
 
 u64 kvm_read_l1_tsc(struct kvm_vcpu *vcpu, u64 host_tsc)
 {
-	u64 tsc_offset = kvm_x86_ops->read_l1_tsc_offset(vcpu);
+	u64 tsc_offset = kvm_x86_read_l1_tsc_offset(vcpu);
 
 	return tsc_offset + kvm_scale_tsc(vcpu, host_tsc);
 }
@@ -1824,7 +1824,7 @@ EXPORT_SYMBOL_GPL(kvm_read_l1_tsc);
 
 static void kvm_vcpu_write_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
 {
-	vcpu->arch.tsc_offset = kvm_x86_ops->write_l1_tsc_offset(vcpu, offset);
+	vcpu->arch.tsc_offset = kvm_x86_write_l1_tsc_offset(vcpu, offset);
 }
 
 static inline bool kvm_check_tsc_unstable(void)
@@ -1948,7 +1948,7 @@ EXPORT_SYMBOL_GPL(kvm_write_tsc);
 static inline void adjust_tsc_offset_guest(struct kvm_vcpu *vcpu,
 					   s64 adjustment)
 {
-	u64 tsc_offset = kvm_x86_ops->read_l1_tsc_offset(vcpu);
+	u64 tsc_offset = kvm_x86_read_l1_tsc_offset(vcpu);
 	kvm_vcpu_write_tsc_offset(vcpu, tsc_offset + adjustment);
 }
 
@@ -2542,7 +2542,7 @@ static void kvmclock_reset(struct kvm_vcpu *vcpu)
 static void kvm_vcpu_flush_tlb(struct kvm_vcpu *vcpu, bool invalidate_gpa)
 {
 	++vcpu->stat.tlb_flush;
-	kvm_x86_ops->tlb_flush(vcpu, invalidate_gpa);
+	kvm_x86_tlb_flush(vcpu, invalidate_gpa);
 }
 
 static void record_steal_time(struct kvm_vcpu *vcpu)
@@ -3242,10 +3242,10 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 		 * fringe case that is not enabled except via specific settings
 		 * of the module parameters.
 		 */
-		r = kvm_x86_ops->has_emulated_msr(MSR_IA32_SMBASE);
+		r = kvm_x86_has_emulated_msr(MSR_IA32_SMBASE);
 		break;
 	case KVM_CAP_VAPIC:
-		r = !kvm_x86_ops->cpu_has_accelerated_tpr();
+		r = !kvm_x86_cpu_has_accelerated_tpr();
 		break;
 	case KVM_CAP_NR_VCPUS:
 		r = KVM_SOFT_MAX_VCPUS;
@@ -3273,7 +3273,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 		break;
 	case KVM_CAP_NESTED_STATE:
 		r = kvm_x86_ops->get_nested_state ?
-			kvm_x86_ops->get_nested_state(NULL, NULL, 0) : 0;
+			kvm_x86_get_nested_state(NULL, NULL, 0) : 0;
 		break;
 	case KVM_CAP_HYPERV_DIRECT_TLBFLUSH:
 		r = kvm_x86_ops->enable_direct_tlbflush != NULL;
@@ -3395,14 +3395,14 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 {
 	/* Address WBINVD may be executed by guest */
 	if (need_emulate_wbinvd(vcpu)) {
-		if (kvm_x86_ops->has_wbinvd_exit())
+		if (kvm_x86_has_wbinvd_exit())
 			cpumask_set_cpu(cpu, vcpu->arch.wbinvd_dirty_mask);
 		else if (vcpu->cpu != -1 && vcpu->cpu != cpu)
 			smp_call_function_single(vcpu->cpu,
 					wbinvd_ipi, NULL, 1);
 	}
 
-	kvm_x86_ops->vcpu_load(vcpu, cpu);
+	kvm_x86_vcpu_load(vcpu, cpu);
 
 	fpregs_assert_state_consistent();
 	if (test_thread_flag(TIF_NEED_FPU_LOAD))
@@ -3463,7 +3463,7 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 	int idx;
 
 	if (vcpu->preempted)
-		vcpu->arch.preempted_in_kernel = !kvm_x86_ops->get_cpl(vcpu);
+		vcpu->arch.preempted_in_kernel = !kvm_x86_get_cpl(vcpu);
 
 	/*
 	 * Disable page faults because we're in atomic context here.
@@ -3482,7 +3482,7 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 	kvm_steal_time_set_preempted(vcpu);
 	srcu_read_unlock(&vcpu->kvm->srcu, idx);
 	pagefault_enable();
-	kvm_x86_ops->vcpu_put(vcpu);
+	kvm_x86_vcpu_put(vcpu);
 	vcpu->arch.last_host_tsc = rdtsc();
 	/*
 	 * If userspace has set any breakpoints or watchpoints, dr6 is restored
@@ -3496,7 +3496,7 @@ static int kvm_vcpu_ioctl_get_lapic(struct kvm_vcpu *vcpu,
 				    struct kvm_lapic_state *s)
 {
 	if (vcpu->arch.apicv_active)
-		kvm_x86_ops->sync_pir_to_irr(vcpu);
+		kvm_x86_sync_pir_to_irr(vcpu);
 
 	return kvm_apic_get_state(vcpu, s);
 }
@@ -3604,7 +3604,7 @@ static int kvm_vcpu_ioctl_x86_setup_mce(struct kvm_vcpu *vcpu,
 	for (bank = 0; bank < bank_num; bank++)
 		vcpu->arch.mce_banks[bank*4] = ~(u64)0;
 
-	kvm_x86_ops->setup_mce(vcpu);
+	kvm_x86_setup_mce(vcpu);
 out:
 	return r;
 }
@@ -3693,11 +3693,11 @@ static void kvm_vcpu_ioctl_x86_get_vcpu_events(struct kvm_vcpu *vcpu,
 		vcpu->arch.interrupt.injected && !vcpu->arch.interrupt.soft;
 	events->interrupt.nr = vcpu->arch.interrupt.nr;
 	events->interrupt.soft = 0;
-	events->interrupt.shadow = kvm_x86_ops->get_interrupt_shadow(vcpu);
+	events->interrupt.shadow = kvm_x86_get_interrupt_shadow(vcpu);
 
 	events->nmi.injected = vcpu->arch.nmi_injected;
 	events->nmi.pending = vcpu->arch.nmi_pending != 0;
-	events->nmi.masked = kvm_x86_ops->get_nmi_mask(vcpu);
+	events->nmi.masked = kvm_x86_get_nmi_mask(vcpu);
 	events->nmi.pad = 0;
 
 	events->sipi_vector = 0; /* never valid when reporting to user space */
@@ -3764,13 +3764,13 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
 	vcpu->arch.interrupt.nr = events->interrupt.nr;
 	vcpu->arch.interrupt.soft = events->interrupt.soft;
 	if (events->flags & KVM_VCPUEVENT_VALID_SHADOW)
-		kvm_x86_ops->set_interrupt_shadow(vcpu,
+		kvm_x86_set_interrupt_shadow(vcpu,
 						  events->interrupt.shadow);
 
 	vcpu->arch.nmi_injected = events->nmi.injected;
 	if (events->flags & KVM_VCPUEVENT_VALID_NMI_PENDING)
 		vcpu->arch.nmi_pending = events->nmi.pending;
-	kvm_x86_ops->set_nmi_mask(vcpu, events->nmi.masked);
+	kvm_x86_set_nmi_mask(vcpu, events->nmi.masked);
 
 	if (events->flags & KVM_VCPUEVENT_VALID_SIPI_VECTOR &&
 	    lapic_in_kernel(vcpu))
@@ -4046,7 +4046,7 @@ static int kvm_vcpu_ioctl_enable_cap(struct kvm_vcpu *vcpu,
 	case KVM_CAP_HYPERV_ENLIGHTENED_VMCS:
 		if (!kvm_x86_ops->nested_enable_evmcs)
 			return -ENOTTY;
-		r = kvm_x86_ops->nested_enable_evmcs(vcpu, &vmcs_version);
+		r = kvm_x86_nested_enable_evmcs(vcpu, &vmcs_version);
 		if (!r) {
 			user_ptr = (void __user *)(uintptr_t)cap->args[0];
 			if (copy_to_user(user_ptr, &vmcs_version,
@@ -4058,7 +4058,7 @@ static int kvm_vcpu_ioctl_enable_cap(struct kvm_vcpu *vcpu,
 		if (!kvm_x86_ops->enable_direct_tlbflush)
 			return -ENOTTY;
 
-		return kvm_x86_ops->enable_direct_tlbflush(vcpu);
+		return kvm_x86_enable_direct_tlbflush(vcpu);
 
 	default:
 		return -EINVAL;
@@ -4369,7 +4369,7 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
 		if (get_user(user_data_size, &user_kvm_nested_state->size))
 			break;
 
-		r = kvm_x86_ops->get_nested_state(vcpu, user_kvm_nested_state,
+		r = kvm_x86_get_nested_state(vcpu, user_kvm_nested_state,
 						  user_data_size);
 		if (r < 0)
 			break;
@@ -4411,7 +4411,7 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
 		    && !(kvm_state.flags & KVM_STATE_NESTED_GUEST_MODE))
 			break;
 
-		r = kvm_x86_ops->set_nested_state(vcpu, user_kvm_nested_state, &kvm_state);
+		r = kvm_x86_set_nested_state(vcpu, user_kvm_nested_state, &kvm_state);
 		break;
 	}
 	case KVM_GET_SUPPORTED_HV_CPUID: {
@@ -4454,14 +4454,14 @@ static int kvm_vm_ioctl_set_tss_addr(struct kvm *kvm, unsigned long addr)
 
 	if (addr > (unsigned int)(-3 * PAGE_SIZE))
 		return -EINVAL;
-	ret = kvm_x86_ops->set_tss_addr(kvm, addr);
+	ret = kvm_x86_set_tss_addr(kvm, addr);
 	return ret;
 }
 
 static int kvm_vm_ioctl_set_identity_map_addr(struct kvm *kvm,
 					      u64 ident_addr)
 {
-	return kvm_x86_ops->set_identity_map_addr(kvm, ident_addr);
+	return kvm_x86_set_identity_map_addr(kvm, ident_addr);
 }
 
 static int kvm_vm_ioctl_set_nr_mmu_pages(struct kvm *kvm,
@@ -4646,7 +4646,7 @@ int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
 	 * Flush potentially hardware-cached dirty pages to dirty_bitmap.
 	 */
 	if (kvm_x86_ops->flush_log_dirty)
-		kvm_x86_ops->flush_log_dirty(kvm);
+		kvm_x86_flush_log_dirty(kvm);
 
 	r = kvm_get_dirty_log_protect(kvm, log, &flush);
 
@@ -4673,7 +4673,7 @@ int kvm_vm_ioctl_clear_dirty_log(struct kvm *kvm, struct kvm_clear_dirty_log *lo
 	 * Flush potentially hardware-cached dirty pages to dirty_bitmap.
 	 */
 	if (kvm_x86_ops->flush_log_dirty)
-		kvm_x86_ops->flush_log_dirty(kvm);
+		kvm_x86_flush_log_dirty(kvm);
 
 	r = kvm_clear_dirty_log_protect(kvm, log, &flush);
 
@@ -5040,7 +5040,7 @@ long kvm_arch_vm_ioctl(struct file *filp,
 	case KVM_MEMORY_ENCRYPT_OP: {
 		r = -ENOTTY;
 		if (kvm_x86_ops->mem_enc_op)
-			r = kvm_x86_ops->mem_enc_op(kvm, argp);
+			r = kvm_x86_mem_enc_op(kvm, argp);
 		break;
 	}
 	case KVM_MEMORY_ENCRYPT_REG_REGION: {
@@ -5052,7 +5052,7 @@ long kvm_arch_vm_ioctl(struct file *filp,
 
 		r = -ENOTTY;
 		if (kvm_x86_ops->mem_enc_reg_region)
-			r = kvm_x86_ops->mem_enc_reg_region(kvm, &region);
+			r = kvm_x86_mem_enc_reg_region(kvm, &region);
 		break;
 	}
 	case KVM_MEMORY_ENCRYPT_UNREG_REGION: {
@@ -5064,7 +5064,7 @@ long kvm_arch_vm_ioctl(struct file *filp,
 
 		r = -ENOTTY;
 		if (kvm_x86_ops->mem_enc_unreg_region)
-			r = kvm_x86_ops->mem_enc_unreg_region(kvm, &region);
+			r = kvm_x86_mem_enc_unreg_region(kvm, &region);
 		break;
 	}
 	case KVM_HYPERV_EVENTFD: {
@@ -5111,28 +5111,28 @@ static void kvm_init_msr_list(void)
 				continue;
 			break;
 		case MSR_TSC_AUX:
-			if (!kvm_x86_ops->rdtscp_supported())
+			if (!kvm_x86_rdtscp_supported())
 				continue;
 			break;
 		case MSR_IA32_RTIT_CTL:
 		case MSR_IA32_RTIT_STATUS:
-			if (!kvm_x86_ops->pt_supported())
+			if (!kvm_x86_pt_supported())
 				continue;
 			break;
 		case MSR_IA32_RTIT_CR3_MATCH:
-			if (!kvm_x86_ops->pt_supported() ||
+			if (!kvm_x86_pt_supported() ||
 			    !intel_pt_validate_hw_cap(PT_CAP_cr3_filtering))
 				continue;
 			break;
 		case MSR_IA32_RTIT_OUTPUT_BASE:
 		case MSR_IA32_RTIT_OUTPUT_MASK:
-			if (!kvm_x86_ops->pt_supported() ||
+			if (!kvm_x86_pt_supported() ||
 				(!intel_pt_validate_hw_cap(PT_CAP_topa_output) &&
 				 !intel_pt_validate_hw_cap(PT_CAP_single_range_output)))
 				continue;
 			break;
 		case MSR_IA32_RTIT_ADDR0_A ... MSR_IA32_RTIT_ADDR3_B: {
-			if (!kvm_x86_ops->pt_supported() ||
+			if (!kvm_x86_pt_supported() ||
 				msrs_to_save[i] - MSR_IA32_RTIT_ADDR0_A >=
 				intel_pt_validate_hw_cap(PT_CAP_num_address_ranges) * 2)
 				continue;
@@ -5158,7 +5158,7 @@ static void kvm_init_msr_list(void)
 	num_msrs_to_save = j;
 
 	for (i = j = 0; i < ARRAY_SIZE(emulated_msrs); i++) {
-		if (!kvm_x86_ops->has_emulated_msr(emulated_msrs[i]))
+		if (!kvm_x86_has_emulated_msr(emulated_msrs[i]))
 			continue;
 
 		if (j < i)
@@ -5227,13 +5227,13 @@ static int vcpu_mmio_read(struct kvm_vcpu *vcpu, gpa_t addr, int len, void *v)
 static void kvm_set_segment(struct kvm_vcpu *vcpu,
 			struct kvm_segment *var, int seg)
 {
-	kvm_x86_ops->set_segment(vcpu, var, seg);
+	kvm_x86_set_segment(vcpu, var, seg);
 }
 
 void kvm_get_segment(struct kvm_vcpu *vcpu,
 		     struct kvm_segment *var, int seg)
 {
-	kvm_x86_ops->get_segment(vcpu, var, seg);
+	kvm_x86_get_segment(vcpu, var, seg);
 }
 
 gpa_t translate_nested_gpa(struct kvm_vcpu *vcpu, gpa_t gpa, u32 access,
@@ -5253,14 +5253,14 @@ gpa_t translate_nested_gpa(struct kvm_vcpu *vcpu, gpa_t gpa, u32 access,
 gpa_t kvm_mmu_gva_to_gpa_read(struct kvm_vcpu *vcpu, gva_t gva,
 			      struct x86_exception *exception)
 {
-	u32 access = (kvm_x86_ops->get_cpl(vcpu) == 3) ? PFERR_USER_MASK : 0;
+	u32 access = (kvm_x86_get_cpl(vcpu) == 3) ? PFERR_USER_MASK : 0;
 	return vcpu->arch.walk_mmu->gva_to_gpa(vcpu, gva, access, exception);
 }
 
  gpa_t kvm_mmu_gva_to_gpa_fetch(struct kvm_vcpu *vcpu, gva_t gva,
 				struct x86_exception *exception)
 {
-	u32 access = (kvm_x86_ops->get_cpl(vcpu) == 3) ? PFERR_USER_MASK : 0;
+	u32 access = (kvm_x86_get_cpl(vcpu) == 3) ? PFERR_USER_MASK : 0;
 	access |= PFERR_FETCH_MASK;
 	return vcpu->arch.walk_mmu->gva_to_gpa(vcpu, gva, access, exception);
 }
@@ -5268,7 +5268,7 @@ gpa_t kvm_mmu_gva_to_gpa_read(struct kvm_vcpu *vcpu, gva_t gva,
 gpa_t kvm_mmu_gva_to_gpa_write(struct kvm_vcpu *vcpu, gva_t gva,
 			       struct x86_exception *exception)
 {
-	u32 access = (kvm_x86_ops->get_cpl(vcpu) == 3) ? PFERR_USER_MASK : 0;
+	u32 access = (kvm_x86_get_cpl(vcpu) == 3) ? PFERR_USER_MASK : 0;
 	access |= PFERR_WRITE_MASK;
 	return vcpu->arch.walk_mmu->gva_to_gpa(vcpu, gva, access, exception);
 }
@@ -5317,7 +5317,7 @@ static int kvm_fetch_guest_virt(struct x86_emulate_ctxt *ctxt,
 				struct x86_exception *exception)
 {
 	struct kvm_vcpu *vcpu = emul_to_vcpu(ctxt);
-	u32 access = (kvm_x86_ops->get_cpl(vcpu) == 3) ? PFERR_USER_MASK : 0;
+	u32 access = (kvm_x86_get_cpl(vcpu) == 3) ? PFERR_USER_MASK : 0;
 	unsigned offset;
 	int ret;
 
@@ -5342,7 +5342,7 @@ int kvm_read_guest_virt(struct kvm_vcpu *vcpu,
 			       gva_t addr, void *val, unsigned int bytes,
 			       struct x86_exception *exception)
 {
-	u32 access = (kvm_x86_ops->get_cpl(vcpu) == 3) ? PFERR_USER_MASK : 0;
+	u32 access = (kvm_x86_get_cpl(vcpu) == 3) ? PFERR_USER_MASK : 0;
 
 	/*
 	 * FIXME: this should call handle_emulation_failure if X86EMUL_IO_NEEDED
@@ -5363,7 +5363,7 @@ static int emulator_read_std(struct x86_emulate_ctxt *ctxt,
 	struct kvm_vcpu *vcpu = emul_to_vcpu(ctxt);
 	u32 access = 0;
 
-	if (!system && kvm_x86_ops->get_cpl(vcpu) == 3)
+	if (!system && kvm_x86_get_cpl(vcpu) == 3)
 		access |= PFERR_USER_MASK;
 
 	return kvm_read_guest_virt_helper(addr, val, bytes, vcpu, access, exception);
@@ -5416,7 +5416,7 @@ static int emulator_write_std(struct x86_emulate_ctxt *ctxt, gva_t addr, void *v
 	struct kvm_vcpu *vcpu = emul_to_vcpu(ctxt);
 	u32 access = PFERR_WRITE_MASK;
 
-	if (!system && kvm_x86_ops->get_cpl(vcpu) == 3)
+	if (!system && kvm_x86_get_cpl(vcpu) == 3)
 		access |= PFERR_USER_MASK;
 
 	return kvm_write_guest_virt_helper(addr, val, bytes, vcpu,
@@ -5478,7 +5478,7 @@ static int vcpu_mmio_gva_to_gpa(struct kvm_vcpu *vcpu, unsigned long gva,
 				gpa_t *gpa, struct x86_exception *exception,
 				bool write)
 {
-	u32 access = ((kvm_x86_ops->get_cpl(vcpu) == 3) ? PFERR_USER_MASK : 0)
+	u32 access = ((kvm_x86_get_cpl(vcpu) == 3) ? PFERR_USER_MASK : 0)
 		| (write ? PFERR_WRITE_MASK : 0);
 
 	/*
@@ -5866,7 +5866,7 @@ static int emulator_pio_out_emulated(struct x86_emulate_ctxt *ctxt,
 
 static unsigned long get_segment_base(struct kvm_vcpu *vcpu, int seg)
 {
-	return kvm_x86_ops->get_segment_base(vcpu, seg);
+	return kvm_x86_get_segment_base(vcpu, seg);
 }
 
 static void emulator_invlpg(struct x86_emulate_ctxt *ctxt, ulong address)
@@ -5879,7 +5879,7 @@ static int kvm_emulate_wbinvd_noskip(struct kvm_vcpu *vcpu)
 	if (!need_emulate_wbinvd(vcpu))
 		return X86EMUL_CONTINUE;
 
-	if (kvm_x86_ops->has_wbinvd_exit()) {
+	if (kvm_x86_has_wbinvd_exit()) {
 		int cpu = get_cpu();
 
 		cpumask_set_cpu(cpu, vcpu->arch.wbinvd_dirty_mask);
@@ -5984,27 +5984,27 @@ static int emulator_set_cr(struct x86_emulate_ctxt *ctxt, int cr, ulong val)
 
 static int emulator_get_cpl(struct x86_emulate_ctxt *ctxt)
 {
-	return kvm_x86_ops->get_cpl(emul_to_vcpu(ctxt));
+	return kvm_x86_get_cpl(emul_to_vcpu(ctxt));
 }
 
 static void emulator_get_gdt(struct x86_emulate_ctxt *ctxt, struct desc_ptr *dt)
 {
-	kvm_x86_ops->get_gdt(emul_to_vcpu(ctxt), dt);
+	kvm_x86_get_gdt(emul_to_vcpu(ctxt), dt);
 }
 
 static void emulator_get_idt(struct x86_emulate_ctxt *ctxt, struct desc_ptr *dt)
 {
-	kvm_x86_ops->get_idt(emul_to_vcpu(ctxt), dt);
+	kvm_x86_get_idt(emul_to_vcpu(ctxt), dt);
 }
 
 static void emulator_set_gdt(struct x86_emulate_ctxt *ctxt, struct desc_ptr *dt)
 {
-	kvm_x86_ops->set_gdt(emul_to_vcpu(ctxt), dt);
+	kvm_x86_set_gdt(emul_to_vcpu(ctxt), dt);
 }
 
 static void emulator_set_idt(struct x86_emulate_ctxt *ctxt, struct desc_ptr *dt)
 {
-	kvm_x86_ops->set_idt(emul_to_vcpu(ctxt), dt);
+	kvm_x86_set_idt(emul_to_vcpu(ctxt), dt);
 }
 
 static unsigned long emulator_get_cached_segment_base(
@@ -6126,7 +6126,7 @@ static int emulator_intercept(struct x86_emulate_ctxt *ctxt,
 			      struct x86_instruction_info *info,
 			      enum x86_intercept_stage stage)
 {
-	return kvm_x86_ops->check_intercept(emul_to_vcpu(ctxt), info, stage);
+	return kvm_x86_check_intercept(emul_to_vcpu(ctxt), info, stage);
 }
 
 static bool emulator_get_cpuid(struct x86_emulate_ctxt *ctxt,
@@ -6147,7 +6147,7 @@ static void emulator_write_gpr(struct x86_emulate_ctxt *ctxt, unsigned reg, ulon
 
 static void emulator_set_nmi_mask(struct x86_emulate_ctxt *ctxt, bool masked)
 {
-	kvm_x86_ops->set_nmi_mask(emul_to_vcpu(ctxt), masked);
+	kvm_x86_set_nmi_mask(emul_to_vcpu(ctxt), masked);
 }
 
 static unsigned emulator_get_hflags(struct x86_emulate_ctxt *ctxt)
@@ -6163,7 +6163,7 @@ static void emulator_set_hflags(struct x86_emulate_ctxt *ctxt, unsigned emul_fla
 static int emulator_pre_leave_smm(struct x86_emulate_ctxt *ctxt,
 				  const char *smstate)
 {
-	return kvm_x86_ops->pre_leave_smm(emul_to_vcpu(ctxt), smstate);
+	return kvm_x86_pre_leave_smm(emul_to_vcpu(ctxt), smstate);
 }
 
 static void emulator_post_leave_smm(struct x86_emulate_ctxt *ctxt)
@@ -6222,7 +6222,7 @@ static const struct x86_emulate_ops emulate_ops = {
 
 static void toggle_interruptibility(struct kvm_vcpu *vcpu, u32 mask)
 {
-	u32 int_shadow = kvm_x86_ops->get_interrupt_shadow(vcpu);
+	u32 int_shadow = kvm_x86_get_interrupt_shadow(vcpu);
 	/*
 	 * an sti; sti; sequence only disable interrupts for the first
 	 * instruction. So, if the last instruction, be it emulated or
@@ -6233,7 +6233,7 @@ static void toggle_interruptibility(struct kvm_vcpu *vcpu, u32 mask)
 	if (int_shadow & mask)
 		mask = 0;
 	if (unlikely(int_shadow || mask)) {
-		kvm_x86_ops->set_interrupt_shadow(vcpu, mask);
+		kvm_x86_set_interrupt_shadow(vcpu, mask);
 		if (!mask)
 			kvm_make_request(KVM_REQ_EVENT, vcpu);
 	}
@@ -6258,7 +6258,7 @@ static void init_emulate_ctxt(struct kvm_vcpu *vcpu)
 	struct x86_emulate_ctxt *ctxt = &vcpu->arch.emulate_ctxt;
 	int cs_db, cs_l;
 
-	kvm_x86_ops->get_cs_db_l_bits(vcpu, &cs_db, &cs_l);
+	kvm_x86_get_cs_db_l_bits(vcpu, &cs_db, &cs_l);
 
 	ctxt->eflags = kvm_get_rflags(vcpu);
 	ctxt->tf = (ctxt->eflags & X86_EFLAGS_TF) != 0;
@@ -6318,7 +6318,7 @@ static int handle_emulation_failure(struct kvm_vcpu *vcpu, int emulation_type)
 
 	kvm_queue_exception(vcpu, UD_VECTOR);
 
-	if (!is_guest_mode(vcpu) && kvm_x86_ops->get_cpl(vcpu) == 0) {
+	if (!is_guest_mode(vcpu) && kvm_x86_get_cpl(vcpu) == 0) {
 		vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
 		vcpu->run->internal.suberror = KVM_INTERNAL_ERROR_EMULATION;
 		vcpu->run->internal.ndata = 0;
@@ -6497,10 +6497,10 @@ static int kvm_vcpu_do_singlestep(struct kvm_vcpu *vcpu)
 
 int kvm_skip_emulated_instruction(struct kvm_vcpu *vcpu)
 {
-	unsigned long rflags = kvm_x86_ops->get_rflags(vcpu);
+	unsigned long rflags = kvm_x86_get_rflags(vcpu);
 	int r;
 
-	r = kvm_x86_ops->skip_emulated_instruction(vcpu);
+	r = kvm_x86_skip_emulated_instruction(vcpu);
 	if (unlikely(!r))
 		return 0;
 
@@ -6726,7 +6726,7 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu,
 		r = 1;
 
 	if (writeback) {
-		unsigned long rflags = kvm_x86_ops->get_rflags(vcpu);
+		unsigned long rflags = kvm_x86_get_rflags(vcpu);
 		toggle_interruptibility(vcpu, ctxt->interruptibility);
 		vcpu->arch.emulate_regs_need_sync_to_vcpu = false;
 		if (!ctxt->have_exception ||
@@ -7062,7 +7062,7 @@ static int kvm_is_user_mode(void)
 	int user_mode = 3;
 
 	if (__this_cpu_read(current_vcpu))
-		user_mode = kvm_x86_ops->get_cpl(__this_cpu_read(current_vcpu));
+		user_mode = kvm_x86_get_cpl(__this_cpu_read(current_vcpu));
 
 	return user_mode != 0;
 }
@@ -7326,7 +7326,7 @@ void kvm_vcpu_deactivate_apicv(struct kvm_vcpu *vcpu)
 		return;
 
 	vcpu->arch.apicv_active = false;
-	kvm_x86_ops->refresh_apicv_exec_ctrl(vcpu);
+	kvm_x86_refresh_apicv_exec_ctrl(vcpu);
 }
 
 static void kvm_sched_yield(struct kvm *kvm, unsigned long dest_id)
@@ -7371,7 +7371,7 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
 		a3 &= 0xFFFFFFFF;
 	}
 
-	if (kvm_x86_ops->get_cpl(vcpu) != 0) {
+	if (kvm_x86_get_cpl(vcpu) != 0) {
 		ret = -KVM_EPERM;
 		goto out;
 	}
@@ -7417,7 +7417,7 @@ static int emulator_fix_hypercall(struct x86_emulate_ctxt *ctxt)
 	char instruction[3];
 	unsigned long rip = kvm_rip_read(vcpu);
 
-	kvm_x86_ops->patch_hypercall(vcpu, instruction);
+	kvm_x86_patch_hypercall(vcpu, instruction);
 
 	return emulator_write_emulated(ctxt, rip, instruction, 3,
 		&ctxt->exception);
@@ -7465,7 +7465,7 @@ static void update_cr8_intercept(struct kvm_vcpu *vcpu)
 
 	tpr = kvm_lapic_get_cr8(vcpu);
 
-	kvm_x86_ops->update_cr8_intercept(vcpu, tpr, max_irr);
+	kvm_x86_update_cr8_intercept(vcpu, tpr, max_irr);
 }
 
 static int inject_pending_event(struct kvm_vcpu *vcpu, bool req_int_win)
@@ -7475,7 +7475,7 @@ static int inject_pending_event(struct kvm_vcpu *vcpu, bool req_int_win)
 	/* try to reinject previous events if any */
 
 	if (vcpu->arch.exception.injected)
-		kvm_x86_ops->queue_exception(vcpu);
+		kvm_x86_queue_exception(vcpu);
 	/*
 	 * Do not inject an NMI or interrupt if there is a pending
 	 * exception.  Exceptions and interrupts are recognized at
@@ -7492,9 +7492,9 @@ static int inject_pending_event(struct kvm_vcpu *vcpu, bool req_int_win)
 	 */
 	else if (!vcpu->arch.exception.pending) {
 		if (vcpu->arch.nmi_injected)
-			kvm_x86_ops->set_nmi(vcpu);
+			kvm_x86_set_nmi(vcpu);
 		else if (vcpu->arch.interrupt.injected)
-			kvm_x86_ops->set_irq(vcpu);
+			kvm_x86_set_irq(vcpu);
 	}
 
 	/*
@@ -7504,7 +7504,7 @@ static int inject_pending_event(struct kvm_vcpu *vcpu, bool req_int_win)
 	 * from L2 to L1.
 	 */
 	if (is_guest_mode(vcpu) && kvm_x86_ops->check_nested_events) {
-		r = kvm_x86_ops->check_nested_events(vcpu, req_int_win);
+		r = kvm_x86_check_nested_events(vcpu, req_int_win);
 		if (r != 0)
 			return r;
 	}
@@ -7541,7 +7541,7 @@ static int inject_pending_event(struct kvm_vcpu *vcpu, bool req_int_win)
 			}
 		}
 
-		kvm_x86_ops->queue_exception(vcpu);
+		kvm_x86_queue_exception(vcpu);
 	}
 
 	/* Don't consider new event if we re-injected an event */
@@ -7549,14 +7549,14 @@ static int inject_pending_event(struct kvm_vcpu *vcpu, bool req_int_win)
 		return 0;
 
 	if (vcpu->arch.smi_pending && !is_smm(vcpu) &&
-	    kvm_x86_ops->smi_allowed(vcpu)) {
+	    kvm_x86_smi_allowed(vcpu)) {
 		vcpu->arch.smi_pending = false;
 		++vcpu->arch.smi_count;
 		enter_smm(vcpu);
-	} else if (vcpu->arch.nmi_pending && kvm_x86_ops->nmi_allowed(vcpu)) {
+	} else if (vcpu->arch.nmi_pending && kvm_x86_nmi_allowed(vcpu)) {
 		--vcpu->arch.nmi_pending;
 		vcpu->arch.nmi_injected = true;
-		kvm_x86_ops->set_nmi(vcpu);
+		kvm_x86_set_nmi(vcpu);
 	} else if (kvm_cpu_has_injectable_intr(vcpu)) {
 		/*
 		 * Because interrupts can be injected asynchronously, we are
@@ -7566,14 +7566,14 @@ static int inject_pending_event(struct kvm_vcpu *vcpu, bool req_int_win)
 		 * KVM_REQ_EVENT only on certain events and not unconditionally?
 		 */
 		if (is_guest_mode(vcpu) && kvm_x86_ops->check_nested_events) {
-			r = kvm_x86_ops->check_nested_events(vcpu, req_int_win);
+			r = kvm_x86_check_nested_events(vcpu, req_int_win);
 			if (r != 0)
 				return r;
 		}
-		if (kvm_x86_ops->interrupt_allowed(vcpu)) {
+		if (kvm_x86_interrupt_allowed(vcpu)) {
 			kvm_queue_interrupt(vcpu, kvm_cpu_get_interrupt(vcpu),
 					    false);
-			kvm_x86_ops->set_irq(vcpu);
+			kvm_x86_set_irq(vcpu);
 		}
 	}
 
@@ -7589,7 +7589,7 @@ static void process_nmi(struct kvm_vcpu *vcpu)
 	 * If an NMI is already in progress, limit further NMIs to just one.
 	 * Otherwise, allow two (and we'll inject the first one immediately).
 	 */
-	if (kvm_x86_ops->get_nmi_mask(vcpu) || vcpu->arch.nmi_injected)
+	if (kvm_x86_get_nmi_mask(vcpu) || vcpu->arch.nmi_injected)
 		limit = 1;
 
 	vcpu->arch.nmi_pending += atomic_xchg(&vcpu->arch.nmi_queued, 0);
@@ -7679,11 +7679,11 @@ static void enter_smm_save_state_32(struct kvm_vcpu *vcpu, char *buf)
 	put_smstate(u32, buf, 0x7f7c, seg.limit);
 	put_smstate(u32, buf, 0x7f78, enter_smm_get_segment_flags(&seg));
 
-	kvm_x86_ops->get_gdt(vcpu, &dt);
+	kvm_x86_get_gdt(vcpu, &dt);
 	put_smstate(u32, buf, 0x7f74, dt.address);
 	put_smstate(u32, buf, 0x7f70, dt.size);
 
-	kvm_x86_ops->get_idt(vcpu, &dt);
+	kvm_x86_get_idt(vcpu, &dt);
 	put_smstate(u32, buf, 0x7f58, dt.address);
 	put_smstate(u32, buf, 0x7f54, dt.size);
 
@@ -7733,7 +7733,7 @@ static void enter_smm_save_state_64(struct kvm_vcpu *vcpu, char *buf)
 	put_smstate(u32, buf, 0x7e94, seg.limit);
 	put_smstate(u64, buf, 0x7e98, seg.base);
 
-	kvm_x86_ops->get_idt(vcpu, &dt);
+	kvm_x86_get_idt(vcpu, &dt);
 	put_smstate(u32, buf, 0x7e84, dt.size);
 	put_smstate(u64, buf, 0x7e88, dt.address);
 
@@ -7743,7 +7743,7 @@ static void enter_smm_save_state_64(struct kvm_vcpu *vcpu, char *buf)
 	put_smstate(u32, buf, 0x7e74, seg.limit);
 	put_smstate(u64, buf, 0x7e78, seg.base);
 
-	kvm_x86_ops->get_gdt(vcpu, &dt);
+	kvm_x86_get_gdt(vcpu, &dt);
 	put_smstate(u32, buf, 0x7e64, dt.size);
 	put_smstate(u64, buf, 0x7e68, dt.address);
 
@@ -7773,28 +7773,28 @@ static void enter_smm(struct kvm_vcpu *vcpu)
 	 * vCPU state (e.g. leave guest mode) after we've saved the state into
 	 * the SMM state-save area.
 	 */
-	kvm_x86_ops->pre_enter_smm(vcpu, buf);
+	kvm_x86_pre_enter_smm(vcpu, buf);
 
 	vcpu->arch.hflags |= HF_SMM_MASK;
 	kvm_vcpu_write_guest(vcpu, vcpu->arch.smbase + 0xfe00, buf, sizeof(buf));
 
-	if (kvm_x86_ops->get_nmi_mask(vcpu))
+	if (kvm_x86_get_nmi_mask(vcpu))
 		vcpu->arch.hflags |= HF_SMM_INSIDE_NMI_MASK;
 	else
-		kvm_x86_ops->set_nmi_mask(vcpu, true);
+		kvm_x86_set_nmi_mask(vcpu, true);
 
 	kvm_set_rflags(vcpu, X86_EFLAGS_FIXED);
 	kvm_rip_write(vcpu, 0x8000);
 
 	cr0 = vcpu->arch.cr0 & ~(X86_CR0_PE | X86_CR0_EM | X86_CR0_TS | X86_CR0_PG);
-	kvm_x86_ops->set_cr0(vcpu, cr0);
+	kvm_x86_set_cr0(vcpu, cr0);
 	vcpu->arch.cr0 = cr0;
 
-	kvm_x86_ops->set_cr4(vcpu, 0);
+	kvm_x86_set_cr4(vcpu, 0);
 
 	/* Undocumented: IDT limit is set to zero on entry to SMM.  */
 	dt.address = dt.size = 0;
-	kvm_x86_ops->set_idt(vcpu, &dt);
+	kvm_x86_set_idt(vcpu, &dt);
 
 	__kvm_set_dr(vcpu, 7, DR7_FIXED_1);
 
@@ -7825,7 +7825,7 @@ static void enter_smm(struct kvm_vcpu *vcpu)
 
 #ifdef CONFIG_X86_64
 	if (guest_cpuid_has(vcpu, X86_FEATURE_LM))
-		kvm_x86_ops->set_efer(vcpu, 0);
+		kvm_x86_set_efer(vcpu, 0);
 #endif
 
 	kvm_update_cpuid(vcpu);
@@ -7854,7 +7854,7 @@ static void vcpu_scan_ioapic(struct kvm_vcpu *vcpu)
 		kvm_scan_ioapic_routes(vcpu, vcpu->arch.ioapic_handled_vectors);
 	else {
 		if (vcpu->arch.apicv_active)
-			kvm_x86_ops->sync_pir_to_irr(vcpu);
+			kvm_x86_sync_pir_to_irr(vcpu);
 		if (ioapic_in_kernel(vcpu->kvm))
 			kvm_ioapic_scan_entry(vcpu, vcpu->arch.ioapic_handled_vectors);
 	}
@@ -7874,7 +7874,7 @@ static void vcpu_load_eoi_exitmap(struct kvm_vcpu *vcpu)
 
 	bitmap_or((ulong *)eoi_exit_bitmap, vcpu->arch.ioapic_handled_vectors,
 		  vcpu_to_synic(vcpu)->vec_bitmap, 256);
-	kvm_x86_ops->load_eoi_exitmap(vcpu, eoi_exit_bitmap);
+	kvm_x86_load_eoi_exitmap(vcpu, eoi_exit_bitmap);
 }
 
 int kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
@@ -7907,7 +7907,7 @@ void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
 	page = gfn_to_page(vcpu->kvm, APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT);
 	if (is_error_page(page))
 		return;
-	kvm_x86_ops->set_apic_access_page_addr(vcpu, page_to_phys(page));
+	kvm_x86_set_apic_access_page_addr(vcpu, page_to_phys(page));
 
 	/*
 	 * Do not pin apic access page in memory, the MMU notifier
@@ -7939,7 +7939,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 
 	if (kvm_request_pending(vcpu)) {
 		if (kvm_check_request(KVM_REQ_GET_VMCS12_PAGES, vcpu)) {
-			if (unlikely(!kvm_x86_ops->get_vmcs12_pages(vcpu))) {
+			if (unlikely(!kvm_x86_get_vmcs12_pages(vcpu))) {
 				r = 0;
 				goto out;
 			}
@@ -8061,12 +8061,12 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 			 *    SMI.
 			 */
 			if (vcpu->arch.smi_pending && !is_smm(vcpu))
-				if (!kvm_x86_ops->enable_smi_window(vcpu))
+				if (!kvm_x86_enable_smi_window(vcpu))
 					req_immediate_exit = true;
 			if (vcpu->arch.nmi_pending)
-				kvm_x86_ops->enable_nmi_window(vcpu);
+				kvm_x86_enable_nmi_window(vcpu);
 			if (kvm_cpu_has_injectable_intr(vcpu) || req_int_win)
-				kvm_x86_ops->enable_irq_window(vcpu);
+				kvm_x86_enable_irq_window(vcpu);
 			WARN_ON(vcpu->arch.exception.pending);
 		}
 
@@ -8083,7 +8083,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 
 	preempt_disable();
 
-	kvm_x86_ops->prepare_guest_switch(vcpu);
+	kvm_x86_prepare_guest_switch(vcpu);
 
 	/*
 	 * Disable IRQs before setting IN_GUEST_MODE.  Posted interrupt
@@ -8114,7 +8114,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 	 * notified with kvm_vcpu_kick.
 	 */
 	if (kvm_lapic_enabled(vcpu) && vcpu->arch.apicv_active)
-		kvm_x86_ops->sync_pir_to_irr(vcpu);
+		kvm_x86_sync_pir_to_irr(vcpu);
 
 	if (vcpu->mode == EXITING_GUEST_MODE || kvm_request_pending(vcpu)
 	    || need_resched() || signal_pending(current)) {
@@ -8129,7 +8129,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 
 	if (req_immediate_exit) {
 		kvm_make_request(KVM_REQ_EVENT, vcpu);
-		kvm_x86_ops->request_immediate_exit(vcpu);
+		kvm_x86_request_immediate_exit(vcpu);
 	}
 
 	trace_kvm_entry(vcpu->vcpu_id);
@@ -8148,7 +8148,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 		vcpu->arch.switch_db_regs &= ~KVM_DEBUGREG_RELOAD;
 	}
 
-	kvm_x86_ops->run(vcpu);
+	kvm_x86_run(vcpu);
 
 	/*
 	 * Do this here before restoring debug registers on the host.  And
@@ -8158,7 +8158,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 	 */
 	if (unlikely(vcpu->arch.switch_db_regs & KVM_DEBUGREG_WONT_EXIT)) {
 		WARN_ON(vcpu->guest_debug & KVM_GUESTDBG_USE_HW_BP);
-		kvm_x86_ops->sync_dirty_debug_regs(vcpu);
+		kvm_x86_sync_dirty_debug_regs(vcpu);
 		kvm_update_dr0123(vcpu);
 		kvm_update_dr6(vcpu);
 		kvm_update_dr7(vcpu);
@@ -8180,7 +8180,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 	vcpu->mode = OUTSIDE_GUEST_MODE;
 	smp_wmb();
 
-	kvm_x86_ops->handle_exit_irqoff(vcpu);
+	kvm_x86_handle_exit_irqoff(vcpu);
 
 	/*
 	 * Consume any pending interrupts, including the possible source of
@@ -8224,11 +8224,11 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 		kvm_lapic_sync_from_vapic(vcpu);
 
 	vcpu->arch.gpa_available = false;
-	r = kvm_x86_ops->handle_exit(vcpu);
+	r = kvm_x86_handle_exit(vcpu);
 	return r;
 
 cancel_injection:
-	kvm_x86_ops->cancel_injection(vcpu);
+	kvm_x86_cancel_injection(vcpu);
 	if (unlikely(vcpu->arch.apic_attention))
 		kvm_lapic_sync_from_vapic(vcpu);
 out:
@@ -8238,13 +8238,13 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 static inline int vcpu_block(struct kvm *kvm, struct kvm_vcpu *vcpu)
 {
 	if (!kvm_arch_vcpu_runnable(vcpu) &&
-	    (!kvm_x86_ops->pre_block || kvm_x86_ops->pre_block(vcpu) == 0)) {
+	    (!kvm_x86_ops->pre_block || kvm_x86_pre_block(vcpu) == 0)) {
 		srcu_read_unlock(&kvm->srcu, vcpu->srcu_idx);
 		kvm_vcpu_block(vcpu);
 		vcpu->srcu_idx = srcu_read_lock(&kvm->srcu);
 
 		if (kvm_x86_ops->post_block)
-			kvm_x86_ops->post_block(vcpu);
+			kvm_x86_post_block(vcpu);
 
 		if (!kvm_check_request(KVM_REQ_UNHALT, vcpu))
 			return 1;
@@ -8272,7 +8272,7 @@ static inline int vcpu_block(struct kvm *kvm, struct kvm_vcpu *vcpu)
 static inline bool kvm_vcpu_running(struct kvm_vcpu *vcpu)
 {
 	if (is_guest_mode(vcpu) && kvm_x86_ops->check_nested_events)
-		kvm_x86_ops->check_nested_events(vcpu, false);
+		kvm_x86_check_nested_events(vcpu, false);
 
 	return (vcpu->arch.mp_state == KVM_MP_STATE_RUNNABLE &&
 		!vcpu->arch.apf.halted);
@@ -8616,10 +8616,10 @@ static void __get_sregs(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
 	kvm_get_segment(vcpu, &sregs->tr, VCPU_SREG_TR);
 	kvm_get_segment(vcpu, &sregs->ldt, VCPU_SREG_LDTR);
 
-	kvm_x86_ops->get_idt(vcpu, &dt);
+	kvm_x86_get_idt(vcpu, &dt);
 	sregs->idt.limit = dt.size;
 	sregs->idt.base = dt.address;
-	kvm_x86_ops->get_gdt(vcpu, &dt);
+	kvm_x86_get_gdt(vcpu, &dt);
 	sregs->gdt.limit = dt.size;
 	sregs->gdt.base = dt.address;
 
@@ -8759,10 +8759,10 @@ static int __set_sregs(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
 
 	dt.size = sregs->idt.limit;
 	dt.address = sregs->idt.base;
-	kvm_x86_ops->set_idt(vcpu, &dt);
+	kvm_x86_set_idt(vcpu, &dt);
 	dt.size = sregs->gdt.limit;
 	dt.address = sregs->gdt.base;
-	kvm_x86_ops->set_gdt(vcpu, &dt);
+	kvm_x86_set_gdt(vcpu, &dt);
 
 	vcpu->arch.cr2 = sregs->cr2;
 	mmu_reset_needed |= kvm_read_cr3(vcpu) != sregs->cr3;
@@ -8772,16 +8772,16 @@ static int __set_sregs(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
 	kvm_set_cr8(vcpu, sregs->cr8);
 
 	mmu_reset_needed |= vcpu->arch.efer != sregs->efer;
-	kvm_x86_ops->set_efer(vcpu, sregs->efer);
+	kvm_x86_set_efer(vcpu, sregs->efer);
 
 	mmu_reset_needed |= kvm_read_cr0(vcpu) != sregs->cr0;
-	kvm_x86_ops->set_cr0(vcpu, sregs->cr0);
+	kvm_x86_set_cr0(vcpu, sregs->cr0);
 	vcpu->arch.cr0 = sregs->cr0;
 
 	mmu_reset_needed |= kvm_read_cr4(vcpu) != sregs->cr4;
 	cpuid_update_needed |= ((kvm_read_cr4(vcpu) ^ sregs->cr4) &
 				(X86_CR4_OSXSAVE | X86_CR4_PKE));
-	kvm_x86_ops->set_cr4(vcpu, sregs->cr4);
+	kvm_x86_set_cr4(vcpu, sregs->cr4);
 	if (cpuid_update_needed)
 		kvm_update_cpuid(vcpu);
 
@@ -8887,7 +8887,7 @@ int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu,
 	 */
 	kvm_set_rflags(vcpu, rflags);
 
-	kvm_x86_ops->update_bp_intercept(vcpu);
+	kvm_x86_update_bp_intercept(vcpu);
 
 	r = 0;
 
@@ -9021,7 +9021,7 @@ void kvm_arch_vcpu_free(struct kvm_vcpu *vcpu)
 
 	kvmclock_reset(vcpu);
 
-	kvm_x86_ops->vcpu_free(vcpu);
+	kvm_x86_vcpu_free(vcpu);
 	free_cpumask_var(wbinvd_dirty_mask);
 }
 
@@ -9035,7 +9035,7 @@ struct kvm_vcpu *kvm_arch_vcpu_create(struct kvm *kvm,
 		"kvm: SMP vm created on host with unstable TSC; "
 		"guest TSC will not be reliable\n");
 
-	vcpu = kvm_x86_ops->vcpu_create(kvm, id);
+	vcpu = kvm_x86_vcpu_create(kvm, id);
 
 	return vcpu;
 }
@@ -9088,7 +9088,7 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
 	kvm_mmu_unload(vcpu);
 	vcpu_put(vcpu);
 
-	kvm_x86_ops->vcpu_free(vcpu);
+	kvm_x86_vcpu_free(vcpu);
 }
 
 void kvm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
@@ -9161,7 +9161,7 @@ void kvm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 
 	vcpu->arch.ia32_xss = 0;
 
-	kvm_x86_ops->vcpu_reset(vcpu, init_event);
+	kvm_x86_vcpu_reset(vcpu, init_event);
 }
 
 void kvm_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector)
@@ -9186,7 +9186,7 @@ int kvm_arch_hardware_enable(void)
 	bool stable, backwards_tsc = false;
 
 	kvm_shared_msr_cpu_online();
-	ret = kvm_x86_ops->hardware_enable();
+	ret = kvm_x86_hardware_enable();
 	if (ret != 0)
 		return ret;
 
@@ -9268,7 +9268,7 @@ int kvm_arch_hardware_enable(void)
 
 void kvm_arch_hardware_disable(void)
 {
-	kvm_x86_ops->hardware_disable();
+	kvm_x86_hardware_disable();
 	drop_user_return_notifiers();
 }
 
@@ -9276,7 +9276,7 @@ int kvm_arch_hardware_setup(void)
 {
 	int r;
 
-	r = kvm_x86_ops->hardware_setup();
+	r = kvm_x86_hardware_setup();
 	if (r != 0)
 		return r;
 
@@ -9300,12 +9300,12 @@ int kvm_arch_hardware_setup(void)
 
 void kvm_arch_hardware_unsetup(void)
 {
-	kvm_x86_ops->hardware_unsetup();
+	kvm_x86_hardware_unsetup();
 }
 
 int kvm_arch_check_processor_compat(void)
 {
-	return kvm_x86_ops->check_processor_compatibility();
+	return kvm_x86_check_processor_compatibility();
 }
 
 bool kvm_vcpu_is_reset_bsp(struct kvm_vcpu *vcpu)
@@ -9347,7 +9347,7 @@ int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu)
 		goto fail_free_pio_data;
 
 	if (irqchip_in_kernel(vcpu->kvm)) {
-		vcpu->arch.apicv_active = kvm_x86_ops->get_enable_apicv(vcpu);
+		vcpu->arch.apicv_active = kvm_x86_get_enable_apicv(vcpu);
 		r = kvm_create_lapic(vcpu, lapic_timer_advance_ns);
 		if (r < 0)
 			goto fail_mmu_destroy;
@@ -9417,7 +9417,7 @@ void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu)
 void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu)
 {
 	vcpu->arch.l1tf_flush_l1d = true;
-	kvm_x86_ops->sched_in(vcpu, cpu);
+	kvm_x86_sched_in(vcpu, cpu);
 }
 
 int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
@@ -9453,7 +9453,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 	kvm_page_track_init(kvm);
 	kvm_mmu_init_vm(kvm);
 
-	return kvm_x86_ops->vm_init(kvm);
+	return kvm_x86_vm_init(kvm);
 }
 
 static void kvm_unload_vcpu_mmu(struct kvm_vcpu *vcpu)
@@ -9570,7 +9570,7 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
 		x86_set_memory_region(kvm, TSS_PRIVATE_MEMSLOT, 0, 0);
 	}
 	if (kvm_x86_ops->vm_destroy)
-		kvm_x86_ops->vm_destroy(kvm);
+		kvm_x86_vm_destroy(kvm);
 	kvm_pic_destroy(kvm);
 	kvm_ioapic_destroy(kvm);
 	kvm_free_vcpus(kvm);
@@ -9727,12 +9727,12 @@ static void kvm_mmu_slot_apply_flags(struct kvm *kvm,
 	 */
 	if (new->flags & KVM_MEM_LOG_DIRTY_PAGES) {
 		if (kvm_x86_ops->slot_enable_log_dirty)
-			kvm_x86_ops->slot_enable_log_dirty(kvm, new);
+			kvm_x86_slot_enable_log_dirty(kvm, new);
 		else
 			kvm_mmu_slot_remove_write_access(kvm, new);
 	} else {
 		if (kvm_x86_ops->slot_disable_log_dirty)
-			kvm_x86_ops->slot_disable_log_dirty(kvm, new);
+			kvm_x86_slot_disable_log_dirty(kvm, new);
 	}
 }
 
@@ -9797,7 +9797,7 @@ static inline bool kvm_guest_apic_has_interrupt(struct kvm_vcpu *vcpu)
 {
 	return (is_guest_mode(vcpu) &&
 			kvm_x86_ops->guest_apic_has_interrupt &&
-			kvm_x86_ops->guest_apic_has_interrupt(vcpu));
+			kvm_x86_guest_apic_has_interrupt(vcpu));
 }
 
 static inline bool kvm_vcpu_has_events(struct kvm_vcpu *vcpu)
@@ -9816,7 +9816,7 @@ static inline bool kvm_vcpu_has_events(struct kvm_vcpu *vcpu)
 
 	if (kvm_test_request(KVM_REQ_NMI, vcpu) ||
 	    (vcpu->arch.nmi_pending &&
-	     kvm_x86_ops->nmi_allowed(vcpu)))
+	     kvm_x86_nmi_allowed(vcpu)))
 		return true;
 
 	if (kvm_test_request(KVM_REQ_SMI, vcpu) ||
@@ -9849,7 +9849,7 @@ bool kvm_arch_dy_runnable(struct kvm_vcpu *vcpu)
 		 kvm_test_request(KVM_REQ_EVENT, vcpu))
 		return true;
 
-	if (vcpu->arch.apicv_active && kvm_x86_ops->dy_apicv_has_pending_interrupt(vcpu))
+	if (vcpu->arch.apicv_active && kvm_x86_dy_apicv_has_pending_interrupt(vcpu))
 		return true;
 
 	return false;
@@ -9867,7 +9867,7 @@ int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu)
 
 int kvm_arch_interrupt_allowed(struct kvm_vcpu *vcpu)
 {
-	return kvm_x86_ops->interrupt_allowed(vcpu);
+	return kvm_x86_interrupt_allowed(vcpu);
 }
 
 unsigned long kvm_get_linear_rip(struct kvm_vcpu *vcpu)
@@ -9889,7 +9889,7 @@ unsigned long kvm_get_rflags(struct kvm_vcpu *vcpu)
 {
 	unsigned long rflags;
 
-	rflags = kvm_x86_ops->get_rflags(vcpu);
+	rflags = kvm_x86_get_rflags(vcpu);
 	if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP)
 		rflags &= ~X86_EFLAGS_TF;
 	return rflags;
@@ -9901,7 +9901,7 @@ static void __kvm_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags)
 	if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP &&
 	    kvm_is_linear_rip(vcpu, vcpu->arch.singlestep_rip))
 		rflags |= X86_EFLAGS_TF;
-	kvm_x86_ops->set_rflags(vcpu, rflags);
+	kvm_x86_set_rflags(vcpu, rflags);
 }
 
 void kvm_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags)
@@ -10012,7 +10012,7 @@ static bool kvm_can_deliver_async_pf(struct kvm_vcpu *vcpu)
 
 	if (!(vcpu->arch.apf.msr_val & KVM_ASYNC_PF_ENABLED) ||
 	    (vcpu->arch.apf.send_user_only &&
-	     kvm_x86_ops->get_cpl(vcpu) == 0))
+	     kvm_x86_get_cpl(vcpu) == 0))
 		return false;
 
 	return true;
@@ -10032,7 +10032,7 @@ bool kvm_can_do_async_pf(struct kvm_vcpu *vcpu)
 	 * If interrupts are off we cannot even use an artificial
 	 * halt state.
 	 */
-	return kvm_x86_ops->interrupt_allowed(vcpu);
+	return kvm_x86_interrupt_allowed(vcpu);
 }
 
 void kvm_arch_async_page_not_present(struct kvm_vcpu *vcpu,
@@ -10161,7 +10161,7 @@ int kvm_arch_irq_bypass_add_producer(struct irq_bypass_consumer *cons,
 
 	irqfd->producer = prod;
 
-	return kvm_x86_ops->update_pi_irte(irqfd->kvm,
+	return kvm_x86_update_pi_irte(irqfd->kvm,
 					   prod->irq, irqfd->gsi, 1);
 }
 
@@ -10181,7 +10181,7 @@ void kvm_arch_irq_bypass_del_producer(struct irq_bypass_consumer *cons,
 	 * when the irq is masked/disabled or the consumer side (KVM
 	 * int this case doesn't want to receive the interrupts.
 	*/
-	ret = kvm_x86_ops->update_pi_irte(irqfd->kvm, prod->irq, irqfd->gsi, 0);
+	ret = kvm_x86_update_pi_irte(irqfd->kvm, prod->irq, irqfd->gsi, 0);
 	if (ret)
 		printk(KERN_INFO "irq bypass consumer (token %p) unregistration"
 		       " fails: %d\n", irqfd->consumer.token, ret);
@@ -10190,7 +10190,7 @@ void kvm_arch_irq_bypass_del_producer(struct irq_bypass_consumer *cons,
 int kvm_arch_update_irqfd_routing(struct kvm *kvm, unsigned int host_irq,
 				   uint32_t guest_irq, bool set)
 {
-	return kvm_x86_ops->update_pi_irte(kvm, host_irq, guest_irq, set);
+	return kvm_x86_update_pi_irte(kvm, host_irq, guest_irq, set);
 }
 
 bool kvm_vector_hashing_enabled(void)
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index dbf7442a822b..7c5bd68443a3 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -96,7 +96,7 @@ static inline bool is_64_bit_mode(struct kvm_vcpu *vcpu)
 
 	if (!is_long_mode(vcpu))
 		return false;
-	kvm_x86_ops->get_cs_db_l_bits(vcpu, &cs_db, &cs_l);
+	kvm_x86_get_cs_db_l_bits(vcpu, &cs_db, &cs_l);
 	return cs_l;
 }
 


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 03/13] kvm: monolithic: fixup x86-32 build
  2019-11-04 22:59 [PATCH 00/13] KVM monolithic v3 Andrea Arcangeli
  2019-11-04 22:59 ` [PATCH 01/13] KVM: monolithic: x86: remove kvm.ko Andrea Arcangeli
  2019-11-04 22:59 ` [PATCH 02/13] KVM: monolithic: x86: convert the kvm_x86_ops and kvm_pmu_ops methods to external functions Andrea Arcangeli
@ 2019-11-04 22:59 ` Andrea Arcangeli
  2019-11-05 10:04   ` Paolo Bonzini
  2019-11-04 22:59 ` [PATCH 04/13] KVM: monolithic: x86: handle the request_immediate_exit variation Andrea Arcangeli
                   ` (9 subsequent siblings)
  12 siblings, 1 reply; 34+ messages in thread
From: Andrea Arcangeli @ 2019-11-04 22:59 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: Paolo Bonzini, Vitaly Kuznetsov, Sean Christopherson

kvm_x86_set_hv_timer and kvm_x86_cancel_hv_timer need to be defined
for the 32-bit kernel build to succeed, but they cannot actually be
called there.

Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
---
 arch/x86/kvm/vmx/vmx.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index bd17ad61f7e3..1a58ae38c8f2 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7195,6 +7195,17 @@ void kvm_x86_cancel_hv_timer(struct kvm_vcpu *vcpu)
 {
 	to_vmx(vcpu)->hv_deadline_tsc = -1;
 }
+#else
+int kvm_x86_set_hv_timer(struct kvm_vcpu *vcpu, u64 guest_deadline_tsc,
+			 bool *expired)
+{
+	BUG();
+}
+
+void kvm_x86_cancel_hv_timer(struct kvm_vcpu *vcpu)
+{
+	BUG();
+}
 #endif
 
 void kvm_x86_sched_in(struct kvm_vcpu *vcpu, int cpu)


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 04/13] KVM: monolithic: x86: handle the request_immediate_exit variation
  2019-11-04 22:59 [PATCH 00/13] KVM monolithic v3 Andrea Arcangeli
                   ` (2 preceding siblings ...)
  2019-11-04 22:59 ` [PATCH 03/13] kvm: monolithic: fixup x86-32 build Andrea Arcangeli
@ 2019-11-04 22:59 ` Andrea Arcangeli
  2019-11-04 22:59 ` [PATCH 05/13] KVM: monolithic: add more section prefixes Andrea Arcangeli
                   ` (8 subsequent siblings)
  12 siblings, 0 replies; 34+ messages in thread
From: Andrea Arcangeli @ 2019-11-04 22:59 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: Paolo Bonzini, Vitaly Kuznetsov, Sean Christopherson

request_immediate_exit is one of the few cases where the method's
function pointer isn't fixed at build time: it requires special
handling because hardware_setup() may override it at runtime.
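
For context, in the modular model this case was handled by repointing
the op at runtime, roughly like the following sketch (exact code may
differ):

	/* hardware_setup(): fall back when the preemption timer is off */
	if (!enable_preemption_timer)
		kvm_x86_ops->request_immediate_exit = __kvm_request_immediate_exit;

With a direct call there is no pointer left to repoint, so the check
has to move into kvm_x86_request_immediate_exit() itself, as the hunk
below does.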

Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
---
 arch/x86/kvm/vmx/vmx.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 1a58ae38c8f2..9c5f0c67b899 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7103,7 +7103,10 @@ void kvm_x86_set_supported_cpuid(u32 func, struct kvm_cpuid_entry2 *entry)
 
 void kvm_x86_request_immediate_exit(struct kvm_vcpu *vcpu)
 {
-	to_vmx(vcpu)->req_immediate_exit = true;
+	if (likely(enable_preemption_timer))
+		to_vmx(vcpu)->req_immediate_exit = true;
+	else
+		__kvm_request_immediate_exit(vcpu);
 }
 
 int kvm_x86_check_intercept(struct kvm_vcpu *vcpu,


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 05/13] KVM: monolithic: add more section prefixes
  2019-11-04 22:59 [PATCH 00/13] KVM monolithic v3 Andrea Arcangeli
                   ` (3 preceding siblings ...)
  2019-11-04 22:59 ` [PATCH 04/13] KVM: monolithic: x86: handle the request_immediate_exit variation Andrea Arcangeli
@ 2019-11-04 22:59 ` Andrea Arcangeli
  2019-11-05 10:16   ` Paolo Bonzini
  2019-11-04 22:59 ` [PATCH 06/13] KVM: monolithic: x86: remove __exit section prefix from machine_unsetup Andrea Arcangeli
                   ` (7 subsequent siblings)
  12 siblings, 1 reply; 34+ messages in thread
From: Andrea Arcangeli @ 2019-11-04 22:59 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: Paolo Bonzini, Vitaly Kuznetsov, Sean Christopherson

Add more section prefixes because with the monolithic KVM model the
section checker can now do a more accurate static analysis at build
time; this allows the build to succeed even with
CONFIG_SECTION_MISMATCH_WARN_ONLY=n.
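
For context, a rough sketch of why the checker sees more now (the
converted call is from this series, the comments are illustrative):

	/* before: indirect call through the ops table; the call site only
	 * references kvm_x86_ops, so modpost cannot match caller and
	 * callee sections here
	 */
	r = kvm_x86_ops->hardware_setup();

	/* after: direct call, so a wrong or missing __init annotation on
	 * either side shows up as a section mismatch at build time
	 */
	r = kvm_x86_hardware_setup();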

Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
---
 arch/powerpc/kvm/book3s.c | 2 +-
 arch/x86/kvm/x86.c        | 4 ++--
 include/linux/kvm_host.h  | 8 ++++----
 virt/kvm/arm/arm.c        | 2 +-
 virt/kvm/kvm_main.c       | 6 +++---
 5 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index ec2547cc5ecb..e80e9504722a 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -1067,7 +1067,7 @@ int kvm_irq_map_chip_pin(struct kvm *kvm, unsigned irqchip, unsigned pin)
 
 #endif /* CONFIG_KVM_XICS */
 
-static int kvmppc_book3s_init(void)
+static __init int kvmppc_book3s_init(void)
 {
 	int r;
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index fb963e6b2e54..5e98fa6b7bf8 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9272,7 +9272,7 @@ void kvm_arch_hardware_disable(void)
 	drop_user_return_notifiers();
 }
 
-int kvm_arch_hardware_setup(void)
+__init int kvm_arch_hardware_setup(void)
 {
 	int r;
 
@@ -9303,7 +9303,7 @@ void kvm_arch_hardware_unsetup(void)
 	kvm_x86_hardware_unsetup();
 }
 
-int kvm_arch_check_processor_compat(void)
+__init int kvm_arch_check_processor_compat(void)
 {
 	return kvm_x86_check_processor_compatibility();
 }
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 719fc3e15ea4..426bc2f485a9 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -616,8 +616,8 @@ static inline void kvm_irqfd_exit(void)
 {
 }
 #endif
-int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
-		  struct module *module);
+__init int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
+		    struct module *module);
 void kvm_exit(void);
 
 void kvm_get_kvm(struct kvm *kvm);
@@ -867,9 +867,9 @@ void kvm_arch_create_vcpu_debugfs(struct kvm_vcpu *vcpu);
 
 int kvm_arch_hardware_enable(void);
 void kvm_arch_hardware_disable(void);
-int kvm_arch_hardware_setup(void);
+__init int kvm_arch_hardware_setup(void);
 void kvm_arch_hardware_unsetup(void);
-int kvm_arch_check_processor_compat(void);
+__init int kvm_arch_check_processor_compat(void);
 int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu);
 bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu);
 int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu);
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 86c6aa1cb58e..65f7f0f6868d 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -1726,7 +1726,7 @@ void kvm_arch_exit(void)
 	kvm_perf_teardown();
 }
 
-static int arm_init(void)
+static __init int arm_init(void)
 {
 	int rc = kvm_init(NULL, sizeof(struct kvm_vcpu), 0, THIS_MODULE);
 	return rc;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index d6f0696d98ef..1b7fbd138406 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -4246,13 +4246,13 @@ static void kvm_sched_out(struct preempt_notifier *pn,
 	kvm_arch_vcpu_put(vcpu);
 }
 
-static void check_processor_compat(void *rtn)
+static __init void check_processor_compat(void *rtn)
 {
 	*(int *)rtn = kvm_arch_check_processor_compat();
 }
 
-int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
-		  struct module *module)
+__init int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
+		    struct module *module)
 {
 	int r;
 	int cpu;


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 06/13] KVM: monolithic: x86: remove __exit section prefix from machine_unsetup
  2019-11-04 22:59 [PATCH 00/13] KVM monolithic v3 Andrea Arcangeli
                   ` (4 preceding siblings ...)
  2019-11-04 22:59 ` [PATCH 05/13] KVM: monolithic: add more section prefixes Andrea Arcangeli
@ 2019-11-04 22:59 ` Andrea Arcangeli
  2019-11-05 10:16   ` Paolo Bonzini
  2019-11-04 22:59 ` [PATCH 07/13] KVM: monolithic: x86: remove __init section prefix from kvm_x86_cpu_has_kvm_support Andrea Arcangeli
                   ` (6 subsequent siblings)
  12 siblings, 1 reply; 34+ messages in thread
From: Andrea Arcangeli @ 2019-11-04 22:59 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: Paolo Bonzini, Vitaly Kuznetsov, Sean Christopherson

Adjust the section prefixes of some KVM x86 functions because with the
monolithic KVM model the section checker can now do a more accurate
static analysis at build time, and it found a potentially
kernel-crashing bug. This also allows the build to succeed even with
CONFIG_SECTION_MISMATCH_WARN_ONLY=n.

The __exit prefix is removed from hardware_unsetup because
kvm_arch_hardware_unsetup() is called by kvm_init(), which is in the
__init section. Calling a function that is placed in the __exit
section, and may therefore be dropped at kernel link time, from the
__init section is not allowed; the kernel will crash if that call is
ever made.
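
As a minimal illustration of the pattern the checker rejects
(hypothetical functions, not from this series):

	void __exit teardown(void)	/* may be dropped at link time */
	{
	}

	int __init setup(void)
	{
		/* __init -> __exit call: flagged by modpost, and the call
		 * may target code that was dropped during the link
		 */
		teardown();
		return 0;
	}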

Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
---
 arch/x86/include/asm/kvm_host.h | 4 ++--
 arch/x86/kvm/svm.c              | 2 +-
 arch/x86/kvm/vmx/vmx.c          | 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index b36dd3265036..2b03ec80f6d7 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1004,7 +1004,7 @@ extern int kvm_x86_hardware_enable(void);
 extern void kvm_x86_hardware_disable(void);
 extern __init int kvm_x86_check_processor_compatibility(void);
 extern __init int kvm_x86_hardware_setup(void);
-extern __exit void kvm_x86_hardware_unsetup(void);
+extern void kvm_x86_hardware_unsetup(void);
 extern bool kvm_x86_cpu_has_accelerated_tpr(void);
 extern bool kvm_x86_has_emulated_msr(int index);
 extern void kvm_x86_cpuid_update(struct kvm_vcpu *vcpu);
@@ -1196,7 +1196,7 @@ struct kvm_x86_ops {
 	void (*hardware_disable)(void);
 	int (*check_processor_compatibility)(void);/* __init */
 	int (*hardware_setup)(void);               /* __init */
-	void (*hardware_unsetup)(void);            /* __exit */
+	void (*hardware_unsetup)(void);
 	bool (*cpu_has_accelerated_tpr)(void);
 	bool (*has_emulated_msr)(int index);
 	void (*cpuid_update)(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 1705608246fb..4ce102f6f075 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1412,7 +1412,7 @@ __init int kvm_x86_hardware_setup(void)
 	return r;
 }
 
-__exit void kvm_x86_hardware_unsetup(void)
+void kvm_x86_hardware_unsetup(void)
 {
 	int cpu;
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 9c5f0c67b899..e406707381a4 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7737,7 +7737,7 @@ __init int kvm_x86_hardware_setup(void)
 	return r;
 }
 
-__exit void kvm_x86_hardware_unsetup(void)
+void kvm_x86_hardware_unsetup(void)
 {
 	if (nested)
 		nested_vmx_hardware_unsetup();


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 07/13] KVM: monolithic: x86: remove __init section prefix from kvm_x86_cpu_has_kvm_support
  2019-11-04 22:59 [PATCH 00/13] KVM monolithic v3 Andrea Arcangeli
                   ` (5 preceding siblings ...)
  2019-11-04 22:59 ` [PATCH 06/13] KVM: monolithic: x86: remove __exit section prefix from machine_unsetup Andrea Arcangeli
@ 2019-11-04 22:59 ` Andrea Arcangeli
  2019-11-05 10:16   ` Paolo Bonzini
  2019-11-04 22:59 ` [PATCH 08/13] KVM: monolithic: remove exports Andrea Arcangeli
                   ` (5 subsequent siblings)
  12 siblings, 1 reply; 34+ messages in thread
From: Andrea Arcangeli @ 2019-11-04 22:59 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: Paolo Bonzini, Vitaly Kuznetsov, Sean Christopherson

Adjust the section prefixes of some KVM x86 functions because with the
monolithic KVM model the section checker can now do a more accurate
static analysis at build time. This also allows the build to succeed
even with CONFIG_SECTION_MISMATCH_WARN_ONLY=n.

The __init prefix needs to be removed on the vmx side even though only
svm calls this function, from kvm_x86_hardware_enable() which is
eventually called by hardware_enable_nolock(); otherwise there is a
(potentially false positive) warning. It is a false positive because
the function isn't called on that path in the vmx case. If the call
isn't needed, the right cleanup isn't to put the function in the
__init section but to drop it. As long as it's defined in vmx as a
kvm_x86 operation, it can be expected to eventually be called at
runtime while hot plugging new CPUs.

Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
---
 arch/x86/include/asm/kvm_host.h | 4 ++--
 arch/x86/kvm/vmx/vmx.c          | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 2b03ec80f6d7..2ddc61fdcd09 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -998,7 +998,7 @@ struct kvm_lapic_irq {
 	bool msi_redir_hint;
 };
 
-extern __init int kvm_x86_cpu_has_kvm_support(void);
+extern int kvm_x86_cpu_has_kvm_support(void);
 extern __init int kvm_x86_disabled_by_bios(void);
 extern int kvm_x86_hardware_enable(void);
 extern void kvm_x86_hardware_disable(void);
@@ -1190,7 +1190,7 @@ extern bool kvm_x86_apic_init_signal_blocked(struct kvm_vcpu *vcpu);
 extern int kvm_x86_enable_direct_tlbflush(struct kvm_vcpu *vcpu);
 
 struct kvm_x86_ops {
-	int (*cpu_has_kvm_support)(void);          /* __init */
+	int (*cpu_has_kvm_support)(void);
 	int (*disabled_by_bios)(void);             /* __init */
 	int (*hardware_enable)(void);
 	void (*hardware_disable)(void);
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index e406707381a4..87e5d7276ea4 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2178,7 +2178,7 @@ void kvm_x86_cache_reg(struct kvm_vcpu *vcpu, enum kvm_reg reg)
 	}
 }
 
-__init int kvm_x86_cpu_has_kvm_support(void)
+int kvm_x86_cpu_has_kvm_support(void)
 {
 	return cpu_has_vmx();
 }


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 08/13] KVM: monolithic: remove exports
  2019-11-04 22:59 [PATCH 00/13] KVM monolithic v3 Andrea Arcangeli
                   ` (6 preceding siblings ...)
  2019-11-04 22:59 ` [PATCH 07/13] KVM: monolithic: x86: remove __init section prefix from kvm_x86_cpu_has_kvm_support Andrea Arcangeli
@ 2019-11-04 22:59 ` Andrea Arcangeli
  2019-11-04 22:59 ` [PATCH 09/13] KVM: monolithic: x86: drop the kvm_pmu_ops structure Andrea Arcangeli
                   ` (4 subsequent siblings)
  12 siblings, 0 replies; 34+ messages in thread
From: Andrea Arcangeli @ 2019-11-04 22:59 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: Paolo Bonzini, Vitaly Kuznetsov, Sean Christopherson

The exports would be duplicated across kvm-amd and kvm-intel if they
were kept, and that causes various harmless warnings.

The warnings aren't particularly concerning because the two modules
can't load at the same time, but it's cleaner to remove the warnings
by removing the exports.

This commit might break non-x86 archs, but it should be simple to make
them monolithic too (if they're not already).

In the unlikely case there's a legitimate reason not to go monolithic
on some arch and to keep kvm.ko around, we'll need a way to retain the
common code exports. In that case this commit would need to be
partially reverted, and the exports in the kvm common code should then
be emitted only conditionally, under a new opt-in per-arch CONFIG
option.

The following warnings remain for now so that kvmgt and the powerpc
code can still be loaded. These remaining warnings can be handled
later.

WARNING: arch/x86/kvm/kvm-amd: 'kvm_debugfs_dir' exported twice. Previous export was in arch/x86/kvm/kvm-intel.ko

This is needed by powerpc.

WARNING: arch/x86/kvm/kvm-amd: 'kvm_get_kvm' exported twice. Previous export was in arch/x86/kvm/kvm-intel.ko
WARNING: arch/x86/kvm/kvm-amd: 'kvm_put_kvm' exported twice. Previous export was in arch/x86/kvm/kvm-intel.ko
WARNING: arch/x86/kvm/kvm-amd: 'gfn_to_memslot' exported twice. Previous export was in arch/x86/kvm/kvm-intel.ko
WARNING: arch/x86/kvm/kvm-amd: 'kvm_is_visible_gfn' exported twice. Previous export was in arch/x86/kvm/kvm-intel.ko
WARNING: arch/x86/kvm/kvm-amd: 'gfn_to_pfn' exported twice. Previous export was in arch/x86/kvm/kvm-intel.ko
WARNING: arch/x86/kvm/kvm-amd: 'kvm_read_guest' exported twice. Previous export was in arch/x86/kvm/kvm-intel.ko
WARNING: arch/x86/kvm/kvm-amd: 'kvm_write_guest' exported twice. Previous export was in arch/x86/kvm/kvm-intel.ko
WARNING: arch/x86/kvm/kvm-amd: 'kvm_slot_page_track_add_page' exported twice. Previous export was in arch/x86/kvm/kvm-intel.ko
WARNING: arch/x86/kvm/kvm-amd: 'kvm_slot_page_track_remove_page' exported twice. Previous export was in arch/x86/kvm/kvm-intel.ko
WARNING: arch/x86/kvm/kvm-amd: 'kvm_page_track_register_notifier' exported twice. Previous export was in arch/x86/kvm/kvm-intel.ko
WARNING: arch/x86/kvm/kvm-amd: 'kvm_page_track_unregister_notifier' exported twice. Previous export was in arch/x86/kvm/kvm-intel.ko

This is needed by kvmgt.c.

Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
---
 arch/x86/kvm/cpuid.c    |   5 --
 arch/x86/kvm/hyperv.c   |   2 -
 arch/x86/kvm/irq.c      |   4 --
 arch/x86/kvm/irq_comm.c |   2 -
 arch/x86/kvm/lapic.c    |  16 ------
 arch/x86/kvm/mmu.c      |  24 ---------
 arch/x86/kvm/mtrr.c     |   2 -
 arch/x86/kvm/pmu.c      |   3 --
 arch/x86/kvm/x86.c      | 106 ----------------------------------------
 virt/kvm/eventfd.c      |   1 -
 virt/kvm/kvm_main.c     |  64 ------------------------
 11 files changed, 229 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index d156d27d83bb..661e68a53e2b 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -50,7 +50,6 @@ bool kvm_mpx_supported(void)
 	return ((host_xcr0 & (XFEATURE_MASK_BNDREGS | XFEATURE_MASK_BNDCSR))
 		 && kvm_x86_mpx_supported());
 }
-EXPORT_SYMBOL_GPL(kvm_mpx_supported);
 
 u64 kvm_supported_xcr0(void)
 {
@@ -192,7 +191,6 @@ int cpuid_query_maxphyaddr(struct kvm_vcpu *vcpu)
 not_found:
 	return 36;
 }
-EXPORT_SYMBOL_GPL(cpuid_query_maxphyaddr);
 
 /* when an old userspace process fills a new kernel module */
 int kvm_vcpu_ioctl_set_cpuid(struct kvm_vcpu *vcpu,
@@ -971,7 +969,6 @@ struct kvm_cpuid_entry2 *kvm_find_cpuid_entry(struct kvm_vcpu *vcpu,
 	}
 	return best;
 }
-EXPORT_SYMBOL_GPL(kvm_find_cpuid_entry);
 
 /*
  * If the basic or extended CPUID leaf requested is higher than the
@@ -1035,7 +1032,6 @@ bool kvm_cpuid(struct kvm_vcpu *vcpu, u32 *eax, u32 *ebx,
 	trace_kvm_cpuid(function, *eax, *ebx, *ecx, *edx, found);
 	return found;
 }
-EXPORT_SYMBOL_GPL(kvm_cpuid);
 
 int kvm_emulate_cpuid(struct kvm_vcpu *vcpu)
 {
@@ -1053,4 +1049,3 @@ int kvm_emulate_cpuid(struct kvm_vcpu *vcpu)
 	kvm_rdx_write(vcpu, edx);
 	return kvm_skip_emulated_instruction(vcpu);
 }
-EXPORT_SYMBOL_GPL(kvm_emulate_cpuid);
diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index a345e48a7a24..bf0c86fdee52 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -714,7 +714,6 @@ bool kvm_hv_assist_page_enabled(struct kvm_vcpu *vcpu)
 		return false;
 	return vcpu->arch.pv_eoi.msr_val & KVM_MSR_ENABLED;
 }
-EXPORT_SYMBOL_GPL(kvm_hv_assist_page_enabled);
 
 bool kvm_hv_get_assist_page(struct kvm_vcpu *vcpu,
 			    struct hv_vp_assist_page *assist_page)
@@ -724,7 +723,6 @@ bool kvm_hv_get_assist_page(struct kvm_vcpu *vcpu,
 	return !kvm_read_guest_cached(vcpu->kvm, &vcpu->arch.pv_eoi.data,
 				      assist_page, sizeof(*assist_page));
 }
-EXPORT_SYMBOL_GPL(kvm_hv_get_assist_page);
 
 static void stimer_prepare_msg(struct kvm_vcpu_hv_stimer *stimer)
 {
diff --git a/arch/x86/kvm/irq.c b/arch/x86/kvm/irq.c
index e330e7d125f7..ba4300f36a32 100644
--- a/arch/x86/kvm/irq.c
+++ b/arch/x86/kvm/irq.c
@@ -26,7 +26,6 @@ int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu)
 
 	return 0;
 }
-EXPORT_SYMBOL(kvm_cpu_has_pending_timer);
 
 /*
  * check if there is a pending userspace external interrupt
@@ -109,7 +108,6 @@ int kvm_cpu_has_interrupt(struct kvm_vcpu *v)
 
 	return kvm_apic_has_interrupt(v) != -1;	/* LAPIC */
 }
-EXPORT_SYMBOL_GPL(kvm_cpu_has_interrupt);
 
 /*
  * Read pending interrupt(from non-APIC source)
@@ -146,14 +144,12 @@ int kvm_cpu_get_interrupt(struct kvm_vcpu *v)
 
 	return kvm_get_apic_interrupt(v);	/* APIC */
 }
-EXPORT_SYMBOL_GPL(kvm_cpu_get_interrupt);
 
 void kvm_inject_pending_timer_irqs(struct kvm_vcpu *vcpu)
 {
 	if (lapic_in_kernel(vcpu))
 		kvm_inject_apic_timer_irqs(vcpu);
 }
-EXPORT_SYMBOL_GPL(kvm_inject_pending_timer_irqs);
 
 void __kvm_migrate_timers(struct kvm_vcpu *vcpu)
 {
diff --git a/arch/x86/kvm/irq_comm.c b/arch/x86/kvm/irq_comm.c
index 8ecd48d31800..64a13d5fcc9f 100644
--- a/arch/x86/kvm/irq_comm.c
+++ b/arch/x86/kvm/irq_comm.c
@@ -122,7 +122,6 @@ void kvm_set_msi_irq(struct kvm *kvm, struct kvm_kernel_irq_routing_entry *e,
 	irq->level = 1;
 	irq->shorthand = 0;
 }
-EXPORT_SYMBOL_GPL(kvm_set_msi_irq);
 
 static inline bool kvm_msi_route_invalid(struct kvm *kvm,
 		struct kvm_kernel_irq_routing_entry *e)
@@ -346,7 +345,6 @@ bool kvm_intr_is_single_vcpu(struct kvm *kvm, struct kvm_lapic_irq *irq,
 
 	return r == 1;
 }
-EXPORT_SYMBOL_GPL(kvm_intr_is_single_vcpu);
 
 #define IOAPIC_ROUTING_ENTRY(irq) \
 	{ .gsi = irq, .type = KVM_IRQ_ROUTING_IRQCHIP,	\
diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index 22c1079fadaa..55d58bb2954a 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -120,7 +120,6 @@ bool kvm_can_post_timer_interrupt(struct kvm_vcpu *vcpu)
 {
 	return pi_inject_timer && kvm_vcpu_apicv_active(vcpu);
 }
-EXPORT_SYMBOL_GPL(kvm_can_post_timer_interrupt);
 
 static bool kvm_use_posted_timer_interrupt(struct kvm_vcpu *vcpu)
 {
@@ -410,7 +409,6 @@ bool __kvm_apic_update_irr(u32 *pir, void *regs, int *max_irr)
 	return ((max_updated_irr != -1) &&
 		(max_updated_irr == *max_irr));
 }
-EXPORT_SYMBOL_GPL(__kvm_apic_update_irr);
 
 bool kvm_apic_update_irr(struct kvm_vcpu *vcpu, u32 *pir, int *max_irr)
 {
@@ -418,7 +416,6 @@ bool kvm_apic_update_irr(struct kvm_vcpu *vcpu, u32 *pir, int *max_irr)
 
 	return __kvm_apic_update_irr(pir, apic->regs, max_irr);
 }
-EXPORT_SYMBOL_GPL(kvm_apic_update_irr);
 
 static inline int apic_search_irr(struct kvm_lapic *apic)
 {
@@ -542,7 +539,6 @@ int kvm_lapic_find_highest_irr(struct kvm_vcpu *vcpu)
 	 */
 	return apic_find_highest_irr(vcpu->arch.apic);
 }
-EXPORT_SYMBOL_GPL(kvm_lapic_find_highest_irr);
 
 static int __apic_accept_irq(struct kvm_lapic *apic, int delivery_mode,
 			     int vector, int level, int trig_mode,
@@ -710,7 +706,6 @@ void kvm_apic_update_ppr(struct kvm_vcpu *vcpu)
 {
 	apic_update_ppr(vcpu->arch.apic);
 }
-EXPORT_SYMBOL_GPL(kvm_apic_update_ppr);
 
 static void apic_set_tpr(struct kvm_lapic *apic, u32 tpr)
 {
@@ -821,7 +816,6 @@ bool kvm_apic_match_dest(struct kvm_vcpu *vcpu, struct kvm_lapic *source,
 		return false;
 	}
 }
-EXPORT_SYMBOL_GPL(kvm_apic_match_dest);
 
 int kvm_vector_to_index(u32 vector, u32 dest_vcpus,
 		       const unsigned long *bitmap, u32 bitmap_size)
@@ -1194,7 +1188,6 @@ void kvm_apic_set_eoi_accelerated(struct kvm_vcpu *vcpu, int vector)
 	kvm_ioapic_send_eoi(apic, vector);
 	kvm_make_request(KVM_REQ_EVENT, apic->vcpu);
 }
-EXPORT_SYMBOL_GPL(kvm_apic_set_eoi_accelerated);
 
 static void apic_send_ipi(struct kvm_lapic *apic, u32 icr_low, u32 icr_high)
 {
@@ -1353,7 +1346,6 @@ int kvm_lapic_reg_read(struct kvm_lapic *apic, u32 offset, int len,
 	}
 	return 0;
 }
-EXPORT_SYMBOL_GPL(kvm_lapic_reg_read);
 
 static int apic_mmio_in_range(struct kvm_lapic *apic, gpa_t addr)
 {
@@ -1530,7 +1522,6 @@ void kvm_wait_lapic_expire(struct kvm_vcpu *vcpu)
 	if (lapic_timer_int_injected(vcpu))
 		__kvm_wait_lapic_expire(vcpu);
 }
-EXPORT_SYMBOL_GPL(kvm_wait_lapic_expire);
 
 static void kvm_apic_inject_pending_timer_irqs(struct kvm_lapic *apic)
 {
@@ -1695,7 +1686,6 @@ bool kvm_lapic_hv_timer_in_use(struct kvm_vcpu *vcpu)
 
 	return vcpu->arch.apic->lapic_timer.hv_timer_in_use;
 }
-EXPORT_SYMBOL_GPL(kvm_lapic_hv_timer_in_use);
 
 static void cancel_hv_timer(struct kvm_lapic *apic)
 {
@@ -1796,13 +1786,11 @@ void kvm_lapic_expired_hv_timer(struct kvm_vcpu *vcpu)
 out:
 	preempt_enable();
 }
-EXPORT_SYMBOL_GPL(kvm_lapic_expired_hv_timer);
 
 void kvm_lapic_switch_to_hv_timer(struct kvm_vcpu *vcpu)
 {
 	restart_apic_timer(vcpu->arch.apic);
 }
-EXPORT_SYMBOL_GPL(kvm_lapic_switch_to_hv_timer);
 
 void kvm_lapic_switch_to_sw_timer(struct kvm_vcpu *vcpu)
 {
@@ -1814,7 +1802,6 @@ void kvm_lapic_switch_to_sw_timer(struct kvm_vcpu *vcpu)
 		start_sw_timer(apic);
 	preempt_enable();
 }
-EXPORT_SYMBOL_GPL(kvm_lapic_switch_to_sw_timer);
 
 void kvm_lapic_restart_hv_timer(struct kvm_vcpu *vcpu)
 {
@@ -1984,7 +1971,6 @@ int kvm_lapic_reg_write(struct kvm_lapic *apic, u32 reg, u32 val)
 
 	return ret;
 }
-EXPORT_SYMBOL_GPL(kvm_lapic_reg_write);
 
 static int apic_mmio_write(struct kvm_vcpu *vcpu, struct kvm_io_device *this,
 			    gpa_t address, int len, const void *data)
@@ -2023,7 +2009,6 @@ void kvm_lapic_set_eoi(struct kvm_vcpu *vcpu)
 {
 	kvm_lapic_reg_write(vcpu->arch.apic, APIC_EOI, 0);
 }
-EXPORT_SYMBOL_GPL(kvm_lapic_set_eoi);
 
 /* emulate APIC access in a trap manner */
 void kvm_apic_write_nodecode(struct kvm_vcpu *vcpu, u32 offset)
@@ -2038,7 +2023,6 @@ void kvm_apic_write_nodecode(struct kvm_vcpu *vcpu, u32 offset)
 	/* TODO: optimize to just emulate side effect w/o one more write */
 	kvm_lapic_reg_write(vcpu->arch.apic, offset, val);
 }
-EXPORT_SYMBOL_GPL(kvm_apic_write_nodecode);
 
 void kvm_free_lapic(struct kvm_vcpu *vcpu)
 {
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 29d930470db9..9467eac7dc4d 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -317,7 +317,6 @@ void kvm_mmu_set_mmio_spte_mask(u64 mmio_mask, u64 mmio_value, u64 access_mask)
 	shadow_mmio_mask = mmio_mask | SPTE_SPECIAL_MASK;
 	shadow_mmio_access_mask = access_mask;
 }
-EXPORT_SYMBOL_GPL(kvm_mmu_set_mmio_spte_mask);
 
 static bool is_mmio_spte(u64 spte)
 {
@@ -498,7 +497,6 @@ void kvm_mmu_set_mask_ptes(u64 user_mask, u64 accessed_mask,
 	shadow_acc_track_mask = acc_track_mask;
 	shadow_me_mask = me_mask;
 }
-EXPORT_SYMBOL_GPL(kvm_mmu_set_mask_ptes);
 
 static u8 kvm_get_shadow_phys_bits(void)
 {
@@ -1731,7 +1729,6 @@ void kvm_mmu_clear_dirty_pt_masked(struct kvm *kvm,
 		mask &= mask - 1;
 	}
 }
-EXPORT_SYMBOL_GPL(kvm_mmu_clear_dirty_pt_masked);
 
 /**
  * kvm_arch_mmu_enable_log_dirty_pt_masked - enable dirty logging for selected
@@ -2888,7 +2885,6 @@ int kvm_mmu_unprotect_page(struct kvm *kvm, gfn_t gfn)
 
 	return r;
 }
-EXPORT_SYMBOL_GPL(kvm_mmu_unprotect_page);
 
 static void kvm_unsync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 {
@@ -3658,7 +3654,6 @@ void kvm_mmu_free_roots(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 	kvm_mmu_commit_zap_page(vcpu->kvm, &invalid_list);
 	spin_unlock(&vcpu->kvm->mmu_lock);
 }
-EXPORT_SYMBOL_GPL(kvm_mmu_free_roots);
 
 static int mmu_check_root(struct kvm_vcpu *vcpu, gfn_t root_gfn)
 {
@@ -3883,7 +3878,6 @@ void kvm_mmu_sync_roots(struct kvm_vcpu *vcpu)
 	kvm_mmu_audit(vcpu, AUDIT_POST_SYNC);
 	spin_unlock(&vcpu->kvm->mmu_lock);
 }
-EXPORT_SYMBOL_GPL(kvm_mmu_sync_roots);
 
 static gpa_t nonpaging_gva_to_gpa(struct kvm_vcpu *vcpu, gva_t vaddr,
 				  u32 access, struct x86_exception *exception)
@@ -4151,7 +4145,6 @@ int kvm_handle_page_fault(struct kvm_vcpu *vcpu, u64 error_code,
 	}
 	return r;
 }
-EXPORT_SYMBOL_GPL(kvm_handle_page_fault);
 
 static bool
 check_hugepage_cache_consistency(struct kvm_vcpu *vcpu, gfn_t gfn, int level)
@@ -4331,7 +4324,6 @@ void kvm_mmu_new_cr3(struct kvm_vcpu *vcpu, gpa_t new_cr3, bool skip_tlb_flush)
 	__kvm_mmu_new_cr3(vcpu, new_cr3, kvm_mmu_calc_root_page_role(vcpu),
 			  skip_tlb_flush);
 }
-EXPORT_SYMBOL_GPL(kvm_mmu_new_cr3);
 
 static unsigned long get_cr3(struct kvm_vcpu *vcpu)
 {
@@ -4571,7 +4563,6 @@ reset_shadow_zero_bits_mask(struct kvm_vcpu *vcpu, struct kvm_mmu *context)
 	}
 
 }
-EXPORT_SYMBOL_GPL(reset_shadow_zero_bits_mask);
 
 static inline bool boot_cpu_is_amd(void)
 {
@@ -4991,7 +4982,6 @@ void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu)
 	context->mmu_role.as_u64 = new_role.as_u64;
 	reset_shadow_zero_bits_mask(vcpu, context);
 }
-EXPORT_SYMBOL_GPL(kvm_init_shadow_mmu);
 
 static union kvm_mmu_role
 kvm_calc_shadow_ept_root_page_role(struct kvm_vcpu *vcpu, bool accessed_dirty,
@@ -5055,7 +5045,6 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
 	reset_rsvds_bits_mask_ept(vcpu, context, execonly);
 	reset_ept_shadow_zero_bits_mask(vcpu, context, execonly);
 }
-EXPORT_SYMBOL_GPL(kvm_init_shadow_ept_mmu);
 
 static void init_kvm_softmmu(struct kvm_vcpu *vcpu)
 {
@@ -5135,7 +5124,6 @@ void kvm_init_mmu(struct kvm_vcpu *vcpu, bool reset_roots)
 	else
 		init_kvm_softmmu(vcpu);
 }
-EXPORT_SYMBOL_GPL(kvm_init_mmu);
 
 static union kvm_mmu_page_role
 kvm_mmu_calc_root_page_role(struct kvm_vcpu *vcpu)
@@ -5155,7 +5143,6 @@ void kvm_mmu_reset_context(struct kvm_vcpu *vcpu)
 	kvm_mmu_unload(vcpu);
 	kvm_init_mmu(vcpu, true);
 }
-EXPORT_SYMBOL_GPL(kvm_mmu_reset_context);
 
 int kvm_mmu_load(struct kvm_vcpu *vcpu)
 {
@@ -5173,7 +5160,6 @@ int kvm_mmu_load(struct kvm_vcpu *vcpu)
 out:
 	return r;
 }
-EXPORT_SYMBOL_GPL(kvm_mmu_load);
 
 void kvm_mmu_unload(struct kvm_vcpu *vcpu)
 {
@@ -5182,7 +5168,6 @@ void kvm_mmu_unload(struct kvm_vcpu *vcpu)
 	kvm_mmu_free_roots(vcpu, &vcpu->arch.guest_mmu, KVM_MMU_ROOTS_ALL);
 	WARN_ON(VALID_PAGE(vcpu->arch.guest_mmu.root_hpa));
 }
-EXPORT_SYMBOL_GPL(kvm_mmu_unload);
 
 static void mmu_pte_write_new_pte(struct kvm_vcpu *vcpu,
 				  struct kvm_mmu_page *sp, u64 *spte,
@@ -5394,7 +5379,6 @@ int kvm_mmu_unprotect_page_virt(struct kvm_vcpu *vcpu, gva_t gva)
 
 	return r;
 }
-EXPORT_SYMBOL_GPL(kvm_mmu_unprotect_page_virt);
 
 static int make_mmu_pages_available(struct kvm_vcpu *vcpu)
 {
@@ -5489,7 +5473,6 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u64 error_code,
 	return x86_emulate_instruction(vcpu, cr2, emulation_type, insn,
 				       insn_len);
 }
-EXPORT_SYMBOL_GPL(kvm_mmu_page_fault);
 
 void kvm_mmu_invlpg(struct kvm_vcpu *vcpu, gva_t gva)
 {
@@ -5520,7 +5503,6 @@ void kvm_mmu_invlpg(struct kvm_vcpu *vcpu, gva_t gva)
 	kvm_x86_tlb_flush_gva(vcpu, gva);
 	++vcpu->stat.invlpg;
 }
-EXPORT_SYMBOL_GPL(kvm_mmu_invlpg);
 
 void kvm_mmu_invpcid_gva(struct kvm_vcpu *vcpu, gva_t gva, unsigned long pcid)
 {
@@ -5552,19 +5534,16 @@ void kvm_mmu_invpcid_gva(struct kvm_vcpu *vcpu, gva_t gva, unsigned long pcid)
 	 * for them.
 	 */
 }
-EXPORT_SYMBOL_GPL(kvm_mmu_invpcid_gva);
 
 void kvm_enable_tdp(void)
 {
 	tdp_enabled = true;
 }
-EXPORT_SYMBOL_GPL(kvm_enable_tdp);
 
 void kvm_disable_tdp(void)
 {
 	tdp_enabled = false;
 }
-EXPORT_SYMBOL_GPL(kvm_disable_tdp);
 
 
 /* The return value indicates if tlb flush on all vcpus is needed. */
@@ -5963,7 +5942,6 @@ void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm,
 		kvm_flush_remote_tlbs_with_address(kvm, memslot->base_gfn,
 				memslot->npages);
 }
-EXPORT_SYMBOL_GPL(kvm_mmu_slot_leaf_clear_dirty);
 
 void kvm_mmu_slot_largepage_remove_write_access(struct kvm *kvm,
 					struct kvm_memory_slot *memslot)
@@ -5982,7 +5960,6 @@ void kvm_mmu_slot_largepage_remove_write_access(struct kvm *kvm,
 		kvm_flush_remote_tlbs_with_address(kvm, memslot->base_gfn,
 				memslot->npages);
 }
-EXPORT_SYMBOL_GPL(kvm_mmu_slot_largepage_remove_write_access);
 
 void kvm_mmu_slot_set_dirty(struct kvm *kvm,
 			    struct kvm_memory_slot *memslot)
@@ -6000,7 +5977,6 @@ void kvm_mmu_slot_set_dirty(struct kvm *kvm,
 		kvm_flush_remote_tlbs_with_address(kvm, memslot->base_gfn,
 				memslot->npages);
 }
-EXPORT_SYMBOL_GPL(kvm_mmu_slot_set_dirty);
 
 void kvm_mmu_zap_all(struct kvm *kvm)
 {
diff --git a/arch/x86/kvm/mtrr.c b/arch/x86/kvm/mtrr.c
index 25ce3edd1872..477f7141f793 100644
--- a/arch/x86/kvm/mtrr.c
+++ b/arch/x86/kvm/mtrr.c
@@ -91,7 +91,6 @@ bool kvm_mtrr_valid(struct kvm_vcpu *vcpu, u32 msr, u64 data)
 
 	return true;
 }
-EXPORT_SYMBOL_GPL(kvm_mtrr_valid);
 
 static bool mtrr_is_enabled(struct kvm_mtrr *mtrr_state)
 {
@@ -686,7 +685,6 @@ u8 kvm_mtrr_get_guest_memory_type(struct kvm_vcpu *vcpu, gfn_t gfn)
 
 	return type;
 }
-EXPORT_SYMBOL_GPL(kvm_mtrr_get_guest_memory_type);
 
 bool kvm_mtrr_check_gfn_range_consistency(struct kvm_vcpu *vcpu, gfn_t gfn,
 					  int page_num)
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 144e5d0c25ff..0ac70bad4b31 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -200,7 +200,6 @@ void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
 			      (eventsel & HSW_IN_TX),
 			      (eventsel & HSW_IN_TX_CHECKPOINTED));
 }
-EXPORT_SYMBOL_GPL(reprogram_gp_counter);
 
 void reprogram_fixed_counter(struct kvm_pmc *pmc, u8 ctrl, int idx)
 {
@@ -230,7 +229,6 @@ void reprogram_fixed_counter(struct kvm_pmc *pmc, u8 ctrl, int idx)
 			      !(en_field & 0x1), /* exclude kernel */
 			      pmi, false, false);
 }
-EXPORT_SYMBOL_GPL(reprogram_fixed_counter);
 
 void reprogram_counter(struct kvm_pmu *pmu, int pmc_idx)
 {
@@ -248,7 +246,6 @@ void reprogram_counter(struct kvm_pmu *pmu, int pmc_idx)
 		reprogram_fixed_counter(pmc, ctrl, idx);
 	}
 }
-EXPORT_SYMBOL_GPL(reprogram_counter);
 
 void kvm_pmu_handle_event(struct kvm_vcpu *vcpu)
 {
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 5e98fa6b7bf8..799c069a2296 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -76,7 +76,6 @@
 #define MAX_IO_MSRS 256
 #define KVM_MAX_MCE_BANKS 32
 u64 __read_mostly kvm_mce_cap_supported = MCG_CTL_P | MCG_SER_P;
-EXPORT_SYMBOL_GPL(kvm_mce_cap_supported);
 
 #define emul_to_vcpu(ctxt) \
 	container_of(ctxt, struct kvm_vcpu, arch.emulate_ctxt)
@@ -106,7 +105,6 @@ static void store_regs(struct kvm_vcpu *vcpu);
 static int sync_regs(struct kvm_vcpu *vcpu);
 
 struct kvm_x86_ops *kvm_x86_ops __read_mostly;
-EXPORT_SYMBOL_GPL(kvm_x86_ops);
 
 static bool __read_mostly ignore_msrs = 0;
 module_param(ignore_msrs, bool, S_IRUGO | S_IWUSR);
@@ -121,15 +119,10 @@ static bool __read_mostly kvmclock_periodic_sync = true;
 module_param(kvmclock_periodic_sync, bool, S_IRUGO);
 
 bool __read_mostly kvm_has_tsc_control;
-EXPORT_SYMBOL_GPL(kvm_has_tsc_control);
 u32  __read_mostly kvm_max_guest_tsc_khz;
-EXPORT_SYMBOL_GPL(kvm_max_guest_tsc_khz);
 u8   __read_mostly kvm_tsc_scaling_ratio_frac_bits;
-EXPORT_SYMBOL_GPL(kvm_tsc_scaling_ratio_frac_bits);
 u64  __read_mostly kvm_max_tsc_scaling_ratio;
-EXPORT_SYMBOL_GPL(kvm_max_tsc_scaling_ratio);
 u64 __read_mostly kvm_default_tsc_scaling_ratio;
-EXPORT_SYMBOL_GPL(kvm_default_tsc_scaling_ratio);
 
 /* tsc tolerance in parts per million - default to 1/2 of the NTP threshold */
 static u32 __read_mostly tsc_tolerance_ppm = 250;
@@ -149,7 +142,6 @@ module_param(vector_hashing, bool, S_IRUGO);
 
 bool __read_mostly enable_vmware_backdoor = false;
 module_param(enable_vmware_backdoor, bool, S_IRUGO);
-EXPORT_SYMBOL_GPL(enable_vmware_backdoor);
 
 static bool __read_mostly force_emulation_prefix = false;
 module_param(force_emulation_prefix, bool, S_IRUGO);
@@ -221,7 +213,6 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
 u64 __read_mostly host_xcr0;
 
 struct kmem_cache *x86_fpu_cache;
-EXPORT_SYMBOL_GPL(x86_fpu_cache);
 
 static int emulator_fix_hypercall(struct x86_emulate_ctxt *ctxt);
 
@@ -283,7 +274,6 @@ void kvm_define_shared_msr(unsigned slot, u32 msr)
 	if (slot >= shared_msrs_global.nr)
 		shared_msrs_global.nr = slot + 1;
 }
-EXPORT_SYMBOL_GPL(kvm_define_shared_msr);
 
 static void kvm_shared_msr_cpu_online(void)
 {
@@ -313,7 +303,6 @@ int kvm_set_shared_msr(unsigned slot, u64 value, u64 mask)
 	}
 	return 0;
 }
-EXPORT_SYMBOL_GPL(kvm_set_shared_msr);
 
 static void drop_user_return_notifiers(void)
 {
@@ -328,13 +317,11 @@ u64 kvm_get_apic_base(struct kvm_vcpu *vcpu)
 {
 	return vcpu->arch.apic_base;
 }
-EXPORT_SYMBOL_GPL(kvm_get_apic_base);
 
 enum lapic_mode kvm_get_apic_mode(struct kvm_vcpu *vcpu)
 {
 	return kvm_apic_mode(kvm_get_apic_base(vcpu));
 }
-EXPORT_SYMBOL_GPL(kvm_get_apic_mode);
 
 int kvm_set_apic_base(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 {
@@ -355,14 +342,12 @@ int kvm_set_apic_base(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	kvm_lapic_set_base(vcpu, msr_info->data);
 	return 0;
 }
-EXPORT_SYMBOL_GPL(kvm_set_apic_base);
 
 asmlinkage __visible void kvm_spurious_fault(void)
 {
 	/* Fault while not rebooting.  We want the trace. */
 	BUG_ON(!kvm_rebooting);
 }
-EXPORT_SYMBOL_GPL(kvm_spurious_fault);
 
 #define EXCPT_BENIGN		0
 #define EXCPT_CONTRIBUTORY	1
@@ -450,7 +435,6 @@ void kvm_deliver_exception_payload(struct kvm_vcpu *vcpu)
 	vcpu->arch.exception.has_payload = false;
 	vcpu->arch.exception.payload = 0;
 }
-EXPORT_SYMBOL_GPL(kvm_deliver_exception_payload);
 
 static void kvm_multiple_exception(struct kvm_vcpu *vcpu,
 		unsigned nr, bool has_error, u32 error_code,
@@ -544,13 +528,11 @@ void kvm_queue_exception(struct kvm_vcpu *vcpu, unsigned nr)
 {
 	kvm_multiple_exception(vcpu, nr, false, 0, false, 0, false);
 }
-EXPORT_SYMBOL_GPL(kvm_queue_exception);
 
 void kvm_requeue_exception(struct kvm_vcpu *vcpu, unsigned nr)
 {
 	kvm_multiple_exception(vcpu, nr, false, 0, false, 0, true);
 }
-EXPORT_SYMBOL_GPL(kvm_requeue_exception);
 
 static void kvm_queue_exception_p(struct kvm_vcpu *vcpu, unsigned nr,
 				  unsigned long payload)
@@ -574,7 +556,6 @@ int kvm_complete_insn_gp(struct kvm_vcpu *vcpu, int err)
 
 	return 1;
 }
-EXPORT_SYMBOL_GPL(kvm_complete_insn_gp);
 
 void kvm_inject_page_fault(struct kvm_vcpu *vcpu, struct x86_exception *fault)
 {
@@ -589,7 +570,6 @@ void kvm_inject_page_fault(struct kvm_vcpu *vcpu, struct x86_exception *fault)
 					fault->address);
 	}
 }
-EXPORT_SYMBOL_GPL(kvm_inject_page_fault);
 
 static bool kvm_propagate_fault(struct kvm_vcpu *vcpu, struct x86_exception *fault)
 {
@@ -606,19 +586,16 @@ void kvm_inject_nmi(struct kvm_vcpu *vcpu)
 	atomic_inc(&vcpu->arch.nmi_queued);
 	kvm_make_request(KVM_REQ_NMI, vcpu);
 }
-EXPORT_SYMBOL_GPL(kvm_inject_nmi);
 
 void kvm_queue_exception_e(struct kvm_vcpu *vcpu, unsigned nr, u32 error_code)
 {
 	kvm_multiple_exception(vcpu, nr, true, error_code, false, 0, false);
 }
-EXPORT_SYMBOL_GPL(kvm_queue_exception_e);
 
 void kvm_requeue_exception_e(struct kvm_vcpu *vcpu, unsigned nr, u32 error_code)
 {
 	kvm_multiple_exception(vcpu, nr, true, error_code, false, 0, true);
 }
-EXPORT_SYMBOL_GPL(kvm_requeue_exception_e);
 
 /*
  * Checks if cpl <= required_cpl; if true, return true.  Otherwise queue
@@ -631,7 +608,6 @@ bool kvm_require_cpl(struct kvm_vcpu *vcpu, int required_cpl)
 	kvm_queue_exception_e(vcpu, GP_VECTOR, 0);
 	return false;
 }
-EXPORT_SYMBOL_GPL(kvm_require_cpl);
 
 bool kvm_require_dr(struct kvm_vcpu *vcpu, int dr)
 {
@@ -641,7 +617,6 @@ bool kvm_require_dr(struct kvm_vcpu *vcpu, int dr)
 	kvm_queue_exception(vcpu, UD_VECTOR);
 	return false;
 }
-EXPORT_SYMBOL_GPL(kvm_require_dr);
 
 /*
  * This function will be used to read from the physical memory of the currently
@@ -665,7 +640,6 @@ int kvm_read_guest_page_mmu(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 
 	return kvm_vcpu_read_guest_page(vcpu, real_gfn, data, offset, len);
 }
-EXPORT_SYMBOL_GPL(kvm_read_guest_page_mmu);
 
 static int kvm_read_nested_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn,
 			       void *data, int offset, int len, u32 access)
@@ -716,7 +690,6 @@ int load_pdptrs(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, unsigned long cr3)
 
 	return ret;
 }
-EXPORT_SYMBOL_GPL(load_pdptrs);
 
 bool pdptrs_changed(struct kvm_vcpu *vcpu)
 {
@@ -744,7 +717,6 @@ bool pdptrs_changed(struct kvm_vcpu *vcpu)
 
 	return changed;
 }
-EXPORT_SYMBOL_GPL(pdptrs_changed);
 
 int kvm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
 {
@@ -803,13 +775,11 @@ int kvm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
 
 	return 0;
 }
-EXPORT_SYMBOL_GPL(kvm_set_cr0);
 
 void kvm_lmsw(struct kvm_vcpu *vcpu, unsigned long msw)
 {
 	(void)kvm_set_cr0(vcpu, kvm_read_cr0_bits(vcpu, ~0x0eul) | (msw & 0x0f));
 }
-EXPORT_SYMBOL_GPL(kvm_lmsw);
 
 void kvm_load_guest_xcr0(struct kvm_vcpu *vcpu)
 {
@@ -821,7 +791,6 @@ void kvm_load_guest_xcr0(struct kvm_vcpu *vcpu)
 		vcpu->guest_xcr0_loaded = 1;
 	}
 }
-EXPORT_SYMBOL_GPL(kvm_load_guest_xcr0);
 
 void kvm_put_guest_xcr0(struct kvm_vcpu *vcpu)
 {
@@ -831,7 +800,6 @@ void kvm_put_guest_xcr0(struct kvm_vcpu *vcpu)
 		vcpu->guest_xcr0_loaded = 0;
 	}
 }
-EXPORT_SYMBOL_GPL(kvm_put_guest_xcr0);
 
 static int __kvm_set_xcr(struct kvm_vcpu *vcpu, u32 index, u64 xcr)
 {
@@ -882,7 +850,6 @@ int kvm_set_xcr(struct kvm_vcpu *vcpu, u32 index, u64 xcr)
 	}
 	return 0;
 }
-EXPORT_SYMBOL_GPL(kvm_set_xcr);
 
 static int kvm_valid_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
 {
@@ -952,7 +919,6 @@ int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
 
 	return 0;
 }
-EXPORT_SYMBOL_GPL(kvm_set_cr4);
 
 int kvm_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
 {
@@ -987,7 +953,6 @@ int kvm_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
 
 	return 0;
 }
-EXPORT_SYMBOL_GPL(kvm_set_cr3);
 
 int kvm_set_cr8(struct kvm_vcpu *vcpu, unsigned long cr8)
 {
@@ -999,7 +964,6 @@ int kvm_set_cr8(struct kvm_vcpu *vcpu, unsigned long cr8)
 		vcpu->arch.cr8 = cr8;
 	return 0;
 }
-EXPORT_SYMBOL_GPL(kvm_set_cr8);
 
 unsigned long kvm_get_cr8(struct kvm_vcpu *vcpu)
 {
@@ -1008,7 +972,6 @@ unsigned long kvm_get_cr8(struct kvm_vcpu *vcpu)
 	else
 		return vcpu->arch.cr8;
 }
-EXPORT_SYMBOL_GPL(kvm_get_cr8);
 
 static void kvm_update_dr0123(struct kvm_vcpu *vcpu)
 {
@@ -1087,7 +1050,6 @@ int kvm_set_dr(struct kvm_vcpu *vcpu, int dr, unsigned long val)
 	}
 	return 0;
 }
-EXPORT_SYMBOL_GPL(kvm_set_dr);
 
 int kvm_get_dr(struct kvm_vcpu *vcpu, int dr, unsigned long *val)
 {
@@ -1111,7 +1073,6 @@ int kvm_get_dr(struct kvm_vcpu *vcpu, int dr, unsigned long *val)
 	}
 	return 0;
 }
-EXPORT_SYMBOL_GPL(kvm_get_dr);
 
 bool kvm_rdpmc(struct kvm_vcpu *vcpu)
 {
@@ -1126,7 +1087,6 @@ bool kvm_rdpmc(struct kvm_vcpu *vcpu)
 	kvm_rdx_write(vcpu, data >> 32);
 	return err;
 }
-EXPORT_SYMBOL_GPL(kvm_rdpmc);
 
 /*
  * List of msr numbers which we expose to userspace through KVM_GET_MSRS
@@ -1357,7 +1317,6 @@ bool kvm_valid_efer(struct kvm_vcpu *vcpu, u64 efer)
 
 	return __kvm_valid_efer(vcpu, efer);
 }
-EXPORT_SYMBOL_GPL(kvm_valid_efer);
 
 static int set_efer(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 {
@@ -1392,7 +1351,6 @@ void kvm_enable_efer_bits(u64 mask)
 {
        efer_reserved_bits &= ~mask;
 }
-EXPORT_SYMBOL_GPL(kvm_enable_efer_bits);
 
 /*
  * Write @data into the MSR specified by @index.  Select MSR specific fault
@@ -1463,13 +1421,11 @@ int kvm_get_msr(struct kvm_vcpu *vcpu, u32 index, u64 *data)
 {
 	return __kvm_get_msr(vcpu, index, data, false);
 }
-EXPORT_SYMBOL_GPL(kvm_get_msr);
 
 int kvm_set_msr(struct kvm_vcpu *vcpu, u32 index, u64 data)
 {
 	return __kvm_set_msr(vcpu, index, data, false);
 }
-EXPORT_SYMBOL_GPL(kvm_set_msr);
 
 int kvm_emulate_rdmsr(struct kvm_vcpu *vcpu)
 {
@@ -1488,7 +1444,6 @@ int kvm_emulate_rdmsr(struct kvm_vcpu *vcpu)
 	kvm_rdx_write(vcpu, (data >> 32) & -1u);
 	return kvm_skip_emulated_instruction(vcpu);
 }
-EXPORT_SYMBOL_GPL(kvm_emulate_rdmsr);
 
 int kvm_emulate_wrmsr(struct kvm_vcpu *vcpu)
 {
@@ -1504,7 +1459,6 @@ int kvm_emulate_wrmsr(struct kvm_vcpu *vcpu)
 	trace_kvm_msr_write(ecx, data);
 	return kvm_skip_emulated_instruction(vcpu);
 }
-EXPORT_SYMBOL_GPL(kvm_emulate_wrmsr);
 
 /*
  * Adapt set_msr() to msr_io()'s calling convention
@@ -1803,7 +1757,6 @@ u64 kvm_scale_tsc(struct kvm_vcpu *vcpu, u64 tsc)
 
 	return _tsc;
 }
-EXPORT_SYMBOL_GPL(kvm_scale_tsc);
 
 static u64 kvm_compute_tsc_offset(struct kvm_vcpu *vcpu, u64 target_tsc)
 {
@@ -1820,7 +1773,6 @@ u64 kvm_read_l1_tsc(struct kvm_vcpu *vcpu, u64 host_tsc)
 
 	return tsc_offset + kvm_scale_tsc(vcpu, host_tsc);
 }
-EXPORT_SYMBOL_GPL(kvm_read_l1_tsc);
 
 static void kvm_vcpu_write_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
 {
@@ -1943,7 +1895,6 @@ void kvm_write_tsc(struct kvm_vcpu *vcpu, struct msr_data *msr)
 	spin_unlock(&kvm->arch.pvclock_gtod_sync_lock);
 }
 
-EXPORT_SYMBOL_GPL(kvm_write_tsc);
 
 static inline void adjust_tsc_offset_guest(struct kvm_vcpu *vcpu,
 					   s64 adjustment)
@@ -2851,7 +2802,6 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	}
 	return 0;
 }
-EXPORT_SYMBOL_GPL(kvm_set_msr_common);
 
 static int get_msr_mce(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata, bool host)
 {
@@ -3090,7 +3040,6 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	}
 	return 0;
 }
-EXPORT_SYMBOL_GPL(kvm_get_msr_common);
 
 /*
  * Read or write a bunch of msrs. All parameters are kernel addresses.
@@ -5354,7 +5303,6 @@ int kvm_read_guest_virt(struct kvm_vcpu *vcpu,
 	return kvm_read_guest_virt_helper(addr, val, bytes, vcpu, access,
 					  exception);
 }
-EXPORT_SYMBOL_GPL(kvm_read_guest_virt);
 
 static int emulator_read_std(struct x86_emulate_ctxt *ctxt,
 			     gva_t addr, void *val, unsigned int bytes,
@@ -5439,7 +5387,6 @@ int kvm_write_guest_virt_system(struct kvm_vcpu *vcpu, gva_t addr, void *val,
 	return kvm_write_guest_virt_helper(addr, val, bytes, vcpu,
 					   PFERR_WRITE_MASK, exception);
 }
-EXPORT_SYMBOL_GPL(kvm_write_guest_virt_system);
 
 int handle_ud(struct kvm_vcpu *vcpu)
 {
@@ -5457,7 +5404,6 @@ int handle_ud(struct kvm_vcpu *vcpu)
 
 	return kvm_emulate_instruction(vcpu, emul_type);
 }
-EXPORT_SYMBOL_GPL(handle_ud);
 
 static int vcpu_is_mmio_gpa(struct kvm_vcpu *vcpu, unsigned long gva,
 			    gpa_t gpa, bool write)
@@ -5897,7 +5843,6 @@ int kvm_emulate_wbinvd(struct kvm_vcpu *vcpu)
 	kvm_emulate_wbinvd_noskip(vcpu);
 	return kvm_skip_emulated_instruction(vcpu);
 }
-EXPORT_SYMBOL_GPL(kvm_emulate_wbinvd);
 
 
 
@@ -6297,7 +6242,6 @@ void kvm_inject_realmode_interrupt(struct kvm_vcpu *vcpu, int irq, int inc_eip)
 		kvm_set_rflags(vcpu, ctxt->eflags);
 	}
 }
-EXPORT_SYMBOL_GPL(kvm_inject_realmode_interrupt);
 
 static int handle_emulation_failure(struct kvm_vcpu *vcpu, int emulation_type)
 {
@@ -6516,7 +6460,6 @@ int kvm_skip_emulated_instruction(struct kvm_vcpu *vcpu)
 		r = kvm_vcpu_do_singlestep(vcpu);
 	return r;
 }
-EXPORT_SYMBOL_GPL(kvm_skip_emulated_instruction);
 
 static bool kvm_vcpu_check_breakpoint(struct kvm_vcpu *vcpu, int *r)
 {
@@ -6755,14 +6698,12 @@ int kvm_emulate_instruction(struct kvm_vcpu *vcpu, int emulation_type)
 {
 	return x86_emulate_instruction(vcpu, 0, emulation_type, NULL, 0);
 }
-EXPORT_SYMBOL_GPL(kvm_emulate_instruction);
 
 int kvm_emulate_instruction_from_buffer(struct kvm_vcpu *vcpu,
 					void *insn, int insn_len)
 {
 	return x86_emulate_instruction(vcpu, 0, 0, insn, insn_len);
 }
-EXPORT_SYMBOL_GPL(kvm_emulate_instruction_from_buffer);
 
 static int complete_fast_pio_out_port_0x7e(struct kvm_vcpu *vcpu)
 {
@@ -6863,7 +6804,6 @@ int kvm_fast_pio(struct kvm_vcpu *vcpu, int size, unsigned short port, int in)
 		ret = kvm_fast_pio_out(vcpu, size, port);
 	return ret && kvm_skip_emulated_instruction(vcpu);
 }
-EXPORT_SYMBOL_GPL(kvm_fast_pio);
 
 static int kvmclock_cpu_down_prep(unsigned int cpu)
 {
@@ -7050,7 +6990,6 @@ static void kvm_timer_init(void)
 }
 
 DEFINE_PER_CPU(struct kvm_vcpu *, current_vcpu);
-EXPORT_PER_CPU_SYMBOL_GPL(current_vcpu);
 
 int kvm_is_in_guest(void)
 {
@@ -7254,7 +7193,6 @@ int kvm_vcpu_halt(struct kvm_vcpu *vcpu)
 		return 0;
 	}
 }
-EXPORT_SYMBOL_GPL(kvm_vcpu_halt);
 
 int kvm_emulate_halt(struct kvm_vcpu *vcpu)
 {
@@ -7265,7 +7203,6 @@ int kvm_emulate_halt(struct kvm_vcpu *vcpu)
 	 */
 	return kvm_vcpu_halt(vcpu) && ret;
 }
-EXPORT_SYMBOL_GPL(kvm_emulate_halt);
 
 #ifdef CONFIG_X86_64
 static int kvm_pv_clock_pairing(struct kvm_vcpu *vcpu, gpa_t paddr,
@@ -7409,7 +7346,6 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
 	++vcpu->stat.hypercalls;
 	return kvm_skip_emulated_instruction(vcpu);
 }
-EXPORT_SYMBOL_GPL(kvm_emulate_hypercall);
 
 static int emulator_fix_hypercall(struct x86_emulate_ctxt *ctxt)
 {
@@ -7915,13 +7851,11 @@ void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
 	 */
 	put_page(page);
 }
-EXPORT_SYMBOL_GPL(kvm_vcpu_reload_apic_access_page);
 
 void __kvm_request_immediate_exit(struct kvm_vcpu *vcpu)
 {
 	smp_send_reschedule(vcpu->cpu);
 }
-EXPORT_SYMBOL_GPL(__kvm_request_immediate_exit);
 
 /*
  * Returns 1 to let vcpu_run() continue the guest execution loop without
@@ -8600,7 +8534,6 @@ void kvm_get_cs_db_l_bits(struct kvm_vcpu *vcpu, int *db, int *l)
 	*db = cs.db;
 	*l = cs.l;
 }
-EXPORT_SYMBOL_GPL(kvm_get_cs_db_l_bits);
 
 static void __get_sregs(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
 {
@@ -8715,7 +8648,6 @@ int kvm_task_switch(struct kvm_vcpu *vcpu, u16 tss_selector, int idt_index,
 	kvm_make_request(KVM_REQ_EVENT, vcpu);
 	return 1;
 }
-EXPORT_SYMBOL_GPL(kvm_task_switch);
 
 static int kvm_valid_sregs(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
 {
@@ -9312,7 +9244,6 @@ bool kvm_vcpu_is_reset_bsp(struct kvm_vcpu *vcpu)
 {
 	return vcpu->kvm->arch.bsp_vcpu_id == vcpu->vcpu_id;
 }
-EXPORT_SYMBOL_GPL(kvm_vcpu_is_reset_bsp);
 
 bool kvm_vcpu_is_bsp(struct kvm_vcpu *vcpu)
 {
@@ -9320,7 +9251,6 @@ bool kvm_vcpu_is_bsp(struct kvm_vcpu *vcpu)
 }
 
 struct static_key kvm_no_apic_vcpu __read_mostly;
-EXPORT_SYMBOL_GPL(kvm_no_apic_vcpu);
 
 int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu)
 {
@@ -9543,7 +9473,6 @@ int __x86_set_memory_region(struct kvm *kvm, int id, gpa_t gpa, u32 size)
 
 	return 0;
 }
-EXPORT_SYMBOL_GPL(__x86_set_memory_region);
 
 int x86_set_memory_region(struct kvm *kvm, int id, gpa_t gpa, u32 size)
 {
@@ -9555,7 +9484,6 @@ int x86_set_memory_region(struct kvm *kvm, int id, gpa_t gpa, u32 size)
 
 	return r;
 }
-EXPORT_SYMBOL_GPL(x86_set_memory_region);
 
 void kvm_arch_destroy_vm(struct kvm *kvm)
 {
@@ -9877,13 +9805,11 @@ unsigned long kvm_get_linear_rip(struct kvm_vcpu *vcpu)
 	return (u32)(get_segment_base(vcpu, VCPU_SREG_CS) +
 		     kvm_rip_read(vcpu));
 }
-EXPORT_SYMBOL_GPL(kvm_get_linear_rip);
 
 bool kvm_is_linear_rip(struct kvm_vcpu *vcpu, unsigned long linear_rip)
 {
 	return kvm_get_linear_rip(vcpu) == linear_rip;
 }
-EXPORT_SYMBOL_GPL(kvm_is_linear_rip);
 
 unsigned long kvm_get_rflags(struct kvm_vcpu *vcpu)
 {
@@ -9894,7 +9820,6 @@ unsigned long kvm_get_rflags(struct kvm_vcpu *vcpu)
 		rflags &= ~X86_EFLAGS_TF;
 	return rflags;
 }
-EXPORT_SYMBOL_GPL(kvm_get_rflags);
 
 static void __kvm_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags)
 {
@@ -9909,7 +9834,6 @@ void kvm_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags)
 	__kvm_set_rflags(vcpu, rflags);
 	kvm_make_request(KVM_REQ_EVENT, vcpu);
 }
-EXPORT_SYMBOL_GPL(kvm_set_rflags);
 
 void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work)
 {
@@ -10116,37 +10040,31 @@ void kvm_arch_start_assignment(struct kvm *kvm)
 {
 	atomic_inc(&kvm->arch.assigned_device_count);
 }
-EXPORT_SYMBOL_GPL(kvm_arch_start_assignment);
 
 void kvm_arch_end_assignment(struct kvm *kvm)
 {
 	atomic_dec(&kvm->arch.assigned_device_count);
 }
-EXPORT_SYMBOL_GPL(kvm_arch_end_assignment);
 
 bool kvm_arch_has_assigned_device(struct kvm *kvm)
 {
 	return atomic_read(&kvm->arch.assigned_device_count);
 }
-EXPORT_SYMBOL_GPL(kvm_arch_has_assigned_device);
 
 void kvm_arch_register_noncoherent_dma(struct kvm *kvm)
 {
 	atomic_inc(&kvm->arch.noncoherent_dma_count);
 }
-EXPORT_SYMBOL_GPL(kvm_arch_register_noncoherent_dma);
 
 void kvm_arch_unregister_noncoherent_dma(struct kvm *kvm)
 {
 	atomic_dec(&kvm->arch.noncoherent_dma_count);
 }
-EXPORT_SYMBOL_GPL(kvm_arch_unregister_noncoherent_dma);
 
 bool kvm_arch_has_noncoherent_dma(struct kvm *kvm)
 {
 	return atomic_read(&kvm->arch.noncoherent_dma_count);
 }
-EXPORT_SYMBOL_GPL(kvm_arch_has_noncoherent_dma);
 
 bool kvm_arch_has_irq_bypass(void)
 {
@@ -10197,32 +10115,8 @@ bool kvm_vector_hashing_enabled(void)
 {
 	return vector_hashing;
 }
-EXPORT_SYMBOL_GPL(kvm_vector_hashing_enabled);
 
 bool kvm_arch_no_poll(struct kvm_vcpu *vcpu)
 {
 	return (vcpu->arch.msr_kvm_poll_control & 1) == 0;
 }
-EXPORT_SYMBOL_GPL(kvm_arch_no_poll);
-
-
-EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_exit);
-EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_fast_mmio);
-EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_inj_virq);
-EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_page_fault);
-EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_msr);
-EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_cr);
-EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_nested_vmrun);
-EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_nested_vmexit);
-EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_nested_vmexit_inject);
-EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_nested_intr_vmexit);
-EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_nested_vmenter_failed);
-EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_invlpga);
-EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_skinit);
-EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_nested_intercepts);
-EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_write_tsc_offset);
-EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_ple_window_update);
-EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_pml_full);
-EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_pi_irte_update);
-EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_avic_unaccelerated_access);
-EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_avic_incomplete_ipi);
diff --git a/virt/kvm/eventfd.c b/virt/kvm/eventfd.c
index 67b6fc153e9c..4c1a8abd1458 100644
--- a/virt/kvm/eventfd.c
+++ b/virt/kvm/eventfd.c
@@ -462,7 +462,6 @@ bool kvm_irq_has_notifier(struct kvm *kvm, unsigned irqchip, unsigned pin)
 
 	return false;
 }
-EXPORT_SYMBOL_GPL(kvm_irq_has_notifier);
 
 void kvm_notify_acked_gsi(struct kvm *kvm, int gsi)
 {
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 1b7fbd138406..bbc4064d74c2 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -72,22 +72,18 @@ MODULE_LICENSE("GPL");
 /* Architectures should define their poll value according to the halt latency */
 unsigned int halt_poll_ns = KVM_HALT_POLL_NS_DEFAULT;
 module_param(halt_poll_ns, uint, 0644);
-EXPORT_SYMBOL_GPL(halt_poll_ns);
 
 /* Default doubles per-vcpu halt_poll_ns. */
 unsigned int halt_poll_ns_grow = 2;
 module_param(halt_poll_ns_grow, uint, 0644);
-EXPORT_SYMBOL_GPL(halt_poll_ns_grow);
 
 /* The start value to grow halt_poll_ns from */
 unsigned int halt_poll_ns_grow_start = 10000; /* 10us */
 module_param(halt_poll_ns_grow_start, uint, 0644);
-EXPORT_SYMBOL_GPL(halt_poll_ns_grow_start);
 
 /* Default resets per-vcpu halt_poll_ns . */
 unsigned int halt_poll_ns_shrink;
 module_param(halt_poll_ns_shrink, uint, 0644);
-EXPORT_SYMBOL_GPL(halt_poll_ns_shrink);
 
 /*
  * Ordering of locks:
@@ -104,7 +100,6 @@ static int kvm_usage_count;
 static atomic_t hardware_enable_failed;
 
 struct kmem_cache *kvm_vcpu_cache;
-EXPORT_SYMBOL_GPL(kvm_vcpu_cache);
 
 static __read_mostly struct preempt_ops kvm_preempt_ops;
 
@@ -133,7 +128,6 @@ static void kvm_io_bus_destroy(struct kvm_io_bus *bus);
 static void mark_page_dirty_in_slot(struct kvm_memory_slot *memslot, gfn_t gfn);
 
 __visible bool kvm_rebooting;
-EXPORT_SYMBOL_GPL(kvm_rebooting);
 
 static bool largepages_enabled = true;
 
@@ -167,7 +161,6 @@ void vcpu_load(struct kvm_vcpu *vcpu)
 	kvm_arch_vcpu_load(vcpu, cpu);
 	put_cpu();
 }
-EXPORT_SYMBOL_GPL(vcpu_load);
 
 void vcpu_put(struct kvm_vcpu *vcpu)
 {
@@ -176,7 +169,6 @@ void vcpu_put(struct kvm_vcpu *vcpu)
 	preempt_notifier_unregister(&vcpu->preempt_notifier);
 	preempt_enable();
 }
-EXPORT_SYMBOL_GPL(vcpu_put);
 
 /* TODO: merge with kvm_arch_vcpu_should_kick */
 static bool kvm_request_needs_ipi(struct kvm_vcpu *vcpu, unsigned req)
@@ -280,7 +272,6 @@ void kvm_flush_remote_tlbs(struct kvm *kvm)
 		++kvm->stat.remote_tlb_flush;
 	cmpxchg(&kvm->tlbs_dirty, dirty_count, 0);
 }
-EXPORT_SYMBOL_GPL(kvm_flush_remote_tlbs);
 #endif
 
 void kvm_reload_remote_mmus(struct kvm *kvm)
@@ -326,7 +317,6 @@ int kvm_vcpu_init(struct kvm_vcpu *vcpu, struct kvm *kvm, unsigned id)
 fail:
 	return r;
 }
-EXPORT_SYMBOL_GPL(kvm_vcpu_init);
 
 void kvm_vcpu_uninit(struct kvm_vcpu *vcpu)
 {
@@ -339,7 +329,6 @@ void kvm_vcpu_uninit(struct kvm_vcpu *vcpu)
 	kvm_arch_vcpu_uninit(vcpu);
 	free_page((unsigned long)vcpu->run);
 }
-EXPORT_SYMBOL_GPL(kvm_vcpu_uninit);
 
 #if defined(CONFIG_MMU_NOTIFIER) && defined(KVM_ARCH_WANT_MMU_NOTIFIER)
 static inline struct kvm *mmu_notifier_to_kvm(struct mmu_notifier *mn)
@@ -1081,7 +1070,6 @@ int __kvm_set_memory_region(struct kvm *kvm,
 out:
 	return r;
 }
-EXPORT_SYMBOL_GPL(__kvm_set_memory_region);
 
 int kvm_set_memory_region(struct kvm *kvm,
 			  const struct kvm_userspace_memory_region *mem)
@@ -1093,7 +1081,6 @@ int kvm_set_memory_region(struct kvm *kvm,
 	mutex_unlock(&kvm->slots_lock);
 	return r;
 }
-EXPORT_SYMBOL_GPL(kvm_set_memory_region);
 
 static int kvm_vm_ioctl_set_memory_region(struct kvm *kvm,
 					  struct kvm_userspace_memory_region *mem)
@@ -1135,7 +1122,6 @@ int kvm_get_dirty_log(struct kvm *kvm,
 		*is_dirty = 1;
 	return 0;
 }
-EXPORT_SYMBOL_GPL(kvm_get_dirty_log);
 
 #ifdef CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT
 /**
@@ -1221,7 +1207,6 @@ int kvm_get_dirty_log_protect(struct kvm *kvm,
 		return -EFAULT;
 	return 0;
 }
-EXPORT_SYMBOL_GPL(kvm_get_dirty_log_protect);
 
 /**
  * kvm_clear_dirty_log_protect - clear dirty bits in the bitmap
@@ -1295,7 +1280,6 @@ int kvm_clear_dirty_log_protect(struct kvm *kvm,
 
 	return 0;
 }
-EXPORT_SYMBOL_GPL(kvm_clear_dirty_log_protect);
 #endif
 
 bool kvm_largepages_enabled(void)
@@ -1307,7 +1291,6 @@ void kvm_disable_largepages(void)
 {
 	largepages_enabled = false;
 }
-EXPORT_SYMBOL_GPL(kvm_disable_largepages);
 
 struct kvm_memory_slot *gfn_to_memslot(struct kvm *kvm, gfn_t gfn)
 {
@@ -1387,19 +1370,16 @@ unsigned long gfn_to_hva_memslot(struct kvm_memory_slot *slot,
 {
 	return gfn_to_hva_many(slot, gfn, NULL);
 }
-EXPORT_SYMBOL_GPL(gfn_to_hva_memslot);
 
 unsigned long gfn_to_hva(struct kvm *kvm, gfn_t gfn)
 {
 	return gfn_to_hva_many(gfn_to_memslot(kvm, gfn), gfn, NULL);
 }
-EXPORT_SYMBOL_GPL(gfn_to_hva);
 
 unsigned long kvm_vcpu_gfn_to_hva(struct kvm_vcpu *vcpu, gfn_t gfn)
 {
 	return gfn_to_hva_many(kvm_vcpu_gfn_to_memslot(vcpu, gfn), gfn, NULL);
 }
-EXPORT_SYMBOL_GPL(kvm_vcpu_gfn_to_hva);
 
 /*
  * Return the hva of a @gfn and the R/W attribute if possible.
@@ -1661,7 +1641,6 @@ kvm_pfn_t __gfn_to_pfn_memslot(struct kvm_memory_slot *slot, gfn_t gfn,
 	return hva_to_pfn(addr, atomic, async, write_fault,
 			  writable);
 }
-EXPORT_SYMBOL_GPL(__gfn_to_pfn_memslot);
 
 kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault,
 		      bool *writable)
@@ -1669,31 +1648,26 @@ kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault,
 	return __gfn_to_pfn_memslot(gfn_to_memslot(kvm, gfn), gfn, false, NULL,
 				    write_fault, writable);
 }
-EXPORT_SYMBOL_GPL(gfn_to_pfn_prot);
 
 kvm_pfn_t gfn_to_pfn_memslot(struct kvm_memory_slot *slot, gfn_t gfn)
 {
 	return __gfn_to_pfn_memslot(slot, gfn, false, NULL, true, NULL);
 }
-EXPORT_SYMBOL_GPL(gfn_to_pfn_memslot);
 
 kvm_pfn_t gfn_to_pfn_memslot_atomic(struct kvm_memory_slot *slot, gfn_t gfn)
 {
 	return __gfn_to_pfn_memslot(slot, gfn, true, NULL, true, NULL);
 }
-EXPORT_SYMBOL_GPL(gfn_to_pfn_memslot_atomic);
 
 kvm_pfn_t gfn_to_pfn_atomic(struct kvm *kvm, gfn_t gfn)
 {
 	return gfn_to_pfn_memslot_atomic(gfn_to_memslot(kvm, gfn), gfn);
 }
-EXPORT_SYMBOL_GPL(gfn_to_pfn_atomic);
 
 kvm_pfn_t kvm_vcpu_gfn_to_pfn_atomic(struct kvm_vcpu *vcpu, gfn_t gfn)
 {
 	return gfn_to_pfn_memslot_atomic(kvm_vcpu_gfn_to_memslot(vcpu, gfn), gfn);
 }
-EXPORT_SYMBOL_GPL(kvm_vcpu_gfn_to_pfn_atomic);
 
 kvm_pfn_t gfn_to_pfn(struct kvm *kvm, gfn_t gfn)
 {
@@ -1705,7 +1679,6 @@ kvm_pfn_t kvm_vcpu_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn)
 {
 	return gfn_to_pfn_memslot(kvm_vcpu_gfn_to_memslot(vcpu, gfn), gfn);
 }
-EXPORT_SYMBOL_GPL(kvm_vcpu_gfn_to_pfn);
 
 int gfn_to_page_many_atomic(struct kvm_memory_slot *slot, gfn_t gfn,
 			    struct page **pages, int nr_pages)
@@ -1722,7 +1695,6 @@ int gfn_to_page_many_atomic(struct kvm_memory_slot *slot, gfn_t gfn,
 
 	return __get_user_pages_fast(addr, nr_pages, 1, pages);
 }
-EXPORT_SYMBOL_GPL(gfn_to_page_many_atomic);
 
 static struct page *kvm_pfn_to_page(kvm_pfn_t pfn)
 {
@@ -1745,7 +1717,6 @@ struct page *gfn_to_page(struct kvm *kvm, gfn_t gfn)
 
 	return kvm_pfn_to_page(pfn);
 }
-EXPORT_SYMBOL_GPL(gfn_to_page);
 
 static int __kvm_map_gfn(struct kvm_memory_slot *slot, gfn_t gfn,
 			 struct kvm_host_map *map)
@@ -1785,7 +1756,6 @@ int kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map)
 {
 	return __kvm_map_gfn(kvm_vcpu_gfn_to_memslot(vcpu, gfn), gfn, map);
 }
-EXPORT_SYMBOL_GPL(kvm_vcpu_map);
 
 void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map,
 		    bool dirty)
@@ -1813,7 +1783,6 @@ void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map,
 	map->hva = NULL;
 	map->page = NULL;
 }
-EXPORT_SYMBOL_GPL(kvm_vcpu_unmap);
 
 struct page *kvm_vcpu_gfn_to_page(struct kvm_vcpu *vcpu, gfn_t gfn)
 {
@@ -1823,7 +1792,6 @@ struct page *kvm_vcpu_gfn_to_page(struct kvm_vcpu *vcpu, gfn_t gfn)
 
 	return kvm_pfn_to_page(pfn);
 }
-EXPORT_SYMBOL_GPL(kvm_vcpu_gfn_to_page);
 
 void kvm_release_page_clean(struct page *page)
 {
@@ -1831,14 +1799,12 @@ void kvm_release_page_clean(struct page *page)
 
 	kvm_release_pfn_clean(page_to_pfn(page));
 }
-EXPORT_SYMBOL_GPL(kvm_release_page_clean);
 
 void kvm_release_pfn_clean(kvm_pfn_t pfn)
 {
 	if (!is_error_noslot_pfn(pfn) && !kvm_is_reserved_pfn(pfn))
 		put_page(pfn_to_page(pfn));
 }
-EXPORT_SYMBOL_GPL(kvm_release_pfn_clean);
 
 void kvm_release_page_dirty(struct page *page)
 {
@@ -1846,14 +1812,12 @@ void kvm_release_page_dirty(struct page *page)
 
 	kvm_release_pfn_dirty(page_to_pfn(page));
 }
-EXPORT_SYMBOL_GPL(kvm_release_page_dirty);
 
 void kvm_release_pfn_dirty(kvm_pfn_t pfn)
 {
 	kvm_set_pfn_dirty(pfn);
 	kvm_release_pfn_clean(pfn);
 }
-EXPORT_SYMBOL_GPL(kvm_release_pfn_dirty);
 
 void kvm_set_pfn_dirty(kvm_pfn_t pfn)
 {
@@ -1863,21 +1827,18 @@ void kvm_set_pfn_dirty(kvm_pfn_t pfn)
 		SetPageDirty(page);
 	}
 }
-EXPORT_SYMBOL_GPL(kvm_set_pfn_dirty);
 
 void kvm_set_pfn_accessed(kvm_pfn_t pfn)
 {
 	if (!kvm_is_reserved_pfn(pfn))
 		mark_page_accessed(pfn_to_page(pfn));
 }
-EXPORT_SYMBOL_GPL(kvm_set_pfn_accessed);
 
 void kvm_get_pfn(kvm_pfn_t pfn)
 {
 	if (!kvm_is_reserved_pfn(pfn))
 		get_page(pfn_to_page(pfn));
 }
-EXPORT_SYMBOL_GPL(kvm_get_pfn);
 
 static int next_segment(unsigned long len, int offset)
 {
@@ -1909,7 +1870,6 @@ int kvm_read_guest_page(struct kvm *kvm, gfn_t gfn, void *data, int offset,
 
 	return __kvm_read_guest_page(slot, gfn, data, offset, len);
 }
-EXPORT_SYMBOL_GPL(kvm_read_guest_page);
 
 int kvm_vcpu_read_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn, void *data,
 			     int offset, int len)
@@ -1918,7 +1878,6 @@ int kvm_vcpu_read_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn, void *data,
 
 	return __kvm_read_guest_page(slot, gfn, data, offset, len);
 }
-EXPORT_SYMBOL_GPL(kvm_vcpu_read_guest_page);
 
 int kvm_read_guest(struct kvm *kvm, gpa_t gpa, void *data, unsigned long len)
 {
@@ -1958,7 +1917,6 @@ int kvm_vcpu_read_guest(struct kvm_vcpu *vcpu, gpa_t gpa, void *data, unsigned l
 	}
 	return 0;
 }
-EXPORT_SYMBOL_GPL(kvm_vcpu_read_guest);
 
 static int __kvm_read_guest_atomic(struct kvm_memory_slot *slot, gfn_t gfn,
 			           void *data, int offset, unsigned long len)
@@ -1986,7 +1944,6 @@ int kvm_read_guest_atomic(struct kvm *kvm, gpa_t gpa, void *data,
 
 	return __kvm_read_guest_atomic(slot, gfn, data, offset, len);
 }
-EXPORT_SYMBOL_GPL(kvm_read_guest_atomic);
 
 int kvm_vcpu_read_guest_atomic(struct kvm_vcpu *vcpu, gpa_t gpa,
 			       void *data, unsigned long len)
@@ -1997,7 +1954,6 @@ int kvm_vcpu_read_guest_atomic(struct kvm_vcpu *vcpu, gpa_t gpa,
 
 	return __kvm_read_guest_atomic(slot, gfn, data, offset, len);
 }
-EXPORT_SYMBOL_GPL(kvm_vcpu_read_guest_atomic);
 
 static int __kvm_write_guest_page(struct kvm_memory_slot *memslot, gfn_t gfn,
 			          const void *data, int offset, int len)
@@ -2022,7 +1978,6 @@ int kvm_write_guest_page(struct kvm *kvm, gfn_t gfn,
 
 	return __kvm_write_guest_page(slot, gfn, data, offset, len);
 }
-EXPORT_SYMBOL_GPL(kvm_write_guest_page);
 
 int kvm_vcpu_write_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn,
 			      const void *data, int offset, int len)
@@ -2031,7 +1986,6 @@ int kvm_vcpu_write_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn,
 
 	return __kvm_write_guest_page(slot, gfn, data, offset, len);
 }
-EXPORT_SYMBOL_GPL(kvm_vcpu_write_guest_page);
 
 int kvm_write_guest(struct kvm *kvm, gpa_t gpa, const void *data,
 		    unsigned long len)
@@ -2073,7 +2027,6 @@ int kvm_vcpu_write_guest(struct kvm_vcpu *vcpu, gpa_t gpa, const void *data,
 	}
 	return 0;
 }
-EXPORT_SYMBOL_GPL(kvm_vcpu_write_guest);
 
 static int __kvm_gfn_to_hva_cache_init(struct kvm_memslots *slots,
 				       struct gfn_to_hva_cache *ghc,
@@ -2119,7 +2072,6 @@ int kvm_gfn_to_hva_cache_init(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
 	struct kvm_memslots *slots = kvm_memslots(kvm);
 	return __kvm_gfn_to_hva_cache_init(slots, ghc, gpa, len);
 }
-EXPORT_SYMBOL_GPL(kvm_gfn_to_hva_cache_init);
 
 int kvm_write_guest_offset_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
 				  void *data, unsigned int offset,
@@ -2147,14 +2099,12 @@ int kvm_write_guest_offset_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
 
 	return 0;
 }
-EXPORT_SYMBOL_GPL(kvm_write_guest_offset_cached);
 
 int kvm_write_guest_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
 			   void *data, unsigned long len)
 {
 	return kvm_write_guest_offset_cached(kvm, ghc, data, 0, len);
 }
-EXPORT_SYMBOL_GPL(kvm_write_guest_cached);
 
 int kvm_read_guest_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
 			   void *data, unsigned long len)
@@ -2179,7 +2129,6 @@ int kvm_read_guest_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
 
 	return 0;
 }
-EXPORT_SYMBOL_GPL(kvm_read_guest_cached);
 
 int kvm_clear_guest_page(struct kvm *kvm, gfn_t gfn, int offset, int len)
 {
@@ -2187,7 +2136,6 @@ int kvm_clear_guest_page(struct kvm *kvm, gfn_t gfn, int offset, int len)
 
 	return kvm_write_guest_page(kvm, gfn, zero_page, offset, len);
 }
-EXPORT_SYMBOL_GPL(kvm_clear_guest_page);
 
 int kvm_clear_guest(struct kvm *kvm, gpa_t gpa, unsigned long len)
 {
@@ -2206,7 +2154,6 @@ int kvm_clear_guest(struct kvm *kvm, gpa_t gpa, unsigned long len)
 	}
 	return 0;
 }
-EXPORT_SYMBOL_GPL(kvm_clear_guest);
 
 static void mark_page_dirty_in_slot(struct kvm_memory_slot *memslot,
 				    gfn_t gfn)
@@ -2225,7 +2172,6 @@ void mark_page_dirty(struct kvm *kvm, gfn_t gfn)
 	memslot = gfn_to_memslot(kvm, gfn);
 	mark_page_dirty_in_slot(memslot, gfn);
 }
-EXPORT_SYMBOL_GPL(mark_page_dirty);
 
 void kvm_vcpu_mark_page_dirty(struct kvm_vcpu *vcpu, gfn_t gfn)
 {
@@ -2234,7 +2180,6 @@ void kvm_vcpu_mark_page_dirty(struct kvm_vcpu *vcpu, gfn_t gfn)
 	memslot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
 	mark_page_dirty_in_slot(memslot, gfn);
 }
-EXPORT_SYMBOL_GPL(kvm_vcpu_mark_page_dirty);
 
 void kvm_sigset_activate(struct kvm_vcpu *vcpu)
 {
@@ -2385,7 +2330,6 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu)
 	trace_kvm_vcpu_wakeup(block_ns, waited, vcpu_valid_wakeup(vcpu));
 	kvm_arch_vcpu_block_finish(vcpu);
 }
-EXPORT_SYMBOL_GPL(kvm_vcpu_block);
 
 bool kvm_vcpu_wake_up(struct kvm_vcpu *vcpu)
 {
@@ -2401,7 +2345,6 @@ bool kvm_vcpu_wake_up(struct kvm_vcpu *vcpu)
 
 	return false;
 }
-EXPORT_SYMBOL_GPL(kvm_vcpu_wake_up);
 
 #ifndef CONFIG_S390
 /*
@@ -2421,7 +2364,6 @@ void kvm_vcpu_kick(struct kvm_vcpu *vcpu)
 			smp_send_reschedule(cpu);
 	put_cpu();
 }
-EXPORT_SYMBOL_GPL(kvm_vcpu_kick);
 #endif /* !CONFIG_S390 */
 
 int kvm_vcpu_yield_to(struct kvm_vcpu *target)
@@ -2442,7 +2384,6 @@ int kvm_vcpu_yield_to(struct kvm_vcpu *target)
 
 	return ret;
 }
-EXPORT_SYMBOL_GPL(kvm_vcpu_yield_to);
 
 /*
  * Helper that checks whether a VCPU is eligible for directed yield.
@@ -2559,7 +2500,6 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me, bool yield_to_kernel_mode)
 	/* Ensure vcpu is not eligible during next spinloop */
 	kvm_vcpu_set_dy_eligible(me, false);
 }
-EXPORT_SYMBOL_GPL(kvm_vcpu_on_spin);
 
 static vm_fault_t kvm_vcpu_fault(struct vm_fault *vmf)
 {
@@ -3743,7 +3683,6 @@ int kvm_io_bus_write(struct kvm_vcpu *vcpu, enum kvm_bus bus_idx, gpa_t addr,
 	r = __kvm_io_bus_write(vcpu, bus, &range, val);
 	return r < 0 ? r : 0;
 }
-EXPORT_SYMBOL_GPL(kvm_io_bus_write);
 
 /* kvm_io_bus_write_cookie - called under kvm->slots_lock */
 int kvm_io_bus_write_cookie(struct kvm_vcpu *vcpu, enum kvm_bus bus_idx,
@@ -3920,7 +3859,6 @@ struct kvm_io_device *kvm_io_bus_get_dev(struct kvm *kvm, enum kvm_bus bus_idx,
 
 	return iodev;
 }
-EXPORT_SYMBOL_GPL(kvm_io_bus_get_dev);
 
 static int kvm_debugfs_open(struct inode *inode, struct file *file,
 			   int (*get)(void *, u64 *), int (*set)(void *, u64),
@@ -4352,7 +4290,6 @@ __init int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
 out_fail:
 	return r;
 }
-EXPORT_SYMBOL_GPL(kvm_init);
 
 void kvm_exit(void)
 {
@@ -4370,4 +4307,3 @@ void kvm_exit(void)
 	free_cpumask_var(cpus_hardware_enabled);
 	kvm_vfio_ops_exit();
 }
-EXPORT_SYMBOL_GPL(kvm_exit);


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 09/13] KVM: monolithic: x86: drop the kvm_pmu_ops structure
  2019-11-04 22:59 [PATCH 00/13] KVM monolithic v3 Andrea Arcangeli
                   ` (7 preceding siblings ...)
  2019-11-04 22:59 ` [PATCH 08/13] KVM: monolithic: remove exports Andrea Arcangeli
@ 2019-11-04 22:59 ` Andrea Arcangeli
  2019-11-04 22:59 ` [PATCH 10/13] KVM: x86: optimize more exit handlers in vmx.c Andrea Arcangeli
                   ` (3 subsequent siblings)
  12 siblings, 0 replies; 34+ messages in thread
From: Andrea Arcangeli @ 2019-11-04 22:59 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: Paolo Bonzini, Vitaly Kuznetsov, Sean Christopherson

Clean up now that the structure has finally been left completely unused.
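
For illustration, a minimal standalone sketch of the kind of call-site
conversion done earlier in the series, which is what left kvm_pmu_ops
without any users (the types and names are simplified stand-ins, not the
kernel's actual definitions; only kvm_x86_pmu_refresh mirrors a real
symbol):

#include <stdio.h>

/*
 * "Before": common code dispatches through a per-vendor ops structure,
 * i.e. one indirect call per invocation (a retpoline under
 * CONFIG_RETPOLINE).
 */
struct pmu_ops {
	void (*refresh)(int vcpu_id);
};

static void vendor_pmu_refresh(int vcpu_id)
{
	printf("refresh vcpu %d\n", vcpu_id);
}

static const struct pmu_ops vendor_pmu_ops = {
	.refresh = vendor_pmu_refresh,
};
static const struct pmu_ops *pmu_ops = &vendor_pmu_ops;

static void pmu_refresh_before(int vcpu_id)
{
	pmu_ops->refresh(vcpu_id);		/* indirect call */
}

/*
 * "After": the vendor code provides the function under a fixed external
 * name, common code calls it directly, and the ops structure is left
 * with no users at all, so it can be deleted.
 */
static void kvm_x86_pmu_refresh(int vcpu_id)	/* stand-in for the real one */
{
	printf("refresh vcpu %d\n", vcpu_id);
}

static void pmu_refresh_after(int vcpu_id)
{
	kvm_x86_pmu_refresh(vcpu_id);		/* direct call */
}

int main(void)
{
	pmu_refresh_before(0);
	pmu_refresh_after(0);
	return 0;
}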

Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
---
 arch/x86/include/asm/kvm_host.h |  3 ---
 arch/x86/kvm/pmu.h              | 20 --------------------
 arch/x86/kvm/pmu_amd.c          | 15 ---------------
 arch/x86/kvm/svm.c              |  1 -
 arch/x86/kvm/vmx/pmu_intel.c    | 15 ---------------
 arch/x86/kvm/vmx/vmx.c          |  2 --
 6 files changed, 56 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 2ddc61fdcd09..9cb18d3ffbe1 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1345,9 +1345,6 @@ struct kvm_x86_ops {
 					   gfn_t offset, unsigned long mask);
 	int (*write_log_dirty)(struct kvm_vcpu *vcpu);
 
-	/* pmu operations of sub-arch */
-	const struct kvm_pmu_ops *pmu_ops;
-
 	/*
 	 * Architecture specific hooks for vCPU blocking due to
 	 * HLT instruction.
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 82f07e3492df..c74d4ab30f66 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -36,23 +36,6 @@ extern void kvm_x86_pmu_refresh(struct kvm_vcpu *vcpu);
 extern void kvm_x86_pmu_init(struct kvm_vcpu *vcpu);
 extern void kvm_x86_pmu_reset(struct kvm_vcpu *vcpu);
 
-struct kvm_pmu_ops {
-	unsigned (*find_arch_event)(struct kvm_pmu *pmu, u8 event_select,
-				    u8 unit_mask);
-	unsigned (*find_fixed_event)(int idx);
-	bool (*pmc_is_enabled)(struct kvm_pmc *pmc);
-	struct kvm_pmc *(*pmc_idx_to_pmc)(struct kvm_pmu *pmu, int pmc_idx);
-	struct kvm_pmc *(*msr_idx_to_pmc)(struct kvm_vcpu *vcpu, unsigned idx,
-					  u64 *mask);
-	int (*is_valid_msr_idx)(struct kvm_vcpu *vcpu, unsigned idx);
-	bool (*is_valid_msr)(struct kvm_vcpu *vcpu, u32 msr);
-	int (*get_msr)(struct kvm_vcpu *vcpu, u32 msr, u64 *data);
-	int (*set_msr)(struct kvm_vcpu *vcpu, struct msr_data *msr_info);
-	void (*refresh)(struct kvm_vcpu *vcpu);
-	void (*init)(struct kvm_vcpu *vcpu);
-	void (*reset)(struct kvm_vcpu *vcpu);
-};
-
 static inline u64 pmc_bitmask(struct kvm_pmc *pmc)
 {
 	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
@@ -138,7 +121,4 @@ void kvm_pmu_destroy(struct kvm_vcpu *vcpu);
 int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp);
 
 bool is_vmware_backdoor_pmc(u32 pmc_idx);
-
-extern struct kvm_pmu_ops intel_pmu_ops;
-extern struct kvm_pmu_ops amd_pmu_ops;
 #endif /* __KVM_X86_PMU_H */
diff --git a/arch/x86/kvm/pmu_amd.c b/arch/x86/kvm/pmu_amd.c
index 7ea588023949..1b09ae337516 100644
--- a/arch/x86/kvm/pmu_amd.c
+++ b/arch/x86/kvm/pmu_amd.c
@@ -300,18 +300,3 @@ void kvm_x86_pmu_reset(struct kvm_vcpu *vcpu)
 		pmc->counter = pmc->eventsel = 0;
 	}
 }
-
-struct kvm_pmu_ops amd_pmu_ops = {
-	.find_arch_event = kvm_x86_pmu_find_arch_event,
-	.find_fixed_event = kvm_x86_pmu_find_fixed_event,
-	.pmc_is_enabled = kvm_x86_pmu_pmc_is_enabled,
-	.pmc_idx_to_pmc = kvm_x86_pmu_pmc_idx_to_pmc,
-	.msr_idx_to_pmc = kvm_x86_pmu_msr_idx_to_pmc,
-	.is_valid_msr_idx = kvm_x86_pmu_is_valid_msr_idx,
-	.is_valid_msr = kvm_x86_pmu_is_valid_msr,
-	.get_msr = kvm_x86_pmu_get_msr,
-	.set_msr = kvm_x86_pmu_set_msr,
-	.refresh = kvm_x86_pmu_refresh,
-	.init = kvm_x86_pmu_init,
-	.reset = kvm_x86_pmu_reset,
-};
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 4ce102f6f075..0021e11fd1fb 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -7290,7 +7290,6 @@ static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
 
 	.sched_in = kvm_x86_sched_in,
 
-	.pmu_ops = &amd_pmu_ops,
 	.deliver_posted_interrupt = kvm_x86_deliver_posted_interrupt,
 	.dy_apicv_has_pending_interrupt = kvm_x86_dy_apicv_has_pending_interrupt,
 	.update_pi_irte = kvm_x86_update_pi_irte,
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 2fa9ae5acde1..9bd062a8516a 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -359,18 +359,3 @@ void kvm_x86_pmu_reset(struct kvm_vcpu *vcpu)
 	pmu->fixed_ctr_ctrl = pmu->global_ctrl = pmu->global_status =
 		pmu->global_ovf_ctrl = 0;
 }
-
-struct kvm_pmu_ops intel_pmu_ops = {
-	.find_arch_event = kvm_x86_pmu_find_arch_event,
-	.find_fixed_event = kvm_x86_pmu_find_fixed_event,
-	.pmc_is_enabled = kvm_x86_pmu_pmc_is_enabled,
-	.pmc_idx_to_pmc = kvm_x86_pmu_pmc_idx_to_pmc,
-	.msr_idx_to_pmc = kvm_x86_pmu_msr_idx_to_pmc,
-	.is_valid_msr_idx = kvm_x86_pmu_is_valid_msr_idx,
-	.is_valid_msr = kvm_x86_pmu_is_valid_msr,
-	.get_msr = kvm_x86_pmu_get_msr,
-	.set_msr = kvm_x86_pmu_set_msr,
-	.refresh = kvm_x86_pmu_refresh,
-	.init = kvm_x86_pmu_init,
-	.reset = kvm_x86_pmu_reset,
-};
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 87e5d7276ea4..222467b2040e 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7897,8 +7897,6 @@ static struct kvm_x86_ops vmx_x86_ops __ro_after_init = {
 	.pre_block = kvm_x86_pre_block,
 	.post_block = kvm_x86_post_block,
 
-	.pmu_ops = &intel_pmu_ops,
-
 	.update_pi_irte = kvm_x86_update_pi_irte,
 
 #ifdef CONFIG_X86_64


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 10/13] KVM: x86: optimize more exit handlers in vmx.c
  2019-11-04 22:59 [PATCH 00/13] KVM monolithic v3 Andrea Arcangeli
                   ` (8 preceding siblings ...)
  2019-11-04 22:59 ` [PATCH 09/13] KVM: monolithic: x86: drop the kvm_pmu_ops structure Andrea Arcangeli
@ 2019-11-04 22:59 ` Andrea Arcangeli
  2019-11-05 10:20   ` Paolo Bonzini
  2019-11-04 22:59 ` [PATCH 11/13] KVM: retpolines: x86: eliminate retpoline from vmx.c exit handlers Andrea Arcangeli
                   ` (2 subsequent siblings)
  12 siblings, 1 reply; 34+ messages in thread
From: Andrea Arcangeli @ 2019-11-04 22:59 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: Paolo Bonzini, Vitaly Kuznetsov, Sean Christopherson

Eliminate the wasteful call/ret in the non-RETPOLINE case and the
unnecessary fentry dynamic tracing hooking points.
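
To make the removed overhead concrete, here is a small standalone
sketch; the vcpu type and the one-entry dispatch table are simplified
stand-ins for the vmx.c code touched below, while handle_halt() and
kvm_emulate_halt() mirror the functions in the diff:

#include <stdio.h>

struct vcpu { int id; };

/* The real work, shared by all vendors. */
static int kvm_emulate_halt(struct vcpu *vcpu)
{
	printf("halt vcpu %d\n", vcpu->id);
	return 1;
}

/*
 * "Before": a thin per-vendor wrapper.  Even without retpolines it costs
 * an extra call/ret, and being a separate function it also gets its own
 * fentry patch site for dynamic tracing.
 */
static int handle_halt(struct vcpu *vcpu)
{
	return kvm_emulate_halt(vcpu);
}

/*
 * Exit-reason dispatch tables: "before" points at the wrapper, "after"
 * points straight at the common helper, so the wrapper (and its fentry
 * hook) disappears.
 */
static int (*exit_handlers_before[1])(struct vcpu *) = { handle_halt };
static int (*exit_handlers_after[1])(struct vcpu *)  = { kvm_emulate_halt };

int main(void)
{
	struct vcpu v = { .id = 0 };

	exit_handlers_before[0](&v);
	exit_handlers_after[0](&v);
	return 0;
}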

Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
---
 arch/x86/kvm/vmx/vmx.c | 30 +++++-------------------------
 1 file changed, 5 insertions(+), 25 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 222467b2040e..a6afa5f4a01c 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -4694,7 +4694,7 @@ static int handle_exception_nmi(struct kvm_vcpu *vcpu)
 	return 0;
 }
 
-static int handle_external_interrupt(struct kvm_vcpu *vcpu)
+static __always_inline int handle_external_interrupt(struct kvm_vcpu *vcpu)
 {
 	++vcpu->stat.irq_exits;
 	return 1;
@@ -4965,21 +4965,6 @@ void kvm_x86_set_dr7(struct kvm_vcpu *vcpu, unsigned long val)
 	vmcs_writel(GUEST_DR7, val);
 }
 
-static int handle_cpuid(struct kvm_vcpu *vcpu)
-{
-	return kvm_emulate_cpuid(vcpu);
-}
-
-static int handle_rdmsr(struct kvm_vcpu *vcpu)
-{
-	return kvm_emulate_rdmsr(vcpu);
-}
-
-static int handle_wrmsr(struct kvm_vcpu *vcpu)
-{
-	return kvm_emulate_wrmsr(vcpu);
-}
-
 static int handle_tpr_below_threshold(struct kvm_vcpu *vcpu)
 {
 	kvm_apic_update_ppr(vcpu);
@@ -4996,11 +4981,6 @@ static int handle_interrupt_window(struct kvm_vcpu *vcpu)
 	return 1;
 }
 
-static int handle_halt(struct kvm_vcpu *vcpu)
-{
-	return kvm_emulate_halt(vcpu);
-}
-
 static int handle_vmcall(struct kvm_vcpu *vcpu)
 {
 	return kvm_emulate_hypercall(vcpu);
@@ -5548,11 +5528,11 @@ static int (*kvm_vmx_exit_handlers[])(struct kvm_vcpu *vcpu) = {
 	[EXIT_REASON_IO_INSTRUCTION]          = handle_io,
 	[EXIT_REASON_CR_ACCESS]               = handle_cr,
 	[EXIT_REASON_DR_ACCESS]               = handle_dr,
-	[EXIT_REASON_CPUID]                   = handle_cpuid,
-	[EXIT_REASON_MSR_READ]                = handle_rdmsr,
-	[EXIT_REASON_MSR_WRITE]               = handle_wrmsr,
+	[EXIT_REASON_CPUID]                   = kvm_emulate_cpuid,
+	[EXIT_REASON_MSR_READ]                = kvm_emulate_rdmsr,
+	[EXIT_REASON_MSR_WRITE]               = kvm_emulate_wrmsr,
 	[EXIT_REASON_PENDING_INTERRUPT]       = handle_interrupt_window,
-	[EXIT_REASON_HLT]                     = handle_halt,
+	[EXIT_REASON_HLT]                     = kvm_emulate_halt,
 	[EXIT_REASON_INVD]		      = handle_invd,
 	[EXIT_REASON_INVLPG]		      = handle_invlpg,
 	[EXIT_REASON_RDPMC]                   = handle_rdpmc,


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 11/13] KVM: retpolines: x86: eliminate retpoline from vmx.c exit handlers
  2019-11-04 22:59 [PATCH 00/13] KVM monolithic v3 Andrea Arcangeli
                   ` (9 preceding siblings ...)
  2019-11-04 22:59 ` [PATCH 10/13] KVM: x86: optimize more exit handlers in vmx.c Andrea Arcangeli
@ 2019-11-04 22:59 ` Andrea Arcangeli
  2019-11-05 10:20   ` Paolo Bonzini
  2019-11-04 23:00 ` [PATCH 12/13] KVM: retpolines: x86: eliminate retpoline from svm.c " Andrea Arcangeli
  2019-11-04 23:00 ` [PATCH 13/13] x86: retpolines: eliminate retpoline from msr event handlers Andrea Arcangeli
  12 siblings, 1 reply; 34+ messages in thread
From: Andrea Arcangeli @ 2019-11-04 22:59 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: Paolo Bonzini, Vitaly Kuznetsov, Sean Christopherson

It's enough to check the exit value and issue a direct call to avoid
the retpoline for all the common vmexit reasons.

Of course CONFIG_RETPOLINE already forbids gcc from using indirect jumps
while compiling all switch() statements; however, switch() would still
allow the compiler to bisect on the case value. It's more efficient to
prioritize the most frequent vmexits instead.

Halt may be a slow path from the point of view of the guest, but not
necessarily so from the point of view of the host if the host runs at
full CPU capacity and no host CPU is ever left idle.
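
The dispatch change can be sketched standalone as below; the exit
reasons and handler names are made-up stand-ins (the real prioritized
list is in the diff) and CONFIG_RETPOLINE has to be passed as
-DCONFIG_RETPOLINE to exercise the direct-call path:

#include <stdio.h>

struct vcpu { int id; };

enum { EXIT_HLT, EXIT_MSR_WRITE, EXIT_IO, NR_EXITS };

static int handle_hlt(struct vcpu *vcpu)       { (void)vcpu; return 1; }
static int handle_msr_write(struct vcpu *vcpu) { (void)vcpu; return 1; }
static int handle_io(struct vcpu *vcpu)        { (void)vcpu; return 1; }

static int (*exit_handlers[NR_EXITS])(struct vcpu *) = {
	[EXIT_HLT]       = handle_hlt,
	[EXIT_MSR_WRITE] = handle_msr_write,
	[EXIT_IO]        = handle_io,
};

static int handle_exit(struct vcpu *vcpu, int exit_reason)
{
#ifdef CONFIG_RETPOLINE
	/*
	 * Direct calls for the most frequent exits, tested in order of
	 * measured frequency, so the hot cases never reach the retpolined
	 * indirect call below.  A switch() would also avoid the indirect
	 * jump, but the compiler may bisect on the case value instead of
	 * honoring this priority order.
	 */
	if (exit_reason == EXIT_MSR_WRITE)
		return handle_msr_write(vcpu);
	else if (exit_reason == EXIT_HLT)
		return handle_hlt(vcpu);
#endif
	/* Cold (or non-retpoline) path: indirect call through the table. */
	return exit_handlers[exit_reason](vcpu);
}

int main(void)
{
	struct vcpu v = { .id = 0 };

	printf("handled: %d\n", handle_exit(&v, EXIT_MSR_WRITE));
	return 0;
}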

Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
---
 arch/x86/kvm/vmx/vmx.c | 18 ++++++++++++++++--
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index a6afa5f4a01c..582f837dc8c2 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -5905,9 +5905,23 @@ int kvm_x86_handle_exit(struct kvm_vcpu *vcpu)
 	}
 
 	if (exit_reason < kvm_vmx_max_exit_handlers
-	    && kvm_vmx_exit_handlers[exit_reason])
+	    && kvm_vmx_exit_handlers[exit_reason]) {
+#ifdef CONFIG_RETPOLINE
+		if (exit_reason == EXIT_REASON_MSR_WRITE)
+			return kvm_emulate_wrmsr(vcpu);
+		else if (exit_reason == EXIT_REASON_PREEMPTION_TIMER)
+			return handle_preemption_timer(vcpu);
+		else if (exit_reason == EXIT_REASON_PENDING_INTERRUPT)
+			return handle_interrupt_window(vcpu);
+		else if (exit_reason == EXIT_REASON_EXTERNAL_INTERRUPT)
+			return handle_external_interrupt(vcpu);
+		else if (exit_reason == EXIT_REASON_HLT)
+			return kvm_emulate_halt(vcpu);
+		else if (exit_reason == EXIT_REASON_EPT_MISCONFIG)
+			return handle_ept_misconfig(vcpu);
+#endif
 		return kvm_vmx_exit_handlers[exit_reason](vcpu);
-	else {
+	} else {
 		vcpu_unimpl(vcpu, "vmx: unexpected exit reason 0x%x\n",
 				exit_reason);
 		dump_vmcs();


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 12/13] KVM: retpolines: x86: eliminate retpoline from svm.c exit handlers
  2019-11-04 22:59 [PATCH 00/13] KVM monolithic v3 Andrea Arcangeli
                   ` (10 preceding siblings ...)
  2019-11-04 22:59 ` [PATCH 11/13] KVM: retpolines: x86: eliminate retpoline from vmx.c exit handlers Andrea Arcangeli
@ 2019-11-04 23:00 ` Andrea Arcangeli
  2019-11-05 10:21   ` Paolo Bonzini
  2019-11-04 23:00 ` [PATCH 13/13] x86: retpolines: eliminate retpoline from msr event handlers Andrea Arcangeli
  12 siblings, 1 reply; 34+ messages in thread
From: Andrea Arcangeli @ 2019-11-04 23:00 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: Paolo Bonzini, Vitaly Kuznetsov, Sean Christopherson

It's enough to check the exit value and issue a direct call to avoid
the retpoline for all the common vmexit reasons.

After this commit is applied, here are the most common retpolines
executed under a high-resolution timer workload in the guest on an SVM
host:

[..]
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_rax+33
    ktime_get_update_offsets_now+70
    hrtimer_interrupt+131
    smp_apic_timer_interrupt+106
    apic_timer_interrupt+15
    start_sw_timer+359
    restart_apic_timer+85
    kvm_set_msr_common+1497
    msr_interception+142
    vcpu_enter_guest+684
    kvm_arch_vcpu_ioctl_run+261
    kvm_vcpu_ioctl+559
    do_vfs_ioctl+164
    ksys_ioctl+96
    __x64_sys_ioctl+22
    do_syscall_64+89
    entry_SYSCALL_64_after_hwframe+68
]: 1940
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_r12+33
    force_qs_rnp+217
    rcu_gp_kthread+1270
    kthread+268
    ret_from_fork+34
]: 4644
@[]: 25095
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_rax+33
    lapic_next_event+28
    clockevents_program_event+148
    hrtimer_start_range_ns+528
    start_sw_timer+356
    restart_apic_timer+85
    kvm_set_msr_common+1497
    msr_interception+142
    vcpu_enter_guest+684
    kvm_arch_vcpu_ioctl_run+261
    kvm_vcpu_ioctl+559
    do_vfs_ioctl+164
    ksys_ioctl+96
    __x64_sys_ioctl+22
    do_syscall_64+89
    entry_SYSCALL_64_after_hwframe+68
]: 41474
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_rax+33
    clockevents_program_event+148
    hrtimer_start_range_ns+528
    start_sw_timer+356
    restart_apic_timer+85
    kvm_set_msr_common+1497
    msr_interception+142
    vcpu_enter_guest+684
    kvm_arch_vcpu_ioctl_run+261
    kvm_vcpu_ioctl+559
    do_vfs_ioctl+164
    ksys_ioctl+96
    __x64_sys_ioctl+22
    do_syscall_64+89
    entry_SYSCALL_64_after_hwframe+68
]: 41474
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_rax+33
    ktime_get+58
    clockevents_program_event+84
    hrtimer_start_range_ns+528
    start_sw_timer+356
    restart_apic_timer+85
    kvm_set_msr_common+1497
    msr_interception+142
    vcpu_enter_guest+684
    kvm_arch_vcpu_ioctl_run+261
    kvm_vcpu_ioctl+559
    do_vfs_ioctl+164
    ksys_ioctl+96
    __x64_sys_ioctl+22
    do_syscall_64+89
    entry_SYSCALL_64_after_hwframe+68
]: 41887
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_rax+33
    lapic_next_event+28
    clockevents_program_event+148
    hrtimer_try_to_cancel+168
    hrtimer_cancel+21
    kvm_set_lapic_tscdeadline_msr+43
    kvm_set_msr_common+1497
    msr_interception+142
    vcpu_enter_guest+684
    kvm_arch_vcpu_ioctl_run+261
    kvm_vcpu_ioctl+559
    do_vfs_ioctl+164
    ksys_ioctl+96
    __x64_sys_ioctl+22
    do_syscall_64+89
    entry_SYSCALL_64_after_hwframe+68
]: 42723
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_rax+33
    clockevents_program_event+148
    hrtimer_try_to_cancel+168
    hrtimer_cancel+21
    kvm_set_lapic_tscdeadline_msr+43
    kvm_set_msr_common+1497
    msr_interception+142
    vcpu_enter_guest+684
    kvm_arch_vcpu_ioctl_run+261
    kvm_vcpu_ioctl+559
    do_vfs_ioctl+164
    ksys_ioctl+96
    __x64_sys_ioctl+22
    do_syscall_64+89
    entry_SYSCALL_64_after_hwframe+68
]: 42766
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_rax+33
    ktime_get+58
    clockevents_program_event+84
    hrtimer_try_to_cancel+168
    hrtimer_cancel+21
    kvm_set_lapic_tscdeadline_msr+43
    kvm_set_msr_common+1497
    msr_interception+142
    vcpu_enter_guest+684
    kvm_arch_vcpu_ioctl_run+261
    kvm_vcpu_ioctl+559
    do_vfs_ioctl+164
    ksys_ioctl+96
    __x64_sys_ioctl+22
    do_syscall_64+89
    entry_SYSCALL_64_after_hwframe+68
]: 42848
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_rax+33
    ktime_get+58
    start_sw_timer+279
    restart_apic_timer+85
    kvm_set_msr_common+1497
    msr_interception+142
    vcpu_enter_guest+684
    kvm_arch_vcpu_ioctl_run+261
    kvm_vcpu_ioctl+559
    do_vfs_ioctl+164
    ksys_ioctl+96
    __x64_sys_ioctl+22
    do_syscall_64+89
    entry_SYSCALL_64_after_hwframe+68
]: 499845

@total: 1780243

SVM has no TSC-based programmable preemption timer, so it invokes
ktime_get() frequently.

Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
---
 arch/x86/kvm/svm.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 0021e11fd1fb..3942bca46740 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -4995,6 +4995,18 @@ int kvm_x86_handle_exit(struct kvm_vcpu *vcpu)
 		return 0;
 	}
 
+#ifdef CONFIG_RETPOLINE
+	if (exit_code == SVM_EXIT_MSR)
+		return msr_interception(svm);
+	else if (exit_code == SVM_EXIT_VINTR)
+		return interrupt_window_interception(svm);
+	else if (exit_code == SVM_EXIT_INTR)
+		return intr_interception(svm);
+	else if (exit_code == SVM_EXIT_HLT)
+		return halt_interception(svm);
+	else if (exit_code == SVM_EXIT_NPF)
+		return npf_interception(svm);
+#endif
 	return svm_exit_handlers[exit_code](svm);
 }
 


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 13/13] x86: retpolines: eliminate retpoline from msr event handlers
  2019-11-04 22:59 [PATCH 00/13] KVM monolithic v3 Andrea Arcangeli
                   ` (11 preceding siblings ...)
  2019-11-04 23:00 ` [PATCH 12/13] KVM: retpolines: x86: eliminate retpoline from svm.c " Andrea Arcangeli
@ 2019-11-04 23:00 ` Andrea Arcangeli
  2019-11-05 10:21   ` Paolo Bonzini
  12 siblings, 1 reply; 34+ messages in thread
From: Andrea Arcangeli @ 2019-11-04 23:00 UTC (permalink / raw)
  To: kvm, linux-kernel; +Cc: Paolo Bonzini, Vitaly Kuznetsov, Sean Christopherson

It's enough to check the function pointer value and issue the direct
call.

After this commit is applied, here are the most common retpolines
executed under a high-resolution timer workload in the guest on a VMX
host:

[..]
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_rax+33
    do_syscall_64+89
    entry_SYSCALL_64_after_hwframe+68
]: 267
@[]: 2256
@[
    trace_retpoline+1
    __trace_retpoline+30
    __x86_indirect_thunk_rax+33
    __kvm_wait_lapic_expire+284
    vmx_vcpu_run.part.97+1091
    vcpu_enter_guest+377
    kvm_arch_vcpu_ioctl_run+261
    kvm_vcpu_ioctl+559
    do_vfs_ioctl+164
    ksys_ioctl+96
    __x64_sys_ioctl+22
    do_syscall_64+89
    entry_SYSCALL_64_after_hwframe+68
]: 2390
@[]: 33410

@total: 315707

Note that the highest hit above is __delay, so it's probably not worth
optimizing even if it is more frequent than 2k hits per sec.
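
The shape of the change can be sketched standalone as below;
guest_get_msrs and its two implementations are simplified stand-ins for
x86_pmu.guest_get_msrs, intel_guest_get_msrs and core_guest_get_msrs in
the diff, and -DCONFIG_RETPOLINE selects the direct-call branch:

#include <stddef.h>
#include <stdio.h>

struct switch_msr { unsigned int msr; };

/* The only two implementations the function pointer can hold here. */
static struct switch_msr *intel_get_msrs(int *nr) { *nr = 2; return NULL; }
static struct switch_msr *core_get_msrs(int *nr)  { *nr = 1; return NULL; }

static struct switch_msr *(*guest_get_msrs)(int *nr) = intel_get_msrs;

static struct switch_msr *perf_guest_get_msrs(int *nr)
{
#ifdef CONFIG_RETPOLINE
	/*
	 * Compare the pointer against the known targets and call them
	 * directly: the common cases skip the retpoline entirely.
	 */
	if (guest_get_msrs == intel_get_msrs)
		return intel_get_msrs(nr);
	else if (guest_get_msrs == core_get_msrs)
		return core_get_msrs(nr);
#endif
	if (guest_get_msrs)
		return guest_get_msrs(nr);	/* retpolined indirect call */
	*nr = 0;
	return NULL;
}

int main(void)
{
	int nr;

	perf_guest_get_msrs(&nr);
	printf("%d MSRs to switch\n", nr);

	guest_get_msrs = core_get_msrs;		/* e.g. a pre-arch-perfmon PMU */
	perf_guest_get_msrs(&nr);
	printf("%d MSRs to switch\n", nr);
	return 0;
}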

Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
---
 arch/x86/events/intel/core.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index fcef678c3423..937363b803c1 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -3323,8 +3323,19 @@ static int intel_pmu_hw_config(struct perf_event *event)
 	return 0;
 }
 
+#ifdef CONFIG_RETPOLINE
+static struct perf_guest_switch_msr *core_guest_get_msrs(int *nr);
+static struct perf_guest_switch_msr *intel_guest_get_msrs(int *nr);
+#endif
+
 struct perf_guest_switch_msr *perf_guest_get_msrs(int *nr)
 {
+#ifdef CONFIG_RETPOLINE
+	if (x86_pmu.guest_get_msrs == intel_guest_get_msrs)
+		return intel_guest_get_msrs(nr);
+	else if (x86_pmu.guest_get_msrs == core_guest_get_msrs)
+		return core_guest_get_msrs(nr);
+#endif
 	if (x86_pmu.guest_get_msrs)
 		return x86_pmu.guest_get_msrs(nr);
 	*nr = 0;


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* Re: [PATCH 03/13] kvm: monolithic: fixup x86-32 build
  2019-11-04 22:59 ` [PATCH 03/13] kvm: monolithic: fixup x86-32 build Andrea Arcangeli
@ 2019-11-05 10:04   ` Paolo Bonzini
  2019-11-05 10:37     ` Paolo Bonzini
  0 siblings, 1 reply; 34+ messages in thread
From: Paolo Bonzini @ 2019-11-05 10:04 UTC (permalink / raw)
  To: Andrea Arcangeli, kvm, linux-kernel; +Cc: Vitaly Kuznetsov, Sean Christopherson

On 04/11/19 23:59, Andrea Arcangeli wrote:
> kvm_x86_set_hv_timer and kvm_x86_cancel_hv_timer needs to be defined
> to succeed the 32bit kernel build, but they can't be called.
> 
> Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
> ---
>  arch/x86/kvm/vmx/vmx.c | 11 +++++++++++
>  1 file changed, 11 insertions(+)
> 
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index bd17ad61f7e3..1a58ae38c8f2 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -7195,6 +7195,17 @@ void kvm_x86_cancel_hv_timer(struct kvm_vcpu *vcpu)
>  {
>  	to_vmx(vcpu)->hv_deadline_tsc = -1;
>  }
> +#else
> +int kvm_x86_set_hv_timer(struct kvm_vcpu *vcpu, u64 guest_deadline_tsc,
> +			 bool *expired)
> +{
> +	BUG();
> +}
> +
> +void kvm_x86_cancel_hv_timer(struct kvm_vcpu *vcpu)
> +{
> +	BUG();
> +}
>  #endif
>  
>  void kvm_x86_sched_in(struct kvm_vcpu *vcpu, int cpu)
> 

I'll check for how long this has been broken.  It may be the proof that
we can actually drop 32-bit KVM support.

Paolo

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 05/13] KVM: monolithic: add more section prefixes
  2019-11-04 22:59 ` [PATCH 05/13] KVM: monolithic: add more section prefixes Andrea Arcangeli
@ 2019-11-05 10:16   ` Paolo Bonzini
  0 siblings, 0 replies; 34+ messages in thread
From: Paolo Bonzini @ 2019-11-05 10:16 UTC (permalink / raw)
  To: Andrea Arcangeli, kvm, linux-kernel; +Cc: Vitaly Kuznetsov, Sean Christopherson

On 04/11/19 23:59, Andrea Arcangeli wrote:
> Add more section prefixes because with the monolithic KVM model the
> section checker can now do a more accurate static analysis at build
> time and this allows to build without
> CONFIG_SECTION_MISMATCH_WARN_ONLY=n.
> 
> Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
> ---
>  arch/powerpc/kvm/book3s.c | 2 +-
>  arch/x86/kvm/x86.c        | 4 ++--
>  include/linux/kvm_host.h  | 8 ++++----
>  virt/kvm/arm/arm.c        | 2 +-
>  virt/kvm/kvm_main.c       | 6 +++---
>  5 files changed, 11 insertions(+), 11 deletions(-)
> 
> diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
> index ec2547cc5ecb..e80e9504722a 100644
> --- a/arch/powerpc/kvm/book3s.c
> +++ b/arch/powerpc/kvm/book3s.c
> @@ -1067,7 +1067,7 @@ int kvm_irq_map_chip_pin(struct kvm *kvm, unsigned irqchip, unsigned pin)
>  
>  #endif /* CONFIG_KVM_XICS */
>  
> -static int kvmppc_book3s_init(void)
> +static __init int kvmppc_book3s_init(void)
>  {
>  	int r;
>  
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index fb963e6b2e54..5e98fa6b7bf8 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -9272,7 +9272,7 @@ void kvm_arch_hardware_disable(void)
>  	drop_user_return_notifiers();
>  }
>  
> -int kvm_arch_hardware_setup(void)
> +__init int kvm_arch_hardware_setup(void)
>  {
>  	int r;
>  
> @@ -9303,7 +9303,7 @@ void kvm_arch_hardware_unsetup(void)
>  	kvm_x86_hardware_unsetup();
>  }
>  
> -int kvm_arch_check_processor_compat(void)
> +__init int kvm_arch_check_processor_compat(void)
>  {
>  	return kvm_x86_check_processor_compatibility();
>  }
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 719fc3e15ea4..426bc2f485a9 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -616,8 +616,8 @@ static inline void kvm_irqfd_exit(void)
>  {
>  }
>  #endif
> -int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
> -		  struct module *module);
> +__init int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
> +		    struct module *module);
>  void kvm_exit(void);
>  
>  void kvm_get_kvm(struct kvm *kvm);
> @@ -867,9 +867,9 @@ void kvm_arch_create_vcpu_debugfs(struct kvm_vcpu *vcpu);
>  
>  int kvm_arch_hardware_enable(void);
>  void kvm_arch_hardware_disable(void);
> -int kvm_arch_hardware_setup(void);
> +__init int kvm_arch_hardware_setup(void);
>  void kvm_arch_hardware_unsetup(void);
> -int kvm_arch_check_processor_compat(void);
> +__init int kvm_arch_check_processor_compat(void);
>  int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu);
>  bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu);
>  int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu);
> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> index 86c6aa1cb58e..65f7f0f6868d 100644
> --- a/virt/kvm/arm/arm.c
> +++ b/virt/kvm/arm/arm.c
> @@ -1726,7 +1726,7 @@ void kvm_arch_exit(void)
>  	kvm_perf_teardown();
>  }
>  
> -static int arm_init(void)
> +static __init int arm_init(void)
>  {
>  	int rc = kvm_init(NULL, sizeof(struct kvm_vcpu), 0, THIS_MODULE);
>  	return rc;
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index d6f0696d98ef..1b7fbd138406 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -4246,13 +4246,13 @@ static void kvm_sched_out(struct preempt_notifier *pn,
>  	kvm_arch_vcpu_put(vcpu);
>  }
>  
> -static void check_processor_compat(void *rtn)
> +static __init void check_processor_compat(void *rtn)
>  {
>  	*(int *)rtn = kvm_arch_check_processor_compat();
>  }
>  
> -int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
> -		  struct module *module)
> +__init int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
> +		    struct module *module)
>  {
>  	int r;
>  	int cpu;
> 

Queued, thanks.

Paolo

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 06/13] KVM: monolithic: x86: remove __exit section prefix from machine_unsetup
  2019-11-04 22:59 ` [PATCH 06/13] KVM: monolithic: x86: remove __exit section prefix from machine_unsetup Andrea Arcangeli
@ 2019-11-05 10:16   ` Paolo Bonzini
  0 siblings, 0 replies; 34+ messages in thread
From: Paolo Bonzini @ 2019-11-05 10:16 UTC (permalink / raw)
  To: Andrea Arcangeli, kvm, linux-kernel; +Cc: Vitaly Kuznetsov, Sean Christopherson

On 04/11/19 23:59, Andrea Arcangeli wrote:
> Adjusts the section prefixes of some KVM x86 code function because
> with the monolithic KVM model the section checker can now do a more
> accurate static analysis at build time and it found a potentially
> kernel crashing bug. This also allows to build without
> CONFIG_SECTION_MISMATCH_WARN_ONLY=n.
> 
> The __exit removed from machine_unsetup is because
> kvm_arch_hardware_unsetup() is called by kvm_init() which is in the
> __init section. It's not allowed to call a function located in the
> __exit section and dropped during the kernel link from the __init
> section or the kernel will crash if that call is made.
> 
> Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
> ---
>  arch/x86/include/asm/kvm_host.h | 4 ++--
>  arch/x86/kvm/svm.c              | 2 +-
>  arch/x86/kvm/vmx/vmx.c          | 2 +-
>  3 files changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index b36dd3265036..2b03ec80f6d7 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1004,7 +1004,7 @@ extern int kvm_x86_hardware_enable(void);
>  extern void kvm_x86_hardware_disable(void);
>  extern __init int kvm_x86_check_processor_compatibility(void);
>  extern __init int kvm_x86_hardware_setup(void);
> -extern __exit void kvm_x86_hardware_unsetup(void);
> +extern void kvm_x86_hardware_unsetup(void);
>  extern bool kvm_x86_cpu_has_accelerated_tpr(void);
>  extern bool kvm_x86_has_emulated_msr(int index);
>  extern void kvm_x86_cpuid_update(struct kvm_vcpu *vcpu);
> @@ -1196,7 +1196,7 @@ struct kvm_x86_ops {
>  	void (*hardware_disable)(void);
>  	int (*check_processor_compatibility)(void);/* __init */
>  	int (*hardware_setup)(void);               /* __init */
> -	void (*hardware_unsetup)(void);            /* __exit */
> +	void (*hardware_unsetup)(void);
>  	bool (*cpu_has_accelerated_tpr)(void);
>  	bool (*has_emulated_msr)(int index);
>  	void (*cpuid_update)(struct kvm_vcpu *vcpu);
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index 1705608246fb..4ce102f6f075 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -1412,7 +1412,7 @@ __init int kvm_x86_hardware_setup(void)
>  	return r;
>  }
>  
> -__exit void kvm_x86_hardware_unsetup(void)
> +void kvm_x86_hardware_unsetup(void)
>  {
>  	int cpu;
>  
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 9c5f0c67b899..e406707381a4 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -7737,7 +7737,7 @@ __init int kvm_x86_hardware_setup(void)
>  	return r;
>  }
>  
> -__exit void kvm_x86_hardware_unsetup(void)
> +void kvm_x86_hardware_unsetup(void)
>  {
>  	if (nested)
>  		nested_vmx_hardware_unsetup();
> 

Queued, thanks.

Paolo

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 07/13] KVM: monolithic: x86: remove __init section prefix from kvm_x86_cpu_has_kvm_support
  2019-11-04 22:59 ` [PATCH 07/13] KVM: monolithic: x86: remove __init section prefix from kvm_x86_cpu_has_kvm_support Andrea Arcangeli
@ 2019-11-05 10:16   ` Paolo Bonzini
  0 siblings, 0 replies; 34+ messages in thread
From: Paolo Bonzini @ 2019-11-05 10:16 UTC (permalink / raw)
  To: Andrea Arcangeli, kvm, linux-kernel; +Cc: Vitaly Kuznetsov, Sean Christopherson

On 04/11/19 23:59, Andrea Arcangeli wrote:
> Adjusts the section prefixes of some KVM x86 code function because
> with the monolithic KVM model the section checker can now do a more
> accurate static analysis at build time. This also allows to build
> without CONFIG_SECTION_MISMATCH_WARN_ONLY=n.
> 
> The __init needs to be removed on vmx despite it's only svm calling it
> from kvm_x86_hardware_enable which is eventually called by
> hardware_enable_nolock() or there's a (potentially false positive)
> warning (false positive because this function isn't called in the vmx
> case). If this isn't needed the right cleanup isn't to put it in the
> __init section, but to drop it. As long as it's defined in vmx as a
> kvm_x86 operation, it's expectable that might eventually be called at
> runtime while hot plugging new CPUs.
> 
> Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
> ---
>  arch/x86/include/asm/kvm_host.h | 4 ++--
>  arch/x86/kvm/vmx/vmx.c          | 2 +-
>  2 files changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 2b03ec80f6d7..2ddc61fdcd09 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -998,7 +998,7 @@ struct kvm_lapic_irq {
>  	bool msi_redir_hint;
>  };
>  
> -extern __init int kvm_x86_cpu_has_kvm_support(void);
> +extern int kvm_x86_cpu_has_kvm_support(void);
>  extern __init int kvm_x86_disabled_by_bios(void);
>  extern int kvm_x86_hardware_enable(void);
>  extern void kvm_x86_hardware_disable(void);
> @@ -1190,7 +1190,7 @@ extern bool kvm_x86_apic_init_signal_blocked(struct kvm_vcpu *vcpu);
>  extern int kvm_x86_enable_direct_tlbflush(struct kvm_vcpu *vcpu);
>  
>  struct kvm_x86_ops {
> -	int (*cpu_has_kvm_support)(void);          /* __init */
> +	int (*cpu_has_kvm_support)(void);
>  	int (*disabled_by_bios)(void);             /* __init */
>  	int (*hardware_enable)(void);
>  	void (*hardware_disable)(void);
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index e406707381a4..87e5d7276ea4 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -2178,7 +2178,7 @@ void kvm_x86_cache_reg(struct kvm_vcpu *vcpu, enum kvm_reg reg)
>  	}
>  }
>  
> -__init int kvm_x86_cpu_has_kvm_support(void)
> +int kvm_x86_cpu_has_kvm_support(void)
>  {
>  	return cpu_has_vmx();
>  }
> 

I think we should eliminate all the complications in cpu_has_svm(), so
that svm_hardware_enable can use it.  I'll post a patch.

Paolo

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 10/13] KVM: x86: optimize more exit handlers in vmx.c
  2019-11-04 22:59 ` [PATCH 10/13] KVM: x86: optimize more exit handlers in vmx.c Andrea Arcangeli
@ 2019-11-05 10:20   ` Paolo Bonzini
  0 siblings, 0 replies; 34+ messages in thread
From: Paolo Bonzini @ 2019-11-05 10:20 UTC (permalink / raw)
  To: Andrea Arcangeli, kvm, linux-kernel; +Cc: Vitaly Kuznetsov, Sean Christopherson

On 04/11/19 23:59, Andrea Arcangeli wrote:
> Eliminate wasteful call/ret non RETPOLINE case and unnecessary fentry
> dynamic tracing hooking points.
> 
> Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
> ---
>  arch/x86/kvm/vmx/vmx.c | 30 +++++-------------------------
>  1 file changed, 5 insertions(+), 25 deletions(-)
> 
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 222467b2040e..a6afa5f4a01c 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -4694,7 +4694,7 @@ static int handle_exception_nmi(struct kvm_vcpu *vcpu)
>  	return 0;
>  }
>  
> -static int handle_external_interrupt(struct kvm_vcpu *vcpu)
> +static __always_inline int handle_external_interrupt(struct kvm_vcpu *vcpu)
>  {
>  	++vcpu->stat.irq_exits;
>  	return 1;
> @@ -4965,21 +4965,6 @@ void kvm_x86_set_dr7(struct kvm_vcpu *vcpu, unsigned long val)
>  	vmcs_writel(GUEST_DR7, val);
>  }
>  
> -static int handle_cpuid(struct kvm_vcpu *vcpu)
> -{
> -	return kvm_emulate_cpuid(vcpu);
> -}
> -
> -static int handle_rdmsr(struct kvm_vcpu *vcpu)
> -{
> -	return kvm_emulate_rdmsr(vcpu);
> -}
> -
> -static int handle_wrmsr(struct kvm_vcpu *vcpu)
> -{
> -	return kvm_emulate_wrmsr(vcpu);
> -}
> -
>  static int handle_tpr_below_threshold(struct kvm_vcpu *vcpu)
>  {
>  	kvm_apic_update_ppr(vcpu);
> @@ -4996,11 +4981,6 @@ static int handle_interrupt_window(struct kvm_vcpu *vcpu)
>  	return 1;
>  }
>  
> -static int handle_halt(struct kvm_vcpu *vcpu)
> -{
> -	return kvm_emulate_halt(vcpu);
> -}
> -
>  static int handle_vmcall(struct kvm_vcpu *vcpu)
>  {
>  	return kvm_emulate_hypercall(vcpu);
> @@ -5548,11 +5528,11 @@ static int (*kvm_vmx_exit_handlers[])(struct kvm_vcpu *vcpu) = {
>  	[EXIT_REASON_IO_INSTRUCTION]          = handle_io,
>  	[EXIT_REASON_CR_ACCESS]               = handle_cr,
>  	[EXIT_REASON_DR_ACCESS]               = handle_dr,
> -	[EXIT_REASON_CPUID]                   = handle_cpuid,
> -	[EXIT_REASON_MSR_READ]                = handle_rdmsr,
> -	[EXIT_REASON_MSR_WRITE]               = handle_wrmsr,
> +	[EXIT_REASON_CPUID]                   = kvm_emulate_cpuid,
> +	[EXIT_REASON_MSR_READ]                = kvm_emulate_rdmsr,
> +	[EXIT_REASON_MSR_WRITE]               = kvm_emulate_wrmsr,
>  	[EXIT_REASON_PENDING_INTERRUPT]       = handle_interrupt_window,
> -	[EXIT_REASON_HLT]                     = handle_halt,
> +	[EXIT_REASON_HLT]                     = kvm_emulate_halt,
>  	[EXIT_REASON_INVD]		      = handle_invd,
>  	[EXIT_REASON_INVLPG]		      = handle_invlpg,
>  	[EXIT_REASON_RDPMC]                   = handle_rdpmc,
> 

Queued, thanks.

Paolo

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 11/13] KVM: retpolines: x86: eliminate retpoline from vmx.c exit handlers
  2019-11-04 22:59 ` [PATCH 11/13] KVM: retpolines: x86: eliminate retpoline from vmx.c exit handlers Andrea Arcangeli
@ 2019-11-05 10:20   ` Paolo Bonzini
  0 siblings, 0 replies; 34+ messages in thread
From: Paolo Bonzini @ 2019-11-05 10:20 UTC (permalink / raw)
  To: Andrea Arcangeli, kvm, linux-kernel; +Cc: Vitaly Kuznetsov, Sean Christopherson

On 04/11/19 23:59, Andrea Arcangeli wrote:
> It's enough to check the exit value and issue a direct call to avoid
> the retpoline for all the common vmexit reasons.
> 
> Of course CONFIG_RETPOLINE already forbids gcc to use indirect jumps
> while compiling all switch() statements, however switch() would still
> allow the compiler to bisect the case value. It's more efficient to
> prioritize the most frequent vmexits instead.
> 
> The halt may be slow paths from the point of the guest, but not
> necessarily so from the point of the host if the host runs at full CPU
> capacity and no host CPU is ever left idle.
> 
> Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
> ---
>  arch/x86/kvm/vmx/vmx.c | 18 ++++++++++++++++--
>  1 file changed, 16 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index a6afa5f4a01c..582f837dc8c2 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -5905,9 +5905,23 @@ int kvm_x86_handle_exit(struct kvm_vcpu *vcpu)
>  	}
>  
>  	if (exit_reason < kvm_vmx_max_exit_handlers
> -	    && kvm_vmx_exit_handlers[exit_reason])
> +	    && kvm_vmx_exit_handlers[exit_reason]) {
> +#ifdef CONFIG_RETPOLINE
> +		if (exit_reason == EXIT_REASON_MSR_WRITE)
> +			return kvm_emulate_wrmsr(vcpu);
> +		else if (exit_reason == EXIT_REASON_PREEMPTION_TIMER)
> +			return handle_preemption_timer(vcpu);
> +		else if (exit_reason == EXIT_REASON_PENDING_INTERRUPT)
> +			return handle_interrupt_window(vcpu);
> +		else if (exit_reason == EXIT_REASON_EXTERNAL_INTERRUPT)
> +			return handle_external_interrupt(vcpu);
> +		else if (exit_reason == EXIT_REASON_HLT)
> +			return kvm_emulate_halt(vcpu);
> +		else if (exit_reason == EXIT_REASON_EPT_MISCONFIG)
> +			return handle_ept_misconfig(vcpu);
> +#endif
>  		return kvm_vmx_exit_handlers[exit_reason](vcpu);
> -	else {
> +	} else {
>  		vcpu_unimpl(vcpu, "vmx: unexpected exit reason 0x%x\n",
>  				exit_reason);
>  		dump_vmcs();
> 

Queued, thanks.

Paolo

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 12/13] KVM: retpolines: x86: eliminate retpoline from svm.c exit handlers
  2019-11-04 23:00 ` [PATCH 12/13] KVM: retpolines: x86: eliminate retpoline from svm.c " Andrea Arcangeli
@ 2019-11-05 10:21   ` Paolo Bonzini
  0 siblings, 0 replies; 34+ messages in thread
From: Paolo Bonzini @ 2019-11-05 10:21 UTC (permalink / raw)
  To: Andrea Arcangeli, kvm, linux-kernel; +Cc: Vitaly Kuznetsov, Sean Christopherson

On 05/11/19 00:00, Andrea Arcangeli wrote:
> It's enough to check the exit value and issue a direct call to avoid
> the retpoline for all the common vmexit reasons.
> 
> After this commit is applied, here are the most common retpolines
> executed under a high-resolution timer workload in the guest on an SVM
> host:
> 
> [..]
> @[
>     trace_retpoline+1
>     __trace_retpoline+30
>     __x86_indirect_thunk_rax+33
>     ktime_get_update_offsets_now+70
>     hrtimer_interrupt+131
>     smp_apic_timer_interrupt+106
>     apic_timer_interrupt+15
>     start_sw_timer+359
>     restart_apic_timer+85
>     kvm_set_msr_common+1497
>     msr_interception+142
>     vcpu_enter_guest+684
>     kvm_arch_vcpu_ioctl_run+261
>     kvm_vcpu_ioctl+559
>     do_vfs_ioctl+164
>     ksys_ioctl+96
>     __x64_sys_ioctl+22
>     do_syscall_64+89
>     entry_SYSCALL_64_after_hwframe+68
> ]: 1940
> @[
>     trace_retpoline+1
>     __trace_retpoline+30
>     __x86_indirect_thunk_r12+33
>     force_qs_rnp+217
>     rcu_gp_kthread+1270
>     kthread+268
>     ret_from_fork+34
> ]: 4644
> @[]: 25095
> @[
>     trace_retpoline+1
>     __trace_retpoline+30
>     __x86_indirect_thunk_rax+33
>     lapic_next_event+28
>     clockevents_program_event+148
>     hrtimer_start_range_ns+528
>     start_sw_timer+356
>     restart_apic_timer+85
>     kvm_set_msr_common+1497
>     msr_interception+142
>     vcpu_enter_guest+684
>     kvm_arch_vcpu_ioctl_run+261
>     kvm_vcpu_ioctl+559
>     do_vfs_ioctl+164
>     ksys_ioctl+96
>     __x64_sys_ioctl+22
>     do_syscall_64+89
>     entry_SYSCALL_64_after_hwframe+68
> ]: 41474
> @[
>     trace_retpoline+1
>     __trace_retpoline+30
>     __x86_indirect_thunk_rax+33
>     clockevents_program_event+148
>     hrtimer_start_range_ns+528
>     start_sw_timer+356
>     restart_apic_timer+85
>     kvm_set_msr_common+1497
>     msr_interception+142
>     vcpu_enter_guest+684
>     kvm_arch_vcpu_ioctl_run+261
>     kvm_vcpu_ioctl+559
>     do_vfs_ioctl+164
>     ksys_ioctl+96
>     __x64_sys_ioctl+22
>     do_syscall_64+89
>     entry_SYSCALL_64_after_hwframe+68
> ]: 41474
> @[
>     trace_retpoline+1
>     __trace_retpoline+30
>     __x86_indirect_thunk_rax+33
>     ktime_get+58
>     clockevents_program_event+84
>     hrtimer_start_range_ns+528
>     start_sw_timer+356
>     restart_apic_timer+85
>     kvm_set_msr_common+1497
>     msr_interception+142
>     vcpu_enter_guest+684
>     kvm_arch_vcpu_ioctl_run+261
>     kvm_vcpu_ioctl+559
>     do_vfs_ioctl+164
>     ksys_ioctl+96
>     __x64_sys_ioctl+22
>     do_syscall_64+89
>     entry_SYSCALL_64_after_hwframe+68
> ]: 41887
> @[
>     trace_retpoline+1
>     __trace_retpoline+30
>     __x86_indirect_thunk_rax+33
>     lapic_next_event+28
>     clockevents_program_event+148
>     hrtimer_try_to_cancel+168
>     hrtimer_cancel+21
>     kvm_set_lapic_tscdeadline_msr+43
>     kvm_set_msr_common+1497
>     msr_interception+142
>     vcpu_enter_guest+684
>     kvm_arch_vcpu_ioctl_run+261
>     kvm_vcpu_ioctl+559
>     do_vfs_ioctl+164
>     ksys_ioctl+96
>     __x64_sys_ioctl+22
>     do_syscall_64+89
>     entry_SYSCALL_64_after_hwframe+68
> ]: 42723
> @[
>     trace_retpoline+1
>     __trace_retpoline+30
>     __x86_indirect_thunk_rax+33
>     clockevents_program_event+148
>     hrtimer_try_to_cancel+168
>     hrtimer_cancel+21
>     kvm_set_lapic_tscdeadline_msr+43
>     kvm_set_msr_common+1497
>     msr_interception+142
>     vcpu_enter_guest+684
>     kvm_arch_vcpu_ioctl_run+261
>     kvm_vcpu_ioctl+559
>     do_vfs_ioctl+164
>     ksys_ioctl+96
>     __x64_sys_ioctl+22
>     do_syscall_64+89
>     entry_SYSCALL_64_after_hwframe+68
> ]: 42766
> @[
>     trace_retpoline+1
>     __trace_retpoline+30
>     __x86_indirect_thunk_rax+33
>     ktime_get+58
>     clockevents_program_event+84
>     hrtimer_try_to_cancel+168
>     hrtimer_cancel+21
>     kvm_set_lapic_tscdeadline_msr+43
>     kvm_set_msr_common+1497
>     msr_interception+142
>     vcpu_enter_guest+684
>     kvm_arch_vcpu_ioctl_run+261
>     kvm_vcpu_ioctl+559
>     do_vfs_ioctl+164
>     ksys_ioctl+96
>     __x64_sys_ioctl+22
>     do_syscall_64+89
>     entry_SYSCALL_64_after_hwframe+68
> ]: 42848
> @[
>     trace_retpoline+1
>     __trace_retpoline+30
>     __x86_indirect_thunk_rax+33
>     ktime_get+58
>     start_sw_timer+279
>     restart_apic_timer+85
>     kvm_set_msr_common+1497
>     msr_interception+142
>     vcpu_enter_guest+684
>     kvm_arch_vcpu_ioctl_run+261
>     kvm_vcpu_ioctl+559
>     do_vfs_ioctl+164
>     ksys_ioctl+96
>     __x64_sys_ioctl+22
>     do_syscall_64+89
>     entry_SYSCALL_64_after_hwframe+68
> ]: 499845
> 
> @total: 1780243
> 
> SVM has no TSC-based programmable preemption timer, so it invokes
> ktime_get() frequently.
> 
> Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
> ---
>  arch/x86/kvm/svm.c | 12 ++++++++++++
>  1 file changed, 12 insertions(+)
> 
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index 0021e11fd1fb..3942bca46740 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -4995,6 +4995,18 @@ int kvm_x86_handle_exit(struct kvm_vcpu *vcpu)
>  		return 0;
>  	}
>  
> +#ifdef CONFIG_RETPOLINE
> +	if (exit_code == SVM_EXIT_MSR)
> +		return msr_interception(svm);
> +	else if (exit_code == SVM_EXIT_VINTR)
> +		return interrupt_window_interception(svm);
> +	else if (exit_code == SVM_EXIT_INTR)
> +		return intr_interception(svm);
> +	else if (exit_code == SVM_EXIT_HLT)
> +		return halt_interception(svm);
> +	else if (exit_code == SVM_EXIT_NPF)
> +		return npf_interception(svm);
> +#endif
>  	return svm_exit_handlers[exit_code](svm);
>  }
>  
> 

Queued, thanks (BTW, I still disagree about HLT exits but okay).

Paolo

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 13/13] x86: retpolines: eliminate retpoline from msr event handlers
  2019-11-04 23:00 ` [PATCH 13/13] x86: retpolines: eliminate retpoline from msr event handlers Andrea Arcangeli
@ 2019-11-05 10:21   ` Paolo Bonzini
  0 siblings, 0 replies; 34+ messages in thread
From: Paolo Bonzini @ 2019-11-05 10:21 UTC (permalink / raw)
  To: Andrea Arcangeli, kvm, linux-kernel; +Cc: Vitaly Kuznetsov, Sean Christopherson

On 05/11/19 00:00, Andrea Arcangeli wrote:
> It's enough to check the value and issue the direct call.
> 
> After this commit is applied, here are the most common retpolines
> executed under a high-resolution timer workload in the guest on a VMX
> host:
> 
> [..]
> @[
>     trace_retpoline+1
>     __trace_retpoline+30
>     __x86_indirect_thunk_rax+33
>     do_syscall_64+89
>     entry_SYSCALL_64_after_hwframe+68
> ]: 267
> @[]: 2256
> @[
>     trace_retpoline+1
>     __trace_retpoline+30
>     __x86_indirect_thunk_rax+33
>     __kvm_wait_lapic_expire+284
>     vmx_vcpu_run.part.97+1091
>     vcpu_enter_guest+377
>     kvm_arch_vcpu_ioctl_run+261
>     kvm_vcpu_ioctl+559
>     do_vfs_ioctl+164
>     ksys_ioctl+96
>     __x64_sys_ioctl+22
>     do_syscall_64+89
>     entry_SYSCALL_64_after_hwframe+68
> ]: 2390
> @[]: 33410
> 
> @total: 315707
> 
> Note that the highest hit above is __delay, so it is probably not worth
> optimizing even though it is more frequent than 2k hits per sec.
> 
> Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
> ---
>  arch/x86/events/intel/core.c | 11 +++++++++++
>  1 file changed, 11 insertions(+)
> 
> diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
> index fcef678c3423..937363b803c1 100644
> --- a/arch/x86/events/intel/core.c
> +++ b/arch/x86/events/intel/core.c
> @@ -3323,8 +3323,19 @@ static int intel_pmu_hw_config(struct perf_event *event)
>  	return 0;
>  }
>  
> +#ifdef CONFIG_RETPOLINE
> +static struct perf_guest_switch_msr *core_guest_get_msrs(int *nr);
> +static struct perf_guest_switch_msr *intel_guest_get_msrs(int *nr);
> +#endif
> +
>  struct perf_guest_switch_msr *perf_guest_get_msrs(int *nr)
>  {
> +#ifdef CONFIG_RETPOLINE
> +	if (x86_pmu.guest_get_msrs == intel_guest_get_msrs)
> +		return intel_guest_get_msrs(nr);
> +	else if (x86_pmu.guest_get_msrs == core_guest_get_msrs)
> +		return core_guest_get_msrs(nr);
> +#endif
>  	if (x86_pmu.guest_get_msrs)
>  		return x86_pmu.guest_get_msrs(nr);
>  	*nr = 0;
> 

Queued, thanks.

Paolo

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 03/13] kvm: monolithic: fixup x86-32 build
  2019-11-05 10:04   ` Paolo Bonzini
@ 2019-11-05 10:37     ` Paolo Bonzini
  2019-11-05 13:54       ` Andrea Arcangeli
  0 siblings, 1 reply; 34+ messages in thread
From: Paolo Bonzini @ 2019-11-05 10:37 UTC (permalink / raw)
  To: Andrea Arcangeli, kvm, linux-kernel; +Cc: Vitaly Kuznetsov, Sean Christopherson

On 05/11/19 11:04, Paolo Bonzini wrote:
> On 04/11/19 23:59, Andrea Arcangeli wrote:
>> kvm_x86_set_hv_timer and kvm_x86_cancel_hv_timer need to be defined
>> for the 32-bit kernel build to succeed, but they can't be called.
>>
>> Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
>> ---
>>  arch/x86/kvm/vmx/vmx.c | 11 +++++++++++
>>  1 file changed, 11 insertions(+)
>>
>> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
>> index bd17ad61f7e3..1a58ae38c8f2 100644
>> --- a/arch/x86/kvm/vmx/vmx.c
>> +++ b/arch/x86/kvm/vmx/vmx.c
>> @@ -7195,6 +7195,17 @@ void kvm_x86_cancel_hv_timer(struct kvm_vcpu *vcpu)
>>  {
>>  	to_vmx(vcpu)->hv_deadline_tsc = -1;
>>  }
>> +#else
>> +int kvm_x86_set_hv_timer(struct kvm_vcpu *vcpu, u64 guest_deadline_tsc,
>> +			 bool *expired)
>> +{
>> +	BUG();
>> +}
>> +
>> +void kvm_x86_cancel_hv_timer(struct kvm_vcpu *vcpu)
>> +{
>> +	BUG();
>> +}
>>  #endif
>>  
>>  void kvm_x86_sched_in(struct kvm_vcpu *vcpu, int cpu)
>>
> 
> I'll check how long this has been broken.  It may be proof that we
> can actually drop 32-bit KVM support.

Ah no, I was confused because this series is not bisectable (in addition
to doing two things at the same time, namely the monolithic kvm.ko and
the retpoline eliminations).

I have picked up the patches that are independent of the monolithic
kvm.ko work or can be considered bugfixes.

For the rest, please do this before posting again:

- ensure that everything is bisectable

- look into how to remove the modpost warnings.  A simple (though
somewhat ugly) way is to keep a kvm.ko module that includes common
virt/kvm/ code as well as, for x86 only, page_track.o.  A few functions,
such as kvm_mmu_gfn_disallow_lpage and kvm_mmu_gfn_allow_lpage, would
have to be moved into mmu.h, but that's not a big deal.

- provide at least some examples of replacing the NULL kvm_x86_ops
checks with error codes in the function (or just early "return"s).  I
can help with the others, but remember that for the patch to be merged,
kvm_x86_ops must be removed completely.
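
For example, something along these lines (a made-up sketch using
set_hv_timer as the guinea pig, not code lifted from the series; the
"supported" check inside the function is a placeholder for whatever the
real capability test is):

	/* today: the caller NULL-checks the pointer and calls indirectly */
	if (!kvm_x86_ops->set_hv_timer)
		return false;
	if (kvm_x86_ops->set_hv_timer(vcpu, tscdeadline, &expired))
		return false;

	/* monolithic: direct call, the NULL check becomes an error code */
	if (kvm_x86_set_hv_timer(vcpu, tscdeadline, &expired))
		return false;

	/* ... and in vmx.c/svm.c the unsupported case returns early */
	int kvm_x86_set_hv_timer(struct kvm_vcpu *vcpu, u64 guest_deadline_tsc,
				 bool *expired)
	{
		if (!hv_timer_supported)	/* placeholder for the real check */
			return -EOPNOTSUPP;

		/* program the hardware timer as today */
		return 0;
	}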

Thanks,

Paolo

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 03/13] kvm: monolithic: fixup x86-32 build
  2019-11-05 10:37     ` Paolo Bonzini
@ 2019-11-05 13:54       ` Andrea Arcangeli
  2019-11-05 14:09         ` Paolo Bonzini
  0 siblings, 1 reply; 34+ messages in thread
From: Andrea Arcangeli @ 2019-11-05 13:54 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: kvm, linux-kernel, Vitaly Kuznetsov, Sean Christopherson

On Tue, Nov 05, 2019 at 11:37:47AM +0100, Paolo Bonzini wrote:
> For the rest, please do this before posting again:
> 
> - ensure that everything is bisectable

x86-64 is already bisectable.

As for all the other archs being bisectable, I didn't check them
all anyway.

Even 4/13 is suboptimal and needs to be redone later in a more optimal
way. I prefer all logic changes to happen at later steps so one can at
least bisect to something that functionally works like before. And
4/13 would also need to be merged into the huge patch if one wants to
guarantee bisectability on all CPUs, but it'll just be hidden there in
the huge patch.

Obviously I could squash both 3/13 and 4/13 into 2/13, but I don't feel
I'd be doing the right thing by squashing them just to increase
bisectability.

> - look into how to remove the modpost warnings.  A simple (though
> somewhat ugly) way is to keep a kvm.ko module that includes common
> virt/kvm/ code as well as, for x86 only, page_track.o.  A few functions,
> such as kvm_mmu_gfn_disallow_lpage and kvm_mmu_gfn_allow_lpage, would
> have to be moved into mmu.h, but that's not a big deal.

I think we should:

1) whitelist to shut off the warnings on demand

2) verify that if two modules register the same exported symbol the
   second one fails to load and the module code is robust about
   that; hopefully this is already the case

Provided verification of 2), the whitelist is more efficient than
losing 4k of ram in all KVM hypervisors out there.

> - provide at least some examples of replacing the NULL kvm_x86_ops
> checks with error codes in the function (or just early "return"s).  I
> can help with the others, but remember that for the patch to be merged,
> kvm_x86_ops must be removed completely.

Even if kvm_x86_ops weren't guaranteed to go away, this would already
provide all the performance benefit to KVM users, so I wouldn't see a
reason not to apply it even if kvm_x86_ops could not go away. That
said, it will go away and there's no concern about it. It's just that
the patchset seems large enough already and it rejects heavily at
every forward port. I simply stopped at the first self-contained step
that provides all the performance benefits.

If I go ahead and remove kvm_x86_ops, how do I know it won't reject
heavily the next day I rebase, forcing me to redo it all from scratch?
If you explain to me how you're going to guarantee that I won't have to
do that work more than once, I'd be happy to go ahead.

Thanks,
Andrea


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 03/13] kvm: monolithic: fixup x86-32 build
  2019-11-05 13:54       ` Andrea Arcangeli
@ 2019-11-05 14:09         ` Paolo Bonzini
  2019-11-05 14:56           ` Andrea Arcangeli
  0 siblings, 1 reply; 34+ messages in thread
From: Paolo Bonzini @ 2019-11-05 14:09 UTC (permalink / raw)
  To: Andrea Arcangeli; +Cc: kvm, linux-kernel, Vitaly Kuznetsov, Sean Christopherson

On 05/11/19 14:54, Andrea Arcangeli wrote:
> x86-64 is already bisectable.
> 
> As for all the other archs being bisectable, I didn't check them
> all anyway.
> 
> Even 4/13 is suboptimal and needs to be redone later in a more optimal
> way. I prefer all logic changes to happen at later steps so one can at
> least bisect to something that functionally works like before. And
> 4/13 would also need to be merged into the huge patch if one wants to
> guarantee bisectability on all CPUs, but it'll just be hidden there in
> the huge patch.
> 
> Obviously I could squash both 3/13 and 4/13 into 2/13, but I don't feel
> I'd be doing the right thing by squashing them just to increase
> bisectability.

You can reorder patches so that kvm_x86_ops assignments never happen.
That way, 4/13 for example would be moved to the very beginning.

>> - look into how to remove the modpost warnings.  A simple (though
>> somewhat ugly) way is to keep a kvm.ko module that includes common
>> virt/kvm/ code as well as, for x86 only, page_track.o.  A few functions,
>> such as kvm_mmu_gfn_disallow_lpage and kvm_mmu_gfn_allow_lpage, would
>> have to be moved into mmu.h, but that's not a big deal.
> 
> I think we should:
> 
> 1) whitelist to shut off the warnings on demand

Do you mean adding a whitelist to modpost?  That would work, though I am
not sure if the module maintainer (Jessica Yu) would accept that.

> 2) verify that if two modules register the same exported symbol the
>    second one fails to load and the module code is robust about
>    that; hopefully this is already the case
> 
> Provided verification of 2), the whitelist is more efficient than
> losing 4k of ram in all KVM hypervisors out there.

I agree.

>> - provide at least some examples of replacing the NULL kvm_x86_ops
>> checks with error codes in the function (or just early "return"s).  I
>> can help with the others, but remember that for the patch to be merged,
>> kvm_x86_ops must be removed completely.
> 
> Even if kvm_x86_ops weren't guaranteed to go away, this would already
> provide all the performance benefit to KVM users, so I wouldn't see a
> reason not to apply it even if kvm_x86_ops could not go away.

The answer is maintainability.  My suggestion is that we start looking
into removing all assignments and tests of kvm_x86_ops, one step at a
time.  Until this is done, unfortunately we won't be able to reap the
performance benefit.  But the advantage is that this can be done in many
separate submissions; it doesn't have to be one huge patch.

Once this is done, removing kvm_x86_ops is trivial in the end.  It's
okay if the intermediate step has minimal performance regressions, we
know what it will look like.  I have to order patches with maintenance
first and performance second, if possible.

By the way, we are already planning to make some module parameters
per-VM instead of global, so this refactoring would also help that effort.

> That said, it will go away and there's no concern about it. It's just
> that the patchset seems large enough already and it rejects heavily at
> every forward port. I simply stopped at the first self-contained step
> that provides all the performance benefits.

That is good enough to prove the feasibility of the idea, so I agree
that was a good plan.

Paolo

> If I go ahead and remove kvm_x86_ops, how do I know it won't reject
> heavily the next day I rebase, forcing me to redo it all from scratch?
> If you explain to me how you're going to guarantee that I won't have to
> do that work more than once, I'd be happy to go ahead.

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 03/13] kvm: monolithic: fixup x86-32 build
  2019-11-05 14:09         ` Paolo Bonzini
@ 2019-11-05 14:56           ` Andrea Arcangeli
  2019-11-05 15:10             ` Paolo Bonzini
  0 siblings, 1 reply; 34+ messages in thread
From: Andrea Arcangeli @ 2019-11-05 14:56 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: kvm, linux-kernel, Vitaly Kuznetsov, Sean Christopherson,
	Masahiro Yamada, Jessica Yu, Matthias Maennich

On Tue, Nov 05, 2019 at 03:09:37PM +0100, Paolo Bonzini wrote:
> You can reorder patches so that kvm_x86_ops assignments never happen.
> That way, 4/13 for example would be moved to the very beginning.

Ok, 3/13 and 4/13 can both go before 2/13; I reordered them fine, the
end result at least is the same and the intermediate result should
improve. Hopefully this is the best solution for those two outliers.

Once this is committed I expect Sean to take over 4/13 with a more
optimal version to get rid of that branch like he proposed initially.

> >> - look into how to remove the modpost warnings.  A simple (though
> >> somewhat ugly) way is to keep a kvm.ko module that includes common
> >> virt/kvm/ code as well as, for x86 only, page_track.o.  A few functions,
> >> such as kvm_mmu_gfn_disallow_lpage and kvm_mmu_gfn_allow_lpage, would
> >> have to be moved into mmu.h, but that's not a big deal.
> > 
> > I think we should:
> > 
> > 1) whitelist to shut off the warnings on demand
> 
> Do you mean adding a whitelist to modpost?  That would work, though I am
> not sure if the module maintainer (Jessica Yu) would accept that.

Yes that's exactly what I meant.

> > 2) verify that if two modules register the same exported symbol the
> >    second one fails to load and the module code is robust about
> >    that; hopefully this is already the case
> > 
> > Provided verification of 2), the whitelist is more efficient than
> > losing 4k of ram in all KVM hypervisors out there.
> 
> I agree.

Ok, so I enlarged the CC list accordingly to check how the whitelist
can be done in modpost.

> >> - provide at least some examples of replacing the NULL kvm_x86_ops
> >> checks with error codes in the function (or just early "return"s).  I
> >> can help with the others, but remember that for the patch to be merged,
> >> kvm_x86_ops must be removed completely.
> > 
> > Even if kvm_x86_ops weren't guaranteed to go away, this would already
> > provide all the performance benefit to KVM users, so I wouldn't see a
> > reason not to apply it even if kvm_x86_ops could not go away.
> 
> The answer is maintainability.  My suggestion is that we start looking
> into removing all assignments and tests of kvm_x86_ops, one step at a
> time.  Until this is done, unfortunately we won't be able to reap the
> performance benefit.  But the advantage is that this can be done in many

There's not much performance benefit left from the removal of
kvm_x86_ops. It'll only remove a few branches at best (and only if we
don't have to replace the branches on the pointer check with other
branches on a static variable to disambiguate the different cases).
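
Something like this, with made-up names just to show the shape of the
branch we would be trading:

	/* today: branch on the pointer */
	if (kvm_x86_ops->some_optional_op)
		kvm_x86_ops->some_optional_op(vcpu);

	/* after dropping the struct: still a branch, just on a flag */
	if (some_optional_op_enabled)
		kvm_x86_some_optional_op(vcpu);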

> separate submissions; it doesn't have to be one huge patch.
> 
> Once this is done, removing kvm_x86_ops is trivial in the end.  It's
> okay if the intermediate step has minimal performance regressions, we
> know what it will look like.  I have to order patches with maintenance
> first and performance second, if possible.

The removal of kvm_x86_ops is just a badly needed code cleanup and of
course I agree it must happen sooner than later. I'm just trying to
avoid running into rejects on those further commit cleanups too.

> By the way, we are already planning to make some module parameters
> per-VM instead of global, so this refactoring would also help that effort.
>
> > That said, it will go away and there's no concern about it. It's just
> > that the patchset seems large enough already and it rejects heavily at
> > every forward port. I simply stopped at the first self-contained step
> > that provides all the performance benefits.
> 
> That is good enough to prove the feasibility of the idea, so I agree
> that was a good plan.

All right, so I'm not exactly sure what the plan is and whether it's ok
to do it over time or if I should go ahead with all the logic changes
while the big patch remains out of tree.

If you apply it and reorder 4/13 and 3/13 before 2/13 in a rebase like
I did locally, it should already be a good starting point in my view,
and the modpost issue can also be fixed over time; the warnings appear
harmless so far.

Thanks,
Andrea


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 03/13] kvm: monolithic: fixup x86-32 build
  2019-11-05 14:56           ` Andrea Arcangeli
@ 2019-11-05 15:10             ` Paolo Bonzini
  2019-11-08 13:56               ` Jessica Yu
  0 siblings, 1 reply; 34+ messages in thread
From: Paolo Bonzini @ 2019-11-05 15:10 UTC (permalink / raw)
  To: Andrea Arcangeli
  Cc: kvm, linux-kernel, Vitaly Kuznetsov, Sean Christopherson,
	Masahiro Yamada, Jessica Yu, Matthias Maennich

On 05/11/19 15:56, Andrea Arcangeli wrote:
>>> I think we should:
>>>
>>> 1) whitelist to shut off the warnings on demand
>>
>> Do you mean adding a whitelist to modpost?  That would work, though I am
>> not sure if the module maintainer (Jessica Yu) would accept that.
> 
> Yes that's exactly what I meant.

Ok, thanks.  Jessica, the issue here is that we have two (mutually
exclusive) modules providing the same interface to a third module.

Andrea will check that, when the same symbol is exported by two modules,
the second-loaded module correctly fails insmod.  If that is okay, we
will also need modpost not to warn for these symbols in sym_add_exported.

>> The answer is maintainability.  My suggestion is that we start looking
>> into removing all assignments and tests of kvm_x86_ops, one step at a
>> time.  Until this is done, unfortunately we won't be able to reap the
>> performance benefit.  But the advantage is that this can be done in many
> 
> There's not much performance benefit left from the removal of
> kvm_x86_ops.

Indeed; what I mean is that until then we will have to keep the
retpolines.  Not removing kvm_x86_ops leaves an unsustainable mess in
terms of maintainability; therefore we will need to first refactor the
code.  Once the refactoring is over, kvm_x86_ops can be dropped easily,
just like kvm_pmu_ops in this version of the series.

The good thing is that the modpost discussion can proceed in parallel.

> The removal of kvm_x86_ops is just a badly needed code cleanup and of
> course I agree it must happen sooner than later. I'm just trying to
> avoid running into rejects on those further commit cleanups too.

>> That is good enough to prove the feasibility of the idea, so I agree
>> that was a good plan.
> 
> All right, so I'm not exactly sure what the plan is and whether it's ok
> to do it over time or if I should go ahead with all the logic changes
> while the big patch remains out of tree.

Yes, the changes to remove tests and assignments to kvm_x86_ops must
happen first.  I understand that the big patch is a conflict magnet, but
once all the refactoring is done it will be very easy to review and it
will get in quickly.

Paolo

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 03/13] kvm: monolithic: fixup x86-32 build
  2019-11-05 15:10             ` Paolo Bonzini
@ 2019-11-08 13:56               ` Jessica Yu
  2019-11-08 19:51                 ` Paolo Bonzini
  0 siblings, 1 reply; 34+ messages in thread
From: Jessica Yu @ 2019-11-08 13:56 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Andrea Arcangeli, kvm, linux-kernel, Vitaly Kuznetsov,
	Sean Christopherson, Masahiro Yamada, Matthias Maennich

+++ Paolo Bonzini [05/11/19 16:10 +0100]:
>On 05/11/19 15:56, Andrea Arcangeli wrote:
>>>> I think we should:
>>>>
>>>> 1) whitelist to shut off the warnings on demand
>>>
>>> Do you mean adding a whitelist to modpost?  That would work, though I am
>>> not sure if the module maintainer (Jessica Yu) would accept that.
>>
>> Yes that's exactly what I meant.
>
>Ok, thanks.  Jessica, the issue here is that we have two (mutually
>exclusive) modules providing the same interface to a third module.

>Andrea will check that, when the same symbol is exported by two modules,
>the second-loaded module correctly fails insmod.

Hi Paolo, thanks for getting me up to speed.

The module loader already rejects loading a module with
duplicate exported symbols.

> If that is okay, we will also need modpost not to warn for these
> symbols in sym_add_exported.

I think it's certainly doable in modpost, for example we could pass a
list of whitelisted symbols and have modpost read them in and not warn
if it encounters the whitelisted symbols more than once.  Modpost will
also have to be modified to accommodate duplicate symbols.  I'm not
sure how ugly this would be without seeing the actual patch.  And I am
not sure what Masahiro (who takes care of all things kbuild-related)
thinks of this idea. But before implementing all this, is there
absolutely no way around having the duplicated exported symbols? (e.g.,
could the modules be configured/built in a mutually exclusive way? I'm
lacking the context from the rest of the thread, so not sure which are
the problematic modules.)
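
For what it's worth, the check itself would be small: roughly something
like this, called from sym_add_exported() before emitting the warning
(just a sketch, not a real patch; how the whitelist gets into modpost,
whether a command line option or a generated header, is the part I'd
have to think about, and the symbol names below are only examples):

	static const char *const export_dup_whitelist[] = {
		"kvm_x86_set_hv_timer",
		"kvm_x86_cancel_hv_timer",
	};

	static bool duplicate_export_is_whitelisted(const char *name)
	{
		unsigned int i;

		for (i = 0; i < ARRAY_SIZE(export_dup_whitelist); i++)
			if (strcmp(export_dup_whitelist[i], name) == 0)
				return true;
		return false;
	}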

Thanks,

Jessica

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 03/13] kvm: monolithic: fixup x86-32 build
  2019-11-08 13:56               ` Jessica Yu
@ 2019-11-08 19:51                 ` Paolo Bonzini
  2019-11-08 20:01                   ` Andrea Arcangeli
  2019-11-09  3:30                   ` Masahiro Yamada
  0 siblings, 2 replies; 34+ messages in thread
From: Paolo Bonzini @ 2019-11-08 19:51 UTC (permalink / raw)
  To: Jessica Yu
  Cc: Andrea Arcangeli, kvm, linux-kernel, Vitaly Kuznetsov,
	Sean Christopherson, Masahiro Yamada, Matthias Maennich

On 08/11/19 14:56, Jessica Yu wrote:
> And I am
> not sure what Masahiro (who takes care of all things kbuild-related)
> thinks of this idea. But before implementing all this, is there
> absolutely no way around having the duplicated exported symbols? (e.g.,
> could the modules be configured/built in a mutually exclusive way? I'm
> lacking the context from the rest of the thread, so not sure which are
> the problematic modules.)

The problematic modules are kvm_intel and kvm_amd, so we cannot build
them in a mutually exclusive way (but we know it won't make sense to
load both).  We will have to build only one of them when built into
vmlinux, but the module case must support building both.

Currently we put the common symbols in kvm.ko, and kvm.ko acts as a kind
of "library" for kvm_intel.ko and kvm_amd.ko.  The problem is that
kvm_intel.ko and kvm_amd.ko currently pass a large array of function
pointers to kvm.ko, and Andrea measured a substantial performance
penalty from retpolines when kvm.ko calls back through those pointers.

Therefore he would like to remove kvm.ko, and that would result in
symbols exported from two modules.

I suppose we could use a code patching mechanism to avoid the retpolines.
 Andrea, what do you think about that?  That would have the advantage
that we won't have to remove kvm_x86_ops. :)

Thanks,

Paolo

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 03/13] kvm: monolithic: fixup x86-32 build
  2019-11-08 19:51                 ` Paolo Bonzini
@ 2019-11-08 20:01                   ` Andrea Arcangeli
  2019-11-08 21:02                     ` Paolo Bonzini
  2019-11-09  3:30                   ` Masahiro Yamada
  1 sibling, 1 reply; 34+ messages in thread
From: Andrea Arcangeli @ 2019-11-08 20:01 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Jessica Yu, kvm, linux-kernel, Vitaly Kuznetsov,
	Sean Christopherson, Masahiro Yamada, Matthias Maennich

On Fri, Nov 08, 2019 at 08:51:04PM +0100, Paolo Bonzini wrote:
> I suppose we could use a code patching mechanism to avoid the retpolines.
>  Andrea, what do you think about that?  That would have the advantage
> that we won't have to remove kvm_x86_ops. :)

page 17 covers pvops:

https://people.redhat.com/~aarcange/slides/2019-KVM-monolithic.pdf


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 03/13] kvm: monolithic: fixup x86-32 build
  2019-11-08 20:01                   ` Andrea Arcangeli
@ 2019-11-08 21:02                     ` Paolo Bonzini
  2019-11-08 21:26                       ` Andrea Arcangeli
  0 siblings, 1 reply; 34+ messages in thread
From: Paolo Bonzini @ 2019-11-08 21:02 UTC (permalink / raw)
  To: Andrea Arcangeli
  Cc: Jessica Yu, kvm, linux-kernel, Vitaly Kuznetsov,
	Sean Christopherson, Masahiro Yamada, Matthias Maennich

On 08/11/19 21:01, Andrea Arcangeli wrote:
> On Fri, Nov 08, 2019 at 08:51:04PM +0100, Paolo Bonzini wrote:
>> I suppose we could use a code patching mechanism to avoid the retpolines.
>>  Andrea, what do you think about that?  That would have the advantage
>> that we won't have to remove kvm_x86_ops. :)
> 
> page 17 covers pvops:
> 
> https://people.redhat.com/~aarcange/slides/2019-KVM-monolithic.pdf

You can patch the call instructions directly using text_poke when
kvm_intel.ko or kvm_amd.ko is loaded; I'm not sure why that would be
worse for TLB or RAM usage.  The hard part is recording the location of
the call sites using some pushsection/popsection magic.

Paolo

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 03/13] kvm: monolithic: fixup x86-32 build
  2019-11-08 21:02                     ` Paolo Bonzini
@ 2019-11-08 21:26                       ` Andrea Arcangeli
  2019-11-08 23:10                         ` Paolo Bonzini
  0 siblings, 1 reply; 34+ messages in thread
From: Andrea Arcangeli @ 2019-11-08 21:26 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Jessica Yu, kvm, linux-kernel, Vitaly Kuznetsov,
	Sean Christopherson, Masahiro Yamada, Matthias Maennich

On Fri, Nov 08, 2019 at 10:02:52PM +0100, Paolo Bonzini wrote:
> kvm_intel.ko or kvm_amd.ko is loaded; I'm not sure why that would be
> worse for TLB or RAM usage.  The hard part is recording the location of
> the call sites

Let's ignore the different code complexity of supporting self-modifying
code: kvm.ko and kvm-*.ko will be located in different pages, hence
it'll waste one iTLB entry for every vmexit and 2k of RAM on average.
The L1 icache will also be wasted. It'll simply run slower.

Now about the code complexity, it is even higher than pvops:

   KVM				pvops
   =========                    =============
1) Changes daily		Never change

2) Patched at runtime		Patched only at boot time early on
   during module load
   and multiple times
   at every load of kvm-*.ko

3) The patching points to	All patch destinations are linked into
   code in kernel modules       the kernel

Why exactly should we go through such a complication when it runs
slower in the end and it's much more complex to implement and maintain
and in fact even more complex than pvops already is?

Runtime patching of the indirect calls, as pvops does, is strictly
required when you are forced to resolve the linking at runtime. The
alternative would be to ship two different Linux kernels for PV and
bare metal. Maintaining a whole new kernel rpm and having to install a
different rpm depending on hypervisor vs. bare metal is troublesome, so
pvops is worth it.

With kvm-amd and kvm-intel we can avoid the whole runtime patching of
the call sites, as already proven by the KVM monolithic patchset, and
it'll run faster on the CPU and save RAM, so I'm not exactly sure how
anybody could prefer runtime patching here when the only benefit is a
few megabytes of disk space saved.

Furthermore by linking the thing statically we'll also enable LTO and
other gcc features which would never be possible with those indirect
calls.

Thanks,
Andrea


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 03/13] kvm: monolithic: fixup x86-32 build
  2019-11-08 21:26                       ` Andrea Arcangeli
@ 2019-11-08 23:10                         ` Paolo Bonzini
  0 siblings, 0 replies; 34+ messages in thread
From: Paolo Bonzini @ 2019-11-08 23:10 UTC (permalink / raw)
  To: Andrea Arcangeli
  Cc: Jessica Yu, kvm, linux-kernel, Vitaly Kuznetsov,
	Sean Christopherson, Masahiro Yamada, Matthias Maennich

On 08/11/19 22:26, Andrea Arcangeli wrote:
> On Fri, Nov 08, 2019 at 10:02:52PM +0100, Paolo Bonzini wrote:
> > kvm_intel.ko or kvm_amd.ko is loaded; I'm not sure why that would be
> > worse for TLB or RAM usage.  The hard part is recording the location of
> > the call sites
> 
> Let's ignore the different code complexity of supporting self-modifying
> code: kvm.ko and kvm-*.ko will be located in different pages, hence
> it'll waste one iTLB entry for every vmexit and 2k of RAM on average.

This is unlikely to make a difference, since kvm.o and kvm-intel.o are
overall about 700 KiB in size.  You do lose some inlining opportunities
with LTO, but without LTO the L1 cache benefits are debatable too.  The
real loss is in the complexity; I agree with you about that.

> Now about the code complexity, it is even higher than pvops:
> 
>    KVM				pvops
>    =========                    =============
> 1) Changes daily		Never change
> 
> 2) Patched at runtime		Patched only at boot time early on
>    during module load
>    and multiple times
>    at every load of kvm-*.ko
> 
> 3) The patching points to	All patch destinations are linked into
>    code in kernel modules       the kernel
> 
> Why exactly should we go through such a complication when it runs
> slower in the end and it's much more complex to implement and maintain
> and in fact even more complex than pvops already is?

For completeness, one advantage of patching would be to keep support for
built-in Intel+AMD.  The modpost patch should be pretty small, and since
Jessica seemed quite open to it, let's do that.

Thanks,

Paolo

> Furthermore by linking the thing statically we'll also enable LTO and
> other gcc features which would never be possible with those indirect
> calls.
> 
> Thanks,
> Andrea
> 


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 03/13] kvm: monolithic: fixup x86-32 build
  2019-11-08 19:51                 ` Paolo Bonzini
  2019-11-08 20:01                   ` Andrea Arcangeli
@ 2019-11-09  3:30                   ` Masahiro Yamada
  1 sibling, 0 replies; 34+ messages in thread
From: Masahiro Yamada @ 2019-11-09  3:30 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Jessica Yu, Andrea Arcangeli, kvm, Linux Kernel Mailing List,
	Vitaly Kuznetsov, Sean Christopherson, Matthias Maennich,
	Lucas De Marchi

(+CC: Lucas De Marchi, the kmod maintainer)

On Sat, Nov 9, 2019 at 4:51 AM Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> On 08/11/19 14:56, Jessica Yu wrote:
> > And I am
> > not sure what Masahiro (who takes care of all things kbuild-related)
> > thinks of this idea. But before implementing all this, is there
> > absolutely no way around having the duplicated exported symbols? (e.g.,
> > could the modules be configured/built in a mutually exclusive way? I'm
> > lacking the context from the rest of the thread, so not sure which are
> > the problematic modules.)

I do not think having a whitelist in modpost
is maintainable.

>
> The problematic modules are kvm_intel and kvm_amd, so we cannot build
> them in a mutually exclusive way (but we know it won't make sense to
> load both).  We will have to build only one of them when built into
> vmlinux, but the module case must support building both.
>
> Currently we put the common symbols in kvm.ko, and kvm.ko acts as a kind
> of "library" for kvm_intel.ko and kvm_amd.ko.  The problem is that
> kvm_intel.ko and kvm_amd.ko currently pass a large array of function
> pointers to kvm.ko, and Andrea measured a substantial performance
> penalty from retpolines when kvm.ko calls back through those pointers.
>
> Therefore he would like to remove kvm.ko, and that would result in
> symbols exported from two modules.

If there is general demand for this, I think we can relax
the 'exported twice' warning; we can show the warning
only when the previous export is from vmlinux.


However, I am not sure how the module dependency
should be handled when the same symbol is exported
from multiple modules.


Let's say the same symbol is exported from foo.ko and bar.ko,
and foo.ko appears first in modules.order.
In this case, I think foo.ko should be considered to have a higher
priority than bar.ko.
Modpost records MODULE_INFO(depends, foo.ko) correctly,
but modules.{dep, symbols} do not seem to reflect that.
Maybe depmod does not take multiple exports into consideration?

If we change this, I want it to work consistently.


Masahiro Yamada




> I suppose we could use a code patching mechanism to avoid the retpolines.
>  Andrea, what do you think about that?  That would have the advantage
> that we won't have to remove kvm_x86_ops. :)
>
> Thanks,
>
> Paolo



-- 
Best Regards
Masahiro Yamada

^ permalink raw reply	[flat|nested] 34+ messages in thread

end of thread, other threads:[~2019-11-09  3:31 UTC | newest]

Thread overview: 34+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-11-04 22:59 [PATCH 00/13] KVM monolithic v3 Andrea Arcangeli
2019-11-04 22:59 ` [PATCH 01/13] KVM: monolithic: x86: remove kvm.ko Andrea Arcangeli
2019-11-04 22:59 ` [PATCH 02/13] KVM: monolithic: x86: convert the kvm_x86_ops and kvm_pmu_ops methods to external functions Andrea Arcangeli
2019-11-04 22:59 ` [PATCH 03/13] kvm: monolithic: fixup x86-32 build Andrea Arcangeli
2019-11-05 10:04   ` Paolo Bonzini
2019-11-05 10:37     ` Paolo Bonzini
2019-11-05 13:54       ` Andrea Arcangeli
2019-11-05 14:09         ` Paolo Bonzini
2019-11-05 14:56           ` Andrea Arcangeli
2019-11-05 15:10             ` Paolo Bonzini
2019-11-08 13:56               ` Jessica Yu
2019-11-08 19:51                 ` Paolo Bonzini
2019-11-08 20:01                   ` Andrea Arcangeli
2019-11-08 21:02                     ` Paolo Bonzini
2019-11-08 21:26                       ` Andrea Arcangeli
2019-11-08 23:10                         ` Paolo Bonzini
2019-11-09  3:30                   ` Masahiro Yamada
2019-11-04 22:59 ` [PATCH 04/13] KVM: monolithic: x86: handle the request_immediate_exit variation Andrea Arcangeli
2019-11-04 22:59 ` [PATCH 05/13] KVM: monolithic: add more section prefixes Andrea Arcangeli
2019-11-05 10:16   ` Paolo Bonzini
2019-11-04 22:59 ` [PATCH 06/13] KVM: monolithic: x86: remove __exit section prefix from machine_unsetup Andrea Arcangeli
2019-11-05 10:16   ` Paolo Bonzini
2019-11-04 22:59 ` [PATCH 07/13] KVM: monolithic: x86: remove __init section prefix from kvm_x86_cpu_has_kvm_support Andrea Arcangeli
2019-11-05 10:16   ` Paolo Bonzini
2019-11-04 22:59 ` [PATCH 08/13] KVM: monolithic: remove exports Andrea Arcangeli
2019-11-04 22:59 ` [PATCH 09/13] KVM: monolithic: x86: drop the kvm_pmu_ops structure Andrea Arcangeli
2019-11-04 22:59 ` [PATCH 10/13] KVM: x86: optimize more exit handlers in vmx.c Andrea Arcangeli
2019-11-05 10:20   ` Paolo Bonzini
2019-11-04 22:59 ` [PATCH 11/13] KVM: retpolines: x86: eliminate retpoline from vmx.c exit handlers Andrea Arcangeli
2019-11-05 10:20   ` Paolo Bonzini
2019-11-04 23:00 ` [PATCH 12/13] KVM: retpolines: x86: eliminate retpoline from svm.c " Andrea Arcangeli
2019-11-05 10:21   ` Paolo Bonzini
2019-11-04 23:00 ` [PATCH 13/13] x86: retpolines: eliminate retpoline from msr event handlers Andrea Arcangeli
2019-11-05 10:21   ` Paolo Bonzini

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).