From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini, Chao Gao, Sean Christopherson
Subject: [PATCH v7 001/102] KVM: x86: Move check_processor_compatibility from init ops to runtime ops
Date: Mon, 27 Jun 2022 14:52:53 -0700
Message-Id: <3658627ced7311705a4cbf466db4f463c839a552.1656366337.git.isaku.yamahata@intel.com>

From: Chao Gao

Move check_processor_compatibility() from kvm_x86_init_ops to
kvm_x86_ops so that KVM can do compatibility checks on hotplugged CPUs.
Drop __init from check_processor_compatibility() and its callees.

Use a static_call() to invoke .check_processor_compatibility.

Opportunistically rename {svm,vmx}_check_processor_compat to conform to
the naming convention for kvm_x86_ops fields.

Signed-off-by: Chao Gao
Reviewed-by: Sean Christopherson
Link: https://lore.kernel.org/r/20220216031528.92558-2-chao.gao@intel.com
Signed-off-by: Isaku Yamahata
---
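Not part of the changelog above, only an illustration for reviewers: the
stand-alone sketch below mimics the pattern this patch applies, i.e. moving
the compatibility check out of a setup-only ops table into the runtime ops
table so a CPU-online path can run it on a hotplugged CPU. All sketch_*
names are made up, and a plain function pointer stands in for the kernel's
static_call() plumbing (which KVM generates from kvm-x86-ops.h); it is not
kernel code.

/*
 * Illustrative userspace sketch: the compatibility check lives in the
 * runtime ops table, so a CPU-online path can run it on a hotplugged
 * CPU before enabling hardware virtualization on that CPU.
 */
#include <stdio.h>

struct sketch_runtime_ops {
	const char *name;
	int (*check_processor_compatibility)(void);	/* now a runtime op */
	int (*hardware_enable)(void);
};

static int sketch_check_compat(void)
{
	return 0;	/* pretend this CPU has every required feature */
}

static int sketch_hardware_enable(void)
{
	return 0;
}

static struct sketch_runtime_ops sketch_ops = {
	.name = "sketch",
	.check_processor_compatibility = sketch_check_compat,
	.hardware_enable = sketch_hardware_enable,
};

/* A CPU-online/hotplug path can now check first, then enable. */
static int sketch_cpu_online(void)
{
	int r = sketch_ops.check_processor_compatibility();

	return r ? r : sketch_ops.hardware_enable();
}

int main(void)
{
	printf("cpu online: %d\n", sketch_cpu_online());
	return 0;
}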
 arch/x86/include/asm/kvm-x86-ops.h |  1 +
 arch/x86/include/asm/kvm_host.h    |  2 +-
 arch/x86/kvm/svm/svm.c             |  4 ++--
 arch/x86/kvm/vmx/evmcs.c           |  2 +-
 arch/x86/kvm/vmx/evmcs.h           |  2 +-
 arch/x86/kvm/vmx/vmx.c             | 14 +++++++-------
 arch/x86/kvm/x86.c                 |  3 +--
 7 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 6f2f1affbb78..75bc44aa8d51 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -129,6 +129,7 @@ KVM_X86_OP(msr_filter_changed)
 KVM_X86_OP(complete_emulated_msr)
 KVM_X86_OP(vcpu_deliver_sipi_vector)
 KVM_X86_OP_OPTIONAL_RET0(vcpu_get_apicv_inhibit_reasons);
+KVM_X86_OP(check_processor_compatibility)
 
 #undef KVM_X86_OP
 #undef KVM_X86_OP_OPTIONAL
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 7e98b2876380..62dec97f6607 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1427,6 +1427,7 @@ static inline u16 kvm_lapic_irq_dest_mode(bool dest_mode_logical)
 struct kvm_x86_ops {
 	const char *name;
 
+	int (*check_processor_compatibility)(void);
 	int (*hardware_enable)(void);
 	void (*hardware_disable)(void);
 	void (*hardware_unsetup)(void);
@@ -1637,7 +1638,6 @@ struct kvm_x86_nested_ops {
 struct kvm_x86_init_ops {
 	int (*cpu_has_kvm_support)(void);
 	int (*disabled_by_bios)(void);
-	int (*check_processor_compatibility)(void);
 	int (*hardware_setup)(void);
 	unsigned int (*handle_intel_pt_intr)(void);
 
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index c6cca0ce127b..247c0ad458a0 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4083,7 +4083,7 @@ svm_patch_hypercall(struct kvm_vcpu *vcpu, unsigned char *hypercall)
 	hypercall[2] = 0xd9;
 }
 
-static int __init svm_check_processor_compat(void)
+static int svm_check_processor_compatibility(void)
 {
 	return 0;
 }
@@ -4703,6 +4703,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 	.name = "kvm_amd",
 
 	.hardware_unsetup = svm_hardware_unsetup,
+	.check_processor_compatibility = svm_check_processor_compatibility,
 	.hardware_enable = svm_hardware_enable,
 	.hardware_disable = svm_hardware_disable,
 	.has_emulated_msr = svm_has_emulated_msr,
@@ -5090,7 +5091,6 @@ static struct kvm_x86_init_ops svm_init_ops __initdata = {
 	.cpu_has_kvm_support = has_svm,
 	.disabled_by_bios = is_disabled,
 	.hardware_setup = svm_hardware_setup,
-	.check_processor_compatibility = svm_check_processor_compat,
 
 	.runtime_ops = &svm_x86_ops,
 	.pmu_ops = &amd_pmu_ops,
diff --git a/arch/x86/kvm/vmx/evmcs.c b/arch/x86/kvm/vmx/evmcs.c
index 6a61b1ae7942..3f84680c8139 100644
--- a/arch/x86/kvm/vmx/evmcs.c
+++ b/arch/x86/kvm/vmx/evmcs.c
@@ -295,7 +295,7 @@ const struct evmcs_field vmcs_field_to_evmcs_1[] = {
 const unsigned int nr_evmcs_1_fields = ARRAY_SIZE(vmcs_field_to_evmcs_1);
 
 #if IS_ENABLED(CONFIG_HYPERV)
-__init void evmcs_sanitize_exec_ctrls(struct vmcs_config *vmcs_conf)
+void evmcs_sanitize_exec_ctrls(struct vmcs_config *vmcs_conf)
 {
 	vmcs_conf->cpu_based_exec_ctrl &= ~EVMCS1_UNSUPPORTED_EXEC_CTRL;
 	vmcs_conf->pin_based_exec_ctrl &= ~EVMCS1_UNSUPPORTED_PINCTRL;
diff --git a/arch/x86/kvm/vmx/evmcs.h b/arch/x86/kvm/vmx/evmcs.h
index f886a8ff0342..276f788cef15 100644
--- a/arch/x86/kvm/vmx/evmcs.h
+++ b/arch/x86/kvm/vmx/evmcs.h
@@ -212,7 +212,7 @@ static inline void evmcs_load(u64 phys_addr)
 	vp_ap->enlighten_vmentry = 1;
 }
 
-__init void evmcs_sanitize_exec_ctrls(struct vmcs_config *vmcs_conf);
+void evmcs_sanitize_exec_ctrls(struct vmcs_config *vmcs_conf);
 #else /* !IS_ENABLED(CONFIG_HYPERV) */
 static __always_inline void evmcs_write64(unsigned long field, u64 value) {}
 static inline void evmcs_write32(unsigned long field, u32 value) {}
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 5e14e4c40007..31e7630203fd 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2410,8 +2410,8 @@ static bool cpu_has_sgx(void)
 	return cpuid_eax(0) >= 0x12 && (cpuid_eax(0x12) & BIT(0));
 }
 
-static __init int adjust_vmx_controls(u32 ctl_min, u32 ctl_opt,
-				      u32 msr, u32 *result)
+static int adjust_vmx_controls(u32 ctl_min, u32 ctl_opt,
+			       u32 msr, u32 *result)
 {
 	u32 vmx_msr_low, vmx_msr_high;
 	u32 ctl = ctl_min | ctl_opt;
@@ -2429,7 +2429,7 @@ static __init int adjust_vmx_controls(u32 ctl_min, u32 ctl_opt,
 	return 0;
 }
 
-static __init u64 adjust_vmx_controls64(u64 ctl_opt, u32 msr)
+static u64 adjust_vmx_controls64(u64 ctl_opt, u32 msr)
 {
 	u64 allowed;
 
@@ -2438,8 +2438,8 @@ static __init u64 adjust_vmx_controls64(u64 ctl_opt, u32 msr)
 	return ctl_opt & allowed;
 }
 
-static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf,
-				    struct vmx_capability *vmx_cap)
+static int setup_vmcs_config(struct vmcs_config *vmcs_conf,
+			     struct vmx_capability *vmx_cap)
 {
 	u32 vmx_msr_low, vmx_msr_high;
 	u32 min, opt, min2, opt2;
@@ -7318,7 +7318,7 @@ static int vmx_vm_init(struct kvm *kvm)
 	return 0;
 }
 
-static int __init vmx_check_processor_compat(void)
+static int vmx_check_processor_compatibility(void)
 {
 	struct vmcs_config vmcs_conf;
 	struct vmx_capability vmx_cap;
@@ -7928,6 +7928,7 @@ static struct kvm_x86_ops vmx_x86_ops __initdata = {
 
 	.hardware_unsetup = vmx_hardware_unsetup,
 
+	.check_processor_compatibility = vmx_check_processor_compatibility,
 	.hardware_enable = vmx_hardware_enable,
 	.hardware_disable = vmx_hardware_disable,
 	.has_emulated_msr = vmx_has_emulated_msr,
@@ -8316,7 +8317,6 @@ static __init int hardware_setup(void)
 static struct kvm_x86_init_ops vmx_init_ops __initdata = {
 	.cpu_has_kvm_support = cpu_has_kvm_support,
 	.disabled_by_bios = vmx_disabled_by_bios,
-	.check_processor_compatibility = vmx_check_processor_compat,
 	.hardware_setup = hardware_setup,
 	.handle_intel_pt_intr = NULL,
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 2318a99139fa..3d9dbaf9828f 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -11802,7 +11802,6 @@ void kvm_arch_hardware_unsetup(void)
 int kvm_arch_check_processor_compat(void *opaque)
 {
 	struct cpuinfo_x86 *c = &cpu_data(smp_processor_id());
-	struct kvm_x86_init_ops *ops = opaque;
 
 	WARN_ON(!irqs_disabled());
 
@@ -11810,7 +11809,7 @@ int kvm_arch_check_processor_compat(void *opaque)
 	    __cr4_reserved_bits(cpu_has, &boot_cpu_data))
 		return -EIO;
 
-	return ops->check_processor_compatibility();
+	return static_call(kvm_x86_check_processor_compatibility)();
 }
 
 bool kvm_vcpu_is_reset_bsp(struct kvm_vcpu *vcpu)
-- 
2.25.1