From: Eduardo Habkost
To: qemu-devel@nongnu.org, Peter Maydell
Subject: [PULL 15/24] i386: introduce hv_cpuid_cache
Date: Tue, 1 Jun 2021 14:10:05 -0400
Message-Id: <20210601181014.2568861-16-ehabkost@redhat.com>
In-Reply-To: <20210601181014.2568861-1-ehabkost@redhat.com>
References: <20210601181014.2568861-1-ehabkost@redhat.com>
Cc: Paolo Bonzini, Vitaly Kuznetsov, Richard Henderson, Eduardo Habkost

From: Vitaly Kuznetsov

Just like with cpuid_cache, it makes no sense to call
KVM_GET_SUPPORTED_HV_CPUID more than once. Instead of (ab)using
env->features[] and/or trying to keep all the code in one place, it is
better to introduce a persistent hv_cpuid_cache and an
hv_cpuid_get_host() accessor to it.

Note: hv_cpuid_get_fw() is converted to use hv_cpuid_get_host() only so
it can be removed later together with the Hyper-V specific feature words.

Signed-off-by: Vitaly Kuznetsov
Reviewed-by: Eduardo Habkost
Message-Id: <20210422161130.652779-9-vkuznets@redhat.com>
Signed-off-by: Eduardo Habkost
---
 target/i386/kvm/kvm.c | 109 ++++++++++++++++++++++--------------------
 1 file changed, 56 insertions(+), 53 deletions(-)

diff --git a/target/i386/kvm/kvm.c b/target/i386/kvm/kvm.c
index 6d6afd83e3a..2dd60fcaacf 100644
--- a/target/i386/kvm/kvm.c
+++ b/target/i386/kvm/kvm.c
@@ -129,6 +129,7 @@ static int has_exception_payload;
 static bool has_msr_mcg_ext_ctl;
 
 static struct kvm_cpuid2 *cpuid_cache;
+static struct kvm_cpuid2 *hv_cpuid_cache;
 static struct kvm_msr_list *kvm_feature_msrs;
 
 int kvm_has_pit_state2(void)
@@ -1067,10 +1068,25 @@ static struct kvm_cpuid2 *get_supported_hv_cpuid_legacy(CPUState *cs)
     return cpuid;
 }
 
-static uint32_t hv_cpuid_get_host(struct kvm_cpuid2 *cpuid, uint32_t func,
-                                  int reg)
+static uint32_t hv_cpuid_get_host(CPUState *cs, uint32_t func, int reg)
 {
     struct kvm_cpuid_entry2 *entry;
+    struct kvm_cpuid2 *cpuid;
+
+    if (hv_cpuid_cache) {
+        cpuid = hv_cpuid_cache;
+    } else {
+        if (kvm_check_extension(kvm_state, KVM_CAP_HYPERV_CPUID) > 0) {
+            cpuid = get_supported_hv_cpuid(cs);
+        } else {
+            cpuid = get_supported_hv_cpuid_legacy(cs);
+        }
+        hv_cpuid_cache = cpuid;
+    }
+
+    if (!cpuid) {
+        return 0;
+    }
 
     entry = cpuid_find_entry(cpuid, func, 0);
     if (!entry) {
@@ -1080,7 +1096,7 @@ static uint32_t hv_cpuid_get_host(struct kvm_cpuid2 *cpuid, uint32_t func,
     return cpuid_entry_get_reg(entry, reg);
 }
 
-static bool hyperv_feature_supported(struct kvm_cpuid2 *cpuid, int feature)
+static bool hyperv_feature_supported(CPUState *cs, int feature)
 {
     uint32_t func, bits;
     int i, reg;
@@ -1095,7 +1111,7 @@ static bool hyperv_feature_supported(struct kvm_cpuid2 *cpuid, int feature)
             continue;
         }
 
-        if ((hv_cpuid_get_host(cpuid, func, reg) & bits) != bits) {
+        if ((hv_cpuid_get_host(cs, func, reg) & bits) != bits) {
             return false;
         }
     }
@@ -1103,8 +1119,7 @@ static bool hyperv_feature_supported(struct kvm_cpuid2 *cpuid, int feature)
     return true;
 }
 
-static int hv_cpuid_check_and_set(CPUState *cs, struct kvm_cpuid2 *cpuid,
-                                  int feature)
+static int hv_cpuid_check_and_set(CPUState *cs, int feature)
 {
     X86CPU *cpu = X86_CPU(cs);
     uint64_t deps;
@@ -1127,7 +1142,7 @@ static int hv_cpuid_check_and_set(CPUState *cs, struct kvm_cpuid2 *cpuid,
         deps &= ~(1ull << dep_feat);
     }
 
-    if (!hyperv_feature_supported(cpuid, feature)) {
+    if (!hyperv_feature_supported(cs, feature)) {
         if (hyperv_feat_enabled(cpu, feature)) {
             fprintf(stderr,
                     "Hyper-V %s is not supported by kernel\n",
@@ -1180,7 +1195,6 @@ static int hyperv_handle_properties(CPUState *cs,
                                     struct kvm_cpuid_entry2 *cpuid_ent)
 {
     X86CPU *cpu = X86_CPU(cs);
-    struct kvm_cpuid2 *cpuid;
     struct kvm_cpuid_entry2 *c;
     uint32_t cpuid_i = 0;
     int r;
@@ -1206,19 +1220,13 @@ static int hyperv_handle_properties(CPUState *cs,
         }
     }
 
-    if (kvm_check_extension(cs->kvm_state, KVM_CAP_HYPERV_CPUID) > 0) {
-        cpuid = get_supported_hv_cpuid(cs);
-    } else {
-        cpuid = get_supported_hv_cpuid_legacy(cs);
-    }
-
     if (cpu->hyperv_passthrough) {
         cpu->hyperv_vendor_id[0] =
-            hv_cpuid_get_host(cpuid, HV_CPUID_VENDOR_AND_MAX_FUNCTIONS, R_EBX);
+            hv_cpuid_get_host(cs, HV_CPUID_VENDOR_AND_MAX_FUNCTIONS, R_EBX);
         cpu->hyperv_vendor_id[1] =
-            hv_cpuid_get_host(cpuid, HV_CPUID_VENDOR_AND_MAX_FUNCTIONS, R_ECX);
+            hv_cpuid_get_host(cs, HV_CPUID_VENDOR_AND_MAX_FUNCTIONS, R_ECX);
         cpu->hyperv_vendor_id[2] =
-            hv_cpuid_get_host(cpuid, HV_CPUID_VENDOR_AND_MAX_FUNCTIONS, R_EDX);
+            hv_cpuid_get_host(cs, HV_CPUID_VENDOR_AND_MAX_FUNCTIONS, R_EDX);
         cpu->hyperv_vendor = g_realloc(cpu->hyperv_vendor,
                                        sizeof(cpu->hyperv_vendor_id) + 1);
         memcpy(cpu->hyperv_vendor, cpu->hyperv_vendor_id,
@@ -1226,52 +1234,52 @@ static int hyperv_handle_properties(CPUState *cs,
         cpu->hyperv_vendor[sizeof(cpu->hyperv_vendor_id)] = 0;
 
         cpu->hyperv_interface_id[0] =
-            hv_cpuid_get_host(cpuid, HV_CPUID_INTERFACE, R_EAX);
+            hv_cpuid_get_host(cs, HV_CPUID_INTERFACE, R_EAX);
         cpu->hyperv_interface_id[1] =
-            hv_cpuid_get_host(cpuid, HV_CPUID_INTERFACE, R_EBX);
+            hv_cpuid_get_host(cs, HV_CPUID_INTERFACE, R_EBX);
         cpu->hyperv_interface_id[2] =
-            hv_cpuid_get_host(cpuid, HV_CPUID_INTERFACE, R_ECX);
+            hv_cpuid_get_host(cs, HV_CPUID_INTERFACE, R_ECX);
         cpu->hyperv_interface_id[3] =
-            hv_cpuid_get_host(cpuid, HV_CPUID_INTERFACE, R_EDX);
+            hv_cpuid_get_host(cs, HV_CPUID_INTERFACE, R_EDX);
 
         cpu->hyperv_version_id[0] =
-            hv_cpuid_get_host(cpuid, HV_CPUID_VERSION, R_EAX);
+            hv_cpuid_get_host(cs, HV_CPUID_VERSION, R_EAX);
         cpu->hyperv_version_id[1] =
-            hv_cpuid_get_host(cpuid, HV_CPUID_VERSION, R_EBX);
+            hv_cpuid_get_host(cs, HV_CPUID_VERSION, R_EBX);
         cpu->hyperv_version_id[2] =
-            hv_cpuid_get_host(cpuid, HV_CPUID_VERSION, R_ECX);
+            hv_cpuid_get_host(cs, HV_CPUID_VERSION, R_ECX);
         cpu->hyperv_version_id[3] =
-            hv_cpuid_get_host(cpuid, HV_CPUID_VERSION, R_EDX);
+            hv_cpuid_get_host(cs, HV_CPUID_VERSION, R_EDX);
 
-        cpu->hv_max_vps = hv_cpuid_get_host(cpuid, HV_CPUID_IMPLEMENT_LIMITS,
+        cpu->hv_max_vps = hv_cpuid_get_host(cs, HV_CPUID_IMPLEMENT_LIMITS,
                                             R_EAX);
         cpu->hyperv_limits[0] =
-            hv_cpuid_get_host(cpuid, HV_CPUID_IMPLEMENT_LIMITS, R_EBX);
+            hv_cpuid_get_host(cs, HV_CPUID_IMPLEMENT_LIMITS, R_EBX);
         cpu->hyperv_limits[1] =
-            hv_cpuid_get_host(cpuid, HV_CPUID_IMPLEMENT_LIMITS, R_ECX);
+            hv_cpuid_get_host(cs, HV_CPUID_IMPLEMENT_LIMITS, R_ECX);
         cpu->hyperv_limits[2] =
-            hv_cpuid_get_host(cpuid, HV_CPUID_IMPLEMENT_LIMITS, R_EDX);
+            hv_cpuid_get_host(cs, HV_CPUID_IMPLEMENT_LIMITS, R_EDX);
 
         cpu->hyperv_spinlock_attempts =
-            hv_cpuid_get_host(cpuid, HV_CPUID_ENLIGHTMENT_INFO, R_EBX);
+            hv_cpuid_get_host(cs, HV_CPUID_ENLIGHTMENT_INFO, R_EBX);
     }
 
     /* Features */
-    r = hv_cpuid_check_and_set(cs, cpuid, HYPERV_FEAT_RELAXED);
-    r |= hv_cpuid_check_and_set(cs, cpuid, HYPERV_FEAT_VAPIC);
-    r |= hv_cpuid_check_and_set(cs, cpuid, HYPERV_FEAT_TIME);
-    r |= hv_cpuid_check_and_set(cs, cpuid, HYPERV_FEAT_CRASH);
-    r |= hv_cpuid_check_and_set(cs, cpuid, HYPERV_FEAT_RESET);
-    r |= hv_cpuid_check_and_set(cs, cpuid, HYPERV_FEAT_VPINDEX);
-    r |= hv_cpuid_check_and_set(cs, cpuid, HYPERV_FEAT_RUNTIME);
-    r |= hv_cpuid_check_and_set(cs, cpuid, HYPERV_FEAT_SYNIC);
-    r |= hv_cpuid_check_and_set(cs, cpuid, HYPERV_FEAT_STIMER);
-    r |= hv_cpuid_check_and_set(cs, cpuid, HYPERV_FEAT_FREQUENCIES);
-    r |= hv_cpuid_check_and_set(cs, cpuid, HYPERV_FEAT_REENLIGHTENMENT);
-    r |= hv_cpuid_check_and_set(cs, cpuid, HYPERV_FEAT_TLBFLUSH);
-    r |= hv_cpuid_check_and_set(cs, cpuid, HYPERV_FEAT_EVMCS);
-    r |= hv_cpuid_check_and_set(cs, cpuid, HYPERV_FEAT_IPI);
-    r |= hv_cpuid_check_and_set(cs, cpuid, HYPERV_FEAT_STIMER_DIRECT);
+    r = hv_cpuid_check_and_set(cs, HYPERV_FEAT_RELAXED);
+    r |= hv_cpuid_check_and_set(cs, HYPERV_FEAT_VAPIC);
+    r |= hv_cpuid_check_and_set(cs, HYPERV_FEAT_TIME);
+    r |= hv_cpuid_check_and_set(cs, HYPERV_FEAT_CRASH);
+    r |= hv_cpuid_check_and_set(cs, HYPERV_FEAT_RESET);
+    r |= hv_cpuid_check_and_set(cs, HYPERV_FEAT_VPINDEX);
+    r |= hv_cpuid_check_and_set(cs, HYPERV_FEAT_RUNTIME);
+    r |= hv_cpuid_check_and_set(cs, HYPERV_FEAT_SYNIC);
+    r |= hv_cpuid_check_and_set(cs, HYPERV_FEAT_STIMER);
+    r |= hv_cpuid_check_and_set(cs, HYPERV_FEAT_FREQUENCIES);
+    r |= hv_cpuid_check_and_set(cs, HYPERV_FEAT_REENLIGHTENMENT);
+    r |= hv_cpuid_check_and_set(cs, HYPERV_FEAT_TLBFLUSH);
+    r |= hv_cpuid_check_and_set(cs, HYPERV_FEAT_EVMCS);
+    r |= hv_cpuid_check_and_set(cs, HYPERV_FEAT_IPI);
+    r |= hv_cpuid_check_and_set(cs, HYPERV_FEAT_STIMER_DIRECT);
 
     /* Additional dependencies not covered by kvm_hyperv_properties[] */
     if (hyperv_feat_enabled(cpu, HYPERV_FEAT_SYNIC) &&
@@ -1284,8 +1292,7 @@ static int hyperv_handle_properties(CPUState *cs,
     }
 
     if (r) {
-        r = -ENOSYS;
-        goto free;
+        return -ENOSYS;
     }
 
     c = &cpuid_ent[cpuid_i++];
@@ -1327,7 +1334,7 @@ static int hyperv_handle_properties(CPUState *cs,
     if (cpu->hyperv_no_nonarch_cs == ON_OFF_AUTO_ON) {
         c->eax |= HV_NO_NONARCH_CORESHARING;
     } else if (cpu->hyperv_no_nonarch_cs == ON_OFF_AUTO_AUTO) {
-        c->eax |= hv_cpuid_get_host(cpuid, HV_CPUID_ENLIGHTMENT_INFO, R_EAX) &
+        c->eax |= hv_cpuid_get_host(cs, HV_CPUID_ENLIGHTMENT_INFO, R_EAX) &
                   HV_NO_NONARCH_CORESHARING;
     }
 
@@ -1352,12 +1359,8 @@ static int hyperv_handle_properties(CPUState *cs,
         c->function = HV_CPUID_NESTED_FEATURES;
         c->eax = cpu->hyperv_nested[0];
     }
 
-    r = cpuid_i;
-free:
-    g_free(cpuid);
-
-    return r;
+    return cpuid_i;
 }
 
 static Error *hv_passthrough_mig_blocker;
-- 
2.30.2
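
For reference, below is a minimal standalone C sketch of the lazy-caching
pattern the patch applies to KVM_GET_SUPPORTED_HV_CPUID: an expensive query
is performed at most once, its result is kept in a persistent pointer, and
every caller goes through an accessor. This is an illustration only, not
part of the patch; the names host_data, query_host() and host_data_get()
are hypothetical stand-ins for struct kvm_cpuid2, get_supported_hv_cpuid()
and hv_cpuid_get_host().

    #include <stdio.h>
    #include <stdlib.h>

    struct host_data {
        unsigned int value;
    };

    /* Persistent cache, filled on first use and reused afterwards
     * (analogous to hv_cpuid_cache). */
    static struct host_data *host_data_cache;

    /* Stand-in for the expensive query (the KVM_GET_SUPPORTED_HV_CPUID
     * ioctl in the patch). */
    static struct host_data *query_host(void)
    {
        struct host_data *d = calloc(1, sizeof(*d));

        if (d) {
            printf("expensive query performed\n");
            d->value = 42;
        }
        return d;
    }

    /* Accessor: runs the query once, then serves every caller from the
     * cache; a failed query is reported as "nothing supported" (0). */
    static unsigned int host_data_get(void)
    {
        if (!host_data_cache) {
            host_data_cache = query_host();
        }
        if (!host_data_cache) {
            return 0;
        }
        return host_data_cache->value;
    }

    int main(void)
    {
        /* Only the first call triggers the query; later calls hit the cache. */
        printf("%u\n", host_data_get());
        printf("%u\n", host_data_get());
        return 0;
    }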