From mboxrd@z Thu Jan 1 00:00:00 1970
Reply-To: Sean Christopherson
Date: Tue, 22 Jun 2021 10:56:59 -0700
In-Reply-To: <20210622175739.3610207-1-seanjc@google.com>
Message-Id: <20210622175739.3610207-15-seanjc@google.com>
Mime-Version: 1.0
References: <20210622175739.3610207-1-seanjc@google.com>
X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog
Subject: [PATCH 14/54] KVM: x86: Fix sizes used to pass around CR0, CR4, and EFER
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Yu Zhang, Maxim Levitsky
Content-Type: text/plain; charset="UTF-8"
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org

When configuring KVM's MMU, pass CR0 and CR4 as unsigned longs, and EFER
as a u64 in various flows (mostly MMU).  Passing the params as u32s is
functionally ok since all of the affected registers reserve bits 63:32 to
zero (enforced by KVM), but it's technically wrong.

No functional change intended.

Signed-off-by: Sean Christopherson
---
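As an aside, a minimal standalone sketch of the truncation in question
(not part of the patch; hypothetical names; assumes an LP64 build where
unsigned long is 64 bits).  A register value funneled through a u32
parameter silently loses bits 63:32; KVM currently keeps those bits zero
for CR0/CR4, which is why the old prototypes were functionally benign,
just the wrong type:

  /* Hedged illustration only, not KVM code. */
  #include <stdint.h>
  #include <stdio.h>

  typedef uint32_t u32;

  /* Old-style prototype: CR4 squeezed through a u32. */
  static unsigned long through_u32(u32 cr4)
  {
          return cr4;     /* bits 63:32 are already gone here */
  }

  int main(void)
  {
          /* Hypothetical CR4 value with a bit above bit 31 set. */
          unsigned long cr4 = (1UL << 32) | (1UL << 5);

          printf("full value    : %#lx\n", cr4);              /* 0x100000020 */
          printf("through a u32 : %#lx\n", through_u32(cr4)); /* 0x20 */
          return 0;
  }
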
 arch/x86/kvm/mmu.h        |  4 ++--
 arch/x86/kvm/mmu/mmu.c    | 11 ++++++-----
 arch/x86/kvm/svm/nested.c |  2 +-
 arch/x86/kvm/x86.c        |  2 +-
 4 files changed, 10 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index bc11402df83b..47131b92b990 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -66,8 +66,8 @@ void
 reset_shadow_zero_bits_mask(struct kvm_vcpu *vcpu, struct kvm_mmu *context);
 
 void kvm_init_mmu(struct kvm_vcpu *vcpu);
-void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, u32 cr0, u32 cr4, u32 efer,
-			     gpa_t nested_cr3);
+void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, unsigned long cr0,
+			     unsigned long cr4, u64 efer, gpa_t nested_cr3);
 void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
 			     bool accessed_dirty, gpa_t new_eptp);
 bool kvm_can_do_async_pf(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 0171c245ecc7..96c16a6e0044 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4659,8 +4659,8 @@ kvm_calc_shadow_mmu_root_page_role(struct kvm_vcpu *vcpu, bool base_only)
 }
 
 static void shadow_mmu_init_context(struct kvm_vcpu *vcpu, struct kvm_mmu *context,
-				    u32 cr0, u32 cr4, u32 efer,
-				    union kvm_mmu_role new_role)
+				    unsigned long cr0, unsigned long cr4,
+				    u64 efer, union kvm_mmu_role new_role)
 {
 	if (!(cr0 & X86_CR0_PG))
 		nonpaging_init_context(vcpu, context);
@@ -4675,7 +4675,8 @@ static void shadow_mmu_init_context(struct kvm_vcpu *vcpu, struct kvm_mmu *conte
 	reset_shadow_zero_bits_mask(vcpu, context);
 }
 
-static void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu, u32 cr0, u32 cr4, u32 efer)
+static void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu, unsigned long cr0,
+				unsigned long cr4, u64 efer)
 {
 	struct kvm_mmu *context = &vcpu->arch.root_mmu;
 	union kvm_mmu_role new_role =
@@ -4697,8 +4698,8 @@ kvm_calc_shadow_npt_root_page_role(struct kvm_vcpu *vcpu)
 	return role;
 }
 
-void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, u32 cr0, u32 cr4, u32 efer,
-			     gpa_t nested_cr3)
+void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, unsigned long cr0,
+			     unsigned long cr4, u64 efer, gpa_t nested_cr3)
 {
 	struct kvm_mmu *context = &vcpu->arch.guest_mmu;
 	union kvm_mmu_role new_role = kvm_calc_shadow_npt_root_page_role(vcpu);
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index dca20f949b63..9f0e7ed672b2 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -1244,8 +1244,8 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
 		&user_kvm_nested_state->data.svm[0];
 	struct vmcb_control_area *ctl;
 	struct vmcb_save_area *save;
+	unsigned long cr0;
 	int ret;
-	u32 cr0;
 
 	BUILD_BUG_ON(sizeof(struct vmcb_control_area) + sizeof(struct vmcb_save_area) >
 		     KVM_STATE_NESTED_SVM_VMCB_SIZE);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 92b4a9305651..2d3b9f10b14a 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9076,8 +9076,8 @@ static void enter_smm(struct kvm_vcpu *vcpu)
 {
 	struct kvm_segment cs, ds;
 	struct desc_ptr dt;
+	unsigned long cr0;
 	char buf[512];
-	u32 cr0;
 
 	memset(buf, 0, 512);
 #ifdef CONFIG_X86_64
-- 
2.32.0.288.g62a8d224e6-goog