Reply-To: Sean Christopherson
Date: Tue, 22 Jun 2021 10:57:37 -0700
In-Reply-To: <20210622175739.3610207-1-seanjc@google.com>
Message-Id: <20210622175739.3610207-53-seanjc@google.com>
References: <20210622175739.3610207-1-seanjc@google.com>
Subject: [PATCH 52/54] KVM: x86/mmu: Get CR0.WP from MMU, not vCPU, in shadow page fault
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yu Zhang, Maxim Levitsky

Use the current MMU instead of vCPU state to query CR0.WP when handling
a page fault.
In the nested NPT case, the current CR0.WP reflects L2, whereas the
page fault is shadowing L1's NPT. Practically speaking, this is a nop
as NPT walks are always user faults, but fix it up for consistency.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu.h             | 5 -----
 arch/x86/kvm/mmu/paging_tmpl.h | 5 ++---
 2 files changed, 2 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 62844bacd13f..83e6c6965f1e 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -165,11 +165,6 @@ static inline bool is_writable_pte(unsigned long pte)
 	return pte & PT_WRITABLE_MASK;
 }
 
-static inline bool is_write_protection(struct kvm_vcpu *vcpu)
-{
-	return kvm_read_cr0_bits(vcpu, X86_CR0_WP);
-}
-
 /*
  * Check if a given access (described through the I/D, W/R and U/S bits of a
  * page fault error code pfec) causes a permission fault with the given PTE
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index ec1de57f3572..260a9c06d764 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -795,7 +795,7 @@ FNAME(is_self_change_mapping)(struct kvm_vcpu *vcpu,
 	bool self_changed = false;
 
 	if (!(walker->pte_access & ACC_WRITE_MASK ||
-	      (!is_write_protection(vcpu) && !user_fault)))
+	      (!is_cr0_wp(vcpu->arch.mmu) && !user_fault)))
 		return false;
 
 	for (level = walker->level; level <= walker->max_level; level++) {
@@ -893,8 +893,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gpa_t addr, u32 error_code,
 	 * we will cache the incorrect access into mmio spte.
 	 */
 	if (write_fault && !(walker.pte_access & ACC_WRITE_MASK) &&
-	    !is_write_protection(vcpu) && !user_fault &&
-	    !is_noslot_pfn(pfn)) {
+	    !is_cr0_wp(vcpu->arch.mmu) && !user_fault && !is_noslot_pfn(pfn)) {
 		walker.pte_access |= ACC_WRITE_MASK;
 		walker.pte_access &= ~ACC_USER_MASK;
-- 
2.32.0.288.g62a8d224e6-goog