From: Ben Gardon
Date: Thu, 6 May 2021 09:22:32 -0700
Subject: Re: [PATCH 3/3] KVM: x86/mmu: Fix TDP MMU page table level
To: Kai Huang
Cc: kvm, Paolo Bonzini, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel
In-Reply-To: <51f7d6bbe52ad0c42d3c09fffd340fe7d2c0e113.camel@intel.com>
List-ID: kvm@vger.kernel.org
On Thu, May 6, 2021 at 1:00 AM Kai Huang wrote:
>
> On Wed, 2021-05-05 at 09:28 -0700, Ben Gardon wrote:
> > On Wed, May 5, 2021 at 2:38 AM Kai Huang wrote:
> > >
> > > The TDP MMU iterator's level is identical to the page table's actual
> > > level. For instance, for the last-level page table (whose entries
> > > point to 4K pages), iter->level is 1 (PG_LEVEL_4K), and in the case
> > > of 5-level paging, iter->level is mmu->shadow_root_level, which is 5.
> > > However, struct kvm_mmu_page's level is currently not set correctly
> > > when it is allocated in kvm_tdp_mmu_map(). When the iterator hits a
> > > non-present SPTE and needs to allocate a new child page table,
> > > currently iter->level, which is the level of the page table that the
> > > non-present SPTE belongs to, is used. This results in struct
> > > kvm_mmu_page's level always having its parent's level (except the
> > > root table's level, which is initialized explicitly using
> > > mmu->shadow_root_level). This is wrong, and not consistent with the
> > > existing non-TDP-MMU code. Fortunately sp->role.level is only used
> > > in handle_removed_tdp_mmu_page(), which apparently is already aware
> > > of this and handles it correctly. However, to make it consistent
> > > with the non-TDP-MMU code (and fix the issue that both the root page
> > > table and any child of it have shadow_root_level), fix this by using
> > > iter->level - 1 in kvm_tdp_mmu_map(). Also modify
> > > handle_removed_tdp_mmu_page() to handle this change.
> >
> > Ugh. Thank you for catching this. This is going to take me a bit to
> > review as I should audit the code more broadly for this problem in
> > the TDP MMU.
> > It would probably also be a good idea to add a comment on the level
> > field to say that it represents the level of the SPTEs in the
> > associated page, not the level of the SPTE that links to the
> > associated page.
> > Hopefully that will prevent similar future misunderstandings.
>
> Regarding adding a comment, sorry, I had a hard time figuring out where
> to add it. Did you mean the level field of 'struct kvm_mmu_page_role',
> or 'struct tdp_iter'? If it is the former, to me it is not quite useful.

I meant the level field of 'struct kvm_mmu_page_role', but if you don't
think it makes sense to add one there, I don't feel strongly either way.

>
> I ended up with below. Is it OK to you?

Yeah, it looks good to me.

>
> If you still think a comment on level should be added, could you be
> more specific so that I can add it?

	struct {
+		/*
+		 * The level of the SPT tracked by this SP, as opposed to
+		 * the level of the parent SPTE linking this SPT.
+		 */
		unsigned level:4;
	...

I guess that does sound kind of unnecessary.
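For anyone following along, the convention being discussed (an SP's level
is the level of the SPT it tracks, one below the level of the parent SPTE
linking it) can be checked with a tiny standalone sketch. This is plain
userspace C, not KVM code; KVM_PAGES_PER_HPAGE() is re-derived here from
its x86 definition (9 gfn bits per table level) just so the snippet
compiles on its own:

#include <assert.h>
#include <stdio.h>

#define PT64_ENT_PER_PAGE	512
#define KVM_PAGES_PER_HPAGE(level)	(1UL << (((level) - 1) * 9))

int main(void)
{
	int iter_level = 4;		/* parent SPTE / iterator level */
	int sp_level = iter_level - 1;	/* what the fixed kvm_tdp_mmu_map() passes */

	/* Each of the SP's 512 SPTEs covers KVM_PAGES_PER_HPAGE(sp_level) pages... */
	unsigned long per_spte = KVM_PAGES_PER_HPAGE(sp_level);
	/* ...and the SP as a whole covers its parent SPTE's range. */
	unsigned long whole_sp = KVM_PAGES_PER_HPAGE(sp_level + 1);

	assert(whole_sp == per_spte * PT64_ENT_PER_PAGE);
	printf("sp->role.level=%d: %lu pages per SPTE, %lu pages total\n",
	       sp_level, per_spte, whole_sp);
	return 0;
}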
>
> ------------------------------------------------------------------------
>
> The TDP MMU iterator's level is identical to the page table's actual
> level. For instance, for the last-level page table (whose entries point
> to 4K pages), iter->level is 1 (PG_LEVEL_4K), and in the case of 5-level
> paging, iter->level is mmu->shadow_root_level, which is 5. However,
> struct kvm_mmu_page's level is currently not set correctly when it is
> allocated in kvm_tdp_mmu_map(). When the iterator hits a non-present
> SPTE and needs to allocate a new child page table, currently
> iter->level, which is the level of the page table that the non-present
> SPTE belongs to, is used. This results in struct kvm_mmu_page's level
> always having its parent's level (except the root table's level, which
> is initialized explicitly using mmu->shadow_root_level).
>
> This is wrong, and not consistent with the existing non-TDP-MMU code.
> Fortunately sp->role.level is only used in handle_removed_tdp_mmu_page()
> and kvm_tdp_mmu_zap_sp(), and they are already aware of this and behave
> correctly. However, to make it consistent with the legacy MMU code (and
> fix the issue that both the root page table and its child page table
> have shadow_root_level), use iter->level - 1 in kvm_tdp_mmu_map(), and
> change handle_removed_tdp_mmu_page() and kvm_tdp_mmu_zap_sp()
> accordingly.
>
> Signed-off-by: Kai Huang
> ---
>  arch/x86/kvm/mmu/tdp_mmu.c | 8 ++++----
>  arch/x86/kvm/mmu/tdp_mmu.h | 2 +-
>  2 files changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index 5e28fbabcd35..45fb889f6a94 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -335,7 +335,7 @@ static void handle_removed_tdp_mmu_page(struct kvm *kvm, tdp_ptep_t pt,
>
>  	for (i = 0; i < PT64_ENT_PER_PAGE; i++) {
>  		sptep = rcu_dereference(pt) + i;
> -		gfn = base_gfn + (i * KVM_PAGES_PER_HPAGE(level - 1));
> +		gfn = base_gfn + i * KVM_PAGES_PER_HPAGE(level);
>
>  		if (shared) {
>  			/*
> @@ -377,12 +377,12 @@ static void handle_removed_tdp_mmu_page(struct kvm *kvm, tdp_ptep_t pt,
>  			WRITE_ONCE(*sptep, REMOVED_SPTE);
>  		}
>  		handle_changed_spte(kvm, kvm_mmu_page_as_id(sp), gfn,
> -				    old_child_spte, REMOVED_SPTE, level - 1,
> +				    old_child_spte, REMOVED_SPTE, level,
>  				    shared);
>  	}
>
>  	kvm_flush_remote_tlbs_with_address(kvm, gfn,
> -					   KVM_PAGES_PER_HPAGE(level));
> +					   KVM_PAGES_PER_HPAGE(level + 1));
>
>  	call_rcu(&sp->rcu_head, tdp_mmu_free_sp_rcu_callback);
>  }
> @@ -1013,7 +1013,7 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
>  	}
>
>  	if (!is_shadow_present_pte(iter.old_spte)) {
> -		sp = alloc_tdp_mmu_page(vcpu, iter.gfn, iter.level);
> +		sp = alloc_tdp_mmu_page(vcpu, iter.gfn, iter.level - 1);
>  		child_pt = sp->spt;
>
>  		new_spte = make_nonleaf_spte(child_pt,
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
> index 5fdf63090451..7f9974c5d0b4 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.h
> +++ b/arch/x86/kvm/mmu/tdp_mmu.h
> @@ -31,7 +31,7 @@ static inline bool kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, int as_id,
>  }
>  static inline bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
>  {
> -	gfn_t end = sp->gfn + KVM_PAGES_PER_HPAGE(sp->role.level);
> +	gfn_t end = sp->gfn + KVM_PAGES_PER_HPAGE(sp->role.level + 1);
>
>  	/*
>  	 * Don't allow yielding, as the caller may have a flush pending.  Note,
> --
> 2.31.1
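To sanity check the two arithmetic changes above (stepping child gfns by
KVM_PAGES_PER_HPAGE(level) rather than level - 1, and sizing the flush and
zap ranges with level + 1), here is a second standalone sketch under the
same simplified definitions; again plain userspace C rather than KVM code:

#include <assert.h>

#define PT64_ENT_PER_PAGE	512
#define KVM_PAGES_PER_HPAGE(level)	(1UL << (((level) - 1) * 9))

int main(void)
{
	int level = 2;			/* the SP's own level under the new convention */
	unsigned long base_gfn = 0x100000;
	unsigned long gfn = base_gfn;
	int i;

	for (i = 0; i < PT64_ENT_PER_PAGE; i++) {
		/* Same formula as the patched handle_removed_tdp_mmu_page(). */
		gfn = base_gfn + i * KVM_PAGES_PER_HPAGE(level);
	}

	/*
	 * The last child entry's range ends exactly where the parent SPTE's
	 * range ends, which is what kvm_tdp_mmu_zap_sp() and the TLB flush
	 * now compute with KVM_PAGES_PER_HPAGE(level + 1).
	 */
	assert(gfn + KVM_PAGES_PER_HPAGE(level) ==
	       base_gfn + KVM_PAGES_PER_HPAGE(level + 1));
	return 0;
}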
>
>
> > >
> > > Signed-off-by: Kai Huang
> > > ---
> > >  arch/x86/kvm/mmu/tdp_mmu.c | 8 ++++----
> > >  1 file changed, 4 insertions(+), 4 deletions(-)
> > >
> > > diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> > > index debe8c3ec844..bcfb87e1c06e 100644
> > > --- a/arch/x86/kvm/mmu/tdp_mmu.c
> > > +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> > > @@ -335,7 +335,7 @@ static void handle_removed_tdp_mmu_page(struct kvm *kvm, tdp_ptep_t pt,
> > >
> > >  	for (i = 0; i < PT64_ENT_PER_PAGE; i++) {
> > >  		sptep = rcu_dereference(pt) + i;
> > > -		gfn = base_gfn + (i * KVM_PAGES_PER_HPAGE(level - 1));
> > > +		gfn = base_gfn + i * KVM_PAGES_PER_HPAGE(level);
> > >
> > >  		if (shared) {
> > >  			/*
> > > @@ -377,12 +377,12 @@ static void handle_removed_tdp_mmu_page(struct kvm *kvm, tdp_ptep_t pt,
> > >  			WRITE_ONCE(*sptep, REMOVED_SPTE);
> > >  		}
> > >  		handle_changed_spte(kvm, kvm_mmu_page_as_id(sp), gfn,
> > > -				    old_child_spte, REMOVED_SPTE, level - 1,
> > > +				    old_child_spte, REMOVED_SPTE, level,
> > >  				    shared);
> > >  	}
> > >
> > >  	kvm_flush_remote_tlbs_with_address(kvm, gfn,
> > > -					   KVM_PAGES_PER_HPAGE(level));
> > > +					   KVM_PAGES_PER_HPAGE(level + 1));
> > >
> > >  	call_rcu(&sp->rcu_head, tdp_mmu_free_sp_rcu_callback);
> > >  }
> > > @@ -1009,7 +1009,7 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
> > >  	}
> > >
> > >  	if (!is_shadow_present_pte(iter.old_spte)) {
> > > -		sp = alloc_tdp_mmu_page(vcpu, iter.gfn, iter.level);
> > > +		sp = alloc_tdp_mmu_page(vcpu, iter.gfn, iter.level - 1);
> > >  		child_pt = sp->spt;
> > >
> > >  		new_spte = make_nonleaf_spte(child_pt,
> > > --
> > > 2.31.1
> > >
> >
>