From: Sean Christopherson <seanjc@google.com>
To: Lai Jiangshan <jiangshanlai@gmail.com>
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
	Paolo Bonzini <pbonzini@redhat.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Maxim Levitsky <mlevitsk@redhat.com>,
	David Matlack <dmatlack@google.com>,
	Lai Jiangshan <jiangshan.ljs@antgroup.com>
Subject: Re: [PATCH V3 01/12] KVM: X86/MMU: Verify PDPTE for nested NPT in PAE paging mode when page fault
Date: Tue, 19 Jul 2022 21:17:21 +0000	[thread overview]
Message-ID: <YtcfYX5KkF0ZhLR/@google.com> (raw)
In-Reply-To: <20220521131700.3661-2-jiangshanlai@gmail.com>

On Sat, May 21, 2022, Lai Jiangshan wrote:
> From: Lai Jiangshan <jiangshan.ljs@antgroup.com>
> Fixes: e4e517b4be01 ("KVM: MMU: Do not unconditionally read PDPTE from guest memory")
> Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
> ---
>  arch/x86/kvm/mmu/paging_tmpl.h | 39 ++++++++++++++++++++++++++++++++++
>  1 file changed, 39 insertions(+)
> 
> diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
> index db80f7ccaa4e..6e3df84e8455 100644
> --- a/arch/x86/kvm/mmu/paging_tmpl.h
> +++ b/arch/x86/kvm/mmu/paging_tmpl.h
> @@ -870,6 +870,44 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
>  	if (is_page_fault_stale(vcpu, fault, mmu_seq))
>  		goto out_unlock;
>  
> +	/*
> +	 * When nested NPT enabled and L1 is PAE paging, mmu->get_pdptrs()
> +	 * which is nested_svm_get_tdp_pdptr() reads the guest NPT's PDPTE
> +	 * from memory unconditionally for each call.
> +	 *
> +	 * The guest PAE root page is not write-protected.

I think it's worth calling out that it's simply not feasible to write-protect
PDPTEs due to them not covering a full page.

And looking at this comment as a whole, while I love detailed comments, I think
it'd be better to avoid referring to mmu->get_pdptrs() and to use more generic
terminology when talking about KVM.

I think this is accurate?

	/*
	 * If KVM is shadowing nested NPT and L1 is using PAE paging, zap the
	 * root for the PDPTE if the cached value doesn't match the entry at the
	 * time of the page fault, and resume the guest to rebuild the root.
	 * This is effectively a variation of write-protection, where the target
	 * SPTE(s) is zapped on use instead of on write.
	 *
	 * Under SVM with NPT+PAE, the CPU does NOT cache PDPTEs and instead
	 * handles them as it would any other page table entry.  I.e. KVM can't
	 * cache PDPTEs at nested VMRUN without violating the SVM architecture.
	 *
	 * KVM doesn't write-protect PDPTEs because CR3 only needs to be 32-byte
	 * aligned and sized when using PAE paging, whereas write-protection
	 * works at page granularity.
	 */

> +	 *
> +	 * The mmu->get_pdptrs() in FNAME(walk_addr_generic) might get a value
> +	 * different from previous calls or different from the return value of
> +	 * mmu->get_pdptrs() in mmu_alloc_shadow_roots().
> +	 *
> +	 * It will cause FNAME(fetch) installs the spte in a wrong sp or links
> +	 * a sp to a wrong parent if the return value of mmu->get_pdptrs()
> +	 * is not verified unchanged since FNAME(gpte_changed) can't check
> +	 * this kind of change.
> +	 *
> +	 * Verify the return value of mmu->get_pdptrs() (only the gfn in it
> +	 * needs to be checked) and do kvm_mmu_free_roots() like load_pdptr()
> +	 * if the gfn isn't matched.
> +	 *
> +	 * Do the verifying unconditionally when the guest is PAE paging no
> +	 * matter whether it is nested NPT or not to avoid complicated code.

Doing this unconditionally just trades one architecturally incorrect behavior
for another.  Does any real world use case actually care?  Probably not.  But the
behavior is visible to software, and I don't think it costs us much to get it right.

There are a number of ways to handle this, e.g. set a flag in kvm_init_shadow_npt_mmu()
and consume it here.  We could probably even burn a bit in kvm_mmu_extended_role
since we have lots of bits to burn.  E.g.

	if (vcpu->arch.mmu->cpu_role.ext.npt_pae) {

	}


> +	 */
> +	if (vcpu->arch.mmu->cpu_role.base.level == PT32E_ROOT_LEVEL) {
> +		u64 pdpte = vcpu->arch.mmu->pae_root[(fault->addr >> 30) & 3];
> +		struct kvm_mmu_page *sp = NULL;
> +
> +		if (IS_VALID_PAE_ROOT(pdpte))
> +			sp = to_shadow_page(pdpte & PT64_BASE_ADDR_MASK);
> +
> +		if (!sp || walker.table_gfn[PT32E_ROOT_LEVEL - 2] != sp->gfn) {
> +			write_unlock(&vcpu->kvm->mmu_lock);
> +			kvm_mmu_free_roots(vcpu->kvm, vcpu->arch.mmu,
> +					   KVM_MMU_ROOT_CURRENT);
> +			goto release_clean;
> +		}
> +	}
> +
>  	r = make_mmu_pages_available(vcpu);
>  	if (r)
>  		goto out_unlock;
> @@ -877,6 +915,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
>  
>  out_unlock:
>  	write_unlock(&vcpu->kvm->mmu_lock);
> +release_clean:
>  	kvm_release_pfn_clean(fault->pfn);
>  	return r;
>  }
> -- 
> 2.19.1.6.gb485710b
> 
