From: Paolo Bonzini <pbonzini@redhat.com>
To: Sean Christopherson <sean.j.christopherson@intel.com>,
	stable@vger.kernel.org,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: linux-kernel@vger.kernel.org
Subject: Re: [PATCH 4.19 STABLE] KVM: x86: introduce is_pae_paging
Date: Tue, 12 Nov 2019 00:05:44 +0100	[thread overview]
Message-ID: <aecfcc9e-dcab-2b52-ebdb-373a416a4951@redhat.com> (raw)
In-Reply-To: <20191111225423.29309-1-sean.j.christopherson@intel.com>

On 11/11/19 23:54, Sean Christopherson wrote:
> From: Paolo Bonzini <pbonzini@redhat.com>
> 
> Upstream commit bf03d4f9334728bf7c8ffc7de787df48abd6340e.
> 
> Checking for 32-bit PAE is quite common around code that fiddles with
> the PDPTRs.  Add a function to compress all checks into a single
> invocation.
> 
> Moving to the common helper also fixes a subtle bug in kvm_set_cr3()
> where it fails to check is_long_mode() and results in KVM incorrectly
> attempting to load PDPTRs for a 64-bit guest.
> 
> Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> [sean: backport to 4.x; handle vmx.c split in 5.x, call out the bugfix]
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/vmx.c | 7 +++----
>  arch/x86/kvm/x86.c | 8 ++++----
>  arch/x86/kvm/x86.h | 5 +++++
>  3 files changed, 12 insertions(+), 8 deletions(-)
> 
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index 6f7b3acdab26..83acaed244ba 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -5181,7 +5181,7 @@ static void ept_load_pdptrs(struct kvm_vcpu *vcpu)
>  		      (unsigned long *)&vcpu->arch.regs_dirty))
>  		return;
>  
> -	if (is_paging(vcpu) && is_pae(vcpu) && !is_long_mode(vcpu)) {
> +	if (is_pae_paging(vcpu)) {
>  		vmcs_write64(GUEST_PDPTR0, mmu->pdptrs[0]);
>  		vmcs_write64(GUEST_PDPTR1, mmu->pdptrs[1]);
>  		vmcs_write64(GUEST_PDPTR2, mmu->pdptrs[2]);
> @@ -5193,7 +5193,7 @@ static void ept_save_pdptrs(struct kvm_vcpu *vcpu)
>  {
>  	struct kvm_mmu *mmu = vcpu->arch.walk_mmu;
>  
> -	if (is_paging(vcpu) && is_pae(vcpu) && !is_long_mode(vcpu)) {
> +	if (is_pae_paging(vcpu)) {
>  		mmu->pdptrs[0] = vmcs_read64(GUEST_PDPTR0);
>  		mmu->pdptrs[1] = vmcs_read64(GUEST_PDPTR1);
>  		mmu->pdptrs[2] = vmcs_read64(GUEST_PDPTR2);
> @@ -12021,8 +12021,7 @@ static int nested_vmx_load_cr3(struct kvm_vcpu *vcpu, unsigned long cr3, bool ne
>  		 * If PAE paging and EPT are both on, CR3 is not used by the CPU and
>  		 * must not be dereferenced.
>  		 */
> -		if (!is_long_mode(vcpu) && is_pae(vcpu) && is_paging(vcpu) &&
> -		    !nested_ept) {
> +		if (is_pae_paging(vcpu) && !nested_ept) {
>  			if (!load_pdptrs(vcpu, vcpu->arch.walk_mmu, cr3)) {
>  				*entry_failure_code = ENTRY_FAIL_PDPTE;
>  				return 1;
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 6ae8a013af31..b9b87fb75ac0 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -633,7 +633,7 @@ bool pdptrs_changed(struct kvm_vcpu *vcpu)
>  	gfn_t gfn;
>  	int r;
>  
> -	if (is_long_mode(vcpu) || !is_pae(vcpu) || !is_paging(vcpu))
> +	if (!is_pae_paging(vcpu))
>  		return false;
>  
>  	if (!test_bit(VCPU_EXREG_PDPTR,
> @@ -884,8 +884,8 @@ int kvm_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
>  	if (is_long_mode(vcpu) &&
>  	    (cr3 & rsvd_bits(cpuid_maxphyaddr(vcpu), 63)))
>  		return 1;
> -	else if (is_pae(vcpu) && is_paging(vcpu) &&
> -		   !load_pdptrs(vcpu, vcpu->arch.walk_mmu, cr3))
> +	else if (is_pae_paging(vcpu) &&
> +		 !load_pdptrs(vcpu, vcpu->arch.walk_mmu, cr3))
>  		return 1;
>  
>  	kvm_mmu_new_cr3(vcpu, cr3, skip_tlb_flush);
> @@ -8312,7 +8312,7 @@ static int __set_sregs(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
>  		kvm_update_cpuid(vcpu);
>  
>  	idx = srcu_read_lock(&vcpu->kvm->srcu);
> -	if (!is_long_mode(vcpu) && is_pae(vcpu) && is_paging(vcpu)) {
> +	if (is_pae_paging(vcpu)) {
>  		load_pdptrs(vcpu, vcpu->arch.walk_mmu, kvm_read_cr3(vcpu));
>  		mmu_reset_needed = 1;
>  	}
> diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
> index 3a91ea760f07..608e5f8c5d0a 100644
> --- a/arch/x86/kvm/x86.h
> +++ b/arch/x86/kvm/x86.h
> @@ -139,6 +139,11 @@ static inline int is_paging(struct kvm_vcpu *vcpu)
>  	return likely(kvm_read_cr0_bits(vcpu, X86_CR0_PG));
>  }
>  
> +static inline bool is_pae_paging(struct kvm_vcpu *vcpu)
> +{
> +	return !is_long_mode(vcpu) && is_pae(vcpu) && is_paging(vcpu);
> +}
> +
>  static inline u32 bit(int bitno)
>  {
>  	return 1 << (bitno & 31);
> 

Acked-by: Paolo Bonzini <pbonzini@redhat.com>
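
For illustration only (not part of the thread, and not kernel code; the struct and field names below are hypothetical): a minimal standalone sketch of the CR0.PG / CR4.PAE / EFER.LMA test that is_pae_paging() consolidates, and of the kvm_set_cr3() bug the commit message calls out. A 64-bit guest runs with CR0.PG, CR4.PAE and EFER.LMA all set, so the old "is_pae() && is_paging()" condition, which omits the long-mode check, would wrongly attempt to load PDPTRs for it.

#include <stdbool.h>
#include <stdio.h>

/* Simplified vCPU state: just the three bits that matter here. */
struct vcpu_state {
	bool cr0_pg;    /* CR0.PG   - paging enabled    */
	bool cr4_pae;   /* CR4.PAE  - PAE enabled       */
	bool efer_lma;  /* EFER.LMA - long mode active  */
};

/* Same logic as the helper added to x86.h, over the simplified state. */
static bool is_pae_paging(const struct vcpu_state *v)
{
	return !v->efer_lma && v->cr4_pae && v->cr0_pg;
}

int main(void)
{
	/* A 64-bit guest: PG=1, PAE=1, LMA=1. */
	struct vcpu_state guest64 = {
		.cr0_pg = true, .cr4_pae = true, .efer_lma = true,
	};

	/* Old kvm_set_cr3() condition: PAE and paging, long mode not checked. */
	bool old_check = guest64.cr4_pae && guest64.cr0_pg;

	printf("old check would load PDPTRs: %d\n", old_check);               /* 1 - wrong   */
	printf("is_pae_paging():             %d\n", is_pae_paging(&guest64)); /* 0 - correct */
	return 0;
}

Compiled and run, the first line prints 1 and the second prints 0, which is exactly the difference the consolidated helper makes in kvm_set_cr3() for a 64-bit guest.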


