From: Paolo Bonzini <pbonzini@redhat.com>
To: KarimAllah Ahmed <karahmed@amazon.de>,
	x86@kernel.org, linux-kernel@vger.kernel.org,
	kvm@vger.kernel.org
Cc: hpa@zytor.com, jmattson@google.com, mingo@redhat.com,
	rkrcmar@redhat.com, tglx@linutronix.de
Subject: Re: [PATCH 07/10] KVM/nVMX: Use kvm_vcpu_map when mapping the posted interrupt descriptor table
Date: Thu, 12 Apr 2018 16:39:16 +0200
Message-ID: <5b3e7e52-6353-3fe9-f5e4-ecb1d8e2ac6f@redhat.com>
In-Reply-To: <1519235241-6500-8-git-send-email-karahmed@amazon.de>

On 21/02/2018 18:47, KarimAllah Ahmed wrote:
> ... since using kvm_vcpu_gpa_to_page() and kmap() will only work for guest
> memory that has a "struct page".
> 
> The life-cycle of the mapping also changes to avoid doing map and unmap on
> every single exit (which becomes very expensive once we use memremap).  Now
> the memory is mapped and only unmapped when a new VMCS12 is loaded into the
> vCPU (or when the vCPU is freed!).
> 
> Signed-off-by: KarimAllah Ahmed <karahmed@amazon.de>

Same here, let's change the lifecycle separately.

Paolo
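
For illustration only -- this is not code from the series, and the helper
name below is made up -- keeping the existing per-VM-exit lifecycle while
still switching to the new mapping API could look roughly like this,
assuming the kvm_vcpu_unmap() signature already used elsewhere in the patch:

  /*
   * Hypothetical sketch: keep unmapping the posted interrupt descriptor
   * on every nested VM-exit (the current lifecycle), but go through
   * kvm_vcpu_unmap() instead of kunmap()/kvm_release_page_dirty().
   * Changing the lifecycle itself would then be a separate follow-up.
   */
  static void nested_unmap_pi_desc(struct vcpu_vmx *vmx)
  {
          kvm_vcpu_unmap(&vmx->nested.pi_desc_map);
          vmx->nested.pi_desc = NULL;
  }

This would be called from nested_vmx_vmexit() in place of the kunmap()
block that the patch removes below.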

> ---
>  arch/x86/kvm/vmx.c | 45 +++++++++++++--------------------------------
>  1 file changed, 13 insertions(+), 32 deletions(-)
> 
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index a700338..7b29419 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -461,7 +461,7 @@ struct nested_vmx {
>  	 */
>  	struct page *apic_access_page;
>  	struct kvm_host_map virtual_apic_map;
> -	struct page *pi_desc_page;
> +	struct kvm_host_map pi_desc_map;
>  	struct kvm_host_map msr_bitmap_map;
>  
>  	struct pi_desc *pi_desc;
> @@ -7666,6 +7666,7 @@ static inline void nested_release_vmcs12(struct vcpu_vmx *vmx)
>  				  vmx->nested.cached_vmcs12, 0, VMCS12_SIZE);
>  
>  	kvm_vcpu_unmap(&vmx->nested.virtual_apic_map);
> +	kvm_vcpu_unmap(&vmx->nested.pi_desc_map);
>  	kvm_vcpu_unmap(&vmx->nested.msr_bitmap_map);
>  
>  	vmx->nested.current_vmptr = -1ull;
> @@ -7698,14 +7699,9 @@ static void free_nested(struct vcpu_vmx *vmx)
>  		vmx->nested.apic_access_page = NULL;
>  	}
>  	kvm_vcpu_unmap(&vmx->nested.virtual_apic_map);
> -	if (vmx->nested.pi_desc_page) {
> -		kunmap(vmx->nested.pi_desc_page);
> -		kvm_release_page_dirty(vmx->nested.pi_desc_page);
> -		vmx->nested.pi_desc_page = NULL;
> -		vmx->nested.pi_desc = NULL;
> -	}
> -
> +	kvm_vcpu_unmap(&vmx->nested.pi_desc_map);
>  	kvm_vcpu_unmap(&vmx->nested.msr_bitmap_map);
> +	vmx->nested.pi_desc = NULL;
>  
>  	free_loaded_vmcs(&vmx->nested.vmcs02);
>  }
> @@ -10278,24 +10274,16 @@ static void nested_get_vmcs12_pages(struct kvm_vcpu *vcpu,
>  	}
>  
>  	if (nested_cpu_has_posted_intr(vmcs12)) {
> -		if (vmx->nested.pi_desc_page) { /* shouldn't happen */
> -			kunmap(vmx->nested.pi_desc_page);
> -			kvm_release_page_dirty(vmx->nested.pi_desc_page);
> -			vmx->nested.pi_desc_page = NULL;
> +		map = &vmx->nested.pi_desc_map;
> +
> +		if (kvm_vcpu_map(vcpu, gpa_to_gfn(vmcs12->posted_intr_desc_addr), map)) {
> +			vmx->nested.pi_desc =
> +				(struct pi_desc *)(((void *)map->kaddr) +
> +				offset_in_page(vmcs12->posted_intr_desc_addr));
> +			vmcs_write64(POSTED_INTR_DESC_ADDR, pfn_to_hpa(map->pfn) +
> +							    offset_in_page(vmcs12->posted_intr_desc_addr));
>  		}
> -		page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->posted_intr_desc_addr);
> -		if (is_error_page(page))
> -			return;
> -		vmx->nested.pi_desc_page = page;
> -		vmx->nested.pi_desc = kmap(vmx->nested.pi_desc_page);
> -		vmx->nested.pi_desc =
> -			(struct pi_desc *)((void *)vmx->nested.pi_desc +
> -			(unsigned long)(vmcs12->posted_intr_desc_addr &
> -			(PAGE_SIZE - 1)));
> -		vmcs_write64(POSTED_INTR_DESC_ADDR,
> -			page_to_phys(vmx->nested.pi_desc_page) +
> -			(unsigned long)(vmcs12->posted_intr_desc_addr &
> -			(PAGE_SIZE - 1)));
> +
>  	}
>  	if (nested_vmx_prepare_msr_bitmap(vcpu, vmcs12))
>  		vmcs_set_bits(CPU_BASED_VM_EXEC_CONTROL,
> @@ -11893,13 +11881,6 @@ static void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 exit_reason,
>  		kvm_release_page_dirty(vmx->nested.apic_access_page);
>  		vmx->nested.apic_access_page = NULL;
>  	}
> -	if (vmx->nested.pi_desc_page) {
> -		kunmap(vmx->nested.pi_desc_page);
> -		kvm_release_page_dirty(vmx->nested.pi_desc_page);
> -		vmx->nested.pi_desc_page = NULL;
> -		vmx->nested.pi_desc = NULL;
> -	}
> -
>  	/*
>  	 * We are now running in L2, mmu_notifier will force to reload the
>  	 * page's hpa for L2 vmcs. Need to reload it for L1 before entering L1.
> 
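
As a side note on the POSTED_INTR_DESC_ADDR value written in the hunk
above: it is just the host page frame from the map shifted up by
PAGE_SHIFT, plus the guest address' offset within its page.  A tiny
standalone sketch with made-up numbers (userspace approximation, not
kernel code; PAGE_SHIFT assumed to be 12):

  #include <stdio.h>
  #include <stdint.h>

  #define PAGE_SHIFT 12
  #define PAGE_SIZE  (1UL << PAGE_SHIFT)

  int main(void)
  {
          /* Made-up values purely for illustration. */
          uint64_t pfn = 0x1234;      /* host pfn returned by the map  */
          uint64_t gpa = 0xdead040;   /* vmcs12->posted_intr_desc_addr */

          /* pfn_to_hpa(pfn) + offset_in_page(gpa) from the hunk above */
          uint64_t hpa = (pfn << PAGE_SHIFT) + (gpa & (PAGE_SIZE - 1));

          printf("hpa = %#llx\n", (unsigned long long)hpa); /* 0x1234040 */
          return 0;
  }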

Thread overview: 30+ messages
2018-02-21 17:47 [PATCH 00/10] KVM/X86: Handle guest memory that does not have a struct page KarimAllah Ahmed
2018-02-21 17:47 ` [PATCH 01/10] X86/nVMX: handle_vmon: Read 4 bytes from guest memory instead of map->read->unmap sequence KarimAllah Ahmed
2018-02-21 17:47 ` [PATCH 02/10] X86/nVMX: handle_vmptrld: Copy the VMCS12 directly from guest memory instead of map->copy->unmap sequence KarimAllah Ahmed
2018-02-21 17:47 ` [PATCH 03/10] X86/nVMX: Update the PML table without mapping and unmapping the page KarimAllah Ahmed
2018-02-23  2:02   ` kbuild test robot
2018-04-12 15:03   ` Paolo Bonzini
2018-02-21 17:47 ` [PATCH 04/10] KVM: Introduce a new guest mapping API KarimAllah Ahmed
2018-02-23  1:27   ` kbuild test robot
2018-02-23  1:37   ` kbuild test robot
2018-02-23 23:48     ` Raslan, KarimAllah
2018-04-12 14:33   ` Paolo Bonzini
2018-02-21 17:47 ` [PATCH 05/10] KVM/nVMX: Use kvm_vcpu_map when mapping the L1 MSR bitmap KarimAllah Ahmed
2018-02-23 21:36   ` Konrad Rzeszutek Wilk
2018-02-23 23:45     ` Raslan, KarimAllah
2018-04-12 14:36   ` Paolo Bonzini
2018-02-21 17:47 ` [PATCH 06/10] KVM/nVMX: Use kvm_vcpu_map when mapping the virtual APIC page KarimAllah Ahmed
2018-04-12 14:38   ` Paolo Bonzini
2018-04-12 17:57     ` Sean Christopherson
2018-04-12 20:23       ` Paolo Bonzini
2018-02-21 17:47 ` [PATCH 07/10] KVM/nVMX: Use kvm_vcpu_map when mapping the posted interrupt descriptor table KarimAllah Ahmed
2018-04-12 14:39   ` Paolo Bonzini [this message]
2018-02-21 17:47 ` [PATCH 08/10] KVM/X86: Use kvm_vcpu_map in emulator_cmpxchg_emulated KarimAllah Ahmed
2018-02-22  2:56   ` Raslan, KarimAllah
2018-02-21 17:47 ` [PATCH 09/10] KVM/X86: hyperv: Use kvm_vcpu_map in synic_clear_sint_msg_pending KarimAllah Ahmed
2018-02-21 17:47 ` [PATCH 10/10] KVM/X86: hyperv: Use kvm_vcpu_map in synic_deliver_msg KarimAllah Ahmed
2018-03-01 15:24 ` [PATCH 00/10] KVM/X86: Handle guest memory that does not have a struct page Raslan, KarimAllah
2018-03-01 17:51   ` Jim Mattson
2018-03-02 17:40     ` Paolo Bonzini
2018-04-12 14:59 ` Paolo Bonzini
2018-04-12 21:25   ` Raslan, KarimAllah
