linux-kernel.vger.kernel.org archive mirror
* [PATCH v7 0/9] kvm, mem-hotplug: Do not pin ept identity pagetable and apic access page.
@ 2014-09-20 10:47 Tang Chen
  2014-09-20 10:47 ` [PATCH v7 1/9] kvm: Use APIC_DEFAULT_PHYS_BASE macro as the apic access page address Tang Chen
                   ` (8 more replies)
  0 siblings, 9 replies; 21+ messages in thread
From: Tang Chen @ 2014-09-20 10:47 UTC (permalink / raw)
  To: gleb, mtosatti, nadav.amit, jan.kiszka, pbonzini
  Cc: kvm, laijs, isimatu.yasuaki, guz.fnst, linux-kernel, tangchen

The ept identity pagetable and the apic access page in kvm are pinned in
memory, so they cannot be migrated/hot-removed.

But they do not actually need to be pinned in memory.

[For the ept identity page]
Simply do not pin it. When the page is migrated, the guest will find the new
page on the next ept violation.

[For the apic access page]
The hpa of the apic access page is stored in the VMCS APIC_ACCESS_ADDR pointer.
When the apic access page is migrated, we additionally update the VMCS
APIC_ACCESS_ADDR pointer for each vcpu.

This patch-set is based on Linux 3.17.0-rc5.

NOTE: Tested with the -cpu xxx,-x2apic option.
      But since a nested vm pins some other pages in memory, memory hot-remove
      will not work if the user runs a nested vm.

Change log v6 -> v7:
1. Patches 1/9~3/9 have been applied to kvm/queue by Paolo Bonzini <pbonzini@redhat.com>.
   They are just resent here, with no changes.
2. In the new patch 4/9, add a new interface to check if secondary exec
   virtualized apic access is enabled.
3. In the new patch 6/9, rename make_all_cpus_request() to kvm_make_all_cpus_request()
   and make it non-static so that we can use it in other patches.
4. In the new patch 8/9, add an arch specific function to make the apic access
   page reload request in the mmu notifier.

Tang Chen (9):
  kvm: Use APIC_DEFAULT_PHYS_BASE macro as the apic access page address.
  kvm: Remove ept_identity_pagetable from struct kvm_arch.
  kvm: Make init_rmode_identity_map() return 0 on success.
  kvm: Add interface to check if secondary exec virtualized apic access
    is enabled.
  kvm, mem-hotplug: Reload L1's apic access page in vcpu_enter_guest().
  kvm: Rename make_all_cpus_request() to kvm_make_all_cpus_request() and
    make it non-static.
  kvm, mem-hotplug: Reload L1's apic access page on migration when L2 is
    running.
  kvm, mem-hotplug: Add arch specific mmu notifier to handle apic access
    migration.
  kvm, mem-hotplug: Unpin and remove kvm_arch->apic_access_page.

 arch/x86/include/asm/kvm_host.h |   6 ++-
 arch/x86/kvm/svm.c              |  15 +++++-
 arch/x86/kvm/vmx.c              | 104 ++++++++++++++++++++++++----------------
 arch/x86/kvm/x86.c              |  47 ++++++++++++++++--
 include/linux/kvm_host.h        |  16 ++++++-
 virt/kvm/kvm_main.c             |  13 +++--
 6 files changed, 146 insertions(+), 55 deletions(-)

-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 21+ messages in thread

* [PATCH v7 1/9] kvm: Use APIC_DEFAULT_PHYS_BASE macro as the apic access page address.
  2014-09-20 10:47 [PATCH v7 0/9] kvm, mem-hotplug: Do not pin ept identity pagetable and apic access page Tang Chen
@ 2014-09-20 10:47 ` Tang Chen
  2014-09-20 10:47 ` [PATCH v7 2/9] kvm: Remove ept_identity_pagetable from struct kvm_arch Tang Chen
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 21+ messages in thread
From: Tang Chen @ 2014-09-20 10:47 UTC (permalink / raw)
  To: gleb, mtosatti, nadav.amit, jan.kiszka, pbonzini
  Cc: kvm, laijs, isimatu.yasuaki, guz.fnst, linux-kernel, tangchen

We have APIC_DEFAULT_PHYS_BASE defined as 0xfee00000, which is also the
address of the apic access page. So use this macro instead of the raw value.

Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
Reviewed-by: Gleb Natapov <gleb@kernel.org>
---
 arch/x86/kvm/svm.c | 3 ++-
 arch/x86/kvm/vmx.c | 6 +++---
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index ddf7427..1d941ad 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1257,7 +1257,8 @@ static struct kvm_vcpu *svm_create_vcpu(struct kvm *kvm, unsigned int id)
 	svm->asid_generation = 0;
 	init_vmcb(svm);
 
-	svm->vcpu.arch.apic_base = 0xfee00000 | MSR_IA32_APICBASE_ENABLE;
+	svm->vcpu.arch.apic_base = APIC_DEFAULT_PHYS_BASE |
+				   MSR_IA32_APICBASE_ENABLE;
 	if (kvm_vcpu_is_bsp(&svm->vcpu))
 		svm->vcpu.arch.apic_base |= MSR_IA32_APICBASE_BSP;
 
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index bfe11cf..4b80ead 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -3999,13 +3999,13 @@ static int alloc_apic_access_page(struct kvm *kvm)
 		goto out;
 	kvm_userspace_mem.slot = APIC_ACCESS_PAGE_PRIVATE_MEMSLOT;
 	kvm_userspace_mem.flags = 0;
-	kvm_userspace_mem.guest_phys_addr = 0xfee00000ULL;
+	kvm_userspace_mem.guest_phys_addr = APIC_DEFAULT_PHYS_BASE;
 	kvm_userspace_mem.memory_size = PAGE_SIZE;
 	r = __kvm_set_memory_region(kvm, &kvm_userspace_mem);
 	if (r)
 		goto out;
 
-	page = gfn_to_page(kvm, 0xfee00);
+	page = gfn_to_page(kvm, APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT);
 	if (is_error_page(page)) {
 		r = -EFAULT;
 		goto out;
@@ -4477,7 +4477,7 @@ static void vmx_vcpu_reset(struct kvm_vcpu *vcpu)
 
 	vmx->vcpu.arch.regs[VCPU_REGS_RDX] = get_rdx_init_val();
 	kvm_set_cr8(&vmx->vcpu, 0);
-	apic_base_msr.data = 0xfee00000 | MSR_IA32_APICBASE_ENABLE;
+	apic_base_msr.data = APIC_DEFAULT_PHYS_BASE | MSR_IA32_APICBASE_ENABLE;
 	if (kvm_vcpu_is_bsp(&vmx->vcpu))
 		apic_base_msr.data |= MSR_IA32_APICBASE_BSP;
 	apic_base_msr.host_initiated = true;
-- 
1.8.3.1



* [PATCH v7 2/9] kvm: Remove ept_identity_pagetable from struct kvm_arch.
  2014-09-20 10:47 [PATCH v7 0/9] kvm, mem-hotplug: Do not pin ept identity pagetable and apic access page Tang Chen
  2014-09-20 10:47 ` [PATCH v7 1/9] kvm: Use APIC_DEFAULT_PHYS_BASE macro as the apic access page address Tang Chen
@ 2014-09-20 10:47 ` Tang Chen
  2014-09-20 10:47 ` [PATCH v7 3/9] kvm: Make init_rmode_identity_map() return 0 on success Tang Chen
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 21+ messages in thread
From: Tang Chen @ 2014-09-20 10:47 UTC (permalink / raw)
  To: gleb, mtosatti, nadav.amit, jan.kiszka, pbonzini
  Cc: kvm, laijs, isimatu.yasuaki, guz.fnst, linux-kernel, tangchen

kvm_arch->ept_identity_pagetable holds the ept identity pagetable page, but it
is never actually used to refer to the page.

In vcpu initialization, it indicates two things:
1. whether the ept page has been allocated
2. whether a memory slot for the identity page has been initialized

Actually, kvm_arch->ept_identity_pagetable_done is enough to tell whether the
ept identity pagetable has been initialized, so we can remove
ept_identity_pagetable.

NOTE: In the original code, the ept identity pagetable page is pinned in memory,
      so it cannot be migrated/hot-removed. After this patch, since
      kvm_arch->ept_identity_pagetable is removed, the page is no longer pinned
      in memory and can be migrated/hot-removed.

Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
Reviewed-by: Gleb Natapov <gleb@kernel.org>
---
 arch/x86/include/asm/kvm_host.h |  1 -
 arch/x86/kvm/vmx.c              | 47 +++++++++++++++++++----------------------
 arch/x86/kvm/x86.c              |  2 --
 3 files changed, 22 insertions(+), 28 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 7c492ed..35171c7 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -580,7 +580,6 @@ struct kvm_arch {
 
 	gpa_t wall_clock;
 
-	struct page *ept_identity_pagetable;
 	bool ept_identity_pagetable_done;
 	gpa_t ept_identity_map_addr;
 
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 4b80ead..4fb84ad 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -743,6 +743,7 @@ static u32 vmx_segment_access_rights(struct kvm_segment *var);
 static void vmx_sync_pir_to_irr_dummy(struct kvm_vcpu *vcpu);
 static void copy_vmcs12_to_shadow(struct vcpu_vmx *vmx);
 static void copy_shadow_to_vmcs12(struct vcpu_vmx *vmx);
+static int alloc_identity_pagetable(struct kvm *kvm);
 
 static DEFINE_PER_CPU(struct vmcs *, vmxarea);
 static DEFINE_PER_CPU(struct vmcs *, current_vmcs);
@@ -3938,21 +3939,27 @@ out:
 
 static int init_rmode_identity_map(struct kvm *kvm)
 {
-	int i, idx, r, ret;
+	int i, idx, r, ret = 0;
 	pfn_t identity_map_pfn;
 	u32 tmp;
 
 	if (!enable_ept)
 		return 1;
-	if (unlikely(!kvm->arch.ept_identity_pagetable)) {
-		printk(KERN_ERR "EPT: identity-mapping pagetable "
-			"haven't been allocated!\n");
-		return 0;
+
+	/* Protect kvm->arch.ept_identity_pagetable_done. */
+	mutex_lock(&kvm->slots_lock);
+
+	if (likely(kvm->arch.ept_identity_pagetable_done)) {
+		ret = 1;
+		goto out2;
 	}
-	if (likely(kvm->arch.ept_identity_pagetable_done))
-		return 1;
-	ret = 0;
+
 	identity_map_pfn = kvm->arch.ept_identity_map_addr >> PAGE_SHIFT;
+
+	r = alloc_identity_pagetable(kvm);
+	if (r)
+		goto out2;
+
 	idx = srcu_read_lock(&kvm->srcu);
 	r = kvm_clear_guest_page(kvm, identity_map_pfn, 0, PAGE_SIZE);
 	if (r < 0)
@@ -3970,6 +3977,9 @@ static int init_rmode_identity_map(struct kvm *kvm)
 	ret = 1;
 out:
 	srcu_read_unlock(&kvm->srcu, idx);
+
+out2:
+	mutex_unlock(&kvm->slots_lock);
 	return ret;
 }
 
@@ -4019,31 +4029,20 @@ out:
 
 static int alloc_identity_pagetable(struct kvm *kvm)
 {
-	struct page *page;
+	/* Called with kvm->slots_lock held. */
+
 	struct kvm_userspace_memory_region kvm_userspace_mem;
 	int r = 0;
 
-	mutex_lock(&kvm->slots_lock);
-	if (kvm->arch.ept_identity_pagetable)
-		goto out;
+	BUG_ON(kvm->arch.ept_identity_pagetable_done);
+
 	kvm_userspace_mem.slot = IDENTITY_PAGETABLE_PRIVATE_MEMSLOT;
 	kvm_userspace_mem.flags = 0;
 	kvm_userspace_mem.guest_phys_addr =
 		kvm->arch.ept_identity_map_addr;
 	kvm_userspace_mem.memory_size = PAGE_SIZE;
 	r = __kvm_set_memory_region(kvm, &kvm_userspace_mem);
-	if (r)
-		goto out;
-
-	page = gfn_to_page(kvm, kvm->arch.ept_identity_map_addr >> PAGE_SHIFT);
-	if (is_error_page(page)) {
-		r = -EFAULT;
-		goto out;
-	}
 
-	kvm->arch.ept_identity_pagetable = page;
-out:
-	mutex_unlock(&kvm->slots_lock);
 	return r;
 }
 
@@ -7643,8 +7642,6 @@ static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id)
 			kvm->arch.ept_identity_map_addr =
 				VMX_EPT_IDENTITY_PAGETABLE_ADDR;
 		err = -ENOMEM;
-		if (alloc_identity_pagetable(kvm) != 0)
-			goto free_vmcs;
 		if (!init_rmode_identity_map(kvm))
 			goto free_vmcs;
 	}
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 8f1e22d..e05bd58 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -7239,8 +7239,6 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
 	kvm_free_vcpus(kvm);
 	if (kvm->arch.apic_access_page)
 		put_page(kvm->arch.apic_access_page);
-	if (kvm->arch.ept_identity_pagetable)
-		put_page(kvm->arch.ept_identity_pagetable);
 	kfree(rcu_dereference_check(kvm->arch.apic_map, 1));
 }
 
-- 
1.8.3.1



* [PATCH v7 3/9] kvm: Make init_rmode_identity_map() return 0 on success.
  2014-09-20 10:47 [PATCH v7 0/9] kvm, mem-hotplug: Do not pin ept identity pagetable and apic access page Tang Chen
  2014-09-20 10:47 ` [PATCH v7 1/9] kvm: Use APIC_DEFAULT_PHYS_BASE macro as the apic access page address Tang Chen
  2014-09-20 10:47 ` [PATCH v7 2/9] kvm: Remove ept_identity_pagetable from struct kvm_arch Tang Chen
@ 2014-09-20 10:47 ` Tang Chen
  2014-09-20 10:47 ` [PATCH v7 4/9] kvm: Add interface to check if secondary exec virtualized apic access is enabled Tang Chen
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 21+ messages in thread
From: Tang Chen @ 2014-09-20 10:47 UTC (permalink / raw)
  To: gleb, mtosatti, nadav.amit, jan.kiszka, pbonzini
  Cc: kvm, laijs, isimatu.yasuaki, guz.fnst, linux-kernel, tangchen

In init_rmode_identity_map(), there are two variables holding the return
value, r and ret, and the function returns 0 on error, 1 on success. It is
only called by vmx_create_vcpu(), and r is redundant.

This patch removes the redundant variable r and makes init_rmode_identity_map()
return 0 on success, -errno on failure.

Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
---
 arch/x86/kvm/vmx.c | 31 +++++++++++++++----------------
 1 file changed, 15 insertions(+), 16 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 4fb84ad..72a0470 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -3939,45 +3939,42 @@ out:
 
 static int init_rmode_identity_map(struct kvm *kvm)
 {
-	int i, idx, r, ret = 0;
+	int i, idx, ret = 0;
 	pfn_t identity_map_pfn;
 	u32 tmp;
 
 	if (!enable_ept)
-		return 1;
+		return 0;
 
 	/* Protect kvm->arch.ept_identity_pagetable_done. */
 	mutex_lock(&kvm->slots_lock);
 
-	if (likely(kvm->arch.ept_identity_pagetable_done)) {
-		ret = 1;
+	if (likely(kvm->arch.ept_identity_pagetable_done))
 		goto out2;
-	}
 
 	identity_map_pfn = kvm->arch.ept_identity_map_addr >> PAGE_SHIFT;
 
-	r = alloc_identity_pagetable(kvm);
-	if (r)
+	ret = alloc_identity_pagetable(kvm);
+	if (ret)
 		goto out2;
 
 	idx = srcu_read_lock(&kvm->srcu);
-	r = kvm_clear_guest_page(kvm, identity_map_pfn, 0, PAGE_SIZE);
-	if (r < 0)
+	ret = kvm_clear_guest_page(kvm, identity_map_pfn, 0, PAGE_SIZE);
+	if (ret)
 		goto out;
 	/* Set up identity-mapping pagetable for EPT in real mode */
 	for (i = 0; i < PT32_ENT_PER_PAGE; i++) {
 		tmp = (i << 22) + (_PAGE_PRESENT | _PAGE_RW | _PAGE_USER |
 			_PAGE_ACCESSED | _PAGE_DIRTY | _PAGE_PSE);
-		r = kvm_write_guest_page(kvm, identity_map_pfn,
+		ret = kvm_write_guest_page(kvm, identity_map_pfn,
 				&tmp, i * sizeof(tmp), sizeof(tmp));
-		if (r < 0)
+		if (ret)
 			goto out;
 	}
 	kvm->arch.ept_identity_pagetable_done = true;
-	ret = 1;
+
 out:
 	srcu_read_unlock(&kvm->srcu, idx);
-
 out2:
 	mutex_unlock(&kvm->slots_lock);
 	return ret;
@@ -7604,11 +7601,13 @@ static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id)
 	if (err)
 		goto free_vcpu;
 
+	/* Set err to -ENOMEM to handle memory allocation error. */
+	err = -ENOMEM;
+
 	vmx->guest_msrs = kmalloc(PAGE_SIZE, GFP_KERNEL);
 	BUILD_BUG_ON(ARRAY_SIZE(vmx_msr_index) * sizeof(vmx->guest_msrs[0])
 		     > PAGE_SIZE);
 
-	err = -ENOMEM;
 	if (!vmx->guest_msrs) {
 		goto uninit_vcpu;
 	}
@@ -7641,8 +7640,8 @@ static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id)
 		if (!kvm->arch.ept_identity_map_addr)
 			kvm->arch.ept_identity_map_addr =
 				VMX_EPT_IDENTITY_PAGETABLE_ADDR;
-		err = -ENOMEM;
-		if (!init_rmode_identity_map(kvm))
+		err = init_rmode_identity_map(kvm);
+		if (err < 0)
 			goto free_vmcs;
 	}
 
-- 
1.8.3.1



* [PATCH v7 4/9] kvm: Add interface to check if secondary exec virtualized apic access is enabled.
  2014-09-20 10:47 [PATCH v7 0/9] kvm, mem-hotplug: Do not pin ept identity pagetable and apic access page Tang Chen
                   ` (2 preceding siblings ...)
  2014-09-20 10:47 ` [PATCH v7 3/9] kvm: Make init_rmode_identity_map() return 0 on success Tang Chen
@ 2014-09-20 10:47 ` Tang Chen
  2014-09-22  9:50   ` Paolo Bonzini
  2014-09-20 10:47 ` [PATCH v7 5/9] kvm, mem-hotplug: Reload L1's apic access page in vcpu_enter_guest() Tang Chen
                   ` (4 subsequent siblings)
  8 siblings, 1 reply; 21+ messages in thread
From: Tang Chen @ 2014-09-20 10:47 UTC (permalink / raw)
  To: gleb, mtosatti, nadav.amit, jan.kiszka, pbonzini
  Cc: kvm, laijs, isimatu.yasuaki, guz.fnst, linux-kernel, tangchen

We want to migrate the apic access page pinned by the guest (L1 and L2) to make
memory hotplug available.

There are two situations that need to be handled for the apic access page used
by an L2 vm:

1. L1 prepares a separate apic access page for L2.

   L2 pins a lot of pages in memory, so even if we can migrate the apic access
   page, memory hotplug is not available while L2 is running. Do not handle
   this case for now; migrate the apic access page only.

2. L1 and L2 share one apic access page.

   Since we will migrate L1's apic access page, migration must be handled in
   the following situations:

   1) when L0 is running: Update L1's vmcs in the next L0->L1 entry and L2's
      vmcs in the next L1->L2 entry.

   2) when L1 is running: Force a L1->L0 exit, update L1's vmcs in the next
      L0->L1 entry and L2's vmcs in the next L1->L2 entry.

   3) when L2 is running: Force a L2->L0 exit, update L2's vmcs in the next
      L0->L2 entry and L1's vmcs in the next L2->L1 exit.

Since we don't handle the situation in which L1 and L2 have separate apic
access pages, when we update the vmcs we need to check whether we are in L2
and whether L2's secondary exec virtualized apic access is enabled.

This patch adds an interface to check whether L2's secondary exec virtualized
apic access is enabled, because struct vcpu_vmx cannot be accessed outside vmx.c.

Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
---
 arch/x86/include/asm/kvm_host.h | 1 +
 arch/x86/kvm/svm.c              | 6 ++++++
 arch/x86/kvm/vmx.c              | 9 +++++++++
 3 files changed, 16 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 35171c7..69fe032 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -739,6 +739,7 @@ struct kvm_x86_ops {
 	void (*hwapic_isr_update)(struct kvm *kvm, int isr);
 	void (*load_eoi_exitmap)(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap);
 	void (*set_virtual_x2apic_mode)(struct kvm_vcpu *vcpu, bool set);
+	bool (*has_secondary_apic_access)(struct kvm_vcpu *vcpu);
 	void (*deliver_posted_interrupt)(struct kvm_vcpu *vcpu, int vector);
 	void (*sync_pir_to_irr)(struct kvm_vcpu *vcpu);
 	int (*set_tss_addr)(struct kvm *kvm, unsigned int addr);
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 1d941ad..9c8ae32 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -3619,6 +3619,11 @@ static void svm_set_virtual_x2apic_mode(struct kvm_vcpu *vcpu, bool set)
 	return;
 }
 
+static bool svm_has_secondary_apic_access(struct kvm_vcpu *vcpu)
+{
+	return false;
+}
+
 static int svm_vm_has_apicv(struct kvm *kvm)
 {
 	return 0;
@@ -4373,6 +4378,7 @@ static struct kvm_x86_ops svm_x86_ops = {
 	.enable_irq_window = enable_irq_window,
 	.update_cr8_intercept = update_cr8_intercept,
 	.set_virtual_x2apic_mode = svm_set_virtual_x2apic_mode,
+	.has_secondary_apic_access = svm_has_secondary_apic_access,
 	.vm_has_apicv = svm_vm_has_apicv,
 	.load_eoi_exitmap = svm_load_eoi_exitmap,
 	.hwapic_isr_update = svm_hwapic_isr_update,
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 72a0470..0b541d9 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -7090,6 +7090,14 @@ static void vmx_set_virtual_x2apic_mode(struct kvm_vcpu *vcpu, bool set)
 	vmx_set_msr_bitmap(vcpu);
 }
 
+static bool vmx_has_secondary_apic_access(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_vmx *vmx = to_vmx(vcpu);
+
+	return (vmx->nested.current_vmcs12->secondary_vm_exec_control &
+		SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES);
+}
+
 static void vmx_hwapic_isr_update(struct kvm *kvm, int isr)
 {
 	u16 status;
@@ -8909,6 +8917,7 @@ static struct kvm_x86_ops vmx_x86_ops = {
 	.enable_irq_window = enable_irq_window,
 	.update_cr8_intercept = update_cr8_intercept,
 	.set_virtual_x2apic_mode = vmx_set_virtual_x2apic_mode,
+	.has_secondary_apic_access = vmx_has_secondary_apic_access,
 	.vm_has_apicv = vmx_vm_has_apicv,
 	.load_eoi_exitmap = vmx_load_eoi_exitmap,
 	.hwapic_irr_update = vmx_hwapic_irr_update,
-- 
1.8.3.1



* [PATCH v7 5/9] kvm, mem-hotplug: Reload L1's apic access page in vcpu_enter_guest().
  2014-09-20 10:47 [PATCH v7 0/9] kvm, mem-hotplug: Do not pin ept identity pagetable and apic access page Tang Chen
                   ` (3 preceding siblings ...)
  2014-09-20 10:47 ` [PATCH v7 4/9] kvm: Add interface to check if secondary exec virtualized apic access is enabled Tang Chen
@ 2014-09-20 10:47 ` Tang Chen
  2014-09-22  9:33   ` Paolo Bonzini
  2014-09-20 10:47 ` [PATCH v7 6/9] kvm: Rename make_all_cpus_request() to kvm_make_all_cpus_request() and make it non-static Tang Chen
                   ` (3 subsequent siblings)
  8 siblings, 1 reply; 21+ messages in thread
From: Tang Chen @ 2014-09-20 10:47 UTC (permalink / raw)
  To: gleb, mtosatti, nadav.amit, jan.kiszka, pbonzini
  Cc: kvm, laijs, isimatu.yasuaki, guz.fnst, linux-kernel, tangchen

We are handling the "L1 and L2 share one apic access page" situation when
migrating the apic access page. Migration must be handled in the following
situations:

   1) when L0 is running: Update L1's vmcs in the next L0->L1 entry and L2's
      vmcs in the next L1->L2 entry.

   2) when L1 is running: Force a L1->L0 exit, update L1's vmcs in the next
      L0->L1 entry and L2's vmcs in the next L1->L2 entry.

   3) when L2 is running: Force a L2->L0 exit, update L2's vmcs in the next
      L0->L2 entry and L1's vmcs in the next L2->L1 exit.

This patch handles 1) and 2) by adding a new vcpu request named
KVM_REQ_APIC_PAGE_RELOAD to reload the apic access page on the next L0->L1 entry.

Since we don't handle the situation in which L1 and L2 have separate apic
access pages, when we update the vmcs we need to check whether we are in L2
and whether L2's secondary exec virtualized apic access is enabled.

Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
---
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/svm.c              |  6 ++++++
 arch/x86/kvm/vmx.c              |  6 ++++++
 arch/x86/kvm/x86.c              | 23 +++++++++++++++++++++++
 include/linux/kvm_host.h        |  1 +
 5 files changed, 37 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 69fe032..56156eb 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -740,6 +740,7 @@ struct kvm_x86_ops {
 	void (*load_eoi_exitmap)(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap);
 	void (*set_virtual_x2apic_mode)(struct kvm_vcpu *vcpu, bool set);
 	bool (*has_secondary_apic_access)(struct kvm_vcpu *vcpu);
+	void (*set_apic_access_page_addr)(struct kvm *kvm, hpa_t hpa);
 	void (*deliver_posted_interrupt)(struct kvm_vcpu *vcpu, int vector);
 	void (*sync_pir_to_irr)(struct kvm_vcpu *vcpu);
 	int (*set_tss_addr)(struct kvm *kvm, unsigned int addr);
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 9c8ae32..99378d7 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -3624,6 +3624,11 @@ static bool svm_has_secondary_apic_access(struct kvm_vcpu *vcpu)
 	return false;
 }
 
+static void svm_set_apic_access_page_addr(struct kvm *kvm, hpa_t hpa)
+{
+	return;
+}
+
 static int svm_vm_has_apicv(struct kvm *kvm)
 {
 	return 0;
@@ -4379,6 +4384,7 @@ static struct kvm_x86_ops svm_x86_ops = {
 	.update_cr8_intercept = update_cr8_intercept,
 	.set_virtual_x2apic_mode = svm_set_virtual_x2apic_mode,
 	.has_secondary_apic_access = svm_has_secondary_apic_access,
+	.set_apic_access_page_addr = svm_set_apic_access_page_addr,
 	.vm_has_apicv = svm_vm_has_apicv,
 	.load_eoi_exitmap = svm_load_eoi_exitmap,
 	.hwapic_isr_update = svm_hwapic_isr_update,
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 0b541d9..c8e90ec 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -7098,6 +7098,11 @@ static bool vmx_has_secondary_apic_access(struct kvm_vcpu *vcpu)
 		SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES);
 }
 
+static void vmx_set_apic_access_page_addr(struct kvm *kvm, hpa_t hpa)
+{
+	vmcs_write64(APIC_ACCESS_ADDR, hpa);
+}
+
 static void vmx_hwapic_isr_update(struct kvm *kvm, int isr)
 {
 	u16 status;
@@ -8918,6 +8923,7 @@ static struct kvm_x86_ops vmx_x86_ops = {
 	.update_cr8_intercept = update_cr8_intercept,
 	.set_virtual_x2apic_mode = vmx_set_virtual_x2apic_mode,
 	.has_secondary_apic_access = vmx_has_secondary_apic_access,
+	.set_apic_access_page_addr = vmx_set_apic_access_page_addr,
 	.vm_has_apicv = vmx_vm_has_apicv,
 	.load_eoi_exitmap = vmx_load_eoi_exitmap,
 	.hwapic_irr_update = vmx_hwapic_irr_update,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index e05bd58..fc54fa6 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5989,6 +5989,27 @@ static void vcpu_scan_ioapic(struct kvm_vcpu *vcpu)
 	kvm_apic_update_tmr(vcpu, tmr);
 }
 
+static void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
+{
+	/*
+	 * Only APIC access page shared by L1 and L2 vm is handled. The APIC
+	 * access page prepared by L1 for L2's execution is still pinned in
+	 * memory, and it cannot be migrated.
+	 */
+	if (!is_guest_mode(vcpu) ||
+	    !kvm_x86_ops->has_secondary_apic_access(vcpu)) {
+		/*
+		 * APIC access page could be migrated. When the page is being
+		 * migrated, GUP will wait till the migrate entry is replaced
+		 * with the new pte entry pointing to the new page.
+		 */
+		vcpu->kvm->arch.apic_access_page = gfn_to_page(vcpu->kvm,
+				APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT);
+		kvm_x86_ops->set_apic_access_page_addr(vcpu->kvm,
+				page_to_phys(vcpu->kvm->arch.apic_access_page));
+	}
+}
+
 /*
  * Returns 1 to let __vcpu_run() continue the guest execution loop without
  * exiting to the userspace.  Otherwise, the value will be returned to the
@@ -6049,6 +6070,8 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 			kvm_deliver_pmi(vcpu);
 		if (kvm_check_request(KVM_REQ_SCAN_IOAPIC, vcpu))
 			vcpu_scan_ioapic(vcpu);
+		if (kvm_check_request(KVM_REQ_APIC_PAGE_RELOAD, vcpu))
+			kvm_vcpu_reload_apic_access_page(vcpu);
 	}
 
 	if (kvm_check_request(KVM_REQ_EVENT, vcpu) || req_int_win) {
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index a4c33b3..c23236a 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -136,6 +136,7 @@ static inline bool is_error_page(struct page *page)
 #define KVM_REQ_GLOBAL_CLOCK_UPDATE 22
 #define KVM_REQ_ENABLE_IBS        23
 #define KVM_REQ_DISABLE_IBS       24
+#define KVM_REQ_APIC_PAGE_RELOAD  25
 
 #define KVM_USERSPACE_IRQ_SOURCE_ID		0
 #define KVM_IRQFD_RESAMPLE_IRQ_SOURCE_ID	1
-- 
1.8.3.1



* [PATCH v7 6/9] kvm: Rename make_all_cpus_request() to kvm_make_all_cpus_request() and make it non-static.
  2014-09-20 10:47 [PATCH v7 0/9] kvm, mem-hotplug: Do not pin ept identity pagetable and apic access page Tang Chen
                   ` (4 preceding siblings ...)
  2014-09-20 10:47 ` [PATCH v7 5/9] kvm, mem-hotplug: Reload L1's apic access page in vcpu_enter_guest() Tang Chen
@ 2014-09-20 10:47 ` Tang Chen
  2014-09-22  9:28   ` Paolo Bonzini
  2014-09-20 10:47 ` [PATCH v7 7/9] kvm, mem-hotplug: Reload L1's apic access page on migration when L2 is running Tang Chen
                   ` (2 subsequent siblings)
  8 siblings, 1 reply; 21+ messages in thread
From: Tang Chen @ 2014-09-20 10:47 UTC (permalink / raw)
  To: gleb, mtosatti, nadav.amit, jan.kiszka, pbonzini
  Cc: kvm, laijs, isimatu.yasuaki, guz.fnst, linux-kernel, tangchen

Different architectures need different handling, so we will add some arch
specific code later. That code may need to make cpu requests outside
kvm_main.c, so make make_all_cpus_request() non-static and rename it to
kvm_make_all_cpus_request().

Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
---
 include/linux/kvm_host.h |  1 +
 virt/kvm/kvm_main.c      | 10 +++++-----
 2 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index c23236a..73de13c 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -580,6 +580,7 @@ void kvm_flush_remote_tlbs(struct kvm *kvm);
 void kvm_reload_remote_mmus(struct kvm *kvm);
 void kvm_make_mclock_inprogress_request(struct kvm *kvm);
 void kvm_make_scan_ioapic_request(struct kvm *kvm);
+bool kvm_make_all_cpus_request(struct kvm *kvm, unsigned int req);
 
 long kvm_arch_dev_ioctl(struct file *filp,
 			unsigned int ioctl, unsigned long arg);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 33712fb..0f8b6f6 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -152,7 +152,7 @@ static void ack_flush(void *_completed)
 {
 }
 
-static bool make_all_cpus_request(struct kvm *kvm, unsigned int req)
+bool kvm_make_all_cpus_request(struct kvm *kvm, unsigned int req)
 {
 	int i, cpu, me;
 	cpumask_var_t cpus;
@@ -189,7 +189,7 @@ void kvm_flush_remote_tlbs(struct kvm *kvm)
 	long dirty_count = kvm->tlbs_dirty;
 
 	smp_mb();
-	if (make_all_cpus_request(kvm, KVM_REQ_TLB_FLUSH))
+	if (kvm_make_all_cpus_request(kvm, KVM_REQ_TLB_FLUSH))
 		++kvm->stat.remote_tlb_flush;
 	cmpxchg(&kvm->tlbs_dirty, dirty_count, 0);
 }
@@ -197,17 +197,17 @@ EXPORT_SYMBOL_GPL(kvm_flush_remote_tlbs);
 
 void kvm_reload_remote_mmus(struct kvm *kvm)
 {
-	make_all_cpus_request(kvm, KVM_REQ_MMU_RELOAD);
+	kvm_make_all_cpus_request(kvm, KVM_REQ_MMU_RELOAD);
 }
 
 void kvm_make_mclock_inprogress_request(struct kvm *kvm)
 {
-	make_all_cpus_request(kvm, KVM_REQ_MCLOCK_INPROGRESS);
+	kvm_make_all_cpus_request(kvm, KVM_REQ_MCLOCK_INPROGRESS);
 }
 
 void kvm_make_scan_ioapic_request(struct kvm *kvm)
 {
-	make_all_cpus_request(kvm, KVM_REQ_SCAN_IOAPIC);
+	kvm_make_all_cpus_request(kvm, KVM_REQ_SCAN_IOAPIC);
 }
 
 int kvm_vcpu_init(struct kvm_vcpu *vcpu, struct kvm *kvm, unsigned id)
-- 
1.8.3.1



* [PATCH v7 7/9] kvm, mem-hotplug: Reload L1's apic access page on migration when L2 is running.
  2014-09-20 10:47 [PATCH v7 0/9] kvm, mem-hotplug: Do not pin ept identity pagetable and apic access page Tang Chen
                   ` (5 preceding siblings ...)
  2014-09-20 10:47 ` [PATCH v7 6/9] kvm: Rename make_all_cpus_request() to kvm_make_all_cpus_request() and make it non-static Tang Chen
@ 2014-09-20 10:47 ` Tang Chen
  2014-09-22  9:29   ` Paolo Bonzini
  2014-09-20 10:47 ` [PATCH v7 8/9] kvm, mem-hotplug: Add arch specific mmu notifier to handle apic access migration Tang Chen
  2014-09-20 10:47 ` [PATCH v7 9/9] kvm, mem-hotplug: Unpin and remove kvm_arch->apic_access_page Tang Chen
  8 siblings, 1 reply; 21+ messages in thread
From: Tang Chen @ 2014-09-20 10:47 UTC (permalink / raw)
  To: gleb, mtosatti, nadav.amit, jan.kiszka, pbonzini
  Cc: kvm, laijs, isimatu.yasuaki, guz.fnst, linux-kernel, tangchen

We are handling the "L1 and L2 share one apic access page" situation when
migrating the apic access page. Migration must be handled in the following
situations:

   1) when L0 is running: Update L1's vmcs in the next L0->L1 entry and L2's
      vmcs in the next L1->L2 entry.

   2) when L1 is running: Force a L1->L0 exit, update L1's vmcs in the next
      L0->L1 entry and L2's vmcs in the next L1->L2 entry.

   3) when L2 is running: Force a L2->L0 exit, update L2's vmcs in the next
      L0->L2 entry and L1's vmcs in the next L2->L1 exit.

This patch handles 3).

On the L0->L2 entry, L2's vmcs is already updated in prepare_vmcs02(), called
by nested_vmx_run(), so nothing needs to be done there.

For the L2->L1 exit, this patch requests an apic access page reload in
nested_vmx_vmexit().

Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
---
 arch/x86/include/asm/kvm_host.h | 1 +
 arch/x86/kvm/vmx.c              | 6 ++++++
 arch/x86/kvm/x86.c              | 3 ++-
 3 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 56156eb..1a8317e 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1047,6 +1047,7 @@ int kvm_cpu_has_interrupt(struct kvm_vcpu *vcpu);
 int kvm_arch_interrupt_allowed(struct kvm_vcpu *vcpu);
 int kvm_cpu_get_interrupt(struct kvm_vcpu *v);
 void kvm_vcpu_reset(struct kvm_vcpu *vcpu);
+void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu);
 
 void kvm_define_shared_msr(unsigned index, u32 msr);
 void kvm_set_shared_msr(unsigned index, u64 val, u64 mask);
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index c8e90ec..baac78a 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -8803,6 +8803,12 @@ static void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 exit_reason,
 	}
 
 	/*
+	 * While L2 was running, the mmu notifier reloaded the page's hpa
+	 * only into L2's vmcs. Reload it for L1 before entering L1.
+	 */
+	kvm_vcpu_reload_apic_access_page(vcpu);
+
+	/*
 	 * Exiting from L2 to L1, we're now back to L1 which thinks it just
 	 * finished a VMLAUNCH or VMRESUME instruction, so we need to set the
 	 * success or failure flag accordingly.
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index fc54fa6..2ae2dc7 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5989,7 +5989,7 @@ static void vcpu_scan_ioapic(struct kvm_vcpu *vcpu)
 	kvm_apic_update_tmr(vcpu, tmr);
 }
 
-static void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
+void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
 {
 	/*
 	 * Only APIC access page shared by L1 and L2 vm is handled. The APIC
@@ -6009,6 +6009,7 @@ static void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
 				page_to_phys(vcpu->kvm->arch.apic_access_page));
 	}
 }
+EXPORT_SYMBOL_GPL(kvm_vcpu_reload_apic_access_page);
 
 /*
  * Returns 1 to let __vcpu_run() continue the guest execution loop without
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH v7 8/9] kvm, mem-hotplug: Add arch specific mmu notifier to handle apic access migration.
  2014-09-20 10:47 [PATCH v7 0/9] kvm, mem-hotplug: Do not pin ept identity pagetable and apic access page Tang Chen
                   ` (6 preceding siblings ...)
  2014-09-20 10:47 ` [PATCH v7 7/9] kvm, mem-hotplug: Reload L1's apic access page on migration when L2 is running Tang Chen
@ 2014-09-20 10:47 ` Tang Chen
  2014-09-22  9:31   ` Paolo Bonzini
  2014-09-20 10:47 ` [PATCH v7 9/9] kvm, mem-hotplug: Unpin and remove kvm_arch->apic_access_page Tang Chen
  8 siblings, 1 reply; 21+ messages in thread
From: Tang Chen @ 2014-09-20 10:47 UTC (permalink / raw)
  To: gleb, mtosatti, nadav.amit, jan.kiszka, pbonzini
  Cc: kvm, laijs, isimatu.yasuaki, guz.fnst, linux-kernel, tangchen

We handle the "L1 and L2 share one apic access page" situation when migrating
the apic access page. Some handling is needed when migration happens in the
following situations:

   1) when L0 is running: Update L1's vmcs in the next L0->L1 entry and L2's
      vmcs in the next L1->L2 entry.

   2) when L1 is running: Force a L1->L0 exit, update L1's vmcs in the next
      L0->L1 entry and L2's vmcs in the next L1->L2 entry.

   3) when L2 is running: Force a L2->L0 exit, update L2's vmcs in the next
      L0->L2 entry and L1's vmcs in the next L2->L1 exit.

This patch forces an L1->L0 exit or an L2->L0 exit, using the mmu notifier, when
the shared apic access page is migrated. Since the apic access page is only used
on Intel x86, this is arch specific code.

Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
---
 arch/x86/kvm/x86.c       | 11 +++++++++++
 include/linux/kvm_host.h | 14 +++++++++++++-
 virt/kvm/kvm_main.c      |  3 +++
 3 files changed, 27 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 2ae2dc7..7dd4179 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -6011,6 +6011,17 @@ void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_reload_apic_access_page);
 
+void kvm_arch_mmu_notifier_invalidate_page(struct kvm *kvm,
+					   unsigned long address)
+{
+	/*
+	 * The physical address of apic access page is stored in VMCS.
+	 * Update it when it becomes invalid.
+	 */
+	if (address == gfn_to_hva(kvm, APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT))
+		kvm_make_all_cpus_request(kvm, KVM_REQ_APIC_PAGE_RELOAD);
+}
+
 /*
  * Returns 1 to let __vcpu_run() continue the guest execution loop without
  * exiting to the userspace.  Otherwise, the value will be returned to the
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 73de13c..b6e4d38 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -917,7 +917,19 @@ static inline int mmu_notifier_retry(struct kvm *kvm, unsigned long mmu_seq)
 		return 1;
 	return 0;
 }
-#endif
+
+#ifdef _ASM_X86_KVM_HOST_H
+void kvm_arch_mmu_notifier_invalidate_page(struct kvm *kvm,
+					   unsigned long address);
+#else /* _ASM_X86_KVM_HOST_H */
+inline void kvm_arch_mmu_notifier_invalidate_page(struct kvm *kvm,
+						  unsigned long address)
+{
+	return;
+}
+#endif /* _ASM_X86_KVM_HOST_H */
+
+#endif /* CONFIG_MMU_NOTIFIER & KVM_ARCH_WANT_MMU_NOTIFIER */
 
 #ifdef CONFIG_HAVE_KVM_IRQ_ROUTING
 
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 0f8b6f6..5427973d 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -295,6 +295,9 @@ static void kvm_mmu_notifier_invalidate_page(struct mmu_notifier *mn,
 		kvm_flush_remote_tlbs(kvm);
 
 	spin_unlock(&kvm->mmu_lock);
+
+	kvm_arch_mmu_notifier_invalidate_page(kvm, address);
+
 	srcu_read_unlock(&kvm->srcu, idx);
 }
 
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH v7 9/9] kvm, mem-hotplug: Unpin and remove kvm_arch->apic_access_page.
  2014-09-20 10:47 [PATCH v7 0/9] kvm, mem-hotplug: Do not pin ept identity pagetable and apic access page Tang Chen
                   ` (7 preceding siblings ...)
  2014-09-20 10:47 ` [PATCH v7 8/9] kvm, mem-hotplug: Add arch specific mmu notifier to handle apic access migration Tang Chen
@ 2014-09-20 10:47 ` Tang Chen
  2014-09-22  9:28   ` Paolo Bonzini
  8 siblings, 1 reply; 21+ messages in thread
From: Tang Chen @ 2014-09-20 10:47 UTC (permalink / raw)
  To: gleb, mtosatti, nadav.amit, jan.kiszka, pbonzini
  Cc: kvm, laijs, isimatu.yasuaki, guz.fnst, linux-kernel, tangchen

To make the apic access page migratable, we no longer pin it in memory.
When it is migrated, we should reload its physical address for all
vmcses. But if we did this, all vcpus would access
kvm_arch->apic_access_page without any locking. This is not safe.

Actually, we do not need kvm_arch->apic_access_page anymore. Since the
apic access page is no longer pinned in memory, we can remove
kvm_arch->apic_access_page. When we need to write its physical address
into a vmcs, use gfn_to_page() to get its page struct, which also
pins it, and unpin it right afterwards.

Suggested-by: Gleb Natapov <gleb@kernel.org>
Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
---
 arch/x86/include/asm/kvm_host.h |  2 +-
 arch/x86/kvm/vmx.c              | 15 +++++++++------
 arch/x86/kvm/x86.c              | 16 +++++++++++-----
 3 files changed, 21 insertions(+), 12 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 1a8317e..9fb3d4c 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -576,7 +576,7 @@ struct kvm_arch {
 	struct kvm_apic_map *apic_map;
 
 	unsigned int tss_addr;
-	struct page *apic_access_page;
+	bool apic_access_page_done;
 
 	gpa_t wall_clock;
 
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index baac78a..12f0715 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -4002,7 +4002,7 @@ static int alloc_apic_access_page(struct kvm *kvm)
 	int r = 0;
 
 	mutex_lock(&kvm->slots_lock);
-	if (kvm->arch.apic_access_page)
+	if (kvm->arch.apic_access_page_done)
 		goto out;
 	kvm_userspace_mem.slot = APIC_ACCESS_PAGE_PRIVATE_MEMSLOT;
 	kvm_userspace_mem.flags = 0;
@@ -4018,7 +4018,12 @@ static int alloc_apic_access_page(struct kvm *kvm)
 		goto out;
 	}
 
-	kvm->arch.apic_access_page = page;
+	/*
+	 * Do not pin apic access page in memory so that memory hotplug
+	 * process is able to migrate it.
+	 */
+	put_page(page);
+	kvm->arch.apic_access_page_done = true;
 out:
 	mutex_unlock(&kvm->slots_lock);
 	return r;
@@ -4534,8 +4539,7 @@ static void vmx_vcpu_reset(struct kvm_vcpu *vcpu)
 	}
 
 	if (vm_need_virtualize_apic_accesses(vmx->vcpu.kvm))
-		vmcs_write64(APIC_ACCESS_ADDR,
-			     page_to_phys(vmx->vcpu.kvm->arch.apic_access_page));
+		kvm_vcpu_reload_apic_access_page(vcpu);
 
 	if (vmx_vm_has_apicv(vcpu->kvm))
 		memset(&vmx->pi_desc, 0, sizeof(struct pi_desc));
@@ -8003,8 +8007,7 @@ static void prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12)
 		} else if (vm_need_virtualize_apic_accesses(vmx->vcpu.kvm)) {
 			exec_control |=
 				SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
-			vmcs_write64(APIC_ACCESS_ADDR,
-				page_to_phys(vcpu->kvm->arch.apic_access_page));
+			kvm_vcpu_reload_apic_access_page(vcpu);
 		}
 
 		vmcs_write32(SECONDARY_VM_EXEC_CONTROL, exec_control);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 7dd4179..996af6e 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5991,6 +5991,8 @@ static void vcpu_scan_ioapic(struct kvm_vcpu *vcpu)
 
 void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
 {
+	struct page *page = NULL;
+
 	/*
 	 * Only APIC access page shared by L1 and L2 vm is handled. The APIC
 	 * access page prepared by L1 for L2's execution is still pinned in
@@ -6003,10 +6005,16 @@ void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
 		 * migrated, GUP will wait till the migrate entry is replaced
 		 * with the new pte entry pointing to the new page.
 		 */
-		vcpu->kvm->arch.apic_access_page = gfn_to_page(vcpu->kvm,
-				APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT);
+		page = gfn_to_page(vcpu->kvm,
+				   APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT);
 		kvm_x86_ops->set_apic_access_page_addr(vcpu->kvm,
-				page_to_phys(vcpu->kvm->arch.apic_access_page));
+						       page_to_phys(page));
+
+		/*
+		 * Do not pin apic access page in memory so that memory hotplug
+		 * process is able to migrate it.
+		 */
+		put_page(page);
 	}
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_reload_apic_access_page);
@@ -7272,8 +7280,6 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
 	kfree(kvm->arch.vpic);
 	kfree(kvm->arch.vioapic);
 	kvm_free_vcpus(kvm);
-	if (kvm->arch.apic_access_page)
-		put_page(kvm->arch.apic_access_page);
 	kfree(rcu_dereference_check(kvm->arch.apic_map, 1));
 }
 
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* Re: [PATCH v7 9/9] kvm, mem-hotplug: Unpin and remove kvm_arch->apic_access_page.
  2014-09-20 10:47 ` [PATCH v7 9/9] kvm, mem-hotplug: Unpin and remove kvm_arch->apic_access_page Tang Chen
@ 2014-09-22  9:28   ` Paolo Bonzini
  0 siblings, 0 replies; 21+ messages in thread
From: Paolo Bonzini @ 2014-09-22  9:28 UTC (permalink / raw)
  To: Tang Chen, gleb, mtosatti, nadav.amit, jan.kiszka
  Cc: kvm, laijs, isimatu.yasuaki, guz.fnst, linux-kernel

On 20/09/2014 12:47, Tang Chen wrote:
> @@ -4534,8 +4539,7 @@ static void vmx_vcpu_reset(struct kvm_vcpu *vcpu)
>  	}
>  
>  	if (vm_need_virtualize_apic_accesses(vmx->vcpu.kvm))
> -		vmcs_write64(APIC_ACCESS_ADDR,
> -			     page_to_phys(vmx->vcpu.kvm->arch.apic_access_page));
> +		kvm_vcpu_reload_apic_access_page(vcpu);
>  
>  	if (vmx_vm_has_apicv(vcpu->kvm))
>  		memset(&vmx->pi_desc, 0, sizeof(struct pi_desc));

Please make the call unconditional in kvm_vcpu_reset.

> 
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index 9c8ae32..99378d7 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -3624,6 +3624,11 @@ static bool svm_has_secondary_apic_access(struct kvm_vcpu *vcpu)
>  	return false;
>  }
>  
> +static void svm_set_apic_access_page_addr(struct kvm *kvm, hpa_t hpa)
> +{
> +	return;
> +}
> +
>  static int svm_vm_has_apicv(struct kvm *kvm)
>  {
>  	return 0;

I will ask you to modify the vmx_has_secondary_apic_access callback in
such a way that svm_set_apic_access_page_addr is not needed, so please
remove it from v8.

Paolo

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v7 6/9] kvm: Rename make_all_cpus_request() to kvm_make_all_cpus_request() and make it non-static.
  2014-09-20 10:47 ` [PATCH v7 6/9] kvm: Rename make_all_cpus_request() to kvm_make_all_cpus_request() and make it non-static Tang Chen
@ 2014-09-22  9:28   ` Paolo Bonzini
  0 siblings, 0 replies; 21+ messages in thread
From: Paolo Bonzini @ 2014-09-22  9:28 UTC (permalink / raw)
  To: Tang Chen, gleb, mtosatti, nadav.amit, jan.kiszka
  Cc: kvm, laijs, isimatu.yasuaki, guz.fnst, linux-kernel

On 20/09/2014 12:47, Tang Chen wrote:
> Since different architectures need different handling, we will add some arch specific
> code later. The code may need to make cpu requests outside kvm_main.c, so make it
> non-static and rename it to kvm_make_all_cpus_request().
> 
> Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
> ---
>  include/linux/kvm_host.h |  1 +
>  virt/kvm/kvm_main.c      | 10 +++++-----
>  2 files changed, 6 insertions(+), 5 deletions(-)
> 
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index c23236a..73de13c 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -580,6 +580,7 @@ void kvm_flush_remote_tlbs(struct kvm *kvm);
>  void kvm_reload_remote_mmus(struct kvm *kvm);
>  void kvm_make_mclock_inprogress_request(struct kvm *kvm);
>  void kvm_make_scan_ioapic_request(struct kvm *kvm);
> +bool kvm_make_all_cpus_request(struct kvm *kvm, unsigned int req);
>  
>  long kvm_arch_dev_ioctl(struct file *filp,
>  			unsigned int ioctl, unsigned long arg);
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 33712fb..0f8b6f6 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -152,7 +152,7 @@ static void ack_flush(void *_completed)
>  {
>  }
>  
> -static bool make_all_cpus_request(struct kvm *kvm, unsigned int req)
> +bool kvm_make_all_cpus_request(struct kvm *kvm, unsigned int req)
>  {
>  	int i, cpu, me;
>  	cpumask_var_t cpus;
> @@ -189,7 +189,7 @@ void kvm_flush_remote_tlbs(struct kvm *kvm)
>  	long dirty_count = kvm->tlbs_dirty;
>  
>  	smp_mb();
> -	if (make_all_cpus_request(kvm, KVM_REQ_TLB_FLUSH))
> +	if (kvm_make_all_cpus_request(kvm, KVM_REQ_TLB_FLUSH))
>  		++kvm->stat.remote_tlb_flush;
>  	cmpxchg(&kvm->tlbs_dirty, dirty_count, 0);
>  }
> @@ -197,17 +197,17 @@ EXPORT_SYMBOL_GPL(kvm_flush_remote_tlbs);
>  
>  void kvm_reload_remote_mmus(struct kvm *kvm)
>  {
> -	make_all_cpus_request(kvm, KVM_REQ_MMU_RELOAD);
> +	kvm_make_all_cpus_request(kvm, KVM_REQ_MMU_RELOAD);
>  }
>  
>  void kvm_make_mclock_inprogress_request(struct kvm *kvm)
>  {
> -	make_all_cpus_request(kvm, KVM_REQ_MCLOCK_INPROGRESS);
> +	kvm_make_all_cpus_request(kvm, KVM_REQ_MCLOCK_INPROGRESS);
>  }
>  
>  void kvm_make_scan_ioapic_request(struct kvm *kvm)
>  {
> -	make_all_cpus_request(kvm, KVM_REQ_SCAN_IOAPIC);
> +	kvm_make_all_cpus_request(kvm, KVM_REQ_SCAN_IOAPIC);
>  }
>  
>  int kvm_vcpu_init(struct kvm_vcpu *vcpu, struct kvm *kvm, unsigned id)
> 

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v7 7/9] kvm, mem-hotplug: Reload L1's apic access page on migration when L2 is running.
  2014-09-20 10:47 ` [PATCH v7 7/9] kvm, mem-hotplug: Reload L1's apic access page on migration when L2 is running Tang Chen
@ 2014-09-22  9:29   ` Paolo Bonzini
  0 siblings, 0 replies; 21+ messages in thread
From: Paolo Bonzini @ 2014-09-22  9:29 UTC (permalink / raw)
  To: Tang Chen, gleb, mtosatti, nadav.amit, jan.kiszka
  Cc: kvm, laijs, isimatu.yasuaki, guz.fnst, linux-kernel

On 20/09/2014 12:47, Tang Chen wrote:
> We are handling "L1 and L2 share one apic access page" situation when migrating
> apic access page. We should do some handling when migration happens in the
> following situations:
> 
>    1) when L0 is running: Update L1's vmcs in the next L0->L1 entry and L2's
>       vmcs in the next L1->L2 entry.
> 
>    2) when L1 is running: Force a L1->L0 exit, update L1's vmcs in the next
>       L0->L1 entry and L2's vmcs in the next L1->L2 entry.
> 
>    3) when L2 is running: Force a L2->L0 exit, update L2's vmcs in the next
>       L0->L2 entry and L1's vmcs in the next L2->L1 exit.
> 
> This patch handles 3).
> 
> In L0->L2 entry, L2's vmcs will be updated in prepare_vmcs02() called by
> nested_vm_run(). So we need to do nothing.
> 
> In L2->L1 exit, this patch requests apic access page reload in L2->L1 vmexit.
> 
> Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
> ---
>  arch/x86/include/asm/kvm_host.h | 1 +
>  arch/x86/kvm/vmx.c              | 6 ++++++
>  arch/x86/kvm/x86.c              | 3 ++-
>  3 files changed, 9 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 56156eb..1a8317e 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1047,6 +1047,7 @@ int kvm_cpu_has_interrupt(struct kvm_vcpu *vcpu);
>  int kvm_arch_interrupt_allowed(struct kvm_vcpu *vcpu);
>  int kvm_cpu_get_interrupt(struct kvm_vcpu *v);
>  void kvm_vcpu_reset(struct kvm_vcpu *vcpu);
> +void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu);
>  
>  void kvm_define_shared_msr(unsigned index, u32 msr);
>  void kvm_set_shared_msr(unsigned index, u64 val, u64 mask);
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index c8e90ec..baac78a 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -8803,6 +8803,12 @@ static void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 exit_reason,
>  	}
>  
>  	/*
> +	 * While L2 was running, the mmu notifier reloaded the page's hpa
> +	 * only into L2's vmcs. Reload it for L1 before entering L1.
> +	 */
> +	kvm_vcpu_reload_apic_access_page(vcpu);
> +
> +	/*
>  	 * Exiting from L2 to L1, we're now back to L1 which thinks it just
>  	 * finished a VMLAUNCH or VMRESUME instruction, so we need to set the
>  	 * success or failure flag accordingly.
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index fc54fa6..2ae2dc7 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -5989,7 +5989,7 @@ static void vcpu_scan_ioapic(struct kvm_vcpu *vcpu)
>  	kvm_apic_update_tmr(vcpu, tmr);
>  }
>  
> -static void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
> +void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
>  {
>  	/*
>  	 * Only APIC access page shared by L1 and L2 vm is handled. The APIC
> @@ -6009,6 +6009,7 @@ static void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
>  				page_to_phys(vcpu->kvm->arch.apic_access_page));
>  	}
>  }
> +EXPORT_SYMBOL_GPL(kvm_vcpu_reload_apic_access_page);
>  
>  /*
>   * Returns 1 to let __vcpu_run() continue the guest execution loop without
> 

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v7 8/9] kvm, mem-hotplug: Add arch specific mmu notifier to handle apic access migration.
  2014-09-20 10:47 ` [PATCH v7 8/9] kvm, mem-hotplug: Add arch specific mmu notifier to handle apic access migration Tang Chen
@ 2014-09-22  9:31   ` Paolo Bonzini
  2014-09-24  2:09     ` [PATCH 1/1] " Tang Chen
  0 siblings, 1 reply; 21+ messages in thread
From: Paolo Bonzini @ 2014-09-22  9:31 UTC (permalink / raw)
  To: Tang Chen, gleb, mtosatti, nadav.amit, jan.kiszka
  Cc: kvm, laijs, isimatu.yasuaki, guz.fnst, linux-kernel

On 20/09/2014 12:47, Tang Chen wrote:
> We are handling "L1 and L2 share one apic access page" situation when migrating
> apic access page. We should do some handling when migration happens in the
> following situations:
> 
>    1) when L0 is running: Update L1's vmcs in the next L0->L1 entry and L2's
>       vmcs in the next L1->L2 entry.
> 
>    2) when L1 is running: Force a L1->L0 exit, update L1's vmcs in the next
>       L0->L1 entry and L2's vmcs in the next L1->L2 entry.
> 
>    3) when L2 is running: Force a L2->L0 exit, update L2's vmcs in the next
>       L0->L2 entry and L1's vmcs in the next L2->L1 exit.
> 
> This patch forces an L1->L0 exit or an L2->L0 exit, using the mmu notifier, when
> the shared apic access page is migrated. Since the apic access page is only used
> on Intel x86, this is arch specific code.
> 
> Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
> ---
>  arch/x86/kvm/x86.c       | 11 +++++++++++
>  include/linux/kvm_host.h | 14 +++++++++++++-
>  virt/kvm/kvm_main.c      |  3 +++
>  3 files changed, 27 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 2ae2dc7..7dd4179 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -6011,6 +6011,17 @@ void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
>  }
>  EXPORT_SYMBOL_GPL(kvm_vcpu_reload_apic_access_page);
>  
> +void kvm_arch_mmu_notifier_invalidate_page(struct kvm *kvm,
> +					   unsigned long address)
> +{
> +	/*
> +	 * The physical address of apic access page is stored in VMCS.
> +	 * Update it when it becomes invalid.
> +	 */
> +	if (address == gfn_to_hva(kvm, APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT))
> +		kvm_make_all_cpus_request(kvm, KVM_REQ_APIC_PAGE_RELOAD);
> +}
> +
>  /*
>   * Returns 1 to let __vcpu_run() continue the guest execution loop without
>   * exiting to the userspace.  Otherwise, the value will be returned to the
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 73de13c..b6e4d38 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -917,7 +917,19 @@ static inline int mmu_notifier_retry(struct kvm *kvm, unsigned long mmu_seq)
>  		return 1;
>  	return 0;
>  }
> -#endif
> +
> +#ifdef _ASM_X86_KVM_HOST_H
> +void kvm_arch_mmu_notifier_invalidate_page(struct kvm *kvm,
> +					   unsigned long address);
> +#else /* _ASM_X86_KVM_HOST_H */
> +inline void kvm_arch_mmu_notifier_invalidate_page(struct kvm *kvm,
> +						  unsigned long address)

This needs to be static.

However, do not add it here.  Instead, touch all
arch/*/include/asm/kvm_host.h files, adding either the prototype or the
inline function.

Apart from this, the patch looks good.

Paolo

> +{
> +	return;
> +}
> +#endif /* _ASM_X86_KVM_HOST_H */
> +
> +#endif /* CONFIG_MMU_NOTIFIER & KVM_ARCH_WANT_MMU_NOTIFIER */
>  
>  #ifdef CONFIG_HAVE_KVM_IRQ_ROUTING
>  
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 0f8b6f6..5427973d 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -295,6 +295,9 @@ static void kvm_mmu_notifier_invalidate_page(struct mmu_notifier *mn,
>  		kvm_flush_remote_tlbs(kvm);
>  
>  	spin_unlock(&kvm->mmu_lock);
> +
> +	kvm_arch_mmu_notifier_invalidate_page(kvm, address);
> +
>  	srcu_read_unlock(&kvm->srcu, idx);
>  }
>  
> 


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v7 5/9] kvm, mem-hotplug: Reload L1's apic access page in vcpu_enter_guest().
  2014-09-20 10:47 ` [PATCH v7 5/9] kvm, mem-hotplug: Reload L1's apic access page in vcpu_enter_guest() Tang Chen
@ 2014-09-22  9:33   ` Paolo Bonzini
  2014-09-22  9:38     ` Paolo Bonzini
  0 siblings, 1 reply; 21+ messages in thread
From: Paolo Bonzini @ 2014-09-22  9:33 UTC (permalink / raw)
  To: Tang Chen, gleb, mtosatti, nadav.amit, jan.kiszka
  Cc: kvm, laijs, isimatu.yasuaki, guz.fnst, linux-kernel

On 20/09/2014 12:47, Tang Chen wrote:
> @@ -3624,6 +3624,11 @@ static bool svm_has_secondary_apic_access(struct kvm_vcpu *vcpu)
>  	return false;
>  }
>  
> +static void svm_set_apic_access_page_addr(struct kvm *kvm, hpa_t hpa)
> +{
> +	return;
> +}
> +
>  static int svm_vm_has_apicv(struct kvm *kvm)
>  {
>  	return 0;
> @@ -4379,6 +4384,7 @@ static struct kvm_x86_ops svm_x86_ops = {
>  	.update_cr8_intercept = update_cr8_intercept,
>  	.set_virtual_x2apic_mode = svm_set_virtual_x2apic_mode,
>  	.has_secondary_apic_access = svm_has_secondary_apic_access,
> +	.set_apic_access_page_addr = svm_set_apic_access_page_addr,
>  	.vm_has_apicv = svm_vm_has_apicv,
>  	.load_eoi_exitmap = svm_load_eoi_exitmap,
>  	.hwapic_isr_update = svm_hwapic_isr_update,

Something's wrong in the way you're generating the patches, because
you're adding these hunks twice.

Paolo

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v7 5/9] kvm, mem-hotplug: Reload L1's apic access page in vcpu_enter_guest().
  2014-09-22  9:33   ` Paolo Bonzini
@ 2014-09-22  9:38     ` Paolo Bonzini
  0 siblings, 0 replies; 21+ messages in thread
From: Paolo Bonzini @ 2014-09-22  9:38 UTC (permalink / raw)
  To: Tang Chen, gleb, mtosatti, nadav.amit, jan.kiszka
  Cc: kvm, laijs, isimatu.yasuaki, guz.fnst, linux-kernel

On 22/09/2014 11:33, Paolo Bonzini wrote:
> Something's wrong in the way you're generating the patches, because
> you're adding these hunks twice.

Nevermind, that was my mistake.

Paolo

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v7 4/9] kvm: Add interface to check if secondary exec virtualized apic accesses is enabled.
  2014-09-20 10:47 ` [PATCH v7 4/9] kvm: Add interface to check if secondary exec virtualized apic accesses is enabled Tang Chen
@ 2014-09-22  9:50   ` Paolo Bonzini
  0 siblings, 0 replies; 21+ messages in thread
From: Paolo Bonzini @ 2014-09-22  9:50 UTC (permalink / raw)
  To: Tang Chen, gleb, mtosatti, nadav.amit, jan.kiszka
  Cc: kvm, laijs, isimatu.yasuaki, guz.fnst, linux-kernel

On 20/09/2014 12:47, Tang Chen wrote:
> We want to migrate the apic access page pinned by the guest (L1 and L2) to make
> memory hotplug available.
> 
> There are two situations that need to be handled for the apic access page used by an L2 vm:
> 1. L1 prepares a separate apic access page for L2.
> 
>    L2 pins a lot of pages in memory. Even if we can migrate apic access page,
>    memory hotplug is not available when L2 is running. So do not handle this
>    now. Migrate apic access page only.
> 
> 2. L1 and L2 share one apic access page.
> 
>    Since we will migrate L1's apic access page, we should do some handling when
>    migration happens in the following situations:
> 
>    1) when L0 is running: Update L1's vmcs in the next L0->L1 entry and L2's
>       vmcs in the next L1->L2 entry.
> 
>    2) when L1 is running: Force a L1->L0 exit, update L1's vmcs in the next
>       L0->L1 entry and L2's vmcs in the next L1->L2 entry.
> 
>    3) when L2 is running: Force a L2->L0 exit, update L2's vmcs in the next
>       L0->L2 entry and L1's vmcs in the next L2->L1 exit.
> 
> Since we don't handle the situation in which L1 and L2 have separate apic access
> pages, when we update the vmcs we need to check if we are in L2 and if L2's
> secondary exec virtualized apic accesses control is enabled.
> 
> This patch adds an interface to check if L2's secondary exec virtualized apic
> accesses control is enabled, because vmx internals cannot be accessed outside vmx.c.
> 
> Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
> ---
>  arch/x86/include/asm/kvm_host.h | 1 +
>  arch/x86/kvm/svm.c              | 6 ++++++
>  arch/x86/kvm/vmx.c              | 9 +++++++++
>  3 files changed, 16 insertions(+)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 35171c7..69fe032 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -739,6 +739,7 @@ struct kvm_x86_ops {
>  	void (*hwapic_isr_update)(struct kvm *kvm, int isr);
>  	void (*load_eoi_exitmap)(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap);
>  	void (*set_virtual_x2apic_mode)(struct kvm_vcpu *vcpu, bool set);
> +	bool (*has_secondary_apic_access)(struct kvm_vcpu *vcpu);
>  	void (*deliver_posted_interrupt)(struct kvm_vcpu *vcpu, int vector);
>  	void (*sync_pir_to_irr)(struct kvm_vcpu *vcpu);
>  	int (*set_tss_addr)(struct kvm *kvm, unsigned int addr);
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index 1d941ad..9c8ae32 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -3619,6 +3619,11 @@ static void svm_set_virtual_x2apic_mode(struct kvm_vcpu *vcpu, bool set)
>  	return;
>  }
>  
> +static bool svm_has_secondary_apic_access(struct kvm_vcpu *vcpu)
> +{
> +	return false;
> +}
> +
>  static int svm_vm_has_apicv(struct kvm *kvm)
>  {
>  	return 0;
> @@ -4373,6 +4378,7 @@ static struct kvm_x86_ops svm_x86_ops = {
>  	.enable_irq_window = enable_irq_window,
>  	.update_cr8_intercept = update_cr8_intercept,
>  	.set_virtual_x2apic_mode = svm_set_virtual_x2apic_mode,
> +	.has_secondary_apic_access = svm_has_secondary_apic_access,
>  	.vm_has_apicv = svm_vm_has_apicv,
>  	.load_eoi_exitmap = svm_load_eoi_exitmap,
>  	.hwapic_isr_update = svm_hwapic_isr_update,
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index 72a0470..0b541d9 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -7090,6 +7090,14 @@ static void vmx_set_virtual_x2apic_mode(struct kvm_vcpu *vcpu, bool set)
>  	vmx_set_msr_bitmap(vcpu);
>  }
>  
> +static bool vmx_has_secondary_apic_access(struct kvm_vcpu *vcpu)
> +{
> +	struct vcpu_vmx *vmx = to_vmx(vcpu);
> +
> +	return (vmx->nested.current_vmcs12->secondary_vm_exec_control &
> +		SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES);
> +}
> +
>  static void vmx_hwapic_isr_update(struct kvm *kvm, int isr)
>  {
>  	u16 status;
> @@ -8909,6 +8917,7 @@ static struct kvm_x86_ops vmx_x86_ops = {
>  	.enable_irq_window = enable_irq_window,
>  	.update_cr8_intercept = update_cr8_intercept,
>  	.set_virtual_x2apic_mode = vmx_set_virtual_x2apic_mode,
> +	.has_secondary_apic_access = vmx_has_secondary_apic_access,
>  	.vm_has_apicv = vmx_vm_has_apicv,
>  	.load_eoi_exitmap = vmx_load_eoi_exitmap,
>  	.hwapic_irr_update = vmx_hwapic_irr_update,
> 

I don't think you need two hooks.  You can just do the gfn_to_page
unconditionally in kvm_vcpu_reload_apic_access_page, like

	vcpu->kvm->arch.apic_access_page = gfn_to_page(vcpu->kvm,
			APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT);
	if (vcpu->kvm->arch.apic_access_page)
		kvm_x86_ops->set_apic_access_page_addr(vcpu->kvm,
			page_to_phys(vcpu->kvm->arch.apic_access_page));
	/* The last patch adds put_page here.  */

and check is_guest_mode etc. in vmx.c.  The extra gfn_to_page/put_page
is rare enough that you can just do it unconditionally.

Another small change: you shouldn't run this code unless the APIC page
is in use.  So you can add

	if (!kvm_x86_ops->set_apic_access_page_addr)
		return;

and in the vmx.c file, please also add this change:

-	if (!cpu_has_vmx_flexpriority())
+	if (!cpu_has_vmx_flexpriority()) {
		flexpriority_enabled = 0;
+		kvm_x86_ops->set_apic_access_page_addr = NULL;
+	}

so that the callback is skipped correctly for old processors that lack
the APIC_ACCESS_ADDR field.

Paolo

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [PATCH 1/1] kvm, mem-hotplug: Add arch specific mmu notifier to handle apic access migration.
  2014-09-22  9:31   ` Paolo Bonzini
@ 2014-09-24  2:09     ` Tang Chen
  2014-09-24  7:00       ` Paolo Bonzini
  2014-09-24  7:08       ` Jan Kiszka
  0 siblings, 2 replies; 21+ messages in thread
From: Tang Chen @ 2014-09-24  2:09 UTC (permalink / raw)
  To: pbonzini
  Cc: gleb, mtosatti, nadav.amit, jan.kiszka, kvm, laijs,
	isimatu.yasuaki, guz.fnst, linux-kernel, tangchen

Hi Paolo, 

I'm not sure whether this patch follows your comment; please review.
All the other comments have been addressed. If this patch is OK, I'll
send v8 soon.

Thanks.

We handle the "L1 and L2 share one apic access page" situation when migrating
the apic access page. Some handling is needed when migration happens in the
following situations:

   1) when L0 is running: Update L1's vmcs in the next L0->L1 entry and L2's
      vmcs in the next L1->L2 entry.

   2) when L1 is running: Force a L1->L0 exit, update L1's vmcs in the next
      L0->L1 entry and L2's vmcs in the next L1->L2 entry.

   3) when L2 is running: Force a L2->L0 exit, update L2's vmcs in the next
      L0->L2 entry and L1's vmcs in the next L2->L1 exit.

This patch forces an L1->L0 exit or an L2->L0 exit, using the mmu notifier, when
the shared apic access page is migrated. Since the apic access page is only used
on Intel x86, this is arch specific code.
---
 arch/arm/include/asm/kvm_host.h     |  6 ++++++
 arch/arm64/include/asm/kvm_host.h   |  6 ++++++
 arch/ia64/include/asm/kvm_host.h    |  8 ++++++++
 arch/mips/include/asm/kvm_host.h    |  7 +++++++
 arch/powerpc/include/asm/kvm_host.h |  6 ++++++
 arch/s390/include/asm/kvm_host.h    |  9 +++++++++
 arch/x86/include/asm/kvm_host.h     |  2 ++
 arch/x86/kvm/x86.c                  | 11 +++++++++++
 virt/kvm/kvm_main.c                 |  3 +++
 9 files changed, 58 insertions(+)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 6dfb404..79bbf7d 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -182,6 +182,12 @@ static inline int kvm_test_age_hva(struct kvm *kvm, unsigned long hva)
 	return 0;
 }
 
+static inline void kvm_arch_mmu_notifier_invalidate_page(struct kvm *kvm,
+							 unsigned long address)
+{
+	return;
+}
+
 struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
 struct kvm_vcpu __percpu **kvm_get_running_vcpus(void);
 
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index e10c45a..ee89fad 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -192,6 +192,12 @@ static inline int kvm_test_age_hva(struct kvm *kvm, unsigned long hva)
 	return 0;
 }
 
+static inline void kvm_arch_mmu_notifier_invalidate_page(struct kvm *kvm,
+							 unsigned long address)
+{
+	return;
+}
+
 struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
 struct kvm_vcpu __percpu **kvm_get_running_vcpus(void);
 
diff --git a/arch/ia64/include/asm/kvm_host.h b/arch/ia64/include/asm/kvm_host.h
index db95f57..326ac55 100644
--- a/arch/ia64/include/asm/kvm_host.h
+++ b/arch/ia64/include/asm/kvm_host.h
@@ -574,6 +574,14 @@ static inline struct kvm_pt_regs *vcpu_regs(struct kvm_vcpu *v)
 	return (struct kvm_pt_regs *) ((unsigned long) v + KVM_STK_OFFSET) - 1;
 }
 
+#ifdef KVM_ARCH_WANT_MMU_NOTIFIER
+static inline void kvm_arch_mmu_notifier_invalidate_page(struct kvm *kvm,
+							 unsigned long address)
+{
+	return;
+}
+#endif /* KVM_ARCH_WANT_MMU_NOTIFIER */
+
 typedef int kvm_vmm_entry(void);
 typedef void kvm_tramp_entry(union context *host, union context *guest);
 
diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
index 7a3fc67..c392705 100644
--- a/arch/mips/include/asm/kvm_host.h
+++ b/arch/mips/include/asm/kvm_host.h
@@ -767,5 +767,12 @@ extern int kvm_mips_trans_mtc0(uint32_t inst, uint32_t *opc,
 extern void kvm_mips_dump_stats(struct kvm_vcpu *vcpu);
 extern unsigned long kvm_mips_get_ramsize(struct kvm *kvm);
 
+#ifdef KVM_ARCH_WANT_MMU_NOTIFIER
+static inline void kvm_arch_mmu_notifier_invalidate_page(struct kvm *kvm,
+							 unsigned long address)
+{
+	return;
+}
+#endif /* KVM_ARCH_WANT_MMU_NOTIFIER */
 
 #endif /* __MIPS_KVM_HOST_H__ */
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 98d9dd5..c16a573 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -61,6 +61,12 @@ extern int kvm_age_hva(struct kvm *kvm, unsigned long hva);
 extern int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
 extern void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
 
+static inline void kvm_arch_mmu_notifier_invalidate_page(struct kvm *kvm,
+							 unsigned long address)
+{
+	return;
+}
+
 #define HPTEG_CACHE_NUM			(1 << 15)
 #define HPTEG_HASH_BITS_PTE		13
 #define HPTEG_HASH_BITS_PTE_LONG	12
diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h
index 773bef7..693290f 100644
--- a/arch/s390/include/asm/kvm_host.h
+++ b/arch/s390/include/asm/kvm_host.h
@@ -450,4 +450,13 @@ void kvm_arch_async_page_present(struct kvm_vcpu *vcpu,
 
 extern int sie64a(struct kvm_s390_sie_block *, u64 *);
 extern char sie_exit;
+
+#ifdef KVM_ARCH_WANT_MMU_NOTIFIER
+static inline void kvm_arch_mmu_notifier_invalidate_page(struct kvm *kvm,
+							 unsigned long address)
+{
+	return;
+}
+#endif /* KVM_ARCH_WANT_MMU_NOTIFIER */
+
 #endif
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 66480fd..408b944 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1047,6 +1047,8 @@ int kvm_arch_interrupt_allowed(struct kvm_vcpu *vcpu);
 int kvm_cpu_get_interrupt(struct kvm_vcpu *v);
 void kvm_vcpu_reset(struct kvm_vcpu *vcpu);
 void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu);
+void kvm_arch_mmu_notifier_invalidate_page(struct kvm *kvm,
+					   unsigned long address);
 
 void kvm_define_shared_msr(unsigned index, u32 msr);
 void kvm_set_shared_msr(unsigned index, u64 val, u64 mask);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index c064ca6..e042ef6 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -6011,6 +6011,17 @@ void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_reload_apic_access_page);
 
+void kvm_arch_mmu_notifier_invalidate_page(struct kvm *kvm,
+					   unsigned long address)
+{
+	/*
+	 * The physical address of apic access page is stored in VMCS.
+	 * Update it when it becomes invalid.
+	 */
+	if (address == gfn_to_hva(kvm, APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT))
+		kvm_make_all_cpus_request(kvm, KVM_REQ_APIC_PAGE_RELOAD);
+}
+
 /*
  * Returns 1 to let __vcpu_run() continue the guest execution loop without
  * exiting to the userspace.  Otherwise, the value will be returned to the
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 0f8b6f6..5427973d 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -295,6 +295,9 @@ static void kvm_mmu_notifier_invalidate_page(struct mmu_notifier *mn,
 		kvm_flush_remote_tlbs(kvm);
 
 	spin_unlock(&kvm->mmu_lock);
+
+	kvm_arch_mmu_notifier_invalidate_page(kvm, address);
+
 	srcu_read_unlock(&kvm->srcu, idx);
 }
 
-- 
1.8.3.1



* Re: [PATCH 1/1] kvm, mem-hotplug: Add arch specific mmu notifier to handle apic access migration.
  2014-09-24  2:09     ` [PATCH 1/1] " Tang Chen
@ 2014-09-24  7:00       ` Paolo Bonzini
  2014-09-24  7:08       ` Jan Kiszka
  1 sibling, 0 replies; 21+ messages in thread
From: Paolo Bonzini @ 2014-09-24  7:00 UTC (permalink / raw)
  To: Tang Chen
  Cc: gleb, mtosatti, nadav.amit, jan.kiszka, kvm, laijs,
	isimatu.yasuaki, guz.fnst, linux-kernel

Il 24/09/2014 04:09, Tang Chen ha scritto:
> Hi Paolo, 
> 
> I'm not sure if this patch is following your comment. Please review.
> And all the other comments are followed. If this patch is OK, I'll 
> send v8 soon.
> 
> Thanks.
> 
> We are handling "L1 and L2 share one apic access page" situation when migrating
> apic access page. We should do some handling when migration happens in the
> following situations:
> 
>    1) when L0 is running: Update L1's vmcs in the next L0->L1 entry and L2's
>       vmcs in the next L1->L2 entry.
> 
>    2) when L1 is running: Force a L1->L0 exit, update L1's vmcs in the next
>       L0->L1 entry and L2's vmcs in the next L1->L2 entry.
> 
>    3) when L2 is running: Force a L2->L0 exit, update L2's vmcs in the next
>       L0->L2 entry and L1's vmcs in the next L2->L1 exit.
> 
> This patch force a L1->L0 exit or L2->L0 exit when shared apic access page is
> migrated using mmu notifier. Since apic access page is only used on intel x86,
> this is arch specific code.
> ---
>  arch/arm/include/asm/kvm_host.h     |  6 ++++++
>  arch/arm64/include/asm/kvm_host.h   |  6 ++++++
>  arch/ia64/include/asm/kvm_host.h    |  8 ++++++++
>  arch/mips/include/asm/kvm_host.h    |  7 +++++++
>  arch/powerpc/include/asm/kvm_host.h |  6 ++++++
>  arch/s390/include/asm/kvm_host.h    |  9 +++++++++
>  arch/x86/include/asm/kvm_host.h     |  2 ++
>  arch/x86/kvm/x86.c                  | 11 +++++++++++
>  virt/kvm/kvm_main.c                 |  3 +++
>  9 files changed, 58 insertions(+)
> 
> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
> index 6dfb404..79bbf7d 100644
> --- a/arch/arm/include/asm/kvm_host.h
> +++ b/arch/arm/include/asm/kvm_host.h
> @@ -182,6 +182,12 @@ static inline int kvm_test_age_hva(struct kvm *kvm, unsigned long hva)
>  	return 0;
>  }
>  
> +static inline void kvm_arch_mmu_notifier_invalidate_page(struct kvm *kvm,
> +							 unsigned long address)
> +{
> +	return;
> +}
> +
>  struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
>  struct kvm_vcpu __percpu **kvm_get_running_vcpus(void);
>  
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index e10c45a..ee89fad 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -192,6 +192,12 @@ static inline int kvm_test_age_hva(struct kvm *kvm, unsigned long hva)
>  	return 0;
>  }
>  
> +static inline void kvm_arch_mmu_notifier_invalidate_page(struct kvm *kvm,
> +							 unsigned long address)
> +{
> +	return;
> +}
> +
>  struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
>  struct kvm_vcpu __percpu **kvm_get_running_vcpus(void);
>  
> diff --git a/arch/ia64/include/asm/kvm_host.h b/arch/ia64/include/asm/kvm_host.h
> index db95f57..326ac55 100644
> --- a/arch/ia64/include/asm/kvm_host.h
> +++ b/arch/ia64/include/asm/kvm_host.h
> @@ -574,6 +574,14 @@ static inline struct kvm_pt_regs *vcpu_regs(struct kvm_vcpu *v)
>  	return (struct kvm_pt_regs *) ((unsigned long) v + KVM_STK_OFFSET) - 1;
>  }
>  
> +#ifdef KVM_ARCH_WANT_MMU_NOTIFIER
> +static inline void kvm_arch_mmu_notifier_invalidate_page(struct kvm *kvm,
> +							 unsigned long address)
> +{
> +	return;
> +}
> +#endif /* KVM_ARCH_WANT_MMU_NOTIFIER */
> +
>  typedef int kvm_vmm_entry(void);
>  typedef void kvm_tramp_entry(union context *host, union context *guest);
>  
> diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
> index 7a3fc67..c392705 100644
> --- a/arch/mips/include/asm/kvm_host.h
> +++ b/arch/mips/include/asm/kvm_host.h
> @@ -767,5 +767,12 @@ extern int kvm_mips_trans_mtc0(uint32_t inst, uint32_t *opc,
>  extern void kvm_mips_dump_stats(struct kvm_vcpu *vcpu);
>  extern unsigned long kvm_mips_get_ramsize(struct kvm *kvm);
>  
> +#ifdef KVM_ARCH_WANT_MMU_NOTIFIER
> +static inline void kvm_arch_mmu_notifier_invalidate_page(struct kvm *kvm,
> +							 unsigned long address)
> +{
> +	return;
> +}
> +#endif /* KVM_ARCH_WANT_MMU_NOTIFIER */
>  
>  #endif /* __MIPS_KVM_HOST_H__ */
> diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
> index 98d9dd5..c16a573 100644
> --- a/arch/powerpc/include/asm/kvm_host.h
> +++ b/arch/powerpc/include/asm/kvm_host.h
> @@ -61,6 +61,12 @@ extern int kvm_age_hva(struct kvm *kvm, unsigned long hva);
>  extern int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
>  extern void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
>  
> +static inline void kvm_arch_mmu_notifier_invalidate_page(struct kvm *kvm,
> +							 unsigned long address)
> +{
> +	return;
> +}
> +
>  #define HPTEG_CACHE_NUM			(1 << 15)
>  #define HPTEG_HASH_BITS_PTE		13
>  #define HPTEG_HASH_BITS_PTE_LONG	12
> diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h
> index 773bef7..693290f 100644
> --- a/arch/s390/include/asm/kvm_host.h
> +++ b/arch/s390/include/asm/kvm_host.h
> @@ -450,4 +450,13 @@ void kvm_arch_async_page_present(struct kvm_vcpu *vcpu,
>  
>  extern int sie64a(struct kvm_s390_sie_block *, u64 *);
>  extern char sie_exit;
> +
> +#ifdef KVM_ARCH_WANT_MMU_NOTIFIER
> +static inline void kvm_arch_mmu_notifier_invalidate_page(struct kvm *kvm,
> +							 unsigned long address)
> +{
> +	return;
> +}
> +#endif /* KVM_ARCH_WANT_MMU_NOTIFIER */
> +
>  #endif
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 66480fd..408b944 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1047,6 +1047,8 @@ int kvm_arch_interrupt_allowed(struct kvm_vcpu *vcpu);
>  int kvm_cpu_get_interrupt(struct kvm_vcpu *v);
>  void kvm_vcpu_reset(struct kvm_vcpu *vcpu);
>  void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu);
> +void kvm_arch_mmu_notifier_invalidate_page(struct kvm *kvm,
> +					   unsigned long address);
>  
>  void kvm_define_shared_msr(unsigned index, u32 msr);
>  void kvm_set_shared_msr(unsigned index, u64 val, u64 mask);
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index c064ca6..e042ef6 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -6011,6 +6011,17 @@ void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
>  }
>  EXPORT_SYMBOL_GPL(kvm_vcpu_reload_apic_access_page);
>  
> +void kvm_arch_mmu_notifier_invalidate_page(struct kvm *kvm,
> +					   unsigned long address)
> +{
> +	/*
> +	 * The physical address of apic access page is stored in VMCS.
> +	 * Update it when it becomes invalid.
> +	 */
> +	if (address == gfn_to_hva(kvm, APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT))
> +		kvm_make_all_cpus_request(kvm, KVM_REQ_APIC_PAGE_RELOAD);
> +}
> +
>  /*
>   * Returns 1 to let __vcpu_run() continue the guest execution loop without
>   * exiting to the userspace.  Otherwise, the value will be returned to the
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 0f8b6f6..5427973d 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -295,6 +295,9 @@ static void kvm_mmu_notifier_invalidate_page(struct mmu_notifier *mn,
>  		kvm_flush_remote_tlbs(kvm);
>  
>  	spin_unlock(&kvm->mmu_lock);
> +
> +	kvm_arch_mmu_notifier_invalidate_page(kvm, address);
> +
>  	srcu_read_unlock(&kvm->srcu, idx);
>  }
>  
> 

Yes, this is it.

Paolo


* Re: [PATCH 1/1] kvm, mem-hotplug: Add arch specific mmu notifier to handle apic access migration.
  2014-09-24  2:09     ` [PATCH 1/1] " Tang Chen
  2014-09-24  7:00       ` Paolo Bonzini
@ 2014-09-24  7:08       ` Jan Kiszka
  2014-09-24  7:31         ` Tang Chen
  1 sibling, 1 reply; 21+ messages in thread
From: Jan Kiszka @ 2014-09-24  7:08 UTC (permalink / raw)
  To: Tang Chen, pbonzini
  Cc: gleb, mtosatti, nadav.amit, kvm, laijs, isimatu.yasuaki,
	guz.fnst, linux-kernel

On 2014-09-24 04:09, Tang Chen wrote:
> Hi Paolo, 
> 
> I'm not sure if this patch is following your comment. Please review.
> And all the other comments are followed. If this patch is OK, I'll 
> send v8 soon.
> 
> Thanks.
> 
> We are handling "L1 and L2 share one apic access page" situation when migrating
> apic access page. We should do some handling when migration happens in the
> following situations:
> 
>    1) when L0 is running: Update L1's vmcs in the next L0->L1 entry and L2's
>       vmcs in the next L1->L2 entry.
> 
>    2) when L1 is running: Force a L1->L0 exit, update L1's vmcs in the next
>       L0->L1 entry and L2's vmcs in the next L1->L2 entry.
> 
>    3) when L2 is running: Force a L2->L0 exit, update L2's vmcs in the next
>       L0->L2 entry and L1's vmcs in the next L2->L1 exit.
> 
> This patch force a L1->L0 exit or L2->L0 exit when shared apic access page is
> migrated using mmu notifier. Since apic access page is only used on intel x86,
> this is arch specific code.
> ---
>  arch/arm/include/asm/kvm_host.h     |  6 ++++++
>  arch/arm64/include/asm/kvm_host.h   |  6 ++++++
>  arch/ia64/include/asm/kvm_host.h    |  8 ++++++++
>  arch/mips/include/asm/kvm_host.h    |  7 +++++++
>  arch/powerpc/include/asm/kvm_host.h |  6 ++++++
>  arch/s390/include/asm/kvm_host.h    |  9 +++++++++
>  arch/x86/include/asm/kvm_host.h     |  2 ++
>  arch/x86/kvm/x86.c                  | 11 +++++++++++
>  virt/kvm/kvm_main.c                 |  3 +++
>  9 files changed, 58 insertions(+)
> 
> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
> index 6dfb404..79bbf7d 100644
> --- a/arch/arm/include/asm/kvm_host.h
> +++ b/arch/arm/include/asm/kvm_host.h
> @@ -182,6 +182,12 @@ static inline int kvm_test_age_hva(struct kvm *kvm, unsigned long hva)
>  	return 0;
>  }
>  
> +static inline void kvm_arch_mmu_notifier_invalidate_page(struct kvm *kvm,
> +							 unsigned long address)
> +{
> +	return;

Redundant return, more cases below.
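The fix is mechanical: a void function needs no return statement at all, so each stub reduces to an empty body. A sketch (with `struct kvm` left opaque here, since the real definition lives in the kvm headers):

```c
struct kvm;	/* opaque stand-in for the real kvm type */

static inline void kvm_arch_mmu_notifier_invalidate_page(struct kvm *kvm,
							 unsigned long address)
{
	(void)kvm;
	(void)address;	/* arch has nothing to do here */
}
```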

Jan

> +}
> +
>  struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
>  struct kvm_vcpu __percpu **kvm_get_running_vcpus(void);
>  
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index e10c45a..ee89fad 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -192,6 +192,12 @@ static inline int kvm_test_age_hva(struct kvm *kvm, unsigned long hva)
>  	return 0;
>  }
>  
> +static inline void kvm_arch_mmu_notifier_invalidate_page(struct kvm *kvm,
> +							 unsigned long address)
> +{
> +	return;
> +}
> +
>  struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
>  struct kvm_vcpu __percpu **kvm_get_running_vcpus(void);
>  
> diff --git a/arch/ia64/include/asm/kvm_host.h b/arch/ia64/include/asm/kvm_host.h
> index db95f57..326ac55 100644
> --- a/arch/ia64/include/asm/kvm_host.h
> +++ b/arch/ia64/include/asm/kvm_host.h
> @@ -574,6 +574,14 @@ static inline struct kvm_pt_regs *vcpu_regs(struct kvm_vcpu *v)
>  	return (struct kvm_pt_regs *) ((unsigned long) v + KVM_STK_OFFSET) - 1;
>  }
>  
> +#ifdef KVM_ARCH_WANT_MMU_NOTIFIER
> +static inline void kvm_arch_mmu_notifier_invalidate_page(struct kvm *kvm,
> +							 unsigned long address)
> +{
> +	return;
> +}
> +#endif /* KVM_ARCH_WANT_MMU_NOTIFIER */
> +
>  typedef int kvm_vmm_entry(void);
>  typedef void kvm_tramp_entry(union context *host, union context *guest);
>  
> diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
> index 7a3fc67..c392705 100644
> --- a/arch/mips/include/asm/kvm_host.h
> +++ b/arch/mips/include/asm/kvm_host.h
> @@ -767,5 +767,12 @@ extern int kvm_mips_trans_mtc0(uint32_t inst, uint32_t *opc,
>  extern void kvm_mips_dump_stats(struct kvm_vcpu *vcpu);
>  extern unsigned long kvm_mips_get_ramsize(struct kvm *kvm);
>  
> +#ifdef KVM_ARCH_WANT_MMU_NOTIFIER
> +static inline void kvm_arch_mmu_notifier_invalidate_page(struct kvm *kvm,
> +							 unsigned long address)
> +{
> +	return;
> +}
> +#endif /* KVM_ARCH_WANT_MMU_NOTIFIER */
>  
>  #endif /* __MIPS_KVM_HOST_H__ */
> diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
> index 98d9dd5..c16a573 100644
> --- a/arch/powerpc/include/asm/kvm_host.h
> +++ b/arch/powerpc/include/asm/kvm_host.h
> @@ -61,6 +61,12 @@ extern int kvm_age_hva(struct kvm *kvm, unsigned long hva);
>  extern int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
>  extern void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
>  
> +static inline void kvm_arch_mmu_notifier_invalidate_page(struct kvm *kvm,
> +							 unsigned long address)
> +{
> +	return;
> +}
> +
>  #define HPTEG_CACHE_NUM			(1 << 15)
>  #define HPTEG_HASH_BITS_PTE		13
>  #define HPTEG_HASH_BITS_PTE_LONG	12
> diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h
> index 773bef7..693290f 100644
> --- a/arch/s390/include/asm/kvm_host.h
> +++ b/arch/s390/include/asm/kvm_host.h
> @@ -450,4 +450,13 @@ void kvm_arch_async_page_present(struct kvm_vcpu *vcpu,
>  
>  extern int sie64a(struct kvm_s390_sie_block *, u64 *);
>  extern char sie_exit;
> +
> +#ifdef KVM_ARCH_WANT_MMU_NOTIFIER
> +static inline void kvm_arch_mmu_notifier_invalidate_page(struct kvm *kvm,
> +							 unsigned long address)
> +{
> +	return;
> +}
> +#endif /* KVM_ARCH_WANT_MMU_NOTIFIER */
> +
>  #endif
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 66480fd..408b944 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1047,6 +1047,8 @@ int kvm_arch_interrupt_allowed(struct kvm_vcpu *vcpu);
>  int kvm_cpu_get_interrupt(struct kvm_vcpu *v);
>  void kvm_vcpu_reset(struct kvm_vcpu *vcpu);
>  void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu);
> +void kvm_arch_mmu_notifier_invalidate_page(struct kvm *kvm,
> +					   unsigned long address);
>  
>  void kvm_define_shared_msr(unsigned index, u32 msr);
>  void kvm_set_shared_msr(unsigned index, u64 val, u64 mask);
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index c064ca6..e042ef6 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -6011,6 +6011,17 @@ void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
>  }
>  EXPORT_SYMBOL_GPL(kvm_vcpu_reload_apic_access_page);
>  
> +void kvm_arch_mmu_notifier_invalidate_page(struct kvm *kvm,
> +					   unsigned long address)
> +{
> +	/*
> +	 * The physical address of apic access page is stored in VMCS.
> +	 * Update it when it becomes invalid.
> +	 */
> +	if (address == gfn_to_hva(kvm, APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT))
> +		kvm_make_all_cpus_request(kvm, KVM_REQ_APIC_PAGE_RELOAD);
> +}
> +
>  /*
>   * Returns 1 to let __vcpu_run() continue the guest execution loop without
>   * exiting to the userspace.  Otherwise, the value will be returned to the
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 0f8b6f6..5427973d 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -295,6 +295,9 @@ static void kvm_mmu_notifier_invalidate_page(struct mmu_notifier *mn,
>  		kvm_flush_remote_tlbs(kvm);
>  
>  	spin_unlock(&kvm->mmu_lock);
> +
> +	kvm_arch_mmu_notifier_invalidate_page(kvm, address);
> +
>  	srcu_read_unlock(&kvm->srcu, idx);
>  }
>  
> 





* Re: [PATCH 1/1] kvm, mem-hotplug: Add arch specific mmu notifier to handle apic access migration.
  2014-09-24  7:08       ` Jan Kiszka
@ 2014-09-24  7:31         ` Tang Chen
  0 siblings, 0 replies; 21+ messages in thread
From: Tang Chen @ 2014-09-24  7:31 UTC (permalink / raw)
  To: Jan Kiszka, pbonzini
  Cc: gleb, mtosatti, nadav.amit, kvm, laijs, isimatu.yasuaki,
	guz.fnst, linux-kernel


On 09/24/2014 03:08 PM, Jan Kiszka wrote:
> On 2014-09-24 04:09, Tang Chen wrote:
>> Hi Paolo,
>>
>> I'm not sure if this patch is following your comment. Please review.
>> And all the other comments are followed. If this patch is OK, I'll
>> send v8 soon.
>>
>> Thanks.
>>
>> We are handling "L1 and L2 share one apic access page" situation when migrating
>> apic access page. We should do some handling when migration happens in the
>> following situations:
>>
>>     1) when L0 is running: Update L1's vmcs in the next L0->L1 entry and L2's
>>        vmcs in the next L1->L2 entry.
>>
>>     2) when L1 is running: Force a L1->L0 exit, update L1's vmcs in the next
>>        L0->L1 entry and L2's vmcs in the next L1->L2 entry.
>>
>>     3) when L2 is running: Force a L2->L0 exit, update L2's vmcs in the next
>>        L0->L2 entry and L1's vmcs in the next L2->L1 exit.
>>
>> This patch force a L1->L0 exit or L2->L0 exit when shared apic access page is
>> migrated using mmu notifier. Since apic access page is only used on intel x86,
>> this is arch specific code.
>> ---
>>   arch/arm/include/asm/kvm_host.h     |  6 ++++++
>>   arch/arm64/include/asm/kvm_host.h   |  6 ++++++
>>   arch/ia64/include/asm/kvm_host.h    |  8 ++++++++
>>   arch/mips/include/asm/kvm_host.h    |  7 +++++++
>>   arch/powerpc/include/asm/kvm_host.h |  6 ++++++
>>   arch/s390/include/asm/kvm_host.h    |  9 +++++++++
>>   arch/x86/include/asm/kvm_host.h     |  2 ++
>>   arch/x86/kvm/x86.c                  | 11 +++++++++++
>>   virt/kvm/kvm_main.c                 |  3 +++
>>   9 files changed, 58 insertions(+)
>>
>> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
>> index 6dfb404..79bbf7d 100644
>> --- a/arch/arm/include/asm/kvm_host.h
>> +++ b/arch/arm/include/asm/kvm_host.h
>> @@ -182,6 +182,12 @@ static inline int kvm_test_age_hva(struct kvm *kvm, unsigned long hva)
>>   	return 0;
>>   }
>>   
>> +static inline void kvm_arch_mmu_notifier_invalidate_page(struct kvm *kvm,
>> +							 unsigned long address)
>> +{
>> +	return;
> Redundant return, more cases below.

OK, will remove it. Thanks.

>
> Jan
>
>> +}
>> +
>>   struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
>>   struct kvm_vcpu __percpu **kvm_get_running_vcpus(void);
>>   
>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
>> index e10c45a..ee89fad 100644
>> --- a/arch/arm64/include/asm/kvm_host.h
>> +++ b/arch/arm64/include/asm/kvm_host.h
>> @@ -192,6 +192,12 @@ static inline int kvm_test_age_hva(struct kvm *kvm, unsigned long hva)
>>   	return 0;
>>   }
>>   
>> +static inline void kvm_arch_mmu_notifier_invalidate_page(struct kvm *kvm,
>> +							 unsigned long address)
>> +{
>> +	return;
>> +}
>> +
>>   struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
>>   struct kvm_vcpu __percpu **kvm_get_running_vcpus(void);
>>   
>> diff --git a/arch/ia64/include/asm/kvm_host.h b/arch/ia64/include/asm/kvm_host.h
>> index db95f57..326ac55 100644
>> --- a/arch/ia64/include/asm/kvm_host.h
>> +++ b/arch/ia64/include/asm/kvm_host.h
>> @@ -574,6 +574,14 @@ static inline struct kvm_pt_regs *vcpu_regs(struct kvm_vcpu *v)
>>   	return (struct kvm_pt_regs *) ((unsigned long) v + KVM_STK_OFFSET) - 1;
>>   }
>>   
>> +#ifdef KVM_ARCH_WANT_MMU_NOTIFIER
>> +static inline void kvm_arch_mmu_notifier_invalidate_page(struct kvm *kvm,
>> +							 unsigned long address)
>> +{
>> +	return;
>> +}
>> +#endif /* KVM_ARCH_WANT_MMU_NOTIFIER */
>> +
>>   typedef int kvm_vmm_entry(void);
>>   typedef void kvm_tramp_entry(union context *host, union context *guest);
>>   
>> diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
>> index 7a3fc67..c392705 100644
>> --- a/arch/mips/include/asm/kvm_host.h
>> +++ b/arch/mips/include/asm/kvm_host.h
>> @@ -767,5 +767,12 @@ extern int kvm_mips_trans_mtc0(uint32_t inst, uint32_t *opc,
>>   extern void kvm_mips_dump_stats(struct kvm_vcpu *vcpu);
>>   extern unsigned long kvm_mips_get_ramsize(struct kvm *kvm);
>>   
>> +#ifdef KVM_ARCH_WANT_MMU_NOTIFIER
>> +static inline void kvm_arch_mmu_notifier_invalidate_page(struct kvm *kvm,
>> +							 unsigned long address)
>> +{
>> +	return;
>> +}
>> +#endif /* KVM_ARCH_WANT_MMU_NOTIFIER */
>>   
>>   #endif /* __MIPS_KVM_HOST_H__ */
>> diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
>> index 98d9dd5..c16a573 100644
>> --- a/arch/powerpc/include/asm/kvm_host.h
>> +++ b/arch/powerpc/include/asm/kvm_host.h
>> @@ -61,6 +61,12 @@ extern int kvm_age_hva(struct kvm *kvm, unsigned long hva);
>>   extern int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
>>   extern void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
>>   
>> +static inline void kvm_arch_mmu_notifier_invalidate_page(struct kvm *kvm,
>> +							 unsigned long address)
>> +{
>> +	return;
>> +}
>> +
>>   #define HPTEG_CACHE_NUM			(1 << 15)
>>   #define HPTEG_HASH_BITS_PTE		13
>>   #define HPTEG_HASH_BITS_PTE_LONG	12
>> diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h
>> index 773bef7..693290f 100644
>> --- a/arch/s390/include/asm/kvm_host.h
>> +++ b/arch/s390/include/asm/kvm_host.h
>> @@ -450,4 +450,13 @@ void kvm_arch_async_page_present(struct kvm_vcpu *vcpu,
>>   
>>   extern int sie64a(struct kvm_s390_sie_block *, u64 *);
>>   extern char sie_exit;
>> +
>> +#ifdef KVM_ARCH_WANT_MMU_NOTIFIER
>> +static inline void kvm_arch_mmu_notifier_invalidate_page(struct kvm *kvm,
>> +							 unsigned long address)
>> +{
>> +	return;
>> +}
>> +#endif /* KVM_ARCH_WANT_MMU_NOTIFIER */
>> +
>>   #endif
>> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
>> index 66480fd..408b944 100644
>> --- a/arch/x86/include/asm/kvm_host.h
>> +++ b/arch/x86/include/asm/kvm_host.h
>> @@ -1047,6 +1047,8 @@ int kvm_arch_interrupt_allowed(struct kvm_vcpu *vcpu);
>>   int kvm_cpu_get_interrupt(struct kvm_vcpu *v);
>>   void kvm_vcpu_reset(struct kvm_vcpu *vcpu);
>>   void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu);
>> +void kvm_arch_mmu_notifier_invalidate_page(struct kvm *kvm,
>> +					   unsigned long address);
>>   
>>   void kvm_define_shared_msr(unsigned index, u32 msr);
>>   void kvm_set_shared_msr(unsigned index, u64 val, u64 mask);
>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>> index c064ca6..e042ef6 100644
>> --- a/arch/x86/kvm/x86.c
>> +++ b/arch/x86/kvm/x86.c
>> @@ -6011,6 +6011,17 @@ void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
>>   }
>>   EXPORT_SYMBOL_GPL(kvm_vcpu_reload_apic_access_page);
>>   
>> +void kvm_arch_mmu_notifier_invalidate_page(struct kvm *kvm,
>> +					   unsigned long address)
>> +{
>> +	/*
>> +	 * The physical address of apic access page is stored in VMCS.
>> +	 * Update it when it becomes invalid.
>> +	 */
>> +	if (address == gfn_to_hva(kvm, APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT))
>> +		kvm_make_all_cpus_request(kvm, KVM_REQ_APIC_PAGE_RELOAD);
>> +}
>> +
>>   /*
>>    * Returns 1 to let __vcpu_run() continue the guest execution loop without
>>    * exiting to the userspace.  Otherwise, the value will be returned to the
>> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
>> index 0f8b6f6..5427973d 100644
>> --- a/virt/kvm/kvm_main.c
>> +++ b/virt/kvm/kvm_main.c
>> @@ -295,6 +295,9 @@ static void kvm_mmu_notifier_invalidate_page(struct mmu_notifier *mn,
>>   		kvm_flush_remote_tlbs(kvm);
>>   
>>   	spin_unlock(&kvm->mmu_lock);
>> +
>> +	kvm_arch_mmu_notifier_invalidate_page(kvm, address);
>> +
>>   	srcu_read_unlock(&kvm->srcu, idx);
>>   }
>>   
>>
>



end of thread, other threads:[~2014-09-24  7:32 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-09-20 10:47 [PATCH v7 0/9] kvm, mem-hotplug: Do not pin ept identity pagetable and apic access page Tang Chen
2014-09-20 10:47 ` [PATCH v7 1/9] kvm: Use APIC_DEFAULT_PHYS_BASE macro as the apic access page address Tang Chen
2014-09-20 10:47 ` [PATCH v7 2/9] kvm: Remove ept_identity_pagetable from struct kvm_arch Tang Chen
2014-09-20 10:47 ` [PATCH v7 3/9] kvm: Make init_rmode_identity_map() return 0 on success Tang Chen
2014-09-20 10:47 ` [PATCH v7 4/9] kvm: Add interface to check if secondary exec virtualzed apic accesses is enabled Tang Chen
2014-09-22  9:50   ` Paolo Bonzini
2014-09-20 10:47 ` [PATCH v7 5/9] kvm, mem-hotplug: Reload L1's apic access page in vcpu_enter_guest() Tang Chen
2014-09-22  9:33   ` Paolo Bonzini
2014-09-22  9:38     ` Paolo Bonzini
2014-09-20 10:47 ` [PATCH v7 6/9] kvm: Rename make_all_cpus_request() to kvm_make_all_cpus_request() and make it non-static Tang Chen
2014-09-22  9:28   ` Paolo Bonzini
2014-09-20 10:47 ` [PATCH v7 7/9] kvm, mem-hotplug: Reload L1's apic access page on migration when L2 is running Tang Chen
2014-09-22  9:29   ` Paolo Bonzini
2014-09-20 10:47 ` [PATCH v7 8/9] kvm, mem-hotplug: Add arch specific mmu notifier to handle apic access migration Tang Chen
2014-09-22  9:31   ` Paolo Bonzini
2014-09-24  2:09     ` [PATCH 1/1] " Tang Chen
2014-09-24  7:00       ` Paolo Bonzini
2014-09-24  7:08       ` Jan Kiszka
2014-09-24  7:31         ` Tang Chen
2014-09-20 10:47 ` [PATCH v7 9/9] kvm, mem-hotplug: Unpin and remove kvm_arch->apic_access_page Tang Chen
2014-09-22  9:28   ` Paolo Bonzini
