linux-kernel.vger.kernel.org archive mirror
* [PATCH v4 0/6] kvm, mem-hotplug: Do not pin ept identity pagetable and apic access page.
@ 2014-08-27 10:17 Tang Chen
  2014-08-27 10:17 ` [PATCH v4 1/6] kvm: Use APIC_DEFAULT_PHYS_BASE macro as the apic access page address Tang Chen
                   ` (6 more replies)
  0 siblings, 7 replies; 17+ messages in thread
From: Tang Chen @ 2014-08-27 10:17 UTC (permalink / raw)
  To: gleb, mtosatti, nadav.amit, jan.kiszka
  Cc: kvm, laijs, isimatu.yasuaki, guz.fnst, linux-kernel, tangchen

The ept identity pagetable and the apic access page in kvm are pinned in
memory. As a result, they cannot be migrated or hot-removed.

But they do not actually need to be pinned.

[For ept identity page]
Simply do not pin it. When it is migrated, the guest will find the new
page at the next ept violation.

[For apic access page]
The hpa of the apic access page is stored in the VMCS APIC_ACCESS_ADDR
pointer. When the apic access page is migrated, we also update the VMCS
APIC_ACCESS_ADDR pointer for each vcpu.

NOTE: Tested with the -cpu xxx,-x2apic option.
      But since a nested vm pins some other pages in memory, memory
      hot-remove will not work if the user runs a nested vm.

Change log v3 -> v4:
1. The original patch 6 is now patch 5. ( by Jan Kiszka <jan.kiszka@web.de> )
2. The original patch 1 is now patch 6 since we should unpin apic access page
   at the very last moment.


Tang Chen (6):
  kvm: Use APIC_DEFAULT_PHYS_BASE macro as the apic access page address.
  kvm: Remove ept_identity_pagetable from struct kvm_arch.
  kvm: Make init_rmode_identity_map() return 0 on success.
  kvm, mem-hotplug: Reload L1's apic access page on migration in
    vcpu_enter_guest().
  kvm, mem-hotplug: Reload L1's apic access page on migration when L2 is
    running.
  kvm, mem-hotplug: Do not pin apic access page in memory.

 arch/x86/include/asm/kvm_host.h |   3 +-
 arch/x86/kvm/svm.c              |  15 +++++-
 arch/x86/kvm/vmx.c              | 103 +++++++++++++++++++++++++++-------------
 arch/x86/kvm/x86.c              |  22 +++++++--
 include/linux/kvm_host.h        |   3 ++
 virt/kvm/kvm_main.c             |  30 +++++++++++-
 6 files changed, 135 insertions(+), 41 deletions(-)

-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH v4 1/6] kvm: Use APIC_DEFAULT_PHYS_BASE macro as the apic access page address.
  2014-08-27 10:17 [PATCH v4 0/6] kvm, mem-hotplug: Do not pin ept identity pagetable and apic access page Tang Chen
@ 2014-08-27 10:17 ` Tang Chen
  2014-09-10 13:57   ` Gleb Natapov
  2014-08-27 10:17 ` [PATCH v4 2/6] kvm: Remove ept_identity_pagetable from struct kvm_arch Tang Chen
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 17+ messages in thread
From: Tang Chen @ 2014-08-27 10:17 UTC (permalink / raw)
  To: gleb, mtosatti, nadav.amit, jan.kiszka
  Cc: kvm, laijs, isimatu.yasuaki, guz.fnst, linux-kernel, tangchen

APIC_DEFAULT_PHYS_BASE is defined as 0xfee00000, which is also the address of
the apic access page. So use this macro instead of the hard-coded value.
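As a sanity check of the arithmetic behind the replacement: the gfn passed to
gfn_to_page() is just the base address shifted right by PAGE_SHIFT. A
standalone user-space sketch (the constants are redefined locally; this is not
kernel code):

```c
#include <assert.h>
#include <stdint.h>

/* Local stand-ins for the kernel definitions (x86 uses 4 KiB pages). */
#define APIC_DEFAULT_PHYS_BASE 0xfee00000ULL
#define PAGE_SHIFT 12

/* gfn of the apic access page, as computed in alloc_apic_access_page(). */
static uint64_t apic_access_gfn(void)
{
	return APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT;
}
```

0xfee00000 >> 12 is 0xfee00, which matches the literal that the hunk in
alloc_apic_access_page() replaces.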

Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
---
 arch/x86/kvm/svm.c | 3 ++-
 arch/x86/kvm/vmx.c | 6 +++---
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index ddf7427..1d941ad 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1257,7 +1257,8 @@ static struct kvm_vcpu *svm_create_vcpu(struct kvm *kvm, unsigned int id)
 	svm->asid_generation = 0;
 	init_vmcb(svm);
 
-	svm->vcpu.arch.apic_base = 0xfee00000 | MSR_IA32_APICBASE_ENABLE;
+	svm->vcpu.arch.apic_base = APIC_DEFAULT_PHYS_BASE |
+				   MSR_IA32_APICBASE_ENABLE;
 	if (kvm_vcpu_is_bsp(&svm->vcpu))
 		svm->vcpu.arch.apic_base |= MSR_IA32_APICBASE_BSP;
 
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index bfe11cf..4b80ead 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -3999,13 +3999,13 @@ static int alloc_apic_access_page(struct kvm *kvm)
 		goto out;
 	kvm_userspace_mem.slot = APIC_ACCESS_PAGE_PRIVATE_MEMSLOT;
 	kvm_userspace_mem.flags = 0;
-	kvm_userspace_mem.guest_phys_addr = 0xfee00000ULL;
+	kvm_userspace_mem.guest_phys_addr = APIC_DEFAULT_PHYS_BASE;
 	kvm_userspace_mem.memory_size = PAGE_SIZE;
 	r = __kvm_set_memory_region(kvm, &kvm_userspace_mem);
 	if (r)
 		goto out;
 
-	page = gfn_to_page(kvm, 0xfee00);
+	page = gfn_to_page(kvm, APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT);
 	if (is_error_page(page)) {
 		r = -EFAULT;
 		goto out;
@@ -4477,7 +4477,7 @@ static void vmx_vcpu_reset(struct kvm_vcpu *vcpu)
 
 	vmx->vcpu.arch.regs[VCPU_REGS_RDX] = get_rdx_init_val();
 	kvm_set_cr8(&vmx->vcpu, 0);
-	apic_base_msr.data = 0xfee00000 | MSR_IA32_APICBASE_ENABLE;
+	apic_base_msr.data = APIC_DEFAULT_PHYS_BASE | MSR_IA32_APICBASE_ENABLE;
 	if (kvm_vcpu_is_bsp(&vmx->vcpu))
 		apic_base_msr.data |= MSR_IA32_APICBASE_BSP;
 	apic_base_msr.host_initiated = true;
-- 
1.8.3.1



* [PATCH v4 2/6] kvm: Remove ept_identity_pagetable from struct kvm_arch.
  2014-08-27 10:17 [PATCH v4 0/6] kvm, mem-hotplug: Do not pin ept identity pagetable and apic access page Tang Chen
  2014-08-27 10:17 ` [PATCH v4 1/6] kvm: Use APIC_DEFAULT_PHYS_BASE macro as the apic access page address Tang Chen
@ 2014-08-27 10:17 ` Tang Chen
  2014-09-10 13:58   ` Gleb Natapov
  2014-08-27 10:17 ` [PATCH v4 3/6] kvm: Make init_rmode_identity_map() return 0 on success Tang Chen
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 17+ messages in thread
From: Tang Chen @ 2014-08-27 10:17 UTC (permalink / raw)
  To: gleb, mtosatti, nadav.amit, jan.kiszka
  Cc: kvm, laijs, isimatu.yasuaki, guz.fnst, linux-kernel, tangchen

kvm_arch->ept_identity_pagetable holds the ept identity pagetable page, but
it is never used to refer to the page at all.

During vcpu initialization, it only indicates two things:
1. whether the ept identity pagetable page is allocated
2. whether a memory slot for the identity page is initialized

Actually, kvm_arch->ept_identity_pagetable_done is enough to tell whether the
ept identity pagetable is initialized, so ept_identity_pagetable can be removed.

NOTE: In the original code, the ept identity pagetable page is pinned in
      memory, so it cannot be migrated or hot-removed. Since this patch
      removes kvm_arch->ept_identity_pagetable, the page is no longer pinned
      and can be migrated or hot-removed.
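The control flow after this patch — a "done" flag checked under
kvm->slots_lock, with the memslot allocated lazily from
init_rmode_identity_map() — can be sketched in plain user-space C (the lock is
reduced to comments and all names are illustrative, not the kernel's):

```c
#include <stdbool.h>

/* In the kernel, these accesses are serialized by kvm->slots_lock. */
static bool identity_map_done;
static int alloc_calls;	/* counts allocations, to demonstrate laziness */

/* Stand-in for alloc_identity_pagetable(): 0 on success. */
static int alloc_identity_pagetable(void)
{
	alloc_calls++;	/* pretend to register the private memslot */
	return 0;
}

/*
 * Stand-in for init_rmode_identity_map() as of this patch:
 * returns 1 on success, 0 on failure (patch 3/6 later flips this).
 */
static int init_rmode_identity_map(void)
{
	int ret = 0;

	/* mutex_lock(&kvm->slots_lock); */
	if (identity_map_done) {
		ret = 1;	/* already set up, nothing to do */
		goto out;
	}
	if (alloc_identity_pagetable())
		goto out;	/* ret stays 0: failure */

	/* ... clear the page and write the identity-map PDEs ... */
	identity_map_done = true;
	ret = 1;
out:
	/* mutex_unlock(&kvm->slots_lock); */
	return ret;
}
```

Repeated calls allocate only once, which is why the pinned page reference the
old code kept around is no longer needed.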

Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
---
 arch/x86/include/asm/kvm_host.h |  1 -
 arch/x86/kvm/vmx.c              | 50 ++++++++++++++++++++---------------------
 arch/x86/kvm/x86.c              |  2 --
 3 files changed, 25 insertions(+), 28 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 7c492ed..35171c7 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -580,7 +580,6 @@ struct kvm_arch {
 
 	gpa_t wall_clock;
 
-	struct page *ept_identity_pagetable;
 	bool ept_identity_pagetable_done;
 	gpa_t ept_identity_map_addr;
 
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 4b80ead..953d529 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -743,6 +743,7 @@ static u32 vmx_segment_access_rights(struct kvm_segment *var);
 static void vmx_sync_pir_to_irr_dummy(struct kvm_vcpu *vcpu);
 static void copy_vmcs12_to_shadow(struct vcpu_vmx *vmx);
 static void copy_shadow_to_vmcs12(struct vcpu_vmx *vmx);
+static int alloc_identity_pagetable(struct kvm *kvm);
 
 static DEFINE_PER_CPU(struct vmcs *, vmxarea);
 static DEFINE_PER_CPU(struct vmcs *, current_vmcs);
@@ -3938,21 +3939,27 @@ out:
 
 static int init_rmode_identity_map(struct kvm *kvm)
 {
-	int i, idx, r, ret;
+	int i, idx, r, ret = 0;
 	pfn_t identity_map_pfn;
 	u32 tmp;
 
 	if (!enable_ept)
 		return 1;
-	if (unlikely(!kvm->arch.ept_identity_pagetable)) {
-		printk(KERN_ERR "EPT: identity-mapping pagetable "
-			"haven't been allocated!\n");
-		return 0;
+
+	/* Protect kvm->arch.ept_identity_pagetable_done. */
+	mutex_lock(&kvm->slots_lock);
+
+	if (likely(kvm->arch.ept_identity_pagetable_done)) {
+		ret = 1;
+		goto out2;
 	}
-	if (likely(kvm->arch.ept_identity_pagetable_done))
-		return 1;
-	ret = 0;
+
 	identity_map_pfn = kvm->arch.ept_identity_map_addr >> PAGE_SHIFT;
+
+	r = alloc_identity_pagetable(kvm);
+	if (r)
+		goto out2;
+
 	idx = srcu_read_lock(&kvm->srcu);
 	r = kvm_clear_guest_page(kvm, identity_map_pfn, 0, PAGE_SIZE);
 	if (r < 0)
@@ -3970,6 +3977,9 @@ static int init_rmode_identity_map(struct kvm *kvm)
 	ret = 1;
 out:
 	srcu_read_unlock(&kvm->srcu, idx);
+
+out2:
+	mutex_unlock(&kvm->slots_lock);
 	return ret;
 }
 
@@ -4019,31 +4029,23 @@ out:
 
 static int alloc_identity_pagetable(struct kvm *kvm)
 {
-	struct page *page;
+	/*
+	 * In init_rmode_identity_map(), kvm->arch.ept_identity_pagetable_done
+	 * is checked before calling this function and set to true after the
+	 * call. The access to kvm->arch.ept_identity_pagetable_done should
+	 * be protected by kvm->slots_lock.
+	 */
+
 	struct kvm_userspace_memory_region kvm_userspace_mem;
 	int r = 0;
 
-	mutex_lock(&kvm->slots_lock);
-	if (kvm->arch.ept_identity_pagetable)
-		goto out;
 	kvm_userspace_mem.slot = IDENTITY_PAGETABLE_PRIVATE_MEMSLOT;
 	kvm_userspace_mem.flags = 0;
 	kvm_userspace_mem.guest_phys_addr =
 		kvm->arch.ept_identity_map_addr;
 	kvm_userspace_mem.memory_size = PAGE_SIZE;
 	r = __kvm_set_memory_region(kvm, &kvm_userspace_mem);
-	if (r)
-		goto out;
 
-	page = gfn_to_page(kvm, kvm->arch.ept_identity_map_addr >> PAGE_SHIFT);
-	if (is_error_page(page)) {
-		r = -EFAULT;
-		goto out;
-	}
-
-	kvm->arch.ept_identity_pagetable = page;
-out:
-	mutex_unlock(&kvm->slots_lock);
 	return r;
 }
 
@@ -7643,8 +7645,6 @@ static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id)
 			kvm->arch.ept_identity_map_addr =
 				VMX_EPT_IDENTITY_PAGETABLE_ADDR;
 		err = -ENOMEM;
-		if (alloc_identity_pagetable(kvm) != 0)
-			goto free_vmcs;
 		if (!init_rmode_identity_map(kvm))
 			goto free_vmcs;
 	}
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 8f1e22d..e05bd58 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -7239,8 +7239,6 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
 	kvm_free_vcpus(kvm);
 	if (kvm->arch.apic_access_page)
 		put_page(kvm->arch.apic_access_page);
-	if (kvm->arch.ept_identity_pagetable)
-		put_page(kvm->arch.ept_identity_pagetable);
 	kfree(rcu_dereference_check(kvm->arch.apic_map, 1));
 }
 
-- 
1.8.3.1



* [PATCH v4 3/6] kvm: Make init_rmode_identity_map() return 0 on success.
  2014-08-27 10:17 [PATCH v4 0/6] kvm, mem-hotplug: Do not pin ept identity pagetable and apic access page Tang Chen
  2014-08-27 10:17 ` [PATCH v4 1/6] kvm: Use APIC_DEFAULT_PHYS_BASE macro as the apic access page address Tang Chen
  2014-08-27 10:17 ` [PATCH v4 2/6] kvm: Remove ept_identity_pagetable from struct kvm_arch Tang Chen
@ 2014-08-27 10:17 ` Tang Chen
  2014-08-27 10:17 ` [PATCH v4 4/6] kvm, mem-hotplug: Reload L1's apic access page on migration in vcpu_enter_guest() Tang Chen
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 17+ messages in thread
From: Tang Chen @ 2014-08-27 10:17 UTC (permalink / raw)
  To: gleb, mtosatti, nadav.amit, jan.kiszka
  Cc: kvm, laijs, isimatu.yasuaki, guz.fnst, linux-kernel, tangchen

In init_rmode_identity_map() there are two variables carrying the return
value, r and ret, and the function returns 0 on error and 1 on success. It is
only called by vmx_create_vcpu(), and r is redundant.

This patch removes the redundant variable r and makes init_rmode_identity_map()
return 0 on success and -errno on failure.
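A minimal sketch of what the convention change means on the caller side (the
names are illustrative, not the kernel's):

```c
#include <assert.h>
#include <errno.h>

/* Old convention: 1 on success, 0 on failure. */
static int init_old(int fail)
{
	return fail ? 0 : 1;
}

/* New convention: 0 on success, -errno on failure. */
static int init_new(int fail)
{
	return fail ? -ENOMEM : 0;
}

/* The caller's test inverts accordingly, as in vmx_create_vcpu(). */
static int create_vcpu_old(int fail)
{
	if (!init_old(fail))	/* old: failure is the zero value */
		return -ENOMEM;
	return 0;
}

static int create_vcpu_new(int fail)
{
	if (init_new(fail))	/* new: failure is any nonzero value */
		return -ENOMEM;
	return 0;
}
```

This is why the hunk in vmx_create_vcpu() drops the `!` from the
init_rmode_identity_map() check.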

Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
---
 arch/x86/kvm/vmx.c | 25 +++++++++++--------------
 1 file changed, 11 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 953d529..63c4c3e 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -3939,45 +3939,42 @@ out:
 
 static int init_rmode_identity_map(struct kvm *kvm)
 {
-	int i, idx, r, ret = 0;
+	int i, idx, ret = 0;
 	pfn_t identity_map_pfn;
 	u32 tmp;
 
 	if (!enable_ept)
-		return 1;
+		return 0;
 
 	/* Protect kvm->arch.ept_identity_pagetable_done. */
 	mutex_lock(&kvm->slots_lock);
 
-	if (likely(kvm->arch.ept_identity_pagetable_done)) {
-		ret = 1;
+	if (likely(kvm->arch.ept_identity_pagetable_done))
 		goto out2;
-	}
 
 	identity_map_pfn = kvm->arch.ept_identity_map_addr >> PAGE_SHIFT;
 
-	r = alloc_identity_pagetable(kvm);
-	if (r)
+	ret = alloc_identity_pagetable(kvm);
+	if (ret)
 		goto out2;
 
 	idx = srcu_read_lock(&kvm->srcu);
-	r = kvm_clear_guest_page(kvm, identity_map_pfn, 0, PAGE_SIZE);
-	if (r < 0)
+	ret = kvm_clear_guest_page(kvm, identity_map_pfn, 0, PAGE_SIZE);
+	if (ret)
 		goto out;
 	/* Set up identity-mapping pagetable for EPT in real mode */
 	for (i = 0; i < PT32_ENT_PER_PAGE; i++) {
 		tmp = (i << 22) + (_PAGE_PRESENT | _PAGE_RW | _PAGE_USER |
 			_PAGE_ACCESSED | _PAGE_DIRTY | _PAGE_PSE);
-		r = kvm_write_guest_page(kvm, identity_map_pfn,
+		ret = kvm_write_guest_page(kvm, identity_map_pfn,
 				&tmp, i * sizeof(tmp), sizeof(tmp));
-		if (r < 0)
+		if (ret)
 			goto out;
 	}
 	kvm->arch.ept_identity_pagetable_done = true;
-	ret = 1;
+
 out:
 	srcu_read_unlock(&kvm->srcu, idx);
-
 out2:
 	mutex_unlock(&kvm->slots_lock);
 	return ret;
@@ -7645,7 +7642,7 @@ static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id)
 			kvm->arch.ept_identity_map_addr =
 				VMX_EPT_IDENTITY_PAGETABLE_ADDR;
 		err = -ENOMEM;
-		if (!init_rmode_identity_map(kvm))
+		if (init_rmode_identity_map(kvm))
 			goto free_vmcs;
 	}
 
-- 
1.8.3.1



* [PATCH v4 4/6] kvm, mem-hotplug: Reload L1's apic access page on migration in vcpu_enter_guest().
  2014-08-27 10:17 [PATCH v4 0/6] kvm, mem-hotplug: Do not pin ept identity pagetable and apic access page Tang Chen
                   ` (2 preceding siblings ...)
  2014-08-27 10:17 ` [PATCH v4 3/6] kvm: Make init_rmode_identity_map() return 0 on success Tang Chen
@ 2014-08-27 10:17 ` Tang Chen
  2014-09-02 16:00   ` Gleb Natapov
  2014-08-27 10:17 ` [PATCH v4 5/6] kvm, mem-hotplug: Reload L1's apic access page on migration when L2 is running Tang Chen
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 17+ messages in thread
From: Tang Chen @ 2014-08-27 10:17 UTC (permalink / raw)
  To: gleb, mtosatti, nadav.amit, jan.kiszka
  Cc: kvm, laijs, isimatu.yasuaki, guz.fnst, linux-kernel, tangchen

The apic access page is pinned in memory. As a result, it cannot be
migrated or hot-removed. It does not actually need to be pinned.

The hpa of the apic access page is stored in the VMCS APIC_ACCESS_ADDR
pointer. When the page is migrated, kvm_mmu_notifier_invalidate_page() will
invalidate the corresponding ept entry. This patch introduces a new vcpu
request, KVM_REQ_APIC_PAGE_RELOAD, and makes the request to all vcpus at that
point, forcing them out of the guest. Before re-entering the guest, each vcpu
updates its VMCS APIC_ACCESS_ADDR pointer to the new apic access page address,
and kvm->arch.apic_access_page is updated to the new page.
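The request mechanism itself is a per-vcpu bitmask: making the request sets a
bit on every vcpu (and kicks it out of guest mode), and each vcpu consumes the
bit with a test-and-clear before re-entry, so the reload runs exactly once per
vcpu. A single-threaded user-space model (illustrative only; the kernel uses
atomic bitops and IPIs):

```c
#include <stdbool.h>

#define KVM_REQ_APIC_PAGE_RELOAD 25
#define NR_VCPUS 4

static unsigned long requests[NR_VCPUS];
static int reloads;

/* kvm_make_request(): set the request bit on one vcpu. */
static void make_request(int req, int vcpu)
{
	requests[vcpu] |= 1UL << req;
}

/* make_all_cpus_request(): set it on every vcpu (the kick is omitted). */
static void make_all_cpus_request(int req)
{
	for (int i = 0; i < NR_VCPUS; i++)
		make_request(req, i);
}

/* kvm_check_request(): test-and-clear, so each request fires once. */
static bool check_request(int req, int vcpu)
{
	if (requests[vcpu] & (1UL << req)) {
		requests[vcpu] &= ~(1UL << req);
		return true;
	}
	return false;
}

/* The vcpu_enter_guest() path: handle the request before re-entry. */
static void vcpu_enter_guest(int vcpu)
{
	if (check_request(KVM_REQ_APIC_PAGE_RELOAD, vcpu))
		reloads++;	/* stands in for vcpu_reload_apic_access_page() */
}
```

Running every vcpu through the entry path once after a broadcast yields one
reload per vcpu; re-entering the same vcpu does not reload again.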

Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
---
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/svm.c              |  6 ++++++
 arch/x86/kvm/vmx.c              |  6 ++++++
 arch/x86/kvm/x86.c              | 15 +++++++++++++++
 include/linux/kvm_host.h        |  2 ++
 virt/kvm/kvm_main.c             | 12 ++++++++++++
 6 files changed, 42 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 35171c7..514183e 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -739,6 +739,7 @@ struct kvm_x86_ops {
 	void (*hwapic_isr_update)(struct kvm *kvm, int isr);
 	void (*load_eoi_exitmap)(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap);
 	void (*set_virtual_x2apic_mode)(struct kvm_vcpu *vcpu, bool set);
+	void (*set_apic_access_page_addr)(struct kvm *kvm, hpa_t hpa);
 	void (*deliver_posted_interrupt)(struct kvm_vcpu *vcpu, int vector);
 	void (*sync_pir_to_irr)(struct kvm_vcpu *vcpu);
 	int (*set_tss_addr)(struct kvm *kvm, unsigned int addr);
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 1d941ad..f2eacc4 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -3619,6 +3619,11 @@ static void svm_set_virtual_x2apic_mode(struct kvm_vcpu *vcpu, bool set)
 	return;
 }
 
+static void svm_set_apic_access_page_addr(struct kvm *kvm, hpa_t hpa)
+{
+	return;
+}
+
 static int svm_vm_has_apicv(struct kvm *kvm)
 {
 	return 0;
@@ -4373,6 +4378,7 @@ static struct kvm_x86_ops svm_x86_ops = {
 	.enable_irq_window = enable_irq_window,
 	.update_cr8_intercept = update_cr8_intercept,
 	.set_virtual_x2apic_mode = svm_set_virtual_x2apic_mode,
+	.set_apic_access_page_addr = svm_set_apic_access_page_addr,
 	.vm_has_apicv = svm_vm_has_apicv,
 	.load_eoi_exitmap = svm_load_eoi_exitmap,
 	.hwapic_isr_update = svm_hwapic_isr_update,
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 63c4c3e..da6d55d 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -7093,6 +7093,11 @@ static void vmx_set_virtual_x2apic_mode(struct kvm_vcpu *vcpu, bool set)
 	vmx_set_msr_bitmap(vcpu);
 }
 
+static void vmx_set_apic_access_page_addr(struct kvm *kvm, hpa_t hpa)
+{
+	vmcs_write64(APIC_ACCESS_ADDR, hpa);
+}
+
 static void vmx_hwapic_isr_update(struct kvm *kvm, int isr)
 {
 	u16 status;
@@ -8910,6 +8915,7 @@ static struct kvm_x86_ops vmx_x86_ops = {
 	.enable_irq_window = enable_irq_window,
 	.update_cr8_intercept = update_cr8_intercept,
 	.set_virtual_x2apic_mode = vmx_set_virtual_x2apic_mode,
+	.set_apic_access_page_addr = vmx_set_apic_access_page_addr,
 	.vm_has_apicv = vmx_vm_has_apicv,
 	.load_eoi_exitmap = vmx_load_eoi_exitmap,
 	.hwapic_irr_update = vmx_hwapic_irr_update,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index e05bd58..96f4188 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5989,6 +5989,19 @@ static void vcpu_scan_ioapic(struct kvm_vcpu *vcpu)
 	kvm_apic_update_tmr(vcpu, tmr);
 }
 
+static void vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
+{
+	/*
+	 * The apic access page could be migrated. While the page is being
+	 * migrated, GUP will wait until the migration entry is replaced with
+	 * the new pte entry pointing to the new page.
+	 */
+	vcpu->kvm->arch.apic_access_page = gfn_to_page(vcpu->kvm,
+				APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT);
+	kvm_x86_ops->set_apic_access_page_addr(vcpu->kvm,
+				page_to_phys(vcpu->kvm->arch.apic_access_page));
+}
+
 /*
  * Returns 1 to let __vcpu_run() continue the guest execution loop without
  * exiting to the userspace.  Otherwise, the value will be returned to the
@@ -6049,6 +6062,8 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 			kvm_deliver_pmi(vcpu);
 		if (kvm_check_request(KVM_REQ_SCAN_IOAPIC, vcpu))
 			vcpu_scan_ioapic(vcpu);
+		if (kvm_check_request(KVM_REQ_APIC_PAGE_RELOAD, vcpu))
+			vcpu_reload_apic_access_page(vcpu);
 	}
 
 	if (kvm_check_request(KVM_REQ_EVENT, vcpu) || req_int_win) {
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index a4c33b3..8be076a 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -136,6 +136,7 @@ static inline bool is_error_page(struct page *page)
 #define KVM_REQ_GLOBAL_CLOCK_UPDATE 22
 #define KVM_REQ_ENABLE_IBS        23
 #define KVM_REQ_DISABLE_IBS       24
+#define KVM_REQ_APIC_PAGE_RELOAD  25
 
 #define KVM_USERSPACE_IRQ_SOURCE_ID		0
 #define KVM_IRQFD_RESAMPLE_IRQ_SOURCE_ID	1
@@ -579,6 +580,7 @@ void kvm_flush_remote_tlbs(struct kvm *kvm);
 void kvm_reload_remote_mmus(struct kvm *kvm);
 void kvm_make_mclock_inprogress_request(struct kvm *kvm);
 void kvm_make_scan_ioapic_request(struct kvm *kvm);
+void kvm_reload_apic_access_page(struct kvm *kvm);
 
 long kvm_arch_dev_ioctl(struct file *filp,
 			unsigned int ioctl, unsigned long arg);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 33712fb..d8280de 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -210,6 +210,11 @@ void kvm_make_scan_ioapic_request(struct kvm *kvm)
 	make_all_cpus_request(kvm, KVM_REQ_SCAN_IOAPIC);
 }
 
+void kvm_reload_apic_access_page(struct kvm *kvm)
+{
+	make_all_cpus_request(kvm, KVM_REQ_APIC_PAGE_RELOAD);
+}
+
 int kvm_vcpu_init(struct kvm_vcpu *vcpu, struct kvm *kvm, unsigned id)
 {
 	struct page *page;
@@ -294,6 +299,13 @@ static void kvm_mmu_notifier_invalidate_page(struct mmu_notifier *mn,
 	if (need_tlb_flush)
 		kvm_flush_remote_tlbs(kvm);
 
+	/*
+	 * The physical address of the apic access page is stored in the VMCS,
+	 * so it needs to be updated when the page becomes invalid.
+	 */
+	if (address == gfn_to_hva(kvm, APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT))
+		kvm_reload_apic_access_page(kvm);
+
 	spin_unlock(&kvm->mmu_lock);
 	srcu_read_unlock(&kvm->srcu, idx);
 }
-- 
1.8.3.1



* [PATCH v4 5/6] kvm, mem-hotplug: Reload L1's apic access page on migration when L2 is running.
  2014-08-27 10:17 [PATCH v4 0/6] kvm, mem-hotplug: Do not pin ept identity pagetable and apic access page Tang Chen
                   ` (3 preceding siblings ...)
  2014-08-27 10:17 ` [PATCH v4 4/6] kvm, mem-hotplug: Reload L1's apic access page on migration in vcpu_enter_guest() Tang Chen
@ 2014-08-27 10:17 ` Tang Chen
  2014-09-03  1:48   ` tangchen
  2014-09-03 15:08   ` Gleb Natapov
  2014-08-27 10:17 ` [PATCH v4 6/6] kvm, mem-hotplug: Do not pin apic access page in memory Tang Chen
  2014-09-01  5:14 ` [PATCH v4 0/6] kvm, mem-hotplug: Do not pin ept identity pagetable and apic access page tangchen
  6 siblings, 2 replies; 17+ messages in thread
From: Tang Chen @ 2014-08-27 10:17 UTC (permalink / raw)
  To: gleb, mtosatti, nadav.amit, jan.kiszka
  Cc: kvm, laijs, isimatu.yasuaki, guz.fnst, linux-kernel, tangchen

This patch only handles the "L1 and L2 vms share one apic access page" situation.

When the L1 vm is running, if the shared apic access page is migrated,
mmu_notifier will request all vcpus to exit to L0 and reload the apic access
page physical address in all the vcpus' vmcs (done by patch 4/6). When L2 is
entered, L2's vmcs is updated in prepare_vmcs02(), called by nested_vmx_run(),
so nothing more needs to be done.

When the L2 vm is running, if the shared apic access page is migrated,
mmu_notifier will request all vcpus to exit to L0 and reload the page's
physical address in all L2 vmcs. This patch additionally requests an apic
access page reload on the L2->L1 vmexit.
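The flag-based deferral the patch adds can be modeled in a few lines (a
user-space sketch with illustrative names; in the kernel the flag is
vmx->nested.apic_access_page_migrated):

```c
#include <stdbool.h>

static bool in_guest_mode;	/* true while L2 is running */
static bool apic_page_migrated;	/* nested "redo for L1" flag */
static int l1_reloads, l2_reloads;

/* Handler for KVM_REQ_APIC_PAGE_RELOAD: reloads the *current* vmcs. */
static void reload_apic_access_page(void)
{
	if (in_guest_mode) {
		l2_reloads++;
		/* Remember to redo this for L1's vmcs after the next vmexit. */
		apic_page_migrated = true;
	} else {
		l1_reloads++;
	}
}

/* L2 -> L1 vmexit: replay the reload for L1 if it happened in L2. */
static void nested_vmexit(void)
{
	in_guest_mode = false;
	if (apic_page_migrated) {
		reload_apic_access_page();
		apic_page_migrated = false;
	}
}
```

A migration observed while L2 runs thus produces one L2 reload immediately and
one L1 reload at the following vmexit, after which the flag is clear.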

Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
---
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/svm.c              |  6 ++++++
 arch/x86/kvm/vmx.c              | 32 ++++++++++++++++++++++++++++++++
 arch/x86/kvm/x86.c              |  3 +++
 virt/kvm/kvm_main.c             |  1 +
 5 files changed, 43 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 514183e..13fbb62 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -740,6 +740,7 @@ struct kvm_x86_ops {
 	void (*load_eoi_exitmap)(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap);
 	void (*set_virtual_x2apic_mode)(struct kvm_vcpu *vcpu, bool set);
 	void (*set_apic_access_page_addr)(struct kvm *kvm, hpa_t hpa);
+	void (*set_nested_apic_page_migrated)(struct kvm_vcpu *vcpu, bool set);
 	void (*deliver_posted_interrupt)(struct kvm_vcpu *vcpu, int vector);
 	void (*sync_pir_to_irr)(struct kvm_vcpu *vcpu);
 	int (*set_tss_addr)(struct kvm *kvm, unsigned int addr);
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index f2eacc4..da88646 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -3624,6 +3624,11 @@ static void svm_set_apic_access_page_addr(struct kvm *kvm, hpa_t hpa)
 	return;
 }
 
+static void svm_set_nested_apic_page_migrated(struct kvm_vcpu *vcpu, bool set)
+{
+	return;
+}
+
 static int svm_vm_has_apicv(struct kvm *kvm)
 {
 	return 0;
@@ -4379,6 +4384,7 @@ static struct kvm_x86_ops svm_x86_ops = {
 	.update_cr8_intercept = update_cr8_intercept,
 	.set_virtual_x2apic_mode = svm_set_virtual_x2apic_mode,
 	.set_apic_access_page_addr = svm_set_apic_access_page_addr,
+	.set_nested_apic_page_migrated = svm_set_nested_apic_page_migrated,
 	.vm_has_apicv = svm_vm_has_apicv,
 	.load_eoi_exitmap = svm_load_eoi_exitmap,
 	.hwapic_isr_update = svm_hwapic_isr_update,
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index da6d55d..9035fd1 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -379,6 +379,16 @@ struct nested_vmx {
 	 * we must keep them pinned while L2 runs.
 	 */
 	struct page *apic_access_page;
+	/*
+	 * L1's apic access page can be migrated. When L1 and L2 are sharing
+	 * the apic access page, after the page is migrated when L2 is running,
+	 * we have to reload it to L1 vmcs before we enter L1.
+	 *
+	 * When the shared apic access page is migrated in L1 mode, we don't
+	 * need to do anything else because we reload the apic access page
+	 * each time we enter L2 in prepare_vmcs02().
+	 */
+	bool apic_access_page_migrated;
 	u64 msr_ia32_feature_control;
 
 	struct hrtimer preemption_timer;
@@ -7098,6 +7108,12 @@ static void vmx_set_apic_access_page_addr(struct kvm *kvm, hpa_t hpa)
 	vmcs_write64(APIC_ACCESS_ADDR, hpa);
 }
 
+static void vmx_set_nested_apic_page_migrated(struct kvm_vcpu *vcpu, bool set)
+{
+	struct vcpu_vmx *vmx = to_vmx(vcpu);
+	vmx->nested.apic_access_page_migrated = set;
+}
+
 static void vmx_hwapic_isr_update(struct kvm *kvm, int isr)
 {
 	u16 status;
@@ -8796,6 +8812,21 @@ static void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 exit_reason,
 	}
 
 	/*
+	 * When the shared (L1 & L2) apic access page is migrated while L2 is
+	 * running, mmu_notifier will force a reload of the page's hpa for the
+	 * L2 vmcs. It needs to be reloaded for L1 before entering L1.
+	 */
+	if (vmx->nested.apic_access_page_migrated) {
+		/*
+		 * Do not call kvm_reload_apic_access_page() because we are now
+		 * in L2. We should not call make_all_cpus_request() to exit to
+		 * L0, otherwise we will reload for L2 vmcs again.
+		 */
+		kvm_reload_apic_access_page(vcpu->kvm);
+		vmx->nested.apic_access_page_migrated = false;
+	}
+
+	/*
 	 * Exiting from L2 to L1, we're now back to L1 which thinks it just
 	 * finished a VMLAUNCH or VMRESUME instruction, so we need to set the
 	 * success or failure flag accordingly.
@@ -8916,6 +8947,7 @@ static struct kvm_x86_ops vmx_x86_ops = {
 	.update_cr8_intercept = update_cr8_intercept,
 	.set_virtual_x2apic_mode = vmx_set_virtual_x2apic_mode,
 	.set_apic_access_page_addr = vmx_set_apic_access_page_addr,
+	.set_nested_apic_page_migrated = vmx_set_nested_apic_page_migrated,
 	.vm_has_apicv = vmx_vm_has_apicv,
 	.load_eoi_exitmap = vmx_load_eoi_exitmap,
 	.hwapic_irr_update = vmx_hwapic_irr_update,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 96f4188..131b6e8 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -6000,6 +6000,9 @@ static void vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
 				APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT);
 	kvm_x86_ops->set_apic_access_page_addr(vcpu->kvm,
 				page_to_phys(vcpu->kvm->arch.apic_access_page));
+
+	if (is_guest_mode(vcpu))
+		kvm_x86_ops->set_nested_apic_page_migrated(vcpu, true);
 }
 
 /*
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index d8280de..784127e 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -214,6 +214,7 @@ void kvm_reload_apic_access_page(struct kvm *kvm)
 {
 	make_all_cpus_request(kvm, KVM_REQ_APIC_PAGE_RELOAD);
 }
+EXPORT_SYMBOL_GPL(kvm_reload_apic_access_page);
 
 int kvm_vcpu_init(struct kvm_vcpu *vcpu, struct kvm *kvm, unsigned id)
 {
-- 
1.8.3.1



* [PATCH v4 6/6] kvm, mem-hotplug: Do not pin apic access page in memory.
  2014-08-27 10:17 [PATCH v4 0/6] kvm, mem-hotplug: Do not pin ept identity pagetable and apic access page Tang Chen
                   ` (4 preceding siblings ...)
  2014-08-27 10:17 ` [PATCH v4 5/6] kvm, mem-hotplug: Reload L1's apic access page on migration when L2 is running Tang Chen
@ 2014-08-27 10:17 ` Tang Chen
  2014-09-01  5:14 ` [PATCH v4 0/6] kvm, mem-hotplug: Do not pin ept identity pagetable and apic access page tangchen
  6 siblings, 0 replies; 17+ messages in thread
From: Tang Chen @ 2014-08-27 10:17 UTC (permalink / raw)
  To: gleb, mtosatti, nadav.amit, jan.kiszka
  Cc: kvm, laijs, isimatu.yasuaki, guz.fnst, linux-kernel, tangchen

gfn_to_page() eventually calls hva_to_pfn() to get the pfn, pinning the page
in memory via the GUP functions. The new gfn_to_page_no_pin() drops that pin
again.

After this patch, the apic access page is able to be migrated.
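The reference-count effect of the new helper can be modeled with a plain
counter (a user-space sketch; for a real struct page the pin is taken by GUP
and dropped by put_page()):

```c
struct fake_page {
	int refcount;
};

/* One base reference, as a page in a memslot would hold. */
static struct fake_page apic_page = { .refcount = 1 };

/* gfn_to_page(): GUP pins the page, taking a reference. */
static struct fake_page *gfn_to_page_sketch(void)
{
	apic_page.refcount++;
	return &apic_page;
}

static void put_page_sketch(struct fake_page *p)
{
	p->refcount--;
}

/*
 * gfn_to_page_no_pin(): same lookup, but drop the reference right away so
 * the page stays migratable. The caller must not rely on the page staying
 * valid beyond the mmu_notifier-driven reload that patch 4/6 set up.
 */
static struct fake_page *gfn_to_page_no_pin_sketch(void)
{
	struct fake_page *p = gfn_to_page_sketch();

	put_page_sketch(p);
	return p;
}
```

After the no-pin lookup the refcount is back at its base value, which is also
why the put_page() call in kvm_arch_destroy_vm() can be removed.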

Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
---
 arch/x86/kvm/vmx.c       |  2 +-
 arch/x86/kvm/x86.c       |  4 +---
 include/linux/kvm_host.h |  1 +
 virt/kvm/kvm_main.c      | 17 ++++++++++++++++-
 4 files changed, 19 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 9035fd1..e0043a5 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -4022,7 +4022,7 @@ static int alloc_apic_access_page(struct kvm *kvm)
 	if (r)
 		goto out;
 
-	page = gfn_to_page(kvm, APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT);
+	page = gfn_to_page_no_pin(kvm, APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT);
 	if (is_error_page(page)) {
 		r = -EFAULT;
 		goto out;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 131b6e8..2edbeb9 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5996,7 +5996,7 @@ static void vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
 	 * GUP will wait till the migrate entry is replaced with the new pte
 	 * entry pointing to the new page.
 	 */
-	vcpu->kvm->arch.apic_access_page = gfn_to_page(vcpu->kvm,
+	vcpu->kvm->arch.apic_access_page = gfn_to_page_no_pin(vcpu->kvm,
 				APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT);
 	kvm_x86_ops->set_apic_access_page_addr(vcpu->kvm,
 				page_to_phys(vcpu->kvm->arch.apic_access_page));
@@ -7255,8 +7255,6 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
 	kfree(kvm->arch.vpic);
 	kfree(kvm->arch.vioapic);
 	kvm_free_vcpus(kvm);
-	if (kvm->arch.apic_access_page)
-		put_page(kvm->arch.apic_access_page);
 	kfree(rcu_dereference_check(kvm->arch.apic_map, 1));
 }
 
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 8be076a..02cbcb1 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -526,6 +526,7 @@ int gfn_to_page_many_atomic(struct kvm *kvm, gfn_t gfn, struct page **pages,
 			    int nr_pages);
 
 struct page *gfn_to_page(struct kvm *kvm, gfn_t gfn);
+struct page *gfn_to_page_no_pin(struct kvm *kvm, gfn_t gfn);
 unsigned long gfn_to_hva(struct kvm *kvm, gfn_t gfn);
 unsigned long gfn_to_hva_prot(struct kvm *kvm, gfn_t gfn, bool *writable);
 unsigned long gfn_to_hva_memslot(struct kvm_memory_slot *slot, gfn_t gfn);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 784127e..19d90d2 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1386,9 +1386,24 @@ struct page *gfn_to_page(struct kvm *kvm, gfn_t gfn)
 
 	return kvm_pfn_to_page(pfn);
 }
-
 EXPORT_SYMBOL_GPL(gfn_to_page);
 
+struct page *gfn_to_page_no_pin(struct kvm *kvm, gfn_t gfn)
+{
+	struct page *page = gfn_to_page(kvm, gfn);
+
+	/*
+	 * gfn_to_page() eventually calls hva_to_pfn() to get the pfn, and pins
+	 * the page in memory by calling the GUP functions. This function drops
+	 * that pin again.
+	 */
+	if (!is_error_page(page))
+		put_page(page);
+
+	return page;
+}
+EXPORT_SYMBOL_GPL(gfn_to_page_no_pin);
+
 void kvm_release_page_clean(struct page *page)
 {
 	WARN_ON(is_error_page(page));
-- 
1.8.3.1



* Re: [PATCH v4 0/6] kvm, mem-hotplug: Do not pin ept identity pagetable and apic access page.
  2014-08-27 10:17 [PATCH v4 0/6] kvm, mem-hotplug: Do not pin ept identity pagetable and apic access page Tang Chen
                   ` (5 preceding siblings ...)
  2014-08-27 10:17 ` [PATCH v4 6/6] kvm, mem-hotplug: Do not pin apic access page in memory Tang Chen
@ 2014-09-01  5:14 ` tangchen
  6 siblings, 0 replies; 17+ messages in thread
From: tangchen @ 2014-09-01  5:14 UTC (permalink / raw)
  To: gleb, mtosatti, nadav.amit, jan.kiszka
  Cc: kvm, laijs, isimatu.yasuaki, guz.fnst, linux-kernel, tangchen

Hi Gleb,

Would you please help review these patches?

Thanks.

On 08/27/2014 06:17 PM, Tang Chen wrote:
> ept identity pagetable and apic access page in kvm are pinned in memory.
> As a result, they cannot be migrated/hot-removed.
>
> But actually they don't need to be pinned in memory.
>
> [For ept identity page]
> Just do not pin it. When it is migrated, guest will be able to find the
> new page in the next ept violation.
>
> [For apic access page]
> The hpa of apic access page is stored in VMCS APIC_ACCESS_ADDR pointer.
> When apic access page is migrated, we update VMCS APIC_ACCESS_ADDR pointer
> for each vcpu in addition.
>
> NOTE: Tested with -cpu xxx,-x2apic option.
>        But since nested vm pins some other pages in memory, if user uses nested
>        vm, memory hot-remove will not work.
>
> Change log v3 -> v4:
> 1. The original patch 6 is now patch 5. ( by Jan Kiszka <jan.kiszka@web.de> )
> 2. The original patch 1 is now patch 6 since we should unpin apic access page
>     at the very last moment.
>
>
> Tang Chen (6):
>    kvm: Use APIC_DEFAULT_PHYS_BASE macro as the apic access page address.
>    kvm: Remove ept_identity_pagetable from struct kvm_arch.
>    kvm: Make init_rmode_identity_map() return 0 on success.
>    kvm, mem-hotplug: Reload L1' apic access page on migration in
>      vcpu_enter_guest().
>    kvm, mem-hotplug: Reload L1's apic access page on migration when L2 is
>      running.
>    kvm, mem-hotplug: Do not pin apic access page in memory.
>
>   arch/x86/include/asm/kvm_host.h |   3 +-
>   arch/x86/kvm/svm.c              |  15 +++++-
>   arch/x86/kvm/vmx.c              | 103 +++++++++++++++++++++++++++-------------
>   arch/x86/kvm/x86.c              |  22 +++++++--
>   include/linux/kvm_host.h        |   3 ++
>   virt/kvm/kvm_main.c             |  30 +++++++++++-
>   6 files changed, 135 insertions(+), 41 deletions(-)
>


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v4 4/6] kvm, mem-hotplug: Reload L1' apic access page on migration in vcpu_enter_guest().
  2014-08-27 10:17 ` [PATCH v4 4/6] kvm, mem-hotplug: Reload L1' apic access page on migration in vcpu_enter_guest() Tang Chen
@ 2014-09-02 16:00   ` Gleb Natapov
  2014-09-03  1:42     ` tangchen
  0 siblings, 1 reply; 17+ messages in thread
From: Gleb Natapov @ 2014-09-02 16:00 UTC (permalink / raw)
  To: Tang Chen
  Cc: mtosatti, nadav.amit, jan.kiszka, kvm, laijs, isimatu.yasuaki,
	guz.fnst, linux-kernel

On Wed, Aug 27, 2014 at 06:17:39PM +0800, Tang Chen wrote:
> The apic access page is pinned in memory. As a result, it cannot be migrated/hot-removed.
> Actually, it does not need to be pinned.
> 
> The hpa of the apic access page is stored in the VMCS APIC_ACCESS_ADDR pointer. When
> the page is migrated, kvm_mmu_notifier_invalidate_page() will invalidate the
> corresponding ept entry. This patch introduces a new vcpu request named
> KVM_REQ_APIC_PAGE_RELOAD, makes this request to all the vcpus at that time,
> and forces all the vcpus to exit the guest and re-enter it, updating the VMCS
> APIC_ACCESS_ADDR pointer to the new apic access page address and
> kvm->arch.apic_access_page to the new page.
> 
> Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
> ---
>  arch/x86/include/asm/kvm_host.h |  1 +
>  arch/x86/kvm/svm.c              |  6 ++++++
>  arch/x86/kvm/vmx.c              |  6 ++++++
>  arch/x86/kvm/x86.c              | 15 +++++++++++++++
>  include/linux/kvm_host.h        |  2 ++
>  virt/kvm/kvm_main.c             | 12 ++++++++++++
>  6 files changed, 42 insertions(+)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 35171c7..514183e 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -739,6 +739,7 @@ struct kvm_x86_ops {
>  	void (*hwapic_isr_update)(struct kvm *kvm, int isr);
>  	void (*load_eoi_exitmap)(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap);
>  	void (*set_virtual_x2apic_mode)(struct kvm_vcpu *vcpu, bool set);
> +	void (*set_apic_access_page_addr)(struct kvm *kvm, hpa_t hpa);
>  	void (*deliver_posted_interrupt)(struct kvm_vcpu *vcpu, int vector);
>  	void (*sync_pir_to_irr)(struct kvm_vcpu *vcpu);
>  	int (*set_tss_addr)(struct kvm *kvm, unsigned int addr);
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index 1d941ad..f2eacc4 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -3619,6 +3619,11 @@ static void svm_set_virtual_x2apic_mode(struct kvm_vcpu *vcpu, bool set)
>  	return;
>  }
>  
> +static void svm_set_apic_access_page_addr(struct kvm *kvm, hpa_t hpa)
> +{
> +	return;
> +}
> +
>  static int svm_vm_has_apicv(struct kvm *kvm)
>  {
>  	return 0;
> @@ -4373,6 +4378,7 @@ static struct kvm_x86_ops svm_x86_ops = {
>  	.enable_irq_window = enable_irq_window,
>  	.update_cr8_intercept = update_cr8_intercept,
>  	.set_virtual_x2apic_mode = svm_set_virtual_x2apic_mode,
> +	.set_apic_access_page_addr = svm_set_apic_access_page_addr,
>  	.vm_has_apicv = svm_vm_has_apicv,
>  	.load_eoi_exitmap = svm_load_eoi_exitmap,
>  	.hwapic_isr_update = svm_hwapic_isr_update,
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index 63c4c3e..da6d55d 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -7093,6 +7093,11 @@ static void vmx_set_virtual_x2apic_mode(struct kvm_vcpu *vcpu, bool set)
>  	vmx_set_msr_bitmap(vcpu);
>  }
>  
> +static void vmx_set_apic_access_page_addr(struct kvm *kvm, hpa_t hpa)
> +{
> +	vmcs_write64(APIC_ACCESS_ADDR, hpa);
> +}
> +
>  static void vmx_hwapic_isr_update(struct kvm *kvm, int isr)
>  {
>  	u16 status;
> @@ -8910,6 +8915,7 @@ static struct kvm_x86_ops vmx_x86_ops = {
>  	.enable_irq_window = enable_irq_window,
>  	.update_cr8_intercept = update_cr8_intercept,
>  	.set_virtual_x2apic_mode = vmx_set_virtual_x2apic_mode,
> +	.set_apic_access_page_addr = vmx_set_apic_access_page_addr,
>  	.vm_has_apicv = vmx_vm_has_apicv,
>  	.load_eoi_exitmap = vmx_load_eoi_exitmap,
>  	.hwapic_irr_update = vmx_hwapic_irr_update,
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index e05bd58..96f4188 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -5989,6 +5989,19 @@ static void vcpu_scan_ioapic(struct kvm_vcpu *vcpu)
>  	kvm_apic_update_tmr(vcpu, tmr);
>  }
>  
> +static void vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
> +{
> +	/*
> +	 * apic access page could be migrated. When the page is being migrated,
> +	 * GUP will wait till the migration entry is replaced with the new pte
> +	 * entry pointing to the new page.
> +	 */
> +	vcpu->kvm->arch.apic_access_page = gfn_to_page(vcpu->kvm,
> +				APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT);
> +	kvm_x86_ops->set_apic_access_page_addr(vcpu->kvm,
> +				page_to_phys(vcpu->kvm->arch.apic_access_page));
I am a little bit worried that here all vcpus write to vcpu->kvm->arch.apic_access_page
without any locking. It is probably benign since pointer write is atomic on x86. Paolo?

Do we even need apic_access_page? Why not call
 gfn_to_page(APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT)
 put_page()
on rare occasions we need to know its address?

> +}
> +
>  /*
>   * Returns 1 to let __vcpu_run() continue the guest execution loop without
>   * exiting to the userspace.  Otherwise, the value will be returned to the
> @@ -6049,6 +6062,8 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
>  			kvm_deliver_pmi(vcpu);
>  		if (kvm_check_request(KVM_REQ_SCAN_IOAPIC, vcpu))
>  			vcpu_scan_ioapic(vcpu);
> +		if (kvm_check_request(KVM_REQ_APIC_PAGE_RELOAD, vcpu))
> +			vcpu_reload_apic_access_page(vcpu);
>  	}
>  
>  	if (kvm_check_request(KVM_REQ_EVENT, vcpu) || req_int_win) {
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index a4c33b3..8be076a 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -136,6 +136,7 @@ static inline bool is_error_page(struct page *page)
>  #define KVM_REQ_GLOBAL_CLOCK_UPDATE 22
>  #define KVM_REQ_ENABLE_IBS        23
>  #define KVM_REQ_DISABLE_IBS       24
> +#define KVM_REQ_APIC_PAGE_RELOAD  25
>  
>  #define KVM_USERSPACE_IRQ_SOURCE_ID		0
>  #define KVM_IRQFD_RESAMPLE_IRQ_SOURCE_ID	1
> @@ -579,6 +580,7 @@ void kvm_flush_remote_tlbs(struct kvm *kvm);
>  void kvm_reload_remote_mmus(struct kvm *kvm);
>  void kvm_make_mclock_inprogress_request(struct kvm *kvm);
>  void kvm_make_scan_ioapic_request(struct kvm *kvm);
> +void kvm_reload_apic_access_page(struct kvm *kvm);
>  
>  long kvm_arch_dev_ioctl(struct file *filp,
>  			unsigned int ioctl, unsigned long arg);
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 33712fb..d8280de 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -210,6 +210,11 @@ void kvm_make_scan_ioapic_request(struct kvm *kvm)
>  	make_all_cpus_request(kvm, KVM_REQ_SCAN_IOAPIC);
>  }
>  
> +void kvm_reload_apic_access_page(struct kvm *kvm)
> +{
> +	make_all_cpus_request(kvm, KVM_REQ_APIC_PAGE_RELOAD);
> +}
> +
>  int kvm_vcpu_init(struct kvm_vcpu *vcpu, struct kvm *kvm, unsigned id)
>  {
>  	struct page *page;
> @@ -294,6 +299,13 @@ static void kvm_mmu_notifier_invalidate_page(struct mmu_notifier *mn,
>  	if (need_tlb_flush)
>  		kvm_flush_remote_tlbs(kvm);
>  
> +	/*
> +	 * The physical address of the apic access page is stored in VMCS.
> +	 * So we need to update it when it becomes invalid.
> +	 */
> +	if (address == gfn_to_hva(kvm, APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT))
> +		kvm_reload_apic_access_page(kvm);
> +
>  	spin_unlock(&kvm->mmu_lock);
>  	srcu_read_unlock(&kvm->srcu, idx);
>  }
> -- 
> 1.8.3.1
> 

--
			Gleb.

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v4 4/6] kvm, mem-hotplug: Reload L1' apic access page on migration in vcpu_enter_guest().
  2014-09-02 16:00   ` Gleb Natapov
@ 2014-09-03  1:42     ` tangchen
  2014-09-03 15:04       ` Gleb Natapov
  0 siblings, 1 reply; 17+ messages in thread
From: tangchen @ 2014-09-03  1:42 UTC (permalink / raw)
  To: Gleb Natapov
  Cc: mtosatti, nadav.amit, jan.kiszka, kvm, laijs, isimatu.yasuaki,
	guz.fnst, linux-kernel, tangchen

Hi Gleb,

On 09/03/2014 12:00 AM, Gleb Natapov wrote:
> ......
> +static void vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
> +{
> +	/*
> +	 * apic access page could be migrated. When the page is being migrated,
> +	 * GUP will wait till the migrate entry is replaced with the new pte
> +	 * entry pointing to the new page.
> +	 */
> +	vcpu->kvm->arch.apic_access_page = gfn_to_page(vcpu->kvm,
> +				APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT);
> +	kvm_x86_ops->set_apic_access_page_addr(vcpu->kvm,
> +				page_to_phys(vcpu->kvm->arch.apic_access_page));
> I am a little bit worried that here all vcpus write to vcpu->kvm->arch.apic_access_page
> without any locking. It is probably benign since pointer write is atomic on x86. Paolo?
>
> Do we even need apic_access_page? Why not call
>   gfn_to_page(APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT)
>   put_page()
> on rare occasions we need to know its address?

Isn't it a necessary item defined in the hardware spec?

I haven't read the Intel spec deeply, but according to the code, the page's
address is written into the vmcs, and that made me think we cannot remove it.

Thanks.

>
>> +}
>> +
>>   /*
>>    * Returns 1 to let __vcpu_run() continue the guest execution loop without
>>    * exiting to the userspace.  Otherwise, the value will be returned to the
>> @@ -6049,6 +6062,8 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
>>   			kvm_deliver_pmi(vcpu);
>>   		if (kvm_check_request(KVM_REQ_SCAN_IOAPIC, vcpu))
>>   			vcpu_scan_ioapic(vcpu);
>> +		if (kvm_check_request(KVM_REQ_APIC_PAGE_RELOAD, vcpu))
>> +			vcpu_reload_apic_access_page(vcpu);
>>   	}
>>   
>>   	if (kvm_check_request(KVM_REQ_EVENT, vcpu) || req_int_win) {
>> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
>> index a4c33b3..8be076a 100644
>> --- a/include/linux/kvm_host.h
>> +++ b/include/linux/kvm_host.h
>> @@ -136,6 +136,7 @@ static inline bool is_error_page(struct page *page)
>>   #define KVM_REQ_GLOBAL_CLOCK_UPDATE 22
>>   #define KVM_REQ_ENABLE_IBS        23
>>   #define KVM_REQ_DISABLE_IBS       24
>> +#define KVM_REQ_APIC_PAGE_RELOAD  25
>>   
>>   #define KVM_USERSPACE_IRQ_SOURCE_ID		0
>>   #define KVM_IRQFD_RESAMPLE_IRQ_SOURCE_ID	1
>> @@ -579,6 +580,7 @@ void kvm_flush_remote_tlbs(struct kvm *kvm);
>>   void kvm_reload_remote_mmus(struct kvm *kvm);
>>   void kvm_make_mclock_inprogress_request(struct kvm *kvm);
>>   void kvm_make_scan_ioapic_request(struct kvm *kvm);
>> +void kvm_reload_apic_access_page(struct kvm *kvm);
>>   
>>   long kvm_arch_dev_ioctl(struct file *filp,
>>   			unsigned int ioctl, unsigned long arg);
>> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
>> index 33712fb..d8280de 100644
>> --- a/virt/kvm/kvm_main.c
>> +++ b/virt/kvm/kvm_main.c
>> @@ -210,6 +210,11 @@ void kvm_make_scan_ioapic_request(struct kvm *kvm)
>>   	make_all_cpus_request(kvm, KVM_REQ_SCAN_IOAPIC);
>>   }
>>   
>> +void kvm_reload_apic_access_page(struct kvm *kvm)
>> +{
>> +	make_all_cpus_request(kvm, KVM_REQ_APIC_PAGE_RELOAD);
>> +}
>> +
>>   int kvm_vcpu_init(struct kvm_vcpu *vcpu, struct kvm *kvm, unsigned id)
>>   {
>>   	struct page *page;
>> @@ -294,6 +299,13 @@ static void kvm_mmu_notifier_invalidate_page(struct mmu_notifier *mn,
>>   	if (need_tlb_flush)
>>   		kvm_flush_remote_tlbs(kvm);
>>   
>> +	/*
>> +	 * The physical address of the apic access page is stored in VMCS.
>> +	 * So we need to update it when it becomes invalid.
>> +	 */
>> +	if (address == gfn_to_hva(kvm, APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT))
>> +		kvm_reload_apic_access_page(kvm);
>> +
>>   	spin_unlock(&kvm->mmu_lock);
>>   	srcu_read_unlock(&kvm->srcu, idx);
>>   }
>> -- 
>> 1.8.3.1
>>
> --
> 			Gleb.
> .
>


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v4 5/6] kvm, mem-hotplug: Reload L1's apic access page on migration when L2 is running.
  2014-08-27 10:17 ` [PATCH v4 5/6] kvm, mem-hotplug: Reload L1's apic access page on migration when L2 is running Tang Chen
@ 2014-09-03  1:48   ` tangchen
  2014-09-03 15:08   ` Gleb Natapov
  1 sibling, 0 replies; 17+ messages in thread
From: tangchen @ 2014-09-03  1:48 UTC (permalink / raw)
  To: gleb, mtosatti, nadav.amit, jan.kiszka
  Cc: kvm, laijs, isimatu.yasuaki, guz.fnst, linux-kernel, tangchen

Hi Gleb,

By the way, when testing the nested vm, I started the L1 and L2 vms with
         -cpu XXX,-x2apic

But with or without this patch 5/6, the nested vm did not get corrupted
when the apic access page was migrated.

We cannot migrate the L2 vm itself because it pins some other pages in
memory. Without this patch, I expected the L2 vm to get corrupted when the
apic access page was migrated, but it didn't.

Did I make any mistake that you can obviously spot?

Thanks.

On 08/27/2014 06:17 PM, Tang Chen wrote:
> This patch only handles the "L1 and L2 vms share one apic access page" situation.
>
> When the L1 vm is running, if the shared apic access page is migrated, mmu_notifier will
> request all vcpus to exit to L0 and reload the apic access page physical address in
> all the vcpus' vmcs (which is done by patch 5/6). And when it enters the L2 vm, L2's vmcs
> will be updated in prepare_vmcs02(), called by nested_vm_run(). So we don't need to
> do anything.
>
> When the L2 vm is running, if the shared apic access page is migrated, mmu_notifier will
> request all vcpus to exit to L0 and reload the apic access page physical address in
> all L2 vmcs. And this patch requests an apic access page reload in the L2->L1 vmexit.
>
> Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
> ---
>   arch/x86/include/asm/kvm_host.h |  1 +
>   arch/x86/kvm/svm.c              |  6 ++++++
>   arch/x86/kvm/vmx.c              | 32 ++++++++++++++++++++++++++++++++
>   arch/x86/kvm/x86.c              |  3 +++
>   virt/kvm/kvm_main.c             |  1 +
>   5 files changed, 43 insertions(+)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 514183e..13fbb62 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -740,6 +740,7 @@ struct kvm_x86_ops {
>   	void (*load_eoi_exitmap)(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap);
>   	void (*set_virtual_x2apic_mode)(struct kvm_vcpu *vcpu, bool set);
>   	void (*set_apic_access_page_addr)(struct kvm *kvm, hpa_t hpa);
> +	void (*set_nested_apic_page_migrated)(struct kvm_vcpu *vcpu, bool set);
>   	void (*deliver_posted_interrupt)(struct kvm_vcpu *vcpu, int vector);
>   	void (*sync_pir_to_irr)(struct kvm_vcpu *vcpu);
>   	int (*set_tss_addr)(struct kvm *kvm, unsigned int addr);
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index f2eacc4..da88646 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -3624,6 +3624,11 @@ static void svm_set_apic_access_page_addr(struct kvm *kvm, hpa_t hpa)
>   	return;
>   }
>   
> +static void svm_set_nested_apic_page_migrated(struct kvm_vcpu *vcpu, bool set)
> +{
> +	return;
> +}
> +
>   static int svm_vm_has_apicv(struct kvm *kvm)
>   {
>   	return 0;
> @@ -4379,6 +4384,7 @@ static struct kvm_x86_ops svm_x86_ops = {
>   	.update_cr8_intercept = update_cr8_intercept,
>   	.set_virtual_x2apic_mode = svm_set_virtual_x2apic_mode,
>   	.set_apic_access_page_addr = svm_set_apic_access_page_addr,
> +	.set_nested_apic_page_migrated = svm_set_nested_apic_page_migrated,
>   	.vm_has_apicv = svm_vm_has_apicv,
>   	.load_eoi_exitmap = svm_load_eoi_exitmap,
>   	.hwapic_isr_update = svm_hwapic_isr_update,
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index da6d55d..9035fd1 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -379,6 +379,16 @@ struct nested_vmx {
>   	 * we must keep them pinned while L2 runs.
>   	 */
>   	struct page *apic_access_page;
> +	/*
> +	 * L1's apic access page can be migrated. When L1 and L2 are sharing
> +	 * the apic access page, after the page is migrated when L2 is running,
> +	 * we have to reload it to L1 vmcs before we enter L1.
> +	 *
> +	 * When the shared apic access page is migrated in L1 mode, we don't
> +	 * need to do anything else because we reload apic access page each
> +	 * time when entering L2 in prepare_vmcs02().
> +	 */
> +	bool apic_access_page_migrated;
>   	u64 msr_ia32_feature_control;
>   
>   	struct hrtimer preemption_timer;
> @@ -7098,6 +7108,12 @@ static void vmx_set_apic_access_page_addr(struct kvm *kvm, hpa_t hpa)
>   	vmcs_write64(APIC_ACCESS_ADDR, hpa);
>   }
>   
> +static void vmx_set_nested_apic_page_migrated(struct kvm_vcpu *vcpu, bool set)
> +{
> +	struct vcpu_vmx *vmx = to_vmx(vcpu);
> +	vmx->nested.apic_access_page_migrated = set;
> +}
> +
>   static void vmx_hwapic_isr_update(struct kvm *kvm, int isr)
>   {
>   	u16 status;
> @@ -8796,6 +8812,21 @@ static void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 exit_reason,
>   	}
>   
>   	/*
> +	 * When the shared (L1 & L2) apic access page is migrated while L2 is
> +	 * running, mmu_notifier will force a reload of the page's hpa in the
> +	 * L2 vmcs. We need to reload it for L1 before entering L1.
> +	 */
> +	if (vmx->nested.apic_access_page_migrated) {
> +		/*
> +		 * Do not call kvm_reload_apic_access_page() because we are now
> +		 * in L2. We should not call make_all_cpus_request() to exit to
> +		 * L0, otherwise we will reload for L2 vmcs again.
> +		 */
> +		kvm_reload_apic_access_page(vcpu->kvm);
> +		vmx->nested.apic_access_page_migrated = false;
> +	}
> +
> +	/*
>   	 * Exiting from L2 to L1, we're now back to L1 which thinks it just
>   	 * finished a VMLAUNCH or VMRESUME instruction, so we need to set the
>   	 * success or failure flag accordingly.
> @@ -8916,6 +8947,7 @@ static struct kvm_x86_ops vmx_x86_ops = {
>   	.update_cr8_intercept = update_cr8_intercept,
>   	.set_virtual_x2apic_mode = vmx_set_virtual_x2apic_mode,
>   	.set_apic_access_page_addr = vmx_set_apic_access_page_addr,
> +	.set_nested_apic_page_migrated = vmx_set_nested_apic_page_migrated,
>   	.vm_has_apicv = vmx_vm_has_apicv,
>   	.load_eoi_exitmap = vmx_load_eoi_exitmap,
>   	.hwapic_irr_update = vmx_hwapic_irr_update,
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 96f4188..131b6e8 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -6000,6 +6000,9 @@ static void vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
>   				APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT);
>   	kvm_x86_ops->set_apic_access_page_addr(vcpu->kvm,
>   				page_to_phys(vcpu->kvm->arch.apic_access_page));
> +
> +	if (is_guest_mode(vcpu))
> +		kvm_x86_ops->set_nested_apic_page_migrated(vcpu, true);
>   }
>   
>   /*
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index d8280de..784127e 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -214,6 +214,7 @@ void kvm_reload_apic_access_page(struct kvm *kvm)
>   {
>   	make_all_cpus_request(kvm, KVM_REQ_APIC_PAGE_RELOAD);
>   }
> +EXPORT_SYMBOL_GPL(kvm_reload_apic_access_page);
>   
>   int kvm_vcpu_init(struct kvm_vcpu *vcpu, struct kvm *kvm, unsigned id)
>   {


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v4 4/6] kvm, mem-hotplug: Reload L1' apic access page on migration in vcpu_enter_guest().
  2014-09-03  1:42     ` tangchen
@ 2014-09-03 15:04       ` Gleb Natapov
  2014-09-09  7:13         ` tangchen
  0 siblings, 1 reply; 17+ messages in thread
From: Gleb Natapov @ 2014-09-03 15:04 UTC (permalink / raw)
  To: tangchen
  Cc: mtosatti, nadav.amit, jan.kiszka, kvm, laijs, isimatu.yasuaki,
	guz.fnst, linux-kernel

On Wed, Sep 03, 2014 at 09:42:30AM +0800, tangchen wrote:
> Hi Gleb,
> 
> On 09/03/2014 12:00 AM, Gleb Natapov wrote:
> >......
> >+static void vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
> >+{
> >+	/*
> >+	 * apic access page could be migrated. When the page is being migrated,
> >+	 * GUP will wait till the migrate entry is replaced with the new pte
> >+	 * entry pointing to the new page.
> >+	 */
> >+	vcpu->kvm->arch.apic_access_page = gfn_to_page(vcpu->kvm,
> >+				APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT);
> >+	kvm_x86_ops->set_apic_access_page_addr(vcpu->kvm,
> >+				page_to_phys(vcpu->kvm->arch.apic_access_page));
> >I am a little bit worried that here all vcpus write to vcpu->kvm->arch.apic_access_page
> >without any locking. It is probably benign since pointer write is atomic on x86. Paolo?
> >
> >Do we even need apic_access_page? Why not call
> >  gfn_to_page(APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT)
> >  put_page()
> >on rare occasions we need to know its address?
> 
> Isn't it a necessary item defined in the hardware spec?
> 
vcpu->kvm->arch.apic_access_page? No. This is internal kvm data structure.

> I haven't read the Intel spec deeply, but according to the code, the page's
> address is written into the vmcs, and that made me think we cannot remove it.
> 
We cannot remove the writing of the apic page address into the vmcs, but that is done
not by assigning to vcpu->kvm->arch.apic_access_page, but by the vmwrite in
set_apic_access_page_addr().

--
			Gleb.

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v4 5/6] kvm, mem-hotplug: Reload L1's apic access page on migration when L2 is running.
  2014-08-27 10:17 ` [PATCH v4 5/6] kvm, mem-hotplug: Reload L1's apic access page on migration when L2 is running Tang Chen
  2014-09-03  1:48   ` tangchen
@ 2014-09-03 15:08   ` Gleb Natapov
  1 sibling, 0 replies; 17+ messages in thread
From: Gleb Natapov @ 2014-09-03 15:08 UTC (permalink / raw)
  To: Tang Chen
  Cc: mtosatti, nadav.amit, jan.kiszka, kvm, laijs, isimatu.yasuaki,
	guz.fnst, linux-kernel

On Wed, Aug 27, 2014 at 06:17:40PM +0800, Tang Chen wrote:
> This patch only handles the "L1 and L2 vms share one apic access page" situation.
> 
> When the L1 vm is running, if the shared apic access page is migrated, mmu_notifier will
> request all vcpus to exit to L0 and reload the apic access page physical address in
> all the vcpus' vmcs (which is done by patch 5/6). And when it enters the L2 vm, L2's vmcs
> will be updated in prepare_vmcs02(), called by nested_vm_run(). So we don't need to
> do anything.
> 
> When the L2 vm is running, if the shared apic access page is migrated, mmu_notifier will
> request all vcpus to exit to L0 and reload the apic access page physical address in
> all L2 vmcs. And this patch requests an apic access page reload in the L2->L1 vmexit.
> 
> Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
> ---
>  arch/x86/include/asm/kvm_host.h |  1 +
>  arch/x86/kvm/svm.c              |  6 ++++++
>  arch/x86/kvm/vmx.c              | 32 ++++++++++++++++++++++++++++++++
>  arch/x86/kvm/x86.c              |  3 +++
>  virt/kvm/kvm_main.c             |  1 +
>  5 files changed, 43 insertions(+)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 514183e..13fbb62 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -740,6 +740,7 @@ struct kvm_x86_ops {
>  	void (*load_eoi_exitmap)(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap);
>  	void (*set_virtual_x2apic_mode)(struct kvm_vcpu *vcpu, bool set);
>  	void (*set_apic_access_page_addr)(struct kvm *kvm, hpa_t hpa);
> +	void (*set_nested_apic_page_migrated)(struct kvm_vcpu *vcpu, bool set);
>  	void (*deliver_posted_interrupt)(struct kvm_vcpu *vcpu, int vector);
>  	void (*sync_pir_to_irr)(struct kvm_vcpu *vcpu);
>  	int (*set_tss_addr)(struct kvm *kvm, unsigned int addr);
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index f2eacc4..da88646 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -3624,6 +3624,11 @@ static void svm_set_apic_access_page_addr(struct kvm *kvm, hpa_t hpa)
>  	return;
>  }
>  
> +static void svm_set_nested_apic_page_migrated(struct kvm_vcpu *vcpu, bool set)
> +{
> +	return;
> +}
> +
>  static int svm_vm_has_apicv(struct kvm *kvm)
>  {
>  	return 0;
> @@ -4379,6 +4384,7 @@ static struct kvm_x86_ops svm_x86_ops = {
>  	.update_cr8_intercept = update_cr8_intercept,
>  	.set_virtual_x2apic_mode = svm_set_virtual_x2apic_mode,
>  	.set_apic_access_page_addr = svm_set_apic_access_page_addr,
> +	.set_nested_apic_page_migrated = svm_set_nested_apic_page_migrated,
>  	.vm_has_apicv = svm_vm_has_apicv,
>  	.load_eoi_exitmap = svm_load_eoi_exitmap,
>  	.hwapic_isr_update = svm_hwapic_isr_update,
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index da6d55d..9035fd1 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -379,6 +379,16 @@ struct nested_vmx {
>  	 * we must keep them pinned while L2 runs.
>  	 */
>  	struct page *apic_access_page;
> +	/*
> +	 * L1's apic access page can be migrated. When L1 and L2 are sharing
> +	 * the apic access page, after the page is migrated when L2 is running,
> +	 * we have to reload it to L1 vmcs before we enter L1.
> +	 *
> +	 * When the shared apic access page is migrated in L1 mode, we don't
> +	 * need to do anything else because we reload apic access page each
> +	 * time when entering L2 in prepare_vmcs02().
> +	 */
> +	bool apic_access_page_migrated;
>  	u64 msr_ia32_feature_control;
>  
>  	struct hrtimer preemption_timer;
> @@ -7098,6 +7108,12 @@ static void vmx_set_apic_access_page_addr(struct kvm *kvm, hpa_t hpa)
>  	vmcs_write64(APIC_ACCESS_ADDR, hpa);
>  }
>  
> +static void vmx_set_nested_apic_page_migrated(struct kvm_vcpu *vcpu, bool set)
> +{
> +	struct vcpu_vmx *vmx = to_vmx(vcpu);
> +	vmx->nested.apic_access_page_migrated = set;
> +}
> +
>  static void vmx_hwapic_isr_update(struct kvm *kvm, int isr)
>  {
>  	u16 status;
> @@ -8796,6 +8812,21 @@ static void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 exit_reason,
>  	}
>  
>  	/*
> +	 * When the shared (L1 & L2) apic access page is migrated while L2 is
> +	 * running, mmu_notifier will force a reload of the page's hpa in the
> +	 * L2 vmcs. We need to reload it for L1 before entering L1.
> +	 */
> +	if (vmx->nested.apic_access_page_migrated) {
> +		/*
> +		 * Do not call kvm_reload_apic_access_page() because we are now
> +		 * in L2. We should not call make_all_cpus_request() to exit to
> +		 * L0, otherwise we will reload for L2 vmcs again.
> +		 */
> +		kvm_reload_apic_access_page(vcpu->kvm);
> +		vmx->nested.apic_access_page_migrated = false;
> +	}
I would just call kvm_reload_apic_access_page() unconditionally, and optimize it
further only if that proves to be a performance problem. Vmexit emulation is
pretty heavy, so I doubt one more vmwrite will be noticeable.

--
			Gleb.

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v4 4/6] kvm, mem-hotplug: Reload L1' apic access page on migration in vcpu_enter_guest().
  2014-09-03 15:04       ` Gleb Natapov
@ 2014-09-09  7:13         ` tangchen
  2014-09-10 14:01           ` Gleb Natapov
  0 siblings, 1 reply; 17+ messages in thread
From: tangchen @ 2014-09-09  7:13 UTC (permalink / raw)
  To: Gleb Natapov
  Cc: mtosatti, nadav.amit, jan.kiszka, kvm, laijs, isimatu.yasuaki,
	guz.fnst, linux-kernel, tangchen

Hi Gleb,

On 09/03/2014 11:04 PM, Gleb Natapov wrote:
> On Wed, Sep 03, 2014 at 09:42:30AM +0800, tangchen wrote:
>> Hi Gleb,
>>
>> On 09/03/2014 12:00 AM, Gleb Natapov wrote:
>>> ......
>>> +static void vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
>>> +{
>>> +	/*
>>> +	 * apic access page could be migrated. When the page is being migrated,
>>> +	 * GUP will wait till the migrate entry is replaced with the new pte
>>> +	 * entry pointing to the new page.
>>> +	 */
>>> +	vcpu->kvm->arch.apic_access_page = gfn_to_page(vcpu->kvm,
>>> +				APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT);
>>> +	kvm_x86_ops->set_apic_access_page_addr(vcpu->kvm,
>>> +				page_to_phys(vcpu->kvm->arch.apic_access_page));
>>> I am a little bit worried that here all vcpus write to vcpu->kvm->arch.apic_access_page
>>> without any locking. It is probably benign since pointer write is atomic on x86. Paolo?
>>>
>>> Do we even need apic_access_page? Why not call
>>>   gfn_to_page(APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT)
>>>   put_page()
>>> on rare occasions we need to know its address?
>> Isn't it a necessary item defined in the hardware spec?
>>
> vcpu->kvm->arch.apic_access_page? No. This is internal kvm data structure.
>
>> I haven't read the Intel spec deeply, but according to the code, the page's
>> address is written into the vmcs, and that made me think we cannot remove it.
>>
> We cannot remove the writing of the apic page address into the vmcs, but that is done
> not by assigning to vcpu->kvm->arch.apic_access_page, but by the vmwrite in
> set_apic_access_page_addr().

OK, I'll try to remove kvm->arch.apic_access_page and send a patch for it soon.

BTW, if you have no objections to the first two patches, would you please
commit them first?

Thanks.

>
> --
> 			Gleb.
> .
>


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v4 1/6] kvm: Use APIC_DEFAULT_PHYS_BASE macro as the apic access page address.
  2014-08-27 10:17 ` [PATCH v4 1/6] kvm: Use APIC_DEFAULT_PHYS_BASE macro as the apic access page address Tang Chen
@ 2014-09-10 13:57   ` Gleb Natapov
  0 siblings, 0 replies; 17+ messages in thread
From: Gleb Natapov @ 2014-09-10 13:57 UTC (permalink / raw)
  To: Tang Chen
  Cc: mtosatti, nadav.amit, jan.kiszka, kvm, laijs, isimatu.yasuaki,
	guz.fnst, linux-kernel

On Wed, Aug 27, 2014 at 06:17:36PM +0800, Tang Chen wrote:
> We have APIC_DEFAULT_PHYS_BASE defined as 0xfee00000, which is also the address of
> apic access page. So use this macro.
Reviewed-by: Gleb Natapov <gleb@kernel.org>

> 
> Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
> ---
>  arch/x86/kvm/svm.c | 3 ++-
>  arch/x86/kvm/vmx.c | 6 +++---
>  2 files changed, 5 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index ddf7427..1d941ad 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -1257,7 +1257,8 @@ static struct kvm_vcpu *svm_create_vcpu(struct kvm *kvm, unsigned int id)
>  	svm->asid_generation = 0;
>  	init_vmcb(svm);
>  
> -	svm->vcpu.arch.apic_base = 0xfee00000 | MSR_IA32_APICBASE_ENABLE;
> +	svm->vcpu.arch.apic_base = APIC_DEFAULT_PHYS_BASE |
> +				   MSR_IA32_APICBASE_ENABLE;
>  	if (kvm_vcpu_is_bsp(&svm->vcpu))
>  		svm->vcpu.arch.apic_base |= MSR_IA32_APICBASE_BSP;
>  
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index bfe11cf..4b80ead 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -3999,13 +3999,13 @@ static int alloc_apic_access_page(struct kvm *kvm)
>  		goto out;
>  	kvm_userspace_mem.slot = APIC_ACCESS_PAGE_PRIVATE_MEMSLOT;
>  	kvm_userspace_mem.flags = 0;
> -	kvm_userspace_mem.guest_phys_addr = 0xfee00000ULL;
> +	kvm_userspace_mem.guest_phys_addr = APIC_DEFAULT_PHYS_BASE;
>  	kvm_userspace_mem.memory_size = PAGE_SIZE;
>  	r = __kvm_set_memory_region(kvm, &kvm_userspace_mem);
>  	if (r)
>  		goto out;
>  
> -	page = gfn_to_page(kvm, 0xfee00);
> +	page = gfn_to_page(kvm, APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT);
>  	if (is_error_page(page)) {
>  		r = -EFAULT;
>  		goto out;
> @@ -4477,7 +4477,7 @@ static void vmx_vcpu_reset(struct kvm_vcpu *vcpu)
>  
>  	vmx->vcpu.arch.regs[VCPU_REGS_RDX] = get_rdx_init_val();
>  	kvm_set_cr8(&vmx->vcpu, 0);
> -	apic_base_msr.data = 0xfee00000 | MSR_IA32_APICBASE_ENABLE;
> +	apic_base_msr.data = APIC_DEFAULT_PHYS_BASE | MSR_IA32_APICBASE_ENABLE;
>  	if (kvm_vcpu_is_bsp(&vmx->vcpu))
>  		apic_base_msr.data |= MSR_IA32_APICBASE_BSP;
>  	apic_base_msr.host_initiated = true;
> -- 
> 1.8.3.1
> 

--
			Gleb.


* Re: [PATCH v4 2/6] kvm: Remove ept_identity_pagetable from struct kvm_arch.
  2014-08-27 10:17 ` [PATCH v4 2/6] kvm: Remove ept_identity_pagetable from struct kvm_arch Tang Chen
@ 2014-09-10 13:58   ` Gleb Natapov
  0 siblings, 0 replies; 17+ messages in thread
From: Gleb Natapov @ 2014-09-10 13:58 UTC (permalink / raw)
  To: Tang Chen
  Cc: mtosatti, nadav.amit, jan.kiszka, kvm, laijs, isimatu.yasuaki,
	guz.fnst, linux-kernel

On Wed, Aug 27, 2014 at 06:17:37PM +0800, Tang Chen wrote:
> kvm_arch->ept_identity_pagetable holds the ept identity pagetable page. But
> it is never used to refer to the page at all.
> 
> In vcpu initialization, it indicates two things:
> 1. indicates if ept page is allocated
> 2. indicates if a memory slot for identity page is initialized
> 
> Actually, kvm_arch->ept_identity_pagetable_done is enough to tell if the ept
> identity pagetable is initialized. So we can remove ept_identity_pagetable.
> 
> NOTE: In the original code, ept identity pagetable page is pinned in memory.
>       As a result, it cannot be migrated/hot-removed. After this patch, since
>       kvm_arch->ept_identity_pagetable is removed, ept identity pagetable page
>       is no longer pinned in memory. And it can be migrated/hot-removed.
Reviewed-by: Gleb Natapov <gleb@kernel.org>

> 
> Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
> ---
>  arch/x86/include/asm/kvm_host.h |  1 -
>  arch/x86/kvm/vmx.c              | 50 ++++++++++++++++++++---------------------
>  arch/x86/kvm/x86.c              |  2 --
>  3 files changed, 25 insertions(+), 28 deletions(-)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 7c492ed..35171c7 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -580,7 +580,6 @@ struct kvm_arch {
>  
>  	gpa_t wall_clock;
>  
> -	struct page *ept_identity_pagetable;
>  	bool ept_identity_pagetable_done;
>  	gpa_t ept_identity_map_addr;
>  
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index 4b80ead..953d529 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -743,6 +743,7 @@ static u32 vmx_segment_access_rights(struct kvm_segment *var);
>  static void vmx_sync_pir_to_irr_dummy(struct kvm_vcpu *vcpu);
>  static void copy_vmcs12_to_shadow(struct vcpu_vmx *vmx);
>  static void copy_shadow_to_vmcs12(struct vcpu_vmx *vmx);
> +static int alloc_identity_pagetable(struct kvm *kvm);
>  
>  static DEFINE_PER_CPU(struct vmcs *, vmxarea);
>  static DEFINE_PER_CPU(struct vmcs *, current_vmcs);
> @@ -3938,21 +3939,27 @@ out:
>  
>  static int init_rmode_identity_map(struct kvm *kvm)
>  {
> -	int i, idx, r, ret;
> +	int i, idx, r, ret = 0;
>  	pfn_t identity_map_pfn;
>  	u32 tmp;
>  
>  	if (!enable_ept)
>  		return 1;
> -	if (unlikely(!kvm->arch.ept_identity_pagetable)) {
> -		printk(KERN_ERR "EPT: identity-mapping pagetable "
> -			"haven't been allocated!\n");
> -		return 0;
> +
> +	/* Protect kvm->arch.ept_identity_pagetable_done. */
> +	mutex_lock(&kvm->slots_lock);
> +
> +	if (likely(kvm->arch.ept_identity_pagetable_done)) {
> +		ret = 1;
> +		goto out2;
>  	}
> -	if (likely(kvm->arch.ept_identity_pagetable_done))
> -		return 1;
> -	ret = 0;
> +
>  	identity_map_pfn = kvm->arch.ept_identity_map_addr >> PAGE_SHIFT;
> +
> +	r = alloc_identity_pagetable(kvm);
> +	if (r)
> +		goto out2;
> +
>  	idx = srcu_read_lock(&kvm->srcu);
>  	r = kvm_clear_guest_page(kvm, identity_map_pfn, 0, PAGE_SIZE);
>  	if (r < 0)
> @@ -3970,6 +3977,9 @@ static int init_rmode_identity_map(struct kvm *kvm)
>  	ret = 1;
>  out:
>  	srcu_read_unlock(&kvm->srcu, idx);
> +
> +out2:
> +	mutex_unlock(&kvm->slots_lock);
>  	return ret;
>  }
>  
> @@ -4019,31 +4029,23 @@ out:
>  
>  static int alloc_identity_pagetable(struct kvm *kvm)
>  {
> -	struct page *page;
> +	/*
> +	 * In init_rmode_identity_map(), kvm->arch.ept_identity_pagetable_done
> +	 * is checked before calling this function and set to true after the
> +	 * calling. The access to kvm->arch.ept_identity_pagetable_done should
> +	 * be protected by kvm->slots_lock.
> +	 */
> +
>  	struct kvm_userspace_memory_region kvm_userspace_mem;
>  	int r = 0;
>  
> -	mutex_lock(&kvm->slots_lock);
> -	if (kvm->arch.ept_identity_pagetable)
> -		goto out;
>  	kvm_userspace_mem.slot = IDENTITY_PAGETABLE_PRIVATE_MEMSLOT;
>  	kvm_userspace_mem.flags = 0;
>  	kvm_userspace_mem.guest_phys_addr =
>  		kvm->arch.ept_identity_map_addr;
>  	kvm_userspace_mem.memory_size = PAGE_SIZE;
>  	r = __kvm_set_memory_region(kvm, &kvm_userspace_mem);
> -	if (r)
> -		goto out;
>  
> -	page = gfn_to_page(kvm, kvm->arch.ept_identity_map_addr >> PAGE_SHIFT);
> -	if (is_error_page(page)) {
> -		r = -EFAULT;
> -		goto out;
> -	}
> -
> -	kvm->arch.ept_identity_pagetable = page;
> -out:
> -	mutex_unlock(&kvm->slots_lock);
>  	return r;
>  }
>  
> @@ -7643,8 +7645,6 @@ static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id)
>  			kvm->arch.ept_identity_map_addr =
>  				VMX_EPT_IDENTITY_PAGETABLE_ADDR;
>  		err = -ENOMEM;
> -		if (alloc_identity_pagetable(kvm) != 0)
> -			goto free_vmcs;
>  		if (!init_rmode_identity_map(kvm))
>  			goto free_vmcs;
>  	}
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 8f1e22d..e05bd58 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -7239,8 +7239,6 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
>  	kvm_free_vcpus(kvm);
>  	if (kvm->arch.apic_access_page)
>  		put_page(kvm->arch.apic_access_page);
> -	if (kvm->arch.ept_identity_pagetable)
> -		put_page(kvm->arch.ept_identity_pagetable);
>  	kfree(rcu_dereference_check(kvm->arch.apic_map, 1));
>  }
>  
> -- 
> 1.8.3.1
> 

--
			Gleb.


* Re: [PATCH v4 4/6] kvm, mem-hotplug: Reload L1' apic access page on migration in vcpu_enter_guest().
  2014-09-09  7:13         ` tangchen
@ 2014-09-10 14:01           ` Gleb Natapov
  0 siblings, 0 replies; 17+ messages in thread
From: Gleb Natapov @ 2014-09-10 14:01 UTC (permalink / raw)
  To: tangchen
  Cc: mtosatti, nadav.amit, jan.kiszka, kvm, laijs, isimatu.yasuaki,
	guz.fnst, linux-kernel, Paolo Bonzini

On Tue, Sep 09, 2014 at 03:13:07PM +0800, tangchen wrote:
> Hi Gleb,
> 
> On 09/03/2014 11:04 PM, Gleb Natapov wrote:
> >On Wed, Sep 03, 2014 at 09:42:30AM +0800, tangchen wrote:
> >>Hi Gleb,
> >>
> >>On 09/03/2014 12:00 AM, Gleb Natapov wrote:
> >>>......
> >>>+static void vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
> >>>+{
> >>>+	/*
> >>>+	 * apic access page could be migrated. When the page is being migrated,
> >>>+	 * GUP will wait till the migrate entry is replaced with the new pte
> >>>+	 * entry pointing to the new page.
> >>>+	 */
> >>>+	vcpu->kvm->arch.apic_access_page = gfn_to_page(vcpu->kvm,
> >>>+				APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT);
> >>>+	kvm_x86_ops->set_apic_access_page_addr(vcpu->kvm,
> >>>+				page_to_phys(vcpu->kvm->arch.apic_access_page));
> >>>I am a little bit worried that here all vcpus write to vcpu->kvm->arch.apic_access_page
> >>>without any locking. It is probably benign since pointer write is atomic on x86. Paolo?
> >>>
> >>>Do we even need apic_access_page? Why not call
> >>>  gfn_to_page(APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT)
> >>>  put_page()
> >>>on rare occasions we need to know its address?
> >>Isn't it a necessary item defined in the hardware spec?
> >>
> >vcpu->kvm->arch.apic_access_page? No. This is internal kvm data structure.
> >
> >>I didn't read intel spec deeply, but according to the code, the page's
> >>address is
> >>written into vmcs. And it made me think that we cannot remove it.
> >>
> >We cannot remove writing of apic page address into vmcs, but this is not done by
> >assigning to vcpu->kvm->arch.apic_access_page, but by vmwrite in set_apic_access_page_addr().
> 
> OK, I'll try to remove kvm->arch.apic_access_page and send a patch for it
> soon.
> 
> BTW, if you don't have objection to the first two patches, would you please
> help to commit them first?
> 
I acked them and CCed Paolo to this reply. I hope he will look at the series too.

--
			Gleb.


Thread overview: 17+ messages
2014-08-27 10:17 [PATCH v4 0/6] kvm, mem-hotplug: Do not pin ept identity pagetable and apic access page Tang Chen
2014-08-27 10:17 ` [PATCH v4 1/6] kvm: Use APIC_DEFAULT_PHYS_BASE macro as the apic access page address Tang Chen
2014-09-10 13:57   ` Gleb Natapov
2014-08-27 10:17 ` [PATCH v4 2/6] kvm: Remove ept_identity_pagetable from struct kvm_arch Tang Chen
2014-09-10 13:58   ` Gleb Natapov
2014-08-27 10:17 ` [PATCH v4 3/6] kvm: Make init_rmode_identity_map() return 0 on success Tang Chen
2014-08-27 10:17 ` [PATCH v4 4/6] kvm, mem-hotplug: Reload L1' apic access page on migration in vcpu_enter_guest() Tang Chen
2014-09-02 16:00   ` Gleb Natapov
2014-09-03  1:42     ` tangchen
2014-09-03 15:04       ` Gleb Natapov
2014-09-09  7:13         ` tangchen
2014-09-10 14:01           ` Gleb Natapov
2014-08-27 10:17 ` [PATCH v4 5/6] kvm, mem-hotplug: Reload L1's apic access page on migration when L2 is running Tang Chen
2014-09-03  1:48   ` tangchen
2014-09-03 15:08   ` Gleb Natapov
2014-08-27 10:17 ` [PATCH v4 6/6] kvm, mem-hotplug: Do not pin apic access page in memory Tang Chen
2014-09-01  5:14 ` [PATCH v4 0/6] kvm, mem-hotplug: Do not pin ept identity pagetable and apic access page tangchen
