* [PATCH 0/4 v4] KVM: Introduce kvm_memory_slot::arch
@ 2012-02-08 3:58 Takuya Yoshikawa
From: Takuya Yoshikawa @ 2012-02-08 3:58 UTC (permalink / raw)
To: avi, mtosatti; +Cc: kvm
Rebased the whole series (against the next branch of kvm.git).
No manual edit.
If something is still wrong, please let me know.
Thanks,
Takuya
---
This is the first step toward separating out the architecture-specific
members of kvm_memory_slot. The rmap and dirty_bitmap members can be
handled later on top of this.
v4:
Just rebased
v3:
Patch 4:
- Narrowed down the ifndef a bit for s390.
v2:
Patch 3:
- Removed extra checks for NULL when we create a new slot.
- Removed "if (user_alloc)" check taken from the s390 code.
Takuya
arch/ia64/include/asm/kvm_host.h | 3 +
arch/ia64/kvm/kvm-ia64.c | 10 +++++
arch/powerpc/include/asm/kvm_host.h | 3 +
arch/powerpc/kvm/powerpc.c | 10 +++++
arch/s390/include/asm/kvm_host.h | 3 +
arch/s390/kvm/kvm-s390.c | 10 +++++
arch/x86/include/asm/kvm_host.h | 9 ++++
arch/x86/kvm/mmu.c | 5 +-
arch/x86/kvm/x86.c | 59 ++++++++++++++++++++++++++
include/linux/kvm_host.h | 18 +++++---
virt/kvm/kvm_main.c | 77 ++++++----------------------------
11 files changed, 135 insertions(+), 72 deletions(-)
--
1.7.5.4
* [PATCH 1/4] KVM: Introduce gfn_to_index() which returns the index for a given level
From: Takuya Yoshikawa @ 2012-02-08 3:59 UTC (permalink / raw)
To: avi, mtosatti; +Cc: kvm
This patch cleans up the code and removes the "(void)level;" warning
suppressor.
Note that we can also use this for PT_PAGE_TABLE_LEVEL to treat every
level uniformly later.
Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
---
arch/x86/kvm/mmu.c | 3 +--
include/linux/kvm_host.h | 7 +++++++
virt/kvm/kvm_main.c | 7 +------
3 files changed, 9 insertions(+), 8 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index ae76cc3..37e7f10 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -688,8 +688,7 @@ static struct kvm_lpage_info *lpage_info_slot(gfn_t gfn,
{
unsigned long idx;
- idx = (gfn >> KVM_HPAGE_GFN_SHIFT(level)) -
- (slot->base_gfn >> KVM_HPAGE_GFN_SHIFT(level));
+ idx = gfn_to_index(gfn, slot->base_gfn, level);
return &slot->lpage_info[level - 2][idx];
}
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 9698080..7a08496 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -681,6 +681,13 @@ static inline int memslot_id(struct kvm *kvm, gfn_t gfn)
return gfn_to_memslot(kvm, gfn)->id;
}
+static inline gfn_t gfn_to_index(gfn_t gfn, gfn_t base_gfn, int level)
+{
+ /* KVM_HPAGE_GFN_SHIFT(PT_PAGE_TABLE_LEVEL) must be 0. */
+ return (gfn >> KVM_HPAGE_GFN_SHIFT(level)) -
+ (base_gfn >> KVM_HPAGE_GFN_SHIFT(level));
+}
+
static inline unsigned long gfn_to_hva_memslot(struct kvm_memory_slot *slot,
gfn_t gfn)
{
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 470e305..415fe81 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -784,15 +784,10 @@ int __kvm_set_memory_region(struct kvm *kvm,
int lpages;
int level = i + 2;
- /* Avoid unused variable warning if no large pages */
- (void)level;
-
if (new.lpage_info[i])
continue;
- lpages = 1 + ((base_gfn + npages - 1)
- >> KVM_HPAGE_GFN_SHIFT(level));
- lpages -= base_gfn >> KVM_HPAGE_GFN_SHIFT(level);
+ lpages = gfn_to_index(base_gfn + npages - 1, base_gfn, level) + 1;
new.lpage_info[i] = vzalloc(lpages * sizeof(*new.lpage_info[i]));
--
1.7.5.4
* [PATCH 2/4] KVM: Split lpage_info creation out from __kvm_set_memory_region()
From: Takuya Yoshikawa @ 2012-02-08 4:00 UTC (permalink / raw)
To: avi, mtosatti; +Cc: kvm
This prepares for making lpage_info architecture specific.
Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
---
virt/kvm/kvm_main.c | 83 ++++++++++++++++++++++++++++++++-------------------
1 files changed, 52 insertions(+), 31 deletions(-)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 415fe81..7adaa20 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -685,6 +685,56 @@ void update_memslots(struct kvm_memslots *slots, struct kvm_memory_slot *new)
slots->generation++;
}
+#ifndef CONFIG_S390
+static int create_lpage_info(struct kvm_memory_slot *slot, unsigned long npages)
+{
+ int i;
+
+ for (i = 0; i < KVM_NR_PAGE_SIZES - 1; ++i) {
+ unsigned long ugfn;
+ int lpages;
+ int level = i + 2;
+
+ if (slot->lpage_info[i])
+ continue;
+
+ lpages = gfn_to_index(slot->base_gfn + npages - 1,
+ slot->base_gfn, level) + 1;
+
+ slot->lpage_info[i] = vzalloc(lpages * sizeof(*slot->lpage_info[i]));
+ if (!slot->lpage_info[i])
+ goto out_free;
+
+ if (slot->base_gfn & (KVM_PAGES_PER_HPAGE(level) - 1))
+ slot->lpage_info[i][0].write_count = 1;
+ if ((slot->base_gfn + npages) & (KVM_PAGES_PER_HPAGE(level) - 1))
+ slot->lpage_info[i][lpages - 1].write_count = 1;
+ ugfn = slot->userspace_addr >> PAGE_SHIFT;
+ /*
+ * If the gfn and userspace address are not aligned wrt each
+ * other, or if explicitly asked to, disable large page
+ * support for this slot
+ */
+ if ((slot->base_gfn ^ ugfn) & (KVM_PAGES_PER_HPAGE(level) - 1) ||
+ !largepages_enabled) {
+ unsigned long j;
+
+ for (j = 0; j < lpages; ++j)
+ slot->lpage_info[i][j].write_count = 1;
+ }
+ }
+
+ return 0;
+
+out_free:
+ for (i = 0; i < KVM_NR_PAGE_SIZES - 1; ++i) {
+ vfree(slot->lpage_info[i]);
+ slot->lpage_info[i] = NULL;
+ }
+ return -ENOMEM;
+}
+#endif /* not defined CONFIG_S390 */
+
/*
* Allocate some memory and give it an address in the guest physical address
* space.
@@ -778,37 +828,8 @@ int __kvm_set_memory_region(struct kvm *kvm,
if (!npages)
goto skip_lpage;
- for (i = 0; i < KVM_NR_PAGE_SIZES - 1; ++i) {
- unsigned long ugfn;
- unsigned long j;
- int lpages;
- int level = i + 2;
-
- if (new.lpage_info[i])
- continue;
-
- lpages = gfn_to_index(base_gfn + npages - 1, base_gfn, level) + 1;
-
- new.lpage_info[i] = vzalloc(lpages * sizeof(*new.lpage_info[i]));
-
- if (!new.lpage_info[i])
- goto out_free;
-
- if (base_gfn & (KVM_PAGES_PER_HPAGE(level) - 1))
- new.lpage_info[i][0].write_count = 1;
- if ((base_gfn+npages) & (KVM_PAGES_PER_HPAGE(level) - 1))
- new.lpage_info[i][lpages - 1].write_count = 1;
- ugfn = new.userspace_addr >> PAGE_SHIFT;
- /*
- * If the gfn and userspace address are not aligned wrt each
- * other, or if explicitly asked to, disable large page
- * support for this slot
- */
- if ((base_gfn ^ ugfn) & (KVM_PAGES_PER_HPAGE(level) - 1) ||
- !largepages_enabled)
- for (j = 0; j < lpages; ++j)
- new.lpage_info[i][j].write_count = 1;
- }
+ if (create_lpage_info(&new, npages))
+ goto out_free;
skip_lpage:
--
1.7.5.4
* [PATCH 3/4] KVM: Simplify ifndef conditional usage in __kvm_set_memory_region()
From: Takuya Yoshikawa @ 2012-02-08 4:01 UTC (permalink / raw)
To: avi, mtosatti; +Cc: kvm
Narrow down the code guarded by the #ifndef so that it covers only the
lpage_info and rmap handling.
For this we change the check for whether a slot is being created from
"if (npages && !new.rmap)" to "if (npages && !old.npages)".
We also stop checking whether lpage_info is already allocated before
creating it, because the allocation now happens only inside the slot
creation path.
Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
---
virt/kvm/kvm_main.c | 29 ++++++++---------------------
1 files changed, 8 insertions(+), 21 deletions(-)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 7adaa20..a30447c 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -616,7 +616,6 @@ static int kvm_vm_release(struct inode *inode, struct file *filp)
return 0;
}
-#ifndef CONFIG_S390
/*
* Allocation size is twice as large as the actual dirty bitmap size.
* This makes it possible to do double buffering: see x86's
@@ -624,6 +623,7 @@ static int kvm_vm_release(struct inode *inode, struct file *filp)
*/
static int kvm_create_dirty_bitmap(struct kvm_memory_slot *memslot)
{
+#ifndef CONFIG_S390
unsigned long dirty_bytes = 2 * kvm_dirty_bitmap_bytes(memslot);
if (dirty_bytes > PAGE_SIZE)
@@ -636,9 +636,9 @@ static int kvm_create_dirty_bitmap(struct kvm_memory_slot *memslot)
memslot->dirty_bitmap_head = memslot->dirty_bitmap;
memslot->nr_dirty_pages = 0;
+#endif /* !CONFIG_S390 */
return 0;
}
-#endif /* !CONFIG_S390 */
static int cmp_memslot(const void *slot1, const void *slot2)
{
@@ -695,9 +695,6 @@ static int create_lpage_info(struct kvm_memory_slot *slot, unsigned long npages)
int lpages;
int level = i + 2;
- if (slot->lpage_info[i])
- continue;
-
lpages = gfn_to_index(slot->base_gfn + npages - 1,
slot->base_gfn, level) + 1;
@@ -815,23 +812,18 @@ int __kvm_set_memory_region(struct kvm *kvm,
r = -ENOMEM;
/* Allocate if a slot is being created */
+ if (npages && !old.npages) {
+ new.user_alloc = user_alloc;
+ new.userspace_addr = mem->userspace_addr;
#ifndef CONFIG_S390
- if (npages && !new.rmap) {
new.rmap = vzalloc(npages * sizeof(*new.rmap));
-
if (!new.rmap)
goto out_free;
- new.user_alloc = user_alloc;
- new.userspace_addr = mem->userspace_addr;
+ if (create_lpage_info(&new, npages))
+ goto out_free;
+#endif /* not defined CONFIG_S390 */
}
- if (!npages)
- goto skip_lpage;
-
- if (create_lpage_info(&new, npages))
- goto out_free;
-
-skip_lpage:
/* Allocate page dirty bitmap if needed */
if ((new.flags & KVM_MEM_LOG_DIRTY_PAGES) && !new.dirty_bitmap) {
@@ -839,11 +831,6 @@ skip_lpage:
goto out_free;
/* destroy any largepage mappings for dirty tracking */
}
-#else /* not defined CONFIG_S390 */
- new.user_alloc = user_alloc;
- if (user_alloc)
- new.userspace_addr = mem->userspace_addr;
-#endif /* not defined CONFIG_S390 */
if (!npages) {
struct kvm_memory_slot *slot;
--
1.7.5.4
* [PATCH 4/4] KVM: Introduce kvm_memory_slot::arch and move lpage_info into it
From: Takuya Yoshikawa @ 2012-02-08 4:02 UTC (permalink / raw)
To: avi, mtosatti; +Cc: kvm
Some members of kvm_memory_slot are not used by every architecture.
This patch is the first step to make this difference clear by
introducing kvm_memory_slot::arch; lpage_info is moved into it.
Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
---
arch/ia64/include/asm/kvm_host.h | 3 +
arch/ia64/kvm/kvm-ia64.c | 10 +++++
arch/powerpc/include/asm/kvm_host.h | 3 +
arch/powerpc/kvm/powerpc.c | 10 +++++
arch/s390/include/asm/kvm_host.h | 3 +
arch/s390/kvm/kvm-s390.c | 10 +++++
arch/x86/include/asm/kvm_host.h | 9 ++++
arch/x86/kvm/mmu.c | 2 +-
arch/x86/kvm/x86.c | 59 +++++++++++++++++++++++++++++
include/linux/kvm_host.h | 11 ++---
virt/kvm/kvm_main.c | 70 ++++------------------------------
11 files changed, 122 insertions(+), 68 deletions(-)
diff --git a/arch/ia64/include/asm/kvm_host.h b/arch/ia64/include/asm/kvm_host.h
index 2689ee5..e35b3a8 100644
--- a/arch/ia64/include/asm/kvm_host.h
+++ b/arch/ia64/include/asm/kvm_host.h
@@ -459,6 +459,9 @@ struct kvm_sal_data {
unsigned long boot_gp;
};
+struct kvm_arch_memory_slot {
+};
+
struct kvm_arch {
spinlock_t dirty_log_lock;
diff --git a/arch/ia64/kvm/kvm-ia64.c b/arch/ia64/kvm/kvm-ia64.c
index 8ca7261..d8ddbba 100644
--- a/arch/ia64/kvm/kvm-ia64.c
+++ b/arch/ia64/kvm/kvm-ia64.c
@@ -1571,6 +1571,16 @@ int kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf)
return VM_FAULT_SIGBUS;
}
+void kvm_arch_free_memslot(struct kvm_memory_slot *free,
+ struct kvm_memory_slot *dont)
+{
+}
+
+int kvm_arch_create_memslot(struct kvm_memory_slot *slot, unsigned long npages)
+{
+ return 0;
+}
+
int kvm_arch_prepare_memory_region(struct kvm *kvm,
struct kvm_memory_slot *memslot,
struct kvm_memory_slot old,
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 1843d5d..52eb9c1 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -213,6 +213,9 @@ struct revmap_entry {
#define KVMPPC_PAGE_WRITETHRU HPTE_R_W /* 0x40 */
#define KVMPPC_GOT_PAGE 0x80
+struct kvm_arch_memory_slot {
+};
+
struct kvm_arch {
#ifdef CONFIG_KVM_BOOK3S_64_HV
unsigned long hpt_virt;
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 0e21d15..00d7e34 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -281,6 +281,16 @@ long kvm_arch_dev_ioctl(struct file *filp,
return -EINVAL;
}
+void kvm_arch_free_memslot(struct kvm_memory_slot *free,
+ struct kvm_memory_slot *dont)
+{
+}
+
+int kvm_arch_create_memslot(struct kvm_memory_slot *slot, unsigned long npages)
+{
+ return 0;
+}
+
int kvm_arch_prepare_memory_region(struct kvm *kvm,
struct kvm_memory_slot *memslot,
struct kvm_memory_slot old,
diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h
index e630426..7343872 100644
--- a/arch/s390/include/asm/kvm_host.h
+++ b/arch/s390/include/asm/kvm_host.h
@@ -245,6 +245,9 @@ struct kvm_vm_stat {
u32 remote_tlb_flush;
};
+struct kvm_arch_memory_slot {
+};
+
struct kvm_arch{
struct sca_block *sca;
debug_info_t *dbf;
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index 0b91679..418a69c 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -807,6 +807,16 @@ int kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf)
return VM_FAULT_SIGBUS;
}
+void kvm_arch_free_memslot(struct kvm_memory_slot *free,
+ struct kvm_memory_slot *dont)
+{
+}
+
+int kvm_arch_create_memslot(struct kvm_memory_slot *slot, unsigned long npages)
+{
+ return 0;
+}
+
/* Section: memory related */
int kvm_arch_prepare_memory_region(struct kvm *kvm,
struct kvm_memory_slot *memslot,
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index c24125c..74c9edf 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -483,6 +483,15 @@ struct kvm_vcpu_arch {
} osvw;
};
+struct kvm_lpage_info {
+ unsigned long rmap_pde;
+ int write_count;
+};
+
+struct kvm_arch_memory_slot {
+ struct kvm_lpage_info *lpage_info[KVM_NR_PAGE_SIZES - 1];
+};
+
struct kvm_arch {
unsigned int n_used_mmu_pages;
unsigned int n_requested_mmu_pages;
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 37e7f10..ff053ca 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -689,7 +689,7 @@ static struct kvm_lpage_info *lpage_info_slot(gfn_t gfn,
unsigned long idx;
idx = gfn_to_index(gfn, slot->base_gfn, level);
- return &slot->lpage_info[level - 2][idx];
+ return &slot->arch.lpage_info[level - 2][idx];
}
static void account_shadowed(struct kvm *kvm, gfn_t gfn)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index b7c5206..a400d4c 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -6226,6 +6226,65 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
put_page(kvm->arch.ept_identity_pagetable);
}
+void kvm_arch_free_memslot(struct kvm_memory_slot *free,
+ struct kvm_memory_slot *dont)
+{
+ int i;
+
+ for (i = 0; i < KVM_NR_PAGE_SIZES - 1; ++i) {
+ if (!dont || free->arch.lpage_info[i] != dont->arch.lpage_info[i]) {
+ vfree(free->arch.lpage_info[i]);
+ free->arch.lpage_info[i] = NULL;
+ }
+ }
+}
+
+int kvm_arch_create_memslot(struct kvm_memory_slot *slot, unsigned long npages)
+{
+ int i;
+
+ for (i = 0; i < KVM_NR_PAGE_SIZES - 1; ++i) {
+ unsigned long ugfn;
+ int lpages;
+ int level = i + 2;
+
+ lpages = gfn_to_index(slot->base_gfn + npages - 1,
+ slot->base_gfn, level) + 1;
+
+ slot->arch.lpage_info[i] =
+ vzalloc(lpages * sizeof(*slot->arch.lpage_info[i]));
+ if (!slot->arch.lpage_info[i])
+ goto out_free;
+
+ if (slot->base_gfn & (KVM_PAGES_PER_HPAGE(level) - 1))
+ slot->arch.lpage_info[i][0].write_count = 1;
+ if ((slot->base_gfn + npages) & (KVM_PAGES_PER_HPAGE(level) - 1))
+ slot->arch.lpage_info[i][lpages - 1].write_count = 1;
+ ugfn = slot->userspace_addr >> PAGE_SHIFT;
+ /*
+ * If the gfn and userspace address are not aligned wrt each
+ * other, or if explicitly asked to, disable large page
+ * support for this slot
+ */
+ if ((slot->base_gfn ^ ugfn) & (KVM_PAGES_PER_HPAGE(level) - 1) ||
+ !kvm_largepages_enabled()) {
+ unsigned long j;
+
+ for (j = 0; j < lpages; ++j)
+ slot->arch.lpage_info[i][j].write_count = 1;
+ }
+ }
+
+ return 0;
+
+out_free:
+ for (i = 0; i < KVM_NR_PAGE_SIZES - 1; ++i) {
+ vfree(slot->arch.lpage_info[i]);
+ slot->arch.lpage_info[i] = NULL;
+ }
+ return -ENOMEM;
+}
+
int kvm_arch_prepare_memory_region(struct kvm *kvm,
struct kvm_memory_slot *memslot,
struct kvm_memory_slot old,
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 7a08496..355e445 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -171,11 +171,6 @@ static inline int kvm_vcpu_exiting_guest_mode(struct kvm_vcpu *vcpu)
*/
#define KVM_MEM_MAX_NR_PAGES ((1UL << 31) - 1)
-struct kvm_lpage_info {
- unsigned long rmap_pde;
- int write_count;
-};
-
struct kvm_memory_slot {
gfn_t base_gfn;
unsigned long npages;
@@ -184,7 +179,7 @@ struct kvm_memory_slot {
unsigned long *dirty_bitmap;
unsigned long *dirty_bitmap_head;
unsigned long nr_dirty_pages;
- struct kvm_lpage_info *lpage_info[KVM_NR_PAGE_SIZES - 1];
+ struct kvm_arch_memory_slot arch;
unsigned long userspace_addr;
int user_alloc;
int id;
@@ -376,6 +371,9 @@ int kvm_set_memory_region(struct kvm *kvm,
int __kvm_set_memory_region(struct kvm *kvm,
struct kvm_userspace_memory_region *mem,
int user_alloc);
+void kvm_arch_free_memslot(struct kvm_memory_slot *free,
+ struct kvm_memory_slot *dont);
+int kvm_arch_create_memslot(struct kvm_memory_slot *slot, unsigned long npages);
int kvm_arch_prepare_memory_region(struct kvm *kvm,
struct kvm_memory_slot *memslot,
struct kvm_memory_slot old,
@@ -385,6 +383,7 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
struct kvm_userspace_memory_region *mem,
struct kvm_memory_slot old,
int user_alloc);
+bool kvm_largepages_enabled(void);
void kvm_disable_largepages(void);
void kvm_arch_flush_shadow(struct kvm *kvm);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index a30447c..8340e0e 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -535,21 +535,13 @@ static void kvm_destroy_dirty_bitmap(struct kvm_memory_slot *memslot)
static void kvm_free_physmem_slot(struct kvm_memory_slot *free,
struct kvm_memory_slot *dont)
{
- int i;
-
if (!dont || free->rmap != dont->rmap)
vfree(free->rmap);
if (!dont || free->dirty_bitmap != dont->dirty_bitmap)
kvm_destroy_dirty_bitmap(free);
-
- for (i = 0; i < KVM_NR_PAGE_SIZES - 1; ++i) {
- if (!dont || free->lpage_info[i] != dont->lpage_info[i]) {
- vfree(free->lpage_info[i]);
- free->lpage_info[i] = NULL;
- }
- }
+ kvm_arch_free_memslot(free, dont);
free->npages = 0;
free->rmap = NULL;
@@ -685,53 +677,6 @@ void update_memslots(struct kvm_memslots *slots, struct kvm_memory_slot *new)
slots->generation++;
}
-#ifndef CONFIG_S390
-static int create_lpage_info(struct kvm_memory_slot *slot, unsigned long npages)
-{
- int i;
-
- for (i = 0; i < KVM_NR_PAGE_SIZES - 1; ++i) {
- unsigned long ugfn;
- int lpages;
- int level = i + 2;
-
- lpages = gfn_to_index(slot->base_gfn + npages - 1,
- slot->base_gfn, level) + 1;
-
- slot->lpage_info[i] = vzalloc(lpages * sizeof(*slot->lpage_info[i]));
- if (!slot->lpage_info[i])
- goto out_free;
-
- if (slot->base_gfn & (KVM_PAGES_PER_HPAGE(level) - 1))
- slot->lpage_info[i][0].write_count = 1;
- if ((slot->base_gfn + npages) & (KVM_PAGES_PER_HPAGE(level) - 1))
- slot->lpage_info[i][lpages - 1].write_count = 1;
- ugfn = slot->userspace_addr >> PAGE_SHIFT;
- /*
- * If the gfn and userspace address are not aligned wrt each
- * other, or if explicitly asked to, disable large page
- * support for this slot
- */
- if ((slot->base_gfn ^ ugfn) & (KVM_PAGES_PER_HPAGE(level) - 1) ||
- !largepages_enabled) {
- unsigned long j;
-
- for (j = 0; j < lpages; ++j)
- slot->lpage_info[i][j].write_count = 1;
- }
- }
-
- return 0;
-
-out_free:
- for (i = 0; i < KVM_NR_PAGE_SIZES - 1; ++i) {
- vfree(slot->lpage_info[i]);
- slot->lpage_info[i] = NULL;
- }
- return -ENOMEM;
-}
-#endif /* not defined CONFIG_S390 */
-
/*
* Allocate some memory and give it an address in the guest physical address
* space.
@@ -819,10 +764,9 @@ int __kvm_set_memory_region(struct kvm *kvm,
new.rmap = vzalloc(npages * sizeof(*new.rmap));
if (!new.rmap)
goto out_free;
-
- if (create_lpage_info(&new, npages))
- goto out_free;
#endif /* not defined CONFIG_S390 */
+ if (kvm_arch_create_memslot(&new, npages))
+ goto out_free;
}
/* Allocate page dirty bitmap if needed */
@@ -880,8 +824,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
if (!npages) {
new.rmap = NULL;
new.dirty_bitmap = NULL;
- for (i = 0; i < KVM_NR_PAGE_SIZES - 1; ++i)
- new.lpage_info[i] = NULL;
+ memset(&new.arch, 0, sizeof(new.arch));
}
update_memslots(slots, &new);
@@ -968,6 +911,11 @@ out:
return r;
}
+bool kvm_largepages_enabled(void)
+{
+ return largepages_enabled;
+}
+
void kvm_disable_largepages(void)
{
largepages_enabled = false;
--
1.7.5.4
* Re: [PATCH 0/4 v4] KVM: Introduce kvm_memory_slot::arch
From: Marcelo Tosatti @ 2012-02-08 18:47 UTC (permalink / raw)
To: Takuya Yoshikawa; +Cc: avi, kvm
On Wed, Feb 08, 2012 at 12:58:02PM +0900, Takuya Yoshikawa wrote:
> Rebased the whole series (against the next branch of kvm.git).
>
> No manual edit.
> If something is still wrong, please let me know.
>
>
> Thanks,
> Takuya
Applied, thanks.