* [patch 0/4] set_memory_region locking fixes / vcpu->arch.cr3 + removal of memslots
@ 2009-04-27 20:06 mtosatti
2009-04-27 20:06 ` [patch 1/4] KVM: MMU: protect kvm_mmu_change_mmu_pages with mmu_lock mtosatti
` (3 more replies)
0 siblings, 4 replies; 18+ messages in thread
From: mtosatti @ 2009-04-27 20:06 UTC (permalink / raw)
To: kvm
^ permalink raw reply [flat|nested] 18+ messages in thread
* [patch 1/4] KVM: MMU: protect kvm_mmu_change_mmu_pages with mmu_lock
2009-04-27 20:06 [patch 0/4] set_memory_region locking fixes / vcpu->arch.cr3 + removal of memslots mtosatti
@ 2009-04-27 20:06 ` mtosatti
2009-04-27 20:06 ` [patch 2/4] KVM: take mmu_lock when updating a deleted slot mtosatti
` (2 subsequent siblings)
3 siblings, 0 replies; 18+ messages in thread
From: mtosatti @ 2009-04-27 20:06 UTC (permalink / raw)
To: kvm; +Cc: Andrea Arcangeli, Marcelo Tosatti
kvm_handle_hva, called by MMU notifiers, manipulates mmu data only with
the protection of mmu_lock.
Update kvm_mmu_change_mmu_pages callers to take mmu_lock, thus protecting
against kvm_handle_hva.
CC: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Index: kvm/arch/x86/kvm/mmu.c
===================================================================
--- kvm.orig/arch/x86/kvm/mmu.c
+++ kvm/arch/x86/kvm/mmu.c
@@ -2732,7 +2732,6 @@ void kvm_mmu_slot_remove_write_access(st
{
struct kvm_mmu_page *sp;
- spin_lock(&kvm->mmu_lock);
list_for_each_entry(sp, &kvm->arch.active_mmu_pages, link) {
int i;
u64 *pt;
@@ -2747,7 +2746,6 @@ void kvm_mmu_slot_remove_write_access(st
pt[i] &= ~PT_WRITABLE_MASK;
}
kvm_flush_remote_tlbs(kvm);
- spin_unlock(&kvm->mmu_lock);
}
void kvm_mmu_zap_all(struct kvm *kvm)
Index: kvm/arch/x86/kvm/x86.c
===================================================================
--- kvm.orig/arch/x86/kvm/x86.c
+++ kvm/arch/x86/kvm/x86.c
@@ -1647,10 +1647,12 @@ static int kvm_vm_ioctl_set_nr_mmu_pages
return -EINVAL;
down_write(&kvm->slots_lock);
+ spin_lock(&kvm->mmu_lock);
kvm_mmu_change_mmu_pages(kvm, kvm_nr_mmu_pages);
kvm->arch.n_requested_mmu_pages = kvm_nr_mmu_pages;
+ spin_unlock(&kvm->mmu_lock);
up_write(&kvm->slots_lock);
return 0;
}
@@ -1826,7 +1828,9 @@ int kvm_vm_ioctl_get_dirty_log(struct kv
/* If nothing is dirty, don't bother messing with page tables. */
if (is_dirty) {
+ spin_lock(&kvm->mmu_lock);
kvm_mmu_slot_remove_write_access(kvm, log->slot);
+ spin_unlock(&kvm->mmu_lock);
kvm_flush_remote_tlbs(kvm);
memslot = &kvm->memslots[log->slot];
n = ALIGN(memslot->npages, BITS_PER_LONG) / 8;
@@ -4510,12 +4514,14 @@ int kvm_arch_set_memory_region(struct kv
}
}
+ spin_lock(&kvm->mmu_lock);
if (!kvm->arch.n_requested_mmu_pages) {
unsigned int nr_mmu_pages = kvm_mmu_calculate_mmu_pages(kvm);
kvm_mmu_change_mmu_pages(kvm, nr_mmu_pages);
}
kvm_mmu_slot_remove_write_access(kvm, mem->slot);
+ spin_unlock(&kvm->mmu_lock);
kvm_flush_remote_tlbs(kvm);
return 0;
* [patch 2/4] KVM: take mmu_lock when updating a deleted slot
2009-04-27 20:06 [patch 0/4] set_memory_region locking fixes / vcpu->arch.cr3 + removal of memslots mtosatti
2009-04-27 20:06 ` [patch 1/4] KVM: MMU: protect kvm_mmu_change_mmu_pages with mmu_lock mtosatti
@ 2009-04-27 20:06 ` mtosatti
2009-04-27 20:06 ` [patch 3/4] KVM: introduce kvm_arch_can_free_memslot, disallow slot deletion if cached cr3 mtosatti
2009-04-27 20:06 ` mtosatti
3 siblings, 0 replies; 18+ messages in thread
From: mtosatti @ 2009-04-27 20:06 UTC (permalink / raw)
To: kvm; +Cc: Marcelo Tosatti
kvm_handle_hva relies on mmu_lock protection to safely access
the memslot structures.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Index: kvm/virt/kvm/kvm_main.c
===================================================================
--- kvm.orig/virt/kvm/kvm_main.c
+++ kvm/virt/kvm/kvm_main.c
@@ -1199,8 +1199,10 @@ int __kvm_set_memory_region(struct kvm *
kvm_free_physmem_slot(&old, npages ? &new : NULL);
/* Slot deletion case: we have to update the current slot */
+ spin_lock(&kvm->mmu_lock);
if (!npages)
*memslot = old;
+ spin_unlock(&kvm->mmu_lock);
#ifdef CONFIG_DMAR
/* map the pages in iommu page table */
r = kvm_iommu_map_pages(kvm, base_gfn, npages);
* [patch 3/4] KVM: introduce kvm_arch_can_free_memslot, disallow slot deletion if cached cr3
2009-04-27 20:06 [patch 0/4] set_memory_region locking fixes / vcpu->arch.cr3 + removal of memslots mtosatti
2009-04-27 20:06 ` [patch 1/4] KVM: MMU: protect kvm_mmu_change_mmu_pages with mmu_lock mtosatti
2009-04-27 20:06 ` [patch 2/4] KVM: take mmu_lock when updating a deleted slot mtosatti
@ 2009-04-27 20:06 ` mtosatti
2009-05-07 14:16 ` Avi Kivity
2009-04-27 20:06 ` mtosatti
3 siblings, 1 reply; 18+ messages in thread
From: mtosatti @ 2009-04-27 20:06 UTC (permalink / raw)
To: kvm; +Cc: Marcelo Tosatti
Disallow the deletion of memory slots (and aliases, in the x86 case) if a
vcpu contains a cr3 that points to such a slot/alias.
This complements commit 6c20e1442bb1c62914bb85b7f4a38973d2a423ba.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Index: kvm/arch/ia64/kvm/kvm-ia64.c
===================================================================
--- kvm.orig/arch/ia64/kvm/kvm-ia64.c
+++ kvm/arch/ia64/kvm/kvm-ia64.c
@@ -1633,6 +1633,11 @@ void kvm_arch_flush_shadow(struct kvm *k
kvm_flush_remote_tlbs(kvm);
}
+int kvm_arch_can_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
+{
+ return 1;
+}
+
long kvm_arch_dev_ioctl(struct file *filp,
unsigned int ioctl, unsigned long arg)
{
Index: kvm/arch/powerpc/kvm/powerpc.c
===================================================================
--- kvm.orig/arch/powerpc/kvm/powerpc.c
+++ kvm/arch/powerpc/kvm/powerpc.c
@@ -176,6 +176,11 @@ void kvm_arch_flush_shadow(struct kvm *k
{
}
+int kvm_arch_can_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
+{
+ return 1;
+}
+
struct kvm_vcpu *kvm_arch_vcpu_create(struct kvm *kvm, unsigned int id)
{
struct kvm_vcpu *vcpu;
Index: kvm/arch/s390/kvm/kvm-s390.c
===================================================================
--- kvm.orig/arch/s390/kvm/kvm-s390.c
+++ kvm/arch/s390/kvm/kvm-s390.c
@@ -691,6 +691,11 @@ void kvm_arch_flush_shadow(struct kvm *k
{
}
+int kvm_arch_can_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
+{
+ return 1;
+}
+
gfn_t unalias_gfn(struct kvm *kvm, gfn_t gfn)
{
return gfn;
Index: kvm/arch/x86/kvm/x86.c
===================================================================
--- kvm.orig/arch/x86/kvm/x86.c
+++ kvm/arch/x86/kvm/x86.c
@@ -1676,6 +1676,27 @@ gfn_t unalias_gfn(struct kvm *kvm, gfn_t
return gfn;
}
+static int kvm_root_gfn_in_range(struct kvm *kvm, gfn_t base_gfn,
+ gfn_t end_gfn, bool unalias)
+{
+ struct kvm_vcpu *vcpu;
+ gfn_t root_gfn;
+ int i;
+
+ for (i = 0; i < KVM_MAX_VCPUS; ++i) {
+ vcpu = kvm->vcpus[i];
+ if (!vcpu)
+ continue;
+ root_gfn = vcpu->arch.cr3 >> PAGE_SHIFT;
+ if (unalias)
+ root_gfn = unalias_gfn(kvm, root_gfn);
+ if (root_gfn >= base_gfn && root_gfn <= end_gfn)
+ return 1;
+ }
+
+ return 0;
+}
+
/*
* Set a new alias region. Aliases map a portion of physical memory into
* another portion. This is useful for memory windows, for example the PC
@@ -1706,6 +1727,19 @@ static int kvm_vm_ioctl_set_memory_alias
spin_lock(&kvm->mmu_lock);
p = &kvm->arch.aliases[alias->slot];
+
+ /* FIXME: either disallow shrinking alias slots or disable
+ * size changes as done with memslots
+ */
+ if (!alias->memory_size) {
+ r = -EBUSY;
+ if (kvm_root_gfn_in_range(kvm, p->base_gfn,
+ p->base_gfn + p->npages - 1,
+ false))
+ goto out_unlock;
+ }
+
+
p->base_gfn = alias->guest_phys_addr >> PAGE_SHIFT;
p->npages = alias->memory_size >> PAGE_SHIFT;
p->target_gfn = alias->target_phys_addr >> PAGE_SHIFT;
@@ -1722,6 +1756,9 @@ static int kvm_vm_ioctl_set_memory_alias
return 0;
+out_unlock:
+ spin_unlock(&kvm->mmu_lock);
+ up_write(&kvm->slots_lock);
out:
return r;
}
@@ -4532,6 +4569,15 @@ void kvm_arch_flush_shadow(struct kvm *k
kvm_mmu_zap_all(kvm);
}
+int kvm_arch_can_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
+{
+ int ret;
+
+ ret = kvm_root_gfn_in_range(kvm, slot->base_gfn,
+ slot->base_gfn + slot->npages - 1, true);
+ return !ret;
+}
+
int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
{
return vcpu->arch.mp_state == KVM_MP_STATE_RUNNABLE
Index: kvm/include/linux/kvm_host.h
===================================================================
--- kvm.orig/include/linux/kvm_host.h
+++ kvm/include/linux/kvm_host.h
@@ -200,6 +200,7 @@ int kvm_arch_set_memory_region(struct kv
struct kvm_memory_slot old,
int user_alloc);
void kvm_arch_flush_shadow(struct kvm *kvm);
+int kvm_arch_can_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot);
gfn_t unalias_gfn(struct kvm *kvm, gfn_t gfn);
struct page *gfn_to_page(struct kvm *kvm, gfn_t gfn);
unsigned long gfn_to_hva(struct kvm *kvm, gfn_t gfn);
Index: kvm/virt/kvm/kvm_main.c
===================================================================
--- kvm.orig/virt/kvm/kvm_main.c
+++ kvm/virt/kvm/kvm_main.c
@@ -1179,8 +1179,13 @@ int __kvm_set_memory_region(struct kvm *
}
#endif /* not defined CONFIG_S390 */
- if (!npages)
+ if (!npages) {
kvm_arch_flush_shadow(kvm);
+ if (!kvm_arch_can_free_memslot(kvm, memslot)) {
+ r = -EBUSY;
+ goto out_free;
+ }
+ }
spin_lock(&kvm->mmu_lock);
if (mem->slot >= kvm->nmemslots)
* [patch 4/4] KVM: x86: disallow changing a slots size
2009-04-27 20:06 [patch 0/4] set_memory_region locking fixes / vcpu->arch.cr3 + removal of memslots mtosatti
` (2 preceding siblings ...)
2009-04-27 20:06 ` [patch 3/4] KVM: introduce kvm_arch_can_free_memslot, disallow slot deletion if cached cr3 mtosatti
@ 2009-04-27 20:06 ` mtosatti
3 siblings, 0 replies; 18+ messages in thread
From: mtosatti @ 2009-04-27 20:06 UTC (permalink / raw)
To: kvm; +Cc: Marcelo Tosatti
Support for shrinking aliases complicates the kernel code unnecessarily,
while userspace can achieve the same effect with two operations: delete
the alias, then create a new one.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Index: kvm/arch/x86/kvm/x86.c
===================================================================
--- kvm.orig/arch/x86/kvm/x86.c
+++ kvm/arch/x86/kvm/x86.c
@@ -1707,6 +1707,7 @@ static int kvm_vm_ioctl_set_memory_alias
{
int r, n;
struct kvm_mem_alias *p;
+ unsigned long npages;
r = -EINVAL;
/* General sanity checks */
@@ -1728,9 +1729,12 @@ static int kvm_vm_ioctl_set_memory_alias
p = &kvm->arch.aliases[alias->slot];
- /* FIXME: either disallow shrinking alias slots or disable
- * size changes as done with memslots
- */
+ /* Disallow changing an alias slot's size. */
+ npages = alias->memory_size >> PAGE_SHIFT;
+ r = -EINVAL;
+ if (npages && p->npages && npages != p->npages)
+ goto out_unlock;
+
if (!alias->memory_size) {
r = -EBUSY;
if (kvm_root_gfn_in_range(kvm, p->base_gfn,
* Re: [patch 3/4] KVM: introduce kvm_arch_can_free_memslot, disallow slot deletion if cached cr3
2009-04-27 20:06 ` [patch 3/4] KVM: introduce kvm_arch_can_free_memslot, disallow slot deletion if cached cr3 mtosatti
@ 2009-05-07 14:16 ` Avi Kivity
2009-05-07 18:58 ` Marcelo Tosatti
2009-05-07 21:03 ` [patch 0/4] set_memory_region locking fixes / cr3 vs removal of memslots v2 mtosatti
0 siblings, 2 replies; 18+ messages in thread
From: Avi Kivity @ 2009-05-07 14:16 UTC (permalink / raw)
To: mtosatti; +Cc: kvm
mtosatti@redhat.com wrote:
> Disallow the deletion of memory slots (and aliases, for x86 case), if a
> vcpu contains a cr3 that points to such slot/alias.
>
That allows the guest to induce failures in the host. Better to
triple-fault the guest instead.
>
> +int kvm_arch_can_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
> +{
> + return 1;
> +}
> +
>
In general, instead of stubs in every arch, have x86 say
KVM_HAVE_ARCH_CAN_FREE_MEMSLOT and define the stub in generic code when
that define is not present.
--
error compiling committee.c: too many arguments to function
* Re: [patch 3/4] KVM: introduce kvm_arch_can_free_memslot, disallow slot deletion if cached cr3
2009-05-07 14:16 ` Avi Kivity
@ 2009-05-07 18:58 ` Marcelo Tosatti
2009-05-07 21:03 ` [patch 0/4] set_memory_region locking fixes / cr3 vs removal of memslots v2 mtosatti
1 sibling, 0 replies; 18+ messages in thread
From: Marcelo Tosatti @ 2009-05-07 18:58 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm
On Thu, May 07, 2009 at 05:16:35PM +0300, Avi Kivity wrote:
> mtosatti@redhat.com wrote:
>> Disallow the deletion of memory slots (and aliases, for x86 case), if a
>> vcpu contains a cr3 that points to such slot/alias.
>>
>
> That allows the guest to induce failures in the host.
I don't understand what you mean. What is the problem with returning
errors in the ioctl handlers?
The guest can cause an overflow in qemu, overwrite the parameters to
KVM_GET_MSR_INDEX_LIST in an attempt to read kernel data, and get
-E2BIG. Or pick your combination.
> Better to triple-fault the guest instead.
Sure, we can additionally triple-fault it, but the kernel might attempt to
access the non-existent slot which cr3 points to before the TRIPLE_FAULT is
processed. So you have to avoid that possibility in the first place;
that's why the patch modifies the ioctls to fail.
>> +int kvm_arch_can_free_memslot(struct kvm *kvm, struct kvm_memory_slot
>> *slot)
>> +{
>> + return 1;
>> +}
>> +
>>
>
> In general, instead of stubs in every arch, have x86 say
> KVM_HAVE_ARCH_CAN_FREE_MEMSLOT and define the stub in generic code when
> that define is not present.
Will fix that.
* [patch 0/4] set_memory_region locking fixes / cr3 vs removal of memslots v2
2009-05-07 14:16 ` Avi Kivity
2009-05-07 18:58 ` Marcelo Tosatti
@ 2009-05-07 21:03 ` mtosatti
2009-05-07 21:03 ` [patch 1/4] KVM: MMU: protect kvm_mmu_change_mmu_pages with mmu_lock mtosatti
` (3 more replies)
1 sibling, 4 replies; 18+ messages in thread
From: mtosatti @ 2009-05-07 21:03 UTC (permalink / raw)
To: kvm; +Cc: avi
Addressing comments.
* [patch 1/4] KVM: MMU: protect kvm_mmu_change_mmu_pages with mmu_lock
2009-05-07 21:03 ` [patch 0/4] set_memory_region locking fixes / cr3 vs removal of memslots v2 mtosatti
@ 2009-05-07 21:03 ` mtosatti
2009-05-07 21:03 ` [patch 2/4] KVM: take mmu_lock when updating a deleted slot mtosatti
` (2 subsequent siblings)
3 siblings, 0 replies; 18+ messages in thread
From: mtosatti @ 2009-05-07 21:03 UTC (permalink / raw)
To: kvm; +Cc: avi, Andrea Arcangeli, Marcelo Tosatti
kvm_handle_hva, called by MMU notifiers, manipulates mmu data only with
the protection of mmu_lock.
Update kvm_mmu_change_mmu_pages callers to take mmu_lock, thus protecting
against kvm_handle_hva.
CC: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Index: kvm-pending/arch/x86/kvm/mmu.c
===================================================================
--- kvm-pending.orig/arch/x86/kvm/mmu.c
+++ kvm-pending/arch/x86/kvm/mmu.c
@@ -2723,7 +2723,6 @@ void kvm_mmu_slot_remove_write_access(st
{
struct kvm_mmu_page *sp;
- spin_lock(&kvm->mmu_lock);
list_for_each_entry(sp, &kvm->arch.active_mmu_pages, link) {
int i;
u64 *pt;
@@ -2738,7 +2737,6 @@ void kvm_mmu_slot_remove_write_access(st
pt[i] &= ~PT_WRITABLE_MASK;
}
kvm_flush_remote_tlbs(kvm);
- spin_unlock(&kvm->mmu_lock);
}
void kvm_mmu_zap_all(struct kvm *kvm)
Index: kvm-pending/arch/x86/kvm/x86.c
===================================================================
--- kvm-pending.orig/arch/x86/kvm/x86.c
+++ kvm-pending/arch/x86/kvm/x86.c
@@ -1607,10 +1607,12 @@ static int kvm_vm_ioctl_set_nr_mmu_pages
return -EINVAL;
down_write(&kvm->slots_lock);
+ spin_lock(&kvm->mmu_lock);
kvm_mmu_change_mmu_pages(kvm, kvm_nr_mmu_pages);
kvm->arch.n_requested_mmu_pages = kvm_nr_mmu_pages;
+ spin_unlock(&kvm->mmu_lock);
up_write(&kvm->slots_lock);
return 0;
}
@@ -1786,7 +1788,9 @@ int kvm_vm_ioctl_get_dirty_log(struct kv
/* If nothing is dirty, don't bother messing with page tables. */
if (is_dirty) {
+ spin_lock(&kvm->mmu_lock);
kvm_mmu_slot_remove_write_access(kvm, log->slot);
+ spin_unlock(&kvm->mmu_lock);
kvm_flush_remote_tlbs(kvm);
memslot = &kvm->memslots[log->slot];
n = ALIGN(memslot->npages, BITS_PER_LONG) / 8;
@@ -4530,12 +4534,14 @@ int kvm_arch_set_memory_region(struct kv
}
}
+ spin_lock(&kvm->mmu_lock);
if (!kvm->arch.n_requested_mmu_pages) {
unsigned int nr_mmu_pages = kvm_mmu_calculate_mmu_pages(kvm);
kvm_mmu_change_mmu_pages(kvm, nr_mmu_pages);
}
kvm_mmu_slot_remove_write_access(kvm, mem->slot);
+ spin_unlock(&kvm->mmu_lock);
kvm_flush_remote_tlbs(kvm);
return 0;
* [patch 2/4] KVM: take mmu_lock when updating a deleted slot
2009-05-07 21:03 ` [patch 0/4] set_memory_region locking fixes / cr3 vs removal of memslots v2 mtosatti
2009-05-07 21:03 ` [patch 1/4] KVM: MMU: protect kvm_mmu_change_mmu_pages with mmu_lock mtosatti
@ 2009-05-07 21:03 ` mtosatti
2009-05-07 21:03 ` [patch 3/4] KVM: introduce kvm_arch_can_free_memslot, disallow slot deletion if cached cr3 mtosatti
2009-05-07 21:03 ` [patch 4/4] KVM: x86: disallow changing a slots size mtosatti
3 siblings, 0 replies; 18+ messages in thread
From: mtosatti @ 2009-05-07 21:03 UTC (permalink / raw)
To: kvm; +Cc: avi, Marcelo Tosatti
kvm_handle_hva relies on mmu_lock protection to safely access
the memslot structures.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Index: kvm-pending/virt/kvm/kvm_main.c
===================================================================
--- kvm-pending.orig/virt/kvm/kvm_main.c
+++ kvm-pending/virt/kvm/kvm_main.c
@@ -1199,8 +1199,10 @@ int __kvm_set_memory_region(struct kvm *
kvm_free_physmem_slot(&old, npages ? &new : NULL);
/* Slot deletion case: we have to update the current slot */
+ spin_lock(&kvm->mmu_lock);
if (!npages)
*memslot = old;
+ spin_unlock(&kvm->mmu_lock);
#ifdef CONFIG_DMAR
/* map the pages in iommu page table */
r = kvm_iommu_map_pages(kvm, base_gfn, npages);
* [patch 3/4] KVM: introduce kvm_arch_can_free_memslot, disallow slot deletion if cached cr3
2009-05-07 21:03 ` [patch 0/4] set_memory_region locking fixes / cr3 vs removal of memslots v2 mtosatti
2009-05-07 21:03 ` [patch 1/4] KVM: MMU: protect kvm_mmu_change_mmu_pages with mmu_lock mtosatti
2009-05-07 21:03 ` [patch 2/4] KVM: take mmu_lock when updating a deleted slot mtosatti
@ 2009-05-07 21:03 ` mtosatti
2009-05-10 16:40 ` Avi Kivity
2009-05-07 21:03 ` [patch 4/4] KVM: x86: disallow changing a slots size mtosatti
3 siblings, 1 reply; 18+ messages in thread
From: mtosatti @ 2009-05-07 21:03 UTC (permalink / raw)
To: kvm; +Cc: avi, Marcelo Tosatti
Disallow the deletion of memory slots (and aliases, in the x86 case) if a
vcpu contains a cr3 that points to such a slot/alias.
This complements commit 6c20e1442bb1c62914bb85b7f4a38973d2a423ba.
v2:
- set KVM_REQ_TRIPLE_FAULT
- use __KVM_HAVE_ARCH_CAN_FREE_MEMSLOT to avoid duplication of stub
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Index: kvm-pending/arch/x86/kvm/x86.c
===================================================================
--- kvm-pending.orig/arch/x86/kvm/x86.c
+++ kvm-pending/arch/x86/kvm/x86.c
@@ -1636,6 +1636,29 @@ gfn_t unalias_gfn(struct kvm *kvm, gfn_t
return gfn;
}
+static int kvm_root_gfn_in_range(struct kvm *kvm, gfn_t base_gfn,
+ gfn_t end_gfn, bool unalias)
+{
+ struct kvm_vcpu *vcpu;
+ gfn_t root_gfn;
+ int i;
+
+ for (i = 0; i < KVM_MAX_VCPUS; ++i) {
+ vcpu = kvm->vcpus[i];
+ if (!vcpu)
+ continue;
+ root_gfn = vcpu->arch.cr3 >> PAGE_SHIFT;
+ if (unalias)
+ root_gfn = unalias_gfn(kvm, root_gfn);
+ if (root_gfn >= base_gfn && root_gfn <= end_gfn) {
+ set_bit(KVM_REQ_TRIPLE_FAULT, &vcpu->requests);
+ return 1;
+ }
+ }
+
+ return 0;
+}
+
/*
* Set a new alias region. Aliases map a portion of physical memory into
* another portion. This is useful for memory windows, for example the PC
@@ -1666,6 +1689,19 @@ static int kvm_vm_ioctl_set_memory_alias
spin_lock(&kvm->mmu_lock);
p = &kvm->arch.aliases[alias->slot];
+
+ /* FIXME: either disallow shrinking alias slots or disable
+ * size changes as done with memslots
+ */
+ if (!alias->memory_size) {
+ r = -EBUSY;
+ if (kvm_root_gfn_in_range(kvm, p->base_gfn,
+ p->base_gfn + p->npages - 1,
+ false))
+ goto out_unlock;
+ }
+
+
p->base_gfn = alias->guest_phys_addr >> PAGE_SHIFT;
p->npages = alias->memory_size >> PAGE_SHIFT;
p->target_gfn = alias->target_phys_addr >> PAGE_SHIFT;
@@ -1682,6 +1718,9 @@ static int kvm_vm_ioctl_set_memory_alias
return 0;
+out_unlock:
+ spin_unlock(&kvm->mmu_lock);
+ up_write(&kvm->slots_lock);
out:
return r;
}
@@ -4552,6 +4591,15 @@ void kvm_arch_flush_shadow(struct kvm *k
kvm_mmu_zap_all(kvm);
}
+int kvm_arch_can_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
+{
+ int ret;
+
+ ret = kvm_root_gfn_in_range(kvm, slot->base_gfn,
+ slot->base_gfn + slot->npages - 1, true);
+ return !ret;
+}
+
int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
{
return vcpu->arch.mp_state == KVM_MP_STATE_RUNNABLE
Index: kvm-pending/include/linux/kvm_host.h
===================================================================
--- kvm-pending.orig/include/linux/kvm_host.h
+++ kvm-pending/include/linux/kvm_host.h
@@ -200,6 +200,7 @@ int kvm_arch_set_memory_region(struct kv
struct kvm_memory_slot old,
int user_alloc);
void kvm_arch_flush_shadow(struct kvm *kvm);
+int kvm_arch_can_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot);
gfn_t unalias_gfn(struct kvm *kvm, gfn_t gfn);
struct page *gfn_to_page(struct kvm *kvm, gfn_t gfn);
unsigned long gfn_to_hva(struct kvm *kvm, gfn_t gfn);
Index: kvm-pending/virt/kvm/kvm_main.c
===================================================================
--- kvm-pending.orig/virt/kvm/kvm_main.c
+++ kvm-pending/virt/kvm/kvm_main.c
@@ -1061,6 +1061,13 @@ static int kvm_vm_release(struct inode *
return 0;
}
+#ifndef __KVM_HAVE_ARCH_CAN_FREE_MEMSLOT
+int kvm_arch_can_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
+{
+ return 1;
+}
+#endif
+
/*
* Allocate some memory and give it an address in the guest physical address
* space.
@@ -1179,8 +1186,13 @@ int __kvm_set_memory_region(struct kvm *
}
#endif /* not defined CONFIG_S390 */
- if (!npages)
+ if (!npages) {
kvm_arch_flush_shadow(kvm);
+ if (!kvm_arch_can_free_memslot(kvm, memslot)) {
+ r = -EBUSY;
+ goto out_free;
+ }
+ }
spin_lock(&kvm->mmu_lock);
if (mem->slot >= kvm->nmemslots)
Index: kvm-pending/arch/x86/include/asm/kvm.h
===================================================================
--- kvm-pending.orig/arch/x86/include/asm/kvm.h
+++ kvm-pending/arch/x86/include/asm/kvm.h
@@ -18,6 +18,8 @@
#define __KVM_HAVE_GUEST_DEBUG
#define __KVM_HAVE_MSIX
+#define __KVM_HAVE_ARCH_CAN_FREE_MEMSLOT
+
/* Architectural interrupt line count. */
#define KVM_NR_INTERRUPTS 256
* [patch 4/4] KVM: x86: disallow changing a slots size
2009-05-07 21:03 ` [patch 0/4] set_memory_region locking fixes / cr3 vs removal of memslots v2 mtosatti
` (2 preceding siblings ...)
2009-05-07 21:03 ` [patch 3/4] KVM: introduce kvm_arch_can_free_memslot, disallow slot deletion if cached cr3 mtosatti
@ 2009-05-07 21:03 ` mtosatti
3 siblings, 0 replies; 18+ messages in thread
From: mtosatti @ 2009-05-07 21:03 UTC (permalink / raw)
To: kvm; +Cc: avi, Marcelo Tosatti
Support for shrinking aliases complicates the kernel code unnecessarily,
while userspace can achieve the same effect with two operations: delete
the alias, then create a new one.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Index: kvm-pending/arch/x86/kvm/x86.c
===================================================================
--- kvm-pending.orig/arch/x86/kvm/x86.c
+++ kvm-pending/arch/x86/kvm/x86.c
@@ -1669,6 +1669,7 @@ static int kvm_vm_ioctl_set_memory_alias
{
int r, n;
struct kvm_mem_alias *p;
+ unsigned long npages;
r = -EINVAL;
/* General sanity checks */
@@ -1690,9 +1691,12 @@ static int kvm_vm_ioctl_set_memory_alias
p = &kvm->arch.aliases[alias->slot];
- /* FIXME: either disallow shrinking alias slots or disable
- * size changes as done with memslots
- */
+ /* Disallow changing an alias slot's size. */
+ npages = alias->memory_size >> PAGE_SHIFT;
+ r = -EINVAL;
+ if (npages && p->npages && npages != p->npages)
+ goto out_unlock;
+
if (!alias->memory_size) {
r = -EBUSY;
if (kvm_root_gfn_in_range(kvm, p->base_gfn,
* Re: [patch 3/4] KVM: introduce kvm_arch_can_free_memslot, disallow slot deletion if cached cr3
2009-05-07 21:03 ` [patch 3/4] KVM: introduce kvm_arch_can_free_memslot, disallow slot deletion if cached cr3 mtosatti
@ 2009-05-10 16:40 ` Avi Kivity
2009-05-12 21:55 ` [patch 0/3] locking fixes / cr3 validation v3 mtosatti
0 siblings, 1 reply; 18+ messages in thread
From: Avi Kivity @ 2009-05-10 16:40 UTC (permalink / raw)
To: mtosatti; +Cc: kvm
mtosatti@redhat.com wrote:
> Disallow the deletion of memory slots (and aliases, for x86 case), if a
> vcpu contains a cr3 that points to such slot/alias.
>
> This complements commit 6c20e1442bb1c62914bb85b7f4a38973d2a423ba.
>
> v2:
> - set KVM_REQ_TRIPLE_FAULT
> - use __KVM_HAVE_ARCH_CAN_FREE_MEMSLOT to avoid duplication of stub
>
> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
>
> Index: kvm-pending/arch/x86/kvm/x86.c
> ===================================================================
> --- kvm-pending.orig/arch/x86/kvm/x86.c
> +++ kvm-pending/arch/x86/kvm/x86.c
> @@ -1636,6 +1636,29 @@ gfn_t unalias_gfn(struct kvm *kvm, gfn_t
> return gfn;
> }
>
> +static int kvm_root_gfn_in_range(struct kvm *kvm, gfn_t base_gfn,
> + gfn_t end_gfn, bool unalias)
> +{
> + struct kvm_vcpu *vcpu;
> + gfn_t root_gfn;
> + int i;
> +
> + for (i = 0; i < KVM_MAX_VCPUS; ++i) {
> + vcpu = kvm->vcpus[i];
> + if (!vcpu)
> + continue;
> + root_gfn = vcpu->arch.cr3 >> PAGE_SHIFT;
>
The guest may have changed this by now.
> + if (unalias)
> + root_gfn = unalias_gfn(kvm, root_gfn);
> + if (root_gfn >= base_gfn && root_gfn <= end_gfn) {
> + set_bit(KVM_REQ_TRIPLE_FAULT, &vcpu->requests);
> + return 1;
> + }
> + }
> +
> + return 0;
> +}
> +
>
The naming is bad: a function named as a predicate shouldn't have side
effects.
Also, we should allow deleting the slot. There's no reason to deny
userspace something just because the guest is playing around.
I think this should be enough:
- take mmu lock
- request an mmu reload from all vcpus
- drop the slot
- release mmu lock
The reload will inject a #GP if cr3 is now out of bounds; that should be
changed to a triple fault, but everything is in place (set_cr3 already
checks).
--
error compiling committee.c: too many arguments to function
* [patch 0/3] locking fixes / cr3 validation v3
2009-05-10 16:40 ` Avi Kivity
@ 2009-05-12 21:55 ` mtosatti
2009-05-12 21:55 ` [patch 1/3] KVM: MMU: protect kvm_mmu_change_mmu_pages with mmu_lock mtosatti
` (3 more replies)
0 siblings, 4 replies; 18+ messages in thread
From: mtosatti @ 2009-05-12 21:55 UTC (permalink / raw)
To: avi; +Cc: kvm
Addressing comments.
* [patch 1/3] KVM: MMU: protect kvm_mmu_change_mmu_pages with mmu_lock
2009-05-12 21:55 ` [patch 0/3] locking fixes / cr3 validation v3 mtosatti
@ 2009-05-12 21:55 ` mtosatti
2009-05-12 21:55 ` [patch 2/3] KVM: take mmu_lock when updating a deleted slot mtosatti
` (2 subsequent siblings)
3 siblings, 0 replies; 18+ messages in thread
From: mtosatti @ 2009-05-12 21:55 UTC (permalink / raw)
To: avi; +Cc: kvm, Marcelo Tosatti
kvm_handle_hva, called by MMU notifiers, manipulates mmu data only with
the protection of mmu_lock.
Update kvm_mmu_change_mmu_pages callers to take mmu_lock, thus protecting
against kvm_handle_hva.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Index: kvm-pending/arch/x86/kvm/mmu.c
===================================================================
--- kvm-pending.orig/arch/x86/kvm/mmu.c
+++ kvm-pending/arch/x86/kvm/mmu.c
@@ -2723,7 +2723,6 @@ void kvm_mmu_slot_remove_write_access(st
{
struct kvm_mmu_page *sp;
- spin_lock(&kvm->mmu_lock);
list_for_each_entry(sp, &kvm->arch.active_mmu_pages, link) {
int i;
u64 *pt;
@@ -2738,7 +2737,6 @@ void kvm_mmu_slot_remove_write_access(st
pt[i] &= ~PT_WRITABLE_MASK;
}
kvm_flush_remote_tlbs(kvm);
- spin_unlock(&kvm->mmu_lock);
}
void kvm_mmu_zap_all(struct kvm *kvm)
Index: kvm-pending/arch/x86/kvm/x86.c
===================================================================
--- kvm-pending.orig/arch/x86/kvm/x86.c
+++ kvm-pending/arch/x86/kvm/x86.c
@@ -1607,10 +1607,12 @@ static int kvm_vm_ioctl_set_nr_mmu_pages
return -EINVAL;
down_write(&kvm->slots_lock);
+ spin_lock(&kvm->mmu_lock);
kvm_mmu_change_mmu_pages(kvm, kvm_nr_mmu_pages);
kvm->arch.n_requested_mmu_pages = kvm_nr_mmu_pages;
+ spin_unlock(&kvm->mmu_lock);
up_write(&kvm->slots_lock);
return 0;
}
@@ -1786,7 +1788,9 @@ int kvm_vm_ioctl_get_dirty_log(struct kv
/* If nothing is dirty, don't bother messing with page tables. */
if (is_dirty) {
+ spin_lock(&kvm->mmu_lock);
kvm_mmu_slot_remove_write_access(kvm, log->slot);
+ spin_unlock(&kvm->mmu_lock);
kvm_flush_remote_tlbs(kvm);
memslot = &kvm->memslots[log->slot];
n = ALIGN(memslot->npages, BITS_PER_LONG) / 8;
@@ -4530,12 +4534,14 @@ int kvm_arch_set_memory_region(struct kv
}
}
+ spin_lock(&kvm->mmu_lock);
if (!kvm->arch.n_requested_mmu_pages) {
unsigned int nr_mmu_pages = kvm_mmu_calculate_mmu_pages(kvm);
kvm_mmu_change_mmu_pages(kvm, nr_mmu_pages);
}
kvm_mmu_slot_remove_write_access(kvm, mem->slot);
+ spin_unlock(&kvm->mmu_lock);
kvm_flush_remote_tlbs(kvm);
return 0;
* [patch 2/3] KVM: take mmu_lock when updating a deleted slot
2009-05-12 21:55 ` [patch 0/3] locking fixes / cr3 validation v3 mtosatti
2009-05-12 21:55 ` [patch 1/3] KVM: MMU: protect kvm_mmu_change_mmu_pages with mmu_lock mtosatti
@ 2009-05-12 21:55 ` mtosatti
2009-05-12 21:55 ` [patch 3/3] KVM: x86: check for cr3 validity in mmu_alloc_roots mtosatti
2009-05-13 7:40 ` [patch 0/3] locking fixes / cr3 validation v3 Avi Kivity
3 siblings, 0 replies; 18+ messages in thread
From: mtosatti @ 2009-05-12 21:55 UTC (permalink / raw)
To: avi; +Cc: kvm, Marcelo Tosatti
kvm_handle_hva relies on mmu_lock protection to safely access
the memslot structures.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Index: kvm-pending/virt/kvm/kvm_main.c
===================================================================
--- kvm-pending.orig/virt/kvm/kvm_main.c
+++ kvm-pending/virt/kvm/kvm_main.c
@@ -1199,8 +1199,10 @@ int __kvm_set_memory_region(struct kvm *
kvm_free_physmem_slot(&old, npages ? &new : NULL);
/* Slot deletion case: we have to update the current slot */
+ spin_lock(&kvm->mmu_lock);
if (!npages)
*memslot = old;
+ spin_unlock(&kvm->mmu_lock);
#ifdef CONFIG_DMAR
/* map the pages in iommu page table */
r = kvm_iommu_map_pages(kvm, base_gfn, npages);
* [patch 3/3] KVM: x86: check for cr3 validity in mmu_alloc_roots
2009-05-12 21:55 ` [patch 0/3] locking fixes / cr3 validation v3 mtosatti
2009-05-12 21:55 ` [patch 1/3] KVM: MMU: protect kvm_mmu_change_mmu_pages with mmu_lock mtosatti
2009-05-12 21:55 ` [patch 2/3] KVM: take mmu_lock when updating a deleted slot mtosatti
@ 2009-05-12 21:55 ` mtosatti
2009-05-13 7:40 ` [patch 0/3] locking fixes / cr3 validation v3 Avi Kivity
3 siblings, 0 replies; 18+ messages in thread
From: mtosatti @ 2009-05-12 21:55 UTC (permalink / raw)
To: avi; +Cc: kvm, Marcelo Tosatti
[-- Attachment #1: reload-cr3 --]
[-- Type: text/plain, Size: 2863 bytes --]
Verify that the cr3 address stored in vcpu->arch.cr3 points to an existing
memslot. If not, inject a triple fault.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Index: kvm/arch/x86/kvm/mmu.c
===================================================================
--- kvm.orig/arch/x86/kvm/mmu.c
+++ kvm/arch/x86/kvm/mmu.c
@@ -1912,7 +1912,19 @@ static void mmu_free_roots(struct kvm_vc
vcpu->arch.mmu.root_hpa = INVALID_PAGE;
}
-static void mmu_alloc_roots(struct kvm_vcpu *vcpu)
+static int mmu_check_root(struct kvm_vcpu *vcpu, gfn_t root_gfn)
+{
+ int ret = 0;
+
+ if (!kvm_is_visible_gfn(vcpu->kvm, root_gfn)) {
+ set_bit(KVM_REQ_TRIPLE_FAULT, &vcpu->requests);
+ ret = 1;
+ }
+
+ return ret;
+}
+
+static int mmu_alloc_roots(struct kvm_vcpu *vcpu)
{
int i;
gfn_t root_gfn;
@@ -1927,13 +1939,15 @@ static void mmu_alloc_roots(struct kvm_v
ASSERT(!VALID_PAGE(root));
if (tdp_enabled)
direct = 1;
+ if (mmu_check_root(vcpu, root_gfn))
+ return 1;
sp = kvm_mmu_get_page(vcpu, root_gfn, 0,
PT64_ROOT_LEVEL, direct,
ACC_ALL, NULL);
root = __pa(sp->spt);
++sp->root_count;
vcpu->arch.mmu.root_hpa = root;
- return;
+ return 0;
}
direct = !is_paging(vcpu);
if (tdp_enabled)
@@ -1950,6 +1964,8 @@ static void mmu_alloc_roots(struct kvm_v
root_gfn = vcpu->arch.pdptrs[i] >> PAGE_SHIFT;
} else if (vcpu->arch.mmu.root_level == 0)
root_gfn = 0;
+ if (mmu_check_root(vcpu, root_gfn))
+ return 1;
sp = kvm_mmu_get_page(vcpu, root_gfn, i << 30,
PT32_ROOT_LEVEL, direct,
ACC_ALL, NULL);
@@ -1958,6 +1974,7 @@ static void mmu_alloc_roots(struct kvm_v
vcpu->arch.mmu.pae_root[i] = root | PT_PRESENT_MASK;
}
vcpu->arch.mmu.root_hpa = __pa(vcpu->arch.mmu.pae_root);
+ return 0;
}
static void mmu_sync_roots(struct kvm_vcpu *vcpu)
@@ -1976,7 +1993,7 @@ static void mmu_sync_roots(struct kvm_vc
for (i = 0; i < 4; ++i) {
hpa_t root = vcpu->arch.mmu.pae_root[i];
- if (root) {
+ if (root && VALID_PAGE(root)) {
root &= PT64_BASE_ADDR_MASK;
sp = page_header(root);
mmu_sync_children(vcpu, sp);
@@ -2311,9 +2328,11 @@ int kvm_mmu_load(struct kvm_vcpu *vcpu)
goto out;
spin_lock(&vcpu->kvm->mmu_lock);
kvm_mmu_free_some_pages(vcpu);
- mmu_alloc_roots(vcpu);
+ r = mmu_alloc_roots(vcpu);
mmu_sync_roots(vcpu);
spin_unlock(&vcpu->kvm->mmu_lock);
+ if (r)
+ goto out;
kvm_x86_ops->set_cr3(vcpu, vcpu->arch.mmu.root_hpa);
kvm_mmu_flush_tlb(vcpu);
out:
Index: kvm/arch/x86/kvm/x86.c
===================================================================
--- kvm.orig/arch/x86/kvm/x86.c
+++ kvm/arch/x86/kvm/x86.c
@@ -4554,6 +4554,7 @@ int kvm_arch_set_memory_region(struct kv
void kvm_arch_flush_shadow(struct kvm *kvm)
{
kvm_mmu_zap_all(kvm);
+ kvm_reload_remote_mmus(kvm);
}
int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
* Re: [patch 0/3] locking fixes / cr3 validation v3
2009-05-12 21:55 ` [patch 0/3] locking fixes / cr3 validation v3 mtosatti
` (2 preceding siblings ...)
2009-05-12 21:55 ` [patch 3/3] KVM: x86: check for cr3 validity in mmu_alloc_roots mtosatti
@ 2009-05-13 7:40 ` Avi Kivity
3 siblings, 0 replies; 18+ messages in thread
From: Avi Kivity @ 2009-05-13 7:40 UTC (permalink / raw)
To: mtosatti; +Cc: kvm
mtosatti@redhat.com wrote:
> Addressing comments.
Applied all. But please fix your From: header.
--
Do not meddle in the internals of kernels, for they are subtle and quick to panic.
end of thread, other threads:[~2009-05-13 7:40 UTC | newest]
Thread overview: 18+ messages
-- links below jump to the message on this page --
2009-04-27 20:06 [patch 0/4] set_memory_region locking fixes / vcpu->arch.cr3 + removal of memslots mtosatti
2009-04-27 20:06 ` [patch 1/4] KVM: MMU: protect kvm_mmu_change_mmu_pages with mmu_lock mtosatti
2009-04-27 20:06 ` [patch 2/4] KVM: take mmu_lock when updating a deleted slot mtosatti
2009-04-27 20:06 ` [patch 3/4] KVM: introduce kvm_arch_can_free_memslot, disallow slot deletion if cached cr3 mtosatti
2009-05-07 14:16 ` Avi Kivity
2009-05-07 18:58 ` Marcelo Tosatti
2009-05-07 21:03 ` [patch 0/4] set_memory_region locking fixes / cr3 vs removal of memslots v2 mtosatti
2009-05-07 21:03 ` [patch 1/4] KVM: MMU: protect kvm_mmu_change_mmu_pages with mmu_lock mtosatti
2009-05-07 21:03 ` [patch 2/4] KVM: take mmu_lock when updating a deleted slot mtosatti
2009-05-07 21:03 ` [patch 3/4] KVM: introduce kvm_arch_can_free_memslot, disallow slot deletion if cached cr3 mtosatti
2009-05-10 16:40 ` Avi Kivity
2009-05-12 21:55 ` [patch 0/3] locking fixes / cr3 validation v3 mtosatti
2009-05-12 21:55 ` [patch 1/3] KVM: MMU: protect kvm_mmu_change_mmu_pages with mmu_lock mtosatti
2009-05-12 21:55 ` [patch 2/3] KVM: take mmu_lock when updating a deleted slot mtosatti
2009-05-12 21:55 ` [patch 3/3] KVM: x86: check for cr3 validity in mmu_alloc_roots mtosatti
2009-05-13 7:40 ` [patch 0/3] locking fixes / cr3 validation v3 Avi Kivity
2009-05-07 21:03 ` [patch 4/4] KVM: x86: disallow changing a slots size mtosatti
2009-04-27 20:06 ` mtosatti