* [PATCH 0/5] KVM: Make the maximum number of user memslots configurable and raise the default
From: Vitaly Kuznetsov @ 2021-01-27 17:57 UTC
  To: kvm, Paolo Bonzini
  Cc: Sean Christopherson, Wanpeng Li, Jim Mattson, Igor Mammedov,
	Maciej S . Szmigiero

This is a successor to the previously sent "[PATCH RFC 0/4] KVM: x86:
Drastically raise KVM_USER_MEM_SLOTS limit".

Changes since RFC:
- Rewrote everything [Sean]. The maximum number of slots is now
  a per-VM thing controlled by an ioctl.

Original description:

The current KVM_USER_MEM_SLOTS limit on x86 (509) can be a limiting factor
for some configurations. In particular, when QEMU tries to start a Windows
guest with Hyper-V SynIC enabled and e.g. 256 vCPUs, the limit is hit:
SynIC requires two pages per vCPU, and the guest is free to pick any GFN
for each of them. This fragments memslots, as QEMU wants to have a separate
memslot for each of these pages (which are supposed to act as 'overlay'
pages).
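
(For illustration: 256 vCPUs x 2 SynIC pages = 512 potential 'overlay'
memslots, which already exceeds the 509 limit before any regular RAM, ROM
or device memory regions are counted.)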

Memory slots are allocated dynamically in KVM when added, so the only real
limitation is the 'id_to_index' array, which is 'short'. We don't seem to
have any other KVM_MEM_SLOTS_NUM/KVM_USER_MEM_SLOTS-sized statically
defined structures.

We could've just raised the limit to e.g. '1021' (we have 3 private
memslots on x86), and this should be enough for now as KVM_MAX_VCPUS is
'288', but AFAIK there are plans to raise that limit as well. Instead,
raise the default value to SHRT_MAX (32767) - KVM_PRIVATE_MEM_SLOTS and
introduce a new ioctl to set the limit per-VM.
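
(On x86, where KVM_PRIVATE_MEM_SLOTS is 3, the new default works out to
32767 - 3 = 32764 user memslots per VM.)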

Vitaly Kuznetsov (5):
  KVM: Make the maximum number of user memslots a per-VM thing
  KVM: Raise the maximum number of user memslots
  KVM: Make the maximum number of user memslots configurable
  selftests: kvm: Test the newly introduced KVM_CAP_MEMSLOTS_LIMIT
  selftests: kvm: Raise the default timeout to 120 seconds

 Documentation/virt/kvm/api.rst                | 16 +++++++
 arch/arm64/include/asm/kvm_host.h             |  1 -
 arch/mips/include/asm/kvm_host.h              |  1 -
 arch/powerpc/include/asm/kvm_host.h           |  1 -
 arch/powerpc/kvm/book3s_hv.c                  |  2 +-
 arch/s390/include/asm/kvm_host.h              |  1 -
 arch/s390/kvm/kvm-s390.c                      |  2 +-
 arch/x86/include/asm/kvm_host.h               |  2 -
 include/linux/kvm_host.h                      |  6 +--
 include/uapi/linux/kvm.h                      |  1 +
 .../testing/selftests/kvm/include/kvm_util.h  |  1 +
 tools/testing/selftests/kvm/lib/kvm_util.c    | 30 ++++++++++++-
 .../selftests/kvm/set_memory_region_test.c    | 43 ++++++++++++++++---
 tools/testing/selftests/kvm/settings          |  1 +
 virt/kvm/dirty_ring.c                         |  2 +-
 virt/kvm/kvm_main.c                           | 42 +++++++++++++++---
 16 files changed, 128 insertions(+), 24 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/settings

-- 
2.29.2



* [PATCH 1/5] KVM: Make the maximum number of user memslots a per-VM thing
From: Vitaly Kuznetsov @ 2021-01-27 17:57 UTC
  To: kvm, Paolo Bonzini
  Cc: Sean Christopherson, Wanpeng Li, Jim Mattson, Igor Mammedov,
	Maciej S . Szmigiero

Limiting the maximum number of user memslots globally can be undesirable as
different VMs may have different needs. Generally, a relatively small
number should suffice, and a VMM may want to enforce the limit so a VM
won't accidentally consume too much memory. On the other hand, the number
of required memslots can depend on the number of assigned vCPUs, e.g.
Hyper-V SynIC may require up to two additional slots per vCPU.

Prepare to limit the maximum number of user memslots per-VM. No real
functional change in this patch as the limit is still hard-coded to
KVM_USER_MEM_SLOTS.

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
---
 arch/powerpc/kvm/book3s_hv.c |  2 +-
 arch/s390/kvm/kvm-s390.c     |  2 +-
 include/linux/kvm_host.h     |  1 +
 virt/kvm/dirty_ring.c        |  2 +-
 virt/kvm/kvm_main.c          | 11 ++++++-----
 5 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 6f612d240392..bea2f34e3662 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -4472,7 +4472,7 @@ static int kvm_vm_ioctl_get_dirty_log_hv(struct kvm *kvm,
 	mutex_lock(&kvm->slots_lock);
 
 	r = -EINVAL;
-	if (log->slot >= KVM_USER_MEM_SLOTS)
+	if (log->slot >= kvm->memslots_max)
 		goto out;
 
 	slots = kvm_memslots(kvm);
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index dbafd057ca6a..b8c49105f40c 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -640,7 +640,7 @@ int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm,
 	mutex_lock(&kvm->slots_lock);
 
 	r = -EINVAL;
-	if (log->slot >= KVM_USER_MEM_SLOTS)
+	if (log->slot >= kvm->memslots_max)
 		goto out;
 
 	r = kvm_get_dirty_log(kvm, log, &is_dirty, &memslot);
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index f3b1013fb22c..0033ccffe617 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -513,6 +513,7 @@ struct kvm {
 	pid_t userspace_pid;
 	unsigned int max_halt_poll_ns;
 	u32 dirty_ring_size;
+	short int memslots_max;
 };
 
 #define kvm_err(fmt, ...) \
diff --git a/virt/kvm/dirty_ring.c b/virt/kvm/dirty_ring.c
index 9d01299563ee..40d0a749a55d 100644
--- a/virt/kvm/dirty_ring.c
+++ b/virt/kvm/dirty_ring.c
@@ -52,7 +52,7 @@ static void kvm_reset_dirty_gfn(struct kvm *kvm, u32 slot, u64 offset, u64 mask)
 	as_id = slot >> 16;
 	id = (u16)slot;
 
-	if (as_id >= KVM_ADDRESS_SPACE_NUM || id >= KVM_USER_MEM_SLOTS)
+	if (as_id >= KVM_ADDRESS_SPACE_NUM || id >= kvm->memslots_max)
 		return;
 
 	memslot = id_to_memslot(__kvm_memslots(kvm, as_id), id);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 8367d88ce39b..a78e982e7107 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -755,6 +755,7 @@ static struct kvm *kvm_create_vm(unsigned long type)
 	INIT_LIST_HEAD(&kvm->devices);
 
 	BUILD_BUG_ON(KVM_MEM_SLOTS_NUM > SHRT_MAX);
+	kvm->memslots_max = KVM_USER_MEM_SLOTS;
 
 	if (init_srcu_struct(&kvm->srcu))
 		goto out_err_no_srcu;
@@ -1404,7 +1405,7 @@ EXPORT_SYMBOL_GPL(kvm_set_memory_region);
 static int kvm_vm_ioctl_set_memory_region(struct kvm *kvm,
 					  struct kvm_userspace_memory_region *mem)
 {
-	if ((u16)mem->slot >= KVM_USER_MEM_SLOTS)
+	if ((u16)mem->slot >= kvm->memslots_max)
 		return -EINVAL;
 
 	return kvm_set_memory_region(kvm, mem);
@@ -1435,7 +1436,7 @@ int kvm_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log,
 
 	as_id = log->slot >> 16;
 	id = (u16)log->slot;
-	if (as_id >= KVM_ADDRESS_SPACE_NUM || id >= KVM_USER_MEM_SLOTS)
+	if (as_id >= KVM_ADDRESS_SPACE_NUM || id >= kvm->memslots_max)
 		return -EINVAL;
 
 	slots = __kvm_memslots(kvm, as_id);
@@ -1497,7 +1498,7 @@ static int kvm_get_dirty_log_protect(struct kvm *kvm, struct kvm_dirty_log *log)
 
 	as_id = log->slot >> 16;
 	id = (u16)log->slot;
-	if (as_id >= KVM_ADDRESS_SPACE_NUM || id >= KVM_USER_MEM_SLOTS)
+	if (as_id >= KVM_ADDRESS_SPACE_NUM || id >= kvm->memslots_max)
 		return -EINVAL;
 
 	slots = __kvm_memslots(kvm, as_id);
@@ -1609,7 +1610,7 @@ static int kvm_clear_dirty_log_protect(struct kvm *kvm,
 
 	as_id = log->slot >> 16;
 	id = (u16)log->slot;
-	if (as_id >= KVM_ADDRESS_SPACE_NUM || id >= KVM_USER_MEM_SLOTS)
+	if (as_id >= KVM_ADDRESS_SPACE_NUM || id >= kvm->memslots_max)
 		return -EINVAL;
 
 	if (log->first_page & 63)
@@ -3682,7 +3683,7 @@ static long kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg)
 		return KVM_ADDRESS_SPACE_NUM;
 #endif
 	case KVM_CAP_NR_MEMSLOTS:
-		return KVM_USER_MEM_SLOTS;
+		return kvm ? kvm->memslots_max : KVM_USER_MEM_SLOTS;
 	case KVM_CAP_DIRTY_LOG_RING:
 #if KVM_DIRTY_LOG_PAGE_OFFSET > 0
 		return KVM_DIRTY_RING_MAX_ENTRIES * sizeof(struct kvm_dirty_gfn);
-- 
2.29.2



* [PATCH 2/5] KVM: Raise the maximum number of user memslots
From: Vitaly Kuznetsov @ 2021-01-27 17:57 UTC
  To: kvm, Paolo Bonzini
  Cc: Sean Christopherson, Wanpeng Li, Jim Mattson, Igor Mammedov,
	Maciej S . Szmigiero

Current KVM_USER_MEM_SLOTS limits are arch-specific (512 on Power, 509 on
x86, 32 on s390, 16 on MIPS) but they don't really need to be. Memory slots
are allocated dynamically in KVM when added, so the only real limitation is
the 'id_to_index' array, which is 'short'. We don't have any other
KVM_MEM_SLOTS_NUM/KVM_USER_MEM_SLOTS-sized statically defined structures.
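
For reference, the 'short' constraint comes from the slot id to array
index mapping table; at the time of this series, struct kvm_memslots in
include/linux/kvm_host.h looks approximately like this (reproduced here
for illustration only):

	struct kvm_memslots {
		u64 generation;
		/* The mapping table from slot id to the slot index. */
		short id_to_index[KVM_MEM_SLOTS_NUM];
		atomic_t lru_slot;
		int used_slots;
		struct kvm_memory_slot memslots[];
	};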

A low KVM_USER_MEM_SLOTS can be a limiting factor for some configurations.
In particular, when QEMU tries to start a Windows guest with Hyper-V SynIC
enabled and e.g. 256 vCPUs, the limit is hit: SynIC requires two pages per
vCPU, and the guest is free to pick any GFN for each of them. This
fragments memslots, as QEMU wants to have a separate memslot for each of
these pages (which are supposed to act as 'overlay' pages).

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
---
 arch/arm64/include/asm/kvm_host.h   | 1 -
 arch/mips/include/asm/kvm_host.h    | 1 -
 arch/powerpc/include/asm/kvm_host.h | 1 -
 arch/s390/include/asm/kvm_host.h    | 1 -
 arch/x86/include/asm/kvm_host.h     | 2 --
 include/linux/kvm_host.h            | 5 ++---
 virt/kvm/kvm_main.c                 | 1 +
 7 files changed, 3 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 8fcfab0c2567..1b8a3d825276 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -30,7 +30,6 @@
 
 #define __KVM_HAVE_ARCH_INTC_INITIALIZED
 
-#define KVM_USER_MEM_SLOTS 512
 #define KVM_HALT_POLL_NS_DEFAULT 500000
 
 #include <kvm/arm_vgic.h>
diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
index 24f3d0f9996b..3a5612e7304c 100644
--- a/arch/mips/include/asm/kvm_host.h
+++ b/arch/mips/include/asm/kvm_host.h
@@ -83,7 +83,6 @@
 
 
 #define KVM_MAX_VCPUS		16
-#define KVM_USER_MEM_SLOTS	16
 /* memory slots that does not exposed to userspace */
 #define KVM_PRIVATE_MEM_SLOTS	0
 
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index d67a470e95a3..2b9b6855ec86 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -28,7 +28,6 @@
 
 #define KVM_MAX_VCPUS		NR_CPUS
 #define KVM_MAX_VCORES		NR_CPUS
-#define KVM_USER_MEM_SLOTS	512
 
 #include <asm/cputhreads.h>
 
diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h
index 74f9a036bab2..6bcfc5614bbc 100644
--- a/arch/s390/include/asm/kvm_host.h
+++ b/arch/s390/include/asm/kvm_host.h
@@ -28,7 +28,6 @@
 #define KVM_S390_BSCA_CPU_SLOTS 64
 #define KVM_S390_ESCA_CPU_SLOTS 248
 #define KVM_MAX_VCPUS 255
-#define KVM_USER_MEM_SLOTS 32
 
 /*
  * These seem to be used for allocating ->chip in the routing table, which we
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 3d6616f6f6ef..a6547dc9191a 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -40,10 +40,8 @@
 #define KVM_MAX_VCPUS 288
 #define KVM_SOFT_MAX_VCPUS 240
 #define KVM_MAX_VCPU_ID 1023
-#define KVM_USER_MEM_SLOTS 509
 /* memory slots that are not exposed to userspace */
 #define KVM_PRIVATE_MEM_SLOTS 3
-#define KVM_MEM_SLOTS_NUM (KVM_USER_MEM_SLOTS + KVM_PRIVATE_MEM_SLOTS)
 
 #define KVM_HALT_POLL_NS_DEFAULT 200000
 
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 0033ccffe617..754020140f37 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -425,9 +425,8 @@ struct kvm_irq_routing_table {
 #define KVM_PRIVATE_MEM_SLOTS 0
 #endif
 
-#ifndef KVM_MEM_SLOTS_NUM
-#define KVM_MEM_SLOTS_NUM (KVM_USER_MEM_SLOTS + KVM_PRIVATE_MEM_SLOTS)
-#endif
+#define KVM_MEM_SLOTS_NUM SHRT_MAX
+#define KVM_USER_MEM_SLOTS (KVM_MEM_SLOTS_NUM - KVM_PRIVATE_MEM_SLOTS)
 
 #ifndef __KVM_VCPU_MULTIPLE_ADDRESS_SPACE
 static inline int kvm_arch_vcpu_memslots_id(struct kvm_vcpu *vcpu)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index a78e982e7107..5adb1b694304 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -755,6 +755,7 @@ static struct kvm *kvm_create_vm(unsigned long type)
 	INIT_LIST_HEAD(&kvm->devices);
 
 	BUILD_BUG_ON(KVM_MEM_SLOTS_NUM > SHRT_MAX);
+	BUILD_BUG_ON(KVM_PRIVATE_MEM_SLOTS >= KVM_MEM_SLOTS_NUM);
 	kvm->memslots_max = KVM_USER_MEM_SLOTS;
 
 	if (init_srcu_struct(&kvm->srcu))
-- 
2.29.2



* [PATCH 3/5] KVM: Make the maximum number of user memslots configurable
From: Vitaly Kuznetsov @ 2021-01-27 17:57 UTC
  To: kvm, Paolo Bonzini
  Cc: Sean Christopherson, Wanpeng Li, Jim Mattson, Igor Mammedov,
	Maciej S . Szmigiero

The maximum number of user memslots is now a per-VM setting but there is
no way to change it. Introduce a KVM_CAP_MEMSLOTS_LIMIT per-VM capability
to set the limit.

When the limit is set, it becomes impossible to manage memslots whose id
is greater than or equal to it, so make sure no such slots exist.
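
For illustration, a VMM could enable the capability roughly like this
(a hypothetical userspace sketch, not part of this patch; it assumes a
kvm.h carrying the KVM_CAP_MEMSLOTS_LIMIT definition from this series):

	#include <linux/kvm.h>
	#include <sys/ioctl.h>

	/* Cap 'vm_fd' at 'nr_slots' user memslots.  The ioctl fails with
	 * errno set to E2BIG if nr_slots exceeds the global
	 * KVM_CAP_NR_MEMSLOTS value, or to EINVAL if a memslot with
	 * id >= nr_slots already exists. */
	static int limit_memslots(int vm_fd, __u64 nr_slots)
	{
		struct kvm_enable_cap cap = {
			.cap = KVM_CAP_MEMSLOTS_LIMIT,
			.args[0] = nr_slots,
		};

		return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
	}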

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
---
 Documentation/virt/kvm/api.rst | 16 ++++++++++++++++
 include/uapi/linux/kvm.h       |  1 +
 virt/kvm/kvm_main.c            | 30 ++++++++++++++++++++++++++++++
 3 files changed, 47 insertions(+)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 99ceb978c8b0..551236fc1261 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -6038,6 +6038,22 @@ KVM_EXIT_X86_RDMSR and KVM_EXIT_X86_WRMSR exit notifications which user space
 can then handle to implement model specific MSR handling and/or user notifications
 to inform a user that an MSR was not handled.
 
+7.22 KVM_CAP_MEMSLOTS_LIMIT
+---------------------------
+
+:Architectures: all
+:Target: VM
+:Parameters: args[0] is the maximum number of memory slots
+:Returns: 0 on success; -E2BIG when the limit is set above the global
+          KVM_CAP_NR_MEMSLOTS value; -EINVAL when a slot with id >= limit exists
+
+This capability overrides the default maximum number of memory slots for the
+target VM. The limit can be changed at any time; however, when it is lowered,
+no memory slot with an id ('slot' in 'struct kvm_userspace_memory_region')
+greater than or equal to the requested value may exist, otherwise -EINVAL is
+returned. The maximum allowed value can be queried via the system-wide
+KVM_CAP_NR_MEMSLOTS capability; the per-VM capability reports the current limit.
+
 8. Other capabilities.
 ======================
 
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 374c67875cdb..f68b0cde801a 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1058,6 +1058,7 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_ENFORCE_PV_FEATURE_CPUID 190
 #define KVM_CAP_SYS_HYPERV_CPUID 191
 #define KVM_CAP_DIRTY_LOG_RING 192
+#define KVM_CAP_MEMSLOTS_LIMIT 193
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 5adb1b694304..da2cbfe9c9ee 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1412,6 +1412,31 @@ static int kvm_vm_ioctl_set_memory_region(struct kvm *kvm,
 	return kvm_set_memory_region(kvm, mem);
 }
 
+static int kvm_set_memslots_max(struct kvm *kvm, u64 max)
+{
+	struct kvm_memory_slot *memslot;
+	int r = 0, as_id;
+
+	if (max > KVM_USER_MEM_SLOTS)
+		return -E2BIG;
+
+	mutex_lock(&kvm->slots_lock);
+	for (as_id = 0; as_id < KVM_ADDRESS_SPACE_NUM; as_id++) {
+		kvm_for_each_memslot(memslot, __kvm_memslots(kvm, as_id)) {
+			if (memslot->id >= max) {
+				r = -EINVAL;
+				break;
+			}
+		}
+	}
+	if (!r)
+		kvm->memslots_max = max;
+
+	mutex_unlock(&kvm->slots_lock);
+
+	return r;
+}
+
 #ifndef CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT
 /**
  * kvm_get_dirty_log - get a snapshot of dirty pages
@@ -3664,6 +3689,7 @@ static long kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg)
 	case KVM_CAP_CHECK_EXTENSION_VM:
 	case KVM_CAP_ENABLE_CAP_VM:
 	case KVM_CAP_HALT_POLL:
+	case KVM_CAP_MEMSLOTS_LIMIT:
 		return 1;
 #ifdef CONFIG_KVM_MMIO
 	case KVM_CAP_COALESCED_MMIO:
@@ -3789,6 +3815,10 @@ static int kvm_vm_ioctl_enable_cap_generic(struct kvm *kvm,
 	}
 	case KVM_CAP_DIRTY_LOG_RING:
 		return kvm_vm_ioctl_enable_dirty_log_ring(kvm, cap->args[0]);
+	case KVM_CAP_MEMSLOTS_LIMIT:
+		if (cap->flags)
+			return -EINVAL;
+		return kvm_set_memslots_max(kvm, cap->args[0]);
 	default:
 		return kvm_vm_ioctl_enable_cap(kvm, cap);
 	}
-- 
2.29.2



* [PATCH 4/5] selftests: kvm: Test the newly introduced KVM_CAP_MEMSLOTS_LIMIT
From: Vitaly Kuznetsov @ 2021-01-27 17:57 UTC
  To: kvm, Paolo Bonzini
  Cc: Sean Christopherson, Wanpeng Li, Jim Mattson, Igor Mammedov,
	Maciej S . Szmigiero

The number of user memslots can now be limited with KVM_CAP_MEMSLOTS_LIMIT
and, when limited, per-VM KVM_CHECK_EXTENSION(KVM_CAP_NR_MEMSLOTS) should
return the updated value.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
---
 .../testing/selftests/kvm/include/kvm_util.h  |  1 +
 tools/testing/selftests/kvm/lib/kvm_util.c    | 30 ++++++++++++-
 .../selftests/kvm/set_memory_region_test.c    | 43 ++++++++++++++++---
 3 files changed, 67 insertions(+), 7 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 5cbb861525ed..eb759a54dfc6 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -86,6 +86,7 @@ enum vm_mem_backing_src_type {
 };
 
 int kvm_check_cap(long cap);
+int vm_check_cap(struct kvm_vm *vm, long cap);
 int vm_enable_cap(struct kvm_vm *vm, struct kvm_enable_cap *cap);
 int vcpu_enable_cap(struct kvm_vm *vm, uint32_t vcpu_id,
 		    struct kvm_enable_cap *cap);
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index fa5a90e6c6f0..115947b77808 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -31,7 +31,7 @@ static void *align(void *x, size_t size)
 }
 
 /*
- * Capability
+ * Check a global capability
  *
  * Input Args:
  *   cap - Capability
@@ -64,6 +64,34 @@ int kvm_check_cap(long cap)
 	return ret;
 }
 
+/*
+ * Check a per-VM capability
+ *
+ * Input Args:
+ *   vm - Virtual Machine
+ *   cap - Capability
+ *
+ * Output Args: None
+ *
+ * Return:
+ *   On success, the value corresponding to the capability (KVM_CAP_*)
+ *   specified by the value of cap.  On failure, a TEST_ASSERT failure
+ *   is produced.
+ *
+ * Looks up and returns the value corresponding to the capability
+ * (KVM_CAP_*) given by cap.
+ */
+int vm_check_cap(struct kvm_vm *vm, long cap)
+{
+	int ret;
+
+	ret = ioctl(vm->fd, KVM_CHECK_EXTENSION, cap);
+	TEST_ASSERT(ret != -1, "KVM_CHECK_EXTENSION IOCTL failed,\n"
+		"  rc: %i errno: %i", ret, errno);
+
+	return ret;
+}
+
 /* VM Enable Capability
  *
  * Input Args:
diff --git a/tools/testing/selftests/kvm/set_memory_region_test.c b/tools/testing/selftests/kvm/set_memory_region_test.c
index f127ed31dba7..66ed011f26f3 100644
--- a/tools/testing/selftests/kvm/set_memory_region_test.c
+++ b/tools/testing/selftests/kvm/set_memory_region_test.c
@@ -330,14 +330,14 @@ static void test_zero_memory_regions(void)
 #endif /* __x86_64__ */
 
 /*
- * Test it can be added memory slots up to KVM_CAP_NR_MEMSLOTS, then any
- * tentative to add further slots should fail.
+ * Test that memory slots can be added up to KVM_CAP_NR_MEMSLOTS (or the
+ * given limit), and that any attempt to add further slots fails.
  */
-static void test_add_max_memory_regions(void)
+static void test_add_max_memory_regions(int limit)
 {
 	int ret;
 	struct kvm_vm *vm;
-	uint32_t max_mem_slots;
+	uint32_t max_mem_slots, vm_mem_slots;
 	uint32_t slot;
 	uint64_t guest_addr = 0x0;
 	uint64_t mem_reg_npages;
@@ -346,10 +346,38 @@ static void test_add_max_memory_regions(void)
 	max_mem_slots = kvm_check_cap(KVM_CAP_NR_MEMSLOTS);
 	TEST_ASSERT(max_mem_slots > 0,
 		    "KVM_CAP_NR_MEMSLOTS should be greater than 0");
-	pr_info("Allowed number of memory slots: %i\n", max_mem_slots);
+
+	if (!limit)
+		pr_info("Allowed number of memory slots: %i\n", max_mem_slots);
 
 	vm = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
 
+	if (limit) {
+		struct kvm_enable_cap cap = {
+			.cap = KVM_CAP_MEMSLOTS_LIMIT,
+			.args[0] = limit
+		};
+
+		pr_info("Default max number of memory slots: %i\n", max_mem_slots);
+
+		vm_mem_slots = vm_check_cap(vm, KVM_CAP_NR_MEMSLOTS);
+		TEST_ASSERT(vm_mem_slots == max_mem_slots,
+			    "KVM_CAP_NR_MEMSLOTS for a newly created VM: %d"
+			    " should equal the global limit: %d",
+			    vm_mem_slots, max_mem_slots);
+
+		pr_info("Limiting the number of memory slots to: %i\n", limit);
+
+		vm_enable_cap(vm, &cap);
+		vm_mem_slots = vm_check_cap(vm, KVM_CAP_NR_MEMSLOTS);
+		TEST_ASSERT(vm_mem_slots == limit,
+			    "KVM_CAP_NR_MEMSLOTS was limited to: %d"
+			    " but is currently set to %d instead",
+			    limit, vm_mem_slots);
+
+		max_mem_slots = vm_mem_slots;
+	}
+
 	mem_reg_npages = vm_calc_num_guest_pages(VM_MODE_DEFAULT, MEM_REGION_SIZE);
 
 	/* Check it can be added memory slots up to the maximum allowed */
@@ -394,7 +422,10 @@ int main(int argc, char *argv[])
 	test_zero_memory_regions();
 #endif
 
-	test_add_max_memory_regions();
+	test_add_max_memory_regions(0);
+
+	test_add_max_memory_regions(10);
+
 
 #ifdef __x86_64__
 	if (argc > 1)
-- 
2.29.2



* [PATCH 5/5] selftests: kvm: Raise the default timeout to 120 seconds
From: Vitaly Kuznetsov @ 2021-01-27 17:57 UTC
  To: kvm, Paolo Bonzini
  Cc: Sean Christopherson, Wanpeng Li, Jim Mattson, Igor Mammedov,
	Maciej S . Szmigiero

With the updated maximum number of user memslots (32k),
set_memory_region_test sometimes takes longer than the default 45 seconds
to finish. Raise the value to an arbitrary 120 seconds.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
---
 tools/testing/selftests/kvm/settings | 1 +
 1 file changed, 1 insertion(+)
 create mode 100644 tools/testing/selftests/kvm/settings

diff --git a/tools/testing/selftests/kvm/settings b/tools/testing/selftests/kvm/settings
new file mode 100644
index 000000000000..6091b45d226b
--- /dev/null
+++ b/tools/testing/selftests/kvm/settings
@@ -0,0 +1 @@
+timeout=120
-- 
2.29.2



* Re: [PATCH 1/5] KVM: Make the maximum number of user memslots a per-VM thing
From: Maciej S. Szmigiero @ 2021-01-27 22:27 UTC
  To: Vitaly Kuznetsov
  Cc: Sean Christopherson, Paolo Bonzini, kvm, Wanpeng Li, Jim Mattson,
	Igor Mammedov

On 27.01.2021 18:57, Vitaly Kuznetsov wrote:
> Limiting the maximum number of user memslots globally can be undesirable as
> different VMs may have different needs. Generally, a relatively small
> number should suffice, and a VMM may want to enforce the limit so a VM
> won't accidentally consume too much memory. On the other hand, the number
> of required memslots can depend on the number of assigned vCPUs, e.g.
> Hyper-V SynIC may require up to two additional slots per vCPU.
> 
> Prepare to limit the maximum number of user memslots per-VM. No real
> functional change in this patch as the limit is still hard-coded to
> KVM_USER_MEM_SLOTS.
> 
> Suggested-by: Sean Christopherson <seanjc@google.com>
> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
> ---

Perhaps I didn't understand the idea clearly, but I thought it was
to protect the kernel from a rogue userspace VMM allocating many
memslots and so consuming a lot of memory in the kernel?

But then what's the difference between allocating 32k memslots for
one VM and allocating 509 slots for 64 VMs?

A guest can't add a memslot on its own, only the host software
(like QEMU) can, right?

Thanks,
Maciej


* Re: [PATCH 1/5] KVM: Make the maximum number of user memslots a per-VM thing
From: Vitaly Kuznetsov @ 2021-01-28  8:45 UTC
  To: Maciej S. Szmigiero
  Cc: Sean Christopherson, Paolo Bonzini, kvm, Wanpeng Li, Jim Mattson,
	Igor Mammedov

"Maciej S. Szmigiero" <maciej.szmigiero@oracle.com> writes:

> On 27.01.2021 18:57, Vitaly Kuznetsov wrote:
>> Limiting the maximum number of user memslots globally can be undesirable as
>> different VMs may have different needs. Generally, a relatively small
>> number should suffice, and a VMM may want to enforce the limit so a VM
>> won't accidentally consume too much memory. On the other hand, the number
>> of required memslots can depend on the number of assigned vCPUs, e.g.
>> Hyper-V SynIC may require up to two additional slots per vCPU.
>> 
>> Prepare to limit the maximum number of user memslots per-VM. No real
>> functional change in this patch as the limit is still hard-coded to
>> KVM_USER_MEM_SLOTS.
>> 
>> Suggested-by: Sean Christopherson <seanjc@google.com>
>> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
>> ---
>
> Perhaps I didn't understand the idea clearly, but I thought it was
> to protect the kernel from a rogue userspace VMM allocating many
> memslots and so consuming a lot of memory in the kernel?
>
> But then what's the difference between allocating 32k memslots for
> one VM and allocating 509 slots for 64 VMs?
>

It was Sean's idea :-) Initially, I had the exact same thoughts but now
I agree with

"I see it as an easy way to mitigate the damage.  E.g. if a containers use case
is spinning up hundreds of VMs and something goes awry in the config, it would
be the difference between consuming tens of MBs and hundreds of MBs.  Cgroup
limits should also be in play, but defense in depth and all that. "

https://lore.kernel.org/kvm/YAcU6swvNkpPffE7@google.com/

That said, it is not really a security feature; the VMM still stays in
control.

> A guest can't add a memslot on its own, only the host software
> (like QEMU) can, right?
>

VMMs (especially big ones like QEMU) are complex and e.g. each driver
can cause memory regions (-> memslots in KVM) to change. With this
feature it becomes possible to set a limit upfront (based on VM
configuration) so it'll be more obvious when it's hit.
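
(A hypothetical illustration, not from the series: a VMM that knows its
own configuration could derive the limit with something like

	/* Illustrative sizing rule only: fixed slots for RAM/ROM/devices
	 * plus two potential SynIC overlay slots per vCPU. */
	static unsigned int memslot_limit(unsigned int nr_fixed_slots,
					  unsigned int nr_vcpus)
	{
		return nr_fixed_slots + 2 * nr_vcpus;
	}

so anything that tries to allocate slots beyond that fails loudly instead
of growing silently.)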

-- 
Vitaly



* Re: [PATCH 1/5] KVM: Make the maximum number of user memslots a per-VM thing
From: Maciej S. Szmigiero @ 2021-01-28 10:48 UTC
  To: Vitaly Kuznetsov
  Cc: Sean Christopherson, Paolo Bonzini, kvm, Wanpeng Li, Jim Mattson,
	Igor Mammedov

On 28.01.2021 09:45, Vitaly Kuznetsov wrote:
> "Maciej S. Szmigiero" <maciej.szmigiero@oracle.com> writes:
> 
>> On 27.01.2021 18:57, Vitaly Kuznetsov wrote:
>>> Limiting the maximum number of user memslots globally can be undesirable as
>>> different VMs may have different needs. Generally, a relatively small
>>> number should suffice, and a VMM may want to enforce the limit so a VM
>>> won't accidentally consume too much memory. On the other hand, the number
>>> of required memslots can depend on the number of assigned vCPUs, e.g.
>>> Hyper-V SynIC may require up to two additional slots per vCPU.
>>>
>>> Prepare to limit the maximum number of user memslots per-VM. No real
>>> functional change in this patch as the limit is still hard-coded to
>>> KVM_USER_MEM_SLOTS.
>>>
>>> Suggested-by: Sean Christopherson <seanjc@google.com>
>>> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
>>> ---
>>
>> Perhaps I didn't understand the idea clearly, but I thought it was
>> to protect the kernel from a rogue userspace VMM allocating many
>> memslots and so consuming a lot of memory in the kernel?
>>
>> But then what's the difference between allocating 32k memslots for
>> one VM and allocating 509 slots for 64 VMs?
>>
> 
> It was Sean's idea :-) Initially, I had the exact same thoughts but now
> I agree with
> 
> "I see it as an easy way to mitigate the damage.  E.g. if a containers use case
> is spinning up hundreds of VMs and something goes awry in the config, it would
> be the difference between consuming tens of MBs and hundreds of MBs.  Cgroup
> limits should also be in play, but defense in depth and all that. "
> 
> https://lore.kernel.org/kvm/YAcU6swvNkpPffE7@google.com/
> 
> That said, it is not really a security feature; the VMM still stays in
> control.
> 
>> A guest can't add a memslot on its own, only the host software
>> (like QEMU) can, right?
>>
> 
> VMMs (especially big ones like QEMU) are complex and e.g. each driver
> can cause memory regions (-> memslots in KVM) to change. With this
> feature it becomes possible to set a limit upfront (based on VM
> configuration) so it'll be more obvious when it's hit.
> 

I see: it's a kind of "big switch", so individual VMMs don't have to be
modified or audited.
Thanks for the explanation.

Maciej


* Re: [PATCH 1/5] KVM: Make the maximum number of user memslots a per-VM thing
From: Paolo Bonzini @ 2021-01-28 13:01 UTC
  To: Maciej S. Szmigiero, Vitaly Kuznetsov
  Cc: Sean Christopherson, kvm, Wanpeng Li, Jim Mattson, Igor Mammedov

On 28/01/21 11:48, Maciej S. Szmigiero wrote:
>>
>> VMMs (especially big ones like QEMU) are complex and e.g. each driver
>> can cause memory regions (-> memslots in KVM) to change. With this
>> feature it becomes possible to set a limit upfront (based on VM
>> configuration) so it'll be more obvious when it's hit.
>>
> 
> I see: it's a kind of "big switch", so individual VMMs don't have to be
> modified or audited.
> Thanks for the explanation.

Not really, it's the opposite: the VMM needs to opt into a smaller 
number of memslots.

I don't know... I understand it would be defense in depth; however, 
between dynamic allocation of memslots arrays and GFP_KERNEL_ACCOUNT, it 
seems to be a bit of a solution in search of a problem.  For now I 
applied patches 1-2-5.

Thanks,

Paolo



* Re: [PATCH 1/5] KVM: Make the maximum number of user memslots a per-VM thing
From: Vitaly Kuznetsov @ 2021-01-28 15:26 UTC
  To: Paolo Bonzini
  Cc: Sean Christopherson, kvm, Wanpeng Li, Jim Mattson, Igor Mammedov,
	Maciej S. Szmigiero

Paolo Bonzini <pbonzini@redhat.com> writes:

> On 28/01/21 11:48, Maciej S. Szmigiero wrote:
>>>
>>> VMMs (especially big ones like QEMU) are complex and e.g. each driver
>>> can cause memory regions (-> memslots in KVM) to change. With this
>>> feature it becomes possible to set a limit upfront (based on VM
>>> configuration) so it'll be more obvious when it's hit.
>>>
>> 
>> I see: it's a kind of "big switch", so individual VMMs don't have to be
>> modified or audited.
>> Thanks for the explanation.
>
> Not really, it's the opposite: the VMM needs to opt into a smaller 
> number of memslots.
>
> I don't know... I understand it would be defense in depth; however, 
> between dynamic allocation of memslots arrays and GFP_KERNEL_ACCOUNT, it 
> seems to be a bit of a solution in search of a problem.  For now I 
> applied patches 1-2-5.

An alternative with a new module parameter was also suggested, which
would make it possible to protect against buggy/malicious VMMs but,
again, the attack is not any different from just creating many
VMs. The module parameter will most likely end up being unused 99.9% of
the time (if not 100%). I don't have a strong opinion either way.

-- 
Vitaly



* Re: [PATCH 1/5] KVM: Make the maximum number of user memslots a per-VM thing
From: Sean Christopherson @ 2021-01-28 17:31 UTC
  To: Paolo Bonzini
  Cc: Maciej S. Szmigiero, Vitaly Kuznetsov, kvm, Wanpeng Li,
	Jim Mattson, Igor Mammedov

On Thu, Jan 28, 2021, Paolo Bonzini wrote:
> On 28/01/21 11:48, Maciej S. Szmigiero wrote:
> > > 
> > > VMMs (especially big ones like QEMU) are complex and e.g. each driver
> > > can cause memory regions (-> memslots in KVM) to change. With this
> > > feature it becomes possible to set a limit upfront (based on VM
> > > configuration) so it'll be more obvious when it's hit.
> > > 
> > 
> > I see: it's a kind of "big switch", so individual VMMs don't have to be
> > modified or audited.
> > Thanks for the explanation.
> 
> Not really, it's the opposite: the VMM needs to opt into a smaller number of
> memslots.

Yep, my thinking is that it would be similar to using seccomp to prevent doing
something that should never happen.

> I don't know... I understand it would be defense in depth; however, between
> dynamic allocation of memslots arrays and GFP_KERNEL_ACCOUNT, it seems to be
> a bit of a solution in search of a problem.

I'm a-ok waiting to add a capability until there's a VMM that actually wants to
use it.

> For now I applied patches 1-2-5.

Why keep patch 1?  Simply raising the limit in patch 2 shouldn't require per-VM
tracking.  The 'memslots_max' name is also ambiguous.  In my head, the new
capability would restrict the _number_ of memslots, but as implemented in
patches 1+3 it restricts the max _ID_ of a memslot.  Limiting the max ID also
effectively limits the max number of memslots, but that approach confuses
things since the IDs themselves do not affect memory consumption.  Limiting the
IDs bleeds the old implementation details into the ABI.
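
(Concretely: under an id-based limit of e.g. 512, a VM with a single
memslot whose id happens to be 1000 would be rejected, even though it
consumes only one slot's worth of memory.)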


* Re: [PATCH 1/5] KVM: Make the maximum number of user memslots a per-VM thing
From: Paolo Bonzini @ 2021-01-28 17:59 UTC
  To: Sean Christopherson
  Cc: Maciej S. Szmigiero, Vitaly Kuznetsov, kvm, Wanpeng Li,
	Jim Mattson, Igor Mammedov

On 28/01/21 18:31, Sean Christopherson wrote:
> 
>> For now I applied patches 1-2-5.
> Why keep patch 1?  Simply raising the limit in patch 2 shouldn't require per-VM
> tracking.

Well, right you are.

Paolo



* Re: [PATCH 1/5] KVM: Make the maximum number of user memslots a per-VM thing
From: Vitaly Kuznetsov @ 2021-02-08 14:20 UTC
  To: Paolo Bonzini
  Cc: Maciej S. Szmigiero, kvm, Wanpeng Li, Jim Mattson, Igor Mammedov,
	Sean Christopherson

Paolo Bonzini <pbonzini@redhat.com> writes:

> On 28/01/21 18:31, Sean Christopherson wrote:
>> 
>>> For now I applied patches 1-2-5.
>> Why keep patch 1?  Simply raising the limit in patch 2 shouldn't require per-VM
>> tracking.
>
> Well, right you are.
>

... so are we going to raise it after all? I don't see anything in
kvm/queue, so I just wanted to double-check.

Thanks!

-- 
Vitaly



* Re: [PATCH 1/5] KVM: Make the maximum number of user memslots a per-VM thing
From: Paolo Bonzini @ 2021-02-08 15:02 UTC
  To: Vitaly Kuznetsov
  Cc: Maciej S. Szmigiero, kvm, Wanpeng Li, Jim Mattson, Igor Mammedov,
	Sean Christopherson

On 08/02/21 15:20, Vitaly Kuznetsov wrote:
> Paolo Bonzini <pbonzini@redhat.com> writes:
> 
>> On 28/01/21 18:31, Sean Christopherson wrote:
>>>
>>>> For now I applied patches 1-2-5.
>>> Why keep patch 1?  Simply raising the limit in patch 2 shouldn't require per-VM
>>> tracking.
>>
>> Well, right you are.
>>
> 
> ... so are we going to raise it after all? I don't see anything in
> kvm/queue, so I just wanted to double-check.

Added back, thanks.

Paolo


