linux-kernel.vger.kernel.org archive mirror
* [PATCH] vhost: support upto 509 memory regions
@ 2015-02-13 15:49 Igor Mammedov
  2015-02-17  9:02 ` Michael S. Tsirkin
  0 siblings, 1 reply; 16+ messages in thread
From: Igor Mammedov @ 2015-02-13 15:49 UTC (permalink / raw)
  To: linux-kernel; +Cc: mst, kvm, netdev, pbonzini

Since commit
 1d4e7e3 kvm: x86: increase user memory slots to 509

it has been possible to use a larger number of memory
slots, which memory hotplug uses for registering
hotplugged memory.
However, QEMU aborts when it is used with more than ~60
pc-dimm devices and vhost-net, because the host kernel's
vhost module refuses to accept more than 64 memory
regions.

Increasing VHOST_MEMORY_MAX_NREGIONS from 64 to 509
to match KVM_USER_MEM_SLOTS fixes the issue for vhost-net.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
---
 drivers/vhost/vhost.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index 2ee2826..ecbd7a4 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -29,7 +29,7 @@
 #include "vhost.h"
 
 enum {
-	VHOST_MEMORY_MAX_NREGIONS = 64,
+	VHOST_MEMORY_MAX_NREGIONS = 509,
 	VHOST_MEMORY_F_LOG = 0x1,
 };
 
-- 
1.8.3.1



* Re: [PATCH] vhost: support upto 509 memory regions
  2015-02-13 15:49 [PATCH] vhost: support upto 509 memory regions Igor Mammedov
@ 2015-02-17  9:02 ` Michael S. Tsirkin
  2015-02-17 10:59   ` Paolo Bonzini
  0 siblings, 1 reply; 16+ messages in thread
From: Michael S. Tsirkin @ 2015-02-17  9:02 UTC (permalink / raw)
  To: Igor Mammedov; +Cc: linux-kernel, kvm, netdev, pbonzini

On Fri, Feb 13, 2015 at 03:49:59PM +0000, Igor Mammedov wrote:
> since commit
>  1d4e7e3 kvm: x86: increase user memory slots to 509
> 
> it became possible to use a bigger amount of memory
> slots, which is used by memory hotplug for
> registering hotplugged memory.
> However QEMU aborts if it's used with more than ~60
> pc-dimm devices and vhost-net since host kernel
> in module vhost-net refuses to accept more than 65
> memory regions.
> 
> Increasing VHOST_MEMORY_MAX_NREGIONS from 65 to 509
> to match KVM_USER_MEM_SLOTS fixes issue for vhost-net.
> 
> Signed-off-by: Igor Mammedov <imammedo@redhat.com>

This scares me a bit: each region is 32 bytes, so we are talking about
a 16K allocation that userspace can trigger.
How does kvm handle this issue?
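
For reference, the arithmetic behind the 16K figure, as a small userspace
sketch against the uapi structs in <linux/vhost.h> (the 509 count is the
value this patch proposes for VHOST_MEMORY_MAX_NREGIONS):

#include <stdio.h>
#include <linux/vhost.h>

int main(void)
{
	/* struct vhost_memory_region is four __u64 fields: 32 bytes */
	size_t per_region = sizeof(struct vhost_memory_region);
	/* the table vhost allocates: struct vhost_memory header + entries */
	size_t table = sizeof(struct vhost_memory) + 509 * per_region;

	/* 8 + 509 * 32 = 16296 bytes, i.e. an order-2 contiguous kmalloc */
	printf("%zu bytes per region, %zu bytes total\n", per_region, table);
	return 0;
}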


> ---
>  drivers/vhost/vhost.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
> index 2ee2826..ecbd7a4 100644
> --- a/drivers/vhost/vhost.c
> +++ b/drivers/vhost/vhost.c
> @@ -29,7 +29,7 @@
>  #include "vhost.h"
>  
>  enum {
> -	VHOST_MEMORY_MAX_NREGIONS = 64,
> +	VHOST_MEMORY_MAX_NREGIONS = 509,
>  	VHOST_MEMORY_F_LOG = 0x1,
>  };
>  
> -- 
> 1.8.3.1


* Re: [PATCH] vhost: support upto 509 memory regions
  2015-02-17  9:02 ` Michael S. Tsirkin
@ 2015-02-17 10:59   ` Paolo Bonzini
  2015-02-17 12:32     ` Michael S. Tsirkin
  0 siblings, 1 reply; 16+ messages in thread
From: Paolo Bonzini @ 2015-02-17 10:59 UTC (permalink / raw)
  To: Michael S. Tsirkin, Igor Mammedov; +Cc: linux-kernel, kvm, netdev



On 17/02/2015 10:02, Michael S. Tsirkin wrote:
> > Increasing VHOST_MEMORY_MAX_NREGIONS from 65 to 509
> > to match KVM_USER_MEM_SLOTS fixes issue for vhost-net.
> > 
> > Signed-off-by: Igor Mammedov <imammedo@redhat.com>
>
> This scares me a bit: each region is 32byte, we are talking
> a 16K allocation that userspace can trigger.

What's bad with a 16K allocation?

> How does kvm handle this issue?

It doesn't.

Paolo


* Re: [PATCH] vhost: support upto 509 memory regions
  2015-02-17 10:59   ` Paolo Bonzini
@ 2015-02-17 12:32     ` Michael S. Tsirkin
  2015-02-17 13:11       ` Paolo Bonzini
                         ` (2 more replies)
  0 siblings, 3 replies; 16+ messages in thread
From: Michael S. Tsirkin @ 2015-02-17 12:32 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: Igor Mammedov, linux-kernel, kvm, netdev

On Tue, Feb 17, 2015 at 11:59:48AM +0100, Paolo Bonzini wrote:
> 
> 
> On 17/02/2015 10:02, Michael S. Tsirkin wrote:
> > > Increasing VHOST_MEMORY_MAX_NREGIONS from 65 to 509
> > > to match KVM_USER_MEM_SLOTS fixes issue for vhost-net.
> > > 
> > > Signed-off-by: Igor Mammedov <imammedo@redhat.com>
> >
> > This scares me a bit: each region is 32byte, we are talking
> > a 16K allocation that userspace can trigger.
> 
> What's bad with a 16K allocation?

It fails when memory is fragmented.

> > How does kvm handle this issue?
> 
> It doesn't.
> 
> Paolo

I'm guessing kvm doesn't do memory scans on data path,
vhost does.

QEMU is just doing things that the kernel didn't expect it to need.

Instead, I suggest reducing the number of GPA<->HVA mappings:

if you have GPAs 1, 5, 7,
map them at HVAs 11, 15, 17;
then you can have 1 slot: 1->11.

To avoid libc reusing the memory holes, reserve them with MAP_NORESERVE
or something like this.
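
For illustration, a minimal userspace sketch of that reservation scheme
(anonymous mappings assumed for simplicity; QEMU's real RAM backends may be
file- or hugepage-backed, and the helper names here are made up):

#include <sys/mman.h>
#include <stddef.h>

/*
 * Reserve one contiguous HVA window covering the whole guest physical
 * range, holes included.  PROT_NONE + MAP_NORESERVE consumes neither RAM
 * nor swap, and keeps libc from handing the holes out to other mappings.
 */
static void *reserve_guest_window(size_t guest_as_size)
{
	return mmap(NULL, guest_as_size, PROT_NONE,
		    MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
}

/* Plug actual RAM into the window at a fixed GPA offset, replacing the hole. */
static void *plug_ram(void *window, size_t gpa, size_t size)
{
	return mmap((char *)window + gpa, size, PROT_READ | PROT_WRITE,
		    MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
}

With that layout a single region (guest_phys_addr = 0, userspace_addr =
window, memory_size = guest_as_size) covers everything, and GPA->HVA is a
fixed offset.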

We can discuss smarter lookup algorithms but I'd rather
userspace didn't do things that we then have to
work around in kernel.


-- 
MST


* Re: [PATCH] vhost: support upto 509 memory regions
  2015-02-17 12:32     ` Michael S. Tsirkin
@ 2015-02-17 13:11       ` Paolo Bonzini
  2015-02-17 13:29         ` Michael S. Tsirkin
  2015-02-17 14:44       ` Igor Mammedov
  2015-02-18  0:53       ` Eric Northup
  2 siblings, 1 reply; 16+ messages in thread
From: Paolo Bonzini @ 2015-02-17 13:11 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: Igor Mammedov, linux-kernel, kvm, netdev



On 17/02/2015 13:32, Michael S. Tsirkin wrote:
> On Tue, Feb 17, 2015 at 11:59:48AM +0100, Paolo Bonzini wrote:
>>
>>
>> On 17/02/2015 10:02, Michael S. Tsirkin wrote:
>>>> Increasing VHOST_MEMORY_MAX_NREGIONS from 65 to 509
>>>> to match KVM_USER_MEM_SLOTS fixes issue for vhost-net.
>>>>
>>>> Signed-off-by: Igor Mammedov <imammedo@redhat.com>
>>>
>>> This scares me a bit: each region is 32byte, we are talking
>>> a 16K allocation that userspace can trigger.
>>
>> What's bad with a 16K allocation?
> 
> It fails when memory is fragmented.

If memory is _that_ fragmented I think you have much bigger problems
than vhost.

> I'm guessing kvm doesn't do memory scans on data path, vhost does.

It does for MMIO memory-to-memory writes, but that's not a particularly
fast path.

KVM doesn't access the memory map on fast paths, but QEMU does, so I
don't think it's beyond the expectations of the kernel.  For example you
can use a radix tree (not lib/radix-tree.c unfortunately), and cache
GPA->HVA translations if it turns out that lookup has become a hot path.

The addressing space of x86 is in practice 44 bits or fewer, and each
slot will typically be at least 1 GiB, so you only have 14 bits to
dispatch on.   It's probably possible to only have two or three levels
in the radix tree in the common case, and beat the linear scan real quick.

The radix tree can be tuned to use order-0 allocations, and then your
worries about fragmentation go away too.
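
For concreteness, the arithmetic behind the "two or three levels" estimate
(the node widths below are an assumption, not a measurement):

	44 bits of GPA space - 30 bits (1 GiB slot granularity) = 14 index bits
	128-way (2^7) nodes: ceil(14 / 7) = 2 levels
	 64-way (2^6) nodes: ceil(14 / 6) = 3 levels

so a lookup costs two or three dependent loads instead of a scan over up to
509 regions.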

Paolo


* Re: [PATCH] vhost: support upto 509 memory regions
  2015-02-17 13:11       ` Paolo Bonzini
@ 2015-02-17 13:29         ` Michael S. Tsirkin
  2015-02-17 14:11           ` Paolo Bonzini
  2015-02-17 15:02           ` Igor Mammedov
  0 siblings, 2 replies; 16+ messages in thread
From: Michael S. Tsirkin @ 2015-02-17 13:29 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: Igor Mammedov, linux-kernel, kvm, netdev

On Tue, Feb 17, 2015 at 02:11:37PM +0100, Paolo Bonzini wrote:
> 
> 
> On 17/02/2015 13:32, Michael S. Tsirkin wrote:
> > On Tue, Feb 17, 2015 at 11:59:48AM +0100, Paolo Bonzini wrote:
> >>
> >>
> >> On 17/02/2015 10:02, Michael S. Tsirkin wrote:
> >>>> Increasing VHOST_MEMORY_MAX_NREGIONS from 65 to 509
> >>>> to match KVM_USER_MEM_SLOTS fixes issue for vhost-net.
> >>>>
> >>>> Signed-off-by: Igor Mammedov <imammedo@redhat.com>
> >>>
> >>> This scares me a bit: each region is 32byte, we are talking
> >>> a 16K allocation that userspace can trigger.
> >>
> >> What's bad with a 16K allocation?
> > 
> > It fails when memory is fragmented.
> 
> If memory is _that_ fragmented I think you have much bigger problems
> than vhost.
> 
> > I'm guessing kvm doesn't do memory scans on data path, vhost does.
> 
> It does for MMIO memory-to-memory writes, but that's not a particularly
> fast path.
> 
> KVM doesn't access the memory map on fast paths, but QEMU does, so I
> don't think it's beyond the expectations of the kernel.

QEMU has an elaborate data structure to deal with that.

>  For example you
> can use a radix tree (not lib/radix-tree.c unfortunately), and cache
> GVA->HPA translations if it turns out that lookup has become a hot path.

All vhost lookups are hot path.

> The addressing space of x86 is in practice 44 bits or fewer, and each
> slot will typically be at least 1 GiB, so you only have 14 bits to
> dispatch on.   It's probably possible to only have two or three levels
> in the radix tree in the common case, and beat the linear scan real quick.

Not if there are about 6 regions, I think.

> The radix tree can be tuned to use order-0 allocations, and then your
> worries about fragmentation go away too.
> 
> Paolo

Increasing the number might be reasonable for workloads such as nested
virt. But depending on this in userspace when you don't have to is not a
good idea IMHO.


-- 
MST


* Re: [PATCH] vhost: support upto 509 memory regions
  2015-02-17 13:29         ` Michael S. Tsirkin
@ 2015-02-17 14:11           ` Paolo Bonzini
  2015-02-17 15:02           ` Igor Mammedov
  1 sibling, 0 replies; 16+ messages in thread
From: Paolo Bonzini @ 2015-02-17 14:11 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: Igor Mammedov, linux-kernel, kvm, netdev



On 17/02/2015 14:29, Michael S. Tsirkin wrote:
> On Tue, Feb 17, 2015 at 02:11:37PM +0100, Paolo Bonzini wrote:
>>
>>
>> On 17/02/2015 13:32, Michael S. Tsirkin wrote:
>>> On Tue, Feb 17, 2015 at 11:59:48AM +0100, Paolo Bonzini wrote:
>>>>
>>>>
>>>> On 17/02/2015 10:02, Michael S. Tsirkin wrote:
>>>>>> Increasing VHOST_MEMORY_MAX_NREGIONS from 65 to 509
>>>>>> to match KVM_USER_MEM_SLOTS fixes issue for vhost-net.
>>>>>>
>>>>>> Signed-off-by: Igor Mammedov <imammedo@redhat.com>
>>>>>
>>>>> This scares me a bit: each region is 32byte, we are talking
>>>>> a 16K allocation that userspace can trigger.
>>>>
>>>> What's bad with a 16K allocation?
>>>
>>> It fails when memory is fragmented.
>>
>> If memory is _that_ fragmented I think you have much bigger problems
>> than vhost.
>>
>>> I'm guessing kvm doesn't do memory scans on data path, vhost does.
>>
>> It does for MMIO memory-to-memory writes, but that's not a particularly
>> fast path.
>>
>> KVM doesn't access the memory map on fast paths, but QEMU does, so I
>> don't think it's beyond the expectations of the kernel.
> 
> QEMU has an elaborate data structure to deal with that.

It's not elaborate, it's just a radix tree.  The complicated part is
building the flat view and computing what changed in the memory map, but
none of this would have to be done in vhost.  vhost gets the flat memory
map in VHOST_SET_MEM_TABLE.

A lookup is basically:

#define LOG_TRIE_WIDTH      (PAGE_SHIFT - LOG_BITS_PER_LONG)

/* each interior node packs 2^LOG_TRIE_WIDTH tagged child values */
struct memmap_trie_node {
	unsigned long child[1 << LOG_TRIE_WIDTH];
};

/* trie_root is stored tagged (pointer + 1), like any interior node */
unsigned long node_val = (unsigned long) trie_root;

/* vhost_address_space_bits: log of highest valid address in the memory map */
if (addr & (~0ULL << vhost_address_space_bits))
	return NULL;

addr <<= 64 - vhost_address_space_bits;
do {
	struct memmap_trie_node *node;
	unsigned i = addr >> (64 - LOG_TRIE_WIDTH);

	addr <<= LOG_TRIE_WIDTH;
	node = (struct memmap_trie_node *) (node_val - 1);
	node_val = node->child[i];
} while (node_val & 1);
return (struct vhost_mem_slot *) node_val;

bit 0: 0 if leaf

if leaf:
	bits 1-63: pointer to mem table entry
if not leaf:
	bits 1-63: pointer to next level

>>  For example you
>> can use a radix tree (not lib/radix-tree.c unfortunately), and cache
>> GVA->HPA translations if it turns out that lookup has become a hot path.
> 
> All vhost lookups are hot path.

What % is lookup vs the networking stuff?  Also, adding a simple MRU
cache might make lookups less prominent in the profile.

>> The addressing space of x86 is in practice 44 bits or fewer, and each
>> slot will typically be at least 1 GiB, so you only have 14 bits to
>> dispatch on.   It's probably possible to only have two or three levels
>> in the radix tree in the common case, and beat the linear scan real quick.
> 
> Not if there are about 6 regions, I think.

It depends on many factors including branch prediction, MRU cache hits, etc.

> Increasing the number might be reasonable for workloads such as nested
> virt.

Why does nested virt matter?

Paolo


* Re: [PATCH] vhost: support upto 509 memory regions
  2015-02-17 12:32     ` Michael S. Tsirkin
  2015-02-17 13:11       ` Paolo Bonzini
@ 2015-02-17 14:44       ` Igor Mammedov
  2015-02-17 14:45         ` Paolo Bonzini
  2015-02-18  0:53       ` Eric Northup
  2 siblings, 1 reply; 16+ messages in thread
From: Igor Mammedov @ 2015-02-17 14:44 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: Paolo Bonzini, linux-kernel, kvm, netdev

On Tue, 17 Feb 2015 13:32:12 +0100
"Michael S. Tsirkin" <mst@redhat.com> wrote:

> On Tue, Feb 17, 2015 at 11:59:48AM +0100, Paolo Bonzini wrote:
> > 
> > 
> > On 17/02/2015 10:02, Michael S. Tsirkin wrote:
> > > > Increasing VHOST_MEMORY_MAX_NREGIONS from 65 to 509
> > > > to match KVM_USER_MEM_SLOTS fixes issue for vhost-net.
> > > > 
> > > > Signed-off-by: Igor Mammedov <imammedo@redhat.com>
> > >
> > > This scares me a bit: each region is 32byte, we are talking
> > > a 16K allocation that userspace can trigger.
> > 
> > What's bad with a 16K allocation?
> 
> It fails when memory is fragmented.
> 
> > > How does kvm handle this issue?
> > 
> > It doesn't.
> > 
> > Paolo
> 
> I'm guessing kvm doesn't do memory scans on data path,
> vhost does.
> 
> qemu is just doing things that kernel didn't expect it to need.
> 
> Instead, I suggest reducing number of GPA<->HVA mappings:
> 
> you have GPA 1,5,7
> map them at HVA 11,15,17
> then you can have 1 slot: 1->11
> 
> To avoid libc reusing the memory holes, reserve them with MAP_NORESERVE
> or something like this.
Let's suppose that we add an API to reserve the whole memory hotplug region
with MAP_NORESERVE and pass it as a memslot to KVM.

Then what will happen when the guest accesses a region that isn't really mapped?
This memslot will also be passed to vhost as a region; is that really ok?
I don't know what else it might break.

As an alternative:
we can filter out hotplugged memory and vhost will continue to work with
only the initial memory.
So the question is: do we have to tell vhost about hotplugged memory?

> 
> We can discuss smarter lookup algorithms but I'd rather
> userspace didn't do things that we then have to
> work around in kernel.
> 
> 



* Re: [PATCH] vhost: support upto 509 memory regions
  2015-02-17 14:44       ` Igor Mammedov
@ 2015-02-17 14:45         ` Paolo Bonzini
  0 siblings, 0 replies; 16+ messages in thread
From: Paolo Bonzini @ 2015-02-17 14:45 UTC (permalink / raw)
  To: Igor Mammedov, Michael S. Tsirkin; +Cc: linux-kernel, kvm, netdev



On 17/02/2015 15:44, Igor Mammedov wrote:
> As alternative:
> we can filter out hotplugged memory and vhost will continue to work with
> only initial memory.
> So question is id we have to tell vhost about hotplugged memory?

Yes, I think so.

Paolo


* Re: [PATCH] vhost: support upto 509 memory regions
  2015-02-17 13:29         ` Michael S. Tsirkin
  2015-02-17 14:11           ` Paolo Bonzini
@ 2015-02-17 15:02           ` Igor Mammedov
  2015-02-17 17:09             ` Paolo Bonzini
  1 sibling, 1 reply; 16+ messages in thread
From: Igor Mammedov @ 2015-02-17 15:02 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: Paolo Bonzini, linux-kernel, kvm, netdev

On Tue, 17 Feb 2015 14:29:31 +0100
"Michael S. Tsirkin" <mst@redhat.com> wrote:

> On Tue, Feb 17, 2015 at 02:11:37PM +0100, Paolo Bonzini wrote:
> > 
> > 
> > On 17/02/2015 13:32, Michael S. Tsirkin wrote:
> > > On Tue, Feb 17, 2015 at 11:59:48AM +0100, Paolo Bonzini wrote:
> > >>
> > >>
> > >> On 17/02/2015 10:02, Michael S. Tsirkin wrote:
> > >>>> Increasing VHOST_MEMORY_MAX_NREGIONS from 65 to 509
> > >>>> to match KVM_USER_MEM_SLOTS fixes issue for vhost-net.
> > >>>>
> > >>>> Signed-off-by: Igor Mammedov <imammedo@redhat.com>
> > >>>
> > >>> This scares me a bit: each region is 32byte, we are talking
> > >>> a 16K allocation that userspace can trigger.
> > >>
> > >> What's bad with a 16K allocation?
> > > 
> > > It fails when memory is fragmented.
> > 
> > If memory is _that_ fragmented I think you have much bigger problems
> > than vhost.
> > 
> > > I'm guessing kvm doesn't do memory scans on data path, vhost does.
> > 
> > It does for MMIO memory-to-memory writes, but that's not a particularly
> > fast path.
> > 
> > KVM doesn't access the memory map on fast paths, but QEMU does, so I
> > don't think it's beyond the expectations of the kernel.
> 
> QEMU has an elaborate data structure to deal with that.
> 
> >  For example you
> > can use a radix tree (not lib/radix-tree.c unfortunately), and cache
> > GVA->HPA translations if it turns out that lookup has become a hot path.
> 
> All vhost lookups are hot path.
> 
> > The addressing space of x86 is in practice 44 bits or fewer, and each
> > slot will typically be at least 1 GiB, so you only have 14 bits to
> > dispatch on.   It's probably possible to only have two or three levels
> > in the radix tree in the common case, and beat the linear scan real quick.
> 
> Not if there are about 6 regions, I think.
When memslots were increased to 509 and their lookup was replaced with a
binary search, the results were on par with the linear search for a default
13-memslot VM.

Adding an LRU cache helped to shave ~40% of the cycles for sequential lookup
workloads.
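
(For reference, the lookup shape being compared here is a binary search over
regions kept sorted by base guest-physical address — a sketch using the vhost
uapi structs; the sorted, non-overlapping array is an assumption of the sketch:)

#include <linux/vhost.h>
#include <stddef.h>

/* regions[] assumed sorted by guest_phys_addr and non-overlapping */
static const struct vhost_memory_region *
find_region(const struct vhost_memory_region *regions, int n, __u64 gpa)
{
	int lo = 0, hi = n - 1;

	while (lo <= hi) {
		int mid = lo + (hi - lo) / 2;
		const struct vhost_memory_region *r = &regions[mid];

		if (gpa < r->guest_phys_addr)
			hi = mid - 1;
		else if (gpa >= r->guest_phys_addr + r->memory_size)
			lo = mid + 1;
		else
			return r;	/* gpa falls inside this region */
	}
	return NULL;
}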

> 
> > The radix tree can be tuned to use order-0 allocations, and then your
> > worries about fragmentation go away too.
> > 
> > Paolo
> 
> Increasing the number might be reasonable for workloads such as nested
> virt. But depending on this in userspace when you don't have to is not a
> good idea IMHO.
> 
> 



* Re: [PATCH] vhost: support upto 509 memory regions
  2015-02-17 15:02           ` Igor Mammedov
@ 2015-02-17 17:09             ` Paolo Bonzini
  0 siblings, 0 replies; 16+ messages in thread
From: Paolo Bonzini @ 2015-02-17 17:09 UTC (permalink / raw)
  To: Igor Mammedov, Michael S. Tsirkin; +Cc: linux-kernel, kvm, netdev



On 17/02/2015 16:02, Igor Mammedov wrote:
>> > 
>> > Not if there are about 6 regions, I think.
> When memslots where increased to 509 and look up of them was replaced on
> binary search results were on par with linear search for a default 13 memslots VM.
> 
> Adding LRU

You mean MRU. :)

> cache helped to shave ~40% of cycles for sequential lookup workloads.

It's a bit different for vhost because you can have up to four "things"
being looked up at the same time:

- the s/g list that will end up in the skb

- the avail/used ring

- the virtio buffers

- the virtio indirect buffers

So you probably need multiple MRU caches.  But yes, MRU can help a lot.
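
A sketch of what per-stream MRU caching could look like in front of whatever
slow lookup is chosen (the enum, struct and function names here are
hypothetical, not existing vhost code):

#include <linux/vhost.h>

/* one remembered region index per lookup stream */
enum lookup_stream { STREAM_SG, STREAM_RING, STREAM_BUF, STREAM_INDIRECT, STREAM_NR };

struct mru_cache {
	int last[STREAM_NR];		/* index into mem->regions[], or -1 */
};

static const struct vhost_memory_region *
find_region_slow(const struct vhost_memory *mem, __u64 gpa, int *idx)
{
	int i;

	for (i = 0; i < mem->nregions; i++) {
		const struct vhost_memory_region *r = &mem->regions[i];

		/* unsigned compare also rejects gpa below the region base */
		if (gpa - r->guest_phys_addr < r->memory_size) {
			*idx = i;
			return r;
		}
	}
	return NULL;
}

static const struct vhost_memory_region *
find_region(struct mru_cache *mru, const struct vhost_memory *mem,
	    __u64 gpa, enum lookup_stream s)
{
	int i = mru->last[s];

	if (i >= 0 && i < mem->nregions) {
		const struct vhost_memory_region *r = &mem->regions[i];

		if (gpa - r->guest_phys_addr < r->memory_size)
			return r;	/* MRU hit: no scan at all */
	}
	return find_region_slow(mem, gpa, &mru->last[s]);
}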

Paolo


* Re: [PATCH] vhost: support upto 509 memory regions
  2015-02-17 12:32     ` Michael S. Tsirkin
  2015-02-17 13:11       ` Paolo Bonzini
  2015-02-17 14:44       ` Igor Mammedov
@ 2015-02-18  0:53       ` Eric Northup
  2015-02-18  4:27         ` Michael S. Tsirkin
  2 siblings, 1 reply; 16+ messages in thread
From: Eric Northup @ 2015-02-18  0:53 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Paolo Bonzini, Igor Mammedov, Linux Kernel Mailing List, KVM, netdev

On Tue, Feb 17, 2015 at 4:32 AM, Michael S. Tsirkin <mst@redhat.com> wrote:
> On Tue, Feb 17, 2015 at 11:59:48AM +0100, Paolo Bonzini wrote:
>>
>>
>> On 17/02/2015 10:02, Michael S. Tsirkin wrote:
>> > > Increasing VHOST_MEMORY_MAX_NREGIONS from 65 to 509
>> > > to match KVM_USER_MEM_SLOTS fixes issue for vhost-net.
>> > >
>> > > Signed-off-by: Igor Mammedov <imammedo@redhat.com>
>> >
>> > This scares me a bit: each region is 32byte, we are talking
>> > a 16K allocation that userspace can trigger.
>>
>> What's bad with a 16K allocation?
>
> It fails when memory is fragmented.
>
>> > How does kvm handle this issue?
>>
>> It doesn't.
>>
>> Paolo
>
> I'm guessing kvm doesn't do memory scans on data path,
> vhost does.
>
> qemu is just doing things that kernel didn't expect it to need.
>
> Instead, I suggest reducing number of GPA<->HVA mappings:
>
> you have GPA 1,5,7
> map them at HVA 11,15,17
> then you can have 1 slot: 1->11
>
> To avoid libc reusing the memory holes, reserve them with MAP_NORESERVE
> or something like this.

This works beautifully when host virtual address bits are more
plentiful than guest physical address bits.  Not all architectures
have that property, though.

> We can discuss smarter lookup algorithms but I'd rather
> userspace didn't do things that we then have to
> work around in kernel.
>
>
> --
> MST
> --
> To unsubscribe from this list: send the line "unsubscribe kvm" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html


* Re: [PATCH] vhost: support upto 509 memory regions
  2015-02-18  0:53       ` Eric Northup
@ 2015-02-18  4:27         ` Michael S. Tsirkin
  2015-05-18 16:22           ` Andrey Korolyov
  0 siblings, 1 reply; 16+ messages in thread
From: Michael S. Tsirkin @ 2015-02-18  4:27 UTC (permalink / raw)
  To: Eric Northup
  Cc: Paolo Bonzini, Igor Mammedov, Linux Kernel Mailing List, KVM, netdev

On Tue, Feb 17, 2015 at 04:53:45PM -0800, Eric Northup wrote:
> On Tue, Feb 17, 2015 at 4:32 AM, Michael S. Tsirkin <mst@redhat.com> wrote:
> > On Tue, Feb 17, 2015 at 11:59:48AM +0100, Paolo Bonzini wrote:
> >>
> >>
> >> On 17/02/2015 10:02, Michael S. Tsirkin wrote:
> >> > > Increasing VHOST_MEMORY_MAX_NREGIONS from 65 to 509
> >> > > to match KVM_USER_MEM_SLOTS fixes issue for vhost-net.
> >> > >
> >> > > Signed-off-by: Igor Mammedov <imammedo@redhat.com>
> >> >
> >> > This scares me a bit: each region is 32byte, we are talking
> >> > a 16K allocation that userspace can trigger.
> >>
> >> What's bad with a 16K allocation?
> >
> > It fails when memory is fragmented.
> >
> >> > How does kvm handle this issue?
> >>
> >> It doesn't.
> >>
> >> Paolo
> >
> > I'm guessing kvm doesn't do memory scans on data path,
> > vhost does.
> >
> > qemu is just doing things that kernel didn't expect it to need.
> >
> > Instead, I suggest reducing number of GPA<->HVA mappings:
> >
> > you have GPA 1,5,7
> > map them at HVA 11,15,17
> > then you can have 1 slot: 1->11
> >
> > To avoid libc reusing the memory holes, reserve them with MAP_NORESERVE
> > or something like this.
> 
> This works beautifully when host virtual address bits are more
> plentiful than guest physical address bits.  Not all architectures
> have that property, though.

AFAIK this is pretty much a requirement for both kvm and vhost,
as we require each guest page to also be mapped in qemu memory.

> > We can discuss smarter lookup algorithms but I'd rather
> > userspace didn't do things that we then have to
> > work around in kernel.
> >
> >
> > --
> > MST
> > --
> > To unsubscribe from this list: send the line "unsubscribe kvm" in
> > the body of a message to majordomo@vger.kernel.org
> > More majordomo info at  http://vger.kernel.org/majordomo-info.html


* Re: [PATCH] vhost: support upto 509 memory regions
  2015-02-18  4:27         ` Michael S. Tsirkin
@ 2015-05-18 16:22           ` Andrey Korolyov
  2015-05-18 16:28             ` Michael S. Tsirkin
  2015-05-19 11:50             ` Igor Mammedov
  0 siblings, 2 replies; 16+ messages in thread
From: Andrey Korolyov @ 2015-05-18 16:22 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Eric Northup, Paolo Bonzini, Igor Mammedov,
	Linux Kernel Mailing List, KVM, netdev

On Wed, Feb 18, 2015 at 7:27 AM, Michael S. Tsirkin <mst@redhat.com> wrote:
> On Tue, Feb 17, 2015 at 04:53:45PM -0800, Eric Northup wrote:
>> On Tue, Feb 17, 2015 at 4:32 AM, Michael S. Tsirkin <mst@redhat.com> wrote:
>> > On Tue, Feb 17, 2015 at 11:59:48AM +0100, Paolo Bonzini wrote:
>> >>
>> >>
>> >> On 17/02/2015 10:02, Michael S. Tsirkin wrote:
>> >> > > Increasing VHOST_MEMORY_MAX_NREGIONS from 65 to 509
>> >> > > to match KVM_USER_MEM_SLOTS fixes issue for vhost-net.
>> >> > >
>> >> > > Signed-off-by: Igor Mammedov <imammedo@redhat.com>
>> >> >
>> >> > This scares me a bit: each region is 32byte, we are talking
>> >> > a 16K allocation that userspace can trigger.
>> >>
>> >> What's bad with a 16K allocation?
>> >
>> > It fails when memory is fragmented.
>> >
>> >> > How does kvm handle this issue?
>> >>
>> >> It doesn't.
>> >>
>> >> Paolo
>> >
>> > I'm guessing kvm doesn't do memory scans on data path,
>> > vhost does.
>> >
>> > qemu is just doing things that kernel didn't expect it to need.
>> >
>> > Instead, I suggest reducing number of GPA<->HVA mappings:
>> >
>> > you have GPA 1,5,7
>> > map them at HVA 11,15,17
>> > then you can have 1 slot: 1->11
>> >
>> > To avoid libc reusing the memory holes, reserve them with MAP_NORESERVE
>> > or something like this.
>>
>> This works beautifully when host virtual address bits are more
>> plentiful than guest physical address bits.  Not all architectures
>> have that property, though.
>
> AFAIK this is pretty much a requirement for both kvm and vhost,
> as we require each guest page to also be mapped in qemu memory.
>
>> > We can discuss smarter lookup algorithms but I'd rather
>> > userspace didn't do things that we then have to
>> > work around in kernel.
>> >
>> >
>> > --
>> > MST
>> > --
>> > To unsubscribe from this list: send the line "unsubscribe kvm" in
>> > the body of a message to majordomo@vger.kernel.org
>> > More majordomo info at  http://vger.kernel.org/majordomo-info.html
> --
> To unsubscribe from this list: send the line "unsubscribe netdev" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html


Hello,

any chance of getting the proposed patch into mainline? Though it
seems that most users will not suffer from the relatively low slot-number
ceiling (they can decrease slot 'granularity' for larger VMs and
vice versa), a fine slot size, 256M or even 128M, combined with a large
number of slots can be useful for certain kinds of tasks in
orchestration systems. I've made a backport series of all the seemingly
interesting memslot-related improvements on a 3.10 branch; is it worth
testing it with a straightforward patch like the one above, with
simulated fragmentation of allocations on the host?


* Re: [PATCH] vhost: support upto 509 memory regions
  2015-05-18 16:22           ` Andrey Korolyov
@ 2015-05-18 16:28             ` Michael S. Tsirkin
  2015-05-19 11:50             ` Igor Mammedov
  1 sibling, 0 replies; 16+ messages in thread
From: Michael S. Tsirkin @ 2015-05-18 16:28 UTC (permalink / raw)
  To: Andrey Korolyov
  Cc: Eric Northup, Paolo Bonzini, Igor Mammedov,
	Linux Kernel Mailing List, KVM, netdev

On Mon, May 18, 2015 at 07:22:34PM +0300, Andrey Korolyov wrote:
> On Wed, Feb 18, 2015 at 7:27 AM, Michael S. Tsirkin <mst@redhat.com> wrote:
> > On Tue, Feb 17, 2015 at 04:53:45PM -0800, Eric Northup wrote:
> >> On Tue, Feb 17, 2015 at 4:32 AM, Michael S. Tsirkin <mst@redhat.com> wrote:
> >> > On Tue, Feb 17, 2015 at 11:59:48AM +0100, Paolo Bonzini wrote:
> >> >>
> >> >>
> >> >> On 17/02/2015 10:02, Michael S. Tsirkin wrote:
> >> >> > > Increasing VHOST_MEMORY_MAX_NREGIONS from 65 to 509
> >> >> > > to match KVM_USER_MEM_SLOTS fixes issue for vhost-net.
> >> >> > >
> >> >> > > Signed-off-by: Igor Mammedov <imammedo@redhat.com>
> >> >> >
> >> >> > This scares me a bit: each region is 32byte, we are talking
> >> >> > a 16K allocation that userspace can trigger.
> >> >>
> >> >> What's bad with a 16K allocation?
> >> >
> >> > It fails when memory is fragmented.
> >> >
> >> >> > How does kvm handle this issue?
> >> >>
> >> >> It doesn't.
> >> >>
> >> >> Paolo
> >> >
> >> > I'm guessing kvm doesn't do memory scans on data path,
> >> > vhost does.
> >> >
> >> > qemu is just doing things that kernel didn't expect it to need.
> >> >
> >> > Instead, I suggest reducing number of GPA<->HVA mappings:
> >> >
> >> > you have GPA 1,5,7
> >> > map them at HVA 11,15,17
> >> > then you can have 1 slot: 1->11
> >> >
> >> > To avoid libc reusing the memory holes, reserve them with MAP_NORESERVE
> >> > or something like this.
> >>
> >> This works beautifully when host virtual address bits are more
> >> plentiful than guest physical address bits.  Not all architectures
> >> have that property, though.
> >
> > AFAIK this is pretty much a requirement for both kvm and vhost,
> > as we require each guest page to also be mapped in qemu memory.
> >
> >> > We can discuss smarter lookup algorithms but I'd rather
> >> > userspace didn't do things that we then have to
> >> > work around in kernel.
> >> >
> >> >
> >> > --
> >> > MST
> >> > --
> >> > To unsubscribe from this list: send the line "unsubscribe kvm" in
> >> > the body of a message to majordomo@vger.kernel.org
> >> > More majordomo info at  http://vger.kernel.org/majordomo-info.html
> > --
> > To unsubscribe from this list: send the line "unsubscribe netdev" in
> > the body of a message to majordomo@vger.kernel.org
> > More majordomo info at  http://vger.kernel.org/majordomo-info.html
> 
> 
> Hello,
> 
> any chance of getting the proposed patch in the mainline? Though it
> seems that most users will not suffer from relatively slot number
> ceiling (they can decrease slot 'granularity' for larger VMs and
> vice-versa), fine slot size, 256M or even 128M, with the large number
> of slots can be useful for a certain kind of tasks for an
> orchestration systems. I`ve made a backport series of all seemingly
> interesting memslot-related improvements to a 3.10 branch, is it worth
> to be tested with straighforward patch like one from above, with
> simulated fragmentation of allocations in host?

I'd rather people worked on the 1:1 mapping; it will also
speed up lookups. I'm concerned that if I merge this one, the motivation
for people to work on the right fix will disappear.

-- 
MST


* Re: [PATCH] vhost: support upto 509 memory regions
  2015-05-18 16:22           ` Andrey Korolyov
  2015-05-18 16:28             ` Michael S. Tsirkin
@ 2015-05-19 11:50             ` Igor Mammedov
  1 sibling, 0 replies; 16+ messages in thread
From: Igor Mammedov @ 2015-05-19 11:50 UTC (permalink / raw)
  To: Andrey Korolyov
  Cc: Michael S. Tsirkin, Eric Northup, Paolo Bonzini,
	Linux Kernel Mailing List, KVM, netdev

On Mon, 18 May 2015 19:22:34 +0300
Andrey Korolyov <andrey@xdel.ru> wrote:

> On Wed, Feb 18, 2015 at 7:27 AM, Michael S. Tsirkin <mst@redhat.com> wrote:
> > On Tue, Feb 17, 2015 at 04:53:45PM -0800, Eric Northup wrote:
> >> On Tue, Feb 17, 2015 at 4:32 AM, Michael S. Tsirkin <mst@redhat.com> wrote:
> >> > On Tue, Feb 17, 2015 at 11:59:48AM +0100, Paolo Bonzini wrote:
> >> >>
> >> >>
> >> >> On 17/02/2015 10:02, Michael S. Tsirkin wrote:
> >> >> > > Increasing VHOST_MEMORY_MAX_NREGIONS from 65 to 509
> >> >> > > to match KVM_USER_MEM_SLOTS fixes issue for vhost-net.
> >> >> > >
> >> >> > > Signed-off-by: Igor Mammedov <imammedo@redhat.com>
> >> >> >
> >> >> > This scares me a bit: each region is 32byte, we are talking
> >> >> > a 16K allocation that userspace can trigger.
> >> >>
> >> >> What's bad with a 16K allocation?
> >> >
> >> > It fails when memory is fragmented.
> >> >
> >> >> > How does kvm handle this issue?
> >> >>
> >> >> It doesn't.
> >> >>
> >> >> Paolo
> >> >
> >> > I'm guessing kvm doesn't do memory scans on data path,
> >> > vhost does.
> >> >
> >> > qemu is just doing things that kernel didn't expect it to need.
> >> >
> >> > Instead, I suggest reducing number of GPA<->HVA mappings:
> >> >
> >> > you have GPA 1,5,7
> >> > map them at HVA 11,15,17
> >> > then you can have 1 slot: 1->11
> >> >
> >> > To avoid libc reusing the memory holes, reserve them with MAP_NORESERVE
> >> > or something like this.
> >>
> >> This works beautifully when host virtual address bits are more
> >> plentiful than guest physical address bits.  Not all architectures
> >> have that property, though.
> >
> > AFAIK this is pretty much a requirement for both kvm and vhost,
> > as we require each guest page to also be mapped in qemu memory.
> >
> >> > We can discuss smarter lookup algorithms but I'd rather
> >> > userspace didn't do things that we then have to
> >> > work around in kernel.
> >> >
> >> >
> >> > --
> >> > MST
> >> > --
> >> > To unsubscribe from this list: send the line "unsubscribe kvm" in
> >> > the body of a message to majordomo@vger.kernel.org
> >> > More majordomo info at  http://vger.kernel.org/majordomo-info.html
> > --
> > To unsubscribe from this list: send the line "unsubscribe netdev" in
> > the body of a message to majordomo@vger.kernel.org
> > More majordomo info at  http://vger.kernel.org/majordomo-info.html
> 
> 
> Hello,
> 
> any chance of getting the proposed patch in the mainline? Though it
> seems that most users will not suffer from relatively slot number
> ceiling (they can decrease slot 'granularity' for larger VMs and
> vice-versa), fine slot size, 256M or even 128M, with the large number
> of slots can be useful for a certain kind of tasks for an
> orchestration systems. I`ve made a backport series of all seemingly
> interesting memslot-related improvements to a 3.10 branch, is it worth
> to be tested with straighforward patch like one from above, with
> simulated fragmentation of allocations in host?

I'm almost done with the approach suggested by Paolo,
i.e. replacing the linear search with a faster, more scalable lookup algorithm.
I hope to post it soon.

> --
> To unsubscribe from this list: send the line "unsubscribe kvm" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html



