From: "Michael S. Tsirkin" <mst@redhat.com>
To: Andrey Korolyov <andrey@xdel.ru>
Cc: Eric Northup <digitaleric@google.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Igor Mammedov <imammedo@redhat.com>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	KVM <kvm@vger.kernel.org>,
	netdev@vger.kernel.org
Subject: Re: [PATCH] vhost: support upto 509 memory regions
Date: Mon, 18 May 2015 18:28:39 +0200
Message-ID: <20150518182723-mutt-send-email-mst@redhat.com>
In-Reply-To: <CABYiri8WnRHz3M4JE_EJxbgpRQxNbsY653LNWGOjDLLgdYx-+w@mail.gmail.com>

On Mon, May 18, 2015 at 07:22:34PM +0300, Andrey Korolyov wrote:
> On Wed, Feb 18, 2015 at 7:27 AM, Michael S. Tsirkin <mst@redhat.com> wrote:
> > On Tue, Feb 17, 2015 at 04:53:45PM -0800, Eric Northup wrote:
> >> On Tue, Feb 17, 2015 at 4:32 AM, Michael S. Tsirkin <mst@redhat.com> wrote:
> >> > On Tue, Feb 17, 2015 at 11:59:48AM +0100, Paolo Bonzini wrote:
> >> >>
> >> >>
> >> >> On 17/02/2015 10:02, Michael S. Tsirkin wrote:
> >> >> > > Increasing VHOST_MEMORY_MAX_NREGIONS from 65 to 509
> >> >> > > to match KVM_USER_MEM_SLOTS fixes the issue for vhost-net.
> >> >> > >
> >> >> > > Signed-off-by: Igor Mammedov <imammedo@redhat.com>
> >> >> >
> >> >> > This scares me a bit: each region is 32 bytes, so we are
> >> >> > talking about a 16K allocation that userspace can trigger.
> >> >>
> >> >> What's bad with a 16K allocation?
> >> >
> >> > It fails when memory is fragmented.
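For scale: struct vhost_memory_region in the vhost uapi header is four
__u64 fields (guest_phys_addr, memory_size, userspace_addr,
flags_padding), i.e. 32 bytes each, so 509 regions come to
509 * 32 = 16288 bytes, just under 16K, held in one physically
contiguous kernel allocation. A minimal userspace sketch of the
arithmetic; the struct layout mirrors linux/vhost.h, everything else
is illustrative:

    #include <stdio.h>
    #include <stdint.h>

    /* Mirrors the uapi struct vhost_memory_region layout:
     * four 64-bit fields, 32 bytes total. */
    struct vhost_memory_region {
        uint64_t guest_phys_addr;
        uint64_t memory_size;
        uint64_t userspace_addr;
        uint64_t flags_padding;
    };

    int main(void)
    {
        size_t nregions = 509;  /* proposed VHOST_MEMORY_MAX_NREGIONS */

        printf("one region: %zu bytes\n",
               sizeof(struct vhost_memory_region));
        printf("full table: %zu bytes\n",
               nregions * sizeof(struct vhost_memory_region));
        /* prints 32 and 16288: a ~16K physically contiguous
         * allocation, which is what can fail once memory is
         * fragmented */
        return 0;
    }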
> >> >
> >> >> > How does kvm handle this issue?
> >> >>
> >> >> It doesn't.
> >> >>
> >> >> Paolo
> >> >
> >> > I'm guessing kvm doesn't do memory scans on data path,
> >> > vhost does.
> >> >
> >> > qemu is just doing things that the kernel didn't expect it to need.
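For context, vhost consults its region table on every descriptor
translation on the data path (translate_desc() in
drivers/vhost/vhost.c), so the scan cost scales with the number of
regions. Below is a simplified sketch of that kind of lookup, modeled
on the kernel's region walk rather than the literal code:

    #include <stdint.h>

    /* Layouts mirror the vhost uapi structures. */
    struct vhost_memory_region {
        uint64_t guest_phys_addr;
        uint64_t memory_size;
        uint64_t userspace_addr;
        uint64_t flags_padding;
    };

    struct vhost_memory {
        uint32_t nregions;
        uint32_t padding;
        struct vhost_memory_region regions[];
    };

    /* Linear GPA -> region lookup: with 509 regions this is up to
     * 509 compares per translated address. */
    static const struct vhost_memory_region *
    find_region(const struct vhost_memory *mem, uint64_t addr)
    {
        uint32_t i;

        for (i = 0; i < mem->nregions; ++i) {
            const struct vhost_memory_region *reg = &mem->regions[i];

            if (reg->guest_phys_addr <= addr &&
                addr - reg->guest_phys_addr < reg->memory_size)
                return reg;
        }
        return NULL; /* address is not guest memory */
    }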
> >> >
> >> > Instead, I suggest reducing the number of GPA<->HVA mappings:
> >> >
> >> > say you have GPAs 1, 5, 7;
> >> > map them at HVAs 11, 15, 17;
> >> > then you can cover them all with 1 slot: 1->11
> >> >
> >> > To avoid libc reusing the memory holes, reserve them with MAP_NORESERVE
> >> > or something like this.
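In userspace terms the reservation trick could look like the sketch
below: grab one large PROT_NONE, MAP_NORESERVE mapping covering the
whole guest physical address space, then place each RAM block at
hva_base + gpa with MAP_FIXED, so a single slot (GPA 0 -> hva_base)
covers everything. This is a hypothetical layout, not QEMU's actual
memory code:

    #include <sys/mman.h>
    #include <stdint.h>
    #include <stddef.h>

    /* Reserve address space for the whole guest physical range.
     * PROT_NONE + MAP_NORESERVE commits no backing store, and keeps
     * libc's allocator from reusing the holes between RAM blocks. */
    static void *reserve_guest_as(size_t guest_as_size)
    {
        return mmap(NULL, guest_as_size, PROT_NONE,
                    MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    }

    /* Place one RAM block at its fixed offset inside the reservation;
     * MAP_FIXED atomically replaces the PROT_NONE pages there.
     * Both helpers return MAP_FAILED on error. */
    static void *map_ram_block(void *hva_base, uint64_t gpa, size_t size)
    {
        return mmap((char *)hva_base + gpa, size,
                    PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
    }

With that layout GPA -> HVA is a single addition, and the kernel-side
region table shrinks back to a handful of slots.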
> >>
> >> This works beautifully when host virtual address bits are more
> >> plentiful than guest physical address bits.  Not all architectures
> >> have that property, though.
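(Concretely: an x86-64 host gives userspace a 47-bit virtual address
space, so reserving a flat region for a guest with, say, a 40-bit
physical address space is easy; on an architecture where guest
physical bits approach host virtual bits, a single linear reservation
no longer fits. The bit widths here are illustrative.)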
> >
> > AFAIK this is pretty much a requirement for both kvm and vhost,
> > as we require each guest page to also be mapped in qemu memory.
> >
> >> > We can discuss smarter lookup algorithms, but I'd rather
> >> > userspace didn't do things that we then have to
> >> > work around in the kernel.
> >> >
> >> >
> >> > --
> >> > MST
> 
> 
> Hello,
> 
> any chance of getting the proposed patch into mainline? Though it
> seems that most users will not suffer from the relatively low
> slot-number ceiling (they can decrease slot 'granularity' for larger
> VMs and vice versa), a fine slot size of 256M or even 128M combined
> with a large number of slots can be useful for certain kinds of tasks
> in orchestration systems. I've made a backport series of all the
> seemingly interesting memslot-related improvements to a 3.10 branch;
> is it worth testing together with a straightforward patch like the
> one above, with simulated fragmentation of allocations on the host?

I'd rather people worked on the 1:1 mapping; it will also
speed up lookups. I'm concerned that if I merge this one, the
motivation for people to work on the right fix will disappear.

-- 
MST

Thread overview: 16+ messages
2015-02-13 15:49 [PATCH] vhost: support upto 509 memory regions Igor Mammedov
2015-02-17  9:02 ` Michael S. Tsirkin
2015-02-17 10:59   ` Paolo Bonzini
2015-02-17 12:32     ` Michael S. Tsirkin
2015-02-17 13:11       ` Paolo Bonzini
2015-02-17 13:29         ` Michael S. Tsirkin
2015-02-17 14:11           ` Paolo Bonzini
2015-02-17 15:02           ` Igor Mammedov
2015-02-17 17:09             ` Paolo Bonzini
2015-02-17 14:44       ` Igor Mammedov
2015-02-17 14:45         ` Paolo Bonzini
2015-02-18  0:53       ` Eric Northup
2015-02-18  4:27         ` Michael S. Tsirkin
2015-05-18 16:22           ` Andrey Korolyov
2015-05-18 16:28             ` Michael S. Tsirkin [this message]
2015-05-19 11:50             ` Igor Mammedov
