From: Bandan Das <bsd@redhat.com>
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: kvm@vger.kernel.org, netdev@vger.kernel.org,
linux-kernel@vger.kernel.org, Eyal Moscovici <EYALMO@il.ibm.com>,
Razya Ladelsky <RAZYA@il.ibm.com>,
cgroups@vger.kernel.org, jasowang@redhat.com
Subject: Re: [RFC PATCH 0/4] Shared vhost design
Date: Mon, 10 Aug 2015 16:00:21 -0400 [thread overview]
Message-ID: <jpgtws6rk16.fsf@linux.bootlegged.copy> (raw)
In-Reply-To: <20150809154357-mutt-send-email-mst@redhat.com> (Michael S. Tsirkin's message of "Sun, 9 Aug 2015 15:45:47 +0300")
"Michael S. Tsirkin" <mst@redhat.com> writes:
> On Sat, Aug 08, 2015 at 07:06:38PM -0400, Bandan Das wrote:
>> Hi Michael,
...
>>
>> > - does the design address the issue of VM 1 being blocked
>> > (e.g. because it hits swap) and blocking VM 2?
>> Good question. I haven't thought of this yet. But IIUC,
>> the worker thread will complete VM1's job and then move on to
>> executing VM2's scheduled work.
>> It doesn't matter if VM1 is
>> blocked currently. I think it would be a problem though if/when
>> polling is introduced.
>
> Sorry, I wasn't clear. If VM1's memory is in swap, attempts to
> access it might block the service thread, so it won't
> complete VM2's job.
Ah ok, I understand now. I am pretty sure the current RFC doesn't
take care of this :) I will add this to my todo list for v2.
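To make the failure mode concrete, here's a toy user-space model of a single
worker draining one shared FIFO list (all names invented, none of this is the
RFC's code): if VM1's item blocks, say on a page coming back from swap, every
item queued behind it waits, including VM2's.

```c
/* Toy model: one worker thread serves all devices from a single FIFO
 * work list.  "cost" stands in for how long an item occupies the
 * worker (a blocked item just has a huge cost).  Illustrative only. */
#include <assert.h>
#include <stddef.h>

struct work_item {
    int vm_id;               /* which guest queued this item */
    int cost;                /* virtual time units the item needs */
    struct work_item *next;
};

/* Virtual time at which the first item belonging to 'vm' completes:
 * the worker is occupied for the full cost of every earlier item. */
static int completion_time(struct work_item *head, int vm)
{
    int t = 0;
    for (struct work_item *w = head; w; w = w->next) {
        t += w->cost;
        if (w->vm_id == vm)
            return t;
    }
    return -1;               /* no work queued for this vm */
}
```

With VM1's item costing 100 units ahead of VM2's 1-unit item, VM2's work
finishes at t=101 even though it needed almost no service, which is exactly
the head-of-line blocking you describe.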
Bandan
>
>
>>
>> >>
>> >> #* Last run with the vCPU and I/O thread(s) pinned, no CPU/memory limit imposed.
>> >> # I/O thread runs on CPU 14 or 15 depending on which guest it's serving
>> >>
>> >> There's a simple graph at
>> >> http://people.redhat.com/~bdas/elvis/data/results.png
>> >> that shows how task affinity results in a jump and even without it,
>> >> as the number of guests increases, the shared vhost design performs
>> >> slightly better.
>> >>
>> >> Observations:
>> >> 1. In terms of "stock" performance, the results are comparable.
>> >> 2. However, with a tuned setup, even without polling, we see an improvement
>> >> with the new design.
>> >> 3. Making the new design simulate the old behavior is just a matter of setting
>> >> the number of guests per vhost thread to 1.
>> >> 4. A per-guest limit on the work done by a specific vhost thread may be
>> >> needed for fairness.
>> >> 5. cgroup associations need to be figured out. I only slightly hacked the
>> >> current cgroup association mechanism to work with the new model. CCing the
>> >> cgroups list for input/comments.
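On (4), a rough user-space sketch of the kind of per-device quota I have in
mind (all names are invented, this is not what the patches currently do): the
worker round-robins over its devices and serves at most a fixed number of
items per device per pass, so one busy guest cannot monopolize the thread.

```c
/* Hypothetical fairness sketch: pending[i] is the number of queued
 * work items for device i.  One call models one pass of the shared
 * worker, which serves at most 'quota' items per device before
 * moving on.  Returns the total items served in the pass. */
#include <assert.h>

static int worker_pass(int pending[], int ndev, int quota)
{
    int served = 0;
    for (int i = 0; i < ndev; i++) {
        int n = pending[i] < quota ? pending[i] : quota;
        pending[i] -= n;     /* device i gets at most its quota */
        served += n;
    }
    return served;
}
```

A device with 10 items queued next to one with a single item no longer drains
completely before the other is looked at; the quiet device is served within
the first pass.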
>> >>
>> >> Many thanks to Razya Ladelsky and Eyal Moscovici of IBM for the initial
>> >> patches, the helpful testing suggestions, and the discussions.
>> >>
>> >> Bandan Das (4):
>> >> vhost: Introduce a universal thread to serve all users
>> >> vhost: Limit the number of devices served by a single worker thread
>> >> cgroup: Introduce a function to compare cgroups
>> >> vhost: Add cgroup-aware creation of worker threads
>> >>
>> >> drivers/vhost/net.c | 6 +-
>> >> drivers/vhost/scsi.c | 18 ++--
>> >> drivers/vhost/vhost.c | 272 +++++++++++++++++++++++++++++++++++--------------
>> >> drivers/vhost/vhost.h | 32 +++++-
>> >> include/linux/cgroup.h | 1 +
>> >> kernel/cgroup.c | 40 ++++++++
>> >> 6 files changed, 275 insertions(+), 94 deletions(-)
>> >>
>> >> --
>> >> 2.4.3
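On (5), the matching rule patches 3 and 4 aim for can be modeled in user space
roughly like this (the per-hierarchy ids here are a stand-in for the kernel's
css pointers; the real comparison walks the task's cgroups per subsystem):
a new device may share an existing worker only if the owning tasks sit in the
same cgroup in every hierarchy.

```c
/* Rough user-space model of the cgroup-matching idea.  Each task's
 * cgroup membership is reduced to one id per hierarchy; two tasks
 * are compatible for worker sharing only if every id matches.
 * Purely illustrative, not the kernel code. */
#include <assert.h>
#include <stdbool.h>

#define NR_HIERARCHIES 4

struct task_model {
    int cgroup_id[NR_HIERARCHIES];  /* task's cgroup in each hierarchy */
};

static bool cgroups_match(const struct task_model *a,
                          const struct task_model *b)
{
    for (int h = 0; h < NR_HIERARCHIES; h++)
        if (a->cgroup_id[h] != b->cgroup_id[h])
            return false;           /* differ in some hierarchy: no sharing */
    return true;
}
```

Under this rule a mismatch in any single hierarchy forces a new worker thread,
which keeps per-cgroup resource accounting intact at the cost of more threads.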