From: Paulina Szubarczyk <paulinaszubarczyk@gmail.com>
To: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: sstabellini@kernel.org, wei.liu2@citrix.com,
ian.jackson@eu.citrix.com, P.Gawkowski@ii.pw.edu.pl,
anthony.perard@citrix.com, xen-devel@lists.xenproject.org
Subject: Re: [PATCH RESEND 4/4] qemu-xen-dir/hw/block: Cache local buffers used in grant copy
Date: Tue, 07 Jun 2016 15:13:11 +0200
Message-ID: <1465305191.23468.26.camel@localhost>
In-Reply-To: <20160602141919.k3fio52xsvn2qj3s@mac>
On Thu, 2016-06-02 at 16:19 +0200, Roger Pau Monné wrote:
> On Tue, May 31, 2016 at 06:44:58AM +0200, Paulina Szubarczyk wrote:
> > If there are still pending requests the buffers are not free() but
> > cached in an array of a size max_request*BLKIF_MAX_SEGMENTS_PER_REQUEST
> >
> > ---
> > hw/block/xen_disk.c | 60 +++++++++++++++++++++++++++++++++++++++++------------
> > 1 file changed, 47 insertions(+), 13 deletions(-)
> >
> > diff --git a/hw/block/xen_disk.c b/hw/block/xen_disk.c
> > index 43cd9c9..cf80897 100644
> > --- a/hw/block/xen_disk.c
> > +++ b/hw/block/xen_disk.c
> > @@ -125,6 +125,10 @@ struct XenBlkDev {
> > /* */
> > gboolean feature_discard;
> >
> > + /* request buffer cache */
> > + void **buf_cache;
> > + int buf_cache_free;
>
> Have you checked if there's some already available FIFO queue structure that
> you can use?
>
> Glib Trash Stacks looks like a suitable candidate:
>
> https://developer.gnome.org/glib/stable/glib-Trash-Stacks.html
Persistent regions use a singly-linked list (GSList), and I was
thinking that using the same structure here would be better, since the
page you linked says that Trash Stacks have been deprecated since GLib 2.48.
But I have some problems with debugging qemu-system-i386: gdb is not able
to load symbols and reports "qemu-system-i386...(no debugging symbols
found)...done." This was not an issue earlier, and I have tried running
configure with --enable-debug before the build, as well as setting
'strip_opt="yes"'.
>
> > +
> > /* qemu block driver */
> > DriveInfo *dinfo;
> > BlockBackend *blk;
> > @@ -284,12 +288,16 @@ err:
> > return -1;
> > }
> >
> > -
> > -static void* get_buffer(void) {
> > +static void* get_buffer(struct XenBlkDev *blkdev) {
> > void *buf;
> >
> > - buf = mmap(NULL, 1 << XC_PAGE_SHIFT, PROT_READ | PROT_WRITE,
> > + if(blkdev->buf_cache_free <= 0) {
> > + buf = mmap(NULL, 1 << XC_PAGE_SHIFT, PROT_READ | PROT_WRITE,
> > MAP_SHARED | MAP_ANONYMOUS, -1, 0);
> > + } else {
> > + blkdev->buf_cache_free--;
> > + buf = blkdev->buf_cache[blkdev->buf_cache_free];
> > + }
> >
> > if (unlikely(buf == MAP_FAILED))
> > return NULL;
> > @@ -301,21 +309,40 @@ static int free_buffer(void* buf) {
> > return munmap(buf, 1 << XC_PAGE_SHIFT);
> > }
> >
> > -static int free_buffers(void** page, int count)
> > +static int free_buffers(void** page, int count, struct XenBlkDev *blkdev)
> > {
> > - int i, r = 0;
> > + int i, put_buf_cache = 0, r = 0;
> > +
> > + if (blkdev->more_work && blkdev->requests_inflight < max_requests) {
>
> Shouldn't this be <=?
>
> Or else you will only cache at most 341 pages instead of the maximum
> number of pages that can be in-flight (352).
At the moment a request is completing and freeing its pages, it still
counts as one of the in-flight requests, so I think no more than
max_requests-1 others can be scheduled alongside it.
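The sizing question can be written down explicitly. A sketch of the two
bounds, assuming max_requests == 32 and 11 segments per request as in the
numbers above (the helper names should_cache and cache_capacity are
illustrative, not from the patch):

```c
#include <assert.h>
#include <stdbool.h>

#define BLKIF_MAX_SEGMENTS_PER_REQUEST 11

/* Cache pages only if more work is queued and further requests can still
 * be scheduled.  With '<', the completing request still counts as
 * in-flight, so at most (max_requests - 1) others can follow it. */
static bool should_cache(bool more_work, int requests_inflight,
                         int max_requests)
{
    return more_work && requests_inflight < max_requests;
}

/* Pages worth caching for a given number of usable request slots:
 * (32 - 1) * 11 == 341 under '<', 32 * 11 == 352 under '<='. */
static int cache_capacity(int usable_requests)
{
    return usable_requests * BLKIF_MAX_SEGMENTS_PER_REQUEST;
}
```

So the '<' form deliberately caps the cache at 341 pages rather than the
352 that could be in flight, on the view that the completing request's own
slot never needs a cached buffer again.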
Paulina
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel