From: Paul Durrant <Paul.Durrant@citrix.com>
To: Tim Smith <tim.smith@citrix.com>, "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "qemu-devel@nongnu.org" <qemu-devel@nongnu.org>, "qemu-block@nongnu.org" <qemu-block@nongnu.org>
Cc: Anthony Perard <anthony.perard@citrix.com>, Kevin Wolf <kwolf@redhat.com>, Stefano Stabellini <sstabellini@kernel.org>, Max Reitz <mreitz@redhat.com>
Subject: Re: [Qemu-devel] [PATCH 3/3] Avoid repeated memory allocation in xen_disk
Date: Fri, 2 Nov 2018 11:15:24 +0000
Message-ID: <a04865aa2fb94f51b526d12cd07819e2@AMSPEX02CL03.citrite.net>
In-Reply-To: <154115286959.11300.498371710893672725.stgit@dhcp-3-135.uk.xensource.com>

> -----Original Message-----
> From: Tim Smith [mailto:tim.smith@citrix.com]
> Sent: 02 November 2018 10:01
> To: xen-devel@lists.xenproject.org; qemu-devel@nongnu.org; qemu-block@nongnu.org
> Cc: Anthony Perard <anthony.perard@citrix.com>; Kevin Wolf <kwolf@redhat.com>; Paul Durrant <Paul.Durrant@citrix.com>; Stefano Stabellini <sstabellini@kernel.org>; Max Reitz <mreitz@redhat.com>
> Subject: [PATCH 3/3] Avoid repeated memory allocation in xen_disk
>
> xen_disk currently allocates memory to hold the data for each ioreq
> as that ioreq is used, and frees it afterwards. Because it requires
> page-aligned blocks, this interacts poorly with non-page-aligned
> allocations and balloons the heap.
>
> Instead, allocate the maximum possible requirement, which is
> BLKIF_MAX_SEGMENTS_PER_REQUEST pages (currently 11 pages) when
> the ioreq is created, and keep that allocation until it is destroyed.
> Since the ioreqs themselves are re-used via a free list, this
> should actually improve memory usage.

Reviewed-by: Paul Durrant <paul.durrant@citrix.com>

> Signed-off-by: Tim Smith <tim.smith@citrix.com>
> ---
>  hw/block/xen_disk.c | 11 ++++++-----
>  1 file changed, 6 insertions(+), 5 deletions(-)
>
> diff --git a/hw/block/xen_disk.c b/hw/block/xen_disk.c
> index b506e23868..faaeefba29 100644
> --- a/hw/block/xen_disk.c
> +++ b/hw/block/xen_disk.c
> @@ -112,7 +112,6 @@ static void ioreq_reset(struct ioreq *ioreq)
>      memset(&ioreq->req, 0, sizeof(ioreq->req));
>      ioreq->status = 0;
>      ioreq->start = 0;
> -    ioreq->buf = NULL;
>      ioreq->size = 0;
>      ioreq->presync = 0;
>
> @@ -137,6 +136,11 @@ static struct ioreq *ioreq_start(struct XenBlkDev *blkdev)
>          /* allocate new struct */
>          ioreq = g_malloc0(sizeof(*ioreq));
>          ioreq->blkdev = blkdev;
> +        /* We cannot need more pages per ioreq than this, and we do re-use
> +         * ioreqs, so allocate the memory once here, to be freed in
> +         * blk_free() when the ioreq is freed. */
> +        ioreq->buf = qemu_memalign(XC_PAGE_SIZE, BLKIF_MAX_SEGMENTS_PER_REQUEST
> +                                   * XC_PAGE_SIZE);
>          blkdev->requests_total++;
>          qemu_iovec_init(&ioreq->v, 1);
>      } else {
> @@ -313,14 +317,12 @@ static void qemu_aio_complete(void *opaque, int ret)
>          if (ret == 0) {
>              ioreq_grant_copy(ioreq);
>          }
> -        qemu_vfree(ioreq->buf);
>          break;
>      case BLKIF_OP_WRITE:
>      case BLKIF_OP_FLUSH_DISKCACHE:
>          if (!ioreq->req.nr_segments) {
>              break;
>          }
> -        qemu_vfree(ioreq->buf);
>          break;
>      default:
>          break;
> @@ -392,12 +394,10 @@ static int ioreq_runio_qemu_aio(struct ioreq *ioreq)
>  {
>      struct XenBlkDev *blkdev = ioreq->blkdev;
>
> -    ioreq->buf = qemu_memalign(XC_PAGE_SIZE, ioreq->size);
>      if (ioreq->req.nr_segments &&
>          (ioreq->req.operation == BLKIF_OP_WRITE ||
>           ioreq->req.operation == BLKIF_OP_FLUSH_DISKCACHE) &&
>          ioreq_grant_copy(ioreq)) {
> -        qemu_vfree(ioreq->buf);
>          goto err;
>      }
>
> @@ -990,6 +990,7 @@ static int blk_free(struct XenDevice *xendev)
>          ioreq = QLIST_FIRST(&blkdev->freelist);
>          QLIST_REMOVE(ioreq, list);
>          qemu_iovec_destroy(&ioreq->v);
> +        qemu_vfree(ioreq->buf);
>          g_free(ioreq);
>      }
>