From: Felipe Franciosi <felipe.franciosi@citrix.com>
To: David Vrabel <david.vrabel@citrix.com>,
	Roger Pau Monne <roger.pau@citrix.com>,
	Bob Liu <bob.liu@oracle.com>
Cc: "'Konrad Rzeszutek Wilk'" <konrad.wilk@oracle.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"axboe@fb.com" <axboe@fb.com>,
	"hch@infradead.org" <hch@infradead.org>,
	"avanzini.arianna@gmail.com" <avanzini.arianna@gmail.com>
Subject: RE: [PATCH 04/10] xen/blkfront: separate ring information to an new struct
Date: Thu, 19 Feb 2015 12:06:24 +0000	[thread overview]
Message-ID: <9F2C4E7DFB7839489C89757A66C5AD629EDBBA@AMSPEX01CL03.citrite.net> (raw)
In-Reply-To: <54E5C59F.2060300@citrix.com>



> -----Original Message-----
> From: David Vrabel
> Sent: 19 February 2015 11:15
> To: Roger Pau Monne; Bob Liu; Felipe Franciosi
> Cc: 'Konrad Rzeszutek Wilk'; xen-devel@lists.xen.org; linux-
> kernel@vger.kernel.org; axboe@fb.com; hch@infradead.org;
> avanzini.arianna@gmail.com
> Subject: Re: [PATCH 04/10] xen/blkfront: separate ring information to an new
> struct
> 
> On 19/02/15 11:08, Roger Pau Monné wrote:
> > On 19/02/15 at 3.05, Bob Liu wrote:
> >>
> >>
> >> On 02/19/2015 02:08 AM, Felipe Franciosi wrote:
> >>>> -----Original Message-----
> >>>> From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
> >>>> Sent: 18 February 2015 17:38
> >>>> To: Roger Pau Monne
> >>>> Cc: Bob Liu; xen-devel@lists.xen.org; David Vrabel; linux-
> >>>> kernel@vger.kernel.org; Felipe Franciosi; axboe@fb.com;
> >>>> hch@infradead.org; avanzini.arianna@gmail.com
> >>>> Subject: Re: [PATCH 04/10] xen/blkfront: separate ring information
> >>>> to an new struct
> >>>>
> >>>> On Wed, Feb 18, 2015 at 06:28:49PM +0100, Roger Pau Monné wrote:
> >>>>> On 15/02/15 at 9.18, Bob Liu wrote:
> >>>>> AFAICT you seem to have a list of persistent grants, indirect
> >>>>> pages and a grant table callback for each ring; isn't this
> >>>>> supposed to be shared between all rings?
> >>>>>
> >>>>> I don't think we should be going down that route, or else we can
> >>>>> hoard a large amount of memory and grants.
> >>>>
> >>>> It does remove the lock that each ring thread would otherwise have
> >>>> to take to access those. The per-ring values (grants) can be capped
> >>>> at a smaller number such that the overall total stays the same as in
> >>>> the previous version. As in: each ring gets
> >>>> MAX_GRANTS / nr_online_cpus().
> >>>>>
> >>>
> >>> We should definitely be concerned with the amount of memory consumed
> >>> on the backend for each plugged virtual disk. We have faced several
> >>> problems in XenServer around this area before; it drastically affects
> >>> VBD scalability per host.
> >>>
> >>
> >> Right, so we have to keep both the lock and the amount of memory
> >> consumed in mind.
> >>
> >>> This makes me think that all the persistent grants work was done as a
> >>> workaround for the performance problems we were facing around
> >>> concurrent grant un/mapping operations. Given all the recent
> >>> submissions in this (grant ops) area, should we perhaps revisit this
> >>> and discuss whether we want to keep offering persistent grants as a
> >>> feature?
> >>>
> >>
> >> Agree, Life would be easier if we can remove the persistent feature.
> >
> > I was thinking about this yesterday, and IMHO I think we should remove
> > persistent grants now while it's not too entangled, leaving it for
> > later will just make our life more miserable.
> >
> > While it's true that persistent grants provide a throughput increase
> > by avoiding grant table operations and TLB flushes, they have several
> > problems that cannot be avoided:
> >
> >  - Memory/grants hoarding: we need to reserve the same amount of
> > memory as the amount of data that we want to have in-flight. While
> > this is not so critical for memory, it is for grants, since using too
> > many grants can basically deadlock other PV interfaces. There's no way
> > to avoid this, since it's the design behind persistent grants.
> >
> >  - Memcopy: the guest needs to perform a memcopy of all data that goes
> > through blkfront. While not so critical, Felipe found systems where
> > memcopy was more expensive than grant map/unmap in the backend (IIRC
> > those were AMD systems).
> >
> >  - Complexity/interactions: when persistent grants were designed, the
> > number of requests was limited to 32 and each request could only
> > contain 11 pages. This meant we had to use 352 pages/grants, which was
> > fine. Now that we have indirect IO and multiqueue on the horizon, this
> > number has gone up by orders of magnitude; I don't think this is
> > viable/useful any more.
> >
> > If Konrad/Bob agree I would like to send a patch to remove persistent
> > grants and then have the multiqueue series rebased on top of that.
> 
> I agree with this.
> 
> I think we can get better performance/scalability gains with
> improvements to grant table locking and TLB flush avoidance.
> 
> David

It doesn't change the fact that persistent grants (as well as the grant copy implementation we did for tapdisk3) were alternatives that allowed aggregate storage performance to increase drastically. Before committing to removing something that allows Xen users to scale their deployments, I think we need to revisit whether the recent improvements to the whole grant mechanism (grant table locking, TLB flushing, batched calls, etc.) are performing as we would (now) expect.
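
To put rough numbers on the grant-hoarding point quoted above, here is a back-of-envelope sketch. The classic 32 x 11 = 352 figure is from the thread; the indirect-segment and ring counts below are illustrative assumptions, not values anyone proposed:

```python
# Back-of-envelope footprint of persistent grants: every in-flight slot
# keeps a page and a grant reference pinned for the lifetime of the device.
def grants_pinned(ring_slots, segs_per_req, nr_rings=1):
    """Pages/grants held per device when all slots use persistent grants."""
    return ring_slots * segs_per_req * nr_rings

# Classic blkif: 32 requests x 11 segments, as quoted in the thread.
classic = grants_pinned(32, 11)                    # 352

# Illustrative only: indirect descriptors with 256 segments per request
# across 8 rings (multiqueue) -- the "orders of magnitude" growth above.
indirect_mq = grants_pinned(32, 256, nr_rings=8)   # 65536

# Konrad's mitigation above caps each ring so the total stays constant,
# i.e. per_ring_grants = MAX_GRANTS // nr_online_cpus.
print(classic, indirect_mq)
```

At 4 KiB per page, 65536 pinned grants correspond to 256 MiB of guest memory per device, which illustrates why the hoarding concern dominates as segment and ring counts grow.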

What I think should be done prior to committing to either direction is a proper performance assessment of grant mapping vs. persistent grants vs. grant copy for single and aggregate workloads. We need to test a meaningful set of host architectures, workloads and storage types. Last year at the XenDevelSummit, for example, we showed how grant copy scaled better than persistent grants at the cost of doing the copy on the back end.
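
As a sketch of what such an assessment could look like (the dimension names below are hypothetical, not an existing harness), enumerating the full cross-product keeps the comparison honest:

```python
# Hypothetical test matrix for comparing grant strategies: every strategy
# is measured under every workload scope and storage type, so no strategy
# is judged on a favourable subset of workloads.
import itertools

strategies = ("grant-map", "persistent-grants", "grant-copy")
scopes     = ("single-vbd", "aggregate")           # one disk vs many
storage    = ("ramdisk", "ssd", "spindle", "nfs")  # backend media

matrix = list(itertools.product(strategies, scopes, storage))
print(len(matrix))  # 3 * 2 * 4 = 24 configurations
```

Running each configuration on a couple of host architectures (e.g. one Intel and one AMD box, given the memcopy observation above) would already expose whether any single strategy wins across the board.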

I don't mean to propose tests that will delay innovation by weeks or months. However, it is very easy to find changes that improve one synthetic workload while ignoring the fact that they might hurt several (possibly far more realistic) others. I think this is the time to run performance tests objectively, without trying to dig too much into debugging, and go from there.

Felipe

Thread overview: 73+ messages

2015-02-15  8:18 [RFC PATCH 00/10] Multi-queue support for xen-block driver Bob Liu
2015-02-15  8:18 ` [PATCH 01/10] xen/blkfront: convert to blk-mq API Bob Liu
2015-02-15  8:18 ` [PATCH 02/10] xen/blkfront: drop legacy block layer support Bob Liu
2015-02-18 17:02   ` Christoph Hellwig
2015-02-15  8:18 ` [PATCH 03/10] xen/blkfront: reorg info->io_lock after using blk-mq API Bob Liu
2015-02-18 17:05   ` Christoph Hellwig
2015-02-19  2:07     ` Bob Liu
2015-02-15  8:18 ` [PATCH 04/10] xen/blkfront: separate ring information to an new struct Bob Liu
2015-02-18 17:28   ` Roger Pau Monné
2015-02-18 17:37     ` Konrad Rzeszutek Wilk
2015-02-18 18:08       ` Felipe Franciosi
2015-02-18 18:29         ` Konrad Rzeszutek Wilk
2015-02-19  2:05         ` Bob Liu
2015-02-19 11:08           ` Roger Pau Monné
2015-02-19 11:14             ` David Vrabel
2015-02-19 12:06               ` Felipe Franciosi [this message]
2015-02-19 13:12                 ` Roger Pau Monné
2015-02-20 18:59                   ` Konrad Rzeszutek Wilk
2015-02-27 12:52                     ` Bob Liu
2015-03-04 21:21                       ` Konrad Rzeszutek Wilk
2015-03-05  0:47                         ` Bob Liu
2015-03-06 10:30                           ` Felipe Franciosi
2015-03-17  7:00                             ` Bob Liu
2015-03-17 14:52                               ` Felipe Franciosi
2015-03-18  0:52                                 ` Bob Liu
2015-02-19 11:30             ` Malcolm Crossley
2015-02-15  8:19 ` [PATCH 05/10] xen/blkback: separate ring information out of struct xen_blkif Bob Liu
2015-02-15  8:19 ` [PATCH 06/10] xen/blkfront: pseudo support for multi hardware queues Bob Liu
2015-02-15  8:19 ` [PATCH 07/10] xen/blkback: " Bob Liu
2015-02-19 16:57   ` [Xen-devel] " David Vrabel
2015-02-15  8:19 ` [PATCH 08/10] xen/blkfront: negotiate hardware queue number with backend Bob Liu
2015-02-15  8:19 ` [PATCH 09/10] xen/blkback: get hardware queue number from blkfront Bob Liu
2015-02-15  8:19 ` [PATCH 10/10] xen/blkfront: use work queue to fast blkif interrupt return Bob Liu
2015-02-19 16:51   ` [Xen-devel] " David Vrabel
2015-02-18 17:01 ` [RFC PATCH 00/10] Multi-queue support for xen-block driver Christoph Hellwig
2015-02-18 18:22 ` Felipe Franciosi
2015-02-19  2:04   ` Bob Liu
