linux-kernel.vger.kernel.org archive mirror
From: "Michael S. Tsirkin" <mst@redhat.com>
To: Jason Wang <jasowang@redhat.com>
Cc: linux-kernel@vger.kernel.org, brouer@redhat.com
Subject: Re: [PATCH 1/3] ptr_ring: batch ring zeroing
Date: Sat, 15 Apr 2017 00:00:11 +0300	[thread overview]
Message-ID: <20170414235732-mutt-send-email-mst@kernel.org> (raw)
In-Reply-To: <97915bd0-ef22-4e67-1b18-3522e2369fda@redhat.com>

On Fri, Apr 14, 2017 at 03:52:23PM +0800, Jason Wang wrote:
> 
> 
> > On 2017-04-12 16:03, Jason Wang wrote:
> > 
> > 
> > > On 2017-04-07 13:49, Michael S. Tsirkin wrote:
> > > A known weakness in the ptr_ring design is that it does not handle the
> > > nearly-full case well: as entries are consumed they are immediately
> > > reused by the producer, so consumer and producer end up writing to a
> > > shared cache line.
> > > 
> > > To fix this, add batching to consume calls: as entries are
> > > consumed do not write NULL into the ring until we get
> > > a multiple (in current implementation 2x) of cache lines
> > > away from the producer. At that point, write them all out.
> > > 
> > > We do the write out in the reverse order to keep
> > > producer from sharing cache with consumer for as long
> > > as possible.
> > > 
> > > Writeout also triggers when ring wraps around - there's
> > > no special reason to do this but it helps keep the code
> > > a bit simpler.
> > > 
> > > What should we do if getting away from the producer by 2 cache lines
> > > would mean we are keeping the ring more than half empty?
> > > Maybe we should reduce the batching in this case; the
> > > current patch simply reduces the batching.
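
For illustration, here is a rough, single-threaded sketch of the scheme
described above. The names, the 64-byte cache line and the half-ring cap
are assumptions made for the sketch; this is not the actual ptr_ring code,
and locking and memory barriers are omitted.

/*
 * Rough sketch of the consume-side batching; simplified names, not the
 * actual ptr_ring code.  The producer side (not shown) treats a non-NULL
 * slot as "ring full" and only writes into NULL slots, which is why
 * delaying the NULL write-out keeps it away from the consumer's cache
 * lines.
 */
#include <stddef.h>
#include <string.h>

#define CACHE_LINE	64	/* assumed cache line size */

struct ring {
	void **queue;
	int size;		/* number of slots */
	int consumer_head;	/* next slot to consume */
	int consumer_tail;	/* next slot to write NULL back into */
	int batch;		/* consumed slots we may leave non-NULL */
};

static void ring_init(struct ring *r, void **queue, int size)
{
	r->queue = queue;
	r->size = size;
	r->consumer_head = r->consumer_tail = 0;
	memset(queue, 0, size * sizeof(void *));
	/*
	 * Stay roughly two cache lines away from the producer, but never
	 * keep more than half the ring pending (the open question above):
	 * small rings simply get a smaller batch.
	 */
	r->batch = 2 * CACHE_LINE / sizeof(void *);
	if (r->batch > size / 2)
		r->batch = size / 2;
	if (r->batch < 1)
		r->batch = 1;
}

static void *ring_consume(struct ring *r)
{
	void *ptr = r->queue[r->consumer_head];

	if (!ptr)
		return NULL;	/* ring empty */

	r->consumer_head++;
	/*
	 * Write the NULLs out in one go, in reverse order, so the slot the
	 * producer is waiting on (the oldest one) is released last; also
	 * flush when we wrap around, just to keep things simple.
	 */
	if (r->consumer_head - r->consumer_tail >= r->batch ||
	    r->consumer_head >= r->size) {
		int i;

		for (i = r->consumer_head - 1; i >= r->consumer_tail; i--)
			r->queue[i] = NULL;
		if (r->consumer_head >= r->size)
			r->consumer_head = 0;
		r->consumer_tail = r->consumer_head;
	}
	return ptr;
}
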
> > > 
> > > Notes:
> > > - it is no longer true that a call to consume guarantees
> > >    that the following call to produce will succeed.
> > >    No users seem to assume that.
> > > - batching can also in theory reduce the signalling rate:
> > >    users that would previously send interrupts to the producer
> > >    to wake it up after consuming each entry would now only
> > >    need to do this once per batch.
> > >    This would be easy to do by returning a flag to the caller
> > >    (a rough sketch follows below).
> > >    No users seem to do signalling on consume yet, so this was
> > >    not implemented.
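
For illustration, a hypothetical extension of the sketch above showing the
flag idea; this is not part of the patch.

/*
 * Hypothetical variant: report whether this call just wrote a batch of
 * NULLs back to the producer, so a caller that signals the producer can
 * do so once per batch instead of once per consumed entry.
 */
static void *ring_consume_flagged(struct ring *r, int *batch_flushed)
{
	void *ptr = ring_consume(r);

	/*
	 * Right after a batch write-out, consumer_tail has caught up with
	 * consumer_head; between write-outs the two differ because some
	 * consumed slots are still pending.
	 */
	*batch_flushed = ptr && r->consumer_head == r->consumer_tail;
	return ptr;
}
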
> > > 
> > > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > > ---
> > > 
> > > Jason, I am curious whether the following gives you some of
> > > the performance boost that you see with vhost batching
> > > patches. Is vhost batching on top still helpful?
> > 
> > The patch looks good to me; I will test it together with the vhost batching patches.
> > 
> > Thanks
> 
> Still helpful:
> 
> before this patch: 1.84Mpps
> with this patch: 2.00Mpps
> with batch dequeuing: 2.30Mpps
> 
> Acked-by: Jason Wang <jasowang@redhat.com>
> 
> Thanks

Fascinating. How do we explain the gain with batch dequeue?
Is it just the lock overhead? Can you please try replacing
the lock with a simple non-fair atomic and see what happens?
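
To spell out the kind of experiment meant here, a minimal sketch using C11
atomics rather than kernel primitives (a kernel version would use atomic_t
or cmpxchg plus cpu_relax()); this is just an assumed shape for the test,
not existing code.

/*
 * Replace the fair spinlock around consume with a simple, unfair
 * test-and-set lock, to see how much of the batch-dequeue gain is plain
 * lock overhead.
 */
#include <stdatomic.h>

static atomic_flag consumer_lock = ATOMIC_FLAG_INIT;

static void consumer_lock_acquire(void)
{
	/* Unfair: whichever CPU wins the test-and-set goes next. */
	while (atomic_flag_test_and_set_explicit(&consumer_lock,
						 memory_order_acquire))
		;	/* spin */
}

static void consumer_lock_release(void)
{
	atomic_flag_clear_explicit(&consumer_lock, memory_order_release);
}
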

-- 
MST

Thread overview: 15+ messages
2017-04-07  5:49 [PATCH 1/3] ptr_ring: batch ring zeroing Michael S. Tsirkin
2017-04-07  5:50 ` [PATCH 2/3] ringtest: support test specific parameters Michael S. Tsirkin
2017-04-07  5:50 ` [PATCH 3/3] ptr_ring: support testing different batching sizes Michael S. Tsirkin
2017-04-08 12:14 ` [PATCH 1/3] ptr_ring: batch ring zeroing Jesper Dangaard Brouer
2017-05-09 13:33   ` Michael S. Tsirkin
2017-05-10  3:30     ` Jason Wang
2017-05-10 12:22       ` Michael S. Tsirkin
2017-05-10  9:18     ` Jesper Dangaard Brouer
2017-05-10 12:20       ` Michael S. Tsirkin
2017-04-12  8:03 ` Jason Wang
2017-04-14  7:52   ` Jason Wang
2017-04-14 21:00     ` Michael S. Tsirkin [this message]
2017-04-18  2:16       ` Jason Wang
2017-04-14 22:50     ` Michael S. Tsirkin
2017-04-18  2:18       ` Jason Wang
