linux-kernel.vger.kernel.org archive mirror
From: "Michael S. Tsirkin" <mst@redhat.com>
To: Jason Wang <jasowang@redhat.com>
Cc: kvm@vger.kernel.org, virtualization@lists.linux-foundation.org,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	davem@davemloft.net, Dan Williams <dan.j.williams@intel.com>
Subject: Re: [RFC PATCH V3 0/5] Hi:
Date: Mon, 7 Jan 2019 09:37:25 -0500	[thread overview]
Message-ID: <20190107091947-mutt-send-email-mst@kernel.org> (raw)
In-Reply-To: <62f0fda8-92a4-b160-1b3b-4acdfef49d44@redhat.com>

On Mon, Jan 07, 2019 at 02:50:17PM +0800, Jason Wang wrote:
> 
> > On 2019/1/7 12:17 PM, Michael S. Tsirkin wrote:
> > On Mon, Jan 07, 2019 at 11:53:41AM +0800, Jason Wang wrote:
> > > > On 2019/1/7 11:28 AM, Michael S. Tsirkin wrote:
> > > > On Mon, Jan 07, 2019 at 10:19:03AM +0800, Jason Wang wrote:
> > > > > > On 2019/1/3 4:47 AM, Michael S. Tsirkin wrote:
> > > > > > On Sat, Dec 29, 2018 at 08:46:51PM +0800, Jason Wang wrote:
> > > > > > > This series tries to access virtqueue metadata through kernel virtual
> > > > > > > address instead of copy_user() friends since they had too much
> > > > > > > overheads like checks, spec barriers or even hardware feature
> > > > > > > toggling.
> > > > > > Will review, thanks!
> > > > > > One questions that comes to mind is whether it's all about bypassing
> > > > > > stac/clac.  Could you please include a performance comparison with
> > > > > > nosmap?
> > > > > > 
> > > > > On machine without SMAP (Sandy Bridge):
> > > > > 
> > > > > Before: 4.8Mpps
> > > > > 
> > > > > After: 5.2Mpps
> > > > OK so would you say it's really about unsafe versus safe accesses?
> > > > Or would you say it's just better written code?
> > > 
> > > It's the effect of removing the speculation barrier.
> > 
> > You mean __uaccess_begin_nospec introduced by
> > commit 304ec1b050310548db33063e567123fae8fd0301
> > ?
> 
> Yes.
> 
> 
> > 
> > So fundamentally we do access_ok checks when supplying
> > the memory table to the kernel thread, and we should
> > do the spec barrier there.
> > 
> > Then we can just create and use a variant of uaccess macros that does
> > not include the barrier?
> 
> 
> The unsafe ones?

Fundamentally yes.


> 
> > 
> > Or, how about moving the barrier into access_ok?
> > This way repeated accesses with a single access_ok get a bit faster.
> > CC Dan Williams on this idea.
> 
> 
> The problem is, e.g., the vhost control path: during mem table validation we
> don't even want to access the regions there, so the spec barrier is not needed.

Again, the spec barrier is not strictly needed at all; it's defence in depth.
And mem table init is a slow path, so we can stick a barrier there and it
won't be a problem for anyone.

> 
> > 
> > 
> > > > > On machine with SMAP (Broadwell):
> > > > > 
> > > > > Before: 5.0Mpps
> > > > > 
> > > > > After: 6.1Mpps
> > > > > 
> > > > > No smap: 7.5Mpps
> > > > > 
> > > > > 
> > > > > Thanks
> > > > no smap being before or after?
> > > > 
> > > Let me clarify:
> > > 
> > > 
> > > Before (SMAP on): 5.0Mpps
> > > 
> > > Before (SMAP off): 7.5Mpps
> > > 
> > > After (SMAP on): 6.1Mpps
> > > 
> > > 
> > > Thanks
> > How about after + smap off?
> 
> 
> After (SMAP off): 8.0Mpps
> 
> > 
> > And maybe we want a module option just for the vhost thread to keep smap
> > off generally, since almost all it does is copy stuff from userspace into
> > the kernel anyway. Because what the above numbers show is that we really
> > want a solution that isn't limited to just meta-data access,
> > and I really do not see how any such solution could not also be
> > used to make meta-data access fast.
> 
> 
> As we discussed in another thread on the previous version, this requires lots
> of changes; the main issue is that SMAP state is not saved/restored across an
> explicit schedule().

I wonder how expensive reading eflags can be?
If it's cheap we can just check EFLAGS.AC and rerun stac if needed.

> Even if it did, since vhost calls into lots of net/block code, any kind of
> uaccess in that code needs to understand this special request from vhost;
> e.g. you probably need to invent a new kind of iov iterator that does not
> touch SMAP at all. And I'm not sure this is the only thing we need to deal
> with.


Well we wanted to move packet processing from tun into vhost anyway right?

> 
> So I still prefer to:
> 
> 1) speed up metadata access through vmap + MMU notifier
> 
> 2) speed up the data copy with batched copies (unsafe ones or other new
> interfaces)
> 
> Thanks

My guess is that once you do (2) you will want to rework (1) to use
the new interfaces, so all the effort you are now investing in (1)
will be wasted. Just my $.02.

-- 
MST


Thread overview: 33+ messages
2018-12-29 12:46 [RFC PATCH V3 0/5] Hi: Jason Wang
2018-12-29 12:46 ` [RFC PATCH V3 1/5] vhost: generalize adding used elem Jason Wang
2019-01-04 21:29   ` Michael S. Tsirkin
2019-01-05  0:33     ` Sean Christopherson
2019-01-07  7:00       ` Jason Wang
2019-01-07 14:50         ` Michael S. Tsirkin
2018-12-29 12:46 ` [RFC PATCH V3 2/5] vhost: fine grain userspace memory accessors Jason Wang
2018-12-29 12:46 ` [RFC PATCH V3 3/5] vhost: rename vq_iotlb_prefetch() to vq_meta_prefetch() Jason Wang
2018-12-29 12:46 ` [RFC PATCH V3 4/5] vhost: introduce helpers to get the size of metadata area Jason Wang
2018-12-29 12:46 ` [RFC PATCH V3 5/5] vhost: access vq metadata through kernel virtual address Jason Wang
2019-01-04 21:34   ` Michael S. Tsirkin
2019-01-07  8:40     ` Jason Wang
2019-01-02 20:47 ` [RFC PATCH V3 0/5] Hi: Michael S. Tsirkin
2019-01-07  2:19   ` Jason Wang
2019-01-07  3:28     ` Michael S. Tsirkin
2019-01-07  3:53       ` Jason Wang
2019-01-07  4:17         ` Michael S. Tsirkin
2019-01-07  6:50           ` Jason Wang
2019-01-07 14:37             ` Michael S. Tsirkin [this message]
2019-01-08 10:01               ` Jason Wang
2019-01-07  7:15           ` Dan Williams
2019-01-07 14:11             ` Michael S. Tsirkin
2019-01-07 21:39               ` Dan Williams
2019-01-07 22:25                 ` Michael S. Tsirkin
2019-01-07 22:44                   ` Dan Williams
2019-01-09  4:31                     ` __get_user slower than get_user (was Re: [RFC PATCH V3 0/5] Hi:) Michael S. Tsirkin
2019-01-09  5:19                       ` Linus Torvalds
2019-01-08 11:42               ` [RFC PATCH V3 0/5] Hi: Jason Wang
2019-01-04 21:41 ` Michael S. Tsirkin
2019-01-07  6:58   ` Jason Wang
2019-01-07 14:47     ` Michael S. Tsirkin
2019-01-08 10:12       ` Jason Wang
2019-01-11  8:59         ` Jason Wang
