From: Jason Wang <jasowang@redhat.com>
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: kvm@vger.kernel.org, virtualization@lists.linux-foundation.org,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH net-next 0/3] vhost: accelerate metadata access through vmap()
Date: Mon, 24 Dec 2018 16:32:39 +0800	[thread overview]
Message-ID: <b10c99a2-a9c3-595f-983e-2547325e64ad@redhat.com> (raw)
In-Reply-To: <20181214072334-mutt-send-email-mst@kernel.org>


On 2018/12/14 8:33 PM, Michael S. Tsirkin wrote:
> On Fri, Dec 14, 2018 at 11:42:18AM +0800, Jason Wang wrote:
>> On 2018/12/13 11:27 PM, Michael S. Tsirkin wrote:
>>> On Thu, Dec 13, 2018 at 06:10:19PM +0800, Jason Wang wrote:
>>>> Hi:
>>>>
>>>> This series tries to access virtqueue metadata through a kernel
>>>> virtual address instead of the copy_user() friends, since those
>>>> carry too much overhead: checks, speculation barriers, or even
>>>> hardware feature toggling.
>>> Userspace accesses through remapping tricks mean that the next time
>>> there's a need for a new barrier, we are left to figure it out by
>>> ourselves.
>>
>> I don't follow here; do you mean speculation barriers?
> I mean the next barrier people decide to put into userspace
> memory accesses.
>
>> It's completely unnecessary for
>> vhost, which is a kernel thread.
> It's defence in depth. Take a look at the commit that added them.
> And yes quite possibly in most cases we actually have a spec
> barrier in the validation phase. If we do, let's use the
> unsafe variants so they can be found.


The unsafe variants can only help if you can batch userspace accesses,
which is not necessarily the case under light load.
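
To make that concrete, the batched x86 pattern looks roughly like the
sketch below (a sketch only: the user_access_begin() signature has
changed across kernel versions, and the ring layout is simplified):

    u16 idx, head;

    /* one SMAP open/close window covers all of the accesses below */
    if (!user_access_begin(avail, struct_size(avail, ring, num)))
            return -EFAULT;
    unsafe_get_user(idx, &avail->idx, efault);                /* avail->idx */
    unsafe_get_user(head, &avail->ring[last % num], efault);  /* one entry */
    user_access_end();
    return 0;

efault:
    user_access_end();
    return -EFAULT;

The win comes from paying the STAC/CLAC toggle once per window instead
of once per access, which only helps when there is something to batch.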


>
>> And even if you're right, vhost is not the
>> only place, there's lots of vmap() based accessing in kernel.
> For sure. But if one can get by without get_user_pages(), one
> really should. Witness the recently uncovered mess with file-backed
> storage.


We only pin the metadata pages; I don't believe they will be used by any DMA.
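
For reference, a minimal sketch of the idea (not the series code; the
get_user_pages_fast() signature is the 4.20-era one, and the error
unwinding is omitted):

    struct page *pages[4];
    int off = uaddr & ~PAGE_MASK;
    int npages = DIV_ROUND_UP(off + size, PAGE_SIZE);
    void *kva;

    /* pin the few pages backing the ring metadata ... */
    if (get_user_pages_fast(uaddr & PAGE_MASK, npages, 1, pages) != npages)
            return -EFAULT;
    /* ... and give them a kernel virtual address */
    kva = vmap(pages, npages, VM_MAP, PAGE_KERNEL);
    if (!kva)
            return -ENOMEM;
    /* the metadata is now reachable at kva + off, no uaccess needed */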


>
>> Think of it from
>> another direction: this means a kthread like vhost won't suffer from
>> unnecessary barriers in the future; we will manually pick the ones we
>> really need
> I personally think we should err on the side of caution, not on the side of
> performance.


So what you suggest may lead to an unnecessary performance regression
(10%-20%), and avoiding that is part of the goal of this series. We
should audit the code and add only the barriers we really need instead
of depending on the copy_user() friends.

If we do it ourselves, picking up a future security fix could be
slower, but it's no less safe than before and the performance is kept.
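
For example, where a guest-supplied descriptor index is consumed after
validation, the one barrier that is actually needed can be added
explicitly (a sketch; desc_kva is a hypothetical name for the vmap()ed
descriptor table):

    #include <linux/nospec.h>

    if (head >= vq->num)
            return -EINVAL;
    /* clamp the index against speculation, explicitly and visibly */
    head = array_index_nospec(head, vq->num);
    desc = vq->desc_kva[head];

rather than inheriting whatever barrier happens to be buried inside
copy_from_user() on every access.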


>
>> (but it should have little possibility).
> History seems to teach otherwise.


Which case do you mean here?


>
>> Please note we only access metadata through remapping, not the data
>> itself. This idea has been used by high-speed userspace backends for
>> years, e.g. packet sockets or the recent AF_XDP.
> I think their justification for the higher risk is that they are mostly
> designed for privileged userspace.


I think it's the same with TUN/TAP: a privileged process can pass the
fds to unprivileged ones.


>
>> The only difference is that there the pages are remapped
>> from kernel to userspace.
> At least that avoids the g.u.p mess.


I'm still not clear on this point. We only pin 2 or 4 pages; there are
several other cases that pin many more.
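
The arithmetic for a 256-entry split ring shows why: the descriptor
table is 256 * 16 = 4096 bytes, the avail ring is 6 + 2 * 256 = 518
bytes and the used ring is 6 + 8 * 256 = 2054 bytes, so each area fits
in one page, or two when it straddles a page boundary.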


>
>>>     I don't
>>> like the idea, I have to say.  As a first step, why don't we switch to
>>> unsafe_put_user/unsafe_get_user etc?
>>
>> Several reasons:
>>
>> - They only have an x86 variant; it won't make any difference for the
>> rest of the architectures.
> Is there an issue on other architectures? If yes, they can be extended
> there.


Considering the unexpected amount of work, and that in the best case it
only matches the performance of vmap(), I'm not sure it's worth it.


>
>> - unsafe_put_user/unsafe_get_user is not sufficient for accessing
>> structures (e.g. accessing a descriptor) or arrays (batching).
> So you want unsafe_copy_xxx_user? I can do this. Hang on will post.
>
>> - Unless we can batch the accesses to at least two of the three areas
>> (avail, used and descriptor) in one run, there will be no difference.
>> E.g. we can batch updating the used ring, but it won't make any
>> difference in this case.
>>
> So let's batch them all?


Batching might not help under light load, and we need to measure the
gain/cost of batching itself.


>
>
>>> That would be more of an apples to apples comparison, would it not?
>>
>> An apples-to-apples comparison only helps if we are No. 1. But the
>> fact is we are not. If we want to compete with e.g. DPDK or AF_XDP,
>> vmap() is the fastest method AFAIK.
>>
>>
>> Thanks
> We need to speed up the packet access itself too though.
> You can't vmap all of guest memory.


This series pins and vmaps only a few pages (the metadata).

Thanks


>
>
>>>
>>>> Tests show about a 24% improvement in TX PPS. It should benefit
>>>> other cases as well.
>>>>
>>>> Please review
>>>>
>>>> Jason Wang (3):
>>>>     vhost: generalize adding used elem
>>>>     vhost: fine grain userspace memory accessors
>>>>     vhost: access vq metadata through kernel virtual address
>>>>
>>>>    drivers/vhost/vhost.c | 281 ++++++++++++++++++++++++++++++++++++++----
>>>>    drivers/vhost/vhost.h |  11 ++
>>>>    2 files changed, 266 insertions(+), 26 deletions(-)
>>>>
>>>> -- 
>>>> 2.17.1


Thread overview: 78+ messages
2018-12-13 10:10 [PATCH net-next 0/3] vhost: accelerate metadata access through vmap() Jason Wang
2018-12-13 10:10 ` [PATCH net-next 1/3] vhost: generalize adding used elem Jason Wang
2018-12-13 19:41   ` Michael S. Tsirkin
2018-12-13 19:41   ` Michael S. Tsirkin
2018-12-14  4:00     ` Jason Wang
2018-12-14  4:00     ` Jason Wang
2018-12-13 10:10 ` Jason Wang
2018-12-13 10:10 ` [PATCH net-next 2/3] vhost: fine grain userspace memory accessors Jason Wang
2018-12-13 10:10 ` Jason Wang
2018-12-13 10:10 ` [PATCH net-next 3/3] vhost: access vq metadata through kernel virtual address Jason Wang
2018-12-13 10:10 ` Jason Wang
2018-12-13 15:44   ` Michael S. Tsirkin
2018-12-13 15:44   ` Michael S. Tsirkin
2018-12-13 21:18     ` Konrad Rzeszutek Wilk
2018-12-13 21:58       ` Michael S. Tsirkin
2018-12-13 21:58       ` Michael S. Tsirkin
2018-12-13 21:18     ` Konrad Rzeszutek Wilk
2018-12-14  3:57     ` Jason Wang
2018-12-14  3:57     ` Jason Wang
2018-12-14 12:36       ` Michael S. Tsirkin
2018-12-24  7:53         ` Jason Wang
2018-12-24  7:53         ` Jason Wang
2018-12-24 18:10           ` Michael S. Tsirkin
2018-12-25 10:05             ` Jason Wang
2018-12-25 10:05               ` Jason Wang
2018-12-25 12:50               ` Michael S. Tsirkin
2018-12-25 12:50                 ` Michael S. Tsirkin
2018-12-26  3:57                 ` Jason Wang
2018-12-26  3:57                 ` Jason Wang
2018-12-26 15:02                   ` Michael S. Tsirkin
2018-12-26 15:02                     ` Michael S. Tsirkin
2018-12-27  9:39                     ` Jason Wang
2018-12-30 18:30                       ` Michael S. Tsirkin
2018-12-30 18:30                         ` Michael S. Tsirkin
2019-01-02 11:38                         ` Jason Wang
2019-01-02 11:38                         ` Jason Wang
2018-12-27  9:39                     ` Jason Wang
2018-12-24 18:10           ` Michael S. Tsirkin
2018-12-14 12:36       ` Michael S. Tsirkin
2018-12-15 21:15       ` David Miller
2018-12-15 21:15         ` David Miller
2018-12-14 14:48   ` kbuild test robot
2018-12-14 14:48   ` kbuild test robot
2018-12-13 15:27 ` [PATCH net-next 0/3] vhost: accelerate metadata access through vmap() Michael S. Tsirkin
2018-12-14  3:42   ` Jason Wang
2018-12-14  3:42   ` Jason Wang
2018-12-14 12:33     ` Michael S. Tsirkin
2018-12-14 12:33     ` Michael S. Tsirkin
2018-12-14 15:31       ` Michael S. Tsirkin
2018-12-14 15:31       ` Michael S. Tsirkin
2018-12-24  8:32       ` Jason Wang [this message]
2018-12-24  8:32         ` Jason Wang
2018-12-24 18:12         ` Michael S. Tsirkin
2018-12-24 18:12           ` Michael S. Tsirkin
2018-12-25 10:09           ` Jason Wang
2018-12-25 10:09             ` Jason Wang
2018-12-25 12:52             ` Michael S. Tsirkin
2018-12-25 12:52               ` Michael S. Tsirkin
2018-12-26  3:59               ` Jason Wang
2018-12-26  3:59                 ` Jason Wang
2018-12-13 15:27 ` Michael S. Tsirkin
2018-12-13 20:12 ` Michael S. Tsirkin
2018-12-14  4:29   ` Jason Wang
2018-12-14  4:29   ` Jason Wang
2018-12-14 12:52     ` Michael S. Tsirkin
2018-12-14 12:52     ` Michael S. Tsirkin
2018-12-15 19:43     ` David Miller
2018-12-15 19:43     ` David Miller
2018-12-16 19:57       ` Michael S. Tsirkin
2018-12-24  8:44         ` Jason Wang
2018-12-24  8:44         ` Jason Wang
2018-12-24 19:09           ` Michael S. Tsirkin
2018-12-24 19:09             ` Michael S. Tsirkin
2018-12-16 19:57       ` Michael S. Tsirkin
2018-12-13 20:12 ` Michael S. Tsirkin
2018-12-14 15:16 ` Michael S. Tsirkin
2018-12-14 15:16 ` Michael S. Tsirkin
2018-12-13 10:10 Jason Wang

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=b10c99a2-a9c3-595f-983e-2547325e64ad@redhat.com \
    --to=jasowang@redhat.com \
    --cc=kvm@vger.kernel.org \
    --cc=linux-kernel@vger.kernel.org \
    --cc=mst@redhat.com \
    --cc=netdev@vger.kernel.org \
    --cc=virtualization@lists.linux-foundation.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html
