From: Jason Wang <jasowang@redhat.com>
To: Jason Gunthorpe <jgg@ziepe.ca>
Cc: mst@redhat.com, kvm@vger.kernel.org,
virtualization@lists.linux-foundation.org,
netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-mm@kvack.org
Subject: Re: [PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
Date: Mon, 5 Aug 2019 12:20:45 +0800
Message-ID: <11b2a930-eae4-522c-4132-3f8a2da05666@redhat.com>
In-Reply-To: <20190802124613.GA11245@ziepe.ca>
On 2019/8/2 8:46 PM, Jason Gunthorpe wrote:
> On Fri, Aug 02, 2019 at 05:40:07PM +0800, Jason Wang wrote:
>>> This must be a proper barrier, like a spinlock, mutex, or
>>> synchronize_rcu.
>>
>> I start with synchronize_rcu() but both you and Michael raise some
>> concern.
> I've also idly wondered if calling synchronize_rcu() under the various
> mm locks is a deadlock situation.
Maybe, that's why I suggested using vhost_work_flush(), which is much
more lightweight and can achieve the same result. It guarantees that all
previous work has been processed by the time vhost_work_flush() returns.
>
>> Then I try spinlock and mutex:
>>
>> 1) spinlock: adds lots of overhead on the datapath, which leads to no
>> performance improvement.
> I think the topic here is correctness not performance improvement
But the whole point of this series is to speed up vhost.
>
>> 2) SRCU: a full memory barrier is required in srcu_read_lock(), which
>> still yields little performance improvement.
>
>> 3) mutex: one possible issue is the need to wait for the page to be swapped
>> in (is this unacceptable?); another is that we need to hold the vq lock
>> during the range overlap check.
> I have a feeling that mmu notifiers cannot safely become dependent on
> progress of swap without causing deadlock. You probably should avoid
> this.
Yes, and that's why I tried to synchronize the critical region myself.
>>> And, again, you can't re-invent a spinlock with open coding and get
>>> something better.
>> So the question is whether waiting for swap is considered unsuitable for
>> MMU notifiers. If not, it would simplify the code. If so, we still need
>> to figure out a possible solution.
>>
>> Btw, I came up with another idea: disable preemption when the vhost thread
>> needs to access the memory. Then register a preempt notifier, and if the
>> vhost thread is preempted, we're sure no one will access the memory and we
>> can do the cleanup.
> I think you should use the spinlock so at least the code is obviously
> functionally correct and worry about designing some properly justified
> performance change after.
>
> Jason
A spinlock is correct, but it makes the whole series meaningless considering
it won't bring any performance improvement.
Thanks