From: "Michael S. Tsirkin" <mst@redhat.com> To: Lee Jones <lee.jones@linaro.org> Cc: Jason Wang <jasowang@redhat.com>, linux-kernel <linux-kernel@vger.kernel.org>, kvm <kvm@vger.kernel.org>, virtualization <virtualization@lists.linux-foundation.org>, netdev <netdev@vger.kernel.org>, stable@vger.kernel.org, syzbot+adc3cb32385586bec859@syzkaller.appspotmail.com Subject: Re: [PATCH 1/1] vhost: Protect the virtqueue from being cleared whilst still in use Date: Tue, 8 Mar 2022 06:06:47 -0500 [thread overview] Message-ID: <20220308060542-mutt-send-email-mst@kernel.org> (raw) In-Reply-To: <YicO+aF4VhaBYNqK@google.com> On Tue, Mar 08, 2022 at 08:08:25AM +0000, Lee Jones wrote: > On Tue, 08 Mar 2022, Jason Wang wrote: > > > On Tue, Mar 8, 2022 at 3:18 AM Lee Jones <lee.jones@linaro.org> wrote: > > > > > > vhost_vsock_handle_tx_kick() already holds the mutex during its call > > > to vhost_get_vq_desc(). All we have to do here is take the same lock > > > during virtqueue clean-up and we mitigate the reported issues. > > > > > > Also WARN() as a precautionary measure. The purpose of this is to > > > capture possible future race conditions which may pop up over time. > > > > > > Link: https://syzkaller.appspot.com/bug?extid=279432d30d825e63ba00 > > > > > > Cc: <stable@vger.kernel.org> > > > Reported-by: syzbot+adc3cb32385586bec859@syzkaller.appspotmail.com > > > Signed-off-by: Lee Jones <lee.jones@linaro.org> > > > --- > > > drivers/vhost/vhost.c | 10 ++++++++++ > > > 1 file changed, 10 insertions(+) > > > > > > diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c > > > index 59edb5a1ffe28..ef7e371e3e649 100644 > > > --- a/drivers/vhost/vhost.c > > > +++ b/drivers/vhost/vhost.c > > > @@ -693,6 +693,15 @@ void vhost_dev_cleanup(struct vhost_dev *dev) > > > int i; > > > > > > for (i = 0; i < dev->nvqs; ++i) { > > > + /* No workers should run here by design. However, races have > > > + * previously occurred where drivers have been unable to flush > > > + * all work properly prior to clean-up. Without a successful > > > + * flush the guest will malfunction, but avoiding host memory > > > + * corruption in those cases does seem preferable. > > > + */ > > > + WARN_ON(mutex_is_locked(&dev->vqs[i]->mutex)); > > > + > > > > I don't get how this can help, the mutex could be grabbed in the > > middle of the above and below line. > > The worst that happens in this slim scenario is we miss a warning. > The mutexes below will still function as expected and prevent possible > memory corruption. maybe. or maybe corruption already happened and this is the fallout. > > > + mutex_lock(&dev->vqs[i]->mutex); > > > if (dev->vqs[i]->error_ctx) > > > eventfd_ctx_put(dev->vqs[i]->error_ctx); > > > if (dev->vqs[i]->kick) > > > @@ -700,6 +709,7 @@ void vhost_dev_cleanup(struct vhost_dev *dev) > > > if (dev->vqs[i]->call_ctx.ctx) > > > eventfd_ctx_put(dev->vqs[i]->call_ctx.ctx); > > > vhost_vq_reset(dev, dev->vqs[i]); > > > + mutex_unlock(&dev->vqs[i]->mutex); > > > } > > > > I'm not sure it's correct to assume some behaviour of a buggy device. > > For the device mutex, we use that to protect more than just err/call > > and vq. > > When I authored this, I did so as *the* fix. However, since the cause > of today's crash has now been patched, this has become a belt and > braces solution. Michael's addition of the WARN() also has the > benefit of providing us with an early warning system for future > breakages. Personally, I think it's kinda neat. 
> > -- > Lee Jones [李琼斯] > Principal Technical Lead - Developer Services > Linaro.org │ Open source software for Arm SoCs > Follow Linaro: Facebook | Twitter | Blog
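[Editor's note on the race Jason describes: it is the classic check-then-act
pattern. mutex_is_locked() only samples the lock state, so another thread can
take the mutex between the WARN_ON() and the mutex_lock() that follows, and
the diagnostic is silently missed; the mutex_lock() itself still serializes
the clean-up, which is the actual protection. Below is a minimal userspace
sketch of the same pattern, assuming POSIX threads. The names (vq_mutex,
vq_cleanup) are illustrative, not vhost's, and trylock-then-lock stands in
for the kernel's read-only check followed by an unconditional lock.]

	/*
	 * Userspace analogue of the WARN_ON + mutex_lock pattern in the
	 * patch above. The "is it locked?" test is purely advisory:
	 * a racing thread may grab the mutex right after the test, so
	 * the warning can be missed, but nothing worse than a missed
	 * diagnostic results -- the lock still guards the tear-down.
	 */
	#include <pthread.h>
	#include <stdio.h>

	static pthread_mutex_t vq_mutex = PTHREAD_MUTEX_INITIALIZER;

	static void vq_cleanup(void)
	{
		/* Advisory check, like WARN_ON(mutex_is_locked(...)). */
		if (pthread_mutex_trylock(&vq_mutex) != 0) {
			fprintf(stderr,
				"warning: vq mutex still held at clean-up\n");
			/* Wait for the straggling worker to finish. */
			pthread_mutex_lock(&vq_mutex);
		}
		/* ... release resources while holding the lock ... */
		pthread_mutex_unlock(&vq_mutex);
	}

	int main(void)
	{
		vq_cleanup();
		return 0;
	}

[As in the kernel patch, correctness rests entirely on the lock acquisition;
the check only adds an early-warning diagnostic for flushes that failed.]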