From: "Michael S. Tsirkin" <mst@redhat.com> To: Jason Wang <jasowang@redhat.com> Cc: linux-kernel@vger.kernel.org, Jakub Kicinski <kuba@kernel.org>, Wei Wang <weiwan@google.com>, David Miller <davem@davemloft.net>, netdev@vger.kernel.org, Willem de Bruijn <willemb@google.com>, virtualization@lists.linux-foundation.org Subject: Re: [PATCH RFC v2 3/4] virtio_net: move tx vq operation under tx queue lock Date: Tue, 13 Apr 2021 10:02:55 -0400 [thread overview] Message-ID: <20210413100222-mutt-send-email-mst@kernel.org> (raw) In-Reply-To: <805053bf-960f-3c34-ce23-012d121ca937@redhat.com> On Tue, Apr 13, 2021 at 04:54:42PM +0800, Jason Wang wrote: > > 在 2021/4/13 下午1:47, Michael S. Tsirkin 写道: > > It's unsafe to operate a vq from multiple threads. > > Unfortunately this is exactly what we do when invoking > > clean tx poll from rx napi. > > As a fix move everything that deals with the vq to under tx lock. > > > > Signed-off-by: Michael S. Tsirkin <mst@redhat.com> > > --- > > drivers/net/virtio_net.c | 22 +++++++++++++++++++++- > > 1 file changed, 21 insertions(+), 1 deletion(-) > > > > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c > > index 16d5abed582c..460ccdbb840e 100644 > > --- a/drivers/net/virtio_net.c > > +++ b/drivers/net/virtio_net.c > > @@ -1505,6 +1505,8 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget) > > struct virtnet_info *vi = sq->vq->vdev->priv; > > unsigned int index = vq2txq(sq->vq); > > struct netdev_queue *txq; > > + int opaque; > > + bool done; > > if (unlikely(is_xdp_raw_buffer_queue(vi, index))) { > > /* We don't need to enable cb for XDP */ > > @@ -1514,10 +1516,28 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget) > > txq = netdev_get_tx_queue(vi->dev, index); > > __netif_tx_lock(txq, raw_smp_processor_id()); > > + virtqueue_disable_cb(sq->vq); > > free_old_xmit_skbs(sq, true); > > + > > + opaque = virtqueue_enable_cb_prepare(sq->vq); > > + > > + done = napi_complete_done(napi, 0); > > + > > + if (!done) > > + virtqueue_disable_cb(sq->vq); > > + > > __netif_tx_unlock(txq); > > - virtqueue_napi_complete(napi, sq->vq, 0); > > > So I wonder why not simply move __netif_tx_unlock() after > virtqueue_napi_complete()? > > Thanks > Because that calls tx poll which also takes tx lock internally ... > > + if (done) { > > + if (unlikely(virtqueue_poll(sq->vq, opaque))) { > > + if (napi_schedule_prep(napi)) { > > + __netif_tx_lock(txq, raw_smp_processor_id()); > > + virtqueue_disable_cb(sq->vq); > > + __netif_tx_unlock(txq); > > + __napi_schedule(napi); > > + } > > + } > > + } > > if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS) > > netif_tx_wake_queue(txq);