From: "Michael S. Tsirkin" <mst@redhat.com>
To: Willem de Bruijn <willemdebruijn.kernel@gmail.com>
Cc: Jason Wang <jasowang@redhat.com>,
	linux-kernel <linux-kernel@vger.kernel.org>,
	Jakub Kicinski <kuba@kernel.org>, Wei Wang <weiwan@google.com>,
	David Miller <davem@davemloft.net>,
	Network Development <netdev@vger.kernel.org>,
	virtualization <virtualization@lists.linux-foundation.org>
Subject: Re: [PATCH v3 1/4] virtio_net: move tx vq operation under tx queue lock
Date: Wed, 9 Jun 2021 18:03:59 -0400	[thread overview]
Message-ID: <20210609175825-mutt-send-email-mst@kernel.org> (raw)
In-Reply-To: <CA+FuTSccMS4qEyexAuzjcuevS8KwaruJih5_0hgiOFz4BpDHzA@mail.gmail.com>

On Fri, May 28, 2021 at 06:25:11PM -0400, Willem de Bruijn wrote:
> On Wed, May 26, 2021 at 11:41 PM Jason Wang <jasowang@redhat.com> wrote:
> >
> >
> > On 2021/5/26 4:24 PM, Michael S. Tsirkin wrote:
> > > It's unsafe to operate a vq from multiple threads.
> > > Unfortunately this is exactly what we do when invoking
> > > clean tx poll from rx napi.
> > > Same happens with napi-tx even without the
> > > opportunistic cleaning from the receive interrupt: that races
> > > with processing the vq in start_xmit.
> > >
> > > As a fix, move everything that deals with the vq under the tx lock.
> 
> This patch also disables callbacks during free_old_xmit_skbs
> processing on tx interrupt. Should that be a separate commit, with its
> own explanation?
> > >
> > > Fixes: b92f1e6751a6 ("virtio-net: transmit napi")
> > > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > > ---
> > >   drivers/net/virtio_net.c | 22 +++++++++++++++++++++-
> > >   1 file changed, 21 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > > index ac0c143f97b4..12512d1002ec 100644
> > > --- a/drivers/net/virtio_net.c
> > > +++ b/drivers/net/virtio_net.c
> > > @@ -1508,6 +1508,8 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget)
> > >       struct virtnet_info *vi = sq->vq->vdev->priv;
> > >       unsigned int index = vq2txq(sq->vq);
> > >       struct netdev_queue *txq;
> > > +     int opaque;
> > > +     bool done;
> > >
> > >       if (unlikely(is_xdp_raw_buffer_queue(vi, index))) {
> > >               /* We don't need to enable cb for XDP */
> > > @@ -1517,10 +1519,28 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget)
> > >
> > >       txq = netdev_get_tx_queue(vi->dev, index);
> > >       __netif_tx_lock(txq, raw_smp_processor_id());
> > > +     virtqueue_disable_cb(sq->vq);
> > >       free_old_xmit_skbs(sq, true);
> > > +
> > > +     opaque = virtqueue_enable_cb_prepare(sq->vq);
> > > +
> > > +     done = napi_complete_done(napi, 0);
> > > +
> > > +     if (!done)
> > > +             virtqueue_disable_cb(sq->vq);
> > > +
> > >       __netif_tx_unlock(txq);
> > >
> > > -     virtqueue_napi_complete(napi, sq->vq, 0);
> > > +     if (done) {
> > > +             if (unlikely(virtqueue_poll(sq->vq, opaque))) {
> 
> Should this also be inside the lock, as it operates on vq?

No, vq poll is OK outside of locks; it's atomic.

> Is there anything that is not allowed to run with the lock held?
> > > +                     if (napi_schedule_prep(napi)) {
> > > +                             __netif_tx_lock(txq, raw_smp_processor_id());
> > > +                             virtqueue_disable_cb(sq->vq);
> > > +                             __netif_tx_unlock(txq);
> > > +                             __napi_schedule(napi);
> > > +                     }
> > > +             }
> > > +     }
> >
> >
> > Interesting, this looks somehow like an open-coded version of
> > virtqueue_napi_complete(). I wonder if we can simply keep using
> > virtqueue_napi_complete() by moving the __netif_tx_unlock() after
> > it:
> >
> > netif_tx_lock(txq);
> > free_old_xmit_skbs(sq, true);
> > virtqueue_napi_complete(napi, sq->vq, 0);
> > __netif_tx_unlock(txq);
> 
> Agreed. And the subsequent block
> 
>        if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS)
>                netif_tx_wake_queue(txq);
> 
> as well.

Yes, I thought I saw something here that can't be called with the tx lock
held, but I no longer see it. Will do.

> >
> > Thanks
> >
> >
> > >
> > >       if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS)
> > >               netif_tx_wake_queue(txq);
> >



Thread overview: 49+ messages

2021-05-26  8:24 [PATCH v3 0/4] virtio net: spurious interrupt related fixes Michael S. Tsirkin
2021-05-26  8:24 ` [PATCH v3 1/4] virtio_net: move tx vq operation under tx queue lock Michael S. Tsirkin
2021-05-27  3:41   ` Jason Wang
2021-05-28 22:25     ` Willem de Bruijn
2021-06-09 22:03       ` Michael S. Tsirkin [this message]
2021-05-26  8:24 ` [PATCH v3 2/4] virtio_net: move txq wakeups under tx q lock Michael S. Tsirkin
2021-05-27  3:48   ` Jason Wang
2021-05-26  8:24 ` [PATCH v3 3/4] virtio: fix up virtio_disable_cb Michael S. Tsirkin
2021-05-27  4:01   ` Jason Wang
2023-03-30  6:07   ` Xuan Zhuo
2023-03-30  6:44     ` Michael S. Tsirkin
2023-03-30  6:54       ` Xuan Zhuo
2023-03-30 14:04         ` Michael S. Tsirkin
2023-03-31  3:38           ` Xuan Zhuo
2021-05-26  8:24 ` [PATCH v3 4/4] virtio_net: disable cb aggressively Michael S. Tsirkin
2021-05-26 15:15   ` Eric Dumazet
2021-05-26 21:22     ` Willem de Bruijn
2021-05-26 19:39   ` Jakub Kicinski
2021-05-27  4:09   ` Jason Wang
2023-01-16 13:41   ` Laurent Vivier
2023-01-17  3:48     ` Jason Wang
2021-05-26 15:34 ` [PATCH v3 0/4] virtio net: spurious interrupt related fixes Willem de Bruijn
2021-06-01  2:53   ` Willem de Bruijn
2021-06-09 21:36     ` Willem de Bruijn
2021-06-09 22:59       ` Willem de Bruijn