From: Sean Christopherson <sean.j.christopherson@intel.com>
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Jason Wang <jasowang@redhat.com>,
	"David S. Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>,
	Alexei Starovoitov <ast@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>,
	Jesper Dangaard Brouer <hawk@kernel.org>,
	John Fastabend <john.fastabend@gmail.com>,
	virtualization@lists.linux-foundation.org,
	netdev@vger.kernel.org, bpf@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	Jesper Dangaard Brouer <brouer@redhat.com>
Subject: Re: [PATCH] virtio_net: Unregister and re-register xdp_rxq across freeze/restore
Date: Mon, 8 Jun 2020 14:09:06 -0700
Message-ID: <20200608210906.GG8223@linux.intel.com>
In-Reply-To: <20200607091542-mutt-send-email-mst@kernel.org>

On Sun, Jun 07, 2020 at 09:23:03AM -0400, Michael S. Tsirkin wrote:
> On Fri, Jun 05, 2020 at 02:46:24PM -0700, Sean Christopherson wrote:
> > @@ -1480,17 +1495,10 @@ static int virtnet_open(struct net_device *dev)
> >  			if (!try_fill_recv(vi, &vi->rq[i], GFP_KERNEL))
> >  				schedule_delayed_work(&vi->refill, 0);
> >  
> > -		err = xdp_rxq_info_reg(&vi->rq[i].xdp_rxq, dev, i);
> > +		err = virtnet_reg_xdp(&vi->rq[i].xdp_rxq, dev, i);
> >  		if (err < 0)
> >  			return err;
> >  
> > -		err = xdp_rxq_info_reg_mem_model(&vi->rq[i].xdp_rxq,
> > -						 MEM_TYPE_PAGE_SHARED, NULL);
> > -		if (err < 0) {
> > -			xdp_rxq_info_unreg(&vi->rq[i].xdp_rxq);
> > -			return err;
> > -		}
> > -
> >  		virtnet_napi_enable(vi->rq[i].vq, &vi->rq[i].napi);
> >  		virtnet_napi_tx_enable(vi, vi->sq[i].vq, &vi->sq[i].napi);
> >  	}
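
(Aside for readers of this excerpt: the body of virtnet_reg_xdp() isn't
quoted here.  Judging from the two calls it replaces, it presumably looks
something like the sketch below; the exact implementation is an assumption.)

static int virtnet_reg_xdp(struct xdp_rxq_info *xdp_rxq,
			   struct net_device *dev, int queue_index)
{
	int err;

	/* Register the rxq info, then its memory model; if the latter
	 * fails, undo the registration so the caller sees a clean state,
	 * mirroring the error handling removed above.
	 */
	err = xdp_rxq_info_reg(xdp_rxq, dev, queue_index);
	if (err < 0)
		return err;

	err = xdp_rxq_info_reg_mem_model(xdp_rxq, MEM_TYPE_PAGE_SHARED, NULL);
	if (err < 0)
		xdp_rxq_info_unreg(xdp_rxq);

	return err;
}
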
> > @@ -2306,6 +2314,7 @@ static void virtnet_freeze_down(struct virtio_device *vdev)
> >  
> >  	if (netif_running(vi->dev)) {
> >  		for (i = 0; i < vi->max_queue_pairs; i++) {
> > +			xdp_rxq_info_unreg(&vi->rq[i].xdp_rxq);
> >  			napi_disable(&vi->rq[i].napi);
> >  			virtnet_napi_tx_disable(&vi->sq[i].napi);
> 
> I suspect the right thing to do is to first disable all NAPI,
> then play with XDP. Generally cleanup in the reverse order
> of init is a good idea.

Hmm, I was simply following virtnet_close().  Actually, the entire loop
could be factored out into a separate helper.  Perhaps do that as part of
the fix, and then invert the ordering in a separate patch?
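
Something like this (helper name hypothetical, untested) would fold in the
reversed ordering at the same time:

static void virtnet_disable_queue_pair(struct virtnet_info *vi, int i)
{
	/* Tear down in the reverse of the order virtnet_open() brings
	 * things up: TX NAPI, RX NAPI, then the XDP rxq registration.
	 */
	virtnet_napi_tx_disable(&vi->sq[i].napi);
	napi_disable(&vi->rq[i].napi);
	xdp_rxq_info_unreg(&vi->rq[i].xdp_rxq);
}

virtnet_freeze_down() and virtnet_close() could then share the helper.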

> >  		}
> > @@ -2313,6 +2322,8 @@ static void virtnet_freeze_down(struct virtio_device *vdev)
> >  }
> >  
> >  static int init_vqs(struct virtnet_info *vi);
> > +static void virtnet_del_vqs(struct virtnet_info *vi);
> > +static void free_receive_page_frags(struct virtnet_info *vi);
> 
> I'd really rather we reordered code so forward decls are not necessary.

Yeah, no argument from me.  Would you prefer the reordering in a separate
patch on top, e.g. to simplify potential backporting?

> >  static int virtnet_restore_up(struct virtio_device *vdev)
> >  {
> > @@ -2331,6 +2342,10 @@ static int virtnet_restore_up(struct virtio_device *vdev)
> >  				schedule_delayed_work(&vi->refill, 0);
> >  
> >  		for (i = 0; i < vi->max_queue_pairs; i++) {
> > +			err = virtnet_reg_xdp(&vi->rq[i].xdp_rxq, vi->dev, i);
> > +			if (err)
> > +				goto free_vqs;
> > +
> >  			virtnet_napi_enable(vi->rq[i].vq, &vi->rq[i].napi);
> >  			virtnet_napi_tx_enable(vi, vi->sq[i].vq,
> >  					       &vi->sq[i].napi);
> > @@ -2340,6 +2355,12 @@ static int virtnet_restore_up(struct virtio_device *vdev)
> >  	netif_tx_lock_bh(vi->dev);
> >  	netif_device_attach(vi->dev);
> >  	netif_tx_unlock_bh(vi->dev);
> > +	return 0;
> > +
> > +free_vqs:
> > +	cancel_delayed_work_sync(&vi->refill);
> > +	free_receive_page_frags(vi);
> > +	virtnet_del_vqs(vi);
> 
> 
> I am not sure this is safe to do after device-ready.
> 
> Can reg xdp happen before device ready?

From a code perspective, I don't see anything that will explode, but I have
no idea if that's correct/sane behavior.

FWIW, the xdp error handling in virtnet_open() also looks bizarre to me,
e.g. it bails in the middle of the loop without doing any cleanup.  I assume
virtnet_close() wouldn't be called if open failed?  But I can't determine
whether or not that holds true from code inspection; there are too many
call sites that lead to open and close.
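
For illustration only, an unwinding error path in virtnet_open() might look
something like this (untested sketch, refill handling elided):

	for (i = 0; i < vi->max_queue_pairs; i++) {
		/* ... try_fill_recv()/refill scheduling as before ... */

		err = virtnet_reg_xdp(&vi->rq[i].xdp_rxq, dev, i);
		if (err < 0)
			goto err_unwind;

		virtnet_napi_enable(vi->rq[i].vq, &vi->rq[i].napi);
		virtnet_napi_tx_enable(vi, vi->sq[i].vq, &vi->sq[i].napi);
	}
	return 0;

err_unwind:
	/* Roll back only the queue pairs that were fully brought up. */
	while (--i >= 0) {
		virtnet_napi_tx_disable(&vi->sq[i].napi);
		napi_disable(&vi->rq[i].napi);
		xdp_rxq_info_unreg(&vi->rq[i].xdp_rxq);
	}
	return err;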
