From: Ilias Apalodimas <ilias.apalodimas@linaro.org>
To: Andy Gospodarek <andrew.gospodarek@broadcom.com>
Cc: Michael Chan <michael.chan@broadcom.com>,
	davem@davemloft.net, netdev@vger.kernel.org, hawk@kernel.org,
	ast@kernel.org, ivan.khoronzhuk@linaro.org
Subject: Re: [PATCH net-next 3/4] bnxt_en: optimized XDP_REDIRECT support
Date: Mon, 8 Jul 2019 17:51:37 +0300
Message-ID: <20190708145137.GA21894@apalos>
In-Reply-To: <20190708142606.GF87269@C02RW35GFVH8.dhcp.broadcom.net>

Hi Andy, 

> On Mon, Jul 08, 2019 at 11:28:03AM +0300, Ilias Apalodimas wrote:
> > Thanks Andy, Michael
> > 
> > > +	if (event & BNXT_REDIRECT_EVENT)
> > > +		xdp_do_flush_map();
> > > +
> > >  	if (event & BNXT_TX_EVENT) {
> > >  		struct bnxt_tx_ring_info *txr = bnapi->tx_ring;
> > >  		u16 prod = txr->tx_prod;
> > > @@ -2254,9 +2257,23 @@ static void bnxt_free_tx_skbs(struct bnxt *bp)
> > >  
> > >  		for (j = 0; j < max_idx;) {
> > >  			struct bnxt_sw_tx_bd *tx_buf = &txr->tx_buf_ring[j];
> > > -			struct sk_buff *skb = tx_buf->skb;
> > > +			struct sk_buff *skb;
> > >  			int k, last;
> > >  
> > > +			if (i < bp->tx_nr_rings_xdp &&
> > > +			    tx_buf->action == XDP_REDIRECT) {
> > > +				dma_unmap_single(&pdev->dev,
> > > +					dma_unmap_addr(tx_buf, mapping),
> > > +					dma_unmap_len(tx_buf, len),
> > > +					PCI_DMA_TODEVICE);
> > > +				xdp_return_frame(tx_buf->xdpf);
> > > +				tx_buf->action = 0;
> > > +				tx_buf->xdpf = NULL;
> > > +				j++;
> > > +				continue;
> > > +			}
> > > +
> > 
> > Can't see the whole file here and maybe I am missing something, but since you
> > optimize for that and start using page_pool, XDP_TX will be a re-synced (and
> > not remapped) buffer that can be returned to the pool and re-synced for
> > device usage.
> > Is that happening later, in the tx clean function?
> 
> Take a look at the way we treat the buffers in bnxt_rx_xdp() when we
> receive them and then in bnxt_tx_int_xdp() when the transmits have
> completed (for XDP_TX and XDP_REDIRECT).  I think we are doing what is
> proper with respect to mapping vs. sync for both cases, but I would be
> happy to be corrected.
> 

Yeah, it seems to be doing the right thing.
XDP_TX syncs correctly and reuses the buffer with bnxt_reuse_rx_data(), right?

This might be a bit confusing for someone reading the driver for the first
time, probably because you'll end up with two ways of recycling buffers.

Once a buffer gets freed on the XDP path it's either fed back to the pool, so
the next requested buffer gets served from the pool's cache (the ndo_xdp_xmit
case in this patch), or, if it was used for XDP_TX, it's synced correctly but
recycled via bnxt_reuse_rx_data(), right? Since you are moving to page_pool,
please consider having a common approach towards the recycling path. I
understand that means tracking buffer types and making sure you do the right
thing on 'tx clean'. I've done something similar in the netsec driver and I do
think this might be a good thing to add to the page_pool API.

Again, this isn't a blocker, at least for me, but you already have the buffer
type (via tx_buf->action).
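
To make that concrete, here's a rough sketch of what a unified 'tx clean'
path could look like. To be clear, this is not actual bnxt code; the helper
itself, the tx_buf->page member and the page_pool_recycle_direct() call are
just my assumptions for illustration:

	/* Hypothetical common tx-clean helper: branch on the buffer type
	 * recorded at transmit time instead of keeping two recycling
	 * schemes side by side.
	 */
	static void bnxt_clean_one_tx_buf(struct device *dev,
					  struct page_pool *pool,
					  struct bnxt_sw_tx_bd *tx_buf)
	{
		if (tx_buf->action == XDP_REDIRECT) {
			/* ndo_xdp_xmit frames were freshly mapped, so
			 * unmap and hand the frame back to its allocator.
			 */
			dma_unmap_single(dev,
					 dma_unmap_addr(tx_buf, mapping),
					 dma_unmap_len(tx_buf, len),
					 PCI_DMA_TODEVICE);
			xdp_return_frame(tx_buf->xdpf);
			tx_buf->xdpf = NULL;
		} else if (tx_buf->action == XDP_TX) {
			/* XDP_TX buffers come from the pool and were only
			 * synced for the device, so they can go straight
			 * back to the pool instead of through
			 * bnxt_reuse_rx_data().
			 */
			page_pool_recycle_direct(pool, tx_buf->page);
		}
		tx_buf->action = 0;
	}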

> > 
> > > +			skb = tx_buf->skb;
> > >  			if (!skb) {
> > >  				j++;
> > >  				continue;
> > > @@ -2517,6 +2534,13 @@ static int bnxt_alloc_rx_rings(struct bnxt *bp)
> > >  		if (rc < 0)
> > >  			return rc;
> > >  
> > > +		rc = xdp_rxq_info_reg_mem_model(&rxr->xdp_rxq,
> > > +						MEM_TYPE_PAGE_SHARED, NULL);
> > > +		if (rc) {
> > > +			xdp_rxq_info_unreg(&rxr->xdp_rxq);
> > 
> > I think you can use page_pool_free directly here (and page_pool_destroy once
> > Ivan's patchset gets merged); that's what mlx5 does, IIRC. Can we keep that
> > common, please?
> 
> That's an easy change, I can do that.
> 
> > 
> > If Ivan's patch gets merged, please note you'll have to explicitly call
> > page_pool_destroy() after calling xdp_rxq_info_unreg() in the general
> > unregister case (not the error handling here). Sorry for the confusion this
> > might bring!
> 
> Funny enough, the driver was basically doing that until page_pool_destroy
> was removed (these patches are not new).  I saw last week that there was
> discussion about adding it back, but I did not want to wait for that to be
> resolved before getting this on the list.

Fair enough

> 
> This path works as expected with the code in the tree today so it seemed
> like the correct approach to post something that is working, right?  :-)

Yes.

It will continue to work even if you don't change the call in the future.
This is more of a 'let's not spread the code' attempt, but removing and
re-adding page_pool_destroy() was/is our mess. We might as well live with the
consequences!
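
FWIW, the ordering I have in mind looks roughly like this (a sketch only,
assuming the rxq ends up registered with MEM_TYPE_PAGE_POOL as in patch 4/4,
and that page_pool_destroy() comes back with Ivan's series):

	/* Error path at ring setup: registering the mem model failed, so
	 * the core never took ownership of the pool; unregister the rxq
	 * info and free the pool ourselves.
	 */
	rc = xdp_rxq_info_reg_mem_model(&rxr->xdp_rxq,
					MEM_TYPE_PAGE_POOL, rxr->page_pool);
	if (rc) {
		xdp_rxq_info_unreg(&rxr->xdp_rxq);
		page_pool_free(rxr->page_pool);
		rxr->page_pool = NULL;
		return rc;
	}

	/* General teardown, once page_pool_destroy() is back: unregister
	 * first so the mem model drops its reference, then destroy.
	 */
	xdp_rxq_info_unreg(&rxr->xdp_rxq);
	page_pool_destroy(rxr->page_pool);
	rxr->page_pool = NULL;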

> 
> > 
> > > +			return rc;
> > > +		}
> > > +
> > >  		rc = bnxt_alloc_ring(bp, &ring->ring_mem);
> > >  		if (rc)
> > >  			return rc;
> > > @@ -10233,6 +10257,7 @@ static const struct net_device_ops bnxt_netdev_ops = {
> > [...]
> > 

Thanks!
/Ilias

Thread overview: 13+ messages
2019-07-06  7:36 [PATCH net-next 0/4] bnxt_en: Add XDP_REDIRECT support Michael Chan
2019-07-06  7:36 ` [PATCH net-next 1/4] bnxt_en: rename some xdp functions Michael Chan
2019-07-06  7:36 ` [PATCH net-next 2/4] bnxt_en: Refactor __bnxt_xmit_xdp() Michael Chan
2019-07-06  7:36 ` [PATCH net-next 3/4] bnxt_en: optimized XDP_REDIRECT support Michael Chan
2019-07-08  8:28   ` Ilias Apalodimas
2019-07-08 14:26     ` Andy Gospodarek
2019-07-08 14:51       ` Ilias Apalodimas [this message]
2019-07-08 15:24         ` Andy Gospodarek
2019-07-06  7:36 ` [PATCH net-next 4/4] bnxt_en: add page_pool support Michael Chan
2019-07-08 18:49   ` Andy Gospodarek
2019-07-06 22:26 ` [PATCH net-next 0/4] bnxt_en: Add XDP_REDIRECT support David Miller
2019-07-08 10:01   ` Toke Høiland-Jørgensen
2019-07-08 21:34 ` David Miller
