From: Ilias Apalodimas <ilias.apalodimas@linaro.org>
To: Jonathan Lemon <jlemon@flugsvamp.com>
Cc: Lorenzo Bianconi <lorenzo@kernel.org>,
	netdev@vger.kernel.org, lorenzo.bianconi@redhat.com,
	davem@davemloft.net, thomas.petazzoni@bootlin.com,
	brouer@redhat.com, matteo.croce@redhat.com
Subject: Re: [PATCH net-next 3/3] net: mvneta: get rid of huge DMA sync in mvneta_rx_refill
Date: Thu, 14 Nov 2019 20:18:50 +0200
Message-ID: <20191114181850.GA42770@PC192.168.49.172>
In-Reply-To: <79304C3A-EC21-4D15-8D03-BA035D9E0F4C@flugsvamp.com>

Hi Jonathan,

On Thu, Nov 14, 2019 at 10:14:19AM -0800, Jonathan Lemon wrote:
> On 10 Nov 2019, at 4:09, Lorenzo Bianconi wrote:
> 
> > Get rid of costly dma_sync_single_for_device in mvneta_rx_refill
> > since now the driver can let page_pool API to manage needed DMA
> > sync with a proper size.
> > 
> > - XDP_DROP DMA sync managed by mvneta driver:	~420Kpps
> > - XDP_DROP DMA sync managed by page_pool API:	~595Kpps
> > 
> > Tested-by: Matteo Croce <mcroce@redhat.com>
> > Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
> > ---
> >  drivers/net/ethernet/marvell/mvneta.c | 25 +++++++++++++++----------
> >  1 file changed, 15 insertions(+), 10 deletions(-)
> > 
> > diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
> > index ed93eecb7485..591d580c68b4 100644
> > --- a/drivers/net/ethernet/marvell/mvneta.c
> > +++ b/drivers/net/ethernet/marvell/mvneta.c
> > @@ -1846,7 +1846,6 @@ static int mvneta_rx_refill(struct mvneta_port *pp,
> >  			    struct mvneta_rx_queue *rxq,
> >  			    gfp_t gfp_mask)
> >  {
> > -	enum dma_data_direction dma_dir;
> >  	dma_addr_t phys_addr;
> >  	struct page *page;
> > 
> > @@ -1856,9 +1855,6 @@ static int mvneta_rx_refill(struct mvneta_port *pp,
> >  		return -ENOMEM;
> > 
> >  	phys_addr = page_pool_get_dma_addr(page) + pp->rx_offset_correction;
> > -	dma_dir = page_pool_get_dma_dir(rxq->page_pool);
> > -	dma_sync_single_for_device(pp->dev->dev.parent, phys_addr,
> > -				   MVNETA_MAX_RX_BUF_SIZE, dma_dir);
> >  	mvneta_rx_desc_fill(rx_desc, phys_addr, page, rxq);
> > 
> >  	return 0;
> > @@ -2097,8 +2093,10 @@ mvneta_run_xdp(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
> >  		err = xdp_do_redirect(pp->dev, xdp, prog);
> >  		if (err) {
> >  			ret = MVNETA_XDP_DROPPED;
> > -			page_pool_recycle_direct(rxq->page_pool,
> > -						 virt_to_head_page(xdp->data));
> > +			__page_pool_put_page(rxq->page_pool,
> > +					virt_to_head_page(xdp->data),
> > +					xdp->data_end - xdp->data_hard_start,
> > +					true);
> 
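
For reference, the size-limited put above only works because patch 2/3
lets the driver create the pool with a sync-for-device flag plus bounds
on what gets synced. A minimal sketch of such a setup, using the flag
and field names proposed in this series (PP_FLAG_DMA_SYNC_DEV, max_len,
offset) and mvneta names for the rest; treat it as illustrative rather
than the final API:

	#include <net/page_pool.h>

	struct page_pool_params pp_params = {
		.order = 0,
		.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
		.pool_size = size,
		.nid = cpu_to_node(0),
		.dev = pp->dev->dev.parent,
		/* XDP may rewrite the frame, so map bidirectionally */
		.dma_dir = pp->xdp_prog ? DMA_BIDIRECTIONAL :
					  DMA_FROM_DEVICE,
		/* sync starts at the offset the HW actually writes to */
		.offset = pp->rx_offset_correction,
		/* upper bound on how much is ever synced per recycle */
		.max_len = MVNETA_MAX_RX_BUF_SIZE,
	};

	rxq->page_pool = page_pool_create(&pp_params);

With that in place, the __page_pool_put_page() call above only has to
sync xdp->data_end - xdp->data_hard_start bytes before the page goes
back to the device, instead of the full MVNETA_MAX_RX_BUF_SIZE.
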
> I just have a clarifying question.  Here, the RX buffer was received
> and then dma_sync'd to the CPU.  Now, it is going to be recycled for
> RX again; does it actually need to be sync'd back to the device?
> 
> I'm asking since several of the other network drivers (mellanox, for
> example) don't resync the buffer back to the device when recycling it
> for reuse.

I think that if no one apart from the NIC touches the memory, you don't
have any pending cache writes to account for.
So since the buffer is completely under the driver's control, as long as
you can guarantee no one is going to write to it, you can hand it back
to the device without syncing. (The BPF use case, where the BPF program
changes the packet, breaks this rule, for example.)
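
To make that rule concrete, here is a hypothetical recycle helper for a
non-coherent device; the names are made up for illustration and are not
the page_pool internals:

	#include <linux/dma-mapping.h>

	static void recycle_rx_buf(struct device *dev, dma_addr_t dma,
				   unsigned int touched_len, bool cpu_wrote)
	{
		/*
		 * A CPU write (e.g. a BPF program rewriting headers)
		 * leaves dirty cache lines that could later be evicted
		 * on top of freshly DMA'd data, so sync them out before
		 * handing the buffer back.
		 */
		if (cpu_wrote)
			dma_sync_single_for_device(dev, dma, touched_len,
						   DMA_BIDIRECTIONAL);

		/* No CPU writes: no dirty lines, safe to reuse as-is. */
	}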

Thanks
/Ilias
> -- 
> Jonathan


Thread overview: 21+ messages
2019-11-10 12:09 [PATCH net-next 0/3] add DMA sync capability to page_pool API Lorenzo Bianconi
2019-11-10 12:09 ` [PATCH net-next 1/3] net: mvneta: rely on page_pool_recycle_direct in mvneta_run_xdp Lorenzo Bianconi
2019-11-10 12:09 ` [PATCH net-next 2/3] net: page_pool: add the possibility to sync DMA memory for non-coherent devices Lorenzo Bianconi
2019-11-11 16:48   ` Jesper Dangaard Brouer
2019-11-11 19:11     ` Lorenzo Bianconi
2019-11-13  8:29   ` Jesper Dangaard Brouer
2019-11-14 18:48   ` Jonathan Lemon
2019-11-14 18:53     ` Ilias Apalodimas
2019-11-14 20:27       ` Jonathan Lemon
2019-11-14 20:42         ` Ilias Apalodimas
2019-11-14 21:04           ` Jonathan Lemon
2019-11-14 21:43             ` Jesper Dangaard Brouer
2019-11-15  7:05               ` Ilias Apalodimas
2019-11-15  7:49                 ` Lorenzo Bianconi
2019-11-15  8:03                   ` Ilias Apalodimas
2019-11-15 16:47                     ` Jonathan Lemon
2019-11-15 16:53                       ` Lorenzo Bianconi
2019-11-15  7:17             ` Ilias Apalodimas
2019-11-10 12:09 ` [PATCH net-next 3/3] net: mvneta: get rid of huge DMA sync in mvneta_rx_refill Lorenzo Bianconi
2019-11-14 18:14   ` Jonathan Lemon
2019-11-14 18:18     ` Ilias Apalodimas [this message]
