From: Ilias Apalodimas <ilias.apalodimas@linaro.org>
To: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Lorenzo Bianconi <lorenzo@kernel.org>,
	netdev@vger.kernel.org, lorenzo.bianconi@redhat.com,
	davem@davemloft.net, thomas.petazzoni@bootlin.com,
	matteo.croce@redhat.com, mw@semihalf.com
Subject: Re: [PATCH v2 net-next 4/8] net: mvneta: sync dma buffers before refilling hw queues
Date: Thu, 10 Oct 2019 10:21:57 +0300
Message-ID: <20191010072157.GA31883@apalos.home>
In-Reply-To: <20191010090831.5d6c41f2@carbon>

Hi Lorenzo, Jesper,

On Thu, Oct 10, 2019 at 09:08:31AM +0200, Jesper Dangaard Brouer wrote:
> On Thu, 10 Oct 2019 01:18:34 +0200
> Lorenzo Bianconi <lorenzo@kernel.org> wrote:
> 
> > The mvneta driver can run on non-cache-coherent devices, so it is
> > necessary to sync DMA buffers before handing them to the device in
> > order to avoid memory corruption. This patch introduces a performance
> > penalty, and a more sophisticated logic will be needed later to avoid
> > the DMA sync as much as possible.
> 
> Report with benchmarks here:
>  https://github.com/xdp-project/xdp-project/blob/master/areas/arm64/board_espressobin08_bench_xdp.org
> 
> We are testing this on an Espressobin board, and we do see a huge
> performance cost associated with this DMA sync. Regardless, we still
> want to get this patch merged, to move forward with XDP support for
> this driver.
> 
> We promised each other (on IRC freenode #xdp) that we will follow up
> with a solution/mitigation after this patchset is merged.  There are
> several ideas that likely should get separate upstream review.

I think mentioning that the patch *introduces* a performance penalty is a bit
misleading.
The DMA sync does have a performance penalty, but it was always there.
The initial driver mapped the buffers with DMA_FROM_DEVICE, which implies a
sync as well. In page_pool we do not explicitly sync buffers on allocation
and leave that up to the driver author (while allowing a few tricks to avoid
it), hence this patch is needed.
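
To make the point concrete, here is a rough sketch of the two refill paths.
This is not mvneta code: the helper names refill_map_page()/refill_from_pool()
and the buf_size argument are made up for illustration, only the dma_* and
page_pool_* calls are real kernel APIs.

#include <linux/dma-mapping.h>
#include <net/page_pool.h>

/* Old-style refill: the driver maps the page itself.  On a
 * non-cache-coherent platform dma_map_page() with DMA_FROM_DEVICE
 * already performs the CPU->device sync, so that cost was always
 * paid at map time (callers still check dma_mapping_error()).
 */
static dma_addr_t refill_map_page(struct device *dev, struct page *page,
				  unsigned int buf_size)
{
	return dma_map_page(dev, page, 0, buf_size, DMA_FROM_DEVICE);
}

/* page_pool-style refill: the pool maps the page once when it is
 * allocated but does not sync it, so the driver has to sync
 * explicitly before handing the buffer back to the NIC.
 */
static dma_addr_t refill_from_pool(struct device *dev,
				   struct page_pool *pool,
				   struct page *page,
				   unsigned int buf_size)
{
	dma_addr_t dma = page_pool_get_dma_addr(page);

	dma_sync_single_for_device(dev, dma, buf_size,
				   page_pool_get_dma_dir(pool));
	return dma;
}

The second path is essentially what the hunk below adds to
mvneta_rx_refill().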

In any case, what Jesper mentions is correct: we do have a plan :)

> 
> > Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
> 
> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
> 
> > ---
> >  drivers/net/ethernet/marvell/mvneta.c | 4 ++++
> >  1 file changed, 4 insertions(+)
> > 
> > diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
> > index 79a6bac0192b..ba4aa9bbc798 100644
> > --- a/drivers/net/ethernet/marvell/mvneta.c
> > +++ b/drivers/net/ethernet/marvell/mvneta.c
> > @@ -1821,6 +1821,7 @@ static int mvneta_rx_refill(struct mvneta_port *pp,
> >  			    struct mvneta_rx_queue *rxq,
> >  			    gfp_t gfp_mask)
> >  {
> > +	enum dma_data_direction dma_dir;
> >  	dma_addr_t phys_addr;
> >  	struct page *page;
> >  
> > @@ -1830,6 +1831,9 @@ static int mvneta_rx_refill(struct mvneta_port *pp,
> >  		return -ENOMEM;
> >  
> >  	phys_addr = page_pool_get_dma_addr(page) + pp->rx_offset_correction;
> > +	dma_dir = page_pool_get_dma_dir(rxq->page_pool);
> > +	dma_sync_single_for_device(pp->dev->dev.parent, phys_addr,
> > +				   MVNETA_MAX_RX_BUF_SIZE, dma_dir);
> >  	mvneta_rx_desc_fill(rx_desc, phys_addr, page, rxq);
> >  
> >  	return 0;
> 
> 
> 
> -- 
> Best regards,
>   Jesper Dangaard Brouer
>   MSc.CS, Principal Kernel Engineer at Red Hat
>   LinkedIn: http://www.linkedin.com/in/brouer

Thanks!
/Ilias
