From: Ilias Apalodimas <ilias.apalodimas@linaro.org>
To: Lorenzo Bianconi <lorenzo@kernel.org>
Cc: netdev@vger.kernel.org, lorenzo.bianconi@redhat.com,
davem@davemloft.net, thomas.petazzoni@bootlin.com,
brouer@redhat.com, matteo.croce@redhat.com, mw@semihalf.com
Subject: Re: [PATCH v2 net-next 3/8] net: mvneta: rely on build_skb in mvneta_rx_swbm poll routine
Date: Thu, 10 Oct 2019 10:16:52 +0300
Message-ID: <20191010071652.GA31160@apalos.home>
In-Reply-To: <e9ad915633d1e7e02d4b9021761d325d4b130101.1570662004.git.lorenzo@kernel.org>
Hi Lorenzo,

On Thu, Oct 10, 2019 at 01:18:33AM +0200, Lorenzo Bianconi wrote:
> Refactor the mvneta_rx_swbm code, introducing the mvneta_swbm_rx_frame
> and mvneta_swbm_add_rx_fragment routines. Rely on build_skb() in order
> to allocate the skb, since the previous patch introduced buffer
> recycling using the page_pool API.
> This patch also fixes an issue in the original driver, where DMA
> buffers were accessed before the DMA sync.
>
> Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
> ---
> drivers/net/ethernet/marvell/mvneta.c | 198 ++++++++++++++------------
> 1 file changed, 104 insertions(+), 94 deletions(-)
>
> diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
> index 31cecc1ed848..79a6bac0192b 100644
> --- a/drivers/net/ethernet/marvell/mvneta.c
> +++ b/drivers/net/ethernet/marvell/mvneta.c
> @@ -323,6 +323,11 @@
>  			 ETH_HLEN + ETH_FCS_LEN, \
>  			 cache_line_size())
>
> +#define MVNETA_SKB_PAD	(SKB_DATA_ALIGN(sizeof(struct skb_shared_info) + \
> +			 NET_SKB_PAD))
> +#define MVNETA_SKB_SIZE(len)	(SKB_DATA_ALIGN(len) + MVNETA_SKB_PAD)
> +#define MVNETA_MAX_RX_BUF_SIZE	(PAGE_SIZE - MVNETA_SKB_PAD)
> +
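
FWIW, sanity-checking the arithmetic here (illustrative numbers, not
taken from the patch): with 4K pages, a NET_SKB_PAD of 64 and a
skb_shared_info of roughly 320 bytes, MVNETA_SKB_PAD =
SKB_DATA_ALIGN(320 + 64) = 384, so MVNETA_MAX_RX_BUF_SIZE =
4096 - 384 = 3712 bytes of frame data per page. That is the most a
buffer can carry while build_skb() can still fit the shared info into
the same page.
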
>  #define IS_TSO_HEADER(txq, addr) \
>  	((addr >= txq->tso_hdrs_phys) && \
>  	 (addr < txq->tso_hdrs_phys + txq->size * TSO_HEADER_SIZE))
> @@ -646,7 +651,6 @@ static int txq_number = 8;
> static int rxq_def;
>
> static int rx_copybreak __read_mostly = 256;
> -static int rx_header_size __read_mostly = 128;
>
> /* HW BM need that each port be identify by a unique ID */
> static int global_port_id;
[...]
> +	if (rxq->left_size > MVNETA_MAX_RX_BUF_SIZE) {
> +		len = MVNETA_MAX_RX_BUF_SIZE;
> +		data_len = len;
> +	} else {
> +		len = rxq->left_size;
> +		data_len = len - ETH_FCS_LEN;
> +	}
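
(For context: only the last chunk of a frame still contains the 4-byte
Ethernet FCS, hence the ETH_FCS_LEN subtraction in the else branch.
With illustrative sizes: a 4000-byte frame on 4K pages arrives as a
3712-byte fragment that is all data, plus a 288-byte tail of which
data_len = 288 - 4 = 284 bytes.)
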
> +	dma_dir = page_pool_get_dma_dir(rxq->page_pool);
> +	dma_sync_single_range_for_cpu(dev->dev.parent,
> +				      rx_desc->buf_phys_addr, 0,
> +				      len, dma_dir);
> +	if (data_len > 0) {
> +		/* refill descriptor with new buffer later */
> +		skb_add_rx_frag(rxq->skb,
> +				skb_shinfo(rxq->skb)->nr_frags,
> +				page, NET_SKB_PAD, data_len,
> +				PAGE_SIZE);
> +
> +		page_pool_release_page(rxq->page_pool, page);
> +		rx_desc->buf_phys_addr = 0;
Shouldn't we unmap and set the buf_phys_addr to 0 regardless of the data_len?
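
Something along these lines maybe (untested sketch, just to illustrate
the point; if data_len == 0 we'd also have to hand the page back to the
pool ourselves, since no skb ever takes ownership of it):

	if (data_len > 0) {
		skb_add_rx_frag(rxq->skb,
				skb_shinfo(rxq->skb)->nr_frags,
				page, NET_SKB_PAD, data_len,
				PAGE_SIZE);
		/* the skb owns the page now: unmap it from the device */
		page_pool_release_page(rxq->page_pool, page);
	} else {
		/* nothing was attached: recycle the buffer to the pool */
		page_pool_put_page(rxq->page_pool, page, false);
	}
	/* either way the descriptor no longer holds a mapping */
	rx_desc->buf_phys_addr = 0;
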
> +	}
> +	rxq->left_size -= len;
> +}
> +
[...]
>  	mvneta_rxq_buf_size_set(pp, rxq, PAGE_SIZE < SZ_64K ?
> -				PAGE_SIZE :
> +				MVNETA_MAX_RX_BUF_SIZE :
>  				MVNETA_RX_BUF_SIZE(pp->pkt_size));
>  	mvneta_rxq_bm_disable(pp, rxq);
>  	mvneta_rxq_fill(pp, rxq, rxq->size);
> @@ -4656,7 +4666,7 @@ static int mvneta_probe(struct platform_device *pdev)
>  	SET_NETDEV_DEV(dev, &pdev->dev);
>
>  	pp->id = global_port_id++;
> -	pp->rx_offset_correction = 0; /* not relevant for SW BM */
> +	pp->rx_offset_correction = NET_SKB_PAD;
>
>  	/* Obtain access to BM resources if enabled and already initialized */
>  	bm_node = of_parse_phandle(dn, "buffer-manager", 0);
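
Also, for my own understanding (a sketch of the layout as I read it,
not stated in the patch itself): with rx_offset_correction set to
NET_SKB_PAD the HW now starts writing NET_SKB_PAD bytes into the page,
which matches the frag offset used in mvneta_swbm_add_rx_fragment()
above and leaves build_skb() the headroom it expects:

	/*
	 * resulting page layout (illustrative, 4K page):
	 *
	 * +-------------+--------------------------------+-----------------+
	 * | NET_SKB_PAD |           frame data           | skb_shared_info |
	 * |  headroom   | (up to MVNETA_MAX_RX_BUF_SIZE) |    tailroom     |
	 * +-------------+--------------------------------+-----------------+
	 */
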
> --
> 2.21.0
>
Regards
/Ilias