From: Sunil Kovvuri <sunil.kovvuri@gmail.com>
To: Kevin Hao <haokexin@gmail.com>
Cc: Linux Netdev List <netdev@vger.kernel.org>,
	Sunil Goutham <sgoutham@marvell.com>,
	Geetha sowjanya <gakula@marvell.com>,
	Subbaraya Sundeep <sbhatta@marvell.com>,
	hariprasad <hkelam@marvell.com>,
	"David S. Miller" <davem@davemloft.net>
Subject: Re: [PATCH] octeontx2-pf: Use the napi_alloc_frag() to alloc the pool buffers
Date: Fri, 8 May 2020 13:10:00 +0530
Message-ID: <CA+sq2CfoY1aRC2BernvqaMGgTgCByM+yq19-Vak0KJqxEU-5Eg@mail.gmail.com>
In-Reply-To: <20200508040728.24202-1-haokexin@gmail.com>

On Fri, May 8, 2020 at 9:43 AM Kevin Hao <haokexin@gmail.com> wrote:
>
> In the current code, the octeontx2 driver uses its own method to allocate
> the pool buffers, but there are some issues in this implementation.
> 1. We have to run otx2_get_page() for each allocation cycle and
>    this is pretty error prone. As far as I can see there is no invocation
>    of otx2_get_page() in otx2_pool_refill_task(), which leaves the
>    allocated pages with the wrong refcount, so they may be freed wrongly.
> 2. It wastes memory. For example, if we only receive one packet in a
>    NAPI RX cycle and then allocate a 2K buffer with otx2_alloc_rbuf()
>    to refill the pool buffers, the remaining area of the allocated
>    page is wasted. On a kernel with 64K pages, a 62K area is wasted.
>
> IMHO it is really unnecessary to implement our own method for buffer
> allocation; we can reuse napi_alloc_frag() to simplify our code.
>
> Signed-off-by: Kevin Hao <haokexin@gmail.com>
> ---
>  .../marvell/octeontx2/nic/otx2_common.c       | 51 ++++++++-----------
>  .../marvell/octeontx2/nic/otx2_common.h       | 15 +-----
>  .../marvell/octeontx2/nic/otx2_txrx.c         |  3 +-
>  .../marvell/octeontx2/nic/otx2_txrx.h         |  4 --
>  4 files changed, 22 insertions(+), 51 deletions(-)
>
> diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
> index f1d2dea90a8c..15fa1ad57f88 100644
> --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
> +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
> @@ -379,40 +379,32 @@ void otx2_config_irq_coalescing(struct otx2_nic *pfvf, int qidx)
>                      (pfvf->hw.cq_ecount_wait - 1));
>  }
>
> -dma_addr_t otx2_alloc_rbuf(struct otx2_nic *pfvf, struct otx2_pool *pool,
> -                          gfp_t gfp)
> +dma_addr_t _otx2_alloc_rbuf(struct otx2_nic *pfvf, struct otx2_pool *pool)
>  {
>         dma_addr_t iova;
> +       u8 *buf;
>
> -       /* Check if request can be accommodated in previous allocated page */
> -       if (pool->page && ((pool->page_offset + pool->rbsize) <=
> -           (PAGE_SIZE << pool->rbpage_order))) {
> -               pool->pageref++;
> -               goto ret;
> -       }
> -
> -       otx2_get_page(pool);
> -
> -       /* Allocate a new page */
> -       pool->page = alloc_pages(gfp | __GFP_COMP | __GFP_NOWARN,
> -                                pool->rbpage_order);
> -       if (unlikely(!pool->page))
> +       buf = napi_alloc_frag(pool->rbsize);
> +       if (unlikely(!buf))
>                 return -ENOMEM;
>
> -       pool->page_offset = 0;
> -ret:
> -       iova = (u64)otx2_dma_map_page(pfvf, pool->page, pool->page_offset,
> -                                     pool->rbsize, DMA_FROM_DEVICE);
> -       if (!iova) {
> -               if (!pool->page_offset)
> -                       __free_pages(pool->page, pool->rbpage_order);
> -               pool->page = NULL;
> +       iova = dma_map_single(pfvf->dev, buf, pool->rbsize, DMA_FROM_DEVICE);
> +       if (unlikely(dma_mapping_error(pfvf->dev, iova)))
>                 return -ENOMEM;

Use DMA_ATTR_SKIP_CPU_SYNC while mapping the buffer.
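
Something along these lines (just a sketch of the idea against the quoted
hunk, not a tested change; dma_map_single_attrs() takes the DMA attrs as
its last argument):

	/* Skip the CPU cache sync at map time; the device fills the
	 * buffer, so sync it for the CPU only when the packet is
	 * actually processed.
	 */
	iova = dma_map_single_attrs(pfvf->dev, buf, pool->rbsize,
				    DMA_FROM_DEVICE, DMA_ATTR_SKIP_CPU_SYNC);
	if (unlikely(dma_mapping_error(pfvf->dev, iova)))
		return -ENOMEM;

With DMA_ATTR_SKIP_CPU_SYNC the driver takes over responsibility for the
dma_sync_single_for_cpu()/for_device() calls around actual buffer use,
which avoids syncing the whole receive buffer on every allocation.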

Thanks,
Sunil.

Thread overview: 8+ messages
2020-05-08  4:07 [PATCH] octeontx2-pf: Use the napi_alloc_frag() to alloc the pool buffers Kevin Hao
2020-05-08  4:48 ` Sunil Kovvuri
2020-05-08  5:30   ` Kevin Hao
2020-05-08  7:38     ` Sunil Kovvuri
2020-05-08 13:01       ` Andrew Lunn
2020-05-09  0:58         ` Yunsheng Lin
2020-05-08  7:40 ` Sunil Kovvuri [this message]
2020-05-08 11:37   ` Kevin Hao
