From: Yunsheng Lin <linyunsheng@huawei.com>
To: Andrew Lunn <andrew@lunn.ch>, Sunil Kovvuri <sunil.kovvuri@gmail.com>
Cc: Kevin Hao <haokexin@gmail.com>,
	Linux Netdev List <netdev@vger.kernel.org>,
	Sunil Goutham <sgoutham@marvell.com>,
	"Geetha sowjanya" <gakula@marvell.com>,
	Subbaraya Sundeep <sbhatta@marvell.com>,
	hariprasad <hkelam@marvell.com>,
	"David S. Miller" <davem@davemloft.net>
Subject: Re: [PATCH] octeontx2-pf: Use the napi_alloc_frag() to alloc the pool buffers
Date: Sat, 9 May 2020 08:58:00 +0800
Message-ID: <f14cb268-efd2-1b62-22cb-d501f1f183a7@huawei.com>
In-Reply-To: <20200508130115.GO208718@lunn.ch>

On 2020/5/8 21:01, Andrew Lunn wrote:
> On Fri, May 08, 2020 at 01:08:13PM +0530, Sunil Kovvuri wrote:
>> On Fri, May 8, 2020 at 11:00 AM Kevin Hao <haokexin@gmail.com> wrote:
>>>
>>> On Fri, May 08, 2020 at 10:18:27AM +0530, Sunil Kovvuri wrote:
>>>> On Fri, May 8, 2020 at 9:43 AM Kevin Hao <haokexin@gmail.com> wrote:
>>>>>
>>>>> In the current code, the octeontx2 driver uses its own method to
>>>>> allocate the pool buffers, but there are some issues in this
>>>>> implementation.
>>>>> 1. We have to run otx2_get_page() for each allocation cycle, and
>>>>>    this is pretty error prone. As far as I can see there is no
>>>>>    invocation of otx2_get_page() in otx2_pool_refill_task(), which
>>>>>    leaves the allocated pages with the wrong refcount, so they may
>>>>>    be freed incorrectly.
>>>>
>>>> Thanks for pointing this out, will fix.
>>>>
>>>>> 2. It wastes memory. For example, if we only receive one packet in a
>>>>>    NAPI RX cycle and then allocate a 2K buffer with otx2_alloc_rbuf()
>>>>>    to refill the pool buffers, the remaining area of the allocated
>>>>>    page is wasted. On a kernel with 64K pages, 62K of each page is
>>>>>    wasted.
>>>>>
>>>>> IMHO it is really unnecessary to implement our own method for
>>>>> buffer allocation; we can reuse napi_alloc_frag() to simplify the
>>>>> code (a sketch of the idea follows below).
>>>>>
>>>>> Signed-off-by: Kevin Hao <haokexin@gmail.com>
>>>>
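For illustration, a minimal sketch of what a napi_alloc_frag()-based
refill helper could look like; sketch_alloc_rbuf(), rbuf_size and the
abbreviated error handling are illustrative, not the driver's actual API:

#include <linux/dma-mapping.h>
#include <linux/skbuff.h>

/*
 * Illustrative sketch only: allocate one receive buffer from the
 * per-CPU NAPI fragment cache and DMA-map it for the device. Must be
 * called from NAPI (softirq) context, as napi_alloc_frag() requires.
 */
static int sketch_alloc_rbuf(struct device *dev, unsigned int rbuf_size,
			     dma_addr_t *iova)
{
	void *buf;

	/* Carve rbuf_size bytes out of a shared backing page; the
	 * page refcount is set up once per page, not once per buffer. */
	buf = napi_alloc_frag(rbuf_size);
	if (!buf)
		return -ENOMEM;

	*iova = dma_map_single(dev, buf, rbuf_size, DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, *iova)) {
		skb_free_frag(buf);	/* give the fragment back */
		return -EIO;
	}
	return 0;
}
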
>>>> Have you measured performance with and without your patch ?
>>>
>>> I will do a performance comparison later, but I don't think there will
>>> be a measurable difference.
>>>
>>>> I didn't use napi_alloc_frag() as it's too costly: if in one NAPI
>>>> instance the driver receives 32 pkts, then 32 calls to
>>>> napi_alloc_frag(), page ref count updates per fragment, etc. are
>>>> costly.
>>>
>>> No, the page ref is only updated when the page is allocated and when
>>> all of its space has been used. In general, an invocation of
>>> napi_alloc_frag() will not cause an update of the page ref. So in
>>> theory, the number of page ref updates should be reduced by using
>>> napi_alloc_frag() compared to the current otx2 implementation.
>>>
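For context, a greatly simplified model of the per-CPU fragment cache
behind napi_alloc_frag() (the real logic is page_frag_alloc() in mm;
alloc_new_page() below is a placeholder for the page allocation plus
the single up-front refcount bias the real code applies), showing why
the common-case allocation touches no page refcount:

static void *alloc_new_page(unsigned int *size);	/* placeholder */

struct frag_cache_model {
	void *va;		/* base of the current backing page */
	unsigned int size;	/* bytes in the backing page */
	int offset;		/* next free offset, counted downwards */
};

static void *frag_alloc_model(struct frag_cache_model *fc,
			      unsigned int fragsz)
{
	if (fc->offset < (int)fragsz) {
		/* Slow path: only here is a new page allocated and
		 * its refcount set once for many future fragments. */
		fc->va = alloc_new_page(&fc->size);
		if (!fc->va)
			return NULL;
		fc->offset = fc->size;
	}
	/* Fast path: hand out the next slice, no refcount update. */
	fc->offset -= fragsz;
	return fc->va + fc->offset;
}
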
>>
>> Okay, it seems I misunderstood it.
> 
> Hi Sunil
> 
> In general, you should not work around issues in the core, you should
> improve the core. If your implementation really was more efficient
> than the core code, it would have been better to propose fixes to the
> core, not hide away better code in your own driver.

Hi, Andrew

Looking at the napi_alloc_frag() API, the mapping/unmapping is done by
the caller. If the mapping/unmapping were managed in the core, it could
be avoided when the page is reused, because the mapping/unmapping
operation is costly when the IOMMU is on. Do you think it makes sense
to do the mapping/unmapping in the page_frag_*() helpers?
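
For illustration only, a hypothetical mapping-aware variant of the
fragment cache along those lines; none of these names exist in the
kernel, they merely sketch the shape of the question above:

#include <linux/dma-mapping.h>

static void *alloc_new_page(unsigned int *size);	/* placeholder */

/*
 * Hypothetical sketch: the backing page is DMA-mapped once when it is
 * allocated, and each fragment derives its iova from the base
 * mapping, so page reuse skips the costly dma_map_single() call.
 */
struct dma_frag_cache {
	void *va;
	dma_addr_t base_iova;	/* mapped once per backing page */
	unsigned int size;
	int offset;
};

static void *dma_frag_alloc(struct device *dev, struct dma_frag_cache *fc,
			    unsigned int fragsz, dma_addr_t *iova)
{
	if (fc->offset < (int)fragsz) {
		fc->va = alloc_new_page(&fc->size);
		if (!fc->va)
			return NULL;
		/* The mapping, costly with the IOMMU on, happens once. */
		fc->base_iova = dma_map_single(dev, fc->va, fc->size,
					       DMA_FROM_DEVICE);
		fc->offset = fc->size;
	}
	fc->offset -= fragsz;
	*iova = fc->base_iova + fc->offset;	/* no per-fragment map */
	return fc->va + fc->offset;
}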

> 
>       Andrew
> .
> 


Thread overview: 8+ messages
2020-05-08  4:07 [PATCH] octeontx2-pf: Use the napi_alloc_frag() to alloc the pool buffers Kevin Hao
2020-05-08  4:48 ` Sunil Kovvuri
2020-05-08  5:30   ` Kevin Hao
2020-05-08  7:38     ` Sunil Kovvuri
2020-05-08 13:01       ` Andrew Lunn
2020-05-09  0:58         ` Yunsheng Lin [this message]
2020-05-08  7:40 ` Sunil Kovvuri
2020-05-08 11:37   ` Kevin Hao
