From: Yunsheng Lin <linyunsheng@huawei.com>
To: Guangbin Huang <huangguangbin2@huawei.com>, <davem@davemloft.net>,
	<kuba@kernel.org>
Cc: <netdev@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
	<salil.mehta@huawei.com>, <lipeng321@huawei.com>
Subject: Re: [PATCH net-next 6/7] net: hns3: optimize the rx page reuse handling process
Date: Wed, 16 Jun 2021 16:47:05 +0800	[thread overview]
Message-ID: <8eb826a2-c48d-e277-ba48-cc93acee07fd@huawei.com> (raw)
In-Reply-To: <1623825377-41948-7-git-send-email-huangguangbin2@huawei.com>

On 2021/6/16 14:36, Guangbin Huang wrote:
> From: Yunsheng Lin <linyunsheng@huawei.com>
> 
> Currently the rx page offset is only reset to zero when all of the
> below conditions are satisfied:
> 1. the rx page is only owned by the driver.
> 2. the rx page is reusable.
> 3. the page offset that is about to be given to the stack has
> reached the end of the page.
> 
> If the page offset is beyond hns3_buf_size(), the buffer below
> that offset in the page is usable once conditions 1 & 2 above
> are satisfied, so the page offset can be reset to zero instead
> of being increased further. This way we may be able to always
> reuse the first 4K buffer of a 64K page, keeping the hot buffer
> footprint as small as possible.
> 
> The above optimization is a side effect of refactoring the
> rx page reuse handling in order to support rx copybreak.
> 
> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
> ---
>  drivers/net/ethernet/hisilicon/hns3/hns3_enet.c | 44 ++++++++++++-------------
>  1 file changed, 22 insertions(+), 22 deletions(-)
> 
> diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
> index f60a344a6a9f..98e8a548edb8 100644
> --- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
> +++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
> @@ -3525,7 +3525,7 @@ static void hns3_nic_alloc_rx_buffers(struct hns3_enet_ring *ring,
>  
>  static bool hns3_can_reuse_page(struct hns3_desc_cb *cb)
>  {
> -	return (page_count(cb->priv) - cb->pagecnt_bias) == 1;
> +	return page_count(cb->priv) == cb->pagecnt_bias;
>  }
>  
>  static void hns3_nic_reuse_page(struct sk_buff *skb, int i,
> @@ -3533,40 +3533,40 @@ static void hns3_nic_reuse_page(struct sk_buff *skb, int i,
>  				struct hns3_desc_cb *desc_cb)
>  {
>  	struct hns3_desc *desc = &ring->desc[ring->next_to_clean];
> +	u32 frag_offset = desc_cb->page_offset + pull_len;
>  	int size = le16_to_cpu(desc->rx.size);
>  	u32 truesize = hns3_buf_size(ring);
> +	u32 frag_size = size - pull_len;
>  
> -	desc_cb->pagecnt_bias--;
> -	skb_add_rx_frag(skb, i, desc_cb->priv, desc_cb->page_offset + pull_len,
> -			size - pull_len, truesize);
> +	/* Avoid re-using remote or pfmem page */
> +	if (unlikely(!dev_page_is_reusable(desc_cb->priv)))
> +		goto out;
>  
> -	/* Avoid re-using remote and pfmemalloc pages, or the stack is still
> -	 * using the page when page_offset rollback to zero, flag default
> -	 * unreuse
> +	/* Stack is not using and current page_offset is non-zero, we can
> +	 * reuse from the zero offset.
>  	 */
> -	if (!dev_page_is_reusable(desc_cb->priv) ||
> -	    (!desc_cb->page_offset && !hns3_can_reuse_page(desc_cb))) {
> -		__page_frag_cache_drain(desc_cb->priv, desc_cb->pagecnt_bias);
> -		return;
> -	}
> -
> -	/* Move offset up to the next cache line */
> -	desc_cb->page_offset += truesize;
> -
> -	if (desc_cb->page_offset + truesize <= hns3_page_size(ring)) {
> +	if (desc_cb->page_offset && hns3_can_reuse_page(desc_cb)) {
> +		desc_cb->page_offset = 0;
>  		desc_cb->reuse_flag = 1;
> -	} else if (hns3_can_reuse_page(desc_cb)) {
> +	} else if (desc_cb->page_offset + truesize * 2 <=
> +		   hns3_page_size(ring)) {

The above assumption is wrong: we need to check whether the page
is only owned by the driver both at the beginning and at the end
of a page, to make sure there is no reuse conflict between driver
and stack when desc_cb->page_offset is rolled back to zero or
incremented.

The fix for the above problem is pending internally, and was supposed
to be merged together with this patch when upstreaming.

It seems davem has already merged this patch, so I will send out the
fix later, sorry for the inconvenience.
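
For clarity, below is a rough sketch of the kind of ownership check
meant above. hns3_rx_try_reuse() is a hypothetical helper name used
only for illustration, not the actual pending fix; it assumes the
desc_cb fields and the hns3_can_reuse_page()/hns3_page_size() helpers
already used in this patch:

static void hns3_rx_try_reuse(struct hns3_enet_ring *ring,
			      struct hns3_desc_cb *desc_cb,
			      u32 truesize)
{
	/* Reuse only when the driver is the sole owner of the page,
	 * both when advancing page_offset and when rolling it back
	 * to zero, so the stack cannot still be using the buffer
	 * that is about to be handed out again.
	 */
	if (!hns3_can_reuse_page(desc_cb))
		return;

	if (desc_cb->page_offset + truesize * 2 <= hns3_page_size(ring)) {
		/* Room left in the page: advance to the next buffer. */
		desc_cb->page_offset += truesize;
		desc_cb->reuse_flag = 1;
	} else if (desc_cb->page_offset) {
		/* End of page reached: roll back to offset zero. */
		desc_cb->page_offset = 0;
		desc_cb->reuse_flag = 1;
	}
}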


> +		desc_cb->page_offset += truesize;
>  		desc_cb->reuse_flag = 1;
> -		desc_cb->page_offset = 0;
> -	} else if (desc_cb->pagecnt_bias) {
> -		__page_frag_cache_drain(desc_cb->priv, desc_cb->pagecnt_bias);
> -		return;
>  	}
>  
> +out:
> +	desc_cb->pagecnt_bias--;
> +
>  	if (unlikely(!desc_cb->pagecnt_bias)) {
>  		page_ref_add(desc_cb->priv, USHRT_MAX);
>  		desc_cb->pagecnt_bias = USHRT_MAX;
>  	}
> +
> +	skb_add_rx_frag(skb, i, desc_cb->priv, frag_offset,
> +			frag_size, truesize);
> +
> +	if (unlikely(!desc_cb->reuse_flag))
> +		__page_frag_cache_drain(desc_cb->priv, desc_cb->pagecnt_bias);
>  }
>  
>  static int hns3_gro_complete(struct sk_buff *skb, u32 l234info)
> 



Thread overview: 9+ messages
2021-06-16  6:36 [PATCH net-next 0/7] net: hns3: updates for -next Guangbin Huang
2021-06-16  6:36 ` [PATCH net-next 1/7] net: hns3: minor refactor related to desc_cb handling Guangbin Huang
2021-06-16  6:36 ` [PATCH net-next 2/7] net: hns3: refactor for hns3_fill_desc() function Guangbin Huang
2021-06-16  6:36 ` [PATCH net-next 3/7] net: hns3: use tx bounce buffer for small packets Guangbin Huang
2021-06-16  6:36 ` [PATCH net-next 4/7] net: hns3: add support to query tx spare buffer size for pf Guangbin Huang
2021-06-16  6:36 ` [PATCH net-next 5/7] net: hns3: support dma_map_sg() for multi frags skb Guangbin Huang
2021-06-16  6:36 ` [PATCH net-next 6/7] net: hns3: optimize the rx page reuse handling process Guangbin Huang
2021-06-16  8:47   ` Yunsheng Lin [this message]
2021-06-16  6:36 ` [PATCH net-next 7/7] net: hns3: use bounce buffer when rx page can not be reused Guangbin Huang
