From: Jakub Kicinski <jakub.kicinski@netronome.com>
To: Hayes Wang <hayeswang@realtek.com>
Cc: <netdev@vger.kernel.org>, <nic_swsd@realtek.com>,
	<linux-kernel@vger.kernel.org>, <linux-usb@vger.kernel.org>
Subject: Re: [PATCH net-next 4/5] r8152: support skb_add_rx_frag
Date: Tue, 6 Aug 2019 15:08:02 -0700	[thread overview]
Message-ID: <20190806150802.72e0ef02@cakuba.netronome.com> (raw)
In-Reply-To: <1394712342-15778-293-albertk@realtek.com>

On Tue, 6 Aug 2019 19:18:03 +0800, Hayes Wang wrote:
> Use skb_add_rx_frag() to reduce the memory copy for rx data.
> 
> Use a new list, rx_used, to store the rx buffers which can't be
> reused yet.
> 
> Besides, the total number of rx buffers may be increased or decreased
> dynamically, and it is limited by RTL8152_MAX_RX_AGG.
> 
> Signed-off-by: Hayes Wang <hayeswang@realtek.com>

> diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c
> index 401e56112365..1615900c8592 100644
> --- a/drivers/net/usb/r8152.c
> +++ b/drivers/net/usb/r8152.c
> @@ -584,6 +584,9 @@ enum rtl_register_content {
>  #define TX_ALIGN		4
>  #define RX_ALIGN		8
>  
> +#define RTL8152_MAX_RX_AGG	(10 * RTL8152_MAX_RX)
> +#define RTL8152_RXFG_HEADSZ	256
> +
>  #define INTR_LINK		0x0004
>  
>  #define RTL8152_REQT_READ	0xc0
> @@ -720,7 +723,7 @@ struct r8152 {
>  	struct net_device *netdev;
>  	struct urb *intr_urb;
>  	struct tx_agg tx_info[RTL8152_MAX_TX];
> -	struct list_head rx_info;
> +	struct list_head rx_info, rx_used;

I don't see where entries on the rx_used list get freed when the driver
is unloaded. Could you explain how that's taken care of?
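
If those aggregates are also still linked on rx_info and get released
through the normal free path, a pointer to that call chain (or a
comment in the code) would be enough.  Otherwise I'd expect something
roughly like the sketch below in the teardown path; the function name,
placement and locking here are my guesses, not the final shape:

static void rtl_free_rx_used(struct r8152 *tp)
{
	struct rx_agg *agg, *agg_next;
	LIST_HEAD(rx_free);
	unsigned long flags;

	/* detach whatever is still parked on rx_used */
	spin_lock_irqsave(&tp->rx_lock, flags);
	list_splice_init(&tp->rx_used, &rx_free);
	spin_unlock_irqrestore(&tp->rx_lock, flags);

	/* free_rx_agg() drops the page reference, the urb and the agg */
	list_for_each_entry_safe(agg, agg_next, &rx_free, list) {
		list_del(&agg->list);
		free_rx_agg(tp, agg);
	}
}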

>  	struct list_head rx_done, tx_free;
>  	struct sk_buff_head tx_queue, rx_queue;
>  	spinlock_t rx_lock, tx_lock;
> @@ -1476,7 +1479,7 @@ static void free_rx_agg(struct r8152 *tp, struct rx_agg *agg)
>  	list_del(&agg->info_list);
>  
>  	usb_free_urb(agg->urb);
> -	__free_pages(agg->page, get_order(tp->rx_buf_sz));
> +	put_page(agg->page);
>  	kfree(agg);
>  
>  	atomic_dec(&tp->rx_count);
> @@ -1493,7 +1496,7 @@ static struct rx_agg *alloc_rx_agg(struct r8152 *tp, gfp_t mflags)
>  	if (rx_agg) {
>  		unsigned long flags;
>  
> -		rx_agg->page = alloc_pages(mflags, order);
> +		rx_agg->page = alloc_pages(mflags | __GFP_COMP, order);
>  		if (!rx_agg->page)
>  			goto free_rx;
>  
> @@ -1951,6 +1954,50 @@ static u8 r8152_rx_csum(struct r8152 *tp, struct rx_desc *rx_desc)
>  	return checksum;
>  }
>  
> +static inline bool rx_count_exceed(struct r8152 *tp)
> +{
> +	return atomic_read(&tp->rx_count) > RTL8152_MAX_RX;
> +}
> +
> +static inline int agg_offset(struct rx_agg *agg, void *addr)
> +{
> +	return (int)(addr - agg->buffer);
> +}
> +
> +static struct rx_agg *rtl_get_free_rx(struct r8152 *tp, gfp_t mflags)
> +{
> +	struct list_head *cursor, *next;
> +	struct rx_agg *agg_free = NULL;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&tp->rx_lock, flags);
> +
> +	list_for_each_safe(cursor, next, &tp->rx_used) {
> +		struct rx_agg *agg;
> +
> +		agg = list_entry(cursor, struct rx_agg, list);
> +
> +		if (page_count(agg->page) == 1) {
> +			if (!agg_free) {
> +				list_del_init(cursor);
> +				agg_free = agg;
> +				continue;
> +			} else if (rx_count_exceed(tp)) {

nit: else unnecessary after continue
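
i.e. (same behaviour, just one level flatter):

			if (!agg_free) {
				list_del_init(cursor);
				agg_free = agg;
				continue;
			}

			if (rx_count_exceed(tp)) {
				list_del_init(cursor);
				free_rx_agg(tp, agg);
			}
			break;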

> +				list_del_init(cursor);
> +				free_rx_agg(tp, agg);
> +			}
> +			break;
> +		}
> +	}
> +
> +	spin_unlock_irqrestore(&tp->rx_lock, flags);
> +
> +	if (!agg_free && atomic_read(&tp->rx_count) < RTL8152_MAX_RX_AGG)
> +		agg_free = alloc_rx_agg(tp, mflags);
> +
> +	return agg_free;
> +}
> +
>  static int rx_bottom(struct r8152 *tp, int budget)
>  {
>  	unsigned long flags;
