Date: Fri, 11 Oct 2019 10:26:25 +0300
From: Alexander Lobakin <alobakin@dlink.ru>
To: Edward Cree
Cc: "David S. Miller", Jiri Pirko, Eric Dumazet, Ido Schimmel, Paolo Abeni, Petr Machata, Sabrina Dubroca, Florian Fainelli, Jassi Brar, Ilias Apalodimas, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH net-next 1/2] net: core: use listified Rx for GRO_NORMAL in napi_gro_receive()
References: <20191010144226.4115-1-alobakin@dlink.ru> <20191010144226.4115-2-alobakin@dlink.ru>
List-ID: netdev.vger.kernel.org

Edward Cree wrote 10.10.2019 21:23:
> On 10/10/2019 15:42, Alexander Lobakin wrote:
>> Commit 323ebb61e32b4 ("net: use listified RX for handling GRO_NORMAL
>> skbs") made use of listified skb processing for the users of
>> napi_gro_frags().
>> The same technique can be used in the way more common
>> napi_gro_receive() to speed up non-merged (GRO_NORMAL) skbs for a
>> wide range of drivers, including gro_cells and mac80211 users.
>>
>> Signed-off-by: Alexander Lobakin <alobakin@dlink.ru>
>> ---
>>  net/core/dev.c | 49 +++++++++++++++++++++++++------------------------
>>  1 file changed, 25 insertions(+), 24 deletions(-)
>>
>> diff --git a/net/core/dev.c b/net/core/dev.c
>> index 8bc3dce71fc0..a33f56b439ce 100644
>> --- a/net/core/dev.c
>> +++ b/net/core/dev.c
>> @@ -5884,6 +5884,26 @@ struct packet_offload *gro_find_complete_by_type(__be16 type)
>>  }
>>  EXPORT_SYMBOL(gro_find_complete_by_type);
>>
>> +/* Pass the currently batched GRO_NORMAL SKBs up to the stack. */
>> +static void gro_normal_list(struct napi_struct *napi)
>> +{
>> +	if (!napi->rx_count)
>> +		return;
>> +	netif_receive_skb_list_internal(&napi->rx_list);
>> +	INIT_LIST_HEAD(&napi->rx_list);
>> +	napi->rx_count = 0;
>> +}
>> +
>> +/* Queue one GRO_NORMAL SKB up for list processing. If batch size
>> + * exceeded, pass the whole batch up to the stack.
>> + */
>> +static void gro_normal_one(struct napi_struct *napi, struct sk_buff *skb)
>> +{
>> +	list_add_tail(&skb->list, &napi->rx_list);
>> +	if (++napi->rx_count >= gro_normal_batch)
>> +		gro_normal_list(napi);
>> +}
>> +
>>  static void napi_skb_free_stolen_head(struct sk_buff *skb)
>>  {
>>  	skb_dst_drop(skb);
>> @@ -5891,12 +5911,13 @@ static void napi_skb_free_stolen_head(struct sk_buff *skb)
>>  	kmem_cache_free(skbuff_head_cache, skb);
>>  }
>>
>> -static gro_result_t napi_skb_finish(gro_result_t ret, struct sk_buff *skb)
>> +static gro_result_t napi_skb_finish(struct napi_struct *napi,
>> +				    struct sk_buff *skb,
>> +				    gro_result_t ret)
> Any reason why the argument order here is changed around?

Actually yes: to match the napi_skb_finish() and napi_frags_finish()
prototypes, as gro_normal_one() required the addition of the napi
argument anyway.

> -Ed
>>  {
>>  	switch (ret) {
>>  	case GRO_NORMAL:
>> -		if (netif_receive_skb_internal(skb))
>> -			ret = GRO_DROP;
>> +		gro_normal_one(napi, skb);
>>  		break;
>>
>>  	case GRO_DROP:
>> @@ -5928,7 +5949,7 @@ gro_result_t napi_gro_receive(struct napi_struct *napi, struct sk_buff *skb)
>>
>>  	skb_gro_reset_offset(skb);
>>
>> -	ret = napi_skb_finish(dev_gro_receive(napi, skb), skb);
>> +	ret = napi_skb_finish(napi, skb, dev_gro_receive(napi, skb));
>>  	trace_napi_gro_receive_exit(ret);
>>
>>  	return ret;
>> @@ -5974,26 +5995,6 @@ struct sk_buff *napi_get_frags(struct napi_struct *napi)
>>  }
>>  EXPORT_SYMBOL(napi_get_frags);
>>
>> -/* Pass the currently batched GRO_NORMAL SKBs up to the stack. */
>> -static void gro_normal_list(struct napi_struct *napi)
>> -{
>> -	if (!napi->rx_count)
>> -		return;
>> -	netif_receive_skb_list_internal(&napi->rx_list);
>> -	INIT_LIST_HEAD(&napi->rx_list);
>> -	napi->rx_count = 0;
>> -}
>> -
>> -/* Queue one GRO_NORMAL SKB up for list processing. If batch size
>> - * exceeded, pass the whole batch up to the stack.
>> - */
>> -static void gro_normal_one(struct napi_struct *napi, struct sk_buff *skb)
>> -{
>> -	list_add_tail(&skb->list, &napi->rx_list);
>> -	if (++napi->rx_count >= gro_normal_batch)
>> -		gro_normal_list(napi);
>> -}
>> -
>>  static gro_result_t napi_frags_finish(struct napi_struct *napi,
>>  				      struct sk_buff *skb,
>>  				      gro_result_t ret)

Regards,
ᚷ ᛖ ᚢ ᚦ ᚠ ᚱ