Date: Thu, 11 Feb 2021 18:54:12 +0000
From: Alexander Lobakin
To: "David S. Miller", Jakub Kicinski
Cc: Jonathan Lemon, Eric Dumazet, Dmitry Vyukov, Willem de Bruijn,
    Alexander Lobakin, Randy Dunlap, Kevin Hao, Pablo Neira Ayuso,
    Jakub Sitnicki, Marco Elver, Dexuan Cui, Paolo Abeni,
    Jesper Dangaard Brouer, Alexander Duyck, Alexei Starovoitov,
    Daniel Borkmann, Andrii Nakryiko, Taehee Yoo, Cong Wang,
    Björn Töpel, Miaohe Lin, Guillaume Nault, Yonghong Song, zhudi,
    Michal Kubecek, Marcelo Ricardo Leitner,
    Dmitry Safonov <0x7f454c46@gmail.com>, Yang Yingliang,
    Florian Westphal, Edward Cree,
    linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Subject: [PATCH v5 net-next 06/11] skbuff: remove __kfree_skb_flush()
Message-ID: <20210211185220.9753-7-alobakin@pm.me>
In-Reply-To: <20210211185220.9753-1-alobakin@pm.me>
References: <20210211185220.9753-1-alobakin@pm.me>

This function isn't really needed: the NAPI skb cache gets bulk-freed
anyway once it runs out of room, and an extra explicit flush may even
reduce the efficiency of the bulk operations. It will be needed even
less once the skb cache is reused on the allocation path, so remove it
and lighten the network softirqs a bit.
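For context, a rough sketch of why the explicit flush is largely
redundant: the deferral path already bulk-frees the per-CPU cache once
it fills up. This is a simplified rendition of the existing
_kfree_skb_defer() helper, not part of this patch (the
NAPI_SKB_CACHE_SIZE threshold and the skb_release_all() call are
assumed from the current tree; prefetch and config details are
omitted):

static inline void _kfree_skb_defer(struct sk_buff *skb)
{
	struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);

	/* drop skb->head and call any destructors for the packet */
	skb_release_all(skb);

	/* record the skb on the CPU-local list */
	nc->skb_cache[nc->skb_count++] = skb;

	/* Once the cache is full, bulk-free the whole batch at once.
	 * This is the implicit flush that makes __kfree_skb_flush()
	 * unnecessary on the hot paths.
	 */
	if (unlikely(nc->skb_count == NAPI_SKB_CACHE_SIZE)) {
		kmem_cache_free_bulk(skbuff_head_cache, NAPI_SKB_CACHE_SIZE,
				     nc->skb_cache);
		nc->skb_count = 0;
	}
}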
Suggested-by: Eric Dumazet
Signed-off-by: Alexander Lobakin
---
 include/linux/skbuff.h |  1 -
 net/core/dev.c         |  7 +------
 net/core/skbuff.c      | 12 ------------
 3 files changed, 1 insertion(+), 19 deletions(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 0a4e91a2f873..0e0707296098 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -2919,7 +2919,6 @@ static inline struct sk_buff *napi_alloc_skb(struct napi_struct *napi,
 }
 void napi_consume_skb(struct sk_buff *skb, int budget);
 
-void __kfree_skb_flush(void);
 void __kfree_skb_defer(struct sk_buff *skb);
 
 /**
diff --git a/net/core/dev.c b/net/core/dev.c
index 321d41a110e7..4154d4683bb9 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4944,8 +4944,6 @@ static __latent_entropy void net_tx_action(struct softirq_action *h)
 			else
 				__kfree_skb_defer(skb);
 		}
-
-		__kfree_skb_flush();
 	}
 
 	if (sd->output_queue) {
@@ -7012,7 +7010,6 @@ static int napi_threaded_poll(void *data)
 			__napi_poll(napi, &repoll);
 			netpoll_poll_unlock(have);
 
-			__kfree_skb_flush();
 			local_bh_enable();
 
 			if (!repoll)
@@ -7042,7 +7039,7 @@ static __latent_entropy void net_rx_action(struct softirq_action *h)
 
 		if (list_empty(&list)) {
 			if (!sd_has_rps_ipi_waiting(sd) && list_empty(&repoll))
-				goto out;
+				return;
 			break;
 		}
 
@@ -7069,8 +7066,6 @@ static __latent_entropy void net_rx_action(struct softirq_action *h)
 		__raise_softirq_irqoff(NET_RX_SOFTIRQ);
 
 	net_rps_action_and_irq_enable(sd);
-out:
-	__kfree_skb_flush();
 }
 
 struct netdev_adjacent {
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 1c6f6ef70339..4be2bb969535 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -838,18 +838,6 @@ void __consume_stateless_skb(struct sk_buff *skb)
 	kfree_skbmem(skb);
 }
 
-void __kfree_skb_flush(void)
-{
-	struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
-
-	/* flush skb_cache if containing objects */
-	if (nc->skb_count) {
-		kmem_cache_free_bulk(skbuff_head_cache, nc->skb_count,
-				     nc->skb_cache);
-		nc->skb_count = 0;
-	}
-}
-
 static inline void _kfree_skb_defer(struct sk_buff *skb)
 {
 	struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
-- 
2.30.1