From mboxrd@z Thu Jan 1 00:00:00 1970
References: <20210630051118.2212-1-yajun.deng@linux.dev>
In-Reply-To: <20210630051118.2212-1-yajun.deng@linux.dev>
From: Eric Dumazet
Date: Wed, 30 Jun 2021 13:02:53 +0200
Subject: Re: [PATCH] net: core: Modify alloc_size in alloc_netdev_mqs()
To: Yajun Deng
Cc: davem@davemloft.net, kuba@kernel.org, andriin@fb.com, atenart@kernel.org,
    alobakin@pm.me, ast@kernel.org, daniel@iogearbox.net, weiwan@google.com,
    ap420073@gmail.com, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Content-Type: text/plain; charset="UTF-8"
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Jun 30, 2021 at 7:11 AM Yajun Deng wrote:
>
> Use ALIGN for 'struct net_device', and remove the unneeded
> 'NETDEV_ALIGN - 1'. This can save a few bytes, and modify
> the pr_err content when txqs < 1.

I think that in old times (maybe still today), SLAB debugging could
lead to unaligned allocated zones.
The forced alignment for netdev structures came in commit
f346af6a27c0cea99522213cb813fd30489136e2 ("net_device and netdev private
struct allocation improvements.") in linux-2.6.3 (back in 2004).

This supposedly was a win in itself, otherwise Al Viro would not have
spent time on this.

>
> Signed-off-by: Yajun Deng
> ---
>  net/core/dev.c | 10 ++++------
>  1 file changed, 4 insertions(+), 6 deletions(-)
>
> diff --git a/net/core/dev.c b/net/core/dev.c
> index c253c2aafe97..c42a682a624d 100644
> --- a/net/core/dev.c
> +++ b/net/core/dev.c
> @@ -10789,7 +10789,7 @@ struct net_device *alloc_netdev_mqs(int sizeof_priv, const char *name,
>         BUG_ON(strlen(name) >= sizeof(dev->name));
>
>         if (txqs < 1) {
> -               pr_err("alloc_netdev: Unable to allocate device with zero queues\n");
> +               pr_err("alloc_netdev: Unable to allocate device with zero TX queues\n");
>                 return NULL;
>         }
>
> @@ -10798,14 +10798,12 @@ struct net_device *alloc_netdev_mqs(int sizeof_priv, const char *name,
>                 return NULL;
>         }
>
> -       alloc_size = sizeof(struct net_device);
> +       /* ensure 32-byte alignment of struct net_device */
> +       alloc_size = ALIGN(sizeof(struct net_device), NETDEV_ALIGN);

This is not really needed, because struct net_device is already
cache-line aligned on SMP builds.

>         if (sizeof_priv) {
>                 /* ensure 32-byte alignment of private area */
> -               alloc_size = ALIGN(alloc_size, NETDEV_ALIGN);
> -               alloc_size += sizeof_priv;
> +               alloc_size += ALIGN(sizeof_priv, NETDEV_ALIGN);

No longer needed: the private area starts at the end of struct
net_device, whose size is a multiple of the cache line size.

Really, I doubt this makes sense anymore these days; we have hundreds of
structures in the kernel that would need similar handling if SLAB/SLUB
were doing silly things.

I would simply do:

        alloc_size += sizeof_priv;

>         }
> -       /* ensure 32-byte alignment of whole construct */
> -       alloc_size += NETDEV_ALIGN - 1;
>
>         p = kvzalloc(alloc_size, GFP_KERNEL | __GFP_RETRY_MAYFAIL);
>         if (!p)
> --
> 2.32.0
>