Date: Sat, 24 Sep 2022 02:11:29 -0700
From: Kees Cook
To: Jakub Kicinski
Miller" , Vlastimil Babka , Eric Dumazet , Paolo Abeni , netdev@vger.kernel.org, Greg Kroah-Hartman , Nick Desaulniers , David Rientjes , "Ruhl, Michael J" , Hyeonggon Yoo <42.hyeyoo@gmail.com>, Christoph Lameter , Pekka Enberg , Joonsoo Kim , Andrew Morton , Alex Elder , Josef Bacik , David Sterba , Sumit Semwal , Christian =?iso-8859-1?Q?K=F6nig?= , Jesse Brandeburg , Daniel Micay , Yonghong Song , Marco Elver , Miguel Ojeda , linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-btrfs@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org, linux-fsdevel@vger.kernel.org, intel-wired-lan@lists.osuosl.org, dev@openvswitch.org, x86@kernel.org, llvm@lists.linux.dev, linux-hardening@vger.kernel.org Subject: Re: [PATCH v2 03/16] skbuff: Proactively round up to kmalloc bucket size Message-ID: <202209240208.84F847F3B@keescook> References: <20220923202822.2667581-1-keescook@chromium.org> <20220923202822.2667581-4-keescook@chromium.org> Precedence: bulk X-Mailing-List: llvm@lists.linux.dev List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20220923202822.2667581-4-keescook@chromium.org> On Fri, Sep 23, 2022 at 01:28:09PM -0700, Kees Cook wrote: > Instead of discovering the kmalloc bucket size _after_ allocation, round > up proactively so the allocation is explicitly made for the full size, > allowing the compiler to correctly reason about the resulting size of > the buffer through the existing __alloc_size() hint. > > This will allow for kernels built with CONFIG_UBSAN_BOUNDS or the > coming dynamic bounds checking under CONFIG_FORTIFY_SOURCE to gain > back the __alloc_size() hints that were temporarily reverted in commit > 93dd04ab0b2b ("slab: remove __alloc_size attribute from __kmalloc_track_caller") > > Additionally tries to normalize size variables to u32 from int. Most > interfaces are using "int", but notably __alloc_skb uses unsigned int. > > Also fix some reverse Christmas tree and comments while touching nearby > code. Something in this patch is breaking things -- I've refactored it again to avoid overwriting the incoming size argument, and instead add a dedicated outgoing size variable. Here's what will be v3 ... --- net/core/skbuff.c | 41 ++++++++++++++++++++++------------------- 1 file changed, 22 insertions(+), 19 deletions(-) diff --git a/net/core/skbuff.c b/net/core/skbuff.c index 974bbbbe7138..9b5a9fb69d9d 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -346,11 +346,12 @@ EXPORT_SYMBOL(napi_build_skb); * memory is free */ static void *kmalloc_reserve(size_t size, gfp_t flags, int node, - bool *pfmemalloc) + bool *pfmemalloc, size_t *alloc_size) { void *obj; bool ret_pfmemalloc = false; + size = kmalloc_size_roundup(size); /* * Try a regular allocation, when that fails and we're not entitled * to the reserves, fail. 
---
 net/core/skbuff.c | 41 ++++++++++++++++++++++-------------------
 1 file changed, 22 insertions(+), 19 deletions(-)

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 974bbbbe7138..9b5a9fb69d9d 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -346,11 +346,12 @@ EXPORT_SYMBOL(napi_build_skb);
  *	memory is free
  */
 static void *kmalloc_reserve(size_t size, gfp_t flags, int node,
-			     bool *pfmemalloc)
+			     bool *pfmemalloc, size_t *alloc_size)
 {
 	void *obj;
 	bool ret_pfmemalloc = false;
 
+	size = kmalloc_size_roundup(size);
 	/*
 	 * Try a regular allocation, when that fails and we're not entitled
 	 * to the reserves, fail.
@@ -369,6 +370,7 @@ static void *kmalloc_reserve(size_t size, gfp_t flags, int node,
 	if (pfmemalloc)
 		*pfmemalloc = ret_pfmemalloc;
 
+	*alloc_size = size;
 	return obj;
 }
 
@@ -400,7 +402,7 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
 {
 	struct kmem_cache *cache;
 	struct sk_buff *skb;
-	unsigned int osize;
+	size_t alloc_size;
 	bool pfmemalloc;
 	u8 *data;
 
@@ -427,15 +429,15 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
 	 */
 	size = SKB_DATA_ALIGN(size);
 	size += SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
-	data = kmalloc_reserve(size, gfp_mask, node, &pfmemalloc);
-	if (unlikely(!data))
-		goto nodata;
-	/* kmalloc(size) might give us more room than requested.
+	/* kmalloc(size) might give us more room than requested, so
+	 * allocate the true bucket size up front.
 	 * Put skb_shared_info exactly at the end of allocated zone,
 	 * to allow max possible filling before reallocation.
 	 */
-	osize = ksize(data);
-	size = SKB_WITH_OVERHEAD(osize);
+	data = kmalloc_reserve(size, gfp_mask, node, &pfmemalloc, &alloc_size);
+	if (unlikely(!data))
+		goto nodata;
+	size = SKB_WITH_OVERHEAD(alloc_size);
 	prefetchw(data + size);
 
 	/*
@@ -444,7 +446,7 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
 	 * the tail pointer in struct sk_buff!
 	 */
 	memset(skb, 0, offsetof(struct sk_buff, tail));
-	__build_skb_around(skb, data, osize);
+	__build_skb_around(skb, data, alloc_size);
 	skb->pfmemalloc = pfmemalloc;
 
 	if (flags & SKB_ALLOC_FCLONE) {
@@ -1709,6 +1711,7 @@ int pskb_expand_head(struct sk_buff *skb, int nhead, int ntail,
 {
 	int i, osize = skb_end_offset(skb);
 	int size = osize + nhead + ntail;
+	size_t alloc_size;
 	long off;
 	u8 *data;
 
@@ -1723,10 +1726,10 @@ int pskb_expand_head(struct sk_buff *skb, int nhead, int ntail,
 	if (skb_pfmemalloc(skb))
 		gfp_mask |= __GFP_MEMALLOC;
 	data = kmalloc_reserve(size + SKB_DATA_ALIGN(sizeof(struct skb_shared_info)),
-			       gfp_mask, NUMA_NO_NODE, NULL);
+			       gfp_mask, NUMA_NO_NODE, NULL, &alloc_size);
 	if (!data)
 		goto nodata;
-	size = SKB_WITH_OVERHEAD(ksize(data));
+	size = SKB_WITH_OVERHEAD(alloc_size);
 
 	/* Copy only real data... and, alas, header. This should be
 	 * optimized for the cases when header is void.
@@ -6063,19 +6066,19 @@ static int pskb_carve_inside_header(struct sk_buff *skb, const u32 off,
 	int i;
 	int size = skb_end_offset(skb);
 	int new_hlen = headlen - off;
+	size_t alloc_size;
 	u8 *data;
 
 	size = SKB_DATA_ALIGN(size);
 
 	if (skb_pfmemalloc(skb))
 		gfp_mask |= __GFP_MEMALLOC;
-	data = kmalloc_reserve(size +
-			       SKB_DATA_ALIGN(sizeof(struct skb_shared_info)),
-			       gfp_mask, NUMA_NO_NODE, NULL);
+	data = kmalloc_reserve(size + SKB_DATA_ALIGN(sizeof(struct skb_shared_info)),
+			       gfp_mask, NUMA_NO_NODE, NULL, &alloc_size);
 	if (!data)
 		return -ENOMEM;
-	size = SKB_WITH_OVERHEAD(ksize(data));
+	size = SKB_WITH_OVERHEAD(alloc_size);
 
 	/* Copy real data, and all frags */
 	skb_copy_from_linear_data_offset(skb, off, data, new_hlen);
@@ -6184,18 +6187,18 @@ static int pskb_carve_inside_nonlinear(struct sk_buff *skb, const u32 off,
 	u8 *data;
 	const int nfrags = skb_shinfo(skb)->nr_frags;
 	struct skb_shared_info *shinfo;
+	size_t alloc_size;
 
 	size = SKB_DATA_ALIGN(size);
 
 	if (skb_pfmemalloc(skb))
 		gfp_mask |= __GFP_MEMALLOC;
-	data = kmalloc_reserve(size +
-			       SKB_DATA_ALIGN(sizeof(struct skb_shared_info)),
-			       gfp_mask, NUMA_NO_NODE, NULL);
+	data = kmalloc_reserve(size + SKB_DATA_ALIGN(sizeof(struct skb_shared_info)),
+			       gfp_mask, NUMA_NO_NODE, NULL, &alloc_size);
 	if (!data)
 		return -ENOMEM;
-	size = SKB_WITH_OVERHEAD(ksize(data));
+	size = SKB_WITH_OVERHEAD(alloc_size);
 
 	memcpy((struct skb_shared_info *)(data + size), skb_shinfo(skb),
 	       offsetof(struct skb_shared_info, frags[0]));

-- 
2.34.1

-- 
Kees Cook
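To make the patch's arithmetic concrete, here is a standalone model of
the sizing steps __alloc_skb() performs after this change. The macros
mirror the shape of the definitions in include/linux/skbuff.h, but the
cache-line size, the size of struct skb_shared_info, and the pure
power-of-two bucket model are assumed demo values, not the kernel's
actual configuration-dependent constants:

#include <stdio.h>
#include <stddef.h>

#define SMP_CACHE_BYTES		64	/* assumed L1 cache line size */
#define SKB_DATA_ALIGN(x)	(((x) + SMP_CACHE_BYTES - 1) & \
				 ~((size_t)SMP_CACHE_BYTES - 1))
#define SHINFO_SIZE		320	/* assumed sizeof(struct skb_shared_info) */
#define SKB_WITH_OVERHEAD(x)	((x) - SKB_DATA_ALIGN(SHINFO_SIZE))

/* Crude power-of-two stand-in for kmalloc_size_roundup(). */
static size_t bucket_roundup(size_t size)
{
	size_t bucket = 8;

	while (bucket < size)
		bucket <<= 1;
	return bucket;
}

int main(void)
{
	size_t request = 1500;	/* caller's payload, e.g. an MTU-sized frame */

	/* The steps now done before/inside kmalloc_reserve(): */
	size_t size = SKB_DATA_ALIGN(request);		/* 1536 */
	size += SKB_DATA_ALIGN(SHINFO_SIZE);		/* 1856 */
	size_t alloc_size = bucket_roundup(size);	/* 2048 */

	/* What v2 recovered afterwards via ksize(data), v3 knows up front: */
	size_t usable = SKB_WITH_OVERHEAD(alloc_size);	/* 1728 */

	printf("request=%zu total=%zu bucket=%zu usable=%zu\n",
	       request, size, alloc_size, usable);
	return 0;
}

The net effect: the usable data area (1728 bytes in this model), which v2
discovered only after allocation via ksize(), is now derived from a size
the compiler can see at the allocation site, which is what lets the
__alloc_size() hint work again.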