From: Feng Tang <feng.tang@intel.com>
To: Andrew Morton, Vlastimil Babka, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Roman Gushchin,
	Hyeonggon Yoo <42.hyeyoo@gmail.com>, Dmitry Vyukov, Jonathan Corbet
Cc: Dave Hansen, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	kasan-dev@googlegroups.com, Feng Tang <feng.tang@intel.com>
Subject: [PATCH v5 2/4] mm/slub: only zero the requested size of buffer for kzalloc
Date: Wed, 7 Sep 2022 15:10:21 +0800
Message-Id: <20220907071023.3838692-3-feng.tang@intel.com>
In-Reply-To: <20220907071023.3838692-1-feng.tang@intel.com>
References: <20220907071023.3838692-1-feng.tang@intel.com>

kzalloc/kmalloc rounds up the request size to a fixed size (mostly a
power of 2), so the allocated memory can be larger than requested.
Currently the kzalloc family of APIs zeroes all of the allocated
memory.

To detect out-of-bounds usage of this extra allocated memory, zero
only the requested part, so that a sanity check can be added for the
extra space later.

kzalloc users who call ksize() later and utilize this extra space
should be aware that the space is no longer zeroed.

Signed-off-by: Feng Tang <feng.tang@intel.com>
---
 mm/slab.c | 6 +++---
 mm/slab.h | 9 +++++++--
 mm/slub.c | 6 +++---
 3 files changed, 13 insertions(+), 8 deletions(-)
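As a rough sketch of the behavioral change (a hypothetical test module,
not part of this patch; the module name and the 24-byte size are made
up for illustration): a 24-byte request is served from the kmalloc-32
cache, so ksize() reports 32. Before this patch all 32 bytes were
zeroed; after it, only the 24 requested bytes are guaranteed zero.

	#include <linux/init.h>
	#include <linux/module.h>
	#include <linux/slab.h>

	static int __init kzalloc_zero_demo_init(void)
	{
		/* 24 bytes round up to the kmalloc-32 cache */
		char *buf = kzalloc(24, GFP_KERNEL);

		if (!buf)
			return -ENOMEM;

		/*
		 * buf[0..23] are guaranteed to be zero.  Before this
		 * patch buf[24..ksize(buf)-1] were zeroed as well;
		 * after it, those trailing bytes may hold stale data.
		 */
		pr_info("requested 24, usable %zu\n", ksize(buf));

		kfree(buf);
		return 0;
	}

	static void __exit kzalloc_zero_demo_exit(void)
	{
	}

	module_init(kzalloc_zero_demo_init);
	module_exit(kzalloc_zero_demo_exit);
	MODULE_LICENSE("GPL");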
diff --git a/mm/slab.c b/mm/slab.c
index a5486ff8362a..73ecaa7066e1 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3253,7 +3253,7 @@ slab_alloc_node(struct kmem_cache *cachep, struct list_lru *lru, gfp_t flags,
 	init = slab_want_init_on_alloc(flags, cachep);
 
 out:
-	slab_post_alloc_hook(cachep, objcg, flags, 1, &objp, init);
+	slab_post_alloc_hook(cachep, objcg, flags, 1, &objp, init, 0);
 	return objp;
 }
 
@@ -3506,13 +3506,13 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 	 * Done outside of the IRQ disabled section.
 	 */
 	slab_post_alloc_hook(s, objcg, flags, size, p,
-				slab_want_init_on_alloc(flags, s));
+				slab_want_init_on_alloc(flags, s), 0);
 	/* FIXME: Trace call missing. Christoph would like a bulk variant */
 	return size;
 error:
 	local_irq_enable();
 	cache_alloc_debugcheck_after_bulk(s, flags, i, p, _RET_IP_);
-	slab_post_alloc_hook(s, objcg, flags, i, p, false);
+	slab_post_alloc_hook(s, objcg, flags, i, p, false, 0);
 	kmem_cache_free_bulk(s, i, p);
 	return 0;
 }
diff --git a/mm/slab.h b/mm/slab.h
index d0ef9dd44b71..20f9e2a9814f 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -730,12 +730,17 @@ static inline struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s,
 
 static inline void slab_post_alloc_hook(struct kmem_cache *s,
 					struct obj_cgroup *objcg, gfp_t flags,
-					size_t size, void **p, bool init)
+					size_t size, void **p, bool init,
+					unsigned int orig_size)
 {
 	size_t i;
 
 	flags &= gfp_allowed_mask;
 
+	/* If original request size (kmalloc) is not set, use object_size */
+	if (!orig_size)
+		orig_size = s->object_size;
+
 	/*
 	 * As memory initialization might be integrated into KASAN,
 	 * kasan_slab_alloc and initialization memset must be
@@ -746,7 +751,7 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
 	for (i = 0; i < size; i++) {
 		p[i] = kasan_slab_alloc(s, p[i], flags, init);
 		if (p[i] && init && !kasan_has_integrated_init())
-			memset(p[i], 0, s->object_size);
+			memset(p[i], 0, orig_size);
 
 		kmemleak_alloc_recursive(p[i], s->object_size, 1,
 					 s->flags, flags);
 	}
diff --git a/mm/slub.c b/mm/slub.c
index effd994438e6..f523601d3fcf 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3376,7 +3376,7 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s, struct list_l
 	init = slab_want_init_on_alloc(gfpflags, s);
 
 out:
-	slab_post_alloc_hook(s, objcg, gfpflags, 1, &object, init);
+	slab_post_alloc_hook(s, objcg, gfpflags, 1, &object, init, orig_size);
 
 	return object;
 }
@@ -3833,11 +3833,11 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 	 * Done outside of the IRQ disabled fastpath loop.
 	 */
 	slab_post_alloc_hook(s, objcg, flags, size, p,
-				slab_want_init_on_alloc(flags, s));
+				slab_want_init_on_alloc(flags, s), 0);
 	return i;
 error:
 	slub_put_cpu_ptr(s->cpu_slab);
-	slab_post_alloc_hook(s, objcg, flags, i, p, false);
+	slab_post_alloc_hook(s, objcg, flags, i, p, false, 0);
 	kmem_cache_free_bulk(s, i, p);
 	return 0;
 }
-- 
2.34.1