From: Vlastimil Babka
To: Andrew Morton, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@android.com, vinmenon@codeaurora.org, Kees Cook, Matthew Garrett, Roman Gushchin, Vlastimil Babka, Jann Horn, Vijayanand Jitta
Subject: [PATCH 3/9] mm, slub: remove runtime allocation order changes
Date: Wed, 10 Jun 2020 18:31:29 +0200
Message-Id: <20200610163135.17364-4-vbabka@suse.cz>
In-Reply-To: <20200610163135.17364-1-vbabka@suse.cz>
References: <20200610163135.17364-1-vbabka@suse.cz>

SLUB allows runtime changing of the page allocation order by writing into the
/sys/kernel/slab/<cache>/order file. Jann has reported [1] that this interface
allows the order to be set too small, leading to crashes.

While it's possible to fix the immediate issue, closer inspection reveals
potential races. Storing the new order calls calculate_sizes(), which
non-atomically updates a lot of kmem_cache fields while the cache is still in
use. Unexpected behavior might occur even if the fields are set to the same
values they had before.

This could be fixed by splitting out the part of calculate_sizes() that depends
on forced_order, so that we only update the kmem_cache.oo field.
This could still race with init_cache_random_seq(), shuffle_freelist() and
allocate_slab(). Perhaps it's possible to audit the code and e.g. add some
READ_ONCE/WRITE_ONCE accesses, but it might be easier to just remove the
runtime order changes, which is what this patch does. If there are valid use
cases for per-cache order setting, we could e.g. extend the boot parameters
to do that.

[1] https://lore.kernel.org/r/CAG48ez31PP--h6_FzVyfJ4H86QYczAFPdxtJHUEEan+7VJETAQ@mail.gmail.com

Reported-by: Jann Horn
Signed-off-by: Vlastimil Babka
Reviewed-by: Kees Cook
Acked-by: Roman Gushchin
---
 mm/slub.c | 19 +------------------
 1 file changed, 1 insertion(+), 18 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index e254164d6cae..c5f3f2424392 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -5111,28 +5111,11 @@ static ssize_t objs_per_slab_show(struct kmem_cache *s, char *buf)
 }
 SLAB_ATTR_RO(objs_per_slab);
 
-static ssize_t order_store(struct kmem_cache *s,
-				const char *buf, size_t length)
-{
-	unsigned int order;
-	int err;
-
-	err = kstrtouint(buf, 10, &order);
-	if (err)
-		return err;
-
-	if (order > slub_max_order || order < slub_min_order)
-		return -EINVAL;
-
-	calculate_sizes(s, order);
-	return length;
-}
-
 static ssize_t order_show(struct kmem_cache *s, char *buf)
 {
 	return sprintf(buf, "%u\n", oo_order(s->oo));
 }
-SLAB_ATTR(order);
+SLAB_ATTR_RO(order);
 
 static ssize_t min_partial_show(struct kmem_cache *s, char *buf)
 {
-- 
2.26.2