Date: Tue, 24 Aug 2021 11:02:06 +0100
From: Matthew Wilcox <willy@infradead.org>
To: Vlastimil Babka
Cc: linux-mm@kvack.org, Andrew Morton, Muchun Song, Chris Down,
	Michal Hocko, Chunxin Zang
Subject: Re: [PATCH] mm, vmscan: guarantee drop_slab_node() termination
In-Reply-To: <20210818152239.25502-1-vbabka@suse.cz>

On Wed, Aug 18, 2021 at 05:22:39PM +0200, Vlastimil Babka wrote:
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 403a175a720f..ef3554314b47 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -936,6 +936,7 @@ static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
>  void drop_slab_node(int nid)
>  {
>  	unsigned long freed;
> +	int shift = 0;
>  
>  	do {
>  		struct mem_cgroup *memcg = NULL;
> @@ -948,7 +949,7 @@ void drop_slab_node(int nid)
>  		do {
>  			freed += shrink_slab(GFP_KERNEL, nid, memcg, 0);
>  		} while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL);
> -	} while (freed > 10);
> +	} while ((freed >> shift++) > 0);

This can, if you're really unlucky, produce UB.  If you free 2^63 items
when shift is 63, then 2^63 >> 63 is 1 and shift becomes 64, producing
UB on the next iteration.  We could do:

	} while (shift < BITS_PER_LONG && (freed >> shift++) > 0);

but honestly, that feels silly.  How about:

	} while ((freed >> shift++) > 1);

almost exactly as arbitrary, but guarantees no UB.
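To make the difference concrete, here is a small userspace sketch of the two
termination conditions.  The passes_* helpers and the constant per-pass freed
count are hypothetical stand-ins for the shrink_slab() loop, not the kernel
code itself:

```c
#include <limits.h>	/* CHAR_BIT, ULONG_MAX */

/* Model of drop_slab_node()'s retry loop: pretend each full shrink
 * pass frees the same number of objects, and count how many passes
 * the termination condition permits. */

/* Variant 1: "(freed >> shift++) > 0", with an explicit guard so that
 * shift can never reach the word size (avoiding the UB described above). */
static int passes_gt0_guarded(unsigned long freed_per_pass)
{
	const int bits = (int)(sizeof(unsigned long) * CHAR_BIT);
	unsigned long freed;
	int shift = 0, passes = 0;

	do {
		freed = freed_per_pass;	/* stand-in for the shrink_slab() sum */
		passes++;
	} while (shift < bits && (freed >> shift++) > 0);

	return passes;
}

/* Variant 2: "(freed >> shift++) > 1" needs no guard: by the time shift
 * reaches 63, freed >> 63 is at most 1, so the loop exits before any
 * shift by the full word size can happen. */
static int passes_gt1(unsigned long freed_per_pass)
{
	unsigned long freed;
	int shift = 0, passes = 0;

	do {
		freed = freed_per_pass;
		passes++;
	} while ((freed >> shift++) > 1);

	return passes;
}
```

Even in the worst case of every bit set in freed, variant 2 runs at most
BITS_PER_LONG passes and never evaluates a shift equal to the word width,
which is the point of the "> 1" suggestion; both still terminate in roughly
log2(freed) passes for realistic freed counts.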