From: Sebastian Andrzej Siewior
To: Vlastimil Babka
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Christoph Lameter,
 David Rientjes, Pekka Enberg, Joonsoo Kim, Thomas Gleixner, Mel Gorman,
 Jesper Dangaard Brouer, Peter Zijlstra, Jann Horn
Subject: Re: [RFC v2 33/34] mm, slub: use migrate_disable() on PREEMPT_RT
Date: Mon, 14 Jun 2021 16:01:38 +0200
Message-ID: <20210614140138.urxtrsk3jddnv57r@linutronix.de>
References: <20210609113903.1421-1-vbabka@suse.cz>
 <20210609113903.1421-34-vbabka@suse.cz>
 <20210614111619.l3ral7tt2wasvlb4@linutronix.de>
 <390fc59e-17ed-47eb-48ff-8dae93c9a718@suse.cz>
In-Reply-To: <390fc59e-17ed-47eb-48ff-8dae93c9a718@suse.cz>

On 2021-06-14 13:33:43 [+0200], Vlastimil Babka wrote:
> On 6/14/21 1:16 PM, Sebastian Andrzej Siewior wrote:
> > I haven't looked at the series and I have just this tiny question: why
> > did migrate_disable() crash for Mel on !RT and why do you expect that it
> > does not happen on PREEMPT_RT?
> 
> Right, so it's because __slab_alloc() has this optimization to avoid
> re-reading 'c' in case there is no preemption enabled at all (or it's just
> voluntary).
> 
> #ifdef CONFIG_PREEMPTION
> 	/*
> 	 * We may have been preempted and rescheduled on a different
> 	 * cpu before disabling preemption. Need to reload cpu area
> 	 * pointer.
> 	 */
> 	c = slub_get_cpu_ptr(s->cpu_slab);
> #endif
> 
> Mel's config has CONFIG_PREEMPT_VOLUNTARY, which means CONFIG_PREEMPTION
> is not enabled.
> 
> But then later in ___slab_alloc() we have
> 
> 	slub_put_cpu_ptr(s->cpu_slab);
> 	page = new_slab(s, gfpflags, node);
> 	c = slub_get_cpu_ptr(s->cpu_slab);
> 
> And this is not hidden under CONFIG_PREEMPTION, so with the #ifdef bug the
> slub_put_cpu_ptr() did a migrate_enable() with Mel's config, without a
> prior migrate_disable().

Ach, right. The update to this field is done with cmpxchg-double (if I
remember correctly) but I don't remember if this is also re-entry safe.

> If there wasn't the #ifdef PREEMPT_RT bug:
> - this slub_put_cpu_ptr() would translate to put_cpu_ptr() and thus
>   preempt_enable(), which on this config is just a barrier(), so it
>   doesn't matter that there was no matching preempt_disable() before.
> - with PREEMPT_RT, CONFIG_PREEMPTION would be enabled, so the
>   slub_get_cpu_ptr() would do a migrate_disable() and there's no
>   imbalance.
> 
> But now that I dig into this in detail, I can see there might be another
> instance of this imbalance bug: if CONFIG_PREEMPTION is disabled but
> CONFIG_PREEMPT_COUNT is enabled, which seems to be possible in some debug
> scenarios, then preempt_disable()/preempt_enable() still manipulate the
> preempt counter, and compiling them out in __slab_alloc() will cause an
> imbalance.
> 
> So I think the guards in __slab_alloc() should be using
> CONFIG_PREEMPT_COUNT instead of CONFIG_PREEMPTION to be correct on all
> configs. I dare not remove them completely :)

:)

Sebastian
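
---

For illustration, a minimal, self-contained userspace sketch of the imbalance
discussed above. It is not kernel code: the toy_*() helpers, the
TOY_PREEMPTION macro and the plain preempt_count variable are made-up
stand-ins for slub_get_cpu_ptr()/slub_put_cpu_ptr(), CONFIG_PREEMPTION and
the real preemption (or migrate-disable) accounting.

#include <assert.h>
#include <stdio.h>

static int preempt_count;

static void toy_get_cpu_ptr(void)
{
	preempt_count++;	/* stands in for preempt_disable()/migrate_disable() */
}

static void toy_put_cpu_ptr(void)
{
	/* an enable without a matching disable is the reported imbalance */
	assert(preempt_count > 0);
	preempt_count--;	/* stands in for preempt_enable()/migrate_enable() */
}

static void toy_slab_alloc(void)
{
#ifdef TOY_PREEMPTION		/* models the #ifdef CONFIG_PREEMPTION guard */
	toy_get_cpu_ptr();
#endif
	/*
	 * Models the slub_put_cpu_ptr()/new_slab()/slub_get_cpu_ptr()
	 * sequence in ___slab_alloc(), which is not under the guard.
	 */
	toy_put_cpu_ptr();
	toy_get_cpu_ptr();

#ifdef TOY_PREEMPTION
	toy_put_cpu_ptr();
#endif
}

int main(void)
{
	toy_slab_alloc();
	printf("balanced, preempt_count=%d\n", preempt_count);
	return 0;
}

Built without -DTOY_PREEMPTION (cc toy.c && ./a.out), the assert in
toy_put_cpu_ptr() fires on the first put, i.e. an enable with no matching
disable. Built with -DTOY_PREEMPTION, or with the guard keyed on "the
counter exists at all" as the suggested CONFIG_PREEMPT_COUNT check would
be, the counts stay balanced.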