Date: Thu, 23 Sep 2021 06:28:44 -0700
From: Jakub Kicinski <kuba@kernel.org>
To: Jens Axboe
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>, linux-mm@kvack.org,
 Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
 Andrew Morton, Vlastimil Babka, linux-kernel@vger.kernel.org,
 Matthew Wilcox, John Garry, linux-block@vger.kernel.org,
 netdev@vger.kernel.org
Subject: Re: [RFC v2 PATCH] mm, sl[au]b: Introduce lockless cache
Message-ID: <20210923062844.148e08fd@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
In-Reply-To: <688e6750-87e9-fb44-ce40-943bad072e48@kernel.dk>
References: <20210920154816.31832-1-42.hyeyoo@gmail.com>
 <20210922081906.GA78305@kvm.asia-northeast3-a.c.our-ratio-313919.internal>
 <688e6750-87e9-fb44-ce40-943bad072e48@kernel.dk>

On Wed, 22 Sep 2021 06:58:00 -0600 Jens Axboe wrote:
> > I considered only case 2) when writing code. Well, to support 1),
> > I think there are two ways:
> >
> > a) internally call kmem_cache_free when in_interrupt() is true
> > b) caller must disable interrupt when freeing
> >
> > I think a) is okay, how do you think?
>
> If the API doesn't support freeing from interrupts, then I'd make that
> the rule. Caller should know better if that can happen, and then just
> use kmem_cache_free() if in a problematic context. That avoids polluting
> the fast path with that check. I'd still make it a WARN_ON_ONCE() as
> described and it can get removed later, hopefully.

Shooting from the hip a little, but if I'm getting the context right
this is all very similar to the skb cache, so lockdep_assert_in_softirq()
may be useful:

/*
 * Acceptable for protecting per-CPU resources accessed from BH.
 * Much like in_softirq() - semantics are ambiguous, use carefully.
 */
#define lockdep_assert_in_softirq()					\
do {									\
	WARN_ON_ONCE(__lockdep_enabled &&				\
		     (!in_softirq() || in_irq() || in_nmi()));		\
} while (0)
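
To make the WARN_ON_ONCE() idea above concrete, here is a minimal sketch of
what the free-side guard could look like. The names are hypothetical (they
are not taken from the RFC patch) and the queue handling is only
illustrative:

/* Sketch only -- kmem_cache_free_cached(), struct kmem_lockless_cache and
 * KMEM_LOCKLESS_CACHE_QUEUE_SIZE are made-up names for illustration.
 * The point: warn once about unsupported (hard IRQ / NMI) callers instead
 * of branching to kmem_cache_free() on every call in the fast path.
 */
void kmem_cache_free_cached(struct kmem_cache *s,
			    struct kmem_lockless_cache *cache, void *obj)
{
	/* API rule: freeing from hard IRQ or NMI context is not supported */
	WARN_ON_ONCE(in_irq() || in_nmi());

	if (cache->size < KMEM_LOCKLESS_CACHE_QUEUE_SIZE) {
		cache->queue[cache->size++] = obj;
		return;
	}

	/* per-CPU queue full: hand the object back to the slab allocator */
	kmem_cache_free(s, obj);
}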
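
And if the cache follows the skb cache rules (touched only from softirq, or
from process context with BH disabled), the assertion could simply sit at
the top of the helpers. Again a sketch with hypothetical names, assuming
the same made-up struct as above:

void *kmem_cache_alloc_cached(struct kmem_cache *s,
			      struct kmem_lockless_cache *cache, gfp_t gfp)
{
	/* effectively a no-op unless lockdep is enabled */
	lockdep_assert_in_softirq();

	if (cache->size)
		return cache->queue[--cache->size];

	/* cache empty: fall back to the regular slab allocation path */
	return kmem_cache_alloc(s, gfp);
}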