Date: Thu, 17 Jun 2021 07:10:41 -0700
From: "Andy Lutomirski"
To: "Peter Zijlstra (Intel)"
Cc: "Nicholas Piggin", "Rik van Riel", "Andrew Morton", "Dave Hansen",
 "Linux Kernel Mailing List", linux-mm@kvack.org, "Mathieu Desnoyers",
 "the arch/x86 maintainers", "Paul E. McKenney"
Subject: Re: [RFC][PATCH] sched: Use lightweight hazard pointers to grab lazy mms
References: <1623816595.myt8wbkcar.astroid@bobo.none>
 <617cb897-58b1-8266-ecec-ef210832e927@kernel.org>
 <1623893358.bbty474jyy.astroid@bobo.none>
 <58b949fb-663e-4675-8592-25933a3e361c@www.fastmail.com>
Content-Type: text/plain
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Jun 17, 2021, at 2:08 AM, Peter Zijlstra wrote:
> On Wed, Jun 16, 2021 at 10:32:15PM -0700, Andy Lutomirski wrote:
> > Here it is.  Not even boot tested!
>
> It is now, it even builds a kernel..
> so it must be perfect :-)
>
> > https://git.kernel.org/pub/scm/linux/kernel/git/luto/linux.git/commit/?h=sched/lazymm&id=ecc3992c36cb88087df9c537e2326efb51c95e31
>
> Since I had to turn it into a patch to post, so that I could comment on
> it, I've cleaned it up a little for you.
>
> I'll reply to self with some notes, but I think I like it.
>
> ---
>  arch/x86/include/asm/mmu.h |   5 ++
>  include/linux/sched/mm.h   |   3 +
>  kernel/fork.c              |   2 +
>  kernel/sched/core.c        | 138 ++++++++++++++++++++++++++++++++++++---------
>  kernel/sched/sched.h       |  10 +++-
>  5 files changed, 130 insertions(+), 28 deletions(-)
>
> diff --git a/arch/x86/include/asm/mmu.h b/arch/x86/include/asm/mmu.h
> index 5d7494631ea9..ce94162168c2 100644
> --- a/arch/x86/include/asm/mmu.h
> +++ b/arch/x86/include/asm/mmu.h
> @@ -66,4 +66,9 @@ typedef struct {
>  void leave_mm(int cpu);
>  #define leave_mm leave_mm
>  
> +/* On x86, mm_cpumask(mm) contains all CPUs that might be lazily using mm */
> +#define for_each_possible_lazymm_cpu(cpu, mm) \
> +	for_each_cpu((cpu), mm_cpumask((mm)))
> +
> +
>  #endif /* _ASM_X86_MMU_H */
> diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
> index e24b1fe348e3..5c7eafee6fea 100644
> --- a/include/linux/sched/mm.h
> +++ b/include/linux/sched/mm.h
> @@ -77,6 +77,9 @@ static inline bool mmget_not_zero(struct mm_struct *mm)
>  
>  /* mmput gets rid of the mappings and all user-space */
>  extern void mmput(struct mm_struct *);
> +
> +extern void mm_unlazy_mm_count(struct mm_struct *mm);

You didn't like mm_many_words_in_the_name_of_the_function()? :)

> -	if (mm) {
> -		membarrier_mm_sync_core_before_usermode(mm);
> -		mmdrop(mm);
> -	}

What happened here?  I think that my membarrier work should land before this patch.
Specifically, I want the scheduler to be in a state where nothing depends on the barrier-ness of mmdrop() so that we can change the mmdrop() calls to stop being barriers without our brains exploding trying to understand two different fancy synchronization schemes at the same time.

Other than that I like your cleanups.