Date: Thu, 24 Sep 2020 08:32:41 -0400
From: Steven Rostedt
To: Thomas Gleixner
Subject: Re: [patch RFC 00/15] mm/highmem: Provide a preemptible variant of kmap_atomic & friends
Message-ID: <20200924083241.314f2102@gandalf.local.home>
In-Reply-To: <871riracgf.fsf@nanos.tec.linutronix.de>
On Thu, 24 Sep 2020 08:57:52 +0200
Thomas Gleixner wrote:

> > Now as for migration disabled nesting, at least now we would have
> > groupings of this, and perhaps the theorists can handle that. I mean,
> > how is this much different than having a bunch of tasks blocked on a
> > mutex whose owner is pinned on a CPU?
> >
> > migrate_disable() is a BKL of pinning affinity.
>
> No. That's just wrong. preempt disable is a concurrency control,

I think you totally misunderstood what I was saying. The above wasn't
about comparing preempt_disable to migrate_disable. It was comparing
migrate_disable to a chain of tasks blocked on mutexes where the top
owner has preempt_disable set. You still have a bunch of tasks that
can't move to other CPUs.

> > If we only have local_lock() available (even on !RT), then it makes
> > the blocking in groups.
> > At least this way you could grep for all the
> > different local_locks in the system and plug that into the algorithm
> > for WCS, just like one would with a bunch of mutexes.
>
> You cannot do that on RT at all where migrate disable is substituting
> preempt disable in spin and rw locks. The result would be the same as
> with a !RT kernel just with horribly bad performance.

Note, the spin and rw locks already have a lock associated with them.
Why would it be any different on RT? I wasn't suggesting adding another
lock inside a spinlock. Why would I recommend THAT? I wasn't
recommending blindly replacing migrate_disable() with local_lock(). I
just meant: expose local_lock() but not migrate_disable().

> > That means the stacking problem has to be solved anyway.
>
> So why on earth do you want to create yet another special duct tape case
> for kmap_local() which proliferates inconsistency instead of aiming for
> consistency across all preemption models?

The idea was to help with the scheduling issue.

Anyway, instead of blocking: what about having a counter of the number
of migrate-disabled tasks per CPU? When a task calls migrate_disable()
while another task on that CPU already has migrate_disable() set, and
the current task has an affinity greater than 1, it first tries to
migrate to another CPU. This way migrate_disable() is less likely to
leave a bunch of tasks blocked on one CPU, serialized by each task
exiting its migrate_disable() section. Yes, there's more overhead, but
it only happens when multiple tasks enter a migrate-disable section on
the same CPU.

-- 
Steve