From: Steven Rostedt
To: Thomas Gleixner
Cc: peterz@infradead.org, Linus Torvalds, LKML, linux-arch,
    Paul McKenney, the arch/x86 maintainers, Sebastian Andrzej Siewior,
    Juri Lelli, Vincent Guittot, Dietmar Eggemann, Ben Segall,
    Mel Gorman, Daniel Bristot de Oliveira, Will Deacon, Andrew Morton,
    Linux-MM, Russell King, Linux ARM, Chris Zankel, Max Filippov,
    linux-xtensa@linux-xtensa.org, Jani Nikula, Joonas Lahtinen,
    Rodrigo Vivi, David Airlie, Daniel Vetter, intel-gfx, dri-devel,
    Ard Biesheuvel, Herbert Xu, Vineet Gupta,
    "open list:SYNOPSYS ARC ARCHITECTURE", Arnd Bergmann, Guo Ren,
    linux-csky@vger.kernel.org, Michal Simek, Thomas Bogendoerfer,
    linux-mips@vger.kernel.org, Nick Hu, Greentime Hu, Vincent Chen,
    Michael Ellerman, Benjamin Herrenschmidt, Paul Mackerras,
    linuxppc-dev, "David S. Miller", linux-sparc
Subject: Re: [patch RFC 00/15] mm/highmem: Provide a preemptible variant of kmap_atomic & friends
Date: Thu, 24 Sep 2020 08:32:41 -0400
Message-ID: <20200924083241.314f2102@gandalf.local.home>
In-Reply-To: <871riracgf.fsf@nanos.tec.linutronix.de>

On Thu, 24 Sep 2020 08:57:52 +0200
Thomas Gleixner wrote:

> > Now as for migration disabled nesting, at least now we would have
> > groupings of this, and perhaps the theorists can handle that. I mean,
> > how is this much different than having a bunch of tasks blocked on a
> > mutex where the owner is pinned on a CPU?
> >
> > migrate_disable() is a BKL of pinning affinity.
>
> No. That's just wrong.
> preempt disable is a concurrency control,

I think you totally misunderstood what I was saying. The above wasn't
about comparing preempt_disable to migrate_disable. It was comparing
migrate_disable to a chain of tasks blocked on mutexes where the top
owner has preempt_disable set. You still have a bunch of tasks that
can't move to other CPUs.

> > If we only have local_lock() available (even on !RT), then it makes
> > the blocking in groups. At least this way you could grep for all the
> > different local_locks in the system and plug that into the algorithm
> > for WCS, just like one would with a bunch of mutexes.
>
> You cannot do that on RT at all where migrate disable is substituting
> preempt disable in spin and rw locks. The result would be the same as
> with a !RT kernel just with horribly bad performance.

Note, the spin and rwlocks already have a lock associated with them. Why
would it be any different on RT? I wasn't suggesting adding another lock
inside a spinlock. Why would I recommend THAT? I wasn't recommending
blindly replacing migrate_disable() with local_lock(). I just meant
expose local_lock() but not migrate_disable() (see the first sketch at
the end of this mail).

> > That means the stacking problem has to be solved anyway.
>
> So why on earth do you want to create yet another special duct tape case
> for kmap_local() which proliferates inconsistency instead of aiming for
> consistency across all preemption models?

The idea was to help with the scheduling issue.

Anyway, instead of blocking, what about having a per-CPU counter of the
number of migrate-disabled tasks? When a task takes migrate_disable()
while another task on that CPU already has migrate_disable() set, and
the current task's affinity allows more than one CPU, it first tries to
migrate to another CPU (see the second sketch below). This way
migrate_disable() is less likely to end up with a bunch of tasks blocked
on one CPU, serialized by each task exiting its migrate_disable()
section. Yes, there's more overhead, but it only happens when multiple
tasks enter a migrate disable section on the same CPU.

-- Steve
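
To make "expose local_lock() but not migrate_disable()" concrete, here
is a minimal sketch using the existing local_lock API; struct foo_pcpu
and foo_update() are made-up names for illustration, not code from the
patch series:

#include <linux/local_lock.h>
#include <linux/percpu.h>

/* Hypothetical per-CPU data; names are illustrative only. */
struct foo_pcpu {
	local_lock_t	lock;
	int		counter;
};

static DEFINE_PER_CPU(struct foo_pcpu, foo_pcpu) = {
	.lock = INIT_LOCAL_LOCK(lock),
};

static void foo_update(void)
{
	/*
	 * On !RT this disables preemption; on RT it takes a per-CPU
	 * spinlock, which implies migrate disable. Either way the
	 * lock is a named object that lockdep and a grep for
	 * local_lock can find, unlike a bare migrate_disable().
	 */
	local_lock(&foo_pcpu.lock);
	this_cpu_inc(foo_pcpu.counter);
	local_unlock(&foo_pcpu.lock);
}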
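
And here is a rough sketch of the counter idea, nothing more than
pseudo-code: the migrate_disable_count and nr_migrate_disabled fields
and try_push_to_allowed_cpu() are invented names, standing in for a
per-runqueue counter and a push operation along the lines of the
existing RT push/pull machinery (which already honors cpus_allowed):

void migrate_disable(void)
{
	struct task_struct *p = current;

	if (p->migrate_disable_count++)
		return;		/* nested, already pinned */

	if (this_rq()->nr_migrate_disabled && p->nr_cpus_allowed > 1) {
		/*
		 * Another task on this CPU is already in a migrate
		 * disabled section. Try to move to another allowed
		 * CPU before pinning, so pinned tasks do not pile up
		 * behind each other on one CPU.
		 */
		try_push_to_allowed_cpu(p);
	}

	this_rq()->nr_migrate_disabled++;
}

void migrate_enable(void)
{
	if (--current->migrate_disable_count)
		return;		/* still nested */

	this_rq()->nr_migrate_disabled--;
}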