From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Tue, 23 Jun 2015 11:26:26 -0700
From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Peter Zijlstra
Cc: Oleg Nesterov, tj@kernel.org, mingo@redhat.com,
	linux-kernel@vger.kernel.org, der.herr@hofr.at, dave@stgolabs.net,
	riel@redhat.com, viro@ZenIV.linux.org.uk,
	torvalds@linux-foundation.org
Subject: Re: [RFC][PATCH 12/13] stop_machine: Remove lglock
Message-ID: <20150623182626.GO3892@linux.vnet.ibm.com>
Reply-To: paulmck@linux.vnet.ibm.com
References: <20150622121623.291363374@infradead.org>
 <20150622122256.765619039@infradead.org>
 <20150622222152.GA4460@redhat.com>
 <20150623100932.GB3644@twins.programming.kicks-ass.net>
 <20150623105548.GE18673@twins.programming.kicks-ass.net>
 <20150623112041.GF18673@twins.programming.kicks-ass.net>
 <20150623130826.GG18673@twins.programming.kicks-ass.net>
 <20150623173038.GJ3892@linux.vnet.ibm.com>
 <20150623180411.GF3644@twins.programming.kicks-ass.net>
In-Reply-To: <20150623180411.GF3644@twins.programming.kicks-ass.net>

On Tue, Jun 23, 2015 at 08:04:11PM +0200, Peter Zijlstra wrote:
> On Tue, Jun 23, 2015 at 10:30:38AM -0700, Paul E. McKenney wrote:
> > Good, you don't need this because you can check for dynticks later.
> > You will need to check for offline CPUs.
> 
> 	get_online_cpus()
> 	for_each_online_cpus() {
> 		...
> 	}
> 
> is what the new code does.

Ah, I missed that this was not deleted.

> > > -	/*
> > > -	 * Each pass through the following loop attempts to force a
> > > -	 * context switch on each CPU.
> > > -	 */
> > > -	while (try_stop_cpus(cma ? cm : cpu_online_mask,
> > > -			     synchronize_sched_expedited_cpu_stop,
> > > -			     NULL) == -EAGAIN) {
> > > -		put_online_cpus();
> > > -		atomic_long_inc(&rsp->expedited_tryfail);
> > > -
> > > -		/* Check to see if someone else did our work for us. */
> > > -		s = atomic_long_read(&rsp->expedited_done);
> > > -		if (ULONG_CMP_GE((ulong)s, (ulong)firstsnap)) {
> > > -			/* ensure test happens before caller kfree */
> > > -			smp_mb__before_atomic(); /* ^^^ */
> > > -			atomic_long_inc(&rsp->expedited_workdone1);
> > > -			free_cpumask_var(cm);
> > > -			return;
> > 
> > Here you lose batching.  Yeah, I know that synchronize_sched_expedited()
> > is -supposed- to be used sparingly, but it is not cool for the kernel
> > to melt down just because some creative user found a way to heat up a
> > code path.  Need a mutex_trylock() with a counter and checking for
> > others having already done the needed work.
> 
> I really think you're making that expedited nonsense far too accessible.

This has nothing to do with accessibility and everything to do with
robustness.  And with me not becoming the triage center for too many
non-RCU bugs.

> But it was exactly that trylock I was trying to get rid of.

OK.  Why, exactly?
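To be concrete, the batching I have in mind is roughly the following
sketch (illustrative only: do_expedited_grace_period() and the exp_*
names are made up here, and the memory ordering that the real code
needs for the caller's kfree() is glossed over):

	static DEFINE_MUTEX(exp_mutex);	/* hypothetical names throughout */
	static atomic_long_t exp_done;	/* # expedited GPs completed */

	void hypothetical_sync_sched_expedited(void)
	{
		/* Any grace period ending at or after this snapshot covers us. */
		unsigned long snap = (ulong)atomic_long_read(&exp_done) + 1;

		while (!mutex_trylock(&exp_mutex)) {
			/* Did some other task already do our work for us? */
			if (ULONG_CMP_GE((ulong)atomic_long_read(&exp_done),
					 snap))
				return;	/* Yes, batched onto an earlier GP. */
			schedule_timeout_uninterruptible(1);
		}

		/* Recheck under the lock before doing the expensive part. */
		if (!ULONG_CMP_GE((ulong)atomic_long_read(&exp_done), snap)) {
			do_expedited_grace_period();	/* assumed helper */
			atomic_long_inc(&exp_done);
		}
		mutex_unlock(&exp_mutex);
	}

The point being that a storm of concurrent callers mostly waits briefly
and returns, rather than each of them hammering try_stop_cpus() in turn.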
> > And we still need to be able to drop back to synchronize_sched()
> > (AKA wait_rcu_gp(call_rcu_sched) in this case) in case we have both a
> > creative user and a long-running RCU-sched read-side critical section.
> 
> No, a long-running RCU-sched read-side is a bug and we should fix that,
> its called a preemption-latency, we don't like those.

Yes, we should fix them.  No, they absolutely must not result in a
meltdown of some unrelated portion of the kernel (like RCU), particularly
if this situation occurs on some system running a production workload
that doesn't happen to care about preemption latency.

> > > +	for_each_online_cpu(cpu) {
> > > +		struct rcu_dynticks *rdtp = &per_cpu(rcu_dynticks, cpu);
> > > 
> > > -		/* Recheck to see if someone else did our work for us. */
> > > -		s = atomic_long_read(&rsp->expedited_done);
> > > -		if (ULONG_CMP_GE((ulong)s, (ulong)firstsnap)) {
> > > -			/* ensure test happens before caller kfree */
> > > -			smp_mb__before_atomic(); /* ^^^ */
> > > -			atomic_long_inc(&rsp->expedited_workdone2);
> > > -			free_cpumask_var(cm);
> > > -			return;
> > > -		}
> > > +		/* Offline CPUs, idle CPUs, and any CPU we run on are quiescent. */
> > > +		if (!(atomic_add_return(0, &rdtp->dynticks) & 0x1))
> > > +			continue;
> > 
> > Let's see...  This does work for idle CPUs and for nohz_full CPUs running
> > in userspace.
> > 
> > It does not work for the current CPU, so the check needs an additional
> > check against raw_smp_processor_id(), which is easy enough to add.
> 
> Right, realized after I send it out, but it _should_ work for the
> current cpu too. Just pointless doing it.

OK, and easily fixed up in any case.

> > There always has been a race window involving CPU hotplug.
> 
> There is no hotplug race, the entire thing has get_online_cpus() held
> across it. Which I would like to get rid of, but not urgent.

> > > +		stop_one_cpu(cpu, synchronize_sched_expedited_cpu_stop, NULL);
> > 
> > My thought was to use smp_call_function_single(), and to have the function
> > called recheck dyntick-idle state, avoiding doing a set_tsk_need_resched()
> > if so.
> 
> set_tsk_need_resched() is buggy and should not be used.

OK, what API is used for this purpose?

> > This would result in a single pass through schedule() instead
> > of stop_one_cpu()'s double context switch.  It would likely also require
> > some rework of rcu_note_context_switch(), which stop_one_cpu() avoids
> > the need for.
> 
> _IF_ you're going to touch rcu_note_context_switch(), you might as well
> use a completion, set it for the number of CPUs that need a resched,
> spray resched-IPI and have rcu_note_context_switch() do a complete().
> 
> But I would really like to avoid adding code to
> rcu_note_context_switch(), because we run that on _every_ single context
> switch.

I believe that I can rework the current code to get the effect without
increased overhead, given that I have no intention of adding the
complete().  Adding the complete -would- add overhead to that fastpath.

							Thanx, Paul
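P.S.  For concreteness, the completion-based scheme you describe might
look something like the sketch below (all names illustrative, and the
race between setting the per-CPU flag and the target CPU's next context
switch is glossed over).  The note_expedited_qs() part is exactly the
code I want to keep out of the context-switch fastpath:

	static struct completion exp_comp;	/* hypothetical names */
	static atomic_t exp_pending;
	static DEFINE_PER_CPU(bool, exp_need_qs);

	static void start_expedited_wait(const struct cpumask *need_qs)
	{
		int cpu;

		init_completion(&exp_comp);
		atomic_set(&exp_pending, cpumask_weight(need_qs));
		for_each_cpu(cpu, need_qs) {
			per_cpu(exp_need_qs, cpu) = true;
			smp_send_reschedule(cpu);	/* "spray resched-IPI" */
		}
		wait_for_completion(&exp_comp);
	}

	/* Would be called from rcu_note_context_switch(). */
	static void note_expedited_qs(void)
	{
		if (likely(!__this_cpu_read(exp_need_qs)))
			return;
		__this_cpu_write(exp_need_qs, false);
		if (atomic_dec_and_test(&exp_pending))
			complete(&exp_comp);
	}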