Date: Mon, 30 Jan 2012 13:49:33 -0800
From: Andrew Morton
To: Gilad Ben-Yossef
Cc: linux-kernel@vger.kernel.org, Mel Gorman, KOSAKI Motohiro,
    Christoph Lameter, Chris Metcalf, Peter Zijlstra, Frederic Weisbecker,
    Russell King, linux-mm@kvack.org, Pekka Enberg, Matt Mackall,
    Sasha Levin, Rik van Riel, Andi Kleen, Alexander Viro,
    linux-fsdevel@vger.kernel.org, Avi Kivity, Michal Nazarewicz,
    Milton Miller
Subject: Re: [v7 7/8] mm: only IPI CPUs to drain local pages if they exist
Message-Id: <20120130134933.39779c48.akpm@linux-foundation.org>
References: <1327572121-13673-1-git-send-email-gilad@benyossef.com>
 <1327572121-13673-8-git-send-email-gilad@benyossef.com>
 <20120127161236.ff1e7e7e.akpm@linux-foundation.org>

On Sun, 29 Jan 2012 14:18:32 +0200 Gilad Ben-Yossef wrote:

> On Sat, Jan 28, 2012 at 2:12 AM, Andrew Morton wrote:
> > On Thu, 26 Jan 2012 12:02:00 +0200 Gilad Ben-Yossef wrote:
> >
> >> Calculate a cpumask of CPUs with per-cpu pages in any zone
> >> and only send an IPI requesting CPUs to drain these pages
> >> to the buddy allocator if they actually have pages when
> >> asked to flush.
> >
> > ...
> >
> > Can we end up sending an IPI to a now-unplugged CPU?  That won't work
> > very well if that CPU is now sitting on its sysadmin's desk.
>
> Nope.  on_each_cpu_mask() disables preemption and calls
> smp_call_function_many(), which then checks the mask against
> cpu_online_mask.

OK.  A general rule of thumb: if a reviewer asked about something, it is
likely that others will wonder the same thing when reading the code later
on.  So treat reviewer questions as a sign that the code needs additional
comments!
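Roughly, as I understand the helper (a from-memory sketch, not a verbatim
copy of the code): get_cpu() disables preemption, which is what stops a CPU
from being unplugged underneath us, and smp_call_function_many() only talks
to CPUs that are in both the passed-in mask and cpu_online_mask:

void on_each_cpu_mask(const struct cpumask *mask, smp_call_func_t func,
		      void *info, bool wait)
{
	int cpu = get_cpu();		/* disables preemption */

	/* IPIs only the online CPUs present in 'mask' */
	smp_call_function_many(mask, func, info, wait);

	/* if the current CPU is in 'mask', run func() here as well */
	if (cpumask_test_cpu(cpu, mask)) {
		local_irq_disable();
		func(info);
		local_irq_enable();
	}
	put_cpu();
}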
> > There's also the case of CPU online.  We could end up failing to IPI a
> > CPU which now has some percpu pages.  That's not at all serious - 90%
> > is good enough in page reclaim.  But this thinking merits a mention in
> > the comment.  Or we simply make this code hotplug-safe.
>
> hmm.. I'm probably daft, but I don't see how to make the code hotplug-safe
> for the CPU online case.  I mean, let's say we disable preemption
> throughout the entire ordeal, and then a CPU comes online and gets itself
> some percpu pages *after* we've calculated the masks, sent the IPIs and
> are waiting for the whole thing to finish, but before we've returned...

This is inherent to the whole drain-pages design - it's only a best-effort
thing and there's nothing to prevent other CPUs from undoing your work 2
nanoseconds later.

The exception to this is the case of suspend, which drains the queues when
all tasks (and, hopefully, IRQs) have been frozen.  That is the only way to
make draining 100% "reliable".

> I might be missing something here, but I think that unless you have some
> other means of stopping newly hotplugged CPUs from grabbing per-cpu pages,
> there is nothing you can do in this code to stop it.  Maybe make the race
> window shorter, that's all.
>
> Would adding a comment such as the following be OK?
>
> "This code is protected against sending an IPI to an offline CPU but does
> not guarantee sending an IPI to newly hotplugged CPUs."

Looks OK.  I'd also mention *how* this protection comes about:
on_each_cpu_mask() blocks hotplug and won't talk to offlined CPUs.
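Something along these lines, i.e. the changelog's "calculate a cpumask of
CPUs with per-cpu pages" step with the suggested comment (plus the *how*)
folded in.  This is only a sketch of the approach, not a verbatim copy of
the patch - the struct/field names (zone->pageset, pcp->pcp.count) are my
assumptions:

/*
 * Spill all the per-cpu pages from all CPUs back into the buddy allocator.
 *
 * Note that this code is protected against sending an IPI to an offline
 * CPU but does not guarantee sending an IPI to newly hotplugged CPUs:
 * on_each_cpu_mask() blocks hotplug and won't talk to offlined CPUs.
 */
void drain_all_pages(void)
{
	int cpu;
	struct per_cpu_pageset *pcp;
	struct zone *zone;
	static cpumask_t cpus_with_pcps;

	/* Only ask for an IPI on CPUs that have per-cpu pages in some zone */
	for_each_online_cpu(cpu) {
		bool has_pcps = false;

		for_each_populated_zone(zone) {
			pcp = per_cpu_ptr(zone->pageset, cpu);
			if (pcp->pcp.count) {
				has_pcps = true;
				break;
			}
		}
		if (has_pcps)
			cpumask_set_cpu(cpu, &cpus_with_pcps);
		else
			cpumask_clear_cpu(cpu, &cpus_with_pcps);
	}
	on_each_cpu_mask(&cpus_with_pcps, drain_local_pages, NULL, 1);
}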