From mboxrd@z Thu Jan 1 00:00:00 1970
From: KOSAKI Motohiro
Subject: Re: [PATCH v5 7/8] mm: Only IPI CPUs to drain local pages if they exist
Date: Tue, 03 Jan 2012 12:45:45 -0500
Message-ID: <4F033EC9.4050909@gmail.com>
References: <1325499859-2262-1-git-send-email-gilad@benyossef.com>
 <1325499859-2262-8-git-send-email-gilad@benyossef.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-2022-JP
Content-Transfer-Encoding: 7bit
Cc: linux-kernel@vger.kernel.org, Chris Metcalf, Peter Zijlstra,
 Frederic Weisbecker, Russell King, linux-mm@kvack.org, Pekka Enberg,
 Matt Mackall, Sasha Levin, Rik van Riel, Andi Kleen, Mel Gorman,
 Andrew Morton, Alexander Viro, linux-fsdevel@vger.kernel.org, Avi Kivity
To: Gilad Ben-Yossef
Return-path:
Received: from mail-vx0-f174.google.com ([209.85.220.174]:40276 "EHLO
 mail-vx0-f174.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
 with ESMTP id S1754292Ab2ACRpt (ORCPT );
 Tue, 3 Jan 2012 12:45:49 -0500
In-Reply-To: <1325499859-2262-8-git-send-email-gilad@benyossef.com>
Sender: linux-fsdevel-owner@vger.kernel.org
List-ID:

(1/2/12 5:24 AM), Gilad Ben-Yossef wrote:
> Calculate a cpumask of CPUs with per-cpu pages in any zone
> and only send an IPI requesting CPUs to drain these pages
> to the buddy allocator if they actually have pages when
> asked to flush.
>
> This patch saves 99% of IPIs asking to drain per-cpu
> pages in case of severe memory pressure that leads
> to OOM since in these cases multiple, possibly concurrent,
> allocation requests end up in the direct reclaim code
> path so when the per-cpu pages end up reclaimed on first
> allocation failure for most of the proceeding allocation
> attempts until the memory pressure is off (possibly via
> the OOM killer) there are no per-cpu pages on most CPUs
> (and there can easily be hundreds of them).
>
> This also has the side effect of shortening the average
> latency of direct reclaim by 1 or more orders of magnitude
> since waiting for all the CPUs to ACK the IPI takes a
> long time.
>
> Tested by running "hackbench 400" on a 4 CPU x86 otherwise
> idle VM and observing the difference between the number
> of direct reclaim attempts that end up in drain_all_pages()
> and those where more than 1/2 of the online CPUs had any
> per-cpu page in them, using the vmstat counters introduced
> in the next patch in the series and using /proc/interrupts.
>
> In the test scenario, this saved around 500 global IPIs.
> After triggering an OOM:
>
> $ cat /proc/vmstat
> ...
> pcp_global_drain 627
> pcp_global_ipi_saved 578
>
> I've also seen the number of drains reach 15k calls
> with the saved percentage reaching 99% when there
> are more tasks running during an OOM kill.
>
> Signed-off-by: Gilad Ben-Yossef
> Acked-by: Christoph Lameter
> CC: Chris Metcalf
> CC: Peter Zijlstra
> CC: Frederic Weisbecker
> CC: Russell King
> CC: linux-mm@kvack.org
> CC: Pekka Enberg
> CC: Matt Mackall
> CC: Sasha Levin
> CC: Rik van Riel
> CC: Andi Kleen
> CC: Mel Gorman
> CC: Andrew Morton
> CC: Alexander Viro
> CC: linux-fsdevel@vger.kernel.org
> CC: Avi Kivity
> ---
> Christoph's Ack was for a previous version that allocated
> the cpumask in drain_all_pages().

When you change a patch's design and implementation, ACKs should be
dropped; otherwise you miss the chance to get a good review.

> mm/page_alloc.c |   26 +++++++++++++++++++++++++-
> 1 files changed, 25 insertions(+), 1 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 2b8ba3a..092c331 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -67,6 +67,14 @@ DEFINE_PER_CPU(int, numa_node);
>  EXPORT_PER_CPU_SYMBOL(numa_node);
>  #endif
>
> +/*
> + * A global cpumask of CPUs with per-cpu pages that gets
> + * recomputed on each drain. We use a global cpumask
> + * to avoid allocation on the direct reclaim code path
> + * for CONFIG_CPUMASK_OFFSTACK=y
> + */
> +static cpumask_var_t cpus_with_pcps;
> +
>  #ifdef CONFIG_HAVE_MEMORYLESS_NODES
>  /*
>   * N.B., Do NOT reference the '_numa_mem_' per cpu variable directly.
> @@ -1119,7 +1127,19 @@ void drain_local_pages(void *arg)
>   */
>  void drain_all_pages(void)
>  {
> -	on_each_cpu(drain_local_pages, NULL, 1);
> +	int cpu;
> +	struct per_cpu_pageset *pcp;
> +	struct zone *zone;
> +

get_online_cpus() here? (see the sketch at the end of this mail)

> +	for_each_online_cpu(cpu)
> +		for_each_populated_zone(zone) {
> +			pcp = per_cpu_ptr(zone->pageset, cpu);
> +			if (pcp->pcp.count)
> +				cpumask_set_cpu(cpu, cpus_with_pcps);
> +			else
> +				cpumask_clear_cpu(cpu, cpus_with_pcps);

Can the cpumask_* functions be used locklessly here? drain_all_pages()
can be called concurrently by multiple direct reclaimers, and they all
update the same global cpus_with_pcps. (Again, see the sketch at the end.)

> +		}
> +	on_each_cpu_mask(cpus_with_pcps, drain_local_pages, NULL, 1);
>  }
>
>  #ifdef CONFIG_HIBERNATION
> @@ -3623,6 +3643,10 @@ static void setup_zone_pageset(struct zone *zone)
>  void __init setup_per_cpu_pageset(void)
>  {
>  	struct zone *zone;
> +	int ret;
> +
> +	ret = zalloc_cpumask_var(&cpus_with_pcps, GFP_KERNEL);
> +	BUG_ON(!ret);
>
>  	for_each_populated_zone(zone)
>  		setup_zone_pageset(zone);
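
To make the two questions above concrete, here is roughly what I have in
mind. This is only an untested sketch on top of this patch, not a request
for this exact code; pcp_drain_lock is a name I made up purely for
illustration and is not part of the patch:

static DEFINE_MUTEX(pcp_drain_lock);	/* hypothetical, illustration only */

void drain_all_pages(void)
{
	int cpu;
	struct per_cpu_pageset *pcp;
	struct zone *zone;

	/*
	 * Serialize users of the global cpus_with_pcps so that
	 * concurrent direct reclaimers don't set and clear bits under
	 * each other while one of them is already handing the mask to
	 * on_each_cpu_mask().
	 */
	mutex_lock(&pcp_drain_lock);

	/*
	 * Pin CPU hotplug so a CPU can't go away between the
	 * for_each_online_cpu() walk and the IPIs being sent.
	 */
	get_online_cpus();

	for_each_online_cpu(cpu)
		for_each_populated_zone(zone) {
			pcp = per_cpu_ptr(zone->pageset, cpu);
			if (pcp->pcp.count)
				cpumask_set_cpu(cpu, cpus_with_pcps);
			else
				cpumask_clear_cpu(cpu, cpus_with_pcps);
		}
	on_each_cpu_mask(cpus_with_pcps, drain_local_pages, NULL, 1);

	put_online_cpus();
	mutex_unlock(&pcp_drain_lock);
}

Maybe the overlap between concurrent callers is harmless and no lock is
wanted at all; the sketch is only meant to show where the protection
would go if it is needed.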