From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752426Ab2AZKD3 (ORCPT ); Thu, 26 Jan 2012 05:03:29 -0500
Received: from mail-ee0-f46.google.com ([74.125.83.46]:33004 "EHLO
	mail-ee0-f46.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1752269Ab2AZKDY (ORCPT );
	Thu, 26 Jan 2012 05:03:24 -0500
From: Gilad Ben-Yossef
To: linux-kernel@vger.kernel.org
Cc: Gilad Ben-Yossef, Mel Gorman, KOSAKI Motohiro, Christoph Lameter,
	Chris Metcalf, Peter Zijlstra, Frederic Weisbecker, Russell King,
	linux-mm@kvack.org, Pekka Enberg, Matt Mackall, Sasha Levin,
	Rik van Riel, Andi Kleen, Andrew Morton, Alexander Viro,
	linux-fsdevel@vger.kernel.org, Avi Kivity, Michal Nazarewicz,
	Milton Miller
Subject: [v7 7/8] mm: only IPI CPUs to drain local pages if they exist
Date: Thu, 26 Jan 2012 12:02:00 +0200
Message-Id: <1327572121-13673-8-git-send-email-gilad@benyossef.com>
X-Mailer: git-send-email 1.7.0.4
In-Reply-To: <1327572121-13673-1-git-send-email-gilad@benyossef.com>
References: <1327572121-13673-1-git-send-email-gilad@benyossef.com>
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

Calculate a cpumask of CPUs with per-cpu pages in any zone and only send an
IPI requesting CPUs to drain these pages to the buddy allocator if they
actually have pages when asked to flush.

This patch saves 85%+ of the IPIs asking to drain per-cpu pages in the case
of severe memory pressure that leads to OOM, since in these cases multiple,
possibly concurrent, allocation requests end up in the direct reclaim code
path. The per-cpu pages are then reclaimed on the first allocation failure,
so for most of the subsequent allocation attempts, until the memory pressure
is off (possibly via the OOM killer), there are no per-cpu pages left on
most CPUs (and there can easily be hundreds of them).

This also has the side effect of shortening the average latency of direct
reclaim by one or more orders of magnitude, since waiting for all the CPUs
to ACK the IPI takes a long time.

Tested by running "hackbench 400" on an 8 CPU x86 VM and observing the
difference between the number of direct reclaim attempts that end up in
drain_all_pages() and those where more than half of the online CPUs had any
per-cpu pages, using the vmstat counters introduced in the next patch in the
series and /proc/interrupts.

In the test scenario, this was seen to save around 3600 global IPIs after
triggering an OOM on a concurrent workload:

$ cat /proc/vmstat | tail -n 2
pcp_global_drain 0
pcp_global_ipi_saved 0

$ cat /proc/interrupts | grep CAL
CAL: 1 2 1 2 2 2 2 2 Function call interrupts

$ hackbench 400
[OOM messages snipped]

$ cat /proc/vmstat | tail -n 2
pcp_global_drain 3647
pcp_global_ipi_saved 3642

$ cat /proc/interrupts | grep CAL
CAL: 6 13 6 3 3 3 1 2 7 Function call interrupts

Please note that if the global drain is removed from the direct reclaim
path, as a patch from Mel Gorman currently suggests, this should be replaced
with an on_each_cpu_cond() invocation.
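For illustration only (not part of this patch), a rough sketch of what that
replacement might look like, assuming the on_each_cpu_cond() helper
introduced earlier in this series and assuming the call no longer runs in
the direct reclaim path, so that on_each_cpu_cond() may allocate its cpumask
with GFP_KERNEL. The cpu_has_pcps() helper below is a hypothetical name:

/*
 * Sketch only: select CPUs by predicate instead of building the
 * mask by hand in drain_all_pages().
 */
static bool cpu_has_pcps(int cpu, void *info)
{
	struct zone *zone;

	/* Report whether this CPU holds any per-cpu pages in any zone */
	for_each_populated_zone(zone)
		if (per_cpu_ptr(zone->pageset, cpu)->pcp.count)
			return true;
	return false;
}

void drain_all_pages(void)
{
	/* IPI only the CPUs for which cpu_has_pcps() returns true */
	on_each_cpu_cond(cpu_has_pcps, drain_local_pages, NULL,
			 true, GFP_KERNEL);
}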
Signed-off-by: Gilad Ben-Yossef
CC: Mel Gorman
CC: KOSAKI Motohiro
CC: Christoph Lameter
CC: Chris Metcalf
CC: Peter Zijlstra
CC: Frederic Weisbecker
CC: Russell King
CC: linux-mm@kvack.org
CC: Pekka Enberg
CC: Matt Mackall
CC: Sasha Levin
CC: Rik van Riel
CC: Andi Kleen
CC: Andrew Morton
CC: Alexander Viro
CC: linux-fsdevel@vger.kernel.org
CC: Avi Kivity
CC: Michal Nazarewicz
CC: Milton Miller
---
 mm/page_alloc.c |   31 ++++++++++++++++++++++++++++++-
 1 files changed, 30 insertions(+), 1 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d2186ec..4135983 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1165,7 +1165,36 @@ void drain_local_pages(void *arg)
  */
 void drain_all_pages(void)
 {
-	on_each_cpu(drain_local_pages, NULL, 1);
+	int cpu;
+	struct per_cpu_pageset *pcp;
+	struct zone *zone;
+
+	/* Allocate in the BSS so we wont require allocation in
+	 * direct reclaim path for CONFIG_CPUMASK_OFFSTACK=y
+	 */
+	static cpumask_t cpus_with_pcps;
+
+	/*
+	 * We don't care about racing with CPU hotplug event
+	 * as offline notification will cause the notified
+	 * cpu to drain that CPU pcps and on_each_cpu_mask
+	 * disables preemption as part of its processing
+	 */
+	for_each_online_cpu(cpu) {
+		bool has_pcps = false;
+		for_each_populated_zone(zone) {
+			pcp = per_cpu_ptr(zone->pageset, cpu);
+			if (pcp->pcp.count) {
+				has_pcps = true;
+				break;
+			}
+		}
+		if (has_pcps)
+			cpumask_set_cpu(cpu, &cpus_with_pcps);
+		else
+			cpumask_clear_cpu(cpu, &cpus_with_pcps);
+	}
+	on_each_cpu_mask(&cpus_with_pcps, drain_local_pages, NULL, 1);
 }
 
 #ifdef CONFIG_HIBERNATION
-- 
1.7.0.4