Date: Thu, 20 Dec 2012 21:50:21 +0800
Subject: Re: [GIT PULL] Automatic NUMA Balancing V11
From: Alex Shi
To: Linus Torvalds
Cc: Mel Gorman, Peter Zijlstra, Andrea Arcangeli, Ingo Molnar, Rik van Riel, Johannes Weiner, Hugh Dickins, Thomas Gleixner, Paul Turner, Hillf Danton, David Rientjes, Lee Schermerhorn, Srikar Dronamraju, Aneesh Kumar, Andrew Morton, LKML
References: <20121212100338.GS1009@suse.de>
List-ID: <linux-kernel.vger.kernel.org>

On Mon, Dec 17, 2012 at 7:19 AM, Linus Torvalds wrote:
> On Wed, Dec 12, 2012 at 2:03 AM, Mel Gorman wrote:
>> This is a pull request for "Automatic NUMA Balancing V11". The list
>
> Ok, guys, I've pulled this and pushed out. There were some conflicts
> with both the VM changes and with the scheduler tree, but they were
> pretty small and looked simple, so I fixed them up and hope they all
> work.
>
> Has anybody tested the impact on single-node systems? If distros
> enable this by default (and it does have 'default y', which is a big
> no-no for new features - I undid that part) then there will be tons of
> people running this without actually having multiple sockets. Does it
> gracefully avoid pointless overheads for this case?

I tested your tree up to this patch set under our LKP testing system,
with the benchmarks kbuild, aim9-multitask, specjbb2005 (OpenJDK and
JRockit), hackbench-process/thread, sysbench-fileio-cfq, and multiple
loopback netperf runs, on two laptops: an SNB i7 and a WSM i5.

Only aim9-multitask-nl (2000 loads, increment 100) shows about a 2%
performance drop, on both machines. All the others show no clear
performance change.
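For anyone wanting to reproduce the single-node check Linus is asking about, a rough sketch is below. It only assumes the standard sysfs node layout and the numa_balancing runtime knob that ships with this series; the exact /proc path may differ on older trees, so treat this as illustrative rather than definitive:

```shell
#!/bin/sh
# Count online NUMA nodes via sysfs; a laptop like the SNB i7 above
# should report exactly one.
nodes=$(ls -d /sys/devices/system/node/node[0-9]* 2>/dev/null | wc -l)
echo "NUMA nodes: $nodes"

# If the kernel exposes the runtime knob, report whether automatic
# NUMA balancing is currently enabled (1) or disabled (0).
if [ -r /proc/sys/kernel/numa_balancing ]; then
    echo "numa_balancing: $(cat /proc/sys/kernel/numa_balancing)"
else
    echo "numa_balancing knob not present on this kernel"
fi
```

On a single-node box the interesting question is then whether the balancing paths still add overhead even when there is nothing to balance between.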