Date: Fri, 20 Mar 2015 09:56:06 +0000
From: Mel Gorman
To: Linus Torvalds
Cc: Dave Chinner, Ingo Molnar, Andrew Morton, Aneesh Kumar,
	Linux Kernel Mailing List, Linux-MM, xfs@oss.sgi.com, ppc-dev
Subject: Re: [PATCH 4/4] mm: numa: Slow PTE scan rate if migration failures occur
Message-ID: <20150320095606.GE3087@suse.de>
References: <20150317070655.GB10105@dastard> <20150317205104.GA28621@dastard>
	<20150317220840.GC28621@dastard> <20150319224143.GI10105@dastard>
User-Agent: Mutt/1.5.21 (2010-09-15)

On Thu, Mar 19, 2015 at 04:05:46PM -0700, Linus Torvalds wrote:
> On Thu, Mar 19, 2015 at 3:41 PM, Dave Chinner wrote:
> >
> > My recollection wasn't faulty - I pulled it from an earlier email.
> > That said, the original measurement might have been faulty. I ran
> > the numbers again on the 3.19 kernel I saved away from the original
> > testing. That came up at 235k, which is pretty much the same as
> > yesterday's test. The runtime, however, is unchanged from my original
> > measurements of 4m54s (pte_hack came in at 5m20s).
>
> Ok. Good. So the "more than an order of magnitude difference" was
> really about measurement differences, not quite as real. Looks like
> more a "factor of two" than a factor of 20.
>
> Did you do the profiles the same way? Because that would explain the
> differences in the TLB flush percentages too (the "1.4% from
> tlb_invalidate_range()" vs "pretty much everything from migration").
>
> The runtime variation does show that there's some *big* subtle
> difference for the numa balancing in the exact TNF_NO_GROUP details.
>

TNF_NO_GROUP affects whether the scheduler tries to group related
processes together. Whether migration occurs depends on which node a
process is scheduled on. If processes are grouped too aggressively or
inappropriately, it is possible that a bug causes the load balancer to
move processes off a node (one possible migration) while NUMA balancing
tries to pull them back (another possible migration). Small bugs there
can result in excessive migration.

-- 
Mel Gorman
SUSE Labs
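
For reference, the TNF_NO_GROUP gate being discussed sits in the NUMA
hinting fault path. A paraphrased sketch of the 3.19-era logic follows
(not the exact code from the patch under discussion, and details vary
between kernel versions):

	/* mm/memory.c, do_numa_page() -- paraphrased sketch */
	/*
	 * Avoid grouping on read-only pages (DSO/COW pages in particular);
	 * they tend to be shared, so grouping on them is misleading.
	 */
	if (!pte_write(pte))
		flags |= TNF_NO_GROUP;
	...
	task_numa_fault(last_cpupid, page_nid, 1, flags);

	/* kernel/sched/fair.c, task_numa_fault() -- paraphrased sketch */
	/* Only group on shared faults, and only when grouping is allowed. */
	if (!priv && !(flags & TNF_NO_GROUP))
		task_numa_group(p, last_cpupid, flags, &priv);

Because task_numa_group() is skipped whenever the flag is set, a change
in when the fault handler sets TNF_NO_GROUP changes which tasks end up
sharing a numa_group, and from there how often the load balancer and
NUMA balancing disagree about placement.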