From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752358AbbCWMBh (ORCPT );
	Mon, 23 Mar 2015 08:01:37 -0400
Received: from cantor2.suse.de ([195.135.220.15]:34914 "EHLO mx2.suse.de"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1752035AbbCWMBg (ORCPT );
	Mon, 23 Mar 2015 08:01:36 -0400
Date: Mon, 23 Mar 2015 12:01:31 +0000
From: Mel Gorman
To: Linus Torvalds
Cc: Dave Chinner, Ingo Molnar, Andrew Morton, Aneesh Kumar,
	Linux Kernel Mailing List, Linux-MM, xfs@oss.sgi.com, ppc-dev
Subject: Re: [PATCH 4/4] mm: numa: Slow PTE scan rate if migration failures occur
Message-ID: <20150323120131.GB4701@suse.de>
References: <20150317220840.GC28621@dastard>
	<20150319224143.GI10105@dastard>
	<20150320002311.GG28621@dastard>
	<20150320041357.GO10105@dastard>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-15
Content-Disposition: inline
In-Reply-To: 
User-Agent: Mutt/1.5.21 (2010-09-15)
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Mar 20, 2015 at 10:02:23AM -0700, Linus Torvalds wrote:
> On Thu, Mar 19, 2015 at 9:13 PM, Dave Chinner wrote:
> >
> > Testing now. It's a bit faster - three runs gave 7m35s, 7m20s and
> > 7m36s. IOWs it's a bit better, but not significantly. page migrations
> > are pretty much unchanged, too:
> >
> >     558,632      migrate:mm_migrate_pages      ( +- 6.38% )
>
> Ok. That was kind of the expected thing.
>
> I don't really know the NUMA fault rate limiting code, but one thing
> that strikes me is that if it tries to balance the NUMA faults against
> the *regular* faults, then maybe just the fact that we end up taking
> more COW faults after a NUMA fault then means that the NUMA rate
> limiting code now gets over-eager (because it sees all those extra
> non-numa faults).
>
> Mel, does that sound at all possible? I really have never looked at
> the magic automatic rate handling..
>

It should not be trying to balance against regular faults as it has no
information on them. The trapping of additional faults to mark the PTE
writable will alter timing, so it indirectly affects how many migration
faults there are, but this is only a side-effect IMO.

There is more overhead now due to losing the writable information, and
that should be reduced, so I tried a few approaches. Ultimately, the
one that performed best and was easiest to understand simply preserved
the writable bit across the protection update and page fault. I'll post
it later when I stick a changelog on it.

-- 
Mel Gorman
SUSE Labs
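
A minimal sketch of the idea described above, for illustration only: it
is not the patch Mel later posted, the helper names are invented, and it
is loosely modelled on the v4.0-era PROT_NONE-based hinting paths in
mm/mprotect.c and mm/memory.c. The scanner side remembers whether the
PTE was writable before protecting it, and the fault side restores the
bit, so the write that typically follows the hinting fault does not need
a second fault just to regain write access.

#include <linux/mm.h>

/*
 * Sketch of the scanner side (cf. the prot_numa path in
 * change_pte_range()): protect the PTE so the next access traps, but
 * preserve the writable bit instead of discarding it.
 */
static pte_t sketch_protnuma_update(pte_t oldpte)
{
	bool preserve_write = pte_write(oldpte);
	pte_t ptent;

	/* Make the PTE inaccessible so the next access faults. */
	ptent = pte_modify(oldpte, PAGE_NONE);
	if (preserve_write)
		ptent = pte_mkwrite(ptent);	/* keep the writable bit */
	return ptent;
}

/*
 * Sketch of the fault side (cf. do_numa_page()): restore the vma's
 * protections and re-apply write permission if it was preserved.
 */
static pte_t sketch_numa_fault_fixup(struct vm_area_struct *vma,
				     pte_t pte, bool was_writable)
{
	pte = pte_modify(pte, vma->vm_page_prot);
	pte = pte_mkyoung(pte);
	if (was_writable)
		pte = pte_mkwrite(pte);
	return pte;
}

The overhead being discussed in the thread is exactly the extra COW
fault that occurs when the writable information is lost; keeping the bit
across the protection update and hinting fault avoids it.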