linux-mm.kvack.org archive mirror
From: "Huang, Ying" <ying.huang@intel.com>
To: Mel Gorman <mgorman@suse.de>
Cc: <linux-mm@kvack.org>,  Andrew Morton <akpm@linux-foundation.org>,
	<linux-kernel@vger.kernel.org>,
	 Peter Zijlstra <peterz@infradead.org>,
	"Peter Xu" <peterx@redhat.com>,
	 Johannes Weiner <hannes@cmpxchg.org>,
	"Vlastimil Babka" <vbabka@suse.cz>,
	 Matthew Wilcox <willy@infradead.org>,
	 Will Deacon <will@kernel.org>,
	 Michel Lespinasse <walken@google.com>,
	 Arjun Roy <arjunroy@google.com>,
	 "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Subject: Re: [RFC] NUMA balancing: reduce TLB flush via delaying mapping on hint page fault
Date: Thu, 01 Apr 2021 08:52:29 +0800	[thread overview]
Message-ID: <875z16967m.fsf@yhuang6-desk1.ccr.corp.intel.com> (raw)
In-Reply-To: <20210331131658.GV15768@suse.de> (Mel Gorman's message of "Wed, 31 Mar 2021 14:16:58 +0100")

Mel Gorman <mgorman@suse.de> writes:

> On Wed, Mar 31, 2021 at 07:20:09PM +0800, Huang, Ying wrote:
>> Mel Gorman <mgorman@suse.de> writes:
>> 
>> > On Mon, Mar 29, 2021 at 02:26:51PM +0800, Huang Ying wrote:
>> >> For NUMA balancing, the hint page fault handler migrates the
>> >> faulting page to the accessing node if necessary.  During the
>> >> migration, the TLB is shot down on all CPUs that the process has
>> >> recently run on, because the hint page fault handler makes the PTE
>> >> accessible before the migration is tried.  The overhead of the TLB
>> >> shootdown is high, so it is better avoided if possible.  In fact,
>> >> it can be avoided if we delay mapping the page in the PTE until the
>> >> migration is tried.  This is what this patch does.
>> >> 
>> >
>> > Why would the overhead be high? It was previously inaccessible, so
>> > it's only parallel accesses making forward progress that trigger the
>> > need for a flush.
>> 
>> Sorry, I don't understand this.  Although the page itself is
>> inaccessible, the threads may access other pages, so TLB flushing is
>> still necessary.
>> 
>
> You assert the overhead of TLB shootdown is high and yes, it can be
> very high but you also said "the benchmark score has no visible changes"
> indicating the TLB shootdown cost is not a major problem for the workload.
> It does not mean we should ignore it though.
>
>> > <SNIP the parts that are not problematic>
>> >
>> > If migration is attempted, then the time until the migration PTE is
>> > created is variable. The page has to be isolated from the LRU so there
>> > could be contention on the LRU lock, a new page has to be allocated and
>> > that allocation potentially has to enter the page allocator slow path
>> > etc. During that time, parallel threads make forward progress but with
>> > the patch, multiple threads potentially attempt the allocation and fail
>> > instead of doing real work.
>> 
>> If my understanding of the code is correct, only the first thread will
>> attempt the isolation and allocation.  Because TestClearPageLRU() is
>> called in
>> 
>>   migrate_misplaced_page()
>>     numamigrate_isolate_page()
>>       isolate_lru_page()
>> 
>> And migrate_misplaced_page() will return 0 immediately if
>> TestClearPageLRU() returns false.  Then the second thread will make the
>> page accessible and make forward progress.
>> 
>
> Ok, that's true. While additional work is done, the cost is reasonably
> low -- lower than I initially imagined and with fewer side-effects.
>
>> But there's still some timing difference between the original and
>> patched kernel.  We have several choices to reduce the difference.
>> 
>> 1. Check PageLRU() with PTL held in do_numa_page()
>> 
>> If PageLRU() returns false, do_numa_page() can make the page
>> accessible first, so the second thread will make the page accessible
>> earlier.
>> 
>> 2. Try to lock the page with PTL held in do_numa_page()
>> 
>> If the try-locking succeeds, it's the first thread, so it can delay
>> mapping.  If try-locking fails, it may be the second thread, so it will
>> make the page accessible first.  We need to teach
>> migrate_misplaced_page() to work with the page locked, which will
>> lengthen the time the page is held locked.  Is that a problem?
>> 
>> 3. Check page_count() with PTL held in do_numa_page()
>> 
>> The first thread will call get_page() in numa_migrate_prep().  So if the
>> second thread can detect that, it can make the page accessible first.
>> The difficulty is that it appears hard to identify the expected
>> page_count() for the file pages.  For anonymous pages, that is much
>> easier, so at least if a page passes the following test, we can delay
>> mapping,
>> 
>>     PageAnon(page) && page_count(page) == page_mapcount(page) + !!PageSwapCache(page)
>> 
>> This disables the optimization for file pages.  But maybe that is
>> good enough?
>> 
>> Which one do you think is better?  Maybe the first one is good enough?
>> 
>
> The first one is probably the most straightforward, but it's more
> important to figure out why interrupts were higher with at least one
> workload when the exact opposite is expected. Investigating which of
> options 1-3 are best and whether it's worth the duplicated check could
> be done as a separate patch.
>
>> > You should consider the following question -- is the potential saving
>> > of an IPI transmission enough to offset the cost of parallel accesses
>> > not making forward progress while one migration is setup and having
>> > different migration attempts collide?
>> >
>> > I have tests running just in case but I think the answer may be "no".
>> > So far only one useful test has completed (specjbb2005 with one VM per NUMA
>> > node) and it showed a mix of small gains and losses but with *higher*
>> > interrupts contrary to what was expected from the changelog.
>> 
>> That is hard to understand.  Could it be caused by the bug you pointed
>> out (about was_writable)?
>> 
>
> It's possible and I could not figure out what the rationale behind the
> change was :/
>
> Fix it and run it through your tests to make sure it works as you
> expect. Assuming it passes your tests and it's posted, I'll read it again
> and run it through a battery of tests. If it shows that interrupts are
> lower and is either neutral or improves performance in enough cases then
> I think it'll be ok. Even if it's only neutral in terms of performance
> but interrupts are lower, it'll be acceptable.

Will do it.  Thanks a lot for your help!

Best Regards,
Huang, Ying


Thread overview: 8+ messages
2021-03-29  6:26 [RFC] NUMA balancing: reduce TLB flush via delaying mapping on hint page fault Huang Ying
2021-03-30 13:33 ` Mel Gorman
2021-03-31 11:20   ` Huang, Ying
2021-03-31 13:16     ` Mel Gorman
2021-03-31 16:36       ` Nadav Amit
2021-04-01  8:38         ` Mel Gorman
2021-04-01 19:21           ` Nadav Amit
2021-04-01  0:52       ` Huang, Ying [this message]
