From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 21 Mar 2018 09:59:45 +0800
From: Aaron Lu
To: "Figo.zhang"
Cc: Linux MM, LKML, Andrew Morton, Huang Ying, Dave Hansen, Kemi Wang,
	Tim Chen, Andi Kleen, Michal Hocko, Vlastimil Babka, Mel Gorman,
	Matthew Wilcox, Daniel Jordan
Subject: Re: [RFC PATCH v2 2/4] mm/__free_one_page: skip merge for order-0 page unless compaction failed
Message-ID: <20180321015944.GB28705@intel.com>
References: <20180320085452.24641-1-aaron.lu@intel.com> <20180320085452.24641-3-aaron.lu@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: Mutt/1.9.2 (2017-12-15)
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Mar 20, 2018 at 03:58:51PM -0700, Figo.zhang wrote:
> 2018-03-20 1:54 GMT-07:00 Aaron Lu:
>
> > Running the will-it-scale/page_fault1 process mode workload on a 2-socket
> > Intel Skylake server showed severe contention on zone->lock: as much as
> > about 80% of CPU cycles (42% on the allocation path and 35% on the free
> > path) were burnt spinning. With perf, the most time consuming part inside
> > that lock on the free path is cache misses on page structures, mostly on
> > the to-be-freed page's buddy due to merging.
> >
> > One way to avoid this overhead is to not do any merging at all for
> > order-0 pages. With this approach, the zone->lock contention on the free
> > path dropped to 1.1%, but the allocation side still has as high as 42%
> > lock contention. Meanwhile, the contention dropped from the free side
> > doesn't translate into a performance increase; instead, it is consumed by
> > increased contention on the per-node lru_lock (which rose from 5% to 37%),
> > and the final performance dropped slightly, by about 1%.
> >
> > Though performance dropped a little, this almost eliminates zone->lock
> > contention on the free path, and it is the foundation for the next patch
> > that eliminates zone->lock contention on the allocation path.
> >
> > A new document file called "struct_page_field" is added to explain
> > the newly reused field in "struct page".
> >
> > Suggested-by: Dave Hansen
> > Signed-off-by: Aaron Lu
> > ---
> >  Documentation/vm/struct_page_field |  5 +++
> >  include/linux/mm_types.h           |  1 +
> >  mm/compaction.c                    | 13 +++++-
> >  mm/internal.h                      | 27 ++++++++++++
> >  mm/page_alloc.c                    | 89 +++++++++++++++++++++++++++++++++++++-----
> >  5 files changed, 122 insertions(+), 13 deletions(-)
> >  create mode 100644 Documentation/vm/struct_page_field
> >
> > diff --git a/Documentation/vm/struct_page_field b/Documentation/vm/struct_page_field
> > new file mode 100644
> > index 000000000000..1ab6c19ccc7a
> > --- /dev/null
> > +++ b/Documentation/vm/struct_page_field
> > @@ -0,0 +1,5 @@
> > +buddy_merge_skipped:
> > +Used to indicate this page skipped merging when added to buddy. This
> > +field only makes sense if the page is in Buddy and is order zero.
> > +It's a bug if any higher order pages in Buddy has this field set.
> > +Shares space with index.
> > diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> > index fd1af6b9591d..7edc4e102a8e 100644
> > --- a/include/linux/mm_types.h
> > +++ b/include/linux/mm_types.h
> > @@ -91,6 +91,7 @@ struct page {
> >  		pgoff_t index;		/* Our offset within mapping. */
> >  		void *freelist;		/* sl[aou]b first free object */
> >  		/* page_deferred_list().prev	-- second tail page */
> > +		bool buddy_merge_skipped; /* skipped merging when added to buddy */
> >  	};
> >
> >  	union {
> > diff --git a/mm/compaction.c b/mm/compaction.c
> > index 2c8999d027ab..fb9031fdca41 100644
> > --- a/mm/compaction.c
> > +++ b/mm/compaction.c
> > @@ -776,8 +776,19 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
> >  		 * potential isolation targets.
> >  		 */
> >  		if (PageBuddy(page)) {
> > -			unsigned long freepage_order = page_order_unsafe(page);
> > +			unsigned long freepage_order;
> >
> > +			/*
> > +			 * If this is a merge_skipped page, do merge now
> > +			 * since high-order pages are needed. zone lock
> > +			 * isn't taken for the merge_skipped check so the
> > +			 * check could be wrong but the worst case is we
> > +			 * lose a merge opportunity.
> > +			 */
> > +			if (page_merge_was_skipped(page))
> > +				try_to_merge_page(page);
> > +
> > +			freepage_order = page_order_unsafe(page);
> >  			/*
> >  			 * Without lock, we cannot be sure that what we got is
> >  			 * a valid page order. Consider only values in the
>
> when the system memory is very very low and try a lot of failures and then

If the system memory is very very low, it doesn't appear there is a need
to do compaction, since compaction needs to have enough order-0 pages to
make a high-order one.

> go into
> __alloc_pages_direct_compact() to have an opportunity to do your
> try_to_merge_page(), is it the best timing for here to
> do order-0 migration?

try_to_merge_page(), as added in this patch, doesn't do any page
migration, only merging. It will not cause page migration.
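
For readers who want to see what the deferred merge amounts to, it is the same
coalescing walk the normal free path performs: find the buddy of the block at
the current order and, while that buddy is also a free block of the same order,
combine the two and go up one order. The sketch below is only a minimal
user-space illustration of that arithmetic; the pfn relation mirrors the
kernel's __find_buddy_pfn(), while the free_map representation, merge_up(), and
everything else here are invented for the example and are not the patch's
actual try_to_merge_page() implementation.

  #include <stdbool.h>
  #include <stdio.h>

  #define MAX_ORDER 11                    /* orders 0..MAX_ORDER-1, as in the kernel */

  /* toy stand-in for "is the block starting at this pfn on a free list?" */
  static bool free_map[1UL << MAX_ORDER];

  static unsigned long find_buddy_pfn(unsigned long pfn, unsigned int order)
  {
          /* the buddy of a block differs from it only in bit 'order' */
          return pfn ^ (1UL << order);
  }

  /* merge the order-0 block at 'pfn' upwards while its buddy is free */
  static unsigned int merge_up(unsigned long pfn)
  {
          unsigned int order = 0;

          while (order < MAX_ORDER - 1) {
                  unsigned long buddy = find_buddy_pfn(pfn, order);

                  if (!free_map[buddy])
                          break;
                  free_map[buddy] = false;  /* buddy leaves its free list */
                  pfn &= buddy;             /* combined block starts at the lower pfn */
                  order++;
          }
          return order;
  }

  int main(void)
  {
          /* pfns 1, 2 and 3 are free, so freeing pfn 0 can build an order-2 block */
          free_map[1] = free_map[2] = free_map[3] = true;
          printf("pfn 0 merges up to order %u\n", merge_up(0));
          return 0;
  }

Nothing in this walk moves page contents around; it only adjusts free-list and
order bookkeeping, which is why skipping it at free time and redoing it from
the compaction path costs at most a lost merge opportunity, as the comment in
the mm/compaction.c hunk above notes.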