From: Alexander Duyck
Subject: Re: [RFC PATCH 4/4] mm: Add merge page notifier
To: Aaron Lu, Alexander Duyck, linux-mm@kvack.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: rkrcmar@redhat.com, x86@kernel.org, mingo@redhat.com, bp@alien8.de, hpa@zytor.com, pbonzini@redhat.com, tglx@linutronix.de, akpm@linux-foundation.org
Date: Mon, 11 Feb 2019 07:58:28 -0800
In-Reply-To: <5e6d22b2-0f14-43eb-846b-a940e629c02b@gmail.com>
References: <20190204181118.12095.38300.stgit@localhost.localdomain>
 <20190204181558.12095.83484.stgit@localhost.localdomain>
 <5e6d22b2-0f14-43eb-846b-a940e629c02b@gmail.com>

On Mon, 2019-02-11 at 14:40 +0800, Aaron Lu wrote:
> On 2019/2/5 2:15, Alexander Duyck wrote:
> > From: Alexander Duyck
> >
> > Because the implementation was limiting itself to only providing hints
> > on pages huge TLB order sized or larger, we introduced the possibility
> > for free pages to slip past us because they are freed as something
> > less than huge TLB in size and aggregated with buddies later.
> >
> > To address that I am adding a new call, arch_merge_page, which is
> > called after __free_one_page has merged a pair of pages to create a
> > higher order page. By doing this I am able to fill the gap and provide
> > full coverage for all of the pages huge TLB order or larger.
> >
> > Signed-off-by: Alexander Duyck
> > ---
> >  arch/x86/include/asm/page.h |   12 ++++++++++++
> >  arch/x86/kernel/kvm.c       |   28 ++++++++++++++++++++++++++++
> >  include/linux/gfp.h         |    4 ++++
> >  mm/page_alloc.c             |    2 ++
> >  4 files changed, 46 insertions(+)
> >
> > diff --git a/arch/x86/include/asm/page.h b/arch/x86/include/asm/page.h
> > index 4487ad7a3385..9540a97c9997 100644
> > --- a/arch/x86/include/asm/page.h
> > +++ b/arch/x86/include/asm/page.h
> > @@ -29,6 +29,18 @@ static inline void arch_free_page(struct page *page, unsigned int order)
> >  	if (static_branch_unlikely(&pv_free_page_hint_enabled))
> >  		__arch_free_page(page, order);
> >  }
> > +
> > +struct zone;
> > +
> > +#define HAVE_ARCH_MERGE_PAGE
> > +void __arch_merge_page(struct zone *zone, struct page *page,
> > +		       unsigned int order);
> > +static inline void arch_merge_page(struct zone *zone, struct page *page,
> > +				   unsigned int order)
> > +{
> > +	if (static_branch_unlikely(&pv_free_page_hint_enabled))
> > +		__arch_merge_page(zone, page, order);
> > +}
> >  #endif
> >
> >  #include
> >
> > diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> > index 09c91641c36c..957bb4f427bb 100644
> > --- a/arch/x86/kernel/kvm.c
> > +++ b/arch/x86/kernel/kvm.c
> > @@ -785,6 +785,34 @@ void __arch_free_page(struct page *page, unsigned int order)
> >  			       PAGE_SIZE << order);
> >  }
> >
> > +void __arch_merge_page(struct zone *zone, struct page *page,
> > +		       unsigned int order)
> > +{
> > +	/*
> > +	 * The merging logic has merged a set of buddies up to the
> > +	 * KVM_PV_UNUSED_PAGE_HINT_MIN_ORDER. Since that is the case, take
> > +	 * advantage of this moment to notify the hypervisor of the free
> > +	 * memory.
> > +	 */
> > +	if (order != KVM_PV_UNUSED_PAGE_HINT_MIN_ORDER)
> > +		return;
> > +
> > +	/*
> > +	 * Drop zone lock while processing the hypercall. This
> > +	 * should be safe as the page has not yet been added
> > +	 * to the buddy list as of yet and all the pages that
> > +	 * were merged have had their buddy/guard flags cleared
> > +	 * and their order reset to 0.
> > +	 */
> > +	spin_unlock(&zone->lock);
> > +
> > +	kvm_hypercall2(KVM_HC_UNUSED_PAGE_HINT, page_to_phys(page),
> > +		       PAGE_SIZE << order);
> > +
> > +	/* reacquire lock and resume freeing memory */
> > +	spin_lock(&zone->lock);
> > +}
> > +
> >  #ifdef CONFIG_PARAVIRT_SPINLOCKS
> >
> >  /* Kick a cpu by its apicid. Used to wake up a halted vcpu */
> > diff --git a/include/linux/gfp.h b/include/linux/gfp.h
> > index fdab7de7490d..4746d5560193 100644
> > --- a/include/linux/gfp.h
> > +++ b/include/linux/gfp.h
> > @@ -459,6 +459,10 @@ static inline struct zonelist *node_zonelist(int nid, gfp_t flags)
> >  #ifndef HAVE_ARCH_FREE_PAGE
> >  static inline void arch_free_page(struct page *page, int order) { }
> >  #endif
> > +#ifndef HAVE_ARCH_MERGE_PAGE
> > +static inline void
> > +arch_merge_page(struct zone *zone, struct page *page, int order) { }
> > +#endif
> >  #ifndef HAVE_ARCH_ALLOC_PAGE
> >  static inline void arch_alloc_page(struct page *page, int order) { }
> >  #endif
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index c954f8c1fbc4..7a1309b0b7c5 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -913,6 +913,8 @@ static inline void __free_one_page(struct page *page,
> >  		page = page + (combined_pfn - pfn);
> >  		pfn = combined_pfn;
> >  		order++;
> > +
> > +		arch_merge_page(zone, page, order);
>
> Not a proper place AFAICS.
>
> Assume we have an order-8 page being sent here for merge and its order-8
> buddy is also free, then order++ becomes 9 and arch_merge_page() will do
> the hint to host on this page as an order-9 page, no problem so far.
> Then the next round, assume the now order-9 page's buddy is also free,
> order++ will become 10 and arch_merge_page() will again hint to host on
> this page as an order-10 page.
> The first hint to host became redundant.

Actually the problem is even worse the other way around. My concern was
pages being incrementally freed. With this setup I can catch the moment we
cross the threshold from order 8 to order 9, and provide the hint
specifically for that case. This allows me to ignore orders both above and
below 9.

If I move the hint to the spot after the merging, I have no way of telling
whether I have already hinted the page at a lower order, so I would have
to hint whenever it merges up to order 9 or greater. For example, if it
merges up to order 9 and stops there, done_merging will report an order-9
page; then if another page is freed and merged with this one up to order
10, you would be hinting again on order 10. By placing the call here I can
guarantee that no more than one hint is provided per 2MB page.

> I think the proper place is after the done_merging tag.
>
> BTW, with arch_merge_page() at the proper place, I don't think patch 3/4
> is necessary - any freed page will go through merge anyway, we won't
> lose any hint opportunity. Or do I miss anything?

See my comment above. What I want to avoid is hinting a page multiple
times when we aren't using MAX_ORDER - 1 as the limit. By placing the call
where I did, we avoid hinting on orders greater than our target hint
order. This way I only perform one hint per 2MB page; otherwise I would be
performing multiple hints per 2MB page, as every order above the target
would also trigger a hint.