Subject: + mm-page_owner-make-init_pages_in_zone-faster.patch added to -mm tree
From: akpm
Date: 2017-07-21 20:07 UTC
To: vbabka, iamjoonsoo.kim, labbott, mgorman, mhocko, vinmenon,
	yang.shi, zhongjiang, mm-commits


The patch titled
     Subject: mm, page_owner: make init_pages_in_zone() faster
has been added to the -mm tree.  Its filename is
     mm-page_owner-make-init_pages_in_zone-faster.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-page_owner-make-init_pages_in_zone-faster.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-page_owner-make-init_pages_in_zone-faster.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/SubmitChecklist when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Vlastimil Babka <vbabka@suse.cz>
Subject: mm, page_owner: make init_pages_in_zone() faster

In init_pages_in_zone() we currently use the generic set_page_owner()
function to initialize page_owner info for early allocated pages.  This
means we needlessly do lookup_page_ext() twice for each page and, more
importantly, call save_stack() for each page, which has to unwind the
stack and find the corresponding stack depot handle.  Because the stack
is always the same for this initialization, unwind it once in
init_pages_in_zone() and reuse the resulting handle.  Also avoid the
repeated lookup_page_ext().

This can significantly reduce boot times with page_owner=on on large
machines, especially for kernels built without frame pointers, where
stack unwinding is noticeably slower.

Link: http://lkml.kernel.org/r/20170720134029.25268-2-vbabka@suse.cz
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Yang Shi <yang.shi@linaro.org>
Cc: Laura Abbott <labbott@redhat.com>
Cc: Vinayak Menon <vinmenon@codeaurora.org>
Cc: zhong jiang <zhongjiang@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/page_owner.c |   19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff -puN mm/page_owner.c~mm-page_owner-make-init_pages_in_zone-faster mm/page_owner.c
--- a/mm/page_owner.c~mm-page_owner-make-init_pages_in_zone-faster
+++ a/mm/page_owner.c
@@ -183,6 +183,20 @@ noinline void __set_page_owner(struct pa
 	__set_bit(PAGE_EXT_OWNER, &page_ext->flags);
 }
 
+static void __set_page_owner_init(struct page_ext *page_ext,
+					depot_stack_handle_t handle)
+{
+	struct page_owner *page_owner;
+
+	page_owner = get_page_owner(page_ext);
+	page_owner->handle = handle;
+	page_owner->order = 0;
+	page_owner->gfp_mask = 0;
+	page_owner->last_migrate_reason = -1;
+
+	__set_bit(PAGE_EXT_OWNER, &page_ext->flags);
+}
+
 void __set_page_owner_migrate_reason(struct page *page, int reason)
 {
 	struct page_ext *page_ext = lookup_page_ext(page);
@@ -520,10 +534,13 @@ static void init_pages_in_zone(pg_data_t
 	unsigned long pfn = zone->zone_start_pfn, block_end_pfn;
 	unsigned long end_pfn = pfn + zone->spanned_pages;
 	unsigned long count = 0;
+	depot_stack_handle_t init_handle;
 
 	/* Scan block by block. First and last block may be incomplete */
 	pfn = zone->zone_start_pfn;
 
+	init_handle = save_stack(0);
+
 	/*
 	 * Walk the zone in pageblock_nr_pages steps. If a page block spans
 	 * a zone boundary, it will be double counted between zones. This does
@@ -570,7 +587,7 @@ static void init_pages_in_zone(pg_data_t
 				continue;
 
 			/* Found early allocated page */
-			set_page_owner(page, 0, 0);
+			__set_page_owner_init(page_ext, init_handle);
 			count++;
 		}
 	}
_
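
For readers less familiar with the code above: the optimization boils
down to hoisting an expensive per-page step out of the loop.  The
following user-space sketch (hypothetical names, not the kernel's API)
illustrates the same pattern under that assumption: the costly capture
runs once, and only the cheap handle is stored for every page.

#include <stdint.h>
#include <stdio.h>

/*
 * Hypothetical stand-ins: expensive_capture() plays the role of
 * save_stack() (stack unwind plus depot lookup), and struct owner the
 * role of the per-page page_owner record.  Not the kernel's actual API.
 */
typedef uint32_t handle_t;

struct owner {
	handle_t handle;
	unsigned short order;
	int last_migrate_reason;
};

static handle_t expensive_capture(void)
{
	/* Imagine a stack unwind and stack depot lookup happening here. */
	return 0xdeadbeef;
}

int main(void)
{
	struct owner pages[8];
	/* Capture once, outside the per-page loop... */
	handle_t init_handle = expensive_capture();
	unsigned int i;

	/* ...and only store the cheap handle for each page. */
	for (i = 0; i < 8; i++) {
		pages[i].handle = init_handle;
		pages[i].order = 0;
		pages[i].last_migrate_reason = -1;
	}

	printf("initialized %u pages with handle 0x%x\n",
	       i, (unsigned int)init_handle);
	return 0;
}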

Patches currently in -mm which might be from vbabka@suse.cz are

mm-page_owner-make-init_pages_in_zone-faster.patch
mm-page_ext-periodically-reschedule-during-page_ext_init.patch
mm-page_owner-dont-grab-zone-lock-for-init_pages_in_zone.patch
mm-page_ext-move-page_ext_init-after-page_alloc_init_late.patch

