linux-kernel.vger.kernel.org archive mirror
* [PATCH v12 00/11] complete deferred page initialization
@ 2017-10-13 17:32 Pavel Tatashin
  2017-10-13 17:32 ` [PATCH v12 01/11] mm: deferred_init_memmap improvements Pavel Tatashin
                   ` (11 more replies)
  0 siblings, 12 replies; 28+ messages in thread
From: Pavel Tatashin @ 2017-10-13 17:32 UTC (permalink / raw)
  To: linux-kernel, sparclinux, linux-mm, linuxppc-dev, linux-s390,
	linux-arm-kernel, x86, kasan-dev, borntraeger, heiko.carstens,
	davem, willy, mhocko, ard.biesheuvel, mark.rutland, will.deacon,
	catalin.marinas, sam, mgorman, akpm, steven.sistare,
	daniel.m.jordan, bob.picco

Changelog:
v12 - v11
- Improved comments for mm: zero reserved and unavailable struct pages
- Added back patch: mm: deferred_init_memmap improvements
- Added patch from Will Deacon: arm64: kasan: Avoid using
  vmemmap_populate to initialise shadow

v11 - v10
- Moved kasan_map_populate() implementation from common code into arch
  specific as discussed with Will Deacon. We do not need
  "mm/kasan: kasan specific map populate function" anymore, so only
  9 patches left.

v10 - v9
- Addressed new comments from Michal Hocko.
- Sent "mm: deferred_init_memmap improvements" as a separate patch as
  it is also fixing existing problem.
- Merged "mm: stop zeroing memory during allocation in vmemmap" with
  "mm: zero struct pages during initialization".
- Added more comments "mm: zero reserved and unavailable struct pages"

v9 - v8
- Addressed comments raised by Mark Rutland and Ard Biesheuvel: changed
  kasan implementation. Added a new function: kasan_map_populate() that
  zeroes the allocated and mapped memory

v8 - v7
- Added Acked-by's from Dave Miller for SPARC changes
- Fixed a minor compiling issue on tile architecture reported by kbuild

v7 - v6
- Addressed comments from Michal Hocko
- memblock_discard() patch was removed from this series and integrated
  separately
- Fixed a bug reported by the kbuild test robot with a new patch:
  mm: zero reserved and unavailable struct pages
- Removed patch
  x86/mm: reserve only exiting low pages
  as it is no longer needed because of the previous fix
- Re-wrote deferred_init_memmap(), found and fixed an existing bug where
  the page variable is not reset when zone holes are present.
- Merged several patches together per Michal's request
- Added performance data including raw logs

v6 - v5
- Fixed ARM64 + kasan code, as reported by Ard Biesheuvel
- Tested ARM64 code in qemu and found a few more issues, which I fixed in
  this iteration
- Added page roundup/rounddown to the x86 and arm zeroing routines to zero
  the whole allocated range instead of only the provided address range.
- Addressed SPARC related comment from Sam Ravnborg
- Fixed section mismatch warnings related to memblock_discard().

v5 - v4
- Fixed build issues reported by kbuild on various configurations
v4 - v3
- Rewrote code to zero struct pages in __init_single_page() as
  suggested by Michal Hocko
- Added code to handle issues related to accessing struct page
  memory before it is initialized.

v3 - v2
- Addressed David Miller's comments about one change per patch:
    * Split the platform changes into 4 patches
    * Made "do not zero vmemmap_buf" a separate patch

v2 - v1
- Per request, added s390 to deferred "struct page" zeroing
- Collected performance data on x86, which proves the importance of keeping
  memset() as a prefetch (see below).

SMP machines can benefit from the DEFERRED_STRUCT_PAGE_INIT config option,
which defers initializing struct pages until all cpus have been started so
it can be done in parallel.

However, this feature is sub-optimal because the deferred page
initialization code expects that struct pages have already been zeroed, and
that zeroing is done early in boot by a single thread only. Also, we access
that memory and set flags before struct pages are initialized. All of this
is fixed in this patchset.

In this work we do the following:
- Never read a struct page until it has been initialized
- Never set any fields in a struct page before it is initialized
- Zero each struct page at the beginning of its initialization (sketched
  below)
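
As a condensed illustration of the last point above (the full change is in
patch 09 of this series), struct page zeroing becomes the first step of
__init_single_page(), so with deferred initialization the zeroing happens in
parallel on the per-node init threads:

	static void __meminit __init_single_page(struct page *page, unsigned long pfn,
						 unsigned long zone, int nid)
	{
		/* zero here instead of relying on pre-zeroed memblock memory */
		mm_zero_struct_page(page);
		set_page_links(page, zone, nid, pfn);
		init_page_count(page);
		page_mapcount_reset(page);
		page_cpupid_reset_last(page);
		INIT_LIST_HEAD(&page->lru);
	}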


==========================================================================
Performance improvements on x86 machine with 8 nodes:
Intel(R) Xeon(R) CPU E7-8895 v3 @ 2.60GHz and 1T of memory:
                        TIME          SPEED UP
base no deferred:       95.796233s
fix no deferred:        79.978956s    19.77%

base deferred:          77.254713s
fix deferred:           55.050509s    40.34%
==========================================================================
SPARC M6 3600 MHz with 15T of memory
                        TIME          SPEED UP
base no deferred:       358.335727s
fix no deferred:        302.320936s   18.52%

base deferred:          237.534603s
fix deferred:           182.103003s   30.44%
==========================================================================
Raw dmesg output with timestamps:
x86 base no deferred:    https://hastebin.com/ofunepurit.scala
x86 base deferred:       https://hastebin.com/ifazegeyas.scala
x86 fix no deferred:     https://hastebin.com/pegocohevo.scala
x86 fix deferred:        https://hastebin.com/ofupevikuk.scala
sparc base no deferred:  https://hastebin.com/ibobeteken.go
sparc base deferred:     https://hastebin.com/fariqimiyu.go
sparc fix no deferred:   https://hastebin.com/muhegoheyi.go
sparc fix deferred:      https://hastebin.com/xadinobutu.go

Pavel Tatashin (10):
  mm: deferred_init_memmap improvements
  x86/mm: setting fields in deferred pages
  sparc64/mm: setting fields in deferred pages
  sparc64: simplify vmemmap_populate
  mm: defining memblock_virt_alloc_try_nid_raw
  mm: zero reserved and unavailable struct pages
  x86/kasan: add and use kasan_map_populate()
  arm64/kasan: add and use kasan_map_populate()
  mm: stop zeroing memory during allocation in vmemmap
  sparc64: optimized struct page zeroing

Will Deacon (1):
  arm64: kasan: Avoid using vmemmap_populate to initialise shadow

 arch/arm64/Kconfig                  |   2 +-
 arch/arm64/mm/kasan_init.c          | 130 +++++++++++++--------
 arch/sparc/include/asm/pgtable_64.h |  30 +++++
 arch/sparc/mm/init_64.c             |  32 +++---
 arch/x86/mm/init_64.c               |  10 +-
 arch/x86/mm/kasan_init_64.c         |  75 +++++++++++-
 include/linux/bootmem.h             |  27 +++++
 include/linux/memblock.h            |  16 +++
 include/linux/mm.h                  |  26 +++++
 mm/memblock.c                       |  60 ++++++++--
 mm/page_alloc.c                     | 224 +++++++++++++++++++++---------------
 mm/sparse-vmemmap.c                 |  15 ++-
 mm/sparse.c                         |   6 +-
 13 files changed, 469 insertions(+), 184 deletions(-)

-- 
2.14.2


* [PATCH v12 01/11] mm: deferred_init_memmap improvements
  2017-10-13 17:32 [PATCH v12 00/11] complete deferred page initialization Pavel Tatashin
@ 2017-10-13 17:32 ` Pavel Tatashin
  2017-10-17 11:40   ` Michal Hocko
  2017-10-13 17:32 ` [PATCH v12 02/11] x86/mm: setting fields in deferred pages Pavel Tatashin
                   ` (10 subsequent siblings)
  11 siblings, 1 reply; 28+ messages in thread
From: Pavel Tatashin @ 2017-10-13 17:32 UTC (permalink / raw)
  To: linux-kernel, sparclinux, linux-mm, linuxppc-dev, linux-s390,
	linux-arm-kernel, x86, kasan-dev, borntraeger, heiko.carstens,
	davem, willy, mhocko, ard.biesheuvel, mark.rutland, will.deacon,
	catalin.marinas, sam, mgorman, akpm, steven.sistare,
	daniel.m.jordan, bob.picco

deferred_init_memmap() is called when struct pages are initialized later
in boot by slave CPUs. This patch simplifies and optimizes this function,
and also fixes a couple of issues (described below).

The main change is that now we are iterating through free memblock areas
instead of all configured memory. Thus, we do not have to check if the
struct page has already been initialized.

=====
In deferred_init_memmap() where all deferred struct pages are initialized
we have a check like this:

if (page->flags) {
	VM_BUG_ON(page_zone(page) != zone);
	goto free_range;
}

This is how we check whether the current deferred page has already been
initialized. It works because memory for struct pages has been zeroed, and
the only way flags can be non-zero is if the page already went through
__init_single_page(). But once we change the current behavior and no longer
zero the memory in the memblock allocator, we cannot trust anything inside
a struct page until it is initialized. This patch fixes this.

The deferred_init_memmap() is re-written to loop through only free memory
ranges provided by memblock.

Note: this first issue becomes relevant only once the later change in this
series that stops zeroing memory in the memblock allocator is merged.

=====
This patch also fixes another existing issue on systems that have holes in
zones, i.e. where CONFIG_HOLES_IN_ZONE is defined.

In for_each_mem_pfn_range() we have code like this:

if (!pfn_valid_within(pfn))
	goto free_range;

Note: 'page' is not set to NULL and is not incremented, but 'pfn' advances.
This means that if deferred struct pages are enabled on systems with this
kind of hole, Linux would get memory corruption. I have fixed this issue by
adding a helper that performs all the necessary operations when we free the
current set of pages.
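
For reference, a condensed view of the rewritten loop body (variables as in
deferred_init_range() in the diff below): every condition that invalidates
the current pfn flushes the batch through the new helper, which also resets
'page', so crossing a hole can never leave a stale 'page' pointer behind:

	for (; pfn < end_pfn; pfn++) {
		if (!pfn_valid_within(pfn) ||
		    (!(pfn & nr_pgmask) && !pfn_valid(pfn)) ||
		    !meminit_pfn_in_nid(pfn, nid, &nid_init_state)) {
			/* hole or foreign node: flush batch, reset 'page' */
			nr_pages += __def_free(&nr_free, &free_base_pfn, &page);
		} else if (page && (pfn & nr_pgmask)) {
			/* still inside the current valid pageblock */
			page++;
			__init_single_page(page, pfn, zid, nid);
			nr_free++;
		} else {
			/* start a new batch at a pageblock boundary */
			nr_pages += __def_free(&nr_free, &free_base_pfn, &page);
			page = pfn_to_page(pfn);
			__init_single_page(page, pfn, zid, nid);
			free_base_pfn = pfn;
			nr_free = 1;
			cond_resched();
		}
	}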

Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
Reviewed-by: Steven Sistare <steven.sistare@oracle.com>
Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Reviewed-by: Bob Picco <bob.picco@oracle.com>
---
 mm/page_alloc.c | 168 ++++++++++++++++++++++++++++----------------------------
 1 file changed, 85 insertions(+), 83 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 77e4d3c5c57b..cdbd14829fd3 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1410,14 +1410,17 @@ void clear_zone_contiguous(struct zone *zone)
 }
 
 #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
-static void __init deferred_free_range(struct page *page,
-					unsigned long pfn, int nr_pages)
+static void __init deferred_free_range(unsigned long pfn,
+				       unsigned long nr_pages)
 {
-	int i;
+	struct page *page;
+	unsigned long i;
 
-	if (!page)
+	if (!nr_pages)
 		return;
 
+	page = pfn_to_page(pfn);
+
 	/* Free a large naturally-aligned chunk if possible */
 	if (nr_pages == pageblock_nr_pages &&
 	    (pfn & (pageblock_nr_pages - 1)) == 0) {
@@ -1443,19 +1446,89 @@ static inline void __init pgdat_init_report_one_done(void)
 		complete(&pgdat_init_all_done_comp);
 }
 
+/*
+ * Helper for deferred_init_range, free the given range, reset the counters, and
+ * return number of pages freed.
+ */
+static inline unsigned long __def_free(unsigned long *nr_free,
+				       unsigned long *free_base_pfn,
+				       struct page **page)
+{
+	unsigned long nr = *nr_free;
+
+	deferred_free_range(*free_base_pfn, nr);
+	*free_base_pfn = 0;
+	*nr_free = 0;
+	*page = NULL;
+
+	return nr;
+}
+
+static unsigned long deferred_init_range(int nid, int zid, unsigned long pfn,
+					 unsigned long end_pfn)
+{
+	struct mminit_pfnnid_cache nid_init_state = { };
+	unsigned long nr_pgmask = pageblock_nr_pages - 1;
+	unsigned long free_base_pfn = 0;
+	unsigned long nr_pages = 0;
+	unsigned long nr_free = 0;
+	struct page *page = NULL;
+
+	for (; pfn < end_pfn; pfn++) {
+		/*
+		 * First we check if pfn is valid on architectures where it is
+		 * possible to have holes within pageblock_nr_pages. On systems
+		 * where it is not possible, this function is optimized out.
+		 *
+		 * Then, we check if a current large page is valid by only
+		 * checking the validity of the head pfn.
+		 *
+		 * meminit_pfn_in_nid is checked on systems where pfns can
+		 * interleave within a node: a pfn is between start and end
+		 * of a node, but does not belong to this memory node.
+		 *
+		 * Finally, we minimize pfn page lookups and scheduler checks by
+		 * performing it only once every pageblock_nr_pages.
+		 */
+		if (!pfn_valid_within(pfn)) {
+			nr_pages += __def_free(&nr_free, &free_base_pfn, &page);
+		} else if (!(pfn & nr_pgmask) && !pfn_valid(pfn)) {
+			nr_pages += __def_free(&nr_free, &free_base_pfn, &page);
+		} else if (!meminit_pfn_in_nid(pfn, nid, &nid_init_state)) {
+			nr_pages += __def_free(&nr_free, &free_base_pfn, &page);
+		} else if (page && (pfn & nr_pgmask)) {
+			page++;
+			__init_single_page(page, pfn, zid, nid);
+			nr_free++;
+		} else {
+			nr_pages += __def_free(&nr_free, &free_base_pfn, &page);
+			page = pfn_to_page(pfn);
+			__init_single_page(page, pfn, zid, nid);
+			free_base_pfn = pfn;
+			nr_free = 1;
+			cond_resched();
+		}
+	}
+	/* Free the last block of pages to allocator */
+	nr_pages += __def_free(&nr_free, &free_base_pfn, &page);
+
+	return nr_pages;
+}
+
 /* Initialise remaining memory on a node */
 static int __init deferred_init_memmap(void *data)
 {
 	pg_data_t *pgdat = data;
 	int nid = pgdat->node_id;
-	struct mminit_pfnnid_cache nid_init_state = { };
 	unsigned long start = jiffies;
 	unsigned long nr_pages = 0;
-	unsigned long walk_start, walk_end;
-	int i, zid;
+	unsigned long spfn, epfn;
+	phys_addr_t spa, epa;
+	int zid;
 	struct zone *zone;
 	unsigned long first_init_pfn = pgdat->first_deferred_pfn;
 	const struct cpumask *cpumask = cpumask_of_node(pgdat->node_id);
+	u64 i;
 
 	if (first_init_pfn == ULONG_MAX) {
 		pgdat_init_report_one_done();
@@ -1477,83 +1550,12 @@ static int __init deferred_init_memmap(void *data)
 		if (first_init_pfn < zone_end_pfn(zone))
 			break;
 	}
+	first_init_pfn = max(zone->zone_start_pfn, first_init_pfn);
 
-	for_each_mem_pfn_range(i, nid, &walk_start, &walk_end, NULL) {
-		unsigned long pfn, end_pfn;
-		struct page *page = NULL;
-		struct page *free_base_page = NULL;
-		unsigned long free_base_pfn = 0;
-		int nr_to_free = 0;
-
-		end_pfn = min(walk_end, zone_end_pfn(zone));
-		pfn = first_init_pfn;
-		if (pfn < walk_start)
-			pfn = walk_start;
-		if (pfn < zone->zone_start_pfn)
-			pfn = zone->zone_start_pfn;
-
-		for (; pfn < end_pfn; pfn++) {
-			if (!pfn_valid_within(pfn))
-				goto free_range;
-
-			/*
-			 * Ensure pfn_valid is checked every
-			 * pageblock_nr_pages for memory holes
-			 */
-			if ((pfn & (pageblock_nr_pages - 1)) == 0) {
-				if (!pfn_valid(pfn)) {
-					page = NULL;
-					goto free_range;
-				}
-			}
-
-			if (!meminit_pfn_in_nid(pfn, nid, &nid_init_state)) {
-				page = NULL;
-				goto free_range;
-			}
-
-			/* Minimise pfn page lookups and scheduler checks */
-			if (page && (pfn & (pageblock_nr_pages - 1)) != 0) {
-				page++;
-			} else {
-				nr_pages += nr_to_free;
-				deferred_free_range(free_base_page,
-						free_base_pfn, nr_to_free);
-				free_base_page = NULL;
-				free_base_pfn = nr_to_free = 0;
-
-				page = pfn_to_page(pfn);
-				cond_resched();
-			}
-
-			if (page->flags) {
-				VM_BUG_ON(page_zone(page) != zone);
-				goto free_range;
-			}
-
-			__init_single_page(page, pfn, zid, nid);
-			if (!free_base_page) {
-				free_base_page = page;
-				free_base_pfn = pfn;
-				nr_to_free = 0;
-			}
-			nr_to_free++;
-
-			/* Where possible, batch up pages for a single free */
-			continue;
-free_range:
-			/* Free the current block of pages to allocator */
-			nr_pages += nr_to_free;
-			deferred_free_range(free_base_page, free_base_pfn,
-								nr_to_free);
-			free_base_page = NULL;
-			free_base_pfn = nr_to_free = 0;
-		}
-		/* Free the last block of pages to allocator */
-		nr_pages += nr_to_free;
-		deferred_free_range(free_base_page, free_base_pfn, nr_to_free);
-
-		first_init_pfn = max(end_pfn, first_init_pfn);
+	for_each_free_mem_range(i, nid, MEMBLOCK_NONE, &spa, &epa, NULL) {
+		spfn = max_t(unsigned long, first_init_pfn, PFN_UP(spa));
+		epfn = min_t(unsigned long, zone_end_pfn(zone), PFN_DOWN(epa));
+		nr_pages += deferred_init_range(nid, zid, spfn, epfn);
 	}
 
 	/* Sanity check that the next zone really is unpopulated */
-- 
2.14.2


* [PATCH v12 02/11] x86/mm: setting fields in deferred pages
  2017-10-13 17:32 [PATCH v12 00/11] complete deferred page initialization Pavel Tatashin
  2017-10-13 17:32 ` [PATCH v12 01/11] mm: deferred_init_memmap improvements Pavel Tatashin
@ 2017-10-13 17:32 ` Pavel Tatashin
  2017-10-13 17:32 ` [PATCH v12 03/11] sparc64/mm: " Pavel Tatashin
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 28+ messages in thread
From: Pavel Tatashin @ 2017-10-13 17:32 UTC (permalink / raw)
  To: linux-kernel, sparclinux, linux-mm, linuxppc-dev, linux-s390,
	linux-arm-kernel, x86, kasan-dev, borntraeger, heiko.carstens,
	davem, willy, mhocko, ard.biesheuvel, mark.rutland, will.deacon,
	catalin.marinas, sam, mgorman, akpm, steven.sistare,
	daniel.m.jordan, bob.picco

Without the deferred struct page feature (CONFIG_DEFERRED_STRUCT_PAGE_INIT),
flags and other fields in struct pages are never changed prior to
initializing the struct pages by going through __init_single_page().

With the deferred struct page feature enabled, however, we set fields in
register_page_bootmem_info() that are subsequently clobbered right after in
free_all_bootmem():

        mem_init() {
                register_page_bootmem_info();
                free_all_bootmem();
                ...
        }

When register_page_bootmem_info() is called, only non-deferred struct pages
are initialized. But this function goes through some reserved pages which
might be part of the deferred range, and thus are not yet initialized.

  mem_init
   register_page_bootmem_info
    register_page_bootmem_info_node
     get_page_bootmem
      .. setting fields here ..
      such as: page->freelist = (void *)type;

  free_all_bootmem()
   free_low_memory_core_early()
    for_each_reserved_mem_region()
     reserve_bootmem_region()
      init_reserved_page() <- Only if this is deferred reserved page
       __init_single_pfn()
        __init_single_page()
            memset(0) <-- Lose the fields set here

We end up in a situation where we currently do not observe a problem,
because the memory is explicitly zeroed. But if flag asserts are changed,
we can start hitting issues.

Also, because in this patch series we will stop zeroing struct page memory
during allocation, we must make sure that struct pages are properly
initialized prior to using them.

The deferred-reserved pages are initialized in free_all_bootmem().
Therefore, the fix is to switch the above calls.

Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
Reviewed-by: Steven Sistare <steven.sistare@oracle.com>
Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Reviewed-by: Bob Picco <bob.picco@oracle.com>
Acked-by: Michal Hocko <mhocko@suse.com>
---
 arch/x86/mm/init_64.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 5ea1c3c2636e..8822523fdcd7 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1182,12 +1182,18 @@ void __init mem_init(void)
 
 	/* clear_bss() already clear the empty_zero_page */
 
-	register_page_bootmem_info();
-
 	/* this will put all memory onto the freelists */
 	free_all_bootmem();
 	after_bootmem = 1;
 
+	/*
+	 * Must be done after boot memory is put on freelist, because here we
+	 * might set fields in deferred struct pages that have not yet been
+	 * initialized, and free_all_bootmem() initializes all the reserved
+	 * deferred pages for us.
+	 */
+	register_page_bootmem_info();
+
 	/* Register memory areas for /proc/kcore */
 	kclist_add(&kcore_vsyscall, (void *)VSYSCALL_ADDR,
 			 PAGE_SIZE, KCORE_OTHER);
-- 
2.14.2


* [PATCH v12 03/11] sparc64/mm: setting fields in deferred pages
  2017-10-13 17:32 [PATCH v12 00/11] complete deferred page initialization Pavel Tatashin
  2017-10-13 17:32 ` [PATCH v12 01/11] mm: deferred_init_memmap improvements Pavel Tatashin
  2017-10-13 17:32 ` [PATCH v12 02/11] x86/mm: setting fields in deferred pages Pavel Tatashin
@ 2017-10-13 17:32 ` Pavel Tatashin
  2017-10-13 17:32 ` [PATCH v12 04/11] sparc64: simplify vmemmap_populate Pavel Tatashin
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 28+ messages in thread
From: Pavel Tatashin @ 2017-10-13 17:32 UTC (permalink / raw)
  To: linux-kernel, sparclinux, linux-mm, linuxppc-dev, linux-s390,
	linux-arm-kernel, x86, kasan-dev, borntraeger, heiko.carstens,
	davem, willy, mhocko, ard.biesheuvel, mark.rutland, will.deacon,
	catalin.marinas, sam, mgorman, akpm, steven.sistare,
	daniel.m.jordan, bob.picco

Without the deferred struct page feature (CONFIG_DEFERRED_STRUCT_PAGE_INIT),
flags and other fields in struct pages are never changed prior to
initializing the struct pages by going through __init_single_page().

With the deferred struct page feature enabled, there is a case where we set
some fields prior to initializing:

mem_init() {
     register_page_bootmem_info();
     free_all_bootmem();
     ...
}

When register_page_bootmem_info() is called, only non-deferred struct pages
are initialized. But this function goes through some reserved pages which
might be part of the deferred range, and thus are not yet initialized.

mem_init
register_page_bootmem_info
register_page_bootmem_info_node
 get_page_bootmem
  .. setting fields here ..
  such as: page->freelist = (void *)type;

free_all_bootmem()
free_low_memory_core_early()
 for_each_reserved_mem_region()
  reserve_bootmem_region()
   init_reserved_page() <- Only if this is deferred reserved page
    __init_single_pfn()
     __init_single_page()
      memset(0) <-- Lose the fields set here

We end up with a similar issue as in the previous patch: currently we do
not observe a problem because the memory is zeroed, but if flag asserts are
changed we can start hitting issues.

Also, because in this patch series we will stop zeroing struct page memory
during allocation, we must make sure that struct pages are properly
initialized prior to using them.

The deferred-reserved pages are initialized in free_all_bootmem().
Therefore, the fix is to switch the above calls.

Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
Reviewed-by: Steven Sistare <steven.sistare@oracle.com>
Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Reviewed-by: Bob Picco <bob.picco@oracle.com>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Michal Hocko <mhocko@suse.com>
---
 arch/sparc/mm/init_64.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
index 6034569e2c0d..caed495544e9 100644
--- a/arch/sparc/mm/init_64.c
+++ b/arch/sparc/mm/init_64.c
@@ -2548,9 +2548,16 @@ void __init mem_init(void)
 {
 	high_memory = __va(last_valid_pfn << PAGE_SHIFT);
 
-	register_page_bootmem_info();
 	free_all_bootmem();
 
+	/*
+	 * Must be done after boot memory is put on freelist, because here we
+	 * might set fields in deferred struct pages that have not yet been
+	 * initialized, and free_all_bootmem() initializes all the reserved
+	 * deferred pages for us.
+	 */
+	register_page_bootmem_info();
+
 	/*
 	 * Set up the zero page, mark it reserved, so that page count
 	 * is not manipulated when freeing the page from user ptes.
-- 
2.14.2


* [PATCH v12 04/11] sparc64: simplify vmemmap_populate
  2017-10-13 17:32 [PATCH v12 00/11] complete deferred page initialization Pavel Tatashin
                   ` (2 preceding siblings ...)
  2017-10-13 17:32 ` [PATCH v12 03/11] sparc64/mm: " Pavel Tatashin
@ 2017-10-13 17:32 ` Pavel Tatashin
  2017-10-13 17:32 ` [PATCH v12 05/11] mm: defining memblock_virt_alloc_try_nid_raw Pavel Tatashin
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 28+ messages in thread
From: Pavel Tatashin @ 2017-10-13 17:32 UTC (permalink / raw)
  To: linux-kernel, sparclinux, linux-mm, linuxppc-dev, linux-s390,
	linux-arm-kernel, x86, kasan-dev, borntraeger, heiko.carstens,
	davem, willy, mhocko, ard.biesheuvel, mark.rutland, will.deacon,
	catalin.marinas, sam, mgorman, akpm, steven.sistare,
	daniel.m.jordan, bob.picco

Remove duplicated code by using the common functions
vmemmap_pud_populate() and vmemmap_pgd_populate().

Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
Reviewed-by: Steven Sistare <steven.sistare@oracle.com>
Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Reviewed-by: Bob Picco <bob.picco@oracle.com>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Michal Hocko <mhocko@suse.com>
---
 arch/sparc/mm/init_64.c | 23 ++++++-----------------
 1 file changed, 6 insertions(+), 17 deletions(-)

diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
index caed495544e9..6839db3ffe1d 100644
--- a/arch/sparc/mm/init_64.c
+++ b/arch/sparc/mm/init_64.c
@@ -2652,30 +2652,19 @@ int __meminit vmemmap_populate(unsigned long vstart, unsigned long vend,
 	vstart = vstart & PMD_MASK;
 	vend = ALIGN(vend, PMD_SIZE);
 	for (; vstart < vend; vstart += PMD_SIZE) {
-		pgd_t *pgd = pgd_offset_k(vstart);
+		pgd_t *pgd = vmemmap_pgd_populate(vstart, node);
 		unsigned long pte;
 		pud_t *pud;
 		pmd_t *pmd;
 
-		if (pgd_none(*pgd)) {
-			pud_t *new = vmemmap_alloc_block(PAGE_SIZE, node);
+		if (!pgd)
+			return -ENOMEM;
 
-			if (!new)
-				return -ENOMEM;
-			pgd_populate(&init_mm, pgd, new);
-		}
-
-		pud = pud_offset(pgd, vstart);
-		if (pud_none(*pud)) {
-			pmd_t *new = vmemmap_alloc_block(PAGE_SIZE, node);
-
-			if (!new)
-				return -ENOMEM;
-			pud_populate(&init_mm, pud, new);
-		}
+		pud = vmemmap_pud_populate(pgd, vstart, node);
+		if (!pud)
+			return -ENOMEM;
 
 		pmd = pmd_offset(pud, vstart);
-
 		pte = pmd_val(*pmd);
 		if (!(pte & _PAGE_VALID)) {
 			void *block = vmemmap_alloc_block(PMD_SIZE, node);
-- 
2.14.2


* [PATCH v12 05/11] mm: defining memblock_virt_alloc_try_nid_raw
  2017-10-13 17:32 [PATCH v12 00/11] complete deferred page initialization Pavel Tatashin
                   ` (3 preceding siblings ...)
  2017-10-13 17:32 ` [PATCH v12 04/11] sparc64: simplify vmemmap_populate Pavel Tatashin
@ 2017-10-13 17:32 ` Pavel Tatashin
  2017-10-13 17:32 ` [PATCH v12 06/11] mm: zero reserved and unavailable struct pages Pavel Tatashin
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 28+ messages in thread
From: Pavel Tatashin @ 2017-10-13 17:32 UTC (permalink / raw)
  To: linux-kernel, sparclinux, linux-mm, linuxppc-dev, linux-s390,
	linux-arm-kernel, x86, kasan-dev, borntraeger, heiko.carstens,
	davem, willy, mhocko, ard.biesheuvel, mark.rutland, will.deacon,
	catalin.marinas, sam, mgorman, akpm, steven.sistare,
	daniel.m.jordan, bob.picco

* A new variant of memblock_virt_alloc_* allocations:
memblock_virt_alloc_try_nid_raw()
    - Does not zero the allocated memory
    - Does not panic if request cannot be satisfied

* optimize early system hash allocations

Clients can call alloc_large_system_hash() with the HASH_ZERO flag to
specify that the memory allocated for a system hash needs to be zeroed;
otherwise the memory does not need to be zeroed, and the client will
initialize it.

If the memory does not need to be zeroed, call the new
memblock_virt_alloc_raw() interface, and thus improve boot performance.

* debug for raw allocator

When CONFIG_DEBUG_VM is enabled, this patch sets all the memory that is
returned by memblock_virt_alloc_try_nid_raw() to ones, to ensure that no
places expect zeroed memory.
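
For illustration, the choice the early hash-allocation path makes after this
change (this mirrors the alloc_large_system_hash() hunk below): only callers
that pass HASH_ZERO pay for the memset(); everyone else gets raw memory and
initializes it themselves:

	if (flags & HASH_EARLY) {
		if (flags & HASH_ZERO)
			table = memblock_virt_alloc_nopanic(size, 0); /* zeroed */
		else
			table = memblock_virt_alloc_raw(size, 0); /* caller initializes */
	}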

Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
Reviewed-by: Steven Sistare <steven.sistare@oracle.com>
Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Reviewed-by: Bob Picco <bob.picco@oracle.com>
Acked-by: Michal Hocko <mhocko@suse.com>
---
 include/linux/bootmem.h | 27 ++++++++++++++++++++++
 mm/memblock.c           | 60 +++++++++++++++++++++++++++++++++++++++++++------
 mm/page_alloc.c         | 15 ++++++-------
 3 files changed, 87 insertions(+), 15 deletions(-)

diff --git a/include/linux/bootmem.h b/include/linux/bootmem.h
index e223d91b6439..ea30b3987282 100644
--- a/include/linux/bootmem.h
+++ b/include/linux/bootmem.h
@@ -160,6 +160,9 @@ extern void *__alloc_bootmem_low_node(pg_data_t *pgdat,
 #define BOOTMEM_ALLOC_ANYWHERE		(~(phys_addr_t)0)
 
 /* FIXME: Move to memblock.h at a point where we remove nobootmem.c */
+void *memblock_virt_alloc_try_nid_raw(phys_addr_t size, phys_addr_t align,
+				      phys_addr_t min_addr,
+				      phys_addr_t max_addr, int nid);
 void *memblock_virt_alloc_try_nid_nopanic(phys_addr_t size,
 		phys_addr_t align, phys_addr_t min_addr,
 		phys_addr_t max_addr, int nid);
@@ -176,6 +179,14 @@ static inline void * __init memblock_virt_alloc(
 					    NUMA_NO_NODE);
 }
 
+static inline void * __init memblock_virt_alloc_raw(
+					phys_addr_t size,  phys_addr_t align)
+{
+	return memblock_virt_alloc_try_nid_raw(size, align, BOOTMEM_LOW_LIMIT,
+					    BOOTMEM_ALLOC_ACCESSIBLE,
+					    NUMA_NO_NODE);
+}
+
 static inline void * __init memblock_virt_alloc_nopanic(
 					phys_addr_t size, phys_addr_t align)
 {
@@ -257,6 +268,14 @@ static inline void * __init memblock_virt_alloc(
 	return __alloc_bootmem(size, align, BOOTMEM_LOW_LIMIT);
 }
 
+static inline void * __init memblock_virt_alloc_raw(
+					phys_addr_t size,  phys_addr_t align)
+{
+	if (!align)
+		align = SMP_CACHE_BYTES;
+	return __alloc_bootmem_nopanic(size, align, BOOTMEM_LOW_LIMIT);
+}
+
 static inline void * __init memblock_virt_alloc_nopanic(
 					phys_addr_t size, phys_addr_t align)
 {
@@ -309,6 +328,14 @@ static inline void * __init memblock_virt_alloc_try_nid(phys_addr_t size,
 					  min_addr);
 }
 
+static inline void * __init memblock_virt_alloc_try_nid_raw(
+			phys_addr_t size, phys_addr_t align,
+			phys_addr_t min_addr, phys_addr_t max_addr, int nid)
+{
+	return ___alloc_bootmem_node_nopanic(NODE_DATA(nid), size, align,
+				min_addr, max_addr);
+}
+
 static inline void * __init memblock_virt_alloc_try_nid_nopanic(
 			phys_addr_t size, phys_addr_t align,
 			phys_addr_t min_addr, phys_addr_t max_addr, int nid)
diff --git a/mm/memblock.c b/mm/memblock.c
index 91205780e6b1..1f299fb1eb08 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1327,7 +1327,6 @@ static void * __init memblock_virt_alloc_internal(
 	return NULL;
 done:
 	ptr = phys_to_virt(alloc);
-	memset(ptr, 0, size);
 
 	/*
 	 * The min_count is set to 0 so that bootmem allocated blocks
@@ -1340,6 +1339,45 @@ static void * __init memblock_virt_alloc_internal(
 	return ptr;
 }
 
+/**
+ * memblock_virt_alloc_try_nid_raw - allocate boot memory block without zeroing
+ * memory and without panicking
+ * @size: size of memory block to be allocated in bytes
+ * @align: alignment of the region and block's size
+ * @min_addr: the lower bound of the memory region from where the allocation
+ *	  is preferred (phys address)
+ * @max_addr: the upper bound of the memory region from where the allocation
+ *	      is preferred (phys address), or %BOOTMEM_ALLOC_ACCESSIBLE to
+ *	      allocate only from memory limited by memblock.current_limit value
+ * @nid: nid of the free area to find, %NUMA_NO_NODE for any node
+ *
+ * Public function, provides additional debug information (including caller
+ * info), if enabled. Does not zero allocated memory, does not panic if request
+ * cannot be satisfied.
+ *
+ * RETURNS:
+ * Virtual address of allocated memory block on success, NULL on failure.
+ */
+void * __init memblock_virt_alloc_try_nid_raw(
+			phys_addr_t size, phys_addr_t align,
+			phys_addr_t min_addr, phys_addr_t max_addr,
+			int nid)
+{
+	void *ptr;
+
+	memblock_dbg("%s: %llu bytes align=0x%llx nid=%d from=0x%llx max_addr=0x%llx %pF\n",
+		     __func__, (u64)size, (u64)align, nid, (u64)min_addr,
+		     (u64)max_addr, (void *)_RET_IP_);
+
+	ptr = memblock_virt_alloc_internal(size, align,
+					   min_addr, max_addr, nid);
+#ifdef CONFIG_DEBUG_VM
+	if (ptr && size > 0)
+		memset(ptr, 0xff, size);
+#endif
+	return ptr;
+}
+
 /**
  * memblock_virt_alloc_try_nid_nopanic - allocate boot memory block
  * @size: size of memory block to be allocated in bytes
@@ -1351,8 +1389,8 @@ static void * __init memblock_virt_alloc_internal(
  *	      allocate only from memory limited by memblock.current_limit value
  * @nid: nid of the free area to find, %NUMA_NO_NODE for any node
  *
- * Public version of _memblock_virt_alloc_try_nid_nopanic() which provides
- * additional debug information (including caller info), if enabled.
+ * Public function, provides additional debug information (including caller
+ * info), if enabled. This function zeroes the allocated memory.
  *
  * RETURNS:
  * Virtual address of allocated memory block on success, NULL on failure.
@@ -1362,11 +1400,17 @@ void * __init memblock_virt_alloc_try_nid_nopanic(
 				phys_addr_t min_addr, phys_addr_t max_addr,
 				int nid)
 {
+	void *ptr;
+
 	memblock_dbg("%s: %llu bytes align=0x%llx nid=%d from=0x%llx max_addr=0x%llx %pF\n",
 		     __func__, (u64)size, (u64)align, nid, (u64)min_addr,
 		     (u64)max_addr, (void *)_RET_IP_);
-	return memblock_virt_alloc_internal(size, align, min_addr,
-					     max_addr, nid);
+
+	ptr = memblock_virt_alloc_internal(size, align,
+					   min_addr, max_addr, nid);
+	if (ptr)
+		memset(ptr, 0, size);
+	return ptr;
 }
 
 /**
@@ -1380,7 +1424,7 @@ void * __init memblock_virt_alloc_try_nid_nopanic(
  *	      allocate only from memory limited by memblock.current_limit value
  * @nid: nid of the free area to find, %NUMA_NO_NODE for any node
  *
- * Public panicking version of _memblock_virt_alloc_try_nid_nopanic()
+ * Public panicking version of memblock_virt_alloc_try_nid_nopanic()
  * which provides debug information (including caller info), if enabled,
  * and panics if the request can not be satisfied.
  *
@@ -1399,8 +1443,10 @@ void * __init memblock_virt_alloc_try_nid(
 		     (u64)max_addr, (void *)_RET_IP_);
 	ptr = memblock_virt_alloc_internal(size, align,
 					   min_addr, max_addr, nid);
-	if (ptr)
+	if (ptr) {
+		memset(ptr, 0, size);
 		return ptr;
+	}
 
 	panic("%s: Failed to allocate %llu bytes align=0x%llx nid=%d from=0x%llx max_addr=0x%llx\n",
 	      __func__, (u64)size, (u64)align, nid, (u64)min_addr,
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index cdbd14829fd3..20b0bace2235 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7307,18 +7307,17 @@ void *__init alloc_large_system_hash(const char *tablename,
 
 	log2qty = ilog2(numentries);
 
-	/*
-	 * memblock allocator returns zeroed memory already, so HASH_ZERO is
-	 * currently not used when HASH_EARLY is specified.
-	 */
 	gfp_flags = (flags & HASH_ZERO) ? GFP_ATOMIC | __GFP_ZERO : GFP_ATOMIC;
 	do {
 		size = bucketsize << log2qty;
-		if (flags & HASH_EARLY)
-			table = memblock_virt_alloc_nopanic(size, 0);
-		else if (hashdist)
+		if (flags & HASH_EARLY) {
+			if (flags & HASH_ZERO)
+				table = memblock_virt_alloc_nopanic(size, 0);
+			else
+				table = memblock_virt_alloc_raw(size, 0);
+		} else if (hashdist) {
 			table = __vmalloc(size, gfp_flags, PAGE_KERNEL);
-		else {
+		} else {
 			/*
 			 * If bucketsize is not a power-of-two, we may free
 			 * some pages at the end of hash table which
-- 
2.14.2


* [PATCH v12 06/11] mm: zero reserved and unavailable struct pages
  2017-10-13 17:32 [PATCH v12 00/11] complete deferred page initialization Pavel Tatashin
                   ` (4 preceding siblings ...)
  2017-10-13 17:32 ` [PATCH v12 05/11] mm: defining memblock_virt_alloc_try_nid_raw Pavel Tatashin
@ 2017-10-13 17:32 ` Pavel Tatashin
  2017-10-13 17:32 ` [PATCH v12 07/11] x86/kasan: add and use kasan_map_populate() Pavel Tatashin
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 28+ messages in thread
From: Pavel Tatashin @ 2017-10-13 17:32 UTC (permalink / raw)
  To: linux-kernel, sparclinux, linux-mm, linuxppc-dev, linux-s390,
	linux-arm-kernel, x86, kasan-dev, borntraeger, heiko.carstens,
	davem, willy, mhocko, ard.biesheuvel, mark.rutland, will.deacon,
	catalin.marinas, sam, mgorman, akpm, steven.sistare,
	daniel.m.jordan, bob.picco

Some memory is reserved but unavailable: not present in memblock.memory
(because not backed by physical pages), but present in memblock.reserved.
Such memory has backing struct pages, but they are not initialized by going
through __init_single_page().

In some cases these struct pages are accessed even if they do not contain
any data. One example is that page_to_pfn() might access page->flags if
this is where the section information is stored (CONFIG_SPARSEMEM,
SECTION_IN_PAGE_FLAGS).

One example of such memory: trim_low_memory_range() unconditionally
reserves from pfn 0, but e820__memblock_setup() might provide the existing
memory from pfn 1 (i.e. KVM).

Since struct pages are zeroed in __init_single_page() and not at
allocation time, we must zero such struct pages explicitly.

The patch adds a new memblock iterator:
	for_each_resv_unavail_range(i, p_start, p_end)

which iterates through the reserved && !memory ranges, and we zero such
struct pages explicitly by calling mm_zero_struct_page().
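
A minimal sketch of how the new iterator is used (this mirrors the
zero_resv_unavail() function added below):

	phys_addr_t start, end;
	unsigned long pfn;
	u64 i;

	/* zero struct pages that back reserved but unavailable ranges */
	for_each_resv_unavail_range(i, &start, &end)
		for (pfn = PFN_DOWN(start); pfn < PFN_UP(end); pfn++)
			mm_zero_struct_page(pfn_to_page(pfn));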

===

Here is more detailed example of problem that this patch is addressing:

Run tested on qemu with the following arguments:

	-enable-kvm -cpu kvm64 -m 512 -smp 2

This patch reports that there are 98 unavailable pages.

They are: pfn 0 and pfns in range [159, 255].

Note, trim_low_memory_range() reserves only pfns in the range [0, 15]; it
does not reserve the [159, 255] range.

e820__memblock_setup() reports to Linux that the following physical ranges
are available:
    [1, 158]
    [256, 130783]

Notice that exactly the unavailable pfns are missing!

Now, let's check what we have in zone 0: [1, 131039]

pfn 0 is not part of the zone, but pfns [1, 158] are.

However, the bigger problem with not initializing these struct pages is
with memory hotplug, because that path operates at 2M (section_nr)
boundaries and checks whether a 2M range of pages is hot-removable. It
starts with the first pfn of the zone, rounds it down to a 2M boundary
(struct pages are allocated at 2M boundaries when vmemmap is created), and
checks whether that section is hot-removable. In this case it starts with
pfn 1 and rounds it down to pfn 0. Later the pfn is converted to a struct
page, and some of its fields are checked. Now, if we do not zero struct
pages, we get unpredictable results.

In fact, when CONFIG_DEBUG_VM is enabled and we explicitly set all vmemmap
memory to ones, the following panic is observed in a kernel test without
this patch applied:

BUG: unable to handle kernel NULL pointer dereference at          (null)
IP: is_pageblock_removable_nolock+0x35/0x90
PGD 0 P4D 0
Oops: 0000 [#1] PREEMPT
...
task: ffff88001f4e2900 task.stack: ffffc90000314000
RIP: 0010:is_pageblock_removable_nolock+0x35/0x90
RSP: 0018:ffffc90000317d60 EFLAGS: 00010202
RAX: ffffffffffffffff RBX: ffff88001d92b000 RCX: 0000000000000000
RDX: 0000000000000000 RSI: 0000000000200000 RDI: ffff88001d92b000
RBP: ffffc90000317d80 R08: 00000000000010c8 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000000 R12: ffff88001db2b000
R13: ffffffff81af6d00 R14: ffff88001f7d5000 R15: ffffffff82a1b6c0
FS:  00007f4eb857f7c0(0000) GS:ffffffff81c27000(0000) knlGS:0
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000000 CR3: 000000001f4e6000 CR4: 00000000000006b0
Call Trace:
 ? is_mem_section_removable+0x5a/0xd0
 show_mem_removable+0x6b/0xa0
 dev_attr_show+0x1b/0x50
 sysfs_kf_seq_show+0xa1/0x100
 kernfs_seq_show+0x22/0x30
 seq_read+0x1ac/0x3a0
 kernfs_fop_read+0x36/0x190
 ? security_file_permission+0x90/0xb0
 __vfs_read+0x16/0x30
 vfs_read+0x81/0x130
 SyS_read+0x44/0xa0
 entry_SYSCALL_64_fastpath+0x1f/0xbd

Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
Reviewed-by: Steven Sistare <steven.sistare@oracle.com>
Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Reviewed-by: Bob Picco <bob.picco@oracle.com>
Acked-by: Michal Hocko <mhocko@suse.com>
---
 include/linux/memblock.h | 16 ++++++++++++++++
 include/linux/mm.h       | 15 +++++++++++++++
 mm/page_alloc.c          | 40 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 71 insertions(+)

diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index bae11c7e7bf3..ce8bfa5f3e9b 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -237,6 +237,22 @@ unsigned long memblock_next_valid_pfn(unsigned long pfn, unsigned long max_pfn);
 	for_each_mem_range_rev(i, &memblock.memory, &memblock.reserved,	\
 			       nid, flags, p_start, p_end, p_nid)
 
+/**
+ * for_each_resv_unavail_range - iterate through reserved and unavailable memory
+ * @i: u64 used as loop variable
+ * @flags: pick from blocks based on memory attributes
+ * @p_start: ptr to phys_addr_t for start address of the range, can be %NULL
+ * @p_end: ptr to phys_addr_t for end address of the range, can be %NULL
+ *
+ * Walks over unavailable but reserved (reserved && !memory) areas of memblock.
+ * Available as soon as memblock is initialized.
+ * Note: because this memory does not belong to any physical node, flags and
+ * nid arguments do not make sense and thus not exported as arguments.
+ */
+#define for_each_resv_unavail_range(i, p_start, p_end)			\
+	for_each_mem_range(i, &memblock.reserved, &memblock.memory,	\
+			   NUMA_NO_NODE, MEMBLOCK_NONE, p_start, p_end, NULL)
+
 static inline void memblock_set_region_flags(struct memblock_region *r,
 					     unsigned long flags)
 {
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 065d99deb847..04c8b2e5aff4 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -94,6 +94,15 @@ extern int mmap_rnd_compat_bits __read_mostly;
 #define mm_forbids_zeropage(X)	(0)
 #endif
 
+/*
+ * On some architectures it is expensive to call memset() for small sizes.
+ * Those architectures should provide their own implementation of "struct page"
+ * zeroing by defining this macro in <asm/pgtable.h>.
+ */
+#ifndef mm_zero_struct_page
+#define mm_zero_struct_page(pp)  ((void)memset((pp), 0, sizeof(struct page)))
+#endif
+
 /*
  * Default maximum number of active map areas, this limits the number of vmas
  * per mm struct. Users can overwrite this number by sysctl but there is a
@@ -2001,6 +2010,12 @@ extern int __meminit __early_pfn_to_nid(unsigned long pfn,
 					struct mminit_pfnnid_cache *state);
 #endif
 
+#ifdef CONFIG_HAVE_MEMBLOCK
+void zero_resv_unavail(void);
+#else
+static inline void zero_resv_unavail(void) {}
+#endif
+
 extern void set_dma_reserve(unsigned long new_dma_reserve);
 extern void memmap_init_zone(unsigned long, int, unsigned long,
 				unsigned long, enum memmap_context);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 20b0bace2235..54e0fa12e7ff 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6209,6 +6209,44 @@ void __paginginit free_area_init_node(int nid, unsigned long *zones_size,
 	free_area_init_core(pgdat);
 }
 
+#ifdef CONFIG_HAVE_MEMBLOCK
+/*
+ * Only struct pages that are backed by physical memory are zeroed and
+ * initialized by going through __init_single_page(). But, there are some
+ * struct pages which are reserved in memblock allocator and their fields
+ * may be accessed (for example page_to_pfn() on some configuration accesses
+ * flags). We must explicitly zero those struct pages.
+ */
+void __paginginit zero_resv_unavail(void)
+{
+	phys_addr_t start, end;
+	unsigned long pfn;
+	u64 i, pgcnt;
+
+	/*
+	 * Loop through ranges that are reserved, but do not have reported
+	 * physical memory backing.
+	 */
+	pgcnt = 0;
+	for_each_resv_unavail_range(i, &start, &end) {
+		for (pfn = PFN_DOWN(start); pfn < PFN_UP(end); pfn++) {
+			mm_zero_struct_page(pfn_to_page(pfn));
+			pgcnt++;
+		}
+	}
+
+	/*
+	 * Struct pages that do not have backing memory. This could be because
+	 * firmware is using some of this memory, or for some other reasons.
+	 * Once memblock is changed so such behaviour is not allowed: i.e.
+	 * list of "reserved" memory must be a subset of list of "memory", then
+	 * this code can be removed.
+	 */
+	if (pgcnt)
+		pr_info("Reserved but unavailable: %lld pages", pgcnt);
+}
+#endif /* CONFIG_HAVE_MEMBLOCK */
+
 #ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
 
 #if MAX_NUMNODES > 1
@@ -6632,6 +6670,7 @@ void __init free_area_init_nodes(unsigned long *max_zone_pfn)
 			node_set_state(nid, N_MEMORY);
 		check_for_memory(pgdat, nid);
 	}
+	zero_resv_unavail();
 }
 
 static int __init cmdline_parse_core(char *p, unsigned long *core)
@@ -6795,6 +6834,7 @@ void __init free_area_init(unsigned long *zones_size)
 {
 	free_area_init_node(0, zones_size,
 			__pa(PAGE_OFFSET) >> PAGE_SHIFT, NULL);
+	zero_resv_unavail();
 }
 
 static int page_alloc_cpu_dead(unsigned int cpu)
-- 
2.14.2


* [PATCH v12 07/11] x86/kasan: add and use kasan_map_populate()
  2017-10-13 17:32 [PATCH v12 00/11] complete deferred page initialization Pavel Tatashin
                   ` (5 preceding siblings ...)
  2017-10-13 17:32 ` [PATCH v12 06/11] mm: zero reserved and unavailable struct pages Pavel Tatashin
@ 2017-10-13 17:32 ` Pavel Tatashin
  2017-10-18 17:11   ` Andrey Ryabinin
  2017-10-13 17:32 ` [PATCH v12 08/11] arm64/kasan: " Pavel Tatashin
                   ` (4 subsequent siblings)
  11 siblings, 1 reply; 28+ messages in thread
From: Pavel Tatashin @ 2017-10-13 17:32 UTC (permalink / raw)
  To: linux-kernel, sparclinux, linux-mm, linuxppc-dev, linux-s390,
	linux-arm-kernel, x86, kasan-dev, borntraeger, heiko.carstens,
	davem, willy, mhocko, ard.biesheuvel, mark.rutland, will.deacon,
	catalin.marinas, sam, mgorman, akpm, steven.sistare,
	daniel.m.jordan, bob.picco

During early boot, kasan uses vmemmap_populate() to establish its shadow
memory. But that interface is intended for struct page use.

With this patch series, vmemmap memory won't be zeroed during allocation,
while kasan expects that memory to be zeroed. We therefore add a new
kasan_map_populate() function that allocates and maps kasan shadow memory
and also zeroes it for us, and use it instead.

Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
---
 arch/x86/mm/kasan_init_64.c | 75 ++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 71 insertions(+), 4 deletions(-)

diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index bc84b73684b7..9778fec8a5dc 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -15,6 +15,73 @@
 
 extern struct range pfn_mapped[E820_MAX_ENTRIES];
 
+/* Creates mappings for kasan during early boot. The mapped memory is zeroed */
+static int __meminit kasan_map_populate(unsigned long start, unsigned long end,
+					int node)
+{
+	unsigned long addr, pfn, next;
+	unsigned long long size;
+	pgd_t *pgd;
+	p4d_t *p4d;
+	pud_t *pud;
+	pmd_t *pmd;
+	pte_t *pte;
+	int ret;
+
+	ret = vmemmap_populate(start, end, node);
+	/*
+	 * We might have partially populated memory, so check for no entries,
+	 * and zero only those that actually exist.
+	 */
+	for (addr = start; addr < end; addr = next) {
+		pgd = pgd_offset_k(addr);
+		if (pgd_none(*pgd)) {
+			next = pgd_addr_end(addr, end);
+			continue;
+		}
+
+		p4d = p4d_offset(pgd, addr);
+		if (p4d_none(*p4d)) {
+			next = p4d_addr_end(addr, end);
+			continue;
+		}
+
+		pud = pud_offset(p4d, addr);
+		if (pud_none(*pud)) {
+			next = pud_addr_end(addr, end);
+			continue;
+		}
+		if (pud_large(*pud)) {
+			/* This is PUD size page */
+			next = pud_addr_end(addr, end);
+			size = PUD_SIZE;
+			pfn = pud_pfn(*pud);
+		} else {
+			pmd = pmd_offset(pud, addr);
+			if (pmd_none(*pmd)) {
+				next = pmd_addr_end(addr, end);
+				continue;
+			}
+			if (pmd_large(*pmd)) {
+				/* This is PMD size page */
+				next = pmd_addr_end(addr, end);
+				size = PMD_SIZE;
+				pfn = pmd_pfn(*pmd);
+			} else {
+				pte = pte_offset_kernel(pmd, addr);
+				next = addr + PAGE_SIZE;
+				if (pte_none(*pte))
+					continue;
+				/* This is base size page */
+				size = PAGE_SIZE;
+				pfn = pte_pfn(*pte);
+			}
+		}
+		memset(phys_to_virt(PFN_PHYS(pfn)), 0, size);
+	}
+	return ret;
+}
+
 static int __init map_range(struct range *range)
 {
 	unsigned long start;
@@ -23,7 +90,7 @@ static int __init map_range(struct range *range)
 	start = (unsigned long)kasan_mem_to_shadow(pfn_to_kaddr(range->start));
 	end = (unsigned long)kasan_mem_to_shadow(pfn_to_kaddr(range->end));
 
-	return vmemmap_populate(start, end, NUMA_NO_NODE);
+	return kasan_map_populate(start, end, NUMA_NO_NODE);
 }
 
 static void __init clear_pgds(unsigned long start,
@@ -136,9 +203,9 @@ void __init kasan_init(void)
 		kasan_mem_to_shadow((void *)PAGE_OFFSET + MAXMEM),
 		kasan_mem_to_shadow((void *)__START_KERNEL_map));
 
-	vmemmap_populate((unsigned long)kasan_mem_to_shadow(_stext),
-			(unsigned long)kasan_mem_to_shadow(_end),
-			NUMA_NO_NODE);
+	kasan_map_populate((unsigned long)kasan_mem_to_shadow(_stext),
+			   (unsigned long)kasan_mem_to_shadow(_end),
+			   NUMA_NO_NODE);
 
 	kasan_populate_zero_shadow(kasan_mem_to_shadow((void *)MODULES_END),
 			(void *)KASAN_SHADOW_END);
-- 
2.14.2


* [PATCH v12 08/11] arm64/kasan: add and use kasan_map_populate()
  2017-10-13 17:32 [PATCH v12 00/11] complete deferred page initialization Pavel Tatashin
                   ` (6 preceding siblings ...)
  2017-10-13 17:32 ` [PATCH v12 07/11] x86/kasan: add and use kasan_map_populate() Pavel Tatashin
@ 2017-10-13 17:32 ` Pavel Tatashin
  2017-10-18 16:55   ` Andrey Ryabinin
  2017-10-13 17:32 ` [PATCH v12 09/11] mm: stop zeroing memory during allocation in vmemmap Pavel Tatashin
                   ` (3 subsequent siblings)
  11 siblings, 1 reply; 28+ messages in thread
From: Pavel Tatashin @ 2017-10-13 17:32 UTC (permalink / raw)
  To: linux-kernel, sparclinux, linux-mm, linuxppc-dev, linux-s390,
	linux-arm-kernel, x86, kasan-dev, borntraeger, heiko.carstens,
	davem, willy, mhocko, ard.biesheuvel, mark.rutland, will.deacon,
	catalin.marinas, sam, mgorman, akpm, steven.sistare,
	daniel.m.jordan, bob.picco

During early boot, kasan uses vmemmap_populate() to establish its shadow
memory. But that interface is intended for struct page use.

With this patch series, vmemmap memory won't be zeroed during allocation,
while kasan expects that memory to be zeroed. We therefore add a new
kasan_map_populate() function that allocates and maps kasan shadow memory
and also zeroes it for us, and use it instead.

Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
---
 arch/arm64/mm/kasan_init.c | 72 ++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 66 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 81f03959a4ab..cb4af2951c90 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -28,6 +28,66 @@
 
 static pgd_t tmp_pg_dir[PTRS_PER_PGD] __initdata __aligned(PGD_SIZE);
 
+/* Creates mappings for kasan during early boot. The mapped memory is zeroed */
+static int __meminit kasan_map_populate(unsigned long start, unsigned long end,
+					int node)
+{
+	unsigned long addr, pfn, next;
+	unsigned long long size;
+	pgd_t *pgd;
+	pud_t *pud;
+	pmd_t *pmd;
+	pte_t *pte;
+	int ret;
+
+	ret = vmemmap_populate(start, end, node);
+	/*
+	 * We might have partially populated memory, so check for no entries,
+	 * and zero only those that actually exist.
+	 */
+	for (addr = start; addr < end; addr = next) {
+		pgd = pgd_offset_k(addr);
+		if (pgd_none(*pgd)) {
+			next = pgd_addr_end(addr, end);
+			continue;
+		}
+
+		pud = pud_offset(pgd, addr);
+		if (pud_none(*pud)) {
+			next = pud_addr_end(addr, end);
+			continue;
+		}
+		if (pud_sect(*pud)) {
+			/* This is PUD size page */
+			next = pud_addr_end(addr, end);
+			size = PUD_SIZE;
+			pfn = pud_pfn(*pud);
+		} else {
+			pmd = pmd_offset(pud, addr);
+			if (pmd_none(*pmd)) {
+				next = pmd_addr_end(addr, end);
+				continue;
+			}
+			if (pmd_sect(*pmd)) {
+				/* This is PMD size page */
+				next = pmd_addr_end(addr, end);
+				size = PMD_SIZE;
+				pfn = pmd_pfn(*pmd);
+			} else {
+				pte = pte_offset_kernel(pmd, addr);
+				next = addr + PAGE_SIZE;
+				if (pte_none(*pte))
+					continue;
+				/* This is base size page */
+				size = PAGE_SIZE;
+				pfn = pte_pfn(*pte);
+			}
+		}
+		memset(phys_to_virt(PFN_PHYS(pfn)), 0, size);
+	}
+	return ret;
+}
+
 /*
  * The p*d_populate functions call virt_to_phys implicitly so they can't be used
  * directly on kernel symbols (bm_p*d). All the early functions are called too
@@ -161,11 +221,11 @@ void __init kasan_init(void)
 
 	clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
 
-	vmemmap_populate(kimg_shadow_start, kimg_shadow_end,
-			 pfn_to_nid(virt_to_pfn(lm_alias(_text))));
+	kasan_map_populate(kimg_shadow_start, kimg_shadow_end,
+			   pfn_to_nid(virt_to_pfn(lm_alias(_text))));
 
 	/*
-	 * vmemmap_populate() has populated the shadow region that covers the
+	 * kasan_map_populate() has populated the shadow region that covers the
 	 * kernel image with SWAPPER_BLOCK_SIZE mappings, so we have to round
 	 * the start and end addresses to SWAPPER_BLOCK_SIZE as well, to prevent
 	 * kasan_populate_zero_shadow() from replacing the page table entries
@@ -191,9 +251,9 @@ void __init kasan_init(void)
 		if (start >= end)
 			break;
 
-		vmemmap_populate((unsigned long)kasan_mem_to_shadow(start),
-				(unsigned long)kasan_mem_to_shadow(end),
-				pfn_to_nid(virt_to_pfn(start)));
+		kasan_map_populate((unsigned long)kasan_mem_to_shadow(start),
+				   (unsigned long)kasan_mem_to_shadow(end),
+				   pfn_to_nid(virt_to_pfn(start)));
 	}
 
 	/*
-- 
2.14.2


* [PATCH v12 09/11] mm: stop zeroing memory during allocation in vmemmap
  2017-10-13 17:32 [PATCH v12 00/11] complete deferred page initialization Pavel Tatashin
                   ` (7 preceding siblings ...)
  2017-10-13 17:32 ` [PATCH v12 08/11] arm64/kasan: " Pavel Tatashin
@ 2017-10-13 17:32 ` Pavel Tatashin
  2017-10-19 23:59   ` Andrew Morton
  2017-10-13 17:32 ` [PATCH v12 10/11] sparc64: optimized struct page zeroing Pavel Tatashin
                   ` (2 subsequent siblings)
  11 siblings, 1 reply; 28+ messages in thread
From: Pavel Tatashin @ 2017-10-13 17:32 UTC (permalink / raw)
  To: linux-kernel, sparclinux, linux-mm, linuxppc-dev, linux-s390,
	linux-arm-kernel, x86, kasan-dev, borntraeger, heiko.carstens,
	davem, willy, mhocko, ard.biesheuvel, mark.rutland, will.deacon,
	catalin.marinas, sam, mgorman, akpm, steven.sistare,
	daniel.m.jordan, bob.picco

vmemmap_alloc_block() will no longer zero the block, so zero memory
at its call sites for everything except struct pages. Struct page memory
is zeroed by struct page initialization.

Replace the allocators in sparse-vmemmap with the non-zeroing versions, so
we get the performance improvement of zeroing the memory in parallel when
struct pages are zeroed.

Add struct page zeroing as part of the initialization of the other fields
in __init_single_page().
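
The page-table pages that sparse-vmemmap itself populates still must be
zeroed; a small wrapper added in this patch handles that, while the struct
page blocks come from the raw allocator and are zeroed later in
__init_single_page():

	static inline void *vmemmap_alloc_block_zero(unsigned long size, int node)
	{
		void *p = vmemmap_alloc_block(size, node);

		if (!p)
			return NULL;
		/* page-table pages must still start out zeroed */
		memset(p, 0, size);

		return p;
	}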

The following single-thread performance was collected on an Intel(R)
Xeon(R) CPU E7-8895 v3 @ 2.60GHz with 1T of memory (268400646 pages in 8
nodes):

                         BASE            FIX
sparse_init     11.244671836s   0.007199623s
zone_sizes_init  4.879775891s   8.355182299s
                  --------------------------
Total           16.124447727s   8.362381922s

sparse_init is where memory for struct pages is zeroed, and the zeroing
part is moved later in this patch into __init_single_page(), which is
called from zone_sizes_init().

Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
Reviewed-by: Steven Sistare <steven.sistare@oracle.com>
Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Reviewed-by: Bob Picco <bob.picco@oracle.com>
Acked-by: Michal Hocko <mhocko@suse.com>
---
 include/linux/mm.h  | 11 +++++++++++
 mm/page_alloc.c     |  1 +
 mm/sparse-vmemmap.c | 15 +++++++--------
 mm/sparse.c         |  6 +++---
 4 files changed, 22 insertions(+), 11 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 04c8b2e5aff4..fd045a3b243a 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2501,6 +2501,17 @@ static inline void *vmemmap_alloc_block_buf(unsigned long size, int node)
 	return __vmemmap_alloc_block_buf(size, node, NULL);
 }
 
+static inline void *vmemmap_alloc_block_zero(unsigned long size, int node)
+{
+	void *p = vmemmap_alloc_block(size, node);
+
+	if (!p)
+		return NULL;
+	memset(p, 0, size);
+
+	return p;
+}
+
 void vmemmap_verify(pte_t *, int, unsigned long, unsigned long);
 int vmemmap_populate_basepages(unsigned long start, unsigned long end,
 			       int node);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 54e0fa12e7ff..eb2ac79926e8 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1170,6 +1170,7 @@ static void free_one_page(struct zone *zone,
 static void __meminit __init_single_page(struct page *page, unsigned long pfn,
 				unsigned long zone, int nid)
 {
+	mm_zero_struct_page(page);
 	set_page_links(page, zone, nid, pfn);
 	init_page_count(page);
 	page_mapcount_reset(page);
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index d1a39b8051e0..c2f5654e7c9d 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -41,7 +41,7 @@ static void * __ref __earlyonly_bootmem_alloc(int node,
 				unsigned long align,
 				unsigned long goal)
 {
-	return memblock_virt_alloc_try_nid(size, align, goal,
+	return memblock_virt_alloc_try_nid_raw(size, align, goal,
 					    BOOTMEM_ALLOC_ACCESSIBLE, node);
 }
 
@@ -54,9 +54,8 @@ void * __meminit vmemmap_alloc_block(unsigned long size, int node)
 	if (slab_is_available()) {
 		struct page *page;
 
-		page = alloc_pages_node(node,
-			GFP_KERNEL | __GFP_ZERO | __GFP_RETRY_MAYFAIL,
-			get_order(size));
+		page = alloc_pages_node(node, GFP_KERNEL | __GFP_RETRY_MAYFAIL,
+					get_order(size));
 		if (page)
 			return page_address(page);
 		return NULL;
@@ -183,7 +182,7 @@ pmd_t * __meminit vmemmap_pmd_populate(pud_t *pud, unsigned long addr, int node)
 {
 	pmd_t *pmd = pmd_offset(pud, addr);
 	if (pmd_none(*pmd)) {
-		void *p = vmemmap_alloc_block(PAGE_SIZE, node);
+		void *p = vmemmap_alloc_block_zero(PAGE_SIZE, node);
 		if (!p)
 			return NULL;
 		pmd_populate_kernel(&init_mm, pmd, p);
@@ -195,7 +194,7 @@ pud_t * __meminit vmemmap_pud_populate(p4d_t *p4d, unsigned long addr, int node)
 {
 	pud_t *pud = pud_offset(p4d, addr);
 	if (pud_none(*pud)) {
-		void *p = vmemmap_alloc_block(PAGE_SIZE, node);
+		void *p = vmemmap_alloc_block_zero(PAGE_SIZE, node);
 		if (!p)
 			return NULL;
 		pud_populate(&init_mm, pud, p);
@@ -207,7 +206,7 @@ p4d_t * __meminit vmemmap_p4d_populate(pgd_t *pgd, unsigned long addr, int node)
 {
 	p4d_t *p4d = p4d_offset(pgd, addr);
 	if (p4d_none(*p4d)) {
-		void *p = vmemmap_alloc_block(PAGE_SIZE, node);
+		void *p = vmemmap_alloc_block_zero(PAGE_SIZE, node);
 		if (!p)
 			return NULL;
 		p4d_populate(&init_mm, p4d, p);
@@ -219,7 +218,7 @@ pgd_t * __meminit vmemmap_pgd_populate(unsigned long addr, int node)
 {
 	pgd_t *pgd = pgd_offset_k(addr);
 	if (pgd_none(*pgd)) {
-		void *p = vmemmap_alloc_block(PAGE_SIZE, node);
+		void *p = vmemmap_alloc_block_zero(PAGE_SIZE, node);
 		if (!p)
 			return NULL;
 		pgd_populate(&init_mm, pgd, p);
diff --git a/mm/sparse.c b/mm/sparse.c
index 83b3bf6461af..d22f51bb7c79 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -437,9 +437,9 @@ void __init sparse_mem_maps_populate_node(struct page **map_map,
 	}
 
 	size = PAGE_ALIGN(size);
-	map = memblock_virt_alloc_try_nid(size * map_count,
-					  PAGE_SIZE, __pa(MAX_DMA_ADDRESS),
-					  BOOTMEM_ALLOC_ACCESSIBLE, nodeid);
+	map = memblock_virt_alloc_try_nid_raw(size * map_count,
+					      PAGE_SIZE, __pa(MAX_DMA_ADDRESS),
+					      BOOTMEM_ALLOC_ACCESSIBLE, nodeid);
 	if (map) {
 		for (pnum = pnum_begin; pnum < pnum_end; pnum++) {
 			if (!present_section_nr(pnum))
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v12 10/11] sparc64: optimized struct page zeroing
  2017-10-13 17:32 [PATCH v12 00/11] complete deferred page initialization Pavel Tatashin
                   ` (8 preceding siblings ...)
  2017-10-13 17:32 ` [PATCH v12 09/11] mm: stop zeroing memory during allocation in vmemmap Pavel Tatashin
@ 2017-10-13 17:32 ` Pavel Tatashin
  2017-10-13 17:32 ` [PATCH v12 11/11] arm64: kasan: Avoid using vmemmap_populate to initialise shadow Pavel Tatashin
  2017-10-13 18:23 ` [PATCH v12 00/11] complete deferred page initialization Bob Picco
  11 siblings, 0 replies; 28+ messages in thread
From: Pavel Tatashin @ 2017-10-13 17:32 UTC (permalink / raw)
  To: linux-kernel, sparclinux, linux-mm, linuxppc-dev, linux-s390,
	linux-arm-kernel, x86, kasan-dev, borntraeger, heiko.carstens,
	davem, willy, mhocko, ard.biesheuvel, mark.rutland, will.deacon,
	catalin.marinas, sam, mgorman, akpm, steven.sistare,
	daniel.m.jordan, bob.picco

Add an optimized mm_zero_struct_page(), so struct pages are zeroed without
calling memset(). We do eight to ten regular stores, depending on the size
of struct page; the compiler optimizes out the switch() conditions and
leaves only the stores.
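
For illustration only (this is not part of the patch below), with an 80-byte
struct page the macro effectively expands to ten 8-byte stores, roughly:

	unsigned long *_pp = (unsigned long *)page;

	_pp[9] = 0;
	_pp[8] = 0;
	_pp[7] = 0;
	_pp[6] = 0;
	_pp[5] = 0;
	_pp[4] = 0;
	_pp[3] = 0;
	_pp[2] = 0;
	_pp[1] = 0;
	_pp[0] = 0;

With a 64-byte struct page only the eight stores of the default case remain.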

SPARC-M6 with 15T of memory, single thread performance:

                               BASE            FIX  OPTIMIZED_FIX
        bootmem_init   28.440467985s   2.305674818s   2.305161615s
free_area_init_nodes  202.845901673s 225.343084508s 172.556506560s
                      --------------------------------------------
Total                 231.286369658s 227.648759326s 174.861668175s

BASE:  current linux
FIX:   This patch series without "optimized struct page zeroing"
OPTIMIZED_FIX: This patch series including the current patch.

bootmem_init() is where memory for struct pages is zeroed during
allocation. Note that about two seconds of this function's time is fixed
overhead: it does not increase as memory increases.

Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
Reviewed-by: Steven Sistare <steven.sistare@oracle.com>
Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Reviewed-by: Bob Picco <bob.picco@oracle.com>
Acked-by: David S. Miller <davem@davemloft.net>
---
 arch/sparc/include/asm/pgtable_64.h | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)

diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
index 4fefe3762083..8ed478abc630 100644
--- a/arch/sparc/include/asm/pgtable_64.h
+++ b/arch/sparc/include/asm/pgtable_64.h
@@ -230,6 +230,36 @@ extern unsigned long _PAGE_ALL_SZ_BITS;
 extern struct page *mem_map_zero;
 #define ZERO_PAGE(vaddr)	(mem_map_zero)
 
+/* This macro must be updated when the size of struct page grows above 80
+ * or shrinks below 64.
+ * The idea is that the compiler optimizes out the switch() statement and
+ * only leaves clrx instructions
+ */
+#define	mm_zero_struct_page(pp) do {					\
+	unsigned long *_pp = (void *)(pp);				\
+									\
+	 /* Check that struct page is either 64, 72, or 80 bytes */	\
+	BUILD_BUG_ON(sizeof(struct page) & 7);				\
+	BUILD_BUG_ON(sizeof(struct page) < 64);				\
+	BUILD_BUG_ON(sizeof(struct page) > 80);				\
+									\
+	switch (sizeof(struct page)) {					\
+	case 80:							\
+		_pp[9] = 0;	/* fallthrough */			\
+	case 72:							\
+		_pp[8] = 0;	/* fallthrough */			\
+	default:							\
+		_pp[7] = 0;						\
+		_pp[6] = 0;						\
+		_pp[5] = 0;						\
+		_pp[4] = 0;						\
+		_pp[3] = 0;						\
+		_pp[2] = 0;						\
+		_pp[1] = 0;						\
+		_pp[0] = 0;						\
+	}								\
+} while (0)
+
 /* PFNs are real physical page numbers.  However, mem_map only begins to record
  * per-page information starting at pfn_base.  This is to handle systems where
  * the first physical page in the machine is at some huge physical address,
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v12 11/11] arm64: kasan: Avoid using vmemmap_populate to initialise shadow
  2017-10-13 17:32 [PATCH v12 00/11] complete deferred page initialization Pavel Tatashin
                   ` (9 preceding siblings ...)
  2017-10-13 17:32 ` [PATCH v12 10/11] sparc64: optimized struct page zeroing Pavel Tatashin
@ 2017-10-13 17:32 ` Pavel Tatashin
  2017-10-13 18:23 ` [PATCH v12 00/11] complete deferred page initialization Bob Picco
  11 siblings, 0 replies; 28+ messages in thread
From: Pavel Tatashin @ 2017-10-13 17:32 UTC (permalink / raw)
  To: linux-kernel, sparclinux, linux-mm, linuxppc-dev, linux-s390,
	linux-arm-kernel, x86, kasan-dev, borntraeger, heiko.carstens,
	davem, willy, mhocko, ard.biesheuvel, mark.rutland, will.deacon,
	catalin.marinas, sam, mgorman, akpm, steven.sistare,
	daniel.m.jordan, bob.picco

From: Will Deacon <will.deacon@arm.com>

The kasan shadow is currently mapped using vmemmap_populate since that
provides a semi-convenient way to map pages into swapper. However, since
that no longer zeroes the mapped pages, it is not suitable for kasan,
which requires that the shadow is zeroed in order to avoid false
positives.

This patch removes our reliance on vmemmap_populate and reuses the
existing kasan page table code, which is already required for creating
the early shadow.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
---
 arch/arm64/Kconfig         |   2 +-
 arch/arm64/mm/kasan_init.c | 180 +++++++++++++++++++--------------------------
 2 files changed, 76 insertions(+), 106 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 0df64a6a56d4..888580b9036e 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -68,7 +68,7 @@ config ARM64
 	select HAVE_ARCH_BITREVERSE
 	select HAVE_ARCH_HUGE_VMAP
 	select HAVE_ARCH_JUMP_LABEL
-	select HAVE_ARCH_KASAN if SPARSEMEM_VMEMMAP && !(ARM64_16K_PAGES && ARM64_VA_BITS_48)
+	select HAVE_ARCH_KASAN if !(ARM64_16K_PAGES && ARM64_VA_BITS_48)
 	select HAVE_ARCH_KGDB
 	select HAVE_ARCH_MMAP_RND_BITS
 	select HAVE_ARCH_MMAP_RND_COMPAT_BITS if COMPAT
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index cb4af2951c90..acba49fb5aac 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -11,6 +11,7 @@
  */
 
 #define pr_fmt(fmt) "kasan: " fmt
+#include <linux/bootmem.h>
 #include <linux/kasan.h>
 #include <linux/kernel.h>
 #include <linux/sched/task.h>
@@ -28,66 +29,6 @@
 
 static pgd_t tmp_pg_dir[PTRS_PER_PGD] __initdata __aligned(PGD_SIZE);
 
-/* Creates mappings for kasan during early boot. The mapped memory is zeroed */
-static int __meminit kasan_map_populate(unsigned long start, unsigned long end,
-					int node)
-{
-	unsigned long addr, pfn, next;
-	unsigned long long size;
-	pgd_t *pgd;
-	pud_t *pud;
-	pmd_t *pmd;
-	pte_t *pte;
-	int ret;
-
-	ret = vmemmap_populate(start, end, node);
-	/*
-	 * We might have partially populated memory, so check for no entries,
-	 * and zero only those that actually exist.
-	 */
-	for (addr = start; addr < end; addr = next) {
-		pgd = pgd_offset_k(addr);
-		if (pgd_none(*pgd)) {
-			next = pgd_addr_end(addr, end);
-			continue;
-		}
-
-		pud = pud_offset(pgd, addr);
-		if (pud_none(*pud)) {
-			next = pud_addr_end(addr, end);
-			continue;
-		}
-		if (pud_sect(*pud)) {
-			/* This is PUD size page */
-			next = pud_addr_end(addr, end);
-			size = PUD_SIZE;
-			pfn = pud_pfn(*pud);
-		} else {
-			pmd = pmd_offset(pud, addr);
-			if (pmd_none(*pmd)) {
-				next = pmd_addr_end(addr, end);
-				continue;
-			}
-			if (pmd_sect(*pmd)) {
-				/* This is PMD size page */
-				next = pmd_addr_end(addr, end);
-				size = PMD_SIZE;
-				pfn = pmd_pfn(*pmd);
-			} else {
-				pte = pte_offset_kernel(pmd, addr);
-				next = addr + PAGE_SIZE;
-				if (pte_none(*pte))
-					continue;
-				/* This is base size page */
-				size = PAGE_SIZE;
-				pfn = pte_pfn(*pte);
-			}
-		}
-		memset(phys_to_virt(PFN_PHYS(pfn)), 0, size);
-	}
-	return ret;
-}
-
 /*
  * The p*d_populate functions call virt_to_phys implicitly so they can't be used
  * directly on kernel symbols (bm_p*d). All the early functions are called too
@@ -95,77 +36,117 @@ static int __meminit kasan_map_populate(unsigned long start, unsigned long end,
  * with the physical address from __pa_symbol.
  */
 
-static void __init kasan_early_pte_populate(pmd_t *pmd, unsigned long addr,
-					unsigned long end)
+static phys_addr_t __init kasan_alloc_zeroed_page(int node)
 {
-	pte_t *pte;
-	unsigned long next;
+	void *p = memblock_virt_alloc_try_nid(PAGE_SIZE, PAGE_SIZE,
+					      __pa(MAX_DMA_ADDRESS),
+					      MEMBLOCK_ALLOC_ACCESSIBLE, node);
+	return __pa(p);
+}
 
-	if (pmd_none(*pmd))
-		__pmd_populate(pmd, __pa_symbol(kasan_zero_pte), PMD_TYPE_TABLE);
+static pte_t *__init kasan_pte_offset(pmd_t *pmd, unsigned long addr, int node,
+				      bool early)
+{
+	if (pmd_none(*pmd)) {
+		phys_addr_t pte_phys = early ? __pa_symbol(kasan_zero_pte)
+					     : kasan_alloc_zeroed_page(node);
+		__pmd_populate(pmd, pte_phys, PMD_TYPE_TABLE);
+	}
+
+	return early ? pte_offset_kimg(pmd, addr)
+		     : pte_offset_kernel(pmd, addr);
+}
+
+static pmd_t *__init kasan_pmd_offset(pud_t *pud, unsigned long addr, int node,
+				      bool early)
+{
+	if (pud_none(*pud)) {
+		phys_addr_t pmd_phys = early ? __pa_symbol(kasan_zero_pmd)
+					     : kasan_alloc_zeroed_page(node);
+		__pud_populate(pud, pmd_phys, PMD_TYPE_TABLE);
+	}
+
+	return early ? pmd_offset_kimg(pud, addr) : pmd_offset(pud, addr);
+}
+
+static pud_t *__init kasan_pud_offset(pgd_t *pgd, unsigned long addr, int node,
+				      bool early)
+{
+	if (pgd_none(*pgd)) {
+		phys_addr_t pud_phys = early ? __pa_symbol(kasan_zero_pud)
+					     : kasan_alloc_zeroed_page(node);
+		__pgd_populate(pgd, pud_phys, PMD_TYPE_TABLE);
+	}
+
+	return early ? pud_offset_kimg(pgd, addr) : pud_offset(pgd, addr);
+}
+
+static void __init kasan_pte_populate(pmd_t *pmd, unsigned long addr,
+				      unsigned long end, int node, bool early)
+{
+	unsigned long next;
+	pte_t *pte = kasan_pte_offset(pmd, addr, node, early);
 
-	pte = pte_offset_kimg(pmd, addr);
 	do {
+		phys_addr_t page_phys = early ? __pa_symbol(kasan_zero_page)
+					      : kasan_alloc_zeroed_page(node);
 		next = addr + PAGE_SIZE;
-		set_pte(pte, pfn_pte(sym_to_pfn(kasan_zero_page),
-					PAGE_KERNEL));
+		set_pte(pte, pfn_pte(__phys_to_pfn(page_phys), PAGE_KERNEL));
 	} while (pte++, addr = next, addr != end && pte_none(*pte));
 }
 
-static void __init kasan_early_pmd_populate(pud_t *pud,
-					unsigned long addr,
-					unsigned long end)
+static void __init kasan_pmd_populate(pud_t *pud, unsigned long addr,
+				      unsigned long end, int node, bool early)
 {
-	pmd_t *pmd;
 	unsigned long next;
+	pmd_t *pmd = kasan_pmd_offset(pud, addr, node, early);
 
-	if (pud_none(*pud))
-		__pud_populate(pud, __pa_symbol(kasan_zero_pmd), PMD_TYPE_TABLE);
-
-	pmd = pmd_offset_kimg(pud, addr);
 	do {
 		next = pmd_addr_end(addr, end);
-		kasan_early_pte_populate(pmd, addr, next);
+		kasan_pte_populate(pmd, addr, next, node, early);
 	} while (pmd++, addr = next, addr != end && pmd_none(*pmd));
 }
 
-static void __init kasan_early_pud_populate(pgd_t *pgd,
-					unsigned long addr,
-					unsigned long end)
+static void __init kasan_pud_populate(pgd_t *pgd, unsigned long addr,
+				      unsigned long end, int node, bool early)
 {
-	pud_t *pud;
 	unsigned long next;
+	pud_t *pud = kasan_pud_offset(pgd, addr, node, early);
 
-	if (pgd_none(*pgd))
-		__pgd_populate(pgd, __pa_symbol(kasan_zero_pud), PUD_TYPE_TABLE);
-
-	pud = pud_offset_kimg(pgd, addr);
 	do {
 		next = pud_addr_end(addr, end);
-		kasan_early_pmd_populate(pud, addr, next);
+		kasan_pmd_populate(pud, addr, next, node, early);
 	} while (pud++, addr = next, addr != end && pud_none(*pud));
 }
 
-static void __init kasan_map_early_shadow(void)
+static void __init kasan_pgd_populate(unsigned long addr, unsigned long end,
+				      int node, bool early)
 {
-	unsigned long addr = KASAN_SHADOW_START;
-	unsigned long end = KASAN_SHADOW_END;
 	unsigned long next;
 	pgd_t *pgd;
 
 	pgd = pgd_offset_k(addr);
 	do {
 		next = pgd_addr_end(addr, end);
-		kasan_early_pud_populate(pgd, addr, next);
+		kasan_pud_populate(pgd, addr, next, node, early);
 	} while (pgd++, addr = next, addr != end);
 }
 
+/* The early shadow maps everything to a single page of zeroes */
 asmlinkage void __init kasan_early_init(void)
 {
 	BUILD_BUG_ON(KASAN_SHADOW_OFFSET != KASAN_SHADOW_END - (1UL << 61));
 	BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_START, PGDIR_SIZE));
 	BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_END, PGDIR_SIZE));
-	kasan_map_early_shadow();
+	kasan_pgd_populate(KASAN_SHADOW_START, KASAN_SHADOW_END, NUMA_NO_NODE,
+			   true);
+}
+
+/* Set up full kasan mappings, ensuring that the mapped pages are zeroed */
+static void __init kasan_map_populate(unsigned long start, unsigned long end,
+				      int node)
+{
+	kasan_pgd_populate(start & PAGE_MASK, PAGE_ALIGN(end), node, false);
 }
 
 /*
@@ -202,8 +183,8 @@ void __init kasan_init(void)
 	struct memblock_region *reg;
 	int i;
 
-	kimg_shadow_start = (u64)kasan_mem_to_shadow(_text);
-	kimg_shadow_end = (u64)kasan_mem_to_shadow(_end);
+	kimg_shadow_start = (u64)kasan_mem_to_shadow(_text) & PAGE_MASK;
+	kimg_shadow_end = PAGE_ALIGN((u64)kasan_mem_to_shadow(_end));
 
 	mod_shadow_start = (u64)kasan_mem_to_shadow((void *)MODULES_VADDR);
 	mod_shadow_end = (u64)kasan_mem_to_shadow((void *)MODULES_END);
@@ -224,17 +205,6 @@ void __init kasan_init(void)
 	kasan_map_populate(kimg_shadow_start, kimg_shadow_end,
 			   pfn_to_nid(virt_to_pfn(lm_alias(_text))));
 
-	/*
-	 * kasan_map_populate() has populated the shadow region that covers the
-	 * kernel image with SWAPPER_BLOCK_SIZE mappings, so we have to round
-	 * the start and end addresses to SWAPPER_BLOCK_SIZE as well, to prevent
-	 * kasan_populate_zero_shadow() from replacing the page table entries
-	 * (PMD or PTE) at the edges of the shadow region for the kernel
-	 * image.
-	 */
-	kimg_shadow_start = round_down(kimg_shadow_start, SWAPPER_BLOCK_SIZE);
-	kimg_shadow_end = round_up(kimg_shadow_end, SWAPPER_BLOCK_SIZE);
-
 	kasan_populate_zero_shadow((void *)KASAN_SHADOW_START,
 				   (void *)mod_shadow_start);
 	kasan_populate_zero_shadow((void *)kimg_shadow_end,
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* Re: [PATCH v12 00/11] complete deferred page initialization
  2017-10-13 17:32 [PATCH v12 00/11] complete deferred page initialization Pavel Tatashin
                   ` (10 preceding siblings ...)
  2017-10-13 17:32 ` [PATCH v12 11/11] arm64: kasan: Avoid using vmemmap_populate to initialise shadow Pavel Tatashin
@ 2017-10-13 18:23 ` Bob Picco
  11 siblings, 0 replies; 28+ messages in thread
From: Bob Picco @ 2017-10-13 18:23 UTC (permalink / raw)
  To: Pavel Tatashin
  Cc: linux-kernel, sparclinux, linux-mm, linuxppc-dev, linux-s390,
	linux-arm-kernel, x86, kasan-dev, borntraeger, heiko.carstens,
	davem, willy, mhocko, ard.biesheuvel, mark.rutland, will.deacon,
	catalin.marinas, sam, mgorman, akpm, steven.sistare,
	daniel.m.jordan, bob.picco

Pavel Tatashin wrote:	[Fri Oct 13 2017, 01:32:03PM EDT]
> Changelog:
> v12 - v11
> - Improved comments for mm: zero reserved and unavailable struct pages
> - Added back patch: mm: deferred_init_memmap improvements
> - Added patch from Will Deacon: arm64: kasan: Avoid using
>   vmemmap_populate to initialise shadow
[...]
> Pavel Tatashin (10):
>   mm: deferred_init_memmap improvements
>   x86/mm: setting fields in deferred pages
>   sparc64/mm: setting fields in deferred pages
>   sparc64: simplify vmemmap_populate
>   mm: defining memblock_virt_alloc_try_nid_raw
>   mm: zero reserved and unavailable struct pages
>   x86/kasan: add and use kasan_map_populate()
>   arm64/kasan: add and use kasan_map_populate()
>   mm: stop zeroing memory during allocation in vmemmap
>   sparc64: optimized struct page zeroing
> 
> Will Deacon (1):
>   arm64: kasan: Avoid using vmemmap_populate to initialise shadow
> 
>  arch/arm64/Kconfig                  |   2 +-
>  arch/arm64/mm/kasan_init.c          | 130 +++++++++++++--------
>  arch/sparc/include/asm/pgtable_64.h |  30 +++++
>  arch/sparc/mm/init_64.c             |  32 +++---
>  arch/x86/mm/init_64.c               |  10 +-
>  arch/x86/mm/kasan_init_64.c         |  75 +++++++++++-
>  include/linux/bootmem.h             |  27 +++++
>  include/linux/memblock.h            |  16 +++
>  include/linux/mm.h                  |  26 +++++
>  mm/memblock.c                       |  60 ++++++++--
>  mm/page_alloc.c                     | 224 +++++++++++++++++++++---------------
>  mm/sparse-vmemmap.c                 |  15 ++-
>  mm/sparse.c                         |   6 +-
>  13 files changed, 469 insertions(+), 184 deletions(-)
> 
> -- 
> 2.14.2
> 
Boot tested on ThunderX2 VM.
Tested-by: Bob Picco <bob.picco@oracle.com>

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v12 01/11] mm: deferred_init_memmap improvements
  2017-10-13 17:32 ` [PATCH v12 01/11] mm: deferred_init_memmap improvements Pavel Tatashin
@ 2017-10-17 11:40   ` Michal Hocko
  2017-10-17 15:13     ` Pavel Tatashin
  0 siblings, 1 reply; 28+ messages in thread
From: Michal Hocko @ 2017-10-17 11:40 UTC (permalink / raw)
  To: Pavel Tatashin
  Cc: linux-kernel, sparclinux, linux-mm, linuxppc-dev, linux-s390,
	linux-arm-kernel, x86, kasan-dev, borntraeger, heiko.carstens,
	davem, willy, ard.biesheuvel, mark.rutland, will.deacon,
	catalin.marinas, sam, mgorman, akpm, steven.sistare,
	daniel.m.jordan, bob.picco

On Fri 13-10-17 13:32:04, Pavel Tatashin wrote:
> deferred_init_memmap() is called when struct pages are initialized later
> in boot by slave CPUs. This patch simplifies and optimizes this function,
> and also fixes a couple of issues (described below).
> 
> The main change is that now we are iterating through free memblock areas
> instead of all configured memory. Thus, we do not have to check if the
> struct page has already been initialized.
> 
> =====
> In deferred_init_memmap() where all deferred struct pages are initialized
> we have a check like this:
> 
> if (page->flags) {
> 	VM_BUG_ON(page_zone(page) != zone);
> 	goto free_range;
> }
> 
> This way we are checking if the current deferred page has already been
> initialized. It works because memory for struct pages has been zeroed, and
> the only way flags can be non-zero is if the page went through
> __init_single_page() before.  But once we change the current behavior and
> no longer zero the memory in the memblock allocator, we cannot trust
> anything inside a struct page until it is initialized. This patch fixes this.
> 
> The deferred_init_memmap() is re-written to loop through only free memory
> ranges provided by memblock.
> 
> Note, this first issue is relevant only when the following change is
> merged:
> 
> =====
> This patch fixes another existing issue on systems that have holes in
> zones, i.e. when CONFIG_HOLES_IN_ZONE is defined.
> 
> In for_each_mem_pfn_range() we have code like this:
> 
> if (!pfn_valid_within(pfn))
> 	goto free_range;
> 
> Note: 'page' is not set to NULL and is not incremented but 'pfn' advances.
> This means that if deferred struct pages are enabled on systems with this
> kind of hole, Linux would get memory corruption. I have fixed this issue by
> defining a new macro that performs all the necessary operations when we
> free the current set of pages.

This really begs to be split into two patches... I will not insist though. I
also suspect the code can be further simplified, but again that is no reason
to block this from going in.
 
> Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
> Reviewed-by: Steven Sistare <steven.sistare@oracle.com>
> Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>
> Reviewed-by: Bob Picco <bob.picco@oracle.com>

I do not see any obvious issues in the patch

Acked-by: Michal Hocko <mhocko@suse.com>

> ---
>  mm/page_alloc.c | 168 ++++++++++++++++++++++++++++----------------------------
>  1 file changed, 85 insertions(+), 83 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 77e4d3c5c57b..cdbd14829fd3 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1410,14 +1410,17 @@ void clear_zone_contiguous(struct zone *zone)
>  }
>  
>  #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
> -static void __init deferred_free_range(struct page *page,
> -					unsigned long pfn, int nr_pages)
> +static void __init deferred_free_range(unsigned long pfn,
> +				       unsigned long nr_pages)
>  {
> -	int i;
> +	struct page *page;
> +	unsigned long i;
>  
> -	if (!page)
> +	if (!nr_pages)
>  		return;
>  
> +	page = pfn_to_page(pfn);
> +
>  	/* Free a large naturally-aligned chunk if possible */
>  	if (nr_pages == pageblock_nr_pages &&
>  	    (pfn & (pageblock_nr_pages - 1)) == 0) {
> @@ -1443,19 +1446,89 @@ static inline void __init pgdat_init_report_one_done(void)
>  		complete(&pgdat_init_all_done_comp);
>  }
>  
> +/*
> + * Helper for deferred_init_range, free the given range, reset the counters, and
> + * return number of pages freed.
> + */
> +static inline unsigned long __def_free(unsigned long *nr_free,
> +				       unsigned long *free_base_pfn,
> +				       struct page **page)
> +{
> +	unsigned long nr = *nr_free;
> +
> +	deferred_free_range(*free_base_pfn, nr);
> +	*free_base_pfn = 0;
> +	*nr_free = 0;
> +	*page = NULL;
> +
> +	return nr;
> +}
> +
> +static unsigned long deferred_init_range(int nid, int zid, unsigned long pfn,
> +					 unsigned long end_pfn)
> +{
> +	struct mminit_pfnnid_cache nid_init_state = { };
> +	unsigned long nr_pgmask = pageblock_nr_pages - 1;
> +	unsigned long free_base_pfn = 0;
> +	unsigned long nr_pages = 0;
> +	unsigned long nr_free = 0;
> +	struct page *page = NULL;
> +
> +	for (; pfn < end_pfn; pfn++) {
> +		/*
> +		 * First we check if pfn is valid on architectures where it is
> +		 * possible to have holes within pageblock_nr_pages. On systems
> +		 * where it is not possible, this function is optimized out.
> +		 *
> +		 * Then, we check if a current large page is valid by only
> +		 * checking the validity of the head pfn.
> +		 *
> +		 * meminit_pfn_in_nid is checked on systems where pfns can
> +		 * interleave within a node: a pfn is between start and end
> +		 * of a node, but does not belong to this memory node.
> +		 *
> +		 * Finally, we minimize pfn page lookups and scheduler checks by
> +		 * performing it only once every pageblock_nr_pages.
> +		 */
> +		if (!pfn_valid_within(pfn)) {
> +			nr_pages += __def_free(&nr_free, &free_base_pfn, &page);
> +		} else if (!(pfn & nr_pgmask) && !pfn_valid(pfn)) {
> +			nr_pages += __def_free(&nr_free, &free_base_pfn, &page);
> +		} else if (!meminit_pfn_in_nid(pfn, nid, &nid_init_state)) {
> +			nr_pages += __def_free(&nr_free, &free_base_pfn, &page);
> +		} else if (page && (pfn & nr_pgmask)) {
> +			page++;
> +			__init_single_page(page, pfn, zid, nid);
> +			nr_free++;
> +		} else {
> +			nr_pages += __def_free(&nr_free, &free_base_pfn, &page);
> +			page = pfn_to_page(pfn);
> +			__init_single_page(page, pfn, zid, nid);
> +			free_base_pfn = pfn;
> +			nr_free = 1;
> +			cond_resched();
> +		}
> +	}
> +	/* Free the last block of pages to allocator */
> +	nr_pages += __def_free(&nr_free, &free_base_pfn, &page);
> +
> +	return nr_pages;
> +}
> +
>  /* Initialise remaining memory on a node */
>  static int __init deferred_init_memmap(void *data)
>  {
>  	pg_data_t *pgdat = data;
>  	int nid = pgdat->node_id;
> -	struct mminit_pfnnid_cache nid_init_state = { };
>  	unsigned long start = jiffies;
>  	unsigned long nr_pages = 0;
> -	unsigned long walk_start, walk_end;
> -	int i, zid;
> +	unsigned long spfn, epfn;
> +	phys_addr_t spa, epa;
> +	int zid;
>  	struct zone *zone;
>  	unsigned long first_init_pfn = pgdat->first_deferred_pfn;
>  	const struct cpumask *cpumask = cpumask_of_node(pgdat->node_id);
> +	u64 i;
>  
>  	if (first_init_pfn == ULONG_MAX) {
>  		pgdat_init_report_one_done();
> @@ -1477,83 +1550,12 @@ static int __init deferred_init_memmap(void *data)
>  		if (first_init_pfn < zone_end_pfn(zone))
>  			break;
>  	}
> +	first_init_pfn = max(zone->zone_start_pfn, first_init_pfn);
>  
> -	for_each_mem_pfn_range(i, nid, &walk_start, &walk_end, NULL) {
> -		unsigned long pfn, end_pfn;
> -		struct page *page = NULL;
> -		struct page *free_base_page = NULL;
> -		unsigned long free_base_pfn = 0;
> -		int nr_to_free = 0;
> -
> -		end_pfn = min(walk_end, zone_end_pfn(zone));
> -		pfn = first_init_pfn;
> -		if (pfn < walk_start)
> -			pfn = walk_start;
> -		if (pfn < zone->zone_start_pfn)
> -			pfn = zone->zone_start_pfn;
> -
> -		for (; pfn < end_pfn; pfn++) {
> -			if (!pfn_valid_within(pfn))
> -				goto free_range;
> -
> -			/*
> -			 * Ensure pfn_valid is checked every
> -			 * pageblock_nr_pages for memory holes
> -			 */
> -			if ((pfn & (pageblock_nr_pages - 1)) == 0) {
> -				if (!pfn_valid(pfn)) {
> -					page = NULL;
> -					goto free_range;
> -				}
> -			}
> -
> -			if (!meminit_pfn_in_nid(pfn, nid, &nid_init_state)) {
> -				page = NULL;
> -				goto free_range;
> -			}
> -
> -			/* Minimise pfn page lookups and scheduler checks */
> -			if (page && (pfn & (pageblock_nr_pages - 1)) != 0) {
> -				page++;
> -			} else {
> -				nr_pages += nr_to_free;
> -				deferred_free_range(free_base_page,
> -						free_base_pfn, nr_to_free);
> -				free_base_page = NULL;
> -				free_base_pfn = nr_to_free = 0;
> -
> -				page = pfn_to_page(pfn);
> -				cond_resched();
> -			}
> -
> -			if (page->flags) {
> -				VM_BUG_ON(page_zone(page) != zone);
> -				goto free_range;
> -			}
> -
> -			__init_single_page(page, pfn, zid, nid);
> -			if (!free_base_page) {
> -				free_base_page = page;
> -				free_base_pfn = pfn;
> -				nr_to_free = 0;
> -			}
> -			nr_to_free++;
> -
> -			/* Where possible, batch up pages for a single free */
> -			continue;
> -free_range:
> -			/* Free the current block of pages to allocator */
> -			nr_pages += nr_to_free;
> -			deferred_free_range(free_base_page, free_base_pfn,
> -								nr_to_free);
> -			free_base_page = NULL;
> -			free_base_pfn = nr_to_free = 0;
> -		}
> -		/* Free the last block of pages to allocator */
> -		nr_pages += nr_to_free;
> -		deferred_free_range(free_base_page, free_base_pfn, nr_to_free);
> -
> -		first_init_pfn = max(end_pfn, first_init_pfn);
> +	for_each_free_mem_range(i, nid, MEMBLOCK_NONE, &spa, &epa, NULL) {
> +		spfn = max_t(unsigned long, first_init_pfn, PFN_UP(spa));
> +		epfn = min_t(unsigned long, zone_end_pfn(zone), PFN_DOWN(epa));
> +		nr_pages += deferred_init_range(nid, zid, spfn, epfn);
>  	}
>  
>  	/* Sanity check that the next zone really is unpopulated */
> -- 
> 2.14.2

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v12 01/11] mm: deferred_init_memmap improvements
  2017-10-17 11:40   ` Michal Hocko
@ 2017-10-17 15:13     ` Pavel Tatashin
  0 siblings, 0 replies; 28+ messages in thread
From: Pavel Tatashin @ 2017-10-17 15:13 UTC (permalink / raw)
  To: Michal Hocko
  Cc: linux-kernel, sparclinux, linux-mm, linuxppc-dev, linux-s390,
	linux-arm-kernel, x86, kasan-dev, borntraeger, heiko.carstens,
	davem, willy, ard.biesheuvel, mark.rutland, will.deacon,
	catalin.marinas, sam, mgorman, akpm, steven.sistare,
	daniel.m.jordan, bob.picco

> This really begs to be split into two patches... I will not insist though. I
> also suspect the code can be further simplified, but again that is no reason
> to block this from going in.

Perhaps "page" can be avoided in deferred_init_range(), as pfn is 
converted to page in deferred_free_range, but I have not studied it.

>   
>> Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
>> Reviewed-by: Steven Sistare <steven.sistare@oracle.com>
>> Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>
>> Reviewed-by: Bob Picco <bob.picco@oracle.com>
> 
> I do not see any obvious issues in the patch
> 
> Acked-by: Michal Hocko <mhocko@suse.com>

Thank you very much!

Pavel


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v12 08/11] arm64/kasan: add and use kasan_map_populate()
  2017-10-13 17:32 ` [PATCH v12 08/11] arm64/kasan: " Pavel Tatashin
@ 2017-10-18 16:55   ` Andrey Ryabinin
  2017-10-18 17:03     ` Pavel Tatashin
  0 siblings, 1 reply; 28+ messages in thread
From: Andrey Ryabinin @ 2017-10-18 16:55 UTC (permalink / raw)
  To: Pavel Tatashin, linux-kernel, sparclinux, linux-mm, linuxppc-dev,
	linux-s390, linux-arm-kernel, x86, kasan-dev, borntraeger,
	heiko.carstens, davem, willy, mhocko, ard.biesheuvel,
	mark.rutland, will.deacon, catalin.marinas, sam, mgorman, akpm,
	steven.sistare, daniel.m.jordan, bob.picco

On 10/13/2017 08:32 PM, Pavel Tatashin wrote:
> During early boot, kasan uses vmemmap_populate() to establish its shadow
> memory. But that interface is intended for struct page use.
> 
> Because of the current project, vmemmap won't be zeroed during allocation,
> but kasan expects that memory to be zeroed. We are adding a new
> kasan_map_populate() function to resolve this difference.
> 
> Therefore, we must use a new interface to allocate and map kasan shadow
> memory, one that also zeroes the memory for us.
> 

What's the point of this patch? We have "arm64: kasan: Avoid using vmemmap_populate to initialise shadow"
which does the right thing and basically reverts this patch.
As an intermediate step, this patch looks absolutely useless. We can just avoid vmemmap_populate() right away.

> Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
> ---
>  arch/arm64/mm/kasan_init.c | 72 ++++++++++++++++++++++++++++++++++++++++++----
>  1 file changed, 66 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
> index 81f03959a4ab..cb4af2951c90 100644
> --- a/arch/arm64/mm/kasan_init.c
> +++ b/arch/arm64/mm/kasan_init.c
> @@ -28,6 +28,66 @@
>  

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v12 08/11] arm64/kasan: add and use kasan_map_populate()
  2017-10-18 16:55   ` Andrey Ryabinin
@ 2017-10-18 17:03     ` Pavel Tatashin
  2017-10-18 17:06       ` Will Deacon
  0 siblings, 1 reply; 28+ messages in thread
From: Pavel Tatashin @ 2017-10-18 17:03 UTC (permalink / raw)
  To: Andrey Ryabinin, linux-kernel, sparclinux, linux-mm,
	linuxppc-dev, linux-s390, linux-arm-kernel, x86, kasan-dev,
	borntraeger, heiko.carstens, davem, willy, mhocko,
	ard.biesheuvel, mark.rutland, will.deacon, catalin.marinas, sam,
	mgorman, akpm, steven.sistare, daniel.m.jordan, bob.picco

Hi Andrey,

I asked Will about it, and he preferred to have this patch added to
the end of my series instead of replacing "arm64/kasan: add and use
kasan_map_populate()".

In addition, Will's patch stops using large pages for kasan memory, and
thus might introduce a regression, in which case it is easier to revert
just that patch instead of the whole series. The regression is unlikely
to be detectable, because kasan by itself already makes the system quite
slow.

Pasha

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v12 08/11] arm64/kasan: add and use kasan_map_populate()
  2017-10-18 17:03     ` Pavel Tatashin
@ 2017-10-18 17:06       ` Will Deacon
  2017-10-18 17:08         ` Pavel Tatashin
  0 siblings, 1 reply; 28+ messages in thread
From: Will Deacon @ 2017-10-18 17:06 UTC (permalink / raw)
  To: Pavel Tatashin
  Cc: Andrey Ryabinin, linux-kernel, sparclinux, linux-mm,
	linuxppc-dev, linux-s390, linux-arm-kernel, x86, kasan-dev,
	borntraeger, heiko.carstens, davem, willy, mhocko,
	ard.biesheuvel, mark.rutland, catalin.marinas, sam, mgorman,
	akpm, steven.sistare, daniel.m.jordan, bob.picco

On Wed, Oct 18, 2017 at 01:03:10PM -0400, Pavel Tatashin wrote:
> I asked Will about it, and he preferred to have this patch added to the
> end of my series instead of replacing "arm64/kasan: add and use
> kasan_map_populate()".

As I said, I'm fine either way, I just didn't want to cause extra work
or rebasing:

http://lists.infradead.org/pipermail/linux-arm-kernel/2017-October/535703.html

> In addition, Will's patch stops using large pages for kasan memory, and thus
> might introduce a regression, in which case it is easier to revert just that
> patch instead of the whole series. The regression is unlikely to be
> detectable, because kasan by itself already makes the system quite slow.

If it causes problems, I'll just fix them. No need to revert.

Will

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v12 08/11] arm64/kasan: add and use kasan_map_populate()
  2017-10-18 17:06       ` Will Deacon
@ 2017-10-18 17:08         ` Pavel Tatashin
  2017-10-18 17:18           ` Andrey Ryabinin
  0 siblings, 1 reply; 28+ messages in thread
From: Pavel Tatashin @ 2017-10-18 17:08 UTC (permalink / raw)
  To: Will Deacon
  Cc: Andrey Ryabinin, linux-kernel, sparclinux, linux-mm,
	linuxppc-dev, linux-s390, linux-arm-kernel, x86, kasan-dev,
	borntraeger, heiko.carstens, davem, willy, mhocko,
	ard.biesheuvel, mark.rutland, catalin.marinas, sam, mgorman,
	akpm, steven.sistare, daniel.m.jordan, bob.picco

> 
> As I said, I'm fine either way, I just didn't want to cause extra work
> or rebasing:
> 
> http://lists.infradead.org/pipermail/linux-arm-kernel/2017-October/535703.html

Makes sense. I am also fine either way, I can submit a new patch merging 
together the two if needed.

Pavel

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v12 07/11] x86/kasan: add and use kasan_map_populate()
  2017-10-13 17:32 ` [PATCH v12 07/11] x86/kasan: add and use kasan_map_populate() Pavel Tatashin
@ 2017-10-18 17:11   ` Andrey Ryabinin
  2017-10-18 17:14     ` Pavel Tatashin
  0 siblings, 1 reply; 28+ messages in thread
From: Andrey Ryabinin @ 2017-10-18 17:11 UTC (permalink / raw)
  To: Pavel Tatashin, linux-kernel, sparclinux, linux-mm, linuxppc-dev,
	linux-s390, linux-arm-kernel, x86, kasan-dev, borntraeger,
	heiko.carstens, davem, willy, mhocko, ard.biesheuvel,
	mark.rutland, will.deacon, catalin.marinas, sam, mgorman, akpm,
	steven.sistare, daniel.m.jordan, bob.picco

On 10/13/2017 08:32 PM, Pavel Tatashin wrote:
> During early boot, kasan uses vmemmap_populate() to establish its shadow
> memory. But that interface is intended for struct page use.
> 
> Because of the current project, vmemmap won't be zeroed during allocation,
> but kasan expects that memory to be zeroed. We are adding a new
> kasan_map_populate() function to resolve this difference.
> 
> Therefore, we must use a new interface to allocate and map kasan shadow
> memory, one that also zeroes the memory for us.
> 

Nah, we should do the same as with arm64 here.
The patch below works for me. Could you please make sure that it works for you as well? Just in case.




From: Andrey Ryabinin <aryabinin@virtuozzo.com>
Subject: x86/mm/kasan: don't use vmemmap_populate() to initialize
 shadow

The kasan shadow is currently mapped using vmemmap_populate() since that
provides a semi-convenient way to map pages into init_top_pgt. However,
since that no longer zeroes the mapped pages, it is not suitable for kasan,
which requires zeroed shadow memory.

Add a kasan_populate_shadow() interface and use it instead of
vmemmap_populate(). Besides, this allows us to take advantage of gigantic pages
and use them to populate the shadow, which should save us some memory
wasted on page tables and reduce TLB pressure.
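
As a rough estimate (not measured as part of this patch): mapping 1G of
shadow with 4K pages needs 512 PTE pages plus a PMD page, roughly 2M of page
tables per gigabyte of shadow, all of which a single gigantic mapping avoids.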

Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
---
 arch/x86/Kconfig            |   2 +-
 arch/x86/mm/kasan_init_64.c | 141 ++++++++++++++++++++++++++++++++++++++++++--
 2 files changed, 136 insertions(+), 7 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 8153cf40e5fe..e3847e472bd7 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -108,7 +108,7 @@ config X86
 	select HAVE_ARCH_AUDITSYSCALL
 	select HAVE_ARCH_HUGE_VMAP		if X86_64 || X86_PAE
 	select HAVE_ARCH_JUMP_LABEL
-	select HAVE_ARCH_KASAN			if X86_64 && SPARSEMEM_VMEMMAP
+	select HAVE_ARCH_KASAN			if X86_64
 	select HAVE_ARCH_KGDB
 	select HAVE_ARCH_MMAP_RND_BITS		if MMU
 	select HAVE_ARCH_MMAP_RND_COMPAT_BITS	if MMU && COMPAT
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index bc84b73684b7..f8ea02571494 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -3,19 +3,148 @@
 #include <linux/bootmem.h>
 #include <linux/kasan.h>
 #include <linux/kdebug.h>
+#include <linux/memblock.h>
 #include <linux/mm.h>
 #include <linux/sched.h>
 #include <linux/sched/task.h>
 #include <linux/vmalloc.h>
 
 #include <asm/e820/types.h>
+#include <asm/pgalloc.h>
 #include <asm/tlbflush.h>
 #include <asm/sections.h>
 #include <asm/pgtable.h>
 
 extern struct range pfn_mapped[E820_MAX_ENTRIES];
 
-static int __init map_range(struct range *range)
+static __init void *early_alloc(size_t size, int nid)
+{
+	return memblock_virt_alloc_try_nid_nopanic(size, size,
+		__pa(MAX_DMA_ADDRESS), BOOTMEM_ALLOC_ACCESSIBLE, nid);
+}
+
+static void __init kasan_populate_pmd(pmd_t *pmd, unsigned long addr,
+				unsigned long end, int nid)
+{
+	pte_t *pte;
+
+	if (pmd_none(*pmd)) {
+		void *p;
+
+		if (boot_cpu_has(X86_FEATURE_PSE) &&
+		    ((end - addr) == PMD_SIZE) &&
+		    IS_ALIGNED(addr, PMD_SIZE)) {
+			p = early_alloc(PMD_SIZE, nid);
+			if (p && pmd_set_huge(pmd, __pa(p), PAGE_KERNEL))
+				return;
+			else if (p)
+				memblock_free(__pa(p), PMD_SIZE);
+		}
+
+		p = early_alloc(PAGE_SIZE, nid);
+		pmd_populate_kernel(&init_mm, pmd, p);
+	}
+
+	pte = pte_offset_kernel(pmd, addr);
+	do {
+		pte_t entry;
+		void *p;
+
+		if (!pte_none(*pte))
+			continue;
+
+		p = early_alloc(PAGE_SIZE, nid);
+		entry = pfn_pte(PFN_DOWN(__pa(p)), PAGE_KERNEL);
+		set_pte_at(&init_mm, addr, pte, entry);
+	} while (pte++, addr += PAGE_SIZE, addr != end);
+}
+
+
+static void __init kasan_populate_pud(pud_t *pud, unsigned long addr,
+				unsigned long end, int nid)
+{
+	pmd_t *pmd;
+	unsigned long next;
+
+	if (pud_none(*pud)) {
+		void *p;
+
+		if (boot_cpu_has(X86_FEATURE_GBPAGES) &&
+		    ((end - addr) == PUD_SIZE) &&
+		    IS_ALIGNED(addr, PUD_SIZE)) {
+			p = early_alloc(PUD_SIZE, nid);
+			if (p && pud_set_huge(pud, __pa(p), PAGE_KERNEL))
+				return;
+			else if (p)
+				memblock_free(__pa(p), PUD_SIZE);
+		}
+
+		p = early_alloc(PAGE_SIZE, nid);
+		pud_populate(&init_mm, pud, p);
+	}
+
+	pmd = pmd_offset(pud, addr);
+	do {
+		next = pmd_addr_end(addr, end);
+		if (!pmd_large(*pmd))
+			kasan_populate_pmd(pmd, addr, next, nid);
+	} while (pmd++, addr = next, addr != end);
+}
+
+static void __init kasan_populate_p4d(p4d_t *p4d, unsigned long addr,
+				unsigned long end, int nid)
+{
+	pud_t *pud;
+	unsigned long next;
+
+	if (p4d_none(*p4d)) {
+		void *p = early_alloc(PAGE_SIZE, nid);
+		p4d_populate(&init_mm, p4d, p);
+	}
+
+	pud = pud_offset(p4d, addr);
+	do {
+		next = pud_addr_end(addr, end);
+		if (!pud_large(*pud))
+			kasan_populate_pud(pud, addr, next, nid);
+	} while (pud++, addr = next, addr != end);
+}
+
+static void __init kasan_populate_pgd(pgd_t *pgd, unsigned long addr,
+				unsigned long end, int nid)
+{
+	void *p;
+	p4d_t *p4d;
+	unsigned long next;
+
+	if (pgd_none(*pgd)) {
+		p = early_alloc(PAGE_SIZE, nid);
+		pgd_populate(&init_mm, pgd, p);
+	}
+
+	p4d = p4d_offset(pgd, addr);
+	do {
+		next = p4d_addr_end(addr, end);
+		kasan_populate_p4d(p4d, addr, next, nid);
+	} while (p4d++, addr = next, addr != end);
+}
+
+static void __init kasan_populate_shadow(unsigned long addr, unsigned long end,
+					int nid)
+{
+	pgd_t *pgd;
+	unsigned long next;
+
+	addr = addr & PAGE_MASK;
+	end = round_up(end, PAGE_SIZE);
+	pgd = pgd_offset_k(addr);
+	do {
+		next = pgd_addr_end(addr, end);
+		kasan_populate_pgd(pgd, addr, next, nid);
+	} while (pgd++, addr = next, addr != end);
+}
+
+static void __init map_range(struct range *range)
 {
 	unsigned long start;
 	unsigned long end;
@@ -23,7 +152,7 @@ static int __init map_range(struct range *range)
 	start = (unsigned long)kasan_mem_to_shadow(pfn_to_kaddr(range->start));
 	end = (unsigned long)kasan_mem_to_shadow(pfn_to_kaddr(range->end));
 
-	return vmemmap_populate(start, end, NUMA_NO_NODE);
+	kasan_populate_shadow(start, end, early_pfn_to_nid(range->start));
 }
 
 static void __init clear_pgds(unsigned long start,
@@ -129,16 +258,16 @@ void __init kasan_init(void)
 		if (pfn_mapped[i].end == 0)
 			break;
 
-		if (map_range(&pfn_mapped[i]))
-			panic("kasan: unable to allocate shadow!");
+		map_range(&pfn_mapped[i]);
 	}
+
 	kasan_populate_zero_shadow(
 		kasan_mem_to_shadow((void *)PAGE_OFFSET + MAXMEM),
 		kasan_mem_to_shadow((void *)__START_KERNEL_map));
 
-	vmemmap_populate((unsigned long)kasan_mem_to_shadow(_stext),
+	kasan_populate_shadow((unsigned long)kasan_mem_to_shadow(_stext),
 			(unsigned long)kasan_mem_to_shadow(_end),
-			NUMA_NO_NODE);
+			early_pfn_to_nid(__pa(_stext)));
 
 	kasan_populate_zero_shadow(kasan_mem_to_shadow((void *)MODULES_END),
 			(void *)KASAN_SHADOW_END);
-- 
2.13.6

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* Re: [PATCH v12 07/11] x86/kasan: add and use kasan_map_populate()
  2017-10-18 17:11   ` Andrey Ryabinin
@ 2017-10-18 17:14     ` Pavel Tatashin
  2017-10-18 17:20       ` Andrey Ryabinin
  0 siblings, 1 reply; 28+ messages in thread
From: Pavel Tatashin @ 2017-10-18 17:14 UTC (permalink / raw)
  To: Andrey Ryabinin, linux-kernel, sparclinux, linux-mm,
	linuxppc-dev, linux-s390, linux-arm-kernel, x86, kasan-dev,
	borntraeger, heiko.carstens, davem, willy, mhocko,
	ard.biesheuvel, mark.rutland, will.deacon, catalin.marinas, sam,
	mgorman, akpm, steven.sistare, daniel.m.jordan, bob.picco

Thank you Andrey, I will test this patch. Should it go on top of, or
replace, the existing patch in the mm-tree? ARM and x86 should be handled
the same way: either both as follow-ups or both as replacements.

Pavel

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v12 08/11] arm64/kasan: add and use kasan_map_populate()
  2017-10-18 17:08         ` Pavel Tatashin
@ 2017-10-18 17:18           ` Andrey Ryabinin
  2017-10-18 17:23             ` Pavel Tatashin
  0 siblings, 1 reply; 28+ messages in thread
From: Andrey Ryabinin @ 2017-10-18 17:18 UTC (permalink / raw)
  To: Pavel Tatashin, Will Deacon
  Cc: linux-kernel, sparclinux, linux-mm, linuxppc-dev, linux-s390,
	linux-arm-kernel, x86, kasan-dev, borntraeger, heiko.carstens,
	davem, willy, mhocko, ard.biesheuvel, mark.rutland,
	catalin.marinas, sam, mgorman, akpm, steven.sistare,
	daniel.m.jordan, bob.picco

On 10/18/2017 08:08 PM, Pavel Tatashin wrote:
>>
>> As I said, I'm fine either way, I just didn't want to cause extra work
>> or rebasing:
>>
>> http://lists.infradead.org/pipermail/linux-arm-kernel/2017-October/535703.html
> 
> Makes sense. I am also fine either way, I can submit a new patch merging together the two if needed.
> 

Please, do this. Single patch makes more sense


> Pavel

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v12 07/11] x86/kasan: add and use kasan_map_populate()
  2017-10-18 17:14     ` Pavel Tatashin
@ 2017-10-18 17:20       ` Andrey Ryabinin
  0 siblings, 0 replies; 28+ messages in thread
From: Andrey Ryabinin @ 2017-10-18 17:20 UTC (permalink / raw)
  To: Pavel Tatashin, Andrey Ryabinin, linux-kernel, sparclinux,
	linux-mm, linuxppc-dev, linux-s390, linux-arm-kernel, x86,
	kasan-dev, borntraeger, heiko.carstens, davem, willy, mhocko,
	ard.biesheuvel, mark.rutland, will.deacon, catalin.marinas, sam,
	mgorman, akpm, steven.sistare, daniel.m.jordan, bob.picco

On 10/18/2017 08:14 PM, Pavel Tatashin wrote:
> Thank you Andrey, I will test this patch. Should it go on top of, or replace, the existing patch in the mm-tree? ARM and x86 should be handled the same way: either both as follow-ups or both as replacements.
> 

It's a replacement for your patch.


> Pavel
> 

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v12 08/11] arm64/kasan: add and use kasan_map_populate()
  2017-10-18 17:18           ` Andrey Ryabinin
@ 2017-10-18 17:23             ` Pavel Tatashin
  2017-11-03 15:40               ` Andrey Ryabinin
  0 siblings, 1 reply; 28+ messages in thread
From: Pavel Tatashin @ 2017-10-18 17:23 UTC (permalink / raw)
  To: Andrey Ryabinin, Will Deacon, mhocko, akpm
  Cc: linux-kernel, sparclinux, linux-mm, linuxppc-dev, linux-s390,
	linux-arm-kernel, x86, kasan-dev, borntraeger, heiko.carstens,
	davem, willy, ard.biesheuvel, mark.rutland, catalin.marinas, sam,
	mgorman, steven.sistare, daniel.m.jordan, bob.picco

Hi Andrew and Michal,

There are a few changes I need to make to my series:

1. Replace these two patches:

arm64/kasan: add and use kasan_map_populate()
x86/kasan: add and use kasan_map_populate()

With:

x86/mm/kasan: don't use vmemmap_populate() to initialize
  shadow
arm64/mm/kasan: don't use vmemmap_populate() to initialize
  shadow

2. Fix a kbuild warning about section mismatch in
mm: deferred_init_memmap improvements

How should I proceed to get these replaced in mm-tree? Send three new 
patches, or send a new series?

Thank you,
Pavel

On 10/18/2017 01:18 PM, Andrey Ryabinin wrote:
> On 10/18/2017 08:08 PM, Pavel Tatashin wrote:
>>>
>>> As I said, I'm fine either way, I just didn't want to cause extra work
>>> or rebasing:
>>>
>>> http://lists.infradead.org/pipermail/linux-arm-kernel/2017-October/535703.html
>>
>> Makes sense. I am also fine either way, I can submit a new patch merging together the two if needed.
>>
> 
> Please, do this. Single patch makes more sense
> 
> 
>> Pavel

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v12 09/11] mm: stop zeroing memory during allocation in vmemmap
  2017-10-13 17:32 ` [PATCH v12 09/11] mm: stop zeroing memory during allocation in vmemmap Pavel Tatashin
@ 2017-10-19 23:59   ` Andrew Morton
  2017-10-20  1:13     ` Pavel Tatashin
  0 siblings, 1 reply; 28+ messages in thread
From: Andrew Morton @ 2017-10-19 23:59 UTC (permalink / raw)
  To: Pavel Tatashin
  Cc: linux-kernel, sparclinux, linux-mm, linuxppc-dev, linux-s390,
	linux-arm-kernel, x86, kasan-dev, borntraeger, heiko.carstens,
	davem, willy, mhocko, ard.biesheuvel, mark.rutland, will.deacon,
	catalin.marinas, sam, mgorman, steven.sistare, daniel.m.jordan,
	bob.picco

On Fri, 13 Oct 2017 13:32:12 -0400 Pavel Tatashin <pasha.tatashin@oracle.com> wrote:

> vmemmap_alloc_block() will no longer zero the block, so zero memory
> at its call sites for everything except struct pages.  Struct page memory
> is zeroed by struct page initialization.
> 
> Replace allocators in sparse-vmemmap with the non-zeroing version, so
> that we get the performance benefit of zeroing the memory in parallel
> when struct pages are zeroed.
> 
> Add struct page zeroing as a part of initialization of other fields in
> __init_single_page().
> 
> This single thread performance collected on: Intel(R) Xeon(R) CPU E7-8895
> v3 @ 2.60GHz with 1T of memory (268400646 pages in 8 nodes):
> 
>                          BASE            FIX
> sparse_init     11.244671836s   0.007199623s
> zone_sizes_init  4.879775891s   8.355182299s
>                   --------------------------
> Total           16.124447727s   8.362381922s
> 
> sparse_init is where the memory for struct pages used to be zeroed; this
> patch moves that zeroing later, into __init_single_page(), which is
> called from zone_sizes_init().
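
For illustration, a minimal sketch of the pattern the quoted changelog
describes -- this is not the patch itself, and the helper names
alloc_zeroed_sketch() and init_single_page_sketch() are made up.  Call
sites that still need zeroed memory keep an explicit zeroing wrapper,
while struct page memory is left unzeroed at allocation time and cleared
when each page is initialized:

/* Non-struct-page users of the vmemmap allocator still get zeroed
 * memory, just explicitly: */
static void * __meminit alloc_zeroed_sketch(unsigned long size, int node)
{
	void *p = vmemmap_alloc_block(size, node);	/* no longer zeroes */

	if (p)
		memset(p, 0, size);
	return p;
}

/* Struct page memory skips the memset at allocation time; each page is
 * cleared as part of its (possibly deferred, per-node parallel)
 * initialization instead: */
static void __meminit init_single_page_sketch(struct page *page,
					      unsigned long pfn,
					      unsigned long zone, int nid)
{
	memset(page, 0, sizeof(*page));		/* zeroing moved here */
	set_page_links(page, zone, nid, pfn);
	init_page_count(page);
	page_mapcount_reset(page);
	/* ... rest of the field initialization ... */
}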

x86_64 allmodconfig:

WARNING: vmlinux.o(.text+0x29d099): Section mismatch in reference from the function T.1331() to the function .meminit.text:vmemmap_alloc_block()
The function T.1331() references
the function __meminit vmemmap_alloc_block().
This is often because T.1331 lacks a __meminit 
annotation or the annotation of vmemmap_alloc_block is wrong.

From a quick scan it's unclear to me why this is happening.  Maybe
gcc-4.4.4 decided to create an out-of-line version of
vmemmap_alloc_block_zero() for some reason.
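
To illustrate how that kind of mismatch can arise in general (the names
below are hypothetical, not the functions from this patch): a static
inline helper defined in a header carries no section annotation of its
own, so an out-of-line copy of it lands in plain .text, and its call into
a __meminit function is exactly the .text -> .meminit.text reference that
modpost flags:

/* hypothetical header helper, no __meminit annotation: */
static inline void *helper_sketch(unsigned long size, int node)
{
	return some_meminit_callee(size, node);	/* lives in .meminit.text */
}

/* If the compiler emits an out-of-line copy of helper_sketch(), that
 * copy is placed in regular .text and modpost warns about its reference
 * into .meminit.text.  Making the helper static __meminit in the single
 * .c file that uses it keeps caller and callee in the same init section,
 * which is what the fix below does. */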

Anyway.  I see no reason to publish vmemmap_alloc_block_zero() to the
whole world when it's only used in sparse-vmemmap.c.  The below fixes
the section mismatch:


--- a/include/linux/mm.h~mm-stop-zeroing-memory-during-allocation-in-vmemmap-fix
+++ a/include/linux/mm.h
@@ -2529,17 +2529,6 @@ static inline void *vmemmap_alloc_block_
 	return __vmemmap_alloc_block_buf(size, node, NULL);
 }
 
-static inline void *vmemmap_alloc_block_zero(unsigned long size, int node)
-{
-	void *p = vmemmap_alloc_block(size, node);
-
-	if (!p)
-		return NULL;
-	memset(p, 0, size);
-
-	return p;
-}
-
 void vmemmap_verify(pte_t *, int, unsigned long, unsigned long);
 int vmemmap_populate_basepages(unsigned long start, unsigned long end,
 			       int node);
--- a/mm/sparse-vmemmap.c~mm-stop-zeroing-memory-during-allocation-in-vmemmap-fix
+++ a/mm/sparse-vmemmap.c
@@ -178,6 +178,17 @@ pte_t * __meminit vmemmap_pte_populate(p
 	return pte;
 }
 
+static void * __meminit vmemmap_alloc_block_zero(unsigned long size, int node)
+{
+	void *p = vmemmap_alloc_block(size, node);
+
+	if (!p)
+		return NULL;
+	memset(p, 0, size);
+
+	return p;
+}
+
 pmd_t * __meminit vmemmap_pmd_populate(pud_t *pud, unsigned long addr, int node)
 {
 	pmd_t *pmd = pmd_offset(pud, addr);
_

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v12 09/11] mm: stop zeroing memory during allocation in vmemmap
  2017-10-19 23:59   ` Andrew Morton
@ 2017-10-20  1:13     ` Pavel Tatashin
  0 siblings, 0 replies; 28+ messages in thread
From: Pavel Tatashin @ 2017-10-20  1:13 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-kernel, sparclinux, Linux Memory Management List,
	linuxppc-dev, linux-s390, linux-arm-kernel, x86, kasan-dev,
	borntraeger, heiko.carstens, davem, willy, Michal Hocko,
	Ard Biesheuvel, Mark Rutland, Will Deacon, catalin.marinas, sam,
	mgorman, Steve Sistare, Daniel Jordan, Bob Picco

This looks good to me, thank you Andrew.

Pavel

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v12 08/11] arm64/kasan: add and use kasan_map_populate()
  2017-10-18 17:23             ` Pavel Tatashin
@ 2017-11-03 15:40               ` Andrey Ryabinin
  2017-11-03 15:50                 ` Pavel Tatashin
  0 siblings, 1 reply; 28+ messages in thread
From: Andrey Ryabinin @ 2017-11-03 15:40 UTC (permalink / raw)
  To: Pavel Tatashin
  Cc: Will Deacon, mhocko, akpm, linux-kernel, sparclinux, linux-mm,
	linuxppc-dev, linux-s390, linux-arm-kernel, x86, kasan-dev,
	borntraeger, heiko.carstens, davem, willy, ard.biesheuvel,
	mark.rutland, catalin.marinas, sam, mgorman, steven.sistare,
	daniel.m.jordan, bob.picco



On 10/18/2017 08:23 PM, Pavel Tatashin wrote:
> Hi Andrew and Michal,
> 
> There are a few changes I need to make to my series:
> 
> 1. Replace these two patches:
> 
> arm64/kasan: add and use kasan_map_populate()
> x86/kasan: add and use kasan_map_populate()
> 
> With:
> 
> x86/mm/kasan: don't use vmemmap_populate() to initialize
>  shadow
> arm64/mm/kasan: don't use vmemmap_populate() to initialize
>  shadow
> 

Pavel, could you please send the patches? These patches don't interfere with the rest of the series,
so I think it should be enough to send just the two patches that replace the old ones.

> 2. Fix a kbuild warning about section mismatch in
> mm: deferred_init_memmap improvements
> 
> How should I proceed to get these replaced in mm-tree? Send three new patches, or send a new series?
> 
> Thank you,
> Pavel
> 

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v12 08/11] arm64/kasan: add and use kasan_map_populate()
  2017-11-03 15:40               ` Andrey Ryabinin
@ 2017-11-03 15:50                 ` Pavel Tatashin
  0 siblings, 0 replies; 28+ messages in thread
From: Pavel Tatashin @ 2017-11-03 15:50 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Will Deacon, mhocko, akpm, linux-kernel, sparclinux, linux-mm,
	linuxppc-dev, linux-s390, linux-arm-kernel, x86, kasan-dev,
	borntraeger, heiko.carstens, davem, willy, ard.biesheuvel,
	mark.rutland, catalin.marinas, sam, mgorman, steven.sistare,
	daniel.m.jordan, bob.picco

>> 1. Replace these two patches:
>>
>> arm64/kasan: add and use kasan_map_populate()
>> x86/kasan: add and use kasan_map_populate()
>>
>> With:
>>
>> x86/mm/kasan: don't use vmemmap_populate() to initialize
>>   shadow
>> arm64/mm/kasan: don't use vmemmap_populate() to initialize
>>   shadow
>>
> 
> Pavel, could you please send the patches? These patches don't interfere with the rest of the series,
> so I think it should be enough to send just the two patches that replace the old ones.
> 

Hi Andrey,

I asked Michal and Andrew how to proceed but never received a reply from 
them. The patches are independent of the deferred page init series as 
long as they come before the series.

Anyway, I will post these two patches to the mailing list soon, but I am 
not really sure whether they will be taken into the mm-tree.

Pavel

^ permalink raw reply	[flat|nested] 28+ messages in thread

end of thread, other threads:[~2017-11-03 15:52 UTC | newest]

Thread overview: 28+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-10-13 17:32 [PATCH v12 00/11] complete deferred page initialization Pavel Tatashin
2017-10-13 17:32 ` [PATCH v12 01/11] mm: deferred_init_memmap improvements Pavel Tatashin
2017-10-17 11:40   ` Michal Hocko
2017-10-17 15:13     ` Pavel Tatashin
2017-10-13 17:32 ` [PATCH v12 02/11] x86/mm: setting fields in deferred pages Pavel Tatashin
2017-10-13 17:32 ` [PATCH v12 03/11] sparc64/mm: " Pavel Tatashin
2017-10-13 17:32 ` [PATCH v12 04/11] sparc64: simplify vmemmap_populate Pavel Tatashin
2017-10-13 17:32 ` [PATCH v12 05/11] mm: defining memblock_virt_alloc_try_nid_raw Pavel Tatashin
2017-10-13 17:32 ` [PATCH v12 06/11] mm: zero reserved and unavailable struct pages Pavel Tatashin
2017-10-13 17:32 ` [PATCH v12 07/11] x86/kasan: add and use kasan_map_populate() Pavel Tatashin
2017-10-18 17:11   ` Andrey Ryabinin
2017-10-18 17:14     ` Pavel Tatashin
2017-10-18 17:20       ` Andrey Ryabinin
2017-10-13 17:32 ` [PATCH v12 08/11] arm64/kasan: " Pavel Tatashin
2017-10-18 16:55   ` Andrey Ryabinin
2017-10-18 17:03     ` Pavel Tatashin
2017-10-18 17:06       ` Will Deacon
2017-10-18 17:08         ` Pavel Tatashin
2017-10-18 17:18           ` Andrey Ryabinin
2017-10-18 17:23             ` Pavel Tatashin
2017-11-03 15:40               ` Andrey Ryabinin
2017-11-03 15:50                 ` Pavel Tatashin
2017-10-13 17:32 ` [PATCH v12 09/11] mm: stop zeroing memory during allocation in vmemmap Pavel Tatashin
2017-10-19 23:59   ` Andrew Morton
2017-10-20  1:13     ` Pavel Tatashin
2017-10-13 17:32 ` [PATCH v12 10/11] sparc64: optimized struct page zeroing Pavel Tatashin
2017-10-13 17:32 ` [PATCH v12 11/11] arm64: kasan: Avoid using vmemmap_populate to initialise shadow Pavel Tatashin
2017-10-13 18:23 ` [PATCH v12 00/11] complete deferred page initialization Bob Picco
