Date: Mon, 9 Jul 2018 14:56:41 +0200
From: Jan Kara
Subject: Re: [PATCH 00/13] mm: Asynchronous + multithreaded memmap init for ZONE_DEVICE
Message-ID: <20180709125641.xpoq66p4r7dzsgyj@quack2.suse.cz>
References: <153077334130.40830.2714147692560185329.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <153077334130.40830.2714147692560185329.stgit@dwillia2-desk3.amr.corp.intel.com>
To: Dan Williams
Cc: akpm@linux-foundation.org, Tony Luck, Huaisheng Ye, Vishal Verma,
	Jan Kara, Dave Jiang, "H. Peter Anvin", Thomas Gleixner,
	Rich Felker, Fenghua Yu, Yoshinori Sato, Benjamin Herrenschmidt,
	Michal Hocko, Paul Mackerras, Christoph Hellwig,
	Jérôme Glisse, Ingo Molnar, Johannes Thumshirn,
	Michael Ellerman, Heiko Carstens, x86@kernel.org,
	Logan Gunthorpe, Ross Zwisler, Jeff Moyer, Vlastimil Babka,
	Martin Schwidefsky, linux-nvdimm@lists.01.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org

On Wed 04-07-18 23:49:02, Dan Williams wrote:
> In order to keep pfn_to_page() a simple offset calculation, the 'struct
> page' memmap needs to be mapped and initialized in advance of any usage
> of a page. This poses a problem for large memory systems as it delays
> full availability of memory resources for 10s to 100s of seconds.
>
> For typical 'System RAM' the problem is mitigated by the fact that large
> memory allocations tend to happen after the kernel has fully initialized
> and userspace services / applications are launched. A small amount, 2GB
> of memory, is initialized up front. The remainder is initialized in the
> background and freed to the page allocator over time.
>
> Unfortunately, that scheme is not directly reusable for persistent
> memory and dax, because userspace has visibility into the entire
> resource pool and can choose to access any offset directly at a time of
> its choosing. In other words, there is no allocator indirection where
> the kernel can satisfy requests with arbitrary pages as they become
> initialized.
>
> That said, we can approximate the optimization by performing the
> initialization in the background, allowing the kernel to fully boot the
> platform, start up pmem block devices, mount filesystems in dax mode,
> and only incur the delay at the first userspace dax fault.
>
> With this change an 8-socket system was observed to initialize pmem
> namespaces in ~4 seconds, whereas it previously took ~4 minutes.
>
> These patches apply on top of the HMM + devm_memremap_pages() reworks
> [1]. Andrew, once the reviews come back, please consider this series
> for -mm as well.
>
> [1]: https://lkml.org/lkml/2018/6/19/108

One question: why not (in addition to the background initialization) have
->direct_access() initialize the block of struct pages around the pfn it
needs, if it finds that block is not initialized yet? That would make
devices usable immediately, without waiting for the background
initialization to complete...
								Honza

> ---
>
> Dan Williams (9):
>       mm: Plumb dev_pagemap instead of vmem_altmap to memmap_init_zone()
>       mm: Enable asynchronous __add_pages() and vmemmap_populate_hugepages()
>       mm: Teach memmap_init_zone() to initialize ZONE_DEVICE pages
>       mm: Multithread ZONE_DEVICE initialization
>       mm: Allow an external agent to wait for memmap initialization
>       filesystem-dax: Make mount time pfn validation a debug check
>       libnvdimm, pmem: Initialize the memmap in the background
>       device-dax: Initialize the memmap in the background
>       libnvdimm, namespace: Publish page structure init state / control
>
> Huaisheng Ye (4):
>       nvdimm/pmem: check the validity of the pointer pfn
>       nvdimm/pmem-dax: check the validity of the pointer pfn
>       s390/block/dcssblk: check the validity of the pointer pfn
>       fs/dax: Assign NULL to pfn of dax_direct_access if useless
>
>
>  arch/ia64/mm/init.c             |    5 +
>  arch/powerpc/mm/mem.c           |    5 +
>  arch/s390/mm/init.c             |    8 +
>  arch/sh/mm/init.c               |    5 +
>  arch/x86/mm/init_32.c           |    8 +
>  arch/x86/mm/init_64.c           |   27 +++--
>  drivers/dax/Kconfig             |   10 ++
>  drivers/dax/dax-private.h       |    2
>  drivers/dax/device-dax.h        |    2
>  drivers/dax/device.c            |   16 +++
>  drivers/dax/pmem.c              |    5 +
>  drivers/dax/super.c             |   64 +++++++-----
>  drivers/nvdimm/nd.h             |    2
>  drivers/nvdimm/pfn_devs.c       |   54 ++++++++--
>  drivers/nvdimm/pmem.c           |   17 ++-
>  drivers/nvdimm/pmem.h           |    1
>  drivers/s390/block/dcssblk.c    |    5 +
>  fs/dax.c                        |   10 +-
>  include/linux/memmap_async.h    |   55 ++++++++++
>  include/linux/memory_hotplug.h  |   18 ++-
>  include/linux/memremap.h        |   31 ++++++
>  include/linux/mm.h              |    8 +
>  kernel/memremap.c               |   85 ++++++++-------
>  mm/memory_hotplug.c             |   73 ++++++++---
>  mm/page_alloc.c                 |  215 +++++++++++++++++++++++++++++++++------
>  mm/sparse-vmemmap.c             |   56 ++++++++--
>  tools/testing/nvdimm/pmem-dax.c |   11 ++
>  27 files changed, 610 insertions(+), 188 deletions(-)
>  create mode 100644 include/linux/memmap_async.h

-- 
Jan Kara
SUSE Labs, CR