From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
MIME-Version: 1.0
References: <153176041838.12695.3365448145295112857.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <153176041838.12695.3365448145295112857.stgit@dwillia2-desk3.amr.corp.intel.com>
From: Pavel Tatashin
Date: Mon, 16 Jul 2018 15:12:58 -0400
Message-ID:
Subject: Re: [PATCH v2 00/14] mm: Asynchronous + multithreaded memmap init for ZONE_DEVICE
Content-Type: text/plain; charset="UTF-8"
Sender: owner-linux-mm@kvack.org
To: dan.j.williams@intel.com
Cc: Andrew Morton, tony.luck@intel.com, yehs1@lenovo.com,
	vishal.l.verma@intel.com, jack@suse.cz, willy@infradead.org,
	dave.jiang@intel.com, hpa@zytor.com, tglx@linutronix.de,
	dalias@libc.org, fenghua.yu@intel.com, Daniel Jordan,
	ysato@users.sourceforge.jp, benh@kernel.crashing.org,
	Michal Hocko, paulus@samba.org, hch@lst.de, jglisse@redhat.com,
	mingo@redhat.com, mpe@ellerman.id.au, Heiko Carstens,
	x86@kernel.org, logang@deltatee.com, ross.zwisler@linux.intel.com,
	jmoyer@redhat.com, jthumshirn@suse.de, schwidefsky@de.ibm.com,
	Linux Memory Management List, linux-nvdimm@lists.01.org, LKML
List-ID:

On Mon, Jul 16, 2018 at 1:10 PM Dan Williams wrote:
>
> Changes since v1 [1]:
> * Teach memmap_sync() to take over a sub-set of memmap initialization in
>   the foreground. This foreground work still needs to await the
>   completion of vmemmap_populate_hugepages(), but it will otherwise
>   steal 1/1024th of the 'struct page' init work for the given range.
>   (Jan)
> * Add kernel-doc for all the new 'async' structures.
> * Split foreach_order_pgoff() to its own patch.
> * Add Pavel and Daniel to the cc as they have been active in the memory
>   hotplug code.
> * Fix a typo that prevented CONFIG_DAX_DRIVER_DEBUG=y from performing
>   early pfn retrieval at dax-filesystem mount time.
> * Improve some of the changelogs
>
> [1]: https://lwn.net/Articles/759117/
>
> ---
>
> In order to keep pfn_to_page() a simple offset calculation, the 'struct
> page' memmap needs to be mapped and initialized in advance of any usage
> of a page. This poses a problem for large memory systems as it delays
> full availability of memory resources for 10s to 100s of seconds.
>
> For typical 'System RAM' the problem is mitigated by the fact that large
> memory allocations tend to happen after the kernel has fully initialized
> and userspace services / applications are launched. A small amount, 2GB
> of memory, is initialized up front. The remainder is initialized in the
> background and freed to the page allocator over time.
>
> Unfortunately, that scheme is not directly reusable for persistent
> memory and dax because userspace has visibility into the entire resource
> pool and can choose to access any offset directly. In other words, there
> is no allocator indirection where the kernel can satisfy requests with
> arbitrary pages as they become initialized.
>
> That said, we can approximate the optimization by performing the
> initialization in the background, allowing the kernel to fully boot the
> platform, start up pmem block devices, and mount filesystems in dax
> mode, and only incurring a delay at the first userspace dax fault. When
> that initial fault occurs, the faulting process is delegated a portion
> of the memmap to initialize in the foreground, so that it need not wait
> for initialization of resources that it does not immediately need.
>
> With this change an 8-socket system was observed to initialize pmem
> namespaces in ~4 seconds, whereas it previously took ~4 minutes.

Hi Dan,

I am worried that this work adds another way to multi-thread struct page
initialization without reusing the already existing method.
The code is already a mess, and it leads to bugs [1] because of the
number of different memory layouts, architecture-specific quirks, and
different struct page initialization methods.

So, when DEFERRED_STRUCT_PAGE_INIT is used, we initialize struct pages
on demand until page_alloc_init_late() is called, and at that time we
initialize all the rest of the struct pages by calling:

page_alloc_init_late()
  deferred_init_memmap()    (a thread per node)
    deferred_init_pages()
      __init_single_page()

This is because memmap_init_zone() is not multi-threaded. However, this
work makes memmap_init_zone() multi-threaded. So, I think we should
really either be using deferred_init_memmap() here, or teach
DEFERRED_STRUCT_PAGE_INIT to use the new multi-threaded
memmap_init_zone(), but not both.

I am planning to study the memmap layouts and figure out how we can
reduce their number or merge some of the code. I would also like to
simplify memmap_init_zone() by at least splitting it into two functions:
one that handles the boot case, and another that handles the hotplug
case, as those are substantially different and make memmap_init_zone()
more complicated than needed.

Thank you,
Pavel

[1] https://www.spinics.net/lists/linux-mm/msg157271.html

>
> These patches apply on top of the HMM + devm_memremap_pages() reworks: