From: Dan Williams <dan.j.williams@intel.com>
To: akpm@linux-foundation.org
Cc: linux-nvdimm@lists.01.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, hch@lst.de
Subject: [PATCH 11/13] libnvdimm, pmem: Initialize the memmap in the background
Date: Wed, 04 Jul 2018 23:50:01 -0700
Message-ID: <153077340108.40830.8427794791878610916.stgit@dwillia2-desk3.amr.corp.intel.com> (raw)
In-Reply-To: <153077334130.40830.2714147692560185329.stgit@dwillia2-desk3.amr.corp.intel.com>

Arrange for the pmem driver to call memmap_sync() when it is asked to
produce a valid pfn. The infrastructure is housed in the 'nd_pfn'
device, which implies that the async init support only exists for
platform-defined persistent memory, not the legacy / debug
memmap=ss!nn facility. Another reason to restrict the capability to
the 'nd_pfn' device case is that nd_pfn devices have sysfs
infrastructure to communicate the memmap initialization state to
userspace. The sysfs publication of memmap init state is saved for a
later patch.
Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
Cc: Vishal Verma <vishal.l.verma@intel.com>
Cc: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 drivers/nvdimm/nd.h             |    2 ++
 drivers/nvdimm/pmem.c           |   16 ++++++++++++----
 drivers/nvdimm/pmem.h           |    1 +
 tools/testing/nvdimm/pmem-dax.c |    7 ++++++-
 4 files changed, 21 insertions(+), 5 deletions(-)

diff --git a/drivers/nvdimm/nd.h b/drivers/nvdimm/nd.h
index 32e0364b48b9..ee4f76fb0cb5 100644
--- a/drivers/nvdimm/nd.h
+++ b/drivers/nvdimm/nd.h
@@ -12,6 +12,7 @@
  */
 #ifndef __ND_H__
 #define __ND_H__
+#include <linux/memmap_async.h>
 #include <linux/libnvdimm.h>
 #include <linux/badblocks.h>
 #include <linux/blkdev.h>
@@ -208,6 +209,7 @@ struct nd_pfn {
 	unsigned long npfns;
 	enum nd_pfn_mode mode;
 	struct nd_pfn_sb *pfn_sb;
+	struct memmap_async_state async;
 	struct nd_namespace_common *ndns;
 };
 
diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index c430536320a5..a1158181adc2 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -22,6 +22,7 @@
 #include <linux/platform_device.h>
 #include <linux/module.h>
 #include <linux/moduleparam.h>
+#include <linux/memmap_async.h>
 #include <linux/badblocks.h>
 #include <linux/memremap.h>
 #include <linux/vmalloc.h>
@@ -228,8 +229,13 @@ __weak long __pmem_direct_access(struct pmem_device *pmem, pgoff_t pgoff,
 					PFN_PHYS(nr_pages))))
 		return -EIO;
 	*kaddr = pmem->virt_addr + offset;
-	if (pfn)
+	if (pfn) {
+		struct dev_pagemap *pgmap = &pmem->pgmap;
+		struct memmap_async_state *async = pgmap->async;
+
 		*pfn = phys_to_pfn_t(pmem->phys_addr + offset, pmem->pfn_flags);
+		memmap_sync(*pfn, nr_pages, async);
+	}
 
 	/*
 	 * If badblocks are present, limit known good range to the
@@ -310,13 +316,15 @@ static void fsdax_pagefree(struct page *page, void *data)
 	wake_up_var(&page->_refcount);
 }
 
-static int setup_pagemap_fsdax(struct device *dev, struct dev_pagemap *pgmap)
+static int setup_pagemap_fsdax(struct device *dev, struct dev_pagemap *pgmap,
+		struct memmap_async_state *async)
 {
 	dev_pagemap_get_ops();
 	if (devm_add_action_or_reset(dev, pmem_release_pgmap_ops, pgmap))
 		return -ENOMEM;
 	pgmap->type = MEMORY_DEVICE_FS_DAX;
 	pgmap->page_free = fsdax_pagefree;
+	pgmap->async = async;
 	return 0;
 }
 
@@ -379,7 +387,7 @@ static int pmem_attach_disk(struct device *dev,
 	pmem->pfn_flags = PFN_DEV;
 	pmem->pgmap.ref = &q->q_usage_counter;
 	if (is_nd_pfn(dev)) {
-		if (setup_pagemap_fsdax(dev, &pmem->pgmap))
+		if (setup_pagemap_fsdax(dev, &pmem->pgmap, &nd_pfn->async))
 			return -ENOMEM;
 		addr = devm_memremap_pages(dev, &pmem->pgmap,
 				pmem_freeze_queue);
@@ -393,7 +401,7 @@ static int pmem_attach_disk(struct device *dev,
 	} else if (pmem_should_map_pages(dev)) {
 		memcpy(&pmem->pgmap.res, &nsio->res, sizeof(pmem->pgmap.res));
 		pmem->pgmap.altmap_valid = false;
-		if (setup_pagemap_fsdax(dev, &pmem->pgmap))
+		if (setup_pagemap_fsdax(dev, &pmem->pgmap, NULL))
 			return -ENOMEM;
 		addr = devm_memremap_pages(dev, &pmem->pgmap,
 				pmem_freeze_queue);
diff --git a/drivers/nvdimm/pmem.h b/drivers/nvdimm/pmem.h
index a64ebc78b5df..93d226ea1006 100644
--- a/drivers/nvdimm/pmem.h
+++ b/drivers/nvdimm/pmem.h
@@ -1,6 +1,7 @@
 /* SPDX-License-Identifier: GPL-2.0 */
 #ifndef __NVDIMM_PMEM_H__
 #define __NVDIMM_PMEM_H__
+#include <linux/memmap_async.h>
 #include <linux/badblocks.h>
 #include <linux/types.h>
 #include <linux/pfn_t.h>
diff --git a/tools/testing/nvdimm/pmem-dax.c b/tools/testing/nvdimm/pmem-dax.c
index d4cb5281b30e..63151b75615c 100644
--- a/tools/testing/nvdimm/pmem-dax.c
+++ b/tools/testing/nvdimm/pmem-dax.c
@@ -42,8 +42,13 @@ long __pmem_direct_access(struct pmem_device *pmem, pgoff_t pgoff,
 	}
 	*kaddr = pmem->virt_addr + offset;
-	if (pfn)
+	if (pfn) {
+		struct dev_pagemap *pgmap = &pmem->pgmap;
+		struct memmap_async_state *async = pgmap->async;
+
 		*pfn = phys_to_pfn_t(pmem->phys_addr + offset, pmem->pfn_flags);
+		memmap_sync(*pfn, nr_pages, async);
+	}
 
 	/*
 	 * If badblocks are present, limit known good range to the
_______________________________________________
Linux-nvdimm mailing list
Linux-nvdimm@lists.01.org
https://lists.01.org/mailman/listinfo/linux-nvdimm