* [PATCH v3 0/8] mm: Rework hmm to use devm_memremap_pages and other fixes
@ 2018-06-19  6:04 Dan Williams
  2018-06-19  6:04 ` [PATCH v3 1/8] mm, devm_memremap_pages: Mark devm_memremap_pages() EXPORT_SYMBOL_GPL Dan Williams
                   ` (7 more replies)
  0 siblings, 8 replies; 14+ messages in thread
From: Dan Williams @ 2018-06-19  6:04 UTC
  To: akpm
  Cc: stable, Logan Gunthorpe, Christoph Hellwig,
	Jérôme Glisse, Michal Hocko, John Hubbard, Joe Gorse,
	linux-mm, linux-kernel

Changes since v2 [1]:
* Rebased on v4.18-rc1
* Collect Logan's reviewed-by for "mm, devm_memremap_pages: Add
  MEMORY_DEVICE_PRIVATE support"
* Convert __put_devmap_managed_page and devmap_managed_key to
  EXPORT_SYMBOL; otherwise put_page() becomes limited to GPL-only
  modules.
* Clarify some of the changelogs.

[1]: https://lkml.org/lkml/2018/5/23/24

---

Hi Andrew, here's v3 to replace these 5 currently in mm:

mm-devm_memremap_pages-mark-devm_memremap_pages-export_symbol_gpl.patch
mm-devm_memremap_pages-handle-errors-allocating-final-devres-action.patch
mm-hmm-use-devm-semantics-for-hmm_devmem_add-remove.patch
mm-hmm-replace-hmm_devmem_pages_create-with-devm_memremap_pages.patch
mm-hmm-mark-hmm_devmem_add-add_resource-export_symbol_gpl.patch

For maintainability, as ZONE_DEVICE continues to attract new users,
it is useful to keep all users consolidated on devm_memremap_pages() as
the interface for creating "device pages".

The devm_memremap_pages() implementation was recently reworked to make
it more generic for arbitrary users, like the proposed peer-to-peer
PCI-E enabling. HMM pre-dated this rework and opted to duplicate
devm_memremap_pages() as hmm_devmem_pages_create().

Rework HMM to be a consumer of devm_memremap_pages() directly and fix up
the licensing on the exports given the deep dependencies on the mm.

The patches are based on v4.18-rc1, where there are no upstream consumers of the
HMM functionality.
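
For reference, a consumer after this series looks roughly like the
sketch below. This is schematic only: the foo_* names are hypothetical,
and the only assumption is the post-series three-argument signature
with the kill callback (see patch 3/8).

/* Minimal post-series consumer sketch; foo_* names are hypothetical. */
#include <linux/device.h>
#include <linux/err.h>
#include <linux/memremap.h>
#include <linux/percpu-refcount.h>

struct foo {
	struct percpu_ref ref;
	struct dev_pagemap pgmap;
};

static void foo_percpu_kill(struct percpu_ref *ref)
{
	percpu_ref_kill(ref);	/* stop new references at teardown */
}

static int foo_probe(struct device *dev, struct foo *foo)
{
	void *addr;

	foo->pgmap.ref = &foo->ref;
	/* foo->pgmap.res and foo->pgmap.type are set up elsewhere */
	addr = devm_memremap_pages(dev, &foo->pgmap, foo_percpu_kill);
	if (IS_ERR(addr))
		return PTR_ERR(addr);
	return 0;
}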

---

Dan Williams (8):
      mm, devm_memremap_pages: Mark devm_memremap_pages() EXPORT_SYMBOL_GPL
      mm, devm_memremap_pages: Kill mapping "System RAM" support
      mm, devm_memremap_pages: Fix shutdown handling
      mm, devm_memremap_pages: Add MEMORY_DEVICE_PRIVATE support
      mm, hmm: Use devm semantics for hmm_devmem_{add,remove}
      mm, hmm: Replace hmm_devmem_pages_create() with devm_memremap_pages()
      mm, hmm: Mark hmm_devmem_{add,add_resource} EXPORT_SYMBOL_GPL
      mm: Fix exports that inadvertently make put_page() EXPORT_SYMBOL_GPL


 drivers/dax/pmem.c                |   10 -
 drivers/nvdimm/pmem.c             |   18 +-
 include/linux/hmm.h               |    4 
 include/linux/memremap.h          |    7 +
 kernel/memremap.c                 |   89 +++++++----
 mm/hmm.c                          |  307 +++++--------------------------------
 tools/testing/nvdimm/test/iomap.c |   21 ++-
 7 files changed, 132 insertions(+), 324 deletions(-)


* [PATCH v3 1/8] mm, devm_memremap_pages: Mark devm_memremap_pages() EXPORT_SYMBOL_GPL
  2018-06-19  6:04 [PATCH v3 0/8] mm: Rework hmm to use devm_memremap_pages and other fixes Dan Williams
@ 2018-06-19  6:04 ` Dan Williams
  2018-06-19  6:04 ` [PATCH v3 2/8] mm, devm_memremap_pages: Kill mapping "System RAM" support Dan Williams
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 14+ messages in thread
From: Dan Williams @ 2018-06-19  6:04 UTC
  To: akpm
  Cc: Michal Hocko, Jérôme Glisse, Christoph Hellwig,
	linux-mm, linux-kernel

The devm_memremap_pages() facility is tightly integrated with the
kernel's memory hotplug functionality. It injects an altmap argument
deep into the architecture-specific vmemmap implementation to allow
allocating from specific reserved pages, and it has Linux-specific
assumptions about page structure reference counting relative to
get_user_pages() and get_user_pages_fast(). It was an oversight that
this was not marked EXPORT_SYMBOL_GPL from the outset.
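
The practical effect: only modules that declare a GPL-compatible
license can resolve the symbol at load time. A generic illustration
(not part of this patch):

#include <linux/module.h>

/*
 * modpost and the module loader refuse to resolve EXPORT_SYMBOL_GPL
 * symbols, such as devm_memremap_pages(), for modules that lack a
 * GPL-compatible MODULE_LICENSE declaration.
 */
MODULE_LICENSE("GPL");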

Cc: Michal Hocko <mhocko@suse.com>
Cc: "Jérôme Glisse" <jglisse@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 kernel/memremap.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/memremap.c b/kernel/memremap.c
index 5857267a4af5..4478e4688bb7 100644
--- a/kernel/memremap.c
+++ b/kernel/memremap.c
@@ -257,7 +257,7 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
 	pgmap_radix_release(res, pgoff);
 	return ERR_PTR(error);
 }
-EXPORT_SYMBOL(devm_memremap_pages);
+EXPORT_SYMBOL_GPL(devm_memremap_pages);
 
 unsigned long vmem_altmap_offset(struct vmem_altmap *altmap)
 {



* [PATCH v3 2/8] mm, devm_memremap_pages: Kill mapping "System RAM" support
  2018-06-19  6:04 [PATCH v3 0/8] mm: Rework hmm to use devm_memremap_pages and other fixes Dan Williams
  2018-06-19  6:04 ` [PATCH v3 1/8] mm, devm_memremap_pages: Mark devm_memremap_pages() EXPORT_SYMBOL_GPL Dan Williams
@ 2018-06-19  6:04 ` Dan Williams
  2018-06-19  6:04 ` [PATCH v3 3/8] mm, devm_memremap_pages: Fix shutdown handling Dan Williams
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 14+ messages in thread
From: Dan Williams @ 2018-06-19  6:04 UTC
  To: akpm
  Cc: Christoph Hellwig, Jérôme Glisse, Logan Gunthorpe,
	linux-mm, linux-kernel

Given that devm_memremap_pages() requires a percpu_ref that is
torn down by devm_memremap_pages_release(), the current support for
mapping RAM is broken.

This has been broken since forever and there is no use case to map RAM
in this way, so just kill the support and make it an explicit error.

Cc: Christoph Hellwig <hch@lst.de>
Cc: "Jérôme Glisse" <jglisse@redhat.com>
Cc: Logan Gunthorpe <logang@deltatee.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 kernel/memremap.c |    9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/kernel/memremap.c b/kernel/memremap.c
index 4478e4688bb7..2d2c901cbe23 100644
--- a/kernel/memremap.c
+++ b/kernel/memremap.c
@@ -183,15 +183,12 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
 	is_ram = region_intersects(align_start, align_size,
 		IORESOURCE_SYSTEM_RAM, IORES_DESC_NONE);
 
-	if (is_ram == REGION_MIXED) {
-		WARN_ONCE(1, "%s attempted on mixed region %pr\n",
-				__func__, res);
+	if (is_ram != REGION_DISJOINT) {
+		WARN_ONCE(1, "%s attempted on %s region %pr\n", __func__,
+				is_ram == REGION_MIXED ? "mixed" : "ram", res);
 		return ERR_PTR(-ENXIO);
 	}
 
-	if (is_ram == REGION_INTERSECTS)
-		return __va(res->start);
-
 	if (!pgmap->ref)
 		return ERR_PTR(-EINVAL);
 



* [PATCH v3 3/8] mm, devm_memremap_pages: Fix shutdown handling
  2018-06-19  6:04 [PATCH v3 0/8] mm: Rework hmm to use devm_memremap_pages and other fixes Dan Williams
  2018-06-19  6:04 ` [PATCH v3 1/8] mm, devm_memremap_pages: Mark devm_memremap_pages() EXPORT_SYMBOL_GPL Dan Williams
  2018-06-19  6:04 ` [PATCH v3 2/8] mm, devm_memremap_pages: Kill mapping "System RAM" support Dan Williams
@ 2018-06-19  6:04 ` Dan Williams
  2018-06-19 16:00   ` Logan Gunthorpe
  2018-06-19  6:04 ` [PATCH v3 4/8] mm, devm_memremap_pages: Add MEMORY_DEVICE_PRIVATE support Dan Williams
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 14+ messages in thread
From: Dan Williams @ 2018-06-19  6:04 UTC
  To: akpm
  Cc: stable, Christoph Hellwig, Jérôme Glisse,
	Logan Gunthorpe, linux-mm, linux-kernel

The last step before devm_memremap_pages() returns success is to
allocate a release action, devm_memremap_pages_release(), to tear the
entire setup down. However, the result from devm_add_action() is not
checked.

Checking the error from devm_add_action() is not enough. The API
currently relies on the fact that the percpu_ref it is using is killed
by the time devm_memremap_pages_release() is run. Rather than
continue this awkward situation, offload the responsibility for killing
the percpu_ref to devm_memremap_pages_release() directly. This allows
devm_memremap_pages() to do the right thing relative to init failures
and shutdown.

Without this change we could fail to register the teardown of
devm_memremap_pages(). The likelihood of hitting this failure is tiny, as
small memory allocations almost always succeed. However, the impact of
the failure is large: any future reconfiguration, or
disable/enable, of an nvdimm namespace will fail forever, as subsequent
calls to devm_memremap_pages() will fail to set up the pgmap_radix since
there will be stale entries for the physical address range.

An argument could be made to require that the ->kill() operation be set
in the @pgmap arg rather than passed in separately. However, being able
to grep the kill routine directly at the devm_memremap_pages() call site
helps code readability and makes the lifetime of a given instance easier
to track.

Cc: <stable@vger.kernel.org>
Fixes: e8d513483300 ("memremap: change devm_memremap_pages interface...")
Cc: Christoph Hellwig <hch@lst.de>
Cc: "Jérôme Glisse" <jglisse@redhat.com>
Reported-by: Logan Gunthorpe <logang@deltatee.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 drivers/dax/pmem.c                |   10 ++--------
 drivers/nvdimm/pmem.c             |   18 ++++++++----------
 include/linux/memremap.h          |    7 +++++--
 kernel/memremap.c                 |   36 +++++++++++++++++++-----------------
 tools/testing/nvdimm/test/iomap.c |   21 ++++++++++++++++++---
 5 files changed, 52 insertions(+), 40 deletions(-)

diff --git a/drivers/dax/pmem.c b/drivers/dax/pmem.c
index fd49b24fd6af..54cba20c8ba6 100644
--- a/drivers/dax/pmem.c
+++ b/drivers/dax/pmem.c
@@ -48,9 +48,8 @@ static void dax_pmem_percpu_exit(void *data)
 	percpu_ref_exit(ref);
 }
 
-static void dax_pmem_percpu_kill(void *data)
+static void dax_pmem_percpu_kill(struct percpu_ref *ref)
 {
-	struct percpu_ref *ref = data;
 	struct dax_pmem *dax_pmem = to_dax_pmem(ref);
 
 	dev_dbg(dax_pmem->dev, "trace\n");
@@ -111,15 +110,10 @@ static int dax_pmem_probe(struct device *dev)
 		return rc;
 
 	dax_pmem->pgmap.ref = &dax_pmem->ref;
-	addr = devm_memremap_pages(dev, &dax_pmem->pgmap);
+	addr = devm_memremap_pages(dev, &dax_pmem->pgmap, dax_pmem_percpu_kill);
 	if (IS_ERR(addr))
 		return PTR_ERR(addr);
 
-	rc = devm_add_action_or_reset(dev, dax_pmem_percpu_kill,
-							&dax_pmem->ref);
-	if (rc)
-		return rc;
-
 	/* adjust the dax_region resource to the start of data */
 	memcpy(&res, &dax_pmem->pgmap.res, sizeof(res));
 	res.start += le64_to_cpu(pfn_sb->dataoff);
diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index 68940356cad3..e8ac6f244d2b 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -281,8 +281,11 @@ static void pmem_release_queue(void *q)
 	blk_cleanup_queue(q);
 }
 
-static void pmem_freeze_queue(void *q)
+static void pmem_freeze_queue(struct percpu_ref *ref)
 {
+	struct request_queue *q;
+
+	q = container_of(ref, typeof(*q), q_usage_counter);
 	blk_freeze_queue_start(q);
 }
 
@@ -377,7 +380,8 @@ static int pmem_attach_disk(struct device *dev,
 	if (is_nd_pfn(dev)) {
 		if (setup_pagemap_fsdax(dev, &pmem->pgmap))
 			return -ENOMEM;
-		addr = devm_memremap_pages(dev, &pmem->pgmap);
+		addr = devm_memremap_pages(dev, &pmem->pgmap,
+				pmem_freeze_queue);
 		pfn_sb = nd_pfn->pfn_sb;
 		pmem->data_offset = le64_to_cpu(pfn_sb->dataoff);
 		pmem->pfn_pad = resource_size(res) -
@@ -390,20 +394,14 @@ static int pmem_attach_disk(struct device *dev,
 		pmem->pgmap.altmap_valid = false;
 		if (setup_pagemap_fsdax(dev, &pmem->pgmap))
 			return -ENOMEM;
-		addr = devm_memremap_pages(dev, &pmem->pgmap);
+		addr = devm_memremap_pages(dev, &pmem->pgmap,
+				pmem_freeze_queue);
 		pmem->pfn_flags |= PFN_MAP;
 		memcpy(&bb_res, &pmem->pgmap.res, sizeof(bb_res));
 	} else
 		addr = devm_memremap(dev, pmem->phys_addr,
 				pmem->size, ARCH_MEMREMAP_PMEM);
 
-	/*
-	 * At release time the queue must be frozen before
-	 * devm_memremap_pages is unwound
-	 */
-	if (devm_add_action_or_reset(dev, pmem_freeze_queue, q))
-		return -ENOMEM;
-
 	if (IS_ERR(addr))
 		return PTR_ERR(addr);
 	pmem->virt_addr = addr;
diff --git a/include/linux/memremap.h b/include/linux/memremap.h
index f91f9e763557..71f5e7c7dfb9 100644
--- a/include/linux/memremap.h
+++ b/include/linux/memremap.h
@@ -106,6 +106,7 @@ typedef void (*dev_page_free_t)(struct page *page, void *data);
  * @altmap: pre-allocated/reserved memory for vmemmap allocations
  * @res: physical address range covered by @ref
  * @ref: reference count that pins the devm_memremap_pages() mapping
+ * @kill: callback to transition @ref to the dead state
  * @dev: host device of the mapping for debug
  * @data: private data pointer for page_free()
  * @type: memory type: see MEMORY_* in memory_hotplug.h
@@ -117,13 +118,15 @@ struct dev_pagemap {
 	bool altmap_valid;
 	struct resource res;
 	struct percpu_ref *ref;
+	void (*kill)(struct percpu_ref *ref);
 	struct device *dev;
 	void *data;
 	enum memory_type type;
 };
 
 #ifdef CONFIG_ZONE_DEVICE
-void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap);
+void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap,
+		void (*kill)(struct percpu_ref *));
 struct dev_pagemap *get_dev_pagemap(unsigned long pfn,
 		struct dev_pagemap *pgmap);
 
@@ -131,7 +134,7 @@ unsigned long vmem_altmap_offset(struct vmem_altmap *altmap);
 void vmem_altmap_free(struct vmem_altmap *altmap, unsigned long nr_pfns);
 #else
 static inline void *devm_memremap_pages(struct device *dev,
-		struct dev_pagemap *pgmap)
+		struct dev_pagemap *pgmap, void (*kill)(struct percpu_ref *))
 {
 	/*
 	 * Fail attempts to call devm_memremap_pages() without
diff --git a/kernel/memremap.c b/kernel/memremap.c
index 2d2c901cbe23..92b8d7057321 100644
--- a/kernel/memremap.c
+++ b/kernel/memremap.c
@@ -121,14 +121,10 @@ static void devm_memremap_pages_release(void *data)
 	resource_size_t align_start, align_size;
 	unsigned long pfn;
 
+	pgmap->kill(pgmap->ref);
 	for_each_device_pfn(pfn, pgmap)
 		put_page(pfn_to_page(pfn));
 
-	if (percpu_ref_tryget_live(pgmap->ref)) {
-		dev_WARN(dev, "%s: page mapping is still live!\n", __func__);
-		percpu_ref_put(pgmap->ref);
-	}
-
 	/* pages are dead and unused, undo the arch mapping */
 	align_start = res->start & ~(SECTION_SIZE - 1);
 	align_size = ALIGN(res->start + resource_size(res), SECTION_SIZE)
@@ -148,7 +144,8 @@ static void devm_memremap_pages_release(void *data)
 /**
  * devm_memremap_pages - remap and provide memmap backing for the given resource
  * @dev: hosting device for @res
- * @pgmap: pointer to a struct dev_pgmap
+ * @pgmap: pointer to a struct dev_pagemap
+ * @kill: routine to kill @pgmap->ref
  *
  * Notes:
  * 1/ At a minimum the res, ref and type members of @pgmap must be initialized
@@ -157,17 +154,15 @@ static void devm_memremap_pages_release(void *data)
  * 2/ The altmap field may optionally be initialized, in which case altmap_valid
  *    must be set to true
  *
- * 3/ pgmap.ref must be 'live' on entry and 'dead' before devm_memunmap_pages()
- *    time (or devm release event). The expected order of events is that ref has
- *    been through percpu_ref_kill() before devm_memremap_pages_release(). The
- *    wait for the completion of all references being dropped and
- *    percpu_ref_exit() must occur after devm_memremap_pages_release().
+ * 3/ pgmap->ref must be 'live' on entry and will be killed at
+ *    devm_memremap_pages_release() time, or if this routine fails.
  *
  * 4/ res is expected to be a host memory range that could feasibly be
  *    treated as a "System RAM" range, i.e. not a device mmio range, but
  *    this is not enforced.
  */
-void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
+void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap,
+		void (*kill)(struct percpu_ref *))
 {
 	resource_size_t align_start, align_size, align_end;
 	struct vmem_altmap *altmap = pgmap->altmap_valid ?
@@ -177,6 +172,9 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
 	pgprot_t pgprot = PAGE_KERNEL;
 	int error, nid, is_ram;
 
+	if (!pgmap->ref || !kill)
+		return ERR_PTR(-EINVAL);
+
 	align_start = res->start & ~(SECTION_SIZE - 1);
 	align_size = ALIGN(res->start + resource_size(res), SECTION_SIZE)
 		- align_start;
@@ -186,12 +184,10 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
 	if (is_ram != REGION_DISJOINT) {
 		WARN_ONCE(1, "%s attempted on %s region %pr\n", __func__,
 				is_ram == REGION_MIXED ? "mixed" : "ram", res);
-		return ERR_PTR(-ENXIO);
+		error = -ENXIO;
+		goto err_init;
 	}
 
-	if (!pgmap->ref)
-		return ERR_PTR(-EINVAL);
-
 	pgmap->dev = dev;
 
 	mutex_lock(&pgmap_lock);
@@ -243,7 +239,11 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
 		percpu_ref_get(pgmap->ref);
 	}
 
-	devm_add_action(dev, devm_memremap_pages_release, pgmap);
+	pgmap->kill = kill;
+	error = devm_add_action_or_reset(dev, devm_memremap_pages_release,
+			pgmap);
+	if (error)
+		return ERR_PTR(error);
 
 	return __va(res->start);
 
@@ -252,6 +252,8 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
  err_pfn_remap:
  err_radix:
 	pgmap_radix_release(res, pgoff);
+ err_init:
+	kill(pgmap->ref);
 	return ERR_PTR(error);
 }
 EXPORT_SYMBOL_GPL(devm_memremap_pages);
diff --git a/tools/testing/nvdimm/test/iomap.c b/tools/testing/nvdimm/test/iomap.c
index ff9d3a5825e1..ad544e6476a9 100644
--- a/tools/testing/nvdimm/test/iomap.c
+++ b/tools/testing/nvdimm/test/iomap.c
@@ -104,14 +104,29 @@ void *__wrap_devm_memremap(struct device *dev, resource_size_t offset,
 }
 EXPORT_SYMBOL(__wrap_devm_memremap);
 
-void *__wrap_devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
+static void nfit_test_kill(void *_pgmap)
+{
+	struct dev_pagemap *pgmap = _pgmap;
+
+	pgmap->kill(pgmap->ref);
+}
+
+void *__wrap_devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap,
+		void (*kill)(struct percpu_ref *))
 {
 	resource_size_t offset = pgmap->res.start;
 	struct nfit_test_resource *nfit_res = get_nfit_res(offset);
 
-	if (nfit_res)
+	if (nfit_res) {
+		int rc;
+
+		pgmap->kill = kill;
+		rc = devm_add_action_or_reset(dev, nfit_test_kill, pgmap);
+		if (rc)
+			return ERR_PTR(rc);
 		return nfit_res->buf + offset - nfit_res->res.start;
-	return devm_memremap_pages(dev, pgmap);
+	}
+	return devm_memremap_pages(dev, pgmap, kill);
 }
 EXPORT_SYMBOL(__wrap_devm_memremap_pages);
 



* [PATCH v3 4/8] mm, devm_memremap_pages: Add MEMORY_DEVICE_PRIVATE support
  2018-06-19  6:04 [PATCH v3 0/8] mm: Rework hmm to use devm_memremap_pages and other fixes Dan Williams
                   ` (2 preceding siblings ...)
  2018-06-19  6:04 ` [PATCH v3 3/8] mm, devm_memremap_pages: Fix shutdown handling Dan Williams
@ 2018-06-19  6:04 ` Dan Williams
  2018-06-19  6:05 ` [PATCH v3 5/8] mm, hmm: Use devm semantics for hmm_devmem_{add, remove} Dan Williams
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 14+ messages in thread
From: Dan Williams @ 2018-06-19  6:04 UTC
  To: akpm
  Cc: Christoph Hellwig, Jérôme Glisse, Logan Gunthorpe,
	Logan Gunthorpe, linux-mm, linux-kernel

In preparation for consolidating all ZONE_DEVICE enabling via
devm_memremap_pages(), teach it how to handle the constraints of
MEMORY_DEVICE_PRIVATE ranges.
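
For illustration, a MEMORY_DEVICE_PRIVATE caller would look roughly
like the sketch below (schematic only; struct foo and foo_percpu_kill()
as in the hypothetical sketch in the cover letter, with the
three-argument signature from patch 3/8 assumed):

static int foo_private_probe(struct device *dev, struct foo *foo)
{
	void *addr;

	/*
	 * MEMORY_DEVICE_PRIVATE selects add_pages() internally, so
	 * struct pages are created without establishing a linear
	 * mapping for the CPU-inaccessible range.
	 */
	foo->pgmap.type = MEMORY_DEVICE_PRIVATE;
	foo->pgmap.ref = &foo->ref;
	foo->pgmap.altmap_valid = false;
	/* foo->pgmap.res is set up elsewhere */

	addr = devm_memremap_pages(dev, &foo->pgmap, foo_percpu_kill);
	return IS_ERR(addr) ? PTR_ERR(addr) : 0;
}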

Cc: Christoph Hellwig <hch@lst.de>
Cc: "Jérôme Glisse" <jglisse@redhat.com>
Reported-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 kernel/memremap.c |   38 ++++++++++++++++++++++++++++++++------
 1 file changed, 32 insertions(+), 6 deletions(-)

diff --git a/kernel/memremap.c b/kernel/memremap.c
index 92b8d7057321..16141b608b63 100644
--- a/kernel/memremap.c
+++ b/kernel/memremap.c
@@ -131,8 +131,13 @@ static void devm_memremap_pages_release(void *data)
 		- align_start;
 
 	mem_hotplug_begin();
-	arch_remove_memory(align_start, align_size, pgmap->altmap_valid ?
-			&pgmap->altmap : NULL);
+	if (pgmap->type == MEMORY_DEVICE_PRIVATE) {
+		pfn = align_start >> PAGE_SHIFT;
+		__remove_pages(page_zone(pfn_to_page(pfn)), pfn,
+				align_size >> PAGE_SHIFT, NULL);
+	} else
+		arch_remove_memory(align_start, align_size,
+				pgmap->altmap_valid ? &pgmap->altmap : NULL);
 	mem_hotplug_done();
 
 	untrack_pfn(NULL, PHYS_PFN(align_start), align_size);
@@ -216,11 +221,32 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap,
 		goto err_pfn_remap;
 
 	mem_hotplug_begin();
-	error = arch_add_memory(nid, align_start, align_size, altmap, false);
-	if (!error)
-		move_pfn_range_to_zone(&NODE_DATA(nid)->node_zones[ZONE_DEVICE],
-					align_start >> PAGE_SHIFT,
+
+	/*
+	 * For device private memory we call add_pages() as we only need to
+	 * allocate and initialize struct page for the device memory. More-
+	 * over the device memory is un-accessible thus we do not want to
+	 * create a linear mapping for the memory like arch_add_memory()
+	 * would do.
+	 *
+	 * For all other device memory types, which are accessible by
+	 * the CPU, we do want the linear mapping and thus use
+	 * arch_add_memory().
+	 */
+	if (pgmap->type == MEMORY_DEVICE_PRIVATE) {
+		error = add_pages(nid, align_start >> PAGE_SHIFT,
+				align_size >> PAGE_SHIFT, NULL, false);
+	} else {
+		struct zone *zone;
+
+		error = arch_add_memory(nid, align_start, align_size, altmap,
+				false);
+		zone = &NODE_DATA(nid)->node_zones[ZONE_DEVICE];
+		if (!error)
+			move_pfn_range_to_zone(zone, align_start >> PAGE_SHIFT,
 					align_size >> PAGE_SHIFT, altmap);
+	}
+
 	mem_hotplug_done();
 	if (error)
 		goto err_add_memory;



* [PATCH v3 5/8] mm, hmm: Use devm semantics for hmm_devmem_{add, remove}
  2018-06-19  6:04 [PATCH v3 0/8] mm: Rework hmm to use devm_memremap_pages and other fixes Dan Williams
                   ` (3 preceding siblings ...)
  2018-06-19  6:04 ` [PATCH v3 4/8] mm, devm_memremap_pages: Add MEMORY_DEVICE_PRIVATE support Dan Williams
@ 2018-06-19  6:05 ` Dan Williams
  2018-06-19  6:05 ` [PATCH v3 6/8] mm, hmm: Replace hmm_devmem_pages_create() with devm_memremap_pages() Dan Williams
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 14+ messages in thread
From: Dan Williams @ 2018-06-19  6:05 UTC
  To: akpm
  Cc: Christoph Hellwig, Jérôme Glisse, Logan Gunthorpe,
	linux-mm, linux-kernel

devm semantics arrange for resources to be torn down when
device-driver probe fails or when device-driver release completes.
Similar to devm_memremap_pages(), there is no need to support an explicit
remove operation when users properly adhere to devm semantics.

Note that devm_kzalloc() automatically handles allocating node-local
memory.
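
The conversion leans on devm_add_action_or_reset(), which, unlike
devm_add_action(), invokes the release routine immediately when
registration fails. A generic usage sketch (foo_* names hypothetical):

static void foo_teardown(void *data)
{
	/* runs at driver detach, or immediately if registration failed */
}

static int foo_init(struct device *dev, void *ctx)
{
	/* no manual unwind needed; failure already ran foo_teardown(ctx) */
	return devm_add_action_or_reset(dev, foo_teardown, ctx);
}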

Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: "Jérôme Glisse" <jglisse@redhat.com>
Cc: Logan Gunthorpe <logang@deltatee.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 include/linux/hmm.h |    4 --
 mm/hmm.c            |  127 ++++++++++-----------------------------------------
 2 files changed, 25 insertions(+), 106 deletions(-)

diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index 4c92e3ba3e16..5ec8635f602c 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -499,8 +499,7 @@ struct hmm_devmem {
  * enough and allocate struct page for it.
  *
  * The device driver can wrap the hmm_devmem struct inside a private device
- * driver struct. The device driver must call hmm_devmem_remove() before the
- * device goes away and before freeing the hmm_devmem struct memory.
+ * driver struct.
  */
 struct hmm_devmem *hmm_devmem_add(const struct hmm_devmem_ops *ops,
 				  struct device *device,
@@ -508,7 +507,6 @@ struct hmm_devmem *hmm_devmem_add(const struct hmm_devmem_ops *ops,
 struct hmm_devmem *hmm_devmem_add_resource(const struct hmm_devmem_ops *ops,
 					   struct device *device,
 					   struct resource *res);
-void hmm_devmem_remove(struct hmm_devmem *devmem);
 
 /*
  * hmm_devmem_page_set_drvdata - set per-page driver data field
diff --git a/mm/hmm.c b/mm/hmm.c
index de7b6bf77201..d65a9419dbc2 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -934,7 +934,6 @@ static void hmm_devmem_ref_exit(void *data)
 
 	devmem = container_of(ref, struct hmm_devmem, ref);
 	percpu_ref_exit(ref);
-	devm_remove_action(devmem->device, &hmm_devmem_ref_exit, data);
 }
 
 static void hmm_devmem_ref_kill(void *data)
@@ -945,7 +944,6 @@ static void hmm_devmem_ref_kill(void *data)
 	devmem = container_of(ref, struct hmm_devmem, ref);
 	percpu_ref_kill(ref);
 	wait_for_completion(&devmem->completion);
-	devm_remove_action(devmem->device, &hmm_devmem_ref_kill, data);
 }
 
 static int hmm_devmem_fault(struct vm_area_struct *vma,
@@ -984,7 +982,7 @@ static void hmm_devmem_radix_release(struct resource *resource)
 	mutex_unlock(&hmm_devmem_lock);
 }
 
-static void hmm_devmem_release(struct device *dev, void *data)
+static void hmm_devmem_release(void *data)
 {
 	struct hmm_devmem *devmem = data;
 	struct resource *resource = devmem->resource;
@@ -992,11 +990,6 @@ static void hmm_devmem_release(struct device *dev, void *data)
 	struct zone *zone;
 	struct page *page;
 
-	if (percpu_ref_tryget_live(&devmem->ref)) {
-		dev_WARN(dev, "%s: page mapping is still live!\n", __func__);
-		percpu_ref_put(&devmem->ref);
-	}
-
 	/* pages are dead and unused, undo the arch mapping */
 	start_pfn = (resource->start & ~(PA_SECTION_SIZE - 1)) >> PAGE_SHIFT;
 	npages = ALIGN(resource_size(resource), PA_SECTION_SIZE) >> PAGE_SHIFT;
@@ -1120,19 +1113,6 @@ static int hmm_devmem_pages_create(struct hmm_devmem *devmem)
 	return ret;
 }
 
-static int hmm_devmem_match(struct device *dev, void *data, void *match_data)
-{
-	struct hmm_devmem *devmem = data;
-
-	return devmem->resource == match_data;
-}
-
-static void hmm_devmem_pages_remove(struct hmm_devmem *devmem)
-{
-	devres_release(devmem->device, &hmm_devmem_release,
-		       &hmm_devmem_match, devmem->resource);
-}
-
 /*
  * hmm_devmem_add() - hotplug ZONE_DEVICE memory for device memory
  *
@@ -1160,8 +1140,7 @@ struct hmm_devmem *hmm_devmem_add(const struct hmm_devmem_ops *ops,
 
 	dev_pagemap_get_ops();
 
-	devmem = devres_alloc_node(&hmm_devmem_release, sizeof(*devmem),
-				   GFP_KERNEL, dev_to_node(device));
+	devmem = devm_kzalloc(device, sizeof(*devmem), GFP_KERNEL);
 	if (!devmem)
 		return ERR_PTR(-ENOMEM);
 
@@ -1175,11 +1154,11 @@ struct hmm_devmem *hmm_devmem_add(const struct hmm_devmem_ops *ops,
 	ret = percpu_ref_init(&devmem->ref, &hmm_devmem_ref_release,
 			      0, GFP_KERNEL);
 	if (ret)
-		goto error_percpu_ref;
+		return ERR_PTR(ret);
 
-	ret = devm_add_action(device, hmm_devmem_ref_exit, &devmem->ref);
+	ret = devm_add_action_or_reset(device, hmm_devmem_ref_exit, &devmem->ref);
 	if (ret)
-		goto error_devm_add_action;
+		return ERR_PTR(ret);
 
 	size = ALIGN(size, PA_SECTION_SIZE);
 	addr = min((unsigned long)iomem_resource.end,
@@ -1199,16 +1178,12 @@ struct hmm_devmem *hmm_devmem_add(const struct hmm_devmem_ops *ops,
 
 		devmem->resource = devm_request_mem_region(device, addr, size,
 							   dev_name(device));
-		if (!devmem->resource) {
-			ret = -ENOMEM;
-			goto error_no_resource;
-		}
+		if (!devmem->resource)
+			return ERR_PTR(-ENOMEM);
 		break;
 	}
-	if (!devmem->resource) {
-		ret = -ERANGE;
-		goto error_no_resource;
-	}
+	if (!devmem->resource)
+		return ERR_PTR(-ERANGE);
 
 	devmem->resource->desc = IORES_DESC_DEVICE_PRIVATE_MEMORY;
 	devmem->pfn_first = devmem->resource->start >> PAGE_SHIFT;
@@ -1217,28 +1192,13 @@ struct hmm_devmem *hmm_devmem_add(const struct hmm_devmem_ops *ops,
 
 	ret = hmm_devmem_pages_create(devmem);
 	if (ret)
-		goto error_pages;
-
-	devres_add(device, devmem);
+		return ERR_PTR(ret);
 
-	ret = devm_add_action(device, hmm_devmem_ref_kill, &devmem->ref);
-	if (ret) {
-		hmm_devmem_remove(devmem);
+	ret = devm_add_action_or_reset(device, hmm_devmem_release, devmem);
+	if (ret)
 		return ERR_PTR(ret);
-	}
 
 	return devmem;
-
-error_pages:
-	devm_release_mem_region(device, devmem->resource->start,
-				resource_size(devmem->resource));
-error_no_resource:
-error_devm_add_action:
-	hmm_devmem_ref_kill(&devmem->ref);
-	hmm_devmem_ref_exit(&devmem->ref);
-error_percpu_ref:
-	devres_free(devmem);
-	return ERR_PTR(ret);
 }
 EXPORT_SYMBOL(hmm_devmem_add);
 
@@ -1254,8 +1214,7 @@ struct hmm_devmem *hmm_devmem_add_resource(const struct hmm_devmem_ops *ops,
 
 	dev_pagemap_get_ops();
 
-	devmem = devres_alloc_node(&hmm_devmem_release, sizeof(*devmem),
-				   GFP_KERNEL, dev_to_node(device));
+	devmem = devm_kzalloc(device, sizeof(*devmem), GFP_KERNEL);
 	if (!devmem)
 		return ERR_PTR(-ENOMEM);
 
@@ -1269,12 +1228,12 @@ struct hmm_devmem *hmm_devmem_add_resource(const struct hmm_devmem_ops *ops,
 	ret = percpu_ref_init(&devmem->ref, &hmm_devmem_ref_release,
 			      0, GFP_KERNEL);
 	if (ret)
-		goto error_percpu_ref;
+		return ERR_PTR(ret);
 
-	ret = devm_add_action(device, hmm_devmem_ref_exit, &devmem->ref);
+	ret = devm_add_action_or_reset(device, hmm_devmem_ref_exit,
+			&devmem->ref);
 	if (ret)
-		goto error_devm_add_action;
-
+		return ERR_PTR(ret);
 
 	devmem->pfn_first = devmem->resource->start >> PAGE_SHIFT;
 	devmem->pfn_last = devmem->pfn_first +
@@ -1282,60 +1241,22 @@ struct hmm_devmem *hmm_devmem_add_resource(const struct hmm_devmem_ops *ops,
 
 	ret = hmm_devmem_pages_create(devmem);
 	if (ret)
-		goto error_devm_add_action;
+		return ERR_PTR(ret);
 
-	devres_add(device, devmem);
+	ret = devm_add_action_or_reset(device, hmm_devmem_release, devmem);
+	if (ret)
+		return ERR_PTR(ret);
 
-	ret = devm_add_action(device, hmm_devmem_ref_kill, &devmem->ref);
-	if (ret) {
-		hmm_devmem_remove(devmem);
+	ret = devm_add_action_or_reset(device, hmm_devmem_ref_kill,
+			&devmem->ref);
+	if (ret)
 		return ERR_PTR(ret);
-	}
 
 	return devmem;
-
-error_devm_add_action:
-	hmm_devmem_ref_kill(&devmem->ref);
-	hmm_devmem_ref_exit(&devmem->ref);
-error_percpu_ref:
-	devres_free(devmem);
-	return ERR_PTR(ret);
 }
 EXPORT_SYMBOL(hmm_devmem_add_resource);
 
 /*
- * hmm_devmem_remove() - remove device memory (kill and free ZONE_DEVICE)
- *
- * @devmem: hmm_devmem struct use to track and manage the ZONE_DEVICE memory
- *
- * This will hot-unplug memory that was hotplugged by hmm_devmem_add on behalf
- * of the device driver. It will free struct page and remove the resource that
- * reserved the physical address range for this device memory.
- */
-void hmm_devmem_remove(struct hmm_devmem *devmem)
-{
-	resource_size_t start, size;
-	struct device *device;
-	bool cdm = false;
-
-	if (!devmem)
-		return;
-
-	device = devmem->device;
-	start = devmem->resource->start;
-	size = resource_size(devmem->resource);
-
-	cdm = devmem->resource->desc == IORES_DESC_DEVICE_PUBLIC_MEMORY;
-	hmm_devmem_ref_kill(&devmem->ref);
-	hmm_devmem_ref_exit(&devmem->ref);
-	hmm_devmem_pages_remove(devmem);
-
-	if (!cdm)
-		devm_release_mem_region(device, start, size);
-}
-EXPORT_SYMBOL(hmm_devmem_remove);
-
-/*
  * A device driver that wants to handle multiple devices memory through a
  * single fake device can use hmm_device to do so. This is purely a helper
  * and it is not needed to make use of any HMM functionality.



* [PATCH v3 6/8] mm, hmm: Replace hmm_devmem_pages_create() with devm_memremap_pages()
  2018-06-19  6:04 [PATCH v3 0/8] mm: Rework hmm to use devm_memremap_pages and other fixes Dan Williams
                   ` (4 preceding siblings ...)
  2018-06-19  6:05 ` [PATCH v3 5/8] mm, hmm: Use devm semantics for hmm_devmem_{add, remove} Dan Williams
@ 2018-06-19  6:05 ` Dan Williams
  2018-06-19  6:05 ` [PATCH v3 7/8] mm, hmm: Mark hmm_devmem_{add, add_resource} EXPORT_SYMBOL_GPL Dan Williams
  2018-06-19  6:05 ` [PATCH v3 8/8] mm: Fix exports that inadvertently make put_page() EXPORT_SYMBOL_GPL Dan Williams
  7 siblings, 0 replies; 14+ messages in thread
From: Dan Williams @ 2018-06-19  6:05 UTC
  To: akpm
  Cc: Christoph Hellwig, Jérôme Glisse, Logan Gunthorpe,
	linux-mm, linux-kernel

Commit e8d513483300 "memremap: change devm_memremap_pages interface to
use struct dev_pagemap" refactored devm_memremap_pages() to allow a
dev_pagemap instance to be supplied. Passing in a dev_pagemap interface
simplifies the design of pgmap type drivers in that they can rely on
container_of() to lookup any private data associated with the given
dev_pagemap instance.

In addition to the cleanups, this also gives HMM users the
multi-order-radix improvements that arrived with commit ab1b597ee0e4
("mm, devm_memremap_pages: use multi-order radix for ZONE_DEVICE
lookups").

As part of the conversion to the devm_memremap_pages() method of
handling the percpu_ref relative to when pages are put, the percpu_ref
completion needs to move to hmm_devmem_ref_exit(). See commit
71389703839e ("mm, zone_device: Replace {get, put}_zone_device_page...")
for details.
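
The container_of() pattern referenced above, sketched with a
hypothetical wrapper (ZONE_DEVICE pages carry a page->pgmap
back-pointer to their dev_pagemap):

#include <linux/completion.h>
#include <linux/memremap.h>
#include <linux/mm_types.h>

struct foo {
	struct completion completion;
	struct dev_pagemap pagemap;	/* embedded, not pointed-to */
};

static void foo_page_free(struct page *page, void *data)
{
	/* recover the driver-private state from the page's pagemap */
	struct foo *foo = container_of(page->pgmap, struct foo, pagemap);

	complete(&foo->completion);
}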

Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: "Jérôme Glisse" <jglisse@redhat.com>
Cc: Logan Gunthorpe <logang@deltatee.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 mm/hmm.c |  198 ++++++++------------------------------------------------------
 1 file changed, 26 insertions(+), 172 deletions(-)

diff --git a/mm/hmm.c b/mm/hmm.c
index d65a9419dbc2..b019d67a610e 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -918,7 +918,6 @@ struct page *hmm_vma_alloc_locked_page(struct vm_area_struct *vma,
 }
 EXPORT_SYMBOL(hmm_vma_alloc_locked_page);
 
-
 static void hmm_devmem_ref_release(struct percpu_ref *ref)
 {
 	struct hmm_devmem *devmem;
@@ -933,17 +932,16 @@ static void hmm_devmem_ref_exit(void *data)
 	struct hmm_devmem *devmem;
 
 	devmem = container_of(ref, struct hmm_devmem, ref);
+	wait_for_completion(&devmem->completion);
 	percpu_ref_exit(ref);
 }
 
-static void hmm_devmem_ref_kill(void *data)
+static void hmm_devmem_ref_kill(struct percpu_ref *ref)
 {
-	struct percpu_ref *ref = data;
 	struct hmm_devmem *devmem;
 
 	devmem = container_of(ref, struct hmm_devmem, ref);
 	percpu_ref_kill(ref);
-	wait_for_completion(&devmem->completion);
 }
 
 static int hmm_devmem_fault(struct vm_area_struct *vma,
@@ -964,155 +962,6 @@ static void hmm_devmem_free(struct page *page, void *data)
 	devmem->ops->free(devmem, page);
 }
 
-static DEFINE_MUTEX(hmm_devmem_lock);
-static RADIX_TREE(hmm_devmem_radix, GFP_KERNEL);
-
-static void hmm_devmem_radix_release(struct resource *resource)
-{
-	resource_size_t key, align_start, align_size;
-
-	align_start = resource->start & ~(PA_SECTION_SIZE - 1);
-	align_size = ALIGN(resource_size(resource), PA_SECTION_SIZE);
-
-	mutex_lock(&hmm_devmem_lock);
-	for (key = resource->start;
-	     key <= resource->end;
-	     key += PA_SECTION_SIZE)
-		radix_tree_delete(&hmm_devmem_radix, key >> PA_SECTION_SHIFT);
-	mutex_unlock(&hmm_devmem_lock);
-}
-
-static void hmm_devmem_release(void *data)
-{
-	struct hmm_devmem *devmem = data;
-	struct resource *resource = devmem->resource;
-	unsigned long start_pfn, npages;
-	struct zone *zone;
-	struct page *page;
-
-	/* pages are dead and unused, undo the arch mapping */
-	start_pfn = (resource->start & ~(PA_SECTION_SIZE - 1)) >> PAGE_SHIFT;
-	npages = ALIGN(resource_size(resource), PA_SECTION_SIZE) >> PAGE_SHIFT;
-
-	page = pfn_to_page(start_pfn);
-	zone = page_zone(page);
-
-	mem_hotplug_begin();
-	if (resource->desc == IORES_DESC_DEVICE_PRIVATE_MEMORY)
-		__remove_pages(zone, start_pfn, npages, NULL);
-	else
-		arch_remove_memory(start_pfn << PAGE_SHIFT,
-				   npages << PAGE_SHIFT, NULL);
-	mem_hotplug_done();
-
-	hmm_devmem_radix_release(resource);
-}
-
-static int hmm_devmem_pages_create(struct hmm_devmem *devmem)
-{
-	resource_size_t key, align_start, align_size, align_end;
-	struct device *device = devmem->device;
-	int ret, nid, is_ram;
-	unsigned long pfn;
-
-	align_start = devmem->resource->start & ~(PA_SECTION_SIZE - 1);
-	align_size = ALIGN(devmem->resource->start +
-			   resource_size(devmem->resource),
-			   PA_SECTION_SIZE) - align_start;
-
-	is_ram = region_intersects(align_start, align_size,
-				   IORESOURCE_SYSTEM_RAM,
-				   IORES_DESC_NONE);
-	if (is_ram == REGION_MIXED) {
-		WARN_ONCE(1, "%s attempted on mixed region %pr\n",
-				__func__, devmem->resource);
-		return -ENXIO;
-	}
-	if (is_ram == REGION_INTERSECTS)
-		return -ENXIO;
-
-	if (devmem->resource->desc == IORES_DESC_DEVICE_PUBLIC_MEMORY)
-		devmem->pagemap.type = MEMORY_DEVICE_PUBLIC;
-	else
-		devmem->pagemap.type = MEMORY_DEVICE_PRIVATE;
-
-	devmem->pagemap.res = *devmem->resource;
-	devmem->pagemap.page_fault = hmm_devmem_fault;
-	devmem->pagemap.page_free = hmm_devmem_free;
-	devmem->pagemap.dev = devmem->device;
-	devmem->pagemap.ref = &devmem->ref;
-	devmem->pagemap.data = devmem;
-
-	mutex_lock(&hmm_devmem_lock);
-	align_end = align_start + align_size - 1;
-	for (key = align_start; key <= align_end; key += PA_SECTION_SIZE) {
-		struct hmm_devmem *dup;
-
-		dup = radix_tree_lookup(&hmm_devmem_radix,
-					key >> PA_SECTION_SHIFT);
-		if (dup) {
-			dev_err(device, "%s: collides with mapping for %s\n",
-				__func__, dev_name(dup->device));
-			mutex_unlock(&hmm_devmem_lock);
-			ret = -EBUSY;
-			goto error;
-		}
-		ret = radix_tree_insert(&hmm_devmem_radix,
-					key >> PA_SECTION_SHIFT,
-					devmem);
-		if (ret) {
-			dev_err(device, "%s: failed: %d\n", __func__, ret);
-			mutex_unlock(&hmm_devmem_lock);
-			goto error_radix;
-		}
-	}
-	mutex_unlock(&hmm_devmem_lock);
-
-	nid = dev_to_node(device);
-	if (nid < 0)
-		nid = numa_mem_id();
-
-	mem_hotplug_begin();
-	/*
-	 * For device private memory we call add_pages() as we only need to
-	 * allocate and initialize struct page for the device memory. More-
-	 * over the device memory is un-accessible thus we do not want to
-	 * create a linear mapping for the memory like arch_add_memory()
-	 * would do.
-	 *
-	 * For device public memory, which is accesible by the CPU, we do
-	 * want the linear mapping and thus use arch_add_memory().
-	 */
-	if (devmem->pagemap.type == MEMORY_DEVICE_PUBLIC)
-		ret = arch_add_memory(nid, align_start, align_size, NULL,
-				false);
-	else
-		ret = add_pages(nid, align_start >> PAGE_SHIFT,
-				align_size >> PAGE_SHIFT, NULL, false);
-	if (ret) {
-		mem_hotplug_done();
-		goto error_add_memory;
-	}
-	move_pfn_range_to_zone(&NODE_DATA(nid)->node_zones[ZONE_DEVICE],
-				align_start >> PAGE_SHIFT,
-				align_size >> PAGE_SHIFT, NULL);
-	mem_hotplug_done();
-
-	for (pfn = devmem->pfn_first; pfn < devmem->pfn_last; pfn++) {
-		struct page *page = pfn_to_page(pfn);
-
-		page->pgmap = &devmem->pagemap;
-	}
-	return 0;
-
-error_add_memory:
-	untrack_pfn(NULL, PHYS_PFN(align_start), align_size);
-error_radix:
-	hmm_devmem_radix_release(devmem->resource);
-error:
-	return ret;
-}
-
 /*
  * hmm_devmem_add() - hotplug ZONE_DEVICE memory for device memory
  *
@@ -1136,6 +985,7 @@ struct hmm_devmem *hmm_devmem_add(const struct hmm_devmem_ops *ops,
 {
 	struct hmm_devmem *devmem;
 	resource_size_t addr;
+	void *result;
 	int ret;
 
 	dev_pagemap_get_ops();
@@ -1190,14 +1040,18 @@ struct hmm_devmem *hmm_devmem_add(const struct hmm_devmem_ops *ops,
 	devmem->pfn_last = devmem->pfn_first +
 			   (resource_size(devmem->resource) >> PAGE_SHIFT);
 
-	ret = hmm_devmem_pages_create(devmem);
-	if (ret)
-		return ERR_PTR(ret);
-
-	ret = devm_add_action_or_reset(device, hmm_devmem_release, devmem);
-	if (ret)
-		return ERR_PTR(ret);
+	devmem->pagemap.type = MEMORY_DEVICE_PRIVATE;
+	devmem->pagemap.res = *devmem->resource;
+	devmem->pagemap.page_fault = hmm_devmem_fault;
+	devmem->pagemap.page_free = hmm_devmem_free;
+	devmem->pagemap.altmap_valid = false;
+	devmem->pagemap.ref = &devmem->ref;
+	devmem->pagemap.data = devmem;
 
+	result = devm_memremap_pages(devmem->device, &devmem->pagemap,
+			hmm_devmem_ref_kill);
+	if (IS_ERR(result))
+		return result;
 	return devmem;
 }
 EXPORT_SYMBOL(hmm_devmem_add);
@@ -1207,6 +1061,7 @@ struct hmm_devmem *hmm_devmem_add_resource(const struct hmm_devmem_ops *ops,
 					   struct resource *res)
 {
 	struct hmm_devmem *devmem;
+	void *result;
 	int ret;
 
 	if (res->desc != IORES_DESC_DEVICE_PUBLIC_MEMORY)
@@ -1239,19 +1094,18 @@ struct hmm_devmem *hmm_devmem_add_resource(const struct hmm_devmem_ops *ops,
 	devmem->pfn_last = devmem->pfn_first +
 			   (resource_size(devmem->resource) >> PAGE_SHIFT);
 
-	ret = hmm_devmem_pages_create(devmem);
-	if (ret)
-		return ERR_PTR(ret);
-
-	ret = devm_add_action_or_reset(device, hmm_devmem_release, devmem);
-	if (ret)
-		return ERR_PTR(ret);
-
-	ret = devm_add_action_or_reset(device, hmm_devmem_ref_kill,
-			&devmem->ref);
-	if (ret)
-		return ERR_PTR(ret);
+	devmem->pagemap.type = MEMORY_DEVICE_PUBLIC;
+	devmem->pagemap.res = *devmem->resource;
+	devmem->pagemap.page_fault = hmm_devmem_fault;
+	devmem->pagemap.page_free = hmm_devmem_free;
+	devmem->pagemap.altmap_valid = false;
+	devmem->pagemap.ref = &devmem->ref;
+	devmem->pagemap.data = devmem;
 
+	result = devm_memremap_pages(devmem->device, &devmem->pagemap,
+			hmm_devmem_ref_kill);
+	if (IS_ERR(result))
+		return result;
 	return devmem;
 }
 EXPORT_SYMBOL(hmm_devmem_add_resource);



* [PATCH v3 7/8] mm, hmm: Mark hmm_devmem_{add, add_resource} EXPORT_SYMBOL_GPL
  2018-06-19  6:04 [PATCH v3 0/8] mm: Rework hmm to use devm_memremap_pages and other fixes Dan Williams
                   ` (5 preceding siblings ...)
  2018-06-19  6:05 ` [PATCH v3 6/8] mm, hmm: Replace hmm_devmem_pages_create() with devm_memremap_pages() Dan Williams
@ 2018-06-19  6:05 ` Dan Williams
  2018-07-06 23:53   ` Dan Williams
  2018-06-19  6:05 ` [PATCH v3 8/8] mm: Fix exports that inadvertently make put_page() EXPORT_SYMBOL_GPL Dan Williams
  7 siblings, 1 reply; 14+ messages in thread
From: Dan Williams @ 2018-06-19  6:05 UTC
  To: akpm
  Cc: Jérôme Glisse, Logan Gunthorpe, Christoph Hellwig,
	linux-mm, linux-kernel

The routines hmm_devmem_add() and hmm_devmem_add_resource() are
now wrappers around the functionality provided by devm_memremap_pages() to
inject a dev_pagemap instance and hook page-idle events. The
devm_memremap_pages() interface is base infrastructure for HMM which has
more and deeper ties into the kernel memory management implementation
than base ZONE_DEVICE.

Originally, the HMM page structure creation routines copied the
devm_memremap_pages() code and reused ZONE_DEVICE. A cleanup to unify
the implementations was discussed during the initial review:
http://lkml.iu.edu/hypermail/linux/kernel/1701.2/00812.html

Given that devm_memremap_pages() is marked EXPORT_SYMBOL_GPL by its
authors and the hmm_devmem_{add,add_resource} routines are simple
wrappers around that base, mark these routines as EXPORT_SYMBOL_GPL as
well.

Cc: "Jérôme Glisse" <jglisse@redhat.com>
Cc: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 mm/hmm.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/hmm.c b/mm/hmm.c
index b019d67a610e..481a7a5f6f46 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -1054,7 +1054,7 @@ struct hmm_devmem *hmm_devmem_add(const struct hmm_devmem_ops *ops,
 		return result;
 	return devmem;
 }
-EXPORT_SYMBOL(hmm_devmem_add);
+EXPORT_SYMBOL_GPL(hmm_devmem_add);
 
 struct hmm_devmem *hmm_devmem_add_resource(const struct hmm_devmem_ops *ops,
 					   struct device *device,
@@ -1108,7 +1108,7 @@ struct hmm_devmem *hmm_devmem_add_resource(const struct hmm_devmem_ops *ops,
 		return result;
 	return devmem;
 }
-EXPORT_SYMBOL(hmm_devmem_add_resource);
+EXPORT_SYMBOL_GPL(hmm_devmem_add_resource);
 
 /*
  * A device driver that wants to handle multiple devices memory through a



* [PATCH v3 8/8] mm: Fix exports that inadvertently make put_page() EXPORT_SYMBOL_GPL
  2018-06-19  6:04 [PATCH v3 0/8] mm: Rework hmm to use devm_memremap_pages and other fixes Dan Williams
                   ` (6 preceding siblings ...)
  2018-06-19  6:05 ` [PATCH v3 7/8] mm, hmm: Mark hmm_devmem_{add, add_resource} EXPORT_SYMBOL_GPL Dan Williams
@ 2018-06-19  6:05 ` Dan Williams
  2018-06-19  6:59   ` John Hubbard
  7 siblings, 1 reply; 14+ messages in thread
From: Dan Williams @ 2018-06-19  6:05 UTC
  To: akpm; +Cc: Joe Gorse, John Hubbard, hch, linux-mm, linux-kernel

Now that all producers of dev_pagemap instances in the kernel are
properly converted to EXPORT_SYMBOL_GPL, fix up implicit consumers that
interact with dev_pagemap owners via put_page(). To reiterate,
dev_pagemap producers are EXPORT_SYMBOL_GPL because they adopt and
modify core memory management interfaces such that the dev_pagemap owner
can interact with all other kernel infrastructure and sub-systems
(drivers, filesystems, etc...) that consume page structures.
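
For context, put_page() reaches these symbols roughly as sketched
below (simplified from the 4.18-era include/linux/mm.h; not part of
this patch):

/* Simplified sketch of the inline fast path in include/linux/mm.h. */
static inline void put_page(struct page *page)
{
	page = compound_head(page);

	/*
	 * put_page() is inlined into every module that drops a page
	 * reference, so devmap_managed_key and
	 * __put_devmap_managed_page() (reached via
	 * put_devmap_managed_page()) must be plain EXPORT_SYMBOL.
	 */
	if (put_devmap_managed_page(page))
		return;

	if (put_page_testzero(page))
		__put_page(page);
}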

Fixes: e76384884344 ("mm: introduce MEMORY_DEVICE_FS_DAX and CONFIG_DEV_PAGEMAP_OPS")
Reported-by: Joe Gorse <jhgorse@gmail.com>
Reported-by: John Hubbard <jhubbard@nvidia.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 kernel/memremap.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/memremap.c b/kernel/memremap.c
index 16141b608b63..ecee37b44aa1 100644
--- a/kernel/memremap.c
+++ b/kernel/memremap.c
@@ -330,7 +330,7 @@ EXPORT_SYMBOL_GPL(get_dev_pagemap);
 
 #ifdef CONFIG_DEV_PAGEMAP_OPS
 DEFINE_STATIC_KEY_FALSE(devmap_managed_key);
-EXPORT_SYMBOL_GPL(devmap_managed_key);
+EXPORT_SYMBOL(devmap_managed_key);
 static atomic_t devmap_enable;
 
 /*
@@ -371,5 +371,5 @@ void __put_devmap_managed_page(struct page *page)
 	} else if (!count)
 		__put_page(page);
 }
-EXPORT_SYMBOL_GPL(__put_devmap_managed_page);
+EXPORT_SYMBOL(__put_devmap_managed_page);
 #endif /* CONFIG_DEV_PAGEMAP_OPS */



* Re: [PATCH v3 8/8] mm: Fix exports that inadvertently make put_page() EXPORT_SYMBOL_GPL
  2018-06-19  6:05 ` [PATCH v3 8/8] mm: Fix exports that inadvertently make put_page() EXPORT_SYMBOL_GPL Dan Williams
@ 2018-06-19  6:59   ` John Hubbard
  0 siblings, 0 replies; 14+ messages in thread
From: John Hubbard @ 2018-06-19  6:59 UTC
  To: Dan Williams, akpm; +Cc: Joe Gorse, hch, linux-mm, linux-kernel

On 06/18/2018 11:05 PM, Dan Williams wrote:
> Now that all producers of dev_pagemap instances in the kernel are
> properly converted to EXPORT_SYMBOL_GPL, fix up implicit consumers that
> interact with dev_pagemap owners via put_page(). To reiterate,
> dev_pagemap producers are EXPORT_SYMBOL_GPL because they adopt and
> modify core memory management interfaces such that the dev_pagemap owner
> can interact with all other kernel infrastructure and sub-systems
> (drivers, filesystems, etc...) that consume page structures.
> 
> Fixes: e76384884344 ("mm: introduce MEMORY_DEVICE_FS_DAX and CONFIG_DEV_PAGEMAP_OPS")
> Reported-by: Joe Gorse <jhgorse@gmail.com>
> Reported-by: John Hubbard <jhubbard@nvidia.com>
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> ---
>  kernel/memremap.c |    4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/memremap.c b/kernel/memremap.c
> index 16141b608b63..ecee37b44aa1 100644
> --- a/kernel/memremap.c
> +++ b/kernel/memremap.c
> @@ -330,7 +330,7 @@ EXPORT_SYMBOL_GPL(get_dev_pagemap);
>  
>  #ifdef CONFIG_DEV_PAGEMAP_OPS
>  DEFINE_STATIC_KEY_FALSE(devmap_managed_key);
> -EXPORT_SYMBOL_GPL(devmap_managed_key);
> +EXPORT_SYMBOL(devmap_managed_key);
>  static atomic_t devmap_enable;
>  
>  /*
> @@ -371,5 +371,5 @@ void __put_devmap_managed_page(struct page *page)
>  	} else if (!count)
>  		__put_page(page);
>  }
> -EXPORT_SYMBOL_GPL(__put_devmap_managed_page);
> +EXPORT_SYMBOL(__put_devmap_managed_page);
>  #endif /* CONFIG_DEV_PAGEMAP_OPS */
> 

Yep, that fixes everything I was seeing.

thanks,
-- 
John Hubbard
NVIDIA


* Re: [PATCH v3 3/8] mm, devm_memremap_pages: Fix shutdown handling
  2018-06-19  6:04 ` [PATCH v3 3/8] mm, devm_memremap_pages: Fix shutdown handling Dan Williams
@ 2018-06-19 16:00   ` Logan Gunthorpe
  0 siblings, 0 replies; 14+ messages in thread
From: Logan Gunthorpe @ 2018-06-19 16:00 UTC
  To: Dan Williams, akpm
  Cc: stable, Christoph Hellwig, Jérôme Glisse, linux-mm,
	linux-kernel



On 19/06/18 12:04 AM, Dan Williams wrote:
> Cc: <stable@vger.kernel.org>
> Fixes: e8d513483300 ("memremap: change devm_memremap_pages interface...")
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: "Jérôme Glisse" <jglisse@redhat.com>
> Reported-by: Logan Gunthorpe <logang@deltatee.com>
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>

Looks good to me.

Reviewed-by: Logan Gunthorpe <logang@deltatee.com>



* Re: [PATCH v3 7/8] mm, hmm: Mark hmm_devmem_{add, add_resource} EXPORT_SYMBOL_GPL
  2018-06-19  6:05 ` [PATCH v3 7/8] mm, hmm: Mark hmm_devmem_{add, add_resource} EXPORT_SYMBOL_GPL Dan Williams
@ 2018-07-06 23:53   ` Dan Williams
  2018-07-10  0:34     ` Andrew Morton
  0 siblings, 1 reply; 14+ messages in thread
From: Dan Williams @ 2018-07-06 23:53 UTC
  To: Andrew Morton
  Cc: Jérôme Glisse, Logan Gunthorpe, Christoph Hellwig,
	Linux MM, Linux Kernel Mailing List

On Mon, Jun 18, 2018 at 11:05 PM, Dan Williams <dan.j.williams@intel.com> wrote:
> The routines hmm_devmem_add(), and hmm_devmem_add_resource() are
> now wrappers around the functionality provided by devm_memremap_pages() to
> inject a dev_pagemap instance and hook page-idle events. The
> devm_memremap_pages() interface is base infrastructure for HMM which has
> more and deeper ties into the kernel memory management implementation
> than base ZONE_DEVICE.
>
> Originally, the HMM page structure creation routines copied the
> devm_memremap_pages() code and reused ZONE_DEVICE. A cleanup to unify
> the implementations was discussed during the initial review:
> http://lkml.iu.edu/hypermail/linux/kernel/1701.2/00812.html
>
> Given that devm_memremap_pages() is marked EXPORT_SYMBOL_GPL by its
> authors and the hmm_devmem_{add,add_resource} routines are simple
> wrappers around that base, mark these routines as EXPORT_SYMBOL_GPL as
> well.
>
> Cc: "Jérôme Glisse" <jglisse@redhat.com>
> Cc: Logan Gunthorpe <logang@deltatee.com>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>

Currently OpenAFS is blocked from compiling with the 4.18 series due
to the current state of put_page() inadvertently pulling in GPL-only
symbols. This series, "[PATCH v3 0/8] mm: Rework hmm to use
devm_memremap_pages and other fixes", corrects that situation and
fixes HMM's usage of EXPORT_SYMBOL_GPL.

If HMM wants to export functionality to out-of-tree proprietary
drivers it should do so without consuming GPL-only exports, or
consuming internal-only public functions in its exports.

In addition to duplicating devm_memremap_pages(), which should have
been EXPORT_SYMBOL_GPL from the beginning, HMM is also exporting /
consuming these GPL-only symbols via its EXPORT_SYMBOL entry points.

    mmu_notifier_unregister_no_release
    percpu_ref
    region_intersects
    __class_create

Those entry points also consume / export functionality that is
currently not exported to any other driver.

    alloc_pages_vma
    walk_page_range

Andrew, please consider applying this v3 series to fix this up (let me
know if you need a resend).


* Re: [PATCH v3 7/8] mm, hmm: Mark hmm_devmem_{add, add_resource} EXPORT_SYMBOL_GPL
  2018-07-06 23:53   ` Dan Williams
@ 2018-07-10  0:34     ` Andrew Morton
  2018-07-10 17:11       ` Jerome Glisse
  0 siblings, 1 reply; 14+ messages in thread
From: Andrew Morton @ 2018-07-10  0:34 UTC
  To: Dan Williams
  Cc: Jérôme Glisse, Logan Gunthorpe, Christoph Hellwig,
	Linux MM, Linux Kernel Mailing List

On Fri, 6 Jul 2018 16:53:11 -0700 Dan Williams <dan.j.williams@intel.com> wrote:

> On Mon, Jun 18, 2018 at 11:05 PM, Dan Williams <dan.j.williams@intel.com> wrote:
> > The routines hmm_devmem_add(), and hmm_devmem_add_resource() are
> > now wrappers around the functionality provided by devm_memremap_pages() to
> > inject a dev_pagemap instance and hook page-idle events. The
> > devm_memremap_pages() interface is base infrastructure for HMM which has
> > more and deeper ties into the kernel memory management implementation
> > than base ZONE_DEVICE.
> >
> > Originally, the HMM page structure creation routines copied the
> > devm_memremap_pages() code and reused ZONE_DEVICE. A cleanup to unify
> > the implementations was discussed during the initial review:
> > http://lkml.iu.edu/hypermail/linux/kernel/1701.2/00812.html
> >
> > Given that devm_memremap_pages() is marked EXPORT_SYMBOL_GPL by its
> > authors and the hmm_devmem_{add,add_resource} routines are simple
> > wrappers around that base, mark these routines as EXPORT_SYMBOL_GPL as
> > well.
> >
> > Cc: "Jérôme Glisse" <jglisse@redhat.com>
> > Cc: Logan Gunthorpe <logang@deltatee.com>
> > Reviewed-by: Christoph Hellwig <hch@lst.de>
> > Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> 
> Currently OpenAFS is blocked from compiling with the 4.18 series due
> to the current state of put_page() inadvertently pulling in GPL-only
> symbols. This series, "PATCH v3 0/8] mm: Rework hmm to use
> devm_memremap_pages and other fixes" corrects that situation and
> corrects HMM's usage of EXPORT_SYMBOL_GPL.
> 
> If HMM wants to export functionality to out-of-tree proprietary
> drivers it should do so without consuming GPL-only exports, or
> consuming internal-only public functions in its exports.
> 
> In addition to duplicating devm_memremap_pages(), that should have
> been EXPORT_SYMBOL_GPL from the beginning, it is also exporting /
> consuming these GPL-only symbols via HMM's EXPORT_SYMBOL entry points.
> 
>     mmu_notifier_unregister_no_release
>     percpu_ref
>     region_intersects
>     __class_create
> 
> Those entry points also consume / export functionality that is
> currently not exported to any other driver.
> 
>     alloc_pages_vma
>     walk_page_range
> 
> Andrew, please consider applying this v3 series to fix this up (let me
> know if you need a resend).

A resend would be good.  And include the above info in the changelog.

I can't say I'm terribly happy with the HMM situation.  I was under the
impression that a significant number of significant in-tree drivers
would be using HMM but I've heard nothing since, apart from ongoing
nouveau work, which will be perfectly happy with GPL-only exports.

So yes, we should revisit the licensing situation and, if only nouveau
will be using HMM we should revisit HMM's overall usefulness.


* Re: [PATCH v3 7/8] mm, hmm: Mark hmm_devmem_{add, add_resource} EXPORT_SYMBOL_GPL
  2018-07-10  0:34     ` Andrew Morton
@ 2018-07-10 17:11       ` Jerome Glisse
  0 siblings, 0 replies; 14+ messages in thread
From: Jerome Glisse @ 2018-07-10 17:11 UTC
  To: Andrew Morton
  Cc: Dan Williams, Logan Gunthorpe, Christoph Hellwig, Linux MM,
	Linux Kernel Mailing List

On Mon, Jul 09, 2018 at 05:34:17PM -0700, Andrew Morton wrote:
> On Fri, 6 Jul 2018 16:53:11 -0700 Dan Williams <dan.j.williams@intel.com> wrote:
> 
> > On Mon, Jun 18, 2018 at 11:05 PM, Dan Williams <dan.j.williams@intel.com> wrote:
> > > The routines hmm_devmem_add(), and hmm_devmem_add_resource() are
> > > now wrappers around the functionality provided by devm_memremap_pages() to
> > > inject a dev_pagemap instance and hook page-idle events. The
> > > devm_memremap_pages() interface is base infrastructure for HMM which has
> > > more and deeper ties into the kernel memory management implementation
> > > than base ZONE_DEVICE.
> > >
> > > Originally, the HMM page structure creation routines copied the
> > > devm_memremap_pages() code and reused ZONE_DEVICE. A cleanup to unify
> > > the implementations was discussed during the initial review:
> > > http://lkml.iu.edu/hypermail/linux/kernel/1701.2/00812.html
> > >
> > > Given that devm_memremap_pages() is marked EXPORT_SYMBOL_GPL by its
> > > authors and the hmm_devmem_{add,add_resource} routines are simple
> > > wrappers around that base, mark these routines as EXPORT_SYMBOL_GPL as
> > > well.
> > >
> > > Cc: "Jérôme Glisse" <jglisse@redhat.com>
> > > Cc: Logan Gunthorpe <logang@deltatee.com>
> > > Reviewed-by: Christoph Hellwig <hch@lst.de>
> > > Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> > 
> > Currently OpenAFS is blocked from compiling with the 4.18 series due
> > to the current state of put_page() inadvertently pulling in GPL-only
> > symbols. This series, "PATCH v3 0/8] mm: Rework hmm to use
> > devm_memremap_pages and other fixes" corrects that situation and
> > corrects HMM's usage of EXPORT_SYMBOL_GPL.
> > 
> > If HMM wants to export functionality to out-of-tree proprietary
> > drivers it should do so without consuming GPL-only exports, or
> > consuming internal-only public functions in its exports.
> > 
> > In addition to duplicating devm_memremap_pages(), that should have
> > been EXPORT_SYMBOL_GPL from the beginning, it is also exporting /
> > consuming these GPL-only symbols via HMM's EXPORT_SYMBOL entry points.
> > 
> >     mmu_notifier_unregister_no_release
> >     percpu_ref
> >     region_intersects
> >     __class_create
> > 
> > Those entry points also consume / export functionality that is
> > currently not exported to any other driver.
> > 
> >     alloc_pages_vma
> >     walk_page_range
> > 
> > Andrew, please consider applying this v3 series to fix this up (let me
> > know if you need a resend).
> 
> A resend would be good.  And include the above info in the changelog.
> 
> I can't say I'm terribly happy with the HMM situation.  I was under the
> impression that a significant number of significant in-tree drivers
> would be using HMM but I've heard nothing since, apart from ongoing
> nouveau work, which will be perfectly happy with GPL-only exports.
> 
> So yes, we should revisit the licensing situation and, if only nouveau
> will be using HMM we should revisit HMM's overall usefulness.

So right now I am working on finishing another version of the nouveau
patchset. Then I will be working on the radeon driver, then on Intel.
I have also been in talks with Mellanox to bring back to life my
mlx5 patchset, which converted ODP to use HMM, so that is also on
the radar. AMD GPU will come next.


The nouveau patchset is taking so long because nouveau has undergone
a massive rewrite of how it manages channels (command queues) and
memory, which was a prerequisite for doing HMM. This rework has been
going upstream since 4.14, piece by piece, and it is still not
finished in 4.18. So work has been going steadily; if people want,
I can point to all the patches.

As this is the DRM subsystem, we also need open source userspace, and
again we have been working on this since last year; this takes time
too. A lot of work has been done. I understand that it is not
necessarily obvious to people who do not follow the mesa, dri-devel,
or nouveau mailing lists.

I am sorry this is taking so long, but resources to work on this are
scarce. Yet this is important work, as new standards developing inside
the C++ committee (everybody loves C++ here, right ;)) and in other
high-level languages will rely on features HMM provides to those
drivers.

Cheers,
Jérôme

