From: Christoph Hellwig <hch@lst.de>
To: Dan Williams <dan.j.williams@intel.com>
Cc: linux-mm@kvack.org, "Jérôme Glisse" <jglisse@redhat.com>,
	linux-nvdimm@lists.01.org
Subject: [PATCH 02/14] mm: optimize dev_pagemap reference counting around get_dev_pagemap
Date: Thu,  7 Dec 2017 07:08:28 -0800
Message-ID: <20171207150840.28409-3-hch@lst.de>
In-Reply-To: <20171207150840.28409-1-hch@lst.de>

Change the calling convention so that get_dev_pagemap always consumes the
previous reference itself, instead of requiring an explicit earlier call to
put_dev_pagemap in the callers.

The callers will still need to put the final reference after finishing the
loop over the pages.
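
For illustration, the caller pattern under the new convention looks roughly
like the sketch below (walk_device_pfns and its loop body are hypothetical
and only illustrate the convention; they are not part of this patch):

    static void walk_device_pfns(unsigned long pfn, unsigned long nr_pages)
    {
            struct dev_pagemap *pgmap = NULL;
            unsigned long i;

            for (i = 0; i < nr_pages; i++, pfn++) {
                    /*
                     * Pass the previous pgmap back in: if @pfn still falls
                     * inside its range it is returned as-is, otherwise the
                     * old reference is dropped and a new pgmap is looked up.
                     */
                    pgmap = get_dev_pagemap(pfn, pgmap);
                    if (!pgmap)
                            break;  /* no ZONE_DEVICE mapping for this pfn */

                    /* ... operate on pfn_to_page(pfn) ... */
            }

            /* put the final reference once, after the loop */
            if (pgmap)
                    put_dev_pagemap(pgmap);
    }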

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 kernel/memremap.c | 17 +++++++++--------
 mm/gup.c          |  7 +++++--
 2 files changed, 14 insertions(+), 10 deletions(-)

diff --git a/kernel/memremap.c b/kernel/memremap.c
index f0b54eca85b0..502fa107a585 100644
--- a/kernel/memremap.c
+++ b/kernel/memremap.c
@@ -506,22 +506,23 @@ struct vmem_altmap *to_vmem_altmap(unsigned long memmap_start)
  * @pfn: page frame number to lookup page_map
  * @pgmap: optional known pgmap that already has a reference
  *
- * @pgmap allows the overhead of a lookup to be bypassed when @pfn lands in the
- * same mapping.
+ * If @pgmap is non-NULL and covers @pfn it will be returned as-is.  If @pgmap
+ * is non-NULL but does not cover @pfn the reference to it will be released.
  */
 struct dev_pagemap *get_dev_pagemap(unsigned long pfn,
 		struct dev_pagemap *pgmap)
 {
-	const struct resource *res = pgmap ? pgmap->res : NULL;
 	resource_size_t phys = PFN_PHYS(pfn);
 
 	/*
-	 * In the cached case we're already holding a live reference so
-	 * we can simply do a blind increment
+	 * In the cached case we're already holding a live reference.
 	 */
-	if (res && phys >= res->start && phys <= res->end) {
-		percpu_ref_get(pgmap->ref);
-		return pgmap;
+	if (pgmap) {
+		const struct resource *res = pgmap ? pgmap->res : NULL;
+
+		if (res && phys >= res->start && phys <= res->end)
+			return pgmap;
+		put_dev_pagemap(pgmap);
 	}
 
 	/* fall back to slow path lookup */
diff --git a/mm/gup.c b/mm/gup.c
index d3fb60e5bfac..9d142eb9e2e9 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1410,7 +1410,6 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
 
 		VM_BUG_ON_PAGE(compound_head(page) != head, page);
 
-		put_dev_pagemap(pgmap);
 		SetPageReferenced(page);
 		pages[*nr] = page;
 		(*nr)++;
@@ -1420,6 +1419,8 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
 	ret = 1;
 
 pte_unmap:
+	if (pgmap)
+		put_dev_pagemap(pgmap);
 	pte_unmap(ptem);
 	return ret;
 }
@@ -1459,10 +1460,12 @@ static int __gup_device_huge(unsigned long pfn, unsigned long addr,
 		SetPageReferenced(page);
 		pages[*nr] = page;
 		get_page(page);
-		put_dev_pagemap(pgmap);
 		(*nr)++;
 		pfn++;
 	} while (addr += PAGE_SIZE, addr != end);
+
+	if (pgmap)
+		put_dev_pagemap(pgmap);
 	return 1;
 }
 
-- 
2.14.2
