* [PATCH 01/17] mm/highmem: Lift memcpy_[to|from]_page and memset_page to core
2020-11-24 6:07 [PATCH 00/17] kmap: Create mem*_page interfaces ira.weiny
@ 2020-11-24 6:07 ` ira.weiny
2020-11-24 14:19 ` Matthew Wilcox
[not found] ` <160648238432.10416.12405581766428273347@jlahtine-mobl.ger.corp.intel.com>
2020-11-24 6:07 ` [PATCH 02/17] drivers/firmware_loader: Use new memcpy_[to|from]_page() ira.weiny
` (16 subsequent siblings)
17 siblings, 2 replies; 32+ messages in thread
From: ira.weiny @ 2020-11-24 6:07 UTC (permalink / raw)
To: Andrew Morton
Cc: Ira Weiny, Dave Hansen, Matthew Wilcox, Christoph Hellwig,
Dan Williams, Al Viro, Eric Biggers, Thomas Gleixner,
Luis Chamberlain, Patrik Jakobsson, Jani Nikula, Joonas Lahtinen,
Rodrigo Vivi, David Howells, Chris Mason, Josef Bacik,
David Sterba, Steve French, Jaegeuk Kim, Chao Yu, Nicolas Pitre,
Martin K. Petersen, Brian King, Greg Kroah-Hartman,
Alexei Starovoitov, Daniel Borkmann, Jérôme Glisse,
Kirti Wankhede, linux-kernel, linux-fsdevel
From: Ira Weiny <ira.weiny@intel.com>
Working through a conversion to a call such as kmap_thread() revealed
many places where the pattern kmap/memcpy/kunmap occurred.
Eric Biggers, Matthew Wilcox, Christoph Hellwig, Dan Williams, and Al
Viro all suggested putting this code into helper functions. Al Viro
further pointed out that these functions already existed in the iov_iter
code.[1]
Placing these functions in 'highmem.h' is suboptimal, especially with the
changes being proposed to the functionality of kmap. From a caller's
perspective, including 'highmem.h' implies that the functions defined in
that header are only required when highmem is in use, which is
increasingly not the case on modern processors. Some headers, like
mm.h or string.h, seem OK but don't really convey the functionality
well. 'pagemap.h', on the other hand, makes sense and is already
included in many of the places we want to convert.
Another alternative would be to create a new header for the promoted
memcpy functions, but it masks the fact that these are designed to copy
to/from pages using the kernel direct mappings and complicates matters
with a new header.
Lift memcpy_to_page(), memcpy_from_page(), and memzero_page() to
pagemap.h.
Also, add memcpy_page(), memmove_page(), and memset_page() to cover more
kmap/mem*/kunmap patterns.
[1] https://lore.kernel.org/lkml/20201013200149.GI3576660@ZenIV.linux.org.uk/
https://lore.kernel.org/lkml/20201013112544.GA5249@infradead.org/
Cc: Dave Hansen <dave.hansen@intel.com>
Suggested-by: Matthew Wilcox <willy@infradead.org>
Suggested-by: Christoph Hellwig <hch@infradead.org>
Suggested-by: Dan Williams <dan.j.williams@intel.com>
Suggested-by: Al Viro <viro@zeniv.linux.org.uk>
Suggested-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
include/linux/pagemap.h | 49 +++++++++++++++++++++++++++++++++++++++++
lib/iov_iter.c | 21 ------------------
2 files changed, 49 insertions(+), 21 deletions(-)
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index c77b7c31b2e4..82a0af6bc843 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -1028,4 +1028,53 @@ unsigned int i_blocks_per_page(struct inode *inode, struct page *page)
{
return thp_size(page) >> inode->i_blkbits;
}
+
+static inline void memcpy_page(struct page *dst_page, size_t dst_off,
+ struct page *src_page, size_t src_off,
+ size_t len)
+{
+ char *dst = kmap_atomic(dst_page);
+ char *src = kmap_atomic(src_page);
+ memcpy(dst + dst_off, src + src_off, len);
+ kunmap_atomic(src);
+ kunmap_atomic(dst);
+}
+
+static inline void memmove_page(struct page *dst_page, size_t dst_off,
+ struct page *src_page, size_t src_off,
+ size_t len)
+{
+ char *dst = kmap_atomic(dst_page);
+ char *src = kmap_atomic(src_page);
+ memmove(dst + dst_off, src + src_off, len);
+ kunmap_atomic(src);
+ kunmap_atomic(dst);
+}
+
+static inline void memcpy_from_page(char *to, struct page *page, size_t offset, size_t len)
+{
+ char *from = kmap_atomic(page);
+ memcpy(to, from + offset, len);
+ kunmap_atomic(from);
+}
+
+static inline void memcpy_to_page(struct page *page, size_t offset, const char *from, size_t len)
+{
+ char *to = kmap_atomic(page);
+ memcpy(to + offset, from, len);
+ kunmap_atomic(to);
+}
+
+static inline void memset_page(struct page *page, int val, size_t offset, size_t len)
+{
+ char *addr = kmap_atomic(page);
+ memset(addr + offset, val, len);
+ kunmap_atomic(addr);
+}
+
+static inline void memzero_page(struct page *page, size_t offset, size_t len)
+{
+ memset_page(page, 0, offset, len);
+}
+
#endif /* _LINUX_PAGEMAP_H */
diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index 1635111c5bd2..2439a8b4f0d2 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -466,27 +466,6 @@ void iov_iter_init(struct iov_iter *i, unsigned int direction,
}
EXPORT_SYMBOL(iov_iter_init);
-static void memcpy_from_page(char *to, struct page *page, size_t offset, size_t len)
-{
- char *from = kmap_atomic(page);
- memcpy(to, from + offset, len);
- kunmap_atomic(from);
-}
-
-static void memcpy_to_page(struct page *page, size_t offset, const char *from, size_t len)
-{
- char *to = kmap_atomic(page);
- memcpy(to + offset, from, len);
- kunmap_atomic(to);
-}
-
-static void memzero_page(struct page *page, size_t offset, size_t len)
-{
- char *addr = kmap_atomic(page);
- memset(addr + offset, 0, len);
- kunmap_atomic(addr);
-}
-
static inline bool allocated(struct pipe_buffer *buf)
{
return buf->ops == &default_pipe_buf_ops;
--
2.28.0.rc0.12.gb6a658bd00c9
^ permalink raw reply related [flat|nested] 32+ messages in thread
* Re: [PATCH 01/17] mm/highmem: Lift memcpy_[to|from]_page and memset_page to core
2020-11-24 6:07 ` [PATCH 01/17] mm/highmem: Lift memcpy_[to|from]_page and memset_page to core ira.weiny
@ 2020-11-24 14:19 ` Matthew Wilcox
2020-11-24 19:21 ` Ira Weiny
[not found] ` <160648238432.10416.12405581766428273347@jlahtine-mobl.ger.corp.intel.com>
1 sibling, 1 reply; 32+ messages in thread
From: Matthew Wilcox @ 2020-11-24 14:19 UTC (permalink / raw)
To: ira.weiny
Cc: Andrew Morton, Dave Hansen, Christoph Hellwig, Dan Williams,
Al Viro, Eric Biggers, Thomas Gleixner, Luis Chamberlain,
Patrik Jakobsson, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
David Howells, Chris Mason, Josef Bacik, David Sterba,
Steve French, Jaegeuk Kim, Chao Yu, Nicolas Pitre,
Martin K. Petersen, Brian King, Greg Kroah-Hartman,
Alexei Starovoitov, Daniel Borkmann, Jérôme Glisse,
Kirti Wankhede, linux-kernel, linux-fsdevel
On Mon, Nov 23, 2020 at 10:07:39PM -0800, ira.weiny@intel.com wrote:
> +static inline void memzero_page(struct page *page, size_t offset, size_t len)
> +{
> + memset_page(page, 0, offset, len);
> +}
This is a less-capable zero_user_segments().
* Re: [PATCH 01/17] mm/highmem: Lift memcpy_[to|from]_page and memset_page to core
2020-11-24 14:19 ` Matthew Wilcox
@ 2020-11-24 19:21 ` Ira Weiny
2020-11-24 20:20 ` Matthew Wilcox
0 siblings, 1 reply; 32+ messages in thread
From: Ira Weiny @ 2020-11-24 19:21 UTC (permalink / raw)
To: Matthew Wilcox
Cc: Andrew Morton, Dave Hansen, Christoph Hellwig, Dan Williams,
Al Viro, Eric Biggers, Thomas Gleixner, Luis Chamberlain,
Patrik Jakobsson, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
David Howells, Chris Mason, Josef Bacik, David Sterba,
Steve French, Jaegeuk Kim, Chao Yu, Nicolas Pitre,
Martin K. Petersen, Brian King, Greg Kroah-Hartman,
Alexei Starovoitov, Daniel Borkmann, Jérôme Glisse,
Kirti Wankhede, linux-kernel, linux-fsdevel
On Tue, Nov 24, 2020 at 02:19:41PM +0000, Matthew Wilcox wrote:
> On Mon, Nov 23, 2020 at 10:07:39PM -0800, ira.weiny@intel.com wrote:
> > +static inline void memzero_page(struct page *page, size_t offset, size_t len)
> > +{
> > + memset_page(page, 0, offset, len);
> > +}
>
> This is a less-capable zero_user_segments().
Actually it is a duplicate of zero_user()... Sorry I did not notice those...
:-(
Why are they called '_user_'?
Ira
* Re: [PATCH 01/17] mm/highmem: Lift memcpy_[to|from]_page and memset_page to core
2020-11-24 19:21 ` Ira Weiny
@ 2020-11-24 20:20 ` Matthew Wilcox
0 siblings, 0 replies; 32+ messages in thread
From: Matthew Wilcox @ 2020-11-24 20:20 UTC (permalink / raw)
To: Ira Weiny
Cc: Andrew Morton, Dave Hansen, Christoph Hellwig, Dan Williams,
Al Viro, Eric Biggers, Thomas Gleixner, Luis Chamberlain,
Patrik Jakobsson, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
David Howells, Chris Mason, Josef Bacik, David Sterba,
Steve French, Jaegeuk Kim, Chao Yu, Nicolas Pitre,
Martin K. Petersen, Brian King, Greg Kroah-Hartman,
Alexei Starovoitov, Daniel Borkmann, Jérôme Glisse,
Kirti Wankhede, linux-kernel, linux-fsdevel
On Tue, Nov 24, 2020 at 11:21:13AM -0800, Ira Weiny wrote:
> On Tue, Nov 24, 2020 at 02:19:41PM +0000, Matthew Wilcox wrote:
> > On Mon, Nov 23, 2020 at 10:07:39PM -0800, ira.weiny@intel.com wrote:
> > > +static inline void memzero_page(struct page *page, size_t offset, size_t len)
> > > +{
> > > + memset_page(page, 0, offset, len);
> > > +}
> >
> > This is a less-capable zero_user_segments().
>
> Actually it is a duplicate of zero_user()... Sorry I did not notice those...
> :-(
>
> Why are they called '_user_'?
git knows ...
commit 01f2705daf5a36208e69d7cf95db9c330f843af6
Author: Nate Diller <nate.diller@gmail.com>
Date: Wed May 9 02:35:07 2007 -0700
fs: convert core functions to zero_user_page
It's very common for file systems to need to zero part or all of a page,
the simplest way is just to use kmap_atomic() and memset(). There's
actually a library function in include/linux/highmem.h that does exactly
that, but it's confusingly named memclear_highpage_flush(), which is
descriptive of *how* it does the work rather than what the *purpose* is.
So this patchset renames the function to zero_user_page(), and calls it
from the various places that currently open code it.
This first patch introduces the new function call, and converts all the
core kernel callsites, both the open-coded ones and the old
memclear_highpage_flush() ones. Following this patch is a series of
conversions for each file system individually, per AKPM, and finally a
patch deprecating the old call. The diffstat below shows the entire
patchset.
[parent not found: <160648238432.10416.12405581766428273347@jlahtine-mobl.ger.corp.intel.com>]
* Re: [PATCH 01/17] mm/highmem: Lift memcpy_[to|from]_page and memset_page to core
[not found] ` <160648238432.10416.12405581766428273347@jlahtine-mobl.ger.corp.intel.com>
@ 2020-11-27 13:20 ` Matthew Wilcox
[not found] ` <160672815223.3453.2374529656870007787@jlahtine-mobl.ger.corp.intel.com>
0 siblings, 1 reply; 32+ messages in thread
From: Matthew Wilcox @ 2020-11-27 13:20 UTC (permalink / raw)
To: Joonas Lahtinen
Cc: Andrew Morton, ira.weiny, Dave Hansen, Christoph Hellwig,
Dan Williams, Al Viro, Eric Biggers, Thomas Gleixner,
Luis Chamberlain, Patrik Jakobsson, Jani Nikula, Rodrigo Vivi,
David Howells, Chris Mason, Josef Bacik, David Sterba,
Steve French, Jaegeuk Kim, Chao Yu, Nicolas Pitre,
Martin K. Petersen, Brian King, Greg Kroah-Hartman,
Alexei Starovoitov, Daniel Borkmann, Jérôme Glisse,
Kirti Wankhede, linux-kernel, linux-fsdevel
On Fri, Nov 27, 2020 at 03:06:24PM +0200, Joonas Lahtinen wrote:
> Quoting ira.weiny@intel.com (2020-11-24 08:07:39)
> > From: Ira Weiny <ira.weiny@intel.com>
> >
> > Working through a conversion to a call such as kmap_thread() revealed
> > many places where the pattern kmap/memcpy/kunmap occurred.
> >
> > Eric Biggers, Matthew Wilcox, Christoph Hellwig, Dan Williams, and Al
> > Viro all suggested putting this code into helper functions. Al Viro
> > further pointed out that these functions already existed in the iov_iter
> > code.[1]
> >
> > Placing these functions in 'highmem.h' is suboptimal especially with the
> > changes being proposed in the functionality of kmap. From a caller
> > perspective including/using 'highmem.h' implies that the functions
> > defined in that header are only required when highmem is in use which is
> > increasingly not the case with modern processors. Some headers like
> > mm.h or string.h seem ok but don't really portray the functionality
> > well. 'pagemap.h', on the other hand, makes sense and is already
> > included in many of the places we want to convert.
> >
> > Another alternative would be to create a new header for the promoted
> > memcpy functions, but it masks the fact that these are designed to copy
> > to/from pages using the kernel direct mappings and complicates matters
> > with a new header.
> >
> > Lift memcpy_to_page(), memcpy_from_page(), and memzero_page() to
> > pagemap.h.
> >
> > Also, add a memcpy_page(), memmove_page, and memset_page() to cover more
> > kmap/mem*/kunmap. patterns.
> >
> > [1] https://lore.kernel.org/lkml/20201013200149.GI3576660@ZenIV.linux.org.uk/
> > https://lore.kernel.org/lkml/20201013112544.GA5249@infradead.org/
> >
> > Cc: Dave Hansen <dave.hansen@intel.com>
> > Suggested-by: Matthew Wilcox <willy@infradead.org>
> > Suggested-by: Christoph Hellwig <hch@infradead.org>
> > Suggested-by: Dan Williams <dan.j.williams@intel.com>
> > Suggested-by: Al Viro <viro@zeniv.linux.org.uk>
> > Suggested-by: Eric Biggers <ebiggers@kernel.org>
> > Signed-off-by: Ira Weiny <ira.weiny@intel.com>
>
> <SNIP>
>
> > +static inline void memset_page(struct page *page, int val, size_t offset, size_t len)
> > +{
> > + char *addr = kmap_atomic(page);
> > + memset(addr + offset, val, len);
> > + kunmap_atomic(addr);
> > +}
>
> Other functions have a (page, offset) pair. Insertion of 'val' in the
> middle here required a double take during review.
Let's be explicit here. Your suggested order is:
(page, offset, val, len)
right? I think I would prefer that to (page, val, offset, len).
* [PATCH 02/17] drivers/firmware_loader: Use new memcpy_[to|from]_page()
2020-11-24 6:07 [PATCH 00/17] kmap: Create mem*_page interfaces ira.weiny
2020-11-24 6:07 ` [PATCH 01/17] mm/highmem: Lift memcpy_[to|from]_page and memset_page to core ira.weiny
@ 2020-11-24 6:07 ` ira.weiny
2020-11-24 6:07 ` [PATCH 03/17] drivers/gpu: Convert to mem*_page() ira.weiny
` (15 subsequent siblings)
17 siblings, 0 replies; 32+ messages in thread
From: ira.weiny @ 2020-11-24 6:07 UTC (permalink / raw)
To: Andrew Morton
Cc: Ira Weiny, Luis Chamberlain, Thomas Gleixner, Dave Hansen,
Matthew Wilcox, Christoph Hellwig, Dan Williams, Al Viro,
Eric Biggers, Patrik Jakobsson, Jani Nikula, Joonas Lahtinen,
Rodrigo Vivi, David Howells, Chris Mason, Josef Bacik,
David Sterba, Steve French, Jaegeuk Kim, Chao Yu, Nicolas Pitre,
Martin K. Petersen, Brian King, Greg Kroah-Hartman,
Alexei Starovoitov, Daniel Borkmann, Jérôme Glisse,
Kirti Wankhede, linux-kernel, linux-fsdevel
From: Ira Weiny <ira.weiny@intel.com>
Too many users are using kmap_*() incorrectly, and a common pattern is
for them to kmap/memcpy/kunmap. Change these calls to use the newly
lifted memcpy_[to|from]_page() calls.
Cc: Luis Chamberlain <mcgrof@kernel.org>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
drivers/base/firmware_loader/fallback.c | 11 +++++------
1 file changed, 5 insertions(+), 6 deletions(-)
diff --git a/drivers/base/firmware_loader/fallback.c b/drivers/base/firmware_loader/fallback.c
index 4dec4b79ae06..dc93dc307d18 100644
--- a/drivers/base/firmware_loader/fallback.c
+++ b/drivers/base/firmware_loader/fallback.c
@@ -10,6 +10,7 @@
#include <linux/sysctl.h>
#include <linux/vmalloc.h>
#include <linux/module.h>
+#include <linux/pagemap.h>
#include "fallback.h"
#include "firmware.h"
@@ -317,19 +318,17 @@ static void firmware_rw(struct fw_priv *fw_priv, char *buffer,
loff_t offset, size_t count, bool read)
{
while (count) {
- void *page_data;
int page_nr = offset >> PAGE_SHIFT;
int page_ofs = offset & (PAGE_SIZE-1);
int page_cnt = min_t(size_t, PAGE_SIZE - page_ofs, count);
- page_data = kmap(fw_priv->pages[page_nr]);
-
if (read)
- memcpy(buffer, page_data + page_ofs, page_cnt);
+ memcpy_from_page(buffer, fw_priv->pages[page_nr],
+ page_ofs, page_cnt);
else
- memcpy(page_data + page_ofs, buffer, page_cnt);
+ memcpy_to_page(fw_priv->pages[page_nr], page_ofs,
+ buffer, page_cnt);
- kunmap(fw_priv->pages[page_nr]);
buffer += page_cnt;
offset += page_cnt;
count -= page_cnt;
--
2.28.0.rc0.12.gb6a658bd00c9
* [PATCH 03/17] drivers/gpu: Convert to mem*_page()
2020-11-24 6:07 [PATCH 00/17] kmap: Create mem*_page interfaces ira.weiny
2020-11-24 6:07 ` [PATCH 01/17] mm/highmem: Lift memcpy_[to|from]_page and memset_page to core ira.weiny
2020-11-24 6:07 ` [PATCH 02/17] drivers/firmware_loader: Use new memcpy_[to|from]_page() ira.weiny
@ 2020-11-24 6:07 ` ira.weiny
[not found] ` <160648211578.10416.3269409785516897908@jlahtine-mobl.ger.corp.intel.com>
2020-11-24 6:07 ` [PATCH 04/17] fs/afs: Convert to memzero_page() ira.weiny
` (14 subsequent siblings)
17 siblings, 1 reply; 32+ messages in thread
From: ira.weiny @ 2020-11-24 6:07 UTC (permalink / raw)
To: Andrew Morton
Cc: Ira Weiny, Patrik Jakobsson, Jani Nikula, Joonas Lahtinen,
Rodrigo Vivi, Thomas Gleixner, Dave Hansen, Matthew Wilcox,
Christoph Hellwig, Dan Williams, Al Viro, Eric Biggers,
Luis Chamberlain, David Howells, Chris Mason, Josef Bacik,
David Sterba, Steve French, Jaegeuk Kim, Chao Yu, Nicolas Pitre,
Martin K. Petersen, Brian King, Greg Kroah-Hartman,
Alexei Starovoitov, Daniel Borkmann, Jérôme Glisse,
Kirti Wankhede, linux-kernel, linux-fsdevel
From: Ira Weiny <ira.weiny@intel.com>
The pattern of kmap/mem*/kunmap is repeated. Use the new mem*_page()
calls instead.
Cc: Patrik Jakobsson <patrik.r.jakobsson@gmail.com>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
drivers/gpu/drm/gma500/gma_display.c | 7 +++----
drivers/gpu/drm/gma500/mmu.c | 4 ++--
drivers/gpu/drm/i915/gem/i915_gem_shmem.c | 6 ++----
drivers/gpu/drm/i915/gt/intel_gtt.c | 9 ++-------
drivers/gpu/drm/i915/gt/shmem_utils.c | 8 +++-----
5 files changed, 12 insertions(+), 22 deletions(-)
diff --git a/drivers/gpu/drm/gma500/gma_display.c b/drivers/gpu/drm/gma500/gma_display.c
index 3df6d6e850f5..f81114594211 100644
--- a/drivers/gpu/drm/gma500/gma_display.c
+++ b/drivers/gpu/drm/gma500/gma_display.c
@@ -9,6 +9,7 @@
#include <linux/delay.h>
#include <linux/highmem.h>
+#include <linux/pagemap.h>
#include <drm/drm_crtc.h>
#include <drm/drm_fourcc.h>
@@ -334,7 +335,7 @@ int gma_crtc_cursor_set(struct drm_crtc *crtc,
struct gtt_range *gt;
struct gtt_range *cursor_gt = gma_crtc->cursor_gt;
struct drm_gem_object *obj;
- void *tmp_dst, *tmp_src;
+ void *tmp_dst;
int ret = 0, i, cursor_pages;
/* If we didn't get a handle then turn the cursor off */
@@ -400,9 +401,7 @@ int gma_crtc_cursor_set(struct drm_crtc *crtc,
/* Copy the cursor to cursor mem */
tmp_dst = dev_priv->vram_addr + cursor_gt->offset;
for (i = 0; i < cursor_pages; i++) {
- tmp_src = kmap(gt->pages[i]);
- memcpy(tmp_dst, tmp_src, PAGE_SIZE);
- kunmap(gt->pages[i]);
+ memcpy_from_page(tmp_dst, gt->pages[i], 0, PAGE_SIZE);
tmp_dst += PAGE_SIZE;
}
diff --git a/drivers/gpu/drm/gma500/mmu.c b/drivers/gpu/drm/gma500/mmu.c
index 505044c9a673..8a0856c7f439 100644
--- a/drivers/gpu/drm/gma500/mmu.c
+++ b/drivers/gpu/drm/gma500/mmu.c
@@ -5,6 +5,7 @@
**************************************************************************/
#include <linux/highmem.h>
+#include <linux/pagemap.h>
#include "mmu.h"
#include "psb_drv.h"
@@ -204,8 +205,7 @@ struct psb_mmu_pd *psb_mmu_alloc_pd(struct psb_mmu_driver *driver,
kunmap(pd->p);
- clear_page(kmap(pd->dummy_page));
- kunmap(pd->dummy_page);
+ memzero_page(pd->dummy_page, 0, PAGE_SIZE);
pd->tables = vmalloc_user(sizeof(struct psb_mmu_pt *) * 1024);
if (!pd->tables)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
index 75e8b71c18b9..8a25e08edd18 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
@@ -558,7 +558,7 @@ i915_gem_object_create_shmem_from_data(struct drm_i915_private *dev_priv,
do {
unsigned int len = min_t(typeof(size), size, PAGE_SIZE);
struct page *page;
- void *pgdata, *vaddr;
+ void *pgdata;
err = pagecache_write_begin(file, file->f_mapping,
offset, len, 0,
@@ -566,9 +566,7 @@ i915_gem_object_create_shmem_from_data(struct drm_i915_private *dev_priv,
if (err < 0)
goto fail;
- vaddr = kmap(page);
- memcpy(vaddr, data, len);
- kunmap(page);
+ memcpy_to_page(page, 0, data, len);
err = pagecache_write_end(file, file->f_mapping,
offset, len, len,
diff --git a/drivers/gpu/drm/i915/gt/intel_gtt.c b/drivers/gpu/drm/i915/gt/intel_gtt.c
index 3f1114b58b01..f3d7c601d362 100644
--- a/drivers/gpu/drm/i915/gt/intel_gtt.c
+++ b/drivers/gpu/drm/i915/gt/intel_gtt.c
@@ -153,13 +153,8 @@ static void poison_scratch_page(struct drm_i915_gem_object *scratch)
if (IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM))
val = POISON_FREE;
- for_each_sgt_page(page, sgt, scratch->mm.pages) {
- void *vaddr;
-
- vaddr = kmap(page);
- memset(vaddr, val, PAGE_SIZE);
- kunmap(page);
- }
+ for_each_sgt_page(page, sgt, scratch->mm.pages)
+ memset_page(page, val, 0, PAGE_SIZE);
}
int setup_scratch_page(struct i915_address_space *vm)
diff --git a/drivers/gpu/drm/i915/gt/shmem_utils.c b/drivers/gpu/drm/i915/gt/shmem_utils.c
index f011ea42487e..2d5f1f2e803d 100644
--- a/drivers/gpu/drm/i915/gt/shmem_utils.c
+++ b/drivers/gpu/drm/i915/gt/shmem_utils.c
@@ -95,19 +95,17 @@ static int __shmem_rw(struct file *file, loff_t off,
unsigned int this =
min_t(size_t, PAGE_SIZE - offset_in_page(off), len);
struct page *page;
- void *vaddr;
page = shmem_read_mapping_page_gfp(file->f_mapping, pfn,
GFP_KERNEL);
if (IS_ERR(page))
return PTR_ERR(page);
- vaddr = kmap(page);
if (write)
- memcpy(vaddr + offset_in_page(off), ptr, this);
+ memcpy_to_page(page, offset_in_page(off), ptr, this);
else
- memcpy(ptr, vaddr + offset_in_page(off), this);
- kunmap(page);
+ memcpy_from_page(ptr, page, offset_in_page(off), this);
+
put_page(page);
len -= this;
--
2.28.0.rc0.12.gb6a658bd00c9
* [PATCH 04/17] fs/afs: Convert to memzero_page()
2020-11-24 6:07 [PATCH 00/17] kmap: Create mem*_page interfaces ira.weiny
` (2 preceding siblings ...)
2020-11-24 6:07 ` [PATCH 03/17] drivers/gpu: Convert to mem*_page() ira.weiny
@ 2020-11-24 6:07 ` ira.weiny
2020-11-24 6:07 ` [PATCH 05/17] fs/btrfs: " ira.weiny
` (13 subsequent siblings)
17 siblings, 0 replies; 32+ messages in thread
From: ira.weiny @ 2020-11-24 6:07 UTC (permalink / raw)
To: Andrew Morton
Cc: Ira Weiny, David Howells, Thomas Gleixner, Dave Hansen,
Matthew Wilcox, Christoph Hellwig, Dan Williams, Al Viro,
Eric Biggers, Luis Chamberlain, Patrik Jakobsson, Jani Nikula,
Joonas Lahtinen, Rodrigo Vivi, Chris Mason, Josef Bacik,
David Sterba, Steve French, Jaegeuk Kim, Chao Yu, Nicolas Pitre,
Martin K. Petersen, Brian King, Greg Kroah-Hartman,
Alexei Starovoitov, Daniel Borkmann, Jérôme Glisse,
Kirti Wankhede, linux-kernel, linux-fsdevel
From: Ira Weiny <ira.weiny@intel.com>
Convert the kmap()/memset()/kunmap() pattern to memzero_page().
Cc: David Howells <dhowells@redhat.com>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
fs/afs/write.c | 5 +----
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/fs/afs/write.c b/fs/afs/write.c
index 50371207f327..ed7419de0178 100644
--- a/fs/afs/write.c
+++ b/fs/afs/write.c
@@ -30,7 +30,6 @@ static int afs_fill_page(struct afs_vnode *vnode, struct key *key,
{
struct afs_read *req;
size_t p;
- void *data;
int ret;
_enter(",,%llu", (unsigned long long)pos);
@@ -38,9 +37,7 @@ static int afs_fill_page(struct afs_vnode *vnode, struct key *key,
if (pos >= vnode->vfs_inode.i_size) {
p = pos & ~PAGE_MASK;
ASSERTCMP(p + len, <=, PAGE_SIZE);
- data = kmap(page);
- memset(data + p, 0, len);
- kunmap(page);
+ memzero_page(page, p, len);
return 0;
}
--
2.28.0.rc0.12.gb6a658bd00c9
* [PATCH 05/17] fs/btrfs: Convert to memzero_page()
2020-11-24 6:07 [PATCH 00/17] kmap: Create mem*_page interfaces ira.weiny
` (3 preceding siblings ...)
2020-11-24 6:07 ` [PATCH 04/17] fs/afs: Convert to memzero_page() ira.weiny
@ 2020-11-24 6:07 ` ira.weiny
2020-11-24 14:12 ` David Sterba
2020-11-24 6:07 ` [PATCH 06/17] fs/hfs: Convert to mem*_page() interface ira.weiny
` (12 subsequent siblings)
17 siblings, 1 reply; 32+ messages in thread
From: ira.weiny @ 2020-11-24 6:07 UTC (permalink / raw)
To: Andrew Morton
Cc: Ira Weiny, Chris Mason, Josef Bacik, David Sterba,
Thomas Gleixner, Dave Hansen, Matthew Wilcox, Christoph Hellwig,
Dan Williams, Al Viro, Eric Biggers, Luis Chamberlain,
Patrik Jakobsson, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
David Howells, Steve French, Jaegeuk Kim, Chao Yu, Nicolas Pitre,
Martin K. Petersen, Brian King, Greg Kroah-Hartman,
Alexei Starovoitov, Daniel Borkmann, Jérôme Glisse,
Kirti Wankhede, linux-kernel, linux-fsdevel
From: Ira Weiny <ira.weiny@intel.com>
Remove the kmap/memset()/kunmap pattern and use the new memzero_page()
call where possible.
Cc: Chris Mason <clm@fb.com>
Cc: Josef Bacik <josef@toxicpanda.com>
Cc: David Sterba <dsterba@suse.com>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
fs/btrfs/inode.c | 21 +++++----------------
1 file changed, 5 insertions(+), 16 deletions(-)
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index da58c58ef9aa..b0bcf9493236 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -590,17 +590,12 @@ static noinline int compress_file_range(struct async_chunk *async_chunk)
if (!ret) {
unsigned long offset = offset_in_page(total_compressed);
struct page *page = pages[nr_pages - 1];
- char *kaddr;
/* zero the tail end of the last page, we might be
* sending it down to disk
*/
- if (offset) {
- kaddr = kmap_atomic(page);
- memset(kaddr + offset, 0,
- PAGE_SIZE - offset);
- kunmap_atomic(kaddr);
- }
+ if (offset)
+ memzero_page(page, offset, PAGE_SIZE - offset);
will_compress = 1;
}
}
@@ -6485,11 +6480,8 @@ static noinline int uncompress_inline(struct btrfs_path *path,
* cover that region here.
*/
- if (max_size + pg_offset < PAGE_SIZE) {
- char *map = kmap(page);
- memset(map + pg_offset + max_size, 0, PAGE_SIZE - max_size - pg_offset);
- kunmap(page);
- }
+ if (max_size + pg_offset < PAGE_SIZE)
+ memzero_page(page, pg_offset + max_size, PAGE_SIZE - max_size - pg_offset);
kfree(tmp);
return ret;
}
@@ -8245,7 +8237,6 @@ vm_fault_t btrfs_page_mkwrite(struct vm_fault *vmf)
struct btrfs_ordered_extent *ordered;
struct extent_state *cached_state = NULL;
struct extent_changeset *data_reserved = NULL;
- char *kaddr;
unsigned long zero_start;
loff_t size;
vm_fault_t ret;
@@ -8352,10 +8343,8 @@ vm_fault_t btrfs_page_mkwrite(struct vm_fault *vmf)
zero_start = PAGE_SIZE;
if (zero_start != PAGE_SIZE) {
- kaddr = kmap(page);
- memset(kaddr + zero_start, 0, PAGE_SIZE - zero_start);
+ memzero_page(page, zero_start, PAGE_SIZE - zero_start);
flush_dcache_page(page);
- kunmap(page);
}
ClearPageChecked(page);
set_page_dirty(page);
--
2.28.0.rc0.12.gb6a658bd00c9
* Re: [PATCH 05/17] fs/btrfs: Convert to memzero_page()
2020-11-24 6:07 ` [PATCH 05/17] fs/btrfs: " ira.weiny
@ 2020-11-24 14:12 ` David Sterba
2020-11-24 19:25 ` Ira Weiny
0 siblings, 1 reply; 32+ messages in thread
From: David Sterba @ 2020-11-24 14:12 UTC (permalink / raw)
To: ira.weiny
Cc: Andrew Morton, Chris Mason, Josef Bacik, David Sterba,
Thomas Gleixner, Dave Hansen, Matthew Wilcox, Christoph Hellwig,
Dan Williams, Al Viro, Eric Biggers, Luis Chamberlain,
Patrik Jakobsson, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
David Howells, Steve French, Jaegeuk Kim, Chao Yu, Nicolas Pitre,
Martin K. Petersen, Brian King, Greg Kroah-Hartman,
Alexei Starovoitov, Daniel Borkmann, Jérôme Glisse,
Kirti Wankhede, linux-kernel, linux-fsdevel
On Mon, Nov 23, 2020 at 10:07:43PM -0800, ira.weiny@intel.com wrote:
> From: Ira Weiny <ira.weiny@intel.com>
>
> Remove the kmap/memset()/kunmap pattern and use the new memzero_page()
> call where possible.
>
> Cc: Chris Mason <clm@fb.com>
> Cc: Josef Bacik <josef@toxicpanda.com>
> Cc: David Sterba <dsterba@suse.com>
> Signed-off-by: Ira Weiny <ira.weiny@intel.com>
> ---
> fs/btrfs/inode.c | 21 +++++----------------
The patch converts the pattern only in inode.c, but there's more in
compression.c, extent_io.c, zlib.c,d zstd.c (kmap_atomic) and reflink.c,
send.c (kmap).
* Re: [PATCH 05/17] fs/btrfs: Convert to memzero_page()
2020-11-24 14:12 ` David Sterba
@ 2020-11-24 19:25 ` Ira Weiny
0 siblings, 0 replies; 32+ messages in thread
From: Ira Weiny @ 2020-11-24 19:25 UTC (permalink / raw)
To: Andrew Morton, Chris Mason, Josef Bacik, David Sterba,
Thomas Gleixner, Dave Hansen, Matthew Wilcox, Christoph Hellwig,
Dan Williams, Al Viro, Eric Biggers, Luis Chamberlain,
Patrik Jakobsson, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
David Howells, Steve French, Jaegeuk Kim, Chao Yu, Nicolas Pitre,
Martin K. Petersen, Brian King, Greg Kroah-Hartman,
Alexei Starovoitov, Daniel Borkmann, Jérôme Glisse,
Kirti Wankhede, linux-kernel, linux-fsdevel
On Tue, Nov 24, 2020 at 03:12:44PM +0100, David Sterba wrote:
> On Mon, Nov 23, 2020 at 10:07:43PM -0800, ira.weiny@intel.com wrote:
> > From: Ira Weiny <ira.weiny@intel.com>
> >
> > Remove the kmap/memset()/kunmap pattern and use the new memzero_page()
> > call where possible.
> >
> > Cc: Chris Mason <clm@fb.com>
> > Cc: Josef Bacik <josef@toxicpanda.com>
> > Cc: David Sterba <dsterba@suse.com>
> > Signed-off-by: Ira Weiny <ira.weiny@intel.com>
> > ---
> > fs/btrfs/inode.c | 21 +++++----------------
>
> The patch converts the pattern only in inode.c, but there's more in
> compression.c, extent_io.c, zlib.c,d zstd.c (kmap_atomic) and reflink.c,
> send.c (kmap).
Thanks... not sure how I missed reflink.c and send.c.
I'll add them in v2.
Thanks!
Ira
* [PATCH 06/17] fs/hfs: Convert to mem*_page() interface
2020-11-24 6:07 [PATCH 00/17] kmap: Create mem*_page interfaces ira.weiny
` (4 preceding siblings ...)
2020-11-24 6:07 ` [PATCH 05/17] fs/btrfs: " ira.weiny
@ 2020-11-24 6:07 ` ira.weiny
2020-11-24 6:07 ` [PATCH 07/17] fs/cifs: Convert to memcpy_page() ira.weiny
` (11 subsequent siblings)
17 siblings, 0 replies; 32+ messages in thread
From: ira.weiny @ 2020-11-24 6:07 UTC (permalink / raw)
To: Andrew Morton
Cc: Ira Weiny, Thomas Gleixner, Dave Hansen, Matthew Wilcox,
Christoph Hellwig, Dan Williams, Al Viro, Eric Biggers,
Luis Chamberlain, Patrik Jakobsson, Jani Nikula, Joonas Lahtinen,
Rodrigo Vivi, David Howells, Chris Mason, Josef Bacik,
David Sterba, Steve French, Jaegeuk Kim, Chao Yu, Nicolas Pitre,
Martin K. Petersen, Brian King, Greg Kroah-Hartman,
Alexei Starovoitov, Daniel Borkmann, Jérôme Glisse,
Kirti Wankhede, linux-kernel, linux-fsdevel
From: Ira Weiny <ira.weiny@intel.com>
Where possible remove kmap/mem*/kunmap in favor of the new mem*_page()
calls.
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
fs/hfs/bnode.c | 13 ++++---------
1 file changed, 4 insertions(+), 9 deletions(-)
diff --git a/fs/hfs/bnode.c b/fs/hfs/bnode.c
index b63a4df7327b..56037ae5ba69 100644
--- a/fs/hfs/bnode.c
+++ b/fs/hfs/bnode.c
@@ -23,8 +23,7 @@ void hfs_bnode_read(struct hfs_bnode *node, void *buf,
off += node->page_offset;
page = node->page[0];
- memcpy(buf, kmap(page) + off, len);
- kunmap(page);
+ memcpy_from_page(buf, page, off, len);
}
u16 hfs_bnode_read_u16(struct hfs_bnode *node, int off)
@@ -65,8 +64,7 @@ void hfs_bnode_write(struct hfs_bnode *node, void *buf, int off, int len)
off += node->page_offset;
page = node->page[0];
- memcpy(kmap(page) + off, buf, len);
- kunmap(page);
+ memcpy_to_page(page, off, buf, len);
set_page_dirty(page);
}
@@ -90,8 +88,7 @@ void hfs_bnode_clear(struct hfs_bnode *node, int off, int len)
off += node->page_offset;
page = node->page[0];
- memset(kmap(page) + off, 0, len);
- kunmap(page);
+ memzero_page(page, off, len);
set_page_dirty(page);
}
@@ -108,9 +105,7 @@ void hfs_bnode_copy(struct hfs_bnode *dst_node, int dst,
src_page = src_node->page[0];
dst_page = dst_node->page[0];
- memcpy(kmap(dst_page) + dst, kmap(src_page) + src, len);
- kunmap(src_page);
- kunmap(dst_page);
+ memcpy_page(dst_page, dst, src_page, src, len);
set_page_dirty(dst_page);
}
--
2.28.0.rc0.12.gb6a658bd00c9
^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCH 07/17] fs/cifs: Convert to memcpy_page()
2020-11-24 6:07 [PATCH 00/17] kmap: Create mem*_page interfaces ira.weiny
` (5 preceding siblings ...)
2020-11-24 6:07 ` [PATCH 06/17] fs/hfs: Convert to mem*_page() interface ira.weiny
@ 2020-11-24 6:07 ` ira.weiny
2020-11-24 6:07 ` [PATCH 08/17] fs/hfsplus: Convert to mem*_page() ira.weiny
` (10 subsequent siblings)
17 siblings, 0 replies; 32+ messages in thread
From: ira.weiny @ 2020-11-24 6:07 UTC (permalink / raw)
To: Andrew Morton
Cc: Ira Weiny, Steve French, Thomas Gleixner, Dave Hansen,
Matthew Wilcox, Christoph Hellwig, Dan Williams, Al Viro,
Eric Biggers, Luis Chamberlain, Patrik Jakobsson, Jani Nikula,
Joonas Lahtinen, Rodrigo Vivi, David Howells, Chris Mason,
Josef Bacik, David Sterba, Jaegeuk Kim, Chao Yu, Nicolas Pitre,
Martin K. Petersen, Brian King, Greg Kroah-Hartman,
Alexei Starovoitov, Daniel Borkmann, Jérôme Glisse,
Kirti Wankhede, linux-kernel, linux-fsdevel
From: Ira Weiny <ira.weiny@intel.com>
Use memcpy_page() instead of open coding kmap/memcpy/kunmap.
Cc: Steve French <sfrench@samba.org>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
fs/cifs/smb2ops.c | 10 +++-------
1 file changed, 3 insertions(+), 7 deletions(-)
diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
index 504766cb6c19..d1088ee9a0e6 100644
--- a/fs/cifs/smb2ops.c
+++ b/fs/cifs/smb2ops.c
@@ -4223,17 +4223,13 @@ smb3_init_transform_rq(struct TCP_Server_Info *server, int num_rqst,
/* copy pages form the old */
for (j = 0; j < npages; j++) {
- char *dst, *src;
unsigned int offset, len;
rqst_page_get_length(&new_rq[i], j, &len, &offset);
- dst = (char *) kmap(new_rq[i].rq_pages[j]) + offset;
- src = (char *) kmap(old_rq[i - 1].rq_pages[j]) + offset;
-
- memcpy(dst, src, len);
- kunmap(new_rq[i].rq_pages[j]);
- kunmap(old_rq[i - 1].rq_pages[j]);
+ memcpy_page(new_rq[i].rq_pages[j], offset,
+ old_rq[i - 1].rq_pages[j], offset,
+ len);
}
}
--
2.28.0.rc0.12.gb6a658bd00c9
^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCH 08/17] fs/hfsplus: Convert to mem*_page()
2020-11-24 6:07 [PATCH 00/17] kmap: Create mem*_page interfaces ira.weiny
` (6 preceding siblings ...)
2020-11-24 6:07 ` [PATCH 07/17] fs/cifs: Convert to memcpy_page() ira.weiny
@ 2020-11-24 6:07 ` ira.weiny
2020-11-24 6:07 ` [PATCH 09/17] fs/f2fs: Remove f2fs_copy_page() ira.weiny
` (9 subsequent siblings)
17 siblings, 0 replies; 32+ messages in thread
From: ira.weiny @ 2020-11-24 6:07 UTC (permalink / raw)
To: Andrew Morton
Cc: Ira Weiny, Thomas Gleixner, Dave Hansen, Matthew Wilcox,
Christoph Hellwig, Dan Williams, Al Viro, Eric Biggers,
Luis Chamberlain, Patrik Jakobsson, Jani Nikula, Joonas Lahtinen,
Rodrigo Vivi, David Howells, Chris Mason, Josef Bacik,
David Sterba, Steve French, Jaegeuk Kim, Chao Yu, Nicolas Pitre,
Martin K. Petersen, Brian King, Greg Kroah-Hartman,
Alexei Starovoitov, Daniel Borkmann, Jérôme Glisse,
Kirti Wankhede, linux-kernel, linux-fsdevel
From: Ira Weiny <ira.weiny@intel.com>
Remove the pattern of kmap/mem*/kunmap in favor of the new mem*_page()
functions which handle the kmap'ing correctly for us.
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
fs/hfsplus/bnode.c | 53 +++++++++++++---------------------------------
1 file changed, 15 insertions(+), 38 deletions(-)
diff --git a/fs/hfsplus/bnode.c b/fs/hfsplus/bnode.c
index 177fae4e6581..c4347b1cb36f 100644
--- a/fs/hfsplus/bnode.c
+++ b/fs/hfsplus/bnode.c
@@ -29,14 +29,12 @@ void hfs_bnode_read(struct hfs_bnode *node, void *buf, int off, int len)
off &= ~PAGE_MASK;
l = min_t(int, len, PAGE_SIZE - off);
- memcpy(buf, kmap(*pagep) + off, l);
- kunmap(*pagep);
+ memcpy_from_page(buf, *pagep, off, l);
while ((len -= l) != 0) {
buf += l;
l = min_t(int, len, PAGE_SIZE);
- memcpy(buf, kmap(*++pagep), l);
- kunmap(*pagep);
+ memcpy_from_page(buf, *++pagep, 0, l);
}
}
@@ -82,16 +80,14 @@ void hfs_bnode_write(struct hfs_bnode *node, void *buf, int off, int len)
off &= ~PAGE_MASK;
l = min_t(int, len, PAGE_SIZE - off);
- memcpy(kmap(*pagep) + off, buf, l);
+ memcpy_to_page(*pagep, off, buf, l);
set_page_dirty(*pagep);
- kunmap(*pagep);
while ((len -= l) != 0) {
buf += l;
l = min_t(int, len, PAGE_SIZE);
- memcpy(kmap(*++pagep), buf, l);
+ memcpy_to_page(*++pagep, 0, buf, l);
set_page_dirty(*pagep);
- kunmap(*pagep);
}
}
@@ -112,15 +108,13 @@ void hfs_bnode_clear(struct hfs_bnode *node, int off, int len)
off &= ~PAGE_MASK;
l = min_t(int, len, PAGE_SIZE - off);
- memset(kmap(*pagep) + off, 0, l);
+ memzero_page(*pagep, off, l);
set_page_dirty(*pagep);
- kunmap(*pagep);
while ((len -= l) != 0) {
l = min_t(int, len, PAGE_SIZE);
- memset(kmap(*++pagep), 0, l);
+ memzero_page(*++pagep, 0, l);
set_page_dirty(*pagep);
- kunmap(*pagep);
}
}
@@ -142,17 +136,13 @@ void hfs_bnode_copy(struct hfs_bnode *dst_node, int dst,
if (src == dst) {
l = min_t(int, len, PAGE_SIZE - src);
- memcpy(kmap(*dst_page) + src, kmap(*src_page) + src, l);
- kunmap(*src_page);
+ memcpy_page(*dst_page, src, *src_page, src, l);
set_page_dirty(*dst_page);
- kunmap(*dst_page);
while ((len -= l) != 0) {
l = min_t(int, len, PAGE_SIZE);
- memcpy(kmap(*++dst_page), kmap(*++src_page), l);
- kunmap(*src_page);
+ memcpy_page(*++dst_page, 0, *++src_page, 0, l);
set_page_dirty(*dst_page);
- kunmap(*dst_page);
}
} else {
void *src_ptr, *dst_ptr;
@@ -202,21 +192,16 @@ void hfs_bnode_move(struct hfs_bnode *node, int dst, int src, int len)
if (src == dst) {
while (src < len) {
- memmove(kmap(*dst_page), kmap(*src_page), src);
- kunmap(*src_page);
+ memmove_page(*dst_page, 0, *src_page, 0, src);
set_page_dirty(*dst_page);
- kunmap(*dst_page);
len -= src;
src = PAGE_SIZE;
src_page--;
dst_page--;
}
src -= len;
- memmove(kmap(*dst_page) + src,
- kmap(*src_page) + src, len);
- kunmap(*src_page);
+ memmove_page(*dst_page, src, *src_page, src, len);
set_page_dirty(*dst_page);
- kunmap(*dst_page);
} else {
void *src_ptr, *dst_ptr;
@@ -251,19 +236,13 @@ void hfs_bnode_move(struct hfs_bnode *node, int dst, int src, int len)
if (src == dst) {
l = min_t(int, len, PAGE_SIZE - src);
- memmove(kmap(*dst_page) + src,
- kmap(*src_page) + src, l);
- kunmap(*src_page);
+ memmove_page(*dst_page, src, *src_page, src, l);
set_page_dirty(*dst_page);
- kunmap(*dst_page);
while ((len -= l) != 0) {
l = min_t(int, len, PAGE_SIZE);
- memmove(kmap(*++dst_page),
- kmap(*++src_page), l);
- kunmap(*src_page);
+ memmove_page(*++dst_page, 0, *++src_page, 0, l);
set_page_dirty(*dst_page);
- kunmap(*dst_page);
}
} else {
void *src_ptr, *dst_ptr;
@@ -593,14 +572,12 @@ struct hfs_bnode *hfs_bnode_create(struct hfs_btree *tree, u32 num)
}
pagep = node->page;
- memset(kmap(*pagep) + node->page_offset, 0,
- min_t(int, PAGE_SIZE, tree->node_size));
+ memzero_page(*pagep, node->page_offset,
+ min_t(int, PAGE_SIZE, tree->node_size));
set_page_dirty(*pagep);
- kunmap(*pagep);
for (i = 1; i < tree->pages_per_bnode; i++) {
- memset(kmap(*++pagep), 0, PAGE_SIZE);
+ memzero_page(*++pagep, 0, PAGE_SIZE);
set_page_dirty(*pagep);
- kunmap(*pagep);
}
clear_bit(HFS_BNODE_NEW, &node->flags);
wake_up(&node->lock_wq);
--
2.28.0.rc0.12.gb6a658bd00c9
^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCH 09/17] fs/f2fs: Remove f2fs_copy_page()
2020-11-24 6:07 [PATCH 00/17] kmap: Create mem*_page interfaces ira.weiny
` (7 preceding siblings ...)
2020-11-24 6:07 ` [PATCH 08/17] fs/hfsplus: Convert to mem*_page() ira.weiny
@ 2020-11-24 6:07 ` ira.weiny
2020-11-25 3:27 ` Chao Yu
2020-11-24 6:07 ` [PATCH 10/17] fs/freevxfs: Use memcpy_to_page() ira.weiny
` (8 subsequent siblings)
17 siblings, 1 reply; 32+ messages in thread
From: ira.weiny @ 2020-11-24 6:07 UTC (permalink / raw)
To: Andrew Morton
Cc: Ira Weiny, Jaegeuk Kim, Chao Yu, Thomas Gleixner, Dave Hansen,
Matthew Wilcox, Christoph Hellwig, Dan Williams, Al Viro,
Eric Biggers, Luis Chamberlain, Patrik Jakobsson, Jani Nikula,
Joonas Lahtinen, Rodrigo Vivi, David Howells, Chris Mason,
Josef Bacik, David Sterba, Steve French, Nicolas Pitre,
Martin K. Petersen, Brian King, Greg Kroah-Hartman,
Alexei Starovoitov, Daniel Borkmann, Jérôme Glisse,
Kirti Wankhede, linux-kernel, linux-fsdevel
From: Ira Weiny <ira.weiny@intel.com>
The new common function memcpy_page() provides exactly this
functionality. Remove the local f2fs_copy_page() and call memcpy_page()
instead.
Cc: Jaegeuk Kim <jaegeuk@kernel.org>
Cc: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
fs/f2fs/f2fs.h | 10 ----------
fs/f2fs/file.c | 3 ++-
2 files changed, 2 insertions(+), 11 deletions(-)
diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index cb700d797296..546dba7d7cc2 100644
--- a/fs/f2fs/f2fs.h
+++ b/fs/f2fs/f2fs.h
@@ -2428,16 +2428,6 @@ static inline struct page *f2fs_pagecache_get_page(
return pagecache_get_page(mapping, index, fgp_flags, gfp_mask);
}
-static inline void f2fs_copy_page(struct page *src, struct page *dst)
-{
- char *src_kaddr = kmap(src);
- char *dst_kaddr = kmap(dst);
-
- memcpy(dst_kaddr, src_kaddr, PAGE_SIZE);
- kunmap(dst);
- kunmap(src);
-}
-
static inline void f2fs_put_page(struct page *page, int unlock)
{
if (!page)
diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
index ee861c6d9ff0..c38aa186a7c6 100644
--- a/fs/f2fs/file.c
+++ b/fs/f2fs/file.c
@@ -17,6 +17,7 @@
#include <linux/uaccess.h>
#include <linux/mount.h>
#include <linux/pagevec.h>
+#include <linux/pagemap.h>
#include <linux/uio.h>
#include <linux/uuid.h>
#include <linux/file.h>
@@ -1234,7 +1235,7 @@ static int __clone_blkaddrs(struct inode *src_inode, struct inode *dst_inode,
f2fs_put_page(psrc, 1);
return PTR_ERR(pdst);
}
- f2fs_copy_page(psrc, pdst);
+ memcpy_page(pdst, 0, psrc, 0, PAGE_SIZE);
set_page_dirty(pdst);
f2fs_put_page(pdst, 1);
f2fs_put_page(psrc, 1);
--
2.28.0.rc0.12.gb6a658bd00c9
^ permalink raw reply related [flat|nested] 32+ messages in thread
* Re: [PATCH 09/17] fs/f2fs: Remove f2fs_copy_page()
2020-11-24 6:07 ` [PATCH 09/17] fs/f2fs: Remove f2fs_copy_page() ira.weiny
@ 2020-11-25 3:27 ` Chao Yu
0 siblings, 0 replies; 32+ messages in thread
From: Chao Yu @ 2020-11-25 3:27 UTC (permalink / raw)
To: ira.weiny, Andrew Morton
Cc: Jaegeuk Kim, Thomas Gleixner, Dave Hansen, Matthew Wilcox,
Christoph Hellwig, Dan Williams, Al Viro, Eric Biggers,
Luis Chamberlain, Patrik Jakobsson, Jani Nikula, Joonas Lahtinen,
Rodrigo Vivi, David Howells, Chris Mason, Josef Bacik,
David Sterba, Steve French, Nicolas Pitre, Martin K. Petersen,
Brian King, Greg Kroah-Hartman, Alexei Starovoitov,
Daniel Borkmann, Jérôme Glisse, Kirti Wankhede,
linux-kernel, linux-fsdevel
On 2020/11/24 14:07, ira.weiny@intel.com wrote:
> From: Ira Weiny <ira.weiny@intel.com>
>
> The new common function memcpy_page() provides exactly this
> functionality. Remove the local f2fs_copy_page() and call memcpy_page()
> instead.
>
> Cc: Jaegeuk Kim <jaegeuk@kernel.org>
> Cc: Chao Yu <yuchao0@huawei.com>
> Signed-off-by: Ira Weiny <ira.weiny@intel.com>
Acked-by: Chao Yu <yuchao0@huawei.com>
Thanks,
^ permalink raw reply [flat|nested] 32+ messages in thread
* [PATCH 10/17] fs/freevxfs: Use memcpy_to_page()
2020-11-24 6:07 [PATCH 00/17] kmap: Create mem*_page interfaces ira.weiny
` (8 preceding siblings ...)
2020-11-24 6:07 ` [PATCH 09/17] fs/f2fs: Remove f2fs_copy_page() ira.weiny
@ 2020-11-24 6:07 ` ira.weiny
2020-11-24 6:07 ` [PATCH 11/17] fs/reiserfs: Use memcpy_from_page() ira.weiny
` (7 subsequent siblings)
17 siblings, 0 replies; 32+ messages in thread
From: ira.weiny @ 2020-11-24 6:07 UTC (permalink / raw)
To: Andrew Morton
Cc: Ira Weiny, Christoph Hellwig, Thomas Gleixner, Dave Hansen,
Matthew Wilcox, Dan Williams, Al Viro, Eric Biggers,
Luis Chamberlain, Patrik Jakobsson, Jani Nikula, Joonas Lahtinen,
Rodrigo Vivi, David Howells, Chris Mason, Josef Bacik,
David Sterba, Steve French, Jaegeuk Kim, Chao Yu, Nicolas Pitre,
Martin K. Petersen, Brian King, Greg Kroah-Hartman,
Alexei Starovoitov, Daniel Borkmann, Jérôme Glisse,
Kirti Wankhede, linux-kernel, linux-fsdevel
From: Ira Weiny <ira.weiny@intel.com>
Remove kmap/memcpy/kunmap pattern in favor of the new memcpy_to_page()
Cc: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
fs/freevxfs/vxfs_immed.c | 6 +-----
1 file changed, 1 insertion(+), 5 deletions(-)
diff --git a/fs/freevxfs/vxfs_immed.c b/fs/freevxfs/vxfs_immed.c
index bfc780c682fb..d185fa67b82f 100644
--- a/fs/freevxfs/vxfs_immed.c
+++ b/fs/freevxfs/vxfs_immed.c
@@ -67,12 +67,8 @@ vxfs_immed_readpage(struct file *fp, struct page *pp)
{
struct vxfs_inode_info *vip = VXFS_INO(pp->mapping->host);
u_int64_t offset = (u_int64_t)pp->index << PAGE_SHIFT;
- caddr_t kaddr;
- kaddr = kmap(pp);
- memcpy(kaddr, vip->vii_immed.vi_immed + offset, PAGE_SIZE);
- kunmap(pp);
-
+ memcpy_to_page(pp, 0, vip->vii_immed.vi_immed + offset, PAGE_SIZE);
flush_dcache_page(pp);
SetPageUptodate(pp);
unlock_page(pp);
--
2.28.0.rc0.12.gb6a658bd00c9
^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCH 11/17] fs/reiserfs: Use memcpy_from_page()
2020-11-24 6:07 [PATCH 00/17] kmap: Create mem*_page interfaces ira.weiny
` (9 preceding siblings ...)
2020-11-24 6:07 ` [PATCH 10/17] fs/freevxfs: Use memcpy_to_page() ira.weiny
@ 2020-11-24 6:07 ` ira.weiny
2020-11-24 6:07 ` [PATCH 12/17] fs/cramfs: " ira.weiny
` (6 subsequent siblings)
17 siblings, 0 replies; 32+ messages in thread
From: ira.weiny @ 2020-11-24 6:07 UTC (permalink / raw)
To: Andrew Morton
Cc: Ira Weiny, Thomas Gleixner, Dave Hansen, Matthew Wilcox,
Christoph Hellwig, Dan Williams, Al Viro, Eric Biggers,
Luis Chamberlain, Patrik Jakobsson, Jani Nikula, Joonas Lahtinen,
Rodrigo Vivi, David Howells, Chris Mason, Josef Bacik,
David Sterba, Steve French, Jaegeuk Kim, Chao Yu, Nicolas Pitre,
Martin K. Petersen, Brian King, Greg Kroah-Hartman,
Alexei Starovoitov, Daniel Borkmann, Jérôme Glisse,
Kirti Wankhede, linux-kernel, linux-fsdevel
From: Ira Weiny <ira.weiny@intel.com>
Remove the open coding of kmap/memcpy/kunmap and use the new
memcpy_from_page() function.
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
fs/reiserfs/journal.c | 9 +++------
1 file changed, 3 insertions(+), 6 deletions(-)
diff --git a/fs/reiserfs/journal.c b/fs/reiserfs/journal.c
index e98f99338f8f..e288bbbe80ff 100644
--- a/fs/reiserfs/journal.c
+++ b/fs/reiserfs/journal.c
@@ -4184,7 +4184,6 @@ static int do_journal_end(struct reiserfs_transaction_handle *th, int flags)
/* copy all the real blocks into log area. dirty log blocks */
if (buffer_journaled(cn->bh)) {
struct buffer_head *tmp_bh;
- char *addr;
struct page *page;
tmp_bh =
journal_getblk(sb,
@@ -4194,11 +4193,9 @@ static int do_journal_end(struct reiserfs_transaction_handle *th, int flags)
SB_ONDISK_JOURNAL_SIZE(sb)));
set_buffer_uptodate(tmp_bh);
page = cn->bh->b_page;
- addr = kmap(page);
- memcpy(tmp_bh->b_data,
- addr + offset_in_page(cn->bh->b_data),
- cn->bh->b_size);
- kunmap(page);
+ memcpy_from_page(tmp_bh->b_data, page,
+ offset_in_page(cn->bh->b_data),
+ cn->bh->b_size);
mark_buffer_dirty(tmp_bh);
jindex++;
set_buffer_journal_dirty(cn->bh);
--
2.28.0.rc0.12.gb6a658bd00c9
^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCH 12/17] fs/cramfs: Use memcpy_from_page()
2020-11-24 6:07 [PATCH 00/17] kmap: Create mem*_page interfaces ira.weiny
` (10 preceding siblings ...)
2020-11-24 6:07 ` [PATCH 11/17] fs/reiserfs: Use memcpy_from_page() ira.weiny
@ 2020-11-24 6:07 ` ira.weiny
2020-11-24 15:20 ` Nicolas Pitre
2020-11-24 6:07 ` [PATCH 13/17] drivers/target: Convert to mem*_page() ira.weiny
` (5 subsequent siblings)
17 siblings, 1 reply; 32+ messages in thread
From: ira.weiny @ 2020-11-24 6:07 UTC (permalink / raw)
To: Andrew Morton
Cc: Ira Weiny, Nicolas Pitre, Thomas Gleixner, Dave Hansen,
Matthew Wilcox, Christoph Hellwig, Dan Williams, Al Viro,
Eric Biggers, Luis Chamberlain, Patrik Jakobsson, Jani Nikula,
Joonas Lahtinen, Rodrigo Vivi, David Howells, Chris Mason,
Josef Bacik, David Sterba, Steve French, Jaegeuk Kim, Chao Yu,
Martin K. Petersen, Brian King, Greg Kroah-Hartman,
Alexei Starovoitov, Daniel Borkmann, Jérôme Glisse,
Kirti Wankhede, linux-kernel, linux-fsdevel
From: Ira Weiny <ira.weiny@intel.com>
Remove open coded kmap/memcpy/kunmap and use memcpy_from_page() instead.
Cc: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
fs/cramfs/inode.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/fs/cramfs/inode.c b/fs/cramfs/inode.c
index 4b90cfd1ec36..996a3a32a01f 100644
--- a/fs/cramfs/inode.c
+++ b/fs/cramfs/inode.c
@@ -247,8 +247,7 @@ static void *cramfs_blkdev_read(struct super_block *sb, unsigned int offset,
struct page *page = pages[i];
if (page) {
- memcpy(data, kmap(page), PAGE_SIZE);
- kunmap(page);
+ memcpy_from_page(data, page, 0, PAGE_SIZE);
put_page(page);
} else
memset(data, 0, PAGE_SIZE);
--
2.28.0.rc0.12.gb6a658bd00c9
^ permalink raw reply related [flat|nested] 32+ messages in thread
* Re: [PATCH 12/17] fs/cramfs: Use memcpy_from_page()
2020-11-24 6:07 ` [PATCH 12/17] fs/cramfs: " ira.weiny
@ 2020-11-24 15:20 ` Nicolas Pitre
0 siblings, 0 replies; 32+ messages in thread
From: Nicolas Pitre @ 2020-11-24 15:20 UTC (permalink / raw)
To: Ira Weiny
Cc: Andrew Morton, Thomas Gleixner, Dave Hansen, Matthew Wilcox,
Christoph Hellwig, Dan Williams, Al Viro, Eric Biggers,
Luis Chamberlain, Patrik Jakobsson, Jani Nikula, Joonas Lahtinen,
Rodrigo Vivi, David Howells, Chris Mason, Josef Bacik,
David Sterba, Steve French, Jaegeuk Kim, Chao Yu,
Martin K. Petersen, Brian King, Greg Kroah-Hartman,
Alexei Starovoitov, Daniel Borkmann, Jérôme Glisse,
Kirti Wankhede, linux-kernel, linux-fsdevel
On Mon, 23 Nov 2020, ira.weiny@intel.com wrote:
> From: Ira Weiny <ira.weiny@intel.com>
>
> Remove open coded kmap/memcpy/kunmap and use memcpy_from_page() instead.
>
> Cc: Nicolas Pitre <nico@fluxnic.net>
> Signed-off-by: Ira Weiny <ira.weiny@intel.com>
Acked-by: Nicolas Pitre <nico@fluxnic.net>
> ---
> fs/cramfs/inode.c | 3 +--
> 1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/fs/cramfs/inode.c b/fs/cramfs/inode.c
> index 4b90cfd1ec36..996a3a32a01f 100644
> --- a/fs/cramfs/inode.c
> +++ b/fs/cramfs/inode.c
> @@ -247,8 +247,7 @@ static void *cramfs_blkdev_read(struct super_block *sb, unsigned int offset,
> struct page *page = pages[i];
>
> if (page) {
> - memcpy(data, kmap(page), PAGE_SIZE);
> - kunmap(page);
> + memcpy_from_page(data, page, 0, PAGE_SIZE);
> put_page(page);
> } else
> memset(data, 0, PAGE_SIZE);
> --
> 2.28.0.rc0.12.gb6a658bd00c9
>
>
^ permalink raw reply [flat|nested] 32+ messages in thread
* [PATCH 13/17] drivers/target: Convert to mem*_page()
2020-11-24 6:07 [PATCH 00/17] kmap: Create mem*_page interfaces ira.weiny
` (11 preceding siblings ...)
2020-11-24 6:07 ` [PATCH 12/17] fs/cramfs: " ira.weiny
@ 2020-11-24 6:07 ` ira.weiny
2020-11-24 6:07 ` [PATCH 14/17] drivers/scsi: Use memcpy_to_page() ira.weiny
` (4 subsequent siblings)
17 siblings, 0 replies; 32+ messages in thread
From: ira.weiny @ 2020-11-24 6:07 UTC (permalink / raw)
To: Andrew Morton
Cc: Ira Weiny, Martin K. Petersen, Thomas Gleixner, Dave Hansen,
Matthew Wilcox, Christoph Hellwig, Dan Williams, Al Viro,
Eric Biggers, Luis Chamberlain, Patrik Jakobsson, Jani Nikula,
Joonas Lahtinen, Rodrigo Vivi, David Howells, Chris Mason,
Josef Bacik, David Sterba, Steve French, Jaegeuk Kim, Chao Yu,
Nicolas Pitre, Brian King, Greg Kroah-Hartman,
Alexei Starovoitov, Daniel Borkmann, Jérôme Glisse,
Kirti Wankhede, linux-kernel, linux-fsdevel
From: Ira Weiny <ira.weiny@intel.com>
Remove the kmap/mem*()/kunmap pattern and use the new mem*_page()
functions.
Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
drivers/target/target_core_rd.c | 6 ++----
drivers/target/target_core_transport.c | 10 +++-------
2 files changed, 5 insertions(+), 11 deletions(-)
diff --git a/drivers/target/target_core_rd.c b/drivers/target/target_core_rd.c
index bf936bbeccfe..30bf0fcae519 100644
--- a/drivers/target/target_core_rd.c
+++ b/drivers/target/target_core_rd.c
@@ -18,6 +18,7 @@
#include <linux/scatterlist.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
+#include <linux/pagemap.h>
#include <scsi/scsi_proto.h>
#include <target/target_core_base.h>
@@ -117,7 +118,6 @@ static int rd_allocate_sgl_table(struct rd_dev *rd_dev, struct rd_dev_sg_table *
sizeof(struct scatterlist));
struct page *pg;
struct scatterlist *sg;
- unsigned char *p;
while (total_sg_needed) {
unsigned int chain_entry = 0;
@@ -159,9 +159,7 @@ static int rd_allocate_sgl_table(struct rd_dev *rd_dev, struct rd_dev_sg_table *
sg_assign_page(&sg[j], pg);
sg[j].length = PAGE_SIZE;
- p = kmap(pg);
- memset(p, init_payload, PAGE_SIZE);
- kunmap(pg);
+ memset_page(pg, init_payload, 0, PAGE_SIZE);
}
page_offset += sg_per_table;
diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
index ff26ab0a5f60..4fec5c728344 100644
--- a/drivers/target/target_core_transport.c
+++ b/drivers/target/target_core_transport.c
@@ -22,6 +22,7 @@
#include <linux/module.h>
#include <linux/ratelimit.h>
#include <linux/vmalloc.h>
+#include <linux/pagemap.h>
#include <asm/unaligned.h>
#include <net/sock.h>
#include <net/tcp.h>
@@ -1689,15 +1690,10 @@ int target_submit_cmd_map_sgls(struct se_cmd *se_cmd, struct se_session *se_sess
*/
if (!(se_cmd->se_cmd_flags & SCF_SCSI_DATA_CDB) &&
se_cmd->data_direction == DMA_FROM_DEVICE) {
- unsigned char *buf = NULL;
if (sgl)
- buf = kmap(sg_page(sgl)) + sgl->offset;
-
- if (buf) {
- memset(buf, 0, sgl->length);
- kunmap(sg_page(sgl));
- }
+ memzero_page(sg_page(sgl), sgl->offset,
+ sgl->length);
}
rc = transport_generic_map_mem_to_cmd(se_cmd, sgl, sgl_count,
--
2.28.0.rc0.12.gb6a658bd00c9
^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCH 14/17] drivers/scsi: Use memcpy_to_page()
2020-11-24 6:07 [PATCH 00/17] kmap: Create mem*_page interfaces ira.weiny
` (12 preceding siblings ...)
2020-11-24 6:07 ` [PATCH 13/17] drivers/target: Convert to mem*_page() ira.weiny
@ 2020-11-24 6:07 ` ira.weiny
2020-11-24 6:07 ` [PATCH 15/17] drivers/staging: Use memcpy_to/from_page() ira.weiny
` (3 subsequent siblings)
17 siblings, 0 replies; 32+ messages in thread
From: ira.weiny @ 2020-11-24 6:07 UTC (permalink / raw)
To: Andrew Morton
Cc: Ira Weiny, Brian King, Thomas Gleixner, Dave Hansen,
Matthew Wilcox, Christoph Hellwig, Dan Williams, Al Viro,
Eric Biggers, Luis Chamberlain, Patrik Jakobsson, Jani Nikula,
Joonas Lahtinen, Rodrigo Vivi, David Howells, Chris Mason,
Josef Bacik, David Sterba, Steve French, Jaegeuk Kim, Chao Yu,
Nicolas Pitre, Martin K. Petersen, Greg Kroah-Hartman,
Alexei Starovoitov, Daniel Borkmann, Jérôme Glisse,
Kirti Wankhede, linux-kernel, linux-fsdevel
From: Ira Weiny <ira.weiny@intel.com>
Remove kmap/mem*()/kunmap pattern and use memcpy_to_page()
Cc: Brian King <brking@us.ibm.com>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
drivers/scsi/ipr.c | 11 ++---------
1 file changed, 2 insertions(+), 9 deletions(-)
diff --git a/drivers/scsi/ipr.c b/drivers/scsi/ipr.c
index b0aa58d117cc..3cdd8db24270 100644
--- a/drivers/scsi/ipr.c
+++ b/drivers/scsi/ipr.c
@@ -3912,7 +3912,6 @@ static int ipr_copy_ucode_buffer(struct ipr_sglist *sglist,
{
int bsize_elem, i, result = 0;
struct scatterlist *sg;
- void *kaddr;
/* Determine the actual number of bytes per element */
bsize_elem = PAGE_SIZE * (1 << sglist->order);
@@ -3923,10 +3922,7 @@ static int ipr_copy_ucode_buffer(struct ipr_sglist *sglist,
buffer += bsize_elem) {
struct page *page = sg_page(sg);
- kaddr = kmap(page);
- memcpy(kaddr, buffer, bsize_elem);
- kunmap(page);
-
+ memcpy_to_page(page, 0, buffer, bsize_elem);
sg->length = bsize_elem;
if (result != 0) {
@@ -3938,10 +3934,7 @@ static int ipr_copy_ucode_buffer(struct ipr_sglist *sglist,
if (len % bsize_elem) {
struct page *page = sg_page(sg);
- kaddr = kmap(page);
- memcpy(kaddr, buffer, len % bsize_elem);
- kunmap(page);
-
+ memcpy_to_page(page, 0, buffer, len % bsize_elem);
sg->length = len % bsize_elem;
}
--
2.28.0.rc0.12.gb6a658bd00c9
^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCH 15/17] drivers/staging: Use memcpy_to/from_page()
2020-11-24 6:07 [PATCH 00/17] kmap: Create mem*_page interfaces ira.weiny
` (13 preceding siblings ...)
2020-11-24 6:07 ` [PATCH 14/17] drivers/scsi: Use memcpy_to_page() ira.weiny
@ 2020-11-24 6:07 ` ira.weiny
2020-11-24 6:07 ` [PATCH 16/17] lib: Use memcpy_to/from_page() ira.weiny
` (2 subsequent siblings)
17 siblings, 0 replies; 32+ messages in thread
From: ira.weiny @ 2020-11-24 6:07 UTC (permalink / raw)
To: Andrew Morton
Cc: Ira Weiny, Greg Kroah-Hartman, Thomas Gleixner, Dave Hansen,
Matthew Wilcox, Christoph Hellwig, Dan Williams, Al Viro,
Eric Biggers, Luis Chamberlain, Patrik Jakobsson, Jani Nikula,
Joonas Lahtinen, Rodrigo Vivi, David Howells, Chris Mason,
Josef Bacik, David Sterba, Steve French, Jaegeuk Kim, Chao Yu,
Nicolas Pitre, Martin K. Petersen, Brian King,
Alexei Starovoitov, Daniel Borkmann, Jérôme Glisse,
Kirti Wankhede, linux-kernel, linux-fsdevel
From: Ira Weiny <ira.weiny@intel.com>
Remove kmap/mem*()/kunmap pattern and use memcpy_to/from_page()
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
drivers/staging/rts5208/rtsx_transport.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/staging/rts5208/rtsx_transport.c b/drivers/staging/rts5208/rtsx_transport.c
index 909a3e663ef6..e0e52bae953e 100644
--- a/drivers/staging/rts5208/rtsx_transport.c
+++ b/drivers/staging/rts5208/rtsx_transport.c
@@ -92,13 +92,13 @@ unsigned int rtsx_stor_access_xfer_buf(unsigned char *buffer,
while (sglen > 0) {
unsigned int plen = min(sglen, (unsigned int)
PAGE_SIZE - poff);
- unsigned char *ptr = kmap(page);
if (dir == TO_XFER_BUF)
- memcpy(ptr + poff, buffer + cnt, plen);
+ memcpy_to_page(page, poff,
+ buffer + cnt, plen);
else
- memcpy(buffer + cnt, ptr + poff, plen);
- kunmap(page);
+ memcpy_from_page(buffer + cnt, page,
+ poff, plen);
/* Start at the beginning of the next page */
poff = 0;
--
2.28.0.rc0.12.gb6a658bd00c9
^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCH 16/17] lib: Use memcpy_to/from_page()
2020-11-24 6:07 [PATCH 00/17] kmap: Create mem*_page interfaces ira.weiny
` (14 preceding siblings ...)
2020-11-24 6:07 ` [PATCH 15/17] drivers/staging: Use memcpy_to/from_page() ira.weiny
@ 2020-11-24 6:07 ` ira.weiny
2020-11-24 6:07 ` [PATCH 17/17] samples: Use memcpy_to/from_page() ira.weiny
2020-12-04 10:18 ` [PATCH 04/17] fs/afs: Convert to memzero_page() David Howells
17 siblings, 0 replies; 32+ messages in thread
From: ira.weiny @ 2020-11-24 6:07 UTC (permalink / raw)
To: Andrew Morton
Cc: Ira Weiny, Alexei Starovoitov, Daniel Borkmann,
Jérôme Glisse, Thomas Gleixner, Dave Hansen,
Matthew Wilcox, Christoph Hellwig, Dan Williams, Al Viro,
Eric Biggers, Luis Chamberlain, Patrik Jakobsson, Jani Nikula,
Joonas Lahtinen, Rodrigo Vivi, David Howells, Chris Mason,
Josef Bacik, David Sterba, Steve French, Jaegeuk Kim, Chao Yu,
Nicolas Pitre, Martin K. Petersen, Brian King,
Greg Kroah-Hartman, Kirti Wankhede, linux-kernel, linux-fsdevel
From: Ira Weiny <ira.weiny@intel.com>
Remove kmap/mem*()/kunmap pattern and use memcpy_to/from_page()
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: "Jérôme Glisse" <jglisse@redhat.com>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
lib/test_bpf.c | 11 ++---------
lib/test_hmm.c | 10 ++--------
2 files changed, 4 insertions(+), 17 deletions(-)
diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index ca7d635bccd9..def048bc1c48 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -14,6 +14,7 @@
#include <linux/skbuff.h>
#include <linux/netdevice.h>
#include <linux/if_vlan.h>
+#include <linux/pagemap.h>
#include <linux/random.h>
#include <linux/highmem.h>
#include <linux/sched.h>
@@ -6499,25 +6500,17 @@ static void *generate_test_data(struct bpf_test *test, int sub)
* single fragment to the skb, filled with
* test->frag_data.
*/
- void *ptr;
-
page = alloc_page(GFP_KERNEL);
if (!page)
goto err_kfree_skb;
- ptr = kmap(page);
- if (!ptr)
- goto err_free_page;
- memcpy(ptr, test->frag_data, MAX_DATA);
- kunmap(page);
+ memcpy_to_page(page, 0, test->frag_data, MAX_DATA);
skb_add_rx_frag(skb, 0, page, 0, MAX_DATA, MAX_DATA);
}
return skb;
-err_free_page:
- __free_page(page);
err_kfree_skb:
kfree_skb(skb);
return NULL;
diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index 80a78877bd93..6a5fe7c4088b 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -321,16 +321,13 @@ static int dmirror_do_read(struct dmirror *dmirror, unsigned long start,
for (pfn = start >> PAGE_SHIFT; pfn < (end >> PAGE_SHIFT); pfn++) {
void *entry;
struct page *page;
- void *tmp;
entry = xa_load(&dmirror->pt, pfn);
page = xa_untag_pointer(entry);
if (!page)
return -ENOENT;
- tmp = kmap(page);
- memcpy(ptr, tmp, PAGE_SIZE);
- kunmap(page);
+ memcpy_from_page(ptr, page, 0, PAGE_SIZE);
ptr += PAGE_SIZE;
bounce->cpages++;
@@ -390,16 +387,13 @@ static int dmirror_do_write(struct dmirror *dmirror, unsigned long start,
for (pfn = start >> PAGE_SHIFT; pfn < (end >> PAGE_SHIFT); pfn++) {
void *entry;
struct page *page;
- void *tmp;
entry = xa_load(&dmirror->pt, pfn);
page = xa_untag_pointer(entry);
if (!page || xa_pointer_tag(entry) != DPT_XA_TAG_WRITE)
return -ENOENT;
- tmp = kmap(page);
- memcpy(tmp, ptr, PAGE_SIZE);
- kunmap(page);
+ memcpy_to_page(page, 0, ptr, PAGE_SIZE);
ptr += PAGE_SIZE;
bounce->cpages++;
--
2.28.0.rc0.12.gb6a658bd00c9
^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCH 17/17] samples: Use memcpy_to/from_page()
2020-11-24 6:07 [PATCH 00/17] kmap: Create mem*_page interfaces ira.weiny
` (15 preceding siblings ...)
2020-11-24 6:07 ` [PATCH 16/17] lib: Use mempcy_to/from_page() ira.weiny
@ 2020-11-24 6:07 ` ira.weiny
2020-12-04 10:18 ` [PATCH 04/17] fs/afs: Convert to memzero_page() David Howells
17 siblings, 0 replies; 32+ messages in thread
From: ira.weiny @ 2020-11-24 6:07 UTC (permalink / raw)
To: Andrew Morton
Cc: Ira Weiny, Kirti Wankhede, Thomas Gleixner, Dave Hansen,
Matthew Wilcox, Christoph Hellwig, Dan Williams, Al Viro,
Eric Biggers, Luis Chamberlain, Patrik Jakobsson, Jani Nikula,
Joonas Lahtinen, Rodrigo Vivi, David Howells, Chris Mason,
Josef Bacik, David Sterba, Steve French, Jaegeuk Kim, Chao Yu,
Nicolas Pitre, Martin K. Petersen, Brian King,
Greg Kroah-Hartman, Alexei Starovoitov, Daniel Borkmann,
Jérôme Glisse, linux-kernel, linux-fsdevel
From: Ira Weiny <ira.weiny@intel.com>
Remove kmap/mem*()/kunmap pattern and use memcpy_to/from_page()
Cc: Kirti Wankhede <kwankhede@nvidia.com>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
samples/vfio-mdev/mbochs.c | 8 +++-----
1 file changed, 3 insertions(+), 5 deletions(-)
diff --git a/samples/vfio-mdev/mbochs.c b/samples/vfio-mdev/mbochs.c
index e03068917273..54fe04f63c66 100644
--- a/samples/vfio-mdev/mbochs.c
+++ b/samples/vfio-mdev/mbochs.c
@@ -30,6 +30,7 @@
#include <linux/iommu.h>
#include <linux/sysfs.h>
#include <linux/mdev.h>
+#include <linux/pagemap.h>
#include <linux/pci.h>
#include <linux/dma-buf.h>
#include <linux/highmem.h>
@@ -442,7 +443,6 @@ static ssize_t mdev_access(struct mdev_device *mdev, char *buf, size_t count,
struct device *dev = mdev_dev(mdev);
struct page *pg;
loff_t poff;
- char *map;
int ret = 0;
mutex_lock(&mdev_state->ops_lock);
@@ -479,12 +479,10 @@ static ssize_t mdev_access(struct mdev_device *mdev, char *buf, size_t count,
pos -= MBOCHS_MMIO_BAR_OFFSET;
poff = pos & ~PAGE_MASK;
pg = __mbochs_get_page(mdev_state, pos >> PAGE_SHIFT);
- map = kmap(pg);
if (is_write)
- memcpy(map + poff, buf, count);
+ memcpy_to_page(pg, poff, buf, count);
else
- memcpy(buf, map + poff, count);
- kunmap(pg);
+ memcpy_from_page(buf, pg, poff, count);
put_page(pg);
} else {
--
2.28.0.rc0.12.gb6a658bd00c9
^ permalink raw reply related [flat|nested] 32+ messages in thread
* Re: [PATCH 04/17] fs/afs: Convert to memzero_page()
2020-11-24 6:07 [PATCH 00/17] kmap: Create mem*_page interfaces ira.weiny
` (16 preceding siblings ...)
2020-11-24 6:07 ` [PATCH 17/17] samples: Use memcpy_to/from_page() ira.weiny
@ 2020-12-04 10:18 ` David Howells
17 siblings, 0 replies; 32+ messages in thread
From: David Howells @ 2020-12-04 10:18 UTC (permalink / raw)
To: Ira Weiny
Cc: David Howells, Thomas Gleixner, Dave Hansen, Matthew Wilcox,
Christoph Hellwig, Dan Williams, Al Viro, Eric Biggers,
Luis Chamberlain, Patrik Jakobsson, Jani Nikula, Joonas Lahtinen,
Rodrigo Vivi, Chris Mason, Josef Bacik, David Sterba,
Steve French, Jaegeuk Kim, Chao Yu, Nicolas Pitre,
Martin K. Petersen, Brian King, Greg Kroah-Hartman,
Alexei Starovoitov, Daniel Borkmann, Jérôme Glisse,
Kirti Wankhede, linux-kernel, linux-fsdevel
ira.weiny@intel.com wrote:
> Convert the kmap()/memset()/kunmap() pattern to memzero_page().
>
> Cc: David Howells <dhowells@redhat.com>
> Signed-off-by: Ira Weiny <ira.weiny@intel.com>
Acked-by: David Howells <dhowells@redhat.com>
^ permalink raw reply [flat|nested] 32+ messages in thread