linux-kernel.vger.kernel.org archive mirror
* [PATCH V2 0/8] btrfs: convert kmaps to core page calls
@ 2021-02-10  6:22 ira.weiny
  2021-02-10  6:22 ` [PATCH V2 1/8] mm/highmem: Lift memcpy_[to|from]_page to core ira.weiny
                   ` (8 more replies)
  0 siblings, 9 replies; 31+ messages in thread
From: ira.weiny @ 2021-02-10  6:22 UTC (permalink / raw)
  To: Andrew Morton, David Sterba
  Cc: Ira Weiny, clm, josef, Christoph Hellwig, linux-kernel, linux-fsdevel

From: Ira Weiny <ira.weiny@intel.com>

Changes from V1:
	Rework commit messages because they were very weak
	Change 'fs/btrfs: X' to 'btrfs: x'
		https://lore.kernel.org/lkml/20210209151442.GU1993@suse.cz/
	Per Andrew
		Split out changes to highmem.h
			The addition of the memcpy, memmove, and memset page functions
			The change from kmap_atomic to kmap_local_page
			The addition of BUG_ON
			The conversion of memzero_page to zero_user in iov_iter
		Change BUG_ON to VM_BUG_ON
	While we are refactoring, adjust the line length down per Chaitanya


There are many places where kmap/<operation>/kunmap patterns occur.  We lift a
couple of these patterns into common core functions and use them, along with
existing core functions, in the btrfs file system.  At the same time we convert
those core functions to use kmap_local_page(), which is more efficient for
these calls.
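
For reference, the shape of the conversion looks roughly like this (an
illustrative sketch only, not taken from any single patch; 'page', 'offset',
'buf', and 'len' stand in for whatever a call site already has):

	/* Before: open coded mapping around a simple copy */
	char *kaddr = kmap_atomic(page);

	memcpy(kaddr + offset, buf, len);
	kunmap_atomic(kaddr);

	/* After: one helper which uses kmap_local_page() internally */
	memcpy_to_page(page, offset, buf, len);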

Per the conversation on V1 it looks like Andrew would like this to go through
the btrfs tree.  I think that is fine.  The other users of
memcpy_[to|from]_page are probably not ready and I believe could be taken in an
early rc after David submits.

Is that ok with you David?

Thanks,
Ira

Ira Weiny (8):
  mm/highmem: Lift memcpy_[to|from]_page to core
  mm/highmem: Convert memcpy_[to|from]_page() to kmap_local_page()
  mm/highmem: Introduce memcpy_page(), memmove_page(), and memset_page()
  mm/highmem: Add VM_BUG_ON() to mem*_page() calls
  iov_iter: Remove memzero_page() in favor of zero_user()
  btrfs: use memcpy_[to|from]_page() and kmap_local_page()
  btrfs: use copy_highpage() instead of 2 kmaps()
  btrfs: convert to zero_user()

 fs/btrfs/compression.c  | 11 +++-----
 fs/btrfs/extent_io.c    | 22 +++-------------
 fs/btrfs/inode.c        | 33 ++++++++----------------
 fs/btrfs/lzo.c          |  4 +--
 fs/btrfs/raid56.c       | 10 +-------
 fs/btrfs/reflink.c      | 12 ++-------
 fs/btrfs/send.c         |  7 ++----
 fs/btrfs/zlib.c         | 10 +++-----
 fs/btrfs/zstd.c         | 11 +++-----
 include/linux/highmem.h | 56 +++++++++++++++++++++++++++++++++++++++++
 lib/iov_iter.c          | 26 +++----------------
 11 files changed, 89 insertions(+), 113 deletions(-)

-- 
2.28.0.rc0.12.gb6a658bd00c9


^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH V2 1/8] mm/highmem: Lift memcpy_[to|from]_page to core
  2021-02-10  6:22 [PATCH V2 0/8] btrfs: convert kmaps to core page calls ira.weiny
@ 2021-02-10  6:22 ` ira.weiny
  2021-02-10  6:33   ` Chaitanya Kulkarni
  2021-02-10 12:53   ` Christoph Hellwig
  2021-02-10  6:22 ` [PATCH V2 2/8] mm/highmem: Convert memcpy_[to|from]_page() to kmap_local_page() ira.weiny
                   ` (7 subsequent siblings)
  8 siblings, 2 replies; 31+ messages in thread
From: ira.weiny @ 2021-02-10  6:22 UTC (permalink / raw)
  To: Andrew Morton, David Sterba
  Cc: Ira Weiny, Boris Pismenny, Or Gerlitz, Dave Hansen,
	Matthew Wilcox, Christoph Hellwig, Dan Williams, Al Viro,
	Eric Biggers, clm, josef, linux-kernel, linux-fsdevel

From: Ira Weiny <ira.weiny@intel.com>

Working through a conversion to call kmap_local_page() instead of
kmap() revealed many places where the kmap/memcpy/kunmap pattern
occurred.

Eric Biggers, Matthew Wilcox, Christoph Hellwig, Dan Williams, and Al
Viro all suggested putting this code into helper functions.  Al Viro
further pointed out that these functions already existed in the iov_iter
code.[1]

Various locations for the lifted functions were considered.

Headers like mm.h or string.h seem ok but don't really portray the
functionality well.  pagemap.h made some sense but is for page cache
functionality.[2]

Another alternative would be to create a new header for the promoted
memcpy functions, but it masks the fact that these are designed to copy
to/from pages using the kernel direct mappings and complicates matters
with a new header.

Placing these functions in 'highmem.h' is suboptimal, especially with the
changes being proposed to the functionality of kmap.  From a caller's
perspective, including/using 'highmem.h' implies that the functions
defined in that header are only required when highmem is in use, which is
increasingly not the case with modern processors.  However, highmem.h is
where all the current functions like this reside (zero_user(),
clear_highpage(), clear_user_highpage(), copy_user_highpage(), and
copy_highpage()), so it makes the most sense even though it is
distasteful for some.[3]

Lift memcpy_to_page() and memcpy_from_page() to highmem.h.

[1] https://lore.kernel.org/lkml/20201013200149.GI3576660@ZenIV.linux.org.uk/
    https://lore.kernel.org/lkml/20201013112544.GA5249@infradead.org/

[2] https://lore.kernel.org/lkml/20201208122316.GH7338@casper.infradead.org/

[3] https://lore.kernel.org/lkml/20201013200149.GI3576660@ZenIV.linux.org.uk/#t
    https://lore.kernel.org/lkml/20201208163814.GN1563847@iweiny-DESK2.sc.intel.com/
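
As an illustration of the intended use (a sketch only; 'page', 'offset', and
the buffer are placeholders, and the caller is expected to keep offset + len
within PAGE_SIZE):

	char buf[64];

	/* Read bytes out of a page into a stack buffer ... */
	memcpy_from_page(buf, page, offset, sizeof(buf));

	/* ... and write them back, with no explicit kmap/kunmap. */
	memcpy_to_page(page, offset, buf, sizeof(buf));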

Cc: Boris Pismenny <borisp@mellanox.com>
Cc: Or Gerlitz <gerlitz.or@gmail.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Suggested-by: Matthew Wilcox <willy@infradead.org>
Suggested-by: Christoph Hellwig <hch@infradead.org>
Suggested-by: Dan Williams <dan.j.williams@intel.com>
Suggested-by: Al Viro <viro@zeniv.linux.org.uk>
Suggested-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>

---
Changes from v1 btrfs series:
	https://lore.kernel.org/lkml/20210205232304.1670522-2-ira.weiny@intel.com/
	Split out the BUG_ON()'s per Andrew
	Split out the change to kmap_local_page() per Andrew
	Split out the addition of memcpy_page, memmove_page, and memset_page
	While we are refactoring adjust the line length down per Chaitanya
		https://lore.kernel.org/lkml/BYAPR04MB49655E5BDB24A108FEFFE9C486B09@BYAPR04MB4965.namprd04.prod.outlook.com/

Changes for V4:
	Update commit message to say kmap_local_page() since
	kmap_thread() is no longer valid

Changes for V3:
	From Matthew Wilcox
		Move calls to highmem.h
		Add BUG_ON()

Changes for V2:
	From Thomas Gleixner
		Change kmap_atomic() to kmap_local_page() after basing
		on tip/core/mm
	From Joonas Lahtinen
		Reverse offset/val in memset_page()
---
 include/linux/highmem.h | 18 ++++++++++++++++++
 lib/iov_iter.c          | 14 --------------
 2 files changed, 18 insertions(+), 14 deletions(-)

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index d2c70d3772a3..736b6a9f144d 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -276,4 +276,22 @@ static inline void copy_highpage(struct page *to, struct page *from)
 
 #endif
 
+static inline void memcpy_from_page(char *to, struct page *page,
+				    size_t offset, size_t len)
+{
+	char *from = kmap_atomic(page);
+
+	memcpy(to, from + offset, len);
+	kunmap_atomic(from);
+}
+
+static inline void memcpy_to_page(struct page *page, size_t offset,
+				  const char *from, size_t len)
+{
+	char *to = kmap_atomic(page);
+
+	memcpy(to + offset, from, len);
+	kunmap_atomic(to);
+}
+
 #endif /* _LINUX_HIGHMEM_H */
diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index a21e6a5792c5..9889e9903cdf 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -466,20 +466,6 @@ void iov_iter_init(struct iov_iter *i, unsigned int direction,
 }
 EXPORT_SYMBOL(iov_iter_init);
 
-static void memcpy_from_page(char *to, struct page *page, size_t offset, size_t len)
-{
-	char *from = kmap_atomic(page);
-	memcpy(to, from + offset, len);
-	kunmap_atomic(from);
-}
-
-static void memcpy_to_page(struct page *page, size_t offset, const char *from, size_t len)
-{
-	char *to = kmap_atomic(page);
-	memcpy(to + offset, from, len);
-	kunmap_atomic(to);
-}
-
 static void memzero_page(struct page *page, size_t offset, size_t len)
 {
 	char *addr = kmap_atomic(page);
-- 
2.28.0.rc0.12.gb6a658bd00c9


^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH V2 2/8] mm/highmem: Convert memcpy_[to|from]_page() to kmap_local_page()
  2021-02-10  6:22 [PATCH V2 0/8] btrfs: convert kmaps to core page calls ira.weiny
  2021-02-10  6:22 ` [PATCH V2 1/8] mm/highmem: Lift memcpy_[to|from]_page to core ira.weiny
@ 2021-02-10  6:22 ` ira.weiny
  2021-02-10  6:37   ` Chaitanya Kulkarni
  2021-02-10 12:54   ` Christoph Hellwig
  2021-02-10  6:22 ` [PATCH V2 3/8] mm/highmem: Introduce memcpy_page(), memmove_page(), and memset_page() ira.weiny
                   ` (6 subsequent siblings)
  8 siblings, 2 replies; 31+ messages in thread
From: ira.weiny @ 2021-02-10  6:22 UTC (permalink / raw)
  To: Andrew Morton, David Sterba
  Cc: Ira Weiny, Christoph Hellwig, clm, josef, linux-kernel, linux-fsdevel

From: Ira Weiny <ira.weiny@intel.com>

kmap_local_page() is more efficient and is well suited for these calls.
Convert memcpy_[to|from]_page() from kmap_atomic() to kmap_local_page().

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
New for V2
---
 include/linux/highmem.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 736b6a9f144d..c17a175fe5fe 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -279,19 +279,19 @@ static inline void copy_highpage(struct page *to, struct page *from)
 static inline void memcpy_from_page(char *to, struct page *page,
 				    size_t offset, size_t len)
 {
-	char *from = kmap_atomic(page);
+	char *from = kmap_local_page(page);
 
 	memcpy(to, from + offset, len);
-	kunmap_atomic(from);
+	kunmap_local(from);
 }
 
 static inline void memcpy_to_page(struct page *page, size_t offset,
 				  const char *from, size_t len)
 {
-	char *to = kmap_atomic(page);
+	char *to = kmap_local_page(page);
 
 	memcpy(to + offset, from, len);
-	kunmap_atomic(to);
+	kunmap_local(to);
 }
 
 #endif /* _LINUX_HIGHMEM_H */
-- 
2.28.0.rc0.12.gb6a658bd00c9


^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH V2 3/8] mm/highmem: Introduce memcpy_page(), memmove_page(), and memset_page()
  2021-02-10  6:22 [PATCH V2 0/8] btrfs: convert kmaps to core page calls ira.weiny
  2021-02-10  6:22 ` [PATCH V2 1/8] mm/highmem: Lift memcpy_[to|from]_page to core ira.weiny
  2021-02-10  6:22 ` [PATCH V2 2/8] mm/highmem: Convert memcpy_[to|from]_page() to kmap_local_page() ira.weiny
@ 2021-02-10  6:22 ` ira.weiny
  2021-02-10  6:46   ` Chaitanya Kulkarni
  2021-02-10 12:54   ` Christoph Hellwig
  2021-02-10  6:22 ` [PATCH V2 4/8] mm/highmem: Add VM_BUG_ON() to mem*_page() calls ira.weiny
                   ` (5 subsequent siblings)
  8 siblings, 2 replies; 31+ messages in thread
From: ira.weiny @ 2021-02-10  6:22 UTC (permalink / raw)
  To: Andrew Morton, David Sterba
  Cc: Ira Weiny, Christoph Hellwig, clm, josef, linux-kernel, linux-fsdevel

From: Ira Weiny <ira.weiny@intel.com>

Three more common kmap patterns are kmap/memcpy/kunmap, kmap/memmove/kunmap,
and kmap/memset/kunmap.

Add helper functions for those patterns which use kmap_local_page().
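
For illustration (not part of the diff below; the pages, offsets, and length
are placeholders), typical call sites collapse to:

	/* Copy len bytes from one page to another */
	memcpy_page(dst_page, dst_off, src_page, src_off, len);

	/* Fill the tail of a page with a byte value (here 0) */
	memset_page(page, offset, 0, PAGE_SIZE - offset);

The two-page helpers unmap in the reverse order of mapping because
kmap_local_page() mappings must be released in stack (LIFO) order.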

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>

---
New for V2
---
 include/linux/highmem.h | 33 +++++++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+)

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index c17a175fe5fe..0b5d89621cb9 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -276,6 +276,39 @@ static inline void copy_highpage(struct page *to, struct page *from)
 
 #endif
 
+static inline void memcpy_page(struct page *dst_page, size_t dst_off,
+			       struct page *src_page, size_t src_off,
+			       size_t len)
+{
+	char *dst = kmap_local_page(dst_page);
+	char *src = kmap_local_page(src_page);
+
+	memcpy(dst + dst_off, src + src_off, len);
+	kunmap_local(src);
+	kunmap_local(dst);
+}
+
+static inline void memmove_page(struct page *dst_page, size_t dst_off,
+			       struct page *src_page, size_t src_off,
+			       size_t len)
+{
+	char *dst = kmap_local_page(dst_page);
+	char *src = kmap_local_page(src_page);
+
+	memmove(dst + dst_off, src + src_off, len);
+	kunmap_local(src);
+	kunmap_local(dst);
+}
+
+static inline void memset_page(struct page *page, size_t offset, int val,
+			       size_t len)
+{
+	char *addr = kmap_local_page(page);
+
+	memset(addr + offset, val, len);
+	kunmap_local(addr);
+}
+
 static inline void memcpy_from_page(char *to, struct page *page,
 				    size_t offset, size_t len)
 {
-- 
2.28.0.rc0.12.gb6a658bd00c9


^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH V2 4/8] mm/highmem: Add VM_BUG_ON() to mem*_page() calls
  2021-02-10  6:22 [PATCH V2 0/8] btrfs: convert kmaps to core page calls ira.weiny
                   ` (2 preceding siblings ...)
  2021-02-10  6:22 ` [PATCH V2 3/8] mm/highmem: Introduce memcpy_page(), memmove_page(), and memset_page() ira.weiny
@ 2021-02-10  6:22 ` ira.weiny
  2021-02-10  6:57   ` Chaitanya Kulkarni
                     ` (2 more replies)
  2021-02-10  6:22 ` [PATCH V2 5/8] iov_iter: Remove memzero_page() in favor of zero_user() ira.weiny
                   ` (4 subsequent siblings)
  8 siblings, 3 replies; 31+ messages in thread
From: ira.weiny @ 2021-02-10  6:22 UTC (permalink / raw)
  To: Andrew Morton, David Sterba
  Cc: Ira Weiny, Christoph Hellwig, Matthew Wilcox, clm, josef,
	linux-kernel, linux-fsdevel

From: Ira Weiny <ira.weiny@intel.com>

Add VM_BUG_ON bounds checks to ensure the newly lifted and created page
memory operations do not result in corrupted data in neighbor pages and
to make them consistent with zero_user().[1][2]

[1] https://lore.kernel.org/lkml/20201210053502.GS1563847@iweiny-DESK2.sc.intel.com/
[2] https://lore.kernel.org/lkml/20210209110931.00f00e47d9a0529fcee2ff01@linux-foundation.org/
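
Concretely, the check catches a caller which would otherwise scribble past the
end of the mapped page, for example (hypothetical call, not from this series):

	/* offset + len exceeds PAGE_SIZE; the new bounds check fires
	 * instead of silently corrupting the following page.
	 */
	memcpy_to_page(page, PAGE_SIZE - 8, buf, 16);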

Cc: Christoph Hellwig <hch@infradead.org>
Suggested-by: Matthew Wilcox <willy@infradead.org>
Suggested-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>

---
New for V2
---
 include/linux/highmem.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 0b5d89621cb9..520bbc67e67f 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -283,6 +283,7 @@ static inline void memcpy_page(struct page *dst_page, size_t dst_off,
 	char *dst = kmap_local_page(dst_page);
 	char *src = kmap_local_page(src_page);
 
+	BUG_ON(dst_off + len > PAGE_SIZE || src_off + len > PAGE_SIZE);
 	memcpy(dst + dst_off, src + src_off, len);
 	kunmap_local(src);
 	kunmap_local(dst);
@@ -295,6 +296,7 @@ static inline void memmove_page(struct page *dst_page, size_t dst_off,
 	char *dst = kmap_local_page(dst_page);
 	char *src = kmap_local_page(src_page);
 
+	BUG_ON(dst_off + len > PAGE_SIZE || src_off + len > PAGE_SIZE);
 	memmove(dst + dst_off, src + src_off, len);
 	kunmap_local(src);
 	kunmap_local(dst);
@@ -305,6 +307,7 @@ static inline void memset_page(struct page *page, size_t offset, int val,
 {
 	char *addr = kmap_local_page(page);
 
+	BUG_ON(offset + len > PAGE_SIZE);
 	memset(addr + offset, val, len);
 	kunmap_local(addr);
 }
@@ -314,6 +317,7 @@ static inline void memcpy_from_page(char *to, struct page *page,
 {
 	char *from = kmap_local_page(page);
 
+	BUG_ON(offset + len > PAGE_SIZE);
 	memcpy(to, from + offset, len);
 	kunmap_local(from);
 }
@@ -323,6 +327,7 @@ static inline void memcpy_to_page(struct page *page, size_t offset,
 {
 	char *to = kmap_local_page(page);
 
+	BUG_ON(offset + len > PAGE_SIZE);
 	memcpy(to + offset, from, len);
 	kunmap_local(to);
 }
-- 
2.28.0.rc0.12.gb6a658bd00c9


^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH V2 5/8] iov_iter: Remove memzero_page() in favor of zero_user()
  2021-02-10  6:22 [PATCH V2 0/8] btrfs: convert kmaps to core page calls ira.weiny
                   ` (3 preceding siblings ...)
  2021-02-10  6:22 ` [PATCH V2 4/8] mm/highmem: Add VM_BUG_ON() to mem*_page() calls ira.weiny
@ 2021-02-10  6:22 ` ira.weiny
  2021-02-10 12:55   ` Christoph Hellwig
  2021-02-10  6:22 ` [PATCH V2 6/8] btrfs: use memcpy_[to|from]_page() and kmap_local_page() ira.weiny
                   ` (3 subsequent siblings)
  8 siblings, 1 reply; 31+ messages in thread
From: ira.weiny @ 2021-02-10  6:22 UTC (permalink / raw)
  To: Andrew Morton, David Sterba
  Cc: Ira Weiny, Christoph Hellwig, clm, josef, linux-kernel, linux-fsdevel

From: Ira Weiny <ira.weiny@intel.com>

zero_user() is already defined with the same interface and contains the
same code pattern as memzero_page().  Remove memzero_page() and use the
already defined common function zero_user().
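
For reference, zero_user() in highmem.h is (roughly, quoting from memory) a
thin wrapper:

	static inline void zero_user(struct page *page,
				     unsigned start, unsigned size)
	{
		zero_user_segments(page, start, start + size, 0, 0);
	}

so the (page, offset, length) interface matches memzero_page() one to one.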

To: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>

---
New for V2
---
 lib/iov_iter.c | 12 +++---------
 1 file changed, 3 insertions(+), 9 deletions(-)

diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index 9889e9903cdf..aa0d03b33a1e 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -5,6 +5,7 @@
 #include <linux/fault-inject-usercopy.h>
 #include <linux/uio.h>
 #include <linux/pagemap.h>
+#include <linux/highmem.h>
 #include <linux/slab.h>
 #include <linux/vmalloc.h>
 #include <linux/splice.h>
@@ -466,13 +467,6 @@ void iov_iter_init(struct iov_iter *i, unsigned int direction,
 }
 EXPORT_SYMBOL(iov_iter_init);
 
-static void memzero_page(struct page *page, size_t offset, size_t len)
-{
-	char *addr = kmap_atomic(page);
-	memset(addr + offset, 0, len);
-	kunmap_atomic(addr);
-}
-
 static inline bool allocated(struct pipe_buffer *buf)
 {
 	return buf->ops == &default_pipe_buf_ops;
@@ -950,7 +944,7 @@ static size_t pipe_zero(size_t bytes, struct iov_iter *i)
 
 	do {
 		size_t chunk = min_t(size_t, n, PAGE_SIZE - off);
-		memzero_page(pipe->bufs[i_head & p_mask].page, off, chunk);
+		zero_user(pipe->bufs[i_head & p_mask].page, off, chunk);
 		i->head = i_head;
 		i->iov_offset = off + chunk;
 		n -= chunk;
@@ -967,7 +961,7 @@ size_t iov_iter_zero(size_t bytes, struct iov_iter *i)
 		return pipe_zero(bytes, i);
 	iterate_and_advance(i, bytes, v,
 		clear_user(v.iov_base, v.iov_len),
-		memzero_page(v.bv_page, v.bv_offset, v.bv_len),
+		zero_user(v.bv_page, v.bv_offset, v.bv_len),
 		memset(v.iov_base, 0, v.iov_len)
 	)
 
-- 
2.28.0.rc0.12.gb6a658bd00c9


^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH V2 6/8] btrfs: use memcpy_[to|from]_page() and kmap_local_page()
  2021-02-10  6:22 [PATCH V2 0/8] btrfs: convert kmaps to core page calls ira.weiny
                   ` (4 preceding siblings ...)
  2021-02-10  6:22 ` [PATCH V2 5/8] iov_iter: Remove memzero_page() in favor of zero_user() ira.weiny
@ 2021-02-10  6:22 ` ira.weiny
  2021-02-10 12:56   ` Christoph Hellwig
  2021-02-10  6:22 ` [PATCH V2 7/8] btrfs: use copy_highpage() instead of 2 kmaps() ira.weiny
                   ` (2 subsequent siblings)
  8 siblings, 1 reply; 31+ messages in thread
From: ira.weiny @ 2021-02-10  6:22 UTC (permalink / raw)
  To: Andrew Morton, David Sterba
  Cc: Ira Weiny, clm, josef, Christoph Hellwig, linux-kernel, linux-fsdevel

From: Ira Weiny <ira.weiny@intel.com>

There are many places where the pattern kmap/memcpy/kunmap occurs.

This pattern was lifted to the core common functions
memcpy_[to|from]_page().

Use these new functions to reduce the code, eliminate direct uses of
kmap, and leverage the new core functions' use of kmap_local_page().

Also, there is one place where a kmap/memcpy is followed by an
optional memset.  There we leave the kmap open coded to avoid remapping
the page, but use kmap_local_page() directly.

Development of this patch was aided by the coccinelle script:

// <smpl>
// SPDX-License-Identifier: GPL-2.0-only
// Find kmap/memcpy/kunmap pattern and replace with memcpy*page calls
//
// NOTE: Offsets and other expressions may be more complex than what the script
// will automatically generate.  Therefore a catchall rule is provided to find
// the pattern which then must be evaluated by hand.
//
// Confidence: Low
// Copyright: (C) 2021 Intel Corporation
// URL: http://coccinelle.lip6.fr/
// Comments:
// Options:

//
// simple memcpy version
//
@ memcpy_rule1 @
expression page, T, F, B, Off;
identifier ptr;
type VP;
@@

(
-VP ptr = kmap(page);
|
-ptr = kmap(page);
|
-VP ptr = kmap_atomic(page);
|
-ptr = kmap_atomic(page);
)
<+...
(
-memcpy(ptr + Off, F, B);
+memcpy_to_page(page, Off, F, B);
|
-memcpy(ptr, F, B);
+memcpy_to_page(page, 0, F, B);
|
-memcpy(T, ptr + Off, B);
+memcpy_from_page(T, page, Off, B);
|
-memcpy(T, ptr, B);
+memcpy_from_page(T, page, 0, B);
)
...+>
(
-kunmap(page);
|
-kunmap_atomic(ptr);
)

// Remove any pointers left unused
@
depends on memcpy_rule1
@
identifier memcpy_rule1.ptr;
type VP, VP1;
@@

-VP ptr;
	... when != ptr;
? VP1 ptr;

//
// Some callers kmap without a temp pointer
//
@ memcpy_rule2 @
expression page, T, Off, F, B;
@@

<+...
(
-memcpy(kmap(page) + Off, F, B);
+memcpy_to_page(page, Off, F, B);
|
-memcpy(kmap(page), F, B);
+memcpy_to_page(page, 0, F, B);
|
-memcpy(T, kmap(page) + Off, B);
+memcpy_from_page(T, page, Off, B);
|
-memcpy(T, kmap(page), B);
+memcpy_from_page(T, page, 0, B);
)
...+>
-kunmap(page);
// No need for the ptr variable removal

//
// Catch all
//
@ memcpy_rule3 @
expression page;
expression GenTo, GenFrom, GenSize;
identifier ptr;
type VP;
@@

(
-VP ptr = kmap(page);
|
-ptr = kmap(page);
|
-VP ptr = kmap_atomic(page);
|
-ptr = kmap_atomic(page);
)
<+...
(
//
// Some call sites have complex expressions within the memcpy
// match a catch all to be evaluated by hand.
//
-memcpy(GenTo, GenFrom, GenSize);
+memcpy_to_pageExtra(page, GenTo, GenFrom, GenSize);
+memcpy_from_pageExtra(GenTo, page, GenFrom, GenSize);
)
...+>
(
-kunmap(page);
|
-kunmap_atomic(ptr);
)

// Remove any pointers left unused
@
depends on memcpy_rule3
@
identifier memcpy_rule3.ptr;
type VP, VP1;
@@

-VP ptr;
	... when != ptr;
? VP1 ptr;

// </smpl>

Signed-off-by: Ira Weiny <ira.weiny@intel.com>

---
Changes from v1:
	Update commit message per David
		https://lore.kernel.org/lkml/20210209151442.GU1993@suse.cz/
---
 fs/btrfs/compression.c | 6 ++----
 fs/btrfs/lzo.c         | 4 ++--
 fs/btrfs/reflink.c     | 6 +-----
 fs/btrfs/send.c        | 7 ++-----
 fs/btrfs/zlib.c        | 5 ++---
 fs/btrfs/zstd.c        | 6 ++----
 6 files changed, 11 insertions(+), 23 deletions(-)

diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 5ae3fa0386b7..047b632b4139 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -1231,7 +1231,6 @@ int btrfs_decompress_buf2page(const char *buf, unsigned long buf_start,
 	unsigned long prev_start_byte;
 	unsigned long working_bytes = total_out - buf_start;
 	unsigned long bytes;
-	char *kaddr;
 	struct bio_vec bvec = bio_iter_iovec(bio, bio->bi_iter);
 
 	/*
@@ -1262,9 +1261,8 @@ int btrfs_decompress_buf2page(const char *buf, unsigned long buf_start,
 				PAGE_SIZE - (buf_offset % PAGE_SIZE));
 		bytes = min(bytes, working_bytes);
 
-		kaddr = kmap_atomic(bvec.bv_page);
-		memcpy(kaddr + bvec.bv_offset, buf + buf_offset, bytes);
-		kunmap_atomic(kaddr);
+		memcpy_to_page(bvec.bv_page, bvec.bv_offset, buf + buf_offset,
+			       bytes);
 		flush_dcache_page(bvec.bv_page);
 
 		buf_offset += bytes;
diff --git a/fs/btrfs/lzo.c b/fs/btrfs/lzo.c
index aa9cd11f4b78..9084a950dc09 100644
--- a/fs/btrfs/lzo.c
+++ b/fs/btrfs/lzo.c
@@ -467,7 +467,7 @@ int lzo_decompress(struct list_head *ws, unsigned char *data_in,
 	destlen = min_t(unsigned long, destlen, PAGE_SIZE);
 	bytes = min_t(unsigned long, destlen, out_len - start_byte);
 
-	kaddr = kmap_atomic(dest_page);
+	kaddr = kmap_local_page(dest_page);
 	memcpy(kaddr, workspace->buf + start_byte, bytes);
 
 	/*
@@ -477,7 +477,7 @@ int lzo_decompress(struct list_head *ws, unsigned char *data_in,
 	 */
 	if (bytes < destlen)
 		memset(kaddr+bytes, 0, destlen-bytes);
-	kunmap_atomic(kaddr);
+	kunmap_local(kaddr);
 out:
 	return ret;
 }
diff --git a/fs/btrfs/reflink.c b/fs/btrfs/reflink.c
index b03e7891394e..74c62e49c0c9 100644
--- a/fs/btrfs/reflink.c
+++ b/fs/btrfs/reflink.c
@@ -103,12 +103,8 @@ static int copy_inline_to_page(struct btrfs_inode *inode,
 	set_bit(BTRFS_INODE_NO_DELALLOC_FLUSH, &inode->runtime_flags);
 
 	if (comp_type == BTRFS_COMPRESS_NONE) {
-		char *map;
-
-		map = kmap(page);
-		memcpy(map, data_start, datal);
+		memcpy_to_page(page, 0, data_start, datal);
 		flush_dcache_page(page);
-		kunmap(page);
 	} else {
 		ret = btrfs_decompress(comp_type, data_start, page, 0,
 				       inline_size, datal);
diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
index 78a35374d492..83982b3b7057 100644
--- a/fs/btrfs/send.c
+++ b/fs/btrfs/send.c
@@ -4948,7 +4948,6 @@ static int put_file_data(struct send_ctx *sctx, u64 offset, u32 len)
 	struct btrfs_fs_info *fs_info = root->fs_info;
 	struct inode *inode;
 	struct page *page;
-	char *addr;
 	pgoff_t index = offset >> PAGE_SHIFT;
 	pgoff_t last_index;
 	unsigned pg_offset = offset_in_page(offset);
@@ -5001,10 +5000,8 @@ static int put_file_data(struct send_ctx *sctx, u64 offset, u32 len)
 			}
 		}
 
-		addr = kmap(page);
-		memcpy(sctx->send_buf + sctx->send_size, addr + pg_offset,
-		       cur_len);
-		kunmap(page);
+		memcpy_from_page(sctx->send_buf + sctx->send_size, page,
+				 pg_offset, cur_len);
 		unlock_page(page);
 		put_page(page);
 		index++;
diff --git a/fs/btrfs/zlib.c b/fs/btrfs/zlib.c
index 05615a1099db..d524acf7b3e5 100644
--- a/fs/btrfs/zlib.c
+++ b/fs/btrfs/zlib.c
@@ -432,9 +432,8 @@ int zlib_decompress(struct list_head *ws, unsigned char *data_in,
 			    PAGE_SIZE - (buf_offset % PAGE_SIZE));
 		bytes = min(bytes, bytes_left);
 
-		kaddr = kmap_atomic(dest_page);
-		memcpy(kaddr + pg_offset, workspace->buf + buf_offset, bytes);
-		kunmap_atomic(kaddr);
+		memcpy_to_page(dest_page, pg_offset,
+			       workspace->buf + buf_offset, bytes);
 
 		pg_offset += bytes;
 		bytes_left -= bytes;
diff --git a/fs/btrfs/zstd.c b/fs/btrfs/zstd.c
index 9a4871636c6c..8e9626d63976 100644
--- a/fs/btrfs/zstd.c
+++ b/fs/btrfs/zstd.c
@@ -688,10 +688,8 @@ int zstd_decompress(struct list_head *ws, unsigned char *data_in,
 		bytes = min_t(unsigned long, destlen - pg_offset,
 				workspace->out_buf.size - buf_offset);
 
-		kaddr = kmap_atomic(dest_page);
-		memcpy(kaddr + pg_offset, workspace->out_buf.dst + buf_offset,
-				bytes);
-		kunmap_atomic(kaddr);
+		memcpy_to_page(dest_page, pg_offset,
+			       workspace->out_buf.dst + buf_offset, bytes);
 
 		pg_offset += bytes;
 	}
-- 
2.28.0.rc0.12.gb6a658bd00c9


^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH V2 7/8] btrfs: use copy_highpage() instead of 2 kmaps()
  2021-02-10  6:22 [PATCH V2 0/8] btrfs: convert kmaps to core page calls ira.weiny
                   ` (5 preceding siblings ...)
  2021-02-10  6:22 ` [PATCH V2 6/8] btrfs: use memcpy_[to|from]_page() and kmap_local_page() ira.weiny
@ 2021-02-10  6:22 ` ira.weiny
  2021-02-10 12:56   ` Christoph Hellwig
  2021-02-10  6:22 ` [PATCH V2 8/8] btrfs: convert to zero_user() ira.weiny
  2021-02-11 19:38 ` [PATCH V2 0/8] btrfs: convert kmaps to core page calls David Sterba
  8 siblings, 1 reply; 31+ messages in thread
From: ira.weiny @ 2021-02-10  6:22 UTC (permalink / raw)
  To: Andrew Morton, David Sterba
  Cc: Ira Weiny, clm, josef, Christoph Hellwig, linux-kernel, linux-fsdevel

From: Ira Weiny <ira.weiny@intel.com>

There are many places where kmap/copy_page/kunmap patterns occur.

This pattern exists in the core common function copy_highpage().

Use copy_highpage() to avoid open coding the use of kmap and to leverage
the core function's use of kmap_local_page().

Development of this patch was aided by the following coccinelle script:

// <smpl>
// SPDX-License-Identifier: GPL-2.0-only
// Find kmap/copypage/kunmap pattern and replace with copy_highpage calls
//
// NOTE: The expressions in the copy page version of this kmap pattern are
// overly complex and so these all need individual attention.
//
// Confidence: Low
// Copyright: (C) 2021 Intel Corporation
// URL: http://coccinelle.lip6.fr/
// Comments:
// Options:

//
// Then a copy_page where we have 2 pages involved.
//
@ copy_page_rule @
expression page, page2, To, From, Size;
identifier ptr, ptr2;
type VP, VP2;
@@

/* kmap */
(
-VP ptr = kmap(page);
...
-VP2 ptr2 = kmap(page2);
|
-VP ptr = kmap_atomic(page);
...
-VP2 ptr2 = kmap_atomic(page2);
|
-ptr = kmap(page);
...
-ptr2 = kmap(page2);
|
-ptr = kmap_atomic(page);
...
-ptr2 = kmap_atomic(page2);
)

// 1 or more copy versions of the entire page
<+...
(
-copy_page(To, From);
+copy_highpage(To, From);
|
-memmove(To, From, Size);
+memmoveExtra(To, From, Size);
)
...+>

/* kunmap */
(
-kunmap(page2);
...
-kunmap(page);
|
-kunmap(page);
...
-kunmap(page2);
|
-kmap_atomic(ptr2);
...
-kmap_atomic(ptr);
)

// Remove any pointers left unused
@
depends on copy_page_rule
@
identifier copy_page_rule.ptr;
identifier copy_page_rule.ptr2;
type VP, VP1;
type VP2, VP21;
@@

-VP ptr;
	... when != ptr;
? VP1 ptr;
-VP2 ptr2;
	... when != ptr2;
? VP21 ptr2;

// </smpl>

Signed-off-by: Ira Weiny <ira.weiny@intel.com>

---
Changes from v1:
	Update commit message per David
		https://lore.kernel.org/lkml/20210209151442.GU1993@suse.cz/
---
 fs/btrfs/raid56.c | 10 +---------
 1 file changed, 1 insertion(+), 9 deletions(-)

diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
index dbf52f1a379d..7c3f6dc918c1 100644
--- a/fs/btrfs/raid56.c
+++ b/fs/btrfs/raid56.c
@@ -250,8 +250,6 @@ int btrfs_alloc_stripe_hash_table(struct btrfs_fs_info *info)
 static void cache_rbio_pages(struct btrfs_raid_bio *rbio)
 {
 	int i;
-	char *s;
-	char *d;
 	int ret;
 
 	ret = alloc_rbio_pages(rbio);
@@ -262,13 +260,7 @@ static void cache_rbio_pages(struct btrfs_raid_bio *rbio)
 		if (!rbio->bio_pages[i])
 			continue;
 
-		s = kmap(rbio->bio_pages[i]);
-		d = kmap(rbio->stripe_pages[i]);
-
-		copy_page(d, s);
-
-		kunmap(rbio->bio_pages[i]);
-		kunmap(rbio->stripe_pages[i]);
+		copy_highpage(rbio->stripe_pages[i], rbio->bio_pages[i]);
 		SetPageUptodate(rbio->stripe_pages[i]);
 	}
 	set_bit(RBIO_CACHE_READY_BIT, &rbio->flags);
-- 
2.28.0.rc0.12.gb6a658bd00c9


^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH V2 8/8] btrfs: convert to zero_user()
  2021-02-10  6:22 [PATCH V2 0/8] btrfs: convert kmaps to core page calls ira.weiny
                   ` (6 preceding siblings ...)
  2021-02-10  6:22 ` [PATCH V2 7/8] btrfs: use copy_highpage() instead of 2 kmaps() ira.weiny
@ 2021-02-10  6:22 ` ira.weiny
  2021-02-10 12:57   ` Christoph Hellwig
  2021-02-11 19:38 ` [PATCH V2 0/8] btrfs: convert kmaps to core page calls David Sterba
  8 siblings, 1 reply; 31+ messages in thread
From: ira.weiny @ 2021-02-10  6:22 UTC (permalink / raw)
  To: Andrew Morton, David Sterba
  Cc: Ira Weiny, clm, josef, Christoph Hellwig, linux-kernel, linux-fsdevel

From: Ira Weiny <ira.weiny@intel.com>

There are many places where kmap/memset/kunmap patterns occur.

This pattern exists in the core as zero_user().

Use zero_user() to eliminate direct uses of kmap and leverage the core
function's use of kmap_local_page().

The development of this patch was aided by the following coccinelle
script:

// <smpl>
// SPDX-License-Identifier: GPL-2.0-only
// Find kmap/memset/kunmap pattern and replace with memset*page calls
//
// NOTE: Offsets and other expressions may be more complex than what the script
// will automatically generate.  Therefore a catchall rule is provided to find
// the pattern which then must be evaluated by hand.
//
// Confidence: Low
// Copyright: (C) 2021 Intel Corporation
// URL: http://coccinelle.lip6.fr/
// Comments:
// Options:

//
// Then the memset pattern
//
@ memset_rule1 @
expression page, V, L, Off;
identifier ptr;
type VP;
@@

(
-VP ptr = kmap(page);
|
-ptr = kmap(page);
|
-VP ptr = kmap_atomic(page);
|
-ptr = kmap_atomic(page);
)
<+...
(
-memset(ptr, 0, L);
+zero_user(page, 0, L);
|
-memset(ptr + Off, 0, L);
+zero_user(page, Off, L);
|
-memset(ptr, V, L);
+memset_page(page, 0, V, L);
|
-memset(ptr + Off, V, L);
+memset_page(page, Off, V, L);
)
...+>
(
-kunmap(page);
|
-kunmap_atomic(ptr);
)

// Remove any pointers left unused
@
depends on memset_rule1
@
identifier memset_rule1.ptr;
type VP, VP1;
@@

-VP ptr;
	... when != ptr;
? VP1 ptr;

//
// Catch all
//
@ memset_rule2 @
expression page;
identifier ptr;
expression GenTo, GenSize, GenValue;
type VP;
@@

(
-VP ptr = kmap(page);
|
-ptr = kmap(page);
|
-VP ptr = kmap_atomic(page);
|
-ptr = kmap_atomic(page);
)
<+...
(
//
// Some call sites have complex expressions within the memset/memcpy
// The follow are catch alls which need to be evaluated by hand.
//
-memset(GenTo, 0, GenSize);
+zero_userExtra(page, GenTo, GenSize);
|
-memset(GenTo, GenValue, GenSize);
+memset_pageExtra(page, GenValue, GenTo, GenSize);
)
...+>
(
-kunmap(page);
|
-kunmap_atomic(ptr);
)

// Remove any pointers left unused
@
depends on memset_rule2
@
identifier memset_rule2.ptr;
type VP, VP1;
@@

-VP ptr;
	... when != ptr;
? VP1 ptr;

// </smpl>

Signed-off-by: Ira Weiny <ira.weiny@intel.com>

---
Changes from v1:
	Update commit message per David
		https://lore.kernel.org/lkml/20210209151442.GU1993@suse.cz/
---
 fs/btrfs/compression.c |  5 +----
 fs/btrfs/extent_io.c   | 22 ++++------------------
 fs/btrfs/inode.c       | 33 ++++++++++-----------------------
 fs/btrfs/reflink.c     |  6 +-----
 fs/btrfs/zlib.c        |  5 +----
 fs/btrfs/zstd.c        |  5 +----
 6 files changed, 18 insertions(+), 58 deletions(-)

diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 047b632b4139..a219dcdb749e 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -567,16 +567,13 @@ static noinline int add_ra_bio_pages(struct inode *inode,
 		free_extent_map(em);
 
 		if (page->index == end_index) {
-			char *userpage;
 			size_t zero_offset = offset_in_page(isize);
 
 			if (zero_offset) {
 				int zeros;
 				zeros = PAGE_SIZE - zero_offset;
-				userpage = kmap_atomic(page);
-				memset(userpage + zero_offset, 0, zeros);
+				zero_user(page, zero_offset, zeros);
 				flush_dcache_page(page);
-				kunmap_atomic(userpage);
 			}
 		}
 
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index c9cee458e001..bdc9389bff59 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -3229,15 +3229,12 @@ int btrfs_do_readpage(struct page *page, struct extent_map **em_cached,
 	}
 
 	if (page->index == last_byte >> PAGE_SHIFT) {
-		char *userpage;
 		size_t zero_offset = offset_in_page(last_byte);
 
 		if (zero_offset) {
 			iosize = PAGE_SIZE - zero_offset;
-			userpage = kmap_atomic(page);
-			memset(userpage + zero_offset, 0, iosize);
+			zero_user(page, zero_offset, iosize);
 			flush_dcache_page(page);
-			kunmap_atomic(userpage);
 		}
 	}
 	while (cur <= end) {
@@ -3245,14 +3242,11 @@ int btrfs_do_readpage(struct page *page, struct extent_map **em_cached,
 		u64 offset;
 
 		if (cur >= last_byte) {
-			char *userpage;
 			struct extent_state *cached = NULL;
 
 			iosize = PAGE_SIZE - pg_offset;
-			userpage = kmap_atomic(page);
-			memset(userpage + pg_offset, 0, iosize);
+			zero_user(page, pg_offset, iosize);
 			flush_dcache_page(page);
-			kunmap_atomic(userpage);
 			set_extent_uptodate(tree, cur, cur + iosize - 1,
 					    &cached, GFP_NOFS);
 			unlock_extent_cached(tree, cur,
@@ -3334,13 +3328,10 @@ int btrfs_do_readpage(struct page *page, struct extent_map **em_cached,
 
 		/* we've found a hole, just zero and go on */
 		if (block_start == EXTENT_MAP_HOLE) {
-			char *userpage;
 			struct extent_state *cached = NULL;
 
-			userpage = kmap_atomic(page);
-			memset(userpage + pg_offset, 0, iosize);
+			zero_user(page, pg_offset, iosize);
 			flush_dcache_page(page);
-			kunmap_atomic(userpage);
 
 			set_extent_uptodate(tree, cur, cur + iosize - 1,
 					    &cached, GFP_NOFS);
@@ -3654,12 +3645,7 @@ static int __extent_writepage(struct page *page, struct writeback_control *wbc,
 	}
 
 	if (page->index == end_index) {
-		char *userpage;
-
-		userpage = kmap_atomic(page);
-		memset(userpage + pg_offset, 0,
-		       PAGE_SIZE - pg_offset);
-		kunmap_atomic(userpage);
+		zero_user(page, pg_offset, PAGE_SIZE - pg_offset);
 		flush_dcache_page(page);
 	}
 
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index a8e0a6b038d3..641f21a11722 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -640,17 +640,12 @@ static noinline int compress_file_range(struct async_chunk *async_chunk)
 		if (!ret) {
 			unsigned long offset = offset_in_page(total_compressed);
 			struct page *page = pages[nr_pages - 1];
-			char *kaddr;
 
 			/* zero the tail end of the last page, we might be
 			 * sending it down to disk
 			 */
-			if (offset) {
-				kaddr = kmap_atomic(page);
-				memset(kaddr + offset, 0,
-				       PAGE_SIZE - offset);
-				kunmap_atomic(kaddr);
-			}
+			if (offset)
+				zero_user(page, offset, PAGE_SIZE - offset);
 			will_compress = 1;
 		}
 	}
@@ -4675,7 +4670,6 @@ int btrfs_truncate_block(struct btrfs_inode *inode, loff_t from, loff_t len,
 	struct btrfs_ordered_extent *ordered;
 	struct extent_state *cached_state = NULL;
 	struct extent_changeset *data_reserved = NULL;
-	char *kaddr;
 	bool only_release_metadata = false;
 	u32 blocksize = fs_info->sectorsize;
 	pgoff_t index = from >> PAGE_SHIFT;
@@ -4765,15 +4759,13 @@ int btrfs_truncate_block(struct btrfs_inode *inode, loff_t from, loff_t len,
 	if (offset != blocksize) {
 		if (!len)
 			len = blocksize - offset;
-		kaddr = kmap(page);
 		if (front)
-			memset(kaddr + (block_start - page_offset(page)),
-				0, offset);
+			zero_user(page, (block_start - page_offset(page)),
+				  offset);
 		else
-			memset(kaddr + (block_start - page_offset(page)) +  offset,
-				0, len);
+			zero_user(page, (block_start - page_offset(page)) + offset,
+				  len);
 		flush_dcache_page(page);
-		kunmap(page);
 	}
 	ClearPageChecked(page);
 	set_page_dirty(page);
@@ -6660,11 +6652,9 @@ static noinline int uncompress_inline(struct btrfs_path *path,
 	 * cover that region here.
 	 */
 
-	if (max_size + pg_offset < PAGE_SIZE) {
-		char *map = kmap(page);
-		memset(map + pg_offset + max_size, 0, PAGE_SIZE - max_size - pg_offset);
-		kunmap(page);
-	}
+	if (max_size + pg_offset < PAGE_SIZE)
+		zero_user(page,  pg_offset + max_size,
+			  PAGE_SIZE - max_size - pg_offset);
 	kfree(tmp);
 	return ret;
 }
@@ -8303,7 +8293,6 @@ vm_fault_t btrfs_page_mkwrite(struct vm_fault *vmf)
 	struct btrfs_ordered_extent *ordered;
 	struct extent_state *cached_state = NULL;
 	struct extent_changeset *data_reserved = NULL;
-	char *kaddr;
 	unsigned long zero_start;
 	loff_t size;
 	vm_fault_t ret;
@@ -8410,10 +8399,8 @@ vm_fault_t btrfs_page_mkwrite(struct vm_fault *vmf)
 		zero_start = PAGE_SIZE;
 
 	if (zero_start != PAGE_SIZE) {
-		kaddr = kmap(page);
-		memset(kaddr + zero_start, 0, PAGE_SIZE - zero_start);
+		zero_user(page, zero_start, PAGE_SIZE - zero_start);
 		flush_dcache_page(page);
-		kunmap(page);
 	}
 	ClearPageChecked(page);
 	set_page_dirty(page);
diff --git a/fs/btrfs/reflink.c b/fs/btrfs/reflink.c
index 74c62e49c0c9..c052c4878670 100644
--- a/fs/btrfs/reflink.c
+++ b/fs/btrfs/reflink.c
@@ -126,12 +126,8 @@ static int copy_inline_to_page(struct btrfs_inode *inode,
 	 * So what's in the range [500, 4095] corresponds to zeroes.
 	 */
 	if (datal < block_size) {
-		char *map;
-
-		map = kmap(page);
-		memset(map + datal, 0, block_size - datal);
+		zero_user(page, datal, block_size - datal);
 		flush_dcache_page(page);
-		kunmap(page);
 	}
 
 	SetPageUptodate(page);
diff --git a/fs/btrfs/zlib.c b/fs/btrfs/zlib.c
index d524acf7b3e5..fd612a509463 100644
--- a/fs/btrfs/zlib.c
+++ b/fs/btrfs/zlib.c
@@ -375,7 +375,6 @@ int zlib_decompress(struct list_head *ws, unsigned char *data_in,
 	unsigned long bytes_left;
 	unsigned long total_out = 0;
 	unsigned long pg_offset = 0;
-	char *kaddr;
 
 	destlen = min_t(unsigned long, destlen, PAGE_SIZE);
 	bytes_left = destlen;
@@ -455,9 +454,7 @@ int zlib_decompress(struct list_head *ws, unsigned char *data_in,
 	 * end of the inline extent (destlen) to the end of the page
 	 */
 	if (pg_offset < destlen) {
-		kaddr = kmap_atomic(dest_page);
-		memset(kaddr + pg_offset, 0, destlen - pg_offset);
-		kunmap_atomic(kaddr);
+		zero_user(dest_page, pg_offset, destlen - pg_offset);
 	}
 	return ret;
 }
diff --git a/fs/btrfs/zstd.c b/fs/btrfs/zstd.c
index 8e9626d63976..b6f687b8a3da 100644
--- a/fs/btrfs/zstd.c
+++ b/fs/btrfs/zstd.c
@@ -631,7 +631,6 @@ int zstd_decompress(struct list_head *ws, unsigned char *data_in,
 	size_t ret2;
 	unsigned long total_out = 0;
 	unsigned long pg_offset = 0;
-	char *kaddr;
 
 	stream = ZSTD_initDStream(
 			ZSTD_BTRFS_MAX_INPUT, workspace->mem, workspace->size);
@@ -696,9 +695,7 @@ int zstd_decompress(struct list_head *ws, unsigned char *data_in,
 	ret = 0;
 finish:
 	if (pg_offset < destlen) {
-		kaddr = kmap_atomic(dest_page);
-		memset(kaddr + pg_offset, 0, destlen - pg_offset);
-		kunmap_atomic(kaddr);
+		zero_user(dest_page, pg_offset, destlen - pg_offset);
 	}
 	return ret;
 }
-- 
2.28.0.rc0.12.gb6a658bd00c9


^ permalink raw reply related	[flat|nested] 31+ messages in thread

* Re: [PATCH V2 1/8] mm/highmem: Lift memcpy_[to|from]_page to core
  2021-02-10  6:22 ` [PATCH V2 1/8] mm/highmem: Lift memcpy_[to|from]_page to core ira.weiny
@ 2021-02-10  6:33   ` Chaitanya Kulkarni
  2021-02-10 12:53   ` Christoph Hellwig
  1 sibling, 0 replies; 31+ messages in thread
From: Chaitanya Kulkarni @ 2021-02-10  6:33 UTC (permalink / raw)
  To: ira.weiny, Andrew Morton, David Sterba
  Cc: Boris Pismenny, Or Gerlitz, Dave Hansen, Matthew Wilcox, hch,
	Dan Williams, Al Viro, Eric Biggers, clm, josef, linux-kernel,
	linux-fsdevel

On 2/9/21 22:25, ira.weiny@intel.com wrote:
> From: Ira Weiny <ira.weiny@intel.com>
>
> Working through a conversion to a call kmap_local_page() instead of
> kmap() revealed many places where the pattern kmap/memcpy/kunmap
> occurred.
>
> Eric Biggers, Matthew Wilcox, Christoph Hellwig, Dan Williams, and Al
> Viro all suggested putting this code into helper functions.  Al Viro
> further pointed out that these functions already existed in the iov_iter
> code.[1]
>
> Various locations for the lifted functions were considered.
>
> Headers like mm.h or string.h seem ok but don't really portray the
> functionality well.  pagemap.h made some sense but is for page cache
> functionality.[2]
>
> Another alternative would be to create a new header for the promoted
> memcpy functions, but it masks the fact that these are designed to copy
> to/from pages using the kernel direct mappings and complicates matters
> with a new header.
>
> Placing these functions in 'highmem.h' is suboptimal especially with the
> changes being proposed in the functionality of kmap.  From a caller
> perspective including/using 'highmem.h' implies that the functions
> defined in that header are only required when highmem is in use which is
> increasingly not the case with modern processors.  However, highmem.h is
> where all the current functions like this reside (zero_user(),
> clear_highpage(), clear_user_highpage(), copy_user_highpage(), and
> copy_highpage()).  So it makes the most sense even though it is
> distasteful for some.[3]
>
> Lift memcpy_to_page() and memcpy_from_page() to pagemap.h.
>
> [1] https://lore.kernel.org/lkml/20201013200149.GI3576660@ZenIV.linux.org.uk/
>     https://lore.kernel.org/lkml/20201013112544.GA5249@infradead.org/
>
> [2] https://lore.kernel.org/lkml/20201208122316.GH7338@casper.infradead.org/
>
> [3] https://lore.kernel.org/lkml/20201013200149.GI3576660@ZenIV.linux.org.uk/#t
>     https://lore.kernel.org/lkml/20201208163814.GN1563847@iweiny-DESK2.sc.intel.com/
>
> Cc: Boris Pismenny <borisp@mellanox.com>
> Cc: Or Gerlitz <gerlitz.or@gmail.com>
> Cc: Dave Hansen <dave.hansen@intel.com>
> Suggested-by: Matthew Wilcox <willy@infradead.org>
> Suggested-by: Christoph Hellwig <hch@infradead.org>
> Suggested-by: Dan Williams <dan.j.williams@intel.com>
> Suggested-by: Al Viro <viro@zeniv.linux.org.uk>
> Suggested-by: Eric Biggers <ebiggers@kernel.org>
> Signed-off-by: Ira Weiny <ira.weiny@intel.com>

Thanks for adding a new line in the new calls after variable declaration.
Looks good.

Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH V2 2/8] mm/highmem: Convert memcpy_[to|from]_page() to kmap_local_page()
  2021-02-10  6:22 ` [PATCH V2 2/8] mm/highmem: Convert memcpy_[to|from]_page() to kmap_local_page() ira.weiny
@ 2021-02-10  6:37   ` Chaitanya Kulkarni
  2021-02-10 12:54   ` Christoph Hellwig
  1 sibling, 0 replies; 31+ messages in thread
From: Chaitanya Kulkarni @ 2021-02-10  6:37 UTC (permalink / raw)
  To: ira.weiny, Andrew Morton, David Sterba
  Cc: hch, clm, josef, linux-kernel, linux-fsdevel

On 2/9/21 22:25, ira.weiny@intel.com wrote:
> From: Ira Weiny <ira.weiny@intel.com>
>
> kmap_local_page() is more efficient and is well suited for these calls.
> Convert the kmap() to kmap_local_page()
>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Christoph Hellwig <hch@infradead.org>
> Signed-off-by: Ira Weiny <ira.weiny@intel.com>
Looks good.

Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>





^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH V2 3/8] mm/highmem: Introduce memcpy_page(), memmove_page(), and memset_page()
  2021-02-10  6:22 ` [PATCH V2 3/8] mm/highmem: Introduce memcpy_page(), memmove_page(), and memset_page() ira.weiny
@ 2021-02-10  6:46   ` Chaitanya Kulkarni
  2021-02-10 12:54   ` Christoph Hellwig
  1 sibling, 0 replies; 31+ messages in thread
From: Chaitanya Kulkarni @ 2021-02-10  6:46 UTC (permalink / raw)
  To: ira.weiny, Andrew Morton, David Sterba
  Cc: hch, clm, josef, linux-kernel, linux-fsdevel

On 2/9/21 22:25, ira.weiny@intel.com wrote:
> From: Ira Weiny <ira.weiny@intel.com>
>
> 3 more common kmap patterns are kmap/memcpy/kunmap, kmap/memmove/kunmap.
> and kmap/memset/kunmap.
>
> Add helper functions for those patterns which use kmap_local_page().
>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Christoph Hellwig <hch@infradead.org>
> Signed-off-by: Ira Weiny <ira.weiny@intel.com>
Looks good.

Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH V2 4/8] mm/highmem: Add VM_BUG_ON() to mem*_page() calls
  2021-02-10  6:22 ` [PATCH V2 4/8] mm/highmem: Add VM_BUG_ON() to mem*_page() calls ira.weiny
@ 2021-02-10  6:57   ` Chaitanya Kulkarni
  2021-02-10 16:31     ` Ira Weiny
  2021-02-10 12:55   ` Christoph Hellwig
  2021-02-10 17:49   ` [PATCH V2.1] " ira.weiny
  2 siblings, 1 reply; 31+ messages in thread
From: Chaitanya Kulkarni @ 2021-02-10  6:57 UTC (permalink / raw)
  To: ira.weiny, Andrew Morton, David Sterba
  Cc: hch, Matthew Wilcox, clm, josef, linux-kernel, linux-fsdevel

On 2/9/21 22:25, ira.weiny@intel.com wrote:
> From: Ira Weiny <ira.weiny@intel.com>
>
> Add VM_BUG_ON bounds checks to ensure the newly lifted and created page
> memory operations do not result in corrupted data in neighbor pages and
> to make them consistent with zero_user().[1][2]
>
I did not understand this, in my tree :-

zero_user()
 -> zero_user_segments()

which uses BUG_ON(), while the commit log says add VM_BUG_ON().  Isn't that
inconsistent with what is there in zero_user_segments(), which uses BUG_ON()?

Also, this patch uses BUG_ON(), which doesn't match the commit log that says
add VM_BUG_ON().

Did I interpret the commit log wrong ?

[1]
 void zero_user_segments(struct page *page, unsigned start1, unsigned end1,
365                 unsigned start2, unsigned end2)
366 {
367         unsigned int i;
368
369         BUG_ON(end1 > page_size(page) || end2 > page_size(page));
370
371         for (i = 0; i < compound_nr(page); i++) {
372                 void *kaddr = NULL;
373
374                 if (start1 < PAGE_SIZE || start2 < PAGE_SIZE)
375                         kaddr = kmap_atomic(page + i);
376
377                 if (start1 >= PAGE_SIZE) {
378                         start1 -= PAGE_SIZE;
379                         end1 -= PAGE_SIZE;
380                 } else {
381                         unsigned this_end = min_t(unsigned, end1, PAGE_SIZE);
382
383                         if (end1 > start1)
384                                 memset(kaddr + start1, 0, this_end - start1);
385                         end1 -= this_end;
386                         start1 = 0;
387                 }
388
389                 if (start2 >= PAGE_SIZE) {
390                         start2 -= PAGE_SIZE;
391                         end2 -= PAGE_SIZE;
392                 } else {
393                         unsigned this_end = min_t(unsigned, end2, PAGE_SIZE);
394
395                         if (end2 > start2)
396                                 memset(kaddr + start2, 0, this_end - start2);
397                         end2 -= this_end;
398                         start2 = 0;
399                 }
400        
401                 if (kaddr) {
402                         kunmap_atomic(kaddr);
403                         flush_dcache_page(page + i);
404                 }
405        
406                 if (!end1 && !end2)
407                         break;
408         }
409        
410         BUG_ON((start1 | start2 | end1 | end2) != 0);
411 }
412 EXPORT_SYMBOL(zero_user_segments);




^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH V2 1/8] mm/highmem: Lift memcpy_[to|from]_page to core
  2021-02-10  6:22 ` [PATCH V2 1/8] mm/highmem: Lift memcpy_[to|from]_page to core ira.weiny
  2021-02-10  6:33   ` Chaitanya Kulkarni
@ 2021-02-10 12:53   ` Christoph Hellwig
  1 sibling, 0 replies; 31+ messages in thread
From: Christoph Hellwig @ 2021-02-10 12:53 UTC (permalink / raw)
  To: ira.weiny
  Cc: Andrew Morton, David Sterba, Boris Pismenny, Or Gerlitz,
	Dave Hansen, Matthew Wilcox, Christoph Hellwig, Dan Williams,
	Al Viro, Eric Biggers, clm, josef, linux-kernel, linux-fsdevel

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH V2 2/8] mm/highmem: Convert memcpy_[to|from]_page() to kmap_local_page()
  2021-02-10  6:22 ` [PATCH V2 2/8] mm/highmem: Convert memcpy_[to|from]_page() to kmap_local_page() ira.weiny
  2021-02-10  6:37   ` Chaitanya Kulkarni
@ 2021-02-10 12:54   ` Christoph Hellwig
  1 sibling, 0 replies; 31+ messages in thread
From: Christoph Hellwig @ 2021-02-10 12:54 UTC (permalink / raw)
  To: ira.weiny
  Cc: Andrew Morton, David Sterba, Christoph Hellwig, clm, josef,
	linux-kernel, linux-fsdevel

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH V2 3/8] mm/highmem: Introduce memcpy_page(), memmove_page(), and memset_page()
  2021-02-10  6:22 ` [PATCH V2 3/8] mm/highmem: Introduce memcpy_page(), memmove_page(), and memset_page() ira.weiny
  2021-02-10  6:46   ` Chaitanya Kulkarni
@ 2021-02-10 12:54   ` Christoph Hellwig
  1 sibling, 0 replies; 31+ messages in thread
From: Christoph Hellwig @ 2021-02-10 12:54 UTC (permalink / raw)
  To: ira.weiny
  Cc: Andrew Morton, David Sterba, Christoph Hellwig, clm, josef,
	linux-kernel, linux-fsdevel

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH V2 4/8] mm/highmem: Add VM_BUG_ON() to mem*_page() calls
  2021-02-10  6:22 ` [PATCH V2 4/8] mm/highmem: Add VM_BUG_ON() to mem*_page() calls ira.weiny
  2021-02-10  6:57   ` Chaitanya Kulkarni
@ 2021-02-10 12:55   ` Christoph Hellwig
  2021-02-10 16:29     ` Ira Weiny
  2021-02-10 17:49   ` [PATCH V2.1] " ira.weiny
  2 siblings, 1 reply; 31+ messages in thread
From: Christoph Hellwig @ 2021-02-10 12:55 UTC (permalink / raw)
  To: ira.weiny
  Cc: Andrew Morton, David Sterba, Christoph Hellwig, Matthew Wilcox,
	clm, josef, linux-kernel, linux-fsdevel

On Tue, Feb 09, 2021 at 10:22:17PM -0800, ira.weiny@intel.com wrote:
> From: Ira Weiny <ira.weiny@intel.com>
> 
> Add VM_BUG_ON bounds checks to ensure the newly lifted and created page
> memory operations do not result in corrupted data in neighbor pages and
> to make them consistent with zero_user().[1][2]

s/VM_BUG_ON/BUG_ON/g ?

Otherwise looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH V2 5/8] iov_iter: Remove memzero_page() in favor of zero_user()
  2021-02-10  6:22 ` [PATCH V2 5/8] iov_iter: Remove memzero_page() in favor of zero_user() ira.weiny
@ 2021-02-10 12:55   ` Christoph Hellwig
  0 siblings, 0 replies; 31+ messages in thread
From: Christoph Hellwig @ 2021-02-10 12:55 UTC (permalink / raw)
  To: ira.weiny
  Cc: Andrew Morton, David Sterba, Christoph Hellwig, clm, josef,
	linux-kernel, linux-fsdevel

On Tue, Feb 09, 2021 at 10:22:18PM -0800, ira.weiny@intel.com wrote:
> From: Ira Weiny <ira.weiny@intel.com>
> 
> zero_user() is already defined with the same interface and contains the
> same code pattern as memzero_page().  Remove memzero_page() and use the
> already defined common function zero_user()

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH V2 6/8] btrfs: use memcpy_[to|from]_page() and kmap_local_page()
  2021-02-10  6:22 ` [PATCH V2 6/8] btrfs: use memcpy_[to|from]_page() and kmap_local_page() ira.weiny
@ 2021-02-10 12:56   ` Christoph Hellwig
  0 siblings, 0 replies; 31+ messages in thread
From: Christoph Hellwig @ 2021-02-10 12:56 UTC (permalink / raw)
  To: ira.weiny
  Cc: Andrew Morton, David Sterba, clm, josef, Christoph Hellwig,
	linux-kernel, linux-fsdevel

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH V2 7/8] btrfs: use copy_highpage() instead of 2 kmaps()
  2021-02-10  6:22 ` [PATCH V2 7/8] btrfs: use copy_highpage() instead of 2 kmaps() ira.weiny
@ 2021-02-10 12:56   ` Christoph Hellwig
  0 siblings, 0 replies; 31+ messages in thread
From: Christoph Hellwig @ 2021-02-10 12:56 UTC (permalink / raw)
  To: ira.weiny
  Cc: Andrew Morton, David Sterba, clm, josef, Christoph Hellwig,
	linux-kernel, linux-fsdevel

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH V2 8/8] btrfs: convert to zero_user()
  2021-02-10  6:22 ` [PATCH V2 8/8] btrfs: convert to zero_user() ira.weiny
@ 2021-02-10 12:57   ` Christoph Hellwig
  0 siblings, 0 replies; 31+ messages in thread
From: Christoph Hellwig @ 2021-02-10 12:57 UTC (permalink / raw)
  To: ira.weiny
  Cc: Andrew Morton, David Sterba, clm, josef, Christoph Hellwig,
	linux-kernel, linux-fsdevel

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH V2 4/8] mm/highmem: Add VM_BUG_ON() to mem*_page() calls
  2021-02-10 12:55   ` Christoph Hellwig
@ 2021-02-10 16:29     ` Ira Weiny
  2021-02-10 16:41       ` Christoph Hellwig
  2021-02-10 18:56       ` Matthew Wilcox
  0 siblings, 2 replies; 31+ messages in thread
From: Ira Weiny @ 2021-02-10 16:29 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Andrew Morton, David Sterba, Matthew Wilcox, clm, josef,
	linux-kernel, linux-fsdevel

On Wed, Feb 10, 2021 at 12:55:02PM +0000, Christoph Hellwig wrote:
> On Tue, Feb 09, 2021 at 10:22:17PM -0800, ira.weiny@intel.com wrote:
> > From: Ira Weiny <ira.weiny@intel.com>
> > 
> > Add VM_BUG_ON bounds checks to ensure the newly lifted and created page
> > memory operations do not result in corrupted data in neighbor pages and
> > to make them consistent with zero_user().[1][2]
> 
> s/VM_BUG_ON/BUG_ON/g ?

Andrew wanted VM_BUG_ON.[1]

And I thought it was a good idea.  Any file system development should have
tests with DEBUG_VM which should cover Matthew's concern while not having the
overhead in production.  Seemed like a decent compromise?
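
To spell out the difference (paraphrasing include/linux/mmdebug.h from
memory):

	#ifdef CONFIG_DEBUG_VM
	#define VM_BUG_ON(cond) BUG_ON(cond)
	#else
	#define VM_BUG_ON(cond) BUILD_BUG_ON_INVALID(cond)
	#endif

So with CONFIG_DEBUG_VM the check is a real BUG_ON(), and without it the
condition is only compile-checked and generates no runtime code.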

Ira

[1] https://lore.kernel.org/lkml/20210209131103.b46e80db675fec8bec8d2ad1@linux-foundation.org/

> 
> Otherwise looks good:
> 
> Reviewed-by: Christoph Hellwig <hch@lst.de>

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH V2 4/8] mm/highmem: Add VM_BUG_ON() to mem*_page() calls
  2021-02-10  6:57   ` Chaitanya Kulkarni
@ 2021-02-10 16:31     ` Ira Weiny
  0 siblings, 0 replies; 31+ messages in thread
From: Ira Weiny @ 2021-02-10 16:31 UTC (permalink / raw)
  To: Chaitanya Kulkarni
  Cc: Andrew Morton, David Sterba, hch, Matthew Wilcox, clm, josef,
	linux-kernel, linux-fsdevel

On Wed, Feb 10, 2021 at 06:57:30AM +0000, Chaitanya Kulkarni wrote:
> On 2/9/21 22:25, ira.weiny@intel.com wrote:
> > From: Ira Weiny <ira.weiny@intel.com>
> >
> > Add VM_BUG_ON bounds checks to ensure the newly lifted and created page
> > memory operations do not result in corrupted data in neighbor pages and
> > to make them consistent with zero_user().[1][2]
> >
> I did not understand this, in my tree :-
> 
> zero_user()
>  -> zero_user_segments()
> 
> which uses BUG_ON(), while the commit log says to add VM_BUG_ON(). Isn't that
> inconsistent with what is there in zero_user_segments(), which uses BUG_ON()?
> 
> Also, this patch uses BUG_ON(), which doesn't match the commit log that says
> to add VM_BUG_ON().

Oops, yea that 'consistent with zero_user()' was carried over from the BUG_ON
commit comment in the original patch...

The comment should be deleted.

But I'm going to wait because Christoph prefers BUG_ON...

Ira

> 
> Did I interpret the commit log wrong ?
> 
> [1]
>  void zero_user_segments(struct page *page, unsigned start1, unsigned end1,
> 365                 unsigned start2, unsigned end2)
> 366 {
> 367         unsigned int i;
> 368
> 369         BUG_ON(end1 > page_size(page) || end2 > page_size(page));
> 370
> 371         for (i = 0; i < compound_nr(page); i++) {
> 372                 void *kaddr = NULL;
> 373
> 374                 if (start1 < PAGE_SIZE || start2 < PAGE_SIZE)
> 375                         kaddr = kmap_atomic(page + i);
> 376
> 377                 if (start1 >= PAGE_SIZE) {
> 378                         start1 -= PAGE_SIZE;
> 379                         end1 -= PAGE_SIZE;
> 380                 } else {
> 381                         unsigned this_end = min_t(unsigned, end1, PAGE_SIZE);
> 382
> 383                         if (end1 > start1)
> 384                                 memset(kaddr + start1, 0, this_end - start1);
> 385                         end1 -= this_end;
> 386                         start1 = 0;
> 387                 }
> 388
> 389                 if (start2 >= PAGE_SIZE) {
> 390                         start2 -= PAGE_SIZE;
> 391                         end2 -= PAGE_SIZE;
> 392                 } else {
> 393                         unsigned this_end = min_t(unsigned, end2, PAGE_SIZE);
> 394
> 395                         if (end2 > start2)
> 396                                 memset(kaddr + start2, 0, this_end - start2);
> 397                         end2 -= this_end;
> 398                         start2 = 0;
> 399                 }
> 400
> 401                 if (kaddr) {
> 402                         kunmap_atomic(kaddr);
> 403                         flush_dcache_page(page + i);
> 404                 }
> 405
> 406                 if (!end1 && !end2)
> 407                         break;
> 408         }
> 409
> 410         BUG_ON((start1 | start2 | end1 | end2) != 0);
> 411 }
> 412 EXPORT_SYMBOL(zero_user_segments);
> 
> 
> 

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH V2 4/8] mm/highmem: Add VM_BUG_ON() to mem*_page() calls
  2021-02-10 16:29     ` Ira Weiny
@ 2021-02-10 16:41       ` Christoph Hellwig
  2021-02-10 17:04         ` Ira Weiny
  2021-02-10 18:56       ` Matthew Wilcox
  1 sibling, 1 reply; 31+ messages in thread
From: Christoph Hellwig @ 2021-02-10 16:41 UTC (permalink / raw)
  To: Ira Weiny
  Cc: Christoph Hellwig, Andrew Morton, David Sterba, Matthew Wilcox,
	clm, josef, linux-kernel, linux-fsdevel

On Wed, Feb 10, 2021 at 08:29:01AM -0800, Ira Weiny wrote:
> On Wed, Feb 10, 2021 at 12:55:02PM +0000, Christoph Hellwig wrote:
> > On Tue, Feb 09, 2021 at 10:22:17PM -0800, ira.weiny@intel.com wrote:
> > > From: Ira Weiny <ira.weiny@intel.com>
> > > 
> > > Add VM_BUG_ON bounds checks to ensure the newly lifted and created page
> > > memory operations do not result in corrupted data in neighbor pages and
> > > to make them consistent with zero_user().[1][2]
> > 
> > s/VM_BUG_ON/BUG_ON/g ?
> 
> Andrew wanted VM_BUG_ON.[1]

I don't care either way, but the patch uses BUG_ON, so the description
should match.

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH V2 4/8] mm/highmem: Add VM_BUG_ON() to mem*_page() calls
  2021-02-10 16:41       ` Christoph Hellwig
@ 2021-02-10 17:04         ` Ira Weiny
  0 siblings, 0 replies; 31+ messages in thread
From: Ira Weiny @ 2021-02-10 17:04 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Andrew Morton, David Sterba, Matthew Wilcox, clm, josef,
	linux-kernel, linux-fsdevel

On Wed, Feb 10, 2021 at 04:41:34PM +0000, Christoph Hellwig wrote:
> On Wed, Feb 10, 2021 at 08:29:01AM -0800, Ira Weiny wrote:
> > On Wed, Feb 10, 2021 at 12:55:02PM +0000, Christoph Hellwig wrote:
> > > On Tue, Feb 09, 2021 at 10:22:17PM -0800, ira.weiny@intel.com wrote:
> > > > From: Ira Weiny <ira.weiny@intel.com>
> > > > 
> > > > Add VM_BUG_ON bounds checks to ensure the newly lifted and created page
> > > > memory operations do not result in corrupted data in neighbor pages and
> > > > to make them consistent with zero_user().[1][2]
> > > 
> > > s/VM_BUG_ON/BUG_ON/g ?
> > 
> > Andrew wanted VM_BUG_ON.[1]
> 
> I don't care either way, but the patch uses BUG_ON, so the description
> should match.

Oh man...  I changed the commit message after splitting the patch and forgot to
change the code...  <doh>

Thanks,
Ira

^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH V2.1] mm/highmem: Add VM_BUG_ON() to mem*_page() calls
  2021-02-10  6:22 ` [PATCH V2 4/8] mm/highmem: Add VM_BUG_ON() to mem*_page() calls ira.weiny
  2021-02-10  6:57   ` Chaitanya Kulkarni
  2021-02-10 12:55   ` Christoph Hellwig
@ 2021-02-10 17:49   ` ira.weiny
  2 siblings, 0 replies; 31+ messages in thread
From: ira.weiny @ 2021-02-10 17:49 UTC (permalink / raw)
  To: Andrew Morton, David Sterba
  Cc: Ira Weiny, Matthew Wilcox, clm, josef, Christoph Hellwig,
	linux-kernel, linux-fsdevel

From: Ira Weiny <ira.weiny@intel.com>

Add VM_BUG_ON bounds checks to ensure the newly lifted and created page
memory operations do not result in corrupted data in neighbor pages.[1][2]

[1] https://lore.kernel.org/lkml/20201210053502.GS1563847@iweiny-DESK2.sc.intel.com/
[2] https://lore.kernel.org/lkml/20210209110931.00f00e47d9a0529fcee2ff01@linux-foundation.org/

Suggested-by: Matthew Wilcox <willy@infradead.org>
Suggested-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>

---
Changes from V2:
	Actually, really do VM_BUG_ON...  <sigh>
	Clean up commit message
---
 include/linux/highmem.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 0b5d89621cb9..44170f312ae7 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -283,6 +283,7 @@ static inline void memcpy_page(struct page *dst_page, size_t dst_off,
 	char *dst = kmap_local_page(dst_page);
 	char *src = kmap_local_page(src_page);
 
+	VM_BUG_ON(dst_off + len > PAGE_SIZE || src_off + len > PAGE_SIZE);
 	memcpy(dst + dst_off, src + src_off, len);
 	kunmap_local(src);
 	kunmap_local(dst);
@@ -295,6 +296,7 @@ static inline void memmove_page(struct page *dst_page, size_t dst_off,
 	char *dst = kmap_local_page(dst_page);
 	char *src = kmap_local_page(src_page);
 
+	VM_BUG_ON(dst_off + len > PAGE_SIZE || src_off + len > PAGE_SIZE);
 	memmove(dst + dst_off, src + src_off, len);
 	kunmap_local(src);
 	kunmap_local(dst);
@@ -305,6 +307,7 @@ static inline void memset_page(struct page *page, size_t offset, int val,
 {
 	char *addr = kmap_local_page(page);
 
+	VM_BUG_ON(offset + len > PAGE_SIZE);
 	memset(addr + offset, val, len);
 	kunmap_local(addr);
 }
@@ -314,6 +317,7 @@ static inline void memcpy_from_page(char *to, struct page *page,
 {
 	char *from = kmap_local_page(page);
 
+	VM_BUG_ON(offset + len > PAGE_SIZE);
 	memcpy(to, from + offset, len);
 	kunmap_local(from);
 }
@@ -323,6 +327,7 @@ static inline void memcpy_to_page(struct page *page, size_t offset,
 {
 	char *to = kmap_local_page(page);
 
+	VM_BUG_ON(offset + len > PAGE_SIZE);
 	memcpy(to + offset, from, len);
 	kunmap_local(to);
 }
-- 
2.28.0.rc0.12.gb6a658bd00c9
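
To illustrate what the new check catches, a caller that copies past the end of
a page would now trip the assertion on a CONFIG_DEBUG_VM kernel (a hypothetical
example, not code from the series):

	char buf[64];

	/* fine: the last 64 bytes of the page */
	memcpy_to_page(page, PAGE_SIZE - sizeof(buf), buf, sizeof(buf));

	/* trips VM_BUG_ON(offset + len > PAGE_SIZE): only 16 bytes remain */
	memcpy_to_page(page, PAGE_SIZE - 16, buf, sizeof(buf));

Without DEBUG_VM the check compiles away and the second call silently writes
past the end of the page, which is the trade-off debated earlier in the thread.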


^ permalink raw reply related	[flat|nested] 31+ messages in thread

* Re: [PATCH V2 4/8] mm/highmem: Add VM_BUG_ON() to mem*_page() calls
  2021-02-10 16:29     ` Ira Weiny
  2021-02-10 16:41       ` Christoph Hellwig
@ 2021-02-10 18:56       ` Matthew Wilcox
  2021-02-10 21:22         ` Ira Weiny
  1 sibling, 1 reply; 31+ messages in thread
From: Matthew Wilcox @ 2021-02-10 18:56 UTC (permalink / raw)
  To: Ira Weiny
  Cc: Christoph Hellwig, Andrew Morton, David Sterba, clm, josef,
	linux-kernel, linux-fsdevel

On Wed, Feb 10, 2021 at 08:29:01AM -0800, Ira Weiny wrote:
> And I thought it was a good idea.  Any file system development should have
> tests with DEBUG_VM which should cover Matthew's concern while not having the
> overhead in production.  Seemed like a decent compromise?

Why do you think these paths are only used during file system development?
They're definitely used by networking, by device drivers of all kinds
and they're probably even used by the graphics system.

While developers *should* turn on DEBUG_VM during development, a
shockingly high percentage don't even turn on lockdep.

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH V2 4/8] mm/highmem: Add VM_BUG_ON() to mem*_page() calls
  2021-02-10 18:56       ` Matthew Wilcox
@ 2021-02-10 21:22         ` Ira Weiny
  2021-02-11 18:52           ` David Sterba
  0 siblings, 1 reply; 31+ messages in thread
From: Ira Weiny @ 2021-02-10 21:22 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Christoph Hellwig, Andrew Morton, David Sterba, clm, josef,
	linux-kernel, linux-fsdevel

On Wed, Feb 10, 2021 at 06:56:06PM +0000, Matthew Wilcox wrote:
> On Wed, Feb 10, 2021 at 08:29:01AM -0800, Ira Weiny wrote:
> > And I thought it was a good idea.  Any file system development should have
> > tests with DEBUG_VM which should cover Matthew's concern while not having the
> > overhead in production.  Seemed like a decent compromise?
> 
> Why do you think these paths are only used during file system development?

I can't guarantee it but right now most of the conversions I have worked on are
in FS's.

> They're definitely used by networking, by device drivers of all kinds
> and they're probably even used by the graphics system.
> 
> While developers *should* turn on DEBUG_VM during development, a
> shockingly high percentage don't even turn on lockdep.

Honestly, I don't feel strongly enough to argue it.

Andrew?  David?  David, this is going through your tree, so would you feel
more comfortable with one or the other?

Ira


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH V2 4/8] mm/highmem: Add VM_BUG_ON() to mem*_page() calls
  2021-02-10 21:22         ` Ira Weiny
@ 2021-02-11 18:52           ` David Sterba
  0 siblings, 0 replies; 31+ messages in thread
From: David Sterba @ 2021-02-11 18:52 UTC (permalink / raw)
  To: Ira Weiny
  Cc: Matthew Wilcox, Christoph Hellwig, Andrew Morton, clm, josef,
	linux-kernel, linux-fsdevel

On Wed, Feb 10, 2021 at 01:22:28PM -0800, Ira Weiny wrote:
> On Wed, Feb 10, 2021 at 06:56:06PM +0000, Matthew Wilcox wrote:
> > On Wed, Feb 10, 2021 at 08:29:01AM -0800, Ira Weiny wrote:
> > > And I thought it was a good idea.  Any file system development should have
> > > tests with DEBUG_VM which should cover Matthew's concern while not having the
> > > overhead in production.  Seemed like a decent compromise?
> > 
> > Why do you think these paths are only used during file system development?
> 
> I can't guarantee it but right now most of the conversions I have worked on are
> in FS's.
> 
> > They're definitely used by networking, by device drivers of all kinds
> > and they're probably even used by the graphics system.
> > 
> > While developers *should* turn on DEBUG_VM during development, a
> > shockingly high percentage don't even turn on lockdep.
> 
> Honestly, I don't feel strongly enough to argue it.

I checked my devel config and I don't have DEBUG_VM enabled, while I
have a bunch of other debugging options related to locking or other
fine-grained sanity checks. The help text is not very specific about what
exactly is being checked, other than that it hurts performance, so I read it
as being meant for MM developers who change the MM code, while in filesystems
we just use the APIs.

However, for this patchset I'll turn it on for all testing instances, of
course.
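
A minimal sketch of the debug options being discussed, assuming the standard
Kconfig symbol names (CONFIG_DEBUG_VM gates VM_BUG_ON, CONFIG_PROVE_LOCKING is
the lockdep checking Matthew referred to):

	CONFIG_DEBUG_VM=y
	CONFIG_PROVE_LOCKING=y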

> Andrew?  David?  David, this is going through your tree, so would you feel
> more comfortable with one or the other?

I think it's a question for the MM people; for now I assume it's supposed to
be VM_BUG_ON.

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH V2 0/8] btrfs: convert kmaps to core page calls
  2021-02-10  6:22 [PATCH V2 0/8] btrfs: convert kmaps to core page calls ira.weiny
                   ` (7 preceding siblings ...)
  2021-02-10  6:22 ` [PATCH V2 8/8] btrfs: convert to zero_user() ira.weiny
@ 2021-02-11 19:38 ` David Sterba
  2021-02-11 21:32   ` Ira Weiny
  8 siblings, 1 reply; 31+ messages in thread
From: David Sterba @ 2021-02-11 19:38 UTC (permalink / raw)
  To: ira.weiny
  Cc: Andrew Morton, clm, josef, Christoph Hellwig, linux-kernel,
	linux-fsdevel

On Tue, Feb 09, 2021 at 10:22:13PM -0800, ira.weiny@intel.com wrote:
> From: Ira Weiny <ira.weiny@intel.com>
> 
> Changes from V1:
> 	Rework commit messages because they were very weak
> 	Change 'fs/btrfs: X' to 'btrfs: x'
> 		https://lore.kernel.org/lkml/20210209151442.GU1993@suse.cz/
> 	Per Andrew
> 		Split out changes to highmem.h
> 			The addition memcpy, memmove, and memset page functions
> 			The change from kmap_atomic to kmap_local_page
> 			The addition of BUG_ON
> 			The conversion of memzero_page to zero_user in iov_iter
> 		Change BUG_ON to VM_BUG_ON
> 	While we are refactoring adjust the line length down per Chaitany
> 
> 
> There are many places where kmap/<operation>/kunmap patterns occur.  We lift a
> couple of these patterns to the core common functions and use them as well as
> existing core functions in the btrfs file system.  At the same time we convert
> those core functions to use kmap_local_page() which is more efficient in those
> calls.
> 
> Per the conversation on V1 it looks like Andrew would like this to go through
> the btrfs tree.  I think that is fine.  The other users of
> memcpy_[to|from]_page are probably not ready and I believe could be taken in an
> early rc after David submits.
> 
> Is that ok with you David?

Yes.

The branch is now in
https://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux.git/log/?h=kmap-conversion
let me know if I've missed an acked-by or reviewed-by; I added those sent to
the mailing list and added mine to the btrfs ones and to the iov_iter patch.

I'll add the patchset to my for-next so it gets picked up by linux-next, and
I'll keep testing it for at least a week.

Though this is less than the expected time before the merge window, the
reasoning is that the series exports helpers that are going to be used in
various subsystems. The changes in btrfs are simple and would allow us to
focus on the other, less trivial conversions. ETA for the pull request is the
middle of the 2nd week of the merge window, or after rc1.

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH V2 0/8] btrfs: convert kmaps to core page calls
  2021-02-11 19:38 ` [PATCH V2 0/8] btrfs: convert kmaps to core page calls David Sterba
@ 2021-02-11 21:32   ` Ira Weiny
  0 siblings, 0 replies; 31+ messages in thread
From: Ira Weiny @ 2021-02-11 21:32 UTC (permalink / raw)
  To: dsterba, Andrew Morton, clm, josef, Christoph Hellwig,
	linux-kernel, linux-fsdevel

On Thu, Feb 11, 2021 at 08:38:03PM +0100, David Sterba wrote:
> On Tue, Feb 09, 2021 at 10:22:13PM -0800, ira.weiny@intel.com wrote:
> > From: Ira Weiny <ira.weiny@intel.com>
> > 
> > Per the conversation on V1 it looks like Andrew would like this to go through
> > the btrfs tree.  I think that is fine.  The other users of
> > memcpy_[to|from]_page are probably not ready and I believe could be taken in an
> > early rc after David submits.
> > 
> > Is that ok with you David?
> 
> Yes.
> 
> The branch is now in
> https://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux.git/log/?h=kmap-conversion
> let me know if I've missed an acked-by or reviewed-by; I added those sent to
> the mailing list and added mine to the btrfs ones and to the iov_iter patch.

Looks good.  Thank you!

> 
> I'll add the patchset to my for-next so it gets picked up by linux-next, and
> I'll keep testing it for at least a week.
> 
> Though this is less than the expected time before the merge window, the
> reasoning is that the series exports helpers that are going to be used in
> various subsystems. The changes in btrfs are simple and would allow us to
> focus on the other, less trivial conversions. ETA for the pull request is the
> middle of the 2nd week of the merge window, or after rc1.

Thanks for working with me on this.  Yes, these were the more straightforward
conversions.  The next set will require more review, and I should have them
posted soon, at least as an RFC.  Unfortunately, there are 2 places where it is
proving difficult to follow the mapping order required by kmap_local_page().
I'll open that discussion with the next round of conversions.
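
For reference, the ordering constraint mentioned above comes from the
stack-based (LIFO) nature of kmap_local_page(): mappings taken in a thread
have to be released in the reverse order they were acquired. A minimal sketch
of a correct nesting, using two hypothetical pages dst_page and src_page (this
mirrors the memcpy_page() helper added earlier in the series, not code from
the upcoming conversions):

	char *dst = kmap_local_page(dst_page);
	char *src = kmap_local_page(src_page);

	/* ... operate on src and dst ... */

	kunmap_local(src);	/* mapped last, unmapped first */
	kunmap_local(dst);

Call sites where the unmap order cannot easily be made to nest like this are
the "difficult to follow" cases referred to above.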

For now, thank you again,
Ira


^ permalink raw reply	[flat|nested] 31+ messages in thread

end of thread, other threads:[~2021-02-11 21:33 UTC | newest]

Thread overview: 31+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-02-10  6:22 [PATCH V2 0/8] btrfs: convert kmaps to core page calls ira.weiny
2021-02-10  6:22 ` [PATCH V2 1/8] mm/highmem: Lift memcpy_[to|from]_page to core ira.weiny
2021-02-10  6:33   ` Chaitanya Kulkarni
2021-02-10 12:53   ` Christoph Hellwig
2021-02-10  6:22 ` [PATCH V2 2/8] mm/highmem: Convert memcpy_[to|from]_page() to kmap_local_page() ira.weiny
2021-02-10  6:37   ` Chaitanya Kulkarni
2021-02-10 12:54   ` Christoph Hellwig
2021-02-10  6:22 ` [PATCH V2 3/8] mm/highmem: Introduce memcpy_page(), memmove_page(), and memset_page() ira.weiny
2021-02-10  6:46   ` Chaitanya Kulkarni
2021-02-10 12:54   ` Christoph Hellwig
2021-02-10  6:22 ` [PATCH V2 4/8] mm/highmem: Add VM_BUG_ON() to mem*_page() calls ira.weiny
2021-02-10  6:57   ` Chaitanya Kulkarni
2021-02-10 16:31     ` Ira Weiny
2021-02-10 12:55   ` Christoph Hellwig
2021-02-10 16:29     ` Ira Weiny
2021-02-10 16:41       ` Christoph Hellwig
2021-02-10 17:04         ` Ira Weiny
2021-02-10 18:56       ` Matthew Wilcox
2021-02-10 21:22         ` Ira Weiny
2021-02-11 18:52           ` David Sterba
2021-02-10 17:49   ` [PATCH V2.1] " ira.weiny
2021-02-10  6:22 ` [PATCH V2 5/8] iov_iter: Remove memzero_page() in favor of zero_user() ira.weiny
2021-02-10 12:55   ` Christoph Hellwig
2021-02-10  6:22 ` [PATCH V2 6/8] btrfs: use memcpy_[to|from]_page() and kmap_local_page() ira.weiny
2021-02-10 12:56   ` Christoph Hellwig
2021-02-10  6:22 ` [PATCH V2 7/8] btrfs: use copy_highpage() instead of 2 kmaps() ira.weiny
2021-02-10 12:56   ` Christoph Hellwig
2021-02-10  6:22 ` [PATCH V2 8/8] btrfs: convert to zero_user() ira.weiny
2021-02-10 12:57   ` Christoph Hellwig
2021-02-11 19:38 ` [PATCH V2 0/8] btrfs: convert kmaps to core page calls David Sterba
2021-02-11 21:32   ` Ira Weiny

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).