* [PATCH v2 0/6] NFSD: Improvements for the NFSv4.2 READ_PLUS operation
@ 2022-07-13 19:08 Anna Schumaker
  2022-07-13 19:08 ` [PATCH v2 1/6] SUNRPC: Introduce xdr_stream_move_subsegment() Anna Schumaker
                   ` (5 more replies)
  0 siblings, 6 replies; 10+ messages in thread
From: Anna Schumaker @ 2022-07-13 19:08 UTC (permalink / raw)
  To: linux-nfs, chuck.lever; +Cc: anna

From: Anna Schumaker <Anna.Schumaker@Netapp.com>

The main motivation for this patchset is fixing generic/091 and
generic/263 with READ_PLUS. These tests appear to be failing due to
files getting modified in the middle of reply encoding. Attempts to lock
the file for the entire encode result in a deadlock, since llseek() and
read() both need the file lock.

The solution is to read everything from disk at once, and then check if
each buffer page is all zeroes or not. As a bonus, this lets us support
READ_PLUS hole segments on filesystems that don't track sparse files.
This approach also solves the performance issues I hit when testing with
btrfs on a virtual machine.
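
For reference, here is a minimal sketch of the per-page check, using the
helper added later in this series. The function name is only illustrative;
the real encoder in patch 6/6 additionally merges adjacent pages of the
same type into (hole, data) segments:

    static enum data_content4 read_plus_page_type(const struct xdr_buf *buf,
                                                  unsigned int n)
    {
            unsigned int pagelen;
            char *vpage = xdr_buf_nth_page_address(buf, n, &pagelen);

            /* A page of all zeroes is reported as a hole, anything else as data. */
            if (vpage && memchr_inv(vpage, 0, pagelen) == NULL)
                    return NFS4_CONTENT_HOLE;
            return NFS4_CONTENT_DATA;
    }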

I created a wiki page with the results of my performance testing here:
    https://wiki.linux-nfs.org/wiki/index.php/Read_Plus_May_2022

These patches should probably go in before the related client changes,
since the client will also be changed to make use of the
xdr_stream_move_subsegment() function.

Changes in v2:
  - Update to v5.19-rc6
  - Rename xdr_stream_move_segment() -> xdr_stream_move_subsegment()

Thoughts?
Anna


Anna Schumaker (6):
  SUNRPC: Introduce xdr_stream_move_subsegment()
  SUNRPC: Introduce xdr_encode_double()
  SUNRPC: Introduce xdr_buf_trim_head()
  SUNRPC: Introduce xdr_buf_nth_page_address()
  SUNRPC: Export xdr_buf_pagecount()
  NFSD: Repeal and replace the READ_PLUS implementation

 fs/nfsd/nfs4xdr.c          | 202 +++++++++++++++++++------------------
 include/linux/sunrpc/xdr.h |   6 ++
 net/sunrpc/xdr.c           | 102 +++++++++++++++++++
 3 files changed, 210 insertions(+), 100 deletions(-)

-- 
2.37.0



* [PATCH v2 1/6] SUNRPC: Introduce xdr_stream_move_subsegment()
  2022-07-13 19:08 [PATCH v2 0/6] NFSD: Improvements for the NFSv4.2 READ_PLUS operation Anna Schumaker
@ 2022-07-13 19:08 ` Anna Schumaker
  2022-07-13 19:08 ` [PATCH v2 2/6] SUNRPC: Introduce xdr_encode_double() Anna Schumaker
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Anna Schumaker @ 2022-07-13 19:08 UTC (permalink / raw)
  To: linux-nfs, chuck.lever; +Cc: anna

From: Anna Schumaker <Anna.Schumaker@Netapp.com>

Add a function for moving a subsegment of xdr stream data to a new
position. This works by creating an xdr subsegment for the range we will
be operating over, which lets us shift data to the correct place without
potentially overwriting anything already there.

Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
---
 include/linux/sunrpc/xdr.h |  2 ++
 net/sunrpc/xdr.c           | 59 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 61 insertions(+)

diff --git a/include/linux/sunrpc/xdr.h b/include/linux/sunrpc/xdr.h
index 5860f32e3958..7dcc6c31fe29 100644
--- a/include/linux/sunrpc/xdr.h
+++ b/include/linux/sunrpc/xdr.h
@@ -262,6 +262,8 @@ extern unsigned int xdr_align_data(struct xdr_stream *, unsigned int offset, uns
 extern unsigned int xdr_expand_hole(struct xdr_stream *, unsigned int offset, unsigned int length);
 extern bool xdr_stream_subsegment(struct xdr_stream *xdr, struct xdr_buf *subbuf,
 				  unsigned int len);
+extern unsigned int xdr_stream_move_subsegment(struct xdr_stream *xdr, unsigned int offset,
+					       unsigned int target, unsigned int length);
 
 /**
  * xdr_set_scratch_buffer - Attach a scratch buffer for decoding data.
diff --git a/net/sunrpc/xdr.c b/net/sunrpc/xdr.c
index 5d2b3e6979fb..8ba11a754297 100644
--- a/net/sunrpc/xdr.c
+++ b/net/sunrpc/xdr.c
@@ -775,6 +775,34 @@ static void xdr_buf_pages_shift_left(const struct xdr_buf *buf,
 	xdr_buf_tail_copy_left(buf, 0, len - buf->page_len, shift);
 }
 
+static void xdr_buf_head_shift_left(const struct xdr_buf *buf,
+				    unsigned int base, unsigned int len,
+				    unsigned int shift)
+{
+	const struct kvec *head = buf->head;
+	unsigned int bytes;
+
+	if (!shift || !len)
+		return;
+
+	if (shift > base) {
+		bytes = (shift - base);
+		if (bytes >= len)
+			return;
+		base += bytes;
+		len -= bytes;
+	}
+
+	if (base < head->iov_len) {
+		bytes = min_t(unsigned int, len, head->iov_len - base);
+		memmove(head->iov_base + (base - shift),
+			head->iov_base + base, bytes);
+		base += bytes;
+		len -= bytes;
+	}
+	xdr_buf_pages_shift_left(buf, base - head->iov_len, len, shift);
+}
+
 /**
  * xdr_shrink_bufhead
  * @buf: xdr_buf
@@ -1680,6 +1708,37 @@ bool xdr_stream_subsegment(struct xdr_stream *xdr, struct xdr_buf *subbuf,
 }
 EXPORT_SYMBOL_GPL(xdr_stream_subsegment);
 
+/**
+ * xdr_stream_move_subsegment - Move part of a stream to another position
+ * @xdr: the source xdr_stream
+ * @offset: the source offset of the segment
+ * @target: the target offset of the segment
+ * @length: the number of bytes to move
+ *
+ * Moves @length bytes from @offset to @target in the xdr_stream, overwriting
+ * anything in its space. Returns the number of bytes in the segment.
+ */
+unsigned int xdr_stream_move_subsegment(struct xdr_stream *xdr, unsigned int offset,
+					unsigned int target, unsigned int length)
+{
+	struct xdr_buf buf;
+	unsigned int shift;
+
+	if (offset < target) {
+		shift = target - offset;
+		if (xdr_buf_subsegment(xdr->buf, &buf, offset, shift + length) < 0)
+			return 0;
+		xdr_buf_head_shift_right(&buf, 0, length, shift);
+	} else if (offset > target) {
+		shift = offset - target;
+		if (xdr_buf_subsegment(xdr->buf, &buf, target, shift + length) < 0)
+			return 0;
+		xdr_buf_head_shift_left(&buf, shift, length, shift);
+	}
+	return length;
+}
+EXPORT_SYMBOL_GPL(xdr_stream_move_subsegment);
+
 /**
  * xdr_buf_trim - lop at most "len" bytes off the end of "buf"
  * @buf: buf to be trimmed
-- 
2.37.0



* [PATCH v2 2/6] SUNRPC: Introduce xdr_encode_double()
  2022-07-13 19:08 [PATCH v2 0/6] NFSD: Improvements for the NFSv4.2 READ_PLUS operation Anna Schumaker
  2022-07-13 19:08 ` [PATCH v2 1/6] SUNRPC: Introduce xdr_stream_move_subsegment() Anna Schumaker
@ 2022-07-13 19:08 ` Anna Schumaker
  2022-07-13 19:08 ` [PATCH v2 3/6] SUNRPC: Introduce xdr_buf_trim_head() Anna Schumaker
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Anna Schumaker @ 2022-07-13 19:08 UTC (permalink / raw)
  To: linux-nfs, chuck.lever; +Cc: anna

From: Anna Schumaker <Anna.Schumaker@Netapp.com>

This is similar to xdr_encode_word(), but encodes a 64-bit wide value
into the xdr_buf at the given offset.

Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
---
 include/linux/sunrpc/xdr.h | 1 +
 net/sunrpc/xdr.c           | 8 ++++++++
 2 files changed, 9 insertions(+)

diff --git a/include/linux/sunrpc/xdr.h b/include/linux/sunrpc/xdr.h
index 7dcc6c31fe29..e26047d474b2 100644
--- a/include/linux/sunrpc/xdr.h
+++ b/include/linux/sunrpc/xdr.h
@@ -196,6 +196,7 @@ extern int read_bytes_from_xdr_buf(const struct xdr_buf *, unsigned int, void *,
 extern int write_bytes_to_xdr_buf(const struct xdr_buf *, unsigned int, void *, unsigned int);
 
 extern int xdr_encode_word(const struct xdr_buf *, unsigned int, u32);
+extern int xdr_encode_double(const struct xdr_buf *, unsigned int, u64);
 extern int xdr_decode_word(const struct xdr_buf *, unsigned int, u32 *);
 
 struct xdr_array2_desc;
diff --git a/net/sunrpc/xdr.c b/net/sunrpc/xdr.c
index 8ba11a754297..63d9cdc989da 100644
--- a/net/sunrpc/xdr.c
+++ b/net/sunrpc/xdr.c
@@ -1865,6 +1865,14 @@ int xdr_encode_word(const struct xdr_buf *buf, unsigned int base, u32 obj)
 }
 EXPORT_SYMBOL_GPL(xdr_encode_word);
 
+int xdr_encode_double(const struct xdr_buf *buf, unsigned int base, u64 obj)
+{
+	__be64 raw = cpu_to_be64(obj);
+
+	return write_bytes_to_xdr_buf(buf, base, &raw, sizeof(obj));
+}
+EXPORT_SYMBOL_GPL(xdr_encode_double);
+
 /* Returns 0 on success, or else a negative error code. */
 static int xdr_xcode_array2(const struct xdr_buf *buf, unsigned int base,
 			    struct xdr_array2_desc *desc, int encode)
-- 
2.37.0



* [PATCH v2 3/6] SUNRPC: Introduce xdr_buf_trim_head()
  2022-07-13 19:08 [PATCH v2 0/6] NFSD: Improvements for the NFSv4.2 READ_PLUS operation Anna Schumaker
  2022-07-13 19:08 ` [PATCH v2 1/6] SUNRPC: Introduce xdr_stream_move_subsegment() Anna Schumaker
  2022-07-13 19:08 ` [PATCH v2 2/6] SUNRPC: Introduce xdr_encode_double() Anna Schumaker
@ 2022-07-13 19:08 ` Anna Schumaker
  2022-07-16  6:28   ` kernel test robot
  2022-07-13 19:08 ` [PATCH v2 4/6] SUNRPC: Introduce xdr_buf_nth_page_address() Anna Schumaker
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 10+ messages in thread
From: Anna Schumaker @ 2022-07-13 19:08 UTC (permalink / raw)
  To: linux-nfs, chuck.lever; +Cc: anna

From: Anna Schumaker <Anna.Schumaker@Netapp.com>

The READ_PLUS operation uses a 32-bit length field for encoding a DATA
segment, but a 64-bit length field for encoding a HOLE segment. When
setting up our reply buffer, we need to reserve enough space to encode
a HOLE segment before reading the file data, and then use this function
to trim off the extra space if the first segment turns out to be DATA.
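
For reference, the two segment headers differ by four bytes on the wire
(sizes per RFC 7862, shown here only as an illustration):

    DATA segment: type (4) + offset (8) + count  (4) = 16 bytes, then the data
    HOLE segment: type (4) + offset (8) + length (8) = 20 bytes, no data follows

Reserving the larger HOLE header up front and then trimming the head by
four bytes keeps the page-aligned read data in place when the first
segment is DATA.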

Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
---
 include/linux/sunrpc/xdr.h |  1 +
 net/sunrpc/xdr.c           | 17 +++++++++++++++++
 2 files changed, 18 insertions(+)

diff --git a/include/linux/sunrpc/xdr.h b/include/linux/sunrpc/xdr.h
index e26047d474b2..bdaf048edde0 100644
--- a/include/linux/sunrpc/xdr.h
+++ b/include/linux/sunrpc/xdr.h
@@ -191,6 +191,7 @@ xdr_adjust_iovec(struct kvec *iov, __be32 *p)
 extern void xdr_shift_buf(struct xdr_buf *, size_t);
 extern void xdr_buf_from_iov(const struct kvec *, struct xdr_buf *);
 extern int xdr_buf_subsegment(const struct xdr_buf *, struct xdr_buf *, unsigned int, unsigned int);
+extern void xdr_buf_trim_head(struct xdr_buf *, unsigned int);
 extern void xdr_buf_trim(struct xdr_buf *, unsigned int);
 extern int read_bytes_from_xdr_buf(const struct xdr_buf *, unsigned int, void *, unsigned int);
 extern int write_bytes_to_xdr_buf(const struct xdr_buf *, unsigned int, void *, unsigned int);
diff --git a/net/sunrpc/xdr.c b/net/sunrpc/xdr.c
index 63d9cdc989da..37956a274f81 100644
--- a/net/sunrpc/xdr.c
+++ b/net/sunrpc/xdr.c
@@ -1739,6 +1739,23 @@ unsigned int xdr_stream_move_subsegment(struct xdr_stream *xdr, unsigned int off
 }
 EXPORT_SYMBOL_GPL(xdr_stream_move_subsegment);
 
+/**
+ * xdr_buf_trim_head - lop at most "len" bytes off the end of "buf"->head
+ * @buf: buf to be trimmed
+ * @len: number of bytes to reduce "buf"->head by
+ *
+ * Trim an xdr_buf->head by the given number of bytes by fixing up the lengths.
+ * Note that it's possible that we'll trim less than that amount if the
+ *  xdr_buf->head is too small.
+ */
+void xdr_buf_trim_head(struct xdr_buf *buf, unsigned int len)
+{
+	size_t trim = min_t(size_t, buf->head[0].iov_len, len);
+	buf->head[0].iov_len -= trim;
+	buf->len -= trim;
+}
+EXPORT_SYMBOL_GPL(xdr_buf_trim_head);
+
 /**
  * xdr_buf_trim - lop at most "len" bytes off the end of "buf"
  * @buf: buf to be trimmed
-- 
2.37.0



* [PATCH v2 4/6] SUNRPC: Introduce xdr_buf_nth_page_address()
  2022-07-13 19:08 [PATCH v2 0/6] NFSD: Improvements for the NFSv4.2 READ_PLUS operation Anna Schumaker
                   ` (2 preceding siblings ...)
  2022-07-13 19:08 ` [PATCH v2 3/6] SUNRPC: Introduce xdr_buf_trim_head() Anna Schumaker
@ 2022-07-13 19:08 ` Anna Schumaker
  2022-07-13 19:08 ` [PATCH v2 5/6] SUNRPC: Export xdr_buf_pagecount() Anna Schumaker
  2022-07-13 19:08 ` [PATCH v2 6/6] NFSD: Repeal and replace the READ_PLUS implementation Anna Schumaker
  5 siblings, 0 replies; 10+ messages in thread
From: Anna Schumaker @ 2022-07-13 19:08 UTC (permalink / raw)
  To: linux-nfs, chuck.lever; +Cc: anna

From: Anna Schumaker <Anna.Schumaker@Netapp.com>

Add a helper that returns a pointer to the memory address represented
by the nth page of an xdr_buf, along with the length of the data on
that page.
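
For illustration, a hypothetical caller (not part of this patch) could
walk the data held in an xdr_buf's pages like this:

    static void xdr_buf_walk_pages(const struct xdr_buf *buf)
    {
            unsigned int i, pagelen;
            char *vpage;

            for (i = 0; (vpage = xdr_buf_nth_page_address(buf, i, &pagelen)); i++)
                    pr_debug("page %u: %u bytes at %p\n", i, pagelen, vpage);
    }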

Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
---
 include/linux/sunrpc/xdr.h |  2 ++
 net/sunrpc/xdr.c           | 17 +++++++++++++++++
 2 files changed, 19 insertions(+)

diff --git a/include/linux/sunrpc/xdr.h b/include/linux/sunrpc/xdr.h
index bdaf048edde0..79824fea4529 100644
--- a/include/linux/sunrpc/xdr.h
+++ b/include/linux/sunrpc/xdr.h
@@ -137,6 +137,8 @@ void	xdr_inline_pages(struct xdr_buf *, unsigned int,
 			 struct page **, unsigned int, unsigned int);
 void	xdr_terminate_string(const struct xdr_buf *, const u32);
 size_t	xdr_buf_pagecount(const struct xdr_buf *buf);
+char	*xdr_buf_nth_page_address(const struct xdr_buf *buf, unsigned int n,
+				  unsigned int *len);
 int	xdr_alloc_bvec(struct xdr_buf *buf, gfp_t gfp);
 void	xdr_free_bvec(struct xdr_buf *buf);
 
diff --git a/net/sunrpc/xdr.c b/net/sunrpc/xdr.c
index 37956a274f81..88b28656a05d 100644
--- a/net/sunrpc/xdr.c
+++ b/net/sunrpc/xdr.c
@@ -140,6 +140,23 @@ size_t xdr_buf_pagecount(const struct xdr_buf *buf)
 	return (buf->page_base + buf->page_len + PAGE_SIZE - 1) >> PAGE_SHIFT;
 }
 
+char *xdr_buf_nth_page_address(const struct xdr_buf *buf, unsigned int n,
+			       unsigned int *len)
+{
+	unsigned int pgbase = buf->page_base + (n * PAGE_SIZE);
+	struct page **pages = buf->pages;
+	struct page **page;
+
+	if (n >= xdr_buf_pagecount(buf))
+		return NULL;
+
+	page = pages + (pgbase >> PAGE_SHIFT);
+	pgbase &= ~PAGE_MASK;
+	*len = min_t(size_t, PAGE_SIZE, buf->page_len - (n * PAGE_SIZE));
+	return page_address(*page) + pgbase;
+}
+EXPORT_SYMBOL_GPL(xdr_buf_nth_page_address);
+
 int
 xdr_alloc_bvec(struct xdr_buf *buf, gfp_t gfp)
 {
-- 
2.37.0



* [PATCH v2 5/6] SUNRPC: Export xdr_buf_pagecount()
  2022-07-13 19:08 [PATCH v2 0/6] NFSD: Improvements for the NFSv4.2 READ_PLUS operation Anna Schumaker
                   ` (3 preceding siblings ...)
  2022-07-13 19:08 ` [PATCH v2 4/6] SUNRPC: Introduce xdr_buf_nth_page_address() Anna Schumaker
@ 2022-07-13 19:08 ` Anna Schumaker
  2022-07-13 19:08 ` [PATCH v2 6/6] NFSD: Repeal and replace the READ_PLUS implementation Anna Schumaker
  5 siblings, 0 replies; 10+ messages in thread
From: Anna Schumaker @ 2022-07-13 19:08 UTC (permalink / raw)
  To: linux-nfs, chuck.lever; +Cc: anna

From: Anna Schumaker <Anna.Schumaker@Netapp.com>

The NFS server will need this for iterating over pages in a READ_PLUS
reply.

Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
---
 net/sunrpc/xdr.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/net/sunrpc/xdr.c b/net/sunrpc/xdr.c
index 88b28656a05d..ea734b14af0f 100644
--- a/net/sunrpc/xdr.c
+++ b/net/sunrpc/xdr.c
@@ -139,6 +139,7 @@ size_t xdr_buf_pagecount(const struct xdr_buf *buf)
 		return 0;
 	return (buf->page_base + buf->page_len + PAGE_SIZE - 1) >> PAGE_SHIFT;
 }
+EXPORT_SYMBOL_GPL(xdr_buf_pagecount);
 
 char *xdr_buf_nth_page_address(const struct xdr_buf *buf, unsigned int n,
 			       unsigned int *len)
-- 
2.37.0



* [PATCH v2 6/6] NFSD: Repeal and replace the READ_PLUS implementation
  2022-07-13 19:08 [PATCH v2 0/6] NFSD: Improvements for the NFSv4.2 READ_PLUS operation Anna Schumaker
                   ` (4 preceding siblings ...)
  2022-07-13 19:08 ` [PATCH v2 5/6] SUNRPC: Export xdr_buf_pagecount() Anna Schumaker
@ 2022-07-13 19:08 ` Anna Schumaker
  2022-07-13 20:45   ` Chuck Lever III
  5 siblings, 1 reply; 10+ messages in thread
From: Anna Schumaker @ 2022-07-13 19:08 UTC (permalink / raw)
  To: linux-nfs, chuck.lever; +Cc: anna

From: Anna Schumaker <Anna.Schumaker@Netapp.com>

Rather than relying on the underlying filesystem to tell us where hole
and data segments are through vfs_llseek(), let's instead do the hole
compression ourselves. This has a few advantages over the old
implementation:

1) A single call to the underlying filesystem through nfsd_readv() means
   the file can't change from underneath us in the middle of encoding.
2) A single call to the underlying filesystem also means that the
   underlying filesystem only needs to synchronize cached and on-disk
   data one time instead of potentially many times, speeding up the reply.
3) Hole support for filesystems that don't support SEEK_HOLE and SEEK_DATA.

I also included an optimization where we can cut down on the amount of
memory being shifted around by doing the compression as (hole, data)
pairs.

This patch not only fixes xfstests generic/091 and generic/263 for me,
but the "-g quick" test group also finishes about a minute faster.

Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
---
 fs/nfsd/nfs4xdr.c | 202 +++++++++++++++++++++++-----------------------
 1 file changed, 102 insertions(+), 100 deletions(-)

diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
index 61b2aae81abb..0e1e7a37d4e0 100644
--- a/fs/nfsd/nfs4xdr.c
+++ b/fs/nfsd/nfs4xdr.c
@@ -4731,81 +4731,121 @@ nfsd4_encode_offload_status(struct nfsd4_compoundres *resp, __be32 nfserr,
 	return nfserr;
 }
 
+struct read_plus_segment {
+	enum data_content4 type;
+	unsigned long offset;
+	unsigned long length;
+	unsigned int page_pos;
+};
+
 static __be32
-nfsd4_encode_read_plus_data(struct nfsd4_compoundres *resp,
-			    struct nfsd4_read *read,
-			    unsigned long *maxcount, u32 *eof,
-			    loff_t *pos)
+nfsd4_read_plus_readv(struct nfsd4_compoundres *resp, struct nfsd4_read *read,
+		      unsigned long *maxcount, u32 *eof)
 {
 	struct xdr_stream *xdr = resp->xdr;
-	struct file *file = read->rd_nf->nf_file;
-	int starting_len = xdr->buf->len;
-	loff_t hole_pos;
-	__be32 nfserr;
-	__be32 *p, tmp;
-	__be64 tmp64;
-
-	hole_pos = pos ? *pos : vfs_llseek(file, read->rd_offset, SEEK_HOLE);
-	if (hole_pos > read->rd_offset)
-		*maxcount = min_t(unsigned long, *maxcount, hole_pos - read->rd_offset);
-	*maxcount = min_t(unsigned long, *maxcount, (xdr->buf->buflen - xdr->buf->len));
-
-	/* Content type, offset, byte count */
-	p = xdr_reserve_space(xdr, 4 + 8 + 4);
-	if (!p)
-		return nfserr_resource;
+	unsigned int starting_len = xdr->buf->len;
+	__be32 nfserr, zero = xdr_zero;
+	int pad;
 
+	/* xdr_reserve_space_vec() switches us to the xdr->pages */
 	read->rd_vlen = xdr_reserve_space_vec(xdr, resp->rqstp->rq_vec, *maxcount);
 	if (read->rd_vlen < 0)
 		return nfserr_resource;
 
-	nfserr = nfsd_readv(resp->rqstp, read->rd_fhp, file, read->rd_offset,
-			    resp->rqstp->rq_vec, read->rd_vlen, maxcount, eof);
+	nfserr = nfsd_readv(resp->rqstp, read->rd_fhp, read->rd_nf->nf_file,
+			    read->rd_offset, resp->rqstp->rq_vec, read->rd_vlen,
+			    maxcount, eof);
 	if (nfserr)
 		return nfserr;
-	xdr_truncate_encode(xdr, starting_len + 16 + xdr_align_size(*maxcount));
+	xdr_truncate_encode(xdr, starting_len + xdr_align_size(*maxcount));
 
-	tmp = htonl(NFS4_CONTENT_DATA);
-	write_bytes_to_xdr_buf(xdr->buf, starting_len,      &tmp,   4);
-	tmp64 = cpu_to_be64(read->rd_offset);
-	write_bytes_to_xdr_buf(xdr->buf, starting_len + 4,  &tmp64, 8);
-	tmp = htonl(*maxcount);
-	write_bytes_to_xdr_buf(xdr->buf, starting_len + 12, &tmp,   4);
-
-	tmp = xdr_zero;
-	write_bytes_to_xdr_buf(xdr->buf, starting_len + 16 + *maxcount, &tmp,
-			       xdr_pad_size(*maxcount));
+	pad = (*maxcount&3) ? 4 - (*maxcount&3) : 0;
+	write_bytes_to_xdr_buf(xdr->buf, starting_len + *maxcount, &zero, pad);
 	return nfs_ok;
 }
 
+static void
+nfsd4_encode_read_plus_segment(struct xdr_stream *xdr,
+			       struct read_plus_segment *segment,
+			       unsigned int *bufpos, unsigned int *segments)
+{
+	struct xdr_buf *buf = xdr->buf;
+
+	xdr_encode_word(buf, *bufpos, segment->type);
+	xdr_encode_double(buf, *bufpos + 4, segment->offset);
+
+	if (segment->type == NFS4_CONTENT_HOLE) {
+		xdr_encode_double(buf, *bufpos + 12, segment->length);
+		*bufpos += 4 + 8 + 8;
+	} else {
+		size_t align = xdr_align_size(segment->length);
+		xdr_encode_word(buf, *bufpos + 12, segment->length);
+		if (*segments == 0)
+			xdr_buf_trim_head(buf, 4);
+
+		xdr_stream_move_subsegment(xdr,
+				buf->head[0].iov_len + segment->page_pos,
+				*bufpos + 16, align);
+		*bufpos += 4 + 8 + 4 + align;
+	}
+
+	*segments += 1;
+}
+
 static __be32
-nfsd4_encode_read_plus_hole(struct nfsd4_compoundres *resp,
-			    struct nfsd4_read *read,
-			    unsigned long *maxcount, u32 *eof)
+nfsd4_encode_read_plus_segments(struct nfsd4_compoundres *resp,
+				struct nfsd4_read *read,
+				unsigned int *segments, u32 *eof)
 {
-	struct file *file = read->rd_nf->nf_file;
-	loff_t data_pos = vfs_llseek(file, read->rd_offset, SEEK_DATA);
-	loff_t f_size = i_size_read(file_inode(file));
-	unsigned long count;
-	__be32 *p;
+	enum data_content4 pagetype;
+	struct read_plus_segment segment;
+	struct xdr_stream *xdr = resp->xdr;
+	unsigned long offset = read->rd_offset;
+	unsigned int bufpos = xdr->buf->len;
+	unsigned long maxcount;
+	unsigned int pagelen, i = 0;
+	char *vpage, *p;
+	__be32 nfserr;
 
-	if (data_pos == -ENXIO)
-		data_pos = f_size;
-	else if (data_pos <= read->rd_offset || (data_pos < f_size && data_pos % PAGE_SIZE))
-		return nfsd4_encode_read_plus_data(resp, read, maxcount, eof, &f_size);
-	count = data_pos - read->rd_offset;
-
-	/* Content type, offset, byte count */
-	p = xdr_reserve_space(resp->xdr, 4 + 8 + 8);
-	if (!p)
+	/* enough space for a HOLE segment before we switch to the pages */
+	if (!xdr_reserve_space(xdr, 4 + 8 + 8))
 		return nfserr_resource;
+	xdr_commit_encode(xdr);
 
-	*p++ = htonl(NFS4_CONTENT_HOLE);
-	p = xdr_encode_hyper(p, read->rd_offset);
-	p = xdr_encode_hyper(p, count);
+	maxcount = min_t(unsigned long, read->rd_length,
+			 (xdr->buf->buflen - xdr->buf->len));
 
-	*eof = (read->rd_offset + count) >= f_size;
-	*maxcount = min_t(unsigned long, count, *maxcount);
+	nfserr = nfsd4_read_plus_readv(resp, read, &maxcount, eof);
+	if (nfserr)
+		return nfserr;
+
+	while (maxcount > 0) {
+		vpage = xdr_buf_nth_page_address(xdr->buf, i, &pagelen);
+		pagelen = min_t(unsigned int, pagelen, maxcount);
+		if (!vpage || pagelen == 0)
+			break;
+		p = memchr_inv(vpage, 0, pagelen);
+		pagetype = (p == NULL) ? NFS4_CONTENT_HOLE : NFS4_CONTENT_DATA;
+
+		if (pagetype != segment.type || i == 0) {
+			if (likely(i > 0)) {
+				nfsd4_encode_read_plus_segment(xdr, &segment,
+							      &bufpos, segments);
+				offset += segment.length;
+			}
+			segment.type = pagetype;
+			segment.offset = offset;
+			segment.length = pagelen;
+			segment.page_pos = i * PAGE_SIZE;
+		} else
+			segment.length += pagelen;
+
+		maxcount -= pagelen;
+		i++;
+	}
+
+	nfsd4_encode_read_plus_segment(xdr, &segment, &bufpos, segments);
+	xdr_truncate_encode(xdr, bufpos);
 	return nfs_ok;
 }
 
@@ -4813,69 +4853,31 @@ static __be32
 nfsd4_encode_read_plus(struct nfsd4_compoundres *resp, __be32 nfserr,
 		       struct nfsd4_read *read)
 {
-	unsigned long maxcount, count;
 	struct xdr_stream *xdr = resp->xdr;
-	struct file *file;
 	int starting_len = xdr->buf->len;
-	int last_segment = xdr->buf->len;
-	int segments = 0;
-	__be32 *p, tmp;
-	bool is_data;
-	loff_t pos;
+	unsigned int segments = 0;
 	u32 eof;
 
 	if (nfserr)
 		return nfserr;
-	file = read->rd_nf->nf_file;
 
 	/* eof flag, segment count */
-	p = xdr_reserve_space(xdr, 4 + 4);
-	if (!p)
+	if (!xdr_reserve_space(xdr, 4 + 4))
 		return nfserr_resource;
 	xdr_commit_encode(xdr);
 
-	maxcount = min_t(unsigned long, read->rd_length,
-			 (xdr->buf->buflen - xdr->buf->len));
-	count    = maxcount;
-
-	eof = read->rd_offset >= i_size_read(file_inode(file));
+	eof = read->rd_offset >= i_size_read(file_inode(read->rd_nf->nf_file));
 	if (eof)
 		goto out;
 
-	pos = vfs_llseek(file, read->rd_offset, SEEK_HOLE);
-	is_data = pos > read->rd_offset;
-
-	while (count > 0 && !eof) {
-		maxcount = count;
-		if (is_data)
-			nfserr = nfsd4_encode_read_plus_data(resp, read, &maxcount, &eof,
-						segments == 0 ? &pos : NULL);
-		else
-			nfserr = nfsd4_encode_read_plus_hole(resp, read, &maxcount, &eof);
-		if (nfserr)
-			goto out;
-		count -= maxcount;
-		read->rd_offset += maxcount;
-		is_data = !is_data;
-		last_segment = xdr->buf->len;
-		segments++;
-	}
-
+	nfserr = nfsd4_encode_read_plus_segments(resp, read, &segments, &eof);
 out:
-	if (nfserr && segments == 0)
+	if (nfserr)
 		xdr_truncate_encode(xdr, starting_len);
 	else {
-		if (nfserr) {
-			xdr_truncate_encode(xdr, last_segment);
-			nfserr = nfs_ok;
-			eof = 0;
-		}
-		tmp = htonl(eof);
-		write_bytes_to_xdr_buf(xdr->buf, starting_len,     &tmp, 4);
-		tmp = htonl(segments);
-		write_bytes_to_xdr_buf(xdr->buf, starting_len + 4, &tmp, 4);
+		xdr_encode_word(xdr->buf, starting_len,     eof);
+		xdr_encode_word(xdr->buf, starting_len + 4, segments);
 	}
-
 	return nfserr;
 }
 
-- 
2.37.0



* Re: [PATCH v2 6/6] NFSD: Repeal and replace the READ_PLUS implementation
  2022-07-13 19:08 ` [PATCH v2 6/6] NFSD: Repeal and replace the READ_PLUS implementation Anna Schumaker
@ 2022-07-13 20:45   ` Chuck Lever III
  2022-07-15 13:05     ` Anna Schumaker
  0 siblings, 1 reply; 10+ messages in thread
From: Chuck Lever III @ 2022-07-13 20:45 UTC (permalink / raw)
  To: Anna Schumaker; +Cc: Linux NFS Mailing List



> On Jul 13, 2022, at 3:08 PM, Anna Schumaker <anna@kernel.org> wrote:
> 
> From: Anna Schumaker <Anna.Schumaker@Netapp.com>
> 
> Rather than relying on the underlying filesystem to tell us where hole
> and data segments are through vfs_llseek(), let's instead do the hole
> compression ourselves. This has a few advantages over the old
> implementation:
> 
> 1) A single call to the underlying filesystem through nfsd_readv() means
>   the file can't change from underneath us in the middle of encoding.
> 2) A single call to the underlying filesystem also means that the
>   underlying filesystem only needs to synchronize cached and on-disk
>   data one time instead of potentially many times, speeding up the reply.
> 3) Hole support for filesystems that don't support SEEK_HOLE and SEEK_DATA.

I'm not sure I understand why this last one is a good idea.
Wouldn't that cause holes to appear in the file cached on
the client where there are no holes in the stored file on
the server?

Is there any encryption-related impact, such as the issues that David
brought up during LSF/MM?


> I also included an optimization where we can cut down on the amount of
> memory being shifted around by doing the compression as (hole, data)
> pairs.
> 
> This patch not only fixes xfstests generic/091 and generic/263 for me,
> but the "-g quick" test group also finishes about a minute faster.
> 
> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
> ---
> fs/nfsd/nfs4xdr.c | 202 +++++++++++++++++++++++-----------------------
> 1 file changed, 102 insertions(+), 100 deletions(-)
> 
> diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
> index 61b2aae81abb..0e1e7a37d4e0 100644
> --- a/fs/nfsd/nfs4xdr.c
> +++ b/fs/nfsd/nfs4xdr.c
> @@ -4731,81 +4731,121 @@ nfsd4_encode_offload_status(struct nfsd4_compoundres *resp, __be32 nfserr,
> 	return nfserr;
> }
> 
> +struct read_plus_segment {
> +	enum data_content4 type;
> +	unsigned long offset;
> +	unsigned long length;
> +	unsigned int page_pos;
> +};

"unsigned long" is not always 64 bits wide, and note that
rd_offset is declared as a u64. Thus ::offset and ::length
need to have explicit bit-width types. How about u64 for both?

The same type needs to be used wherever you do an

	unsigned long offset = read->rd_offset;

Nit: can this struct declaration use tab-formatting with the
usual naming convention for the fields, like "rp_type" and
"rp_offset"? That makes it easier to grep for places these
fields are used, since the current names are pretty generic.


> +
> static __be32
> -nfsd4_encode_read_plus_data(struct nfsd4_compoundres *resp,
> -			    struct nfsd4_read *read,
> -			    unsigned long *maxcount, u32 *eof,
> -			    loff_t *pos)
> +nfsd4_read_plus_readv(struct nfsd4_compoundres *resp, struct nfsd4_read *read,
> +		      unsigned long *maxcount, u32 *eof)
> {
> 	struct xdr_stream *xdr = resp->xdr;
> -	struct file *file = read->rd_nf->nf_file;
> -	int starting_len = xdr->buf->len;
> -	loff_t hole_pos;
> -	__be32 nfserr;
> -	__be32 *p, tmp;
> -	__be64 tmp64;
> -
> -	hole_pos = pos ? *pos : vfs_llseek(file, read->rd_offset, SEEK_HOLE);
> -	if (hole_pos > read->rd_offset)
> -		*maxcount = min_t(unsigned long, *maxcount, hole_pos - read->rd_offset);
> -	*maxcount = min_t(unsigned long, *maxcount, (xdr->buf->buflen - xdr->buf->len));
> -
> -	/* Content type, offset, byte count */
> -	p = xdr_reserve_space(xdr, 4 + 8 + 4);
> -	if (!p)
> -		return nfserr_resource;
> +	unsigned int starting_len = xdr->buf->len;
> +	__be32 nfserr, zero = xdr_zero;
> +	int pad;

unsigned int pad;


> 
> +	/* xdr_reserve_space_vec() switches us to the xdr->pages */

IIUC this is reserving a maximum estimated size piece of the xdr_stream
to be used for encoding the READ_PLUS, and then the mechanics of
encoding the result can trim the message length down a bit. The missing
xdr_reserve_space calls are a little confusing, as are the operations on
the stream's xdr_buf rather than using xdr_stream operations, so it would
help to explain what's going on, perhaps as part of this comment.

Or, move all of this blathering to a kerneldoc comment in front of
nfsd4_encode_read_plus_segments()


> 	read->rd_vlen = xdr_reserve_space_vec(xdr, resp->rqstp->rq_vec, *maxcount);
> 	if (read->rd_vlen < 0)
> 		return nfserr_resource;
> 
> -	nfserr = nfsd_readv(resp->rqstp, read->rd_fhp, file, read->rd_offset,
> -			    resp->rqstp->rq_vec, read->rd_vlen, maxcount, eof);
> +	nfserr = nfsd_readv(resp->rqstp, read->rd_fhp, read->rd_nf->nf_file,
> +			    read->rd_offset, resp->rqstp->rq_vec, read->rd_vlen,
> +			    maxcount, eof);
> 	if (nfserr)
> 		return nfserr;
> -	xdr_truncate_encode(xdr, starting_len + 16 + xdr_align_size(*maxcount));
> +	xdr_truncate_encode(xdr, starting_len + xdr_align_size(*maxcount));
> 
> -	tmp = htonl(NFS4_CONTENT_DATA);
> -	write_bytes_to_xdr_buf(xdr->buf, starting_len,      &tmp,   4);
> -	tmp64 = cpu_to_be64(read->rd_offset);
> -	write_bytes_to_xdr_buf(xdr->buf, starting_len + 4,  &tmp64, 8);
> -	tmp = htonl(*maxcount);
> -	write_bytes_to_xdr_buf(xdr->buf, starting_len + 12, &tmp,   4);
> -
> -	tmp = xdr_zero;
> -	write_bytes_to_xdr_buf(xdr->buf, starting_len + 16 + *maxcount, &tmp,
> -			       xdr_pad_size(*maxcount));
> +	pad = (*maxcount&3) ? 4 - (*maxcount&3) : 0;

Would xdr_pad_size() be appropriate here?


> +	write_bytes_to_xdr_buf(xdr->buf, starting_len + *maxcount, &zero, pad);
> 	return nfs_ok;
> }
> 
> +static void
> +nfsd4_encode_read_plus_segment(struct xdr_stream *xdr,
> +			       struct read_plus_segment *segment,
> +			       unsigned int *bufpos, unsigned int *segments)
> +{
> +	struct xdr_buf *buf = xdr->buf;
> +
> +	xdr_encode_word(buf, *bufpos, segment->type);
> +	xdr_encode_double(buf, *bufpos + 4, segment->offset);
> +
> +	if (segment->type == NFS4_CONTENT_HOLE) {
> +		xdr_encode_double(buf, *bufpos + 12, segment->length);
> +		*bufpos += 4 + 8 + 8;

Throughout, can you use multiples of XDR_UNIT instead of naked integers?


> +	} else {
> +		size_t align = xdr_align_size(segment->length);
> +		xdr_encode_word(buf, *bufpos + 12, segment->length);
> +		if (*segments == 0)
> +			xdr_buf_trim_head(buf, 4);
> +
> +		xdr_stream_move_subsegment(xdr,
> +				buf->head[0].iov_len + segment->page_pos,
> +				*bufpos + 16, align);
> +		*bufpos += 4 + 8 + 4 + align;
> +	}
> +
> +	*segments += 1;
> +}
> +
> static __be32
> -nfsd4_encode_read_plus_hole(struct nfsd4_compoundres *resp,
> -			    struct nfsd4_read *read,
> -			    unsigned long *maxcount, u32 *eof)
> +nfsd4_encode_read_plus_segments(struct nfsd4_compoundres *resp,
> +				struct nfsd4_read *read,
> +				unsigned int *segments, u32 *eof)
> {
> -	struct file *file = read->rd_nf->nf_file;
> -	loff_t data_pos = vfs_llseek(file, read->rd_offset, SEEK_DATA);
> -	loff_t f_size = i_size_read(file_inode(file));
> -	unsigned long count;
> -	__be32 *p;
> +	enum data_content4 pagetype;
> +	struct read_plus_segment segment;
> +	struct xdr_stream *xdr = resp->xdr;
> +	unsigned long offset = read->rd_offset;
> +	unsigned int bufpos = xdr->buf->len;
> +	unsigned long maxcount;
> +	unsigned int pagelen, i = 0;
> +	char *vpage, *p;
> +	__be32 nfserr;

Nit: try to use reverse christmas tree style where possible.


> -	if (data_pos == -ENXIO)
> -		data_pos = f_size;
> -	else if (data_pos <= read->rd_offset || (data_pos < f_size && data_pos % PAGE_SIZE))
> -		return nfsd4_encode_read_plus_data(resp, read, maxcount, eof, &f_size);
> -	count = data_pos - read->rd_offset;
> -
> -	/* Content type, offset, byte count */
> -	p = xdr_reserve_space(resp->xdr, 4 + 8 + 8);
> -	if (!p)
> +	/* enough space for a HOLE segment before we switch to the pages */
> +	if (!xdr_reserve_space(xdr, 4 + 8 + 8))
> 		return nfserr_resource;
> +	xdr_commit_encode(xdr);
> 
> -	*p++ = htonl(NFS4_CONTENT_HOLE);
> -	p = xdr_encode_hyper(p, read->rd_offset);
> -	p = xdr_encode_hyper(p, count);
> +	maxcount = min_t(unsigned long, read->rd_length,
> +			 (xdr->buf->buflen - xdr->buf->len));
> 
> -	*eof = (read->rd_offset + count) >= f_size;
> -	*maxcount = min_t(unsigned long, count, *maxcount);
> +	nfserr = nfsd4_read_plus_readv(resp, read, &maxcount, eof);
> +	if (nfserr)
> +		return nfserr;
> +
> +	while (maxcount > 0) {
> +		vpage = xdr_buf_nth_page_address(xdr->buf, i, &pagelen);
> +		pagelen = min_t(unsigned int, pagelen, maxcount);
> +		if (!vpage || pagelen == 0)
> +			break;
> +		p = memchr_inv(vpage, 0, pagelen);

So you have to walk every page in the payload, byte-by-byte, to
sort out how to encode the READ_PLUS result? That's... unfortunate.
The whole idea of making the READ payload "opaque" is that XDR
doesn't have to touch those bytes; and then the payload is passed
to the network layer as pointers to pages for the same reason.

It might be helpful to get this reviewed by fsdevel and linux-mm
in case there's a better approach. Hugh was attempting to point
all zero pages at ZERO_PAGE at one point, for example, and that
would make it very quick to detect a range of zero bytes.

Another thought is to use a POSIX byte-range lock to prevent
changes to the range of the file you're encoding, while leaving
the rest of the file available for other operations. That way
you could continue to use llseek when that's supported.


> +		pagetype = (p == NULL) ? NFS4_CONTENT_HOLE : NFS4_CONTENT_DATA;
> +
> +		if (pagetype != segment.type || i == 0) {
> +			if (likely(i > 0)) {
> +				nfsd4_encode_read_plus_segment(xdr, &segment,
> +							      &bufpos, segments);
> +				offset += segment.length;
> +			}
> +			segment.type = pagetype;
> +			segment.offset = offset;
> +			segment.length = pagelen;
> +			segment.page_pos = i * PAGE_SIZE;
> +		} else
> +			segment.length += pagelen;
> +
> +		maxcount -= pagelen;
> +		i++;
> +	}
> +
> +	nfsd4_encode_read_plus_segment(xdr, &segment, &bufpos, segments);
> +	xdr_truncate_encode(xdr, bufpos);
> 	return nfs_ok;
> }
> 
> @@ -4813,69 +4853,31 @@ static __be32
> nfsd4_encode_read_plus(struct nfsd4_compoundres *resp, __be32 nfserr,
> 		       struct nfsd4_read *read)
> {
> -	unsigned long maxcount, count;
> 	struct xdr_stream *xdr = resp->xdr;
> -	struct file *file;
> 	int starting_len = xdr->buf->len;
> -	int last_segment = xdr->buf->len;
> -	int segments = 0;
> -	__be32 *p, tmp;
> -	bool is_data;
> -	loff_t pos;
> +	unsigned int segments = 0;
> 	u32 eof;
> 
> 	if (nfserr)
> 		return nfserr;
> -	file = read->rd_nf->nf_file;
> 
> 	/* eof flag, segment count */
> -	p = xdr_reserve_space(xdr, 4 + 4);
> -	if (!p)
> +	if (!xdr_reserve_space(xdr, 4 + 4))
> 		return nfserr_resource;
> 	xdr_commit_encode(xdr);
> 
> -	maxcount = min_t(unsigned long, read->rd_length,
> -			 (xdr->buf->buflen - xdr->buf->len));
> -	count    = maxcount;
> -
> -	eof = read->rd_offset >= i_size_read(file_inode(file));
> +	eof = read->rd_offset >= i_size_read(file_inode(read->rd_nf->nf_file));
> 	if (eof)
> 		goto out;
> 
> -	pos = vfs_llseek(file, read->rd_offset, SEEK_HOLE);
> -	is_data = pos > read->rd_offset;
> -
> -	while (count > 0 && !eof) {
> -		maxcount = count;
> -		if (is_data)
> -			nfserr = nfsd4_encode_read_plus_data(resp, read, &maxcount, &eof,
> -						segments == 0 ? &pos : NULL);
> -		else
> -			nfserr = nfsd4_encode_read_plus_hole(resp, read, &maxcount, &eof);
> -		if (nfserr)
> -			goto out;
> -		count -= maxcount;
> -		read->rd_offset += maxcount;
> -		is_data = !is_data;
> -		last_segment = xdr->buf->len;
> -		segments++;
> -	}
> -
> +	nfserr = nfsd4_encode_read_plus_segments(resp, read, &segments, &eof);
> out:
> -	if (nfserr && segments == 0)
> +	if (nfserr)
> 		xdr_truncate_encode(xdr, starting_len);
> 	else {
> -		if (nfserr) {
> -			xdr_truncate_encode(xdr, last_segment);
> -			nfserr = nfs_ok;
> -			eof = 0;
> -		}
> -		tmp = htonl(eof);
> -		write_bytes_to_xdr_buf(xdr->buf, starting_len,     &tmp, 4);
> -		tmp = htonl(segments);
> -		write_bytes_to_xdr_buf(xdr->buf, starting_len + 4, &tmp, 4);
> +		xdr_encode_word(xdr->buf, starting_len,     eof);
> +		xdr_encode_word(xdr->buf, starting_len + 4, segments);
> 	}
> -
> 	return nfserr;
> }

The clean-ups in nfsd4_encode_read_plus() LGTM.


--
Chuck Lever





* Re: [PATCH v2 6/6] NFSD: Repeal and replace the READ_PLUS implementation
  2022-07-13 20:45   ` Chuck Lever III
@ 2022-07-15 13:05     ` Anna Schumaker
  0 siblings, 0 replies; 10+ messages in thread
From: Anna Schumaker @ 2022-07-15 13:05 UTC (permalink / raw)
  To: Chuck Lever III; +Cc: Linux NFS Mailing List

On Wed, Jul 13, 2022 at 4:45 PM Chuck Lever III <chuck.lever@oracle.com> wrote:
>
>
>
> > On Jul 13, 2022, at 3:08 PM, Anna Schumaker <anna@kernel.org> wrote:
> >
> > From: Anna Schumaker <Anna.Schumaker@Netapp.com>
> >
> > Rather than relying on the underlying filesystem to tell us where hole
> > and data segments are through vfs_llseek(), let's instead do the hole
> > compression ourselves. This has a few advantages over the old
> > implementation:
> >
> > 1) A single call to the underlying filesystem through nfsd_readv() means
> >   the file can't change from underneath us in the middle of encoding.
> > 2) A single call to the underlying filesystem also means that the
> >   underlying filesystem only needs to synchronize cached and on-disk
> >   data one time instead of potentially many times, speeding up the reply.
> > 3) Hole support for filesystems that don't support SEEK_HOLE and SEEK_DATA.
>
> I'm not sure I understand why this last one is a good idea.
> Wouldn't that cause holes to appear in the file cached on
> the client where there are no holes in the stored file on
> the server?

It doesn't, because the client expands the holes into zero-filled pages
during READ_PLUS decoding. Note that this expansion happens on the
server side for the classic READ operation.

>
> Is there any encryption-related impact such as the issues
> that David brought up during LSF/MM ?

Not that I'm aware of.

>
>
> > I also included an optimization where we can cut down on the amount of
> > memory being shifted around by doing the compression as (hole, data)
> > pairs.
> >
> > This patch not only fixes xfstests generic/091 and generic/263 for me,
> > but the "-g quick" test group also finishes about a minute faster.
> >
> > Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
> > ---
> > fs/nfsd/nfs4xdr.c | 202 +++++++++++++++++++++++-----------------------
> > 1 file changed, 102 insertions(+), 100 deletions(-)
> >
> > diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
> > index 61b2aae81abb..0e1e7a37d4e0 100644
> > --- a/fs/nfsd/nfs4xdr.c
> > +++ b/fs/nfsd/nfs4xdr.c
> > @@ -4731,81 +4731,121 @@ nfsd4_encode_offload_status(struct nfsd4_compoundres *resp, __be32 nfserr,
> >       return nfserr;
> > }
> >
> > +struct read_plus_segment {
> > +     enum data_content4 type;
> > +     unsigned long offset;
> > +     unsigned long length;
> > +     unsigned int page_pos;
> > +};
>
> "unsigned long" is not always 64 bits wide, and note that
> rd_offset is declared as a u64. Thus ::offset and ::length
> need to have explicit bit-width types. How about u64 for both?

Sure, I'll change that.

>
> The same type needs to be used wherever you do an
>
>         unsigned long offset = read->rd_offset;
>
> Nit: can this struct declaration use tab-formatting with the
> usual naming convention for the fields, like "rp_type" and
> "rp_offset"? That makes it easier to grep for places these
> fields are used, since the current names are pretty generic.

Yeah, no problem.

>
>
> > +
> > static __be32
> > -nfsd4_encode_read_plus_data(struct nfsd4_compoundres *resp,
> > -                         struct nfsd4_read *read,
> > -                         unsigned long *maxcount, u32 *eof,
> > -                         loff_t *pos)
> > +nfsd4_read_plus_readv(struct nfsd4_compoundres *resp, struct nfsd4_read *read,
> > +                   unsigned long *maxcount, u32 *eof)
> > {
> >       struct xdr_stream *xdr = resp->xdr;
> > -     struct file *file = read->rd_nf->nf_file;
> > -     int starting_len = xdr->buf->len;
> > -     loff_t hole_pos;
> > -     __be32 nfserr;
> > -     __be32 *p, tmp;
> > -     __be64 tmp64;
> > -
> > -     hole_pos = pos ? *pos : vfs_llseek(file, read->rd_offset, SEEK_HOLE);
> > -     if (hole_pos > read->rd_offset)
> > -             *maxcount = min_t(unsigned long, *maxcount, hole_pos - read->rd_offset);
> > -     *maxcount = min_t(unsigned long, *maxcount, (xdr->buf->buflen - xdr->buf->len));
> > -
> > -     /* Content type, offset, byte count */
> > -     p = xdr_reserve_space(xdr, 4 + 8 + 4);
> > -     if (!p)
> > -             return nfserr_resource;
> > +     unsigned int starting_len = xdr->buf->len;
> > +     __be32 nfserr, zero = xdr_zero;
> > +     int pad;
>
> unsigned int pad;

Ok

>
>
> >
> > +     /* xdr_reserve_space_vec() switches us to the xdr->pages */
>
> IIUC this is reserving a maximum estimated size piece of the xdr_stream
> to be used for encoding the READ_PLUS, and then the mechanics of
> encoding the result can trim the message length down a bit. The missing
> xdr_reserve_space calls are a little confusing, as are the operations on
> the stream's xdr_buf rather than using xdr_stream operations, so it would
> help to explain what's going on, perhaps as part of this comment.

Sure, I'll expand on the comment. I'm not sure of the best way to switch
xdr_encode_word() and xdr_encode_double() over to an xdr_stream here,
since the xdr_stream_encode_*() family of functions all call
xdr_reserve_space() internally, but I'm writing to space that has
already been reserved.

>
> Or, move all of this blathering to a kerneldoc comment in front of
> nfsd4_encode_read_plus_segments()
>
>
> >       read->rd_vlen = xdr_reserve_space_vec(xdr, resp->rqstp->rq_vec, *maxcount);
> >       if (read->rd_vlen < 0)
> >               return nfserr_resource;
> >
> > -     nfserr = nfsd_readv(resp->rqstp, read->rd_fhp, file, read->rd_offset,
> > -                         resp->rqstp->rq_vec, read->rd_vlen, maxcount, eof);
> > +     nfserr = nfsd_readv(resp->rqstp, read->rd_fhp, read->rd_nf->nf_file,
> > +                         read->rd_offset, resp->rqstp->rq_vec, read->rd_vlen,
> > +                         maxcount, eof);
> >       if (nfserr)
> >               return nfserr;
> > -     xdr_truncate_encode(xdr, starting_len + 16 + xdr_align_size(*maxcount));
> > +     xdr_truncate_encode(xdr, starting_len + xdr_align_size(*maxcount));
> >
> > -     tmp = htonl(NFS4_CONTENT_DATA);
> > -     write_bytes_to_xdr_buf(xdr->buf, starting_len,      &tmp,   4);
> > -     tmp64 = cpu_to_be64(read->rd_offset);
> > -     write_bytes_to_xdr_buf(xdr->buf, starting_len + 4,  &tmp64, 8);
> > -     tmp = htonl(*maxcount);
> > -     write_bytes_to_xdr_buf(xdr->buf, starting_len + 12, &tmp,   4);
> > -
> > -     tmp = xdr_zero;
> > -     write_bytes_to_xdr_buf(xdr->buf, starting_len + 16 + *maxcount, &tmp,
> > -                            xdr_pad_size(*maxcount));
> > +     pad = (*maxcount&3) ? 4 - (*maxcount&3) : 0;
>
> Would xdr_pad_size() be appropriate here?

Probably. It still passes my testing, so I've changed it.


>
>
> > +     write_bytes_to_xdr_buf(xdr->buf, starting_len + *maxcount, &zero, pad);
> >       return nfs_ok;
> > }
> >
> > +static void
> > +nfsd4_encode_read_plus_segment(struct xdr_stream *xdr,
> > +                            struct read_plus_segment *segment,
> > +                            unsigned int *bufpos, unsigned int *segments)
> > +{
> > +     struct xdr_buf *buf = xdr->buf;
> > +
> > +     xdr_encode_word(buf, *bufpos, segment->type);
> > +     xdr_encode_double(buf, *bufpos + 4, segment->offset);
> > +
> > +     if (segment->type == NFS4_CONTENT_HOLE) {
> > +             xdr_encode_double(buf, *bufpos + 12, segment->length);
> > +             *bufpos += 4 + 8 + 8;
>
> Throughout, can you use multiples of XDR_UNIT instead of naked integers?

Sure.

>
>
> > +     } else {
> > +             size_t align = xdr_align_size(segment->length);
> > +             xdr_encode_word(buf, *bufpos + 12, segment->length);
> > +             if (*segments == 0)
> > +                     xdr_buf_trim_head(buf, 4);
> > +
> > +             xdr_stream_move_subsegment(xdr,
> > +                             buf->head[0].iov_len + segment->page_pos,
> > +                             *bufpos + 16, align);
> > +             *bufpos += 4 + 8 + 4 + align;
> > +     }
> > +
> > +     *segments += 1;
> > +}
> > +
> > static __be32
> > -nfsd4_encode_read_plus_hole(struct nfsd4_compoundres *resp,
> > -                         struct nfsd4_read *read,
> > -                         unsigned long *maxcount, u32 *eof)
> > +nfsd4_encode_read_plus_segments(struct nfsd4_compoundres *resp,
> > +                             struct nfsd4_read *read,
> > +                             unsigned int *segments, u32 *eof)
> > {
> > -     struct file *file = read->rd_nf->nf_file;
> > -     loff_t data_pos = vfs_llseek(file, read->rd_offset, SEEK_DATA);
> > -     loff_t f_size = i_size_read(file_inode(file));
> > -     unsigned long count;
> > -     __be32 *p;
> > +     enum data_content4 pagetype;
> > +     struct read_plus_segment segment;
> > +     struct xdr_stream *xdr = resp->xdr;
> > +     unsigned long offset = read->rd_offset;
> > +     unsigned int bufpos = xdr->buf->len;
> > +     unsigned long maxcount;
> > +     unsigned int pagelen, i = 0;
> > +     char *vpage, *p;
> > +     __be32 nfserr;
>
> Nit: try to use reverse christmas tree style where possible.

Will do.

>
>
> > -     if (data_pos == -ENXIO)
> > -             data_pos = f_size;
> > -     else if (data_pos <= read->rd_offset || (data_pos < f_size && data_pos % PAGE_SIZE))
> > -             return nfsd4_encode_read_plus_data(resp, read, maxcount, eof, &f_size);
> > -     count = data_pos - read->rd_offset;
> > -
> > -     /* Content type, offset, byte count */
> > -     p = xdr_reserve_space(resp->xdr, 4 + 8 + 8);
> > -     if (!p)
> > +     /* enough space for a HOLE segment before we switch to the pages */
> > +     if (!xdr_reserve_space(xdr, 4 + 8 + 8))
> >               return nfserr_resource;
> > +     xdr_commit_encode(xdr);
> >
> > -     *p++ = htonl(NFS4_CONTENT_HOLE);
> > -     p = xdr_encode_hyper(p, read->rd_offset);
> > -     p = xdr_encode_hyper(p, count);
> > +     maxcount = min_t(unsigned long, read->rd_length,
> > +                      (xdr->buf->buflen - xdr->buf->len));
> >
> > -     *eof = (read->rd_offset + count) >= f_size;
> > -     *maxcount = min_t(unsigned long, count, *maxcount);
> > +     nfserr = nfsd4_read_plus_readv(resp, read, &maxcount, eof);
> > +     if (nfserr)
> > +             return nfserr;
> > +
> > +     while (maxcount > 0) {
> > +             vpage = xdr_buf_nth_page_address(xdr->buf, i, &pagelen);
> > +             pagelen = min_t(unsigned int, pagelen, maxcount);
> > +             if (!vpage || pagelen == 0)
> > +                     break;
> > +             p = memchr_inv(vpage, 0, pagelen);
>
> So you have to walk every page in the payload, byte-by-byte, to
> sort out how to encode the READ_PLUS result? That's... unfortunate.
> The whole idea of making the READ payload "opaque" is that XDR
> doesn't have to touch those bytes; and then the payload is passed
> to the network layer as pointers to pages for the same reason.
>
> It might be helpful to get this reviewed by fsdevel and linux-mm
> in case there's a better approach. Hugh was attempting to point
> all zero pages at ZERO_PAGE at one point, for example, and that
> would make it very quick to detect a range of zero bytes.

I agree something like that would be better. I brought up tracking
zero pages to Matthew Wilcox during LSF, so it's at least on his
radar.

>
> Another thought is to use a POSIX byte-range lock to prevent
> changes to the range of the file you're encoding, while leaving
> the rest of the file available for other operations. That way
> you could continue to use llseek when that's supported.

Wouldn't that still deadlock when vfs_llseek() also locks the file? Or
am I misunderstanding something about byte-range locking?

>
>
> > +             pagetype = (p == NULL) ? NFS4_CONTENT_HOLE : NFS4_CONTENT_DATA;
> > +
> > +             if (pagetype != segment.type || i == 0) {
> > +                     if (likely(i > 0)) {
> > +                             nfsd4_encode_read_plus_segment(xdr, &segment,
> > +                                                           &bufpos, segments);
> > +                             offset += segment.length;
> > +                     }
> > +                     segment.type = pagetype;
> > +                     segment.offset = offset;
> > +                     segment.length = pagelen;
> > +                     segment.page_pos = i * PAGE_SIZE;
> > +             } else
> > +                     segment.length += pagelen;
> > +
> > +             maxcount -= pagelen;
> > +             i++;
> > +     }
> > +
> > +     nfsd4_encode_read_plus_segment(xdr, &segment, &bufpos, segments);
> > +     xdr_truncate_encode(xdr, bufpos);
> >       return nfs_ok;
> > }
> >
> > @@ -4813,69 +4853,31 @@ static __be32
> > nfsd4_encode_read_plus(struct nfsd4_compoundres *resp, __be32 nfserr,
> >                      struct nfsd4_read *read)
> > {
> > -     unsigned long maxcount, count;
> >       struct xdr_stream *xdr = resp->xdr;
> > -     struct file *file;
> >       int starting_len = xdr->buf->len;
> > -     int last_segment = xdr->buf->len;
> > -     int segments = 0;
> > -     __be32 *p, tmp;
> > -     bool is_data;
> > -     loff_t pos;
> > +     unsigned int segments = 0;
> >       u32 eof;
> >
> >       if (nfserr)
> >               return nfserr;
> > -     file = read->rd_nf->nf_file;
> >
> >       /* eof flag, segment count */
> > -     p = xdr_reserve_space(xdr, 4 + 4);
> > -     if (!p)
> > +     if (!xdr_reserve_space(xdr, 4 + 4))
> >               return nfserr_resource;
> >       xdr_commit_encode(xdr);
> >
> > -     maxcount = min_t(unsigned long, read->rd_length,
> > -                      (xdr->buf->buflen - xdr->buf->len));
> > -     count    = maxcount;
> > -
> > -     eof = read->rd_offset >= i_size_read(file_inode(file));
> > +     eof = read->rd_offset >= i_size_read(file_inode(read->rd_nf->nf_file));
> >       if (eof)
> >               goto out;
> >
> > -     pos = vfs_llseek(file, read->rd_offset, SEEK_HOLE);
> > -     is_data = pos > read->rd_offset;
> > -
> > -     while (count > 0 && !eof) {
> > -             maxcount = count;
> > -             if (is_data)
> > -                     nfserr = nfsd4_encode_read_plus_data(resp, read, &maxcount, &eof,
> > -                                             segments == 0 ? &pos : NULL);
> > -             else
> > -                     nfserr = nfsd4_encode_read_plus_hole(resp, read, &maxcount, &eof);
> > -             if (nfserr)
> > -                     goto out;
> > -             count -= maxcount;
> > -             read->rd_offset += maxcount;
> > -             is_data = !is_data;
> > -             last_segment = xdr->buf->len;
> > -             segments++;
> > -     }
> > -
> > +     nfserr = nfsd4_encode_read_plus_segments(resp, read, &segments, &eof);
> > out:
> > -     if (nfserr && segments == 0)
> > +     if (nfserr)
> >               xdr_truncate_encode(xdr, starting_len);
> >       else {
> > -             if (nfserr) {
> > -                     xdr_truncate_encode(xdr, last_segment);
> > -                     nfserr = nfs_ok;
> > -                     eof = 0;
> > -             }
> > -             tmp = htonl(eof);
> > -             write_bytes_to_xdr_buf(xdr->buf, starting_len,     &tmp, 4);
> > -             tmp = htonl(segments);
> > -             write_bytes_to_xdr_buf(xdr->buf, starting_len + 4, &tmp, 4);
> > +             xdr_encode_word(xdr->buf, starting_len,     eof);
> > +             xdr_encode_word(xdr->buf, starting_len + 4, segments);
> >       }
> > -
> >       return nfserr;
> > }
>
> The clean-ups in nfsd4_encode_read_plus() LGTM.

Thanks for the review!
Anna

>
>
> --
> Chuck Lever
>
>
>


* Re: [PATCH v2 3/6] SUNRPC: Introduce xdr_buf_trim_head()
  2022-07-13 19:08 ` [PATCH v2 3/6] SUNRPC: Introduce xdr_buf_trim_head() Anna Schumaker
@ 2022-07-16  6:28   ` kernel test robot
  0 siblings, 0 replies; 10+ messages in thread
From: kernel test robot @ 2022-07-16  6:28 UTC (permalink / raw)
  To: Anna Schumaker, linux-nfs, chuck.lever; +Cc: kbuild-all, anna

Hi Anna,

I love your patch! Perhaps something to improve:

[auto build test WARNING on trondmy-nfs/linux-next]
[also build test WARNING on linus/master v5.19-rc6 next-20220715]
[cannot apply to cel-2.6/for-next]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting a patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Anna-Schumaker/NFSD-Improvements-for-the-NFSv4-2-READ_PLUS-operation/20220714-030910
base:   git://git.linux-nfs.org/projects/trondmy/linux-nfs.git linux-next
reproduce: make htmldocs

If you fix the issue, kindly add the following tag where applicable
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

>> Documentation/networking/kapi:59: net/sunrpc/xdr.c:1749: WARNING: Unexpected indentation.

-- 
0-DAY CI Kernel Test Service
https://01.org/lkp

