linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/2] Finalising swap-over-NFS patches
@ 2022-04-29  0:43 NeilBrown
  2022-04-29  0:43 ` [PATCH 1/2] MM: handle THP in swap_*page_fs() NeilBrown
  2022-04-29  0:43 ` [PATCH 2/2] NFS: rename nfs_direct_IO and use as ->swap_rw NeilBrown
  0 siblings, 2 replies; 13+ messages in thread
From: NeilBrown @ 2022-04-29  0:43 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Geert Uytterhoeven, Christoph Hellwig, Miaohe Lin, linux-nfs,
	linux-mm, linux-kernel

Hi Andrew,
Two patches for current -mm branch.

The first fixes an omission pointed out by Miaohe Lin: huge pages weren't
handled correctly.

The second changes NFS to use the new ->swap_rw.

With this, swap over NFS largely works again.  I think there is a little
more work to do in NFS/SUNRPC code to make allocations on the swap out
path behave optimally.  Any such changes can go through the NFS tree.

Thanks,
NeilBrown

---

NeilBrown (2):
      MM: handle THP in swap_*page_fs()
      NFS: rename nfs_direct_IO and use as ->swap_rw


 fs/nfs/direct.c        | 23 ++++++++++-------------
 fs/nfs/file.c          |  5 +----
 include/linux/nfs_fs.h |  2 +-
 mm/page_io.c           | 23 +++++++++++++----------
 4 files changed, 25 insertions(+), 28 deletions(-)

--
Signature


^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH 1/2] MM: handle THP in swap_*page_fs()
  2022-04-29  0:43 [PATCH 0/2] Finalising swap-over-NFS patches NeilBrown
@ 2022-04-29  0:43 ` NeilBrown
  2022-04-29  1:21   ` Andrew Morton
                     ` (2 more replies)
  2022-04-29  0:43 ` [PATCH 2/2] NFS: rename nfs_direct_IO and use as ->swap_rw NeilBrown
  1 sibling, 3 replies; 13+ messages in thread
From: NeilBrown @ 2022-04-29  0:43 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Geert Uytterhoeven, Christoph Hellwig, Miaohe Lin, linux-nfs,
	linux-mm, linux-kernel

Pages passed to swap_readpage()/swap_writepage() are not necessarily all
the same size - there may be transparent-huge-pages involved.

The BIO paths of swap_*page() handle this correctly, but the SWP_FS_OPS
path does not.

So we need to use thp_size() to find the size, not just assume
PAGE_SIZE, and we need to track the total length of the request, not
just assume it is "pages * PAGE_SIZE".

Reported-by: Miaohe Lin <linmiaohe@huawei.com>
Signed-off-by: NeilBrown <neilb@suse.de>
---
 mm/page_io.c |   23 +++++++++++++----------
 1 file changed, 13 insertions(+), 10 deletions(-)

diff --git a/mm/page_io.c b/mm/page_io.c
index c132511f521c..d636a3531cad 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -239,6 +239,7 @@ struct swap_iocb {
 	struct kiocb		iocb;
 	struct bio_vec		bvec[SWAP_CLUSTER_MAX];
 	int			pages;
+	int			len;
 };
 static mempool_t *sio_pool;
 
@@ -261,7 +262,7 @@ static void sio_write_complete(struct kiocb *iocb, long ret)
 	struct page *page = sio->bvec[0].bv_page;
 	int p;
 
-	if (ret != PAGE_SIZE * sio->pages) {
+	if (ret != sio->len) {
 		/*
 		 * In the case of swap-over-nfs, this can be a
 		 * temporary failure if the system has limited
@@ -301,7 +302,7 @@ static int swap_writepage_fs(struct page *page, struct writeback_control *wbc)
 		sio = *wbc->swap_plug;
 	if (sio) {
 		if (sio->iocb.ki_filp != swap_file ||
-		    sio->iocb.ki_pos + sio->pages * PAGE_SIZE != pos) {
+		    sio->iocb.ki_pos + sio->len != pos) {
 			swap_write_unplug(sio);
 			sio = NULL;
 		}
@@ -312,10 +313,12 @@ static int swap_writepage_fs(struct page *page, struct writeback_control *wbc)
 		sio->iocb.ki_complete = sio_write_complete;
 		sio->iocb.ki_pos = pos;
 		sio->pages = 0;
+		sio->len = 0;
 	}
 	sio->bvec[sio->pages].bv_page = page;
-	sio->bvec[sio->pages].bv_len = PAGE_SIZE;
+	sio->bvec[sio->pages].bv_len = thp_size(page);
 	sio->bvec[sio->pages].bv_offset = 0;
+	sio->len += thp_size(page);
 	sio->pages += 1;
 	if (sio->pages == ARRAY_SIZE(sio->bvec) || !wbc->swap_plug) {
 		swap_write_unplug(sio);
@@ -371,8 +374,7 @@ void swap_write_unplug(struct swap_iocb *sio)
 	struct address_space *mapping = sio->iocb.ki_filp->f_mapping;
 	int ret;
 
-	iov_iter_bvec(&from, WRITE, sio->bvec, sio->pages,
-		      PAGE_SIZE * sio->pages);
+	iov_iter_bvec(&from, WRITE, sio->bvec, sio->pages, sio->len);
 	ret = mapping->a_ops->swap_rw(&sio->iocb, &from);
 	if (ret != -EIOCBQUEUED)
 		sio_write_complete(&sio->iocb, ret);
@@ -383,7 +385,7 @@ static void sio_read_complete(struct kiocb *iocb, long ret)
 	struct swap_iocb *sio = container_of(iocb, struct swap_iocb, iocb);
 	int p;
 
-	if (ret == PAGE_SIZE * sio->pages) {
+	if (ret == sio->len) {
 		for (p = 0; p < sio->pages; p++) {
 			struct page *page = sio->bvec[p].bv_page;
 
@@ -415,7 +417,7 @@ static void swap_readpage_fs(struct page *page,
 		sio = *plug;
 	if (sio) {
 		if (sio->iocb.ki_filp != sis->swap_file ||
-		    sio->iocb.ki_pos + sio->pages * PAGE_SIZE != pos) {
+		    sio->iocb.ki_pos + sio->len != pos) {
 			swap_read_unplug(sio);
 			sio = NULL;
 		}
@@ -426,10 +428,12 @@ static void swap_readpage_fs(struct page *page,
 		sio->iocb.ki_pos = pos;
 		sio->iocb.ki_complete = sio_read_complete;
 		sio->pages = 0;
+		sio->len = 0;
 	}
 	sio->bvec[sio->pages].bv_page = page;
-	sio->bvec[sio->pages].bv_len = PAGE_SIZE;
+	sio->bvec[sio->pages].bv_len = thp_size(page);
 	sio->bvec[sio->pages].bv_offset = 0;
+	sio->len += thp_size(page);
 	sio->pages += 1;
 	if (sio->pages == ARRAY_SIZE(sio->bvec) || !plug) {
 		swap_read_unplug(sio);
@@ -521,8 +525,7 @@ void __swap_read_unplug(struct swap_iocb *sio)
 	struct address_space *mapping = sio->iocb.ki_filp->f_mapping;
 	int ret;
 
-	iov_iter_bvec(&from, READ, sio->bvec, sio->pages,
-		      PAGE_SIZE * sio->pages);
+	iov_iter_bvec(&from, READ, sio->bvec, sio->pages, sio->len);
 	ret = mapping->a_ops->swap_rw(&sio->iocb, &from);
 	if (ret != -EIOCBQUEUED)
 		sio_read_complete(&sio->iocb, ret);




* [PATCH 2/2] NFS: rename nfs_direct_IO and use as ->swap_rw
  2022-04-29  0:43 [PATCH 0/2] Finalising swap-over-NFS patches NeilBrown
  2022-04-29  0:43 ` [PATCH 1/2] MM: handle THP in swap_*page_fs() NeilBrown
@ 2022-04-29  0:43 ` NeilBrown
  2022-04-29  1:23   ` Andrew Morton
  1 sibling, 1 reply; 13+ messages in thread
From: NeilBrown @ 2022-04-29  0:43 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Geert Uytterhoeven, Christoph Hellwig, Miaohe Lin, linux-nfs,
	linux-mm, linux-kernel

nfs_direct_IO() exists to support swap I/O, but hasn't worked for a
while.  We now need a ->swap_rw function which behaves slightly
differently, returning zero for success rather than a byte count.

So modify nfs_direct_IO accordingly, rename it, and use it as the
->swap_rw function.

Tested-by: Geert Uytterhoeven <geert+renesas@glider.be> (on Renesas RSK+RZA1 with 32 MiB of SDRAM)
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: NeilBrown <neilb@suse.de>
---
 fs/nfs/direct.c        |   23 ++++++++++-------------
 fs/nfs/file.c          |    5 +----
 include/linux/nfs_fs.h |    2 +-
 3 files changed, 12 insertions(+), 18 deletions(-)

diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c
index 11c566d8769f..4eb2a8380a28 100644
--- a/fs/nfs/direct.c
+++ b/fs/nfs/direct.c
@@ -153,28 +153,25 @@ nfs_direct_count_bytes(struct nfs_direct_req *dreq,
 }
 
 /**
- * nfs_direct_IO - NFS address space operation for direct I/O
+ * nfs_swap_rw - NFS address space operation for swap I/O
  * @iocb: target I/O control block
  * @iter: I/O buffer
  *
- * The presence of this routine in the address space ops vector means
- * the NFS client supports direct I/O. However, for most direct IO, we
- * shunt off direct read and write requests before the VFS gets them,
- * so this method is only ever called for swap.
+ * Perform IO to the swap-file.  This is much like direct IO.
  */
-ssize_t nfs_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
+int nfs_swap_rw(struct kiocb *iocb, struct iov_iter *iter)
 {
-	struct inode *inode = iocb->ki_filp->f_mapping->host;
-
-	/* we only support swap file calling nfs_direct_IO */
-	if (!IS_SWAPFILE(inode))
-		return 0;
+	ssize_t ret;
 
 	VM_BUG_ON(iov_iter_count(iter) != PAGE_SIZE);
 
 	if (iov_iter_rw(iter) == READ)
-		return nfs_file_direct_read(iocb, iter, true);
-	return nfs_file_direct_write(iocb, iter, true);
+		ret = nfs_file_direct_read(iocb, iter, true);
+	else
+		ret = nfs_file_direct_write(iocb, iter, true);
+	if (ret < 0)
+		return ret;
+	return 0;
 }
 
 static void nfs_direct_release_pages(struct page **pages, unsigned int npages)
diff --git a/fs/nfs/file.c b/fs/nfs/file.c
index e1d10a3e086a..bfb4b707b07e 100644
--- a/fs/nfs/file.c
+++ b/fs/nfs/file.c
@@ -490,10 +490,6 @@ static int nfs_swap_activate(struct swap_info_struct *sis, struct file *file,
 	struct rpc_clnt *clnt = NFS_CLIENT(inode);
 	struct nfs_client *cl = NFS_SERVER(inode)->nfs_client;
 
-	if (!file->f_mapping->a_ops->swap_rw)
-		/* Cannot support swap */
-		return -EINVAL;
-
 	spin_lock(&inode->i_lock);
 	blocks = inode->i_blocks;
 	isize = inode->i_size;
@@ -550,6 +546,7 @@ const struct address_space_operations nfs_file_aops = {
 	.error_remove_page = generic_error_remove_page,
 	.swap_activate = nfs_swap_activate,
 	.swap_deactivate = nfs_swap_deactivate,
+	.swap_rw = nfs_swap_rw,
 };
 
 /*
diff --git a/include/linux/nfs_fs.h b/include/linux/nfs_fs.h
index b48b9259e02c..fd5543486a3f 100644
--- a/include/linux/nfs_fs.h
+++ b/include/linux/nfs_fs.h
@@ -507,7 +507,7 @@ static inline const struct cred *nfs_file_cred(struct file *file)
 /*
  * linux/fs/nfs/direct.c
  */
-extern ssize_t nfs_direct_IO(struct kiocb *, struct iov_iter *);
+int nfs_swap_rw(struct kiocb *iocb, struct iov_iter *iter);
 ssize_t nfs_file_direct_read(struct kiocb *iocb,
 			     struct iov_iter *iter, bool swap);
 ssize_t nfs_file_direct_write(struct kiocb *iocb,




* Re: [PATCH 1/2] MM: handle THP in swap_*page_fs()
  2022-04-29  0:43 ` [PATCH 1/2] MM: handle THP in swap_*page_fs() NeilBrown
@ 2022-04-29  1:21   ` Andrew Morton
  2022-04-29  1:57     ` NeilBrown
  2022-04-29  8:13   ` Miaohe Lin
  2022-04-29 19:04   ` Yang Shi
  2 siblings, 1 reply; 13+ messages in thread
From: Andrew Morton @ 2022-04-29  1:21 UTC (permalink / raw)
  To: NeilBrown
  Cc: Geert Uytterhoeven, Christoph Hellwig, Miaohe Lin, linux-nfs,
	linux-mm, linux-kernel

On Fri, 29 Apr 2022 10:43:34 +1000 NeilBrown <neilb@suse.de> wrote:

> Pages passed to swap_readpage()/swap_writepage() are not necessarily all
> the same size - there may be transparent-huge-pages involved.
> 
> The BIO paths of swap_*page() handle this correctly, but the SWP_FS_OPS
> path does not.
> 
> So we need to use thp_size() to find the size, not just assume
> PAGE_SIZE, and we need to track the total length of the request, not
> just assume it is "pages * PAGE_SIZE".

Cool.  I added this in the series after
mm-submit-multipage-write-for-swp_fs_ops-swap-space.patch.  I could
later squash it into that patch if you think that's more logical.



* Re: [PATCH 2/2] NFS: rename nfs_direct_IO and use as ->swap_rw
  2022-04-29  0:43 ` [PATCH 2/2] NFS: rename nfs_direct_IO and use as ->swap_rw NeilBrown
@ 2022-04-29  1:23   ` Andrew Morton
  2022-04-29  2:05     ` NeilBrown
  0 siblings, 1 reply; 13+ messages in thread
From: Andrew Morton @ 2022-04-29  1:23 UTC (permalink / raw)
  To: NeilBrown
  Cc: Geert Uytterhoeven, Christoph Hellwig, Miaohe Lin, linux-nfs,
	linux-mm, linux-kernel

On Fri, 29 Apr 2022 10:43:34 +1000 NeilBrown <neilb@suse.de> wrote:

> The nfs_direct_IO() exists to support SWAP IO, but hasn't worked for a
> while.  We now need a ->swap_rw function which behaves slightly
> differently, returning zero for success rather than a byte count.
> 
> So modify nfs_direct_IO accordingly, rename it, and use it as the
> ->swap_rw function.
> 

This one I insertion sorted into the series after
mm-introduce-swap_rw-and-use-it-for-reads-from-swp_fs_ops-swap-space.patch.
I can later fold this patch into that one if you think that's a better
presentation.


* Re: [PATCH 1/2] MM: handle THP in swap_*page_fs()
  2022-04-29  1:21   ` Andrew Morton
@ 2022-04-29  1:57     ` NeilBrown
  0 siblings, 0 replies; 13+ messages in thread
From: NeilBrown @ 2022-04-29  1:57 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Geert Uytterhoeven, Christoph Hellwig, Miaohe Lin, linux-nfs,
	linux-mm, linux-kernel

On Fri, 29 Apr 2022, Andrew Morton wrote:
> On Fri, 29 Apr 2022 10:43:34 +1000 NeilBrown <neilb@suse.de> wrote:
> 
> > Pages passed to swap_readpage()/swap_writepage() are not necessarily all
> > the same size - there may be transparent-huge-pages involved.
> > 
> > The BIO paths of swap_*page() handle this correctly, but the SWP_FS_OPS
> > path does not.
> > 
> > So we need to use thp_size() to find the size, not just assume
> > PAGE_SIZE, and we need to track the total length of the request, not
> > just assume it is "pages * PAGE_SIZE".
> 
> Cool.  I added this in the series after
> mm-submit-multipage-write-for-swp_fs_ops-swap-space.patch.  I could
> later squash it into that patch if you think that's more logical.

I think it best to keep it separate, though that position is good.
If we were to squash, some would need to go into the "submit multipage
reads" patch, and some into "submit multipage writes".  If you wanted to
do that I wouldn't object, but I don't think it is needed.

Thanks,
NeilBrown


* Re: [PATCH 2/2] NFS: rename nfs_direct_IO and use as ->swap_rw
  2022-04-29  1:23   ` Andrew Morton
@ 2022-04-29  2:05     ` NeilBrown
  0 siblings, 0 replies; 13+ messages in thread
From: NeilBrown @ 2022-04-29  2:05 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Geert Uytterhoeven, Christoph Hellwig, Miaohe Lin, linux-nfs,
	linux-mm, linux-kernel

On Fri, 29 Apr 2022, Andrew Morton wrote:
> On Fri, 29 Apr 2022 10:43:34 +1000 NeilBrown <neilb@suse.de> wrote:
> 
> > The nfs_direct_IO() exists to support SWAP IO, but hasn't worked for a
> > while.  We now need a ->swap_rw function which behaves slightly
> > differently, returning zero for success rather than a byte count.
> > 
> > So modify nfs_direct_IO accordingly, rename it, and use it as the
> > ->swap_rw function.
> > 
> 
> This one I insertion sorted into the series after
> mm-introduce-swap_rw-and-use-it-for-reads-from-swp_fs_ops-swap-space.patch.
> I can later fold this patch into that one if you think that's a better
> presentation?
> 

I'm happy for the patches to remain separate - though adjacent is good.
If they were to be merged we'd need to fix up the commit message.
At least delete:

   Future patches will restore swap-over-NFS functionality.

Thanks,
NeilBrown


* Re: [PATCH 1/2] MM: handle THP in swap_*page_fs()
  2022-04-29  0:43 ` [PATCH 1/2] MM: handle THP in swap_*page_fs() NeilBrown
  2022-04-29  1:21   ` Andrew Morton
@ 2022-04-29  8:13   ` Miaohe Lin
  2022-04-29 19:04   ` Yang Shi
  2 siblings, 0 replies; 13+ messages in thread
From: Miaohe Lin @ 2022-04-29  8:13 UTC (permalink / raw)
  To: NeilBrown, Andrew Morton
  Cc: Geert Uytterhoeven, Christoph Hellwig, linux-nfs, linux-mm, linux-kernel

On 2022/4/29 8:43, NeilBrown wrote:
> Pages passed to swap_readpage()/swap_writepage() are not necessarily all
> the same size - there may be transparent-huge-pages involved.
> 
> The BIO paths of swap_*page() handle this correctly, but the SWP_FS_OPS
> path does not.
> 
> So we need to use thp_size() to find the size, not just assume
> PAGE_SIZE, and we need to track the total length of the request, not
> just assume it is "pages * PAGE_SIZE".
> 
> Reported-by: Miaohe Lin <linmiaohe@huawei.com>
> Signed-off-by: NeilBrown <neilb@suse.de>
> ---
>  mm/page_io.c |   23 +++++++++++++----------
>  1 file changed, 13 insertions(+), 10 deletions(-)
> 
> diff --git a/mm/page_io.c b/mm/page_io.c
> index c132511f521c..d636a3531cad 100644
> --- a/mm/page_io.c
> +++ b/mm/page_io.c
> @@ -239,6 +239,7 @@ struct swap_iocb {
>  	struct kiocb		iocb;
>  	struct bio_vec		bvec[SWAP_CLUSTER_MAX];
>  	int			pages;
> +	int			len;
>  };
>  static mempool_t *sio_pool;
>  
> @@ -261,7 +262,7 @@ static void sio_write_complete(struct kiocb *iocb, long ret)

The patch looks good to me. Thanks!

But we might need to use count_swpout_vm_event() in sio_write_complete();
THP_SWPOUT should be accounted too.  And count_vm_events(PSWPOUT, sio->pages)
doesn't account the right number of pages now.  Maybe sio_read_complete()
needs a similar fix.  Or am I missing something?

Thanks!

>  	struct page *page = sio->bvec[0].bv_page;
>  	int p;
>  
> -	if (ret != PAGE_SIZE * sio->pages) {
> +	if (ret != sio->len) {
>  		/*
>  		 * In the case of swap-over-nfs, this can be a
>  		 * temporary failure if the system has limited
> @@ -301,7 +302,7 @@ static int swap_writepage_fs(struct page *page, struct writeback_control *wbc)
>  		sio = *wbc->swap_plug;
>  	if (sio) {
>  		if (sio->iocb.ki_filp != swap_file ||
> -		    sio->iocb.ki_pos + sio->pages * PAGE_SIZE != pos) {
> +		    sio->iocb.ki_pos + sio->len != pos) {
>  			swap_write_unplug(sio);
>  			sio = NULL;
>  		}
> @@ -312,10 +313,12 @@ static int swap_writepage_fs(struct page *page, struct writeback_control *wbc)
>  		sio->iocb.ki_complete = sio_write_complete;
>  		sio->iocb.ki_pos = pos;
>  		sio->pages = 0;
> +		sio->len = 0;
>  	}
>  	sio->bvec[sio->pages].bv_page = page;
> -	sio->bvec[sio->pages].bv_len = PAGE_SIZE;
> +	sio->bvec[sio->pages].bv_len = thp_size(page);
>  	sio->bvec[sio->pages].bv_offset = 0;
> +	sio->len += thp_size(page);
>  	sio->pages += 1;
>  	if (sio->pages == ARRAY_SIZE(sio->bvec) || !wbc->swap_plug) {
>  		swap_write_unplug(sio);
> @@ -371,8 +374,7 @@ void swap_write_unplug(struct swap_iocb *sio)
>  	struct address_space *mapping = sio->iocb.ki_filp->f_mapping;
>  	int ret;
>  
> -	iov_iter_bvec(&from, WRITE, sio->bvec, sio->pages,
> -		      PAGE_SIZE * sio->pages);
> +	iov_iter_bvec(&from, WRITE, sio->bvec, sio->pages, sio->len);
>  	ret = mapping->a_ops->swap_rw(&sio->iocb, &from);
>  	if (ret != -EIOCBQUEUED)
>  		sio_write_complete(&sio->iocb, ret);
> @@ -383,7 +385,7 @@ static void sio_read_complete(struct kiocb *iocb, long ret)
>  	struct swap_iocb *sio = container_of(iocb, struct swap_iocb, iocb);
>  	int p;
>  
> -	if (ret == PAGE_SIZE * sio->pages) {
> +	if (ret == sio->len) {
>  		for (p = 0; p < sio->pages; p++) {
>  			struct page *page = sio->bvec[p].bv_page;
>  
> @@ -415,7 +417,7 @@ static void swap_readpage_fs(struct page *page,
>  		sio = *plug;
>  	if (sio) {
>  		if (sio->iocb.ki_filp != sis->swap_file ||
> -		    sio->iocb.ki_pos + sio->pages * PAGE_SIZE != pos) {
> +		    sio->iocb.ki_pos + sio->len != pos) {
>  			swap_read_unplug(sio);
>  			sio = NULL;
>  		}
> @@ -426,10 +428,12 @@ static void swap_readpage_fs(struct page *page,
>  		sio->iocb.ki_pos = pos;
>  		sio->iocb.ki_complete = sio_read_complete;
>  		sio->pages = 0;
> +		sio->len = 0;
>  	}
>  	sio->bvec[sio->pages].bv_page = page;
> -	sio->bvec[sio->pages].bv_len = PAGE_SIZE;
> +	sio->bvec[sio->pages].bv_len = thp_size(page);
>  	sio->bvec[sio->pages].bv_offset = 0;
> +	sio->len += thp_size(page);
>  	sio->pages += 1;
>  	if (sio->pages == ARRAY_SIZE(sio->bvec) || !plug) {
>  		swap_read_unplug(sio);
> @@ -521,8 +525,7 @@ void __swap_read_unplug(struct swap_iocb *sio)
>  	struct address_space *mapping = sio->iocb.ki_filp->f_mapping;
>  	int ret;
>  
> -	iov_iter_bvec(&from, READ, sio->bvec, sio->pages,
> -		      PAGE_SIZE * sio->pages);
> +	iov_iter_bvec(&from, READ, sio->bvec, sio->pages, sio->len);
>  	ret = mapping->a_ops->swap_rw(&sio->iocb, &from);
>  	if (ret != -EIOCBQUEUED)
>  		sio_read_complete(&sio->iocb, ret);
> 
> 
> .
> 



* Re: [PATCH 1/2] MM: handle THP in swap_*page_fs()
  2022-04-29  0:43 ` [PATCH 1/2] MM: handle THP in swap_*page_fs() NeilBrown
  2022-04-29  1:21   ` Andrew Morton
  2022-04-29  8:13   ` Miaohe Lin
@ 2022-04-29 19:04   ` Yang Shi
  2022-05-02  4:23     ` NeilBrown
  2 siblings, 1 reply; 13+ messages in thread
From: Yang Shi @ 2022-04-29 19:04 UTC (permalink / raw)
  To: NeilBrown
  Cc: Andrew Morton, Geert Uytterhoeven, Christoph Hellwig, Miaohe Lin,
	linux-nfs, Linux MM, Linux Kernel Mailing List

On Thu, Apr 28, 2022 at 5:44 PM NeilBrown <neilb@suse.de> wrote:
>
> Pages passed to swap_readpage()/swap_writepage() are not necessarily all
> the same size - there may be transparent-huge-pages involved.
>
> The BIO paths of swap_*page() handle this correctly, but the SWP_FS_OPS
> path does not.
>
> So we need to use thp_size() to find the size, not just assume
> PAGE_SIZE, and we need to track the total length of the request, not
> just assume it is "pages * PAGE_SIZE".

Swap-over-nfs doesn't support THP swap IIUC. So SWP_FS_OPS should not
see THP at all. But I agree to remove the assumption about page size
in this path.

>
> Reported-by: Miaohe Lin <linmiaohe@huawei.com>
> Signed-off-by: NeilBrown <neilb@suse.de>
> ---
>  mm/page_io.c |   23 +++++++++++++----------
>  1 file changed, 13 insertions(+), 10 deletions(-)
>
> diff --git a/mm/page_io.c b/mm/page_io.c
> index c132511f521c..d636a3531cad 100644
> --- a/mm/page_io.c
> +++ b/mm/page_io.c
> @@ -239,6 +239,7 @@ struct swap_iocb {
>         struct kiocb            iocb;
>         struct bio_vec          bvec[SWAP_CLUSTER_MAX];
>         int                     pages;
> +       int                     len;
>  };
>  static mempool_t *sio_pool;
>
> @@ -261,7 +262,7 @@ static void sio_write_complete(struct kiocb *iocb, long ret)
>         struct page *page = sio->bvec[0].bv_page;
>         int p;
>
> -       if (ret != PAGE_SIZE * sio->pages) {
> +       if (ret != sio->len) {
>                 /*
>                  * In the case of swap-over-nfs, this can be a
>                  * temporary failure if the system has limited
> @@ -301,7 +302,7 @@ static int swap_writepage_fs(struct page *page, struct writeback_control *wbc)
>                 sio = *wbc->swap_plug;
>         if (sio) {
>                 if (sio->iocb.ki_filp != swap_file ||
> -                   sio->iocb.ki_pos + sio->pages * PAGE_SIZE != pos) {
> +                   sio->iocb.ki_pos + sio->len != pos) {
>                         swap_write_unplug(sio);
>                         sio = NULL;
>                 }
> @@ -312,10 +313,12 @@ static int swap_writepage_fs(struct page *page, struct writeback_control *wbc)
>                 sio->iocb.ki_complete = sio_write_complete;
>                 sio->iocb.ki_pos = pos;
>                 sio->pages = 0;
> +               sio->len = 0;
>         }
>         sio->bvec[sio->pages].bv_page = page;
> -       sio->bvec[sio->pages].bv_len = PAGE_SIZE;
> +       sio->bvec[sio->pages].bv_len = thp_size(page);
>         sio->bvec[sio->pages].bv_offset = 0;
> +       sio->len += thp_size(page);
>         sio->pages += 1;
>         if (sio->pages == ARRAY_SIZE(sio->bvec) || !wbc->swap_plug) {
>                 swap_write_unplug(sio);
> @@ -371,8 +374,7 @@ void swap_write_unplug(struct swap_iocb *sio)
>         struct address_space *mapping = sio->iocb.ki_filp->f_mapping;
>         int ret;
>
> -       iov_iter_bvec(&from, WRITE, sio->bvec, sio->pages,
> -                     PAGE_SIZE * sio->pages);
> +       iov_iter_bvec(&from, WRITE, sio->bvec, sio->pages, sio->len);
>         ret = mapping->a_ops->swap_rw(&sio->iocb, &from);
>         if (ret != -EIOCBQUEUED)
>                 sio_write_complete(&sio->iocb, ret);
> @@ -383,7 +385,7 @@ static void sio_read_complete(struct kiocb *iocb, long ret)
>         struct swap_iocb *sio = container_of(iocb, struct swap_iocb, iocb);
>         int p;
>
> -       if (ret == PAGE_SIZE * sio->pages) {
> +       if (ret == sio->len) {
>                 for (p = 0; p < sio->pages; p++) {
>                         struct page *page = sio->bvec[p].bv_page;
>
> @@ -415,7 +417,7 @@ static void swap_readpage_fs(struct page *page,
>                 sio = *plug;
>         if (sio) {
>                 if (sio->iocb.ki_filp != sis->swap_file ||
> -                   sio->iocb.ki_pos + sio->pages * PAGE_SIZE != pos) {
> +                   sio->iocb.ki_pos + sio->len != pos) {
>                         swap_read_unplug(sio);
>                         sio = NULL;
>                 }
> @@ -426,10 +428,12 @@ static void swap_readpage_fs(struct page *page,
>                 sio->iocb.ki_pos = pos;
>                 sio->iocb.ki_complete = sio_read_complete;
>                 sio->pages = 0;
> +               sio->len = 0;
>         }
>         sio->bvec[sio->pages].bv_page = page;
> -       sio->bvec[sio->pages].bv_len = PAGE_SIZE;
> +       sio->bvec[sio->pages].bv_len = thp_size(page);
>         sio->bvec[sio->pages].bv_offset = 0;
> +       sio->len += thp_size(page);
>         sio->pages += 1;
>         if (sio->pages == ARRAY_SIZE(sio->bvec) || !plug) {
>                 swap_read_unplug(sio);
> @@ -521,8 +525,7 @@ void __swap_read_unplug(struct swap_iocb *sio)
>         struct address_space *mapping = sio->iocb.ki_filp->f_mapping;
>         int ret;
>
> -       iov_iter_bvec(&from, READ, sio->bvec, sio->pages,
> -                     PAGE_SIZE * sio->pages);
> +       iov_iter_bvec(&from, READ, sio->bvec, sio->pages, sio->len);
>         ret = mapping->a_ops->swap_rw(&sio->iocb, &from);
>         if (ret != -EIOCBQUEUED)
>                 sio_read_complete(&sio->iocb, ret);
>
>
>


* Re: [PATCH 1/2] MM: handle THP in swap_*page_fs()
  2022-04-29 19:04   ` Yang Shi
@ 2022-05-02  4:23     ` NeilBrown
  2022-05-02 17:48       ` Yang Shi
  0 siblings, 1 reply; 13+ messages in thread
From: NeilBrown @ 2022-05-02  4:23 UTC (permalink / raw)
  To: Yang Shi
  Cc: Andrew Morton, Geert Uytterhoeven, Christoph Hellwig, Miaohe Lin,
	linux-nfs, Linux MM, Linux Kernel Mailing List

On Sat, 30 Apr 2022, Yang Shi wrote:
> On Thu, Apr 28, 2022 at 5:44 PM NeilBrown <neilb@suse.de> wrote:
> >
> > Pages passed to swap_readpage()/swap_writepage() are not necessarily all
> > the same size - there may be transparent-huge-pages involved.
> >
> > The BIO paths of swap_*page() handle this correctly, but the SWP_FS_OPS
> > path does not.
> >
> > So we need to use thp_size() to find the size, not just assume
> > PAGE_SIZE, and we need to track the total length of the request, not
> > just assume it is "pages * PAGE_SIZE".
> 
> Swap-over-nfs doesn't support THP swap IIUC. So SWP_FS_OPS should not
> see THP at all. But I agree to remove the assumption about page size
> in this path.

Can you help me understand this, please?  How would the swap code know
that swap-over-NFS doesn't support THP swap?  There is no reason that
NFS wouldn't be able to handle 2MB writes.  Even 1GB should work, though
NFS would have to split it into several smaller WRITE requests.

Thanks,
NeilBrown


> 
> >
> > Reported-by: Miaohe Lin <linmiaohe@huawei.com>
> > Signed-off-by: NeilBrown <neilb@suse.de>
> > ---
> >  mm/page_io.c |   23 +++++++++++++----------
> >  1 file changed, 13 insertions(+), 10 deletions(-)
> >
> > diff --git a/mm/page_io.c b/mm/page_io.c
> > index c132511f521c..d636a3531cad 100644
> > --- a/mm/page_io.c
> > +++ b/mm/page_io.c
> > @@ -239,6 +239,7 @@ struct swap_iocb {
> >         struct kiocb            iocb;
> >         struct bio_vec          bvec[SWAP_CLUSTER_MAX];
> >         int                     pages;
> > +       int                     len;
> >  };
> >  static mempool_t *sio_pool;
> >
> > @@ -261,7 +262,7 @@ static void sio_write_complete(struct kiocb *iocb, long ret)
> >         struct page *page = sio->bvec[0].bv_page;
> >         int p;
> >
> > -       if (ret != PAGE_SIZE * sio->pages) {
> > +       if (ret != sio->len) {
> >                 /*
> >                  * In the case of swap-over-nfs, this can be a
> >                  * temporary failure if the system has limited
> > @@ -301,7 +302,7 @@ static int swap_writepage_fs(struct page *page, struct writeback_control *wbc)
> >                 sio = *wbc->swap_plug;
> >         if (sio) {
> >                 if (sio->iocb.ki_filp != swap_file ||
> > -                   sio->iocb.ki_pos + sio->pages * PAGE_SIZE != pos) {
> > +                   sio->iocb.ki_pos + sio->len != pos) {
> >                         swap_write_unplug(sio);
> >                         sio = NULL;
> >                 }
> > @@ -312,10 +313,12 @@ static int swap_writepage_fs(struct page *page, struct writeback_control *wbc)
> >                 sio->iocb.ki_complete = sio_write_complete;
> >                 sio->iocb.ki_pos = pos;
> >                 sio->pages = 0;
> > +               sio->len = 0;
> >         }
> >         sio->bvec[sio->pages].bv_page = page;
> > -       sio->bvec[sio->pages].bv_len = PAGE_SIZE;
> > +       sio->bvec[sio->pages].bv_len = thp_size(page);
> >         sio->bvec[sio->pages].bv_offset = 0;
> > +       sio->len += thp_size(page);
> >         sio->pages += 1;
> >         if (sio->pages == ARRAY_SIZE(sio->bvec) || !wbc->swap_plug) {
> >                 swap_write_unplug(sio);
> > @@ -371,8 +374,7 @@ void swap_write_unplug(struct swap_iocb *sio)
> >         struct address_space *mapping = sio->iocb.ki_filp->f_mapping;
> >         int ret;
> >
> > -       iov_iter_bvec(&from, WRITE, sio->bvec, sio->pages,
> > -                     PAGE_SIZE * sio->pages);
> > +       iov_iter_bvec(&from, WRITE, sio->bvec, sio->pages, sio->len);
> >         ret = mapping->a_ops->swap_rw(&sio->iocb, &from);
> >         if (ret != -EIOCBQUEUED)
> >                 sio_write_complete(&sio->iocb, ret);
> > @@ -383,7 +385,7 @@ static void sio_read_complete(struct kiocb *iocb, long ret)
> >         struct swap_iocb *sio = container_of(iocb, struct swap_iocb, iocb);
> >         int p;
> >
> > -       if (ret == PAGE_SIZE * sio->pages) {
> > +       if (ret == sio->len) {
> >                 for (p = 0; p < sio->pages; p++) {
> >                         struct page *page = sio->bvec[p].bv_page;
> >
> > @@ -415,7 +417,7 @@ static void swap_readpage_fs(struct page *page,
> >                 sio = *plug;
> >         if (sio) {
> >                 if (sio->iocb.ki_filp != sis->swap_file ||
> > -                   sio->iocb.ki_pos + sio->pages * PAGE_SIZE != pos) {
> > +                   sio->iocb.ki_pos + sio->len != pos) {
> >                         swap_read_unplug(sio);
> >                         sio = NULL;
> >                 }
> > @@ -426,10 +428,12 @@ static void swap_readpage_fs(struct page *page,
> >                 sio->iocb.ki_pos = pos;
> >                 sio->iocb.ki_complete = sio_read_complete;
> >                 sio->pages = 0;
> > +               sio->len = 0;
> >         }
> >         sio->bvec[sio->pages].bv_page = page;
> > -       sio->bvec[sio->pages].bv_len = PAGE_SIZE;
> > +       sio->bvec[sio->pages].bv_len = thp_size(page);
> >         sio->bvec[sio->pages].bv_offset = 0;
> > +       sio->len += thp_size(page);
> >         sio->pages += 1;
> >         if (sio->pages == ARRAY_SIZE(sio->bvec) || !plug) {
> >                 swap_read_unplug(sio);
> > @@ -521,8 +525,7 @@ void __swap_read_unplug(struct swap_iocb *sio)
> >         struct address_space *mapping = sio->iocb.ki_filp->f_mapping;
> >         int ret;
> >
> > -       iov_iter_bvec(&from, READ, sio->bvec, sio->pages,
> > -                     PAGE_SIZE * sio->pages);
> > +       iov_iter_bvec(&from, READ, sio->bvec, sio->pages, sio->len);
> >         ret = mapping->a_ops->swap_rw(&sio->iocb, &from);
> >         if (ret != -EIOCBQUEUED)
> >                 sio_read_complete(&sio->iocb, ret);
> >
> >
> >
> 


* Re: [PATCH 1/2] MM: handle THP in swap_*page_fs()
  2022-05-02  4:23     ` NeilBrown
@ 2022-05-02 17:48       ` Yang Shi
  2022-05-04 23:41         ` NeilBrown
  0 siblings, 1 reply; 13+ messages in thread
From: Yang Shi @ 2022-05-02 17:48 UTC (permalink / raw)
  To: NeilBrown, Huang Ying
  Cc: Andrew Morton, Geert Uytterhoeven, Christoph Hellwig, Miaohe Lin,
	linux-nfs, Linux MM, Linux Kernel Mailing List

On Sun, May 1, 2022 at 9:23 PM NeilBrown <neilb@suse.de> wrote:
>
> On Sat, 30 Apr 2022, Yang Shi wrote:
> > On Thu, Apr 28, 2022 at 5:44 PM NeilBrown <neilb@suse.de> wrote:
> > >
> > > Pages passed to swap_readpage()/swap_writepage() are not necessarily all
> > > the same size - there may be transparent-huge-pages involved.
> > >
> > > The BIO paths of swap_*page() handle this correctly, but the SWP_FS_OPS
> > > path does not.
> > >
> > > So we need to use thp_size() to find the size, not just assume
> > > PAGE_SIZE, and we need to track the total length of the request, not
> > > just assume it is "pages * PAGE_SIZE".
> >
> > Swap-over-nfs doesn't support THP swap IIUC. So SWP_FS_OPS should not
> > see THP at all. But I agree to remove the assumption about page size
> > in this path.
>
> Can you help me understand this please.  How would the swap code know
> that swap-over-NFS doesn't support THP swap?  There is no reason that
> NFS wouldn't be able to handle 2MB writes.  Even 1GB should work though
> NFS would have to split into several smaller WRITE requests.

AFAICT, THP swap is only supported on non-rotate block devices, for
example, SSD, PMEM, etc. IIRC, the swap device has to support the
cluster in order to swap THP. The cluster is only supported by
non-rotate block devices.

Looped Ying in, who is the author of THP swap.

>
> Thanks,
> NeilBrown
>
>
> >
> > >
> > > Reported-by: Miaohe Lin <linmiaohe@huawei.com>
> > > Signed-off-by: NeilBrown <neilb@suse.de>
> > > ---
> > >  mm/page_io.c |   23 +++++++++++++----------
> > >  1 file changed, 13 insertions(+), 10 deletions(-)
> > >
> > > diff --git a/mm/page_io.c b/mm/page_io.c
> > > index c132511f521c..d636a3531cad 100644
> > > --- a/mm/page_io.c
> > > +++ b/mm/page_io.c
> > > @@ -239,6 +239,7 @@ struct swap_iocb {
> > >         struct kiocb            iocb;
> > >         struct bio_vec          bvec[SWAP_CLUSTER_MAX];
> > >         int                     pages;
> > > +       int                     len;
> > >  };
> > >  static mempool_t *sio_pool;
> > >
> > > @@ -261,7 +262,7 @@ static void sio_write_complete(struct kiocb *iocb, long ret)
> > >         struct page *page = sio->bvec[0].bv_page;
> > >         int p;
> > >
> > > -       if (ret != PAGE_SIZE * sio->pages) {
> > > +       if (ret != sio->len) {
> > >                 /*
> > >                  * In the case of swap-over-nfs, this can be a
> > >                  * temporary failure if the system has limited
> > > @@ -301,7 +302,7 @@ static int swap_writepage_fs(struct page *page, struct writeback_control *wbc)
> > >                 sio = *wbc->swap_plug;
> > >         if (sio) {
> > >                 if (sio->iocb.ki_filp != swap_file ||
> > > -                   sio->iocb.ki_pos + sio->pages * PAGE_SIZE != pos) {
> > > +                   sio->iocb.ki_pos + sio->len != pos) {
> > >                         swap_write_unplug(sio);
> > >                         sio = NULL;
> > >                 }
> > > @@ -312,10 +313,12 @@ static int swap_writepage_fs(struct page *page, struct writeback_control *wbc)
> > >                 sio->iocb.ki_complete = sio_write_complete;
> > >                 sio->iocb.ki_pos = pos;
> > >                 sio->pages = 0;
> > > +               sio->len = 0;
> > >         }
> > >         sio->bvec[sio->pages].bv_page = page;
> > > -       sio->bvec[sio->pages].bv_len = PAGE_SIZE;
> > > +       sio->bvec[sio->pages].bv_len = thp_size(page);
> > >         sio->bvec[sio->pages].bv_offset = 0;
> > > +       sio->len += thp_size(page);
> > >         sio->pages += 1;
> > >         if (sio->pages == ARRAY_SIZE(sio->bvec) || !wbc->swap_plug) {
> > >                 swap_write_unplug(sio);
> > > @@ -371,8 +374,7 @@ void swap_write_unplug(struct swap_iocb *sio)
> > >         struct address_space *mapping = sio->iocb.ki_filp->f_mapping;
> > >         int ret;
> > >
> > > -       iov_iter_bvec(&from, WRITE, sio->bvec, sio->pages,
> > > -                     PAGE_SIZE * sio->pages);
> > > +       iov_iter_bvec(&from, WRITE, sio->bvec, sio->pages, sio->len);
> > >         ret = mapping->a_ops->swap_rw(&sio->iocb, &from);
> > >         if (ret != -EIOCBQUEUED)
> > >                 sio_write_complete(&sio->iocb, ret);
> > > @@ -383,7 +385,7 @@ static void sio_read_complete(struct kiocb *iocb, long ret)
> > >         struct swap_iocb *sio = container_of(iocb, struct swap_iocb, iocb);
> > >         int p;
> > >
> > > -       if (ret == PAGE_SIZE * sio->pages) {
> > > +       if (ret == sio->len) {
> > >                 for (p = 0; p < sio->pages; p++) {
> > >                         struct page *page = sio->bvec[p].bv_page;
> > >
> > > @@ -415,7 +417,7 @@ static void swap_readpage_fs(struct page *page,
> > >                 sio = *plug;
> > >         if (sio) {
> > >                 if (sio->iocb.ki_filp != sis->swap_file ||
> > > -                   sio->iocb.ki_pos + sio->pages * PAGE_SIZE != pos) {
> > > +                   sio->iocb.ki_pos + sio->len != pos) {
> > >                         swap_read_unplug(sio);
> > >                         sio = NULL;
> > >                 }
> > > @@ -426,10 +428,12 @@ static void swap_readpage_fs(struct page *page,
> > >                 sio->iocb.ki_pos = pos;
> > >                 sio->iocb.ki_complete = sio_read_complete;
> > >                 sio->pages = 0;
> > > +               sio->len = 0;
> > >         }
> > >         sio->bvec[sio->pages].bv_page = page;
> > > -       sio->bvec[sio->pages].bv_len = PAGE_SIZE;
> > > +       sio->bvec[sio->pages].bv_len = thp_size(page);
> > >         sio->bvec[sio->pages].bv_offset = 0;
> > > +       sio->len += thp_size(page);
> > >         sio->pages += 1;
> > >         if (sio->pages == ARRAY_SIZE(sio->bvec) || !plug) {
> > >                 swap_read_unplug(sio);
> > > @@ -521,8 +525,7 @@ void __swap_read_unplug(struct swap_iocb *sio)
> > >         struct address_space *mapping = sio->iocb.ki_filp->f_mapping;
> > >         int ret;
> > >
> > > -       iov_iter_bvec(&from, READ, sio->bvec, sio->pages,
> > > -                     PAGE_SIZE * sio->pages);
> > > +       iov_iter_bvec(&from, READ, sio->bvec, sio->pages, sio->len);
> > >         ret = mapping->a_ops->swap_rw(&sio->iocb, &from);
> > >         if (ret != -EIOCBQUEUED)
> > >                 sio_read_complete(&sio->iocb, ret);
> > >
> > >
> > >
> >


* Re: [PATCH 1/2] MM: handle THP in swap_*page_fs()
  2022-05-02 17:48       ` Yang Shi
@ 2022-05-04 23:41         ` NeilBrown
  2022-05-06  2:56           ` ying.huang
  0 siblings, 1 reply; 13+ messages in thread
From: NeilBrown @ 2022-05-04 23:41 UTC (permalink / raw)
  To: Yang Shi
  Cc: Huang Ying, Andrew Morton, Geert Uytterhoeven, Christoph Hellwig,
	Miaohe Lin, linux-nfs, Linux MM, Linux Kernel Mailing List

On Tue, 03 May 2022, Yang Shi wrote:
> On Sun, May 1, 2022 at 9:23 PM NeilBrown <neilb@suse.de> wrote:
> >
> > On Sat, 30 Apr 2022, Yang Shi wrote:
> > > On Thu, Apr 28, 2022 at 5:44 PM NeilBrown <neilb@suse.de> wrote:
> > > >
> > > > Pages passed to swap_readpage()/swap_writepage() are not necessarily all
> > > > the same size - there may be transparent-huge-pages involved.
> > > >
> > > > The BIO paths of swap_*page() handle this correctly, but the SWP_FS_OPS
> > > > path does not.
> > > >
> > > > So we need to use thp_size() to find the size, not just assume
> > > > PAGE_SIZE, and we need to track the total length of the request, not
> > > > just assume it is "pages * PAGE_SIZE".
> > >
> > > Swap-over-nfs doesn't support THP swap IIUC. So SWP_FS_OPS should not
> > > see THP at all. But I agree to remove the assumption about page size
> > > in this path.
> >
> > Can you help me understand this please.  How would the swap code know
> > that swap-over-NFS doesn't support THP swap?  There is no reason that
> > NFS wouldn't be able to handle 2MB writes.  Even 1GB should work though
> > NFS would have to split into several smaller WRITE requests.
> 
> AFAICT, THP swap is only supported on non-rotate block devices, for
> example, SSD, PMEM, etc. IIRC, the swap device has to support the
> cluster in order to swap THP. The cluster is only supported by
> non-rotate block devices.
> 
> Looped Ying in, who is the author of THP swap.

I hunted around the code and found that THP swap only happens if a
'cluster_info' is allocated, and that only happens when
	if (p->bdev && bdev_nonrot(p->bdev)) {
is true in the swapon syscall.

I guess "nonrot" is being used as a synonym for "low latency"...
So even if NFS was low-latency it couldn't benefit from THP swap.

So as you say it is not currently possible for THP pages to be sent to
NFS for swapout.  It makes sense to prepare for it though I think - if
only so that the code is more consistent and less confusing.

Thanks,
NeilBrown


* Re: [PATCH 1/2] MM: handle THP in swap_*page_fs()
  2022-05-04 23:41         ` NeilBrown
@ 2022-05-06  2:56           ` ying.huang
  0 siblings, 0 replies; 13+ messages in thread
From: ying.huang @ 2022-05-06  2:56 UTC (permalink / raw)
  To: NeilBrown, Yang Shi
  Cc: Andrew Morton, Geert Uytterhoeven, Christoph Hellwig, Miaohe Lin,
	linux-nfs, Linux MM, Linux Kernel Mailing List

On Thu, 2022-05-05 at 09:41 +1000, NeilBrown wrote:
> On Tue, 03 May 2022, Yang Shi wrote:
> > On Sun, May 1, 2022 at 9:23 PM NeilBrown <neilb@suse.de> wrote:
> > > 
> > > On Sat, 30 Apr 2022, Yang Shi wrote:
> > > > On Thu, Apr 28, 2022 at 5:44 PM NeilBrown <neilb@suse.de> wrote:
> > > > > 
> > > > > Pages passed to swap_readpage()/swap_writepage() are not necessarily all
> > > > > the same size - there may be transparent-huge-pages involved.
> > > > > 
> > > > > The BIO paths of swap_*page() handle this correctly, but the SWP_FS_OPS
> > > > > path does not.
> > > > > 
> > > > > So we need to use thp_size() to find the size, not just assume
> > > > > PAGE_SIZE, and we need to track the total length of the request, not
> > > > > just assume it is "pages * PAGE_SIZE".
> > > > 
> > > > Swap-over-nfs doesn't support THP swap IIUC. So SWP_FS_OPS should not
> > > > see THP at all. But I agree to remove the assumption about page size
> > > > in this path.
> > > 
> > > Can you help me understand this please.  How would the swap code know
> > > that swap-over-NFS doesn't support THP swap?  There is no reason that
> > > NFS wouldn't be able to handle 2MB writes.  Even 1GB should work though
> > > NFS would have to split into several smaller WRITE requests.
> > 
> > AFAICT, THP swap is only supported on non-rotate block devices, for
> > example, SSD, PMEM, etc. IIRC, the swap device has to support the
> > cluster in order to swap THP. The cluster is only supported by
> > non-rotate block devices.
> > 
> > Looped Ying in, who is the author of THP swap.
> 
> I hunted around the code and found that THP swap only happens if a
> 'cluster_info' is allocated, and that only happens if 
> 	if (p->bdev && bdev_nonrot(p->bdev)) {
> in the swapon syscall.
> 

And in get_swap_pages(), the cluster is only allocated for block
devices.

		if (size == SWAPFILE_CLUSTER) {
			if (si->flags & SWP_BLKDEV)
				n_ret = swap_alloc_cluster(si, swp_entries);
		} else
			n_ret = scan_swap_map_slots(si, SWAP_HAS_CACHE,
						    n_goal, swp_entries);

We may remove this restriction in the future if someone can show the
benefit.

Best Regards,
Huang, Ying

> I guess "nonrot" is being used as a synonym for "low latency"...
> So even if NFS was low-latency it couldn't benefit from THP swap.
> 
> So as you say it is not currently possible for THP pages to be sent to
> NFS for swapout.  It makes sense to prepare for it though I think - if
> only so that the code is more consistent and less confusing.
> 
> Thanks,
> NeilBrown





Thread overview: 13+ messages
2022-04-29  0:43 [PATCH 0/2] Finalising swap-over-NFS patches NeilBrown
2022-04-29  0:43 ` [PATCH 1/2] MM: handle THP in swap_*page_fs() NeilBrown
2022-04-29  1:21   ` Andrew Morton
2022-04-29  1:57     ` NeilBrown
2022-04-29  8:13   ` Miaohe Lin
2022-04-29 19:04   ` Yang Shi
2022-05-02  4:23     ` NeilBrown
2022-05-02 17:48       ` Yang Shi
2022-05-04 23:41         ` NeilBrown
2022-05-06  2:56           ` ying.huang
2022-04-29  0:43 ` [PATCH 2/2] NFS: rename nfs_direct_IO and use as ->swap_rw NeilBrown
2022-04-29  1:23   ` Andrew Morton
2022-04-29  2:05     ` NeilBrown
