* [PATCH 0/2] SUNRPC consumer for the bulk page allocator
@ 2021-03-23 15:09 Chuck Lever
  2021-03-23 15:09 ` [PATCH 1/2] SUNRPC: Set rq_page_end differently Chuck Lever
  2021-03-23 15:10 ` [PATCH 2/2] SUNRPC: Refresh rq_pages using a bulk page allocator Chuck Lever
  0 siblings, 2 replies; 5+ messages in thread
From: Chuck Lever @ 2021-03-23 15:09 UTC (permalink / raw)
  To: mgorman
  Cc: brouer, vbabka, akpm, hch, alexander.duyck, willy, linux-kernel,
	netdev, linux-mm, linux-nfs

This patch set and the measurements below are based on yesterday's
bulk allocator series:

git://git.kernel.org/pub/scm/linux/kernel/git/mel/linux.git mm-bulk-rebase-v5r9

The patches change SUNRPC to invoke the array-based bulk allocator
instead of alloc_page().

The micro-benchmark results are promising. I ran a mixture of 256KB
reads and writes over NFSv3. The server's kernel is built with KASAN
enabled, so the absolute numbers are exaggerated, but I believe the
comparison is still valid.

I instrumented svc_recv() to measure the latency of each call to
svc_alloc_arg() and report it via a trace point. The following
results are averages across the trace events.

Single page: 25.007 us per call over 532,571 calls
Bulk list:    6.258 us per call over 517,034 calls
Bulk array:   4.590 us per call over 517,442 calls
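
For illustration, a minimal sketch of this kind of timing, using
ktime and trace_printk() instead of the actual trace point; the
wrapper below is hypothetical, and the real measurement hooks the
svc_alloc_arg() call inside svc_recv() directly:

#include <linux/ktime.h>
#include <linux/sunrpc/svc.h>

/* Hypothetical wrapper: report the latency of one svc_alloc_arg()
 * call in nanoseconds via the trace buffer. */
static int svc_alloc_arg_timed(struct svc_rqst *rqstp)
{
	ktime_t start = ktime_get();
	int ret = svc_alloc_arg(rqstp);

	trace_printk("svc_alloc_arg: %lld ns\n",
		     ktime_to_ns(ktime_sub(ktime_get(), start)));
	return ret;
}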

For SUNRPC, the simplicity and better performance of the array-based
API make it superior to the list-based API.
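
As a rough sketch of the two call patterns being compared (the array
variant's signature matches the patches below; the list variant is
shown under its eventual mainline name, alloc_pages_bulk_list(), and
may have been named differently in the series under test):

#include <linux/gfp.h>
#include <linux/list.h>

static void bulk_api_sketch(void)
{
	LIST_HEAD(page_list);
	struct page *page_array[16] = { NULL };
	unsigned long got;

	/* List-based: pages come back chained on @page_list and must
	 * be detached one at a time before they can be used. */
	got = alloc_pages_bulk_list(GFP_KERNEL, 16, &page_list);
	pr_info("list API returned %lu pages\n", got);

	/* Array-based: only NULL slots are filled, in place, which
	 * maps directly onto rq_pages[] with no list manipulation. */
	got = alloc_pages_bulk_array(GFP_KERNEL, 16, page_array);
	pr_info("array API populated %lu entries\n", got);
}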

---

Chuck Lever (2):
      SUNRPC: Set rq_page_end differently
      SUNRPC: Refresh rq_pages using a bulk page allocator


 net/sunrpc/svc_xprt.c | 33 +++++++++++++++++----------------
 1 file changed, 17 insertions(+), 16 deletions(-)

--
Chuck Lever




* [PATCH 1/2] SUNRPC: Set rq_page_end differently
  2021-03-23 15:09 [PATCH 0/2] SUNRPC consumer for the bulk page allocator Chuck Lever
@ 2021-03-23 15:09 ` Chuck Lever
  2021-03-23 15:10 ` [PATCH 2/2] SUNRPC: Refresh rq_pages using a bulk page allocator Chuck Lever
  1 sibling, 0 replies; 5+ messages in thread
From: Chuck Lever @ 2021-03-23 15:09 UTC (permalink / raw)
  To: mgorman
  Cc: brouer, vbabka, akpm, hch, alexander.duyck, willy, linux-kernel,
	netdev, linux-mm, linux-nfs

Refactor:

I'm about to use the loop variable @i for something else.

The "i++" is a post-increment whose result is never used afterwards,
so the increment is unnecessary and can be removed.

Also note that nfsd_read_actor() was renamed nfsd_splice_actor()
by commit cf8208d0eabd ("sendfile: convert nfsd to
splice_direct_to_actor()").

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
 net/sunrpc/svc_xprt.c |    7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
index 3cdd71a8df1e..609bda97d4ae 100644
--- a/net/sunrpc/svc_xprt.c
+++ b/net/sunrpc/svc_xprt.c
@@ -642,7 +642,7 @@ static void svc_check_conn_limits(struct svc_serv *serv)
 static int svc_alloc_arg(struct svc_rqst *rqstp)
 {
 	struct svc_serv *serv = rqstp->rq_server;
-	struct xdr_buf *arg;
+	struct xdr_buf *arg = &rqstp->rq_arg;
 	int pages;
 	int i;
 
@@ -667,11 +667,10 @@ static int svc_alloc_arg(struct svc_rqst *rqstp)
 			}
 			rqstp->rq_pages[i] = p;
 		}
-	rqstp->rq_page_end = &rqstp->rq_pages[i];
-	rqstp->rq_pages[i++] = NULL; /* this might be seen in nfs_read_actor */
+	rqstp->rq_page_end = &rqstp->rq_pages[pages];
+	rqstp->rq_pages[pages] = NULL; /* this might be seen in nfsd_splice_actor() */
 
 	/* Make arg->head point to first page and arg->pages point to rest */
-	arg = &rqstp->rq_arg;
 	arg->head[0].iov_base = page_address(rqstp->rq_pages[0]);
 	arg->head[0].iov_len = PAGE_SIZE;
 	arg->pages = rqstp->rq_pages + 1;





* [PATCH 2/2] SUNRPC: Refresh rq_pages using a bulk page allocator
  2021-03-23 15:09 [PATCH 0/2] SUNRPC consumer for the bulk page allocator Chuck Lever
  2021-03-23 15:09 ` [PATCH 1/2] SUNRPC: Set rq_page_end differently Chuck Lever
@ 2021-03-23 15:10 ` Chuck Lever
  2021-03-23 19:56   ` Mel Gorman
  1 sibling, 1 reply; 5+ messages in thread
From: Chuck Lever @ 2021-03-23 15:10 UTC (permalink / raw)
  To: mgorman
  Cc: brouer, vbabka, akpm, hch, alexander.duyck, willy, linux-kernel,
	netdev, linux-mm, linux-nfs

Reduce the rate at which nfsd threads hammer on the page allocator.
This improves throughput scalability by enabling the threads to run
more independently of each other.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
 net/sunrpc/svc_xprt.c |   33 +++++++++++++++++----------------
 1 file changed, 17 insertions(+), 16 deletions(-)

diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
index 609bda97d4ae..d2792b2bf006 100644
--- a/net/sunrpc/svc_xprt.c
+++ b/net/sunrpc/svc_xprt.c
@@ -643,30 +643,31 @@ static int svc_alloc_arg(struct svc_rqst *rqstp)
 {
 	struct svc_serv *serv = rqstp->rq_server;
 	struct xdr_buf *arg = &rqstp->rq_arg;
-	int pages;
-	int i;
+	unsigned long pages, filled;
 
-	/* now allocate needed pages.  If we get a failure, sleep briefly */
 	pages = (serv->sv_max_mesg + 2 * PAGE_SIZE) >> PAGE_SHIFT;
 	if (pages > RPCSVC_MAXPAGES) {
-		pr_warn_once("svc: warning: pages=%u > RPCSVC_MAXPAGES=%lu\n",
+		pr_warn_once("svc: warning: pages=%lu > RPCSVC_MAXPAGES=%lu\n",
 			     pages, RPCSVC_MAXPAGES);
 		/* use as many pages as possible */
 		pages = RPCSVC_MAXPAGES;
 	}
-	for (i = 0; i < pages ; i++)
-		while (rqstp->rq_pages[i] == NULL) {
-			struct page *p = alloc_page(GFP_KERNEL);
-			if (!p) {
-				set_current_state(TASK_INTERRUPTIBLE);
-				if (signalled() || kthread_should_stop()) {
-					set_current_state(TASK_RUNNING);
-					return -EINTR;
-				}
-				schedule_timeout(msecs_to_jiffies(500));
-			}
-			rqstp->rq_pages[i] = p;
+
+	for (;;) {
+		filled = alloc_pages_bulk_array(GFP_KERNEL, pages,
+						rqstp->rq_pages);
+		/* We assume that if the next array element is populated,
+		 * all the following elements are as well, thus we're done. */
+		if (filled == pages || rqstp->rq_pages[filled])
+			break;
+
+		set_current_state(TASK_INTERRUPTIBLE);
+		if (signalled() || kthread_should_stop()) {
+			set_current_state(TASK_RUNNING);
+			return -EINTR;
 		}
+		schedule_timeout(msecs_to_jiffies(500));
+	}
 	rqstp->rq_page_end = &rqstp->rq_pages[pages];
 	rqstp->rq_pages[pages] = NULL; /* this might be seen in nfsd_splice_actor() */
 





* Re: [PATCH 2/2] SUNRPC: Refresh rq_pages using a bulk page allocator
  2021-03-23 15:10 ` [PATCH 2/2] SUNRPC: Refresh rq_pages using a bulk page allocator Chuck Lever
@ 2021-03-23 19:56   ` Mel Gorman
  2021-03-23 19:59     ` Chuck Lever III
  0 siblings, 1 reply; 5+ messages in thread
From: Mel Gorman @ 2021-03-23 19:56 UTC (permalink / raw)
  To: Chuck Lever
  Cc: brouer, vbabka, akpm, hch, alexander.duyck, willy, linux-kernel,
	netdev, linux-mm, linux-nfs

On Tue, Mar 23, 2021 at 11:10:05AM -0400, Chuck Lever wrote:
> Reduce the rate at which nfsd threads hammer on the page allocator.
> This improves throughput scalability by enabling the threads to run
> more independently of each other.
> 
> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>

I've picked up the series and merged the leader with the first patch
because I think the array vs list data is interesting but I did change
the patch.

> +	for (;;) {
> +		filled = alloc_pages_bulk_array(GFP_KERNEL, pages,
> +						rqstp->rq_pages);
> +		/* We assume that if the next array element is populated,
> +		 * all the following elements are as well, thus we're done. */
> +		if (filled == pages || rqstp->rq_pages[filled])
> +			break;
> +

I altered this check because the implementation now returns a useful
index. I know I had concerns about this, but while the implementation
cost is higher, the caller needs less knowledge of the alloc_pages_bulk
implementation. It might be unfortunate if new users all had to carry
their own optimisations around hole management, so let's keep it
simpler to start with.
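
A minimal model of the return-value semantics assumed here
(simplified; the real allocator batches allocations and handles
pre-populated holes rather than falling back to alloc_page()):

/* Simplified model: fill NULL slots and return how many entries are
 * consecutively populated from the start, so callers only need to
 * test "filled == pages". */
static unsigned long bulk_array_model(gfp_t gfp, unsigned long nr,
				      struct page **array)
{
	unsigned long i;

	for (i = 0; i < nr; i++) {
		if (!array[i])
			array[i] = alloc_page(gfp);
		if (!array[i])
			break;
	}
	return i;
}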

The version currently in my tree is below, but it is also available in

git://git.kernel.org/pub/scm/linux/kernel/git/mel/linux.git mm-bulk-rebase-v6r5

---8<---
SUNRPC: Refresh rq_pages using a bulk page allocator

From: Chuck Lever <chuck.lever@oracle.com>

Reduce the rate at which nfsd threads hammer on the page allocator.
This improves throughput scalability by enabling the threads to run
more independently of each other.

[mgorman: Update interpretation of alloc_pages_bulk return value]
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
 net/sunrpc/svc_xprt.c | 31 +++++++++++++++----------------
 1 file changed, 15 insertions(+), 16 deletions(-)

diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
index 609bda97d4ae..0c27c3291ca1 100644
--- a/net/sunrpc/svc_xprt.c
+++ b/net/sunrpc/svc_xprt.c
@@ -643,30 +643,29 @@ static int svc_alloc_arg(struct svc_rqst *rqstp)
 {
 	struct svc_serv *serv = rqstp->rq_server;
 	struct xdr_buf *arg = &rqstp->rq_arg;
-	int pages;
-	int i;
+	unsigned long pages, filled;
 
-	/* now allocate needed pages.  If we get a failure, sleep briefly */
 	pages = (serv->sv_max_mesg + 2 * PAGE_SIZE) >> PAGE_SHIFT;
 	if (pages > RPCSVC_MAXPAGES) {
-		pr_warn_once("svc: warning: pages=%u > RPCSVC_MAXPAGES=%lu\n",
+		pr_warn_once("svc: warning: pages=%lu > RPCSVC_MAXPAGES=%lu\n",
 			     pages, RPCSVC_MAXPAGES);
 		/* use as many pages as possible */
 		pages = RPCSVC_MAXPAGES;
 	}
-	for (i = 0; i < pages ; i++)
-		while (rqstp->rq_pages[i] == NULL) {
-			struct page *p = alloc_page(GFP_KERNEL);
-			if (!p) {
-				set_current_state(TASK_INTERRUPTIBLE);
-				if (signalled() || kthread_should_stop()) {
-					set_current_state(TASK_RUNNING);
-					return -EINTR;
-				}
-				schedule_timeout(msecs_to_jiffies(500));
-			}
-			rqstp->rq_pages[i] = p;
+
+	for (;;) {
+		filled = alloc_pages_bulk_array(GFP_KERNEL, pages,
+						rqstp->rq_pages);
+		if (filled == pages)
+			break;
+
+		set_current_state(TASK_INTERRUPTIBLE);
+		if (signalled() || kthread_should_stop()) {
+			set_current_state(TASK_RUNNING);
+			return -EINTR;
 		}
+		schedule_timeout(msecs_to_jiffies(500));
+	}
 	rqstp->rq_page_end = &rqstp->rq_pages[pages];
 	rqstp->rq_pages[pages] = NULL; /* this might be seen in nfsd_splice_actor() */
 



* Re: [PATCH 2/2] SUNRPC: Refresh rq_pages using a bulk page allocator
  2021-03-23 19:56   ` Mel Gorman
@ 2021-03-23 19:59     ` Chuck Lever III
  0 siblings, 0 replies; 5+ messages in thread
From: Chuck Lever III @ 2021-03-23 19:59 UTC (permalink / raw)
  To: Mel Gorman
  Cc: Jesper Dangaard Brouer, Vlastimil Babka, Andrew Morton,
	Christoph Hellwig, Alexander Duyck, Matthew Wilcox, LKML,
	Linux-Net, Linux-MM, Linux NFS Mailing List



> On Mar 23, 2021, at 3:56 PM, Mel Gorman <mgorman@techsingularity.net> wrote:
> 
> On Tue, Mar 23, 2021 at 11:10:05AM -0400, Chuck Lever wrote:
>> Reduce the rate at which nfsd threads hammer on the page allocator.
>> This improves throughput scalability by enabling the threads to run
>> more independently of each other.
>> 
>> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
> 
> I've picked up the series and merged the leader with the first patch
> because I think the array vs list data is interesting but I did change
> the patch.
> 
>> +	for (;;) {
>> +		filled = alloc_pages_bulk_array(GFP_KERNEL, pages,
>> +						rqstp->rq_pages);
>> +		/* We assume that if the next array element is populated,
>> +		 * all the following elements are as well, thus we're done. */
>> +		if (filled == pages || rqstp->rq_pages[filled])
>> +			break;
>> +
> 
> I altered this check because the implementation now returns a useful
> index. I know I had concerns about this, but while the implementation
> cost is higher, the caller needs less knowledge of the alloc_pages_bulk
> implementation. It might be unfortunate if new users all had to carry
> their own optimisations around hole management, so let's keep it
> simpler to start with.

Agreed! Your version below looks like what I'm testing now --
the "rq_pages[filled]" test and the comment have been removed.


> The version currently in my tree is below, but it is also available in
> 
> git://git.kernel.org/pub/scm/linux/kernel/git/mel/linux.git mm-bulk-rebase-v6r5
> 
> ---8<---
> SUNRPC: Refresh rq_pages using a bulk page allocator
> 
> From: Chuck Lever <chuck.lever@oracle.com>
> 
> Reduce the rate at which nfsd threads hammer on the page allocator.
> This improves throughput scalability by enabling the threads to run
> more independently of each other.
> 
> [mgorman: Update interpretation of alloc_pages_bulk return value]
> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
> ---
> net/sunrpc/svc_xprt.c | 31 +++++++++++++++----------------
> 1 file changed, 15 insertions(+), 16 deletions(-)
> 
> diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
> index 609bda97d4ae..0c27c3291ca1 100644
> --- a/net/sunrpc/svc_xprt.c
> +++ b/net/sunrpc/svc_xprt.c
> @@ -643,30 +643,29 @@ static int svc_alloc_arg(struct svc_rqst *rqstp)
> {
> 	struct svc_serv *serv = rqstp->rq_server;
> 	struct xdr_buf *arg = &rqstp->rq_arg;
> -	int pages;
> -	int i;
> +	unsigned long pages, filled;
> 
> -	/* now allocate needed pages.  If we get a failure, sleep briefly */
> 	pages = (serv->sv_max_mesg + 2 * PAGE_SIZE) >> PAGE_SHIFT;
> 	if (pages > RPCSVC_MAXPAGES) {
> -		pr_warn_once("svc: warning: pages=%u > RPCSVC_MAXPAGES=%lu\n",
> +		pr_warn_once("svc: warning: pages=%lu > RPCSVC_MAXPAGES=%lu\n",
> 			     pages, RPCSVC_MAXPAGES);
> 		/* use as many pages as possible */
> 		pages = RPCSVC_MAXPAGES;
> 	}
> -	for (i = 0; i < pages ; i++)
> -		while (rqstp->rq_pages[i] == NULL) {
> -			struct page *p = alloc_page(GFP_KERNEL);
> -			if (!p) {
> -				set_current_state(TASK_INTERRUPTIBLE);
> -				if (signalled() || kthread_should_stop()) {
> -					set_current_state(TASK_RUNNING);
> -					return -EINTR;
> -				}
> -				schedule_timeout(msecs_to_jiffies(500));
> -			}
> -			rqstp->rq_pages[i] = p;
> +
> +	for (;;) {
> +		filled = alloc_pages_bulk_array(GFP_KERNEL, pages,
> +						rqstp->rq_pages);
> +		if (filled == pages)
> +			break;
> +
> +		set_current_state(TASK_INTERRUPTIBLE);
> +		if (signalled() || kthread_should_stop()) {
> +			set_current_state(TASK_RUNNING);
> +			return -EINTR;
> 		}
> +		schedule_timeout(msecs_to_jiffies(500));
> +	}
> 	rqstp->rq_page_end = &rqstp->rq_pages[pages];
> 	rqstp->rq_pages[pages] = NULL; /* this might be seen in nfsd_splice_actor() */
> 

--
Chuck Lever





