* [PATCH 1/3] SUNRPC: Convert remaining GFP_NOIO and GFP_NOWAIT sites in sunrpc
@ 2019-03-04  0:19 Trond Myklebust
  2019-03-04  0:19 ` [PATCH 2/3] NFSv4.1: Bump the default callback session slot count to 16 Trond Myklebust
  0 siblings, 1 reply; 5+ messages in thread
From: Trond Myklebust @ 2019-03-04  0:19 UTC (permalink / raw)
  To: linux-nfs

Convert the remaining gfp_flags arguments in sunrpc to standard reclaiming
allocations, now that we set memalloc_nofs_save() as appropriate.
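For context, the pattern this conversion relies on looks roughly like the
sketch below (illustrative only, not code from this series): once a task
has called memalloc_nofs_save(), any nested GFP_KERNEL allocation is
implicitly restricted to GFP_NOFS semantics, so the individual call sites
no longer need the more restrictive flags.

	/* Illustrative sketch; the function and variable names are made up. */
	#include <linux/sched/mm.h>
	#include <linux/slab.h>

	static void *example_alloc_under_nofs(size_t size)
	{
		unsigned int saved;
		void *p;

		/* Mark this task so memory reclaim will not recurse into fs code. */
		saved = memalloc_nofs_save();
		/* GFP_KERNEL here behaves as GFP_NOFS until the restore below. */
		p = kzalloc(size, GFP_KERNEL);
		memalloc_nofs_restore(saved);
		return p;
	}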

Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
---
 net/sunrpc/auth_gss/auth_gss.c | 2 +-
 net/sunrpc/sched.c             | 7 ++-----
 net/sunrpc/xprtsock.c          | 4 ++--
 3 files changed, 5 insertions(+), 8 deletions(-)

diff --git a/net/sunrpc/auth_gss/auth_gss.c b/net/sunrpc/auth_gss/auth_gss.c
index c67e2ad151ae..3fd56c0c90ae 100644
--- a/net/sunrpc/auth_gss/auth_gss.c
+++ b/net/sunrpc/auth_gss/auth_gss.c
@@ -1208,7 +1208,7 @@ gss_dup_cred(struct gss_auth *gss_auth, struct gss_cred *gss_cred)
 	struct gss_cred *new;
 
 	/* Make a copy of the cred so that we can reference count it */
-	new = kzalloc(sizeof(*gss_cred), GFP_NOIO);
+	new = kzalloc(sizeof(*gss_cred), GFP_NOFS);
 	if (new) {
 		struct auth_cred acred = {
 			.cred = gss_cred->gc_base.cr_cred,
diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c
index 2168d4d9c09f..f21557213a43 100644
--- a/net/sunrpc/sched.c
+++ b/net/sunrpc/sched.c
@@ -925,16 +925,13 @@ static void rpc_async_schedule(struct work_struct *work)
  * Most requests are 'small' (under 2KiB) and can be serviced from a
  * mempool, ensuring that NFS reads and writes can always proceed,
  * and that there is good locality of reference for these buffers.
- *
- * In order to avoid memory starvation triggering more writebacks of
- * NFS requests, we avoid using GFP_KERNEL.
  */
 int rpc_malloc(struct rpc_task *task)
 {
 	struct rpc_rqst *rqst = task->tk_rqstp;
 	size_t size = rqst->rq_callsize + rqst->rq_rcvsize;
 	struct rpc_buffer *buf;
-	gfp_t gfp = GFP_NOIO | __GFP_NOWARN;
+	gfp_t gfp = GFP_NOFS;
 
 	if (RPC_IS_SWAPPER(task))
 		gfp = __GFP_MEMALLOC | GFP_NOWAIT | __GFP_NOWARN;
@@ -1015,7 +1012,7 @@ static void rpc_init_task(struct rpc_task *task, const struct rpc_task_setup *ta
 static struct rpc_task *
 rpc_alloc_task(void)
 {
-	return (struct rpc_task *)mempool_alloc(rpc_task_mempool, GFP_NOIO);
+	return (struct rpc_task *)mempool_alloc(rpc_task_mempool, GFP_NOFS);
 }
 
 /*
diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index e829036ed81f..42f45d33dc56 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -423,7 +423,7 @@ xs_read_xdr_buf(struct socket *sock, struct msghdr *msg, int flags,
 
 	want = xs_alloc_sparse_pages(buf,
 			min_t(size_t, count - offset, buf->page_len),
-			GFP_NOWAIT);
+			GFP_KERNEL);
 	if (seek < want) {
 		ret = xs_read_bvec(sock, msg, flags, buf->bvec,
 				xdr_buf_pagecount(buf),
@@ -909,7 +909,7 @@ static int xs_nospace(struct rpc_rqst *req)
 static void
 xs_stream_prepare_request(struct rpc_rqst *req)
 {
-	req->rq_task->tk_status = xdr_alloc_bvec(&req->rq_rcv_buf, GFP_NOIO);
+	req->rq_task->tk_status = xdr_alloc_bvec(&req->rq_rcv_buf, GFP_KERNEL);
 }
 
 /*
-- 
2.20.1


* [PATCH 2/3] NFSv4.1: Bump the default callback session slot count to 16
  2019-03-04  0:19 [PATCH 1/3] SUNRPC: Convert remaining GFP_NOIO and GFP_NOWAIT sites in sunrpc Trond Myklebust
@ 2019-03-04  0:19 ` Trond Myklebust
  2019-03-04  0:19   ` [PATCH 3/3] SUNRPC: Allow dynamic allocation of back channel slots Trond Myklebust
  0 siblings, 1 reply; 5+ messages in thread
From: Trond Myklebust @ 2019-03-04  0:19 UTC (permalink / raw)
  To: linux-nfs

Users can still control this value explicitly using the
max_session_cb_slots module parameter, but let's bump the default
up to 16 for now.
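
The parameter itself is wired up elsewhere in fs/nfs; as a rough sketch of
how a default like this is typically exposed as a module tunable (the
declaration below is illustrative, not the actual fs/nfs code):

	#include <linux/module.h>
	#include <linux/moduleparam.h>
	#include "nfs4session.h"	/* NFS4_DEF_CB_SLOT_TABLE_SIZE */

	/* Illustrative only; the real declaration lives elsewhere in fs/nfs. */
	static unsigned int max_session_cb_slots = NFS4_DEF_CB_SLOT_TABLE_SIZE;
	module_param(max_session_cb_slots, uint, 0644);
	MODULE_PARM_DESC(max_session_cb_slots,
			 "Maximum number of NFSv4.1 backchannel session slots");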

Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
---
 fs/nfs/nfs4session.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/nfs/nfs4session.h b/fs/nfs/nfs4session.h
index 230509b77121..b996ee23f1ba 100644
--- a/fs/nfs/nfs4session.h
+++ b/fs/nfs/nfs4session.h
@@ -10,7 +10,7 @@
 
 /* maximum number of slots to use */
 #define NFS4_DEF_SLOT_TABLE_SIZE (64U)
-#define NFS4_DEF_CB_SLOT_TABLE_SIZE (1U)
+#define NFS4_DEF_CB_SLOT_TABLE_SIZE (16U)
 #define NFS4_MAX_SLOT_TABLE (1024U)
 #define NFS4_NO_SLOT ((u32)-1)
 
-- 
2.20.1


* [PATCH 3/3] SUNRPC: Allow dynamic allocation of back channel slots
  2019-03-04  0:19 ` [PATCH 2/3] NFSv4.1: Bump the default callback session slot count to 16 Trond Myklebust
@ 2019-03-04  0:19   ` Trond Myklebust
  2019-03-04 15:26     ` Chuck Lever
  0 siblings, 1 reply; 5+ messages in thread
From: Trond Myklebust @ 2019-03-04  0:19 UTC (permalink / raw)
  To: linux-nfs

Now that the reads happen in a process context rather than a softirq,
it is safe to allocate back channel slots using a reclaiming
allocation.
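
The patch implements this with the usual pattern for a sleeping allocation
that feeds a spinlock-protected pool: search under the lock, and if nothing
suitable is found, drop the lock, allocate with GFP_KERNEL and retry with
the preallocated entry. A condensed sketch of that pattern (names and types
are made up, not the actual sunrpc code):

	#include <linux/list.h>
	#include <linux/slab.h>
	#include <linux/spinlock.h>

	struct slot {
		struct list_head link;
	};

	static struct slot *get_slot(spinlock_t *lock, struct list_head *pool)
	{
		struct slot *s, *new = NULL;

		do {
			spin_lock(lock);
			if (!list_empty(pool)) {
				s = list_first_entry(pool, struct slot, link);
				list_del(&s->link);
			} else if (new) {
				s = new;	/* hand over the preallocated entry */
				new = NULL;
			} else {
				s = NULL;
			}
			spin_unlock(lock);
			if (s || new)
				break;
			/* Safe to sleep here: process context, not softirq. */
			new = kzalloc(sizeof(*new), GFP_KERNEL);
		} while (new);

		kfree(new);	/* free an unused preallocation, if any */
		return s;
	}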

Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
---
 net/sunrpc/backchannel_rqst.c | 41 +++++++++++++++++++++--------------
 1 file changed, 25 insertions(+), 16 deletions(-)

diff --git a/net/sunrpc/backchannel_rqst.c b/net/sunrpc/backchannel_rqst.c
index b9313c15ee3a..c47d82622fd1 100644
--- a/net/sunrpc/backchannel_rqst.c
+++ b/net/sunrpc/backchannel_rqst.c
@@ -235,7 +235,8 @@ void xprt_destroy_bc(struct rpc_xprt *xprt, unsigned int max_reqs)
 		list_empty(&xprt->bc_pa_list) ? "true" : "false");
 }
 
-static struct rpc_rqst *xprt_alloc_bc_request(struct rpc_xprt *xprt, __be32 xid)
+static struct rpc_rqst *xprt_get_bc_request(struct rpc_xprt *xprt, __be32 xid,
+		struct rpc_rqst *new)
 {
 	struct rpc_rqst *req = NULL;
 
@@ -243,10 +244,9 @@ static struct rpc_rqst *xprt_alloc_bc_request(struct rpc_xprt *xprt, __be32 xid)
 	if (atomic_read(&xprt->bc_free_slots) <= 0)
 		goto not_found;
 	if (list_empty(&xprt->bc_pa_list)) {
-		req = xprt_alloc_bc_req(xprt, GFP_ATOMIC);
-		if (!req)
+		if (!new)
 			goto not_found;
-		list_add_tail(&req->rq_bc_pa_list, &xprt->bc_pa_list);
+		list_add_tail(&new->rq_bc_pa_list, &xprt->bc_pa_list);
 		xprt->bc_alloc_count++;
 	}
 	req = list_first_entry(&xprt->bc_pa_list, struct rpc_rqst,
@@ -256,8 +256,8 @@ static struct rpc_rqst *xprt_alloc_bc_request(struct rpc_xprt *xprt, __be32 xid)
 			sizeof(req->rq_private_buf));
 	req->rq_xid = xid;
 	req->rq_connect_cookie = xprt->connect_cookie;
-not_found:
 	dprintk("RPC:       backchannel req=%p\n", req);
+not_found:
 	return req;
 }
 
@@ -320,18 +320,27 @@ void xprt_free_bc_rqst(struct rpc_rqst *req)
  */
 struct rpc_rqst *xprt_lookup_bc_request(struct rpc_xprt *xprt, __be32 xid)
 {
-	struct rpc_rqst *req;
-
-	spin_lock(&xprt->bc_pa_lock);
-	list_for_each_entry(req, &xprt->bc_pa_list, rq_bc_pa_list) {
-		if (req->rq_connect_cookie != xprt->connect_cookie)
-			continue;
-		if (req->rq_xid == xid)
-			goto found;
-	}
-	req = xprt_alloc_bc_request(xprt, xid);
+	struct rpc_rqst *req, *new = NULL;
+
+	do {
+		spin_lock(&xprt->bc_pa_lock);
+		list_for_each_entry(req, &xprt->bc_pa_list, rq_bc_pa_list) {
+			if (req->rq_connect_cookie != xprt->connect_cookie)
+				continue;
+			if (req->rq_xid == xid)
+				goto found;
+		}
+		req = xprt_get_bc_request(xprt, xid, new);
 found:
-	spin_unlock(&xprt->bc_pa_lock);
+		spin_unlock(&xprt->bc_pa_lock);
+		if (new) {
+			if (req != new)
+				xprt_free_bc_rqst(new);
+			break;
+		} else if (req)
+			break;
+		new = xprt_alloc_bc_req(xprt, GFP_KERNEL);
+	} while (new);
 	return req;
 }
 
-- 
2.20.1


* Re: [PATCH 3/3] SUNRPC: Allow dynamic allocation of back channel slots
  2019-03-04  0:19   ` [PATCH 3/3] SUNRPC: Allow dynamic allocation of back channel slots Trond Myklebust
@ 2019-03-04 15:26     ` Chuck Lever
  2019-03-04 15:52       ` Trond Myklebust
  0 siblings, 1 reply; 5+ messages in thread
From: Chuck Lever @ 2019-03-04 15:26 UTC (permalink / raw)
  To: Trond Myklebust; +Cc: Linux NFS Mailing List

Hi Trond-


> On Mar 3, 2019, at 7:19 PM, Trond Myklebust <trondmy@gmail.com> wrote:
> 
> Now that the reads happen in a process context rather than a softirq,
> it is safe to allocate back channel slots using a reclaiming
> allocation.

Is this a required change for transports, or simply an optimization?
Wondering if a similar change is needed for RPC-over-RDMA.


> Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
> ---
> net/sunrpc/backchannel_rqst.c | 41 +++++++++++++++++++++--------------
> 1 file changed, 25 insertions(+), 16 deletions(-)
> 
> diff --git a/net/sunrpc/backchannel_rqst.c b/net/sunrpc/backchannel_rqst.c
> index b9313c15ee3a..c47d82622fd1 100644
> --- a/net/sunrpc/backchannel_rqst.c
> +++ b/net/sunrpc/backchannel_rqst.c
> @@ -235,7 +235,8 @@ void xprt_destroy_bc(struct rpc_xprt *xprt, unsigned int max_reqs)
> 		list_empty(&xprt->bc_pa_list) ? "true" : "false");
> }
> 
> -static struct rpc_rqst *xprt_alloc_bc_request(struct rpc_xprt *xprt, __be32 xid)
> +static struct rpc_rqst *xprt_get_bc_request(struct rpc_xprt *xprt, __be32 xid,
> +		struct rpc_rqst *new)
> {
> 	struct rpc_rqst *req = NULL;
> 
> @@ -243,10 +244,9 @@ static struct rpc_rqst *xprt_alloc_bc_request(struct rpc_xprt *xprt, __be32 xid)
> 	if (atomic_read(&xprt->bc_free_slots) <= 0)
> 		goto not_found;
> 	if (list_empty(&xprt->bc_pa_list)) {
> -		req = xprt_alloc_bc_req(xprt, GFP_ATOMIC);
> -		if (!req)
> +		if (!new)
> 			goto not_found;
> -		list_add_tail(&req->rq_bc_pa_list, &xprt->bc_pa_list);
> +		list_add_tail(&new->rq_bc_pa_list, &xprt->bc_pa_list);
> 		xprt->bc_alloc_count++;
> 	}
> 	req = list_first_entry(&xprt->bc_pa_list, struct rpc_rqst,
> @@ -256,8 +256,8 @@ static struct rpc_rqst *xprt_alloc_bc_request(struct rpc_xprt *xprt, __be32 xid)
> 			sizeof(req->rq_private_buf));
> 	req->rq_xid = xid;
> 	req->rq_connect_cookie = xprt->connect_cookie;
> -not_found:
> 	dprintk("RPC:       backchannel req=%p\n", req);
> +not_found:
> 	return req;
> }
> 
> @@ -320,18 +320,27 @@ void xprt_free_bc_rqst(struct rpc_rqst *req)
>  */
> struct rpc_rqst *xprt_lookup_bc_request(struct rpc_xprt *xprt, __be32 xid)
> {
> -	struct rpc_rqst *req;
> -
> -	spin_lock(&xprt->bc_pa_lock);
> -	list_for_each_entry(req, &xprt->bc_pa_list, rq_bc_pa_list) {
> -		if (req->rq_connect_cookie != xprt->connect_cookie)
> -			continue;
> -		if (req->rq_xid == xid)
> -			goto found;
> -	}
> -	req = xprt_alloc_bc_request(xprt, xid);
> +	struct rpc_rqst *req, *new = NULL;
> +
> +	do {
> +		spin_lock(&xprt->bc_pa_lock);
> +		list_for_each_entry(req, &xprt->bc_pa_list, rq_bc_pa_list) {
> +			if (req->rq_connect_cookie != xprt->connect_cookie)
> +				continue;
> +			if (req->rq_xid == xid)
> +				goto found;
> +		}
> +		req = xprt_get_bc_request(xprt, xid, new);
> found:
> -	spin_unlock(&xprt->bc_pa_lock);
> +		spin_unlock(&xprt->bc_pa_lock);
> +		if (new) {
> +			if (req != new)
> +				xprt_free_bc_rqst(new);
> +			break;
> +		} else if (req)
> +			break;
> +		new = xprt_alloc_bc_req(xprt, GFP_KERNEL);
> +	} while (new);
> 	return req;
> }
> 
> -- 
> 2.20.1
> 

--
Chuck Lever




* Re: [PATCH 3/3] SUNRPC: Allow dynamic allocation of back channel slots
  2019-03-04 15:26     ` Chuck Lever
@ 2019-03-04 15:52       ` Trond Myklebust
  0 siblings, 0 replies; 5+ messages in thread
From: Trond Myklebust @ 2019-03-04 15:52 UTC (permalink / raw)
  To: chuck.lever; +Cc: linux-nfs

On Mon, 2019-03-04 at 10:26 -0500, Chuck Lever wrote:
> Hi Trond-
> 
> 
> > On Mar 3, 2019, at 7:19 PM, Trond Myklebust <trondmy@gmail.com>
> > wrote:
> > 
> > Now that the reads happen in a process context rather than a
> > softirq,
> > it is safe to allocate back channel slots using a reclaiming
> > allocation.
> 
> Is this a required change for transports, or simply an optimization?
> Wondering if a similar change is needed for RPC-over-RDMA.

Now that we no longer need to be atomic for the socket receive code, I
figure we can afford the call. The alternative is that we have to drop
the connection, which is also very costly.

Cheers
  Trond

-- 
Trond Myklebust
Linux NFS client maintainer, Hammerspace
trond.myklebust@hammerspace.com


