* [PATCH] SUNRPC: Use poll() to fix up the socket requeue races
@ 2019-02-19 14:06 Trond Myklebust
From: Trond Myklebust @ 2019-02-19 14:06 UTC
  To: linux-nfs

Because we clear XPRT_SOCK_DATA_READY before reading, we can end up
with a situation where new data arrives, causing xs_data_ready() to
queue up a second receive worker job for the same socket, which then
immediately gets stuck waiting on the transport receive mutex.
The fix is to only clear XPRT_SOCK_DATA_READY once we're done reading,
and then to use poll() to check if we might need to queue up a new
job in order to deal with any new data.
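
As an illustrative sketch (not part of the patch itself), the
problematic interleaving with the old ordering looks roughly like
this; the function and flag names follow net/sunrpc/xprtsock.c:

/*
 *   receive worker                      xs_data_ready() callback
 *   --------------                      ------------------------
 *   clear_bit(XPRT_SOCK_DATA_READY)
 *   read all queued data ...
 *                                       new data arrives
 *                                       test_and_set_bit() finds the
 *                                       bit clear, sets it, and
 *                                       queues a second worker
 *   ... the read loop drains the new
 *   data as well, then drops recv_mutex
 *                                       the second worker runs, waits
 *                                       for recv_mutex, and finds
 *                                       nothing left to read
 */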

Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
---
 net/sunrpc/xprtsock.c | 17 +++++++++++++++--
 1 file changed, 15 insertions(+), 2 deletions(-)

diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index f5d7dcd9e8d9..a721c843d5d3 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -656,13 +656,25 @@ xs_read_stream(struct sock_xprt *transport, int flags)
 	return ret != 0 ? ret : -ESHUTDOWN;
 }
 
+static void xs_poll_check_readable(struct sock_xprt *transport)
+{
+	struct socket *sock = transport->sock;
+	__poll_t events;
+
+	clear_bit(XPRT_SOCK_DATA_READY, &transport->sock_state);
+	events = sock->ops->poll(NULL, sock, NULL);
+	if (!(events & (EPOLLIN | EPOLLRDNORM)) || events & EPOLLRDHUP)
+		return;
+	if (!test_and_set_bit(XPRT_SOCK_DATA_READY, &transport->sock_state))
+		queue_work(xprtiod_workqueue, &transport->recv_worker);
+}
+
 static void xs_stream_data_receive(struct sock_xprt *transport)
 {
 	size_t read = 0;
 	ssize_t ret = 0;
 
 	mutex_lock(&transport->recv_mutex);
-	clear_bit(XPRT_SOCK_DATA_READY, &transport->sock_state);
 	if (transport->sock == NULL)
 		goto out;
 	for (;;) {
@@ -672,6 +684,7 @@ static void xs_stream_data_receive(struct sock_xprt *transport)
 		read += ret;
 		cond_resched();
 	}
+	xs_poll_check_readable(transport);
 out:
 	mutex_unlock(&transport->recv_mutex);
 	trace_xs_stream_read_data(&transport->xprt, ret, read);
@@ -1362,7 +1375,6 @@ static void xs_udp_data_receive(struct sock_xprt *transport)
 	int err;
 
 	mutex_lock(&transport->recv_mutex);
-	clear_bit(XPRT_SOCK_DATA_READY, &transport->sock_state);
 	sk = transport->inet;
 	if (sk == NULL)
 		goto out;
@@ -1374,6 +1386,7 @@ static void xs_udp_data_receive(struct sock_xprt *transport)
 		consume_skb(skb);
 		cond_resched();
 	}
+	xs_poll_check_readable(transport);
 out:
 	mutex_unlock(&transport->recv_mutex);
 }
-- 
2.20.1
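
A note on the sock->ops->poll(NULL, sock, NULL) call above: passing a
NULL file and a NULL poll_table asks the protocol's poll method for
the current event mask without registering on any waitqueue, so the
call never sleeps. A minimal sketch of that readiness test, using the
same mask logic as xs_poll_check_readable() (the helper name below is
made up for illustration):

static bool xs_sock_readable_sketch(struct socket *sock)
{
	/* NULL file and NULL poll_table: report readiness, do not sleep */
	__poll_t events = sock->ops->poll(NULL, sock, NULL);

	/* data is queued and the peer has not shut down its send side */
	return (events & (EPOLLIN | EPOLLRDNORM)) && !(events & EPOLLRDHUP);
}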



* Re: [PATCH] SUNRPC: Use poll() to fix up the socket requeue races
From: Chuck Lever @ 2019-02-19 14:54 UTC
  To: Trond Myklebust; +Cc: Linux NFS Mailing List

Hi Trond-

> On Feb 19, 2019, at 9:06 AM, Trond Myklebust <trondmy@gmail.com> wrote:
> 
> Because we clear XPRT_SOCK_DATA_READY before reading, we can end up
> with a situation where new data arrives, causing xs_data_ready() to
> queue up a second receive worker job for the same socket, which then
> immediately gets stuck waiting on the transport receive mutex.
> The fix is to only clear XPRT_SOCK_DATA_READY once we're done reading,
> and then to use poll() to check if we might need to queue up a new
> job in order to deal with any new data.

Does this fix an application-visible hang, or is it merely a performance
optimization?


--
Chuck Lever





* Re: [PATCH] SUNRPC: Use poll() to fix up the socket requeue races
From: Trond Myklebust @ 2019-02-19 15:13 UTC
  To: chuck.lever; +Cc: linux-nfs

On Tue, 2019-02-19 at 09:54 -0500, Chuck Lever wrote:
> Hi Trond-
> 
> > On Feb 19, 2019, at 9:06 AM, Trond Myklebust <trondmy@gmail.com>
> > wrote:
> > 
> > Because we clear XPRT_SOCK_DATA_READY before reading, we can end up
> > with a situation where new data arrives, causing xs_data_ready() to
> > queue up a second receive worker job for the same socket, which
> > then
> > immediately gets stuck waiting on the transport receive mutex.
> > The fix is to only clear XPRT_SOCK_DATA_READY once we're done
> > reading,
> > and then to use poll() to check if we might need to queue up a new
> > job in order to deal with any new data.
> 
> Does this fix an application-visible hang, or is it merely a
> performance
> optimization?

I'm not aware of any hang associated with this behaviour. The patch is
rather intended as an optimisation to avoid having these threads block
uselessly on a mutex.

-- 
Trond Myklebust
Linux NFS client maintainer, Hammerspace
trond.myklebust@hammerspace.com




* Re: [PATCH] SUNRPC: Use poll() to fix up the socket requeue races
From: Chuck Lever @ 2019-02-19 15:20 UTC
  To: Trond Myklebust; +Cc: Linux NFS Mailing List



> On Feb 19, 2019, at 10:13 AM, Trond Myklebust <trondmy@hammerspace.com> wrote:
> 
> On Tue, 2019-02-19 at 09:54 -0500, Chuck Lever wrote:
>> Hi Trond-
>> 
>>> On Feb 19, 2019, at 9:06 AM, Trond Myklebust <trondmy@gmail.com>
>>> wrote:
>>> 
>>> Because we clear XPRT_SOCK_DATA_READY before reading, we can end up
>>> with a situation where new data arrives, causing xs_data_ready() to
>>> queue up a second receive worker job for the same socket, which
>>> then
>>> immediately gets stuck waiting on the transport receive mutex.
>>> The fix is to only clear XPRT_SOCK_DATA_READY once we're done
>>> reading,
>>> and then to use poll() to check if we might need to queue up a new
>>> job in order to deal with any new data.
>> 
>> Does this fix an application-visible hang, or is it merely a
>> performance
>> optimization?
> 
> I'm not aware of any hang associated with this behaviour. The patch is
> rather intended as an optimisation to avoid having these threads block
> uselessly on a mutex.

That was my guess; thanks, I just wanted to make certain.

--
Chuck Lever




