linux-kernel.vger.kernel.org archive mirror
* [PATCH] SUNRPC: Fix a race in xs_reset_transport
@ 2015-09-15 15:49 Suzuki K. Poulose
  2015-09-15 18:52 ` Jeff Layton
  0 siblings, 1 reply; 16+ messages in thread
From: Suzuki K. Poulose @ 2015-09-15 15:49 UTC (permalink / raw)
  To: Trond Myklebust
  Cc: Anna Schumaker, J. Bruce Fields, Jeff Layton, David S. Miller,
	linux-nfs, linux-kernel, Suzuki K. Poulose

From: "Suzuki K. Poulose" <suzuki.poulose@arm.com>

Encountered the following BUG() with 4.3-rc1 on a fast model
for arm64 with NFS root filesystem.

------------[ cut here ]------------
kernel BUG at fs/inode.c:1493!

Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
Modules linked in:
CPU: 2 PID: 1 Comm: swapper/0 Not tainted 4.3.0-rc1+ #855
Hardware name: FVP Base (DT)
task: ffffc000760b0000 ti: ffffc00076070000 task.ti: ffffc00076070000
PC is at iput+0x144/0x170
LR is at sock_release+0xbc/0xdc
pc : [<ffffc000001b4920>] lr : [<ffffc000004d1974>] pstate: 40000045
sp : ffffc00076073790
x29: ffffc00076073790 x28: ffffc00076073b40
x27: 00000000000003e8 x26: ffffc00076955000
x25: 000000000000000c x24: ffffc00076637200
x23: ffffc00076073930 x22: ffffc000769b8180
x21: ffffc000740500a8 x20: ffffc00074050158
x19: ffffc00074050030 x18: 000000009fcef6bf
x17: 00000000593e3df5 x16: 00000000b597f71d
x15: 00000000e2f9d3f6 x14: 0ffffffffffffffe
x13: 0000000000000020 x12: 0101010101010101
x11: 00000000000001c9 x10: 0000000000000750
x9 : ffffc00076073670 x8 : ffffc000760b07b0
x7 : 0000000000000001 x6 : 0000000000000001
x5 : 0000000000000000 x4 : 00000000ffffffff
x3 : 0000000000000000 x2 : ffffffffffffffff
x1 : ffffc00076070000 x0 : 0000000000000060

Process swapper/0 (pid: 1, stack limit = 0xffffc00076070020)
Stack: (0xffffc00076073790 to 0xffffc00076074000)

 [ stack contents stripped ]

Call trace:
[<ffffc000001b4920>] iput+0x144/0x170
[<ffffc000004d1970>] sock_release+0xb8/0xdc
[<ffffc00000578df0>] xs_reset_transport+0x8c/0xac
[<ffffc00000578e60>] xs_close+0x50/0x6c
[<ffffc00000578e9c>] xs_destroy+0x20/0x5c
[<ffffc00000575f70>] xprt_destroy+0x68/0x8c
[<ffffc0000057777c>] xprt_put+0x24/0x30
[<ffffc000005726c4>] rpc_free_client+0x78/0xd8
[<ffffc0000057288c>] rpc_release_client+0x94/0xec
[<ffffc00000572aac>] rpc_shutdown_client+0x58/0x118
[<ffffc00000278588>] nfs_mount+0x100/0x234
[<ffffc0000026cc88>] nfs_request_mount+0xa8/0x12c
[<ffffc0000026e564>] nfs_try_mount+0x54/0x2b4
[<ffffc0000026f140>] nfs_fs_mount+0x5cc/0xac0
[<ffffc0000019f1a0>] mount_fs+0x38/0x158
[<ffffc000001b81a8>] vfs_kern_mount+0x48/0x11c
[<ffffc000001bb390>] do_mount+0x208/0xc04
[<ffffc000001bc0b0>] SyS_mount+0x78/0xd0
[<ffffc000007f0fa8>] mount_root+0x80/0x148
[<ffffc000007f11a8>] prepare_namespace+0x138/0x184
[<ffffc000007f0b20>] kernel_init_freeable+0x1cc/0x1f4
[<ffffc000005a2914>] kernel_init+0xc/0xd8
Code: b5fffc00 17ffffed d4210000 17ffffd7 (d4210000)
---[ end trace 02951451f1831f54 ]---

With rpc_debug enabled, here is the log:

RPC:       shutting down mount client for your.nfs.server
RPC:       rpc_release_client(ffffc00076637800)
RPC:       destroying UNIX authenticator ffffc000008f48c8
RPC:       destroying mount client for your.nfs.server
RPC:       destroying transport ffffc00076226000
RPC:       xs_destroy xprt ffffc00076226000
RPC:       xs_close xprt ffffc00076226000
RPC:       xs_tcp_state_change client ffffc00076226000...
RPC:       state 4 conn 1 dead 0 zapped 1 sk_shutdown 3
RPC:       xs_tcp_state_change client ffffc00076226000...
RPC:       state 5 conn 0 dead 0 zapped 1 sk_shutdown 3
RPC:       xs_tcp_state_change client ffffc00076226000...
RPC:       state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
RPC:       disconnected transport ffffc00076226000
RPC:       xs_tcp_state_change client ffffc00076226000...
RPC:       state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
RPC:       disconnected transport ffffc00076226000
RPC:       xs_tcp_data_ready...
RPC:       xs_tcp_state_change client ffffc00076226000...
RPC:       state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
RPC:       disconnected transport ffffc00076226000
RPC:       wake_up_first(ffffc00076226170 "xprt_sending")

So it looks like, just before we take the callback lock in
xs_reset_transport(), a few of the socket callbacks got through and
issued the reset before we could take it. We then end up repeating the
cleanup, hitting the above BUG() due to the repeated sock_release().

This patch fixes the race by confirming that somebody else hasn't performed
the reset while we were waiting for the lock. Also, kernel_sock_shutdown()
is performed only if the sock is non-NULL to avoid a possible race.

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
---
 net/sunrpc/xprtsock.c |    9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index 7be90bc..6f4789d 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -822,9 +822,16 @@ static void xs_reset_transport(struct sock_xprt *transport)
 	if (atomic_read(&transport->xprt.swapper))
 		sk_clear_memalloc(sk);
 
-	kernel_sock_shutdown(sock, SHUT_RDWR);
+	if (sock)
+		kernel_sock_shutdown(sock, SHUT_RDWR);
 
 	write_lock_bh(&sk->sk_callback_lock);
+	/* Check someone has already done the job, while we were waiting */
+	if (!transport->inet) {
+		write_unlock_bh(&sk->sk_callback_lock);
+		return;
+	}
+
 	transport->inet = NULL;
 	transport->sock = NULL;
 
-- 
1.7.9.5

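For readers following the thread, a rough sketch of the interleaving described
in the commit message above (illustrative only; reconstructed from the call
trace and the rpc_debug log, not from the actual code paths):

    /*
     *  xs_destroy() path                      work triggered via the socket callbacks
     *  -----------------                      ---------------------------------------
     *  xs_close()
     *    xs_reset_transport()
     *      kernel_sock_shutdown()  ----->     state-change callbacks fire, another
     *                                         xs_reset_transport() runs to completion
     *                                         and releases the socket (1st sock_release)
     *      write_lock_bh(&sk->sk_callback_lock)
     *      ...cleanup repeated...
     *      sock_release()                     2nd release -> BUG() at iput()
     */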


* Re: [PATCH] SUNRPC: Fix a race in xs_reset_transport
  2015-09-15 15:49 [PATCH] SUNRPC: Fix a race in xs_reset_transport Suzuki K. Poulose
@ 2015-09-15 18:52 ` Jeff Layton
  2015-09-16  8:08   ` Suzuki K. Poulose
                     ` (2 more replies)
  0 siblings, 3 replies; 16+ messages in thread
From: Jeff Layton @ 2015-09-15 18:52 UTC (permalink / raw)
  To: Suzuki K. Poulose
  Cc: Trond Myklebust, Anna Schumaker, J. Bruce Fields,
	David S. Miller, linux-nfs, linux-kernel

On Tue, 15 Sep 2015 16:49:23 +0100
"Suzuki K. Poulose" <suzuki.poulose@arm.com> wrote:

> From: "Suzuki K. Poulose" <suzuki.poulose@arm.com>
> 
> Encountered the following BUG() with 4.3-rc1 on a fast model
> for arm64 with NFS root filesystem.
> 
> ------------[ cut here ]------------
> kernel BUG at fs/inode.c:1493!
> 
> Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
> Modules linked in:
> CPU: 2 PID: 1 Comm: swapper/0 Not tainted 4.3.0-rc1+ #855
> Hardware name: FVP Base (DT)
> task: ffffc000760b0000 ti: ffffc00076070000 task.ti: ffffc00076070000
> PC is at iput+0x144/0x170
> LR is at sock_release+0xbc/0xdc
> pc : [<ffffc000001b4920>] lr : [<ffffc000004d1974>] pstate: 40000045
> sp : ffffc00076073790
> x29: ffffc00076073790 x28: ffffc00076073b40
> x27: 00000000000003e8 x26: ffffc00076955000
> x25: 000000000000000c x24: ffffc00076637200
> x23: ffffc00076073930 x22: ffffc000769b8180
> x21: ffffc000740500a8 x20: ffffc00074050158
> x19: ffffc00074050030 x18: 000000009fcef6bf
> x17: 00000000593e3df5 x16: 00000000b597f71d
> x15: 00000000e2f9d3f6 x14: 0ffffffffffffffe
> x13: 0000000000000020 x12: 0101010101010101
> x11: 00000000000001c9 x10: 0000000000000750
> x9 : ffffc00076073670 x8 : ffffc000760b07b0
> x7 : 0000000000000001 x6 : 0000000000000001
> x5 : 0000000000000000 x4 : 00000000ffffffff
> x3 : 0000000000000000 x2 : ffffffffffffffff
> x1 : ffffc00076070000 x0 : 0000000000000060
> 
> Process swapper/0 (pid: 1, stack limit = 0xffffc00076070020)
> Stack: (0xffffc00076073790 to 0xffffc00076074000)
> 
>  [ stack contents stripped ]
> 
> Call trace:
> [<ffffc000001b4920>] iput+0x144/0x170
> [<ffffc000004d1970>] sock_release+0xb8/0xdc
> [<ffffc00000578df0>] xs_reset_transport+0x8c/0xac
> [<ffffc00000578e60>] xs_close+0x50/0x6c
> [<ffffc00000578e9c>] xs_destroy+0x20/0x5c
> [<ffffc00000575f70>] xprt_destroy+0x68/0x8c
> [<ffffc0000057777c>] xprt_put+0x24/0x30
> [<ffffc000005726c4>] rpc_free_client+0x78/0xd8
> [<ffffc0000057288c>] rpc_release_client+0x94/0xec
> [<ffffc00000572aac>] rpc_shutdown_client+0x58/0x118
> [<ffffc00000278588>] nfs_mount+0x100/0x234
> [<ffffc0000026cc88>] nfs_request_mount+0xa8/0x12c
> [<ffffc0000026e564>] nfs_try_mount+0x54/0x2b4
> [<ffffc0000026f140>] nfs_fs_mount+0x5cc/0xac0
> [<ffffc0000019f1a0>] mount_fs+0x38/0x158
> [<ffffc000001b81a8>] vfs_kern_mount+0x48/0x11c
> [<ffffc000001bb390>] do_mount+0x208/0xc04
> [<ffffc000001bc0b0>] SyS_mount+0x78/0xd0
> [<ffffc000007f0fa8>] mount_root+0x80/0x148
> [<ffffc000007f11a8>] prepare_namespace+0x138/0x184
> [<ffffc000007f0b20>] kernel_init_freeable+0x1cc/0x1f4
> [<ffffc000005a2914>] kernel_init+0xc/0xd8
> Code: b5fffc00 17ffffed d4210000 17ffffd7 (d4210000)
> ---[ end trace 02951451f1831f54 ]---
> 
> With rpc_debug enabled here is the log :
> 
> RPC:       shutting down mount client for your.nfs.server
> RPC:       rpc_release_client(ffffc00076637800)
> RPC:       destroying UNIX authenticator ffffc000008f48c8
> RPC:       destroying mount client for your.nfs.server
> RPC:       destroying transport ffffc00076226000
> RPC:       xs_destroy xprt ffffc00076226000
> RPC:       xs_close xprt ffffc00076226000
> RPC:       xs_tcp_state_change client ffffc00076226000...
> RPC:       state 4 conn 1 dead 0 zapped 1 sk_shutdown 3
> RPC:       xs_tcp_state_change client ffffc00076226000...
> RPC:       state 5 conn 0 dead 0 zapped 1 sk_shutdown 3
> RPC:       xs_tcp_state_change client ffffc00076226000...
> RPC:       state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
> RPC:       disconnected transport ffffc00076226000
> RPC:       xs_tcp_state_change client ffffc00076226000...
> RPC:       state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
> RPC:       disconnected transport ffffc00076226000
> RPC:       xs_tcp_data_ready...
> RPC:       xs_tcp_state_change client ffffc00076226000...
> RPC:       state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
> RPC:       disconnected transport ffffc00076226000
> RPC:       wake_up_first(ffffc00076226170 "xprt_sending")
> 
> So it looks like just before we lock the callbacks in xs_reset_transport,
> a few of the callbacks got through and issued the reset before we could
> lock it. And we end up repeating the cleanups, ending up in the above
> BUG() due to multiple sock_release().
> 
> This patch fixes the race by confirming that somebody else hasn't performed
> the reset while we were waiting for the lock. Also, kernel_sock_shutdown()
> is performed only if the sock is non-NULL to avoid a possible race.
> 
> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
> ---
>  net/sunrpc/xprtsock.c |    9 ++++++++-
>  1 file changed, 8 insertions(+), 1 deletion(-)
> 
> diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
> index 7be90bc..6f4789d 100644
> --- a/net/sunrpc/xprtsock.c
> +++ b/net/sunrpc/xprtsock.c
> @@ -822,9 +822,16 @@ static void xs_reset_transport(struct sock_xprt *transport)
>  	if (atomic_read(&transport->xprt.swapper))
>  		sk_clear_memalloc(sk);
>  
> -	kernel_sock_shutdown(sock, SHUT_RDWR);
> +	if (sock)
> +		kernel_sock_shutdown(sock, SHUT_RDWR);
>  

Good catch, but...isn't this still racy? What prevents transport->sock
being set to NULL after you assign it to "sock" but before calling
kernel_sock_shutdown? You might end up calling that on a socket that
has already had sock_release called on it. I believe that would be a bad
thing.

What might be better is to move the assignment to sock inside the
spinlock and then call kernel_sock_shutdown() after dropping the
spinlock. Something like:

    write_lock_bh(&sk->sk_callback_lock);
    ...
    sock = transport->sock;
    transport->sock = NULL;
    ...
    write_unlock_bh(&sk->sk_callback_lock);
    if (sock)
        kernel_sock_shutdown(sock, SHUT_RDWR);

I think that'd be safe...
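
For reference, a fuller sketch of that reordering, built only from the calls
visible in the diffs in this thread (untested and illustrative; the v2 patch
later in the thread takes this approach, with an extra recheck of
transport->inet under the lock):

static void xs_reset_transport(struct sock_xprt *transport)
{
	struct socket *sock;
	struct sock *sk = transport->inet;
	struct rpc_xprt *xprt = &transport->xprt;

	if (sk == NULL)
		return;

	if (atomic_read(&transport->xprt.swapper))
		sk_clear_memalloc(sk);

	write_lock_bh(&sk->sk_callback_lock);
	/* detach the socket from the transport while holding the callback lock */
	sock = transport->sock;
	transport->inet = NULL;
	transport->sock = NULL;

	sk->sk_user_data = NULL;

	xs_restore_old_callbacks(transport, sk);
	xprt_clear_connected(xprt);
	write_unlock_bh(&sk->sk_callback_lock);

	/* shut the socket down only after dropping the lock */
	if (sock)
		kernel_sock_shutdown(sock, SHUT_RDWR);

	/* ...remainder of the function unchanged... */
}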

>  	write_lock_bh(&sk->sk_callback_lock);
> +	/* Check someone has already done the job, while we were waiting */
> +	if (!transport->inet) {
> +		write_unlock_bh(&sk->sk_callback_lock);
> +		return;
> +	}
> +
>  	transport->inet = NULL;
>  	transport->sock = NULL;
>  


-- 
Jeff Layton <jlayton@poochiereds.net>


* Re: [PATCH] SUNRPC: Fix a race in xs_reset_transport
  2015-09-15 18:52 ` Jeff Layton
@ 2015-09-16  8:08   ` Suzuki K. Poulose
  2015-09-16  9:04   ` [PATCHv2] " Suzuki K. Poulose
  2015-09-17 13:38   ` [PATCH] " Trond Myklebust
  2 siblings, 0 replies; 16+ messages in thread
From: Suzuki K. Poulose @ 2015-09-16  8:08 UTC (permalink / raw)
  To: Jeff Layton
  Cc: Trond Myklebust, Anna Schumaker, J. Bruce Fields,
	David S. Miller, linux-nfs, linux-kernel

On 15/09/15 19:52, Jeff Layton wrote:
> On Tue, 15 Sep 2015 16:49:23 +0100
> "Suzuki K. Poulose" <suzuki.poulose@arm.com> wrote:
>
>> From: "Suzuki K. Poulose" <suzuki.poulose@arm.com>
>>
>> Encountered the following BUG() with 4.3-rc1 on a fast model
>> for arm64 with NFS root filesystem.
>>
>> ------------[ cut here ]------------
>> kernel BUG at fs/inode.c:1493!
>>
>> Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
>> Modules linked in:
>> CPU: 2 PID: 1 Comm: swapper/0 Not tainted 4.3.0-rc1+ #855
>> Hardware name: FVP Base (DT)
>> task: ffffc000760b0000 ti: ffffc00076070000 task.ti: ffffc00076070000
>> PC is at iput+0x144/0x170
>> LR is at sock_release+0xbc/0xdc
>> pc : [<ffffc000001b4920>] lr : [<ffffc000004d1974>] pstate: 40000045
>> sp : ffffc00076073790
>> x29: ffffc00076073790 x28: ffffc00076073b40
>> x27: 00000000000003e8 x26: ffffc00076955000
>> x25: 000000000000000c x24: ffffc00076637200
>> x23: ffffc00076073930 x22: ffffc000769b8180
>> x21: ffffc000740500a8 x20: ffffc00074050158
>> x19: ffffc00074050030 x18: 000000009fcef6bf
>> x17: 00000000593e3df5 x16: 00000000b597f71d
>> x15: 00000000e2f9d3f6 x14: 0ffffffffffffffe
>> x13: 0000000000000020 x12: 0101010101010101
>> x11: 00000000000001c9 x10: 0000000000000750
>> x9 : ffffc00076073670 x8 : ffffc000760b07b0
>> x7 : 0000000000000001 x6 : 0000000000000001
>> x5 : 0000000000000000 x4 : 00000000ffffffff
>> x3 : 0000000000000000 x2 : ffffffffffffffff
>> x1 : ffffc00076070000 x0 : 0000000000000060
>>
>> Process swapper/0 (pid: 1, stack limit = 0xffffc00076070020)
>> Stack: (0xffffc00076073790 to 0xffffc00076074000)
>>
>>   [ stack contents stripped ]
>>
>> Call trace:
>> [<ffffc000001b4920>] iput+0x144/0x170
>> [<ffffc000004d1970>] sock_release+0xb8/0xdc
>> [<ffffc00000578df0>] xs_reset_transport+0x8c/0xac
>> [<ffffc00000578e60>] xs_close+0x50/0x6c
>> [<ffffc00000578e9c>] xs_destroy+0x20/0x5c
>> [<ffffc00000575f70>] xprt_destroy+0x68/0x8c
>> [<ffffc0000057777c>] xprt_put+0x24/0x30
>> [<ffffc000005726c4>] rpc_free_client+0x78/0xd8
>> [<ffffc0000057288c>] rpc_release_client+0x94/0xec
>> [<ffffc00000572aac>] rpc_shutdown_client+0x58/0x118
>> [<ffffc00000278588>] nfs_mount+0x100/0x234
>> [<ffffc0000026cc88>] nfs_request_mount+0xa8/0x12c
>> [<ffffc0000026e564>] nfs_try_mount+0x54/0x2b4
>> [<ffffc0000026f140>] nfs_fs_mount+0x5cc/0xac0
>> [<ffffc0000019f1a0>] mount_fs+0x38/0x158
>> [<ffffc000001b81a8>] vfs_kern_mount+0x48/0x11c
>> [<ffffc000001bb390>] do_mount+0x208/0xc04
>> [<ffffc000001bc0b0>] SyS_mount+0x78/0xd0
>> [<ffffc000007f0fa8>] mount_root+0x80/0x148
>> [<ffffc000007f11a8>] prepare_namespace+0x138/0x184
>> [<ffffc000007f0b20>] kernel_init_freeable+0x1cc/0x1f4
>> [<ffffc000005a2914>] kernel_init+0xc/0xd8
>> Code: b5fffc00 17ffffed d4210000 17ffffd7 (d4210000)
>> ---[ end trace 02951451f1831f54 ]---
>>
>> With rpc_debug enabled here is the log :
>>
>> RPC:       shutting down mount client for your.nfs.server
>> RPC:       rpc_release_client(ffffc00076637800)
>> RPC:       destroying UNIX authenticator ffffc000008f48c8
>> RPC:       destroying mount client for your.nfs.server
>> RPC:       destroying transport ffffc00076226000
>> RPC:       xs_destroy xprt ffffc00076226000
>> RPC:       xs_close xprt ffffc00076226000
>> RPC:       xs_tcp_state_change client ffffc00076226000...
>> RPC:       state 4 conn 1 dead 0 zapped 1 sk_shutdown 3
>> RPC:       xs_tcp_state_change client ffffc00076226000...
>> RPC:       state 5 conn 0 dead 0 zapped 1 sk_shutdown 3
>> RPC:       xs_tcp_state_change client ffffc00076226000...
>> RPC:       state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
>> RPC:       disconnected transport ffffc00076226000
>> RPC:       xs_tcp_state_change client ffffc00076226000...
>> RPC:       state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
>> RPC:       disconnected transport ffffc00076226000
>> RPC:       xs_tcp_data_ready...
>> RPC:       xs_tcp_state_change client ffffc00076226000...
>> RPC:       state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
>> RPC:       disconnected transport ffffc00076226000
>> RPC:       wake_up_first(ffffc00076226170 "xprt_sending")
>>
>> So it looks like just before we lock the callbacks in xs_reset_transport,
>> a few of the callbacks got through and issued the reset before we could
>> lock it. And we end up repeating the cleanups, ending up in the above
>> BUG() due to multiple sock_release().
>>
>> This patch fixes the race by confirming that somebody else hasn't performed
>> the reset while we were waiting for the lock. Also, kernel_sock_shutdown()
>> is performed only if the sock is non-NULL to avoid a possible race.
>>
>> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
>> ---
>>   net/sunrpc/xprtsock.c |    9 ++++++++-
>>   1 file changed, 8 insertions(+), 1 deletion(-)
>>
>> diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
>> index 7be90bc..6f4789d 100644
>> --- a/net/sunrpc/xprtsock.c
>> +++ b/net/sunrpc/xprtsock.c
>> @@ -822,9 +822,16 @@ static void xs_reset_transport(struct sock_xprt *transport)
>>   	if (atomic_read(&transport->xprt.swapper))
>>   		sk_clear_memalloc(sk);
>>
>> -	kernel_sock_shutdown(sock, SHUT_RDWR);
>> +	if (sock)
>> +		kernel_sock_shutdown(sock, SHUT_RDWR);
>>
>
> Good catch, but...isn't this still racy? What prevents transport->sock
> being set to NULL after you assign it to "sock" but before calling
> kernel_sock_shutdown? You might end up calling that on a socket that
> has already had sock_release called on it. I believe that would be a bad
> thing.

You are right. I tried moving the kernel_sock_shutdown() under the lock,
but that caused lockups, so I left it where it was. I should have paid more
attention to kernel_sock_shutdown(), which I assumed would be safe :).
Thanks for spotting it. I will send an updated version.


Thanks
Suzuki



* [PATCHv2] SUNRPC: Fix a race in xs_reset_transport
  2015-09-15 18:52 ` Jeff Layton
  2015-09-16  8:08   ` Suzuki K. Poulose
@ 2015-09-16  9:04   ` Suzuki K. Poulose
  2015-09-16  9:35     ` Suzuki K. Poulose
  2015-09-17 13:38   ` [PATCH] " Trond Myklebust
  2 siblings, 1 reply; 16+ messages in thread
From: Suzuki K. Poulose @ 2015-09-16  9:04 UTC (permalink / raw)
  To: Trond Myklebust
  Cc: Anna Schumaker, J. Bruce Fields, Jeff Layton, David S. Miller,
	linux-nfs, linux-kernel, marc.zyngier, Suzuki K. Poulose

From: "Suzuki K. Poulose" <suzuki.poulose@arm.com>

Encountered the following BUG() with 4.3-rc1 on a fast model
for arm64 with NFS root filesystem.

------------[ cut here ]------------
kernel BUG at fs/inode.c:1493!

Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
Modules linked in:
CPU: 2 PID: 1 Comm: swapper/0 Not tainted 4.3.0-rc1+ #855
Hardware name: FVP Base (DT)
task: ffffc000760b0000 ti: ffffc00076070000 task.ti: ffffc00076070000
PC is at iput+0x144/0x170
LR is at sock_release+0xbc/0xdc
pc : [<ffffc000001b4920>] lr : [<ffffc000004d1974>] pstate: 40000045
sp : ffffc00076073790
x29: ffffc00076073790 x28: ffffc00076073b40
x27: 00000000000003e8 x26: ffffc00076955000
x25: 000000000000000c x24: ffffc00076637200
x23: ffffc00076073930 x22: ffffc000769b8180
x21: ffffc000740500a8 x20: ffffc00074050158
x19: ffffc00074050030 x18: 000000009fcef6bf
x17: 00000000593e3df5 x16: 00000000b597f71d
x15: 00000000e2f9d3f6 x14: 0ffffffffffffffe
x13: 0000000000000020 x12: 0101010101010101
x11: 00000000000001c9 x10: 0000000000000750
x9 : ffffc00076073670 x8 : ffffc000760b07b0
x7 : 0000000000000001 x6 : 0000000000000001
x5 : 0000000000000000 x4 : 00000000ffffffff
x3 : 0000000000000000 x2 : ffffffffffffffff
x1 : ffffc00076070000 x0 : 0000000000000060

Process swapper/0 (pid: 1, stack limit = 0xffffc00076070020)
Stack: (0xffffc00076073790 to 0xffffc00076074000)

 [ stack contents stripped ]

Call trace:
[<ffffc000001b4920>] iput+0x144/0x170
[<ffffc000004d1970>] sock_release+0xb8/0xdc
[<ffffc00000578df0>] xs_reset_transport+0x8c/0xac
[<ffffc00000578e60>] xs_close+0x50/0x6c
[<ffffc00000578e9c>] xs_destroy+0x20/0x5c
[<ffffc00000575f70>] xprt_destroy+0x68/0x8c
[<ffffc0000057777c>] xprt_put+0x24/0x30
[<ffffc000005726c4>] rpc_free_client+0x78/0xd8
[<ffffc0000057288c>] rpc_release_client+0x94/0xec
[<ffffc00000572aac>] rpc_shutdown_client+0x58/0x118
[<ffffc00000278588>] nfs_mount+0x100/0x234
[<ffffc0000026cc88>] nfs_request_mount+0xa8/0x12c
[<ffffc0000026e564>] nfs_try_mount+0x54/0x2b4
[<ffffc0000026f140>] nfs_fs_mount+0x5cc/0xac0
[<ffffc0000019f1a0>] mount_fs+0x38/0x158
[<ffffc000001b81a8>] vfs_kern_mount+0x48/0x11c
[<ffffc000001bb390>] do_mount+0x208/0xc04
[<ffffc000001bc0b0>] SyS_mount+0x78/0xd0
[<ffffc000007f0fa8>] mount_root+0x80/0x148
[<ffffc000007f11a8>] prepare_namespace+0x138/0x184
[<ffffc000007f0b20>] kernel_init_freeable+0x1cc/0x1f4
[<ffffc000005a2914>] kernel_init+0xc/0xd8
Code: b5fffc00 17ffffed d4210000 17ffffd7 (d4210000)
---[ end trace 02951451f1831f54 ]---

With rpc_debug enabled, here is the log:

RPC:       shutting down mount client for your.nfs.server
RPC:       rpc_release_client(ffffc00076637800)
RPC:       destroying UNIX authenticator ffffc000008f48c8
RPC:       destroying mount client for your.nfs.server
RPC:       destroying transport ffffc00076226000
RPC:       xs_destroy xprt ffffc00076226000
RPC:       xs_close xprt ffffc00076226000
RPC:       xs_tcp_state_change client ffffc00076226000...
RPC:       state 4 conn 1 dead 0 zapped 1 sk_shutdown 3
RPC:       xs_tcp_state_change client ffffc00076226000...
RPC:       state 5 conn 0 dead 0 zapped 1 sk_shutdown 3
RPC:       xs_tcp_state_change client ffffc00076226000...
RPC:       state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
RPC:       disconnected transport ffffc00076226000
RPC:       xs_tcp_state_change client ffffc00076226000...
RPC:       state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
RPC:       disconnected transport ffffc00076226000
RPC:       xs_tcp_data_ready...
RPC:       xs_tcp_state_change client ffffc00076226000...
RPC:       state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
RPC:       disconnected transport ffffc00076226000
RPC:       wake_up_first(ffffc00076226170 "xprt_sending")

So it looks like, just before we take the callback lock in
xs_reset_transport(), a few of the socket callbacks got through and
issued the reset before we could take it. We then end up repeating the
cleanup, hitting the above BUG() due to the repeated sock_release().

This patch fixes the race by confirming that somebody else hasn't performed
the reset while we were waiting for the lock. Also, kernel_sock_shutdown()
is performed only if the sock is non-NULL to avoid a possible race.

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
---
 net/sunrpc/xprtsock.c |    9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index 7be90bc..6f4789d 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -822,9 +822,16 @@ static void xs_reset_transport(struct sock_xprt *transport)
 	if (atomic_read(&transport->xprt.swapper))
 		sk_clear_memalloc(sk);
 
-	kernel_sock_shutdown(sock, SHUT_RDWR);
+	if (sock)
+		kernel_sock_shutdown(sock, SHUT_RDWR);
 
 	write_lock_bh(&sk->sk_callback_lock);
+	/* Check someone has already done the job, while we were waiting */
+	if (!transport->inet) {
+		write_unlock_bh(&sk->sk_callback_lock);
+		return;
+	}
+
 	transport->inet = NULL;
 	transport->sock = NULL;
 
-- 
1.7.9.5



* [PATCHv2] SUNRPC: Fix a race in xs_reset_transport
  2015-09-16  9:04   ` [PATCHv2] " Suzuki K. Poulose
@ 2015-09-16  9:35     ` Suzuki K. Poulose
  2015-09-16  9:48       ` Marc Zyngier
  2015-09-16 11:17       ` Jeff Layton
  0 siblings, 2 replies; 16+ messages in thread
From: Suzuki K. Poulose @ 2015-09-16  9:35 UTC (permalink / raw)
  To: Trond Myklebust
  Cc: Anna Schumaker, J. Bruce Fields, Jeff Layton, David S. Miller,
	linux-nfs, linux-kernel, marc.zyngier, Suzuki K. Poulose

From: "Suzuki K. Poulose" <suzuki.poulose@arm.com>

Ignore the previous patch, which was really v1.

---

Encountered the following BUG() with 4.3-rc1 on a fast model
for arm64 with NFS root filesystem.

------------[ cut here ]------------
kernel BUG at fs/inode.c:1493!

Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
Modules linked in:
CPU: 2 PID: 1 Comm: swapper/0 Not tainted 4.3.0-rc1+ #855
Hardware name: FVP Base (DT)
task: ffffc000760b0000 ti: ffffc00076070000 task.ti: ffffc00076070000
PC is at iput+0x144/0x170
LR is at sock_release+0xbc/0xdc
pc : [<ffffc000001b4920>] lr : [<ffffc000004d1974>] pstate: 40000045
sp : ffffc00076073790
x29: ffffc00076073790 x28: ffffc00076073b40
x27: 00000000000003e8 x26: ffffc00076955000
x25: 000000000000000c x24: ffffc00076637200
x23: ffffc00076073930 x22: ffffc000769b8180
x21: ffffc000740500a8 x20: ffffc00074050158
x19: ffffc00074050030 x18: 000000009fcef6bf
x17: 00000000593e3df5 x16: 00000000b597f71d
x15: 00000000e2f9d3f6 x14: 0ffffffffffffffe
x13: 0000000000000020 x12: 0101010101010101
x11: 00000000000001c9 x10: 0000000000000750
x9 : ffffc00076073670 x8 : ffffc000760b07b0
x7 : 0000000000000001 x6 : 0000000000000001
x5 : 0000000000000000 x4 : 00000000ffffffff
x3 : 0000000000000000 x2 : ffffffffffffffff
x1 : ffffc00076070000 x0 : 0000000000000060

Process swapper/0 (pid: 1, stack limit = 0xffffc00076070020)
Stack: (0xffffc00076073790 to 0xffffc00076074000)

 [ stack contents stripped ]

Call trace:
[<ffffc000001b4920>] iput+0x144/0x170
[<ffffc000004d1970>] sock_release+0xb8/0xdc
[<ffffc00000578df0>] xs_reset_transport+0x8c/0xac
[<ffffc00000578e60>] xs_close+0x50/0x6c
[<ffffc00000578e9c>] xs_destroy+0x20/0x5c
[<ffffc00000575f70>] xprt_destroy+0x68/0x8c
[<ffffc0000057777c>] xprt_put+0x24/0x30
[<ffffc000005726c4>] rpc_free_client+0x78/0xd8
[<ffffc0000057288c>] rpc_release_client+0x94/0xec
[<ffffc00000572aac>] rpc_shutdown_client+0x58/0x118
[<ffffc00000278588>] nfs_mount+0x100/0x234
[<ffffc0000026cc88>] nfs_request_mount+0xa8/0x12c
[<ffffc0000026e564>] nfs_try_mount+0x54/0x2b4
[<ffffc0000026f140>] nfs_fs_mount+0x5cc/0xac0
[<ffffc0000019f1a0>] mount_fs+0x38/0x158
[<ffffc000001b81a8>] vfs_kern_mount+0x48/0x11c
[<ffffc000001bb390>] do_mount+0x208/0xc04
[<ffffc000001bc0b0>] SyS_mount+0x78/0xd0
[<ffffc000007f0fa8>] mount_root+0x80/0x148
[<ffffc000007f11a8>] prepare_namespace+0x138/0x184
[<ffffc000007f0b20>] kernel_init_freeable+0x1cc/0x1f4
[<ffffc000005a2914>] kernel_init+0xc/0xd8
Code: b5fffc00 17ffffed d4210000 17ffffd7 (d4210000)
---[ end trace 02951451f1831f54 ]---

With rpc_debug enabled, here is the log:

RPC:       shutting down mount client for your.nfs.server
RPC:       rpc_release_client(ffffc00076637800)
RPC:       destroying UNIX authenticator ffffc000008f48c8
RPC:       destroying mount client for your.nfs.server
RPC:       destroying transport ffffc00076226000
RPC:       xs_destroy xprt ffffc00076226000
RPC:       xs_close xprt ffffc00076226000
RPC:       xs_tcp_state_change client ffffc00076226000...
RPC:       state 4 conn 1 dead 0 zapped 1 sk_shutdown 3
RPC:       xs_tcp_state_change client ffffc00076226000...
RPC:       state 5 conn 0 dead 0 zapped 1 sk_shutdown 3
RPC:       xs_tcp_state_change client ffffc00076226000...
RPC:       state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
RPC:       disconnected transport ffffc00076226000
RPC:       xs_tcp_state_change client ffffc00076226000...
RPC:       state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
RPC:       disconnected transport ffffc00076226000
RPC:       xs_tcp_data_ready...
RPC:       xs_tcp_state_change client ffffc00076226000...
RPC:       state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
RPC:       disconnected transport ffffc00076226000
RPC:       wake_up_first(ffffc00076226170 "xprt_sending")

So it looks like, just before we take the callback lock in
xs_reset_transport(), a few of the socket callbacks got through and
issued the reset before we could take it. We then end up repeating the
cleanup, hitting the above BUG() due to the repeated sock_release().

This patch fixes the race by confirming that somebody else hasn't performed
the reset while we were waiting for the lock. Also, the transport->sock is
only accessed within the lock to prevent using a stale sock for kernel_sock_shutdown().

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
---
Changes since V1:
  - Prevent race in accessing sock.
---
 net/sunrpc/xprtsock.c |   15 ++++++++++++---
 1 file changed, 12 insertions(+), 3 deletions(-)

diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index 7be90bc..b95f5d0 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -812,7 +812,7 @@ static void xs_error_report(struct sock *sk)
 
 static void xs_reset_transport(struct sock_xprt *transport)
 {
-	struct socket *sock = transport->sock;
+	struct socket *sock;
 	struct sock *sk = transport->inet;
 	struct rpc_xprt *xprt = &transport->xprt;
 
@@ -822,9 +822,14 @@ static void xs_reset_transport(struct sock_xprt *transport)
 	if (atomic_read(&transport->xprt.swapper))
 		sk_clear_memalloc(sk);
 
-	kernel_sock_shutdown(sock, SHUT_RDWR);
-
 	write_lock_bh(&sk->sk_callback_lock);
+	/* Check someone has already done the job, while we were waiting */
+	if (!transport->inet) {
+		write_unlock_bh(&sk->sk_callback_lock);
+		return;
+	}
+	sock = transport->sock;
+
 	transport->inet = NULL;
 	transport->sock = NULL;
 
@@ -833,6 +838,10 @@ static void xs_reset_transport(struct sock_xprt *transport)
 	xs_restore_old_callbacks(transport, sk);
 	xprt_clear_connected(xprt);
 	write_unlock_bh(&sk->sk_callback_lock);
+
+	if (sock)
+		kernel_sock_shutdown(sock, SHUT_RDWR);
+
 	xs_sock_reset_connection_flags(xprt);
 
 	trace_rpc_socket_close(xprt, sock);
-- 
1.7.9.5



* Re: [PATCHv2] SUNRPC: Fix a race in xs_reset_transport
  2015-09-16  9:35     ` Suzuki K. Poulose
@ 2015-09-16  9:48       ` Marc Zyngier
  2015-09-16 11:17       ` Jeff Layton
  1 sibling, 0 replies; 16+ messages in thread
From: Marc Zyngier @ 2015-09-16  9:48 UTC (permalink / raw)
  To: Suzuki K. Poulose, Trond Myklebust
  Cc: Anna Schumaker, J. Bruce Fields, Jeff Layton, David S. Miller,
	linux-nfs, linux-kernel

On 16/09/15 10:35, Suzuki K. Poulose wrote:
> From: "Suzuki K. Poulose" <suzuki.poulose@arm.com>
> 
> Ignore the previous patch, which was really v1.
> 
> ---
> 
> Encountered the following BUG() with 4.3-rc1 on a fast model
> for arm64 with NFS root filesystem.
> 
> ------------[ cut here ]------------
> kernel BUG at fs/inode.c:1493!
> 
> Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
> Modules linked in:
> CPU: 2 PID: 1 Comm: swapper/0 Not tainted 4.3.0-rc1+ #855
> Hardware name: FVP Base (DT)
> task: ffffc000760b0000 ti: ffffc00076070000 task.ti: ffffc00076070000
> PC is at iput+0x144/0x170
> LR is at sock_release+0xbc/0xdc
> pc : [<ffffc000001b4920>] lr : [<ffffc000004d1974>] pstate: 40000045
> sp : ffffc00076073790
> x29: ffffc00076073790 x28: ffffc00076073b40
> x27: 00000000000003e8 x26: ffffc00076955000
> x25: 000000000000000c x24: ffffc00076637200
> x23: ffffc00076073930 x22: ffffc000769b8180
> x21: ffffc000740500a8 x20: ffffc00074050158
> x19: ffffc00074050030 x18: 000000009fcef6bf
> x17: 00000000593e3df5 x16: 00000000b597f71d
> x15: 00000000e2f9d3f6 x14: 0ffffffffffffffe
> x13: 0000000000000020 x12: 0101010101010101
> x11: 00000000000001c9 x10: 0000000000000750
> x9 : ffffc00076073670 x8 : ffffc000760b07b0
> x7 : 0000000000000001 x6 : 0000000000000001
> x5 : 0000000000000000 x4 : 00000000ffffffff
> x3 : 0000000000000000 x2 : ffffffffffffffff
> x1 : ffffc00076070000 x0 : 0000000000000060
> 
> Process swapper/0 (pid: 1, stack limit = 0xffffc00076070020)
> Stack: (0xffffc00076073790 to 0xffffc00076074000)
> 
>  [ stack contents stripped ]
> 
> Call trace:
> [<ffffc000001b4920>] iput+0x144/0x170
> [<ffffc000004d1970>] sock_release+0xb8/0xdc
> [<ffffc00000578df0>] xs_reset_transport+0x8c/0xac
> [<ffffc00000578e60>] xs_close+0x50/0x6c
> [<ffffc00000578e9c>] xs_destroy+0x20/0x5c
> [<ffffc00000575f70>] xprt_destroy+0x68/0x8c
> [<ffffc0000057777c>] xprt_put+0x24/0x30
> [<ffffc000005726c4>] rpc_free_client+0x78/0xd8
> [<ffffc0000057288c>] rpc_release_client+0x94/0xec
> [<ffffc00000572aac>] rpc_shutdown_client+0x58/0x118
> [<ffffc00000278588>] nfs_mount+0x100/0x234
> [<ffffc0000026cc88>] nfs_request_mount+0xa8/0x12c
> [<ffffc0000026e564>] nfs_try_mount+0x54/0x2b4
> [<ffffc0000026f140>] nfs_fs_mount+0x5cc/0xac0
> [<ffffc0000019f1a0>] mount_fs+0x38/0x158
> [<ffffc000001b81a8>] vfs_kern_mount+0x48/0x11c
> [<ffffc000001bb390>] do_mount+0x208/0xc04
> [<ffffc000001bc0b0>] SyS_mount+0x78/0xd0
> [<ffffc000007f0fa8>] mount_root+0x80/0x148
> [<ffffc000007f11a8>] prepare_namespace+0x138/0x184
> [<ffffc000007f0b20>] kernel_init_freeable+0x1cc/0x1f4
> [<ffffc000005a2914>] kernel_init+0xc/0xd8
> Code: b5fffc00 17ffffed d4210000 17ffffd7 (d4210000)
> ---[ end trace 02951451f1831f54 ]---
> 
> With rpc_debug enabled here is the log :
> 
> RPC:       shutting down mount client for your.nfs.server
> RPC:       rpc_release_client(ffffc00076637800)
> RPC:       destroying UNIX authenticator ffffc000008f48c8
> RPC:       destroying mount client for your.nfs.server
> RPC:       destroying transport ffffc00076226000
> RPC:       xs_destroy xprt ffffc00076226000
> RPC:       xs_close xprt ffffc00076226000
> RPC:       xs_tcp_state_change client ffffc00076226000...
> RPC:       state 4 conn 1 dead 0 zapped 1 sk_shutdown 3
> RPC:       xs_tcp_state_change client ffffc00076226000...
> RPC:       state 5 conn 0 dead 0 zapped 1 sk_shutdown 3
> RPC:       xs_tcp_state_change client ffffc00076226000...
> RPC:       state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
> RPC:       disconnected transport ffffc00076226000
> RPC:       xs_tcp_state_change client ffffc00076226000...
> RPC:       state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
> RPC:       disconnected transport ffffc00076226000
> RPC:       xs_tcp_data_ready...
> RPC:       xs_tcp_state_change client ffffc00076226000...
> RPC:       state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
> RPC:       disconnected transport ffffc00076226000
> RPC:       wake_up_first(ffffc00076226170 "xprt_sending")
> 
> So it looks like just before we lock the callbacks in xs_reset_transport,
> a few of the callbacks got through and issued the reset before we could
> lock it. And we end up repeating the cleanups, ending up in the above
> BUG() due to multiple sock_release().
> 
> This patch fixes the race by confirming that somebody else hasn't performed
> the reset while we were waiting for the lock. Also, the transport->sock is
> only accessed within the lock to prevent using a stale sock for kernel_sock_shutdown().
> 
> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
> ---
> Changes since V1:
>   - Prevent race in accessing sock.
> ---
>  net/sunrpc/xprtsock.c |   15 ++++++++++++---
>  1 file changed, 12 insertions(+), 3 deletions(-)
> 
> diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
> index 7be90bc..b95f5d0 100644
> --- a/net/sunrpc/xprtsock.c
> +++ b/net/sunrpc/xprtsock.c
> @@ -812,7 +812,7 @@ static void xs_error_report(struct sock *sk)
>  
>  static void xs_reset_transport(struct sock_xprt *transport)
>  {
> -	struct socket *sock = transport->sock;
> +	struct socket *sock;
>  	struct sock *sk = transport->inet;
>  	struct rpc_xprt *xprt = &transport->xprt;
>  
> @@ -822,9 +822,14 @@ static void xs_reset_transport(struct sock_xprt *transport)
>  	if (atomic_read(&transport->xprt.swapper))
>  		sk_clear_memalloc(sk);
>  
> -	kernel_sock_shutdown(sock, SHUT_RDWR);
> -
>  	write_lock_bh(&sk->sk_callback_lock);
> +	/* Check someone has already done the job, while we were waiting */
> +	if (!transport->inet) {
> +		write_unlock_bh(&sk->sk_callback_lock);
> +		return;
> +	}
> +	sock = transport->sock;
> +
>  	transport->inet = NULL;
>  	transport->sock = NULL;
>  
> @@ -833,6 +838,10 @@ static void xs_reset_transport(struct sock_xprt *transport)
>  	xs_restore_old_callbacks(transport, sk);
>  	xprt_clear_connected(xprt);
>  	write_unlock_bh(&sk->sk_callback_lock);
> +
> +	if (sock)
> +		kernel_sock_shutdown(sock, SHUT_RDWR);
> +
>  	xs_sock_reset_connection_flags(xprt);
>  
>  	trace_rpc_socket_close(xprt, sock);
> 

This patch resurrects NFS on my arm64 model, so thanks for that!

Tested-by: Marc Zyngier <marc.zyngier@arm.com>

	M.
-- 
Jazz is not dead. It just smells funny...


* Re: [PATCHv2] SUNRPC: Fix a race in xs_reset_transport
  2015-09-16  9:35     ` Suzuki K. Poulose
  2015-09-16  9:48       ` Marc Zyngier
@ 2015-09-16 11:17       ` Jeff Layton
  2015-09-18 11:19         ` Suzuki K. Poulose
  1 sibling, 1 reply; 16+ messages in thread
From: Jeff Layton @ 2015-09-16 11:17 UTC (permalink / raw)
  To: Suzuki K. Poulose
  Cc: Trond Myklebust, Anna Schumaker, J. Bruce Fields,
	David S. Miller, linux-nfs, linux-kernel, marc.zyngier

On Wed, 16 Sep 2015 10:35:49 +0100
"Suzuki K. Poulose" <suzuki.poulose@arm.com> wrote:

> From: "Suzuki K. Poulose" <suzuki.poulose@arm.com>
> 
> Ignore the previous patch, which was really v1.
> 
> ---
> 
> Encountered the following BUG() with 4.3-rc1 on a fast model
> for arm64 with NFS root filesystem.
> 
> ------------[ cut here ]------------
> kernel BUG at fs/inode.c:1493!
> 
> Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
> Modules linked in:
> CPU: 2 PID: 1 Comm: swapper/0 Not tainted 4.3.0-rc1+ #855
> Hardware name: FVP Base (DT)
> task: ffffc000760b0000 ti: ffffc00076070000 task.ti: ffffc00076070000
> PC is at iput+0x144/0x170
> LR is at sock_release+0xbc/0xdc
> pc : [<ffffc000001b4920>] lr : [<ffffc000004d1974>] pstate: 40000045
> sp : ffffc00076073790
> x29: ffffc00076073790 x28: ffffc00076073b40
> x27: 00000000000003e8 x26: ffffc00076955000
> x25: 000000000000000c x24: ffffc00076637200
> x23: ffffc00076073930 x22: ffffc000769b8180
> x21: ffffc000740500a8 x20: ffffc00074050158
> x19: ffffc00074050030 x18: 000000009fcef6bf
> x17: 00000000593e3df5 x16: 00000000b597f71d
> x15: 00000000e2f9d3f6 x14: 0ffffffffffffffe
> x13: 0000000000000020 x12: 0101010101010101
> x11: 00000000000001c9 x10: 0000000000000750
> x9 : ffffc00076073670 x8 : ffffc000760b07b0
> x7 : 0000000000000001 x6 : 0000000000000001
> x5 : 0000000000000000 x4 : 00000000ffffffff
> x3 : 0000000000000000 x2 : ffffffffffffffff
> x1 : ffffc00076070000 x0 : 0000000000000060
> 
> Process swapper/0 (pid: 1, stack limit = 0xffffc00076070020)
> Stack: (0xffffc00076073790 to 0xffffc00076074000)
> 
>  [ stack contents stripped ]
> 
> Call trace:
> [<ffffc000001b4920>] iput+0x144/0x170
> [<ffffc000004d1970>] sock_release+0xb8/0xdc
> [<ffffc00000578df0>] xs_reset_transport+0x8c/0xac
> [<ffffc00000578e60>] xs_close+0x50/0x6c
> [<ffffc00000578e9c>] xs_destroy+0x20/0x5c
> [<ffffc00000575f70>] xprt_destroy+0x68/0x8c
> [<ffffc0000057777c>] xprt_put+0x24/0x30
> [<ffffc000005726c4>] rpc_free_client+0x78/0xd8
> [<ffffc0000057288c>] rpc_release_client+0x94/0xec
> [<ffffc00000572aac>] rpc_shutdown_client+0x58/0x118
> [<ffffc00000278588>] nfs_mount+0x100/0x234
> [<ffffc0000026cc88>] nfs_request_mount+0xa8/0x12c
> [<ffffc0000026e564>] nfs_try_mount+0x54/0x2b4
> [<ffffc0000026f140>] nfs_fs_mount+0x5cc/0xac0
> [<ffffc0000019f1a0>] mount_fs+0x38/0x158
> [<ffffc000001b81a8>] vfs_kern_mount+0x48/0x11c
> [<ffffc000001bb390>] do_mount+0x208/0xc04
> [<ffffc000001bc0b0>] SyS_mount+0x78/0xd0
> [<ffffc000007f0fa8>] mount_root+0x80/0x148
> [<ffffc000007f11a8>] prepare_namespace+0x138/0x184
> [<ffffc000007f0b20>] kernel_init_freeable+0x1cc/0x1f4
> [<ffffc000005a2914>] kernel_init+0xc/0xd8
> Code: b5fffc00 17ffffed d4210000 17ffffd7 (d4210000)
> ---[ end trace 02951451f1831f54 ]---
> 
> With rpc_debug enabled here is the log :
> 
> RPC:       shutting down mount client for your.nfs.server
> RPC:       rpc_release_client(ffffc00076637800)
> RPC:       destroying UNIX authenticator ffffc000008f48c8
> RPC:       destroying mount client for your.nfs.server
> RPC:       destroying transport ffffc00076226000
> RPC:       xs_destroy xprt ffffc00076226000
> RPC:       xs_close xprt ffffc00076226000
> RPC:       xs_tcp_state_change client ffffc00076226000...
> RPC:       state 4 conn 1 dead 0 zapped 1 sk_shutdown 3
> RPC:       xs_tcp_state_change client ffffc00076226000...
> RPC:       state 5 conn 0 dead 0 zapped 1 sk_shutdown 3
> RPC:       xs_tcp_state_change client ffffc00076226000...
> RPC:       state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
> RPC:       disconnected transport ffffc00076226000
> RPC:       xs_tcp_state_change client ffffc00076226000...
> RPC:       state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
> RPC:       disconnected transport ffffc00076226000
> RPC:       xs_tcp_data_ready...
> RPC:       xs_tcp_state_change client ffffc00076226000...
> RPC:       state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
> RPC:       disconnected transport ffffc00076226000
> RPC:       wake_up_first(ffffc00076226170 "xprt_sending")
> 
> So it looks like just before we lock the callbacks in xs_reset_transport,
> a few of the callbacks got through and issued the reset before we could
> lock it. And we end up repeating the cleanups, ending up in the above
> BUG() due to multiple sock_release().
> 
> This patch fixes the race by confirming that somebody else hasn't performed
> the reset while we were waiting for the lock. Also, the transport->sock is
> only accessed within the lock to prevent using a stale sock for kernel_sock_shutdown().
> 
> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
> ---
> Changes since V1:
>   - Prevent race in accessing sock.
> ---
>  net/sunrpc/xprtsock.c |   15 ++++++++++++---
>  1 file changed, 12 insertions(+), 3 deletions(-)
> 
> diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
> index 7be90bc..b95f5d0 100644
> --- a/net/sunrpc/xprtsock.c
> +++ b/net/sunrpc/xprtsock.c
> @@ -812,7 +812,7 @@ static void xs_error_report(struct sock *sk)
>  
>  static void xs_reset_transport(struct sock_xprt *transport)
>  {
> -	struct socket *sock = transport->sock;
> +	struct socket *sock;
>  	struct sock *sk = transport->inet;
>  	struct rpc_xprt *xprt = &transport->xprt;
>  
> @@ -822,9 +822,14 @@ static void xs_reset_transport(struct sock_xprt *transport)
>  	if (atomic_read(&transport->xprt.swapper))
>  		sk_clear_memalloc(sk);
>  
> -	kernel_sock_shutdown(sock, SHUT_RDWR);
> -
>  	write_lock_bh(&sk->sk_callback_lock);
> +	/* Check someone has already done the job, while we were waiting */
> +	if (!transport->inet) {
> +		write_unlock_bh(&sk->sk_callback_lock);
> +		return;
> +	}
> +	sock = transport->sock;
> +
>  	transport->inet = NULL;
>  	transport->sock = NULL;
>  
> @@ -833,6 +838,10 @@ static void xs_reset_transport(struct sock_xprt *transport)
>  	xs_restore_old_callbacks(transport, sk);
>  	xprt_clear_connected(xprt);
>  	write_unlock_bh(&sk->sk_callback_lock);
> +
> +	if (sock)
> +		kernel_sock_shutdown(sock, SHUT_RDWR);
> +
>  	xs_sock_reset_connection_flags(xprt);
>  
>  	trace_rpc_socket_close(xprt, sock);

Better, but now I'm wondering...is it problematic to restore the old
callbacks before calling kernel_sock_shutdown? I can't quite tell
whether it matters in all cases.

It might be best to just go ahead and take the spinlock twice here. Do
it once to clear the transport->sock pointer, call
kernel_sock_shutdown, and then take it again to restore the old
callbacks, etc.
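
A rough sketch of that two-pass ordering, reusing the calls from the diffs
above (illustrative only):

	/* pass 1: detach the socket from the transport */
	write_lock_bh(&sk->sk_callback_lock);
	sock = transport->sock;
	transport->sock = NULL;
	write_unlock_bh(&sk->sk_callback_lock);

	if (sock)
		kernel_sock_shutdown(sock, SHUT_RDWR);

	/* pass 2: restore the old callbacks once the socket has been shut down */
	write_lock_bh(&sk->sk_callback_lock);
	transport->inet = NULL;
	sk->sk_user_data = NULL;
	xs_restore_old_callbacks(transport, sk);
	xprt_clear_connected(xprt);
	write_unlock_bh(&sk->sk_callback_lock);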

I don't know though...I get the feeling there are races all over the
place in this code. It seems like there's a similar one w.r.t. the
transport->inet pointer. It seems a little silly that we clear it under
the sk->sk_callback_lock. You have to dereference that pointer
in order to get to the lock.

Maybe the right solution is to use an xchg to swap the inet pointer
with NULL so it can act as a gatekeeper. Whoever gets there first does
the rest of the shutdown.

Something like this maybe? Would this also fix the original problem?
Note that this patch is untested...

[PATCH] sunrpc: use xchg to fetch and clear the transport->inet pointer in xs_reset_transport

Reported-by: "Suzuki K. Poulose" <Suzuki.Poulose@arm.com>
Signed-off-by: Jeff Layton <jeff.layton@primarydata.com>
---
 net/sunrpc/xprtsock.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index 7be90bc1a7c2..57f79dcab493 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -813,9 +813,10 @@ static void xs_error_report(struct sock *sk)
 static void xs_reset_transport(struct sock_xprt *transport)
 {
 	struct socket *sock = transport->sock;
-	struct sock *sk = transport->inet;
+	struct sock *sk;
 	struct rpc_xprt *xprt = &transport->xprt;
 
+	sk = xchg(&transport->inet, NULL);
 	if (sk == NULL)
 		return;
 
@@ -825,7 +826,6 @@ static void xs_reset_transport(struct sock_xprt *transport)
 	kernel_sock_shutdown(sock, SHUT_RDWR);
 
 	write_lock_bh(&sk->sk_callback_lock);
-	transport->inet = NULL;
 	transport->sock = NULL;
 
 	sk->sk_user_data = NULL;
-- 
2.4.3



* Re: [PATCH] SUNRPC: Fix a race in xs_reset_transport
  2015-09-15 18:52 ` Jeff Layton
  2015-09-16  8:08   ` Suzuki K. Poulose
  2015-09-16  9:04   ` [PATCHv2] " Suzuki K. Poulose
@ 2015-09-17 13:38   ` Trond Myklebust
  2015-09-17 14:18     ` Jeff Layton
  2 siblings, 1 reply; 16+ messages in thread
From: Trond Myklebust @ 2015-09-17 13:38 UTC (permalink / raw)
  To: Jeff Layton
  Cc: Suzuki K. Poulose, Anna Schumaker, J. Bruce Fields,
	David S. Miller, Linux NFS Mailing List,
	Linux Kernel Mailing List

On Tue, Sep 15, 2015 at 2:52 PM, Jeff Layton <jlayton@poochiereds.net> wrote:
> On Tue, 15 Sep 2015 16:49:23 +0100
> "Suzuki K. Poulose" <suzuki.poulose@arm.com> wrote:
>
>>  net/sunrpc/xprtsock.c |    9 ++++++++-
>>  1 file changed, 8 insertions(+), 1 deletion(-)
>>
>> diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
>> index 7be90bc..6f4789d 100644
>> --- a/net/sunrpc/xprtsock.c
>> +++ b/net/sunrpc/xprtsock.c
>> @@ -822,9 +822,16 @@ static void xs_reset_transport(struct sock_xprt *transport)
>>       if (atomic_read(&transport->xprt.swapper))
>>               sk_clear_memalloc(sk);
>>
>> -     kernel_sock_shutdown(sock, SHUT_RDWR);
>> +     if (sock)
>> +             kernel_sock_shutdown(sock, SHUT_RDWR);
>>
>
> Good catch, but...isn't this still racy? What prevents transport->sock
> being set to NULL after you assign it to "sock" but before calling
> kernel_sock_shutdown?

The XPRT_LOCKED state.

Cheers
   Trond


* Re: [PATCH] SUNRPC: Fix a race in xs_reset_transport
  2015-09-17 13:38   ` [PATCH] " Trond Myklebust
@ 2015-09-17 14:18     ` Jeff Layton
  2015-09-17 14:50       ` Trond Myklebust
  0 siblings, 1 reply; 16+ messages in thread
From: Jeff Layton @ 2015-09-17 14:18 UTC (permalink / raw)
  To: Trond Myklebust
  Cc: Suzuki K. Poulose, Anna Schumaker, J. Bruce Fields,
	David S. Miller, Linux NFS Mailing List,
	Linux Kernel Mailing List

On Thu, 17 Sep 2015 09:38:33 -0400
Trond Myklebust <trond.myklebust@primarydata.com> wrote:

> On Tue, Sep 15, 2015 at 2:52 PM, Jeff Layton <jlayton@poochiereds.net> wrote:
> > On Tue, 15 Sep 2015 16:49:23 +0100
> > "Suzuki K. Poulose" <suzuki.poulose@arm.com> wrote:
> >
> >>  net/sunrpc/xprtsock.c |    9 ++++++++-
> >>  1 file changed, 8 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
> >> index 7be90bc..6f4789d 100644
> >> --- a/net/sunrpc/xprtsock.c
> >> +++ b/net/sunrpc/xprtsock.c
> >> @@ -822,9 +822,16 @@ static void xs_reset_transport(struct sock_xprt *transport)
> >>       if (atomic_read(&transport->xprt.swapper))
> >>               sk_clear_memalloc(sk);
> >>
> >> -     kernel_sock_shutdown(sock, SHUT_RDWR);
> >> +     if (sock)
> >> +             kernel_sock_shutdown(sock, SHUT_RDWR);
> >>
> >
> > Good catch, but...isn't this still racy? What prevents transport->sock
> > being set to NULL after you assign it to "sock" but before calling
> > kernel_sock_shutdown?
> 
> The XPRT_LOCKED state.
> 

IDGI -- if the XPRT_LOCKED bit was supposed to prevent that, then
how could you hit the original race? There should be no concurrent
callers to xs_reset_transport on the same xprt, right?

AFAICT, that bit is not set in the xprt_destroy codepath, which may be
the root cause of the problem. How would we take it there anyway?
xprt_destroy returns void, and may not be called in the context of an
rpc_task. If it's contended, what do we do? Sleep until it's cleared?
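
(For context, XPRT_LOCKED is a bit in xprt->state that acts as a gatekeeper
for the transport; conceptually something like the following, shown as an
illustration rather than the exact xprt.c code:)

	/* illustration of the gatekeeper idea, not the real implementation */
	if (test_and_set_bit(XPRT_LOCKED, &xprt->state))
		return;		/* someone else owns the transport */
	/* ...exclusive section: connect, reset or close the socket... */
	clear_bit(XPRT_LOCKED, &xprt->state);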

-- 
Jeff Layton <jlayton@poochiereds.net>


* Re: [PATCH] SUNRPC: Fix a race in xs_reset_transport
  2015-09-17 14:18     ` Jeff Layton
@ 2015-09-17 14:50       ` Trond Myklebust
  2015-09-17 14:59         ` Jeff Layton
  2015-09-18 11:16         ` Suzuki K. Poulose
  0 siblings, 2 replies; 16+ messages in thread
From: Trond Myklebust @ 2015-09-17 14:50 UTC (permalink / raw)
  To: Jeff Layton
  Cc: Suzuki K. Poulose, Anna Schumaker, J. Bruce Fields,
	David S. Miller, Linux NFS Mailing List,
	Linux Kernel Mailing List

On Thu, 2015-09-17 at 10:18 -0400, Jeff Layton wrote:
> On Thu, 17 Sep 2015 09:38:33 -0400
> Trond Myklebust <trond.myklebust@primarydata.com> wrote:
> 
> > On Tue, Sep 15, 2015 at 2:52 PM, Jeff Layton <
> > jlayton@poochiereds.net> wrote:
> > > On Tue, 15 Sep 2015 16:49:23 +0100
> > > "Suzuki K. Poulose" <suzuki.poulose@arm.com> wrote:
> > > 
> > > >  net/sunrpc/xprtsock.c |    9 ++++++++-
> > > >  1 file changed, 8 insertions(+), 1 deletion(-)
> > > > 
> > > > diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
> > > > index 7be90bc..6f4789d 100644
> > > > --- a/net/sunrpc/xprtsock.c
> > > > +++ b/net/sunrpc/xprtsock.c
> > > > @@ -822,9 +822,16 @@ static void xs_reset_transport(struct
> > > > sock_xprt *transport)
> > > >       if (atomic_read(&transport->xprt.swapper))
> > > >               sk_clear_memalloc(sk);
> > > > 
> > > > -     kernel_sock_shutdown(sock, SHUT_RDWR);
> > > > +     if (sock)
> > > > +             kernel_sock_shutdown(sock, SHUT_RDWR);
> > > > 
> > > 
> > > Good catch, but...isn't this still racy? What prevents transport
> > > ->sock
> > > being set to NULL after you assign it to "sock" but before
> > > calling
> > > kernel_sock_shutdown?
> > 
> > The XPRT_LOCKED state.
> > 
> 
> IDGI -- if the XPRT_LOCKED bit was supposed to prevent that, then
> how could you hit the original race? There should be no concurrent
> callers to xs_reset_transport on the same xprt, right?

Correct. The only exception is xs_destroy.

> AFAICT, that bit is not set in the xprt_destroy codepath, which may
> be
> the root cause of the problem. How would we take it there anyway?
> xprt_destroy is void return, and may not be called in the context of
> a
> rpc_task. If it's contended,  what do we do? Sleep until it's
> cleared?
> 

How about the following.

8<-----------------------------------------------------------------
From e2e68218e66c6b0715fd6b8f1b3092694a7c0e62 Mon Sep 17 00:00:00 2001
From: Trond Myklebust <trond.myklebust@primarydata.com>
Date: Thu, 17 Sep 2015 10:42:27 -0400
Subject: [PATCH] SUNRPC: Fix races between socket connection and destroy code

When we're destroying the socket transport, we need to ensure that
we cancel any existing delayed connection attempts, and order them
w.r.t. the call to xs_close().

Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
---
 net/sunrpc/xprtsock.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index 7be90bc1a7c2..d2dfbd043bea 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -881,8 +881,11 @@ static void xs_xprt_free(struct rpc_xprt *xprt)
  */
 static void xs_destroy(struct rpc_xprt *xprt)
 {
+	struct sock_xprt *transport = container_of(xprt,
+			struct sock_xprt, xprt);
 	dprintk("RPC:       xs_destroy xprt %p\n", xprt);
 
+	cancel_delayed_work_sync(&transport->connect_worker);
 	xs_close(xprt);
 	xs_xprt_free(xprt);
 	module_put(THIS_MODULE);
-- 
2.4.3


-- 
Trond Myklebust
Linux NFS client maintainer, PrimaryData
trond.myklebust@primarydata.com





* Re: [PATCH] SUNRPC: Fix a race in xs_reset_transport
  2015-09-17 14:50       ` Trond Myklebust
@ 2015-09-17 14:59         ` Jeff Layton
  2015-09-18 11:16         ` Suzuki K. Poulose
  1 sibling, 0 replies; 16+ messages in thread
From: Jeff Layton @ 2015-09-17 14:59 UTC (permalink / raw)
  To: Trond Myklebust
  Cc: Suzuki K. Poulose, Anna Schumaker, J. Bruce Fields,
	David S. Miller, Linux NFS Mailing List,
	Linux Kernel Mailing List

On Thu, 17 Sep 2015 10:50:01 -0400
Trond Myklebust <trond.myklebust@primarydata.com> wrote:

> On Thu, 2015-09-17 at 10:18 -0400, Jeff Layton wrote:
> > On Thu, 17 Sep 2015 09:38:33 -0400
> > Trond Myklebust <trond.myklebust@primarydata.com> wrote:
> > 
> > > On Tue, Sep 15, 2015 at 2:52 PM, Jeff Layton <
> > > jlayton@poochiereds.net> wrote:
> > > > On Tue, 15 Sep 2015 16:49:23 +0100
> > > > "Suzuki K. Poulose" <suzuki.poulose@arm.com> wrote:
> > > > 
> > > > >  net/sunrpc/xprtsock.c |    9 ++++++++-
> > > > >  1 file changed, 8 insertions(+), 1 deletion(-)
> > > > > 
> > > > > diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
> > > > > index 7be90bc..6f4789d 100644
> > > > > --- a/net/sunrpc/xprtsock.c
> > > > > +++ b/net/sunrpc/xprtsock.c
> > > > > @@ -822,9 +822,16 @@ static void xs_reset_transport(struct
> > > > > sock_xprt *transport)
> > > > >       if (atomic_read(&transport->xprt.swapper))
> > > > >               sk_clear_memalloc(sk);
> > > > > 
> > > > > -     kernel_sock_shutdown(sock, SHUT_RDWR);
> > > > > +     if (sock)
> > > > > +             kernel_sock_shutdown(sock, SHUT_RDWR);
> > > > > 
> > > > 
> > > > Good catch, but...isn't this still racy? What prevents transport
> > > > ->sock
> > > > being set to NULL after you assign it to "sock" but before
> > > > calling
> > > > kernel_sock_shutdown?
> > > 
> > > The XPRT_LOCKED state.
> > > 
> > 
> > IDGI -- if the XPRT_LOCKED bit was supposed to prevent that, then
> > how could you hit the original race? There should be no concurrent
> > callers to xs_reset_transport on the same xprt, right?
> 
> Correct. The only exception is xs_destroy.
> 
> > AFAICT, that bit is not set in the xprt_destroy codepath, which may
> > be
> > the root cause of the problem. How would we take it there anyway?
> > xprt_destroy is void return, and may not be called in the context of
> > a
> > rpc_task. If it's contended,  what do we do? Sleep until it's
> > cleared?
> > 
> 
> How about the following.
> 
> 8<-----------------------------------------------------------------
> From e2e68218e66c6b0715fd6b8f1b3092694a7c0e62 Mon Sep 17 00:00:00 2001
> From: Trond Myklebust <trond.myklebust@primarydata.com>
> Date: Thu, 17 Sep 2015 10:42:27 -0400
> Subject: [PATCH] SUNRPC: Fix races between socket connection and destroy code
> 
> When we're destroying the socket transport, we need to ensure that
> we cancel any existing delayed connection attempts, and order them
> w.r.t. the call to xs_close().
> 
> Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
> ---
>  net/sunrpc/xprtsock.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
> index 7be90bc1a7c2..d2dfbd043bea 100644
> --- a/net/sunrpc/xprtsock.c
> +++ b/net/sunrpc/xprtsock.c
> @@ -881,8 +881,11 @@ static void xs_xprt_free(struct rpc_xprt *xprt)
>   */
>  static void xs_destroy(struct rpc_xprt *xprt)
>  {
> +	struct sock_xprt *transport = container_of(xprt,
> +			struct sock_xprt, xprt);
>  	dprintk("RPC:       xs_destroy xprt %p\n", xprt);
>  
> +	cancel_delayed_work_sync(&transport->connect_worker);
>  	xs_close(xprt);
>  	xs_xprt_free(xprt);
>  	module_put(THIS_MODULE);

Yeah, that looks like it might do it. The only other callers of this
close/reset path are in the connect codepath, so canceling the work
should prevent the race. So...

Acked-by: Jeff Layton <jlayton@poochiereds.net>

It wouldn't hurt to update the comments over xs_close too for
posterity. They currently say:

 * The caller _must_ be holding XPRT_LOCKED in order to avoid issues with
 * xs_reset_transport() zeroing the socket from underneath a writer.

...but that rule is clearly broken here.
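
(If that comment is updated, one possible wording, purely as an illustration:)

 * The caller _must_ either hold XPRT_LOCKED, or be the destroy path with
 * the connect worker already cancelled, in order to avoid issues with
 * xs_reset_transport() zeroing the socket from underneath a writer.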


* Re: [PATCH] SUNRPC: Fix a race in xs_reset_transport
  2015-09-17 14:50       ` Trond Myklebust
  2015-09-17 14:59         ` Jeff Layton
@ 2015-09-18 11:16         ` Suzuki K. Poulose
  1 sibling, 0 replies; 16+ messages in thread
From: Suzuki K. Poulose @ 2015-09-18 11:16 UTC (permalink / raw)
  To: Trond Myklebust, Jeff Layton
  Cc: Anna Schumaker, J. Bruce Fields, David S. Miller,
	Linux NFS Mailing List, Linux Kernel Mailing List

On 17/09/15 15:50, Trond Myklebust wrote:
> On Thu, 2015-09-17 at 10:18 -0400, Jeff Layton wrote:
>> On Thu, 17 Sep 2015 09:38:33 -0400
>> Trond Myklebust <trond.myklebust@primarydata.com> wrote:
>>
>>> On Tue, Sep 15, 2015 at 2:52 PM, Jeff Layton <
>>> jlayton@poochiereds.net> wrote:
>>>> On Tue, 15 Sep 2015 16:49:23 +0100
>>>> "Suzuki K. Poulose" <suzuki.poulose@arm.com> wrote:
>>>>
>>>>>   net/sunrpc/xprtsock.c |    9 ++++++++-
>>>>>   1 file changed, 8 insertions(+), 1 deletion(-)
>>>>>

...

>>
>> IDGI -- if the XPRT_LOCKED bit was supposed to prevent that, then
>> how could you hit the original race? There should be no concurrent
>> callers to xs_reset_transport on the same xprt, right?
>
> Correct. The only exception is xs_destroy.
>
>> AFAICT, that bit is not set in the xprt_destroy codepath, which may
>> be
>> the root cause of the problem. How would we take it there anyway?
>> xprt_destroy is void return, and may not be called in the context of
>> a
>> rpc_task. If it's contended,  what do we do? Sleep until it's
>> cleared?
>>
>
> How about the following.
>
> 8<-----------------------------------------------------------------
> From e2e68218e66c6b0715fd6b8f1b3092694a7c0e62 Mon Sep 17 00:00:00 2001
> From: Trond Myklebust <trond.myklebust@primarydata.com>
> Date: Thu, 17 Sep 2015 10:42:27 -0400
> Subject: [PATCH] SUNRPC: Fix races between socket connection and destroy code
>
> When we're destroying the socket transport, we need to ensure that
> we cancel any existing delayed connection attempts, and order them
> w.r.t. the call to xs_close().
>
> Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
> ---
>   net/sunrpc/xprtsock.c | 3 +++
>   1 file changed, 3 insertions(+)
>
> diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
> index 7be90bc1a7c2..d2dfbd043bea 100644
> --- a/net/sunrpc/xprtsock.c
> +++ b/net/sunrpc/xprtsock.c
> @@ -881,8 +881,11 @@ static void xs_xprt_free(struct rpc_xprt *xprt)
>    */
>   static void xs_destroy(struct rpc_xprt *xprt)
>   {
> +	struct sock_xprt *transport = container_of(xprt,
> +			struct sock_xprt, xprt);
>   	dprintk("RPC:       xs_destroy xprt %p\n", xprt);
>
> +	cancel_delayed_work_sync(&transport->connect_worker);
>   	xs_close(xprt);
>   	xs_xprt_free(xprt);
>   	module_put(THIS_MODULE);


That doesn't fix it for me:

Root-NFS: nfsroot=/work/local/data/rootfs/arm64-rootfs-fvp,tcp,vers=2
NFS: nfs mount opts='vers=2,udp,rsize=4096,wsize=4096,tcp,vers=2,nolock,addr=your.nfs.server'
NFS:   parsing nfs mount option 'vers=2'
NFS:   parsing nfs mount option 'udp'
NFS:   parsing nfs mount option 'rsize=4096'
NFS:   parsing nfs mount option 'wsize=4096'
NFS:   parsing nfs mount option 'tcp'
NFS:   parsing nfs mount option 'vers=2'
NFS:   parsing nfs mount option 'nolock'
NFS:   parsing nfs mount option 'addr=your.nfs.server'
NFS: MNTPATH: '/work/local/data/rootfs/arm64-rootfs-fvp'
NFS: sending MNT request for your.nfs.server:/work/local/data/rootfs/arm64-rootfs-fvp
RPC:       set up xprt to your.nfs.server (autobind) via tcp
RPC:       created transport ffff800077e9d000 with 65536 slots
RPC:       creating mount client for your.nfs.server (xprt ffff800077e9d000)
RPC:       creating UNIX authenticator for client ffff8000787a7800
RPC:       new task initialized, procpid 1
RPC:       allocated task ffff800077f63600
RPC:     1 __rpc_execute flags=0x680
RPC:     1 call_start mount1 proc NULL (sync)
RPC:     1 call_reserve (status 0)
RPC:     1 reserved req ffff8000787a7600 xid b188dcac
RPC:       wake_up_first(ffff800077e9d170 "xprt_sending")
RPC:     1 call_reserveresult (status 0)
RPC:     1 call_refresh (status 0)
RPC:     1 holding NULL cred ffff8000008e2e68
RPC:     1 refreshing NULL cred ffff8000008e2e68
RPC:     1 call_refreshresult (status 0)
RPC:     1 call_allocate (status 0)
RPC:     1 allocated buffer of size 96 at ffff800077e9d800
RPC:     1 call_bind (status 0)
RPC:     1 rpcb_getport_async(your.nfs.server, 100005, 1, 6)
RPC:     1 sleep_on(queue "xprt_binding" time 4294937393)
RPC:     1 added to queue ffff800077e9d0c8 "xprt_binding"
RPC:     1 setting alarm for 60000 ms
RPC:     1 rpcb_getport_async: trying rpcbind version 2
RPC:       set up xprt to your.nfs.server (port 111) via tcp
RPC:       created transport ffff800077e9e000 with 65536 slots
RPC:       creating rpcbind client for your.nfs.server (xprt ffff800077e9e000)
RPC:       creating UNIX authenticator for client ffff800078688000
RPC:       new task initialized, procpid 1
RPC:       allocated task ffff800077f63800
RPC:       rpc_release_client(ffff800078688000)
RPC:     2 __rpc_execute flags=0x681
RPC:     2 call_start rpcbind2 proc GETPORT (async)
RPC:     2 call_reserve (status 0)
RPC:     2 reserved req ffff8000787a7e00 xid 591ea4ad
RPC:       wake_up_first(ffff800077e9e170 "xprt_sending")
RPC:     2 call_reserveresult (status 0)
RPC:     2 call_refresh (status 0)
RPC:     2 looking up UNIX cred
RPC:       looking up UNIX cred
RPC:       allocating UNIX cred for uid 0 gid 0
RPC:     2 refreshing UNIX cred ffff800077ee5e40
RPC:     2 call_refreshresult (status 0)
RPC:     2 call_allocate (status 0)
RPC:     2 allocated buffer of size 512 at ffff800077e9e800
RPC:     2 call_bind (status 0)
RPC:     2 call_connect xprt ffff800077e9e000 is not connected
RPC:     2 xprt_connect xprt ffff800077e9e000 is not connected
RPC:     2 sleep_on(queue "xprt_pending" time 4294937393)
RPC:     2 added to queue ffff800077e9e218 "xprt_pending"
RPC:     2 setting alarm for 60000 ms
RPC:       xs_connect scheduled xprt ffff800077e9e000
RPC:       worker connecting xprt ffff800077e9e000 via tcp to your.nfs.server (port 111)
RPC:       xs_tcp_state_change client ffff800077e9e000...
RPC:       state 1 conn 0 dead 0 zapped 1 sk_shutdown 0
RPC:     2 __rpc_wake_up_task (now 4294937393)
RPC:     2 disabling timer
RPC:     2 removed from queue ffff800077e9e218 "xprt_pending"
RPC:       __rpc_wake_up_task done
RPC:       ffff800077e9e000 connect status 115 connected 1 sock state 1
RPC:       wake_up_first(ffff800077e9e170 "xprt_sending")
RPC:     2 __rpc_execute flags=0x681
RPC:     2 xprt_connect_status: retrying
RPC:     2 call_connect_status (status -11)
RPC:     2 call_timeout (minor)
RPC:     2 call_bind (status 0)
RPC:     2 call_connect xprt ffff800077e9e000 is connected
RPC:     2 call_transmit (status 0)
RPC:     2 xprt_prepare_transmit
RPC:     2 rpc_xdr_encode (status 0)
RPC:     2 marshaling UNIX cred ffff800077ee5e40
RPC:     2 using AUTH_UNIX cred ffff800077ee5e40 to wrap rpc data
RPC:     2 encoding PMAP_GETPORT call (100005, 1, 6, 0)
RPC:     2 xprt_transmit(92)
RPC:       xs_tcp_data_ready...
RPC:       xs_tcp_data_recv started
RPC:       reading TCP record fragment of length 28
RPC:       reading XID (4 bytes)
RPC:       reading request with XID 591ea4ad
RPC:       reading CALL/REPLY flag (4 bytes)
RPC:       read reply XID 591ea4ad
RPC:       XID 591ea4ad read 20 bytes
RPC:       xprt = ffff800077e9e000, tcp_copied = 28, tcp_offset = 28, tcp_reclen = 28
RPC:     2 xid 591ea4ad complete (28 bytes received)
RPC:       xs_tcp_data_recv done
RPC:       xs_tcp_send_request(92) = 0
RPC:     2 xmit complete
RPC:       wake_up_first(ffff800077e9e170 "xprt_sending")
RPC:     2 call_status (status 28)
RPC:     2 call_decode (status 28)
RPC:     2 validating UNIX cred ffff800077ee5e40
RPC:     2 using AUTH_UNIX cred ffff800077ee5e40 to unwrap rpc data
RPC:     2 PMAP_GETPORT result: 58401
RPC:     2 call_decode result 0
RPC:       setting port for xprt ffff800077e9d000 to 58401
RPC:     2 rpcb_getport_done(status 0, port 58401)
RPC:     2 return 0, status 0
RPC:     2 release task
RPC:       freeing buffer of size 512 at ffff800077e9e800
RPC:     2 release request ffff8000787a7e00
RPC:       wake_up_first(ffff800077e9e2c0 "xprt_backlog")
RPC:       rpc_release_client(ffff800078688000)
RPC:       destroying rpcbind client for your.nfs.server
RPC:       destroying transport ffff800077e9e000
RPC:       xs_destroy xprt ffff800077e9e000
RPC:       xs_close xprt ffff800077e9e000
RPC:       xs_reset xprt ffff800077e9e000
RPC:       xs_tcp_state_change client ffff800077e9e000...
RPC:       state 4 conn 1 dead 0 zapped 1 sk_shutdown 3
RPC:       xs_tcp_state_change client ffff800077e9e000...
RPC:       state 5 conn 0 dead 0 zapped 1 sk_shutdown 3
RPC:       xs_tcp_state_change client ffff800077e9e000...
RPC:       state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
RPC:       disconnected transport ffff800077e9e000
RPC:       xs_tcp_state_change client ffff800077e9e000...
RPC:       state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
RPC:       disconnected transport ffff800077e9e000
RPC:       xs_tcp_data_ready...
RPC:       xs_reset xprt ffff800077e9e000
RPC:       xs_tcp_state_change client ffff800077e9e000...
RPC:       state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
RPC:       disconnected transport ffff800077e9e000
RPC:       wake_up_first(ffff800077e9e170 "xprt_sending")
------------[ cut here ]------------
kernel BUG at fs/inode.c:1493!
Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
Modules linked in:
CPU: 2 PID: 1 Comm: swapper/0 Tainted: G S              4.3.0-rc1+ #885
Hardware name: FVP Base (DT)
task: ffff800078478000 ti: ffff800078454000 task.ti: ffff800078454000
PC is at iput+0x144/0x170
LR is at sock_release+0xbc/0xdc
pc : [<ffff8000001b38a8>] lr : [<ffff8000004d0a0c>] pstate: 40400045
sp : ffff8000784574e0
x29: ffff8000784574e0 x28: 0000000000000002
x27: ffff800078688000 x26: ffff800077ee7cc0
x25: ffff80000062c000 x24: ffff800077e9d388
x23: ffff800077e9d0c8 x22: ffff800077f08180
x21: ffff80007808e628 x20: ffff80007808e6d8
x19: ffff80007808e5b0 x18: 0000000000000007
x17: 000000000000000e x16: ffff000000294fff
x15: ffffffffffffffff x14: 0ffffffffffffffe
x13: 0000000000000020 x12: 0101010101010101
x11: 0000000000000167 x10: 0000000000000750
x9 : ffff8000784573c0 x8 : ffff8000784787b0
x7 : ffff80000087a610 x6 : 0000000000000001
x5 : 0000000000000000 x4 : 00000000fffffffe
x3 : 0000000000000000 x2 : ffffffffffffffff
x1 : ffff800078454000 x0 : 0000000000000060

Call trace:
[<ffff8000001b38a8>] iput+0x144/0x170
[<ffff8000004d0a08>] sock_release+0xb8/0xdc
[<ffff800000577e88>] xs_reset_transport+0x98/0xcc
[<ffff800000577f0c>] xs_close+0x50/0x6c
[<ffff800000577f50>] xs_destroy+0x28/0x64
[<ffff800000575000>] xprt_destroy+0x68/0x8c
[<ffff80000057680c>] xprt_put+0x24/0x30
[<ffff800000571754>] rpc_free_client+0x78/0xd8
[<ffff80000057191c>] rpc_release_client+0x94/0xec
[<ffff800000584aa4>] rpcb_getport_async+0x2d8/0x490
[<ffff800000571218>] call_bind+0x58/0x88
[<ffff80000057b540>] __rpc_execute+0x64/0x338
[<ffff80000057bb20>] rpc_execute+0x5c/0x6c
[<ffff800000573eac>] rpc_run_task+0x8c/0xb0
[<ffff800000573f14>] rpc_call_sync+0x44/0xb0
[<ffff800000573fd0>] rpc_ping+0x50/0x70
[<ffff8000005740a0>] rpc_create_xprt+0xb0/0xcc
[<ffff80000057415c>] rpc_create+0xa0/0x150
[<ffff8000002774e0>] nfs_mount+0xcc/0x234
[<ffff80000026bc14>] nfs_request_mount+0xa8/0x12c
[<ffff80000026d4f0>] nfs_try_mount+0x54/0x2b4
[<ffff80000026e0cc>] nfs_fs_mount+0x5cc/0xac0
[<ffff80000019e124>] mount_fs+0x38/0x158
[<ffff8000001b7130>] vfs_kern_mount+0x48/0x11c
[<ffff8000001ba314>] do_mount+0x204/0xc00
[<ffff8000001bb034>] SyS_mount+0x78/0xd0
[<ffff8000007e9fac>] mount_root+0x80/0x148
[<ffff8000007ea1ac>] prepare_namespace+0x138/0x184
[<ffff8000007e9b20>] kernel_init_freeable+0x1cc/0x1f4
[<ffff8000005a1a18>] kernel_init+0xc/0xd8
Code: b5fffc00 17ffffed d4210000 17ffffd7 (d4210000)
---[ end trace 9b8c1c0ef92dab57 ]---
Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b


Thanks
Suzuki


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCHv2] SUNRPC: Fix a race in xs_reset_transport
  2015-09-16 11:17       ` Jeff Layton
@ 2015-09-18 11:19         ` Suzuki K. Poulose
  2015-09-18 16:51           ` Trond Myklebust
  0 siblings, 1 reply; 16+ messages in thread
From: Suzuki K. Poulose @ 2015-09-18 11:19 UTC (permalink / raw)
  To: Jeff Layton
  Cc: Trond Myklebust, Anna Schumaker, J. Bruce Fields,
	David S. Miller, linux-nfs, linux-kernel, Marc Zyngier

On 16/09/15 12:17, Jeff Layton wrote:
> On Wed, 16 Sep 2015 10:35:49 +0100
> "Suzuki K. Poulose" <suzuki.poulose@arm.com> wrote:
>
>> From: "Suzuki K. Poulose" <suzuki.poulose@arm.com>
>>

...

>> +		write_unlock_bh(&sk->sk_callback_lock);
>> +		return;
>> +	}
>> +	sock = transport->sock;
>> +
>>   	transport->inet = NULL;
>>   	transport->sock = NULL;
>>
>> @@ -833,6 +838,10 @@ static void xs_reset_transport(struct sock_xprt *transport)
>>   	xs_restore_old_callbacks(transport, sk);
>>   	xprt_clear_connected(xprt);
>>   	write_unlock_bh(&sk->sk_callback_lock);
>> +
>> +	if (sock)
>> +		kernel_sock_shutdown(sock, SHUT_RDWR);
>> +
>>   	xs_sock_reset_connection_flags(xprt);
>>
>>   	trace_rpc_socket_close(xprt, sock);
>
> Better, but now I'm wondering...is it problematic to restore the old
> callbacks before calling kernel_sock_shutdown? I can't quite tell
> whether it matters in all cases.
>
> It might be best to just go ahead and take the spinlock twice here. Do
> it once to clear the transport->sock pointer, call
> kernel_sock_shutdown, and then take it again to restore the old
> callbacks, etc.
>
> I don't know though...I get the feeling there are races all over the
> place in this code. It seems like there's a similar one wrt to the
> transport->inet pointer. It seems a little silly that we clear it under
> the sk->sk_callback_lock. You have to dereference that pointer
> in order to get to the lock.
>
> Maybe the right solution is to use an xchg to swap the inet pointer
> with NULL so it can act as a gatekeeper. Whoever gets there first does
> the rest of the shutdown.
>
> Something like this maybe? Would this also fix the original problem?
> Note that this patch is untested...
>
> [PATCH] sunrpc: use xchg to fetch and clear the transport->inet pointer in xs_reset_transport
>
> Reported-by: "Suzuki K. Poulose" <Suzuki.Poulose@arm.com>
> Signed-off-by: Jeff Layton <jeff.layton@primarydata.com>
> ---
>   net/sunrpc/xprtsock.c | 4 ++--
>   1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
> index 7be90bc1a7c2..57f79dcab493 100644
> --- a/net/sunrpc/xprtsock.c
> +++ b/net/sunrpc/xprtsock.c
> @@ -813,9 +813,10 @@ static void xs_error_report(struct sock *sk)
>   static void xs_reset_transport(struct sock_xprt *transport)
>   {
>   	struct socket *sock = transport->sock;
> -	struct sock *sk = transport->inet;
> +	struct sock *sk;
>   	struct rpc_xprt *xprt = &transport->xprt;
>
> +	sk = xchg(&transport->inet, NULL);
>   	if (sk == NULL)
>   		return;
>
> @@ -825,7 +826,6 @@ static void xs_reset_transport(struct sock_xprt *transport)
>   	kernel_sock_shutdown(sock, SHUT_RDWR);
>
>   	write_lock_bh(&sk->sk_callback_lock);
> -	transport->inet = NULL;
>   	transport->sock = NULL;
>
>   	sk->sk_user_data = NULL;
>


This one seemed to fix it, so if it matters :

Tested-by: Suzuki K. Poulose <suzuki.poulose@arm.com>

Suzuki


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCHv2] SUNRPC: Fix a race in xs_reset_transport
  2015-09-18 11:19         ` Suzuki K. Poulose
@ 2015-09-18 16:51           ` Trond Myklebust
  2015-09-18 22:00             ` Trond Myklebust
  0 siblings, 1 reply; 16+ messages in thread
From: Trond Myklebust @ 2015-09-18 16:51 UTC (permalink / raw)
  To: Suzuki K. Poulose, Jeff Layton
  Cc: Anna Schumaker, J. Bruce Fields, David S. Miller, linux-nfs,
	linux-kernel, Marc Zyngier

On Fri, 2015-09-18 at 12:19 +0100, Suzuki K. Poulose wrote:
> On 16/09/15 12:17, Jeff Layton wrote:
> > On Wed, 16 Sep 2015 10:35:49 +0100
> > "Suzuki K. Poulose" <suzuki.poulose@arm.com> wrote:
> > 
> > > From: "Suzuki K. Poulose" <suzuki.poulose@arm.com>
> > > 
> 
> ...
> 
> > > +		write_unlock_bh(&sk->sk_callback_lock);
> > > +		return;
> > > +	}
> > > +	sock = transport->sock;
> > > +
> > >   	transport->inet = NULL;
> > >   	transport->sock = NULL;
> > > 
> > > @@ -833,6 +838,10 @@ static void xs_reset_transport(struct
> > > sock_xprt *transport)
> > >   	xs_restore_old_callbacks(transport, sk);
> > >   	xprt_clear_connected(xprt);
> > >   	write_unlock_bh(&sk->sk_callback_lock);
> > > +
> > > +	if (sock)
> > > +		kernel_sock_shutdown(sock, SHUT_RDWR);
> > > +
> > >   	xs_sock_reset_connection_flags(xprt);
> > > 
> > >   	trace_rpc_socket_close(xprt, sock);
> > 
> > Better, but now I'm wondering...is it problematic to restore the
> > old
> > callbacks before calling kernel_sock_shutdown? I can't quite tell
> > whether it matters in all cases.
> > 
> > It might be best to just go ahead and take the spinlock twice here.
> > Do
> > it once to clear the transport->sock pointer, call
> > kernel_sock_shutdown, and then take it again to restore the old
> > callbacks, etc.
> > 
> > I don't know though...I get the feeling there are races all over
> > the
> > place in this code. It seems like there's a similar one wrt to the
> > transport->inet pointer. It seems a little silly that we clear it
> > under
> > the sk->sk_callback_lock. You have to dereference that pointer
> > in order to get to the lock.
> > 
> > Maybe the right solution is to use an xchg to swap the inet pointer
> > with NULL so it can act as a gatekeeper. Whoever gets there first
> > does
> > the rest of the shutdown.
> > 
> > Something like this maybe? Would this also fix the original
> > problem?
> > Note that this patch is untested...
> > 
> > [PATCH] sunrpc: use xchg to fetch and clear the transport->inet
> > pointer in xs_reset_transport
> > 
> > Reported-by: "Suzuki K. Poulose" <Suzuki.Poulose@arm.com>
> > Signed-off-by: Jeff Layton <jeff.layton@primarydata.com>
> > ---
> >   net/sunrpc/xprtsock.c | 4 ++--
> >   1 file changed, 2 insertions(+), 2 deletions(-)
> > 
> > diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
> > index 7be90bc1a7c2..57f79dcab493 100644
> > --- a/net/sunrpc/xprtsock.c
> > +++ b/net/sunrpc/xprtsock.c
> > @@ -813,9 +813,10 @@ static void xs_error_report(struct sock *sk)
> >   static void xs_reset_transport(struct sock_xprt *transport)
> >   {
> >   	struct socket *sock = transport->sock;
> > -	struct sock *sk = transport->inet;
> > +	struct sock *sk;
> >   	struct rpc_xprt *xprt = &transport->xprt;
> > 
> > +	sk = xchg(&transport->inet, NULL);
> >   	if (sk == NULL)
> >   		return;
> > 
> > @@ -825,7 +826,6 @@ static void xs_reset_transport(struct sock_xprt
> > *transport)
> >   	kernel_sock_shutdown(sock, SHUT_RDWR);
> > 
> >   	write_lock_bh(&sk->sk_callback_lock);
> > -	transport->inet = NULL;
> >   	transport->sock = NULL;
> > 
> >   	sk->sk_user_data = NULL;
> > 
> 
> 
> This one seemed to fix it, so if it matters :
> 
> Tested-by: Suzuki K. Poulose <suzuki.poulose@arm.com>


I don't think it does. It addresses a symptom, but the actual problem
is that we're running 2 parallel close() calls on the same socket
during a shutdown. That must not happen because it means we have
something else trying to use the socket while it is being freed.

I think what is happening is that we're triggering the socket autoclose
mechanism from the state change callback. You're seeing the problem
more frequently because we added the call to kernel_sock_shutdown() as
part of the socket shutdown, but AFAICS, it could still be triggered
from some external event such as a server-initiated shutdown that
happened at the same time.
In fact, looking at the code, it could even be triggered from the data
receive side of things.
Both of these things are bad, because autoclose puts the transport
struct that is being freed onto a workqueue. That again can lead to a
really bad use-after-free situation if the timing is just a little
different.
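
As a rough illustration of that failure mode (made-up names, not the actual
xprtsock code): a socket callback re-arms deferred work on a transport that
the destroy path is about to free, so the queued work can later run on freed
memory.

#include <linux/workqueue.h>
#include <linux/slab.h>

/* Sketch only: "demo_xprt" and both functions are made-up names. */
struct demo_xprt {
        struct work_struct close_work;  /* stands in for the autoclose work */
};

static void demo_state_change(struct demo_xprt *x)
{
        /* callback fires while the transport is being torn down... */
        queue_work(system_wq, &x->close_work);
}

static void demo_destroy(struct demo_xprt *x)
{
        kfree(x);       /* ...and the queued work then runs on freed memory */
}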

So how about the following patch? It should apply cleanly on top of the
first one (which is still needed, btw).

8<--------------------------------------------------------------------
From 6261682c433cf57e6bff4ab57d615460dc15e75c Mon Sep 17 00:00:00 2001
From: Trond Myklebust <trond.myklebust@primarydata.com>
Date: Fri, 18 Sep 2015 09:52:07 -0400
Subject: [PATCH 1/2] SUNRPC: xs_sock_mark_closed() should not trigger socket
 autoclose

Triggering a socket autoclose from inside xs_tcp_state_change() is
potentially racy when it happens during the transport destroy phase,
because it restarts the xprt->task_cleanup work.

The same is true of autoclose triggered from inside xs_tcp_data_ready().

Under all conditions, it should be quite sufficient just to mark
the socket as disconnected. It will then be closed by the
transport shutdown or reconnect code.

Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
---
 net/sunrpc/xprtsock.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index c35038511686..bb4fff4d4e5e 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -777,7 +777,6 @@ static void xs_sock_mark_closed(struct rpc_xprt *xprt)
 	xs_sock_reset_connection_flags(xprt);
 	/* Mark transport as closed and wake up all pending tasks */
 	xprt_disconnect_done(xprt);
-	xprt_force_disconnect(xprt);
 }
 
 /**
@@ -1068,7 +1067,7 @@ static inline void xs_tcp_read_fraghdr(struct rpc_xprt *xprt, struct xdr_skb_rea
 	/* Sanity check of the record length */
 	if (unlikely(transport->tcp_reclen < 8)) {
 		dprintk("RPC:       invalid TCP record fragment length\n");
-		xs_tcp_force_close(xprt);
+		xprt_disconnect_done(xprt);
 		return;
 	}
 	dprintk("RPC:       reading TCP record fragment of length %d\n",
@@ -1149,7 +1148,7 @@ static inline void xs_tcp_read_calldir(struct sock_xprt *transport,
 		break;
 	default:
 		dprintk("RPC:       invalid request message type\n");
-		xs_tcp_force_close(&transport->xprt);
+		xprt_disconnect_done(&transport->xprt);
 	}
 	xs_tcp_check_fraghdr(transport);
 }
@@ -1283,7 +1282,7 @@ static int xs_tcp_read_callback(struct rpc_xprt *xprt,
 	if (req == NULL) {
 		spin_unlock(&xprt->transport_lock);
 		printk(KERN_WARNING "Callback slot table overflowed\n");
-		xprt_force_disconnect(xprt);
+		xprt_disconnect_done(xprt);
 		return -1;
 	}
 
@@ -1484,8 +1483,7 @@ static void xs_tcp_state_change(struct sock *sk)
 	case TCP_CLOSE_WAIT:
 		/* The server initiated a shutdown of the socket */
 		xprt->connect_cookie++;
-		clear_bit(XPRT_CONNECTED, &xprt->state);
-		xs_tcp_force_close(xprt);
+		xprt_disconnect_done(xprt);
 	case TCP_CLOSING:
 		/*
 		 * If the server closed down the connection, make sure that
-- 
2.4.3

-- 
Trond Myklebust
Linux NFS client maintainer, PrimaryData
trond.myklebust@primarydata.com




^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: [PATCHv2] SUNRPC: Fix a race in xs_reset_transport
  2015-09-18 16:51           ` Trond Myklebust
@ 2015-09-18 22:00             ` Trond Myklebust
  2015-09-21 13:48               ` Suzuki K. Poulose
  0 siblings, 1 reply; 16+ messages in thread
From: Trond Myklebust @ 2015-09-18 22:00 UTC (permalink / raw)
  To: Suzuki K. Poulose, Jeff Layton
  Cc: Anna Schumaker, J. Bruce Fields, David S. Miller, linux-nfs,
	linux-kernel, Marc Zyngier

On Fri, 2015-09-18 at 12:51 -0400, Trond Myklebust wrote:
> On Fri, 2015-09-18 at 12:19 +0100, Suzuki K. Poulose wrote:
> > On 16/09/15 12:17, Jeff Layton wrote:
> > > On Wed, 16 Sep 2015 10:35:49 +0100
> > > "Suzuki K. Poulose" <suzuki.poulose@arm.com> wrote:
> > > 
> > > > From: "Suzuki K. Poulose" <suzuki.poulose@arm.com>
> > > > 
> > 
> > ...
> > 
> > > > +		write_unlock_bh(&sk->sk_callback_lock);
> > > > +		return;
> > > > +	}
> > > > +	sock = transport->sock;
> > > > +
> > > >   	transport->inet = NULL;
> > > >   	transport->sock = NULL;
> > > > 
> > > > @@ -833,6 +838,10 @@ static void xs_reset_transport(struct
> > > > sock_xprt *transport)
> > > >   	xs_restore_old_callbacks(transport, sk);
> > > >   	xprt_clear_connected(xprt);
> > > >   	write_unlock_bh(&sk->sk_callback_lock);
> > > > +
> > > > +	if (sock)
> > > > +		kernel_sock_shutdown(sock, SHUT_RDWR);
> > > > +
> > > >   	xs_sock_reset_connection_flags(xprt);
> > > > 
> > > >   	trace_rpc_socket_close(xprt, sock);
> > > 
> > > Better, but now I'm wondering...is it problematic to restore the
> > > old
> > > callbacks before calling kernel_sock_shutdown? I can't quite tell
> > > whether it matters in all cases.
> > > 
> > > It might be best to just go ahead and take the spinlock twice
> > > here.
> > > Do
> > > it once to clear the transport->sock pointer, call
> > > kernel_sock_shutdown, and then take it again to restore the old
> > > callbacks, etc.
> > > 
> > > I don't know though...I get the feeling there are races all over
> > > the
> > > place in this code. It seems like there's a similar one wrt to
> > > the
> > > transport->inet pointer. It seems a little silly that we clear it
> > > under
> > > the sk->sk_callback_lock. You have to dereference that pointer
> > > in order to get to the lock.
> > > 
> > > Maybe the right solution is to use an xchg to swap the inet
> > > pointer
> > > with NULL so it can act as a gatekeeper. Whoever gets there first
> > > does
> > > the rest of the shutdown.
> > > 
> > > Something like this maybe? Would this also fix the original
> > > problem?
> > > Note that this patch is untested...
> > > 
> > > [PATCH] sunrpc: use xchg to fetch and clear the transport->inet
> > > pointer in xs_reset_transport
> > > 
> > > Reported-by: "Suzuki K. Poulose" <Suzuki.Poulose@arm.com>
> > > Signed-off-by: Jeff Layton <jeff.layton@primarydata.com>
> > > ---
> > >   net/sunrpc/xprtsock.c | 4 ++--
> > >   1 file changed, 2 insertions(+), 2 deletions(-)
> > > 
> > > diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
> > > index 7be90bc1a7c2..57f79dcab493 100644
> > > --- a/net/sunrpc/xprtsock.c
> > > +++ b/net/sunrpc/xprtsock.c
> > > @@ -813,9 +813,10 @@ static void xs_error_report(struct sock *sk)
> > >   static void xs_reset_transport(struct sock_xprt *transport)
> > >   {
> > >   	struct socket *sock = transport->sock;
> > > -	struct sock *sk = transport->inet;
> > > +	struct sock *sk;
> > >   	struct rpc_xprt *xprt = &transport->xprt;
> > > 
> > > +	sk = xchg(&transport->inet, NULL);
> > >   	if (sk == NULL)
> > >   		return;
> > > 
> > > @@ -825,7 +826,6 @@ static void xs_reset_transport(struct
> > > sock_xprt
> > > *transport)
> > >   	kernel_sock_shutdown(sock, SHUT_RDWR);
> > > 
> > >   	write_lock_bh(&sk->sk_callback_lock);
> > > -	transport->inet = NULL;
> > >   	transport->sock = NULL;
> > > 
> > >   	sk->sk_user_data = NULL;
> > > 
> > 
> > 
> > This one seemed to fix it, so if it matters :
> > 
> > Tested-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
> 
> 
> I don't think it does. It addresses a symptom, but the actual problem
> is that we're running 2 parallel close() calls on the same socket
> during a shutdown. That must not happen because it means we have
> something else trying to use the socket while it is being freed.
> 
> I think what is happening is that we're triggering the socket
> autoclose
> mechanism from the state change callback. You're seeing the problem
> more frequently because we added the call to kernel_sock_shutdown()
> as
> part of the socket shutdown, but AFAICS, it could still be triggered
> from some external event such as a server-initiated shutdown that
> happened at the same time.
> In fact, looking at the code, it could even be triggered from the
> data
> receive side of things.
> Both of these things are bad, because autoclose puts the transport
> struct that is being freed onto a workqueue. That again can lead to a
> really bad use-after-free situation if the timing is just a little
> different.
> 
> So how about the following patch? It should apply cleanly on top of
> the
> first one (which is still needed, btw).

Having thought some more about this, I think the safest thing in order
to avoid races is simply to have the shutdown set XPRT_LOCKED. That way
we can keep the current desirable behaviour of closing the socket
automatically any time the server initiates a close, while still
preventing it during shutdown.
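
For reference, the general shape of the bit-lock handshake the patch below
relies on looks roughly like this (a generic sketch with made-up names, not
the xprt code itself): the handlers take and release the bit around their
work, and the shutdown path sleeps until it can take the bit and then never
gives it back.

#include <linux/bitops.h>
#include <linux/sched.h>
#include <linux/wait.h>

/* Sketch only: "demo_state", DEMO_LOCKED and both functions are made up. */
static unsigned long demo_state;
#define DEMO_LOCKED     0

static void demo_handler(void)
{
        if (test_and_set_bit(DEMO_LOCKED, &demo_state))
                return;                 /* someone else owns the transport */
        /* ... connect/close work ... */
        clear_bit(DEMO_LOCKED, &demo_state);
        smp_mb__after_atomic();
        wake_up_bit(&demo_state, DEMO_LOCKED);  /* let a waiting shutdown proceed */
}

static void demo_shutdown(void)
{
        /* sleep until the bit is free, take it, and never release it */
        wait_on_bit_lock(&demo_state, DEMO_LOCKED, TASK_UNINTERRUPTIBLE);
        /* ... safe to tear the transport down now ... */
}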

8<-------------------------------------------------------------------
From 3e1c9d8092e2fa4509d84a00fcf21e7e0c581fe2 Mon Sep 17 00:00:00 2001
From: Trond Myklebust <trond.myklebust@primarydata.com>
Date: Fri, 18 Sep 2015 15:53:24 -0400
Subject: [PATCH] SUNRPC: Lock the transport layer on shutdown

Avoid all races with the connect/disconnect handlers by taking the
transport lock.

Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
---
 net/sunrpc/xprt.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
index ab5dd621ae0c..2e98f4a243e5 100644
--- a/net/sunrpc/xprt.c
+++ b/net/sunrpc/xprt.c
@@ -614,6 +614,7 @@ static void xprt_autoclose(struct work_struct *work)
 	clear_bit(XPRT_CLOSE_WAIT, &xprt->state);
 	xprt->ops->close(xprt);
 	xprt_release_write(xprt, NULL);
+	wake_up_bit(&xprt->state, XPRT_LOCKED);
 }
 
 /**
@@ -723,6 +724,7 @@ void xprt_unlock_connect(struct rpc_xprt *xprt, void *cookie)
 	xprt->ops->release_xprt(xprt, NULL);
 out:
 	spin_unlock_bh(&xprt->transport_lock);
+	wake_up_bit(&xprt->state, XPRT_LOCKED);
 }
 
 /**
@@ -1394,6 +1396,10 @@ out:
 static void xprt_destroy(struct rpc_xprt *xprt)
 {
 	dprintk("RPC:       destroying transport %p\n", xprt);
+
+	/* Exclude transport connect/disconnect handlers */
+	wait_on_bit_lock(&xprt->state, XPRT_LOCKED, TASK_UNINTERRUPTIBLE);
+
 	del_timer_sync(&xprt->timer);
 
 	rpc_xprt_debugfs_unregister(xprt);
-- 
2.4.3

-- 
Trond Myklebust
Linux NFS client maintainer, PrimaryData
trond.myklebust@primarydata.com




^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: [PATCHv2] SUNRPC: Fix a race in xs_reset_transport
  2015-09-18 22:00             ` Trond Myklebust
@ 2015-09-21 13:48               ` Suzuki K. Poulose
  0 siblings, 0 replies; 16+ messages in thread
From: Suzuki K. Poulose @ 2015-09-21 13:48 UTC (permalink / raw)
  To: Trond Myklebust, Jeff Layton
  Cc: Anna Schumaker, J. Bruce Fields, David S. Miller, linux-nfs,
	linux-kernel, Marc Zyngier

On 18/09/15 23:00, Trond Myklebust wrote:
> On Fri, 2015-09-18 at 12:51 -0400, Trond Myklebust wrote:
>> On Fri, 2015-09-18 at 12:19 +0100, Suzuki K. Poulose wrote:
>>> On 16/09/15 12:17, Jeff Layton wrote:
>>>> On Wed, 16 Sep 2015 10:35:49 +0100
>>>> "Suzuki K. Poulose" <suzuki.poulose@arm.com> wrote:
>>>>
>>>>> From: "Suzuki K. Poulose" <suzuki.poulose@arm.com>
>>>>>
>>>
>>> ...
>>>
>>>>> +		write_unlock_bh(&sk->sk_callback_lock);
>>>>> +		return;
>>>>> +	}
>>>>> +	sock = transport->sock;
>>>>> +
>>>>>    	transport->inet = NULL;
>>>>>    	transport->sock = NULL;
>>>>>
>>>>> @@ -833,6 +838,10 @@ static void xs_reset_transport(struct
>>>>> sock_xprt *transport)
>>>>>    	xs_restore_old_callbacks(transport, sk);
>>>>>    	xprt_clear_connected(xprt);
>>>>>    	write_unlock_bh(&sk->sk_callback_lock);
>>>>> +

...

>>
>> So how about the following patch? It should apply cleanly on top of
>> the
>> first one (which is still needed, btw).
>
> Having thought some more about this, I think the safest thing in order
> to avoid races is simply to have the shutdown set XPRT_LOCKED. That way
> we can keep the current desirable behaviour of closing the socket
> automatically any time the server initiates a close, while still
> preventing it during shutdown.
>
> 8<-------------------------------------------------------------------
> From 3e1c9d8092e2fa4509d84a00fcf21e7e0c581fe2 Mon Sep 17 00:00:00 2001
> From: Trond Myklebust <trond.myklebust@primarydata.com>
> Date: Fri, 18 Sep 2015 15:53:24 -0400
> Subject: [PATCH] SUNRPC: Lock the transport layer on shutdown
>
> Avoid all races with the connect/disconnect handlers by taking the
> transport lock.
>
> Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
> ---
>   net/sunrpc/xprt.c | 6 ++++++
>   1 file changed, 6 insertions(+)
>
> diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
> index ab5dd621ae0c..2e98f4a243e5 100644
> --- a/net/sunrpc/xprt.c
> +++ b/net/sunrpc/xprt.c
> @@ -614,6 +614,7 @@ static void xprt_autoclose(struct work_struct *work)
>   	clear_bit(XPRT_CLOSE_WAIT, &xprt->state);
>   	xprt->ops->close(xprt);
>   	xprt_release_write(xprt, NULL);
> +	wake_up_bit(&xprt->state, XPRT_LOCKED);
>   }
>
>   /**
> @@ -723,6 +724,7 @@ void xprt_unlock_connect(struct rpc_xprt *xprt, void *cookie)
>   	xprt->ops->release_xprt(xprt, NULL);
>   out:
>   	spin_unlock_bh(&xprt->transport_lock);
> +	wake_up_bit(&xprt->state, XPRT_LOCKED);
>   }
>
>   /**
> @@ -1394,6 +1396,10 @@ out:
>   static void xprt_destroy(struct rpc_xprt *xprt)
>   {
>   	dprintk("RPC:       destroying transport %p\n", xprt);
> +
> +	/* Exclude transport connect/disconnect handlers */
> +	wait_on_bit_lock(&xprt->state, XPRT_LOCKED, TASK_UNINTERRUPTIBLE);
> +
>   	del_timer_sync(&xprt->timer);
>
>   	rpc_xprt_debugfs_unregister(xprt);
>


That works for me, please feel free to add:

Reported-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Suzuki K. Poulose <suzuki.poulose@arm.com>



Thanks
Suzuki



^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread

Thread overview: 16+ messages
2015-09-15 15:49 [PATCH] SUNRPC: Fix a race in xs_reset_transport Suzuki K. Poulose
2015-09-15 18:52 ` Jeff Layton
2015-09-16  8:08   ` Suzuki K. Poulose
2015-09-16  9:04   ` [PATCHv2] " Suzuki K. Poulose
2015-09-16  9:35     ` Suzuki K. Poulose
2015-09-16  9:48       ` Marc Zyngier
2015-09-16 11:17       ` Jeff Layton
2015-09-18 11:19         ` Suzuki K. Poulose
2015-09-18 16:51           ` Trond Myklebust
2015-09-18 22:00             ` Trond Myklebust
2015-09-21 13:48               ` Suzuki K. Poulose
2015-09-17 13:38   ` [PATCH] " Trond Myklebust
2015-09-17 14:18     ` Jeff Layton
2015-09-17 14:50       ` Trond Myklebust
2015-09-17 14:59         ` Jeff Layton
2015-09-18 11:16         ` Suzuki K. Poulose
