* [REGRESSION] NFS is creating a hidden port (left over from xs_bind() )
@ 2015-06-12  3:49 Steven Rostedt
  2015-06-12 14:10 ` Trond Myklebust
  0 siblings, 1 reply; 77+ messages in thread
From: Steven Rostedt @ 2015-06-12  3:49 UTC (permalink / raw)
  To: Trond Myklebust; +Cc: Anna Schumaker, linux-nfs, netdev, LKML, Andrew Morton


I recently upgraded my main server to 4.0.4 from 3.19.5 and rkhunter
started reporting a hidden port on my box.

Running unhide-tcp I see this:

# unhide-tcp 
Unhide-tcp 20121229
Copyright © 2012 Yago Jesus & Patrick Gouin
License GPLv3+ : GNU GPL version 3 or later
http://www.unhide-forensics.info
Used options: 
[*]Starting TCP checking

Found Hidden port that not appears in ss: 946
[*]Starting UDP checking

This scared the hell out of me as I'm thinking that I have got some kind
of NSA backdoor hooked into my server and it is monitoring my plans to
smuggle Kinder Überraschung into the USA from Germany. I panicked!

Well, I wasted the day writing modules to first look at all the sockets
opened by all processes (via their file descriptors) and posted their
port numbers.

  http://rostedt.homelinux.com/private/tasklist.c

But this port wasn't there either.

Then I decided to look at the ports in tcp_hashinfo.

  http://rostedt.homelinux.com/private/portlist.c

This found the port but no file was connected to it, and worse yet,
when I first ran it without using probe_kernel_read(), it crashed my
kernel, because sk->sk_socket pointed to a freed socket!
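
For the curious, a minimal sketch of the same idea, assuming a ~4.0 kernel
and the inet_hashtables.h layout of that era (this is not the portlist.c
linked above, just an illustration): walk tcp_hashinfo.bhash and use
probe_kernel_read() so a stale sk->sk_socket pointer cannot oops the box.

#include <linux/module.h>
#include <linux/uaccess.h>              /* probe_kernel_read() */
#include <net/inet_hashtables.h>
#include <net/tcp.h>                    /* tcp_hashinfo */

static int __init portdump_init(void)
{
        unsigned int i;

        for (i = 0; i < tcp_hashinfo.bhash_size; i++) {
                struct inet_bind_hashbucket *head = &tcp_hashinfo.bhash[i];
                struct inet_bind_bucket *tb;
                struct sock *sk;

                spin_lock_bh(&head->lock);
                inet_bind_bucket_for_each(tb, &head->chain) {
                        pr_info("bound port %hu\n", tb->port);
                        sk_for_each_bound(sk, &tb->owners) {
                                struct socket *sock;

                                /* sk may point into freed memory */
                                if (probe_kernel_read(&sock, &sk->sk_socket,
                                                      sizeof(sock)))
                                        sock = NULL;
                                pr_info("  sk %p socket %p\n", sk, sock);
                        }
                }
                spin_unlock_bh(&head->lock);
        }
        return -ENODEV; /* fail init on purpose: dump once, never stay loaded */
}
module_init(portdump_init);
MODULE_LICENSE("GPL");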

Note, each boot, the hidden port is different.

Finally, I decided to bring in the big guns, and inserted a
trace_printk() into the bind logic, to see if I could find the culprit.
After fiddling with it a few times, I found a suspect:

   kworker/3:1H-123   [003] ..s.    96.696213: inet_bind_hash: add 946

Bah, it's a kernel thread doing it, via a work queue. I then added a
trace_dump_stack() to find what was calling this, and here it is:

    kworker/3:1H-123   [003] ..s.    96.696222: <stack trace>
 => inet_csk_get_port
 => inet_addr_type
 => inet_bind
 => xs_bind
 => sock_setsockopt
 => __sock_create
 => xs_create_sock.isra.18
 => xs_tcp_setup_socket
 => process_one_work
 => worker_thread
 => worker_thread
 => kthread
 => kthread
 => ret_from_fork
 => kthread
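
For reference, the instrumentation amounted to something like the sketch
below in net/ipv4/inet_hashtables.c (not the literal patch; the existing
function body is elided):

void inet_bind_hash(struct sock *sk, struct inet_bind_bucket *tb,
                    const unsigned short snum)
{
        trace_printk("add %hu\n", snum);
        trace_dump_stack(0);    /* second pass: who asked for this port? */

        /* ... existing body: record snum and link sk onto tb->owners ... */
}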

I rebooted, and examined what happens. I see the kworker binding that
port, and all seems well:

# netstat -tapn |grep 946
tcp        0      0 192.168.23.9:946        192.168.23.22:55201     ESTABLISHED -               

But waiting for a bit, the connection goes into a TIME_WAIT, and then
it just disappears. But the bind to the port does not get released, and
that port is from then on, taken.

This never happened with my 3.19 kernels. I would bisect it but this is
happening on my main server box which I usually only reboot every other
month doing upgrades. It causes too much disturbance for myself (and my
family) as when this box is offline, basically the rest of my machines
are too.

I figured this may be enough information to see if you can fix it.
Otherwise I can try to do the bisect, but that's not going to happen
any time soon. I may just go back to 3.19 for now, such that rkhunter
stops complaining about the hidden port.

If you need any more information, let me know.

-- Steve

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [REGRESSION] NFS is creating a hidden port (left over from xs_bind() )
  2015-06-12  3:49 [REGRESSION] NFS is creating a hidden port (left over from xs_bind() ) Steven Rostedt
@ 2015-06-12 14:10 ` Trond Myklebust
  2015-06-12 14:40     ` Eric Dumazet
  0 siblings, 1 reply; 77+ messages in thread
From: Trond Myklebust @ 2015-06-12 14:10 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Anna Schumaker, Linux NFS Mailing List,
	Linux Network Devel Mailing List, LKML, Andrew Morton

On Thu, Jun 11, 2015 at 11:49 PM, Steven Rostedt <rostedt@goodmis.org> wrote:
>
> I recently upgraded my main server to 4.0.4 from 3.19.5 and rkhunter
> started reporting a hidden port on my box.
>
> Running unhide-tcp I see this:
>
> # unhide-tcp
> Unhide-tcp 20121229
> Copyright © 2012 Yago Jesus & Patrick Gouin
> License GPLv3+ : GNU GPL version 3 or later
> http://www.unhide-forensics.info
> Used options:
> [*]Starting TCP checking
>
> Found Hidden port that not appears in ss: 946
> [*]Starting UDP checking
>
> This scared the hell out of me as I'm thinking that I have got some kind
> of NSA backdoor hooked into my server and it is monitoring my plans to
> smuggle Kinder Überraschung into the USA from Germany. I panicked!
>
> Well, I wasted the day writing modules to first look at all the sockets
> opened by all processes (via their file descriptors) and posted their
> port numbers.
>
>   http://rostedt.homelinux.com/private/tasklist.c
>
> But this port wasn't there either.
>
> Then I decided to look at the ports in tcp_hashinfo.
>
>   http://rostedt.homelinux.com/private/portlist.c
>
> This found the port but no file was connected to it, and worse yet,
> when I first ran it without using probe_kernel_read(), it crashed my
> kernel, because sk->sk_socket pointed to a freed socket!
>
> Note, each boot, the hidden port is different.
>
> Finally, I decided to bring in the big guns, and inserted a
> trace_printk() into the bind logic, to see if I could find the culprit.
> After fiddling with it a few times, I found a suspect:
>
>    kworker/3:1H-123   [003] ..s.    96.696213: inet_bind_hash: add 946
>
> Bah, it's a kernel thread doing it, via a work queue. I then added a
> trace_dump_stack() to find what was calling this, and here it is:
>
>     kworker/3:1H-123   [003] ..s.    96.696222: <stack trace>
>  => inet_csk_get_port
>  => inet_addr_type
>  => inet_bind
>  => xs_bind
>  => sock_setsockopt
>  => __sock_create
>  => xs_create_sock.isra.18
>  => xs_tcp_setup_socket
>  => process_one_work
>  => worker_thread
>  => worker_thread
>  => kthread
>  => kthread
>  => ret_from_fork
>  => kthread
>
> I rebooted, and examined what happens. I see the kworker binding that
> port, and all seems well:
>
> # netstat -tapn |grep 946
> tcp        0      0 192.168.23.9:946        192.168.23.22:55201     ESTABLISHED -
>
> But waiting for a bit, the connection goes into a TIME_WAIT, and then
> it just disappears. But the bind to the port does not get released, and
> that port is from then on, taken.
>
> This never happened with my 3.19 kernels. I would bisect it but this is
> happening on my main server box which I usually only reboot every other
> month doing upgrades. It causes too much disturbance for myself (and my
> family) as when this box is offline, basically the rest of my machines
> are too.
>
> I figured this may be enough information to see if you can fix it.
> Otherwise I can try to do the bisect, but that's not going to happen
> any time soon. I may just go back to 3.19 for now, such that rkhunter
> stops complaining about the hidden port.
>

The only new thing that we're doing with 4.0 is to set SO_REUSEPORT on
the socket before binding the port (commit 4dda9c8a5e34: "SUNRPC: Set
SO_REUSEPORT socket option for TCP connections"). Perhaps there is an
issue with that?
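
For anyone reading along, that change boils down to something like the
sketch below in net/sunrpc/xprtsock.c (reconstructed from memory rather
than copied from the commit), called just before xs_bind():

static void xs_sock_set_reuseport(struct socket *sock)
{
        int opt = 1;

        kernel_setsockopt(sock, SOL_SOCKET, SO_REUSEPORT,
                          (char *)&opt, sizeof(opt));
}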

Cheers
  Trond

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [REGRESSION] NFS is creating a hidden port (left over from xs_bind() )
@ 2015-06-12 14:40     ` Eric Dumazet
  0 siblings, 0 replies; 77+ messages in thread
From: Eric Dumazet @ 2015-06-12 14:40 UTC (permalink / raw)
  To: Trond Myklebust
  Cc: Steven Rostedt, Anna Schumaker, Linux NFS Mailing List,
	Linux Network Devel Mailing List, LKML, Andrew Morton

On Fri, 2015-06-12 at 10:10 -0400, Trond Myklebust wrote:
> On Thu, Jun 11, 2015 at 11:49 PM, Steven Rostedt <rostedt@goodmis.org> wrote:
> >
> > I recently upgraded my main server to 4.0.4 from 3.19.5 and rkhunter
> > started reporting a hidden port on my box.
> >
> > Running unhide-tcp I see this:
> >
> > # unhide-tcp
> > Unhide-tcp 20121229
> > Copyright © 2012 Yago Jesus & Patrick Gouin
> > License GPLv3+ : GNU GPL version 3 or later
> > http://www.unhide-forensics.info
> > Used options:
> > [*]Starting TCP checking
> >
> > Found Hidden port that not appears in ss: 946
> > [*]Starting UDP checking
> >
> > This scared the hell out of me as I'm thinking that I have got some kind
> > of NSA backdoor hooked into my server and it is monitoring my plans to
> > smuggle Kinder Überraschung into the USA from Germany. I panicked!
> >
> > Well, I wasted the day writing modules to first look at all the sockets
> > opened by all processes (via their file descriptors) and posted their
> > port numbers.
> >
> >   http://rostedt.homelinux.com/private/tasklist.c
> >
> > But this port wasn't there either.
> >
> > Then I decided to look at the ports in tcp_hashinfo.
> >
> >   http://rostedt.homelinux.com/private/portlist.c
> >
> > This found the port but no file was connected to it, and worse yet,
> > when I first ran it without using probe_kernel_read(), it crashed my
> > kernel, because sk->sk_socket pointed to a freed socket!
> >
> > Note, each boot, the hidden port is different.
> >
> > Finally, I decided to bring in the big guns, and inserted a
> > trace_printk() into the bind logic, to see if I could find the culprit.
> > After fiddling with it a few times, I found a suspect:
> >
> >    kworker/3:1H-123   [003] ..s.    96.696213: inet_bind_hash: add 946
> >
> > Bah, it's a kernel thread doing it, via a work queue. I then added a
> > trace_dump_stack() to find what was calling this, and here it is:
> >
> >     kworker/3:1H-123   [003] ..s.    96.696222: <stack trace>
> >  => inet_csk_get_port
> >  => inet_addr_type
> >  => inet_bind
> >  => xs_bind
> >  => sock_setsockopt
> >  => __sock_create
> >  => xs_create_sock.isra.18
> >  => xs_tcp_setup_socket
> >  => process_one_work
> >  => worker_thread
> >  => worker_thread
> >  => kthread
> >  => kthread
> >  => ret_from_fork
> >  => kthread
> >
> > I rebooted, and examined what happens. I see the kworker binding that
> > port, and all seems well:
> >
> > # netstat -tapn |grep 946
> > tcp        0      0 192.168.23.9:946        192.168.23.22:55201     ESTABLISHED -
> >
> > But waiting for a bit, the connection goes into a TIME_WAIT, and then
> > it just disappears. But the bind to the port does not get released, and
> > that port is from then on, taken.
> >
> > This never happened with my 3.19 kernels. I would bisect it but this is
> > happening on my main server box which I usually only reboot every other
> > month doing upgrades. It causes too much disturbance for myself (and my
> > family) as when this box is offline, basically the rest of my machines
> > are too.
> >
> > I figured this may be enough information to see if you can fix it.
> > Otherwise I can try to do the bisect, but that's not going to happen
> > any time soon. I may just go back to 3.19 for now, such that rkhunter
> > stops complaining about the hidden port.
> >
> 
> The only new thing that we're doing with 4.0 is to set SO_REUSEPORT on
> the socket before binding the port (commit 4dda9c8a5e34: "SUNRPC: Set
> SO_REUSEPORT socket option for TCP connections"). Perhaps there is an
> issue with that?

Strange, because the usual way to not have time-wait is to use SO_LINGER
with linger=0

And apparently xs_tcp_finish_connecting() has this :

                sock_reset_flag(sk, SOCK_LINGER);
                tcp_sk(sk)->linger2 = 0;
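
(A minimal userspace illustration of what "SO_LINGER with linger=0" means,
for anyone following along; kernel code would set the same option
internally, e.g. via kernel_setsockopt():)

#include <string.h>
#include <sys/socket.h>

/* Abortive close: l_onoff=1, l_linger=0 makes close() send a RST,
 * so the connection never enters TIME_WAIT. */
static int set_linger_zero(int fd)
{
        struct linger ling;

        memset(&ling, 0, sizeof(ling));
        ling.l_onoff = 1;
        ling.l_linger = 0;
        return setsockopt(fd, SOL_SOCKET, SO_LINGER, &ling, sizeof(ling));
}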

Are you sure SO_REUSEADDR was not the thing you wanted ?

Steven, have you tried kmemleak ?




^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [REGRESSION] NFS is creating a hidden port (left over from xs_bind() )
  2015-06-12 14:40     ` Eric Dumazet
@ 2015-06-12 14:57     ` Trond Myklebust
  2015-06-12 15:43         ` Eric Dumazet
  0 siblings, 1 reply; 77+ messages in thread
From: Trond Myklebust @ 2015-06-12 14:57 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: Steven Rostedt, Anna Schumaker, Linux NFS Mailing List,
	Linux Network Devel Mailing List, LKML, Andrew Morton

On Fri, Jun 12, 2015 at 10:40 AM, Eric Dumazet <eric.dumazet@gmail.com> wrote:
> On Fri, 2015-06-12 at 10:10 -0400, Trond Myklebust wrote:
>> On Thu, Jun 11, 2015 at 11:49 PM, Steven Rostedt <rostedt@goodmis.org> wrote:
>> >
>> > I recently upgraded my main server to 4.0.4 from 3.19.5 and rkhunter
>> > started reporting a hidden port on my box.
>> >
>> > Running unhide-tcp I see this:
>> >
>> > # unhide-tcp
>> > Unhide-tcp 20121229
>> > Copyright © 2012 Yago Jesus & Patrick Gouin
>> > License GPLv3+ : GNU GPL version 3 or later
>> > http://www.unhide-forensics.info
>> > Used options:
>> > [*]Starting TCP checking
>> >
>> > Found Hidden port that not appears in ss: 946
>> > [*]Starting UDP checking
>> >
>> > This scared the hell out of me as I'm thinking that I have got some kind
>> > of NSA backdoor hooked into my server and it is monitoring my plans to
>> > smuggle Kinder Überraschung into the USA from Germany. I panicked!
>> >
>> > Well, I wasted the day writing modules to first look at all the sockets
>> > opened by all processes (via their file descriptors) and posted their
>> > port numbers.
>> >
>> >   http://rostedt.homelinux.com/private/tasklist.c
>> >
>> > But this port wasn't there either.
>> >
>> > Then I decided to look at the ports in tcp_hashinfo.
>> >
>> >   http://rostedt.homelinux.com/private/portlist.c
>> >
>> > This found the port but no file was connected to it, and worse yet,
>> > when I first ran it without using probe_kernel_read(), it crashed my
>> > kernel, because sk->sk_socket pointed to a freed socket!
>> >
>> > Note, each boot, the hidden port is different.
>> >
>> > Finally, I decided to bring in the big guns, and inserted a
>> > trace_printk() into the bind logic, to see if I could find the culprit.
>> > After fiddling with it a few times, I found a suspect:
>> >
>> >    kworker/3:1H-123   [003] ..s.    96.696213: inet_bind_hash: add 946
>> >
>> > Bah, it's a kernel thread doing it, via a work queue. I then added a
>> > trace_dump_stack() to find what was calling this, and here it is:
>> >
>> >     kworker/3:1H-123   [003] ..s.    96.696222: <stack trace>
>> >  => inet_csk_get_port
>> >  => inet_addr_type
>> >  => inet_bind
>> >  => xs_bind
>> >  => sock_setsockopt
>> >  => __sock_create
>> >  => xs_create_sock.isra.18
>> >  => xs_tcp_setup_socket
>> >  => process_one_work
>> >  => worker_thread
>> >  => worker_thread
>> >  => kthread
>> >  => kthread
>> >  => ret_from_fork
>> >  => kthread
>> >
>> > I rebooted, and examined what happens. I see the kworker binding that
>> > port, and all seems well:
>> >
>> > # netstat -tapn |grep 946
>> > tcp        0      0 192.168.23.9:946        192.168.23.22:55201     ESTABLISHED -
>> >
>> > But waiting for a bit, the connection goes into a TIME_WAIT, and then
>> > it just disappears. But the bind to the port does not get released, and
>> > that port is from then on, taken.
>> >
>> > This never happened with my 3.19 kernels. I would bisect it but this is
>> > happening on my main server box which I usually only reboot every other
>> > month doing upgrades. It causes too much disturbance for myself (and my
>> > family) as when this box is offline, basically the rest of my machines
>> > are too.
>> >
>> > I figured this may be enough information to see if you can fix it.
>> > Otherwise I can try to do the bisect, but that's not going to happen
>> > any time soon. I may just go back to 3.19 for now, such that rkhunter
>> > stops complaining about the hidden port.
>> >
>>
>> The only new thing that we're doing with 4.0 is to set SO_REUSEPORT on
>> the socket before binding the port (commit 4dda9c8a5e34: "SUNRPC: Set
>> SO_REUSEPORT socket option for TCP connections"). Perhaps there is an
>> issue with that?
>
> Strange, because the usual way to not have time-wait is to use SO_LINGER
> with linger=0
>
> And apparently xs_tcp_finish_connecting() has this :
>
>                 sock_reset_flag(sk, SOCK_LINGER);
>                 tcp_sk(sk)->linger2 = 0;

Are you sure? I thought that SO_LINGER is more about controlling how
the socket behaves w.r.t. waiting for the TCP_CLOSE state to be
achieved (i.e. about aborting the FIN state negotiation early). I've
never observed an effect on the TCP time-wait states.

> Are you sure SO_REUSEADDR was not the thing you wanted ?

Yes. SO_REUSEADDR has the problem that it requires you bind to
something other than 0.0.0.0, so it is less appropriate for outgoing
connections; the RPC code really should not have to worry about
routing and routability of a particular source address.

Cheers
  Trond

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [REGRESSION] NFS is creating a hidden port (left over from xs_bind() )
@ 2015-06-12 15:34       ` Steven Rostedt
  0 siblings, 0 replies; 77+ messages in thread
From: Steven Rostedt @ 2015-06-12 15:34 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: Trond Myklebust, Anna Schumaker, Linux NFS Mailing List,
	Linux Network Devel Mailing List, LKML, Andrew Morton

On Fri, 12 Jun 2015 07:40:35 -0700
Eric Dumazet <eric.dumazet@gmail.com> wrote:

> Strange, because the usual way to not have time-wait is to use SO_LINGER
> with linger=0
> 
> And apparently xs_tcp_finish_connecting() has this :
> 
>                 sock_reset_flag(sk, SOCK_LINGER);
>                 tcp_sk(sk)->linger2 = 0;
> 
> Are you sure SO_REUSEADDR was not the thing you wanted ?
> 
> Steven, have you tried kmemleak ?

Nope, and again, I'm hesitant on adding too much debug. This is my main
server (build server, ssh server, web server, mail server, proxy
server, irc server, etc).

I did, however, convert dprintk() into trace_printk() in xprtsock.c and
xprt.c, and reran it. Here's the output:

(port 684 was the bad one this time)

# tracer: nop
#
# entries-in-buffer/entries-written: 396/396   #P:4
#
#                              _-----=> irqs-off
#                             / _----=> need-resched
#                            | / _---=> hardirq/softirq
#                            || / _--=> preempt-depth
#                            ||| /     delay
#           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
#              | |       |   ||||       |         |
        rpc.nfsd-4710  [002] ....    48.615382: xs_local_setup_socket: RPC:       worker connecting xprt ffff8800d9018000 via AF_LOCAL to /var/run/rpcbind.sock
        rpc.nfsd-4710  [002] ....    48.615393: xs_local_setup_socket: RPC:       xprt ffff8800d9018000 connected to /var/run/rpcbind.sock
        rpc.nfsd-4710  [002] ....    48.615394: xs_setup_local: RPC:       set up xprt to /var/run/rpcbind.sock via AF_LOCAL
        rpc.nfsd-4710  [002] ....    48.615399: xprt_create_transport: RPC:       created transport ffff8800d9018000 with 65536 slots
        rpc.nfsd-4710  [002] ....    48.615416: xprt_alloc_slot: RPC:     1 reserved req ffff8800db829600 xid cb06d5e8
        rpc.nfsd-4710  [002] ....    48.615419: xprt_prepare_transmit: RPC:     1 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.615420: xprt_transmit: RPC:     1 xprt_transmit(44)
        rpc.nfsd-4710  [002] ....    48.615424: xs_local_send_request: RPC:       xs_local_send_request(44) = 0
        rpc.nfsd-4710  [002] ....    48.615425: xprt_transmit: RPC:     1 xmit complete
         rpcbind-1829  [003] ..s.    48.615503: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [003] ..s.    48.615506: xprt_complete_rqst: RPC:     1 xid cb06d5e8 complete (24 bytes received)
        rpc.nfsd-4710  [002] ....    48.615556: xprt_release: RPC:     1 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.615568: xprt_alloc_slot: RPC:     2 reserved req ffff8800db829600 xid cc06d5e8
        rpc.nfsd-4710  [002] ....    48.615569: xprt_prepare_transmit: RPC:     2 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.615569: xprt_transmit: RPC:     2 xprt_transmit(44)
        rpc.nfsd-4710  [002] ....    48.615578: xs_local_send_request: RPC:       xs_local_send_request(44) = 0
        rpc.nfsd-4710  [002] ....    48.615578: xprt_transmit: RPC:     2 xmit complete
         rpcbind-1829  [003] ..s.    48.615643: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [003] ..s.    48.615645: xprt_complete_rqst: RPC:     2 xid cc06d5e8 complete (24 bytes received)
        rpc.nfsd-4710  [002] ....    48.615695: xprt_release: RPC:     2 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.615698: xprt_alloc_slot: RPC:     3 reserved req ffff8800db829600 xid cd06d5e8
        rpc.nfsd-4710  [002] ....    48.615699: xprt_prepare_transmit: RPC:     3 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.615700: xprt_transmit: RPC:     3 xprt_transmit(68)
        rpc.nfsd-4710  [002] ....    48.615706: xs_local_send_request: RPC:       xs_local_send_request(68) = 0
        rpc.nfsd-4710  [002] ....    48.615707: xprt_transmit: RPC:     3 xmit complete
         rpcbind-1829  [003] ..s.    48.615784: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [003] ..s.    48.615785: xprt_complete_rqst: RPC:     3 xid cd06d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.615830: xprt_release: RPC:     3 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.615833: xprt_alloc_slot: RPC:     4 reserved req ffff8800db829600 xid ce06d5e8
        rpc.nfsd-4710  [002] ....    48.615834: xprt_prepare_transmit: RPC:     4 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.615835: xprt_transmit: RPC:     4 xprt_transmit(68)
        rpc.nfsd-4710  [002] ....    48.615841: xs_local_send_request: RPC:       xs_local_send_request(68) = 0
        rpc.nfsd-4710  [002] ....    48.615841: xprt_transmit: RPC:     4 xmit complete
         rpcbind-1829  [003] ..s.    48.615892: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [003] ..s.    48.615894: xprt_complete_rqst: RPC:     4 xid ce06d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.615958: xprt_release: RPC:     4 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.615961: xprt_alloc_slot: RPC:     5 reserved req ffff8800db829600 xid cf06d5e8
        rpc.nfsd-4710  [002] ....    48.615962: xprt_prepare_transmit: RPC:     5 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.615966: xprt_transmit: RPC:     5 xprt_transmit(68)
        rpc.nfsd-4710  [002] ....    48.615971: xs_local_send_request: RPC:       xs_local_send_request(68) = 0
        rpc.nfsd-4710  [002] ....    48.615972: xprt_transmit: RPC:     5 xmit complete
         rpcbind-1829  [003] ..s.    48.616011: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [003] ..s.    48.616012: xprt_complete_rqst: RPC:     5 xid cf06d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.616057: xprt_release: RPC:     5 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.616059: xprt_alloc_slot: RPC:     6 reserved req ffff8800db829600 xid d006d5e8
        rpc.nfsd-4710  [002] ....    48.616060: xprt_prepare_transmit: RPC:     6 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.616061: xprt_transmit: RPC:     6 xprt_transmit(68)
        rpc.nfsd-4710  [002] ....    48.616065: xs_local_send_request: RPC:       xs_local_send_request(68) = 0
        rpc.nfsd-4710  [002] ....    48.616066: xprt_transmit: RPC:     6 xmit complete
         rpcbind-1829  [003] ..s.    48.616117: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [003] ..s.    48.616119: xprt_complete_rqst: RPC:     6 xid d006d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.616163: xprt_release: RPC:     6 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.616165: xprt_alloc_slot: RPC:     7 reserved req ffff8800db829600 xid d106d5e8
        rpc.nfsd-4710  [002] ....    48.616166: xprt_prepare_transmit: RPC:     7 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.616167: xprt_transmit: RPC:     7 xprt_transmit(68)
        rpc.nfsd-4710  [002] ....    48.616172: xs_local_send_request: RPC:       xs_local_send_request(68) = 0
        rpc.nfsd-4710  [002] ....    48.616172: xprt_transmit: RPC:     7 xmit complete
         rpcbind-1829  [000] ..s.    48.616247: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.616249: xprt_complete_rqst: RPC:     7 xid d106d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.616289: xprt_release: RPC:     7 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.616296: xprt_alloc_slot: RPC:     8 reserved req ffff8800db829600 xid d206d5e8
        rpc.nfsd-4710  [002] ....    48.616297: xprt_prepare_transmit: RPC:     8 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.616298: xprt_transmit: RPC:     8 xprt_transmit(84)
        rpc.nfsd-4710  [002] ....    48.616302: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-4710  [002] ....    48.616302: xprt_transmit: RPC:     8 xmit complete
         rpcbind-1829  [000] ..s.    48.616324: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.616326: xprt_complete_rqst: RPC:     8 xid d206d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.616340: xprt_release: RPC:     8 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.616346: xprt_alloc_slot: RPC:     9 reserved req ffff8800db829600 xid d306d5e8
        rpc.nfsd-4710  [002] ....    48.616347: xprt_prepare_transmit: RPC:     9 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.616348: xprt_transmit: RPC:     9 xprt_transmit(84)
        rpc.nfsd-4710  [002] ....    48.616355: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-4710  [002] ....    48.616355: xprt_transmit: RPC:     9 xmit complete
         rpcbind-1829  [000] ..s.    48.616380: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.616381: xprt_complete_rqst: RPC:     9 xid d306d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.616392: xprt_release: RPC:     9 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.616396: xprt_alloc_slot: RPC:    10 reserved req ffff8800db829600 xid d406d5e8
        rpc.nfsd-4710  [002] ....    48.616396: xprt_prepare_transmit: RPC:    10 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.616397: xprt_transmit: RPC:    10 xprt_transmit(84)
        rpc.nfsd-4710  [002] ....    48.616401: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-4710  [002] ....    48.616402: xprt_transmit: RPC:    10 xmit complete
         rpcbind-1829  [000] ..s.    48.616421: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.616422: xprt_complete_rqst: RPC:    10 xid d406d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.616433: xprt_release: RPC:    10 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.616436: xprt_alloc_slot: RPC:    11 reserved req ffff8800db829600 xid d506d5e8
        rpc.nfsd-4710  [002] ....    48.616437: xprt_prepare_transmit: RPC:    11 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.616438: xprt_transmit: RPC:    11 xprt_transmit(84)
        rpc.nfsd-4710  [002] ....    48.616442: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-4710  [002] ....    48.616442: xprt_transmit: RPC:    11 xmit complete
         rpcbind-1829  [000] ..s.    48.616461: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.616462: xprt_complete_rqst: RPC:    11 xid d506d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.616473: xprt_release: RPC:    11 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.616476: xprt_alloc_slot: RPC:    12 reserved req ffff8800db829600 xid d606d5e8
        rpc.nfsd-4710  [002] ....    48.616477: xprt_prepare_transmit: RPC:    12 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.616478: xprt_transmit: RPC:    12 xprt_transmit(84)
        rpc.nfsd-4710  [002] ....    48.616482: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-4710  [002] ....    48.616482: xprt_transmit: RPC:    12 xmit complete
         rpcbind-1829  [000] ..s.    48.616501: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.616502: xprt_complete_rqst: RPC:    12 xid d606d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.616511: xprt_release: RPC:    12 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.616535: xprt_alloc_slot: RPC:    13 reserved req ffff8800db829600 xid d706d5e8
        rpc.nfsd-4710  [002] ....    48.616536: xprt_prepare_transmit: RPC:    13 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.616537: xprt_transmit: RPC:    13 xprt_transmit(84)
        rpc.nfsd-4710  [002] ....    48.616541: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-4710  [002] ....    48.616541: xprt_transmit: RPC:    13 xmit complete
         rpcbind-1829  [000] ..s.    48.616580: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.616581: xprt_complete_rqst: RPC:    13 xid d706d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.616589: xprt_release: RPC:    13 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.616591: xprt_alloc_slot: RPC:    14 reserved req ffff8800db829600 xid d806d5e8
        rpc.nfsd-4710  [002] ....    48.616591: xprt_prepare_transmit: RPC:    14 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.616592: xprt_transmit: RPC:    14 xprt_transmit(84)
        rpc.nfsd-4710  [002] ....    48.616594: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-4710  [002] ....    48.616595: xprt_transmit: RPC:    14 xmit complete
         rpcbind-1829  [000] ..s.    48.616610: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.616611: xprt_complete_rqst: RPC:    14 xid d806d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.616618: xprt_release: RPC:    14 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.616619: xprt_alloc_slot: RPC:    15 reserved req ffff8800db829600 xid d906d5e8
        rpc.nfsd-4710  [002] ....    48.616620: xprt_prepare_transmit: RPC:    15 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.616620: xprt_transmit: RPC:    15 xprt_transmit(84)
        rpc.nfsd-4710  [002] ....    48.616623: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-4710  [002] ....    48.616623: xprt_transmit: RPC:    15 xmit complete
         rpcbind-1829  [000] ..s.    48.616635: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.616637: xprt_complete_rqst: RPC:    15 xid d906d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.616643: xprt_release: RPC:    15 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.616644: xprt_alloc_slot: RPC:    16 reserved req ffff8800db829600 xid da06d5e8
        rpc.nfsd-4710  [002] ....    48.616645: xprt_prepare_transmit: RPC:    16 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.616645: xprt_transmit: RPC:    16 xprt_transmit(84)
        rpc.nfsd-4710  [002] ....    48.616648: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-4710  [002] ....    48.616648: xprt_transmit: RPC:    16 xmit complete
         rpcbind-1829  [000] ..s.    48.616658: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.616659: xprt_complete_rqst: RPC:    16 xid da06d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.616665: xprt_release: RPC:    16 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.616666: xprt_alloc_slot: RPC:    17 reserved req ffff8800db829600 xid db06d5e8
        rpc.nfsd-4710  [002] ....    48.616667: xprt_prepare_transmit: RPC:    17 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.616667: xprt_transmit: RPC:    17 xprt_transmit(84)
        rpc.nfsd-4710  [002] ....    48.616670: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-4710  [002] ....    48.616670: xprt_transmit: RPC:    17 xmit complete
         rpcbind-1829  [000] ..s.    48.616680: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.616681: xprt_complete_rqst: RPC:    17 xid db06d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.616687: xprt_release: RPC:    17 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.617251: xprt_alloc_slot: RPC:    18 reserved req ffff8800db829600 xid dc06d5e8
        rpc.nfsd-4710  [002] ....    48.617252: xprt_prepare_transmit: RPC:    18 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.617252: xprt_transmit: RPC:    18 xprt_transmit(80)
        rpc.nfsd-4710  [002] ....    48.617256: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-4710  [002] ....    48.617257: xprt_transmit: RPC:    18 xmit complete
         rpcbind-1829  [000] ..s.    48.617265: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.617265: xprt_complete_rqst: RPC:    18 xid dc06d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.617272: xprt_release: RPC:    18 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.617274: xprt_alloc_slot: RPC:    19 reserved req ffff8800db829600 xid dd06d5e8
        rpc.nfsd-4710  [002] ....    48.617274: xprt_prepare_transmit: RPC:    19 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.617275: xprt_transmit: RPC:    19 xprt_transmit(80)
        rpc.nfsd-4710  [002] ....    48.617277: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-4710  [002] ....    48.617277: xprt_transmit: RPC:    19 xmit complete
         rpcbind-1829  [000] ..s.    48.617287: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.617288: xprt_complete_rqst: RPC:    19 xid dd06d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.617293: xprt_release: RPC:    19 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.617295: xprt_alloc_slot: RPC:    20 reserved req ffff8800db829600 xid de06d5e8
        rpc.nfsd-4710  [002] ....    48.617295: xprt_prepare_transmit: RPC:    20 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.617296: xprt_transmit: RPC:    20 xprt_transmit(80)
        rpc.nfsd-4710  [002] ....    48.617298: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-4710  [002] ....    48.617298: xprt_transmit: RPC:    20 xmit complete
         rpcbind-1829  [000] ..s.    48.617307: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.617308: xprt_complete_rqst: RPC:    20 xid de06d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.617314: xprt_release: RPC:    20 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.617315: xprt_alloc_slot: RPC:    21 reserved req ffff8800db829600 xid df06d5e8
        rpc.nfsd-4710  [002] ....    48.617316: xprt_prepare_transmit: RPC:    21 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.617316: xprt_transmit: RPC:    21 xprt_transmit(80)
        rpc.nfsd-4710  [002] ....    48.617318: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-4710  [002] ....    48.617319: xprt_transmit: RPC:    21 xmit complete
         rpcbind-1829  [000] ..s.    48.617328: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.617329: xprt_complete_rqst: RPC:    21 xid df06d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.617334: xprt_release: RPC:    21 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.617336: xprt_alloc_slot: RPC:    22 reserved req ffff8800db829600 xid e006d5e8
        rpc.nfsd-4710  [002] ....    48.617336: xprt_prepare_transmit: RPC:    22 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.617336: xprt_transmit: RPC:    22 xprt_transmit(80)
        rpc.nfsd-4710  [002] ....    48.617339: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-4710  [002] ....    48.617339: xprt_transmit: RPC:    22 xmit complete
         rpcbind-1829  [000] ..s.    48.617348: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.617349: xprt_complete_rqst: RPC:    22 xid e006d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.617354: xprt_release: RPC:    22 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.617370: xprt_alloc_slot: RPC:    23 reserved req ffff8800db829600 xid e106d5e8
        rpc.nfsd-4710  [002] ....    48.617371: xprt_prepare_transmit: RPC:    23 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.617371: xprt_transmit: RPC:    23 xprt_transmit(80)
        rpc.nfsd-4710  [002] ....    48.617374: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-4710  [002] ....    48.617374: xprt_transmit: RPC:    23 xmit complete
         rpcbind-1829  [000] ..s.    48.617382: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.617383: xprt_complete_rqst: RPC:    23 xid e106d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.617389: xprt_release: RPC:    23 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.617390: xprt_alloc_slot: RPC:    24 reserved req ffff8800db829600 xid e206d5e8
        rpc.nfsd-4710  [002] ....    48.617391: xprt_prepare_transmit: RPC:    24 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.617391: xprt_transmit: RPC:    24 xprt_transmit(80)
        rpc.nfsd-4710  [002] ....    48.617394: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-4710  [002] ....    48.617394: xprt_transmit: RPC:    24 xmit complete
         rpcbind-1829  [000] ..s.    48.617403: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.617404: xprt_complete_rqst: RPC:    24 xid e206d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.617410: xprt_release: RPC:    24 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.617411: xprt_alloc_slot: RPC:    25 reserved req ffff8800db829600 xid e306d5e8
        rpc.nfsd-4710  [002] ....    48.617412: xprt_prepare_transmit: RPC:    25 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.617412: xprt_transmit: RPC:    25 xprt_transmit(80)
        rpc.nfsd-4710  [002] ....    48.617414: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-4710  [002] ....    48.617414: xprt_transmit: RPC:    25 xmit complete
         rpcbind-1829  [000] ..s.    48.617424: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.617424: xprt_complete_rqst: RPC:    25 xid e306d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.617430: xprt_release: RPC:    25 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.617431: xprt_alloc_slot: RPC:    26 reserved req ffff8800db829600 xid e406d5e8
        rpc.nfsd-4710  [002] ....    48.617432: xprt_prepare_transmit: RPC:    26 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.617432: xprt_transmit: RPC:    26 xprt_transmit(80)
        rpc.nfsd-4710  [002] ....    48.617434: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-4710  [002] ....    48.617434: xprt_transmit: RPC:    26 xmit complete
         rpcbind-1829  [000] ..s.    48.617444: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.617444: xprt_complete_rqst: RPC:    26 xid e406d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.617450: xprt_release: RPC:    26 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.617451: xprt_alloc_slot: RPC:    27 reserved req ffff8800db829600 xid e506d5e8
        rpc.nfsd-4710  [002] ....    48.617452: xprt_prepare_transmit: RPC:    27 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.617452: xprt_transmit: RPC:    27 xprt_transmit(80)
        rpc.nfsd-4710  [002] ....    48.617454: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-4710  [002] ....    48.617455: xprt_transmit: RPC:    27 xmit complete
         rpcbind-1829  [000] ..s.    48.617464: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.617464: xprt_complete_rqst: RPC:    27 xid e506d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.617470: xprt_release: RPC:    27 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.617575: xprt_alloc_slot: RPC:    28 reserved req ffff8800db829600 xid e606d5e8
        rpc.nfsd-4710  [002] ....    48.617576: xprt_prepare_transmit: RPC:    28 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.617577: xprt_transmit: RPC:    28 xprt_transmit(68)
        rpc.nfsd-4710  [002] ....    48.617580: xs_local_send_request: RPC:       xs_local_send_request(68) = 0
        rpc.nfsd-4710  [002] ....    48.617580: xprt_transmit: RPC:    28 xmit complete
         rpcbind-1829  [000] ..s.    48.617590: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.617591: xprt_complete_rqst: RPC:    28 xid e606d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.617598: xprt_release: RPC:    28 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.617599: xprt_alloc_slot: RPC:    29 reserved req ffff8800db829600 xid e706d5e8
        rpc.nfsd-4710  [002] ....    48.617599: xprt_prepare_transmit: RPC:    29 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.617600: xprt_transmit: RPC:    29 xprt_transmit(68)
        rpc.nfsd-4710  [002] ....    48.617602: xs_local_send_request: RPC:       xs_local_send_request(68) = 0
        rpc.nfsd-4710  [002] ....    48.617602: xprt_transmit: RPC:    29 xmit complete
         rpcbind-1829  [000] ..s.    48.617614: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.617615: xprt_complete_rqst: RPC:    29 xid e706d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.617621: xprt_release: RPC:    29 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.617622: xprt_alloc_slot: RPC:    30 reserved req ffff8800db829600 xid e806d5e8
        rpc.nfsd-4710  [002] ....    48.617622: xprt_prepare_transmit: RPC:    30 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.617623: xprt_transmit: RPC:    30 xprt_transmit(68)
        rpc.nfsd-4710  [002] ....    48.617625: xs_local_send_request: RPC:       xs_local_send_request(68) = 0
        rpc.nfsd-4710  [002] ....    48.617625: xprt_transmit: RPC:    30 xmit complete
         rpcbind-1829  [000] ..s.    48.617634: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.617635: xprt_complete_rqst: RPC:    30 xid e806d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.617640: xprt_release: RPC:    30 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.617647: xprt_alloc_slot: RPC:    31 reserved req ffff8800db829600 xid e906d5e8
        rpc.nfsd-4710  [002] ....    48.617647: xprt_prepare_transmit: RPC:    31 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.617648: xprt_transmit: RPC:    31 xprt_transmit(88)
        rpc.nfsd-4710  [002] ....    48.617650: xs_local_send_request: RPC:       xs_local_send_request(88) = 0
        rpc.nfsd-4710  [002] ....    48.617650: xprt_transmit: RPC:    31 xmit complete
         rpcbind-1829  [000] ..s.    48.617659: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.617660: xprt_complete_rqst: RPC:    31 xid e906d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.617666: xprt_release: RPC:    31 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.617668: xprt_alloc_slot: RPC:    32 reserved req ffff8800db829600 xid ea06d5e8
        rpc.nfsd-4710  [002] ....    48.617668: xprt_prepare_transmit: RPC:    32 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.617669: xprt_transmit: RPC:    32 xprt_transmit(88)
        rpc.nfsd-4710  [002] ....    48.617671: xs_local_send_request: RPC:       xs_local_send_request(88) = 0
        rpc.nfsd-4710  [002] ....    48.617671: xprt_transmit: RPC:    32 xmit complete
         rpcbind-1829  [000] ..s.    48.617681: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.617681: xprt_complete_rqst: RPC:    32 xid ea06d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.617687: xprt_release: RPC:    32 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.617688: xprt_alloc_slot: RPC:    33 reserved req ffff8800db829600 xid eb06d5e8
        rpc.nfsd-4710  [002] ....    48.617689: xprt_prepare_transmit: RPC:    33 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.617689: xprt_transmit: RPC:    33 xprt_transmit(88)
        rpc.nfsd-4710  [002] ....    48.617692: xs_local_send_request: RPC:       xs_local_send_request(88) = 0
        rpc.nfsd-4710  [002] ....    48.617692: xprt_transmit: RPC:    33 xmit complete
         rpcbind-1829  [000] ..s.    48.617701: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.617702: xprt_complete_rqst: RPC:    33 xid eb06d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.617707: xprt_release: RPC:    33 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.617716: xprt_alloc_slot: RPC:    34 reserved req ffff8800db829600 xid ec06d5e8
        rpc.nfsd-4710  [002] ....    48.617716: xprt_prepare_transmit: RPC:    34 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.617717: xprt_transmit: RPC:    34 xprt_transmit(88)
        rpc.nfsd-4710  [002] ....    48.617719: xs_local_send_request: RPC:       xs_local_send_request(88) = 0
        rpc.nfsd-4710  [002] ....    48.617719: xprt_transmit: RPC:    34 xmit complete
         rpcbind-1829  [000] ..s.    48.617728: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.617729: xprt_complete_rqst: RPC:    34 xid ec06d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.617735: xprt_release: RPC:    34 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.617737: xprt_alloc_slot: RPC:    35 reserved req ffff8800db829600 xid ed06d5e8
        rpc.nfsd-4710  [002] ....    48.617737: xprt_prepare_transmit: RPC:    35 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.617738: xprt_transmit: RPC:    35 xprt_transmit(88)
        rpc.nfsd-4710  [002] ....    48.617740: xs_local_send_request: RPC:       xs_local_send_request(88) = 0
        rpc.nfsd-4710  [002] ....    48.617740: xprt_transmit: RPC:    35 xmit complete
         rpcbind-1829  [000] ..s.    48.617749: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.617750: xprt_complete_rqst: RPC:    35 xid ed06d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.617756: xprt_release: RPC:    35 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.617757: xprt_alloc_slot: RPC:    36 reserved req ffff8800db829600 xid ee06d5e8
        rpc.nfsd-4710  [002] ....    48.617758: xprt_prepare_transmit: RPC:    36 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.617758: xprt_transmit: RPC:    36 xprt_transmit(88)
        rpc.nfsd-4710  [002] ....    48.617760: xs_local_send_request: RPC:       xs_local_send_request(88) = 0
        rpc.nfsd-4710  [002] ....    48.617760: xprt_transmit: RPC:    36 xmit complete
         rpcbind-1829  [000] ..s.    48.617770: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.617770: xprt_complete_rqst: RPC:    36 xid ee06d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.617776: xprt_release: RPC:    36 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.617782: xprt_alloc_slot: RPC:    37 reserved req ffff8800db829600 xid ef06d5e8
        rpc.nfsd-4710  [002] ....    48.617782: xprt_prepare_transmit: RPC:    37 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.617783: xprt_transmit: RPC:    37 xprt_transmit(84)
        rpc.nfsd-4710  [002] ....    48.617785: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-4710  [002] ....    48.617785: xprt_transmit: RPC:    37 xmit complete
         rpcbind-1829  [000] ..s.    48.617794: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.617795: xprt_complete_rqst: RPC:    37 xid ef06d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.617800: xprt_release: RPC:    37 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.617802: xprt_alloc_slot: RPC:    38 reserved req ffff8800db829600 xid f006d5e8
        rpc.nfsd-4710  [002] ....    48.617802: xprt_prepare_transmit: RPC:    38 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.617803: xprt_transmit: RPC:    38 xprt_transmit(84)
        rpc.nfsd-4710  [002] ....    48.617805: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-4710  [002] ....    48.617805: xprt_transmit: RPC:    38 xmit complete
         rpcbind-1829  [000] ..s.    48.617814: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.617815: xprt_complete_rqst: RPC:    38 xid f006d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.617821: xprt_release: RPC:    38 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.617822: xprt_alloc_slot: RPC:    39 reserved req ffff8800db829600 xid f106d5e8
        rpc.nfsd-4710  [002] ....    48.617822: xprt_prepare_transmit: RPC:    39 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.617823: xprt_transmit: RPC:    39 xprt_transmit(84)
        rpc.nfsd-4710  [002] ....    48.617825: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-4710  [002] ....    48.617825: xprt_transmit: RPC:    39 xmit complete
         rpcbind-1829  [000] ..s.    48.617834: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.617835: xprt_complete_rqst: RPC:    39 xid f106d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.617840: xprt_release: RPC:    39 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.617848: xprt_alloc_slot: RPC:    40 reserved req ffff8800db829600 xid f206d5e8
        rpc.nfsd-4710  [002] ....    48.617849: xprt_prepare_transmit: RPC:    40 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.617849: xprt_transmit: RPC:    40 xprt_transmit(80)
        rpc.nfsd-4710  [002] ....    48.617851: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-4710  [002] ....    48.617851: xprt_transmit: RPC:    40 xmit complete
         rpcbind-1829  [000] ..s.    48.617860: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.617861: xprt_complete_rqst: RPC:    40 xid f206d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.617867: xprt_release: RPC:    40 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.617868: xprt_alloc_slot: RPC:    41 reserved req ffff8800db829600 xid f306d5e8
        rpc.nfsd-4710  [002] ....    48.617869: xprt_prepare_transmit: RPC:    41 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.617869: xprt_transmit: RPC:    41 xprt_transmit(80)
        rpc.nfsd-4710  [002] ....    48.617871: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-4710  [002] ....    48.617871: xprt_transmit: RPC:    41 xmit complete
         rpcbind-1829  [000] ..s.    48.617881: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.617881: xprt_complete_rqst: RPC:    41 xid f306d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.617887: xprt_release: RPC:    41 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.617888: xprt_alloc_slot: RPC:    42 reserved req ffff8800db829600 xid f406d5e8
        rpc.nfsd-4710  [002] ....    48.617889: xprt_prepare_transmit: RPC:    42 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.617889: xprt_transmit: RPC:    42 xprt_transmit(80)
        rpc.nfsd-4710  [002] ....    48.617891: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-4710  [002] ....    48.617891: xprt_transmit: RPC:    42 xmit complete
         rpcbind-1829  [000] ..s.    48.617901: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.617902: xprt_complete_rqst: RPC:    42 xid f406d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.617907: xprt_release: RPC:    42 release request ffff8800db829600
          <idle>-0     [003] ..s.    57.765235: inet_bind_hash: add 2049
          <idle>-0     [003] ..s.    57.765278: <stack trace>
 => __inet_inherit_port
 => tcp_v4_syn_recv_sock
 => tcp_check_req
 => tcp_v4_do_rcv
 => tcp_v4_rcv
 => ip_local_deliver_finish
 => __netif_receive_skb_core
 => netif_receive_skb
 => netif_receive_skb_internal
 => br_handle_frame_finish
 => csum_block_add_ext
 => __skb_gro_checksum_complete
 => br_handle_frame
 => br_handle_frame
 => __netif_receive_skb_core
 => read_tsc
 => netif_receive_skb_internal
 => napi_gro_receive
 => e1000_clean_rx_irq
 => e1000_clean
 => try_to_wake_up
 => net_rx_action
 => add_interrupt_randomness
 => __do_softirq
 => ack_ioapic_level
 => irq_exit
 => do_IRQ
 => ret_from_intr
 => cpuidle_enter_state
 => cpuidle_enter_state
 => cpu_startup_entry
 => start_secondary
   kworker/u32:7-118   [000] ....    57.767716: xs_setup_tcp: RPC:       set up xprt to 192.168.23.22 (port 55201) via tcp
   kworker/u32:7-118   [000] ....    57.767726: xprt_create_transport: RPC:       created transport ffff88040b251000 with 65536 slots
    kworker/0:1H-128   [000] ....    57.767758: xprt_alloc_slot: RPC:    43 reserved req ffff8804033c3800 xid f4185658
    kworker/0:1H-128   [000] ....    57.767764: xprt_connect: RPC:    43 xprt_connect xprt ffff88040b251000 is not connected
    kworker/0:1H-128   [000] ....    57.767767: xs_connect: RPC:       xs_connect scheduled xprt ffff88040b251000
    kworker/0:1H-128   [000] ..s.    57.767780: inet_csk_get_port: snum 684
    kworker/0:1H-128   [000] ..s.    57.767792: <stack trace>
 => inet_addr_type
 => inet_bind
 => xs_bind
 => sock_setsockopt
 => __sock_create
 => xs_create_sock.isra.18
 => xs_tcp_setup_socket
 => process_one_work
 => worker_thread
 => worker_thread
 => kthread
 => kthread
 => ret_from_fork
 => kthread
    kworker/0:1H-128   [000] ..s.    57.767793: inet_bind_hash: add 684
    kworker/0:1H-128   [000] ..s.    57.767801: <stack trace>
 => inet_csk_get_port
 => inet_addr_type
 => inet_bind
 => xs_bind
 => sock_setsockopt
 => __sock_create
 => xs_create_sock.isra.18
 => xs_tcp_setup_socket
 => process_one_work
 => worker_thread
 => worker_thread
 => kthread
 => kthread
 => ret_from_fork
 => kthread
    kworker/0:1H-128   [000] ....    57.767803: xs_bind: RPC:       xs_bind 4.136.255.255:684: ok (0)
    kworker/0:1H-128   [000] ....    57.767805: xs_tcp_setup_socket: RPC:       worker connecting xprt ffff88040b251000 via tcp to 192.168.23.22 (port 55201)
    kworker/0:1H-128   [000] ....    57.767841: xs_tcp_setup_socket: RPC:       ffff88040b251000 connect status 115 connected 0 sock state 2
          <idle>-0     [003] ..s.    57.768178: xs_tcp_state_change: RPC:       xs_tcp_state_change client ffff88040b251000...
          <idle>-0     [003] ..s.    57.768180: xs_tcp_state_change: RPC:       state 1 conn 0 dead 0 zapped 1 sk_shutdown 0
    kworker/3:1H-127   [003] ....    57.768216: xprt_connect_status: RPC:    43 xprt_connect_status: retrying
    kworker/3:1H-127   [003] ....    57.768218: xprt_prepare_transmit: RPC:    43 xprt_prepare_transmit
    kworker/3:1H-127   [003] ....    57.768229: xprt_transmit: RPC:    43 xprt_transmit(72)
    kworker/3:1H-127   [003] ....    57.768245: xs_tcp_send_request: RPC:       xs_tcp_send_request(72) = 0
    kworker/3:1H-127   [003] ....    57.768246: xprt_transmit: RPC:    43 xmit complete
          <idle>-0     [003] ..s.    57.768621: xs_tcp_data_ready: RPC:       xs_tcp_data_ready...
          <idle>-0     [003] ..s.    57.768622: xs_tcp_data_recv: RPC:       xs_tcp_data_recv started
          <idle>-0     [003] ..s.    57.768624: xs_tcp_data_recv: RPC:       reading TCP record fragment of length 24
          <idle>-0     [003] ..s.    57.768625: xs_tcp_data_recv: RPC:       reading XID (4 bytes)
          <idle>-0     [003] ..s.    57.768626: xs_tcp_data_recv: RPC:       reading request with XID f4185658
          <idle>-0     [003] ..s.    57.768627: xs_tcp_data_recv: RPC:       reading CALL/REPLY flag (4 bytes)
          <idle>-0     [003] ..s.    57.768628: xs_tcp_data_recv: RPC:       read reply XID f4185658
          <idle>-0     [003] ..s.    57.768630: xs_tcp_data_recv: RPC:       XID f4185658 read 16 bytes
          <idle>-0     [003] ..s.    57.768631: xs_tcp_data_recv: RPC:       xprt = ffff88040b251000, tcp_copied = 24, tcp_offset = 24, tcp_reclen = 24
          <idle>-0     [003] ..s.    57.768632: xprt_complete_rqst: RPC:    43 xid f4185658 complete (24 bytes received)
          <idle>-0     [003] .Ns.    57.768637: xs_tcp_data_recv: RPC:       xs_tcp_data_recv done
    kworker/3:1H-127   [003] ....    57.768656: xprt_release: RPC:    43 release request ffff8804033c3800
          <idle>-0     [003] ..s.    96.518571: inet_bind_hash: add 10993
          <idle>-0     [003] ..s.    96.518612: <stack trace>
 => __inet_inherit_port
 => tcp_v4_syn_recv_sock
 => tcp_check_req
 => tcp_v4_do_rcv
 => tcp_v4_rcv
 => ip_local_deliver_finish
 => __netif_receive_skb_core
 => netif_receive_skb
 => netif_receive_skb_internal
 => br_handle_frame_finish
 => br_handle_frame
 => br_handle_frame
 => __netif_receive_skb_core
 => read_tsc
 => netif_receive_skb_internal
 => napi_gro_receive
 => e1000_clean_rx_irq
 => e1000_clean
 => try_to_wake_up
 => net_rx_action
 => add_interrupt_randomness
 => __do_softirq
 => ack_ioapic_level
 => irq_exit
 => do_IRQ
 => ret_from_intr
 => cpuidle_enter_state
 => cpuidle_enter_state
 => cpu_startup_entry
 => start_secondary
          <idle>-0     [003] ..s.   138.868936: inet_bind_hash: add 22
          <idle>-0     [003] ..s.   138.868978: <stack trace>
 => __inet_inherit_port
 => tcp_v4_syn_recv_sock
 => tcp_check_req
 => tcp_v4_do_rcv
 => tcp_v4_rcv
 => ip_local_deliver_finish
 => __netif_receive_skb_core
 => netif_receive_skb
 => netif_receive_skb_internal
 => br_handle_frame_finish
 => br_handle_frame
 => br_handle_frame
 => __netif_receive_skb_core
 => read_tsc
 => netif_receive_skb_internal
 => napi_gro_receive
 => e1000_clean_rx_irq
 => e1000_clean
 => net_rx_action
 => add_interrupt_randomness
 => __do_softirq
 => ack_ioapic_level
 => irq_exit
 => do_IRQ
 => ret_from_intr
 => cpuidle_enter_state
 => cpuidle_enter_state
 => cpu_startup_entry
 => start_secondary
          <idle>-0     [003] ..s.   144.103035: inet_bind_hash: add 22
          <idle>-0     [003] ..s.   144.103078: <stack trace>
 => __inet_inherit_port
 => tcp_v4_syn_recv_sock
 => tcp_check_req
 => tcp_v4_do_rcv
 => tcp_v4_rcv
 => ip_local_deliver_finish
 => __netif_receive_skb_core
 => netif_receive_skb
 => netif_receive_skb_internal
 => br_handle_frame_finish
 => br_handle_frame
 => br_handle_frame
 => __netif_receive_skb_core
 => read_tsc
 => netif_receive_skb_internal
 => napi_gro_receive
 => e1000_clean_rx_irq
 => e1000_clean
 => try_to_wake_up
 => net_rx_action
 => add_interrupt_randomness
 => __do_softirq
 => ack_ioapic_level
 => irq_exit
 => do_IRQ
 => ret_from_intr
 => cpuidle_enter_state
 => cpuidle_enter_state
 => cpu_startup_entry
 => start_secondary
          <idle>-0     [003] ..s.   174.758123: inet_bind_hash: add 10993
          <idle>-0     [003] ..s.   174.758151: <stack trace>
 => __inet_inherit_port
 => tcp_v4_syn_recv_sock
 => tcp_check_req
 => tcp_v4_do_rcv
 => tcp_v4_rcv
 => ip_local_deliver_finish
 => __netif_receive_skb_core
 => netif_receive_skb
 => netif_receive_skb_internal
 => br_handle_frame_finish
 => try_to_wake_up
 => ipt_do_table
 => br_handle_frame
 => br_handle_frame
 => __netif_receive_skb_core
 => read_tsc
 => netif_receive_skb_internal
 => napi_gro_receive
 => e1000_clean_rx_irq
 => e1000_clean
 => net_rx_action
 => add_interrupt_randomness
 => __do_softirq
 => ack_ioapic_level
 => irq_exit
 => do_IRQ
 => ret_from_intr
 => cpuidle_enter_state
 => cpuidle_enter_state
 => cpu_startup_entry
 => start_secondary
          <idle>-0     [003] ..s.   216.551651: inet_bind_hash: add 10993
          <idle>-0     [003] ..s.   216.551689: <stack trace>
 => __inet_inherit_port
 => tcp_v4_syn_recv_sock
 => tcp_check_req
 => tcp_v4_do_rcv
 => tcp_v4_rcv
 => ip_local_deliver_finish
 => __netif_receive_skb_core
 => netif_receive_skb
 => netif_receive_skb_internal
 => br_handle_frame_finish
 => br_handle_frame
 => br_handle_frame
 => __netif_receive_skb_core
 => read_tsc
 => netif_receive_skb_internal
 => napi_gro_receive
 => e1000_clean_rx_irq
 => e1000_clean
 => try_to_wake_up
 => net_rx_action
 => add_interrupt_randomness
 => __do_softirq
 => ack_ioapic_level
 => irq_exit
 => do_IRQ
 => ret_from_intr
 => cpuidle_enter_state
 => cpuidle_enter_state
 => cpu_startup_entry
 => start_secondary
    kworker/3:1H-127   [003] ..s.   358.800834: xs_tcp_state_change: RPC:       xs_tcp_state_change client ffff88040b251000...
    kworker/3:1H-127   [003] ..s.   358.800837: xs_tcp_state_change: RPC:       state 4 conn 1 dead 0 zapped 1 sk_shutdown 3
          <idle>-0     [003] ..s.   358.801180: xs_tcp_state_change: RPC:       xs_tcp_state_change client ffff88040b251000...
          <idle>-0     [003] ..s.   358.801182: xs_tcp_state_change: RPC:       state 5 conn 0 dead 0 zapped 1 sk_shutdown 3
          <idle>-0     [003] ..s.   358.801204: xs_tcp_state_change: RPC:       xs_tcp_state_change client ffff88040b251000...
          <idle>-0     [003] ..s.   358.801205: xs_tcp_state_change: RPC:       state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
          <idle>-0     [003] ..s.   358.801206: xprt_disconnect_done: RPC:       disconnected transport ffff88040b251000
          <idle>-0     [003] ..s.   358.801207: xs_tcp_state_change: RPC:       xs_tcp_state_change client ffff88040b251000...
          <idle>-0     [003] ..s.   358.801208: xs_tcp_state_change: RPC:       state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
          <idle>-0     [003] ..s.   358.801208: xprt_disconnect_done: RPC:       disconnected transport ffff88040b251000
          <idle>-0     [003] ..s.   358.801209: xs_tcp_data_ready: RPC:       xs_tcp_data_ready...
          <idle>-0     [003] ..s.   476.855136: inet_bind_hash: add 10993
          <idle>-0     [003] ..s.   476.855172: <stack trace>
 => __inet_inherit_port
 => tcp_v4_syn_recv_sock
 => tcp_check_req
 => tcp_v4_do_rcv
 => tcp_v4_rcv
 => ip_local_deliver_finish
 => __netif_receive_skb_core
 => netif_receive_skb
 => netif_receive_skb_internal
 => br_handle_frame_finish
 => br_handle_frame
 => br_handle_frame
 => __netif_receive_skb_core
 => read_tsc
 => netif_receive_skb_internal
 => napi_gro_receive
 => e1000_clean_rx_irq
 => e1000_clean
 => net_rx_action
 => add_interrupt_randomness
 => __do_softirq
 => ack_ioapic_level
 => irq_exit
 => do_IRQ
 => ret_from_intr
 => cpuidle_enter_state
 => cpuidle_enter_state
 => cpu_startup_entry
 => start_secondary

-- Steve


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [REGRESSION] NFS is creating a hidden port (left over from xs_bind() )
@ 2015-06-12 15:34       ` Steven Rostedt
  0 siblings, 0 replies; 77+ messages in thread
From: Steven Rostedt @ 2015-06-12 15:34 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: Trond Myklebust, Anna Schumaker, Linux NFS Mailing List,
	Linux Network Devel Mailing List, LKML, Andrew Morton

On Fri, 12 Jun 2015 07:40:35 -0700
Eric Dumazet wrote:

> Strange, because the usual way to not have time-wait is to use SO_LINGER
> with linger=0
> 
> And apparently xs_tcp_finish_connecting() has this :
> 
>                 sock_reset_flag(sk, SOCK_LINGER);
>                 tcp_sk(sk)->linger2 = 0;
> 
> Are you sure SO_REUSEADDR was not the thing you wanted ?
> 
> Steven, have you tried kmemleak ?
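
On the SO_LINGER point above: a zero-timeout linger makes close() send an
RST instead of going through FIN / TIME_WAIT, whereas SO_REUSEADDR only
relaxes the "address already in use" check at bind() time.  A minimal
userspace sketch of the difference (illustration only, not taken from
xprtsock.c -- the RPC client would do the equivalent on its kernel socket):

        /* Hedged illustration of the two options discussed above. */
        #include <sys/socket.h>
        #include <unistd.h>

        int main(void)
        {
                int fd = socket(AF_INET, SOCK_STREAM, 0);

                /* SO_LINGER with a zero timeout: close() sends an RST and
                 * the socket never enters TIME_WAIT. */
                struct linger l = { .l_onoff = 1, .l_linger = 0 };
                setsockopt(fd, SOL_SOCKET, SO_LINGER, &l, sizeof(l));

                /* SO_REUSEADDR: lets a later bind() reuse the address while
                 * an old socket sits in TIME_WAIT; it does not prevent
                 * TIME_WAIT itself. */
                int one = 1;
                setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));

                close(fd);
                return 0;
        }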

Nope, I haven't tried kmemleak, and again, I'm hesitant to add too much
debugging. This is my main server (build server, ssh server, web server,
mail server, proxy server, irc server, etc.).

However, I did convert the dprintk() calls in xprtsock.c and xprt.c into
trace_printk() calls, and I reran the test; the trace output follows below.
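A sketch of the conversion, assuming it was done as a local macro override
at the top of each file rather than by editing every call site (the real
dprintk() in the sunrpc debug header is more involved than this):

        /* Debug hack only: route the existing dprintk() calls in
         * net/sunrpc/xprtsock.c and net/sunrpc/xprt.c into the ftrace
         * ring buffer instead of the normal RPC debug/printk path. */
        #include <linux/kernel.h>       /* trace_printk() */

        #undef dprintk
        #define dprintk(fmt, ...)       trace_printk(fmt, ##__VA_ARGS__)

That way the converted dprintk() output lands in the same ftrace ring
buffer as the inet_bind_hash events, so everything shows up interleaved
in a single trace.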

(port 684 was the bad one this time)

# tracer: nop
#
# entries-in-buffer/entries-written: 396/396   #P:4
#
#                              _-----=> irqs-off
#                             / _----=> need-resched
#                            | / _---=> hardirq/softirq
#                            || / _--=> preempt-depth
#                            ||| /     delay
#           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
#              | |       |   ||||       |         |
        rpc.nfsd-4710  [002] ....    48.615382: xs_local_setup_socket: RPC:       worker connecting xprt ffff8800d9018000 via AF_LOCAL to /var/run/rpcbind.sock
        rpc.nfsd-4710  [002] ....    48.615393: xs_local_setup_socket: RPC:       xprt ffff8800d9018000 connected to /var/run/rpcbind.sock
        rpc.nfsd-4710  [002] ....    48.615394: xs_setup_local: RPC:       set up xprt to /var/run/rpcbind.sock via AF_LOCAL
        rpc.nfsd-4710  [002] ....    48.615399: xprt_create_transport: RPC:       created transport ffff8800d9018000 with 65536 slots
        rpc.nfsd-4710  [002] ....    48.615416: xprt_alloc_slot: RPC:     1 reserved req ffff8800db829600 xid cb06d5e8
        rpc.nfsd-4710  [002] ....    48.615419: xprt_prepare_transmit: RPC:     1 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.615420: xprt_transmit: RPC:     1 xprt_transmit(44)
        rpc.nfsd-4710  [002] ....    48.615424: xs_local_send_request: RPC:       xs_local_send_request(44) = 0
        rpc.nfsd-4710  [002] ....    48.615425: xprt_transmit: RPC:     1 xmit complete
         rpcbind-1829  [003] ..s.    48.615503: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [003] ..s.    48.615506: xprt_complete_rqst: RPC:     1 xid cb06d5e8 complete (24 bytes received)
        rpc.nfsd-4710  [002] ....    48.615556: xprt_release: RPC:     1 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.615568: xprt_alloc_slot: RPC:     2 reserved req ffff8800db829600 xid cc06d5e8
        rpc.nfsd-4710  [002] ....    48.615569: xprt_prepare_transmit: RPC:     2 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.615569: xprt_transmit: RPC:     2 xprt_transmit(44)
        rpc.nfsd-4710  [002] ....    48.615578: xs_local_send_request: RPC:       xs_local_send_request(44) = 0
        rpc.nfsd-4710  [002] ....    48.615578: xprt_transmit: RPC:     2 xmit complete
         rpcbind-1829  [003] ..s.    48.615643: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [003] ..s.    48.615645: xprt_complete_rqst: RPC:     2 xid cc06d5e8 complete (24 bytes received)
        rpc.nfsd-4710  [002] ....    48.615695: xprt_release: RPC:     2 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.615698: xprt_alloc_slot: RPC:     3 reserved req ffff8800db829600 xid cd06d5e8
        rpc.nfsd-4710  [002] ....    48.615699: xprt_prepare_transmit: RPC:     3 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.615700: xprt_transmit: RPC:     3 xprt_transmit(68)
        rpc.nfsd-4710  [002] ....    48.615706: xs_local_send_request: RPC:       xs_local_send_request(68) = 0
        rpc.nfsd-4710  [002] ....    48.615707: xprt_transmit: RPC:     3 xmit complete
         rpcbind-1829  [003] ..s.    48.615784: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [003] ..s.    48.615785: xprt_complete_rqst: RPC:     3 xid cd06d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.615830: xprt_release: RPC:     3 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.615833: xprt_alloc_slot: RPC:     4 reserved req ffff8800db829600 xid ce06d5e8
        rpc.nfsd-4710  [002] ....    48.615834: xprt_prepare_transmit: RPC:     4 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.615835: xprt_transmit: RPC:     4 xprt_transmit(68)
        rpc.nfsd-4710  [002] ....    48.615841: xs_local_send_request: RPC:       xs_local_send_request(68) = 0
        rpc.nfsd-4710  [002] ....    48.615841: xprt_transmit: RPC:     4 xmit complete
         rpcbind-1829  [003] ..s.    48.615892: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [003] ..s.    48.615894: xprt_complete_rqst: RPC:     4 xid ce06d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.615958: xprt_release: RPC:     4 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.615961: xprt_alloc_slot: RPC:     5 reserved req ffff8800db829600 xid cf06d5e8
        rpc.nfsd-4710  [002] ....    48.615962: xprt_prepare_transmit: RPC:     5 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.615966: xprt_transmit: RPC:     5 xprt_transmit(68)
        rpc.nfsd-4710  [002] ....    48.615971: xs_local_send_request: RPC:       xs_local_send_request(68) = 0
        rpc.nfsd-4710  [002] ....    48.615972: xprt_transmit: RPC:     5 xmit complete
         rpcbind-1829  [003] ..s.    48.616011: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [003] ..s.    48.616012: xprt_complete_rqst: RPC:     5 xid cf06d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.616057: xprt_release: RPC:     5 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.616059: xprt_alloc_slot: RPC:     6 reserved req ffff8800db829600 xid d006d5e8
        rpc.nfsd-4710  [002] ....    48.616060: xprt_prepare_transmit: RPC:     6 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.616061: xprt_transmit: RPC:     6 xprt_transmit(68)
        rpc.nfsd-4710  [002] ....    48.616065: xs_local_send_request: RPC:       xs_local_send_request(68) = 0
        rpc.nfsd-4710  [002] ....    48.616066: xprt_transmit: RPC:     6 xmit complete
         rpcbind-1829  [003] ..s.    48.616117: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [003] ..s.    48.616119: xprt_complete_rqst: RPC:     6 xid d006d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.616163: xprt_release: RPC:     6 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.616165: xprt_alloc_slot: RPC:     7 reserved req ffff8800db829600 xid d106d5e8
        rpc.nfsd-4710  [002] ....    48.616166: xprt_prepare_transmit: RPC:     7 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.616167: xprt_transmit: RPC:     7 xprt_transmit(68)
        rpc.nfsd-4710  [002] ....    48.616172: xs_local_send_request: RPC:       xs_local_send_request(68) = 0
        rpc.nfsd-4710  [002] ....    48.616172: xprt_transmit: RPC:     7 xmit complete
         rpcbind-1829  [000] ..s.    48.616247: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.616249: xprt_complete_rqst: RPC:     7 xid d106d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.616289: xprt_release: RPC:     7 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.616296: xprt_alloc_slot: RPC:     8 reserved req ffff8800db829600 xid d206d5e8
        rpc.nfsd-4710  [002] ....    48.616297: xprt_prepare_transmit: RPC:     8 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.616298: xprt_transmit: RPC:     8 xprt_transmit(84)
        rpc.nfsd-4710  [002] ....    48.616302: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-4710  [002] ....    48.616302: xprt_transmit: RPC:     8 xmit complete
         rpcbind-1829  [000] ..s.    48.616324: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.616326: xprt_complete_rqst: RPC:     8 xid d206d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.616340: xprt_release: RPC:     8 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.616346: xprt_alloc_slot: RPC:     9 reserved req ffff8800db829600 xid d306d5e8
        rpc.nfsd-4710  [002] ....    48.616347: xprt_prepare_transmit: RPC:     9 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.616348: xprt_transmit: RPC:     9 xprt_transmit(84)
        rpc.nfsd-4710  [002] ....    48.616355: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-4710  [002] ....    48.616355: xprt_transmit: RPC:     9 xmit complete
         rpcbind-1829  [000] ..s.    48.616380: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.616381: xprt_complete_rqst: RPC:     9 xid d306d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.616392: xprt_release: RPC:     9 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.616396: xprt_alloc_slot: RPC:    10 reserved req ffff8800db829600 xid d406d5e8
        rpc.nfsd-4710  [002] ....    48.616396: xprt_prepare_transmit: RPC:    10 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.616397: xprt_transmit: RPC:    10 xprt_transmit(84)
        rpc.nfsd-4710  [002] ....    48.616401: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-4710  [002] ....    48.616402: xprt_transmit: RPC:    10 xmit complete
         rpcbind-1829  [000] ..s.    48.616421: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.616422: xprt_complete_rqst: RPC:    10 xid d406d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.616433: xprt_release: RPC:    10 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.616436: xprt_alloc_slot: RPC:    11 reserved req ffff8800db829600 xid d506d5e8
        rpc.nfsd-4710  [002] ....    48.616437: xprt_prepare_transmit: RPC:    11 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.616438: xprt_transmit: RPC:    11 xprt_transmit(84)
        rpc.nfsd-4710  [002] ....    48.616442: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-4710  [002] ....    48.616442: xprt_transmit: RPC:    11 xmit complete
         rpcbind-1829  [000] ..s.    48.616461: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.616462: xprt_complete_rqst: RPC:    11 xid d506d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.616473: xprt_release: RPC:    11 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.616476: xprt_alloc_slot: RPC:    12 reserved req ffff8800db829600 xid d606d5e8
        rpc.nfsd-4710  [002] ....    48.616477: xprt_prepare_transmit: RPC:    12 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.616478: xprt_transmit: RPC:    12 xprt_transmit(84)
        rpc.nfsd-4710  [002] ....    48.616482: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-4710  [002] ....    48.616482: xprt_transmit: RPC:    12 xmit complete
         rpcbind-1829  [000] ..s.    48.616501: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.616502: xprt_complete_rqst: RPC:    12 xid d606d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.616511: xprt_release: RPC:    12 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.616535: xprt_alloc_slot: RPC:    13 reserved req ffff8800db829600 xid d706d5e8
        rpc.nfsd-4710  [002] ....    48.616536: xprt_prepare_transmit: RPC:    13 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.616537: xprt_transmit: RPC:    13 xprt_transmit(84)
        rpc.nfsd-4710  [002] ....    48.616541: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-4710  [002] ....    48.616541: xprt_transmit: RPC:    13 xmit complete
         rpcbind-1829  [000] ..s.    48.616580: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.616581: xprt_complete_rqst: RPC:    13 xid d706d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.616589: xprt_release: RPC:    13 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.616591: xprt_alloc_slot: RPC:    14 reserved req ffff8800db829600 xid d806d5e8
        rpc.nfsd-4710  [002] ....    48.616591: xprt_prepare_transmit: RPC:    14 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.616592: xprt_transmit: RPC:    14 xprt_transmit(84)
        rpc.nfsd-4710  [002] ....    48.616594: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-4710  [002] ....    48.616595: xprt_transmit: RPC:    14 xmit complete
         rpcbind-1829  [000] ..s.    48.616610: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.616611: xprt_complete_rqst: RPC:    14 xid d806d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.616618: xprt_release: RPC:    14 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.616619: xprt_alloc_slot: RPC:    15 reserved req ffff8800db829600 xid d906d5e8
        rpc.nfsd-4710  [002] ....    48.616620: xprt_prepare_transmit: RPC:    15 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.616620: xprt_transmit: RPC:    15 xprt_transmit(84)
        rpc.nfsd-4710  [002] ....    48.616623: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-4710  [002] ....    48.616623: xprt_transmit: RPC:    15 xmit complete
         rpcbind-1829  [000] ..s.    48.616635: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.616637: xprt_complete_rqst: RPC:    15 xid d906d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.616643: xprt_release: RPC:    15 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.616644: xprt_alloc_slot: RPC:    16 reserved req ffff8800db829600 xid da06d5e8
        rpc.nfsd-4710  [002] ....    48.616645: xprt_prepare_transmit: RPC:    16 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.616645: xprt_transmit: RPC:    16 xprt_transmit(84)
        rpc.nfsd-4710  [002] ....    48.616648: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-4710  [002] ....    48.616648: xprt_transmit: RPC:    16 xmit complete
         rpcbind-1829  [000] ..s.    48.616658: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.616659: xprt_complete_rqst: RPC:    16 xid da06d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.616665: xprt_release: RPC:    16 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.616666: xprt_alloc_slot: RPC:    17 reserved req ffff8800db829600 xid db06d5e8
        rpc.nfsd-4710  [002] ....    48.616667: xprt_prepare_transmit: RPC:    17 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.616667: xprt_transmit: RPC:    17 xprt_transmit(84)
        rpc.nfsd-4710  [002] ....    48.616670: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-4710  [002] ....    48.616670: xprt_transmit: RPC:    17 xmit complete
         rpcbind-1829  [000] ..s.    48.616680: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.616681: xprt_complete_rqst: RPC:    17 xid db06d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.616687: xprt_release: RPC:    17 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.617251: xprt_alloc_slot: RPC:    18 reserved req ffff8800db829600 xid dc06d5e8
        rpc.nfsd-4710  [002] ....    48.617252: xprt_prepare_transmit: RPC:    18 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.617252: xprt_transmit: RPC:    18 xprt_transmit(80)
        rpc.nfsd-4710  [002] ....    48.617256: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-4710  [002] ....    48.617257: xprt_transmit: RPC:    18 xmit complete
         rpcbind-1829  [000] ..s.    48.617265: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.617265: xprt_complete_rqst: RPC:    18 xid dc06d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.617272: xprt_release: RPC:    18 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.617274: xprt_alloc_slot: RPC:    19 reserved req ffff8800db829600 xid dd06d5e8
        rpc.nfsd-4710  [002] ....    48.617274: xprt_prepare_transmit: RPC:    19 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.617275: xprt_transmit: RPC:    19 xprt_transmit(80)
        rpc.nfsd-4710  [002] ....    48.617277: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-4710  [002] ....    48.617277: xprt_transmit: RPC:    19 xmit complete
         rpcbind-1829  [000] ..s.    48.617287: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.617288: xprt_complete_rqst: RPC:    19 xid dd06d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.617293: xprt_release: RPC:    19 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.617295: xprt_alloc_slot: RPC:    20 reserved req ffff8800db829600 xid de06d5e8
        rpc.nfsd-4710  [002] ....    48.617295: xprt_prepare_transmit: RPC:    20 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.617296: xprt_transmit: RPC:    20 xprt_transmit(80)
        rpc.nfsd-4710  [002] ....    48.617298: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-4710  [002] ....    48.617298: xprt_transmit: RPC:    20 xmit complete
         rpcbind-1829  [000] ..s.    48.617307: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.617308: xprt_complete_rqst: RPC:    20 xid de06d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.617314: xprt_release: RPC:    20 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.617315: xprt_alloc_slot: RPC:    21 reserved req ffff8800db829600 xid df06d5e8
        rpc.nfsd-4710  [002] ....    48.617316: xprt_prepare_transmit: RPC:    21 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.617316: xprt_transmit: RPC:    21 xprt_transmit(80)
        rpc.nfsd-4710  [002] ....    48.617318: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-4710  [002] ....    48.617319: xprt_transmit: RPC:    21 xmit complete
         rpcbind-1829  [000] ..s.    48.617328: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.617329: xprt_complete_rqst: RPC:    21 xid df06d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.617334: xprt_release: RPC:    21 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.617336: xprt_alloc_slot: RPC:    22 reserved req ffff8800db829600 xid e006d5e8
        rpc.nfsd-4710  [002] ....    48.617336: xprt_prepare_transmit: RPC:    22 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.617336: xprt_transmit: RPC:    22 xprt_transmit(80)
        rpc.nfsd-4710  [002] ....    48.617339: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-4710  [002] ....    48.617339: xprt_transmit: RPC:    22 xmit complete
         rpcbind-1829  [000] ..s.    48.617348: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.617349: xprt_complete_rqst: RPC:    22 xid e006d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.617354: xprt_release: RPC:    22 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.617370: xprt_alloc_slot: RPC:    23 reserved req ffff8800db829600 xid e106d5e8
        rpc.nfsd-4710  [002] ....    48.617371: xprt_prepare_transmit: RPC:    23 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.617371: xprt_transmit: RPC:    23 xprt_transmit(80)
        rpc.nfsd-4710  [002] ....    48.617374: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-4710  [002] ....    48.617374: xprt_transmit: RPC:    23 xmit complete
         rpcbind-1829  [000] ..s.    48.617382: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.617383: xprt_complete_rqst: RPC:    23 xid e106d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.617389: xprt_release: RPC:    23 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.617390: xprt_alloc_slot: RPC:    24 reserved req ffff8800db829600 xid e206d5e8
        rpc.nfsd-4710  [002] ....    48.617391: xprt_prepare_transmit: RPC:    24 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.617391: xprt_transmit: RPC:    24 xprt_transmit(80)
        rpc.nfsd-4710  [002] ....    48.617394: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-4710  [002] ....    48.617394: xprt_transmit: RPC:    24 xmit complete
         rpcbind-1829  [000] ..s.    48.617403: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.617404: xprt_complete_rqst: RPC:    24 xid e206d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.617410: xprt_release: RPC:    24 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.617411: xprt_alloc_slot: RPC:    25 reserved req ffff8800db829600 xid e306d5e8
        rpc.nfsd-4710  [002] ....    48.617412: xprt_prepare_transmit: RPC:    25 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.617412: xprt_transmit: RPC:    25 xprt_transmit(80)
        rpc.nfsd-4710  [002] ....    48.617414: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-4710  [002] ....    48.617414: xprt_transmit: RPC:    25 xmit complete
         rpcbind-1829  [000] ..s.    48.617424: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.617424: xprt_complete_rqst: RPC:    25 xid e306d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.617430: xprt_release: RPC:    25 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.617431: xprt_alloc_slot: RPC:    26 reserved req ffff8800db829600 xid e406d5e8
        rpc.nfsd-4710  [002] ....    48.617432: xprt_prepare_transmit: RPC:    26 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.617432: xprt_transmit: RPC:    26 xprt_transmit(80)
        rpc.nfsd-4710  [002] ....    48.617434: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-4710  [002] ....    48.617434: xprt_transmit: RPC:    26 xmit complete
         rpcbind-1829  [000] ..s.    48.617444: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.617444: xprt_complete_rqst: RPC:    26 xid e406d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.617450: xprt_release: RPC:    26 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.617451: xprt_alloc_slot: RPC:    27 reserved req ffff8800db829600 xid e506d5e8
        rpc.nfsd-4710  [002] ....    48.617452: xprt_prepare_transmit: RPC:    27 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.617452: xprt_transmit: RPC:    27 xprt_transmit(80)
        rpc.nfsd-4710  [002] ....    48.617454: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-4710  [002] ....    48.617455: xprt_transmit: RPC:    27 xmit complete
         rpcbind-1829  [000] ..s.    48.617464: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.617464: xprt_complete_rqst: RPC:    27 xid e506d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.617470: xprt_release: RPC:    27 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.617575: xprt_alloc_slot: RPC:    28 reserved req ffff8800db829600 xid e606d5e8
        rpc.nfsd-4710  [002] ....    48.617576: xprt_prepare_transmit: RPC:    28 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.617577: xprt_transmit: RPC:    28 xprt_transmit(68)
        rpc.nfsd-4710  [002] ....    48.617580: xs_local_send_request: RPC:       xs_local_send_request(68) = 0
        rpc.nfsd-4710  [002] ....    48.617580: xprt_transmit: RPC:    28 xmit complete
         rpcbind-1829  [000] ..s.    48.617590: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.617591: xprt_complete_rqst: RPC:    28 xid e606d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.617598: xprt_release: RPC:    28 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.617599: xprt_alloc_slot: RPC:    29 reserved req ffff8800db829600 xid e706d5e8
        rpc.nfsd-4710  [002] ....    48.617599: xprt_prepare_transmit: RPC:    29 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.617600: xprt_transmit: RPC:    29 xprt_transmit(68)
        rpc.nfsd-4710  [002] ....    48.617602: xs_local_send_request: RPC:       xs_local_send_request(68) = 0
        rpc.nfsd-4710  [002] ....    48.617602: xprt_transmit: RPC:    29 xmit complete
         rpcbind-1829  [000] ..s.    48.617614: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.617615: xprt_complete_rqst: RPC:    29 xid e706d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.617621: xprt_release: RPC:    29 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.617622: xprt_alloc_slot: RPC:    30 reserved req ffff8800db829600 xid e806d5e8
        rpc.nfsd-4710  [002] ....    48.617622: xprt_prepare_transmit: RPC:    30 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.617623: xprt_transmit: RPC:    30 xprt_transmit(68)
        rpc.nfsd-4710  [002] ....    48.617625: xs_local_send_request: RPC:       xs_local_send_request(68) = 0
        rpc.nfsd-4710  [002] ....    48.617625: xprt_transmit: RPC:    30 xmit complete
         rpcbind-1829  [000] ..s.    48.617634: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.617635: xprt_complete_rqst: RPC:    30 xid e806d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.617640: xprt_release: RPC:    30 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.617647: xprt_alloc_slot: RPC:    31 reserved req ffff8800db829600 xid e906d5e8
        rpc.nfsd-4710  [002] ....    48.617647: xprt_prepare_transmit: RPC:    31 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.617648: xprt_transmit: RPC:    31 xprt_transmit(88)
        rpc.nfsd-4710  [002] ....    48.617650: xs_local_send_request: RPC:       xs_local_send_request(88) = 0
        rpc.nfsd-4710  [002] ....    48.617650: xprt_transmit: RPC:    31 xmit complete
         rpcbind-1829  [000] ..s.    48.617659: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.617660: xprt_complete_rqst: RPC:    31 xid e906d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.617666: xprt_release: RPC:    31 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.617668: xprt_alloc_slot: RPC:    32 reserved req ffff8800db829600 xid ea06d5e8
        rpc.nfsd-4710  [002] ....    48.617668: xprt_prepare_transmit: RPC:    32 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.617669: xprt_transmit: RPC:    32 xprt_transmit(88)
        rpc.nfsd-4710  [002] ....    48.617671: xs_local_send_request: RPC:       xs_local_send_request(88) = 0
        rpc.nfsd-4710  [002] ....    48.617671: xprt_transmit: RPC:    32 xmit complete
         rpcbind-1829  [000] ..s.    48.617681: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.617681: xprt_complete_rqst: RPC:    32 xid ea06d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.617687: xprt_release: RPC:    32 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.617688: xprt_alloc_slot: RPC:    33 reserved req ffff8800db829600 xid eb06d5e8
        rpc.nfsd-4710  [002] ....    48.617689: xprt_prepare_transmit: RPC:    33 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.617689: xprt_transmit: RPC:    33 xprt_transmit(88)
        rpc.nfsd-4710  [002] ....    48.617692: xs_local_send_request: RPC:       xs_local_send_request(88) = 0
        rpc.nfsd-4710  [002] ....    48.617692: xprt_transmit: RPC:    33 xmit complete
         rpcbind-1829  [000] ..s.    48.617701: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.617702: xprt_complete_rqst: RPC:    33 xid eb06d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.617707: xprt_release: RPC:    33 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.617716: xprt_alloc_slot: RPC:    34 reserved req ffff8800db829600 xid ec06d5e8
        rpc.nfsd-4710  [002] ....    48.617716: xprt_prepare_transmit: RPC:    34 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.617717: xprt_transmit: RPC:    34 xprt_transmit(88)
        rpc.nfsd-4710  [002] ....    48.617719: xs_local_send_request: RPC:       xs_local_send_request(88) = 0
        rpc.nfsd-4710  [002] ....    48.617719: xprt_transmit: RPC:    34 xmit complete
         rpcbind-1829  [000] ..s.    48.617728: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.617729: xprt_complete_rqst: RPC:    34 xid ec06d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.617735: xprt_release: RPC:    34 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.617737: xprt_alloc_slot: RPC:    35 reserved req ffff8800db829600 xid ed06d5e8
        rpc.nfsd-4710  [002] ....    48.617737: xprt_prepare_transmit: RPC:    35 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.617738: xprt_transmit: RPC:    35 xprt_transmit(88)
        rpc.nfsd-4710  [002] ....    48.617740: xs_local_send_request: RPC:       xs_local_send_request(88) = 0
        rpc.nfsd-4710  [002] ....    48.617740: xprt_transmit: RPC:    35 xmit complete
         rpcbind-1829  [000] ..s.    48.617749: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.617750: xprt_complete_rqst: RPC:    35 xid ed06d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.617756: xprt_release: RPC:    35 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.617757: xprt_alloc_slot: RPC:    36 reserved req ffff8800db829600 xid ee06d5e8
        rpc.nfsd-4710  [002] ....    48.617758: xprt_prepare_transmit: RPC:    36 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.617758: xprt_transmit: RPC:    36 xprt_transmit(88)
        rpc.nfsd-4710  [002] ....    48.617760: xs_local_send_request: RPC:       xs_local_send_request(88) = 0
        rpc.nfsd-4710  [002] ....    48.617760: xprt_transmit: RPC:    36 xmit complete
         rpcbind-1829  [000] ..s.    48.617770: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.617770: xprt_complete_rqst: RPC:    36 xid ee06d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.617776: xprt_release: RPC:    36 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.617782: xprt_alloc_slot: RPC:    37 reserved req ffff8800db829600 xid ef06d5e8
        rpc.nfsd-4710  [002] ....    48.617782: xprt_prepare_transmit: RPC:    37 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.617783: xprt_transmit: RPC:    37 xprt_transmit(84)
        rpc.nfsd-4710  [002] ....    48.617785: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-4710  [002] ....    48.617785: xprt_transmit: RPC:    37 xmit complete
         rpcbind-1829  [000] ..s.    48.617794: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.617795: xprt_complete_rqst: RPC:    37 xid ef06d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.617800: xprt_release: RPC:    37 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.617802: xprt_alloc_slot: RPC:    38 reserved req ffff8800db829600 xid f006d5e8
        rpc.nfsd-4710  [002] ....    48.617802: xprt_prepare_transmit: RPC:    38 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.617803: xprt_transmit: RPC:    38 xprt_transmit(84)
        rpc.nfsd-4710  [002] ....    48.617805: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-4710  [002] ....    48.617805: xprt_transmit: RPC:    38 xmit complete
         rpcbind-1829  [000] ..s.    48.617814: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.617815: xprt_complete_rqst: RPC:    38 xid f006d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.617821: xprt_release: RPC:    38 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.617822: xprt_alloc_slot: RPC:    39 reserved req ffff8800db829600 xid f106d5e8
        rpc.nfsd-4710  [002] ....    48.617822: xprt_prepare_transmit: RPC:    39 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.617823: xprt_transmit: RPC:    39 xprt_transmit(84)
        rpc.nfsd-4710  [002] ....    48.617825: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-4710  [002] ....    48.617825: xprt_transmit: RPC:    39 xmit complete
         rpcbind-1829  [000] ..s.    48.617834: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.617835: xprt_complete_rqst: RPC:    39 xid f106d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.617840: xprt_release: RPC:    39 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.617848: xprt_alloc_slot: RPC:    40 reserved req ffff8800db829600 xid f206d5e8
        rpc.nfsd-4710  [002] ....    48.617849: xprt_prepare_transmit: RPC:    40 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.617849: xprt_transmit: RPC:    40 xprt_transmit(80)
        rpc.nfsd-4710  [002] ....    48.617851: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-4710  [002] ....    48.617851: xprt_transmit: RPC:    40 xmit complete
         rpcbind-1829  [000] ..s.    48.617860: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.617861: xprt_complete_rqst: RPC:    40 xid f206d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.617867: xprt_release: RPC:    40 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.617868: xprt_alloc_slot: RPC:    41 reserved req ffff8800db829600 xid f306d5e8
        rpc.nfsd-4710  [002] ....    48.617869: xprt_prepare_transmit: RPC:    41 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.617869: xprt_transmit: RPC:    41 xprt_transmit(80)
        rpc.nfsd-4710  [002] ....    48.617871: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-4710  [002] ....    48.617871: xprt_transmit: RPC:    41 xmit complete
         rpcbind-1829  [000] ..s.    48.617881: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.617881: xprt_complete_rqst: RPC:    41 xid f306d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.617887: xprt_release: RPC:    41 release request ffff8800db829600
        rpc.nfsd-4710  [002] ....    48.617888: xprt_alloc_slot: RPC:    42 reserved req ffff8800db829600 xid f406d5e8
        rpc.nfsd-4710  [002] ....    48.617889: xprt_prepare_transmit: RPC:    42 xprt_prepare_transmit
        rpc.nfsd-4710  [002] ....    48.617889: xprt_transmit: RPC:    42 xprt_transmit(80)
        rpc.nfsd-4710  [002] ....    48.617891: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-4710  [002] ....    48.617891: xprt_transmit: RPC:    42 xmit complete
         rpcbind-1829  [000] ..s.    48.617901: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [000] ..s.    48.617902: xprt_complete_rqst: RPC:    42 xid f406d5e8 complete (28 bytes received)
        rpc.nfsd-4710  [002] ....    48.617907: xprt_release: RPC:    42 release request ffff8800db829600
          <idle>-0     [003] ..s.    57.765235: inet_bind_hash: add 2049
          <idle>-0     [003] ..s.    57.765278: <stack trace>
 => __inet_inherit_port
 => tcp_v4_syn_recv_sock
 => tcp_check_req
 => tcp_v4_do_rcv
 => tcp_v4_rcv
 => ip_local_deliver_finish
 => __netif_receive_skb_core
 => netif_receive_skb
 => netif_receive_skb_internal
 => br_handle_frame_finish
 => csum_block_add_ext
 => __skb_gro_checksum_complete
 => br_handle_frame
 => br_handle_frame
 => __netif_receive_skb_core
 => read_tsc
 => netif_receive_skb_internal
 => napi_gro_receive
 => e1000_clean_rx_irq
 => e1000_clean
 => try_to_wake_up
 => net_rx_action
 => add_interrupt_randomness
 => __do_softirq
 => ack_ioapic_level
 => irq_exit
 => do_IRQ
 => ret_from_intr
 => cpuidle_enter_state
 => cpuidle_enter_state
 => cpu_startup_entry
 => start_secondary
   kworker/u32:7-118   [000] ....    57.767716: xs_setup_tcp: RPC:       set up xprt to 192.168.23.22 (port 55201) via tcp
   kworker/u32:7-118   [000] ....    57.767726: xprt_create_transport: RPC:       created transport ffff88040b251000 with 65536 slots
    kworker/0:1H-128   [000] ....    57.767758: xprt_alloc_slot: RPC:    43 reserved req ffff8804033c3800 xid f4185658
    kworker/0:1H-128   [000] ....    57.767764: xprt_connect: RPC:    43 xprt_connect xprt ffff88040b251000 is not connected
    kworker/0:1H-128   [000] ....    57.767767: xs_connect: RPC:       xs_connect scheduled xprt ffff88040b251000
    kworker/0:1H-128   [000] ..s.    57.767780: inet_csk_get_port: snum 684
    kworker/0:1H-128   [000] ..s.    57.767792: <stack trace>
 => inet_addr_type
 => inet_bind
 => xs_bind
 => sock_setsockopt
 => __sock_create
 => xs_create_sock.isra.18
 => xs_tcp_setup_socket
 => process_one_work
 => worker_thread
 => worker_thread
 => kthread
 => kthread
 => ret_from_fork
 => kthread
    kworker/0:1H-128   [000] ..s.    57.767793: inet_bind_hash: add 684
    kworker/0:1H-128   [000] ..s.    57.767801: <stack trace>
 => inet_csk_get_port
 => inet_addr_type
 => inet_bind
 => xs_bind
 => sock_setsockopt
 => __sock_create
 => xs_create_sock.isra.18
 => xs_tcp_setup_socket
 => process_one_work
 => worker_thread
 => worker_thread
 => kthread
 => kthread
 => ret_from_fork
 => kthread
    kworker/0:1H-128   [000] ....    57.767803: xs_bind: RPC:       xs_bind 4.136.255.255:684: ok (0)
    kworker/0:1H-128   [000] ....    57.767805: xs_tcp_setup_socket: RPC:       worker connecting xprt ffff88040b251000 via tcp to 192.168.23.22 (port 55201)
    kworker/0:1H-128   [000] ....    57.767841: xs_tcp_setup_socket: RPC:       ffff88040b251000 connect status 115 connected 0 sock state 2
          <idle>-0     [003] ..s.    57.768178: xs_tcp_state_change: RPC:       xs_tcp_state_change client ffff88040b251000...
          <idle>-0     [003] ..s.    57.768180: xs_tcp_state_change: RPC:       state 1 conn 0 dead 0 zapped 1 sk_shutdown 0
    kworker/3:1H-127   [003] ....    57.768216: xprt_connect_status: RPC:    43 xprt_connect_status: retrying
    kworker/3:1H-127   [003] ....    57.768218: xprt_prepare_transmit: RPC:    43 xprt_prepare_transmit
    kworker/3:1H-127   [003] ....    57.768229: xprt_transmit: RPC:    43 xprt_transmit(72)
    kworker/3:1H-127   [003] ....    57.768245: xs_tcp_send_request: RPC:       xs_tcp_send_request(72) = 0
    kworker/3:1H-127   [003] ....    57.768246: xprt_transmit: RPC:    43 xmit complete
          <idle>-0     [003] ..s.    57.768621: xs_tcp_data_ready: RPC:       xs_tcp_data_ready...
          <idle>-0     [003] ..s.    57.768622: xs_tcp_data_recv: RPC:       xs_tcp_data_recv started
          <idle>-0     [003] ..s.    57.768624: xs_tcp_data_recv: RPC:       reading TCP record fragment of length 24
          <idle>-0     [003] ..s.    57.768625: xs_tcp_data_recv: RPC:       reading XID (4 bytes)
          <idle>-0     [003] ..s.    57.768626: xs_tcp_data_recv: RPC:       reading request with XID f4185658
          <idle>-0     [003] ..s.    57.768627: xs_tcp_data_recv: RPC:       reading CALL/REPLY flag (4 bytes)
          <idle>-0     [003] ..s.    57.768628: xs_tcp_data_recv: RPC:       read reply XID f4185658
          <idle>-0     [003] ..s.    57.768630: xs_tcp_data_recv: RPC:       XID f4185658 read 16 bytes
          <idle>-0     [003] ..s.    57.768631: xs_tcp_data_recv: RPC:       xprt = ffff88040b251000, tcp_copied = 24, tcp_offset = 24, tcp_reclen = 24
          <idle>-0     [003] ..s.    57.768632: xprt_complete_rqst: RPC:    43 xid f4185658 complete (24 bytes received)
          <idle>-0     [003] .Ns.    57.768637: xs_tcp_data_recv: RPC:       xs_tcp_data_recv done
    kworker/3:1H-127   [003] ....    57.768656: xprt_release: RPC:    43 release request ffff8804033c3800
          <idle>-0     [003] ..s.    96.518571: inet_bind_hash: add 10993
          <idle>-0     [003] ..s.    96.518612: <stack trace>
 => __inet_inherit_port
 => tcp_v4_syn_recv_sock
 => tcp_check_req
 => tcp_v4_do_rcv
 => tcp_v4_rcv
 => ip_local_deliver_finish
 => __netif_receive_skb_core
 => netif_receive_skb
 => netif_receive_skb_internal
 => br_handle_frame_finish
 => br_handle_frame
 => br_handle_frame
 => __netif_receive_skb_core
 => read_tsc
 => netif_receive_skb_internal
 => napi_gro_receive
 => e1000_clean_rx_irq
 => e1000_clean
 => try_to_wake_up
 => net_rx_action
 => add_interrupt_randomness
 => __do_softirq
 => ack_ioapic_level
 => irq_exit
 => do_IRQ
 => ret_from_intr
 => cpuidle_enter_state
 => cpuidle_enter_state
 => cpu_startup_entry
 => start_secondary
          <idle>-0     [003] ..s.   138.868936: inet_bind_hash: add 22
          <idle>-0     [003] ..s.   138.868978: <stack trace>
 => __inet_inherit_port
 => tcp_v4_syn_recv_sock
 => tcp_check_req
 => tcp_v4_do_rcv
 => tcp_v4_rcv
 => ip_local_deliver_finish
 => __netif_receive_skb_core
 => netif_receive_skb
 => netif_receive_skb_internal
 => br_handle_frame_finish
 => br_handle_frame
 => br_handle_frame
 => __netif_receive_skb_core
 => read_tsc
 => netif_receive_skb_internal
 => napi_gro_receive
 => e1000_clean_rx_irq
 => e1000_clean
 => net_rx_action
 => add_interrupt_randomness
 => __do_softirq
 => ack_ioapic_level
 => irq_exit
 => do_IRQ
 => ret_from_intr
 => cpuidle_enter_state
 => cpuidle_enter_state
 => cpu_startup_entry
 => start_secondary
          <idle>-0     [003] ..s.   144.103035: inet_bind_hash: add 22
          <idle>-0     [003] ..s.   144.103078: <stack trace>
 => __inet_inherit_port
 => tcp_v4_syn_recv_sock
 => tcp_check_req
 => tcp_v4_do_rcv
 => tcp_v4_rcv
 => ip_local_deliver_finish
 => __netif_receive_skb_core
 => netif_receive_skb
 => netif_receive_skb_internal
 => br_handle_frame_finish
 => br_handle_frame
 => br_handle_frame
 => __netif_receive_skb_core
 => read_tsc
 => netif_receive_skb_internal
 => napi_gro_receive
 => e1000_clean_rx_irq
 => e1000_clean
 => try_to_wake_up
 => net_rx_action
 => add_interrupt_randomness
 => __do_softirq
 => ack_ioapic_level
 => irq_exit
 => do_IRQ
 => ret_from_intr
 => cpuidle_enter_state
 => cpuidle_enter_state
 => cpu_startup_entry
 => start_secondary
          <idle>-0     [003] ..s.   174.758123: inet_bind_hash: add 10993
          <idle>-0     [003] ..s.   174.758151: <stack trace>
 => __inet_inherit_port
 => tcp_v4_syn_recv_sock
 => tcp_check_req
 => tcp_v4_do_rcv
 => tcp_v4_rcv
 => ip_local_deliver_finish
 => __netif_receive_skb_core
 => netif_receive_skb
 => netif_receive_skb_internal
 => br_handle_frame_finish
 => try_to_wake_up
 => ipt_do_table
 => br_handle_frame
 => br_handle_frame
 => __netif_receive_skb_core
 => read_tsc
 => netif_receive_skb_internal
 => napi_gro_receive
 => e1000_clean_rx_irq
 => e1000_clean
 => net_rx_action
 => add_interrupt_randomness
 => __do_softirq
 => ack_ioapic_level
 => irq_exit
 => do_IRQ
 => ret_from_intr
 => cpuidle_enter_state
 => cpuidle_enter_state
 => cpu_startup_entry
 => start_secondary
          <idle>-0     [003] ..s.   216.551651: inet_bind_hash: add 10993
          <idle>-0     [003] ..s.   216.551689: <stack trace>
 => __inet_inherit_port
 => tcp_v4_syn_recv_sock
 => tcp_check_req
 => tcp_v4_do_rcv
 => tcp_v4_rcv
 => ip_local_deliver_finish
 => __netif_receive_skb_core
 => netif_receive_skb
 => netif_receive_skb_internal
 => br_handle_frame_finish
 => br_handle_frame
 => br_handle_frame
 => __netif_receive_skb_core
 => read_tsc
 => netif_receive_skb_internal
 => napi_gro_receive
 => e1000_clean_rx_irq
 => e1000_clean
 => try_to_wake_up
 => net_rx_action
 => add_interrupt_randomness
 => __do_softirq
 => ack_ioapic_level
 => irq_exit
 => do_IRQ
 => ret_from_intr
 => cpuidle_enter_state
 => cpuidle_enter_state
 => cpu_startup_entry
 => start_secondary
    kworker/3:1H-127   [003] ..s.   358.800834: xs_tcp_state_change: RPC:       xs_tcp_state_change client ffff88040b251000...
    kworker/3:1H-127   [003] ..s.   358.800837: xs_tcp_state_change: RPC:       state 4 conn 1 dead 0 zapped 1 sk_shutdown 3
          <idle>-0     [003] ..s.   358.801180: xs_tcp_state_change: RPC:       xs_tcp_state_change client ffff88040b251000...
          <idle>-0     [003] ..s.   358.801182: xs_tcp_state_change: RPC:       state 5 conn 0 dead 0 zapped 1 sk_shutdown 3
          <idle>-0     [003] ..s.   358.801204: xs_tcp_state_change: RPC:       xs_tcp_state_change client ffff88040b251000...
          <idle>-0     [003] ..s.   358.801205: xs_tcp_state_change: RPC:       state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
          <idle>-0     [003] ..s.   358.801206: xprt_disconnect_done: RPC:       disconnected transport ffff88040b251000
          <idle>-0     [003] ..s.   358.801207: xs_tcp_state_change: RPC:       xs_tcp_state_change client ffff88040b251000...
          <idle>-0     [003] ..s.   358.801208: xs_tcp_state_change: RPC:       state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
          <idle>-0     [003] ..s.   358.801208: xprt_disconnect_done: RPC:       disconnected transport ffff88040b251000
          <idle>-0     [003] ..s.   358.801209: xs_tcp_data_ready: RPC:       xs_tcp_data_ready...
          <idle>-0     [003] ..s.   476.855136: inet_bind_hash: add 10993
          <idle>-0     [003] ..s.   476.855172: <stack trace>
 => __inet_inherit_port
 => tcp_v4_syn_recv_sock
 => tcp_check_req
 => tcp_v4_do_rcv
 => tcp_v4_rcv
 => ip_local_deliver_finish
 => __netif_receive_skb_core
 => netif_receive_skb
 => netif_receive_skb_internal
 => br_handle_frame_finish
 => br_handle_frame
 => br_handle_frame
 => __netif_receive_skb_core
 => read_tsc
 => netif_receive_skb_internal
 => napi_gro_receive
 => e1000_clean_rx_irq
 => e1000_clean
 => net_rx_action
 => add_interrupt_randomness
 => __do_softirq
 => ack_ioapic_level
 => irq_exit
 => do_IRQ
 => ret_from_intr
 => cpuidle_enter_state
 => cpuidle_enter_state
 => cpu_startup_entry
 => start_secondary

-- Steve

--
To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [REGRESSION] NFS is creating a hidden port (left over from xs_bind() )
@ 2015-06-12 15:43         ` Eric Dumazet
  0 siblings, 0 replies; 77+ messages in thread
From: Eric Dumazet @ 2015-06-12 15:43 UTC (permalink / raw)
  To: Trond Myklebust
  Cc: Steven Rostedt, Anna Schumaker, Linux NFS Mailing List,
	Linux Network Devel Mailing List, LKML, Andrew Morton

On Fri, 2015-06-12 at 10:57 -0400, Trond Myklebust wrote:
> On Fri, Jun 12, 2015 at 10:40 AM, Eric Dumazet <eric.dumazet@gmail.com> wrote:

> > Strange, because the usual way to not have time-wait is to use SO_LINGER
> > with linger=0
> >
> > And apparently xs_tcp_finish_connecting() has this :
> >
> >                 sock_reset_flag(sk, SOCK_LINGER);
> >                 tcp_sk(sk)->linger2 = 0;
> 
> Are you sure? I thought that SO_LINGER is more about controlling how
> the socket behaves w.r.t. waiting for the TCP_CLOSE state to be
> achieved (i.e. about aborting the FIN state negotiation early). I've
> never observed an effect on the TCP time-wait states.

This is definitely the standard way to avoid time-wait states.

Maybe not very well documented. We probably should...

http://stackoverflow.com/questions/3757289/tcp-option-so-linger-zero-when-its-required
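
For reference, the userspace form of that pattern (a minimal sketch of
SO_LINGER with a zero timeout, not the actual xprtsock.c code) looks
roughly like this:

    /* Sketch only: abort the connection on close().  With l_onoff=1 and
     * l_linger=0 the kernel sends a RST instead of doing the normal FIN
     * handshake, so the socket never enters TIME_WAIT. */
    #include <sys/socket.h>

    static int set_abort_on_close(int fd)
    {
            struct linger lin = { .l_onoff = 1, .l_linger = 0 };

            return setsockopt(fd, SOL_SOCKET, SO_LINGER, &lin, sizeof(lin));
    }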




> Yes. SO_REUSEADDR has the problem that it requires you bind to
> something other than 0.0.0.0, so it is less appropriate for outgoing
> connections; the RPC code really should not have to worry about
> routing and routability of a particular source address.

OK understood.

Are you trying to reuse the same 4-tuple?
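
For comparison, a minimal userspace sketch of the SO_REUSEADDR
alternative is below. It is an illustration only, not what the RPC code
does, and note that it has to pick a concrete source address, which is
exactly the objection above:

    #include <stdint.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    /* Sketch: make an outgoing connection from a fixed local port. */
    static int connect_from(const char *src_ip, uint16_t src_port,
                            const struct sockaddr_in *dst)
    {
            struct sockaddr_in src;
            int one = 1;
            int fd = socket(AF_INET, SOCK_STREAM, 0);

            if (fd < 0)
                    return -1;

            /* Allow binding the port even if an old connection from it is
             * still sitting in TIME_WAIT. */
            setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));

            memset(&src, 0, sizeof(src));
            src.sin_family = AF_INET;
            src.sin_port   = htons(src_port);
            /* Per the discussion above, this needs to be a real local
             * address rather than 0.0.0.0, and choosing it means knowing
             * how the destination is routed. */
            if (inet_pton(AF_INET, src_ip, &src.sin_addr) != 1 ||
                bind(fd, (struct sockaddr *)&src, sizeof(src)) < 0 ||
                connect(fd, (const struct sockaddr *)dst, sizeof(*dst)) < 0) {
                    close(fd);
                    return -1;
            }
            return fd;
    }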




^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [REGRESSION] NFS is creating a hidden port (left over from xs_bind() )
@ 2015-06-12 15:50         ` Steven Rostedt
  0 siblings, 0 replies; 77+ messages in thread
From: Steven Rostedt @ 2015-06-12 15:50 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: Trond Myklebust, Anna Schumaker, Linux NFS Mailing List,
	Linux Network Devel Mailing List, LKML, Andrew Morton

On Fri, 12 Jun 2015 11:34:20 -0400
Steven Rostedt <rostedt@goodmis.org> wrote:

> On Fri, 12 Jun 2015 07:40:35 -0700
> Eric Dumazet <eric.dumazet@gmail.com> wrote:
> 
> > Strange, because the usual way to not have time-wait is to use SO_LINGER
> > with linger=0
> > 
> > And apparently xs_tcp_finish_connecting() has this :
> > 
> >                 sock_reset_flag(sk, SOCK_LINGER);
> >                 tcp_sk(sk)->linger2 = 0;
> > 
> > Are you sure SO_REUSEADDR was not the thing you wanted ?
> > 
> > Steven, have you tried kmemleak ?
> 
> Nope, and again, I'm hesitant on adding too much debug. This is my main
> server (build server, ssh server, web server, mail server, proxy
> server, irc server, etc).
> 
> Although, I made dprintk() into trace_printk() in xprtsock.c and
> xprt.c, and reran it. Here's the output:
> 

I reverted the following commits:

c627d31ba0696cbd829437af2be2f2dee3546b1e
9e2b9f37760e129cee053cc7b6e7288acc2a7134
caf4ccd4e88cf2795c927834bc488c8321437586

And the issue goes away. That is, I watched the port go from
ESTABLISHED to TIME_WAIT, and then it was gone, and there's no hidden port.

In fact, I watched the port with my portlist.c module, and it
disappeared there too when it entered the TIME_WAIT state.

Here's the trace of that run:

# tracer: nop
#
# entries-in-buffer/entries-written: 397/397   #P:4
#
#                              _-----=> irqs-off
#                             / _----=> need-resched
#                            | / _---=> hardirq/softirq
#                            || / _--=> preempt-depth
#                            ||| /     delay
#           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
#              | |       |   ||||       |         |
        rpc.nfsd-3932  [002] ....    44.098689: xs_local_setup_socket: RPC:       worker connecting xprt ffff88040b6f5800 via AF_LOCAL to /var/run/rpcbind.sock
        rpc.nfsd-3932  [002] ....    44.098699: xs_local_setup_socket: RPC:       xprt ffff88040b6f5800 connected to /var/run/rpcbind.sock
        rpc.nfsd-3932  [002] ....    44.098700: xs_setup_local: RPC:       set up xprt to /var/run/rpcbind.sock via AF_LOCAL
        rpc.nfsd-3932  [002] ....    44.098704: xprt_create_transport: RPC:       created transport ffff88040b6f5800 with 65536 slots
        rpc.nfsd-3932  [002] ....    44.098717: xprt_alloc_slot: RPC:     1 reserved req ffff8800d8cc6800 xid 0850084b
        rpc.nfsd-3932  [002] ....    44.098720: xprt_prepare_transmit: RPC:     1 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.098721: xprt_transmit: RPC:     1 xprt_transmit(44)
        rpc.nfsd-3932  [002] ....    44.098724: xs_local_send_request: RPC:       xs_local_send_request(44) = 0
        rpc.nfsd-3932  [002] ....    44.098724: xprt_transmit: RPC:     1 xmit complete
         rpcbind-1829  [001] ..s.    44.098812: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.098815: xprt_complete_rqst: RPC:     1 xid 0850084b complete (24 bytes received)
        rpc.nfsd-3932  [002] ....    44.098854: xprt_release: RPC:     1 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.098864: xprt_alloc_slot: RPC:     2 reserved req ffff8800d8cc6800 xid 0950084b
        rpc.nfsd-3932  [002] ....    44.098865: xprt_prepare_transmit: RPC:     2 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.098865: xprt_transmit: RPC:     2 xprt_transmit(44)
        rpc.nfsd-3932  [002] ....    44.098870: xs_local_send_request: RPC:       xs_local_send_request(44) = 0
        rpc.nfsd-3932  [002] ....    44.098870: xprt_transmit: RPC:     2 xmit complete
         rpcbind-1829  [001] ..s.    44.098915: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.098917: xprt_complete_rqst: RPC:     2 xid 0950084b complete (24 bytes received)
        rpc.nfsd-3932  [002] ....    44.098968: xprt_release: RPC:     2 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.098971: xprt_alloc_slot: RPC:     3 reserved req ffff8800d8cc6800 xid 0a50084b
        rpc.nfsd-3932  [002] ....    44.098972: xprt_prepare_transmit: RPC:     3 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.098973: xprt_transmit: RPC:     3 xprt_transmit(68)
        rpc.nfsd-3932  [002] ....    44.098978: xs_local_send_request: RPC:       xs_local_send_request(68) = 0
        rpc.nfsd-3932  [002] ....    44.098978: xprt_transmit: RPC:     3 xmit complete
         rpcbind-1829  [001] ..s.    44.099029: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.099031: xprt_complete_rqst: RPC:     3 xid 0a50084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.099083: xprt_release: RPC:     3 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.099086: xprt_alloc_slot: RPC:     4 reserved req ffff8800d8cc6800 xid 0b50084b
        rpc.nfsd-3932  [002] ....    44.099086: xprt_prepare_transmit: RPC:     4 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.099087: xprt_transmit: RPC:     4 xprt_transmit(68)
        rpc.nfsd-3932  [002] ....    44.099091: xs_local_send_request: RPC:       xs_local_send_request(68) = 0
        rpc.nfsd-3932  [002] ....    44.099092: xprt_transmit: RPC:     4 xmit complete
         rpcbind-1829  [001] ..s.    44.099145: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.099147: xprt_complete_rqst: RPC:     4 xid 0b50084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.099155: xprt_release: RPC:     4 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.099157: xprt_alloc_slot: RPC:     5 reserved req ffff8800d8cc6800 xid 0c50084b
        rpc.nfsd-3932  [002] ....    44.099157: xprt_prepare_transmit: RPC:     5 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.099158: xprt_transmit: RPC:     5 xprt_transmit(68)
        rpc.nfsd-3932  [002] ....    44.099161: xs_local_send_request: RPC:       xs_local_send_request(68) = 0
        rpc.nfsd-3932  [002] ....    44.099162: xprt_transmit: RPC:     5 xmit complete
         rpcbind-1829  [001] ..s.    44.099172: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.099173: xprt_complete_rqst: RPC:     5 xid 0c50084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.099180: xprt_release: RPC:     5 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.099181: xprt_alloc_slot: RPC:     6 reserved req ffff8800d8cc6800 xid 0d50084b
        rpc.nfsd-3932  [002] ....    44.099181: xprt_prepare_transmit: RPC:     6 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.099182: xprt_transmit: RPC:     6 xprt_transmit(68)
        rpc.nfsd-3932  [002] ....    44.099184: xs_local_send_request: RPC:       xs_local_send_request(68) = 0
        rpc.nfsd-3932  [002] ....    44.099184: xprt_transmit: RPC:     6 xmit complete
         rpcbind-1829  [001] ..s.    44.099204: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.099205: xprt_complete_rqst: RPC:     6 xid 0d50084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.099212: xprt_release: RPC:     6 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.099213: xprt_alloc_slot: RPC:     7 reserved req ffff8800d8cc6800 xid 0e50084b
        rpc.nfsd-3932  [002] ....    44.099214: xprt_prepare_transmit: RPC:     7 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.099214: xprt_transmit: RPC:     7 xprt_transmit(68)
        rpc.nfsd-3932  [002] ....    44.099217: xs_local_send_request: RPC:       xs_local_send_request(68) = 0
        rpc.nfsd-3932  [002] ....    44.099217: xprt_transmit: RPC:     7 xmit complete
         rpcbind-1829  [001] ..s.    44.099228: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.099229: xprt_complete_rqst: RPC:     7 xid 0e50084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.099236: xprt_release: RPC:     7 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.099241: xprt_alloc_slot: RPC:     8 reserved req ffff8800d8cc6800 xid 0f50084b
        rpc.nfsd-3932  [002] ....    44.099241: xprt_prepare_transmit: RPC:     8 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.099242: xprt_transmit: RPC:     8 xprt_transmit(84)
        rpc.nfsd-3932  [002] ....    44.099244: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-3932  [002] ....    44.099244: xprt_transmit: RPC:     8 xmit complete
         rpcbind-1829  [001] ..s.    44.099261: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.099263: xprt_complete_rqst: RPC:     8 xid 0f50084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.099270: xprt_release: RPC:     8 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.099271: xprt_alloc_slot: RPC:     9 reserved req ffff8800d8cc6800 xid 1050084b
        rpc.nfsd-3932  [002] ....    44.099272: xprt_prepare_transmit: RPC:     9 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.099272: xprt_transmit: RPC:     9 xprt_transmit(84)
        rpc.nfsd-3932  [002] ....    44.099275: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-3932  [002] ....    44.099275: xprt_transmit: RPC:     9 xmit complete
         rpcbind-1829  [001] ..s.    44.099290: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.099291: xprt_complete_rqst: RPC:     9 xid 1050084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.099298: xprt_release: RPC:     9 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.099300: xprt_alloc_slot: RPC:    10 reserved req ffff8800d8cc6800 xid 1150084b
        rpc.nfsd-3932  [002] ....    44.099301: xprt_prepare_transmit: RPC:    10 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.099301: xprt_transmit: RPC:    10 xprt_transmit(84)
        rpc.nfsd-3932  [002] ....    44.099303: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-3932  [002] ....    44.099304: xprt_transmit: RPC:    10 xmit complete
         rpcbind-1829  [001] ..s.    44.099318: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.099320: xprt_complete_rqst: RPC:    10 xid 1150084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.099327: xprt_release: RPC:    10 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.099329: xprt_alloc_slot: RPC:    11 reserved req ffff8800d8cc6800 xid 1250084b
        rpc.nfsd-3932  [002] ....    44.099329: xprt_prepare_transmit: RPC:    11 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.099330: xprt_transmit: RPC:    11 xprt_transmit(84)
        rpc.nfsd-3932  [002] ....    44.099332: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-3932  [002] ....    44.099332: xprt_transmit: RPC:    11 xmit complete
         rpcbind-1829  [001] ..s.    44.099344: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.099345: xprt_complete_rqst: RPC:    11 xid 1250084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.099352: xprt_release: RPC:    11 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.099354: xprt_alloc_slot: RPC:    12 reserved req ffff8800d8cc6800 xid 1350084b
        rpc.nfsd-3932  [002] ....    44.099354: xprt_prepare_transmit: RPC:    12 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.099355: xprt_transmit: RPC:    12 xprt_transmit(84)
        rpc.nfsd-3932  [002] ....    44.099357: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-3932  [002] ....    44.099357: xprt_transmit: RPC:    12 xmit complete
         rpcbind-1829  [001] ..s.    44.099368: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.099369: xprt_complete_rqst: RPC:    12 xid 1350084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.099376: xprt_release: RPC:    12 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.099394: xprt_alloc_slot: RPC:    13 reserved req ffff8800d8cc6800 xid 1450084b
        rpc.nfsd-3932  [002] ....    44.099395: xprt_prepare_transmit: RPC:    13 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.099395: xprt_transmit: RPC:    13 xprt_transmit(84)
        rpc.nfsd-3932  [002] ....    44.099399: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-3932  [002] ....    44.099399: xprt_transmit: RPC:    13 xmit complete
         rpcbind-1829  [001] ..s.    44.099405: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.099406: xprt_complete_rqst: RPC:    13 xid 1450084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.099413: xprt_release: RPC:    13 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.099414: xprt_alloc_slot: RPC:    14 reserved req ffff8800d8cc6800 xid 1550084b
        rpc.nfsd-3932  [002] ....    44.099415: xprt_prepare_transmit: RPC:    14 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.099415: xprt_transmit: RPC:    14 xprt_transmit(84)
        rpc.nfsd-3932  [002] ....    44.099418: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-3932  [002] ....    44.099418: xprt_transmit: RPC:    14 xmit complete
         rpcbind-1829  [001] ..s.    44.099424: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.099425: xprt_complete_rqst: RPC:    14 xid 1550084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.099431: xprt_release: RPC:    14 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.099433: xprt_alloc_slot: RPC:    15 reserved req ffff8800d8cc6800 xid 1650084b
        rpc.nfsd-3932  [002] ....    44.099433: xprt_prepare_transmit: RPC:    15 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.099434: xprt_transmit: RPC:    15 xprt_transmit(84)
        rpc.nfsd-3932  [002] ....    44.099436: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-3932  [002] ....    44.099436: xprt_transmit: RPC:    15 xmit complete
         rpcbind-1829  [001] ..s.    44.099443: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.099443: xprt_complete_rqst: RPC:    15 xid 1650084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.099449: xprt_release: RPC:    15 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.099451: xprt_alloc_slot: RPC:    16 reserved req ffff8800d8cc6800 xid 1750084b
        rpc.nfsd-3932  [002] ....    44.099451: xprt_prepare_transmit: RPC:    16 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.099452: xprt_transmit: RPC:    16 xprt_transmit(84)
        rpc.nfsd-3932  [002] ....    44.099454: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-3932  [002] ....    44.099455: xprt_transmit: RPC:    16 xmit complete
         rpcbind-1829  [001] ..s.    44.099461: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.099461: xprt_complete_rqst: RPC:    16 xid 1750084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.099467: xprt_release: RPC:    16 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.099469: xprt_alloc_slot: RPC:    17 reserved req ffff8800d8cc6800 xid 1850084b
        rpc.nfsd-3932  [002] ....    44.099469: xprt_prepare_transmit: RPC:    17 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.099470: xprt_transmit: RPC:    17 xprt_transmit(84)
        rpc.nfsd-3932  [002] ....    44.099472: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-3932  [002] ....    44.099472: xprt_transmit: RPC:    17 xmit complete
         rpcbind-1829  [001] ..s.    44.099479: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.099479: xprt_complete_rqst: RPC:    17 xid 1850084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.099485: xprt_release: RPC:    17 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.100111: xprt_alloc_slot: RPC:    18 reserved req ffff8800d8cc6800 xid 1950084b
        rpc.nfsd-3932  [002] ....    44.100112: xprt_prepare_transmit: RPC:    18 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.100113: xprt_transmit: RPC:    18 xprt_transmit(80)
        rpc.nfsd-3932  [002] ....    44.100118: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-3932  [002] ....    44.100118: xprt_transmit: RPC:    18 xmit complete
         rpcbind-1829  [001] ..s.    44.100124: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.100125: xprt_complete_rqst: RPC:    18 xid 1950084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.100132: xprt_release: RPC:    18 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.100134: xprt_alloc_slot: RPC:    19 reserved req ffff8800d8cc6800 xid 1a50084b
        rpc.nfsd-3932  [002] ....    44.100135: xprt_prepare_transmit: RPC:    19 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.100135: xprt_transmit: RPC:    19 xprt_transmit(80)
        rpc.nfsd-3932  [002] ....    44.100138: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-3932  [002] ....    44.100138: xprt_transmit: RPC:    19 xmit complete
         rpcbind-1829  [001] ..s.    44.100144: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.100145: xprt_complete_rqst: RPC:    19 xid 1a50084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.100151: xprt_release: RPC:    19 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.100153: xprt_alloc_slot: RPC:    20 reserved req ffff8800d8cc6800 xid 1b50084b
        rpc.nfsd-3932  [002] ....    44.100153: xprt_prepare_transmit: RPC:    20 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.100154: xprt_transmit: RPC:    20 xprt_transmit(80)
        rpc.nfsd-3932  [002] ....    44.100156: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-3932  [002] ....    44.100156: xprt_transmit: RPC:    20 xmit complete
         rpcbind-1829  [001] ..s.    44.100162: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.100163: xprt_complete_rqst: RPC:    20 xid 1b50084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.100169: xprt_release: RPC:    20 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.100171: xprt_alloc_slot: RPC:    21 reserved req ffff8800d8cc6800 xid 1c50084b
        rpc.nfsd-3932  [002] ....    44.100171: xprt_prepare_transmit: RPC:    21 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.100172: xprt_transmit: RPC:    21 xprt_transmit(80)
        rpc.nfsd-3932  [002] ....    44.100174: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-3932  [002] ....    44.100174: xprt_transmit: RPC:    21 xmit complete
         rpcbind-1829  [001] ..s.    44.100180: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.100181: xprt_complete_rqst: RPC:    21 xid 1c50084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.100187: xprt_release: RPC:    21 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.100189: xprt_alloc_slot: RPC:    22 reserved req ffff8800d8cc6800 xid 1d50084b
        rpc.nfsd-3932  [002] ....    44.100189: xprt_prepare_transmit: RPC:    22 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.100190: xprt_transmit: RPC:    22 xprt_transmit(80)
        rpc.nfsd-3932  [002] ....    44.100192: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-3932  [002] ....    44.100192: xprt_transmit: RPC:    22 xmit complete
         rpcbind-1829  [001] ..s.    44.100198: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.100199: xprt_complete_rqst: RPC:    22 xid 1d50084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.100205: xprt_release: RPC:    22 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.100223: xprt_alloc_slot: RPC:    23 reserved req ffff8800d8cc6800 xid 1e50084b
        rpc.nfsd-3932  [002] ....    44.100223: xprt_prepare_transmit: RPC:    23 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.100224: xprt_transmit: RPC:    23 xprt_transmit(80)
        rpc.nfsd-3932  [002] ....    44.100227: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-3932  [002] ....    44.100227: xprt_transmit: RPC:    23 xmit complete
         rpcbind-1829  [001] ..s.    44.100233: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.100234: xprt_complete_rqst: RPC:    23 xid 1e50084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.100241: xprt_release: RPC:    23 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.100242: xprt_alloc_slot: RPC:    24 reserved req ffff8800d8cc6800 xid 1f50084b
        rpc.nfsd-3932  [002] ....    44.100243: xprt_prepare_transmit: RPC:    24 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.100243: xprt_transmit: RPC:    24 xprt_transmit(80)
        rpc.nfsd-3932  [002] ....    44.100246: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-3932  [002] ....    44.100246: xprt_transmit: RPC:    24 xmit complete
         rpcbind-1829  [001] ..s.    44.100252: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.100253: xprt_complete_rqst: RPC:    24 xid 1f50084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.100259: xprt_release: RPC:    24 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.100260: xprt_alloc_slot: RPC:    25 reserved req ffff8800d8cc6800 xid 2050084b
        rpc.nfsd-3932  [002] ....    44.100261: xprt_prepare_transmit: RPC:    25 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.100261: xprt_transmit: RPC:    25 xprt_transmit(80)
        rpc.nfsd-3932  [002] ....    44.100263: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-3932  [002] ....    44.100264: xprt_transmit: RPC:    25 xmit complete
         rpcbind-1829  [001] ..s.    44.100270: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.100270: xprt_complete_rqst: RPC:    25 xid 2050084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.100276: xprt_release: RPC:    25 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.100278: xprt_alloc_slot: RPC:    26 reserved req ffff8800d8cc6800 xid 2150084b
        rpc.nfsd-3932  [002] ....    44.100278: xprt_prepare_transmit: RPC:    26 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.100279: xprt_transmit: RPC:    26 xprt_transmit(80)
        rpc.nfsd-3932  [002] ....    44.100281: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-3932  [002] ....    44.100281: xprt_transmit: RPC:    26 xmit complete
         rpcbind-1829  [001] ..s.    44.100287: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.100288: xprt_complete_rqst: RPC:    26 xid 2150084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.100294: xprt_release: RPC:    26 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.100296: xprt_alloc_slot: RPC:    27 reserved req ffff8800d8cc6800 xid 2250084b
        rpc.nfsd-3932  [002] ....    44.100296: xprt_prepare_transmit: RPC:    27 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.100297: xprt_transmit: RPC:    27 xprt_transmit(80)
        rpc.nfsd-3932  [002] ....    44.100299: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-3932  [002] ....    44.100299: xprt_transmit: RPC:    27 xmit complete
         rpcbind-1829  [001] ..s.    44.100305: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.100306: xprt_complete_rqst: RPC:    27 xid 2250084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.100312: xprt_release: RPC:    27 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.100403: xprt_alloc_slot: RPC:    28 reserved req ffff8800d8cc6800 xid 2350084b
        rpc.nfsd-3932  [002] ....    44.100404: xprt_prepare_transmit: RPC:    28 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.100405: xprt_transmit: RPC:    28 xprt_transmit(68)
        rpc.nfsd-3932  [002] ....    44.100409: xs_local_send_request: RPC:       xs_local_send_request(68) = 0
        rpc.nfsd-3932  [002] ....    44.100409: xprt_transmit: RPC:    28 xmit complete
         rpcbind-1829  [001] ..s.    44.100415: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.100416: xprt_complete_rqst: RPC:    28 xid 2350084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.100422: xprt_release: RPC:    28 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.100423: xprt_alloc_slot: RPC:    29 reserved req ffff8800d8cc6800 xid 2450084b
        rpc.nfsd-3932  [002] ....    44.100424: xprt_prepare_transmit: RPC:    29 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.100424: xprt_transmit: RPC:    29 xprt_transmit(68)
        rpc.nfsd-3932  [002] ....    44.100427: xs_local_send_request: RPC:       xs_local_send_request(68) = 0
        rpc.nfsd-3932  [002] ....    44.100427: xprt_transmit: RPC:    29 xmit complete
         rpcbind-1829  [001] ..s.    44.100432: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.100433: xprt_complete_rqst: RPC:    29 xid 2450084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.100439: xprt_release: RPC:    29 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.100440: xprt_alloc_slot: RPC:    30 reserved req ffff8800d8cc6800 xid 2550084b
        rpc.nfsd-3932  [002] ....    44.100441: xprt_prepare_transmit: RPC:    30 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.100441: xprt_transmit: RPC:    30 xprt_transmit(68)
        rpc.nfsd-3932  [002] ....    44.100443: xs_local_send_request: RPC:       xs_local_send_request(68) = 0
        rpc.nfsd-3932  [002] ....    44.100444: xprt_transmit: RPC:    30 xmit complete
         rpcbind-1829  [001] ..s.    44.100450: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.100450: xprt_complete_rqst: RPC:    30 xid 2550084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.100456: xprt_release: RPC:    30 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.100463: xprt_alloc_slot: RPC:    31 reserved req ffff8800d8cc6800 xid 2650084b
        rpc.nfsd-3932  [002] ....    44.100463: xprt_prepare_transmit: RPC:    31 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.100464: xprt_transmit: RPC:    31 xprt_transmit(88)
        rpc.nfsd-3932  [002] ....    44.100467: xs_local_send_request: RPC:       xs_local_send_request(88) = 0
        rpc.nfsd-3932  [002] ....    44.100467: xprt_transmit: RPC:    31 xmit complete
         rpcbind-1829  [001] ..s.    44.100473: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.100474: xprt_complete_rqst: RPC:    31 xid 2650084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.100480: xprt_release: RPC:    31 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.100482: xprt_alloc_slot: RPC:    32 reserved req ffff8800d8cc6800 xid 2750084b
        rpc.nfsd-3932  [002] ....    44.100482: xprt_prepare_transmit: RPC:    32 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.100483: xprt_transmit: RPC:    32 xprt_transmit(88)
        rpc.nfsd-3932  [002] ....    44.100485: xs_local_send_request: RPC:       xs_local_send_request(88) = 0
        rpc.nfsd-3932  [002] ....    44.100485: xprt_transmit: RPC:    32 xmit complete
         rpcbind-1829  [001] ..s.    44.100492: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.100493: xprt_complete_rqst: RPC:    32 xid 2750084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.100498: xprt_release: RPC:    32 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.100500: xprt_alloc_slot: RPC:    33 reserved req ffff8800d8cc6800 xid 2850084b
        rpc.nfsd-3932  [002] ....    44.100501: xprt_prepare_transmit: RPC:    33 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.100501: xprt_transmit: RPC:    33 xprt_transmit(88)
        rpc.nfsd-3932  [002] ....    44.100504: xs_local_send_request: RPC:       xs_local_send_request(88) = 0
        rpc.nfsd-3932  [002] ....    44.100504: xprt_transmit: RPC:    33 xmit complete
         rpcbind-1829  [001] ..s.    44.100510: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.100511: xprt_complete_rqst: RPC:    33 xid 2850084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.100517: xprt_release: RPC:    33 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.100526: xprt_alloc_slot: RPC:    34 reserved req ffff8800d8cc6800 xid 2950084b
        rpc.nfsd-3932  [002] ....    44.100527: xprt_prepare_transmit: RPC:    34 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.100528: xprt_transmit: RPC:    34 xprt_transmit(88)
        rpc.nfsd-3932  [002] ....    44.100530: xs_local_send_request: RPC:       xs_local_send_request(88) = 0
        rpc.nfsd-3932  [002] ....    44.100531: xprt_transmit: RPC:    34 xmit complete
         rpcbind-1829  [001] ..s.    44.100537: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.100538: xprt_complete_rqst: RPC:    34 xid 2950084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.100544: xprt_release: RPC:    34 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.100546: xprt_alloc_slot: RPC:    35 reserved req ffff8800d8cc6800 xid 2a50084b
        rpc.nfsd-3932  [002] ....    44.100546: xprt_prepare_transmit: RPC:    35 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.100547: xprt_transmit: RPC:    35 xprt_transmit(88)
        rpc.nfsd-3932  [002] ....    44.100549: xs_local_send_request: RPC:       xs_local_send_request(88) = 0
        rpc.nfsd-3932  [002] ....    44.100549: xprt_transmit: RPC:    35 xmit complete
         rpcbind-1829  [001] ..s.    44.100556: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.100556: xprt_complete_rqst: RPC:    35 xid 2a50084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.100562: xprt_release: RPC:    35 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.100564: xprt_alloc_slot: RPC:    36 reserved req ffff8800d8cc6800 xid 2b50084b
        rpc.nfsd-3932  [002] ....    44.100565: xprt_prepare_transmit: RPC:    36 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.100565: xprt_transmit: RPC:    36 xprt_transmit(88)
        rpc.nfsd-3932  [002] ....    44.100567: xs_local_send_request: RPC:       xs_local_send_request(88) = 0
        rpc.nfsd-3932  [002] ....    44.100568: xprt_transmit: RPC:    36 xmit complete
         rpcbind-1829  [001] ..s.    44.100574: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.100575: xprt_complete_rqst: RPC:    36 xid 2b50084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.100581: xprt_release: RPC:    36 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.100587: xprt_alloc_slot: RPC:    37 reserved req ffff8800d8cc6800 xid 2c50084b
        rpc.nfsd-3932  [002] ....    44.100587: xprt_prepare_transmit: RPC:    37 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.100588: xprt_transmit: RPC:    37 xprt_transmit(84)
        rpc.nfsd-3932  [002] ....    44.100590: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-3932  [002] ....    44.100590: xprt_transmit: RPC:    37 xmit complete
         rpcbind-1829  [001] ..s.    44.100597: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.100597: xprt_complete_rqst: RPC:    37 xid 2c50084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.100604: xprt_release: RPC:    37 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.100605: xprt_alloc_slot: RPC:    38 reserved req ffff8800d8cc6800 xid 2d50084b
        rpc.nfsd-3932  [002] ....    44.100606: xprt_prepare_transmit: RPC:    38 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.100606: xprt_transmit: RPC:    38 xprt_transmit(84)
        rpc.nfsd-3932  [002] ....    44.100608: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-3932  [002] ....    44.100609: xprt_transmit: RPC:    38 xmit complete
         rpcbind-1829  [001] ..s.    44.100615: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.100616: xprt_complete_rqst: RPC:    38 xid 2d50084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.100622: xprt_release: RPC:    38 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.100623: xprt_alloc_slot: RPC:    39 reserved req ffff8800d8cc6800 xid 2e50084b
        rpc.nfsd-3932  [002] ....    44.100624: xprt_prepare_transmit: RPC:    39 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.100624: xprt_transmit: RPC:    39 xprt_transmit(84)
        rpc.nfsd-3932  [002] ....    44.100626: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-3932  [002] ....    44.100627: xprt_transmit: RPC:    39 xmit complete
         rpcbind-1829  [001] ..s.    44.100633: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.100634: xprt_complete_rqst: RPC:    39 xid 2e50084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.100640: xprt_release: RPC:    39 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.100647: xprt_alloc_slot: RPC:    40 reserved req ffff8800d8cc6800 xid 2f50084b
        rpc.nfsd-3932  [002] ....    44.100648: xprt_prepare_transmit: RPC:    40 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.100648: xprt_transmit: RPC:    40 xprt_transmit(84)
        rpc.nfsd-3932  [002] ....    44.100651: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-3932  [002] ....    44.100651: xprt_transmit: RPC:    40 xmit complete
         rpcbind-1829  [001] ..s.    44.100657: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.100658: xprt_complete_rqst: RPC:    40 xid 2f50084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.100664: xprt_release: RPC:    40 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.100666: xprt_alloc_slot: RPC:    41 reserved req ffff8800d8cc6800 xid 3050084b
        rpc.nfsd-3932  [002] ....    44.100666: xprt_prepare_transmit: RPC:    41 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.100667: xprt_transmit: RPC:    41 xprt_transmit(84)
        rpc.nfsd-3932  [002] ....    44.100669: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-3932  [002] ....    44.100669: xprt_transmit: RPC:    41 xmit complete
         rpcbind-1829  [001] ..s.    44.100675: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.100676: xprt_complete_rqst: RPC:    41 xid 3050084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.100682: xprt_release: RPC:    41 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.100684: xprt_alloc_slot: RPC:    42 reserved req ffff8800d8cc6800 xid 3150084b
        rpc.nfsd-3932  [002] ....    44.100684: xprt_prepare_transmit: RPC:    42 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.100685: xprt_transmit: RPC:    42 xprt_transmit(84)
        rpc.nfsd-3932  [002] ....    44.100687: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-3932  [002] ....    44.100687: xprt_transmit: RPC:    42 xmit complete
         rpcbind-1829  [001] ..s.    44.100693: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.100694: xprt_complete_rqst: RPC:    42 xid 3150084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.100700: xprt_release: RPC:    42 release request ffff8800d8cc6800
          <idle>-0     [003] ..s.    52.302416: inet_bind_hash: add 22
          <idle>-0     [003] ..s.    52.302456: <stack trace>
 => __inet_inherit_port
 => tcp_v4_syn_recv_sock
 => tcp_check_req
 => tcp_v4_do_rcv
 => tcp_v4_rcv
 => ip_local_deliver_finish
 => __netif_receive_skb_core
 => netif_receive_skb
 => netif_receive_skb_internal
 => br_handle_frame_finish
 => delay_tsc
 => br_handle_frame
 => br_handle_frame
 => __netif_receive_skb_core
 => read_tsc
 => netif_receive_skb_internal
 => napi_gro_receive
 => e1000_clean_rx_irq
 => e1000_clean
 => ack_ioapic_level
 => do_IRQ
 => net_rx_action
 => __do_softirq
 => ack_ioapic_level
 => irq_exit
 => do_IRQ
 => ret_from_intr
 => cpuidle_enter_state
 => cpuidle_enter_state
 => cpu_startup_entry
 => start_secondary
   kworker/u32:2-105   [001] ....    77.750302: xs_setup_tcp: RPC:       set up xprt to 192.168.23.22 (port 55201) via tcp
   kworker/u32:2-105   [001] ....    77.750310: xprt_create_transport: RPC:       created transport ffff8804082fb000 with 65536 slots
    kworker/1:1H-133   [001] ....    77.750352: xprt_alloc_slot: RPC:    43 reserved req ffff88040ab08200 xid 83da2dc3
    kworker/1:1H-133   [001] ....    77.750356: xprt_connect: RPC:    43 xprt_connect xprt ffff8804082fb000 is not connected
    kworker/1:1H-133   [001] ....    77.750358: xs_connect: RPC:       xs_connect scheduled xprt ffff8804082fb000
    kworker/1:1H-133   [001] ..s.    77.750365: inet_csk_get_port: snum 737
    kworker/1:1H-133   [001] ..s.    77.750374: <stack trace>
 => inet_addr_type
 => inet_bind
 => xs_bind
 => sock_setsockopt
 => __sock_create
 => xs_create_sock.isra.19
 => xs_tcp_setup_socket
 => process_one_work
 => worker_thread
 => worker_thread
 => kthread
 => kthread
 => ret_from_fork
 => kthread
    kworker/1:1H-133   [001] ..s.    77.750374: inet_bind_hash: add 737
    kworker/1:1H-133   [001] ..s.    77.750377: <stack trace>
 => inet_csk_get_port
 => inet_addr_type
 => inet_bind
 => xs_bind
 => sock_setsockopt
 => __sock_create
 => xs_create_sock.isra.19
 => xs_tcp_setup_socket
 => process_one_work
 => worker_thread
 => worker_thread
 => kthread
 => kthread
 => ret_from_fork
 => kthread
    kworker/1:1H-133   [001] ....    77.750378: xs_bind: RPC:       xs_bind 4.136.255.255:737: ok (0)
    kworker/1:1H-133   [001] ....    77.750379: xs_tcp_setup_socket: RPC:       worker connecting xprt ffff8804082fb000 via tcp to 192.168.23.22 (port 55201)
    kworker/1:1H-133   [001] ....    77.750397: xs_tcp_setup_socket: xprt=ffff8804082fb000 sock=ffff880408a47d40 status=-115
    kworker/1:1H-133   [001] ....    77.750397: xs_tcp_setup_socket: RPC:       ffff8804082fb000 connect status 115 connected 0 sock state 2
 fail2ban-server-4683  [002] ..s.    77.750554: xs_tcp_state_change: RPC:       xs_tcp_state_change client ffff8804082fb000...
 fail2ban-server-4683  [002] ..s.    77.750555: xs_tcp_state_change: RPC:       state 1 conn 0 dead 0 zapped 1 sk_shutdown 0
    kworker/2:1H-126   [002] ....    77.750571: xprt_connect_status: RPC:    43 xprt_connect_status: retrying
    kworker/2:1H-126   [002] ....    77.750572: xprt_prepare_transmit: RPC:    43 xprt_prepare_transmit
    kworker/2:1H-126   [002] ....    77.750573: xprt_transmit: RPC:    43 xprt_transmit(72)
    kworker/2:1H-126   [002] ....    77.750581: xs_tcp_send_request: RPC:       xs_tcp_send_request(72) = 0
    kworker/2:1H-126   [002] ....    77.750581: xprt_transmit: RPC:    43 xmit complete
 fail2ban-server-4683  [002] ..s.    77.750798: xs_tcp_data_ready: RPC:       xs_tcp_data_ready...
 fail2ban-server-4683  [002] ..s.    77.750799: xs_tcp_data_recv: RPC:       xs_tcp_data_recv started
 fail2ban-server-4683  [002] ..s.    77.750800: xs_tcp_data_recv: RPC:       reading TCP record fragment of length 24
 fail2ban-server-4683  [002] ..s.    77.750800: xs_tcp_data_recv: RPC:       reading XID (4 bytes)
 fail2ban-server-4683  [002] ..s.    77.750801: xs_tcp_data_recv: RPC:       reading request with XID 83da2dc3
 fail2ban-server-4683  [002] ..s.    77.750801: xs_tcp_data_recv: RPC:       reading CALL/REPLY flag (4 bytes)
 fail2ban-server-4683  [002] ..s.    77.750801: xs_tcp_data_recv: RPC:       read reply XID 83da2dc3
 fail2ban-server-4683  [002] ..s.    77.750802: xs_tcp_data_recv: RPC:       XID 83da2dc3 read 16 bytes
 fail2ban-server-4683  [002] ..s.    77.750803: xs_tcp_data_recv: RPC:       xprt = ffff8804082fb000, tcp_copied = 24, tcp_offset = 24, tcp_reclen = 24
 fail2ban-server-4683  [002] ..s.    77.750803: xprt_complete_rqst: RPC:    43 xid 83da2dc3 complete (24 bytes received)
 fail2ban-server-4683  [002] .Ns.    77.750805: xs_tcp_data_recv: RPC:       xs_tcp_data_recv done
    kworker/2:1H-126   [002] ....    77.750813: xprt_release: RPC:    43 release request ffff88040ab08200
          <idle>-0     [003] ..s.    94.613312: inet_bind_hash: add 22
          <idle>-0     [003] ..s.    94.613354: <stack trace>
 => __inet_inherit_port
 => tcp_v4_syn_recv_sock
 => tcp_check_req
 => tcp_v4_do_rcv
 => tcp_v4_rcv
 => ip_local_deliver_finish
 => __netif_receive_skb_core
 => netif_receive_skb
 => netif_receive_skb_internal
 => br_handle_frame_finish
 => br_handle_frame
 => br_handle_frame
 => __netif_receive_skb_core
 => read_tsc
 => netif_receive_skb_internal
 => napi_gro_receive
 => e1000_clean_rx_irq
 => e1000_clean
 => net_rx_action
 => add_interrupt_randomness
 => __do_softirq
 => ack_ioapic_level
 => irq_exit
 => do_IRQ
 => ret_from_intr
 => cpuidle_enter_state
 => cpuidle_enter_state
 => cpu_startup_entry
 => start_secondary
          <idle>-0     [003] ..s.    98.776868: inet_bind_hash: add 10993
          <idle>-0     [003] ..s.    98.776910: <stack trace>
 => __inet_inherit_port
 => tcp_v4_syn_recv_sock
 => tcp_check_req
 => tcp_v4_do_rcv
 => tcp_v4_rcv
 => ip_local_deliver_finish
 => __netif_receive_skb_core
 => netif_receive_skb
 => netif_receive_skb_internal
 => br_handle_frame_finish
 => br_handle_frame
 => br_handle_frame
 => __netif_receive_skb_core
 => read_tsc
 => netif_receive_skb_internal
 => napi_gro_receive
 => e1000_clean_rx_irq
 => e1000_clean
 => net_rx_action
 => add_interrupt_randomness
 => __do_softirq
 => ack_ioapic_level
 => irq_exit
 => do_IRQ
 => ret_from_intr
 => cpuidle_enter_state
 => cpuidle_enter_state
 => cpu_startup_entry
 => start_secondary
          <idle>-0     [003] ..s.   151.179778: inet_bind_hash: add 80
          <idle>-0     [003] ..s.   151.179822: <stack trace>
 => __inet_inherit_port
 => tcp_v4_syn_recv_sock
 => tcp_v6_syn_recv_sock
 => ipt_do_table
 => nf_conntrack_in
 => tcp_check_req
 => tcp_v4_do_rcv
 => tcp_v4_rcv
 => ip_local_deliver_finish
 => __netif_receive_skb_core
 => netif_receive_skb
 => netif_receive_skb_internal
 => br_handle_frame_finish
 => br_handle_frame
 => br_handle_frame
 => __netif_receive_skb_core
 => read_tsc
 => netif_receive_skb_internal
 => napi_gro_receive
 => e1000_clean_rx_irq
 => e1000_clean
 => try_to_wake_up
 => ktime_get
 => net_rx_action
 => add_interrupt_randomness
 => __do_softirq
 => ack_ioapic_level
 => irq_exit
 => do_IRQ
 => ret_from_intr
 => cpuidle_enter_state
 => cpuidle_enter_state
 => cpu_startup_entry
 => start_secondary
          <idle>-0     [003] ..s.   172.217453: inet_bind_hash: add 10993
          <idle>-0     [003] ..s.   172.217496: <stack trace>
 => __inet_inherit_port
 => tcp_v4_syn_recv_sock
 => tcp_check_req
 => tcp_v4_do_rcv
 => tcp_v4_rcv
 => ip_local_deliver_finish
 => __netif_receive_skb_core
 => netif_receive_skb
 => netif_receive_skb_internal
 => br_handle_frame_finish
 => br_handle_frame
 => br_handle_frame
 => __netif_receive_skb_core
 => read_tsc
 => netif_receive_skb_internal
 => napi_gro_receive
 => e1000_clean_rx_irq
 => e1000_clean
 => net_rx_action
 => add_interrupt_randomness
 => __do_softirq
 => ack_ioapic_level
 => irq_exit
 => do_IRQ
 => ret_from_intr
 => cpuidle_enter_state
 => cpuidle_enter_state
 => cpu_startup_entry
 => start_secondary
          <idle>-0     [003] .Ns.   181.603150: inet_bind_hash: add 80
          <idle>-0     [003] .Ns.   181.603194: <stack trace>
 => __inet_inherit_port
 => tcp_v4_syn_recv_sock
 => tcp_v6_syn_recv_sock
 => ipt_do_table
 => nf_conntrack_in
 => tcp_check_req
 => tcp_v4_do_rcv
 => tcp_v4_rcv
 => ip_local_deliver_finish
 => __netif_receive_skb_core
 => netif_receive_skb
 => netif_receive_skb_internal
 => br_handle_frame_finish
 => br_handle_frame
 => br_handle_frame
 => __netif_receive_skb_core
 => read_tsc
 => netif_receive_skb_internal
 => napi_gro_receive
 => e1000_clean_rx_irq
 => e1000_clean
 => try_to_wake_up
 => net_rx_action
 => add_interrupt_randomness
 => __do_softirq
 => ack_ioapic_level
 => irq_exit
 => do_IRQ
 => ret_from_intr
 => cpuidle_enter_state
 => cpuidle_enter_state
 => cpu_startup_entry
 => start_secondary
          <idle>-0     [003] ..s.   234.638237: inet_bind_hash: add 10993
          <idle>-0     [003] ..s.   234.638281: <stack trace>
 => __inet_inherit_port
 => tcp_v4_syn_recv_sock
 => tcp_check_req
 => tcp_v4_do_rcv
 => tcp_v4_rcv
 => ip_local_deliver_finish
 => __netif_receive_skb_core
 => netif_receive_skb
 => netif_receive_skb_internal
 => br_handle_frame_finish
 => br_handle_frame
 => br_handle_frame
 => __netif_receive_skb_core
 => read_tsc
 => netif_receive_skb_internal
 => napi_gro_receive
 => e1000_clean_rx_irq
 => e1000_clean
 => try_to_wake_up
 => net_rx_action
 => add_interrupt_randomness
 => __do_softirq
 => ack_ioapic_level
 => irq_exit
 => do_IRQ
 => ret_from_intr
 => cpuidle_enter_state
 => cpuidle_enter_state
 => cpu_startup_entry
 => start_secondary
          <idle>-0     [003] ..s.   241.694872: inet_bind_hash: add 57000
          <idle>-0     [003] ..s.   241.694915: <stack trace>
 => __inet_inherit_port
 => tcp_v4_syn_recv_sock
 => tcp_check_req
 => tcp_v4_do_rcv
 => tcp_v4_rcv
 => ip_local_deliver_finish
 => __netif_receive_skb_core
 => netif_receive_skb
 => netif_receive_skb_internal
 => br_handle_frame_finish
 => br_handle_frame
 => br_handle_frame
 => __netif_receive_skb_core
 => read_tsc
 => netif_receive_skb_internal
 => napi_gro_receive
 => e1000_clean_rx_irq
 => e1000_clean
 => try_to_wake_up
 => net_rx_action
 => add_interrupt_randomness
 => __do_softirq
 => ack_ioapic_level
 => irq_exit
 => do_IRQ
 => ret_from_intr
 => cpuidle_enter_state
 => cpuidle_enter_state
 => cpu_startup_entry
 => start_secondary
          <idle>-0     [003] ..s.   242.308627: inet_bind_hash: add 10993
          <idle>-0     [003] ..s.   242.308670: <stack trace>
 => __inet_inherit_port
 => tcp_v4_syn_recv_sock
 => tcp_check_req
 => tcp_v4_do_rcv
 => tcp_v4_rcv
 => ip_local_deliver_finish
 => __netif_receive_skb_core
 => netif_receive_skb
 => netif_receive_skb_internal
 => br_handle_frame_finish
 => br_handle_frame
 => br_handle_frame
 => __netif_receive_skb_core
 => read_tsc
 => netif_receive_skb_internal
 => napi_gro_receive
 => e1000_clean_rx_irq
 => e1000_clean
 => try_to_wake_up
 => net_rx_action
 => add_interrupt_randomness
 => __do_softirq
 => ack_ioapic_level
 => irq_exit
 => do_IRQ
 => ret_from_intr
 => cpuidle_enter_state
 => cpuidle_enter_state
 => cpu_startup_entry
 => start_secondary
          <idle>-0     [003] ..s.   296.125499: inet_bind_hash: add 80
          <idle>-0     [003] ..s.   296.125543: <stack trace>
 => __inet_inherit_port
 => tcp_v4_syn_recv_sock
 => tcp_v6_syn_recv_sock
 => ipt_do_table
 => nf_conntrack_in
 => tcp_check_req
 => fib_validate_source
 => tcp_v4_do_rcv
 => tcp_v4_rcv
 => ip_local_deliver_finish
 => __netif_receive_skb_core
 => netif_receive_skb
 => netif_receive_skb_internal
 => br_handle_frame_finish
 => br_handle_frame
 => br_handle_frame
 => __netif_receive_skb_core
 => read_tsc
 => netif_receive_skb_internal
 => napi_gro_receive
 => e1000_clean_rx_irq
 => e1000_clean
 => net_rx_action
 => add_interrupt_randomness
 => __do_softirq
 => ack_ioapic_level
 => irq_exit
 => do_IRQ
 => ret_from_intr
 => cpuidle_enter_state
 => cpuidle_enter_state
 => cpu_startup_entry
 => start_secondary
          <idle>-0     [003] ..s.   304.196576: inet_bind_hash: add 80
          <idle>-0     [003] ..s.   304.196618: <stack trace>
 => __inet_inherit_port
 => tcp_v4_syn_recv_sock
 => tcp_v6_syn_recv_sock
 => ipt_do_table
 => nf_conntrack_in
 => tcp_check_req
 => fib_validate_source
 => tcp_v4_do_rcv
 => tcp_v4_rcv
 => ip_local_deliver_finish
 => __netif_receive_skb_core
 => netif_receive_skb
 => netif_receive_skb_internal
 => br_handle_frame_finish
 => br_handle_frame
 => br_handle_frame
 => __netif_receive_skb_core
 => read_tsc
 => netif_receive_skb_internal
 => napi_gro_receive
 => e1000_clean_rx_irq
 => e1000_clean
 => raise_softirq_irqoff
 => netif_schedule_queue
 => dev_watchdog
 => net_rx_action
 => add_interrupt_randomness
 => __do_softirq
 => ack_ioapic_level
 => irq_exit
 => do_IRQ
 => ret_from_intr
 => cpuidle_enter_state
 => cpuidle_enter_state
 => cpu_startup_entry
 => start_secondary
    kworker/2:1H-126   [002] ....   378.264745: xs_tcp_close: close %p
    kworker/2:1H-126   [002] ....   378.264748: xs_close: RPC:       xs_close xprt ffff8804082fb000
    kworker/2:1H-126   [002] ....   378.264786: xprt_disconnect_done: RPC:       disconnected transport ffff8804082fb000


-- Steve

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [REGRESSION] NFS is creating a hidden port (left over from xs_bind() )
@ 2015-06-12 15:50         ` Steven Rostedt
  0 siblings, 0 replies; 77+ messages in thread
From: Steven Rostedt @ 2015-06-12 15:50 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: Trond Myklebust, Anna Schumaker, Linux NFS Mailing List,
	Linux Network Devel Mailing List, LKML, Andrew Morton

On Fri, 12 Jun 2015 11:34:20 -0400
Steven Rostedt <rostedt-nx8X9YLhiw1AfugRpC6u6w@public.gmane.org> wrote:

> On Fri, 12 Jun 2015 07:40:35 -0700
> Eric Dumazet <eric.dumazet-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:
> 
> > Strange, because the usual way to not have time-wait is to use SO_LINGER
> > with linger=0
> > 
> > And apparently xs_tcp_finish_connecting() has this :
> > 
> >                 sock_reset_flag(sk, SOCK_LINGER);
> >                 tcp_sk(sk)->linger2 = 0;
> > 
> > Are you sure SO_REUSEADDR was not the thing you wanted ?
> > 
> > Steven, have you tried kmemleak ?
> 
> Nope, and again, I'm hesitant on adding too much debug. This is my main
> server (build server, ssh server, web server, mail server, proxy
> server, irc server, etc).
> 
> Although, I made dprintk() into trace_printk() in xprtsock.c and
> xprt.c, and reran it. Here's the output:
> 
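
For anyone following along, the SO_LINGER / SO_REUSEADDR knobs Eric mentions
look roughly like this from userspace. This is an illustrative sketch only,
not the xprtsock.c code (which manipulates the socket flags directly in the
kernel); the two helpers below are hypothetical names:

    #include <sys/socket.h>

    /* linger=0: close() sends an RST, so the socket never sits in TIME_WAIT */
    static int set_abortive_close(int fd)
    {
            struct linger lg = { .l_onoff = 1, .l_linger = 0 };

            return setsockopt(fd, SOL_SOCKET, SO_LINGER, &lg, sizeof(lg));
    }

    /* SO_REUSEADDR: let a later bind() reuse a local port still in TIME_WAIT */
    static int set_reuseaddr(int fd)
    {
            int one = 1;

            return setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
    }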

I reverted the following commits:

c627d31ba0696cbd829437af2be2f2dee3546b1e
9e2b9f37760e129cee053cc7b6e7288acc2a7134
caf4ccd4e88cf2795c927834bc488c8321437586

And the issue goes away. That is, I watched the port go from
ESTABLISHED to TIME_WAIT, and then gone, and theirs no hidden port.

In fact, I watched the port with my portlist.c module, and it
disappeared there too when it entered the TIME_WAIT state.

Here's the trace of that run:

# tracer: nop
#
# entries-in-buffer/entries-written: 397/397   #P:4
#
#                              _-----=> irqs-off
#                             / _----=> need-resched
#                            | / _---=> hardirq/softirq
#                            || / _--=> preempt-depth
#                            ||| /     delay
#           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
#              | |       |   ||||       |         |
        rpc.nfsd-3932  [002] ....    44.098689: xs_local_setup_socket: RPC:       worker connecting xprt ffff88040b6f5800 via AF_LOCAL to /var/run/rpcbind.sock
        rpc.nfsd-3932  [002] ....    44.098699: xs_local_setup_socket: RPC:       xprt ffff88040b6f5800 connected to /var/run/rpcbind.sock
        rpc.nfsd-3932  [002] ....    44.098700: xs_setup_local: RPC:       set up xprt to /var/run/rpcbind.sock via AF_LOCAL
        rpc.nfsd-3932  [002] ....    44.098704: xprt_create_transport: RPC:       created transport ffff88040b6f5800 with 65536 slots
        rpc.nfsd-3932  [002] ....    44.098717: xprt_alloc_slot: RPC:     1 reserved req ffff8800d8cc6800 xid 0850084b
        rpc.nfsd-3932  [002] ....    44.098720: xprt_prepare_transmit: RPC:     1 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.098721: xprt_transmit: RPC:     1 xprt_transmit(44)
        rpc.nfsd-3932  [002] ....    44.098724: xs_local_send_request: RPC:       xs_local_send_request(44) = 0
        rpc.nfsd-3932  [002] ....    44.098724: xprt_transmit: RPC:     1 xmit complete
         rpcbind-1829  [001] ..s.    44.098812: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.098815: xprt_complete_rqst: RPC:     1 xid 0850084b complete (24 bytes received)
        rpc.nfsd-3932  [002] ....    44.098854: xprt_release: RPC:     1 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.098864: xprt_alloc_slot: RPC:     2 reserved req ffff8800d8cc6800 xid 0950084b
        rpc.nfsd-3932  [002] ....    44.098865: xprt_prepare_transmit: RPC:     2 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.098865: xprt_transmit: RPC:     2 xprt_transmit(44)
        rpc.nfsd-3932  [002] ....    44.098870: xs_local_send_request: RPC:       xs_local_send_request(44) = 0
        rpc.nfsd-3932  [002] ....    44.098870: xprt_transmit: RPC:     2 xmit complete
         rpcbind-1829  [001] ..s.    44.098915: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.098917: xprt_complete_rqst: RPC:     2 xid 0950084b complete (24 bytes received)
        rpc.nfsd-3932  [002] ....    44.098968: xprt_release: RPC:     2 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.098971: xprt_alloc_slot: RPC:     3 reserved req ffff8800d8cc6800 xid 0a50084b
        rpc.nfsd-3932  [002] ....    44.098972: xprt_prepare_transmit: RPC:     3 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.098973: xprt_transmit: RPC:     3 xprt_transmit(68)
        rpc.nfsd-3932  [002] ....    44.098978: xs_local_send_request: RPC:       xs_local_send_request(68) = 0
        rpc.nfsd-3932  [002] ....    44.098978: xprt_transmit: RPC:     3 xmit complete
         rpcbind-1829  [001] ..s.    44.099029: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.099031: xprt_complete_rqst: RPC:     3 xid 0a50084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.099083: xprt_release: RPC:     3 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.099086: xprt_alloc_slot: RPC:     4 reserved req ffff8800d8cc6800 xid 0b50084b
        rpc.nfsd-3932  [002] ....    44.099086: xprt_prepare_transmit: RPC:     4 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.099087: xprt_transmit: RPC:     4 xprt_transmit(68)
        rpc.nfsd-3932  [002] ....    44.099091: xs_local_send_request: RPC:       xs_local_send_request(68) = 0
        rpc.nfsd-3932  [002] ....    44.099092: xprt_transmit: RPC:     4 xmit complete
         rpcbind-1829  [001] ..s.    44.099145: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.099147: xprt_complete_rqst: RPC:     4 xid 0b50084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.099155: xprt_release: RPC:     4 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.099157: xprt_alloc_slot: RPC:     5 reserved req ffff8800d8cc6800 xid 0c50084b
        rpc.nfsd-3932  [002] ....    44.099157: xprt_prepare_transmit: RPC:     5 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.099158: xprt_transmit: RPC:     5 xprt_transmit(68)
        rpc.nfsd-3932  [002] ....    44.099161: xs_local_send_request: RPC:       xs_local_send_request(68) = 0
        rpc.nfsd-3932  [002] ....    44.099162: xprt_transmit: RPC:     5 xmit complete
         rpcbind-1829  [001] ..s.    44.099172: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.099173: xprt_complete_rqst: RPC:     5 xid 0c50084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.099180: xprt_release: RPC:     5 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.099181: xprt_alloc_slot: RPC:     6 reserved req ffff8800d8cc6800 xid 0d50084b
        rpc.nfsd-3932  [002] ....    44.099181: xprt_prepare_transmit: RPC:     6 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.099182: xprt_transmit: RPC:     6 xprt_transmit(68)
        rpc.nfsd-3932  [002] ....    44.099184: xs_local_send_request: RPC:       xs_local_send_request(68) = 0
        rpc.nfsd-3932  [002] ....    44.099184: xprt_transmit: RPC:     6 xmit complete
         rpcbind-1829  [001] ..s.    44.099204: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.099205: xprt_complete_rqst: RPC:     6 xid 0d50084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.099212: xprt_release: RPC:     6 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.099213: xprt_alloc_slot: RPC:     7 reserved req ffff8800d8cc6800 xid 0e50084b
        rpc.nfsd-3932  [002] ....    44.099214: xprt_prepare_transmit: RPC:     7 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.099214: xprt_transmit: RPC:     7 xprt_transmit(68)
        rpc.nfsd-3932  [002] ....    44.099217: xs_local_send_request: RPC:       xs_local_send_request(68) = 0
        rpc.nfsd-3932  [002] ....    44.099217: xprt_transmit: RPC:     7 xmit complete
         rpcbind-1829  [001] ..s.    44.099228: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.099229: xprt_complete_rqst: RPC:     7 xid 0e50084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.099236: xprt_release: RPC:     7 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.099241: xprt_alloc_slot: RPC:     8 reserved req ffff8800d8cc6800 xid 0f50084b
        rpc.nfsd-3932  [002] ....    44.099241: xprt_prepare_transmit: RPC:     8 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.099242: xprt_transmit: RPC:     8 xprt_transmit(84)
        rpc.nfsd-3932  [002] ....    44.099244: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-3932  [002] ....    44.099244: xprt_transmit: RPC:     8 xmit complete
         rpcbind-1829  [001] ..s.    44.099261: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.099263: xprt_complete_rqst: RPC:     8 xid 0f50084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.099270: xprt_release: RPC:     8 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.099271: xprt_alloc_slot: RPC:     9 reserved req ffff8800d8cc6800 xid 1050084b
        rpc.nfsd-3932  [002] ....    44.099272: xprt_prepare_transmit: RPC:     9 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.099272: xprt_transmit: RPC:     9 xprt_transmit(84)
        rpc.nfsd-3932  [002] ....    44.099275: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-3932  [002] ....    44.099275: xprt_transmit: RPC:     9 xmit complete
         rpcbind-1829  [001] ..s.    44.099290: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.099291: xprt_complete_rqst: RPC:     9 xid 1050084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.099298: xprt_release: RPC:     9 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.099300: xprt_alloc_slot: RPC:    10 reserved req ffff8800d8cc6800 xid 1150084b
        rpc.nfsd-3932  [002] ....    44.099301: xprt_prepare_transmit: RPC:    10 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.099301: xprt_transmit: RPC:    10 xprt_transmit(84)
        rpc.nfsd-3932  [002] ....    44.099303: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-3932  [002] ....    44.099304: xprt_transmit: RPC:    10 xmit complete
         rpcbind-1829  [001] ..s.    44.099318: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.099320: xprt_complete_rqst: RPC:    10 xid 1150084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.099327: xprt_release: RPC:    10 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.099329: xprt_alloc_slot: RPC:    11 reserved req ffff8800d8cc6800 xid 1250084b
        rpc.nfsd-3932  [002] ....    44.099329: xprt_prepare_transmit: RPC:    11 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.099330: xprt_transmit: RPC:    11 xprt_transmit(84)
        rpc.nfsd-3932  [002] ....    44.099332: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-3932  [002] ....    44.099332: xprt_transmit: RPC:    11 xmit complete
         rpcbind-1829  [001] ..s.    44.099344: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.099345: xprt_complete_rqst: RPC:    11 xid 1250084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.099352: xprt_release: RPC:    11 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.099354: xprt_alloc_slot: RPC:    12 reserved req ffff8800d8cc6800 xid 1350084b
        rpc.nfsd-3932  [002] ....    44.099354: xprt_prepare_transmit: RPC:    12 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.099355: xprt_transmit: RPC:    12 xprt_transmit(84)
        rpc.nfsd-3932  [002] ....    44.099357: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-3932  [002] ....    44.099357: xprt_transmit: RPC:    12 xmit complete
         rpcbind-1829  [001] ..s.    44.099368: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.099369: xprt_complete_rqst: RPC:    12 xid 1350084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.099376: xprt_release: RPC:    12 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.099394: xprt_alloc_slot: RPC:    13 reserved req ffff8800d8cc6800 xid 1450084b
        rpc.nfsd-3932  [002] ....    44.099395: xprt_prepare_transmit: RPC:    13 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.099395: xprt_transmit: RPC:    13 xprt_transmit(84)
        rpc.nfsd-3932  [002] ....    44.099399: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-3932  [002] ....    44.099399: xprt_transmit: RPC:    13 xmit complete
         rpcbind-1829  [001] ..s.    44.099405: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.099406: xprt_complete_rqst: RPC:    13 xid 1450084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.099413: xprt_release: RPC:    13 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.099414: xprt_alloc_slot: RPC:    14 reserved req ffff8800d8cc6800 xid 1550084b
        rpc.nfsd-3932  [002] ....    44.099415: xprt_prepare_transmit: RPC:    14 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.099415: xprt_transmit: RPC:    14 xprt_transmit(84)
        rpc.nfsd-3932  [002] ....    44.099418: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-3932  [002] ....    44.099418: xprt_transmit: RPC:    14 xmit complete
         rpcbind-1829  [001] ..s.    44.099424: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.099425: xprt_complete_rqst: RPC:    14 xid 1550084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.099431: xprt_release: RPC:    14 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.099433: xprt_alloc_slot: RPC:    15 reserved req ffff8800d8cc6800 xid 1650084b
        rpc.nfsd-3932  [002] ....    44.099433: xprt_prepare_transmit: RPC:    15 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.099434: xprt_transmit: RPC:    15 xprt_transmit(84)
        rpc.nfsd-3932  [002] ....    44.099436: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-3932  [002] ....    44.099436: xprt_transmit: RPC:    15 xmit complete
         rpcbind-1829  [001] ..s.    44.099443: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.099443: xprt_complete_rqst: RPC:    15 xid 1650084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.099449: xprt_release: RPC:    15 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.099451: xprt_alloc_slot: RPC:    16 reserved req ffff8800d8cc6800 xid 1750084b
        rpc.nfsd-3932  [002] ....    44.099451: xprt_prepare_transmit: RPC:    16 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.099452: xprt_transmit: RPC:    16 xprt_transmit(84)
        rpc.nfsd-3932  [002] ....    44.099454: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-3932  [002] ....    44.099455: xprt_transmit: RPC:    16 xmit complete
         rpcbind-1829  [001] ..s.    44.099461: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.099461: xprt_complete_rqst: RPC:    16 xid 1750084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.099467: xprt_release: RPC:    16 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.099469: xprt_alloc_slot: RPC:    17 reserved req ffff8800d8cc6800 xid 1850084b
        rpc.nfsd-3932  [002] ....    44.099469: xprt_prepare_transmit: RPC:    17 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.099470: xprt_transmit: RPC:    17 xprt_transmit(84)
        rpc.nfsd-3932  [002] ....    44.099472: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-3932  [002] ....    44.099472: xprt_transmit: RPC:    17 xmit complete
         rpcbind-1829  [001] ..s.    44.099479: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.099479: xprt_complete_rqst: RPC:    17 xid 1850084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.099485: xprt_release: RPC:    17 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.100111: xprt_alloc_slot: RPC:    18 reserved req ffff8800d8cc6800 xid 1950084b
        rpc.nfsd-3932  [002] ....    44.100112: xprt_prepare_transmit: RPC:    18 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.100113: xprt_transmit: RPC:    18 xprt_transmit(80)
        rpc.nfsd-3932  [002] ....    44.100118: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-3932  [002] ....    44.100118: xprt_transmit: RPC:    18 xmit complete
         rpcbind-1829  [001] ..s.    44.100124: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.100125: xprt_complete_rqst: RPC:    18 xid 1950084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.100132: xprt_release: RPC:    18 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.100134: xprt_alloc_slot: RPC:    19 reserved req ffff8800d8cc6800 xid 1a50084b
        rpc.nfsd-3932  [002] ....    44.100135: xprt_prepare_transmit: RPC:    19 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.100135: xprt_transmit: RPC:    19 xprt_transmit(80)
        rpc.nfsd-3932  [002] ....    44.100138: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-3932  [002] ....    44.100138: xprt_transmit: RPC:    19 xmit complete
         rpcbind-1829  [001] ..s.    44.100144: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.100145: xprt_complete_rqst: RPC:    19 xid 1a50084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.100151: xprt_release: RPC:    19 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.100153: xprt_alloc_slot: RPC:    20 reserved req ffff8800d8cc6800 xid 1b50084b
        rpc.nfsd-3932  [002] ....    44.100153: xprt_prepare_transmit: RPC:    20 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.100154: xprt_transmit: RPC:    20 xprt_transmit(80)
        rpc.nfsd-3932  [002] ....    44.100156: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-3932  [002] ....    44.100156: xprt_transmit: RPC:    20 xmit complete
         rpcbind-1829  [001] ..s.    44.100162: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.100163: xprt_complete_rqst: RPC:    20 xid 1b50084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.100169: xprt_release: RPC:    20 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.100171: xprt_alloc_slot: RPC:    21 reserved req ffff8800d8cc6800 xid 1c50084b
        rpc.nfsd-3932  [002] ....    44.100171: xprt_prepare_transmit: RPC:    21 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.100172: xprt_transmit: RPC:    21 xprt_transmit(80)
        rpc.nfsd-3932  [002] ....    44.100174: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-3932  [002] ....    44.100174: xprt_transmit: RPC:    21 xmit complete
         rpcbind-1829  [001] ..s.    44.100180: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.100181: xprt_complete_rqst: RPC:    21 xid 1c50084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.100187: xprt_release: RPC:    21 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.100189: xprt_alloc_slot: RPC:    22 reserved req ffff8800d8cc6800 xid 1d50084b
        rpc.nfsd-3932  [002] ....    44.100189: xprt_prepare_transmit: RPC:    22 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.100190: xprt_transmit: RPC:    22 xprt_transmit(80)
        rpc.nfsd-3932  [002] ....    44.100192: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-3932  [002] ....    44.100192: xprt_transmit: RPC:    22 xmit complete
         rpcbind-1829  [001] ..s.    44.100198: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.100199: xprt_complete_rqst: RPC:    22 xid 1d50084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.100205: xprt_release: RPC:    22 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.100223: xprt_alloc_slot: RPC:    23 reserved req ffff8800d8cc6800 xid 1e50084b
        rpc.nfsd-3932  [002] ....    44.100223: xprt_prepare_transmit: RPC:    23 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.100224: xprt_transmit: RPC:    23 xprt_transmit(80)
        rpc.nfsd-3932  [002] ....    44.100227: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-3932  [002] ....    44.100227: xprt_transmit: RPC:    23 xmit complete
         rpcbind-1829  [001] ..s.    44.100233: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.100234: xprt_complete_rqst: RPC:    23 xid 1e50084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.100241: xprt_release: RPC:    23 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.100242: xprt_alloc_slot: RPC:    24 reserved req ffff8800d8cc6800 xid 1f50084b
        rpc.nfsd-3932  [002] ....    44.100243: xprt_prepare_transmit: RPC:    24 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.100243: xprt_transmit: RPC:    24 xprt_transmit(80)
        rpc.nfsd-3932  [002] ....    44.100246: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-3932  [002] ....    44.100246: xprt_transmit: RPC:    24 xmit complete
         rpcbind-1829  [001] ..s.    44.100252: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.100253: xprt_complete_rqst: RPC:    24 xid 1f50084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.100259: xprt_release: RPC:    24 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.100260: xprt_alloc_slot: RPC:    25 reserved req ffff8800d8cc6800 xid 2050084b
        rpc.nfsd-3932  [002] ....    44.100261: xprt_prepare_transmit: RPC:    25 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.100261: xprt_transmit: RPC:    25 xprt_transmit(80)
        rpc.nfsd-3932  [002] ....    44.100263: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-3932  [002] ....    44.100264: xprt_transmit: RPC:    25 xmit complete
         rpcbind-1829  [001] ..s.    44.100270: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.100270: xprt_complete_rqst: RPC:    25 xid 2050084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.100276: xprt_release: RPC:    25 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.100278: xprt_alloc_slot: RPC:    26 reserved req ffff8800d8cc6800 xid 2150084b
        rpc.nfsd-3932  [002] ....    44.100278: xprt_prepare_transmit: RPC:    26 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.100279: xprt_transmit: RPC:    26 xprt_transmit(80)
        rpc.nfsd-3932  [002] ....    44.100281: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-3932  [002] ....    44.100281: xprt_transmit: RPC:    26 xmit complete
         rpcbind-1829  [001] ..s.    44.100287: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.100288: xprt_complete_rqst: RPC:    26 xid 2150084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.100294: xprt_release: RPC:    26 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.100296: xprt_alloc_slot: RPC:    27 reserved req ffff8800d8cc6800 xid 2250084b
        rpc.nfsd-3932  [002] ....    44.100296: xprt_prepare_transmit: RPC:    27 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.100297: xprt_transmit: RPC:    27 xprt_transmit(80)
        rpc.nfsd-3932  [002] ....    44.100299: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-3932  [002] ....    44.100299: xprt_transmit: RPC:    27 xmit complete
         rpcbind-1829  [001] ..s.    44.100305: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.100306: xprt_complete_rqst: RPC:    27 xid 2250084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.100312: xprt_release: RPC:    27 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.100403: xprt_alloc_slot: RPC:    28 reserved req ffff8800d8cc6800 xid 2350084b
        rpc.nfsd-3932  [002] ....    44.100404: xprt_prepare_transmit: RPC:    28 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.100405: xprt_transmit: RPC:    28 xprt_transmit(68)
        rpc.nfsd-3932  [002] ....    44.100409: xs_local_send_request: RPC:       xs_local_send_request(68) = 0
        rpc.nfsd-3932  [002] ....    44.100409: xprt_transmit: RPC:    28 xmit complete
         rpcbind-1829  [001] ..s.    44.100415: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.100416: xprt_complete_rqst: RPC:    28 xid 2350084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.100422: xprt_release: RPC:    28 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.100423: xprt_alloc_slot: RPC:    29 reserved req ffff8800d8cc6800 xid 2450084b
        rpc.nfsd-3932  [002] ....    44.100424: xprt_prepare_transmit: RPC:    29 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.100424: xprt_transmit: RPC:    29 xprt_transmit(68)
        rpc.nfsd-3932  [002] ....    44.100427: xs_local_send_request: RPC:       xs_local_send_request(68) = 0
        rpc.nfsd-3932  [002] ....    44.100427: xprt_transmit: RPC:    29 xmit complete
         rpcbind-1829  [001] ..s.    44.100432: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.100433: xprt_complete_rqst: RPC:    29 xid 2450084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.100439: xprt_release: RPC:    29 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.100440: xprt_alloc_slot: RPC:    30 reserved req ffff8800d8cc6800 xid 2550084b
        rpc.nfsd-3932  [002] ....    44.100441: xprt_prepare_transmit: RPC:    30 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.100441: xprt_transmit: RPC:    30 xprt_transmit(68)
        rpc.nfsd-3932  [002] ....    44.100443: xs_local_send_request: RPC:       xs_local_send_request(68) = 0
        rpc.nfsd-3932  [002] ....    44.100444: xprt_transmit: RPC:    30 xmit complete
         rpcbind-1829  [001] ..s.    44.100450: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.100450: xprt_complete_rqst: RPC:    30 xid 2550084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.100456: xprt_release: RPC:    30 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.100463: xprt_alloc_slot: RPC:    31 reserved req ffff8800d8cc6800 xid 2650084b
        rpc.nfsd-3932  [002] ....    44.100463: xprt_prepare_transmit: RPC:    31 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.100464: xprt_transmit: RPC:    31 xprt_transmit(88)
        rpc.nfsd-3932  [002] ....    44.100467: xs_local_send_request: RPC:       xs_local_send_request(88) = 0
        rpc.nfsd-3932  [002] ....    44.100467: xprt_transmit: RPC:    31 xmit complete
         rpcbind-1829  [001] ..s.    44.100473: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.100474: xprt_complete_rqst: RPC:    31 xid 2650084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.100480: xprt_release: RPC:    31 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.100482: xprt_alloc_slot: RPC:    32 reserved req ffff8800d8cc6800 xid 2750084b
        rpc.nfsd-3932  [002] ....    44.100482: xprt_prepare_transmit: RPC:    32 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.100483: xprt_transmit: RPC:    32 xprt_transmit(88)
        rpc.nfsd-3932  [002] ....    44.100485: xs_local_send_request: RPC:       xs_local_send_request(88) = 0
        rpc.nfsd-3932  [002] ....    44.100485: xprt_transmit: RPC:    32 xmit complete
         rpcbind-1829  [001] ..s.    44.100492: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.100493: xprt_complete_rqst: RPC:    32 xid 2750084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.100498: xprt_release: RPC:    32 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.100500: xprt_alloc_slot: RPC:    33 reserved req ffff8800d8cc6800 xid 2850084b
        rpc.nfsd-3932  [002] ....    44.100501: xprt_prepare_transmit: RPC:    33 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.100501: xprt_transmit: RPC:    33 xprt_transmit(88)
        rpc.nfsd-3932  [002] ....    44.100504: xs_local_send_request: RPC:       xs_local_send_request(88) = 0
        rpc.nfsd-3932  [002] ....    44.100504: xprt_transmit: RPC:    33 xmit complete
         rpcbind-1829  [001] ..s.    44.100510: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.100511: xprt_complete_rqst: RPC:    33 xid 2850084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.100517: xprt_release: RPC:    33 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.100526: xprt_alloc_slot: RPC:    34 reserved req ffff8800d8cc6800 xid 2950084b
        rpc.nfsd-3932  [002] ....    44.100527: xprt_prepare_transmit: RPC:    34 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.100528: xprt_transmit: RPC:    34 xprt_transmit(88)
        rpc.nfsd-3932  [002] ....    44.100530: xs_local_send_request: RPC:       xs_local_send_request(88) = 0
        rpc.nfsd-3932  [002] ....    44.100531: xprt_transmit: RPC:    34 xmit complete
         rpcbind-1829  [001] ..s.    44.100537: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.100538: xprt_complete_rqst: RPC:    34 xid 2950084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.100544: xprt_release: RPC:    34 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.100546: xprt_alloc_slot: RPC:    35 reserved req ffff8800d8cc6800 xid 2a50084b
        rpc.nfsd-3932  [002] ....    44.100546: xprt_prepare_transmit: RPC:    35 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.100547: xprt_transmit: RPC:    35 xprt_transmit(88)
        rpc.nfsd-3932  [002] ....    44.100549: xs_local_send_request: RPC:       xs_local_send_request(88) = 0
        rpc.nfsd-3932  [002] ....    44.100549: xprt_transmit: RPC:    35 xmit complete
         rpcbind-1829  [001] ..s.    44.100556: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.100556: xprt_complete_rqst: RPC:    35 xid 2a50084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.100562: xprt_release: RPC:    35 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.100564: xprt_alloc_slot: RPC:    36 reserved req ffff8800d8cc6800 xid 2b50084b
        rpc.nfsd-3932  [002] ....    44.100565: xprt_prepare_transmit: RPC:    36 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.100565: xprt_transmit: RPC:    36 xprt_transmit(88)
        rpc.nfsd-3932  [002] ....    44.100567: xs_local_send_request: RPC:       xs_local_send_request(88) = 0
        rpc.nfsd-3932  [002] ....    44.100568: xprt_transmit: RPC:    36 xmit complete
         rpcbind-1829  [001] ..s.    44.100574: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.100575: xprt_complete_rqst: RPC:    36 xid 2b50084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.100581: xprt_release: RPC:    36 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.100587: xprt_alloc_slot: RPC:    37 reserved req ffff8800d8cc6800 xid 2c50084b
        rpc.nfsd-3932  [002] ....    44.100587: xprt_prepare_transmit: RPC:    37 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.100588: xprt_transmit: RPC:    37 xprt_transmit(84)
        rpc.nfsd-3932  [002] ....    44.100590: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-3932  [002] ....    44.100590: xprt_transmit: RPC:    37 xmit complete
         rpcbind-1829  [001] ..s.    44.100597: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.100597: xprt_complete_rqst: RPC:    37 xid 2c50084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.100604: xprt_release: RPC:    37 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.100605: xprt_alloc_slot: RPC:    38 reserved req ffff8800d8cc6800 xid 2d50084b
        rpc.nfsd-3932  [002] ....    44.100606: xprt_prepare_transmit: RPC:    38 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.100606: xprt_transmit: RPC:    38 xprt_transmit(84)
        rpc.nfsd-3932  [002] ....    44.100608: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-3932  [002] ....    44.100609: xprt_transmit: RPC:    38 xmit complete
         rpcbind-1829  [001] ..s.    44.100615: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.100616: xprt_complete_rqst: RPC:    38 xid 2d50084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.100622: xprt_release: RPC:    38 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.100623: xprt_alloc_slot: RPC:    39 reserved req ffff8800d8cc6800 xid 2e50084b
        rpc.nfsd-3932  [002] ....    44.100624: xprt_prepare_transmit: RPC:    39 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.100624: xprt_transmit: RPC:    39 xprt_transmit(84)
        rpc.nfsd-3932  [002] ....    44.100626: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-3932  [002] ....    44.100627: xprt_transmit: RPC:    39 xmit complete
         rpcbind-1829  [001] ..s.    44.100633: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.100634: xprt_complete_rqst: RPC:    39 xid 2e50084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.100640: xprt_release: RPC:    39 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.100647: xprt_alloc_slot: RPC:    40 reserved req ffff8800d8cc6800 xid 2f50084b
        rpc.nfsd-3932  [002] ....    44.100648: xprt_prepare_transmit: RPC:    40 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.100648: xprt_transmit: RPC:    40 xprt_transmit(84)
        rpc.nfsd-3932  [002] ....    44.100651: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-3932  [002] ....    44.100651: xprt_transmit: RPC:    40 xmit complete
         rpcbind-1829  [001] ..s.    44.100657: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.100658: xprt_complete_rqst: RPC:    40 xid 2f50084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.100664: xprt_release: RPC:    40 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.100666: xprt_alloc_slot: RPC:    41 reserved req ffff8800d8cc6800 xid 3050084b
        rpc.nfsd-3932  [002] ....    44.100666: xprt_prepare_transmit: RPC:    41 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.100667: xprt_transmit: RPC:    41 xprt_transmit(84)
        rpc.nfsd-3932  [002] ....    44.100669: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-3932  [002] ....    44.100669: xprt_transmit: RPC:    41 xmit complete
         rpcbind-1829  [001] ..s.    44.100675: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.100676: xprt_complete_rqst: RPC:    41 xid 3050084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.100682: xprt_release: RPC:    41 release request ffff8800d8cc6800
        rpc.nfsd-3932  [002] ....    44.100684: xprt_alloc_slot: RPC:    42 reserved req ffff8800d8cc6800 xid 3150084b
        rpc.nfsd-3932  [002] ....    44.100684: xprt_prepare_transmit: RPC:    42 xprt_prepare_transmit
        rpc.nfsd-3932  [002] ....    44.100685: xprt_transmit: RPC:    42 xprt_transmit(84)
        rpc.nfsd-3932  [002] ....    44.100687: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-3932  [002] ....    44.100687: xprt_transmit: RPC:    42 xmit complete
         rpcbind-1829  [001] ..s.    44.100693: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1829  [001] ..s.    44.100694: xprt_complete_rqst: RPC:    42 xid 3150084b complete (28 bytes received)
        rpc.nfsd-3932  [002] ....    44.100700: xprt_release: RPC:    42 release request ffff8800d8cc6800
          <idle>-0     [003] ..s.    52.302416: inet_bind_hash: add 22
          <idle>-0     [003] ..s.    52.302456: <stack trace>
 => __inet_inherit_port
 => tcp_v4_syn_recv_sock
 => tcp_check_req
 => tcp_v4_do_rcv
 => tcp_v4_rcv
 => ip_local_deliver_finish
 => __netif_receive_skb_core
 => netif_receive_skb
 => netif_receive_skb_internal
 => br_handle_frame_finish
 => delay_tsc
 => br_handle_frame
 => br_handle_frame
 => __netif_receive_skb_core
 => read_tsc
 => netif_receive_skb_internal
 => napi_gro_receive
 => e1000_clean_rx_irq
 => e1000_clean
 => ack_ioapic_level
 => do_IRQ
 => net_rx_action
 => __do_softirq
 => ack_ioapic_level
 => irq_exit
 => do_IRQ
 => ret_from_intr
 => cpuidle_enter_state
 => cpuidle_enter_state
 => cpu_startup_entry
 => start_secondary
   kworker/u32:2-105   [001] ....    77.750302: xs_setup_tcp: RPC:       set up xprt to 192.168.23.22 (port 55201) via tcp
   kworker/u32:2-105   [001] ....    77.750310: xprt_create_transport: RPC:       created transport ffff8804082fb000 with 65536 slots
    kworker/1:1H-133   [001] ....    77.750352: xprt_alloc_slot: RPC:    43 reserved req ffff88040ab08200 xid 83da2dc3
    kworker/1:1H-133   [001] ....    77.750356: xprt_connect: RPC:    43 xprt_connect xprt ffff8804082fb000 is not connected
    kworker/1:1H-133   [001] ....    77.750358: xs_connect: RPC:       xs_connect scheduled xprt ffff8804082fb000
    kworker/1:1H-133   [001] ..s.    77.750365: inet_csk_get_port: snum 737
    kworker/1:1H-133   [001] ..s.    77.750374: <stack trace>
 => inet_addr_type
 => inet_bind
 => xs_bind
 => sock_setsockopt
 => __sock_create
 => xs_create_sock.isra.19
 => xs_tcp_setup_socket
 => process_one_work
 => worker_thread
 => worker_thread
 => kthread
 => kthread
 => ret_from_fork
 => kthread
    kworker/1:1H-133   [001] ..s.    77.750374: inet_bind_hash: add 737
    kworker/1:1H-133   [001] ..s.    77.750377: <stack trace>
 => inet_csk_get_port
 => inet_addr_type
 => inet_bind
 => xs_bind
 => sock_setsockopt
 => __sock_create
 => xs_create_sock.isra.19
 => xs_tcp_setup_socket
 => process_one_work
 => worker_thread
 => worker_thread
 => kthread
 => kthread
 => ret_from_fork
 => kthread
    kworker/1:1H-133   [001] ....    77.750378: xs_bind: RPC:       xs_bind 4.136.255.255:737: ok (0)
    kworker/1:1H-133   [001] ....    77.750379: xs_tcp_setup_socket: RPC:       worker connecting xprt ffff8804082fb000 via tcp to 192.168.23.22 (port 55201)
    kworker/1:1H-133   [001] ....    77.750397: xs_tcp_setup_socket: xprt=ffff8804082fb000 sock=ffff880408a47d40 status=-115
    kworker/1:1H-133   [001] ....    77.750397: xs_tcp_setup_socket: RPC:       ffff8804082fb000 connect status 115 connected 0 sock state 2
 fail2ban-server-4683  [002] ..s.    77.750554: xs_tcp_state_change: RPC:       xs_tcp_state_change client ffff8804082fb000...
 fail2ban-server-4683  [002] ..s.    77.750555: xs_tcp_state_change: RPC:       state 1 conn 0 dead 0 zapped 1 sk_shutdown 0
    kworker/2:1H-126   [002] ....    77.750571: xprt_connect_status: RPC:    43 xprt_connect_status: retrying
    kworker/2:1H-126   [002] ....    77.750572: xprt_prepare_transmit: RPC:    43 xprt_prepare_transmit
    kworker/2:1H-126   [002] ....    77.750573: xprt_transmit: RPC:    43 xprt_transmit(72)
    kworker/2:1H-126   [002] ....    77.750581: xs_tcp_send_request: RPC:       xs_tcp_send_request(72) = 0
    kworker/2:1H-126   [002] ....    77.750581: xprt_transmit: RPC:    43 xmit complete
 fail2ban-server-4683  [002] ..s.    77.750798: xs_tcp_data_ready: RPC:       xs_tcp_data_ready...
 fail2ban-server-4683  [002] ..s.    77.750799: xs_tcp_data_recv: RPC:       xs_tcp_data_recv started
 fail2ban-server-4683  [002] ..s.    77.750800: xs_tcp_data_recv: RPC:       reading TCP record fragment of length 24
 fail2ban-server-4683  [002] ..s.    77.750800: xs_tcp_data_recv: RPC:       reading XID (4 bytes)
 fail2ban-server-4683  [002] ..s.    77.750801: xs_tcp_data_recv: RPC:       reading request with XID 83da2dc3
 fail2ban-server-4683  [002] ..s.    77.750801: xs_tcp_data_recv: RPC:       reading CALL/REPLY flag (4 bytes)
 fail2ban-server-4683  [002] ..s.    77.750801: xs_tcp_data_recv: RPC:       read reply XID 83da2dc3
 fail2ban-server-4683  [002] ..s.    77.750802: xs_tcp_data_recv: RPC:       XID 83da2dc3 read 16 bytes
 fail2ban-server-4683  [002] ..s.    77.750803: xs_tcp_data_recv: RPC:       xprt = ffff8804082fb000, tcp_copied = 24, tcp_offset = 24, tcp_reclen = 24
 fail2ban-server-4683  [002] ..s.    77.750803: xprt_complete_rqst: RPC:    43 xid 83da2dc3 complete (24 bytes received)
 fail2ban-server-4683  [002] .Ns.    77.750805: xs_tcp_data_recv: RPC:       xs_tcp_data_recv done
    kworker/2:1H-126   [002] ....    77.750813: xprt_release: RPC:    43 release request ffff88040ab08200
          <idle>-0     [003] ..s.    94.613312: inet_bind_hash: add 22
          <idle>-0     [003] ..s.    94.613354: <stack trace>
 => __inet_inherit_port
 => tcp_v4_syn_recv_sock
 => tcp_check_req
 => tcp_v4_do_rcv
 => tcp_v4_rcv
 => ip_local_deliver_finish
 => __netif_receive_skb_core
 => netif_receive_skb
 => netif_receive_skb_internal
 => br_handle_frame_finish
 => br_handle_frame
 => br_handle_frame
 => __netif_receive_skb_core
 => read_tsc
 => netif_receive_skb_internal
 => napi_gro_receive
 => e1000_clean_rx_irq
 => e1000_clean
 => net_rx_action
 => add_interrupt_randomness
 => __do_softirq
 => ack_ioapic_level
 => irq_exit
 => do_IRQ
 => ret_from_intr
 => cpuidle_enter_state
 => cpuidle_enter_state
 => cpu_startup_entry
 => start_secondary
          <idle>-0     [003] ..s.    98.776868: inet_bind_hash: add 10993
          <idle>-0     [003] ..s.    98.776910: <stack trace>
 => __inet_inherit_port
 => tcp_v4_syn_recv_sock
 => tcp_check_req
 => tcp_v4_do_rcv
 => tcp_v4_rcv
 => ip_local_deliver_finish
 => __netif_receive_skb_core
 => netif_receive_skb
 => netif_receive_skb_internal
 => br_handle_frame_finish
 => br_handle_frame
 => br_handle_frame
 => __netif_receive_skb_core
 => read_tsc
 => netif_receive_skb_internal
 => napi_gro_receive
 => e1000_clean_rx_irq
 => e1000_clean
 => net_rx_action
 => add_interrupt_randomness
 => __do_softirq
 => ack_ioapic_level
 => irq_exit
 => do_IRQ
 => ret_from_intr
 => cpuidle_enter_state
 => cpuidle_enter_state
 => cpu_startup_entry
 => start_secondary
          <idle>-0     [003] ..s.   151.179778: inet_bind_hash: add 80
          <idle>-0     [003] ..s.   151.179822: <stack trace>
 => __inet_inherit_port
 => tcp_v4_syn_recv_sock
 => tcp_v6_syn_recv_sock
 => ipt_do_table
 => nf_conntrack_in
 => tcp_check_req
 => tcp_v4_do_rcv
 => tcp_v4_rcv
 => ip_local_deliver_finish
 => __netif_receive_skb_core
 => netif_receive_skb
 => netif_receive_skb_internal
 => br_handle_frame_finish
 => br_handle_frame
 => br_handle_frame
 => __netif_receive_skb_core
 => read_tsc
 => netif_receive_skb_internal
 => napi_gro_receive
 => e1000_clean_rx_irq
 => e1000_clean
 => try_to_wake_up
 => ktime_get
 => net_rx_action
 => add_interrupt_randomness
 => __do_softirq
 => ack_ioapic_level
 => irq_exit
 => do_IRQ
 => ret_from_intr
 => cpuidle_enter_state
 => cpuidle_enter_state
 => cpu_startup_entry
 => start_secondary
          <idle>-0     [003] ..s.   172.217453: inet_bind_hash: add 10993
          <idle>-0     [003] ..s.   172.217496: <stack trace>
 => __inet_inherit_port
 => tcp_v4_syn_recv_sock
 => tcp_check_req
 => tcp_v4_do_rcv
 => tcp_v4_rcv
 => ip_local_deliver_finish
 => __netif_receive_skb_core
 => netif_receive_skb
 => netif_receive_skb_internal
 => br_handle_frame_finish
 => br_handle_frame
 => br_handle_frame
 => __netif_receive_skb_core
 => read_tsc
 => netif_receive_skb_internal
 => napi_gro_receive
 => e1000_clean_rx_irq
 => e1000_clean
 => net_rx_action
 => add_interrupt_randomness
 => __do_softirq
 => ack_ioapic_level
 => irq_exit
 => do_IRQ
 => ret_from_intr
 => cpuidle_enter_state
 => cpuidle_enter_state
 => cpu_startup_entry
 => start_secondary
          <idle>-0     [003] .Ns.   181.603150: inet_bind_hash: add 80
          <idle>-0     [003] .Ns.   181.603194: <stack trace>
 => __inet_inherit_port
 => tcp_v4_syn_recv_sock
 => tcp_v6_syn_recv_sock
 => ipt_do_table
 => nf_conntrack_in
 => tcp_check_req
 => tcp_v4_do_rcv
 => tcp_v4_rcv
 => ip_local_deliver_finish
 => __netif_receive_skb_core
 => netif_receive_skb
 => netif_receive_skb_internal
 => br_handle_frame_finish
 => br_handle_frame
 => br_handle_frame
 => __netif_receive_skb_core
 => read_tsc
 => netif_receive_skb_internal
 => napi_gro_receive
 => e1000_clean_rx_irq
 => e1000_clean
 => try_to_wake_up
 => net_rx_action
 => add_interrupt_randomness
 => __do_softirq
 => ack_ioapic_level
 => irq_exit
 => do_IRQ
 => ret_from_intr
 => cpuidle_enter_state
 => cpuidle_enter_state
 => cpu_startup_entry
 => start_secondary
          <idle>-0     [003] ..s.   234.638237: inet_bind_hash: add 10993
          <idle>-0     [003] ..s.   234.638281: <stack trace>
 => __inet_inherit_port
 => tcp_v4_syn_recv_sock
 => tcp_check_req
 => tcp_v4_do_rcv
 => tcp_v4_rcv
 => ip_local_deliver_finish
 => __netif_receive_skb_core
 => netif_receive_skb
 => netif_receive_skb_internal
 => br_handle_frame_finish
 => br_handle_frame
 => br_handle_frame
 => __netif_receive_skb_core
 => read_tsc
 => netif_receive_skb_internal
 => napi_gro_receive
 => e1000_clean_rx_irq
 => e1000_clean
 => try_to_wake_up
 => net_rx_action
 => add_interrupt_randomness
 => __do_softirq
 => ack_ioapic_level
 => irq_exit
 => do_IRQ
 => ret_from_intr
 => cpuidle_enter_state
 => cpuidle_enter_state
 => cpu_startup_entry
 => start_secondary
          <idle>-0     [003] ..s.   241.694872: inet_bind_hash: add 57000
          <idle>-0     [003] ..s.   241.694915: <stack trace>
 => __inet_inherit_port
 => tcp_v4_syn_recv_sock
 => tcp_check_req
 => tcp_v4_do_rcv
 => tcp_v4_rcv
 => ip_local_deliver_finish
 => __netif_receive_skb_core
 => netif_receive_skb
 => netif_receive_skb_internal
 => br_handle_frame_finish
 => br_handle_frame
 => br_handle_frame
 => __netif_receive_skb_core
 => read_tsc
 => netif_receive_skb_internal
 => napi_gro_receive
 => e1000_clean_rx_irq
 => e1000_clean
 => try_to_wake_up
 => net_rx_action
 => add_interrupt_randomness
 => __do_softirq
 => ack_ioapic_level
 => irq_exit
 => do_IRQ
 => ret_from_intr
 => cpuidle_enter_state
 => cpuidle_enter_state
 => cpu_startup_entry
 => start_secondary
          <idle>-0     [003] ..s.   242.308627: inet_bind_hash: add 10993
          <idle>-0     [003] ..s.   242.308670: <stack trace>
 => __inet_inherit_port
 => tcp_v4_syn_recv_sock
 => tcp_check_req
 => tcp_v4_do_rcv
 => tcp_v4_rcv
 => ip_local_deliver_finish
 => __netif_receive_skb_core
 => netif_receive_skb
 => netif_receive_skb_internal
 => br_handle_frame_finish
 => br_handle_frame
 => br_handle_frame
 => __netif_receive_skb_core
 => read_tsc
 => netif_receive_skb_internal
 => napi_gro_receive
 => e1000_clean_rx_irq
 => e1000_clean
 => try_to_wake_up
 => net_rx_action
 => add_interrupt_randomness
 => __do_softirq
 => ack_ioapic_level
 => irq_exit
 => do_IRQ
 => ret_from_intr
 => cpuidle_enter_state
 => cpuidle_enter_state
 => cpu_startup_entry
 => start_secondary
          <idle>-0     [003] ..s.   296.125499: inet_bind_hash: add 80
          <idle>-0     [003] ..s.   296.125543: <stack trace>
 => __inet_inherit_port
 => tcp_v4_syn_recv_sock
 => tcp_v6_syn_recv_sock
 => ipt_do_table
 => nf_conntrack_in
 => tcp_check_req
 => fib_validate_source
 => tcp_v4_do_rcv
 => tcp_v4_rcv
 => ip_local_deliver_finish
 => __netif_receive_skb_core
 => netif_receive_skb
 => netif_receive_skb_internal
 => br_handle_frame_finish
 => br_handle_frame
 => br_handle_frame
 => __netif_receive_skb_core
 => read_tsc
 => netif_receive_skb_internal
 => napi_gro_receive
 => e1000_clean_rx_irq
 => e1000_clean
 => net_rx_action
 => add_interrupt_randomness
 => __do_softirq
 => ack_ioapic_level
 => irq_exit
 => do_IRQ
 => ret_from_intr
 => cpuidle_enter_state
 => cpuidle_enter_state
 => cpu_startup_entry
 => start_secondary
          <idle>-0     [003] ..s.   304.196576: inet_bind_hash: add 80
          <idle>-0     [003] ..s.   304.196618: <stack trace>
 => __inet_inherit_port
 => tcp_v4_syn_recv_sock
 => tcp_v6_syn_recv_sock
 => ipt_do_table
 => nf_conntrack_in
 => tcp_check_req
 => fib_validate_source
 => tcp_v4_do_rcv
 => tcp_v4_rcv
 => ip_local_deliver_finish
 => __netif_receive_skb_core
 => netif_receive_skb
 => netif_receive_skb_internal
 => br_handle_frame_finish
 => br_handle_frame
 => br_handle_frame
 => __netif_receive_skb_core
 => read_tsc
 => netif_receive_skb_internal
 => napi_gro_receive
 => e1000_clean_rx_irq
 => e1000_clean
 => raise_softirq_irqoff
 => netif_schedule_queue
 => dev_watchdog
 => net_rx_action
 => add_interrupt_randomness
 => __do_softirq
 => ack_ioapic_level
 => irq_exit
 => do_IRQ
 => ret_from_intr
 => cpuidle_enter_state
 => cpuidle_enter_state
 => cpu_startup_entry
 => start_secondary
    kworker/2:1H-126   [002] ....   378.264745: xs_tcp_close: close %p
    kworker/2:1H-126   [002] ....   378.264748: xs_close: RPC:       xs_close xprt ffff8804082fb000
    kworker/2:1H-126   [002] ....   378.264786: xprt_disconnect_done: RPC:       disconnected transport ffff8804082fb000


-- Steve

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [REGRESSION] NFS is creating a hidden port (left over from xs_bind() )
  2015-06-12 15:50         ` Steven Rostedt
  (?)
@ 2015-06-12 15:53         ` Steven Rostedt
  -1 siblings, 0 replies; 77+ messages in thread
From: Steven Rostedt @ 2015-06-12 15:53 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: Trond Myklebust, Anna Schumaker, Linux NFS Mailing List,
	Linux Network Devel Mailing List, LKML, Andrew Morton

On Fri, 12 Jun 2015 11:50:38 -0400
Steven Rostedt <rostedt@goodmis.org> wrote:

> On Fri, 12 Jun 2015 11:34:20 -0400
> Steven Rostedt <rostedt@goodmis.org> wrote:
> 

> 
> And the issue goes away. That is, I watched the port go from
> ESTABLISHED to TIME_WAIT, and then gone, and theirs no hidden port.
> 

s/theirs/there's/

Time to go back to grammar school. :-p

-- Steve

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [REGRESSION] NFS is creating a hidden port (left over from xs_bind() )
@ 2015-06-18  3:08           ` Steven Rostedt
  0 siblings, 0 replies; 77+ messages in thread
From: Steven Rostedt @ 2015-06-18  3:08 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: Trond Myklebust, Anna Schumaker, Linux NFS Mailing List,
	Linux Network Devel Mailing List, LKML, Andrew Morton

On Fri, 12 Jun 2015 11:50:38 -0400
Steven Rostedt <rostedt@goodmis.org> wrote:

> I reverted the following commits:
> 
> c627d31ba0696cbd829437af2be2f2dee3546b1e
> 9e2b9f37760e129cee053cc7b6e7288acc2a7134
> caf4ccd4e88cf2795c927834bc488c8321437586
> 
> And the issue goes away. That is, I watched the port go from
> ESTABLISHED to TIME_WAIT, and then gone, and theirs no hidden port.
> 
> In fact, I watched the port with my portlist.c module, and it
> disappeared there too when it entered the TIME_WAIT state.
> 

I've been running v4.0.5 with the above commits reverted for 5 days
now, and there's still no hidden port appearing.

What's the status on this? Should those commits be reverted or is there
another solution to this bug?

-- Steve

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [REGRESSION] NFS is creating a hidden port (left over from xs_bind() )
@ 2015-06-18 19:24             ` Trond Myklebust
  0 siblings, 0 replies; 77+ messages in thread
From: Trond Myklebust @ 2015-06-18 19:24 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Eric Dumazet, Anna Schumaker, Linux NFS Mailing List,
	Linux Network Devel Mailing List, LKML, Andrew Morton

On Wed, Jun 17, 2015 at 11:08 PM, Steven Rostedt <rostedt@goodmis.org> wrote:
> On Fri, 12 Jun 2015 11:50:38 -0400
> Steven Rostedt <rostedt@goodmis.org> wrote:
>
>> I reverted the following commits:
>>
>> c627d31ba0696cbd829437af2be2f2dee3546b1e
>> 9e2b9f37760e129cee053cc7b6e7288acc2a7134
>> caf4ccd4e88cf2795c927834bc488c8321437586
>>
>> And the issue goes away. That is, I watched the port go from
>> ESTABLISHED to TIME_WAIT, and then gone, and theirs no hidden port.
>>
>> In fact, I watched the port with my portlist.c module, and it
>> disappeared there too when it entered the TIME_WAIT state.
>>

I've scanned those commits again and again, and I'm not seeing how we
could be introducing a socket leak there. The only suspect I can see
would be the NFS swap bugs that Jeff fixed a few weeks ago. Are you
using NFS swap?

> I've been running v4.0.5 with the above commits reverted for 5 days
> now, and there's still no hidden port appearing.
>
> What's the status on this? Should those commits be reverted or is there
> another solution to this bug?
>

I'm trying to reproduce, but I've had no luck yet.

Cheers
  Trond

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [REGRESSION] NFS is creating a hidden port (left over from xs_bind() )
@ 2015-06-18 19:49               ` Steven Rostedt
  0 siblings, 0 replies; 77+ messages in thread
From: Steven Rostedt @ 2015-06-18 19:49 UTC (permalink / raw)
  To: Trond Myklebust
  Cc: Eric Dumazet, Anna Schumaker, Linux NFS Mailing List,
	Linux Network Devel Mailing List, LKML, Andrew Morton

[-- Attachment #1: Type: text/plain, Size: 1845 bytes --]

On Thu, 18 Jun 2015 15:24:52 -0400
Trond Myklebust <trond.myklebust@primarydata.com> wrote:

> On Wed, Jun 17, 2015 at 11:08 PM, Steven Rostedt <rostedt@goodmis.org> wrote:
> > On Fri, 12 Jun 2015 11:50:38 -0400
> > Steven Rostedt <rostedt@goodmis.org> wrote:
> >
> >> I reverted the following commits:
> >>
> >> c627d31ba0696cbd829437af2be2f2dee3546b1e
> >> 9e2b9f37760e129cee053cc7b6e7288acc2a7134
> >> caf4ccd4e88cf2795c927834bc488c8321437586
> >>
> >> And the issue goes away. That is, I watched the port go from
> >> ESTABLISHED to TIME_WAIT, and then gone, and theirs no hidden port.
> >>
> >> In fact, I watched the port with my portlist.c module, and it
> >> disappeared there too when it entered the TIME_WAIT state.
> >>
> 
> I've scanned those commits again and again, and I'm not seeing how we
> could be introducing a socket leak there. The only suspect I can see
> would be the NFS swap bugs that Jeff fixed a few weeks ago. Are you
> using NFS swap?

Not that I'm aware of.

> 
> > I've been running v4.0.5 with the above commits reverted for 5 days
> > now, and there's still no hidden port appearing.
> >
> > What's the status on this? Should those commits be reverted or is there
> > another solution to this bug?
> >
> 
> I'm trying to reproduce, but I've had no luck yet.

It seems to happen with the connection to my wife's machine; her box
mounts two directories from mine via nfs:

This is what's in my wife's /etc/fstab file

goliath:/home/upload     /upload         nfs     auto,rw,intr,soft       0 0
goliath:/home/gallery    /gallery        nfs     auto,ro,intr,soft	 0 0

And here's what's in my /etc/exports file

/home/upload       wife(no_root_squash,no_all_squash,rw,sync,no_subtree_check)
/home/gallery      192.168.23.0/24(ro,sync,no_subtree_check)

Attached is my config.

-- Steve



[-- Attachment #2: config.gz --]
[-- Type: application/gzip, Size: 40609 bytes --]

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [REGRESSION] NFS is creating a hidden port (left over from xs_bind() )
@ 2015-06-18 22:50                 ` Jeff Layton
  0 siblings, 0 replies; 77+ messages in thread
From: Jeff Layton @ 2015-06-18 22:50 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Trond Myklebust, Eric Dumazet, Anna Schumaker,
	Linux NFS Mailing List, Linux Network Devel Mailing List, LKML,
	Andrew Morton

On Thu, 18 Jun 2015 15:49:14 -0400
Steven Rostedt <rostedt@goodmis.org> wrote:

> On Thu, 18 Jun 2015 15:24:52 -0400
> Trond Myklebust <trond.myklebust@primarydata.com> wrote:
> 
> > On Wed, Jun 17, 2015 at 11:08 PM, Steven Rostedt <rostedt@goodmis.org> wrote:
> > > On Fri, 12 Jun 2015 11:50:38 -0400
> > > Steven Rostedt <rostedt@goodmis.org> wrote:
> > >
> > >> I reverted the following commits:
> > >>
> > >> c627d31ba0696cbd829437af2be2f2dee3546b1e
> > >> 9e2b9f37760e129cee053cc7b6e7288acc2a7134
> > >> caf4ccd4e88cf2795c927834bc488c8321437586
> > >>
> > >> And the issue goes away. That is, I watched the port go from
> > >> ESTABLISHED to TIME_WAIT, and then gone, and theirs no hidden port.
> > >>
> > >> In fact, I watched the port with my portlist.c module, and it
> > >> disappeared there too when it entered the TIME_WAIT state.
> > >>
> > 
> > I've scanned those commits again and again, and I'm not seeing how we
> > could be introducing a socket leak there. The only suspect I can see
> > would be the NFS swap bugs that Jeff fixed a few weeks ago. Are you
> > using NFS swap?
> 
> Not that I'm aware of.
> 
> > 
> > > I've been running v4.0.5 with the above commits reverted for 5 days
> > > now, and there's still no hidden port appearing.
> > >
> > > What's the status on this? Should those commits be reverted or is there
> > > another solution to this bug?
> > >
> > 
> > I'm trying to reproduce, but I've had no luck yet.
> 
> It seems to happen with the connection to my wife's machine, and that
> is where my wife's box connects two directories via nfs:
> 
> This is what's in my wife's /etc/fstab directory
> 
> goliath:/home/upload     /upload         nfs     auto,rw,intr,soft       0 0
> goliath:/home/gallery    /gallery        nfs     auto,ro,intr,soft	 0 0
> 
> And here's what's in my /etc/exports directory
> 
> /home/upload       wife(no_root_squash,no_all_squash,rw,sync,no_subtree_check)
> /home/gallery      192.168.23.0/24(ro,sync,no_subtree_check)
> 
> Attached is my config.
> 

The interesting bit here is that the sockets all seem to connect to port
55201 on the remote host, if I'm reading these traces correctly. What's
listening on that port on the server?

This might give some helpful info:

    $ rpcinfo -p <NFS servername>

Also, what NFS version are you using to mount here? Your fstab entries
suggest that you're using the default version (for whatever distro this
is), but have you (e.g.) set up nfsmount.conf to default to v3 on this
box?

-- 
Jeff Layton <jlayton@poochiereds.net>

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [REGRESSION] NFS is creating a hidden port (left over from xs_bind() )
@ 2015-06-19  1:08                   ` Steven Rostedt
  0 siblings, 0 replies; 77+ messages in thread
From: Steven Rostedt @ 2015-06-19  1:08 UTC (permalink / raw)
  To: Jeff Layton
  Cc: Trond Myklebust, Eric Dumazet, Anna Schumaker,
	Linux NFS Mailing List, Linux Network Devel Mailing List, LKML,
	Andrew Morton

On Thu, 18 Jun 2015 18:50:51 -0400
Jeff Layton <jlayton@poochiereds.net> wrote:
 
> The interesting bit here is that the sockets all seem to connect to port
> 55201 on the remote host, if I'm reading these traces correctly. What's
> listening on that port on the server?
> 
> This might give some helpful info:
> 
>     $ rpcinfo -p <NFS servername>

# rpcinfo -p wife
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  34243  status
    100024    1   tcp  34498  status

# rpcinfo -p localhost
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  38332  status
    100024    1   tcp  52684  status
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    2   tcp   2049
    100227    3   tcp   2049
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    2   udp   2049
    100227    3   udp   2049
    100021    1   udp  53218  nlockmgr
    100021    3   udp  53218  nlockmgr
    100021    4   udp  53218  nlockmgr
    100021    1   tcp  49825  nlockmgr
    100021    3   tcp  49825  nlockmgr
    100021    4   tcp  49825  nlockmgr
    100005    1   udp  49166  mountd
    100005    1   tcp  48797  mountd
    100005    2   udp  47856  mountd
    100005    2   tcp  53839  mountd
    100005    3   udp  36090  mountd
    100005    3   tcp  46390  mountd

Note, the box has been rebooted since I posted my last trace.

> 
> Also, what NFS version are you using to mount here? Your fstab entries
> suggest that you're using the default version (for whatever distro this
> is), but have you (e.g.) set up nfsmount.conf to default to v3 on this
> box?
> 

My box is Debian testing (recently updated).

# dpkg -l nfs-*

ii  nfs-common     1:1.2.8-9    amd64        NFS support files common to clien
ii  nfs-kernel-ser 1:1.2.8-9    amd64        support for NFS kernel server


same for both boxes.

nfsmount.conf doesn't exist on either box.

I'm assuming it is using nfs4.

Anything else I can provide?

-- Steve

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [REGRESSION] NFS is creating a hidden port (left over from xs_bind() )
  2015-06-19  1:08                   ` Steven Rostedt
  (?)
@ 2015-06-19  1:37                   ` Jeff Layton
  2015-06-19  3:21                       ` Steven Rostedt
  2015-06-19 16:25                     ` Steven Rostedt
  -1 siblings, 2 replies; 77+ messages in thread
From: Jeff Layton @ 2015-06-19  1:37 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Trond Myklebust, Eric Dumazet, Anna Schumaker,
	Linux NFS Mailing List, Linux Network Devel Mailing List, LKML,
	Andrew Morton, bfields

On Thu, 18 Jun 2015 21:08:43 -0400
Steven Rostedt <rostedt@goodmis.org> wrote:

> On Thu, 18 Jun 2015 18:50:51 -0400
> Jeff Layton <jlayton@poochiereds.net> wrote:
>  
> > The interesting bit here is that the sockets all seem to connect to port
> > 55201 on the remote host, if I'm reading these traces correctly. What's
> > listening on that port on the server?
> > 
> > This might give some helpful info:
> > 
> >     $ rpcinfo -p <NFS servername>
> 
> # rpcinfo -p wife
>    program vers proto   port  service
>     100000    4   tcp    111  portmapper
>     100000    3   tcp    111  portmapper
>     100000    2   tcp    111  portmapper
>     100000    4   udp    111  portmapper
>     100000    3   udp    111  portmapper
>     100000    2   udp    111  portmapper
>     100024    1   udp  34243  status
>     100024    1   tcp  34498  status
> 
> # rpcinfo -p localhost
>    program vers proto   port  service
>     100000    4   tcp    111  portmapper
>     100000    3   tcp    111  portmapper
>     100000    2   tcp    111  portmapper
>     100000    4   udp    111  portmapper
>     100000    3   udp    111  portmapper
>     100000    2   udp    111  portmapper
>     100024    1   udp  38332  status
>     100024    1   tcp  52684  status
>     100003    2   tcp   2049  nfs
>     100003    3   tcp   2049  nfs
>     100003    4   tcp   2049  nfs
>     100227    2   tcp   2049
>     100227    3   tcp   2049
>     100003    2   udp   2049  nfs
>     100003    3   udp   2049  nfs
>     100003    4   udp   2049  nfs
>     100227    2   udp   2049
>     100227    3   udp   2049
>     100021    1   udp  53218  nlockmgr
>     100021    3   udp  53218  nlockmgr
>     100021    4   udp  53218  nlockmgr
>     100021    1   tcp  49825  nlockmgr
>     100021    3   tcp  49825  nlockmgr
>     100021    4   tcp  49825  nlockmgr
>     100005    1   udp  49166  mountd
>     100005    1   tcp  48797  mountd
>     100005    2   udp  47856  mountd
>     100005    2   tcp  53839  mountd
>     100005    3   udp  36090  mountd
>     100005    3   tcp  46390  mountd
> 
> Note, the box has been rebooted since I posted my last trace.
> 

Ahh pity. The port has probably changed...if you trace it again maybe
try to figure out what it's talking to before rebooting the server?

> > 
> > Also, what NFS version are you using to mount here? Your fstab entries
> > suggest that you're using the default version (for whatever distro this
> > is), but have you (e.g.) set up nfsmount.conf to default to v3 on this
> > box?
> > 
> 
> My box is Debian testing (recently updated).
> 
> # dpkg -l nfs-*
> 
> ii  nfs-common     1:1.2.8-9    amd64        NFS support files common to clien
> ii  nfs-kernel-ser 1:1.2.8-9    amd64        support for NFS kernel server
> 
> 
> same for both boxes.
> 
> nfsmount.conf doesn't exist on either box.
> 
> I'm assuming it is using nfs4.
> 

(cc'ing Bruce)

Oh! I was thinking that you were seeing this extra port on the
_client_, but now, rereading your original mail, I see that it's
showing up on the NFS server. Is that correct?

So, assuming that this is NFSv4.0, then this port is probably bound
when the server is establishing the callback channel to the client. So
we may need to look at how those xprts are being created and whether
there are differences from a standard client xprt.
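
To make that concrete: the server-side callback client goes through the
generic rpc_create() path, i.e. the same machinery a normal client mount
uses, so it ends up in xs_setup_tcp()/xs_bind() and grabs a reserved
source port for the connection back to the client. Roughly (a simplified
sketch from memory, not quoted from fs/nfsd/nfs4callback.c; struct and
field names below are approximations):

/* Sketch only: how an NFSv4.0 server might create its callback client.
 * The real code is setup_callback_client() in fs/nfsd/nfs4callback.c;
 * names and fields below are approximations, not copied from it. */
static struct rpc_clnt *sketch_setup_callback_client(struct nfs4_client *clp,
						     struct nfs4_cb_conn *conn)
{
	struct rpc_create_args args = {
		.net		= clp->net,
		.address	= (struct sockaddr *)&conn->cb_addr,
		.addrsize	= conn->cb_addrlen,
		.protocol	= XPRT_TRANSPORT_TCP,
		.program	= &cb_program,	/* the NFSv4 callback program */
		.version	= 0,
		.flags		= RPC_CLNT_CREATE_NOPING,
	};

	/* rpc_create() builds the xprt; when it connects, xs_bind()
	 * reserves a low local port -- the one that would be left
	 * "hidden" if the transport were torn down without releasing
	 * the underlying socket. */
	return rpc_create(&args);
}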

-- 
Jeff Layton <jlayton@poochiereds.net>

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [REGRESSION] NFS is creating a hidden port (left over from xs_bind() )
@ 2015-06-19  3:21                       ` Steven Rostedt
  0 siblings, 0 replies; 77+ messages in thread
From: Steven Rostedt @ 2015-06-19  3:21 UTC (permalink / raw)
  To: Jeff Layton
  Cc: Trond Myklebust, Eric Dumazet, Anna Schumaker,
	Linux NFS Mailing List, Linux Network Devel Mailing List, LKML,
	Andrew Morton, bfields

On Thu, 18 Jun 2015 21:37:02 -0400
Jeff Layton <jlayton@poochiereds.net> wrote:

> > Note, the box has been rebooted since I posted my last trace.
> > 
> 
> Ahh pity. The port has probably changed...if you trace it again maybe
> try to figure out what it's talking to before rebooting the server?

I could probably re-enable the trace again.

Would it be best if I put back the commits and ran the buggy kernel
again? I could then run these commands after the bug happens and/or
before the port goes away.

 
> Oh! I was thinking that you were seeing this extra port on the
> _client_, but now rereading your original mail I see that it's
> appearing up on the NFS server. Is that correct?

Correct, the bug is on the NFS server, not the client. The client is
already up and running, and had the filesystem mounted when the server
rebooted. I take it that this happened when the client tried to
reconnect.

Just let me know what you would like to do. As this is the main
production server for my local network, I would only be able to do this
a few times. Let me know all the commands and tracing you would like to
have. I'll try it tomorrow (going to bed now).

-- Steve


> 
> So, assuming that this is NFSv4.0, then this port is probably bound
> when the server is establishing the callback channel to the client. So
> we may need to look at how those xprts are being created and whether
> there are differences from a standard client xprt.
> 


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [REGRESSION] NFS is creating a hidden port (left over from xs_bind() )
  2015-06-19  1:37                   ` Jeff Layton
  2015-06-19  3:21                       ` Steven Rostedt
@ 2015-06-19 16:25                     ` Steven Rostedt
  2015-06-19 17:17                         ` Steven Rostedt
  1 sibling, 1 reply; 77+ messages in thread
From: Steven Rostedt @ 2015-06-19 16:25 UTC (permalink / raw)
  To: Jeff Layton
  Cc: Trond Myklebust, Eric Dumazet, Anna Schumaker,
	Linux NFS Mailing List, Linux Network Devel Mailing List, LKML,
	Andrew Morton, bfields

[-- Attachment #1: Type: text/plain, Size: 218400 bytes --]

On Thu, 18 Jun 2015 21:37:02 -0400
Jeff Layton <jlayton@poochiereds.net> wrote:

> > Note, the box has been rebooted since I posted my last trace.
> > 
> 
> Ahh pity. The port has probably changed...if you trace it again maybe
> try to figure out what it's talking to before rebooting the server?
> 

OK, I ran it again. Here's exactly what I did:

I reverted my revert and applied the attached patch.

I built and rebooted the box (with the same config) and then I waited
till I saw the kworker in my trace:

 # grep kworker /debug/tracing/trace

Once I found it, I noted the port that it was bound to.

    kworker/1:1H-131   [001] ..s.   149.230212: inet_csk_get_port: kworker/1:1H:131 got port 947
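
(The patch itself is only attached, not inlined here; purely as an
illustration of the kind of instrumentation that would produce a line in
that format -- an assumption on my part, not the actual attached patch --
think of a one-line trace_printk() dropped into inet_csk_get_port():)

/* Illustrative only -- not the actual attached patch.  Placed in
 * inet_csk_get_port() (net/ipv4/inet_connection_sock.c) once a port
 * has been picked, it logs which task grabbed which local port into
 * the ftrace buffer: */
	trace_printk("%s:%d got port %d\n",
		     current->comm, current->pid, snum);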


Note, unhide-tcp didn't show any issues.

I saved the rpcinfo output from both my box and my wife's box:

# rpcinfo -p localhost 
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  33043  status
    100024    1   tcp  53880  status
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    2   tcp   2049
    100227    3   tcp   2049
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    2   udp   2049
    100227    3   udp   2049
    100021    1   udp  39455  nlockmgr
    100021    3   udp  39455  nlockmgr
    100021    4   udp  39455  nlockmgr
    100021    1   tcp  48916  nlockmgr
    100021    3   tcp  48916  nlockmgr
    100021    4   tcp  48916  nlockmgr
    100005    1   udp  58465  mountd
    100005    1   tcp  56391  mountd
    100005    2   udp  35741  mountd
    100005    2   tcp  40520  mountd
    100005    3   udp  56522  mountd
    100005    3   tcp  33464  mountd

# rpcinfo -p wife
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  34243  status
    100024    1   tcp  34498  status

and ran:

while :; do  netstat -tapn |grep 947; sleep 1; done

I waited for the state to turn from ESTABLISHED to TIME_WAIT, and then I
ran rpcinfo again, but the outputs didn't change. I checked for hidden
ports, but none were listed (yet).

I then waited for the port to disappear. I ran the rpcinfo again, but
it still didn't change. But unhide-tcp reports:

  Found Hidden port that not appears in ss: 947
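
(For what it's worth, here is a crude user-space version of the check
unhide-tcp seems to be doing -- the assumption being that it flags any
port that fails bind() with EADDRINUSE yet doesn't show up in ss/netstat;
that's a guess at its method, not taken from its source:)

/* hidden-port probe sketch: a port counts as "in use" if bind() fails
 * with EADDRINUSE even though no socket for it is listed by ss/netstat.
 * Run as root, since ports below 1024 otherwise fail with EACCES. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

static int port_in_use(unsigned short port)
{
	struct sockaddr_in addr;
	int fd, ret, saved;

	fd = socket(AF_INET, SOCK_STREAM, 0);
	if (fd < 0)
		return -1;

	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_addr.s_addr = htonl(INADDR_ANY);
	addr.sin_port = htons(port);

	ret = bind(fd, (struct sockaddr *)&addr, sizeof(addr));
	saved = errno;		/* save before close() can clobber it */
	close(fd);

	return ret < 0 && saved == EADDRINUSE;
}

int main(void)
{
	if (port_in_use(947))
		printf("port 947 is bound by someone\n");
	return 0;
}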


Here's that trace:

# tracer: nop
#
# entries-in-buffer/entries-written: 1978/1978   #P:4
#
#                              _-----=> irqs-off
#                             / _----=> need-resched
#                            | / _---=> hardirq/softirq
#                            || / _--=> preempt-depth
#                            ||| /     delay
#           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
#              | |       |   ||||       |         |
        modprobe-1904  [002] ....    22.119972: rpc_init_mempool: RPC:       creating workqueue rpciod
           mount-1912  [002] ....    22.305981: rpc_fill_super: RPC:       sending pipefs MOUNT notification for net ffffffff818b9780 (init_net)
        rpc.nfsd-4720  [001] ....    50.855600: xs_local_setup_socket: RPC:       worker connecting xprt ffff880407939800 via AF_LOCAL to /var/run/rpcbind.sock
        rpc.nfsd-4720  [001] ....    50.855609: xs_local_setup_socket: RPC:       xprt ffff880407939800 connected to /var/run/rpcbind.sock
        rpc.nfsd-4720  [001] ....    50.855610: xs_setup_local: RPC:       set up xprt to /var/run/rpcbind.sock via AF_LOCAL
        rpc.nfsd-4720  [001] ....    50.855614: xprt_create_transport: RPC:       created transport ffff880407939800 with 65536 slots
        rpc.nfsd-4720  [001] ....    50.855614: rpc_new_client: RPC:       creating rpcbind client for localhost (xprt ffff880407939800)
        rpc.nfsd-4720  [001] ....    50.855625: rpc_new_task: RPC:       new task initialized, procpid 4720
        rpc.nfsd-4720  [001] ....    50.855626: rpc_new_task: RPC:       allocated task ffff88040a645e00
        rpc.nfsd-4720  [001] ....    50.855627: __rpc_execute: RPC:     1 __rpc_execute flags=0x680
        rpc.nfsd-4720  [001] ....    50.855628: call_start: RPC:     1 call_start rpcbind2 proc NULL (sync)
        rpc.nfsd-4720  [001] ....    50.855628: call_reserve: RPC:     1 call_reserve (status 0)
        rpc.nfsd-4720  [001] ....    50.855629: xprt_alloc_slot: RPC:     1 reserved req ffff880403542200 xid 3a45b0ec
        rpc.nfsd-4720  [001] ....    50.855629: call_reserveresult: RPC:     1 call_reserveresult (status 0)
        rpc.nfsd-4720  [001] ....    50.855630: call_refresh: RPC:     1 call_refresh (status 0)
        rpc.nfsd-4720  [001] ....    50.855631: call_refreshresult: RPC:     1 call_refreshresult (status 0)
        rpc.nfsd-4720  [001] ....    50.855631: call_allocate: RPC:     1 call_allocate (status 0)
        rpc.nfsd-4720  [001] ....    50.855632: rpc_malloc: RPC:     1 allocated buffer of size 96 at ffff8804045aa000
        rpc.nfsd-4720  [001] ....    50.855633: call_bind: RPC:     1 call_bind (status 0)
        rpc.nfsd-4720  [001] ....    50.855633: call_connect: RPC:     1 call_connect xprt ffff880407939800 is connected
        rpc.nfsd-4720  [001] ....    50.855634: call_transmit: RPC:     1 call_transmit (status 0)
        rpc.nfsd-4720  [001] ....    50.855634: xprt_prepare_transmit: RPC:     1 xprt_prepare_transmit
        rpc.nfsd-4720  [001] ....    50.855635: call_transmit: RPC:     1 rpc_xdr_encode (status 0)
        rpc.nfsd-4720  [001] ....    50.855635: xprt_transmit: RPC:     1 xprt_transmit(44)
        rpc.nfsd-4720  [001] ....    50.855638: xs_local_send_request: RPC:       xs_local_send_request(44) = 0
        rpc.nfsd-4720  [001] ....    50.855638: xprt_transmit: RPC:     1 xmit complete
        rpc.nfsd-4720  [001] ..s.    50.855639: __rpc_sleep_on_priority: RPC:     1 sleep_on(queue "xprt_pending" time 4294904942)
        rpc.nfsd-4720  [001] ..s.    50.855640: __rpc_sleep_on_priority: RPC:     1 added to queue ffff880407939a58 "xprt_pending"
        rpc.nfsd-4720  [001] ..s.    50.855640: __rpc_sleep_on_priority: RPC:     1 setting alarm for 10000 ms
        rpc.nfsd-4720  [001] ..s.    50.855641: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939990 "xprt_sending")
        rpc.nfsd-4720  [001] ....    50.855642: __rpc_execute: RPC:     1 sync task going to sleep
         rpcbind-1871  [003] ..s.    50.855723: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1871  [003] ..s.    50.855725: xprt_complete_rqst: RPC:     1 xid 3a45b0ec complete (24 bytes received)
         rpcbind-1871  [003] ..s.    50.855726: rpc_wake_up_task_queue_locked: RPC:     1 __rpc_wake_up_task (now 4294904942)
         rpcbind-1871  [003] ..s.    50.855726: rpc_wake_up_task_queue_locked: RPC:     1 disabling timer
         rpcbind-1871  [003] ..s.    50.855727: rpc_wake_up_task_queue_locked: RPC:     1 removed from queue ffff880407939a58 "xprt_pending"
         rpcbind-1871  [003] ..s.    50.855729: rpc_wake_up_task_queue_locked: RPC:       __rpc_wake_up_task done
        rpc.nfsd-4720  [001] ....    50.855768: __rpc_execute: RPC:     1 sync task resuming
        rpc.nfsd-4720  [001] ....    50.855770: call_status: RPC:     1 call_status (status 24)
        rpc.nfsd-4720  [001] ....    50.855771: call_decode: RPC:     1 call_decode (status 24)
        rpc.nfsd-4720  [001] ....    50.855773: call_decode: RPC:     1 call_decode result 0
        rpc.nfsd-4720  [001] ....    50.855774: __rpc_execute: RPC:     1 return 0, status 0
        rpc.nfsd-4720  [001] ....    50.855774: __rpc_execute: RPC:     1 release task
        rpc.nfsd-4720  [001] ....    50.855776: rpc_free: RPC:       freeing buffer of size 96 at ffff8804045aa000
        rpc.nfsd-4720  [001] ....    50.855777: xprt_release: RPC:     1 release request ffff880403542200
        rpc.nfsd-4720  [001] ....    50.855778: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939b20 "xprt_backlog")
        rpc.nfsd-4720  [001] ....    50.855779: rpc_release_client: RPC:       rpc_release_client(ffff8800d8e62c00)
        rpc.nfsd-4720  [001] ....    50.855781: rpc_free_task: RPC:     1 freeing task
        rpc.nfsd-4720  [001] ....    50.855782: rpc_new_client: RPC:       creating rpcbind client for localhost (xprt ffff880407939800)
        rpc.nfsd-4720  [001] ....    50.855795: rpc_new_task: RPC:       new task initialized, procpid 4720
        rpc.nfsd-4720  [001] ....    50.855796: rpc_new_task: RPC:       allocated task ffff88040a645e00
        rpc.nfsd-4720  [001] ....    50.855797: __rpc_execute: RPC:     2 __rpc_execute flags=0x680
        rpc.nfsd-4720  [001] ....    50.855798: call_start: RPC:     2 call_start rpcbind4 proc NULL (sync)
        rpc.nfsd-4720  [001] ....    50.855799: call_reserve: RPC:     2 call_reserve (status 0)
        rpc.nfsd-4720  [001] ....    50.855800: xprt_alloc_slot: RPC:     2 reserved req ffff880403542200 xid 3b45b0ec
        rpc.nfsd-4720  [001] ....    50.855801: call_reserveresult: RPC:     2 call_reserveresult (status 0)
        rpc.nfsd-4720  [001] ....    50.855801: call_refresh: RPC:     2 call_refresh (status 0)
        rpc.nfsd-4720  [001] ....    50.855802: call_refreshresult: RPC:     2 call_refreshresult (status 0)
        rpc.nfsd-4720  [001] ....    50.855803: call_allocate: RPC:     2 call_allocate (status 0)
        rpc.nfsd-4720  [001] ....    50.855804: rpc_malloc: RPC:     2 allocated buffer of size 96 at ffff8804045aa000
        rpc.nfsd-4720  [001] ....    50.855805: call_bind: RPC:     2 call_bind (status 0)
        rpc.nfsd-4720  [001] ....    50.855806: call_connect: RPC:     2 call_connect xprt ffff880407939800 is connected
        rpc.nfsd-4720  [001] ....    50.855806: call_transmit: RPC:     2 call_transmit (status 0)
        rpc.nfsd-4720  [001] ....    50.855807: xprt_prepare_transmit: RPC:     2 xprt_prepare_transmit
        rpc.nfsd-4720  [001] ....    50.855808: call_transmit: RPC:     2 rpc_xdr_encode (status 0)
        rpc.nfsd-4720  [001] ....    50.855808: xprt_transmit: RPC:     2 xprt_transmit(44)
        rpc.nfsd-4720  [001] ....    50.855817: xs_local_send_request: RPC:       xs_local_send_request(44) = 0
        rpc.nfsd-4720  [001] ....    50.855817: xprt_transmit: RPC:     2 xmit complete
        rpc.nfsd-4720  [001] ..s.    50.855818: __rpc_sleep_on_priority: RPC:     2 sleep_on(queue "xprt_pending" time 4294904942)
        rpc.nfsd-4720  [001] ..s.    50.855819: __rpc_sleep_on_priority: RPC:     2 added to queue ffff880407939a58 "xprt_pending"
        rpc.nfsd-4720  [001] ..s.    50.855820: __rpc_sleep_on_priority: RPC:     2 setting alarm for 10000 ms
        rpc.nfsd-4720  [001] ..s.    50.855822: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939990 "xprt_sending")
        rpc.nfsd-4720  [001] ....    50.855822: __rpc_execute: RPC:     2 sync task going to sleep
         rpcbind-1871  [003] ..s.    50.855866: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1871  [003] ..s.    50.855868: xprt_complete_rqst: RPC:     2 xid 3b45b0ec complete (24 bytes received)
         rpcbind-1871  [003] ..s.    50.855869: rpc_wake_up_task_queue_locked: RPC:     2 __rpc_wake_up_task (now 4294904942)
         rpcbind-1871  [003] ..s.    50.855870: rpc_wake_up_task_queue_locked: RPC:     2 disabling timer
         rpcbind-1871  [003] ..s.    50.855871: rpc_wake_up_task_queue_locked: RPC:     2 removed from queue ffff880407939a58 "xprt_pending"
         rpcbind-1871  [003] ..s.    50.855875: rpc_wake_up_task_queue_locked: RPC:       __rpc_wake_up_task done
        rpc.nfsd-4720  [001] ....    50.855910: __rpc_execute: RPC:     2 sync task resuming
        rpc.nfsd-4720  [001] ....    50.855911: call_status: RPC:     2 call_status (status 24)
        rpc.nfsd-4720  [001] ....    50.855912: call_decode: RPC:     2 call_decode (status 24)
        rpc.nfsd-4720  [001] ....    50.855914: call_decode: RPC:     2 call_decode result 0
        rpc.nfsd-4720  [001] ....    50.855914: __rpc_execute: RPC:     2 return 0, status 0
        rpc.nfsd-4720  [001] ....    50.855915: __rpc_execute: RPC:     2 release task
        rpc.nfsd-4720  [001] ....    50.855916: rpc_free: RPC:       freeing buffer of size 96 at ffff8804045aa000
        rpc.nfsd-4720  [001] ....    50.855917: xprt_release: RPC:     2 release request ffff880403542200
        rpc.nfsd-4720  [001] ....    50.855918: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939b20 "xprt_backlog")
        rpc.nfsd-4720  [001] ....    50.855919: rpc_release_client: RPC:       rpc_release_client(ffff88040a8b3600)
        rpc.nfsd-4720  [001] ....    50.855920: rpc_free_task: RPC:     2 freeing task
        rpc.nfsd-4720  [001] ....    50.855922: rpc_new_task: RPC:       new task initialized, procpid 4720
        rpc.nfsd-4720  [001] ....    50.855923: rpc_new_task: RPC:       allocated task ffff88040a645e00
        rpc.nfsd-4720  [001] ....    50.855924: __rpc_execute: RPC:     3 __rpc_execute flags=0x2280
        rpc.nfsd-4720  [001] ....    50.855925: call_start: RPC:     3 call_start rpcbind4 proc UNSET (sync)
        rpc.nfsd-4720  [001] ....    50.855926: call_reserve: RPC:     3 call_reserve (status 0)
        rpc.nfsd-4720  [001] ....    50.855927: xprt_alloc_slot: RPC:     3 reserved req ffff880403542200 xid 3c45b0ec
        rpc.nfsd-4720  [001] ....    50.855927: call_reserveresult: RPC:     3 call_reserveresult (status 0)
        rpc.nfsd-4720  [001] ....    50.855928: call_refresh: RPC:     3 call_refresh (status 0)
        rpc.nfsd-4720  [001] ....    50.855929: call_refreshresult: RPC:     3 call_refreshresult (status 0)
        rpc.nfsd-4720  [001] ....    50.855930: call_allocate: RPC:     3 call_allocate (status 0)
        rpc.nfsd-4720  [001] ....    50.855931: rpc_malloc: RPC:     3 allocated buffer of size 188 at ffff8804045aa000
        rpc.nfsd-4720  [001] ....    50.855932: call_bind: RPC:     3 call_bind (status 0)
        rpc.nfsd-4720  [001] ....    50.855933: call_connect: RPC:     3 call_connect xprt ffff880407939800 is connected
        rpc.nfsd-4720  [001] ....    50.855934: call_transmit: RPC:     3 call_transmit (status 0)
        rpc.nfsd-4720  [001] ....    50.855934: xprt_prepare_transmit: RPC:     3 xprt_prepare_transmit
        rpc.nfsd-4720  [001] ....    50.855935: call_transmit: RPC:     3 rpc_xdr_encode (status 0)
        rpc.nfsd-4720  [001] ....    50.855938: xprt_transmit: RPC:     3 xprt_transmit(68)
        rpc.nfsd-4720  [001] ....    50.855945: xs_local_send_request: RPC:       xs_local_send_request(68) = 0
        rpc.nfsd-4720  [001] ....    50.855946: xprt_transmit: RPC:     3 xmit complete
        rpc.nfsd-4720  [001] ..s.    50.855947: __rpc_sleep_on_priority: RPC:     3 sleep_on(queue "xprt_pending" time 4294904942)
        rpc.nfsd-4720  [001] ..s.    50.855948: __rpc_sleep_on_priority: RPC:     3 added to queue ffff880407939a58 "xprt_pending"
        rpc.nfsd-4720  [001] ..s.    50.855948: __rpc_sleep_on_priority: RPC:     3 setting alarm for 10000 ms
        rpc.nfsd-4720  [001] ..s.    50.855950: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939990 "xprt_sending")
        rpc.nfsd-4720  [001] ....    50.855951: __rpc_execute: RPC:     3 sync task going to sleep
         rpcbind-1871  [003] ..s.    50.856003: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1871  [003] ..s.    50.856005: xprt_complete_rqst: RPC:     3 xid 3c45b0ec complete (28 bytes received)
         rpcbind-1871  [003] ..s.    50.856006: rpc_wake_up_task_queue_locked: RPC:     3 __rpc_wake_up_task (now 4294904942)
         rpcbind-1871  [003] ..s.    50.856007: rpc_wake_up_task_queue_locked: RPC:     3 disabling timer
         rpcbind-1871  [003] ..s.    50.856008: rpc_wake_up_task_queue_locked: RPC:     3 removed from queue ffff880407939a58 "xprt_pending"
         rpcbind-1871  [003] ..s.    50.856011: rpc_wake_up_task_queue_locked: RPC:       __rpc_wake_up_task done
        rpc.nfsd-4720  [001] ....    50.856046: __rpc_execute: RPC:     3 sync task resuming
        rpc.nfsd-4720  [001] ....    50.856048: call_status: RPC:     3 call_status (status 28)
        rpc.nfsd-4720  [001] ....    50.856049: call_decode: RPC:     3 call_decode (status 28)
        rpc.nfsd-4720  [001] ....    50.856050: call_decode: RPC:     3 call_decode result 0
        rpc.nfsd-4720  [001] ....    50.856051: __rpc_execute: RPC:     3 return 0, status 0
        rpc.nfsd-4720  [001] ....    50.856052: __rpc_execute: RPC:     3 release task
        rpc.nfsd-4720  [001] ....    50.856053: rpc_free: RPC:       freeing buffer of size 188 at ffff8804045aa000
        rpc.nfsd-4720  [001] ....    50.856054: xprt_release: RPC:     3 release request ffff880403542200
        rpc.nfsd-4720  [001] ....    50.856055: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939b20 "xprt_backlog")
        rpc.nfsd-4720  [001] ....    50.856055: rpc_release_client: RPC:       rpc_release_client(ffff88040a8b3600)
        rpc.nfsd-4720  [001] ....    50.856056: rpc_free_task: RPC:     3 freeing task
        rpc.nfsd-4720  [001] ....    50.856058: rpc_new_task: RPC:       new task initialized, procpid 4720
        rpc.nfsd-4720  [001] ....    50.856059: rpc_new_task: RPC:       allocated task ffff88040a645e00
        rpc.nfsd-4720  [001] ....    50.856060: __rpc_execute: RPC:     4 __rpc_execute flags=0x2280
        rpc.nfsd-4720  [001] ....    50.856061: call_start: RPC:     4 call_start rpcbind4 proc UNSET (sync)
        rpc.nfsd-4720  [001] ....    50.856062: call_reserve: RPC:     4 call_reserve (status 0)
        rpc.nfsd-4720  [001] ....    50.856063: xprt_alloc_slot: RPC:     4 reserved req ffff880403542200 xid 3d45b0ec
        rpc.nfsd-4720  [001] ....    50.856063: call_reserveresult: RPC:     4 call_reserveresult (status 0)
        rpc.nfsd-4720  [001] ....    50.856064: call_refresh: RPC:     4 call_refresh (status 0)
        rpc.nfsd-4720  [001] ....    50.856065: call_refreshresult: RPC:     4 call_refreshresult (status 0)
        rpc.nfsd-4720  [001] ....    50.856066: call_allocate: RPC:     4 call_allocate (status 0)
        rpc.nfsd-4720  [001] ....    50.856067: rpc_malloc: RPC:     4 allocated buffer of size 188 at ffff8804045aa000
        rpc.nfsd-4720  [001] ....    50.856067: call_bind: RPC:     4 call_bind (status 0)
        rpc.nfsd-4720  [001] ....    50.856068: call_connect: RPC:     4 call_connect xprt ffff880407939800 is connected
        rpc.nfsd-4720  [001] ....    50.856069: call_transmit: RPC:     4 call_transmit (status 0)
        rpc.nfsd-4720  [001] ....    50.856069: xprt_prepare_transmit: RPC:     4 xprt_prepare_transmit
        rpc.nfsd-4720  [001] ....    50.856070: call_transmit: RPC:     4 rpc_xdr_encode (status 0)
        rpc.nfsd-4720  [001] ....    50.856072: xprt_transmit: RPC:     4 xprt_transmit(68)
        rpc.nfsd-4720  [001] ....    50.856079: xs_local_send_request: RPC:       xs_local_send_request(68) = 0
        rpc.nfsd-4720  [001] ....    50.856080: xprt_transmit: RPC:     4 xmit complete
        rpc.nfsd-4720  [001] ..s.    50.856081: __rpc_sleep_on_priority: RPC:     4 sleep_on(queue "xprt_pending" time 4294904942)
        rpc.nfsd-4720  [001] ..s.    50.856081: __rpc_sleep_on_priority: RPC:     4 added to queue ffff880407939a58 "xprt_pending"
        rpc.nfsd-4720  [001] ..s.    50.856082: __rpc_sleep_on_priority: RPC:     4 setting alarm for 10000 ms
        rpc.nfsd-4720  [001] ..s.    50.856084: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939990 "xprt_sending")
        rpc.nfsd-4720  [001] ....    50.856084: __rpc_execute: RPC:     4 sync task going to sleep
         rpcbind-1871  [003] ..s.    50.856139: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1871  [003] ..s.    50.856142: xprt_complete_rqst: RPC:     4 xid 3d45b0ec complete (28 bytes received)
         rpcbind-1871  [003] ..s.    50.856143: rpc_wake_up_task_queue_locked: RPC:     4 __rpc_wake_up_task (now 4294904942)
         rpcbind-1871  [003] ..s.    50.856143: rpc_wake_up_task_queue_locked: RPC:     4 disabling timer
         rpcbind-1871  [003] ..s.    50.856145: rpc_wake_up_task_queue_locked: RPC:     4 removed from queue ffff880407939a58 "xprt_pending"
         rpcbind-1871  [003] ..s.    50.856148: rpc_wake_up_task_queue_locked: RPC:       __rpc_wake_up_task done
        rpc.nfsd-4720  [001] ....    50.856186: __rpc_execute: RPC:     4 sync task resuming
        rpc.nfsd-4720  [001] ....    50.856188: call_status: RPC:     4 call_status (status 28)
        rpc.nfsd-4720  [001] ....    50.856189: call_decode: RPC:     4 call_decode (status 28)
        rpc.nfsd-4720  [001] ....    50.856190: call_decode: RPC:     4 call_decode result 0
        rpc.nfsd-4720  [001] ....    50.856191: __rpc_execute: RPC:     4 return 0, status 0
        rpc.nfsd-4720  [001] ....    50.856192: __rpc_execute: RPC:     4 release task
        rpc.nfsd-4720  [001] ....    50.856193: rpc_free: RPC:       freeing buffer of size 188 at ffff8804045aa000
        rpc.nfsd-4720  [001] ....    50.856194: xprt_release: RPC:     4 release request ffff880403542200
        rpc.nfsd-4720  [001] ....    50.856195: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939b20 "xprt_backlog")
        rpc.nfsd-4720  [001] ....    50.856196: rpc_release_client: RPC:       rpc_release_client(ffff88040a8b3600)
        rpc.nfsd-4720  [001] ....    50.856196: rpc_free_task: RPC:     4 freeing task
        rpc.nfsd-4720  [001] ....    50.856198: rpc_new_task: RPC:       new task initialized, procpid 4720
        rpc.nfsd-4720  [001] ....    50.856199: rpc_new_task: RPC:       allocated task ffff88040a645e00
        rpc.nfsd-4720  [001] ....    50.856200: __rpc_execute: RPC:     5 __rpc_execute flags=0x2280
        rpc.nfsd-4720  [001] ....    50.856201: call_start: RPC:     5 call_start rpcbind4 proc UNSET (sync)
        rpc.nfsd-4720  [001] ....    50.856202: call_reserve: RPC:     5 call_reserve (status 0)
        rpc.nfsd-4720  [001] ....    50.856202: xprt_alloc_slot: RPC:     5 reserved req ffff880403542200 xid 3e45b0ec
        rpc.nfsd-4720  [001] ....    50.856203: call_reserveresult: RPC:     5 call_reserveresult (status 0)
        rpc.nfsd-4720  [001] ....    50.856204: call_refresh: RPC:     5 call_refresh (status 0)
        rpc.nfsd-4720  [001] ....    50.856205: call_refreshresult: RPC:     5 call_refreshresult (status 0)
        rpc.nfsd-4720  [001] ....    50.856206: call_allocate: RPC:     5 call_allocate (status 0)
        rpc.nfsd-4720  [001] ....    50.856207: rpc_malloc: RPC:     5 allocated buffer of size 188 at ffff8804045aa000
        rpc.nfsd-4720  [001] ....    50.856207: call_bind: RPC:     5 call_bind (status 0)
        rpc.nfsd-4720  [001] ....    50.856208: call_connect: RPC:     5 call_connect xprt ffff880407939800 is connected
        rpc.nfsd-4720  [001] ....    50.856209: call_transmit: RPC:     5 call_transmit (status 0)
        rpc.nfsd-4720  [001] ....    50.856209: xprt_prepare_transmit: RPC:     5 xprt_prepare_transmit
        rpc.nfsd-4720  [001] ....    50.856210: call_transmit: RPC:     5 rpc_xdr_encode (status 0)
        rpc.nfsd-4720  [001] ....    50.856212: xprt_transmit: RPC:     5 xprt_transmit(68)
        rpc.nfsd-4720  [001] ....    50.856219: xs_local_send_request: RPC:       xs_local_send_request(68) = 0
        rpc.nfsd-4720  [001] ....    50.856219: xprt_transmit: RPC:     5 xmit complete
        rpc.nfsd-4720  [001] ..s.    50.856221: __rpc_sleep_on_priority: RPC:     5 sleep_on(queue "xprt_pending" time 4294904942)
        rpc.nfsd-4720  [001] ..s.    50.856221: __rpc_sleep_on_priority: RPC:     5 added to queue ffff880407939a58 "xprt_pending"
        rpc.nfsd-4720  [001] ..s.    50.856222: __rpc_sleep_on_priority: RPC:     5 setting alarm for 10000 ms
        rpc.nfsd-4720  [001] ..s.    50.856224: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939990 "xprt_sending")
        rpc.nfsd-4720  [001] ....    50.856224: __rpc_execute: RPC:     5 sync task going to sleep
         rpcbind-1871  [003] ..s.    50.856277: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1871  [003] ..s.    50.856279: xprt_complete_rqst: RPC:     5 xid 3e45b0ec complete (28 bytes received)
         rpcbind-1871  [003] ..s.    50.856280: rpc_wake_up_task_queue_locked: RPC:     5 __rpc_wake_up_task (now 4294904942)
         rpcbind-1871  [003] ..s.    50.856281: rpc_wake_up_task_queue_locked: RPC:     5 disabling timer
         rpcbind-1871  [003] ..s.    50.856282: rpc_wake_up_task_queue_locked: RPC:     5 removed from queue ffff880407939a58 "xprt_pending"
         rpcbind-1871  [003] ..s.    50.856285: rpc_wake_up_task_queue_locked: RPC:       __rpc_wake_up_task done
        rpc.nfsd-4720  [001] ....    50.856321: __rpc_execute: RPC:     5 sync task resuming
        rpc.nfsd-4720  [001] ....    50.856322: call_status: RPC:     5 call_status (status 28)
        rpc.nfsd-4720  [001] ....    50.856323: call_decode: RPC:     5 call_decode (status 28)
        rpc.nfsd-4720  [001] ....    50.856325: call_decode: RPC:     5 call_decode result 0
        rpc.nfsd-4720  [001] ....    50.856326: __rpc_execute: RPC:     5 return 0, status 0
        rpc.nfsd-4720  [001] ....    50.856326: __rpc_execute: RPC:     5 release task
        rpc.nfsd-4720  [001] ....    50.856327: rpc_free: RPC:       freeing buffer of size 188 at ffff8804045aa000
        rpc.nfsd-4720  [001] ....    50.856328: xprt_release: RPC:     5 release request ffff880403542200
        rpc.nfsd-4720  [001] ....    50.856329: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939b20 "xprt_backlog")
        rpc.nfsd-4720  [001] ....    50.856330: rpc_release_client: RPC:       rpc_release_client(ffff88040a8b3600)
        rpc.nfsd-4720  [001] ....    50.856331: rpc_free_task: RPC:     5 freeing task
        rpc.nfsd-4720  [001] ....    50.856333: rpc_new_task: RPC:       new task initialized, procpid 4720
        rpc.nfsd-4720  [001] ....    50.856334: rpc_new_task: RPC:       allocated task ffff88040a645e00
        rpc.nfsd-4720  [001] ....    50.856335: __rpc_execute: RPC:     6 __rpc_execute flags=0x2280
        rpc.nfsd-4720  [001] ....    50.856336: call_start: RPC:     6 call_start rpcbind4 proc UNSET (sync)
        rpc.nfsd-4720  [001] ....    50.856337: call_reserve: RPC:     6 call_reserve (status 0)
        rpc.nfsd-4720  [001] ....    50.856337: xprt_alloc_slot: RPC:     6 reserved req ffff880403542200 xid 3f45b0ec
        rpc.nfsd-4720  [001] ....    50.856338: call_reserveresult: RPC:     6 call_reserveresult (status 0)
        rpc.nfsd-4720  [001] ....    50.856339: call_refresh: RPC:     6 call_refresh (status 0)
        rpc.nfsd-4720  [001] ....    50.856340: call_refreshresult: RPC:     6 call_refreshresult (status 0)
        rpc.nfsd-4720  [001] ....    50.856341: call_allocate: RPC:     6 call_allocate (status 0)
        rpc.nfsd-4720  [001] ....    50.856342: rpc_malloc: RPC:     6 allocated buffer of size 188 at ffff8804045aa000
        rpc.nfsd-4720  [001] ....    50.856342: call_bind: RPC:     6 call_bind (status 0)
        rpc.nfsd-4720  [001] ....    50.856343: call_connect: RPC:     6 call_connect xprt ffff880407939800 is connected
        rpc.nfsd-4720  [001] ....    50.856344: call_transmit: RPC:     6 call_transmit (status 0)
        rpc.nfsd-4720  [001] ....    50.856344: xprt_prepare_transmit: RPC:     6 xprt_prepare_transmit
        rpc.nfsd-4720  [001] ....    50.856345: call_transmit: RPC:     6 rpc_xdr_encode (status 0)
        rpc.nfsd-4720  [001] ....    50.856346: xprt_transmit: RPC:     6 xprt_transmit(68)
        rpc.nfsd-4720  [001] ....    50.856354: xs_local_send_request: RPC:       xs_local_send_request(68) = 0
        rpc.nfsd-4720  [001] ....    50.856354: xprt_transmit: RPC:     6 xmit complete
        rpc.nfsd-4720  [001] ..s.    50.856355: __rpc_sleep_on_priority: RPC:     6 sleep_on(queue "xprt_pending" time 4294904942)
        rpc.nfsd-4720  [001] ..s.    50.856356: __rpc_sleep_on_priority: RPC:     6 added to queue ffff880407939a58 "xprt_pending"
        rpc.nfsd-4720  [001] ..s.    50.856357: __rpc_sleep_on_priority: RPC:     6 setting alarm for 10000 ms
        rpc.nfsd-4720  [001] ..s.    50.856359: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939990 "xprt_sending")
        rpc.nfsd-4720  [001] ....    50.856359: __rpc_execute: RPC:     6 sync task going to sleep
         rpcbind-1871  [003] ..s.    50.856409: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1871  [003] ..s.    50.856411: xprt_complete_rqst: RPC:     6 xid 3f45b0ec complete (28 bytes received)
         rpcbind-1871  [003] ..s.    50.856412: rpc_wake_up_task_queue_locked: RPC:     6 __rpc_wake_up_task (now 4294904942)
         rpcbind-1871  [003] ..s.    50.856413: rpc_wake_up_task_queue_locked: RPC:     6 disabling timer
         rpcbind-1871  [003] ..s.    50.856414: rpc_wake_up_task_queue_locked: RPC:     6 removed from queue ffff880407939a58 "xprt_pending"
         rpcbind-1871  [003] ..s.    50.856417: rpc_wake_up_task_queue_locked: RPC:       __rpc_wake_up_task done
        rpc.nfsd-4720  [001] ....    50.856453: __rpc_execute: RPC:     6 sync task resuming
        rpc.nfsd-4720  [001] ....    50.856455: call_status: RPC:     6 call_status (status 28)
        rpc.nfsd-4720  [001] ....    50.856456: call_decode: RPC:     6 call_decode (status 28)
        rpc.nfsd-4720  [001] ....    50.856457: call_decode: RPC:     6 call_decode result 0
        rpc.nfsd-4720  [001] ....    50.856458: __rpc_execute: RPC:     6 return 0, status 0
        rpc.nfsd-4720  [001] ....    50.856459: __rpc_execute: RPC:     6 release task
        rpc.nfsd-4720  [001] ....    50.856460: rpc_free: RPC:       freeing buffer of size 188 at ffff8804045aa000
        rpc.nfsd-4720  [001] ....    50.856461: xprt_release: RPC:     6 release request ffff880403542200
        rpc.nfsd-4720  [001] ....    50.856462: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939b20 "xprt_backlog")
        rpc.nfsd-4720  [001] ....    50.856463: rpc_release_client: RPC:       rpc_release_client(ffff88040a8b3600)
        rpc.nfsd-4720  [001] ....    50.856464: rpc_free_task: RPC:     6 freeing task
        rpc.nfsd-4720  [001] ....    50.856465: rpc_new_task: RPC:       new task initialized, procpid 4720
        rpc.nfsd-4720  [001] ....    50.856466: rpc_new_task: RPC:       allocated task ffff88040a645e00
        rpc.nfsd-4720  [001] ....    50.856467: __rpc_execute: RPC:     7 __rpc_execute flags=0x2280
        rpc.nfsd-4720  [001] ....    50.856468: call_start: RPC:     7 call_start rpcbind4 proc UNSET (sync)
        rpc.nfsd-4720  [001] ....    50.856469: call_reserve: RPC:     7 call_reserve (status 0)
        rpc.nfsd-4720  [001] ....    50.856470: xprt_alloc_slot: RPC:     7 reserved req ffff880403542200 xid 4045b0ec
        rpc.nfsd-4720  [001] ....    50.856470: call_reserveresult: RPC:     7 call_reserveresult (status 0)
        rpc.nfsd-4720  [001] ....    50.856471: call_refresh: RPC:     7 call_refresh (status 0)
        rpc.nfsd-4720  [001] ....    50.856472: call_refreshresult: RPC:     7 call_refreshresult (status 0)
        rpc.nfsd-4720  [001] ....    50.856473: call_allocate: RPC:     7 call_allocate (status 0)
        rpc.nfsd-4720  [001] ....    50.856474: rpc_malloc: RPC:     7 allocated buffer of size 188 at ffff8804045aa000
        rpc.nfsd-4720  [001] ....    50.856474: call_bind: RPC:     7 call_bind (status 0)
        rpc.nfsd-4720  [001] ....    50.856475: call_connect: RPC:     7 call_connect xprt ffff880407939800 is connected
        rpc.nfsd-4720  [001] ....    50.856476: call_transmit: RPC:     7 call_transmit (status 0)
        rpc.nfsd-4720  [001] ....    50.856476: xprt_prepare_transmit: RPC:     7 xprt_prepare_transmit
        rpc.nfsd-4720  [001] ....    50.856477: call_transmit: RPC:     7 rpc_xdr_encode (status 0)
        rpc.nfsd-4720  [001] ....    50.856479: xprt_transmit: RPC:     7 xprt_transmit(68)
        rpc.nfsd-4720  [001] ....    50.856486: xs_local_send_request: RPC:       xs_local_send_request(68) = 0
        rpc.nfsd-4720  [001] ....    50.856487: xprt_transmit: RPC:     7 xmit complete
        rpc.nfsd-4720  [001] ..s.    50.856488: __rpc_sleep_on_priority: RPC:     7 sleep_on(queue "xprt_pending" time 4294904942)
        rpc.nfsd-4720  [001] ..s.    50.856489: __rpc_sleep_on_priority: RPC:     7 added to queue ffff880407939a58 "xprt_pending"
        rpc.nfsd-4720  [001] ..s.    50.856489: __rpc_sleep_on_priority: RPC:     7 setting alarm for 10000 ms
        rpc.nfsd-4720  [001] ..s.    50.856491: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939990 "xprt_sending")
        rpc.nfsd-4720  [001] ....    50.856492: __rpc_execute: RPC:     7 sync task going to sleep
         rpcbind-1871  [003] ..s.    50.856507: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1871  [003] ..s.    50.856509: xprt_complete_rqst: RPC:     7 xid 4045b0ec complete (28 bytes received)
         rpcbind-1871  [003] ..s.    50.856510: rpc_wake_up_task_queue_locked: RPC:     7 __rpc_wake_up_task (now 4294904942)
         rpcbind-1871  [003] ..s.    50.856510: rpc_wake_up_task_queue_locked: RPC:     7 disabling timer
         rpcbind-1871  [003] ..s.    50.856511: rpc_wake_up_task_queue_locked: RPC:     7 removed from queue ffff880407939a58 "xprt_pending"
         rpcbind-1871  [003] ..s.    50.856514: rpc_wake_up_task_queue_locked: RPC:       __rpc_wake_up_task done
        rpc.nfsd-4720  [001] ....    50.856517: __rpc_execute: RPC:     7 sync task resuming
        rpc.nfsd-4720  [001] ....    50.856518: call_status: RPC:     7 call_status (status 28)
        rpc.nfsd-4720  [001] ....    50.856519: call_decode: RPC:     7 call_decode (status 28)
        rpc.nfsd-4720  [001] ....    50.856521: call_decode: RPC:     7 call_decode result 0
        rpc.nfsd-4720  [001] ....    50.856522: __rpc_execute: RPC:     7 return 0, status 0
        rpc.nfsd-4720  [001] ....    50.856522: __rpc_execute: RPC:     7 release task
        rpc.nfsd-4720  [001] ....    50.856523: rpc_free: RPC:       freeing buffer of size 188 at ffff8804045aa000
        rpc.nfsd-4720  [001] ....    50.856524: xprt_release: RPC:     7 release request ffff880403542200
        rpc.nfsd-4720  [001] ....    50.856525: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939b20 "xprt_backlog")
        rpc.nfsd-4720  [001] ....    50.856526: rpc_release_client: RPC:       rpc_release_client(ffff88040a8b3600)
        rpc.nfsd-4720  [001] ....    50.856527: rpc_free_task: RPC:     7 freeing task
        rpc.nfsd-4720  [001] ....    50.856530: svc_setup_socket: svc: svc_setup_socket ffff8800db68bac0
        rpc.nfsd-4720  [001] ....    50.856535: rpc_new_task: RPC:       new task initialized, procpid 4720
        rpc.nfsd-4720  [001] ....    50.856536: rpc_new_task: RPC:       allocated task ffff88040a645e00
        rpc.nfsd-4720  [001] ....    50.856536: __rpc_execute: RPC:     8 __rpc_execute flags=0x680
        rpc.nfsd-4720  [001] ....    50.856538: call_start: RPC:     8 call_start rpcbind4 proc SET (sync)
        rpc.nfsd-4720  [001] ....    50.856538: call_reserve: RPC:     8 call_reserve (status 0)
        rpc.nfsd-4720  [001] ....    50.856539: xprt_alloc_slot: RPC:     8 reserved req ffff880403542200 xid 4145b0ec
        rpc.nfsd-4720  [001] ....    50.856540: call_reserveresult: RPC:     8 call_reserveresult (status 0)
        rpc.nfsd-4720  [001] ....    50.856541: call_refresh: RPC:     8 call_refresh (status 0)
        rpc.nfsd-4720  [001] ....    50.856542: call_refreshresult: RPC:     8 call_refreshresult (status 0)
        rpc.nfsd-4720  [001] ....    50.856542: call_allocate: RPC:     8 call_allocate (status 0)
        rpc.nfsd-4720  [001] ....    50.856543: rpc_malloc: RPC:     8 allocated buffer of size 188 at ffff8804045aa000
        rpc.nfsd-4720  [001] ....    50.856544: call_bind: RPC:     8 call_bind (status 0)
        rpc.nfsd-4720  [001] ....    50.856545: call_connect: RPC:     8 call_connect xprt ffff880407939800 is connected
        rpc.nfsd-4720  [001] ....    50.856545: call_transmit: RPC:     8 call_transmit (status 0)
        rpc.nfsd-4720  [001] ....    50.856546: xprt_prepare_transmit: RPC:     8 xprt_prepare_transmit
        rpc.nfsd-4720  [001] ....    50.856547: call_transmit: RPC:     8 rpc_xdr_encode (status 0)
        rpc.nfsd-4720  [001] ....    50.856548: xprt_transmit: RPC:     8 xprt_transmit(84)
        rpc.nfsd-4720  [001] ....    50.856555: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-4720  [001] ....    50.856555: xprt_transmit: RPC:     8 xmit complete
        rpc.nfsd-4720  [001] ..s.    50.856556: __rpc_sleep_on_priority: RPC:     8 sleep_on(queue "xprt_pending" time 4294904942)
        rpc.nfsd-4720  [001] ..s.    50.856558: __rpc_sleep_on_priority: RPC:     8 added to queue ffff880407939a58 "xprt_pending"
        rpc.nfsd-4720  [001] ..s.    50.856558: __rpc_sleep_on_priority: RPC:     8 setting alarm for 10000 ms
        rpc.nfsd-4720  [001] ..s.    50.856560: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939990 "xprt_sending")
        rpc.nfsd-4720  [001] ....    50.856561: __rpc_execute: RPC:     8 sync task going to sleep
         rpcbind-1871  [003] ..s.    50.856580: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1871  [003] ..s.    50.856580: xprt_complete_rqst: RPC:     8 xid 4145b0ec complete (28 bytes received)
         rpcbind-1871  [003] ..s.    50.856581: rpc_wake_up_task_queue_locked: RPC:     8 __rpc_wake_up_task (now 4294904942)
         rpcbind-1871  [003] ..s.    50.856581: rpc_wake_up_task_queue_locked: RPC:     8 disabling timer
         rpcbind-1871  [003] ..s.    50.856581: rpc_wake_up_task_queue_locked: RPC:     8 removed from queue ffff880407939a58 "xprt_pending"
         rpcbind-1871  [003] ..s.    50.856582: rpc_wake_up_task_queue_locked: RPC:       __rpc_wake_up_task done
        rpc.nfsd-4720  [001] ....    50.856584: __rpc_execute: RPC:     8 sync task resuming
        rpc.nfsd-4720  [001] ....    50.856584: call_status: RPC:     8 call_status (status 28)
        rpc.nfsd-4720  [001] ....    50.856584: call_decode: RPC:     8 call_decode (status 28)
        rpc.nfsd-4720  [001] ....    50.856585: call_decode: RPC:     8 call_decode result 0
        rpc.nfsd-4720  [001] ....    50.856585: __rpc_execute: RPC:     8 return 0, status 0
        rpc.nfsd-4720  [001] ....    50.856585: __rpc_execute: RPC:     8 release task
        rpc.nfsd-4720  [001] ....    50.856586: rpc_free: RPC:       freeing buffer of size 188 at ffff8804045aa000
        rpc.nfsd-4720  [001] ....    50.856586: xprt_release: RPC:     8 release request ffff880403542200
        rpc.nfsd-4720  [001] ....    50.856586: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939b20 "xprt_backlog")
        rpc.nfsd-4720  [001] ....    50.856586: rpc_release_client: RPC:       rpc_release_client(ffff88040a8b3600)
        rpc.nfsd-4720  [001] ....    50.856587: rpc_free_task: RPC:     8 freeing task
        rpc.nfsd-4720  [001] ....    50.856588: rpc_new_task: RPC:       new task initialized, procpid 4720
        rpc.nfsd-4720  [001] ....    50.856588: rpc_new_task: RPC:       allocated task ffff88040a645e00
        rpc.nfsd-4720  [001] ....    50.856588: __rpc_execute: RPC:     9 __rpc_execute flags=0x680
        rpc.nfsd-4720  [001] ....    50.856589: call_start: RPC:     9 call_start rpcbind4 proc SET (sync)
        rpc.nfsd-4720  [001] ....    50.856589: call_reserve: RPC:     9 call_reserve (status 0)
        rpc.nfsd-4720  [001] ....    50.856589: xprt_alloc_slot: RPC:     9 reserved req ffff880403542200 xid 4245b0ec
        rpc.nfsd-4720  [001] ....    50.856590: call_reserveresult: RPC:     9 call_reserveresult (status 0)
        rpc.nfsd-4720  [001] ....    50.856590: call_refresh: RPC:     9 call_refresh (status 0)
        rpc.nfsd-4720  [001] ....    50.856590: call_refreshresult: RPC:     9 call_refreshresult (status 0)
        rpc.nfsd-4720  [001] ....    50.856590: call_allocate: RPC:     9 call_allocate (status 0)
        rpc.nfsd-4720  [001] ....    50.856591: rpc_malloc: RPC:     9 allocated buffer of size 188 at ffff8804045aa000
        rpc.nfsd-4720  [001] ....    50.856591: call_bind: RPC:     9 call_bind (status 0)
        rpc.nfsd-4720  [001] ....    50.856591: call_connect: RPC:     9 call_connect xprt ffff880407939800 is connected
        rpc.nfsd-4720  [001] ....    50.856592: call_transmit: RPC:     9 call_transmit (status 0)
        rpc.nfsd-4720  [001] ....    50.856592: xprt_prepare_transmit: RPC:     9 xprt_prepare_transmit
        rpc.nfsd-4720  [001] ....    50.856592: call_transmit: RPC:     9 rpc_xdr_encode (status 0)
        rpc.nfsd-4720  [001] ....    50.856593: xprt_transmit: RPC:     9 xprt_transmit(84)
        rpc.nfsd-4720  [001] ....    50.856595: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-4720  [001] ....    50.856595: xprt_transmit: RPC:     9 xmit complete
        rpc.nfsd-4720  [001] ..s.    50.856595: __rpc_sleep_on_priority: RPC:     9 sleep_on(queue "xprt_pending" time 4294904942)
        rpc.nfsd-4720  [001] ..s.    50.856596: __rpc_sleep_on_priority: RPC:     9 added to queue ffff880407939a58 "xprt_pending"
        rpc.nfsd-4720  [001] ..s.    50.856596: __rpc_sleep_on_priority: RPC:     9 setting alarm for 10000 ms
        rpc.nfsd-4720  [001] ..s.    50.856596: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939990 "xprt_sending")
        rpc.nfsd-4720  [001] ....    50.856596: __rpc_execute: RPC:     9 sync task going to sleep
         rpcbind-1871  [003] ..s.    50.856604: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1871  [003] ..s.    50.856604: xprt_complete_rqst: RPC:     9 xid 4245b0ec complete (28 bytes received)
         rpcbind-1871  [003] ..s.    50.856604: rpc_wake_up_task_queue_locked: RPC:     9 __rpc_wake_up_task (now 4294904942)
         rpcbind-1871  [003] ..s.    50.856605: rpc_wake_up_task_queue_locked: RPC:     9 disabling timer
         rpcbind-1871  [003] ..s.    50.856605: rpc_wake_up_task_queue_locked: RPC:     9 removed from queue ffff880407939a58 "xprt_pending"
         rpcbind-1871  [003] ..s.    50.856606: rpc_wake_up_task_queue_locked: RPC:       __rpc_wake_up_task done
        rpc.nfsd-4720  [001] ....    50.856609: __rpc_execute: RPC:     9 sync task resuming
        rpc.nfsd-4720  [001] ....    50.856610: call_status: RPC:     9 call_status (status 28)
        rpc.nfsd-4720  [001] ....    50.856611: call_decode: RPC:     9 call_decode (status 28)
        rpc.nfsd-4720  [001] ....    50.856611: call_decode: RPC:     9 call_decode result 0
        rpc.nfsd-4720  [001] ....    50.856612: __rpc_execute: RPC:     9 return 0, status 0
        rpc.nfsd-4720  [001] ....    50.856612: __rpc_execute: RPC:     9 release task
        rpc.nfsd-4720  [001] ....    50.856612: rpc_free: RPC:       freeing buffer of size 188 at ffff8804045aa000
        rpc.nfsd-4720  [001] ....    50.856613: xprt_release: RPC:     9 release request ffff880403542200
        rpc.nfsd-4720  [001] ....    50.856613: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939b20 "xprt_backlog")
        rpc.nfsd-4720  [001] ....    50.856614: rpc_release_client: RPC:       rpc_release_client(ffff88040a8b3600)
        rpc.nfsd-4720  [001] ....    50.856614: rpc_free_task: RPC:     9 freeing task
        rpc.nfsd-4720  [001] ....    50.856616: rpc_new_task: RPC:       new task initialized, procpid 4720
        rpc.nfsd-4720  [001] ....    50.856616: rpc_new_task: RPC:       allocated task ffff88040a645e00
        rpc.nfsd-4720  [001] ....    50.856617: __rpc_execute: RPC:    10 __rpc_execute flags=0x680
        rpc.nfsd-4720  [001] ....    50.856617: call_start: RPC:    10 call_start rpcbind4 proc SET (sync)
        rpc.nfsd-4720  [001] ....    50.856617: call_reserve: RPC:    10 call_reserve (status 0)
        rpc.nfsd-4720  [001] ....    50.856618: xprt_alloc_slot: RPC:    10 reserved req ffff880403542200 xid 4345b0ec
        rpc.nfsd-4720  [001] ....    50.856618: call_reserveresult: RPC:    10 call_reserveresult (status 0)
        rpc.nfsd-4720  [001] ....    50.856618: call_refresh: RPC:    10 call_refresh (status 0)
        rpc.nfsd-4720  [001] ....    50.856619: call_refreshresult: RPC:    10 call_refreshresult (status 0)
        rpc.nfsd-4720  [001] ....    50.856619: call_allocate: RPC:    10 call_allocate (status 0)
        rpc.nfsd-4720  [001] ....    50.856619: rpc_malloc: RPC:    10 allocated buffer of size 188 at ffff8804045aa000
        rpc.nfsd-4720  [001] ....    50.856620: call_bind: RPC:    10 call_bind (status 0)
        rpc.nfsd-4720  [001] ....    50.856620: call_connect: RPC:    10 call_connect xprt ffff880407939800 is connected
        rpc.nfsd-4720  [001] ....    50.856620: call_transmit: RPC:    10 call_transmit (status 0)
        rpc.nfsd-4720  [001] ....    50.856620: xprt_prepare_transmit: RPC:    10 xprt_prepare_transmit
        rpc.nfsd-4720  [001] ....    50.856621: call_transmit: RPC:    10 rpc_xdr_encode (status 0)
        rpc.nfsd-4720  [001] ....    50.856622: xprt_transmit: RPC:    10 xprt_transmit(84)
        rpc.nfsd-4720  [001] ....    50.856631: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-4720  [001] ....    50.856631: xprt_transmit: RPC:    10 xmit complete
        rpc.nfsd-4720  [001] ..s.    50.856633: __rpc_sleep_on_priority: RPC:    10 sleep_on(queue "xprt_pending" time 4294904942)
        rpc.nfsd-4720  [001] ..s.    50.856633: __rpc_sleep_on_priority: RPC:    10 added to queue ffff880407939a58 "xprt_pending"
        rpc.nfsd-4720  [001] ..s.    50.856634: __rpc_sleep_on_priority: RPC:    10 setting alarm for 10000 ms
        rpc.nfsd-4720  [001] ..s.    50.856636: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939990 "xprt_sending")
        rpc.nfsd-4720  [001] ....    50.856637: __rpc_execute: RPC:    10 sync task going to sleep
         rpcbind-1871  [003] ..s.    50.856661: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1871  [003] ..s.    50.856664: xprt_complete_rqst: RPC:    10 xid 4345b0ec complete (28 bytes received)
         rpcbind-1871  [003] ..s.    50.856665: rpc_wake_up_task_queue_locked: RPC:    10 __rpc_wake_up_task (now 4294904942)
         rpcbind-1871  [003] ..s.    50.856665: rpc_wake_up_task_queue_locked: RPC:    10 disabling timer
         rpcbind-1871  [003] ..s.    50.856666: rpc_wake_up_task_queue_locked: RPC:    10 removed from queue ffff880407939a58 "xprt_pending"
         rpcbind-1871  [003] ..s.    50.856670: rpc_wake_up_task_queue_locked: RPC:       __rpc_wake_up_task done
        rpc.nfsd-4720  [001] ....    50.856690: __rpc_execute: RPC:    10 sync task resuming
        rpc.nfsd-4720  [001] ....    50.856692: call_status: RPC:    10 call_status (status 28)
        rpc.nfsd-4720  [001] ....    50.856692: call_decode: RPC:    10 call_decode (status 28)
        rpc.nfsd-4720  [001] ....    50.856694: call_decode: RPC:    10 call_decode result 0
        rpc.nfsd-4720  [001] ....    50.856695: __rpc_execute: RPC:    10 return 0, status 0
        rpc.nfsd-4720  [001] ....    50.856695: __rpc_execute: RPC:    10 release task
        rpc.nfsd-4720  [001] ....    50.856697: rpc_free: RPC:       freeing buffer of size 188 at ffff8804045aa000
        rpc.nfsd-4720  [001] ....    50.856697: xprt_release: RPC:    10 release request ffff880403542200
        rpc.nfsd-4720  [001] ....    50.856698: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939b20 "xprt_backlog")
        rpc.nfsd-4720  [001] ....    50.856699: rpc_release_client: RPC:       rpc_release_client(ffff88040a8b3600)
        rpc.nfsd-4720  [001] ....    50.856700: rpc_free_task: RPC:    10 freeing task
        rpc.nfsd-4720  [001] ....    50.856704: rpc_new_task: RPC:       new task initialized, procpid 4720
        rpc.nfsd-4720  [001] ....    50.856705: rpc_new_task: RPC:       allocated task ffff88040a645e00
        rpc.nfsd-4720  [001] ....    50.856706: __rpc_execute: RPC:    11 __rpc_execute flags=0x680
        rpc.nfsd-4720  [001] ....    50.856707: call_start: RPC:    11 call_start rpcbind4 proc SET (sync)
        rpc.nfsd-4720  [001] ....    50.856708: call_reserve: RPC:    11 call_reserve (status 0)
        rpc.nfsd-4720  [001] ....    50.856709: xprt_alloc_slot: RPC:    11 reserved req ffff880403542200 xid 4445b0ec
        rpc.nfsd-4720  [001] ....    50.856709: call_reserveresult: RPC:    11 call_reserveresult (status 0)
        rpc.nfsd-4720  [001] ....    50.856710: call_refresh: RPC:    11 call_refresh (status 0)
        rpc.nfsd-4720  [001] ....    50.856711: call_refreshresult: RPC:    11 call_refreshresult (status 0)
        rpc.nfsd-4720  [001] ....    50.856712: call_allocate: RPC:    11 call_allocate (status 0)
        rpc.nfsd-4720  [001] ....    50.856713: rpc_malloc: RPC:    11 allocated buffer of size 188 at ffff8804045aa000
        rpc.nfsd-4720  [001] ....    50.856714: call_bind: RPC:    11 call_bind (status 0)
        rpc.nfsd-4720  [001] ....    50.856715: call_connect: RPC:    11 call_connect xprt ffff880407939800 is connected
        rpc.nfsd-4720  [001] ....    50.856715: call_transmit: RPC:    11 call_transmit (status 0)
        rpc.nfsd-4720  [001] ....    50.856716: xprt_prepare_transmit: RPC:    11 xprt_prepare_transmit
        rpc.nfsd-4720  [001] ....    50.856717: call_transmit: RPC:    11 rpc_xdr_encode (status 0)
        rpc.nfsd-4720  [001] ....    50.856718: xprt_transmit: RPC:    11 xprt_transmit(84)
        rpc.nfsd-4720  [001] ....    50.856726: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-4720  [001] ....    50.856726: xprt_transmit: RPC:    11 xmit complete
        rpc.nfsd-4720  [001] ..s.    50.856727: __rpc_sleep_on_priority: RPC:    11 sleep_on(queue "xprt_pending" time 4294904942)
        rpc.nfsd-4720  [001] ..s.    50.856728: __rpc_sleep_on_priority: RPC:    11 added to queue ffff880407939a58 "xprt_pending"
        rpc.nfsd-4720  [001] ..s.    50.856729: __rpc_sleep_on_priority: RPC:    11 setting alarm for 10000 ms
        rpc.nfsd-4720  [001] ..s.    50.856731: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939990 "xprt_sending")
        rpc.nfsd-4720  [001] ....    50.856731: __rpc_execute: RPC:    11 sync task going to sleep
         rpcbind-1871  [003] ..s.    50.856771: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1871  [003] ..s.    50.856773: xprt_complete_rqst: RPC:    11 xid 4445b0ec complete (28 bytes received)
         rpcbind-1871  [003] ..s.    50.856774: rpc_wake_up_task_queue_locked: RPC:    11 __rpc_wake_up_task (now 4294904942)
         rpcbind-1871  [003] ..s.    50.856775: rpc_wake_up_task_queue_locked: RPC:    11 disabling timer
         rpcbind-1871  [003] ..s.    50.856776: rpc_wake_up_task_queue_locked: RPC:    11 removed from queue ffff880407939a58 "xprt_pending"
         rpcbind-1871  [003] ..s.    50.856779: rpc_wake_up_task_queue_locked: RPC:       __rpc_wake_up_task done
        rpc.nfsd-4720  [001] ....    50.856782: __rpc_execute: RPC:    11 sync task resuming
        rpc.nfsd-4720  [001] ....    50.856783: call_status: RPC:    11 call_status (status 28)
        rpc.nfsd-4720  [001] ....    50.856784: call_decode: RPC:    11 call_decode (status 28)
        rpc.nfsd-4720  [001] ....    50.856786: call_decode: RPC:    11 call_decode result 0
        rpc.nfsd-4720  [001] ....    50.856787: __rpc_execute: RPC:    11 return 0, status 0
        rpc.nfsd-4720  [001] ....    50.856787: __rpc_execute: RPC:    11 release task
        rpc.nfsd-4720  [001] ....    50.856788: rpc_free: RPC:       freeing buffer of size 188 at ffff8804045aa000
        rpc.nfsd-4720  [001] ....    50.856789: xprt_release: RPC:    11 release request ffff880403542200
        rpc.nfsd-4720  [001] ....    50.856790: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939b20 "xprt_backlog")
        rpc.nfsd-4720  [001] ....    50.856791: rpc_release_client: RPC:       rpc_release_client(ffff88040a8b3600)
        rpc.nfsd-4720  [001] ....    50.856792: rpc_free_task: RPC:    11 freeing task
        rpc.nfsd-4720  [001] ....    50.856795: rpc_new_task: RPC:       new task initialized, procpid 4720
        rpc.nfsd-4720  [001] ....    50.856795: rpc_new_task: RPC:       allocated task ffff88040a645e00
        rpc.nfsd-4720  [001] ....    50.856796: __rpc_execute: RPC:    12 __rpc_execute flags=0x680
        rpc.nfsd-4720  [001] ....    50.856797: call_start: RPC:    12 call_start rpcbind4 proc SET (sync)
        rpc.nfsd-4720  [001] ....    50.856798: call_reserve: RPC:    12 call_reserve (status 0)
        rpc.nfsd-4720  [001] ....    50.856799: xprt_alloc_slot: RPC:    12 reserved req ffff880403542200 xid 4545b0ec
        rpc.nfsd-4720  [001] ....    50.856800: call_reserveresult: RPC:    12 call_reserveresult (status 0)
        rpc.nfsd-4720  [001] ....    50.856801: call_refresh: RPC:    12 call_refresh (status 0)
        rpc.nfsd-4720  [001] ....    50.856801: call_refreshresult: RPC:    12 call_refreshresult (status 0)
        rpc.nfsd-4720  [001] ....    50.856802: call_allocate: RPC:    12 call_allocate (status 0)
        rpc.nfsd-4720  [001] ....    50.856803: rpc_malloc: RPC:    12 allocated buffer of size 188 at ffff8804045aa000
        rpc.nfsd-4720  [001] ....    50.856804: call_bind: RPC:    12 call_bind (status 0)
        rpc.nfsd-4720  [001] ....    50.856805: call_connect: RPC:    12 call_connect xprt ffff880407939800 is connected
        rpc.nfsd-4720  [001] ....    50.856805: call_transmit: RPC:    12 call_transmit (status 0)
        rpc.nfsd-4720  [001] ....    50.856806: xprt_prepare_transmit: RPC:    12 xprt_prepare_transmit
        rpc.nfsd-4720  [001] ....    50.856807: call_transmit: RPC:    12 rpc_xdr_encode (status 0)
        rpc.nfsd-4720  [001] ....    50.856808: xprt_transmit: RPC:    12 xprt_transmit(84)
        rpc.nfsd-4720  [001] ....    50.856814: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-4720  [001] ....    50.856815: xprt_transmit: RPC:    12 xmit complete
        rpc.nfsd-4720  [001] ..s.    50.856816: __rpc_sleep_on_priority: RPC:    12 sleep_on(queue "xprt_pending" time 4294904942)
        rpc.nfsd-4720  [001] ..s.    50.856817: __rpc_sleep_on_priority: RPC:    12 added to queue ffff880407939a58 "xprt_pending"
        rpc.nfsd-4720  [001] ..s.    50.856818: __rpc_sleep_on_priority: RPC:    12 setting alarm for 10000 ms
        rpc.nfsd-4720  [001] ..s.    50.856819: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939990 "xprt_sending")
        rpc.nfsd-4720  [001] ....    50.856820: __rpc_execute: RPC:    12 sync task going to sleep
         rpcbind-1871  [003] ..s.    50.856838: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1871  [003] ..s.    50.856840: xprt_complete_rqst: RPC:    12 xid 4545b0ec complete (28 bytes received)
         rpcbind-1871  [003] ..s.    50.856841: rpc_wake_up_task_queue_locked: RPC:    12 __rpc_wake_up_task (now 4294904942)
         rpcbind-1871  [003] ..s.    50.856842: rpc_wake_up_task_queue_locked: RPC:    12 disabling timer
         rpcbind-1871  [003] ..s.    50.856843: rpc_wake_up_task_queue_locked: RPC:    12 removed from queue ffff880407939a58 "xprt_pending"
         rpcbind-1871  [003] ..s.    50.856846: rpc_wake_up_task_queue_locked: RPC:       __rpc_wake_up_task done
        rpc.nfsd-4720  [001] ....    50.856848: __rpc_execute: RPC:    12 sync task resuming
        rpc.nfsd-4720  [001] ....    50.856849: call_status: RPC:    12 call_status (status 28)
        rpc.nfsd-4720  [001] ....    50.856850: call_decode: RPC:    12 call_decode (status 28)
        rpc.nfsd-4720  [001] ....    50.856851: call_decode: RPC:    12 call_decode result 0
        rpc.nfsd-4720  [001] ....    50.856852: __rpc_execute: RPC:    12 return 0, status 0
        rpc.nfsd-4720  [001] ....    50.856853: __rpc_execute: RPC:    12 release task
        rpc.nfsd-4720  [001] ....    50.856854: rpc_free: RPC:       freeing buffer of size 188 at ffff8804045aa000
        rpc.nfsd-4720  [001] ....    50.856854: xprt_release: RPC:    12 release request ffff880403542200
        rpc.nfsd-4720  [001] ....    50.856855: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939b20 "xprt_backlog")
        rpc.nfsd-4720  [001] ....    50.856856: rpc_release_client: RPC:       rpc_release_client(ffff88040a8b3600)
        rpc.nfsd-4720  [001] ....    50.856857: rpc_free_task: RPC:    12 freeing task
        rpc.nfsd-4720  [001] ....    50.856859: svc_setup_socket: setting up TCP socket for listening
        rpc.nfsd-4720  [001] ....    50.856860: svc_setup_socket: svc: svc_setup_socket created ffff880402bd4000 (inet ffff88040a708780)
        rpc.nfsd-4720  [001] ....    50.856882: svc_setup_socket: svc: svc_setup_socket ffff88040c7cacc0
        rpc.nfsd-4720  [001] ....    50.856884: rpc_new_task: RPC:       new task initialized, procpid 4720
        rpc.nfsd-4720  [001] ....    50.856884: rpc_new_task: RPC:       allocated task ffff88040b326c00
        rpc.nfsd-4720  [001] ....    50.856884: __rpc_execute: RPC:    13 __rpc_execute flags=0x680
        rpc.nfsd-4720  [001] ....    50.856885: call_start: RPC:    13 call_start rpcbind4 proc SET (sync)
        rpc.nfsd-4720  [001] ....    50.856885: call_reserve: RPC:    13 call_reserve (status 0)
        rpc.nfsd-4720  [001] ....    50.856886: xprt_alloc_slot: RPC:    13 reserved req ffff880403542200 xid 4645b0ec
        rpc.nfsd-4720  [001] ....    50.856886: call_reserveresult: RPC:    13 call_reserveresult (status 0)
        rpc.nfsd-4720  [001] ....    50.856886: call_refresh: RPC:    13 call_refresh (status 0)
        rpc.nfsd-4720  [001] ....    50.856886: call_refreshresult: RPC:    13 call_refreshresult (status 0)
        rpc.nfsd-4720  [001] ....    50.856887: call_allocate: RPC:    13 call_allocate (status 0)
        rpc.nfsd-4720  [001] ....    50.856887: rpc_malloc: RPC:    13 allocated buffer of size 188 at ffff8804045aa000
        rpc.nfsd-4720  [001] ....    50.856887: call_bind: RPC:    13 call_bind (status 0)
        rpc.nfsd-4720  [001] ....    50.856888: call_connect: RPC:    13 call_connect xprt ffff880407939800 is connected
        rpc.nfsd-4720  [001] ....    50.856888: call_transmit: RPC:    13 call_transmit (status 0)
        rpc.nfsd-4720  [001] ....    50.856888: xprt_prepare_transmit: RPC:    13 xprt_prepare_transmit
        rpc.nfsd-4720  [001] ....    50.856889: call_transmit: RPC:    13 rpc_xdr_encode (status 0)
        rpc.nfsd-4720  [001] ....    50.856889: xprt_transmit: RPC:    13 xprt_transmit(84)
        rpc.nfsd-4720  [001] ....    50.856892: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-4720  [001] ....    50.856892: xprt_transmit: RPC:    13 xmit complete
        rpc.nfsd-4720  [001] ..s.    50.856893: __rpc_sleep_on_priority: RPC:    13 sleep_on(queue "xprt_pending" time 4294904942)
        rpc.nfsd-4720  [001] ..s.    50.856893: __rpc_sleep_on_priority: RPC:    13 added to queue ffff880407939a58 "xprt_pending"
        rpc.nfsd-4720  [001] ..s.    50.856893: __rpc_sleep_on_priority: RPC:    13 setting alarm for 10000 ms
        rpc.nfsd-4720  [001] ..s.    50.856894: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939990 "xprt_sending")
        rpc.nfsd-4720  [001] ....    50.856894: __rpc_execute: RPC:    13 sync task going to sleep
         rpcbind-1871  [003] ..s.    50.856901: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1871  [003] ..s.    50.856901: xprt_complete_rqst: RPC:    13 xid 4645b0ec complete (28 bytes received)
         rpcbind-1871  [003] ..s.    50.856902: rpc_wake_up_task_queue_locked: RPC:    13 __rpc_wake_up_task (now 4294904942)
         rpcbind-1871  [003] ..s.    50.856902: rpc_wake_up_task_queue_locked: RPC:    13 disabling timer
         rpcbind-1871  [003] ..s.    50.856902: rpc_wake_up_task_queue_locked: RPC:    13 removed from queue ffff880407939a58 "xprt_pending"
         rpcbind-1871  [003] ..s.    50.856903: rpc_wake_up_task_queue_locked: RPC:       __rpc_wake_up_task done
        rpc.nfsd-4720  [001] ....    50.856905: __rpc_execute: RPC:    13 sync task resuming
        rpc.nfsd-4720  [001] ....    50.856905: call_status: RPC:    13 call_status (status 28)
        rpc.nfsd-4720  [001] ....    50.856905: call_decode: RPC:    13 call_decode (status 28)
        rpc.nfsd-4720  [001] ....    50.856906: call_decode: RPC:    13 call_decode result 0
        rpc.nfsd-4720  [001] ....    50.856906: __rpc_execute: RPC:    13 return 0, status 0
        rpc.nfsd-4720  [001] ....    50.856907: __rpc_execute: RPC:    13 release task
        rpc.nfsd-4720  [001] ....    50.856907: rpc_free: RPC:       freeing buffer of size 188 at ffff8804045aa000
        rpc.nfsd-4720  [001] ....    50.856907: xprt_release: RPC:    13 release request ffff880403542200
        rpc.nfsd-4720  [001] ....    50.856908: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939b20 "xprt_backlog")
        rpc.nfsd-4720  [001] ....    50.856908: rpc_release_client: RPC:       rpc_release_client(ffff88040a8b3600)
        rpc.nfsd-4720  [001] ....    50.856908: rpc_free_task: RPC:    13 freeing task
        rpc.nfsd-4720  [001] ....    50.856909: rpc_new_task: RPC:       new task initialized, procpid 4720
        rpc.nfsd-4720  [001] ....    50.856909: rpc_new_task: RPC:       allocated task ffff88040b326c00
        rpc.nfsd-4720  [001] ....    50.856910: __rpc_execute: RPC:    14 __rpc_execute flags=0x680
        rpc.nfsd-4720  [001] ....    50.856910: call_start: RPC:    14 call_start rpcbind4 proc SET (sync)
        rpc.nfsd-4720  [001] ....    50.856910: call_reserve: RPC:    14 call_reserve (status 0)
        rpc.nfsd-4720  [001] ....    50.856911: xprt_alloc_slot: RPC:    14 reserved req ffff880403542200 xid 4745b0ec
        rpc.nfsd-4720  [001] ....    50.856911: call_reserveresult: RPC:    14 call_reserveresult (status 0)
        rpc.nfsd-4720  [001] ....    50.856911: call_refresh: RPC:    14 call_refresh (status 0)
        rpc.nfsd-4720  [001] ....    50.856912: call_refreshresult: RPC:    14 call_refreshresult (status 0)
        rpc.nfsd-4720  [001] ....    50.856912: call_allocate: RPC:    14 call_allocate (status 0)
        rpc.nfsd-4720  [001] ....    50.856912: rpc_malloc: RPC:    14 allocated buffer of size 188 at ffff8804045aa000
        rpc.nfsd-4720  [001] ....    50.856912: call_bind: RPC:    14 call_bind (status 0)
        rpc.nfsd-4720  [001] ....    50.856913: call_connect: RPC:    14 call_connect xprt ffff880407939800 is connected
        rpc.nfsd-4720  [001] ....    50.856913: call_transmit: RPC:    14 call_transmit (status 0)
        rpc.nfsd-4720  [001] ....    50.856913: xprt_prepare_transmit: RPC:    14 xprt_prepare_transmit
        rpc.nfsd-4720  [001] ....    50.856913: call_transmit: RPC:    14 rpc_xdr_encode (status 0)
        rpc.nfsd-4720  [001] ....    50.856914: xprt_transmit: RPC:    14 xprt_transmit(84)
        rpc.nfsd-4720  [001] ....    50.856916: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-4720  [001] ....    50.856916: xprt_transmit: RPC:    14 xmit complete
        rpc.nfsd-4720  [001] ..s.    50.856917: __rpc_sleep_on_priority: RPC:    14 sleep_on(queue "xprt_pending" time 4294904942)
        rpc.nfsd-4720  [001] ..s.    50.856917: __rpc_sleep_on_priority: RPC:    14 added to queue ffff880407939a58 "xprt_pending"
        rpc.nfsd-4720  [001] ..s.    50.856917: __rpc_sleep_on_priority: RPC:    14 setting alarm for 10000 ms
        rpc.nfsd-4720  [001] ..s.    50.856918: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939990 "xprt_sending")
        rpc.nfsd-4720  [001] ....    50.856918: __rpc_execute: RPC:    14 sync task going to sleep
         rpcbind-1871  [003] ..s.    50.856924: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1871  [003] ..s.    50.856925: xprt_complete_rqst: RPC:    14 xid 4745b0ec complete (28 bytes received)
         rpcbind-1871  [003] ..s.    50.856926: rpc_wake_up_task_queue_locked: RPC:    14 __rpc_wake_up_task (now 4294904942)
         rpcbind-1871  [003] ..s.    50.856926: rpc_wake_up_task_queue_locked: RPC:    14 disabling timer
         rpcbind-1871  [003] ..s.    50.856926: rpc_wake_up_task_queue_locked: RPC:    14 removed from queue ffff880407939a58 "xprt_pending"
         rpcbind-1871  [003] ..s.    50.856927: rpc_wake_up_task_queue_locked: RPC:       __rpc_wake_up_task done
        rpc.nfsd-4720  [001] ....    50.856928: __rpc_execute: RPC:    14 sync task resuming
        rpc.nfsd-4720  [001] ....    50.856929: call_status: RPC:    14 call_status (status 28)
        rpc.nfsd-4720  [001] ....    50.856929: call_decode: RPC:    14 call_decode (status 28)
        rpc.nfsd-4720  [001] ....    50.856929: call_decode: RPC:    14 call_decode result 0
        rpc.nfsd-4720  [001] ....    50.856930: __rpc_execute: RPC:    14 return 0, status 0
        rpc.nfsd-4720  [001] ....    50.856930: __rpc_execute: RPC:    14 release task
        rpc.nfsd-4720  [001] ....    50.856930: rpc_free: RPC:       freeing buffer of size 188 at ffff8804045aa000
        rpc.nfsd-4720  [001] ....    50.856930: xprt_release: RPC:    14 release request ffff880403542200
        rpc.nfsd-4720  [001] ....    50.856931: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939b20 "xprt_backlog")
        rpc.nfsd-4720  [001] ....    50.856931: rpc_release_client: RPC:       rpc_release_client(ffff88040a8b3600)
        rpc.nfsd-4720  [001] ....    50.856931: rpc_free_task: RPC:    14 freeing task
        rpc.nfsd-4720  [001] ....    50.856932: rpc_new_task: RPC:       new task initialized, procpid 4720
        rpc.nfsd-4720  [001] ....    50.856933: rpc_new_task: RPC:       allocated task ffff88040b326c00
        rpc.nfsd-4720  [001] ....    50.856933: __rpc_execute: RPC:    15 __rpc_execute flags=0x680
        rpc.nfsd-4720  [001] ....    50.856933: call_start: RPC:    15 call_start rpcbind4 proc SET (sync)
        rpc.nfsd-4720  [001] ....    50.856934: call_reserve: RPC:    15 call_reserve (status 0)
        rpc.nfsd-4720  [001] ....    50.856934: xprt_alloc_slot: RPC:    15 reserved req ffff880403542200 xid 4845b0ec
        rpc.nfsd-4720  [001] ....    50.856934: call_reserveresult: RPC:    15 call_reserveresult (status 0)
        rpc.nfsd-4720  [001] ....    50.856934: call_refresh: RPC:    15 call_refresh (status 0)
        rpc.nfsd-4720  [001] ....    50.856935: call_refreshresult: RPC:    15 call_refreshresult (status 0)
        rpc.nfsd-4720  [001] ....    50.856935: call_allocate: RPC:    15 call_allocate (status 0)
        rpc.nfsd-4720  [001] ....    50.856935: rpc_malloc: RPC:    15 allocated buffer of size 188 at ffff8804045aa000
        rpc.nfsd-4720  [001] ....    50.856935: call_bind: RPC:    15 call_bind (status 0)
        rpc.nfsd-4720  [001] ....    50.856936: call_connect: RPC:    15 call_connect xprt ffff880407939800 is connected
        rpc.nfsd-4720  [001] ....    50.856936: call_transmit: RPC:    15 call_transmit (status 0)
        rpc.nfsd-4720  [001] ....    50.856936: xprt_prepare_transmit: RPC:    15 xprt_prepare_transmit
        rpc.nfsd-4720  [001] ....    50.856936: call_transmit: RPC:    15 rpc_xdr_encode (status 0)
        rpc.nfsd-4720  [001] ....    50.856937: xprt_transmit: RPC:    15 xprt_transmit(84)
        rpc.nfsd-4720  [001] ....    50.856939: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-4720  [001] ....    50.856939: xprt_transmit: RPC:    15 xmit complete
        rpc.nfsd-4720  [001] ..s.    50.856939: __rpc_sleep_on_priority: RPC:    15 sleep_on(queue "xprt_pending" time 4294904942)
        rpc.nfsd-4720  [001] ..s.    50.856940: __rpc_sleep_on_priority: RPC:    15 added to queue ffff880407939a58 "xprt_pending"
        rpc.nfsd-4720  [001] ..s.    50.856940: __rpc_sleep_on_priority: RPC:    15 setting alarm for 10000 ms
        rpc.nfsd-4720  [001] ..s.    50.856941: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939990 "xprt_sending")
        rpc.nfsd-4720  [001] ....    50.856941: __rpc_execute: RPC:    15 sync task going to sleep
         rpcbind-1871  [003] ..s.    50.856947: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1871  [003] ..s.    50.856948: xprt_complete_rqst: RPC:    15 xid 4845b0ec complete (28 bytes received)
         rpcbind-1871  [003] ..s.    50.856949: rpc_wake_up_task_queue_locked: RPC:    15 __rpc_wake_up_task (now 4294904942)
         rpcbind-1871  [003] ..s.    50.856949: rpc_wake_up_task_queue_locked: RPC:    15 disabling timer
         rpcbind-1871  [003] ..s.    50.856949: rpc_wake_up_task_queue_locked: RPC:    15 removed from queue ffff880407939a58 "xprt_pending"
         rpcbind-1871  [003] ..s.    50.856950: rpc_wake_up_task_queue_locked: RPC:       __rpc_wake_up_task done
        rpc.nfsd-4720  [001] ....    50.856951: __rpc_execute: RPC:    15 sync task resuming
        rpc.nfsd-4720  [001] ....    50.856952: call_status: RPC:    15 call_status (status 28)
        rpc.nfsd-4720  [001] ....    50.856952: call_decode: RPC:    15 call_decode (status 28)
        rpc.nfsd-4720  [001] ....    50.856952: call_decode: RPC:    15 call_decode result 0
        rpc.nfsd-4720  [001] ....    50.856953: __rpc_execute: RPC:    15 return 0, status 0
        rpc.nfsd-4720  [001] ....    50.856953: __rpc_execute: RPC:    15 release task
        rpc.nfsd-4720  [001] ....    50.856953: rpc_free: RPC:       freeing buffer of size 188 at ffff8804045aa000
        rpc.nfsd-4720  [001] ....    50.856953: xprt_release: RPC:    15 release request ffff880403542200
        rpc.nfsd-4720  [001] ....    50.856954: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939b20 "xprt_backlog")
        rpc.nfsd-4720  [001] ....    50.856954: rpc_release_client: RPC:       rpc_release_client(ffff88040a8b3600)
        rpc.nfsd-4720  [001] ....    50.856954: rpc_free_task: RPC:    15 freeing task
        rpc.nfsd-4720  [001] ....    50.856955: rpc_new_task: RPC:       new task initialized, procpid 4720
        rpc.nfsd-4720  [001] ....    50.856955: rpc_new_task: RPC:       allocated task ffff88040b326c00
        rpc.nfsd-4720  [001] ....    50.856956: __rpc_execute: RPC:    16 __rpc_execute flags=0x680
        rpc.nfsd-4720  [001] ....    50.856956: call_start: RPC:    16 call_start rpcbind4 proc SET (sync)
        rpc.nfsd-4720  [001] ....    50.856956: call_reserve: RPC:    16 call_reserve (status 0)
        rpc.nfsd-4720  [001] ....    50.856957: xprt_alloc_slot: RPC:    16 reserved req ffff880403542200 xid 4945b0ec
        rpc.nfsd-4720  [001] ....    50.856957: call_reserveresult: RPC:    16 call_reserveresult (status 0)
        rpc.nfsd-4720  [001] ....    50.856957: call_refresh: RPC:    16 call_refresh (status 0)
        rpc.nfsd-4720  [001] ....    50.856958: call_refreshresult: RPC:    16 call_refreshresult (status 0)
        rpc.nfsd-4720  [001] ....    50.856958: call_allocate: RPC:    16 call_allocate (status 0)
        rpc.nfsd-4720  [001] ....    50.856958: rpc_malloc: RPC:    16 allocated buffer of size 188 at ffff8804045aa000
        rpc.nfsd-4720  [001] ....    50.856959: call_bind: RPC:    16 call_bind (status 0)
        rpc.nfsd-4720  [001] ....    50.856959: call_connect: RPC:    16 call_connect xprt ffff880407939800 is connected
        rpc.nfsd-4720  [001] ....    50.856959: call_transmit: RPC:    16 call_transmit (status 0)
        rpc.nfsd-4720  [001] ....    50.856959: xprt_prepare_transmit: RPC:    16 xprt_prepare_transmit
        rpc.nfsd-4720  [001] ....    50.856960: call_transmit: RPC:    16 rpc_xdr_encode (status 0)
        rpc.nfsd-4720  [001] ....    50.856960: xprt_transmit: RPC:    16 xprt_transmit(84)
        rpc.nfsd-4720  [001] ....    50.856962: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-4720  [001] ....    50.856962: xprt_transmit: RPC:    16 xmit complete
        rpc.nfsd-4720  [001] ..s.    50.856963: __rpc_sleep_on_priority: RPC:    16 sleep_on(queue "xprt_pending" time 4294904942)
        rpc.nfsd-4720  [001] ..s.    50.856963: __rpc_sleep_on_priority: RPC:    16 added to queue ffff880407939a58 "xprt_pending"
        rpc.nfsd-4720  [001] ..s.    50.856963: __rpc_sleep_on_priority: RPC:    16 setting alarm for 10000 ms
        rpc.nfsd-4720  [001] ..s.    50.856964: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939990 "xprt_sending")
        rpc.nfsd-4720  [001] ....    50.856964: __rpc_execute: RPC:    16 sync task going to sleep
         rpcbind-1871  [003] ..s.    50.856971: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1871  [003] ..s.    50.856971: xprt_complete_rqst: RPC:    16 xid 4945b0ec complete (28 bytes received)
         rpcbind-1871  [003] ..s.    50.856972: rpc_wake_up_task_queue_locked: RPC:    16 __rpc_wake_up_task (now 4294904942)
         rpcbind-1871  [003] ..s.    50.856972: rpc_wake_up_task_queue_locked: RPC:    16 disabling timer
         rpcbind-1871  [003] ..s.    50.856972: rpc_wake_up_task_queue_locked: RPC:    16 removed from queue ffff880407939a58 "xprt_pending"
         rpcbind-1871  [003] ..s.    50.856973: rpc_wake_up_task_queue_locked: RPC:       __rpc_wake_up_task done
        rpc.nfsd-4720  [001] ....    50.856974: __rpc_execute: RPC:    16 sync task resuming
        rpc.nfsd-4720  [001] ....    50.856975: call_status: RPC:    16 call_status (status 28)
        rpc.nfsd-4720  [001] ....    50.856975: call_decode: RPC:    16 call_decode (status 28)
        rpc.nfsd-4720  [001] ....    50.856976: call_decode: RPC:    16 call_decode result 0
        rpc.nfsd-4720  [001] ....    50.856976: __rpc_execute: RPC:    16 return 0, status 0
        rpc.nfsd-4720  [001] ....    50.856976: __rpc_execute: RPC:    16 release task
        rpc.nfsd-4720  [001] ....    50.856976: rpc_free: RPC:       freeing buffer of size 188 at ffff8804045aa000
        rpc.nfsd-4720  [001] ....    50.856977: xprt_release: RPC:    16 release request ffff880403542200
        rpc.nfsd-4720  [001] ....    50.856977: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939b20 "xprt_backlog")
        rpc.nfsd-4720  [001] ....    50.856977: rpc_release_client: RPC:       rpc_release_client(ffff88040a8b3600)
        rpc.nfsd-4720  [001] ....    50.856978: rpc_free_task: RPC:    16 freeing task
        rpc.nfsd-4720  [001] ....    50.856979: rpc_new_task: RPC:       new task initialized, procpid 4720
        rpc.nfsd-4720  [001] ....    50.856979: rpc_new_task: RPC:       allocated task ffff88040b326c00
        rpc.nfsd-4720  [001] ....    50.856979: __rpc_execute: RPC:    17 __rpc_execute flags=0x680
        rpc.nfsd-4720  [001] ....    50.856979: call_start: RPC:    17 call_start rpcbind4 proc SET (sync)
        rpc.nfsd-4720  [001] ....    50.856980: call_reserve: RPC:    17 call_reserve (status 0)
        rpc.nfsd-4720  [001] ....    50.856980: xprt_alloc_slot: RPC:    17 reserved req ffff880403542200 xid 4a45b0ec
        rpc.nfsd-4720  [001] ....    50.856980: call_reserveresult: RPC:    17 call_reserveresult (status 0)
        rpc.nfsd-4720  [001] ....    50.856980: call_refresh: RPC:    17 call_refresh (status 0)
        rpc.nfsd-4720  [001] ....    50.856981: call_refreshresult: RPC:    17 call_refreshresult (status 0)
        rpc.nfsd-4720  [001] ....    50.856981: call_allocate: RPC:    17 call_allocate (status 0)
        rpc.nfsd-4720  [001] ....    50.856981: rpc_malloc: RPC:    17 allocated buffer of size 188 at ffff8804045aa000
        rpc.nfsd-4720  [001] ....    50.856982: call_bind: RPC:    17 call_bind (status 0)
        rpc.nfsd-4720  [001] ....    50.856982: call_connect: RPC:    17 call_connect xprt ffff880407939800 is connected
        rpc.nfsd-4720  [001] ....    50.856982: call_transmit: RPC:    17 call_transmit (status 0)
        rpc.nfsd-4720  [001] ....    50.856982: xprt_prepare_transmit: RPC:    17 xprt_prepare_transmit
        rpc.nfsd-4720  [001] ....    50.856983: call_transmit: RPC:    17 rpc_xdr_encode (status 0)
        rpc.nfsd-4720  [001] ....    50.856983: xprt_transmit: RPC:    17 xprt_transmit(84)
        rpc.nfsd-4720  [001] ....    50.856985: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-4720  [001] ....    50.856985: xprt_transmit: RPC:    17 xmit complete
        rpc.nfsd-4720  [001] ..s.    50.856986: __rpc_sleep_on_priority: RPC:    17 sleep_on(queue "xprt_pending" time 4294904942)
        rpc.nfsd-4720  [001] ..s.    50.856986: __rpc_sleep_on_priority: RPC:    17 added to queue ffff880407939a58 "xprt_pending"
        rpc.nfsd-4720  [001] ..s.    50.856986: __rpc_sleep_on_priority: RPC:    17 setting alarm for 10000 ms
        rpc.nfsd-4720  [001] ..s.    50.856987: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939990 "xprt_sending")
        rpc.nfsd-4720  [001] ....    50.856987: __rpc_execute: RPC:    17 sync task going to sleep
         rpcbind-1871  [003] ..s.    50.856994: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1871  [003] ..s.    50.856994: xprt_complete_rqst: RPC:    17 xid 4a45b0ec complete (28 bytes received)
         rpcbind-1871  [003] ..s.    50.856995: rpc_wake_up_task_queue_locked: RPC:    17 __rpc_wake_up_task (now 4294904942)
         rpcbind-1871  [003] ..s.    50.856995: rpc_wake_up_task_queue_locked: RPC:    17 disabling timer
         rpcbind-1871  [003] ..s.    50.856996: rpc_wake_up_task_queue_locked: RPC:    17 removed from queue ffff880407939a58 "xprt_pending"
         rpcbind-1871  [003] ..s.    50.856997: rpc_wake_up_task_queue_locked: RPC:       __rpc_wake_up_task done
        rpc.nfsd-4720  [001] ....    50.856998: __rpc_execute: RPC:    17 sync task resuming
        rpc.nfsd-4720  [001] ....    50.856998: call_status: RPC:    17 call_status (status 28)
        rpc.nfsd-4720  [001] ....    50.856998: call_decode: RPC:    17 call_decode (status 28)
        rpc.nfsd-4720  [001] ....    50.856999: call_decode: RPC:    17 call_decode result 0
        rpc.nfsd-4720  [001] ....    50.856999: __rpc_execute: RPC:    17 return 0, status 0
        rpc.nfsd-4720  [001] ....    50.856999: __rpc_execute: RPC:    17 release task
        rpc.nfsd-4720  [001] ....    50.857000: rpc_free: RPC:       freeing buffer of size 188 at ffff8804045aa000
        rpc.nfsd-4720  [001] ....    50.857000: xprt_release: RPC:    17 release request ffff880403542200
        rpc.nfsd-4720  [001] ....    50.857000: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939b20 "xprt_backlog")
        rpc.nfsd-4720  [001] ....    50.857001: rpc_release_client: RPC:       rpc_release_client(ffff88040a8b3600)
        rpc.nfsd-4720  [001] ....    50.857001: rpc_free_task: RPC:    17 freeing task
        rpc.nfsd-4720  [001] ....    50.857002: svc_write_space: svc: socket ffff880402ba1000(inet ffff880407b2cc00), write_space busy=1
        rpc.nfsd-4720  [001] ....    50.857003: svc_setup_socket: svc: kernel_setsockopt returned 0
        rpc.nfsd-4720  [001] ....    50.857003: svc_setup_socket: svc: svc_setup_socket created ffff880402ba1000 (inet ffff880407b2cc00)
        rpc.nfsd-4720  [001] ....    50.857568: svc_setup_socket: svc: svc_setup_socket ffff88040ec230c0
        rpc.nfsd-4720  [001] ....    50.857571: rpc_new_task: RPC:       new task initialized, procpid 4720
        rpc.nfsd-4720  [001] ....    50.857571: rpc_new_task: RPC:       allocated task ffff88040a14b900
        rpc.nfsd-4720  [001] ....    50.857572: __rpc_execute: RPC:    18 __rpc_execute flags=0x680
        rpc.nfsd-4720  [001] ....    50.857572: call_start: RPC:    18 call_start rpcbind4 proc SET (sync)
        rpc.nfsd-4720  [001] ....    50.857573: call_reserve: RPC:    18 call_reserve (status 0)
        rpc.nfsd-4720  [001] ....    50.857573: xprt_alloc_slot: RPC:    18 reserved req ffff880403542200 xid 4b45b0ec
        rpc.nfsd-4720  [001] ....    50.857573: call_reserveresult: RPC:    18 call_reserveresult (status 0)
        rpc.nfsd-4720  [001] ....    50.857574: call_refresh: RPC:    18 call_refresh (status 0)
        rpc.nfsd-4720  [001] ....    50.857574: call_refreshresult: RPC:    18 call_refreshresult (status 0)
        rpc.nfsd-4720  [001] ....    50.857574: call_allocate: RPC:    18 call_allocate (status 0)
        rpc.nfsd-4720  [001] ....    50.857575: rpc_malloc: RPC:    18 allocated buffer of size 188 at ffff8800d8dc0000
        rpc.nfsd-4720  [001] ....    50.857575: call_bind: RPC:    18 call_bind (status 0)
        rpc.nfsd-4720  [001] ....    50.857575: call_connect: RPC:    18 call_connect xprt ffff880407939800 is connected
        rpc.nfsd-4720  [001] ....    50.857576: call_transmit: RPC:    18 call_transmit (status 0)
        rpc.nfsd-4720  [001] ....    50.857576: xprt_prepare_transmit: RPC:    18 xprt_prepare_transmit
        rpc.nfsd-4720  [001] ....    50.857576: call_transmit: RPC:    18 rpc_xdr_encode (status 0)
        rpc.nfsd-4720  [001] ....    50.857577: xprt_transmit: RPC:    18 xprt_transmit(80)
        rpc.nfsd-4720  [001] ....    50.857581: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-4720  [001] ....    50.857581: xprt_transmit: RPC:    18 xmit complete
        rpc.nfsd-4720  [001] ..s.    50.857581: __rpc_sleep_on_priority: RPC:    18 sleep_on(queue "xprt_pending" time 4294904943)
        rpc.nfsd-4720  [001] ..s.    50.857582: __rpc_sleep_on_priority: RPC:    18 added to queue ffff880407939a58 "xprt_pending"
        rpc.nfsd-4720  [001] ..s.    50.857582: __rpc_sleep_on_priority: RPC:    18 setting alarm for 10000 ms
        rpc.nfsd-4720  [001] ..s.    50.857583: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939990 "xprt_sending")
        rpc.nfsd-4720  [001] ....    50.857583: __rpc_execute: RPC:    18 sync task going to sleep
         rpcbind-1871  [003] ..s.    50.857592: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1871  [003] ..s.    50.857593: xprt_complete_rqst: RPC:    18 xid 4b45b0ec complete (28 bytes received)
         rpcbind-1871  [003] ..s.    50.857593: rpc_wake_up_task_queue_locked: RPC:    18 __rpc_wake_up_task (now 4294904943)
         rpcbind-1871  [003] ..s.    50.857594: rpc_wake_up_task_queue_locked: RPC:    18 disabling timer
         rpcbind-1871  [003] ..s.    50.857594: rpc_wake_up_task_queue_locked: RPC:    18 removed from queue ffff880407939a58 "xprt_pending"
         rpcbind-1871  [003] ..s.    50.857595: rpc_wake_up_task_queue_locked: RPC:       __rpc_wake_up_task done
        rpc.nfsd-4720  [001] ....    50.857596: __rpc_execute: RPC:    18 sync task resuming
        rpc.nfsd-4720  [001] ....    50.857597: call_status: RPC:    18 call_status (status 28)
        rpc.nfsd-4720  [001] ....    50.857597: call_decode: RPC:    18 call_decode (status 28)
        rpc.nfsd-4720  [001] ....    50.857598: call_decode: RPC:    18 call_decode result 0
        rpc.nfsd-4720  [001] ....    50.857598: __rpc_execute: RPC:    18 return 0, status 0
        rpc.nfsd-4720  [001] ....    50.857598: __rpc_execute: RPC:    18 release task
        rpc.nfsd-4720  [001] ....    50.857599: rpc_free: RPC:       freeing buffer of size 188 at ffff8800d8dc0000
        rpc.nfsd-4720  [001] ....    50.857599: xprt_release: RPC:    18 release request ffff880403542200
        rpc.nfsd-4720  [001] ....    50.857600: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939b20 "xprt_backlog")
        rpc.nfsd-4720  [001] ....    50.857600: rpc_release_client: RPC:       rpc_release_client(ffff88040a8b3600)
        rpc.nfsd-4720  [001] ....    50.857601: rpc_free_task: RPC:    18 freeing task
        rpc.nfsd-4720  [001] ....    50.857602: rpc_new_task: RPC:       new task initialized, procpid 4720
        rpc.nfsd-4720  [001] ....    50.857602: rpc_new_task: RPC:       allocated task ffff88040a14b900
        rpc.nfsd-4720  [001] ....    50.857602: __rpc_execute: RPC:    19 __rpc_execute flags=0x680
        rpc.nfsd-4720  [001] ....    50.857602: call_start: RPC:    19 call_start rpcbind4 proc SET (sync)
        rpc.nfsd-4720  [001] ....    50.857603: call_reserve: RPC:    19 call_reserve (status 0)
        rpc.nfsd-4720  [001] ....    50.857603: xprt_alloc_slot: RPC:    19 reserved req ffff880403542200 xid 4c45b0ec
        rpc.nfsd-4720  [001] ....    50.857603: call_reserveresult: RPC:    19 call_reserveresult (status 0)
        rpc.nfsd-4720  [001] ....    50.857604: call_refresh: RPC:    19 call_refresh (status 0)
        rpc.nfsd-4720  [001] ....    50.857604: call_refreshresult: RPC:    19 call_refreshresult (status 0)
        rpc.nfsd-4720  [001] ....    50.857604: call_allocate: RPC:    19 call_allocate (status 0)
        rpc.nfsd-4720  [001] ....    50.857605: rpc_malloc: RPC:    19 allocated buffer of size 188 at ffff8800d8dc0000
        rpc.nfsd-4720  [001] ....    50.857605: call_bind: RPC:    19 call_bind (status 0)
        rpc.nfsd-4720  [001] ....    50.857605: call_connect: RPC:    19 call_connect xprt ffff880407939800 is connected
        rpc.nfsd-4720  [001] ....    50.857605: call_transmit: RPC:    19 call_transmit (status 0)
        rpc.nfsd-4720  [001] ....    50.857606: xprt_prepare_transmit: RPC:    19 xprt_prepare_transmit
        rpc.nfsd-4720  [001] ....    50.857606: call_transmit: RPC:    19 rpc_xdr_encode (status 0)
        rpc.nfsd-4720  [001] ....    50.857606: xprt_transmit: RPC:    19 xprt_transmit(80)
        rpc.nfsd-4720  [001] ....    50.857608: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-4720  [001] ....    50.857609: xprt_transmit: RPC:    19 xmit complete
        rpc.nfsd-4720  [001] ..s.    50.857609: __rpc_sleep_on_priority: RPC:    19 sleep_on(queue "xprt_pending" time 4294904943)
        rpc.nfsd-4720  [001] ..s.    50.857609: __rpc_sleep_on_priority: RPC:    19 added to queue ffff880407939a58 "xprt_pending"
        rpc.nfsd-4720  [001] ..s.    50.857610: __rpc_sleep_on_priority: RPC:    19 setting alarm for 10000 ms
        rpc.nfsd-4720  [001] ..s.    50.857610: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939990 "xprt_sending")
        rpc.nfsd-4720  [001] ....    50.857610: __rpc_execute: RPC:    19 sync task going to sleep
         rpcbind-1871  [003] ..s.    50.857617: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1871  [003] ..s.    50.857618: xprt_complete_rqst: RPC:    19 xid 4c45b0ec complete (28 bytes received)
         rpcbind-1871  [003] ..s.    50.857618: rpc_wake_up_task_queue_locked: RPC:    19 __rpc_wake_up_task (now 4294904943)
         rpcbind-1871  [003] ..s.    50.857618: rpc_wake_up_task_queue_locked: RPC:    19 disabling timer
         rpcbind-1871  [003] ..s.    50.857618: rpc_wake_up_task_queue_locked: RPC:    19 removed from queue ffff880407939a58 "xprt_pending"
         rpcbind-1871  [003] ..s.    50.857619: rpc_wake_up_task_queue_locked: RPC:       __rpc_wake_up_task done
        rpc.nfsd-4720  [001] ....    50.857620: __rpc_execute: RPC:    19 sync task resuming
        rpc.nfsd-4720  [001] ....    50.857621: call_status: RPC:    19 call_status (status 28)
        rpc.nfsd-4720  [001] ....    50.857621: call_decode: RPC:    19 call_decode (status 28)
        rpc.nfsd-4720  [001] ....    50.857622: call_decode: RPC:    19 call_decode result 0
        rpc.nfsd-4720  [001] ....    50.857622: __rpc_execute: RPC:    19 return 0, status 0
        rpc.nfsd-4720  [001] ....    50.857622: __rpc_execute: RPC:    19 release task
        rpc.nfsd-4720  [001] ....    50.857623: rpc_free: RPC:       freeing buffer of size 188 at ffff8800d8dc0000
        rpc.nfsd-4720  [001] ....    50.857623: xprt_release: RPC:    19 release request ffff880403542200
        rpc.nfsd-4720  [001] ....    50.857623: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939b20 "xprt_backlog")
        rpc.nfsd-4720  [001] ....    50.857623: rpc_release_client: RPC:       rpc_release_client(ffff88040a8b3600)
        rpc.nfsd-4720  [001] ....    50.857624: rpc_free_task: RPC:    19 freeing task
        rpc.nfsd-4720  [001] ....    50.857625: rpc_new_task: RPC:       new task initialized, procpid 4720
        rpc.nfsd-4720  [001] ....    50.857625: rpc_new_task: RPC:       allocated task ffff88040a14b900
        rpc.nfsd-4720  [001] ....    50.857625: __rpc_execute: RPC:    20 __rpc_execute flags=0x680
        rpc.nfsd-4720  [001] ....    50.857625: call_start: RPC:    20 call_start rpcbind4 proc SET (sync)
        rpc.nfsd-4720  [001] ....    50.857626: call_reserve: RPC:    20 call_reserve (status 0)
        rpc.nfsd-4720  [001] ....    50.857626: xprt_alloc_slot: RPC:    20 reserved req ffff880403542200 xid 4d45b0ec
        rpc.nfsd-4720  [001] ....    50.857626: call_reserveresult: RPC:    20 call_reserveresult (status 0)
        rpc.nfsd-4720  [001] ....    50.857627: call_refresh: RPC:    20 call_refresh (status 0)
        rpc.nfsd-4720  [001] ....    50.857627: call_refreshresult: RPC:    20 call_refreshresult (status 0)
        rpc.nfsd-4720  [001] ....    50.857627: call_allocate: RPC:    20 call_allocate (status 0)
        rpc.nfsd-4720  [001] ....    50.857628: rpc_malloc: RPC:    20 allocated buffer of size 188 at ffff8800d8dc0000
        rpc.nfsd-4720  [001] ....    50.857628: call_bind: RPC:    20 call_bind (status 0)
        rpc.nfsd-4720  [001] ....    50.857628: call_connect: RPC:    20 call_connect xprt ffff880407939800 is connected
        rpc.nfsd-4720  [001] ....    50.857628: call_transmit: RPC:    20 call_transmit (status 0)
        rpc.nfsd-4720  [001] ....    50.857628: xprt_prepare_transmit: RPC:    20 xprt_prepare_transmit
        rpc.nfsd-4720  [001] ....    50.857629: call_transmit: RPC:    20 rpc_xdr_encode (status 0)
        rpc.nfsd-4720  [001] ....    50.857629: xprt_transmit: RPC:    20 xprt_transmit(80)
        rpc.nfsd-4720  [001] ....    50.857631: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-4720  [001] ....    50.857632: xprt_transmit: RPC:    20 xmit complete
        rpc.nfsd-4720  [001] ..s.    50.857632: __rpc_sleep_on_priority: RPC:    20 sleep_on(queue "xprt_pending" time 4294904943)
        rpc.nfsd-4720  [001] ..s.    50.857632: __rpc_sleep_on_priority: RPC:    20 added to queue ffff880407939a58 "xprt_pending"
        rpc.nfsd-4720  [001] ..s.    50.857632: __rpc_sleep_on_priority: RPC:    20 setting alarm for 10000 ms
        rpc.nfsd-4720  [001] ..s.    50.857633: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939990 "xprt_sending")
        rpc.nfsd-4720  [001] ....    50.857633: __rpc_execute: RPC:    20 sync task going to sleep
         rpcbind-1871  [003] ..s.    50.857640: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1871  [003] ..s.    50.857640: xprt_complete_rqst: RPC:    20 xid 4d45b0ec complete (28 bytes received)
         rpcbind-1871  [003] ..s.    50.857641: rpc_wake_up_task_queue_locked: RPC:    20 __rpc_wake_up_task (now 4294904943)
         rpcbind-1871  [003] ..s.    50.857641: rpc_wake_up_task_queue_locked: RPC:    20 disabling timer
         rpcbind-1871  [003] ..s.    50.857641: rpc_wake_up_task_queue_locked: RPC:    20 removed from queue ffff880407939a58 "xprt_pending"
         rpcbind-1871  [003] ..s.    50.857642: rpc_wake_up_task_queue_locked: RPC:       __rpc_wake_up_task done
        rpc.nfsd-4720  [001] ....    50.857643: __rpc_execute: RPC:    20 sync task resuming
        rpc.nfsd-4720  [001] ....    50.857644: call_status: RPC:    20 call_status (status 28)
        rpc.nfsd-4720  [001] ....    50.857644: call_decode: RPC:    20 call_decode (status 28)
        rpc.nfsd-4720  [001] ....    50.857645: call_decode: RPC:    20 call_decode result 0
        rpc.nfsd-4720  [001] ....    50.857645: __rpc_execute: RPC:    20 return 0, status 0
        rpc.nfsd-4720  [001] ....    50.857645: __rpc_execute: RPC:    20 release task
        rpc.nfsd-4720  [001] ....    50.857645: rpc_free: RPC:       freeing buffer of size 188 at ffff8800d8dc0000
        rpc.nfsd-4720  [001] ....    50.857646: xprt_release: RPC:    20 release request ffff880403542200
        rpc.nfsd-4720  [001] ....    50.857646: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939b20 "xprt_backlog")
        rpc.nfsd-4720  [001] ....    50.857646: rpc_release_client: RPC:       rpc_release_client(ffff88040a8b3600)
        rpc.nfsd-4720  [001] ....    50.857646: rpc_free_task: RPC:    20 freeing task
        rpc.nfsd-4720  [001] ....    50.857647: rpc_new_task: RPC:       new task initialized, procpid 4720
        rpc.nfsd-4720  [001] ....    50.857648: rpc_new_task: RPC:       allocated task ffff88040a14b900
        rpc.nfsd-4720  [001] ....    50.857648: __rpc_execute: RPC:    21 __rpc_execute flags=0x680
        rpc.nfsd-4720  [001] ....    50.857648: call_start: RPC:    21 call_start rpcbind4 proc SET (sync)
        rpc.nfsd-4720  [001] ....    50.857649: call_reserve: RPC:    21 call_reserve (status 0)
        rpc.nfsd-4720  [001] ....    50.857649: xprt_alloc_slot: RPC:    21 reserved req ffff880403542200 xid 4e45b0ec
        rpc.nfsd-4720  [001] ....    50.857649: call_reserveresult: RPC:    21 call_reserveresult (status 0)
        rpc.nfsd-4720  [001] ....    50.857649: call_refresh: RPC:    21 call_refresh (status 0)
        rpc.nfsd-4720  [001] ....    50.857650: call_refreshresult: RPC:    21 call_refreshresult (status 0)
        rpc.nfsd-4720  [001] ....    50.857650: call_allocate: RPC:    21 call_allocate (status 0)
        rpc.nfsd-4720  [001] ....    50.857650: rpc_malloc: RPC:    21 allocated buffer of size 188 at ffff8800d8dc0000
        rpc.nfsd-4720  [001] ....    50.857651: call_bind: RPC:    21 call_bind (status 0)
        rpc.nfsd-4720  [001] ....    50.857651: call_connect: RPC:    21 call_connect xprt ffff880407939800 is connected
        rpc.nfsd-4720  [001] ....    50.857651: call_transmit: RPC:    21 call_transmit (status 0)
        rpc.nfsd-4720  [001] ....    50.857651: xprt_prepare_transmit: RPC:    21 xprt_prepare_transmit
        rpc.nfsd-4720  [001] ....    50.857652: call_transmit: RPC:    21 rpc_xdr_encode (status 0)
        rpc.nfsd-4720  [001] ....    50.857652: xprt_transmit: RPC:    21 xprt_transmit(80)
        rpc.nfsd-4720  [001] ....    50.857654: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-4720  [001] ....    50.857654: xprt_transmit: RPC:    21 xmit complete
        rpc.nfsd-4720  [001] ..s.    50.857655: __rpc_sleep_on_priority: RPC:    21 sleep_on(queue "xprt_pending" time 4294904943)
        rpc.nfsd-4720  [001] ..s.    50.857655: __rpc_sleep_on_priority: RPC:    21 added to queue ffff880407939a58 "xprt_pending"
        rpc.nfsd-4720  [001] ..s.    50.857655: __rpc_sleep_on_priority: RPC:    21 setting alarm for 10000 ms
        rpc.nfsd-4720  [001] ..s.    50.857656: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939990 "xprt_sending")
        rpc.nfsd-4720  [001] ....    50.857656: __rpc_execute: RPC:    21 sync task going to sleep
         rpcbind-1871  [003] ..s.    50.857662: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1871  [003] ..s.    50.857663: xprt_complete_rqst: RPC:    21 xid 4e45b0ec complete (28 bytes received)
         rpcbind-1871  [003] ..s.    50.857664: rpc_wake_up_task_queue_locked: RPC:    21 __rpc_wake_up_task (now 4294904943)
         rpcbind-1871  [003] ..s.    50.857664: rpc_wake_up_task_queue_locked: RPC:    21 disabling timer
         rpcbind-1871  [003] ..s.    50.857664: rpc_wake_up_task_queue_locked: RPC:    21 removed from queue ffff880407939a58 "xprt_pending"
         rpcbind-1871  [003] ..s.    50.857665: rpc_wake_up_task_queue_locked: RPC:       __rpc_wake_up_task done
        rpc.nfsd-4720  [001] ....    50.857666: __rpc_execute: RPC:    21 sync task resuming
        rpc.nfsd-4720  [001] ....    50.857667: call_status: RPC:    21 call_status (status 28)
        rpc.nfsd-4720  [001] ....    50.857667: call_decode: RPC:    21 call_decode (status 28)
        rpc.nfsd-4720  [001] ....    50.857667: call_decode: RPC:    21 call_decode result 0
        rpc.nfsd-4720  [001] ....    50.857668: __rpc_execute: RPC:    21 return 0, status 0
        rpc.nfsd-4720  [001] ....    50.857668: __rpc_execute: RPC:    21 release task
        rpc.nfsd-4720  [001] ....    50.857668: rpc_free: RPC:       freeing buffer of size 188 at ffff8800d8dc0000
        rpc.nfsd-4720  [001] ....    50.857669: xprt_release: RPC:    21 release request ffff880403542200
        rpc.nfsd-4720  [001] ....    50.857669: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939b20 "xprt_backlog")
        rpc.nfsd-4720  [001] ....    50.857669: rpc_release_client: RPC:       rpc_release_client(ffff88040a8b3600)
        rpc.nfsd-4720  [001] ....    50.857669: rpc_free_task: RPC:    21 freeing task
        rpc.nfsd-4720  [001] ....    50.857670: rpc_new_task: RPC:       new task initialized, procpid 4720
        rpc.nfsd-4720  [001] ....    50.857671: rpc_new_task: RPC:       allocated task ffff88040a14b900
        rpc.nfsd-4720  [001] ....    50.857671: __rpc_execute: RPC:    22 __rpc_execute flags=0x680
        rpc.nfsd-4720  [001] ....    50.857671: call_start: RPC:    22 call_start rpcbind4 proc SET (sync)
        rpc.nfsd-4720  [001] ....    50.857672: call_reserve: RPC:    22 call_reserve (status 0)
        rpc.nfsd-4720  [001] ....    50.857672: xprt_alloc_slot: RPC:    22 reserved req ffff880403542200 xid 4f45b0ec
        rpc.nfsd-4720  [001] ....    50.857672: call_reserveresult: RPC:    22 call_reserveresult (status 0)
        rpc.nfsd-4720  [001] ....    50.857672: call_refresh: RPC:    22 call_refresh (status 0)
        rpc.nfsd-4720  [001] ....    50.857673: call_refreshresult: RPC:    22 call_refreshresult (status 0)
        rpc.nfsd-4720  [001] ....    50.857673: call_allocate: RPC:    22 call_allocate (status 0)
        rpc.nfsd-4720  [001] ....    50.857673: rpc_malloc: RPC:    22 allocated buffer of size 188 at ffff8800d8dc0000
        rpc.nfsd-4720  [001] ....    50.857674: call_bind: RPC:    22 call_bind (status 0)
        rpc.nfsd-4720  [001] ....    50.857674: call_connect: RPC:    22 call_connect xprt ffff880407939800 is connected
        rpc.nfsd-4720  [001] ....    50.857674: call_transmit: RPC:    22 call_transmit (status 0)
        rpc.nfsd-4720  [001] ....    50.857674: xprt_prepare_transmit: RPC:    22 xprt_prepare_transmit
        rpc.nfsd-4720  [001] ....    50.857675: call_transmit: RPC:    22 rpc_xdr_encode (status 0)
        rpc.nfsd-4720  [001] ....    50.857675: xprt_transmit: RPC:    22 xprt_transmit(80)
        rpc.nfsd-4720  [001] ....    50.857677: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-4720  [001] ....    50.857677: xprt_transmit: RPC:    22 xmit complete
        rpc.nfsd-4720  [001] ..s.    50.857678: __rpc_sleep_on_priority: RPC:    22 sleep_on(queue "xprt_pending" time 4294904943)
        rpc.nfsd-4720  [001] ..s.    50.857678: __rpc_sleep_on_priority: RPC:    22 added to queue ffff880407939a58 "xprt_pending"
        rpc.nfsd-4720  [001] ..s.    50.857678: __rpc_sleep_on_priority: RPC:    22 setting alarm for 10000 ms
        rpc.nfsd-4720  [001] ..s.    50.857679: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939990 "xprt_sending")
        rpc.nfsd-4720  [001] ....    50.857679: __rpc_execute: RPC:    22 sync task going to sleep
         rpcbind-1871  [003] ..s.    50.857685: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1871  [003] ..s.    50.857686: xprt_complete_rqst: RPC:    22 xid 4f45b0ec complete (28 bytes received)
         rpcbind-1871  [003] ..s.    50.857687: rpc_wake_up_task_queue_locked: RPC:    22 __rpc_wake_up_task (now 4294904943)
         rpcbind-1871  [003] ..s.    50.857687: rpc_wake_up_task_queue_locked: RPC:    22 disabling timer
         rpcbind-1871  [003] ..s.    50.857687: rpc_wake_up_task_queue_locked: RPC:    22 removed from queue ffff880407939a58 "xprt_pending"
         rpcbind-1871  [003] ..s.    50.857688: rpc_wake_up_task_queue_locked: RPC:       __rpc_wake_up_task done
        rpc.nfsd-4720  [001] ....    50.857689: __rpc_execute: RPC:    22 sync task resuming
        rpc.nfsd-4720  [001] ....    50.857690: call_status: RPC:    22 call_status (status 28)
        rpc.nfsd-4720  [001] ....    50.857690: call_decode: RPC:    22 call_decode (status 28)
        rpc.nfsd-4720  [001] ....    50.857690: call_decode: RPC:    22 call_decode result 0
        rpc.nfsd-4720  [001] ....    50.857691: __rpc_execute: RPC:    22 return 0, status 0
        rpc.nfsd-4720  [001] ....    50.857691: __rpc_execute: RPC:    22 release task
        rpc.nfsd-4720  [001] ....    50.857691: rpc_free: RPC:       freeing buffer of size 188 at ffff8800d8dc0000
        rpc.nfsd-4720  [001] ....    50.857691: xprt_release: RPC:    22 release request ffff880403542200
        rpc.nfsd-4720  [001] ....    50.857692: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939b20 "xprt_backlog")
        rpc.nfsd-4720  [001] ....    50.857692: rpc_release_client: RPC:       rpc_release_client(ffff88040a8b3600)
        rpc.nfsd-4720  [001] ....    50.857692: rpc_free_task: RPC:    22 freeing task
        rpc.nfsd-4720  [001] ....    50.857693: svc_setup_socket: setting up TCP socket for listening
        rpc.nfsd-4720  [001] ....    50.857693: svc_setup_socket: svc: svc_setup_socket created ffff88040cf94000 (inet ffff88040a152800)
        rpc.nfsd-4720  [001] ....    50.857706: svc_setup_socket: svc: svc_setup_socket ffff880402295340
        rpc.nfsd-4720  [001] ....    50.857708: rpc_new_task: RPC:       new task initialized, procpid 4720
        rpc.nfsd-4720  [001] ....    50.857708: rpc_new_task: RPC:       allocated task ffff88040b0c0e00
        rpc.nfsd-4720  [001] ....    50.857708: __rpc_execute: RPC:    23 __rpc_execute flags=0x680
        rpc.nfsd-4720  [001] ....    50.857709: call_start: RPC:    23 call_start rpcbind4 proc SET (sync)
        rpc.nfsd-4720  [001] ....    50.857709: call_reserve: RPC:    23 call_reserve (status 0)
        rpc.nfsd-4720  [001] ....    50.857709: xprt_alloc_slot: RPC:    23 reserved req ffff880403542200 xid 5045b0ec
        rpc.nfsd-4720  [001] ....    50.857710: call_reserveresult: RPC:    23 call_reserveresult (status 0)
        rpc.nfsd-4720  [001] ....    50.857710: call_refresh: RPC:    23 call_refresh (status 0)
        rpc.nfsd-4720  [001] ....    50.857710: call_refreshresult: RPC:    23 call_refreshresult (status 0)
        rpc.nfsd-4720  [001] ....    50.857711: call_allocate: RPC:    23 call_allocate (status 0)
        rpc.nfsd-4720  [001] ....    50.857711: rpc_malloc: RPC:    23 allocated buffer of size 188 at ffff8800d8dc0000
        rpc.nfsd-4720  [001] ....    50.857711: call_bind: RPC:    23 call_bind (status 0)
        rpc.nfsd-4720  [001] ....    50.857712: call_connect: RPC:    23 call_connect xprt ffff880407939800 is connected
        rpc.nfsd-4720  [001] ....    50.857712: call_transmit: RPC:    23 call_transmit (status 0)
        rpc.nfsd-4720  [001] ....    50.857712: xprt_prepare_transmit: RPC:    23 xprt_prepare_transmit
        rpc.nfsd-4720  [001] ....    50.857712: call_transmit: RPC:    23 rpc_xdr_encode (status 0)
        rpc.nfsd-4720  [001] ....    50.857713: xprt_transmit: RPC:    23 xprt_transmit(80)
        rpc.nfsd-4720  [001] ....    50.857716: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-4720  [001] ....    50.857716: xprt_transmit: RPC:    23 xmit complete
        rpc.nfsd-4720  [001] ..s.    50.857716: __rpc_sleep_on_priority: RPC:    23 sleep_on(queue "xprt_pending" time 4294904943)
        rpc.nfsd-4720  [001] ..s.    50.857717: __rpc_sleep_on_priority: RPC:    23 added to queue ffff880407939a58 "xprt_pending"
        rpc.nfsd-4720  [001] ..s.    50.857717: __rpc_sleep_on_priority: RPC:    23 setting alarm for 10000 ms
        rpc.nfsd-4720  [001] ..s.    50.857718: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939990 "xprt_sending")
        rpc.nfsd-4720  [001] ....    50.857718: __rpc_execute: RPC:    23 sync task going to sleep
         rpcbind-1871  [003] ..s.    50.857724: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1871  [003] ..s.    50.857725: xprt_complete_rqst: RPC:    23 xid 5045b0ec complete (28 bytes received)
         rpcbind-1871  [003] ..s.    50.857725: rpc_wake_up_task_queue_locked: RPC:    23 __rpc_wake_up_task (now 4294904943)
         rpcbind-1871  [003] ..s.    50.857725: rpc_wake_up_task_queue_locked: RPC:    23 disabling timer
         rpcbind-1871  [003] ..s.    50.857726: rpc_wake_up_task_queue_locked: RPC:    23 removed from queue ffff880407939a58 "xprt_pending"
         rpcbind-1871  [003] ..s.    50.857727: rpc_wake_up_task_queue_locked: RPC:       __rpc_wake_up_task done
        rpc.nfsd-4720  [001] ....    50.857728: __rpc_execute: RPC:    23 sync task resuming
        rpc.nfsd-4720  [001] ....    50.857729: call_status: RPC:    23 call_status (status 28)
        rpc.nfsd-4720  [001] ....    50.857729: call_decode: RPC:    23 call_decode (status 28)
        rpc.nfsd-4720  [001] ....    50.857729: call_decode: RPC:    23 call_decode result 0
        rpc.nfsd-4720  [001] ....    50.857730: __rpc_execute: RPC:    23 return 0, status 0
        rpc.nfsd-4720  [001] ....    50.857730: __rpc_execute: RPC:    23 release task
        rpc.nfsd-4720  [001] ....    50.857730: rpc_free: RPC:       freeing buffer of size 188 at ffff8800d8dc0000
        rpc.nfsd-4720  [001] ....    50.857731: xprt_release: RPC:    23 release request ffff880403542200
        rpc.nfsd-4720  [001] ....    50.857731: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939b20 "xprt_backlog")
        rpc.nfsd-4720  [001] ....    50.857731: rpc_release_client: RPC:       rpc_release_client(ffff88040a8b3600)
        rpc.nfsd-4720  [001] ....    50.857732: rpc_free_task: RPC:    23 freeing task
        rpc.nfsd-4720  [001] ....    50.857732: rpc_new_task: RPC:       new task initialized, procpid 4720
        rpc.nfsd-4720  [001] ....    50.857733: rpc_new_task: RPC:       allocated task ffff88040b0c0e00
        rpc.nfsd-4720  [001] ....    50.857733: __rpc_execute: RPC:    24 __rpc_execute flags=0x680
        rpc.nfsd-4720  [001] ....    50.857734: call_start: RPC:    24 call_start rpcbind4 proc SET (sync)
        rpc.nfsd-4720  [001] ....    50.857734: call_reserve: RPC:    24 call_reserve (status 0)
        rpc.nfsd-4720  [001] ....    50.857734: xprt_alloc_slot: RPC:    24 reserved req ffff880403542200 xid 5145b0ec
        rpc.nfsd-4720  [001] ....    50.857734: call_reserveresult: RPC:    24 call_reserveresult (status 0)
        rpc.nfsd-4720  [001] ....    50.857735: call_refresh: RPC:    24 call_refresh (status 0)
        rpc.nfsd-4720  [001] ....    50.857735: call_refreshresult: RPC:    24 call_refreshresult (status 0)
        rpc.nfsd-4720  [001] ....    50.857735: call_allocate: RPC:    24 call_allocate (status 0)
        rpc.nfsd-4720  [001] ....    50.857736: rpc_malloc: RPC:    24 allocated buffer of size 188 at ffff8800d8dc0000
        rpc.nfsd-4720  [001] ....    50.857736: call_bind: RPC:    24 call_bind (status 0)
        rpc.nfsd-4720  [001] ....    50.857736: call_connect: RPC:    24 call_connect xprt ffff880407939800 is connected
        rpc.nfsd-4720  [001] ....    50.857736: call_transmit: RPC:    24 call_transmit (status 0)
        rpc.nfsd-4720  [001] ....    50.857737: xprt_prepare_transmit: RPC:    24 xprt_prepare_transmit
        rpc.nfsd-4720  [001] ....    50.857737: call_transmit: RPC:    24 rpc_xdr_encode (status 0)
        rpc.nfsd-4720  [001] ....    50.857737: xprt_transmit: RPC:    24 xprt_transmit(80)
        rpc.nfsd-4720  [001] ....    50.857740: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-4720  [001] ....    50.857740: xprt_transmit: RPC:    24 xmit complete
        rpc.nfsd-4720  [001] ..s.    50.857740: __rpc_sleep_on_priority: RPC:    24 sleep_on(queue "xprt_pending" time 4294904943)
        rpc.nfsd-4720  [001] ..s.    50.857741: __rpc_sleep_on_priority: RPC:    24 added to queue ffff880407939a58 "xprt_pending"
        rpc.nfsd-4720  [001] ..s.    50.857741: __rpc_sleep_on_priority: RPC:    24 setting alarm for 10000 ms
        rpc.nfsd-4720  [001] ..s.    50.857741: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939990 "xprt_sending")
        rpc.nfsd-4720  [001] ....    50.857742: __rpc_execute: RPC:    24 sync task going to sleep
         rpcbind-1871  [003] ..s.    50.857748: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1871  [003] ..s.    50.857749: xprt_complete_rqst: RPC:    24 xid 5145b0ec complete (28 bytes received)
         rpcbind-1871  [003] ..s.    50.857749: rpc_wake_up_task_queue_locked: RPC:    24 __rpc_wake_up_task (now 4294904943)
         rpcbind-1871  [003] ..s.    50.857749: rpc_wake_up_task_queue_locked: RPC:    24 disabling timer
         rpcbind-1871  [003] ..s.    50.857750: rpc_wake_up_task_queue_locked: RPC:    24 removed from queue ffff880407939a58 "xprt_pending"
         rpcbind-1871  [003] ..s.    50.857751: rpc_wake_up_task_queue_locked: RPC:       __rpc_wake_up_task done
        rpc.nfsd-4720  [001] ....    50.857752: __rpc_execute: RPC:    24 sync task resuming
        rpc.nfsd-4720  [001] ....    50.857752: call_status: RPC:    24 call_status (status 28)
        rpc.nfsd-4720  [001] ....    50.857753: call_decode: RPC:    24 call_decode (status 28)
        rpc.nfsd-4720  [001] ....    50.857753: call_decode: RPC:    24 call_decode result 0
        rpc.nfsd-4720  [001] ....    50.857753: __rpc_execute: RPC:    24 return 0, status 0
        rpc.nfsd-4720  [001] ....    50.857753: __rpc_execute: RPC:    24 release task
        rpc.nfsd-4720  [001] ....    50.857754: rpc_free: RPC:       freeing buffer of size 188 at ffff8800d8dc0000
        rpc.nfsd-4720  [001] ....    50.857754: xprt_release: RPC:    24 release request ffff880403542200
        rpc.nfsd-4720  [001] ....    50.857754: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939b20 "xprt_backlog")
        rpc.nfsd-4720  [001] ....    50.857755: rpc_release_client: RPC:       rpc_release_client(ffff88040a8b3600)
        rpc.nfsd-4720  [001] ....    50.857755: rpc_free_task: RPC:    24 freeing task
        rpc.nfsd-4720  [001] ....    50.857756: rpc_new_task: RPC:       new task initialized, procpid 4720
        rpc.nfsd-4720  [001] ....    50.857756: rpc_new_task: RPC:       allocated task ffff88040b0c0e00
        rpc.nfsd-4720  [001] ....    50.857756: __rpc_execute: RPC:    25 __rpc_execute flags=0x680
        rpc.nfsd-4720  [001] ....    50.857757: call_start: RPC:    25 call_start rpcbind4 proc SET (sync)
        rpc.nfsd-4720  [001] ....    50.857757: call_reserve: RPC:    25 call_reserve (status 0)
        rpc.nfsd-4720  [001] ....    50.857757: xprt_alloc_slot: RPC:    25 reserved req ffff880403542200 xid 5245b0ec
        rpc.nfsd-4720  [001] ....    50.857758: call_reserveresult: RPC:    25 call_reserveresult (status 0)
        rpc.nfsd-4720  [001] ....    50.857758: call_refresh: RPC:    25 call_refresh (status 0)
        rpc.nfsd-4720  [001] ....    50.857758: call_refreshresult: RPC:    25 call_refreshresult (status 0)
        rpc.nfsd-4720  [001] ....    50.857759: call_allocate: RPC:    25 call_allocate (status 0)
        rpc.nfsd-4720  [001] ....    50.857759: rpc_malloc: RPC:    25 allocated buffer of size 188 at ffff8800d8dc0000
        rpc.nfsd-4720  [001] ....    50.857759: call_bind: RPC:    25 call_bind (status 0)
        rpc.nfsd-4720  [001] ....    50.857759: call_connect: RPC:    25 call_connect xprt ffff880407939800 is connected
        rpc.nfsd-4720  [001] ....    50.857760: call_transmit: RPC:    25 call_transmit (status 0)
        rpc.nfsd-4720  [001] ....    50.857760: xprt_prepare_transmit: RPC:    25 xprt_prepare_transmit
        rpc.nfsd-4720  [001] ....    50.857760: call_transmit: RPC:    25 rpc_xdr_encode (status 0)
        rpc.nfsd-4720  [001] ....    50.857761: xprt_transmit: RPC:    25 xprt_transmit(80)
        rpc.nfsd-4720  [001] ....    50.857763: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-4720  [001] ....    50.857763: xprt_transmit: RPC:    25 xmit complete
        rpc.nfsd-4720  [001] ..s.    50.857763: __rpc_sleep_on_priority: RPC:    25 sleep_on(queue "xprt_pending" time 4294904943)
        rpc.nfsd-4720  [001] ..s.    50.857764: __rpc_sleep_on_priority: RPC:    25 added to queue ffff880407939a58 "xprt_pending"
        rpc.nfsd-4720  [001] ..s.    50.857764: __rpc_sleep_on_priority: RPC:    25 setting alarm for 10000 ms
        rpc.nfsd-4720  [001] ..s.    50.857764: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939990 "xprt_sending")
        rpc.nfsd-4720  [001] ....    50.857765: __rpc_execute: RPC:    25 sync task going to sleep
         rpcbind-1871  [003] ..s.    50.857771: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1871  [003] ..s.    50.857772: xprt_complete_rqst: RPC:    25 xid 5245b0ec complete (28 bytes received)
         rpcbind-1871  [003] ..s.    50.857772: rpc_wake_up_task_queue_locked: RPC:    25 __rpc_wake_up_task (now 4294904943)
         rpcbind-1871  [003] ..s.    50.857772: rpc_wake_up_task_queue_locked: RPC:    25 disabling timer
         rpcbind-1871  [003] ..s.    50.857773: rpc_wake_up_task_queue_locked: RPC:    25 removed from queue ffff880407939a58 "xprt_pending"
         rpcbind-1871  [003] ..s.    50.857774: rpc_wake_up_task_queue_locked: RPC:       __rpc_wake_up_task done
        rpc.nfsd-4720  [001] ....    50.857775: __rpc_execute: RPC:    25 sync task resuming
        rpc.nfsd-4720  [001] ....    50.857775: call_status: RPC:    25 call_status (status 28)
        rpc.nfsd-4720  [001] ....    50.857776: call_decode: RPC:    25 call_decode (status 28)
        rpc.nfsd-4720  [001] ....    50.857776: call_decode: RPC:    25 call_decode result 0
        rpc.nfsd-4720  [001] ....    50.857776: __rpc_execute: RPC:    25 return 0, status 0
        rpc.nfsd-4720  [001] ....    50.857777: __rpc_execute: RPC:    25 release task
        rpc.nfsd-4720  [001] ....    50.857777: rpc_free: RPC:       freeing buffer of size 188 at ffff8800d8dc0000
        rpc.nfsd-4720  [001] ....    50.857777: xprt_release: RPC:    25 release request ffff880403542200
        rpc.nfsd-4720  [001] ....    50.857778: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939b20 "xprt_backlog")
        rpc.nfsd-4720  [001] ....    50.857778: rpc_release_client: RPC:       rpc_release_client(ffff88040a8b3600)
        rpc.nfsd-4720  [001] ....    50.857778: rpc_free_task: RPC:    25 freeing task
        rpc.nfsd-4720  [001] ....    50.857779: rpc_new_task: RPC:       new task initialized, procpid 4720
        rpc.nfsd-4720  [001] ....    50.857779: rpc_new_task: RPC:       allocated task ffff88040b0c0e00
        rpc.nfsd-4720  [001] ....    50.857780: __rpc_execute: RPC:    26 __rpc_execute flags=0x680
        rpc.nfsd-4720  [001] ....    50.857780: call_start: RPC:    26 call_start rpcbind4 proc SET (sync)
        rpc.nfsd-4720  [001] ....    50.857780: call_reserve: RPC:    26 call_reserve (status 0)
        rpc.nfsd-4720  [001] ....    50.857781: xprt_alloc_slot: RPC:    26 reserved req ffff880403542200 xid 5345b0ec
        rpc.nfsd-4720  [001] ....    50.857781: call_reserveresult: RPC:    26 call_reserveresult (status 0)
        rpc.nfsd-4720  [001] ....    50.857781: call_refresh: RPC:    26 call_refresh (status 0)
        rpc.nfsd-4720  [001] ....    50.857781: call_refreshresult: RPC:    26 call_refreshresult (status 0)
        rpc.nfsd-4720  [001] ....    50.857782: call_allocate: RPC:    26 call_allocate (status 0)
        rpc.nfsd-4720  [001] ....    50.857782: rpc_malloc: RPC:    26 allocated buffer of size 188 at ffff8800d8dc0000
        rpc.nfsd-4720  [001] ....    50.857782: call_bind: RPC:    26 call_bind (status 0)
        rpc.nfsd-4720  [001] ....    50.857782: call_connect: RPC:    26 call_connect xprt ffff880407939800 is connected
        rpc.nfsd-4720  [001] ....    50.857783: call_transmit: RPC:    26 call_transmit (status 0)
        rpc.nfsd-4720  [001] ....    50.857783: xprt_prepare_transmit: RPC:    26 xprt_prepare_transmit
        rpc.nfsd-4720  [001] ....    50.857783: call_transmit: RPC:    26 rpc_xdr_encode (status 0)
        rpc.nfsd-4720  [001] ....    50.857784: xprt_transmit: RPC:    26 xprt_transmit(80)
        rpc.nfsd-4720  [001] ....    50.857786: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-4720  [001] ....    50.857786: xprt_transmit: RPC:    26 xmit complete
        rpc.nfsd-4720  [001] ..s.    50.857786: __rpc_sleep_on_priority: RPC:    26 sleep_on(queue "xprt_pending" time 4294904943)
        rpc.nfsd-4720  [001] ..s.    50.857787: __rpc_sleep_on_priority: RPC:    26 added to queue ffff880407939a58 "xprt_pending"
        rpc.nfsd-4720  [001] ..s.    50.857787: __rpc_sleep_on_priority: RPC:    26 setting alarm for 10000 ms
        rpc.nfsd-4720  [001] ..s.    50.857787: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939990 "xprt_sending")
        rpc.nfsd-4720  [001] ....    50.857788: __rpc_execute: RPC:    26 sync task going to sleep
         rpcbind-1871  [003] ..s.    50.857794: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1871  [003] ..s.    50.857795: xprt_complete_rqst: RPC:    26 xid 5345b0ec complete (28 bytes received)
         rpcbind-1871  [003] ..s.    50.857795: rpc_wake_up_task_queue_locked: RPC:    26 __rpc_wake_up_task (now 4294904943)
         rpcbind-1871  [003] ..s.    50.857795: rpc_wake_up_task_queue_locked: RPC:    26 disabling timer
         rpcbind-1871  [003] ..s.    50.857796: rpc_wake_up_task_queue_locked: RPC:    26 removed from queue ffff880407939a58 "xprt_pending"
         rpcbind-1871  [003] ..s.    50.857797: rpc_wake_up_task_queue_locked: RPC:       __rpc_wake_up_task done
        rpc.nfsd-4720  [001] ....    50.857798: __rpc_execute: RPC:    26 sync task resuming
        rpc.nfsd-4720  [001] ....    50.857798: call_status: RPC:    26 call_status (status 28)
        rpc.nfsd-4720  [001] ....    50.857798: call_decode: RPC:    26 call_decode (status 28)
        rpc.nfsd-4720  [001] ....    50.857799: call_decode: RPC:    26 call_decode result 0
        rpc.nfsd-4720  [001] ....    50.857799: __rpc_execute: RPC:    26 return 0, status 0
        rpc.nfsd-4720  [001] ....    50.857799: __rpc_execute: RPC:    26 release task
        rpc.nfsd-4720  [001] ....    50.857800: rpc_free: RPC:       freeing buffer of size 188 at ffff8800d8dc0000
        rpc.nfsd-4720  [001] ....    50.857800: xprt_release: RPC:    26 release request ffff880403542200
        rpc.nfsd-4720  [001] ....    50.857800: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939b20 "xprt_backlog")
        rpc.nfsd-4720  [001] ....    50.857801: rpc_release_client: RPC:       rpc_release_client(ffff88040a8b3600)
        rpc.nfsd-4720  [001] ....    50.857801: rpc_free_task: RPC:    26 freeing task
        rpc.nfsd-4720  [001] ....    50.857802: rpc_new_task: RPC:       new task initialized, procpid 4720
        rpc.nfsd-4720  [001] ....    50.857802: rpc_new_task: RPC:       allocated task ffff88040b0c0e00
        rpc.nfsd-4720  [001] ....    50.857802: __rpc_execute: RPC:    27 __rpc_execute flags=0x680
        rpc.nfsd-4720  [001] ....    50.857803: call_start: RPC:    27 call_start rpcbind4 proc SET (sync)
        rpc.nfsd-4720  [001] ....    50.857803: call_reserve: RPC:    27 call_reserve (status 0)
        rpc.nfsd-4720  [001] ....    50.857803: xprt_alloc_slot: RPC:    27 reserved req ffff880403542200 xid 5445b0ec
        rpc.nfsd-4720  [001] ....    50.857804: call_reserveresult: RPC:    27 call_reserveresult (status 0)
        rpc.nfsd-4720  [001] ....    50.857804: call_refresh: RPC:    27 call_refresh (status 0)
        rpc.nfsd-4720  [001] ....    50.857804: call_refreshresult: RPC:    27 call_refreshresult (status 0)
        rpc.nfsd-4720  [001] ....    50.857804: call_allocate: RPC:    27 call_allocate (status 0)
        rpc.nfsd-4720  [001] ....    50.857805: rpc_malloc: RPC:    27 allocated buffer of size 188 at ffff8800d8dc0000
        rpc.nfsd-4720  [001] ....    50.857805: call_bind: RPC:    27 call_bind (status 0)
        rpc.nfsd-4720  [001] ....    50.857805: call_connect: RPC:    27 call_connect xprt ffff880407939800 is connected
        rpc.nfsd-4720  [001] ....    50.857806: call_transmit: RPC:    27 call_transmit (status 0)
        rpc.nfsd-4720  [001] ....    50.857806: xprt_prepare_transmit: RPC:    27 xprt_prepare_transmit
        rpc.nfsd-4720  [001] ....    50.857806: call_transmit: RPC:    27 rpc_xdr_encode (status 0)
        rpc.nfsd-4720  [001] ....    50.857806: xprt_transmit: RPC:    27 xprt_transmit(80)
        rpc.nfsd-4720  [001] ....    50.857809: xs_local_send_request: RPC:       xs_local_send_request(80) = 0
        rpc.nfsd-4720  [001] ....    50.857809: xprt_transmit: RPC:    27 xmit complete
        rpc.nfsd-4720  [001] ..s.    50.857809: __rpc_sleep_on_priority: RPC:    27 sleep_on(queue "xprt_pending" time 4294904943)
        rpc.nfsd-4720  [001] ..s.    50.857809: __rpc_sleep_on_priority: RPC:    27 added to queue ffff880407939a58 "xprt_pending"
        rpc.nfsd-4720  [001] ..s.    50.857810: __rpc_sleep_on_priority: RPC:    27 setting alarm for 10000 ms
        rpc.nfsd-4720  [001] ..s.    50.857810: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939990 "xprt_sending")
        rpc.nfsd-4720  [001] ....    50.857810: __rpc_execute: RPC:    27 sync task going to sleep
         rpcbind-1871  [003] ..s.    50.857817: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1871  [003] ..s.    50.857818: xprt_complete_rqst: RPC:    27 xid 5445b0ec complete (28 bytes received)
         rpcbind-1871  [003] ..s.    50.857818: rpc_wake_up_task_queue_locked: RPC:    27 __rpc_wake_up_task (now 4294904943)
         rpcbind-1871  [003] ..s.    50.857818: rpc_wake_up_task_queue_locked: RPC:    27 disabling timer
         rpcbind-1871  [003] ..s.    50.857819: rpc_wake_up_task_queue_locked: RPC:    27 removed from queue ffff880407939a58 "xprt_pending"
         rpcbind-1871  [003] ..s.    50.857820: rpc_wake_up_task_queue_locked: RPC:       __rpc_wake_up_task done
        rpc.nfsd-4720  [001] ....    50.857821: __rpc_execute: RPC:    27 sync task resuming
        rpc.nfsd-4720  [001] ....    50.857821: call_status: RPC:    27 call_status (status 28)
        rpc.nfsd-4720  [001] ....    50.857821: call_decode: RPC:    27 call_decode (status 28)
        rpc.nfsd-4720  [001] ....    50.857822: call_decode: RPC:    27 call_decode result 0
        rpc.nfsd-4720  [001] ....    50.857822: __rpc_execute: RPC:    27 return 0, status 0
        rpc.nfsd-4720  [001] ....    50.857822: __rpc_execute: RPC:    27 release task
        rpc.nfsd-4720  [001] ....    50.857823: rpc_free: RPC:       freeing buffer of size 188 at ffff8800d8dc0000
        rpc.nfsd-4720  [001] ....    50.857823: xprt_release: RPC:    27 release request ffff880403542200
        rpc.nfsd-4720  [001] ....    50.857823: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939b20 "xprt_backlog")
        rpc.nfsd-4720  [001] ....    50.857824: rpc_release_client: RPC:       rpc_release_client(ffff88040a8b3600)
        rpc.nfsd-4720  [001] ....    50.857824: rpc_free_task: RPC:    27 freeing task
        rpc.nfsd-4720  [001] ....    50.857824: svc_write_space: svc: socket ffff880402966000(inet ffff8800db854180), write_space busy=1
        rpc.nfsd-4720  [001] ....    50.857825: svc_setup_socket: svc: kernel_setsockopt returned 0
        rpc.nfsd-4720  [001] ....    50.857825: svc_setup_socket: svc: svc_setup_socket created ffff880402966000 (inet ffff8800db854180)
        rpc.nfsd-4720  [001] ....    50.857974: rpc_new_task: RPC:       new task initialized, procpid 4720
        rpc.nfsd-4720  [001] ....    50.857974: rpc_new_task: RPC:       allocated task ffff88040b718200
        rpc.nfsd-4720  [001] ....    50.857975: __rpc_execute: RPC:    28 __rpc_execute flags=0x2280
        rpc.nfsd-4720  [001] ....    50.857975: call_start: RPC:    28 call_start rpcbind4 proc UNSET (sync)
        rpc.nfsd-4720  [001] ....    50.857976: call_reserve: RPC:    28 call_reserve (status 0)
        rpc.nfsd-4720  [001] ....    50.857976: xprt_alloc_slot: RPC:    28 reserved req ffff880403542200 xid 5545b0ec
        rpc.nfsd-4720  [001] ....    50.857977: call_reserveresult: RPC:    28 call_reserveresult (status 0)
        rpc.nfsd-4720  [001] ....    50.857977: call_refresh: RPC:    28 call_refresh (status 0)
        rpc.nfsd-4720  [001] ....    50.857977: call_refreshresult: RPC:    28 call_refreshresult (status 0)
        rpc.nfsd-4720  [001] ....    50.857978: call_allocate: RPC:    28 call_allocate (status 0)
        rpc.nfsd-4720  [001] ....    50.857978: rpc_malloc: RPC:    28 allocated buffer of size 188 at ffff8800d8dc0000
        rpc.nfsd-4720  [001] ....    50.857979: call_bind: RPC:    28 call_bind (status 0)
        rpc.nfsd-4720  [001] ....    50.857979: call_connect: RPC:    28 call_connect xprt ffff880407939800 is connected
        rpc.nfsd-4720  [001] ....    50.857979: call_transmit: RPC:    28 call_transmit (status 0)
        rpc.nfsd-4720  [001] ....    50.857979: xprt_prepare_transmit: RPC:    28 xprt_prepare_transmit
        rpc.nfsd-4720  [001] ....    50.857980: call_transmit: RPC:    28 rpc_xdr_encode (status 0)
        rpc.nfsd-4720  [001] ....    50.857981: xprt_transmit: RPC:    28 xprt_transmit(68)
        rpc.nfsd-4720  [001] ....    50.857984: xs_local_send_request: RPC:       xs_local_send_request(68) = 0
        rpc.nfsd-4720  [001] ....    50.857984: xprt_transmit: RPC:    28 xmit complete
        rpc.nfsd-4720  [001] ..s.    50.857985: __rpc_sleep_on_priority: RPC:    28 sleep_on(queue "xprt_pending" time 4294904943)
        rpc.nfsd-4720  [001] ..s.    50.857985: __rpc_sleep_on_priority: RPC:    28 added to queue ffff880407939a58 "xprt_pending"
        rpc.nfsd-4720  [001] ..s.    50.857985: __rpc_sleep_on_priority: RPC:    28 setting alarm for 10000 ms
        rpc.nfsd-4720  [001] ..s.    50.857986: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939990 "xprt_sending")
        rpc.nfsd-4720  [001] ....    50.857986: __rpc_execute: RPC:    28 sync task going to sleep
         rpcbind-1871  [003] ..s.    50.857994: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1871  [003] ..s.    50.857994: xprt_complete_rqst: RPC:    28 xid 5545b0ec complete (28 bytes received)
         rpcbind-1871  [003] ..s.    50.857995: rpc_wake_up_task_queue_locked: RPC:    28 __rpc_wake_up_task (now 4294904943)
         rpcbind-1871  [003] ..s.    50.857995: rpc_wake_up_task_queue_locked: RPC:    28 disabling timer
         rpcbind-1871  [003] ..s.    50.857995: rpc_wake_up_task_queue_locked: RPC:    28 removed from queue ffff880407939a58 "xprt_pending"
         rpcbind-1871  [003] ..s.    50.857996: rpc_wake_up_task_queue_locked: RPC:       __rpc_wake_up_task done
        rpc.nfsd-4720  [001] ....    50.857998: __rpc_execute: RPC:    28 sync task resuming
        rpc.nfsd-4720  [001] ....    50.857998: call_status: RPC:    28 call_status (status 28)
        rpc.nfsd-4720  [001] ....    50.857999: call_decode: RPC:    28 call_decode (status 28)
        rpc.nfsd-4720  [001] ....    50.857999: call_decode: RPC:    28 call_decode result 0
        rpc.nfsd-4720  [001] ....    50.858000: __rpc_execute: RPC:    28 return 0, status 0
        rpc.nfsd-4720  [001] ....    50.858000: __rpc_execute: RPC:    28 release task
        rpc.nfsd-4720  [001] ....    50.858000: rpc_free: RPC:       freeing buffer of size 188 at ffff8800d8dc0000
        rpc.nfsd-4720  [001] ....    50.858001: xprt_release: RPC:    28 release request ffff880403542200
        rpc.nfsd-4720  [001] ....    50.858001: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939b20 "xprt_backlog")
        rpc.nfsd-4720  [001] ....    50.858002: rpc_release_client: RPC:       rpc_release_client(ffff88040a8b3600)
        rpc.nfsd-4720  [001] ....    50.858002: rpc_free_task: RPC:    28 freeing task
        rpc.nfsd-4720  [001] ....    50.858003: rpc_new_task: RPC:       new task initialized, procpid 4720
        rpc.nfsd-4720  [001] ....    50.858003: rpc_new_task: RPC:       allocated task ffff88040b718200
        rpc.nfsd-4720  [001] ....    50.858003: __rpc_execute: RPC:    29 __rpc_execute flags=0x2280
        rpc.nfsd-4720  [001] ....    50.858004: call_start: RPC:    29 call_start rpcbind4 proc UNSET (sync)
        rpc.nfsd-4720  [001] ....    50.858004: call_reserve: RPC:    29 call_reserve (status 0)
        rpc.nfsd-4720  [001] ....    50.858004: xprt_alloc_slot: RPC:    29 reserved req ffff880403542200 xid 5645b0ec
        rpc.nfsd-4720  [001] ....    50.858005: call_reserveresult: RPC:    29 call_reserveresult (status 0)
        rpc.nfsd-4720  [001] ....    50.858005: call_refresh: RPC:    29 call_refresh (status 0)
        rpc.nfsd-4720  [001] ....    50.858005: call_refreshresult: RPC:    29 call_refreshresult (status 0)
        rpc.nfsd-4720  [001] ....    50.858006: call_allocate: RPC:    29 call_allocate (status 0)
        rpc.nfsd-4720  [001] ....    50.858006: rpc_malloc: RPC:    29 allocated buffer of size 188 at ffff8800d8dc0000
        rpc.nfsd-4720  [001] ....    50.858006: call_bind: RPC:    29 call_bind (status 0)
        rpc.nfsd-4720  [001] ....    50.858007: call_connect: RPC:    29 call_connect xprt ffff880407939800 is connected
        rpc.nfsd-4720  [001] ....    50.858007: call_transmit: RPC:    29 call_transmit (status 0)
        rpc.nfsd-4720  [001] ....    50.858007: xprt_prepare_transmit: RPC:    29 xprt_prepare_transmit
        rpc.nfsd-4720  [001] ....    50.858008: call_transmit: RPC:    29 rpc_xdr_encode (status 0)
        rpc.nfsd-4720  [001] ....    50.858008: xprt_transmit: RPC:    29 xprt_transmit(68)
        rpc.nfsd-4720  [001] ....    50.858010: xs_local_send_request: RPC:       xs_local_send_request(68) = 0
        rpc.nfsd-4720  [001] ....    50.858011: xprt_transmit: RPC:    29 xmit complete
        rpc.nfsd-4720  [001] ..s.    50.858011: __rpc_sleep_on_priority: RPC:    29 sleep_on(queue "xprt_pending" time 4294904943)
        rpc.nfsd-4720  [001] ..s.    50.858011: __rpc_sleep_on_priority: RPC:    29 added to queue ffff880407939a58 "xprt_pending"
        rpc.nfsd-4720  [001] ..s.    50.858012: __rpc_sleep_on_priority: RPC:    29 setting alarm for 10000 ms
        rpc.nfsd-4720  [001] ..s.    50.858012: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939990 "xprt_sending")
        rpc.nfsd-4720  [001] ....    50.858012: __rpc_execute: RPC:    29 sync task going to sleep
         rpcbind-1871  [003] ..s.    50.858019: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1871  [003] ..s.    50.858020: xprt_complete_rqst: RPC:    29 xid 5645b0ec complete (28 bytes received)
         rpcbind-1871  [003] ..s.    50.858020: rpc_wake_up_task_queue_locked: RPC:    29 __rpc_wake_up_task (now 4294904943)
         rpcbind-1871  [003] ..s.    50.858020: rpc_wake_up_task_queue_locked: RPC:    29 disabling timer
         rpcbind-1871  [003] ..s.    50.858021: rpc_wake_up_task_queue_locked: RPC:    29 removed from queue ffff880407939a58 "xprt_pending"
         rpcbind-1871  [003] ..s.    50.858022: rpc_wake_up_task_queue_locked: RPC:       __rpc_wake_up_task done
        rpc.nfsd-4720  [001] ....    50.858023: __rpc_execute: RPC:    29 sync task resuming
        rpc.nfsd-4720  [001] ....    50.858023: call_status: RPC:    29 call_status (status 28)
        rpc.nfsd-4720  [001] ....    50.858024: call_decode: RPC:    29 call_decode (status 28)
        rpc.nfsd-4720  [001] ....    50.858024: call_decode: RPC:    29 call_decode result 0
        rpc.nfsd-4720  [001] ....    50.858025: __rpc_execute: RPC:    29 return 0, status 0
        rpc.nfsd-4720  [001] ....    50.858025: __rpc_execute: RPC:    29 release task
        rpc.nfsd-4720  [001] ....    50.858025: rpc_free: RPC:       freeing buffer of size 188 at ffff8800d8dc0000
        rpc.nfsd-4720  [001] ....    50.858026: xprt_release: RPC:    29 release request ffff880403542200
        rpc.nfsd-4720  [001] ....    50.858026: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939b20 "xprt_backlog")
        rpc.nfsd-4720  [001] ....    50.858026: rpc_release_client: RPC:       rpc_release_client(ffff88040a8b3600)
        rpc.nfsd-4720  [001] ....    50.858027: rpc_free_task: RPC:    29 freeing task
        rpc.nfsd-4720  [001] ....    50.858027: rpc_new_task: RPC:       new task initialized, procpid 4720
        rpc.nfsd-4720  [001] ....    50.858027: rpc_new_task: RPC:       allocated task ffff88040b718200
        rpc.nfsd-4720  [001] ....    50.858028: __rpc_execute: RPC:    30 __rpc_execute flags=0x2280
        rpc.nfsd-4720  [001] ....    50.858028: call_start: RPC:    30 call_start rpcbind4 proc UNSET (sync)
        rpc.nfsd-4720  [001] ....    50.858029: call_reserve: RPC:    30 call_reserve (status 0)
        rpc.nfsd-4720  [001] ....    50.858029: xprt_alloc_slot: RPC:    30 reserved req ffff880403542200 xid 5745b0ec
        rpc.nfsd-4720  [001] ....    50.858029: call_reserveresult: RPC:    30 call_reserveresult (status 0)
        rpc.nfsd-4720  [001] ....    50.858030: call_refresh: RPC:    30 call_refresh (status 0)
        rpc.nfsd-4720  [001] ....    50.858030: call_refreshresult: RPC:    30 call_refreshresult (status 0)
        rpc.nfsd-4720  [001] ....    50.858030: call_allocate: RPC:    30 call_allocate (status 0)
        rpc.nfsd-4720  [001] ....    50.858030: rpc_malloc: RPC:    30 allocated buffer of size 188 at ffff8800d8dc0000
        rpc.nfsd-4720  [001] ....    50.858031: call_bind: RPC:    30 call_bind (status 0)
        rpc.nfsd-4720  [001] ....    50.858031: call_connect: RPC:    30 call_connect xprt ffff880407939800 is connected
        rpc.nfsd-4720  [001] ....    50.858031: call_transmit: RPC:    30 call_transmit (status 0)
        rpc.nfsd-4720  [001] ....    50.858032: xprt_prepare_transmit: RPC:    30 xprt_prepare_transmit
        rpc.nfsd-4720  [001] ....    50.858032: call_transmit: RPC:    30 rpc_xdr_encode (status 0)
        rpc.nfsd-4720  [001] ....    50.858032: xprt_transmit: RPC:    30 xprt_transmit(68)
        rpc.nfsd-4720  [001] ....    50.858035: xs_local_send_request: RPC:       xs_local_send_request(68) = 0
        rpc.nfsd-4720  [001] ....    50.858035: xprt_transmit: RPC:    30 xmit complete
        rpc.nfsd-4720  [001] ..s.    50.858035: __rpc_sleep_on_priority: RPC:    30 sleep_on(queue "xprt_pending" time 4294904943)
        rpc.nfsd-4720  [001] ..s.    50.858036: __rpc_sleep_on_priority: RPC:    30 added to queue ffff880407939a58 "xprt_pending"
        rpc.nfsd-4720  [001] ..s.    50.858036: __rpc_sleep_on_priority: RPC:    30 setting alarm for 10000 ms
        rpc.nfsd-4720  [001] ..s.    50.858036: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939990 "xprt_sending")
        rpc.nfsd-4720  [001] ....    50.858037: __rpc_execute: RPC:    30 sync task going to sleep
         rpcbind-1871  [003] ..s.    50.858043: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1871  [003] ..s.    50.858044: xprt_complete_rqst: RPC:    30 xid 5745b0ec complete (28 bytes received)
         rpcbind-1871  [003] ..s.    50.858044: rpc_wake_up_task_queue_locked: RPC:    30 __rpc_wake_up_task (now 4294904943)
         rpcbind-1871  [003] ..s.    50.858045: rpc_wake_up_task_queue_locked: RPC:    30 disabling timer
         rpcbind-1871  [003] ..s.    50.858045: rpc_wake_up_task_queue_locked: RPC:    30 removed from queue ffff880407939a58 "xprt_pending"
         rpcbind-1871  [003] ..s.    50.858046: rpc_wake_up_task_queue_locked: RPC:       __rpc_wake_up_task done
        rpc.nfsd-4720  [001] ....    50.858047: __rpc_execute: RPC:    30 sync task resuming
        rpc.nfsd-4720  [001] ....    50.858048: call_status: RPC:    30 call_status (status 28)
        rpc.nfsd-4720  [001] ....    50.858048: call_decode: RPC:    30 call_decode (status 28)
        rpc.nfsd-4720  [001] ....    50.858049: call_decode: RPC:    30 call_decode result 0
        rpc.nfsd-4720  [001] ....    50.858049: __rpc_execute: RPC:    30 return 0, status 0
        rpc.nfsd-4720  [001] ....    50.858049: __rpc_execute: RPC:    30 release task
        rpc.nfsd-4720  [001] ....    50.858050: rpc_free: RPC:       freeing buffer of size 188 at ffff8800d8dc0000
        rpc.nfsd-4720  [001] ....    50.858050: xprt_release: RPC:    30 release request ffff880403542200
        rpc.nfsd-4720  [001] ....    50.858050: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939b20 "xprt_backlog")
        rpc.nfsd-4720  [001] ....    50.858051: rpc_release_client: RPC:       rpc_release_client(ffff88040a8b3600)
        rpc.nfsd-4720  [001] ....    50.858051: rpc_free_task: RPC:    30 freeing task
        rpc.nfsd-4720  [001] ....    50.858053: svc_create_socket: svc: svc_create_socket(lockd, 17, 0.0.0.0, port=0)
        rpc.nfsd-4720  [001] ....    50.858058: svc_setup_socket: svc: svc_setup_socket ffff8804022955c0
        rpc.nfsd-4720  [001] ....    50.858059: rpc_new_task: RPC:       new task initialized, procpid 4720
        rpc.nfsd-4720  [001] ....    50.858060: rpc_new_task: RPC:       allocated task ffff88040b718200
        rpc.nfsd-4720  [001] ....    50.858060: __rpc_execute: RPC:    31 __rpc_execute flags=0x680
        rpc.nfsd-4720  [001] ....    50.858060: call_start: RPC:    31 call_start rpcbind4 proc SET (sync)
        rpc.nfsd-4720  [001] ....    50.858061: call_reserve: RPC:    31 call_reserve (status 0)
        rpc.nfsd-4720  [001] ....    50.858061: xprt_alloc_slot: RPC:    31 reserved req ffff880403542200 xid 5845b0ec
        rpc.nfsd-4720  [001] ....    50.858061: call_reserveresult: RPC:    31 call_reserveresult (status 0)
        rpc.nfsd-4720  [001] ....    50.858062: call_refresh: RPC:    31 call_refresh (status 0)
        rpc.nfsd-4720  [001] ....    50.858062: call_refreshresult: RPC:    31 call_refreshresult (status 0)
        rpc.nfsd-4720  [001] ....    50.858062: call_allocate: RPC:    31 call_allocate (status 0)
        rpc.nfsd-4720  [001] ....    50.858063: rpc_malloc: RPC:    31 allocated buffer of size 188 at ffff8800d8dc0000
        rpc.nfsd-4720  [001] ....    50.858063: call_bind: RPC:    31 call_bind (status 0)
        rpc.nfsd-4720  [001] ....    50.858063: call_connect: RPC:    31 call_connect xprt ffff880407939800 is connected
        rpc.nfsd-4720  [001] ....    50.858064: call_transmit: RPC:    31 call_transmit (status 0)
        rpc.nfsd-4720  [001] ....    50.858064: xprt_prepare_transmit: RPC:    31 xprt_prepare_transmit
        rpc.nfsd-4720  [001] ....    50.858064: call_transmit: RPC:    31 rpc_xdr_encode (status 0)
        rpc.nfsd-4720  [001] ....    50.858065: xprt_transmit: RPC:    31 xprt_transmit(88)
        rpc.nfsd-4720  [001] ....    50.858067: xs_local_send_request: RPC:       xs_local_send_request(88) = 0
        rpc.nfsd-4720  [001] ....    50.858068: xprt_transmit: RPC:    31 xmit complete
        rpc.nfsd-4720  [001] ..s.    50.858068: __rpc_sleep_on_priority: RPC:    31 sleep_on(queue "xprt_pending" time 4294904943)
        rpc.nfsd-4720  [001] ..s.    50.858068: __rpc_sleep_on_priority: RPC:    31 added to queue ffff880407939a58 "xprt_pending"
        rpc.nfsd-4720  [001] ..s.    50.858069: __rpc_sleep_on_priority: RPC:    31 setting alarm for 10000 ms
        rpc.nfsd-4720  [001] ..s.    50.858069: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939990 "xprt_sending")
        rpc.nfsd-4720  [001] ....    50.858070: __rpc_execute: RPC:    31 sync task going to sleep
         rpcbind-1871  [003] ..s.    50.858080: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1871  [003] ..s.    50.858081: xprt_complete_rqst: RPC:    31 xid 5845b0ec complete (28 bytes received)
         rpcbind-1871  [003] ..s.    50.858081: rpc_wake_up_task_queue_locked: RPC:    31 __rpc_wake_up_task (now 4294904943)
         rpcbind-1871  [003] ..s.    50.858081: rpc_wake_up_task_queue_locked: RPC:    31 disabling timer
         rpcbind-1871  [003] ..s.    50.858081: rpc_wake_up_task_queue_locked: RPC:    31 removed from queue ffff880407939a58 "xprt_pending"
         rpcbind-1871  [003] ..s.    50.858082: rpc_wake_up_task_queue_locked: RPC:       __rpc_wake_up_task done
        rpc.nfsd-4720  [001] ....    50.858084: __rpc_execute: RPC:    31 sync task resuming
        rpc.nfsd-4720  [001] ....    50.858084: call_status: RPC:    31 call_status (status 28)
        rpc.nfsd-4720  [001] ....    50.858085: call_decode: RPC:    31 call_decode (status 28)
        rpc.nfsd-4720  [001] ....    50.858085: call_decode: RPC:    31 call_decode result 0
        rpc.nfsd-4720  [001] ....    50.858085: __rpc_execute: RPC:    31 return 0, status 0
        rpc.nfsd-4720  [001] ....    50.858086: __rpc_execute: RPC:    31 release task
        rpc.nfsd-4720  [001] ....    50.858086: rpc_free: RPC:       freeing buffer of size 188 at ffff8800d8dc0000
        rpc.nfsd-4720  [001] ....    50.858086: xprt_release: RPC:    31 release request ffff880403542200
        rpc.nfsd-4720  [001] ....    50.858087: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939b20 "xprt_backlog")
        rpc.nfsd-4720  [001] ....    50.858087: rpc_release_client: RPC:       rpc_release_client(ffff88040a8b3600)
        rpc.nfsd-4720  [001] ....    50.858087: rpc_free_task: RPC:    31 freeing task
        rpc.nfsd-4720  [001] ....    50.858088: rpc_new_task: RPC:       new task initialized, procpid 4720
        rpc.nfsd-4720  [001] ....    50.858089: rpc_new_task: RPC:       allocated task ffff88040b718200
        rpc.nfsd-4720  [001] ....    50.858089: __rpc_execute: RPC:    32 __rpc_execute flags=0x680
        rpc.nfsd-4720  [001] ....    50.858089: call_start: RPC:    32 call_start rpcbind4 proc SET (sync)
        rpc.nfsd-4720  [001] ....    50.858090: call_reserve: RPC:    32 call_reserve (status 0)
        rpc.nfsd-4720  [001] ....    50.858090: xprt_alloc_slot: RPC:    32 reserved req ffff880403542200 xid 5945b0ec
        rpc.nfsd-4720  [001] ....    50.858090: call_reserveresult: RPC:    32 call_reserveresult (status 0)
        rpc.nfsd-4720  [001] ....    50.858090: call_refresh: RPC:    32 call_refresh (status 0)
        rpc.nfsd-4720  [001] ....    50.858091: call_refreshresult: RPC:    32 call_refreshresult (status 0)
        rpc.nfsd-4720  [001] ....    50.858091: call_allocate: RPC:    32 call_allocate (status 0)
        rpc.nfsd-4720  [001] ....    50.858091: rpc_malloc: RPC:    32 allocated buffer of size 188 at ffff8800d8dc0000
        rpc.nfsd-4720  [001] ....    50.858091: call_bind: RPC:    32 call_bind (status 0)
        rpc.nfsd-4720  [001] ....    50.858092: call_connect: RPC:    32 call_connect xprt ffff880407939800 is connected
        rpc.nfsd-4720  [001] ....    50.858092: call_transmit: RPC:    32 call_transmit (status 0)
        rpc.nfsd-4720  [001] ....    50.858092: xprt_prepare_transmit: RPC:    32 xprt_prepare_transmit
        rpc.nfsd-4720  [001] ....    50.858092: call_transmit: RPC:    32 rpc_xdr_encode (status 0)
        rpc.nfsd-4720  [001] ....    50.858093: xprt_transmit: RPC:    32 xprt_transmit(88)
        rpc.nfsd-4720  [001] ....    50.858095: xs_local_send_request: RPC:       xs_local_send_request(88) = 0
        rpc.nfsd-4720  [001] ....    50.858095: xprt_transmit: RPC:    32 xmit complete
        rpc.nfsd-4720  [001] ..s.    50.858096: __rpc_sleep_on_priority: RPC:    32 sleep_on(queue "xprt_pending" time 4294904943)
        rpc.nfsd-4720  [001] ..s.    50.858096: __rpc_sleep_on_priority: RPC:    32 added to queue ffff880407939a58 "xprt_pending"
        rpc.nfsd-4720  [001] ..s.    50.858096: __rpc_sleep_on_priority: RPC:    32 setting alarm for 10000 ms
        rpc.nfsd-4720  [001] ..s.    50.858097: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939990 "xprt_sending")
        rpc.nfsd-4720  [001] ....    50.858097: __rpc_execute: RPC:    32 sync task going to sleep
         rpcbind-1871  [003] ..s.    50.858104: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1871  [003] ..s.    50.858104: xprt_complete_rqst: RPC:    32 xid 5945b0ec complete (28 bytes received)
         rpcbind-1871  [003] ..s.    50.858105: rpc_wake_up_task_queue_locked: RPC:    32 __rpc_wake_up_task (now 4294904943)
         rpcbind-1871  [003] ..s.    50.858105: rpc_wake_up_task_queue_locked: RPC:    32 disabling timer
         rpcbind-1871  [003] ..s.    50.858105: rpc_wake_up_task_queue_locked: RPC:    32 removed from queue ffff880407939a58 "xprt_pending"
         rpcbind-1871  [003] ..s.    50.858106: rpc_wake_up_task_queue_locked: RPC:       __rpc_wake_up_task done
        rpc.nfsd-4720  [001] ....    50.858107: __rpc_execute: RPC:    32 sync task resuming
        rpc.nfsd-4720  [001] ....    50.858108: call_status: RPC:    32 call_status (status 28)
        rpc.nfsd-4720  [001] ....    50.858108: call_decode: RPC:    32 call_decode (status 28)
        rpc.nfsd-4720  [001] ....    50.858109: call_decode: RPC:    32 call_decode result 0
        rpc.nfsd-4720  [001] ....    50.858109: __rpc_execute: RPC:    32 return 0, status 0
        rpc.nfsd-4720  [001] ....    50.858109: __rpc_execute: RPC:    32 release task
        rpc.nfsd-4720  [001] ....    50.858109: rpc_free: RPC:       freeing buffer of size 188 at ffff8800d8dc0000
        rpc.nfsd-4720  [001] ....    50.858110: xprt_release: RPC:    32 release request ffff880403542200
        rpc.nfsd-4720  [001] ....    50.858110: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939b20 "xprt_backlog")
        rpc.nfsd-4720  [001] ....    50.858110: rpc_release_client: RPC:       rpc_release_client(ffff88040a8b3600)
        rpc.nfsd-4720  [001] ....    50.858111: rpc_free_task: RPC:    32 freeing task
        rpc.nfsd-4720  [001] ....    50.858112: rpc_new_task: RPC:       new task initialized, procpid 4720
        rpc.nfsd-4720  [001] ....    50.858112: rpc_new_task: RPC:       allocated task ffff88040b718200
        rpc.nfsd-4720  [001] ....    50.858112: __rpc_execute: RPC:    33 __rpc_execute flags=0x680
        rpc.nfsd-4720  [001] ....    50.858112: call_start: RPC:    33 call_start rpcbind4 proc SET (sync)
        rpc.nfsd-4720  [001] ....    50.858113: call_reserve: RPC:    33 call_reserve (status 0)
        rpc.nfsd-4720  [001] ....    50.858113: xprt_alloc_slot: RPC:    33 reserved req ffff880403542200 xid 5a45b0ec
        rpc.nfsd-4720  [001] ....    50.858113: call_reserveresult: RPC:    33 call_reserveresult (status 0)
        rpc.nfsd-4720  [001] ....    50.858114: call_refresh: RPC:    33 call_refresh (status 0)
        rpc.nfsd-4720  [001] ....    50.858114: call_refreshresult: RPC:    33 call_refreshresult (status 0)
        rpc.nfsd-4720  [001] ....    50.858114: call_allocate: RPC:    33 call_allocate (status 0)
        rpc.nfsd-4720  [001] ....    50.858114: rpc_malloc: RPC:    33 allocated buffer of size 188 at ffff8800d8dc0000
        rpc.nfsd-4720  [001] ....    50.858115: call_bind: RPC:    33 call_bind (status 0)
        rpc.nfsd-4720  [001] ....    50.858115: call_connect: RPC:    33 call_connect xprt ffff880407939800 is connected
        rpc.nfsd-4720  [001] ....    50.858115: call_transmit: RPC:    33 call_transmit (status 0)
        rpc.nfsd-4720  [001] ....    50.858115: xprt_prepare_transmit: RPC:    33 xprt_prepare_transmit
        rpc.nfsd-4720  [001] ....    50.858116: call_transmit: RPC:    33 rpc_xdr_encode (status 0)
        rpc.nfsd-4720  [001] ....    50.858116: xprt_transmit: RPC:    33 xprt_transmit(88)
        rpc.nfsd-4720  [001] ....    50.858118: xs_local_send_request: RPC:       xs_local_send_request(88) = 0
        rpc.nfsd-4720  [001] ....    50.858118: xprt_transmit: RPC:    33 xmit complete
        rpc.nfsd-4720  [001] ..s.    50.858119: __rpc_sleep_on_priority: RPC:    33 sleep_on(queue "xprt_pending" time 4294904943)
        rpc.nfsd-4720  [001] ..s.    50.858119: __rpc_sleep_on_priority: RPC:    33 added to queue ffff880407939a58 "xprt_pending"
        rpc.nfsd-4720  [001] ..s.    50.858119: __rpc_sleep_on_priority: RPC:    33 setting alarm for 10000 ms
        rpc.nfsd-4720  [001] ..s.    50.858120: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939990 "xprt_sending")
        rpc.nfsd-4720  [001] ....    50.858120: __rpc_execute: RPC:    33 sync task going to sleep
         rpcbind-1871  [003] ..s.    50.858127: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1871  [003] ..s.    50.858128: xprt_complete_rqst: RPC:    33 xid 5a45b0ec complete (28 bytes received)
         rpcbind-1871  [003] ..s.    50.858128: rpc_wake_up_task_queue_locked: RPC:    33 __rpc_wake_up_task (now 4294904943)
         rpcbind-1871  [003] ..s.    50.858128: rpc_wake_up_task_queue_locked: RPC:    33 disabling timer
         rpcbind-1871  [003] ..s.    50.858128: rpc_wake_up_task_queue_locked: RPC:    33 removed from queue ffff880407939a58 "xprt_pending"
         rpcbind-1871  [003] ..s.    50.858129: rpc_wake_up_task_queue_locked: RPC:       __rpc_wake_up_task done
        rpc.nfsd-4720  [001] ....    50.858131: __rpc_execute: RPC:    33 sync task resuming
        rpc.nfsd-4720  [001] ....    50.858131: call_status: RPC:    33 call_status (status 28)
        rpc.nfsd-4720  [001] ....    50.858131: call_decode: RPC:    33 call_decode (status 28)
        rpc.nfsd-4720  [001] ....    50.858132: call_decode: RPC:    33 call_decode result 0
        rpc.nfsd-4720  [001] ....    50.858132: __rpc_execute: RPC:    33 return 0, status 0
        rpc.nfsd-4720  [001] ....    50.858132: __rpc_execute: RPC:    33 release task
        rpc.nfsd-4720  [001] ....    50.858133: rpc_free: RPC:       freeing buffer of size 188 at ffff8800d8dc0000
        rpc.nfsd-4720  [001] ....    50.858133: xprt_release: RPC:    33 release request ffff880403542200
        rpc.nfsd-4720  [001] ....    50.858133: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939b20 "xprt_backlog")
        rpc.nfsd-4720  [001] ....    50.858133: rpc_release_client: RPC:       rpc_release_client(ffff88040a8b3600)
        rpc.nfsd-4720  [001] ....    50.858134: rpc_free_task: RPC:    33 freeing task
        rpc.nfsd-4720  [001] ....    50.858134: svc_write_space: svc: socket ffff8800db83a000(inet ffff88040a14eb80), write_space busy=1
        rpc.nfsd-4720  [001] ....    50.858135: svc_setup_socket: svc: kernel_setsockopt returned 0
        rpc.nfsd-4720  [001] ....    50.858135: svc_setup_socket: svc: svc_setup_socket created ffff8800db83a000 (inet ffff88040a14eb80)
        rpc.nfsd-4720  [001] ....    50.858137: svc_create_socket: svc: svc_create_socket(lockd, 6, 0.0.0.0, port=0)
        rpc.nfsd-4720  [001] ....    50.858141: svc_setup_socket: svc: svc_setup_socket ffff88040acedd40
        rpc.nfsd-4720  [001] ....    50.858142: rpc_new_task: RPC:       new task initialized, procpid 4720
        rpc.nfsd-4720  [001] ....    50.858143: rpc_new_task: RPC:       allocated task ffff88040b718200
        rpc.nfsd-4720  [001] ....    50.858143: __rpc_execute: RPC:    34 __rpc_execute flags=0x680
        rpc.nfsd-4720  [001] ....    50.858143: call_start: RPC:    34 call_start rpcbind4 proc SET (sync)
        rpc.nfsd-4720  [001] ....    50.858144: call_reserve: RPC:    34 call_reserve (status 0)
        rpc.nfsd-4720  [001] ....    50.858144: xprt_alloc_slot: RPC:    34 reserved req ffff880403542200 xid 5b45b0ec
        rpc.nfsd-4720  [001] ....    50.858144: call_reserveresult: RPC:    34 call_reserveresult (status 0)
        rpc.nfsd-4720  [001] ....    50.858145: call_refresh: RPC:    34 call_refresh (status 0)
        rpc.nfsd-4720  [001] ....    50.858145: call_refreshresult: RPC:    34 call_refreshresult (status 0)
        rpc.nfsd-4720  [001] ....    50.858145: call_allocate: RPC:    34 call_allocate (status 0)
        rpc.nfsd-4720  [001] ....    50.858146: rpc_malloc: RPC:    34 allocated buffer of size 188 at ffff88040a84a800
        rpc.nfsd-4720  [001] ....    50.858146: call_bind: RPC:    34 call_bind (status 0)
        rpc.nfsd-4720  [001] ....    50.858146: call_connect: RPC:    34 call_connect xprt ffff880407939800 is connected
        rpc.nfsd-4720  [001] ....    50.858146: call_transmit: RPC:    34 call_transmit (status 0)
        rpc.nfsd-4720  [001] ....    50.858147: xprt_prepare_transmit: RPC:    34 xprt_prepare_transmit
        rpc.nfsd-4720  [001] ....    50.858147: call_transmit: RPC:    34 rpc_xdr_encode (status 0)
        rpc.nfsd-4720  [001] ....    50.858147: xprt_transmit: RPC:    34 xprt_transmit(88)
        rpc.nfsd-4720  [001] ....    50.858150: xs_local_send_request: RPC:       xs_local_send_request(88) = 0
        rpc.nfsd-4720  [001] ....    50.858150: xprt_transmit: RPC:    34 xmit complete
        rpc.nfsd-4720  [001] ..s.    50.858150: __rpc_sleep_on_priority: RPC:    34 sleep_on(queue "xprt_pending" time 4294904943)
        rpc.nfsd-4720  [001] ..s.    50.858150: __rpc_sleep_on_priority: RPC:    34 added to queue ffff880407939a58 "xprt_pending"
        rpc.nfsd-4720  [001] ..s.    50.858151: __rpc_sleep_on_priority: RPC:    34 setting alarm for 10000 ms
        rpc.nfsd-4720  [001] ..s.    50.858151: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939990 "xprt_sending")
        rpc.nfsd-4720  [001] ....    50.858152: __rpc_execute: RPC:    34 sync task going to sleep
         rpcbind-1871  [003] ..s.    50.858158: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1871  [003] ..s.    50.858159: xprt_complete_rqst: RPC:    34 xid 5b45b0ec complete (28 bytes received)
         rpcbind-1871  [003] ..s.    50.858159: rpc_wake_up_task_queue_locked: RPC:    34 __rpc_wake_up_task (now 4294904943)
         rpcbind-1871  [003] ..s.    50.858160: rpc_wake_up_task_queue_locked: RPC:    34 disabling timer
         rpcbind-1871  [003] ..s.    50.858160: rpc_wake_up_task_queue_locked: RPC:    34 removed from queue ffff880407939a58 "xprt_pending"
         rpcbind-1871  [003] ..s.    50.858161: rpc_wake_up_task_queue_locked: RPC:       __rpc_wake_up_task done
        rpc.nfsd-4720  [001] ....    50.858162: __rpc_execute: RPC:    34 sync task resuming
        rpc.nfsd-4720  [001] ....    50.858163: call_status: RPC:    34 call_status (status 28)
        rpc.nfsd-4720  [001] ....    50.858163: call_decode: RPC:    34 call_decode (status 28)
        rpc.nfsd-4720  [001] ....    50.858164: call_decode: RPC:    34 call_decode result 0
        rpc.nfsd-4720  [001] ....    50.858164: __rpc_execute: RPC:    34 return 0, status 0
        rpc.nfsd-4720  [001] ....    50.858164: __rpc_execute: RPC:    34 release task
        rpc.nfsd-4720  [001] ....    50.858164: rpc_free: RPC:       freeing buffer of size 188 at ffff88040a84a800
        rpc.nfsd-4720  [001] ....    50.858165: xprt_release: RPC:    34 release request ffff880403542200
        rpc.nfsd-4720  [001] ....    50.858165: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939b20 "xprt_backlog")
        rpc.nfsd-4720  [001] ....    50.858165: rpc_release_client: RPC:       rpc_release_client(ffff88040a8b3600)
        rpc.nfsd-4720  [001] ....    50.858166: rpc_free_task: RPC:    34 freeing task
        rpc.nfsd-4720  [001] ....    50.858167: rpc_new_task: RPC:       new task initialized, procpid 4720
        rpc.nfsd-4720  [001] ....    50.858167: rpc_new_task: RPC:       allocated task ffff88040b718200
        rpc.nfsd-4720  [001] ....    50.858167: __rpc_execute: RPC:    35 __rpc_execute flags=0x680
        rpc.nfsd-4720  [001] ....    50.858168: call_start: RPC:    35 call_start rpcbind4 proc SET (sync)
        rpc.nfsd-4720  [001] ....    50.858168: call_reserve: RPC:    35 call_reserve (status 0)
        rpc.nfsd-4720  [001] ....    50.858168: xprt_alloc_slot: RPC:    35 reserved req ffff880403542200 xid 5c45b0ec
        rpc.nfsd-4720  [001] ....    50.858168: call_reserveresult: RPC:    35 call_reserveresult (status 0)
        rpc.nfsd-4720  [001] ....    50.858169: call_refresh: RPC:    35 call_refresh (status 0)
        rpc.nfsd-4720  [001] ....    50.858169: call_refreshresult: RPC:    35 call_refreshresult (status 0)
        rpc.nfsd-4720  [001] ....    50.858169: call_allocate: RPC:    35 call_allocate (status 0)
        rpc.nfsd-4720  [001] ....    50.858170: rpc_malloc: RPC:    35 allocated buffer of size 188 at ffff88040a84a800
        rpc.nfsd-4720  [001] ....    50.858170: call_bind: RPC:    35 call_bind (status 0)
        rpc.nfsd-4720  [001] ....    50.858170: call_connect: RPC:    35 call_connect xprt ffff880407939800 is connected
        rpc.nfsd-4720  [001] ....    50.858170: call_transmit: RPC:    35 call_transmit (status 0)
        rpc.nfsd-4720  [001] ....    50.858171: xprt_prepare_transmit: RPC:    35 xprt_prepare_transmit
        rpc.nfsd-4720  [001] ....    50.858171: call_transmit: RPC:    35 rpc_xdr_encode (status 0)
        rpc.nfsd-4720  [001] ....    50.858171: xprt_transmit: RPC:    35 xprt_transmit(88)
        rpc.nfsd-4720  [001] ....    50.858173: xs_local_send_request: RPC:       xs_local_send_request(88) = 0
        rpc.nfsd-4720  [001] ....    50.858173: xprt_transmit: RPC:    35 xmit complete
        rpc.nfsd-4720  [001] ..s.    50.858174: __rpc_sleep_on_priority: RPC:    35 sleep_on(queue "xprt_pending" time 4294904943)
        rpc.nfsd-4720  [001] ..s.    50.858174: __rpc_sleep_on_priority: RPC:    35 added to queue ffff880407939a58 "xprt_pending"
        rpc.nfsd-4720  [001] ..s.    50.858174: __rpc_sleep_on_priority: RPC:    35 setting alarm for 10000 ms
        rpc.nfsd-4720  [001] ..s.    50.858175: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939990 "xprt_sending")
        rpc.nfsd-4720  [001] ....    50.858175: __rpc_execute: RPC:    35 sync task going to sleep
         rpcbind-1871  [003] ..s.    50.858182: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1871  [003] ..s.    50.858183: xprt_complete_rqst: RPC:    35 xid 5c45b0ec complete (28 bytes received)
         rpcbind-1871  [003] ..s.    50.858183: rpc_wake_up_task_queue_locked: RPC:    35 __rpc_wake_up_task (now 4294904943)
         rpcbind-1871  [003] ..s.    50.858184: rpc_wake_up_task_queue_locked: RPC:    35 disabling timer
         rpcbind-1871  [003] ..s.    50.858184: rpc_wake_up_task_queue_locked: RPC:    35 removed from queue ffff880407939a58 "xprt_pending"
         rpcbind-1871  [003] ..s.    50.858185: rpc_wake_up_task_queue_locked: RPC:       __rpc_wake_up_task done
        rpc.nfsd-4720  [001] ....    50.858186: __rpc_execute: RPC:    35 sync task resuming
        rpc.nfsd-4720  [001] ....    50.858186: call_status: RPC:    35 call_status (status 28)
        rpc.nfsd-4720  [001] ....    50.858187: call_decode: RPC:    35 call_decode (status 28)
        rpc.nfsd-4720  [001] ....    50.858187: call_decode: RPC:    35 call_decode result 0
        rpc.nfsd-4720  [001] ....    50.858187: __rpc_execute: RPC:    35 return 0, status 0
        rpc.nfsd-4720  [001] ....    50.858188: __rpc_execute: RPC:    35 release task
        rpc.nfsd-4720  [001] ....    50.858188: rpc_free: RPC:       freeing buffer of size 188 at ffff88040a84a800
        rpc.nfsd-4720  [001] ....    50.858188: xprt_release: RPC:    35 release request ffff880403542200
        rpc.nfsd-4720  [001] ....    50.858189: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939b20 "xprt_backlog")
        rpc.nfsd-4720  [001] ....    50.858189: rpc_release_client: RPC:       rpc_release_client(ffff88040a8b3600)
        rpc.nfsd-4720  [001] ....    50.858189: rpc_free_task: RPC:    35 freeing task
        rpc.nfsd-4720  [001] ....    50.858190: rpc_new_task: RPC:       new task initialized, procpid 4720
        rpc.nfsd-4720  [001] ....    50.858190: rpc_new_task: RPC:       allocated task ffff88040b718200
        rpc.nfsd-4720  [001] ....    50.858191: __rpc_execute: RPC:    36 __rpc_execute flags=0x680
        rpc.nfsd-4720  [001] ....    50.858191: call_start: RPC:    36 call_start rpcbind4 proc SET (sync)
        rpc.nfsd-4720  [001] ....    50.858191: call_reserve: RPC:    36 call_reserve (status 0)
        rpc.nfsd-4720  [001] ....    50.858192: xprt_alloc_slot: RPC:    36 reserved req ffff880403542200 xid 5d45b0ec
        rpc.nfsd-4720  [001] ....    50.858192: call_reserveresult: RPC:    36 call_reserveresult (status 0)
        rpc.nfsd-4720  [001] ....    50.858192: call_refresh: RPC:    36 call_refresh (status 0)
        rpc.nfsd-4720  [001] ....    50.858192: call_refreshresult: RPC:    36 call_refreshresult (status 0)
        rpc.nfsd-4720  [001] ....    50.858193: call_allocate: RPC:    36 call_allocate (status 0)
        rpc.nfsd-4720  [001] ....    50.858193: rpc_malloc: RPC:    36 allocated buffer of size 188 at ffff88040a84a800
        rpc.nfsd-4720  [001] ....    50.858193: call_bind: RPC:    36 call_bind (status 0)
        rpc.nfsd-4720  [001] ....    50.858194: call_connect: RPC:    36 call_connect xprt ffff880407939800 is connected
        rpc.nfsd-4720  [001] ....    50.858194: call_transmit: RPC:    36 call_transmit (status 0)
        rpc.nfsd-4720  [001] ....    50.858194: xprt_prepare_transmit: RPC:    36 xprt_prepare_transmit
        rpc.nfsd-4720  [001] ....    50.858194: call_transmit: RPC:    36 rpc_xdr_encode (status 0)
        rpc.nfsd-4720  [001] ....    50.858195: xprt_transmit: RPC:    36 xprt_transmit(88)
        rpc.nfsd-4720  [001] ....    50.858197: xs_local_send_request: RPC:       xs_local_send_request(88) = 0
        rpc.nfsd-4720  [001] ....    50.858197: xprt_transmit: RPC:    36 xmit complete
        rpc.nfsd-4720  [001] ..s.    50.858198: __rpc_sleep_on_priority: RPC:    36 sleep_on(queue "xprt_pending" time 4294904943)
        rpc.nfsd-4720  [001] ..s.    50.858198: __rpc_sleep_on_priority: RPC:    36 added to queue ffff880407939a58 "xprt_pending"
        rpc.nfsd-4720  [001] ..s.    50.858198: __rpc_sleep_on_priority: RPC:    36 setting alarm for 10000 ms
        rpc.nfsd-4720  [001] ..s.    50.858199: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939990 "xprt_sending")
        rpc.nfsd-4720  [001] ....    50.858199: __rpc_execute: RPC:    36 sync task going to sleep
         rpcbind-1871  [003] ..s.    50.858206: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1871  [003] ..s.    50.858206: xprt_complete_rqst: RPC:    36 xid 5d45b0ec complete (28 bytes received)
         rpcbind-1871  [003] ..s.    50.858207: rpc_wake_up_task_queue_locked: RPC:    36 __rpc_wake_up_task (now 4294904943)
         rpcbind-1871  [003] ..s.    50.858207: rpc_wake_up_task_queue_locked: RPC:    36 disabling timer
         rpcbind-1871  [003] ..s.    50.858207: rpc_wake_up_task_queue_locked: RPC:    36 removed from queue ffff880407939a58 "xprt_pending"
         rpcbind-1871  [003] ..s.    50.858208: rpc_wake_up_task_queue_locked: RPC:       __rpc_wake_up_task done
        rpc.nfsd-4720  [001] ....    50.858209: __rpc_execute: RPC:    36 sync task resuming
        rpc.nfsd-4720  [001] ....    50.858210: call_status: RPC:    36 call_status (status 28)
        rpc.nfsd-4720  [001] ....    50.858210: call_decode: RPC:    36 call_decode (status 28)
        rpc.nfsd-4720  [001] ....    50.858210: call_decode: RPC:    36 call_decode result 0
        rpc.nfsd-4720  [001] ....    50.858211: __rpc_execute: RPC:    36 return 0, status 0
        rpc.nfsd-4720  [001] ....    50.858211: __rpc_execute: RPC:    36 release task
        rpc.nfsd-4720  [001] ....    50.858211: rpc_free: RPC:       freeing buffer of size 188 at ffff88040a84a800
        rpc.nfsd-4720  [001] ....    50.858212: xprt_release: RPC:    36 release request ffff880403542200
        rpc.nfsd-4720  [001] ....    50.858212: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939b20 "xprt_backlog")
        rpc.nfsd-4720  [001] ....    50.858212: rpc_release_client: RPC:       rpc_release_client(ffff88040a8b3600)
        rpc.nfsd-4720  [001] ....    50.858212: rpc_free_task: RPC:    36 freeing task
        rpc.nfsd-4720  [001] ....    50.858213: svc_setup_socket: setting up TCP socket for listening
        rpc.nfsd-4720  [001] ....    50.858213: svc_setup_socket: svc: svc_setup_socket created ffff88040298e000 (inet ffff88040a708040)
        rpc.nfsd-4720  [001] ....    50.858215: svc_create_socket: svc: svc_create_socket(lockd, 17, 0000:0000:0000:0000:0000:0000:0000:0000, port=0)
        rpc.nfsd-4720  [001] ....    50.858217: svc_setup_socket: svc: svc_setup_socket ffff88040ba2c840
        rpc.nfsd-4720  [001] ....    50.858218: rpc_new_task: RPC:       new task initialized, procpid 4720
        rpc.nfsd-4720  [001] ....    50.858218: rpc_new_task: RPC:       allocated task ffff88040b718200
        rpc.nfsd-4720  [001] ....    50.858219: __rpc_execute: RPC:    37 __rpc_execute flags=0x680
        rpc.nfsd-4720  [001] ....    50.858219: call_start: RPC:    37 call_start rpcbind4 proc SET (sync)
        rpc.nfsd-4720  [001] ....    50.858220: call_reserve: RPC:    37 call_reserve (status 0)
        rpc.nfsd-4720  [001] ....    50.858220: xprt_alloc_slot: RPC:    37 reserved req ffff880403542200 xid 5e45b0ec
        rpc.nfsd-4720  [001] ....    50.858220: call_reserveresult: RPC:    37 call_reserveresult (status 0)
        rpc.nfsd-4720  [001] ....    50.858220: call_refresh: RPC:    37 call_refresh (status 0)
        rpc.nfsd-4720  [001] ....    50.858221: call_refreshresult: RPC:    37 call_refreshresult (status 0)
        rpc.nfsd-4720  [001] ....    50.858221: call_allocate: RPC:    37 call_allocate (status 0)
        rpc.nfsd-4720  [001] ....    50.858221: rpc_malloc: RPC:    37 allocated buffer of size 188 at ffff88040a84a800
        rpc.nfsd-4720  [001] ....    50.858222: call_bind: RPC:    37 call_bind (status 0)
        rpc.nfsd-4720  [001] ....    50.858222: call_connect: RPC:    37 call_connect xprt ffff880407939800 is connected
        rpc.nfsd-4720  [001] ....    50.858222: call_transmit: RPC:    37 call_transmit (status 0)
        rpc.nfsd-4720  [001] ....    50.858222: xprt_prepare_transmit: RPC:    37 xprt_prepare_transmit
        rpc.nfsd-4720  [001] ....    50.858223: call_transmit: RPC:    37 rpc_xdr_encode (status 0)
        rpc.nfsd-4720  [001] ....    50.858223: xprt_transmit: RPC:    37 xprt_transmit(84)
        rpc.nfsd-4720  [001] ....    50.858225: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-4720  [001] ....    50.858225: xprt_transmit: RPC:    37 xmit complete
        rpc.nfsd-4720  [001] ..s.    50.858226: __rpc_sleep_on_priority: RPC:    37 sleep_on(queue "xprt_pending" time 4294904943)
        rpc.nfsd-4720  [001] ..s.    50.858226: __rpc_sleep_on_priority: RPC:    37 added to queue ffff880407939a58 "xprt_pending"
        rpc.nfsd-4720  [001] ..s.    50.858226: __rpc_sleep_on_priority: RPC:    37 setting alarm for 10000 ms
        rpc.nfsd-4720  [001] ..s.    50.858227: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939990 "xprt_sending")
        rpc.nfsd-4720  [001] ....    50.858227: __rpc_execute: RPC:    37 sync task going to sleep
         rpcbind-1871  [003] ..s.    50.858234: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1871  [003] ..s.    50.858234: xprt_complete_rqst: RPC:    37 xid 5e45b0ec complete (28 bytes received)
         rpcbind-1871  [003] ..s.    50.858235: rpc_wake_up_task_queue_locked: RPC:    37 __rpc_wake_up_task (now 4294904943)
         rpcbind-1871  [003] ..s.    50.858235: rpc_wake_up_task_queue_locked: RPC:    37 disabling timer
         rpcbind-1871  [003] ..s.    50.858235: rpc_wake_up_task_queue_locked: RPC:    37 removed from queue ffff880407939a58 "xprt_pending"
         rpcbind-1871  [003] ..s.    50.858236: rpc_wake_up_task_queue_locked: RPC:       __rpc_wake_up_task done
        rpc.nfsd-4720  [001] ....    50.858237: __rpc_execute: RPC:    37 sync task resuming
        rpc.nfsd-4720  [001] ....    50.858238: call_status: RPC:    37 call_status (status 28)
        rpc.nfsd-4720  [001] ....    50.858238: call_decode: RPC:    37 call_decode (status 28)
        rpc.nfsd-4720  [001] ....    50.858239: call_decode: RPC:    37 call_decode result 0
        rpc.nfsd-4720  [001] ....    50.858239: __rpc_execute: RPC:    37 return 0, status 0
        rpc.nfsd-4720  [001] ....    50.858239: __rpc_execute: RPC:    37 release task
        rpc.nfsd-4720  [001] ....    50.858239: rpc_free: RPC:       freeing buffer of size 188 at ffff88040a84a800
        rpc.nfsd-4720  [001] ....    50.858240: xprt_release: RPC:    37 release request ffff880403542200
        rpc.nfsd-4720  [001] ....    50.858240: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939b20 "xprt_backlog")
        rpc.nfsd-4720  [001] ....    50.858240: rpc_release_client: RPC:       rpc_release_client(ffff88040a8b3600)
        rpc.nfsd-4720  [001] ....    50.858241: rpc_free_task: RPC:    37 freeing task
        rpc.nfsd-4720  [001] ....    50.858242: rpc_new_task: RPC:       new task initialized, procpid 4720
        rpc.nfsd-4720  [001] ....    50.858242: rpc_new_task: RPC:       allocated task ffff88040b718200
        rpc.nfsd-4720  [001] ....    50.858242: __rpc_execute: RPC:    38 __rpc_execute flags=0x680
        rpc.nfsd-4720  [001] ....    50.858243: call_start: RPC:    38 call_start rpcbind4 proc SET (sync)
        rpc.nfsd-4720  [001] ....    50.858243: call_reserve: RPC:    38 call_reserve (status 0)
        rpc.nfsd-4720  [001] ....    50.858243: xprt_alloc_slot: RPC:    38 reserved req ffff880403542200 xid 5f45b0ec
        rpc.nfsd-4720  [001] ....    50.858243: call_reserveresult: RPC:    38 call_reserveresult (status 0)
        rpc.nfsd-4720  [001] ....    50.858244: call_refresh: RPC:    38 call_refresh (status 0)
        rpc.nfsd-4720  [001] ....    50.858244: call_refreshresult: RPC:    38 call_refreshresult (status 0)
        rpc.nfsd-4720  [001] ....    50.858244: call_allocate: RPC:    38 call_allocate (status 0)
        rpc.nfsd-4720  [001] ....    50.858245: rpc_malloc: RPC:    38 allocated buffer of size 188 at ffff88040a84a800
        rpc.nfsd-4720  [001] ....    50.858245: call_bind: RPC:    38 call_bind (status 0)
        rpc.nfsd-4720  [001] ....    50.858245: call_connect: RPC:    38 call_connect xprt ffff880407939800 is connected
        rpc.nfsd-4720  [001] ....    50.858245: call_transmit: RPC:    38 call_transmit (status 0)
        rpc.nfsd-4720  [001] ....    50.858245: xprt_prepare_transmit: RPC:    38 xprt_prepare_transmit
        rpc.nfsd-4720  [001] ....    50.858246: call_transmit: RPC:    38 rpc_xdr_encode (status 0)
        rpc.nfsd-4720  [001] ....    50.858246: xprt_transmit: RPC:    38 xprt_transmit(84)
        rpc.nfsd-4720  [001] ....    50.858248: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-4720  [001] ....    50.858248: xprt_transmit: RPC:    38 xmit complete
        rpc.nfsd-4720  [001] ..s.    50.858249: __rpc_sleep_on_priority: RPC:    38 sleep_on(queue "xprt_pending" time 4294904943)
        rpc.nfsd-4720  [001] ..s.    50.858249: __rpc_sleep_on_priority: RPC:    38 added to queue ffff880407939a58 "xprt_pending"
        rpc.nfsd-4720  [001] ..s.    50.858249: __rpc_sleep_on_priority: RPC:    38 setting alarm for 10000 ms
        rpc.nfsd-4720  [001] ..s.    50.858250: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939990 "xprt_sending")
        rpc.nfsd-4720  [001] ....    50.858250: __rpc_execute: RPC:    38 sync task going to sleep
         rpcbind-1871  [003] ..s.    50.858257: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1871  [003] ..s.    50.858257: xprt_complete_rqst: RPC:    38 xid 5f45b0ec complete (28 bytes received)
         rpcbind-1871  [003] ..s.    50.858258: rpc_wake_up_task_queue_locked: RPC:    38 __rpc_wake_up_task (now 4294904943)
         rpcbind-1871  [003] ..s.    50.858258: rpc_wake_up_task_queue_locked: RPC:    38 disabling timer
         rpcbind-1871  [003] ..s.    50.858258: rpc_wake_up_task_queue_locked: RPC:    38 removed from queue ffff880407939a58 "xprt_pending"
         rpcbind-1871  [003] ..s.    50.858259: rpc_wake_up_task_queue_locked: RPC:       __rpc_wake_up_task done
        rpc.nfsd-4720  [001] ....    50.858260: __rpc_execute: RPC:    38 sync task resuming
        rpc.nfsd-4720  [001] ....    50.858261: call_status: RPC:    38 call_status (status 28)
        rpc.nfsd-4720  [001] ....    50.858261: call_decode: RPC:    38 call_decode (status 28)
        rpc.nfsd-4720  [001] ....    50.858262: call_decode: RPC:    38 call_decode result 0
        rpc.nfsd-4720  [001] ....    50.858262: __rpc_execute: RPC:    38 return 0, status 0
        rpc.nfsd-4720  [001] ....    50.858262: __rpc_execute: RPC:    38 release task
        rpc.nfsd-4720  [001] ....    50.858262: rpc_free: RPC:       freeing buffer of size 188 at ffff88040a84a800
        rpc.nfsd-4720  [001] ....    50.858263: xprt_release: RPC:    38 release request ffff880403542200
        rpc.nfsd-4720  [001] ....    50.858263: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939b20 "xprt_backlog")
        rpc.nfsd-4720  [001] ....    50.858263: rpc_release_client: RPC:       rpc_release_client(ffff88040a8b3600)
        rpc.nfsd-4720  [001] ....    50.858264: rpc_free_task: RPC:    38 freeing task
        rpc.nfsd-4720  [001] ....    50.858265: rpc_new_task: RPC:       new task initialized, procpid 4720
        rpc.nfsd-4720  [001] ....    50.858265: rpc_new_task: RPC:       allocated task ffff88040b718200
        rpc.nfsd-4720  [001] ....    50.858265: __rpc_execute: RPC:    39 __rpc_execute flags=0x680
        rpc.nfsd-4720  [001] ....    50.858265: call_start: RPC:    39 call_start rpcbind4 proc SET (sync)
        rpc.nfsd-4720  [001] ....    50.858266: call_reserve: RPC:    39 call_reserve (status 0)
        rpc.nfsd-4720  [001] ....    50.858266: xprt_alloc_slot: RPC:    39 reserved req ffff880403542200 xid 6045b0ec
        rpc.nfsd-4720  [001] ....    50.858266: call_reserveresult: RPC:    39 call_reserveresult (status 0)
        rpc.nfsd-4720  [001] ....    50.858266: call_refresh: RPC:    39 call_refresh (status 0)
        rpc.nfsd-4720  [001] ....    50.858267: call_refreshresult: RPC:    39 call_refreshresult (status 0)
        rpc.nfsd-4720  [001] ....    50.858267: call_allocate: RPC:    39 call_allocate (status 0)
        rpc.nfsd-4720  [001] ....    50.858268: rpc_malloc: RPC:    39 allocated buffer of size 188 at ffff88040a84a800
        rpc.nfsd-4720  [001] ....    50.858268: call_bind: RPC:    39 call_bind (status 0)
        rpc.nfsd-4720  [001] ....    50.858268: call_connect: RPC:    39 call_connect xprt ffff880407939800 is connected
        rpc.nfsd-4720  [001] ....    50.858268: call_transmit: RPC:    39 call_transmit (status 0)
        rpc.nfsd-4720  [001] ....    50.858269: xprt_prepare_transmit: RPC:    39 xprt_prepare_transmit
        rpc.nfsd-4720  [001] ....    50.858269: call_transmit: RPC:    39 rpc_xdr_encode (status 0)
        rpc.nfsd-4720  [001] ....    50.858269: xprt_transmit: RPC:    39 xprt_transmit(84)
        rpc.nfsd-4720  [001] ....    50.858271: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-4720  [001] ....    50.858272: xprt_transmit: RPC:    39 xmit complete
        rpc.nfsd-4720  [001] ..s.    50.858272: __rpc_sleep_on_priority: RPC:    39 sleep_on(queue "xprt_pending" time 4294904943)
        rpc.nfsd-4720  [001] ..s.    50.858272: __rpc_sleep_on_priority: RPC:    39 added to queue ffff880407939a58 "xprt_pending"
        rpc.nfsd-4720  [001] ..s.    50.858272: __rpc_sleep_on_priority: RPC:    39 setting alarm for 10000 ms
        rpc.nfsd-4720  [001] ..s.    50.858273: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939990 "xprt_sending")
        rpc.nfsd-4720  [001] ....    50.858273: __rpc_execute: RPC:    39 sync task going to sleep
         rpcbind-1871  [003] ..s.    50.858280: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1871  [003] ..s.    50.858280: xprt_complete_rqst: RPC:    39 xid 6045b0ec complete (28 bytes received)
         rpcbind-1871  [003] ..s.    50.858281: rpc_wake_up_task_queue_locked: RPC:    39 __rpc_wake_up_task (now 4294904943)
         rpcbind-1871  [003] ..s.    50.858281: rpc_wake_up_task_queue_locked: RPC:    39 disabling timer
         rpcbind-1871  [003] ..s.    50.858281: rpc_wake_up_task_queue_locked: RPC:    39 removed from queue ffff880407939a58 "xprt_pending"
         rpcbind-1871  [003] ..s.    50.858282: rpc_wake_up_task_queue_locked: RPC:       __rpc_wake_up_task done
        rpc.nfsd-4720  [001] ....    50.858283: __rpc_execute: RPC:    39 sync task resuming
        rpc.nfsd-4720  [001] ....    50.858284: call_status: RPC:    39 call_status (status 28)
        rpc.nfsd-4720  [001] ....    50.858284: call_decode: RPC:    39 call_decode (status 28)
        rpc.nfsd-4720  [001] ....    50.858285: call_decode: RPC:    39 call_decode result 0
        rpc.nfsd-4720  [001] ....    50.858285: __rpc_execute: RPC:    39 return 0, status 0
        rpc.nfsd-4720  [001] ....    50.858285: __rpc_execute: RPC:    39 release task
        rpc.nfsd-4720  [001] ....    50.858285: rpc_free: RPC:       freeing buffer of size 188 at ffff88040a84a800
        rpc.nfsd-4720  [001] ....    50.858286: xprt_release: RPC:    39 release request ffff880403542200
        rpc.nfsd-4720  [001] ....    50.858286: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939b20 "xprt_backlog")
        rpc.nfsd-4720  [001] ....    50.858286: rpc_release_client: RPC:       rpc_release_client(ffff88040a8b3600)
        rpc.nfsd-4720  [001] ....    50.858287: rpc_free_task: RPC:    39 freeing task
        rpc.nfsd-4720  [001] ....    50.858287: svc_write_space: svc: socket ffff8800db81e000(inet ffff8800db8545c0), write_space busy=1
        rpc.nfsd-4720  [001] ....    50.858288: svc_setup_socket: svc: kernel_setsockopt returned 0
        rpc.nfsd-4720  [001] ....    50.858288: svc_setup_socket: svc: svc_setup_socket created ffff8800db81e000 (inet ffff8800db8545c0)
        rpc.nfsd-4720  [001] ....    50.858289: svc_create_socket: svc: svc_create_socket(lockd, 6, 0000:0000:0000:0000:0000:0000:0000:0000, port=0)
        rpc.nfsd-4720  [001] ....    50.858293: svc_setup_socket: svc: svc_setup_socket ffff88040b9d12c0
        rpc.nfsd-4720  [001] ....    50.858295: rpc_new_task: RPC:       new task initialized, procpid 4720
        rpc.nfsd-4720  [001] ....    50.858295: rpc_new_task: RPC:       allocated task ffff88040b718200
        rpc.nfsd-4720  [001] ....    50.858295: __rpc_execute: RPC:    40 __rpc_execute flags=0x680
        rpc.nfsd-4720  [001] ....    50.858296: call_start: RPC:    40 call_start rpcbind4 proc SET (sync)
        rpc.nfsd-4720  [001] ....    50.858296: call_reserve: RPC:    40 call_reserve (status 0)
        rpc.nfsd-4720  [001] ....    50.858296: xprt_alloc_slot: RPC:    40 reserved req ffff880403542200 xid 6145b0ec
        rpc.nfsd-4720  [001] ....    50.858296: call_reserveresult: RPC:    40 call_reserveresult (status 0)
        rpc.nfsd-4720  [001] ....    50.858297: call_refresh: RPC:    40 call_refresh (status 0)
        rpc.nfsd-4720  [001] ....    50.858297: call_refreshresult: RPC:    40 call_refreshresult (status 0)
        rpc.nfsd-4720  [001] ....    50.858297: call_allocate: RPC:    40 call_allocate (status 0)
        rpc.nfsd-4720  [001] ....    50.858298: rpc_malloc: RPC:    40 allocated buffer of size 188 at ffff88040a848000
        rpc.nfsd-4720  [001] ....    50.858298: call_bind: RPC:    40 call_bind (status 0)
        rpc.nfsd-4720  [001] ....    50.858298: call_connect: RPC:    40 call_connect xprt ffff880407939800 is connected
        rpc.nfsd-4720  [001] ....    50.858299: call_transmit: RPC:    40 call_transmit (status 0)
        rpc.nfsd-4720  [001] ....    50.858299: xprt_prepare_transmit: RPC:    40 xprt_prepare_transmit
        rpc.nfsd-4720  [001] ....    50.858299: call_transmit: RPC:    40 rpc_xdr_encode (status 0)
        rpc.nfsd-4720  [001] ....    50.858300: xprt_transmit: RPC:    40 xprt_transmit(84)
        rpc.nfsd-4720  [001] ....    50.858302: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-4720  [001] ....    50.858302: xprt_transmit: RPC:    40 xmit complete
        rpc.nfsd-4720  [001] ..s.    50.858302: __rpc_sleep_on_priority: RPC:    40 sleep_on(queue "xprt_pending" time 4294904943)
        rpc.nfsd-4720  [001] ..s.    50.858303: __rpc_sleep_on_priority: RPC:    40 added to queue ffff880407939a58 "xprt_pending"
        rpc.nfsd-4720  [001] ..s.    50.858303: __rpc_sleep_on_priority: RPC:    40 setting alarm for 10000 ms
        rpc.nfsd-4720  [001] ..s.    50.858303: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939990 "xprt_sending")
        rpc.nfsd-4720  [001] ....    50.858304: __rpc_execute: RPC:    40 sync task going to sleep
         rpcbind-1871  [003] ..s.    50.858310: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1871  [003] ..s.    50.858311: xprt_complete_rqst: RPC:    40 xid 6145b0ec complete (28 bytes received)
         rpcbind-1871  [003] ..s.    50.858311: rpc_wake_up_task_queue_locked: RPC:    40 __rpc_wake_up_task (now 4294904943)
         rpcbind-1871  [003] ..s.    50.858311: rpc_wake_up_task_queue_locked: RPC:    40 disabling timer
         rpcbind-1871  [003] ..s.    50.858312: rpc_wake_up_task_queue_locked: RPC:    40 removed from queue ffff880407939a58 "xprt_pending"
         rpcbind-1871  [003] ..s.    50.858313: rpc_wake_up_task_queue_locked: RPC:       __rpc_wake_up_task done
        rpc.nfsd-4720  [001] ....    50.858314: __rpc_execute: RPC:    40 sync task resuming
        rpc.nfsd-4720  [001] ....    50.858314: call_status: RPC:    40 call_status (status 28)
        rpc.nfsd-4720  [001] ....    50.858315: call_decode: RPC:    40 call_decode (status 28)
        rpc.nfsd-4720  [001] ....    50.858315: call_decode: RPC:    40 call_decode result 0
        rpc.nfsd-4720  [001] ....    50.858315: __rpc_execute: RPC:    40 return 0, status 0
        rpc.nfsd-4720  [001] ....    50.858316: __rpc_execute: RPC:    40 release task
        rpc.nfsd-4720  [001] ....    50.858316: rpc_free: RPC:       freeing buffer of size 188 at ffff88040a848000
        rpc.nfsd-4720  [001] ....    50.858316: xprt_release: RPC:    40 release request ffff880403542200
        rpc.nfsd-4720  [001] ....    50.858317: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939b20 "xprt_backlog")
        rpc.nfsd-4720  [001] ....    50.858317: rpc_release_client: RPC:       rpc_release_client(ffff88040a8b3600)
        rpc.nfsd-4720  [001] ....    50.858317: rpc_free_task: RPC:    40 freeing task
        rpc.nfsd-4720  [001] ....    50.858318: rpc_new_task: RPC:       new task initialized, procpid 4720
        rpc.nfsd-4720  [001] ....    50.858318: rpc_new_task: RPC:       allocated task ffff88040b718200
        rpc.nfsd-4720  [001] ....    50.858319: __rpc_execute: RPC:    41 __rpc_execute flags=0x680
        rpc.nfsd-4720  [001] ....    50.858319: call_start: RPC:    41 call_start rpcbind4 proc SET (sync)
        rpc.nfsd-4720  [001] ....    50.858319: call_reserve: RPC:    41 call_reserve (status 0)
        rpc.nfsd-4720  [001] ....    50.858320: xprt_alloc_slot: RPC:    41 reserved req ffff880403542200 xid 6245b0ec
        rpc.nfsd-4720  [001] ....    50.858320: call_reserveresult: RPC:    41 call_reserveresult (status 0)
        rpc.nfsd-4720  [001] ....    50.858320: call_refresh: RPC:    41 call_refresh (status 0)
        rpc.nfsd-4720  [001] ....    50.858320: call_refreshresult: RPC:    41 call_refreshresult (status 0)
        rpc.nfsd-4720  [001] ....    50.858321: call_allocate: RPC:    41 call_allocate (status 0)
        rpc.nfsd-4720  [001] ....    50.858321: rpc_malloc: RPC:    41 allocated buffer of size 188 at ffff88040a848000
        rpc.nfsd-4720  [001] ....    50.858321: call_bind: RPC:    41 call_bind (status 0)
        rpc.nfsd-4720  [001] ....    50.858322: call_connect: RPC:    41 call_connect xprt ffff880407939800 is connected
        rpc.nfsd-4720  [001] ....    50.858322: call_transmit: RPC:    41 call_transmit (status 0)
        rpc.nfsd-4720  [001] ....    50.858322: xprt_prepare_transmit: RPC:    41 xprt_prepare_transmit
        rpc.nfsd-4720  [001] ....    50.858322: call_transmit: RPC:    41 rpc_xdr_encode (status 0)
        rpc.nfsd-4720  [001] ....    50.858323: xprt_transmit: RPC:    41 xprt_transmit(84)
        rpc.nfsd-4720  [001] ....    50.858325: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-4720  [001] ....    50.858325: xprt_transmit: RPC:    41 xmit complete
        rpc.nfsd-4720  [001] ..s.    50.858326: __rpc_sleep_on_priority: RPC:    41 sleep_on(queue "xprt_pending" time 4294904943)
        rpc.nfsd-4720  [001] ..s.    50.858326: __rpc_sleep_on_priority: RPC:    41 added to queue ffff880407939a58 "xprt_pending"
        rpc.nfsd-4720  [001] ..s.    50.858326: __rpc_sleep_on_priority: RPC:    41 setting alarm for 10000 ms
        rpc.nfsd-4720  [001] ..s.    50.858327: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939990 "xprt_sending")
        rpc.nfsd-4720  [001] ....    50.858327: __rpc_execute: RPC:    41 sync task going to sleep
         rpcbind-1871  [003] ..s.    50.858333: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1871  [003] ..s.    50.858334: xprt_complete_rqst: RPC:    41 xid 6245b0ec complete (28 bytes received)
         rpcbind-1871  [003] ..s.    50.858334: rpc_wake_up_task_queue_locked: RPC:    41 __rpc_wake_up_task (now 4294904943)
         rpcbind-1871  [003] ..s.    50.858334: rpc_wake_up_task_queue_locked: RPC:    41 disabling timer
         rpcbind-1871  [003] ..s.    50.858335: rpc_wake_up_task_queue_locked: RPC:    41 removed from queue ffff880407939a58 "xprt_pending"
         rpcbind-1871  [003] ..s.    50.858336: rpc_wake_up_task_queue_locked: RPC:       __rpc_wake_up_task done
        rpc.nfsd-4720  [001] ....    50.858337: __rpc_execute: RPC:    41 sync task resuming
        rpc.nfsd-4720  [001] ....    50.858337: call_status: RPC:    41 call_status (status 28)
        rpc.nfsd-4720  [001] ....    50.858338: call_decode: RPC:    41 call_decode (status 28)
        rpc.nfsd-4720  [001] ....    50.858338: call_decode: RPC:    41 call_decode result 0
        rpc.nfsd-4720  [001] ....    50.858339: __rpc_execute: RPC:    41 return 0, status 0
        rpc.nfsd-4720  [001] ....    50.858339: __rpc_execute: RPC:    41 release task
        rpc.nfsd-4720  [001] ....    50.858339: rpc_free: RPC:       freeing buffer of size 188 at ffff88040a848000
        rpc.nfsd-4720  [001] ....    50.858339: xprt_release: RPC:    41 release request ffff880403542200
        rpc.nfsd-4720  [001] ....    50.858340: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939b20 "xprt_backlog")
        rpc.nfsd-4720  [001] ....    50.858340: rpc_release_client: RPC:       rpc_release_client(ffff88040a8b3600)
        rpc.nfsd-4720  [001] ....    50.858340: rpc_free_task: RPC:    41 freeing task
        rpc.nfsd-4720  [001] ....    50.858341: rpc_new_task: RPC:       new task initialized, procpid 4720
        rpc.nfsd-4720  [001] ....    50.858341: rpc_new_task: RPC:       allocated task ffff88040b718200
        rpc.nfsd-4720  [001] ....    50.858342: __rpc_execute: RPC:    42 __rpc_execute flags=0x680
        rpc.nfsd-4720  [001] ....    50.858342: call_start: RPC:    42 call_start rpcbind4 proc SET (sync)
        rpc.nfsd-4720  [001] ....    50.858343: call_reserve: RPC:    42 call_reserve (status 0)
        rpc.nfsd-4720  [001] ....    50.858343: xprt_alloc_slot: RPC:    42 reserved req ffff880403542200 xid 6345b0ec
        rpc.nfsd-4720  [001] ....    50.858343: call_reserveresult: RPC:    42 call_reserveresult (status 0)
        rpc.nfsd-4720  [001] ....    50.858343: call_refresh: RPC:    42 call_refresh (status 0)
        rpc.nfsd-4720  [001] ....    50.858344: call_refreshresult: RPC:    42 call_refreshresult (status 0)
        rpc.nfsd-4720  [001] ....    50.858344: call_allocate: RPC:    42 call_allocate (status 0)
        rpc.nfsd-4720  [001] ....    50.858344: rpc_malloc: RPC:    42 allocated buffer of size 188 at ffff88040a848000
        rpc.nfsd-4720  [001] ....    50.858344: call_bind: RPC:    42 call_bind (status 0)
        rpc.nfsd-4720  [001] ....    50.858345: call_connect: RPC:    42 call_connect xprt ffff880407939800 is connected
        rpc.nfsd-4720  [001] ....    50.858345: call_transmit: RPC:    42 call_transmit (status 0)
        rpc.nfsd-4720  [001] ....    50.858345: xprt_prepare_transmit: RPC:    42 xprt_prepare_transmit
        rpc.nfsd-4720  [001] ....    50.858345: call_transmit: RPC:    42 rpc_xdr_encode (status 0)
        rpc.nfsd-4720  [001] ....    50.858346: xprt_transmit: RPC:    42 xprt_transmit(84)
        rpc.nfsd-4720  [001] ....    50.858348: xs_local_send_request: RPC:       xs_local_send_request(84) = 0
        rpc.nfsd-4720  [001] ....    50.858348: xprt_transmit: RPC:    42 xmit complete
        rpc.nfsd-4720  [001] ..s.    50.858348: __rpc_sleep_on_priority: RPC:    42 sleep_on(queue "xprt_pending" time 4294904943)
        rpc.nfsd-4720  [001] ..s.    50.858349: __rpc_sleep_on_priority: RPC:    42 added to queue ffff880407939a58 "xprt_pending"
        rpc.nfsd-4720  [001] ..s.    50.858349: __rpc_sleep_on_priority: RPC:    42 setting alarm for 10000 ms
        rpc.nfsd-4720  [001] ..s.    50.858349: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939990 "xprt_sending")
        rpc.nfsd-4720  [001] ....    50.858350: __rpc_execute: RPC:    42 sync task going to sleep
         rpcbind-1871  [003] ..s.    50.858356: xs_local_data_ready: RPC:       xs_local_data_ready...
         rpcbind-1871  [003] ..s.    50.858357: xprt_complete_rqst: RPC:    42 xid 6345b0ec complete (28 bytes received)
         rpcbind-1871  [003] ..s.    50.858357: rpc_wake_up_task_queue_locked: RPC:    42 __rpc_wake_up_task (now 4294904943)
         rpcbind-1871  [003] ..s.    50.858357: rpc_wake_up_task_queue_locked: RPC:    42 disabling timer
         rpcbind-1871  [003] ..s.    50.858358: rpc_wake_up_task_queue_locked: RPC:    42 removed from queue ffff880407939a58 "xprt_pending"
         rpcbind-1871  [003] ..s.    50.858359: rpc_wake_up_task_queue_locked: RPC:       __rpc_wake_up_task done
        rpc.nfsd-4720  [001] ....    50.858360: __rpc_execute: RPC:    42 sync task resuming
        rpc.nfsd-4720  [001] ....    50.858360: call_status: RPC:    42 call_status (status 28)
        rpc.nfsd-4720  [001] ....    50.858361: call_decode: RPC:    42 call_decode (status 28)
        rpc.nfsd-4720  [001] ....    50.858361: call_decode: RPC:    42 call_decode result 0
        rpc.nfsd-4720  [001] ....    50.858361: __rpc_execute: RPC:    42 return 0, status 0
        rpc.nfsd-4720  [001] ....    50.858362: __rpc_execute: RPC:    42 release task
        rpc.nfsd-4720  [001] ....    50.858362: rpc_free: RPC:       freeing buffer of size 188 at ffff88040a848000
        rpc.nfsd-4720  [001] ....    50.858362: xprt_release: RPC:    42 release request ffff880403542200
        rpc.nfsd-4720  [001] ....    50.858363: rpc_wake_up_first: RPC:       wake_up_first(ffff880407939b20 "xprt_backlog")
        rpc.nfsd-4720  [001] ....    50.858363: rpc_release_client: RPC:       rpc_release_client(ffff88040a8b3600)
        rpc.nfsd-4720  [001] ....    50.858363: rpc_free_task: RPC:    42 freeing task
        rpc.nfsd-4720  [001] ....    50.858364: svc_setup_socket: setting up TCP socket for listening
        rpc.nfsd-4720  [001] ....    50.858364: svc_setup_socket: svc: svc_setup_socket created ffff880409d69000 (inet ffff88040a152040)
           lockd-4750  [003] ....    50.858433: svc_write_space: svc: socket ffff8800db83a000(inet ffff88040a14eb80), write_space busy=1
           lockd-4750  [003] ....    50.858435: svc_tcp_accept: svc: tcp_accept ffff88040298e000 sock ffff88040acedd40
           lockd-4750  [003] ....    50.858438: svc_write_space: svc: socket ffff8800db81e000(inet ffff8800db8545c0), write_space busy=1
           lockd-4750  [003] ....    50.858439: svc_tcp_accept: svc: tcp_accept ffff880409d69000 sock ffff88040b9d12c0
            nfsd-4771  [001] ....    50.956508: svc_tcp_accept: svc: tcp_accept ffff880402bd4000 sock ffff8800db68bac0
            nfsd-4772  [003] ....    50.956510: svc_write_space: svc: socket ffff880402ba1000(inet ffff880407b2cc00), write_space busy=1
            nfsd-4772  [003] ....    50.956516: svc_tcp_accept: svc: tcp_accept ffff88040cf94000 sock ffff88040ec230c0
            nfsd-4771  [001] ....    50.956517: svc_write_space: svc: socket ffff880402966000(inet ffff8800db854180), write_space busy=1
          <idle>-0     [003] ..s.   149.227677: svc_tcp_listen_data_ready: svc: socket ffff88040a708780 TCP (listen) state change 10
            nfsd-4779  [003] ....   149.227697: svc_tcp_accept: svc: tcp_accept ffff880402bd4000 sock ffff8800db68bac0
            nfsd-4779  [003] ....   149.227705: svc_tcp_accept: nfsd: connect from 192.168.23.22, port=867
            nfsd-4779  [003] ....   149.227706: svc_setup_socket: svc: svc_setup_socket ffff8804081ad580
            nfsd-4779  [003] ....   149.227708: svc_setup_socket: setting up TCP socket for reading
            nfsd-4779  [003] ....   149.227709: svc_setup_socket: svc: svc_setup_socket created ffff8800daaa1000 (inet ffff8800d8c817c0)
            nfsd-4777  [002] ....   149.227792: svc_tcp_accept: svc: tcp_accept ffff880402bd4000 sock ffff8800db68bac0
            nfsd-4778  [000] ....   149.227792: svc_tcp_recvfrom: svc: tcp_recv ffff8800daaa1000 data 1 conn 0 close 0
            nfsd-4778  [000] ....   149.227797: svc_recvfrom.isra.10: svc: socket ffff8800daaa1000 recvfrom(ffff8800daaa12b8, 4) = -11
            nfsd-4778  [000] ....   149.227798: svc_tcp_recvfrom: RPC: TCP recv_record got -11
            nfsd-4778  [000] ....   149.227798: svc_tcp_recvfrom: RPC: TCP recvfrom got EAGAIN
          <idle>-0     [003] ..s.   149.227933: svc_tcp_data_ready: svc: socket ffff8800d8c817c0 TCP data ready (svsk ffff8800daaa1000)
            nfsd-4779  [003] ....   149.227966: svc_tcp_recvfrom: svc: tcp_recv ffff8800daaa1000 data 1 conn 0 close 0
            nfsd-4779  [003] ....   149.227970: svc_recvfrom.isra.10: svc: socket ffff8800daaa1000 recvfrom(ffff8800daaa12b8, 4) = 4
            nfsd-4779  [003] ....   149.227971: svc_tcp_recvfrom: svc: TCP record, 92 bytes
            nfsd-4779  [003] ....   149.227973: svc_recvfrom.isra.10: svc: socket ffff8800daaa1000 recvfrom(ffff8800dbb8a000, 4096) = 92
            nfsd-4779  [003] ....   149.227974: svc_tcp_fragment_received: svc: TCP final record (92 bytes)
            nfsd-4778  [000] ....   149.228077: svc_tcp_recvfrom: svc: tcp_recv ffff8800daaa1000 data 1 conn 0 close 0
            nfsd-4778  [000] ....   149.228081: svc_recvfrom.isra.10: svc: socket ffff8800daaa1000 recvfrom(ffff8800daaa12b8, 4) = -11
            nfsd-4778  [000] ....   149.228081: svc_tcp_recvfrom: RPC: TCP recv_record got -11
            nfsd-4778  [000] ....   149.228082: svc_tcp_recvfrom: RPC: TCP recvfrom got EAGAIN
            nfsd-4779  [003] ....   149.229024: svc_sendto: svc: socket ffff8800daaa1000 sendto([ffff8800dbb8b000 48... ], 48) = 48 (addr 192.168.23.22, port=867)
          <idle>-0     [003] ..s.   149.229463: svc_tcp_data_ready: svc: socket ffff8800d8c817c0 TCP data ready (svsk ffff8800daaa1000)
            nfsd-4779  [003] ....   149.229481: svc_tcp_recvfrom: svc: tcp_recv ffff8800daaa1000 data 1 conn 0 close 0
            nfsd-4779  [003] ....   149.229484: svc_recvfrom.isra.10: svc: socket ffff8800daaa1000 recvfrom(ffff8800daaa12b8, 4) = 4
            nfsd-4779  [003] ....   149.229485: svc_tcp_recvfrom: svc: TCP record, 184 bytes
            nfsd-4779  [003] ....   149.229487: svc_recvfrom.isra.10: svc: socket ffff8800daaa1000 recvfrom(ffff8800dbb8a000, 4096) = 184
            nfsd-4779  [003] ....   149.229488: svc_tcp_fragment_received: svc: TCP final record (184 bytes)
            nfsd-4779  [003] ....   149.229536: svc_sendto: svc: socket ffff8800daaa1000 sendto([ffff8800d9324000 64... ], 64) = 64 (addr 192.168.23.22, port=867)
            nfsd-4778  [000] ....   149.229558: svc_tcp_recvfrom: svc: tcp_recv ffff8800daaa1000 data 1 conn 0 close 0
            nfsd-4778  [000] ....   149.229562: svc_recvfrom.isra.10: svc: socket ffff8800daaa1000 recvfrom(ffff8800daaa12b8, 4) = -11
            nfsd-4778  [000] ....   149.229562: svc_tcp_recvfrom: RPC: TCP recv_record got -11
            nfsd-4778  [000] ....   149.229563: svc_tcp_recvfrom: RPC: TCP recvfrom got EAGAIN
          <idle>-0     [003] ..s.   149.230009: svc_tcp_data_ready: svc: socket ffff8800d8c817c0 TCP data ready (svsk ffff8800daaa1000)
            nfsd-4779  [003] ....   149.230027: svc_tcp_recvfrom: svc: tcp_recv ffff8800daaa1000 data 1 conn 0 close 0
            nfsd-4779  [003] ....   149.230031: svc_recvfrom.isra.10: svc: socket ffff8800daaa1000 recvfrom(ffff8800daaa12b8, 4) = 4
            nfsd-4779  [003] ....   149.230032: svc_tcp_recvfrom: svc: TCP record, 100 bytes
            nfsd-4779  [003] ....   149.230034: svc_recvfrom.isra.10: svc: socket ffff8800daaa1000 recvfrom(ffff8800dbb8a000, 4096) = 100
            nfsd-4779  [003] ....   149.230035: svc_tcp_fragment_received: svc: TCP final record (100 bytes)
            nfsd-4779  [003] ....   149.230076: svc_sendto: svc: socket ffff8800daaa1000 sendto([ffff88003787b000 48... ], 48) = 48 (addr 192.168.23.22, port=867)
            nfsd-4778  [000] ....   149.230122: svc_tcp_recvfrom: svc: tcp_recv ffff8800daaa1000 data 1 conn 0 close 0
            nfsd-4778  [000] ....   149.230126: svc_recvfrom.isra.10: svc: socket ffff8800daaa1000 recvfrom(ffff8800daaa12b8, 4) = -11
            nfsd-4778  [000] ....   149.230127: svc_tcp_recvfrom: RPC: TCP recv_record got -11
            nfsd-4778  [000] ....   149.230127: svc_tcp_recvfrom: RPC: TCP recvfrom got EAGAIN
   kworker/u32:3-105   [001] ....   149.230132: xs_setup_tcp: RPC:       set up xprt to 192.168.23.22 (port 55201) via tcp
   kworker/u32:3-105   [001] ....   149.230143: xprt_create_transport: RPC:       created transport ffff8800daa83800 with 65536 slots
   kworker/u32:3-105   [001] ....   149.230145: rpc_new_client: RPC:       creating nfs4_cb client for (null) (xprt ffff8800daa83800)
   kworker/u32:3-105   [001] ....   149.230167: rpc_new_task: RPC:       new task initialized, procpid 105
   kworker/u32:3-105   [001] ....   149.230168: rpc_new_task: RPC:       allocated task ffff880408652900
    kworker/1:1H-131   [001] ....   149.230177: __rpc_execute: RPC:    43 __rpc_execute flags=0x681
    kworker/1:1H-131   [001] ....   149.230179: call_start: RPC:    43 call_start nfs4_cb1 proc CB_NULL (async)
    kworker/1:1H-131   [001] ....   149.230180: call_reserve: RPC:    43 call_reserve (status 0)
    kworker/1:1H-131   [001] ....   149.230183: xprt_alloc_slot: RPC:    43 reserved req ffff8800daa7f000 xid 2f21bdb9
    kworker/1:1H-131   [001] ..s.   149.230184: rpc_wake_up_first: RPC:       wake_up_first(ffff8800daa83990 "xprt_sending")
    kworker/1:1H-131   [001] ....   149.230185: call_reserveresult: RPC:    43 call_reserveresult (status 0)
    kworker/1:1H-131   [001] ....   149.230186: call_refresh: RPC:    43 call_refresh (status 0)
    kworker/1:1H-131   [001] ....   149.230189: call_refreshresult: RPC:    43 call_refreshresult (status 0)
    kworker/1:1H-131   [001] ....   149.230190: call_allocate: RPC:    43 call_allocate (status 0)
    kworker/1:1H-131   [001] ....   149.230192: rpc_malloc: RPC:    43 allocated buffer of size 396 at ffff8800daafc000
    kworker/1:1H-131   [001] ....   149.230192: call_bind: RPC:    43 call_bind (status 0)
    kworker/1:1H-131   [001] ....   149.230194: call_connect: RPC:    43 call_connect xprt ffff8800daa83800 is not connected
    kworker/1:1H-131   [001] ....   149.230195: xprt_connect: RPC:    43 xprt_connect xprt ffff8800daa83800 is not connected
    kworker/1:1H-131   [001] ..s.   149.230196: __rpc_sleep_on_priority: RPC:    43 sleep_on(queue "xprt_pending" time 4294929524)
    kworker/1:1H-131   [001] ..s.   149.230197: __rpc_sleep_on_priority: RPC:    43 added to queue ffff8800daa83a58 "xprt_pending"
    kworker/1:1H-131   [001] ..s.   149.230198: __rpc_sleep_on_priority: RPC:    43 setting alarm for 9000 ms
    kworker/1:1H-131   [001] ....   149.230201: xs_connect: RPC:       xs_connect scheduled xprt ffff8800daa83800
    kworker/1:1H-131   [001] ..s.   149.230212: inet_csk_get_port: kworker/1:1H:131 got port 947
    kworker/1:1H-131   [001] ....   149.230274: xs_bind: RPC:       xs_bind 0.0.0.0:947: ok (0)
    kworker/1:1H-131   [001] ....   149.230276: xs_tcp_setup_socket: RPC:       worker connecting xprt ffff8800daa83800 via tcp to 192.168.23.22 (port 55201)
    kworker/1:1H-131   [001] ....   149.230310: xs_tcp_setup_socket: RPC:       ffff8800daa83800 connect status 115 connected 0 sock state 2
    kworker/1:1H-131   [001] ..s.   149.230312: rpc_wake_up_first: RPC:       wake_up_first(ffff8800daa83990 "xprt_sending")
          <idle>-0     [003] ..s.   149.230583: xs_tcp_state_change: RPC:       xs_tcp_state_change client ffff8800daa83800...
          <idle>-0     [003] ..s.   149.230585: xs_tcp_state_change: RPC:       state 1 conn 0 dead 0 zapped 1 sk_shutdown 0
          <idle>-0     [003] ..s.   149.230587: rpc_wake_up_task_queue_locked: RPC:    43 __rpc_wake_up_task (now 4294929525)
          <idle>-0     [003] ..s.   149.230587: rpc_wake_up_task_queue_locked: RPC:    43 disabling timer
          <idle>-0     [003] ..s.   149.230588: rpc_wake_up_task_queue_locked: RPC:    43 removed from queue ffff8800daa83a58 "xprt_pending"
          <idle>-0     [003] .Ns.   149.230592: rpc_wake_up_task_queue_locked: RPC:       __rpc_wake_up_task done
    kworker/3:1H-124   [003] ....   149.230618: __rpc_execute: RPC:    43 __rpc_execute flags=0x681
    kworker/3:1H-124   [003] ....   149.230619: xprt_connect_status: RPC:    43 xprt_connect_status: retrying
    kworker/3:1H-124   [003] ....   149.230621: call_connect_status: RPC:    43 call_connect_status (status -11)
    kworker/3:1H-124   [003] ....   149.230622: call_timeout: RPC:    43 call_timeout (minor)
    kworker/3:1H-124   [003] ....   149.230623: call_bind: RPC:    43 call_bind (status 0)
    kworker/3:1H-124   [003] ....   149.230624: call_connect: RPC:    43 call_connect xprt ffff8800daa83800 is connected
    kworker/3:1H-124   [003] ....   149.230625: call_transmit: RPC:    43 call_transmit (status 0)
    kworker/3:1H-124   [003] ....   149.230625: xprt_prepare_transmit: RPC:    43 xprt_prepare_transmit
    kworker/3:1H-124   [003] ....   149.230626: call_transmit: RPC:    43 rpc_xdr_encode (status 0)
    kworker/3:1H-124   [003] ....   149.230629: xprt_transmit: RPC:    43 xprt_transmit(72)
    kworker/3:1H-124   [003] ....   149.230644: xs_tcp_send_request: RPC:       xs_tcp_send_request(72) = 0
    kworker/3:1H-124   [003] ....   149.230644: xprt_transmit: RPC:    43 xmit complete
    kworker/3:1H-124   [003] ..s.   149.230645: __rpc_sleep_on_priority: RPC:    43 sleep_on(queue "xprt_pending" time 4294929525)
    kworker/3:1H-124   [003] ..s.   149.230646: __rpc_sleep_on_priority: RPC:    43 added to queue ffff8800daa83a58 "xprt_pending"
    kworker/3:1H-124   [003] ..s.   149.230647: __rpc_sleep_on_priority: RPC:    43 setting alarm for 9000 ms
    kworker/3:1H-124   [003] ..s.   149.230649: rpc_wake_up_first: RPC:       wake_up_first(ffff8800daa83990 "xprt_sending")
          <idle>-0     [003] ..s.   149.230989: xs_tcp_data_ready: RPC:       xs_tcp_data_ready...
          <idle>-0     [003] ..s.   149.230990: xs_tcp_data_recv: RPC:       xs_tcp_data_recv started
          <idle>-0     [003] ..s.   149.230992: xs_tcp_data_recv: RPC:       reading TCP record fragment of length 24
          <idle>-0     [003] ..s.   149.230992: xs_tcp_data_recv: RPC:       reading XID (4 bytes)
          <idle>-0     [003] ..s.   149.230994: xs_tcp_data_recv: RPC:       reading request with XID 2f21bdb9
          <idle>-0     [003] ..s.   149.230995: xs_tcp_data_recv: RPC:       reading CALL/REPLY flag (4 bytes)
          <idle>-0     [003] ..s.   149.230995: xs_tcp_data_recv: RPC:       read reply XID 2f21bdb9
          <idle>-0     [003] ..s.   149.230997: xs_tcp_data_recv: RPC:       XID 2f21bdb9 read 16 bytes
          <idle>-0     [003] ..s.   149.230998: xs_tcp_data_recv: RPC:       xprt = ffff8800daa83800, tcp_copied = 24, tcp_offset = 24, tcp_reclen = 24
          <idle>-0     [003] ..s.   149.230999: xprt_complete_rqst: RPC:    43 xid 2f21bdb9 complete (24 bytes received)
          <idle>-0     [003] ..s.   149.231000: rpc_wake_up_task_queue_locked: RPC:    43 __rpc_wake_up_task (now 4294929525)
          <idle>-0     [003] ..s.   149.231000: rpc_wake_up_task_queue_locked: RPC:    43 disabling timer
          <idle>-0     [003] ..s.   149.231002: rpc_wake_up_task_queue_locked: RPC:    43 removed from queue ffff8800daa83a58 "xprt_pending"
          <idle>-0     [003] .Ns.   149.231004: rpc_wake_up_task_queue_locked: RPC:       __rpc_wake_up_task done
          <idle>-0     [003] .Ns.   149.231005: xs_tcp_data_recv: RPC:       xs_tcp_data_recv done
    kworker/3:1H-124   [003] ....   149.231016: __rpc_execute: RPC:    43 __rpc_execute flags=0xe81
    kworker/3:1H-124   [003] ....   149.231017: call_status: RPC:    43 call_status (status 24)
    kworker/3:1H-124   [003] ....   149.231018: call_decode: RPC:    43 call_decode (status 24)
    kworker/3:1H-124   [003] ....   149.231020: call_decode: RPC:    43 call_decode result 0
    kworker/3:1H-124   [003] ....   149.231021: __rpc_execute: RPC:    43 return 0, status 0
    kworker/3:1H-124   [003] ....   149.231022: __rpc_execute: RPC:    43 release task
    kworker/3:1H-124   [003] ....   149.231024: rpc_free: RPC:       freeing buffer of size 396 at ffff8800daafc000
    kworker/3:1H-124   [003] ....   149.231025: xprt_release: RPC:    43 release request ffff8800daa7f000
    kworker/3:1H-124   [003] ....   149.231026: rpc_wake_up_first: RPC:       wake_up_first(ffff8800daa83b20 "xprt_backlog")
    kworker/3:1H-124   [003] ....   149.231027: rpc_release_client: RPC:       rpc_release_client(ffff8800daa7f800)
    kworker/3:1H-124   [003] ....   149.231028: rpc_free_task: RPC:    43 freeing task
          <idle>-0     [000] ..s.   154.237735: svc_tcp_data_ready: svc: socket ffff8800d8c817c0 TCP data ready (svsk ffff8800daaa1000)
            nfsd-4779  [003] ....   154.237808: svc_tcp_recvfrom: svc: tcp_recv ffff8800daaa1000 data 1 conn 0 close 0
            nfsd-4779  [003] ....   154.237814: svc_recvfrom.isra.10: svc: socket ffff8800daaa1000 recvfrom(ffff8800daaa12b8, 4) = 4
            nfsd-4779  [003] ....   154.237815: svc_tcp_recvfrom: svc: TCP record, 92 bytes
            nfsd-4779  [003] ....   154.237818: svc_recvfrom.isra.10: svc: socket ffff8800daaa1000 recvfrom(ffff8800dbb8a000, 4096) = 92
            nfsd-4779  [003] ....   154.237819: svc_tcp_fragment_received: svc: TCP final record (92 bytes)
            nfsd-4779  [003] ....   154.237872: svc_sendto: svc: socket ffff8800daaa1000 sendto([ffff8800d9324000 48... ], 48) = 48 (addr 192.168.23.22, port=867)
            nfsd-4778  [000] ....   154.237892: svc_tcp_recvfrom: svc: tcp_recv ffff8800daaa1000 data 1 conn 0 close 0
            nfsd-4778  [000] ....   154.237896: svc_recvfrom.isra.10: svc: socket ffff8800daaa1000 recvfrom(ffff8800daaa12b8, 4) = -11
            nfsd-4778  [000] ....   154.237897: svc_tcp_recvfrom: RPC: TCP recv_record got -11
            nfsd-4778  [000] ....   154.237897: svc_tcp_recvfrom: RPC: TCP recvfrom got EAGAIN
          <idle>-0     [001] ..s.   214.282477: svc_tcp_data_ready: svc: socket ffff8800d8c817c0 TCP data ready (svsk ffff8800daaa1000)
            nfsd-4779  [003] ....   214.282554: svc_tcp_recvfrom: svc: tcp_recv ffff8800daaa1000 data 1 conn 0 close 0
            nfsd-4779  [003] ....   214.282559: svc_recvfrom.isra.10: svc: socket ffff8800daaa1000 recvfrom(ffff8800daaa12b8, 4) = 4
            nfsd-4779  [003] ....   214.282560: svc_tcp_recvfrom: svc: TCP record, 92 bytes
            nfsd-4779  [003] ....   214.282563: svc_recvfrom.isra.10: svc: socket ffff8800daaa1000 recvfrom(ffff8800dbb8a000, 4096) = 92
            nfsd-4779  [003] ....   214.282564: svc_tcp_fragment_received: svc: TCP final record (92 bytes)
            nfsd-4779  [003] ....   214.282617: svc_sendto: svc: socket ffff8800daaa1000 sendto([ffff8804086ca000 48... ], 48) = 48 (addr 192.168.23.22, port=867)
            nfsd-4778  [000] ....   214.282637: svc_tcp_recvfrom: svc: tcp_recv ffff8800daaa1000 data 1 conn 0 close 0
            nfsd-4778  [000] ....   214.282642: svc_recvfrom.isra.10: svc: socket ffff8800daaa1000 recvfrom(ffff8800daaa12b8, 4) = -11
            nfsd-4778  [000] ....   214.282643: svc_tcp_recvfrom: RPC: TCP recv_record got -11
            nfsd-4778  [000] ....   214.282643: svc_tcp_recvfrom: RPC: TCP recvfrom got EAGAIN
    spamassassin-5827  [000] ..s.   274.471016: svc_tcp_data_ready: svc: socket ffff8800d8c817c0 TCP data ready (svsk ffff8800daaa1000)
            nfsd-4779  [003] ....   274.471080: svc_tcp_recvfrom: svc: tcp_recv ffff8800daaa1000 data 1 conn 0 close 0
            nfsd-4779  [003] ....   274.471085: svc_recvfrom.isra.10: svc: socket ffff8800daaa1000 recvfrom(ffff8800daaa12b8, 4) = 4
            nfsd-4779  [003] ....   274.471085: svc_tcp_recvfrom: svc: TCP record, 92 bytes
            nfsd-4779  [003] ....   274.471087: svc_recvfrom.isra.10: svc: socket ffff8800daaa1000 recvfrom(ffff8800dbb8a000, 4096) = 92
            nfsd-4779  [003] ....   274.471088: svc_tcp_fragment_received: svc: TCP final record (92 bytes)
            nfsd-4779  [003] ....   274.471130: svc_sendto: svc: socket ffff8800daaa1000 sendto([ffff88040a6dd000 48... ], 48) = 48 (addr 192.168.23.22, port=867)
            nfsd-4778  [001] ....   274.471134: svc_tcp_recvfrom: svc: tcp_recv ffff8800daaa1000 data 1 conn 0 close 0
            nfsd-4778  [001] ....   274.471136: svc_recvfrom.isra.10: svc: socket ffff8800daaa1000 recvfrom(ffff8800daaa12b8, 4) = -11
            nfsd-4778  [001] ....   274.471136: svc_tcp_recvfrom: RPC: TCP recv_record got -11
            nfsd-4778  [001] ....   274.471137: svc_tcp_recvfrom: RPC: TCP recvfrom got EAGAIN
          <idle>-0     [000] .Ns.   334.659832: svc_tcp_data_ready: svc: socket ffff8800d8c817c0 TCP data ready (svsk ffff8800daaa1000)
            nfsd-4779  [003] ....   334.659927: svc_tcp_recvfrom: svc: tcp_recv ffff8800daaa1000 data 1 conn 0 close 0
            nfsd-4779  [003] ....   334.659933: svc_recvfrom.isra.10: svc: socket ffff8800daaa1000 recvfrom(ffff8800daaa12b8, 4) = 4
            nfsd-4779  [003] ....   334.659934: svc_tcp_recvfrom: svc: TCP record, 92 bytes
            nfsd-4779  [003] ....   334.659937: svc_recvfrom.isra.10: svc: socket ffff8800daaa1000 recvfrom(ffff8800dbb8a000, 4096) = 92
            nfsd-4779  [003] ....   334.659938: svc_tcp_fragment_received: svc: TCP final record (92 bytes)
            nfsd-4778  [001] ....   334.659952: svc_tcp_recvfrom: svc: tcp_recv ffff8800daaa1000 data 1 conn 0 close 0
            nfsd-4778  [001] ....   334.659957: svc_recvfrom.isra.10: svc: socket ffff8800daaa1000 recvfrom(ffff8800daaa12b8, 4) = -11
            nfsd-4778  [001] ....   334.659957: svc_tcp_recvfrom: RPC: TCP recv_record got -11
            nfsd-4778  [001] ....   334.659958: svc_tcp_recvfrom: RPC: TCP recvfrom got EAGAIN
            nfsd-4779  [003] ....   334.659991: svc_sendto: svc: socket ffff8800daaa1000 sendto([ffff88040a043000 48... ], 48) = 48 (addr 192.168.23.22, port=867)
          <idle>-0     [003] ..s.   394.848497: svc_tcp_data_ready: svc: socket ffff8800d8c817c0 TCP data ready (svsk ffff8800daaa1000)
            nfsd-4779  [003] ....   394.848520: svc_tcp_recvfrom: svc: tcp_recv ffff8800daaa1000 data 1 conn 0 close 0
            nfsd-4779  [003] ....   394.848524: svc_recvfrom.isra.10: svc: socket ffff8800daaa1000 recvfrom(ffff8800daaa12b8, 4) = 4
            nfsd-4779  [003] ....   394.848525: svc_tcp_recvfrom: svc: TCP record, 92 bytes
            nfsd-4779  [003] ....   394.848527: svc_recvfrom.isra.10: svc: socket ffff8800daaa1000 recvfrom(ffff8800dbb8a000, 4096) = 92
            nfsd-4779  [003] ....   394.848528: svc_tcp_fragment_received: svc: TCP final record (92 bytes)
            nfsd-4779  [003] ....   394.848579: svc_sendto: svc: socket ffff8800daaa1000 sendto([ffff880402bc9000 48... ], 48) = 48 (addr 192.168.23.22, port=867)
            nfsd-4778  [001] ....   394.848599: svc_tcp_recvfrom: svc: tcp_recv ffff8800daaa1000 data 1 conn 0 close 0
            nfsd-4778  [001] ....   394.848603: svc_recvfrom.isra.10: svc: socket ffff8800daaa1000 recvfrom(ffff8800daaa12b8, 4) = -11
            nfsd-4778  [001] ....   394.848604: svc_tcp_recvfrom: RPC: TCP recv_record got -11
            nfsd-4778  [001] ....   394.848604: svc_tcp_recvfrom: RPC: TCP recvfrom got EAGAIN
    kworker/3:1H-124   [003] ..s.   449.959363: xs_tcp_state_change: RPC:       xs_tcp_state_change client ffff8800daa83800...
    kworker/3:1H-124   [003] ..s.   449.959366: xs_tcp_state_change: RPC:       state 4 conn 1 dead 0 zapped 1 sk_shutdown 3
    kworker/3:1H-124   [003] ..s.   449.959369: rpc_wake_up_first: RPC:       wake_up_first(ffff8800daa83990 "xprt_sending")
          <idle>-0     [000] ..s.   449.959669: xs_tcp_state_change: RPC:       xs_tcp_state_change client ffff8800daa83800...
          <idle>-0     [000] ..s.   449.959671: xs_tcp_state_change: RPC:       state 5 conn 0 dead 0 zapped 1 sk_shutdown 3
          <idle>-0     [000] ..s.   449.959693: xs_tcp_state_change: RPC:       xs_tcp_state_change client ffff8800daa83800...
          <idle>-0     [000] ..s.   449.959694: xs_tcp_state_change: RPC:       state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
          <idle>-0     [000] ..s.   449.959695: xprt_disconnect_done: RPC:       disconnected transport ffff8800daa83800
          <idle>-0     [000] ..s.   449.959696: xs_tcp_state_change: RPC:       xs_tcp_state_change client ffff8800daa83800...
          <idle>-0     [000] ..s.   449.959696: xs_tcp_state_change: RPC:       state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
          <idle>-0     [000] ..s.   449.959697: xprt_disconnect_done: RPC:       disconnected transport ffff8800daa83800
          <idle>-0     [000] ..s.   449.959698: xs_tcp_data_ready: RPC:       xs_tcp_data_ready...
          <idle>-0     [003] ..s.   455.037231: svc_tcp_data_ready: svc: socket ffff8800d8c817c0 TCP data ready (svsk ffff8800daaa1000)
            nfsd-4779  [003] ....   455.037253: svc_tcp_recvfrom: svc: tcp_recv ffff8800daaa1000 data 1 conn 0 close 0
            nfsd-4779  [003] ....   455.037267: svc_recvfrom.isra.10: svc: socket ffff8800daaa1000 recvfrom(ffff8800daaa12b8, 4) = 4
            nfsd-4779  [003] ....   455.037268: svc_tcp_recvfrom: svc: TCP record, 92 bytes
            nfsd-4779  [003] ....   455.037270: svc_recvfrom.isra.10: svc: socket ffff8800daaa1000 recvfrom(ffff8800dbb8a000, 4096) = 92
            nfsd-4779  [003] ....   455.037272: svc_tcp_fragment_received: svc: TCP final record (92 bytes)
            nfsd-4779  [003] ....   455.037313: svc_sendto: svc: socket ffff8800daaa1000 sendto([ffff880408502000 48... ], 48) = 48 (addr 192.168.23.22, port=867)
            nfsd-4778  [001] ....   455.037340: svc_tcp_recvfrom: svc: tcp_recv ffff8800daaa1000 data 1 conn 0 close 0
            nfsd-4778  [001] ....   455.037345: svc_recvfrom.isra.10: svc: socket ffff8800daaa1000 recvfrom(ffff8800daaa12b8, 4) = -11
            nfsd-4778  [001] ....   455.037346: svc_tcp_recvfrom: RPC: TCP recv_record got -11
            nfsd-4778  [001] ....   455.037347: svc_tcp_recvfrom: RPC: TCP recvfrom got EAGAIN
          <idle>-0     [002] ..s.   515.225890: svc_tcp_data_ready: svc: socket ffff8800d8c817c0 TCP data ready (svsk ffff8800daaa1000)
            nfsd-4779  [002] ....   515.225914: svc_tcp_recvfrom: svc: tcp_recv ffff8800daaa1000 data 1 conn 0 close 0
            nfsd-4779  [002] ....   515.225918: svc_recvfrom.isra.10: svc: socket ffff8800daaa1000 recvfrom(ffff8800daaa12b8, 4) = 4
            nfsd-4779  [002] ....   515.225919: svc_tcp_recvfrom: svc: TCP record, 92 bytes
            nfsd-4779  [002] ....   515.225921: svc_recvfrom.isra.10: svc: socket ffff8800daaa1000 recvfrom(ffff8800dbb8a000, 4096) = 92
            nfsd-4779  [002] ....   515.225922: svc_tcp_fragment_received: svc: TCP final record (92 bytes)
            nfsd-4779  [002] ....   515.225972: svc_sendto: svc: socket ffff8800daaa1000 sendto([ffff8800d8cab000 48... ], 48) = 48 (addr 192.168.23.22, port=867)
            nfsd-4778  [001] ....   515.225990: svc_tcp_recvfrom: svc: tcp_recv ffff8800daaa1000 data 1 conn 0 close 0
            nfsd-4778  [001] ....   515.225995: svc_recvfrom.isra.10: svc: socket ffff8800daaa1000 recvfrom(ffff8800daaa12b8, 4) = -11
            nfsd-4778  [001] ....   515.225995: svc_tcp_recvfrom: RPC: TCP recv_record got -11
            nfsd-4778  [001] ....   515.225996: svc_tcp_recvfrom: RPC: TCP recvfrom got EAGAIN

I don't see that 55201 anywhere. But then again, I didn't look for it
before the port disappeared. I could reboot and look for it again. I
should have saved the full netstat -tapn as well :-/

Oh well, I'll do this again (saving all the info, and the netstat output as well).

-- Steve

[-- Attachment #2: debug-nfs.patch --]
[-- Type: text/x-patch, Size: 939 bytes --]

diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
index 3e44b9b0b78e..90cc377388b4 100644
--- a/net/ipv4/inet_connection_sock.c
+++ b/net/ipv4/inet_connection_sock.c
@@ -225,6 +225,9 @@ tb_not_found:
 			tb->fastreuseport = 0;
 	}
 success:
+	if (!current->mm)
+		trace_printk("%s:%d got port %d\n", current->comm, current->pid,
+			     snum);
 	if (!inet_csk(sk)->icsk_bind_hash)
 		inet_bind_hash(sk, tb, snum);
 	WARN_ON(inet_csk(sk)->icsk_bind_hash != tb);
diff --git a/net/sunrpc/sunrpc.h b/net/sunrpc/sunrpc.h
index f2b7cb540e61..8ea4ddaed8b3 100644
--- a/net/sunrpc/sunrpc.h
+++ b/net/sunrpc/sunrpc.h
@@ -29,6 +29,12 @@ SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 
 #include <linux/net.h>
 
+#undef dprintk
+#undef dprintk_rcu
+
+#define dprintk(args...)	trace_printk(args)
+#define dprintk_rcu(args...)	trace_printk(args)
+
 /*
  * Header for dynamically allocated rpc buffers.
  */
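
A note on the first hunk above: the "if (!current->mm)" test is
presumably what limits the trace_printk() to binds performed from
kernel context, since kernel threads (the kworkers doing the xs_bind()
here) normally run without a user address space and therefore have a
NULL mm pointer. A minimal sketch of that idea, using a hypothetical
helper name that does not appear in the patch:

#include <linux/sched.h>

/*
 * Hypothetical helper, not part of the debug patch: true when the
 * caller is running in a kernel thread.  Kernel threads have no user
 * address space, so current->mm is NULL for them, which is what lets
 * the patch above skip ordinary user-space binds and only log the
 * kernel-thread ones.
 */
static inline bool bind_done_by_kernel_thread(void)
{
	return current->mm == NULL;
}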

^ permalink raw reply related	[flat|nested] 77+ messages in thread

* Re: [REGRESSION] NFS is creating a hidden port (left over from xs_bind() )
@ 2015-06-19 17:17                         ` Steven Rostedt
  0 siblings, 0 replies; 77+ messages in thread
From: Steven Rostedt @ 2015-06-19 17:17 UTC (permalink / raw)
  To: Jeff Layton
  Cc: Trond Myklebust, Eric Dumazet, Anna Schumaker,
	Linux NFS Mailing List, Linux Network Devel Mailing List, LKML,
	Andrew Morton, bfields

On Fri, 19 Jun 2015 12:25:53 -0400
Steven Rostedt <rostedt@goodmis.org> wrote:


> I don't see that 55201 anywhere. But then again, I didn't look for it
> before the port disappeared. I could reboot and look for it again. I
> should have saved the full netstat -tapn as well :-/

Of course I didn't find it anywhere; that's the port on my wife's box
that port 947 was connected to.

Now I even went over to my wife's box and ran

 # rpcinfo -p localhost
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  34243  status
    100024    1   tcp  34498  status

which doesn't show anything.

But something is listening to that port...

 # netstat -ntap |grep 55201
tcp        0      0 0.0.0.0:55201           0.0.0.0:*               LISTEN   
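
As an aside on why the two tools disagree: rpcinfo -p only lists
services that have registered with rpcbind (the portmapper), while
netstat reads the socket tables the kernel exports in /proc/net/tcp
(ss gets the same view over netlink), so a listener can be visible to
one and not the other. Below is a small, purely illustrative
stand-alone check along those lines; it is not something used in this
thread, the file name and helper logic are made up, and 55201 is only
the default because it is the port grepped for above.

/* check_port.c (hypothetical): look for a local TCP port in
 * /proc/net/tcp, the same table netstat consults.  A port the kernel
 * still holds in its bind hash but that never shows up here is
 * invisible to these tools, which is how a "hidden" port looks from
 * user space.  IPv4 only; /proc/net/tcp6 would need the same
 * treatment.
 */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
	unsigned int wanted = argc > 1 ? atoi(argv[1]) : 55201;
	char line[512];
	FILE *f = fopen("/proc/net/tcp", "r");

	if (!f) {
		perror("/proc/net/tcp");
		return 1;
	}
	if (!fgets(line, sizeof(line), f)) {	/* skip the header line */
		fclose(f);
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		unsigned int lport;

		/* second column is local_address as HEXIP:HEXPORT */
		if (sscanf(line, " %*d: %*8[0-9A-Fa-f]:%x", &lport) == 1 &&
		    lport == wanted) {
			printf("port %u is visible: %s", wanted, line);
			fclose(f);
			return 0;
		}
	}
	printf("port %u not present in /proc/net/tcp\n", wanted);
	fclose(f);
	return 1;
}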

I rebooted again, but this time I ran this on my wife's box:

 # trace-cmd record -e nfs -e nfs4 -e net -e skb -e sock -e udp -e workqueue -e sunrpc

I started it when my server started booting the kernel, and kept it
running till the port vanished.

The full trace can be downloaded from
http://rostedt.homelinux.com/private/wife-trace.txt

Here's some interesting output from that trace:

ksoftirq-13      1..s. 12272627.681760: netif_receive_skb:    dev=lo skbaddr=0xffff88020944c600 len=88
ksoftirq-13      1..s. 12272627.681776: net_dev_queue:        dev=eth0 skbaddr=0xffff880234e5b100 len=42
ksoftirq-13      1..s. 12272627.681777: net_dev_start_xmit:   dev=eth0 queue_mapping=0 skbaddr=0xffff880234e5b100 vlan_tagged=0 vlan_proto=0x0000 vlan_tci=0x0000 protocol=0x0806 ip_summed=0 len=42 data_len=0 network_offset=14 transport_offset_valid=0 transport_offset=65533 tx_flags=0 gso_size=0 gso_segs=0 gso_type=0
ksoftirq-13      1..s. 12272627.681779: net_dev_xmit:         dev=eth0 skbaddr=0xffff880234e5b100 len=42 rc=0
ksoftirq-13      1..s. 12272627.681780: kfree_skb:            skbaddr=0xffff88023444cf00 protocol=2048 location=0xffffffff81422a72
ksoftirq-13      1..s. 12272627.681783: rpc_socket_error:     error=-113 socket:[11886206] dstaddr=192.168.23.9/2049 state=2 () sk_state=2 ()
ksoftirq-13      1..s. 12272627.681785: rpc_task_wakeup:      task:18128@0 flags=5281 state=0006 status=-113 timeout=45000 queue=xprt_pending
ksoftirq-13      1d.s. 12272627.681786: workqueue_queue_work: work struct=0xffff8800b5a94588 function=rpc_async_schedule workqueue=0xffff880234666800 req_cpu=512 cpu=1
ksoftirq-13      1d.s. 12272627.681787: workqueue_activate_work: work struct 0xffff8800b5a94588
ksoftirq-13      1..s. 12272627.681791: rpc_socket_state_change: socket:[11886206] dstaddr=192.168.23.9/2049 state=2 () sk_state=7 ()
ksoftirq-13      1..s. 12272627.681792: kfree_skb:            skbaddr=0xffff88020944c600 protocol=2048 location=0xffffffff81482c05
kworker/-20111   1.... 12272627.681796: workqueue_execute_start: work struct 0xffff8800b5a94588: function rpc_async_schedule
kworker/-20111   1.... 12272627.681797: rpc_task_run_action:  task:18128@0 flags=5281 state=0005 status=-113 action=call_connect_status
kworker/-20111   1.... 12272627.681798: rpc_task_run_action:  task:18128@0 flags=5281 state=0005 status=-113 action=call_connect_status
kworker/-20111   1.... 12272627.681798: rpc_connect_status:   task:18128@0, status -113
kworker/-20111   1..s. 12272627.681799: rpc_task_sleep:       task:18128@0 flags=5281 state=0005 status=0 timeout=750 queue=delayq
kworker/-20111   1.... 12272627.681800: workqueue_execute_end: work struct 0xffff8800b5a94588

  <idle>-0       1..s. 12272630.688741: rpc_task_wakeup:      task:18128@0 flags=5281 state=0006 status=-110 timeout=750 queue=delayq
  <idle>-0       1dNs. 12272630.688749: workqueue_queue_work: work struct=0xffff8800b5a94588 function=rpc_async_schedule workqueue=0xffff880234666800 req_cpu=512 cpu=1
  <idle>-0       1dNs. 12272630.688749: workqueue_activate_work: work struct 0xffff8800b5a94588
kworker/-20111   1.... 12272630.688758: workqueue_execute_start: work struct 0xffff8800b5a94588: function rpc_async_schedule
kworker/-20111   1.... 12272630.688759: rpc_task_run_action:  task:18128@0 flags=5281 state=0005 status=-110 action=call_timeout
kworker/-20111   1.... 12272630.688760: rpc_task_run_action:  task:18128@0 flags=5281 state=0005 status=0 action=call_timeout
kworker/-20111   1.... 12272630.688760: rpc_task_run_action:  task:18128@0 flags=5281 state=0005 status=0 action=call_bind
kworker/-20111   1.... 12272630.688761: rpc_task_run_action:  task:18128@0 flags=5281 state=0005 status=0 action=call_connect
kworker/-20111   1..s. 12272630.688762: rpc_task_sleep:       task:18128@0 flags=5281 state=0005 status=0 timeout=45000 queue=xprt_pending
kworker/-20111   1.... 12272630.688765: workqueue_execute_end: work struct 0xffff8800b5a94588
  <idle>-0       3d.s. 12272630.696742: workqueue_queue_work: work struct=0xffff880234ac9060 function=disk_events_workfn workqueue=0xffff8802370d9000 req_cpu=512 cpu=3
  <idle>-0       3d.s. 12272630.696744: workqueue_activate_work: work struct 0xffff880234ac9060
kworker/-7491    3.... 12272630.696760: workqueue_execute_start: work struct 0xffff880234ac9060: function disk_events_workfn
kworker/-7491    3d... 12272630.696827: workqueue_queue_work: work struct=0xffff8802347440b8 function=ata_sff_pio_task workqueue=0xffff880234491c00 req_cpu=512 cpu=3
kworker/-7491    3d... 12272630.696828: workqueue_activate_work: work struct 0xffff8802347440b8
kworker/-16140   3.... 12272630.696837: workqueue_execute_start: work struct 0xffff8802347440b8: function ata_sff_pio_task
kworker/-16140   3.... 12272630.696853: workqueue_execute_end: work struct 0xffff8802347440b8
kworker/-7491    3.... 12272630.697383: workqueue_execute_end: work struct 0xffff880234ac9060

  <idle>-0       1d.s. 12272654.753029: workqueue_queue_work: work struct=0xffff8802361f4de0 function=xs_tcp_setup_socket workqueue=0xffff880234666800 req_cpu=512 cpu=1
  <idle>-0       1d.s. 12272654.753031: workqueue_activate_work: work struct 0xffff8802361f4de0
kworker/-20111   1.... 12272654.753049: workqueue_execute_start: work struct 0xffff8802361f4de0: function xs_tcp_setup_socket
kworker/-20111   1..s. 12272654.753054: rpc_socket_error:     error=-113 socket:[11886206] dstaddr=192.168.23.9/2049 state=2 () sk_state=7 ()
kworker/-20111   1.... 12272654.753055: rpc_socket_reset_connection: error=0 socket:[11886206] dstaddr=192.168.23.9/2049 state=1 () sk_state=7 ()
kworker/-20111   1..s. 12272654.753075: net_dev_queue:        dev=eth0 skbaddr=0xffff880082117ae8 len=74
kworker/-20111   1..s. 12272654.753083: net_dev_start_xmit:   dev=eth0 queue_mapping=0 skbaddr=0xffff880082117ae8 vlan_tagged=0 vlan_proto=0x0000 vlan_tci=0x0000 protocol=0x0800 ip_summed=0 len=74 data_len=0 network_offset=14 transport_offset_valid=1 transport_offset=34 tx_flags=0 gso_size=0 gso_segs=1 gso_type=0
kworker/-20111   1..s. 12272654.753088: net_dev_xmit:         dev=eth0 skbaddr=0xffff880082117ae8 len=74 rc=0
kworker/-20111   1.... 12272654.753090: rpc_socket_connect:   error=-115 socket:[11886206] dstaddr=192.168.23.9/2049 state=2 () sk_state=2 ()
kworker/-20111   1.... 12272654.753093: workqueue_execute_end: work struct 0xffff8802361f4de0
  <idle>-0       1..s. 12272654.753320: consume_skb:          skbaddr=0xffff880082117ae8
  <idle>-0       1..s. 12272654.753503: napi_gro_receive_entry: dev=eth0 napi_id=0 queue_mapping=0 skbaddr=0xffff8801f647d100 vlan_tagged=0 vlan_proto=0x0000 vlan_tci=0x0000 protocol=0x0806 ip_summed=0 hash=0x00000000 l4_hash=0 len=46 data_len=0 truesize=704 mac_header_valid=1 mac_header=-14 nr_frags=0 gso_size=0 gso_type=0
  <idle>-0       1.Ns. 12272654.753508: netif_receive_skb:    dev=eth0 skbaddr=0xffff8801f647d100 len=46
  <idle>-0       1.Ns. 12272654.753519: consume_skb:          skbaddr=0xffff8800a9aa2d00
  <idle>-0       1.Ns. 12272654.753522: net_dev_queue:        dev=eth0 skbaddr=0xffff8800a9aa2d00 len=42
  <idle>-0       1.Ns. 12272654.753523: net_dev_start_xmit:   dev=eth0 queue_mapping=0 skbaddr=0xffff8800a9aa2d00 vlan_tagged=0 vlan_proto=0x0000 vlan_tci=0x0000 protocol=0x0806 ip_summed=0 len=42 data_len=0 network_offset=14 transport_offset_valid=0 transport_offset=65533 tx_flags=0 gso_size=0 gso_segs=0 gso_type=0
  <idle>-0       1.Ns. 12272654.753525: net_dev_xmit:         dev=eth0 skbaddr=0xffff8800a9aa2d00 len=42 rc=0
  <idle>-0       1.Ns. 12272654.753526: consume_skb:          skbaddr=0xffff8801f647d100
  <idle>-0       1..s. 12272654.753585: napi_gro_receive_entry: dev=eth0 napi_id=0 queue_mapping=0 skbaddr=0xffff8801f647d100 vlan_tagged=0 vlan_proto=0x0000 vlan_tci=0x0000 protocol=0x0800 ip_summed=1 hash=0x00000000 l4_hash=0 len=46 data_len=0 truesize=704 mac_header_valid=1 mac_header=-14 nr_frags=0 gso_size=0 gso_type=0
  <idle>-0       1.Ns. 12272654.753589: netif_receive_skb:    dev=eth0 skbaddr=0xffff8801f647d100 len=46
  <idle>-0       1.Ns. 12272654.753595: rpc_socket_error:     error=-111 socket:[11886206] dstaddr=192.168.23.9/2049 state=2 () sk_state=2 ()
  <idle>-0       1.Ns. 12272654.753597: rpc_task_wakeup:      task:18128@0 flags=5281 state=0006 status=-111 timeout=45000 queue=xprt_pending
  <idle>-0       1dNs. 12272654.753598: workqueue_queue_work: work struct=0xffff8800b5a94588 function=rpc_async_schedule workqueue=0xffff880234666800 req_cpu=512 cpu=1
  <idle>-0       1dNs. 12272654.753599: workqueue_activate_work: work struct 0xffff8800b5a94588
  <idle>-0       1.Ns. 12272654.753601: rpc_socket_state_change: socket:[11886206] dstaddr=192.168.23.9/2049 state=2 () sk_state=7 ()
kworker/-20111   1.... 12272654.753607: workqueue_execute_start: work struct 0xffff8800b5a94588: function rpc_async_schedule
kworker/-20111   1.... 12272654.753608: rpc_task_run_action:  task:18128@0 flags=5281 state=0005 status=-111 action=call_connect_status
kworker/-20111   1.... 12272654.753609: rpc_task_run_action:  task:18128@0 flags=5281 state=0005 status=-111 action=call_connect_status
kworker/-20111   1.... 12272654.753609: rpc_connect_status:   task:18128@0, status -111
kworker/-20111   1..s. 12272654.753610: rpc_task_sleep:       task:18128@0 flags=5281 state=0005 status=0 timeout=750 queue=delayq
kworker/-20111   1.... 12272654.753612: workqueue_execute_end: work struct 0xffff8800b5a94588
  <idle>-0       1..s. 12272654.753692: consume_skb:          skbaddr=0xffff8800a9aa2d00

  <idle>-0       1.Ns. 12272657.605105: netif_receive_skb:    dev=eth0 skbaddr=0xffff8802223d1e00 len=345
  <idle>-0       1.Ns. 12272657.605108: kfree_skb:            skbaddr=0xffff8802223d1e00 protocol=2048 location=0xffffffff8147e361
  <idle>-0       1..s. 12272657.760044: rpc_task_wakeup:      task:18128@0 flags=5281 state=0006 status=-110 timeout=750 queue=delayq
  <idle>-0       1dNs. 12272657.760051: workqueue_queue_work: work struct=0xffff8800b5a94588 function=rpc_async_schedule workqueue=0xffff880234666800 req_cpu=512 cpu=1
  <idle>-0       1dNs. 12272657.760052: workqueue_activate_work: work struct 0xffff8800b5a94588
kworker/-20111   1.... 12272657.760063: workqueue_execute_start: work struct 0xffff8800b5a94588: function rpc_async_schedule
kworker/-20111   1.... 12272657.760064: rpc_task_run_action:  task:18128@0 flags=5281 state=0005 status=-110 action=call_timeout
kworker/-20111   1.... 12272657.760065: rpc_task_run_action:  task:18128@0 flags=5281 state=0005 status=0 action=call_timeout
kworker/-20111   1.... 12272657.760066: rpc_task_run_action:  task:18128@0 flags=5281 state=0005 status=0 action=call_bind
kworker/-20111   1.... 12272657.760066: rpc_task_run_action:  task:18128@0 flags=5281 state=0005 status=0 action=call_connect
kworker/-20111   1..s. 12272657.760068: rpc_task_sleep:       task:18128@0 flags=5281 state=0005 status=0 timeout=45000 queue=xprt_pending
kworker/-20111   1.... 12272657.760070: workqueue_execute_end: work struct 0xffff8800b5a94588
  <idle>-0       3d.s. 12272657.824024: workqueue_queue_work: work struct=0xffff8800b5551760 function=disk_events_workfn workqueue=0xffff8802370d9000 req_cpu=512 cpu=3
  <idle>-0       3d.s. 12272657.824025: workqueue_activate_work: work struct 0xffff8800b5551760
kworker/-7491    3.... 12272657.824041: workqueue_execute_start: work struct 0xffff8800b5551760: function disk_events_workfn
kworker/-7491    3.... 12272657.824807: workqueue_execute_end: work struct 0xffff8800b5551760


  <idle>-0       1d.s. 12272705.808564: workqueue_queue_work: work struct=0xffff8802361f4de0 function=xs_tcp_setup_socket workqueue=0xffff880234666800 req_cpu=512 cpu=1
  <idle>-0       1d.s. 12272705.808565: workqueue_activate_work: work struct 0xffff8802361f4de0
kworker/-20111   1.... 12272705.808574: workqueue_execute_start: work struct 0xffff8802361f4de0: function xs_tcp_setup_socket
kworker/-20111   1..s. 12272705.808580: rpc_socket_error:     error=-111 socket:[11886206] dstaddr=192.168.23.9/2049 state=2 () sk_state=7 ()
kworker/-20111   1.... 12272705.808581: rpc_socket_reset_connection: error=0 socket:[11886206] dstaddr=192.168.23.9/2049 state=1 () sk_state=7 ()
kworker/-20111   1..s. 12272705.808599: net_dev_queue:        dev=eth0 skbaddr=0xffff880082117ae8 len=74
kworker/-20111   1..s. 12272705.808602: net_dev_start_xmit:   dev=eth0 queue_mapping=0 skbaddr=0xffff880082117ae8 vlan_tagged=0 vlan_proto=0x0000 vlan_tci=0x0000 protocol=0x0800 ip_summed=0 len=74 data_len=0 network_offset=14 transport_offset_valid=1 transport_offset=34 tx_flags=0 gso_size=0 gso_segs=1 gso_type=0
kworker/-20111   1..s. 12272705.808605: net_dev_xmit:         dev=eth0 skbaddr=0xffff880082117ae8 len=74 rc=0
kworker/-20111   1.... 12272705.808614: rpc_socket_connect:   error=-115 socket:[11886206] dstaddr=192.168.23.9/2049 state=2 () sk_state=2 ()
kworker/-20111   1.... 12272705.808615: workqueue_execute_end: work struct 0xffff8802361f4de0
  <idle>-0       1..s. 12272705.808841: napi_gro_receive_entry: dev=eth0 napi_id=0 queue_mapping=0 skbaddr=0xffff8800a60e8900 vlan_tagged=0 vlan_proto=0x0000 vlan_tci=0x0000 protocol=0x0800 ip_summed=1 hash=0x00000000 l4_hash=0 len=60 data_len=0 truesize=768 mac_header_valid=1 mac_header=-14 nr_frags=0 gso_size=0 gso_type=0
  <idle>-0       1.Ns. 12272705.808849: netif_receive_skb:    dev=eth0 skbaddr=0xffff8800a60e8900 len=60
  <idle>-0       1.Ns. 12272705.808872: rpc_socket_state_change: socket:[11886206] dstaddr=192.168.23.9/2049 state=2 () sk_state=1 ()
  <idle>-0       1.Ns. 12272705.808874: rpc_task_wakeup:      task:18128@0 flags=5281 state=0006 status=-11 timeout=45000 queue=xprt_pending
  <idle>-0       1dNs. 12272705.808875: workqueue_queue_work: work struct=0xffff8800b5a94588 function=rpc_async_schedule workqueue=0xffff880234666800 req_cpu=512 cpu=1
  <idle>-0       1dNs. 12272705.808876: workqueue_activate_work: work struct 0xffff8800b5a94588
  <idle>-0       1.Ns. 12272705.808881: net_dev_queue:        dev=eth0 skbaddr=0xffff8800a60e8500 len=66
  <idle>-0       1.Ns. 12272705.808883: net_dev_start_xmit:   dev=eth0 queue_mapping=0 skbaddr=0xffff8800a60e8500 vlan_tagged=0 vlan_proto=0x0000 vlan_tci=0x0000 protocol=0x0800 ip_summed=0 len=66 data_len=0 network_offset=14 transport_offset_valid=1 transport_offset=34 tx_flags=0 gso_size=0 gso_segs=1 gso_type=0
  <idle>-0       1.Ns. 12272705.808885: net_dev_xmit:         dev=eth0 skbaddr=0xffff8800a60e8500 len=66 rc=0
  <idle>-0       1.Ns. 12272705.808887: consume_skb:          skbaddr=0xffff880082117ae8
kworker/-20111   1.... 12272705.808895: workqueue_execute_start: work struct 0xffff8800b5a94588: function rpc_async_schedule
kworker/-20111   1.... 12272705.808896: rpc_task_run_action:  task:18128@0 flags=5281 state=0005 status=-11 action=call_connect_status
kworker/-20111   1.... 12272705.808897: rpc_task_run_action:  task:18128@0 flags=5281 state=0005 status=-11 action=call_connect_status
kworker/-20111   1.... 12272705.808897: rpc_connect_status:   task:18128@0, status -11
kworker/-20111   1.... 12272705.808897: rpc_task_run_action:  task:18128@0 flags=5281 state=0005 status=0 action=call_timeout
kworker/-20111   1.... 12272705.808898: rpc_task_run_action:  task:18128@0 flags=5281 state=0005 status=0 action=call_bind
kworker/-20111   1.... 12272705.808899: rpc_task_run_action:  task:18128@0 flags=5281 state=0005 status=0 action=call_connect
kworker/-20111   1.... 12272705.808899: rpc_task_run_action:  task:18128@0 flags=5281 state=0005 status=0 action=call_transmit
kworker/-20111   1..s. 12272705.808912: net_dev_queue:        dev=eth0 skbaddr=0xffff880082117ae8 len=162
kworker/-20111   1..s. 12272705.808913: net_dev_start_xmit:   dev=eth0 queue_mapping=0 skbaddr=0xffff880082117ae8 vlan_tagged=0 vlan_proto=0x0000 vlan_tci=0x0000 protocol=0x0800 ip_summed=0 len=162 data_len=0 network_offset=14 transport_offset_valid=1 transport_offset=34 tx_flags=0 gso_size=0 gso_segs=1 gso_type=0
kworker/-20111   1..s. 12272705.808913: net_dev_xmit:         dev=eth0 skbaddr=0xffff880082117ae8 len=162 rc=0
kworker/-20111   1..s. 12272705.808916: rpc_task_sleep:       task:18128@0 flags=5a81 state=0005 status=0 timeout=45000 queue=xprt_pending
kworker/-20111   1.... 12272705.808917: workqueue_execute_end: work struct 0xffff8800b5a94588
  <idle>-0       1..s. 12272705.809098: consume_skb:          skbaddr=0xffff8800a60e8500

  <idle>-0       1.Ns. 12272705.809840: rpc_task_wakeup:      task:18128@0 flags=5a81 state=0006 status=0 timeout=45000 queue=xprt_pending
  <idle>-0       1dNs. 12272705.809842: workqueue_queue_work: work struct=0xffff8800b5a94588 function=rpc_async_schedule workqueue=0xffff880234666800 req_cpu=512 cpu=1
  <idle>-0       1dNs. 12272705.809842: workqueue_activate_work: work struct 0xffff8800b5a94588
kworker/-20111   1.... 12272705.809853: workqueue_execute_start: work struct 0xffff8800b5a94588: function rpc_async_schedule
kworker/-20111   1.... 12272705.809853: rpc_task_run_action:  task:18128@0 flags=5a81 state=0005 status=0 action=call_status
kworker/-20111   1.... 12272705.809854: rpc_task_run_action:  task:18128@0 flags=5a81 state=0005 status=0 action=call_status
kworker/-20111   1.... 12272705.809854: rpc_task_run_action:  task:18128@0 flags=5a81 state=0005 status=44 action=call_decode
kworker/-20111   1.... 12272705.809856: rpc_task_run_action:  task:18128@0 flags=5a81 state=0005 status=-10022 action=rpc_exit_task
kworker/-20111   1.... 12272705.809858: nfs4_renew_async:     error=-10022 () dstaddr=192.168.23.9
  <idle>-0       1.Ns. 12272705.810000: consume_skb:          skbaddr=0xffff8800a60e8900
kworker/-20111   1.... 12272705.810033: workqueue_execute_end: work struct 0xffff8800b5a94588
192.168.-16171   3.... 12272705.810062: rpc_task_begin:       task:0@0 flags=5280 state=0000 status=0 action=irq_stack_union
192.168.-16171   3.... 12272705.810068: rpc_task_run_action:  task:18129@0 flags=5280 state=0005 status=0 action=call_start
192.168.-16171   3.... 12272705.810069: rpc_task_run_action:  task:18129@0 flags=5280 state=0005 status=0 action=call_reserve
192.168.-16171   3.... 12272705.810071: rpc_task_run_action:  task:18129@0 flags=5280 state=0005 status=0 action=call_reserveresult
192.168.-16171   3.... 12272705.810071: rpc_task_run_action:  task:18129@0 flags=5280 state=0005 status=0 action=call_refresh
192.168.-16171   3.... 12272705.810073: rpc_task_run_action:  task:18129@0 flags=5280 state=0005 status=0 action=call_refreshresult
192.168.-16171   3.... 12272705.810073: rpc_task_run_action:  task:18129@0 flags=5280 state=0005 status=0 action=call_allocate
192.168.-16171   3.... 12272705.810074: rpc_task_run_action:  task:18129@0 flags=5280 state=0005 status=0 action=call_bind
192.168.-16171   3.... 12272705.810075: rpc_task_run_action:  task:18129@0 flags=5280 state=0005 status=0 action=call_connect
192.168.-16171   3.... 12272705.810075: rpc_task_run_action:  task:18129@0 flags=5280 state=0005 status=0 action=call_transmit
192.168.-16171   3..s. 12272705.810095: net_dev_queue:        dev=eth0 skbaddr=0xffff8800a9b924e8 len=162
192.168.-16171   3..s. 12272705.810097: net_dev_start_xmit:   dev=eth0 queue_mapping=0 skbaddr=0xffff8800a9b924e8 vlan_tagged=0 vlan_proto=0x0000 vlan_tci=0x0000 protocol=0x0800 ip_summed=0 len=162 data_len=0 network_offset=14 transport_offset_valid=1 transport_offset=34 tx_flags=0 gso_size=0 gso_segs=1 gso_type=0
192.168.-16171   3..s. 12272705.810098: net_dev_xmit:         dev=eth0 skbaddr=0xffff8800a9b924e8 len=162 rc=0
192.168.-16171   3..s. 12272705.810101: rpc_task_sleep:       task:18129@0 flags=5a80 state=0005 status=0 timeout=15000 queue=xprt_pending
  <idle>-0       1..s. 12272705.810318: napi_gro_receive_entry: dev=eth0 napi_id=0 queue_mapping=0 skbaddr=0xffff8800a60e8900 vlan_tagged=0 vlan_proto=0x0000 vlan_tci=0x0000 protocol=0x0800 ip_summed=1 hash=0x00000000 l4_hash=0 len=100 data_len=0 truesize=768 mac_header_valid=1 mac_header=-14 nr_frags=0 gso_size=0 gso_type=0
  <idle>-0       1.Ns. 12272705.810326: netif_receive_skb:    dev=eth0 skbaddr=0xffff8800a60e8900 len=100
  <idle>-0       1.Ns. 12272705.810344: rpc_task_wakeup:      task:18129@0 flags=5a80 state=0006 status=0 timeout=15000 queue=xprt_pending
  <idle>-0       1.Ns. 12272705.810349: consume_skb:          skbaddr=0xffff8800a9b924e8
192.168.-16171   3.... 12272705.810379: rpc_task_run_action:  task:18129@0 flags=5a80 state=0005 status=0 action=call_status
192.168.-16171   3.... 12272705.810385: rpc_task_run_action:  task:18129@0 flags=5a80 state=0005 status=0 action=call_status
192.168.-16171   3.... 12272705.810385: rpc_task_run_action:  task:18129@0 flags=5a80 state=0005 status=44 action=call_decode
192.168.-16171   3.... 12272705.810387: rpc_task_run_action:  task:18129@0 flags=5a80 state=0005 status=-10022 action=rpc_exit_task
192.168.-16171   3.... 12272705.810397: rpc_task_begin:       task:0@0 flags=5280 state=0000 status=0 action=irq_stack_union
192.168.-16171   3.... 12272705.810398: rpc_task_run_action:  task:18130@0 flags=5280 state=0005 status=0 action=call_start
192.168.-16171   3.... 12272705.810398: rpc_task_run_action:  task:18130@0 flags=5280 state=0005 status=0 action=call_reserve
192.168.-16171   3.... 12272705.810399: rpc_task_run_action:  task:18130@0 flags=5280 state=0005 status=0 action=call_reserveresult
192.168.-16171   3.... 12272705.810399: rpc_task_run_action:  task:18130@0 flags=5280 state=0005 status=0 action=call_refresh
192.168.-16171   3.... 12272705.810400: rpc_task_run_action:  task:18130@0 flags=5280 state=0005 status=0 action=call_refreshresult
192.168.-16171   3.... 12272705.810400: rpc_task_run_action:  task:18130@0 flags=5280 state=0005 status=0 action=call_allocate
192.168.-16171   3.... 12272705.810404: rpc_task_run_action:  task:18130@0 flags=5280 state=0005 status=0 action=call_bind
192.168.-16171   3.... 12272705.810404: rpc_task_run_action:  task:18130@0 flags=5280 state=0005 status=0 action=call_connect
192.168.-16171   3.... 12272705.810404: rpc_task_run_action:  task:18130@0 flags=5280 state=0005 status=0 action=call_transmit
192.168.-16171   3..s. 12272705.810417: net_dev_queue:        dev=eth0 skbaddr=0xffff8800a9b92ae8 len=254
192.168.-16171   3..s. 12272705.810418: net_dev_start_xmit:   dev=eth0 queue_mapping=0 skbaddr=0xffff8800a9b92ae8 vlan_tagged=0 vlan_proto=0x0000 vlan_tci=0x0000 protocol=0x0800 ip_summed=0 len=254 data_len=0 network_offset=14 transport_offset_valid=1 transport_offset=34 tx_flags=0 gso_size=0 gso_segs=1 gso_type=0
192.168.-16171   3..s. 12272705.810426: net_dev_xmit:         dev=eth0 skbaddr=0xffff8800a9b92ae8 len=254 rc=0
192.168.-16171   3..s. 12272705.810428: rpc_task_sleep:       task:18130@0 flags=5a80 state=0005 status=0 timeout=15000 queue=xprt_pending
192.168.-16171   3.... 12272705.810902: rpc_task_run_action:  task:18130@0 flags=5a80 state=0005 status=0 action=call_status
192.168.-16171   3.... 12272705.810908: rpc_task_run_action:  task:18130@0 flags=5a80 state=0005 status=0 action=call_status
192.168.-16171   3.... 12272705.810908: rpc_task_run_action:  task:18130@0 flags=5a80 state=0005 status=60 action=call_decode
192.168.-16171   3.... 12272705.810910: rpc_task_run_action:  task:18130@0 flags=5a80 state=0005 status=0 action=rpc_exit_task
192.168.-16171   3.... 12272705.810914: nfs4_setclientid:     error=0 (ACCESS) dstaddr=192.168.23.9
192.168.-16171   3.... 12272705.810915: rpc_task_begin:       task:0@0 flags=5280 state=0000 status=0 action=irq_stack_union
192.168.-16171   3.... 12272705.810916: rpc_task_run_action:  task:18131@0 flags=5280 state=0005 status=0 action=call_start
192.168.-16171   3.... 12272705.810916: rpc_task_run_action:  task:18131@0 flags=5280 state=0005 status=0 action=call_reserve
192.168.-16171   3.... 12272705.810917: rpc_task_run_action:  task:18131@0 flags=5280 state=0005 status=0 action=call_reserveresult
192.168.-16171   3.... 12272705.810917: rpc_task_run_action:  task:18131@0 flags=5280 state=0005 status=0 action=call_refresh
192.168.-16171   3.... 12272705.810918: rpc_task_run_action:  task:18131@0 flags=5280 state=0005 status=0 action=call_refreshresult
192.168.-16171   3.... 12272705.810918: rpc_task_run_action:  task:18131@0 flags=5280 state=0005 status=0 action=call_allocate
192.168.-16171   3.... 12272705.810919: rpc_task_run_action:  task:18131@0 flags=5280 state=0005 status=0 action=call_bind
192.168.-16171   3.... 12272705.810919: rpc_task_run_action:  task:18131@0 flags=5280 state=0005 status=0 action=call_connect
192.168.-16171   3.... 12272705.810919: rpc_task_run_action:  task:18131@0 flags=5280 state=0005 status=0 action=call_transmit
192.168.-16171   3..s. 12272705.810931: net_dev_queue:        dev=eth0 skbaddr=0xffff8800a9b92ce8 len=170
192.168.-16171   3..s. 12272705.810932: net_dev_start_xmit:   dev=eth0 queue_mapping=0 skbaddr=0xffff8800a9b92ce8 vlan_tagged=0 vlan_proto=0x0000 vlan_tci=0x0000 protocol=0x0800 ip_summed=0 len=170 data_len=0 network_offset=14 transport_offset_valid=1 transport_offset=34 tx_flags=0 gso_size=0 gso_segs=1 gso_type=0
192.168.-16171   3..s. 12272705.810933: net_dev_xmit:         dev=eth0 skbaddr=0xffff8800a9b92ce8 len=170 rc=0
192.168.-16171   3..s. 12272705.810936: rpc_task_sleep:       task:18131@0 flags=5a80 state=0005 status=0 timeout=15000 queue=xprt_pending
192.168.-16171   3.... 12272705.811213: rpc_task_run_action:  task:18131@0 flags=5a80 state=0005 status=0 action=call_status
192.168.-16171   3.... 12272705.811220: rpc_task_run_action:  task:18131@0 flags=5a80 state=0005 status=0 action=call_status
192.168.-16171   3.... 12272705.811220: rpc_task_run_action:  task:18131@0 flags=5a80 state=0005 status=44 action=call_decode
192.168.-16171   3.... 12272705.811222: rpc_task_run_action:  task:18131@0 flags=5a80 state=0005 status=0 action=rpc_exit_task
192.168.-16171   3.... 12272705.811227: nfs4_setclientid_confirm: error=0 (ACCESS) dstaddr=192.168.23.9


And it goes on, but you can look at the full trace. I just searched
for "rpc".

Maybe this will shed more light on the issue. I'll keep this kernel on
my server for a little longer, but it's going to start triggering
rkhunter warnings about hidden ports again.

-- Steve



^ permalink raw reply	[flat|nested] 77+ messages in thread

  <idle>-0       1.Ns. 12272657.605108: kfree_skb:            skbaddr=0xffff8802223d1e00 protocol=2048 location=0xffffffff8147e361
  <idle>-0       1..s. 12272657.760044: rpc_task_wakeup:      task:18128@0 flags=5281 state=0006 status=-110 timeout=750 queue=delayq
  <idle>-0       1dNs. 12272657.760051: workqueue_queue_work: work struct=0xffff8800b5a94588 function=rpc_async_schedule workqueue=0xffff880234666800 req_cpu=512 cpu=1
  <idle>-0       1dNs. 12272657.760052: workqueue_activate_work: work struct 0xffff8800b5a94588
kworker/-20111   1.... 12272657.760063: workqueue_execute_start: work struct 0xffff8800b5a94588: function rpc_async_schedule
kworker/-20111   1.... 12272657.760064: rpc_task_run_action:  task:18128@0 flags=5281 state=0005 status=-110 action=call_timeout
kworker/-20111   1.... 12272657.760065: rpc_task_run_action:  task:18128@0 flags=5281 state=0005 status=0 action=call_timeout
kworker/-20111   1.... 12272657.760066: rpc_task_run_action:  task:18128@0 flags=5281 state=0005 status=0 action=call_bind
kworker/-20111   1.... 12272657.760066: rpc_task_run_action:  task:18128@0 flags=5281 state=0005 status=0 action=call_connect
kworker/-20111   1..s. 12272657.760068: rpc_task_sleep:       task:18128@0 flags=5281 state=0005 status=0 timeout=45000 queue=xprt_pending
kworker/-20111   1.... 12272657.760070: workqueue_execute_end: work struct 0xffff8800b5a94588
  <idle>-0       3d.s. 12272657.824024: workqueue_queue_work: work struct=0xffff8800b5551760 function=disk_events_workfn workqueue=0xffff8802370d9000 req_cpu=512 cpu=3
  <idle>-0       3d.s. 12272657.824025: workqueue_activate_work: work struct 0xffff8800b5551760
kworker/-7491    3.... 12272657.824041: workqueue_execute_start: work struct 0xffff8800b5551760: function disk_events_workfn
kworker/-7491    3.... 12272657.824807: workqueue_execute_end: work struct 0xffff8800b5551760


  <idle>-0       1d.s. 12272705.808564: workqueue_queue_work: work struct=0xffff8802361f4de0 function=xs_tcp_setup_socket workqueue=0xffff880234666800 req_cpu=512 cpu=1
  <idle>-0       1d.s. 12272705.808565: workqueue_activate_work: work struct 0xffff8802361f4de0
kworker/-20111   1.... 12272705.808574: workqueue_execute_start: work struct 0xffff8802361f4de0: function xs_tcp_setup_socket
kworker/-20111   1..s. 12272705.808580: rpc_socket_error:     error=-111 socket:[11886206] dstaddr=192.168.23.9/2049 state=2 () sk_state=7 ()
kworker/-20111   1.... 12272705.808581: rpc_socket_reset_connection: error=0 socket:[11886206] dstaddr=192.168.23.9/2049 state=1 () sk_state=7 ()
kworker/-20111   1..s. 12272705.808599: net_dev_queue:        dev=eth0 skbaddr=0xffff880082117ae8 len=74
kworker/-20111   1..s. 12272705.808602: net_dev_start_xmit:   dev=eth0 queue_mapping=0 skbaddr=0xffff880082117ae8 vlan_tagged=0 vlan_proto=0x0000 vlan_tci=0x0000 protocol=0x0800 ip_summed=0 len=74 data_len=0 network_offset=14 transport_offset_valid=1 transport_offset=34 tx_flags=0 gso_size=0 gso_segs=1 gso_type=0
kworker/-20111   1..s. 12272705.808605: net_dev_xmit:         dev=eth0 skbaddr=0xffff880082117ae8 len=74 rc=0
kworker/-20111   1.... 12272705.808614: rpc_socket_connect:   error=-115 socket:[11886206] dstaddr=192.168.23.9/2049 state=2 () sk_state=2 ()
kworker/-20111   1.... 12272705.808615: workqueue_execute_end: work struct 0xffff8802361f4de0
  <idle>-0       1..s. 12272705.808841: napi_gro_receive_entry: dev=eth0 napi_id=0 queue_mapping=0 skbaddr=0xffff8800a60e8900 vlan_tagged=0 vlan_proto=0x0000 vlan_tci=0x0000 protocol=0x0800 ip_summed=1 hash=0x00000000 l4_hash=0 len=60 data_len=0 truesize=768 mac_header_valid=1 mac_header=-14 nr_frags=0 gso_size=0 gso_type=0
  <idle>-0       1.Ns. 12272705.808849: netif_receive_skb:    dev=eth0 skbaddr=0xffff8800a60e8900 len=60
  <idle>-0       1.Ns. 12272705.808872: rpc_socket_state_change: socket:[11886206] dstaddr=192.168.23.9/2049 state=2 () sk_state=1 ()
  <idle>-0       1.Ns. 12272705.808874: rpc_task_wakeup:      task:18128@0 flags=5281 state=0006 status=-11 timeout=45000 queue=xprt_pending
  <idle>-0       1dNs. 12272705.808875: workqueue_queue_work: work struct=0xffff8800b5a94588 function=rpc_async_schedule workqueue=0xffff880234666800 req_cpu=512 cpu=1
  <idle>-0       1dNs. 12272705.808876: workqueue_activate_work: work struct 0xffff8800b5a94588
  <idle>-0       1.Ns. 12272705.808881: net_dev_queue:        dev=eth0 skbaddr=0xffff8800a60e8500 len=66
  <idle>-0       1.Ns. 12272705.808883: net_dev_start_xmit:   dev=eth0 queue_mapping=0 skbaddr=0xffff8800a60e8500 vlan_tagged=0 vlan_proto=0x0000 vlan_tci=0x0000 protocol=0x0800 ip_summed=0 len=66 data_len=0 network_offset=14 transport_offset_valid=1 transport_offset=34 tx_flags=0 gso_size=0 gso_segs=1 gso_type=0
  <idle>-0       1.Ns. 12272705.808885: net_dev_xmit:         dev=eth0 skbaddr=0xffff8800a60e8500 len=66 rc=0
  <idle>-0       1.Ns. 12272705.808887: consume_skb:          skbaddr=0xffff880082117ae8
kworker/-20111   1.... 12272705.808895: workqueue_execute_start: work struct 0xffff8800b5a94588: function rpc_async_schedule
kworker/-20111   1.... 12272705.808896: rpc_task_run_action:  task:18128@0 flags=5281 state=0005 status=-11 action=call_connect_status
kworker/-20111   1.... 12272705.808897: rpc_task_run_action:  task:18128@0 flags=5281 state=0005 status=-11 action=call_connect_status
kworker/-20111   1.... 12272705.808897: rpc_connect_status:   task:18128@0, status -11
kworker/-20111   1.... 12272705.808897: rpc_task_run_action:  task:18128@0 flags=5281 state=0005 status=0 action=call_timeout
kworker/-20111   1.... 12272705.808898: rpc_task_run_action:  task:18128@0 flags=5281 state=0005 status=0 action=call_bind
kworker/-20111   1.... 12272705.808899: rpc_task_run_action:  task:18128@0 flags=5281 state=0005 status=0 action=call_connect
kworker/-20111   1.... 12272705.808899: rpc_task_run_action:  task:18128@0 flags=5281 state=0005 status=0 action=call_transmit
kworker/-20111   1..s. 12272705.808912: net_dev_queue:        dev=eth0 skbaddr=0xffff880082117ae8 len=162
kworker/-20111   1..s. 12272705.808913: net_dev_start_xmit:   dev=eth0 queue_mapping=0 skbaddr=0xffff880082117ae8 vlan_tagged=0 vlan_proto=0x0000 vlan_tci=0x0000 protocol=0x0800 ip_summed=0 len=162 data_len=0 network_offset=14 transport_offset_valid=1 transport_offset=34 tx_flags=0 gso_size=0 gso_segs=1 gso_type=0
kworker/-20111   1..s. 12272705.808913: net_dev_xmit:         dev=eth0 skbaddr=0xffff880082117ae8 len=162 rc=0
kworker/-20111   1..s. 12272705.808916: rpc_task_sleep:       task:18128@0 flags=5a81 state=0005 status=0 timeout=45000 queue=xprt_pending
kworker/-20111   1.... 12272705.808917: workqueue_execute_end: work struct 0xffff8800b5a94588
  <idle>-0       1..s. 12272705.809098: consume_skb:          skbaddr=0xffff8800a60e8500

  <idle>-0       1.Ns. 12272705.809840: rpc_task_wakeup:      task:18128@0 flags=5a81 state=0006 status=0 timeout=45000 queue=xprt_pending
  <idle>-0       1dNs. 12272705.809842: workqueue_queue_work: work struct=0xffff8800b5a94588 function=rpc_async_schedule workqueue=0xffff880234666800 req_cpu=512 cpu=1
  <idle>-0       1dNs. 12272705.809842: workqueue_activate_work: work struct 0xffff8800b5a94588
kworker/-20111   1.... 12272705.809853: workqueue_execute_start: work struct 0xffff8800b5a94588: function rpc_async_schedule
kworker/-20111   1.... 12272705.809853: rpc_task_run_action:  task:18128@0 flags=5a81 state=0005 status=0 action=call_status
kworker/-20111   1.... 12272705.809854: rpc_task_run_action:  task:18128@0 flags=5a81 state=0005 status=0 action=call_status
kworker/-20111   1.... 12272705.809854: rpc_task_run_action:  task:18128@0 flags=5a81 state=0005 status=44 action=call_decode
kworker/-20111   1.... 12272705.809856: rpc_task_run_action:  task:18128@0 flags=5a81 state=0005 status=-10022 action=rpc_exit_task
kworker/-20111   1.... 12272705.809858: nfs4_renew_async:     error=-10022 () dstaddr=192.168.23.9
  <idle>-0       1.Ns. 12272705.810000: consume_skb:          skbaddr=0xffff8800a60e8900
kworker/-20111   1.... 12272705.810033: workqueue_execute_end: work struct 0xffff8800b5a94588
192.168.-16171   3.... 12272705.810062: rpc_task_begin:       task:0@0 flags=5280 state=0000 status=0 action=irq_stack_union
192.168.-16171   3.... 12272705.810068: rpc_task_run_action:  task:18129@0 flags=5280 state=0005 status=0 action=call_start
192.168.-16171   3.... 12272705.810069: rpc_task_run_action:  task:18129@0 flags=5280 state=0005 status=0 action=call_reserve
192.168.-16171   3.... 12272705.810071: rpc_task_run_action:  task:18129@0 flags=5280 state=0005 status=0 action=call_reserveresult
192.168.-16171   3.... 12272705.810071: rpc_task_run_action:  task:18129@0 flags=5280 state=0005 status=0 action=call_refresh
192.168.-16171   3.... 12272705.810073: rpc_task_run_action:  task:18129@0 flags=5280 state=0005 status=0 action=call_refreshresult
192.168.-16171   3.... 12272705.810073: rpc_task_run_action:  task:18129@0 flags=5280 state=0005 status=0 action=call_allocate
192.168.-16171   3.... 12272705.810074: rpc_task_run_action:  task:18129@0 flags=5280 state=0005 status=0 action=call_bind
192.168.-16171   3.... 12272705.810075: rpc_task_run_action:  task:18129@0 flags=5280 state=0005 status=0 action=call_connect
192.168.-16171   3.... 12272705.810075: rpc_task_run_action:  task:18129@0 flags=5280 state=0005 status=0 action=call_transmit
192.168.-16171   3..s. 12272705.810095: net_dev_queue:        dev=eth0 skbaddr=0xffff8800a9b924e8 len=162
192.168.-16171   3..s. 12272705.810097: net_dev_start_xmit:   dev=eth0 queue_mapping=0 skbaddr=0xffff8800a9b924e8 vlan_tagged=0 vlan_proto=0x0000 vlan_tci=0x0000 protocol=0x0800 ip_summed=0 len=162 data_len=0 network_offset=14 transport_offset_valid=1 transport_offset=34 tx_flags=0 gso_size=0 gso_segs=1 gso_type=0
192.168.-16171   3..s. 12272705.810098: net_dev_xmit:         dev=eth0 skbaddr=0xffff8800a9b924e8 len=162 rc=0
192.168.-16171   3..s. 12272705.810101: rpc_task_sleep:       task:18129@0 flags=5a80 state=0005 status=0 timeout=15000 queue=xprt_pending
  <idle>-0       1..s. 12272705.810318: napi_gro_receive_entry: dev=eth0 napi_id=0 queue_mapping=0 skbaddr=0xffff8800a60e8900 vlan_tagged=0 vlan_proto=0x0000 vlan_tci=0x0000 protocol=0x0800 ip_summed=1 hash=0x00000000 l4_hash=0 len=100 data_len=0 truesize=768 mac_header_valid=1 mac_header=-14 nr_frags=0 gso_size=0 gso_type=0
  <idle>-0       1.Ns. 12272705.810326: netif_receive_skb:    dev=eth0 skbaddr=0xffff8800a60e8900 len=100
  <idle>-0       1.Ns. 12272705.810344: rpc_task_wakeup:      task:18129@0 flags=5a80 state=0006 status=0 timeout=15000 queue=xprt_pending
  <idle>-0       1.Ns. 12272705.810349: consume_skb:          skbaddr=0xffff8800a9b924e8
192.168.-16171   3.... 12272705.810379: rpc_task_run_action:  task:18129@0 flags=5a80 state=0005 status=0 action=call_status
192.168.-16171   3.... 12272705.810385: rpc_task_run_action:  task:18129@0 flags=5a80 state=0005 status=0 action=call_status
192.168.-16171   3.... 12272705.810385: rpc_task_run_action:  task:18129@0 flags=5a80 state=0005 status=44 action=call_decode
192.168.-16171   3.... 12272705.810387: rpc_task_run_action:  task:18129@0 flags=5a80 state=0005 status=-10022 action=rpc_exit_task
192.168.-16171   3.... 12272705.810397: rpc_task_begin:       task:0@0 flags=5280 state=0000 status=0 action=irq_stack_union
192.168.-16171   3.... 12272705.810398: rpc_task_run_action:  task:18130@0 flags=5280 state=0005 status=0 action=call_start
192.168.-16171   3.... 12272705.810398: rpc_task_run_action:  task:18130@0 flags=5280 state=0005 status=0 action=call_reserve
192.168.-16171   3.... 12272705.810399: rpc_task_run_action:  task:18130@0 flags=5280 state=0005 status=0 action=call_reserveresult
192.168.-16171   3.... 12272705.810399: rpc_task_run_action:  task:18130@0 flags=5280 state=0005 status=0 action=call_refresh
192.168.-16171   3.... 12272705.810400: rpc_task_run_action:  task:18130@0 flags=5280 state=0005 status=0 action=call_refreshresult
192.168.-16171   3.... 12272705.810400: rpc_task_run_action:  task:18130@0 flags=5280 state=0005 status=0 action=call_allocate
192.168.-16171   3.... 12272705.810404: rpc_task_run_action:  task:18130@0 flags=5280 state=0005 status=0 action=call_bind
192.168.-16171   3.... 12272705.810404: rpc_task_run_action:  task:18130@0 flags=5280 state=0005 status=0 action=call_connect
192.168.-16171   3.... 12272705.810404: rpc_task_run_action:  task:18130@0 flags=5280 state=0005 status=0 action=call_transmit
192.168.-16171   3..s. 12272705.810417: net_dev_queue:        dev=eth0 skbaddr=0xffff8800a9b92ae8 len=254
192.168.-16171   3..s. 12272705.810418: net_dev_start_xmit:   dev=eth0 queue_mapping=0 skbaddr=0xffff8800a9b92ae8 vlan_tagged=0 vlan_proto=0x0000 vlan_tci=0x0000 protocol=0x0800 ip_summed=0 len=254 data_len=0 network_offset=14 transport_offset_valid=1 transport_offset=34 tx_flags=0 gso_size=0 gso_segs=1 gso_type=0
192.168.-16171   3..s. 12272705.810426: net_dev_xmit:         dev=eth0 skbaddr=0xffff8800a9b92ae8 len=254 rc=0
192.168.-16171   3..s. 12272705.810428: rpc_task_sleep:       task:18130@0 flags=5a80 state=0005 status=0 timeout=15000 queue=xprt_pending
192.168.-16171   3.... 12272705.810902: rpc_task_run_action:  task:18130@0 flags=5a80 state=0005 status=0 action=call_status
192.168.-16171   3.... 12272705.810908: rpc_task_run_action:  task:18130@0 flags=5a80 state=0005 status=0 action=call_status
192.168.-16171   3.... 12272705.810908: rpc_task_run_action:  task:18130@0 flags=5a80 state=0005 status=60 action=call_decode
192.168.-16171   3.... 12272705.810910: rpc_task_run_action:  task:18130@0 flags=5a80 state=0005 status=0 action=rpc_exit_task
192.168.-16171   3.... 12272705.810914: nfs4_setclientid:     error=0 (ACCESS) dstaddr=192.168.23.9
192.168.-16171   3.... 12272705.810915: rpc_task_begin:       task:0@0 flags=5280 state=0000 status=0 action=irq_stack_union
192.168.-16171   3.... 12272705.810916: rpc_task_run_action:  task:18131@0 flags=5280 state=0005 status=0 action=call_start
192.168.-16171   3.... 12272705.810916: rpc_task_run_action:  task:18131@0 flags=5280 state=0005 status=0 action=call_reserve
192.168.-16171   3.... 12272705.810917: rpc_task_run_action:  task:18131@0 flags=5280 state=0005 status=0 action=call_reserveresult
192.168.-16171   3.... 12272705.810917: rpc_task_run_action:  task:18131@0 flags=5280 state=0005 status=0 action=call_refresh
192.168.-16171   3.... 12272705.810918: rpc_task_run_action:  task:18131@0 flags=5280 state=0005 status=0 action=call_refreshresult
192.168.-16171   3.... 12272705.810918: rpc_task_run_action:  task:18131@0 flags=5280 state=0005 status=0 action=call_allocate
192.168.-16171   3.... 12272705.810919: rpc_task_run_action:  task:18131@0 flags=5280 state=0005 status=0 action=call_bind
192.168.-16171   3.... 12272705.810919: rpc_task_run_action:  task:18131@0 flags=5280 state=0005 status=0 action=call_connect
192.168.-16171   3.... 12272705.810919: rpc_task_run_action:  task:18131@0 flags=5280 state=0005 status=0 action=call_transmit
192.168.-16171   3..s. 12272705.810931: net_dev_queue:        dev=eth0 skbaddr=0xffff8800a9b92ce8 len=170
192.168.-16171   3..s. 12272705.810932: net_dev_start_xmit:   dev=eth0 queue_mapping=0 skbaddr=0xffff8800a9b92ce8 vlan_tagged=0 vlan_proto=0x0000 vlan_tci=0x0000 protocol=0x0800 ip_summed=0 len=170 data_len=0 network_offset=14 transport_offset_valid=1 transport_offset=34 tx_flags=0 gso_size=0 gso_segs=1 gso_type=0
192.168.-16171   3..s. 12272705.810933: net_dev_xmit:         dev=eth0 skbaddr=0xffff8800a9b92ce8 len=170 rc=0
192.168.-16171   3..s. 12272705.810936: rpc_task_sleep:       task:18131@0 flags=5a80 state=0005 status=0 timeout=15000 queue=xprt_pending
192.168.-16171   3.... 12272705.811213: rpc_task_run_action:  task:18131@0 flags=5a80 state=0005 status=0 action=call_status
192.168.-16171   3.... 12272705.811220: rpc_task_run_action:  task:18131@0 flags=5a80 state=0005 status=0 action=call_status
192.168.-16171   3.... 12272705.811220: rpc_task_run_action:  task:18131@0 flags=5a80 state=0005 status=44 action=call_decode
192.168.-16171   3.... 12272705.811222: rpc_task_run_action:  task:18131@0 flags=5a80 state=0005 status=0 action=rpc_exit_task
192.168.-16171   3.... 12272705.811227: nfs4_setclientid_confirm: error=0 (ACCESS) dstaddr=192.168.23.9


And it goes on, but you can look at the full trace. I just searched
for "rpc".

Maybe this will shed more light on the issue. I'll keep this kernel on
my server for a little longer, but it's going to start triggering
rkhunter warnings about hidden ports again.

-- Steve



^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [REGRESSION] NFS is creating a hidden port (left over from xs_bind() )
  2015-06-19 17:17                         ` Steven Rostedt
@ 2015-06-19 17:39                           ` Trond Myklebust
  -1 siblings, 0 replies; 77+ messages in thread
From: Trond Myklebust @ 2015-06-19 17:39 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Jeff Layton, Eric Dumazet, Anna Schumaker,
	Linux NFS Mailing List, Linux Network Devel Mailing List, LKML,
	Andrew Morton, Bruce James Fields

On Fri, Jun 19, 2015 at 1:17 PM, Steven Rostedt <rostedt@goodmis.org> wrote:
> On Fri, 19 Jun 2015 12:25:53 -0400
> Steven Rostedt <rostedt@goodmis.org> wrote:
>
>
>> I don't see that 55201 anywhere. But then again, I didn't look for it
>> before the port disappeared. I could reboot and look for it again. I
>> should have saved the full netstat -tapn as well :-/
>
> Of course I didn't find it anywhere, that's the port on my wife's box
> that port 947 was connected to.
>
> Now I even went over to my wife's box and ran
>
>  # rpcinfo -p localhost
>    program vers proto   port  service
>     100000    4   tcp    111  portmapper
>     100000    3   tcp    111  portmapper
>     100000    2   tcp    111  portmapper
>     100000    4   udp    111  portmapper
>     100000    3   udp    111  portmapper
>     100000    2   udp    111  portmapper
>     100024    1   udp  34243  status
>     100024    1   tcp  34498  status
>
> which doesn't show anything.
>
> but something is listening to that port...
>
>  # netstat -ntap |grep 55201
> tcp        0      0 0.0.0.0:55201           0.0.0.0:*               LISTEN


Hang on. This is on the client box while there is an active NFSv4
mount? Then that's probably the NFSv4 callback channel listening for
delegation callbacks.

Can you please try:

echo "options nfs callback_tcpport=4048" > /etc/modprobe.d/nfs-local.conf

and then either reboot the client or unload and then reload the nfs
modules before reattempting the mount. If this is indeed the callback
channel, then that will move your phantom listener to port 4048...
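
As an aside, an "options nfs <param>=<value>" line like the one above
just sets a module parameter when nfs.ko loads. A rough, illustrative
sketch of that mechanism (the identifiers here are made up, not the
actual fs/nfs ones):

// SPDX-License-Identifier: GPL-2.0
/* Hypothetical stand-in for the real callback_tcpport option. */
#include <linux/module.h>
#include <linux/moduleparam.h>

static unsigned short example_callback_tcpport;	/* 0 = let the kernel pick */
module_param_named(callback_tcpport, example_callback_tcpport, ushort, 0644);
MODULE_PARM_DESC(callback_tcpport,
		 "fixed port for the NFSv4 callback listener (illustration)");

static int __init example_init(void)
{
	/* the callback service would read this when creating its listener */
	pr_info("callback listener would bind to port %u\n",
		example_callback_tcpport);
	return 0;
}
module_init(example_init);

static void __exit example_exit(void) { }
module_exit(example_exit);

MODULE_LICENSE("GPL");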

Cheers
   Trond

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [REGRESSION] NFS is creating a hidden port (left over from xs_bind() )
@ 2015-06-19 19:52                             ` Jeff Layton
  0 siblings, 0 replies; 77+ messages in thread
From: Jeff Layton @ 2015-06-19 19:52 UTC (permalink / raw)
  To: Trond Myklebust
  Cc: Steven Rostedt, Eric Dumazet, Anna Schumaker,
	Linux NFS Mailing List, Linux Network Devel Mailing List, LKML,
	Andrew Morton, Bruce James Fields

On Fri, 19 Jun 2015 13:39:08 -0400
Trond Myklebust <trond.myklebust@primarydata.com> wrote:

> On Fri, Jun 19, 2015 at 1:17 PM, Steven Rostedt <rostedt@goodmis.org> wrote:
> > On Fri, 19 Jun 2015 12:25:53 -0400
> > Steven Rostedt <rostedt@goodmis.org> wrote:
> >
> >
> >> I don't see that 55201 anywhere. But then again, I didn't look for it
> >> before the port disappeared. I could reboot and look for it again. I
> >> should have saved the full netstat -tapn as well :-/
> >
> > Of course I didn't find it anywhere, that's the port on my wife's box
> > that port 947 was connected to.
> >
> > Now I even went over to my wife's box and ran
> >
> >  # rpcinfo -p localhost
> >    program vers proto   port  service
> >     100000    4   tcp    111  portmapper
> >     100000    3   tcp    111  portmapper
> >     100000    2   tcp    111  portmapper
> >     100000    4   udp    111  portmapper
> >     100000    3   udp    111  portmapper
> >     100000    2   udp    111  portmapper
> >     100024    1   udp  34243  status
> >     100024    1   tcp  34498  status
> >
> > which doesn't show anything.
> >
> > but something is listening to that port...
> >
> >  # netstat -ntap |grep 55201
> > tcp        0      0 0.0.0.0:55201           0.0.0.0:*               LISTEN
> 
> 
> Hang on. This is on the client box while there is an active NFSv4
> mount? Then that's probably the NFSv4 callback channel listening for
> delegation callbacks.
> 
> Can you please try:
> 
> echo "options nfs callback_tcpport=4048" > /etc/modprobe.d/nfs-local.conf
> 
> and then either reboot the client or unload and then reload the nfs
> modules before reattempting the mount. If this is indeed the callback
> channel, then that will move your phantom listener to port 4048...
> 

Right, it was a little unclear to me before, but it now seems clear
that the callback socket that the server is opening to the client is
the one squatting on the port.

...and that sort of makes sense, doesn't it? That rpc_clnt will stick
around for the life of the client's lease, and the rpc_clnt binds to a
particular port so that it can reconnect using the same one.

Given that Steven has done the legwork and figured out that reverting
those commits fixes the issue, I suspect that the real culprit is
caf4ccd4e88cf2.

The client is likely closing down the other end of the callback
socket when it goes idle. Before that commit, we probably did an
xs_close on it, but now we're doing an xs_tcp_shutdown and that leaves
the port bound.
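
As a rough userspace illustration (this is not the xprtsock code path, and
the port number is made up), the difference matters because shutdown() only
tears down the connection, while the socket keeps ownership of the local
port until it is actually released with close():

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	struct sockaddr_in addr = {
		.sin_family = AF_INET,
		.sin_port = htons(12345),	/* arbitrary test port */
		.sin_addr.s_addr = htonl(INADDR_LOOPBACK),
	};
	int a = socket(AF_INET, SOCK_STREAM, 0);
	int b = socket(AF_INET, SOCK_STREAM, 0);

	bind(a, (struct sockaddr *)&addr, sizeof(addr));	/* 'a' now owns the port */
	shutdown(a, SHUT_RDWR);	/* may fail with ENOTCONN here; either way the binding stays */

	if (bind(b, (struct sockaddr *)&addr, sizeof(addr)) < 0)
		printf("bind while 'a' is only shut down: %s\n", strerror(errno)); /* EADDRINUSE */

	close(a);	/* releasing the socket is what frees the port */
	if (bind(b, (struct sockaddr *)&addr, sizeof(addr)) == 0)
		printf("bind after close: ok\n");

	close(b);
	return 0;
}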

I'm travelling this weekend and am not set up to reproduce it to
confirm, but that does seem to be a plausible scenario.
-- 
Jeff Layton <jlayton@poochiereds.net>

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [REGRESSION] NFS is creating a hidden port (left over from xs_bind() )
@ 2015-06-19 20:30                               ` Trond Myklebust
  0 siblings, 0 replies; 77+ messages in thread
From: Trond Myklebust @ 2015-06-19 20:30 UTC (permalink / raw)
  To: Jeff Layton
  Cc: Steven Rostedt, Eric Dumazet, Anna Schumaker,
	Linux NFS Mailing List, Linux Network Devel Mailing List, LKML,
	Andrew Morton, Bruce James Fields

On Fri, 2015-06-19 at 15:52 -0400, Jeff Layton wrote:
> On Fri, 19 Jun 2015 13:39:08 -0400
> Trond Myklebust <trond.myklebust@primarydata.com> wrote:
> 
> > On Fri, Jun 19, 2015 at 1:17 PM, Steven Rostedt <
> > rostedt@goodmis.org> wrote:
> > > On Fri, 19 Jun 2015 12:25:53 -0400
> > > Steven Rostedt <rostedt@goodmis.org> wrote:
> > > 
> > > 
> > > > I don't see that 55201 anywhere. But then again, I didn't look 
> > > > for it
> > > > before the port disappeared. I could reboot and look for it 
> > > > again. I
> > > > should have saved the full netstat -tapn as well :-/
> > > 
> > > Of course I didn't find it anywhere, that's the port on my wife's 
> > > box
> > > that port 947 was connected to.
> > > 
> > > Now I even went over to my wife's box and ran
> > > 
> > >  # rpcinfo -p localhost
> > >    program vers proto   port  service
> > >     100000    4   tcp    111  portmapper
> > >     100000    3   tcp    111  portmapper
> > >     100000    2   tcp    111  portmapper
> > >     100000    4   udp    111  portmapper
> > >     100000    3   udp    111  portmapper
> > >     100000    2   udp    111  portmapper
> > >     100024    1   udp  34243  status
> > >     100024    1   tcp  34498  status
> > > 
> > > which doesn't show anything.
> > > 
> > > but something is listening to that port...
> > > 
> > >  # netstat -ntap |grep 55201
> > > tcp        0      0 0.0.0.0:55201           0.0.0.0:*            
> > >    LISTEN
> > 
> > 
> > Hang on. This is on the client box while there is an active NFSv4
> > mount? Then that's probably the NFSv4 callback channel listening 
> > for
> > delegation callbacks.
> > 
> > Can you please try:
> > 
> > echo "options nfs callback_tcpport=4048" > /etc/modprobe.d/nfs
> > -local.conf
> > 
> > and then either reboot the client or unload and then reload the nfs
> > modules before reattempting the mount. If this is indeed the 
> > callback
> > channel, then that will move your phantom listener to port 4048...
> > 
> 
> Right, it was a little unclear to me before, but it now seems clear
> that the callback socket that the server is opening to the client is
> the one squatting on the port.
> 
> ...and that sort of makes sense, doesn't it? That rpc_clnt will stick
> around for the life of the client's lease, and the rpc_clnt binds to 
> a
> particular port so that it can reconnect using the same one.
> 
> Given that Stephen has done the legwork and figured out that 
> reverting
> those commits fixes the issue, then I suspect that the real culprit 
> is
> caf4ccd4e88cf2.
> 
> The client is likely closing down the other end of the callback
> socket when it goes idle. Before that commit, we probably did an
> xs_close on it, but now we're doing a xs_tcp_shutdown and that leaves
> the port bound.
> 

Agreed. I've been looking into whether or not there is a simple fix.
Reverting those patches is not an option, because the whole point was
to ensure that the socket is in the TCP_CLOSED state before we release
the socket.

Steven, how about something like the following patch?

8<-----------------------------------------------------------------
From 9a0bcfdbdbc793eae1ed6d901a6396b6c66f9513 Mon Sep 17 00:00:00 2001
From: Trond Myklebust <trond.myklebust@primarydata.com>
Date: Fri, 19 Jun 2015 16:17:57 -0400
Subject: [PATCH] SUNRPC: Ensure we release the TCP socket once it has been
 closed

This fixes a regression introduced by commit caf4ccd4e88cf2 ("SUNRPC:
Make xs_tcp_close() do a socket shutdown rather than a sock_release").
Prior to that commit, the autoclose feature would ensure that an
idle connection would result in the socket being both disconnected and
released, whereas now it only gets disconnected.

While the current behaviour is harmless, it does leave the port bound
until either RPC traffic resumes or the RPC client is shut down.

Reported-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
---
 net/sunrpc/xprt.c     | 2 +-
 net/sunrpc/xprtsock.c | 8 ++++++--
 2 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
index 3ca31f20b97c..ab5dd621ae0c 100644
--- a/net/sunrpc/xprt.c
+++ b/net/sunrpc/xprt.c
@@ -611,8 +611,8 @@ static void xprt_autoclose(struct work_struct *work)
 	struct rpc_xprt *xprt =
 		container_of(work, struct rpc_xprt, task_cleanup);
 
-	xprt->ops->close(xprt);
 	clear_bit(XPRT_CLOSE_WAIT, &xprt->state);
+	xprt->ops->close(xprt);
 	xprt_release_write(xprt, NULL);
 }
 
diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index fda8ec8c74c0..75dcdadf0269 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -634,10 +634,13 @@ static void xs_tcp_shutdown(struct rpc_xprt *xprt)
 	struct sock_xprt *transport = container_of(xprt, struct sock_xprt, xprt);
 	struct socket *sock = transport->sock;
 
-	if (sock != NULL) {
+	if (sock == NULL)
+		return;
+	if (xprt_connected(xprt)) {
 		kernel_sock_shutdown(sock, SHUT_RDWR);
 		trace_rpc_socket_shutdown(xprt, sock);
-	}
+	} else
+		xs_reset_transport(transport);
 }
 
 /**
@@ -786,6 +789,7 @@ static void xs_sock_mark_closed(struct rpc_xprt *xprt)
 	xs_sock_reset_connection_flags(xprt);
 	/* Mark transport as closed and wake up all pending tasks */
 	xprt_disconnect_done(xprt);
+	xprt_force_disconnect(xprt);
 }
 
 /**
-- 
2.4.3


-- 
Trond Myklebust
Linux NFS client maintainer, PrimaryData
trond.myklebust@primarydata.com



^ permalink raw reply related	[flat|nested] 77+ messages in thread

* Re: [REGRESSION] NFS is creating a hidden port (left over from xs_bind() )
@ 2015-06-19 21:50                             ` Steven Rostedt
  0 siblings, 0 replies; 77+ messages in thread
From: Steven Rostedt @ 2015-06-19 21:50 UTC (permalink / raw)
  To: Trond Myklebust
  Cc: Jeff Layton, Eric Dumazet, Anna Schumaker,
	Linux NFS Mailing List, Linux Network Devel Mailing List, LKML,
	Andrew Morton, Bruce James Fields

On Fri, 19 Jun 2015 13:39:08 -0400
Trond Myklebust <trond.myklebust@primarydata.com> wrote:


> Hang on. This is on the client box while there is an active NFSv4
> mount? Then that's probably the NFSv4 callback channel listening for
> delegation callbacks.
> 
> Can you please try:
> 
> echo "options nfs callback_tcpport=4048" > /etc/modprobe.d/nfs-local.conf
> 
> and then either reboot the client or unload and then reload the nfs
> modules before reattempting the mount. If this is indeed the callback
> channel, then that will move your phantom listener to port 4048...

I unmounted the directories, removed the nfs modules, and then added this
file, loaded the modules back, and remounted the directories.

# netstat -ntap |grep 4048
tcp        0      0 0.0.0.0:4048            0.0.0.0:*               LISTEN      -               
tcp        0      0 192.168.23.22:4048      192.168.23.9:1010       ESTABLISHED -               
tcp6       0      0 :::4048                 :::*                    LISTEN      -               

-- Steve


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [REGRESSION] NFS is creating a hidden port (left over from xs_bind() )
@ 2015-06-19 21:56                                 ` Steven Rostedt
  0 siblings, 0 replies; 77+ messages in thread
From: Steven Rostedt @ 2015-06-19 21:56 UTC (permalink / raw)
  To: Trond Myklebust
  Cc: Jeff Layton, Eric Dumazet, Anna Schumaker,
	Linux NFS Mailing List, Linux Network Devel Mailing List, LKML,
	Andrew Morton, Bruce James Fields

On Fri, 19 Jun 2015 16:30:18 -0400
Trond Myklebust <trond.myklebust@primarydata.com> wrote:

> Steven, how about something like the following patch?

Building it now. Will let you know in a bit.


> 
> 8<-----------------------------------------------------------------
> >From 9a0bcfdbdbc793eae1ed6d901a6396b6c66f9513 Mon Sep 17 00:00:00 2001
> From: Trond Myklebust <trond.myklebust@primarydata.com>
> Date: Fri, 19 Jun 2015 16:17:57 -0400
> Subject: [PATCH] SUNRPC: Ensure we release the TCP socket once it has been
>  closed
> 
> This fixes a regression introduced by commit caf4ccd4e88cf2 ("SUNRPC:
> Make xs_tcp_close() do a socket shutdown rather than a sock_release").
> Prior to that commit, the autoclose feature would ensure that an
> idle connection would result in the socket being both disconnected and
> released, whereas now only gets disconnected.
> 
> While the current behaviour is harmless, it does leave the port bound
> until either RPC traffic resumes or the RPC client is shut down.

Hmm, is this true? The port is bound, but the socket has been freed.
That is, sk->sk_socket points to garbage, as my portlist.c module
verified.

It doesn't seem that anything can attach to that port again, as far as I
know. Is there a way to verify that something can attach to it again?
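
One crude way to check might be a bind() probe from userspace (untested
sketch; the port number is just an example and would have to be whichever
port is currently leaked on that boot):

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Try to bind a fresh TCP socket to the suspect port and report the result. */
int main(int argc, char **argv)
{
	struct sockaddr_in addr = {
		.sin_family = AF_INET,
		.sin_port = htons(argc > 1 ? atoi(argv[1]) : 946),
		.sin_addr.s_addr = htonl(INADDR_ANY),
	};
	int fd = socket(AF_INET, SOCK_STREAM, 0);

	if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
		printf("bind: %s\n", strerror(errno));	/* EADDRINUSE if the port is still held */
	else
		printf("bind: ok, port is free\n");
	close(fd);
	return 0;
}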

-- Steve


> 
> Reported-by: Steven Rostedt <rostedt@goodmis.org>
> Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
> ---
>  net/sunrpc/xprt.c     | 2 +-
>  net/sunrpc/xprtsock.c | 8 ++++++--
>  2 files changed, 7 insertions(+), 3 deletions(-)
> 
> diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
> index 3ca31f20b97c..ab5dd621ae0c 100644
> --- a/net/sunrpc/xprt.c
> +++ b/net/sunrpc/xprt.c
> @@ -611,8 +611,8 @@ static void xprt_autoclose(struct work_struct *work)
>  	struct rpc_xprt *xprt =
>  		container_of(work, struct rpc_xprt, task_cleanup);
>  
> -	xprt->ops->close(xprt);
>  	clear_bit(XPRT_CLOSE_WAIT, &xprt->state);
> +	xprt->ops->close(xprt);
>  	xprt_release_write(xprt, NULL);
>  }
>  
> diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
> index fda8ec8c74c0..75dcdadf0269 100644
> --- a/net/sunrpc/xprtsock.c
> +++ b/net/sunrpc/xprtsock.c
> @@ -634,10 +634,13 @@ static void xs_tcp_shutdown(struct rpc_xprt *xprt)
>  	struct sock_xprt *transport = container_of(xprt, struct sock_xprt, xprt);
>  	struct socket *sock = transport->sock;
>  
> -	if (sock != NULL) {
> +	if (sock == NULL)
> +		return;
> +	if (xprt_connected(xprt)) {
>  		kernel_sock_shutdown(sock, SHUT_RDWR);
>  		trace_rpc_socket_shutdown(xprt, sock);
> -	}
> +	} else
> +		xs_reset_transport(transport);
>  }
>  
>  /**
> @@ -786,6 +789,7 @@ static void xs_sock_mark_closed(struct rpc_xprt *xprt)
>  	xs_sock_reset_connection_flags(xprt);
>  	/* Mark transport as closed and wake up all pending tasks */
>  	xprt_disconnect_done(xprt);
> +	xprt_force_disconnect(xprt);
>  }
>  
>  /**


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [REGRESSION] NFS is creating a hidden port (left over from xs_bind() )
@ 2015-06-19 22:14                                 ` Steven Rostedt
  0 siblings, 0 replies; 77+ messages in thread
From: Steven Rostedt @ 2015-06-19 22:14 UTC (permalink / raw)
  To: Trond Myklebust
  Cc: Jeff Layton, Eric Dumazet, Anna Schumaker,
	Linux NFS Mailing List, Linux Network Devel Mailing List, LKML,
	Andrew Morton, Bruce James Fields

On Fri, 19 Jun 2015 16:30:18 -0400
Trond Myklebust <trond.myklebust@primarydata.com> wrote:

> Steven, how about something like the following patch?
> 

OK, the box I'm running this on is using v4.0.5. Can you make a patch
based on that, since whatever you make needs to go to stable as well?

distcc[31554] ERROR: compile /home/rostedt/work/git/nobackup/linux-build.git/net/sunrpc/xprtsock.c on fedora/8 failed
distcc[31554] (dcc_build_somewhere) Warning: remote compilation of '/home/rostedt/work/git/nobackup/linux-build.git/net/sunrpc/xprtsock.c' failed, retrying locally
distcc[31554] Warning: failed to distribute /home/rostedt/work/git/nobackup/linux-build.git/net/sunrpc/xprtsock.c to fedora/8, running locally instead
/home/rostedt/work/git/nobackup/linux-build.git/net/sunrpc/xprtsock.c: In function 'xs_tcp_shutdown':
/home/rostedt/work/git/nobackup/linux-build.git/net/sunrpc/xprtsock.c:643:3: error: implicit declaration of function 'xs_reset_transport' [-Werror=implicit-function-declaration]
/home/rostedt/work/git/nobackup/linux-build.git/net/sunrpc/xprtsock.c: At top level:
/home/rostedt/work/git/nobackup/linux-build.git/net/sunrpc/xprtsock.c:825:13: warning: conflicting types for 'xs_reset_transport' [enabled by default]
/home/rostedt/work/git/nobackup/linux-build.git/net/sunrpc/xprtsock.c:825:13: error: static declaration of 'xs_reset_transport' follows non-static declaration
/home/rostedt/work/git/nobackup/linux-build.git/net/sunrpc/xprtsock.c:643:3: note: previous implicit declaration of 'xs_reset_transport' was here
cc1: some warnings being treated as errors
distcc[31554] ERROR: compile /home/rostedt/work/git/nobackup/linux-build.git/net/sunrpc/xprtsock.c on localhost failed
/home/rostedt/work/git/nobackup/linux-build.git/scripts/Makefile.build:258: recipe for target 'net/sunrpc/xprtsock.o' failed
make[3]: *** [net/sunrpc/xprtsock.o] Error 1
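
(The failure is an ordering problem in the v4.0.5 tree: xs_reset_transport()
is a static function defined further down in xprtsock.c than the
xs_tcp_shutdown() that now calls it, and there is no forward declaration.
A stripped-down illustration, with made-up names:)

/* not the kernel source, just the shape of the error */
static void caller(void)
{
	helper();		/* error: implicit declaration of function 'helper' */
}

static void helper(void)	/* error: static declaration of 'helper' follows non-static declaration */
{
}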

-- Steve

> 8<-----------------------------------------------------------------
> >From 9a0bcfdbdbc793eae1ed6d901a6396b6c66f9513 Mon Sep 17 00:00:00 2001
> From: Trond Myklebust <trond.myklebust@primarydata.com>
> Date: Fri, 19 Jun 2015 16:17:57 -0400
> Subject: [PATCH] SUNRPC: Ensure we release the TCP socket once it has been
>  closed
> 
> This fixes a regression introduced by commit caf4ccd4e88cf2 ("SUNRPC:
> Make xs_tcp_close() do a socket shutdown rather than a sock_release").
> Prior to that commit, the autoclose feature would ensure that an
> idle connection would result in the socket being both disconnected and
> released, whereas now only gets disconnected.
> 
> While the current behaviour is harmless, it does leave the port bound
> until either RPC traffic resumes or the RPC client is shut down.
> 
> Reported-by: Steven Rostedt <rostedt@goodmis.org>
> Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
> ---
>  net/sunrpc/xprt.c     | 2 +-
>  net/sunrpc/xprtsock.c | 8 ++++++--
>  2 files changed, 7 insertions(+), 3 deletions(-)
> 
> diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
> index 3ca31f20b97c..ab5dd621ae0c 100644
> --- a/net/sunrpc/xprt.c
> +++ b/net/sunrpc/xprt.c
> @@ -611,8 +611,8 @@ static void xprt_autoclose(struct work_struct *work)
>  	struct rpc_xprt *xprt =
>  		container_of(work, struct rpc_xprt, task_cleanup);
>  
> -	xprt->ops->close(xprt);
>  	clear_bit(XPRT_CLOSE_WAIT, &xprt->state);
> +	xprt->ops->close(xprt);
>  	xprt_release_write(xprt, NULL);
>  }
>  
> diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
> index fda8ec8c74c0..75dcdadf0269 100644
> --- a/net/sunrpc/xprtsock.c
> +++ b/net/sunrpc/xprtsock.c
> @@ -634,10 +634,13 @@ static void xs_tcp_shutdown(struct rpc_xprt *xprt)
>  	struct sock_xprt *transport = container_of(xprt, struct sock_xprt, xprt);
>  	struct socket *sock = transport->sock;
>  
> -	if (sock != NULL) {
> +	if (sock == NULL)
> +		return;
> +	if (xprt_connected(xprt)) {
>  		kernel_sock_shutdown(sock, SHUT_RDWR);
>  		trace_rpc_socket_shutdown(xprt, sock);
> -	}
> +	} else
> +		xs_reset_transport(transport);
>  }
>  
>  /**
> @@ -786,6 +789,7 @@ static void xs_sock_mark_closed(struct rpc_xprt *xprt)
>  	xs_sock_reset_connection_flags(xprt);
>  	/* Mark transport as closed and wake up all pending tasks */
>  	xprt_disconnect_done(xprt);
> +	xprt_force_disconnect(xprt);
>  }
>  
>  /**


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [REGRESSION] NFS is creating a hidden port (left over from xs_bind() )
@ 2015-06-19 23:25                                   ` Trond Myklebust
  0 siblings, 0 replies; 77+ messages in thread
From: Trond Myklebust @ 2015-06-19 23:25 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Jeff Layton, Eric Dumazet, Anna Schumaker,
	Linux NFS Mailing List, Linux Network Devel Mailing List, LKML,
	Andrew Morton, Bruce James Fields

On Fri, 2015-06-19 at 18:14 -0400, Steven Rostedt wrote:
> On Fri, 19 Jun 2015 16:30:18 -0400
> Trond Myklebust <trond.myklebust@primarydata.com> wrote:
> 
> > Steven, how about something like the following patch?
> > 
> 
> OK, the box I'm running this on is using v4.0.5, can you make a patch
> based on that, as whatever you make needs to go to stable as well.

Is it causing any damage other than the rkhunter warning you reported?

> distcc[31554] ERROR: compile /home/rostedt/work/git/nobackup/linux
> -build.git/net/sunrpc/xprtsock.c on fedora/8 failed
> distcc[31554] (dcc_build_somewhere) Warning: remote compilation of 
> '/home/rostedt/work/git/nobackup/linux
> -build.git/net/sunrpc/xprtsock.c' failed, retrying locally
> distcc[31554] Warning: failed to distribute 
> /home/rostedt/work/git/nobackup/linux-build.git/net/sunrpc/xprtsock.c 
> to fedora/8, running locally instead
> /home/rostedt/work/git/nobackup/linux
> -build.git/net/sunrpc/xprtsock.c: In function 'xs_tcp_shutdown':
> /home/rostedt/work/git/nobackup/linux
> -build.git/net/sunrpc/xprtsock.c:643:3: error: implicit declaration 
> of function 'xs_reset_transport' [-Werror=implicit-function
> -declaration]
> /home/rostedt/work/git/nobackup/linux
> -build.git/net/sunrpc/xprtsock.c: At top level:
> /home/rostedt/work/git/nobackup/linux
> -build.git/net/sunrpc/xprtsock.c:825:13: warning: conflicting types 
> for 'xs_reset_transport' [enabled by default]
> /home/rostedt/work/git/nobackup/linux
> -build.git/net/sunrpc/xprtsock.c:825:13: error: static declaration of 
> 'xs_reset_transport' follows non-static declaration
> /home/rostedt/work/git/nobackup/linux
> -build.git/net/sunrpc/xprtsock.c:643:3: note: previous implicit 
> declaration of 'xs_reset_transport' was here
> cc1: some warnings being treated as errors
> distcc[31554] ERROR: compile /home/rostedt/work/git/nobackup/linux
> -build.git/net/sunrpc/xprtsock.c on localhost failed
> /home/rostedt/work/git/nobackup/linux
> -build.git/scripts/Makefile.build:258: recipe for target 
> 'net/sunrpc/xprtsock.o' failed
> make[3]: *** [net/sunrpc/xprtsock.o] Error 1

Sorry. I sent that one off too quickly. Try the following.

8<--------------------------------------------------------------
From 4876cc779ff525b9c2376d8076edf47815e71f2c Mon Sep 17 00:00:00 2001
From: Trond Myklebust <trond.myklebust@primarydata.com>
Date: Fri, 19 Jun 2015 16:17:57 -0400
Subject: [PATCH v2] SUNRPC: Ensure we release the TCP socket once it has been
 closed

This fixes a regression introduced by commit caf4ccd4e88cf2 ("SUNRPC:
Make xs_tcp_close() do a socket shutdown rather than a sock_release").
Prior to that commit, the autoclose feature would ensure that an
idle connection would result in the socket being both disconnected and
released, whereas now it only gets disconnected.

While the current behaviour is harmless, it does leave the port bound
until either RPC traffic resumes or the RPC client is shut down.

Reported-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
---
 net/sunrpc/xprt.c     |  2 +-
 net/sunrpc/xprtsock.c | 40 ++++++++++++++++++++++------------------
 2 files changed, 23 insertions(+), 19 deletions(-)

diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
index 3ca31f20b97c..ab5dd621ae0c 100644
--- a/net/sunrpc/xprt.c
+++ b/net/sunrpc/xprt.c
@@ -611,8 +611,8 @@ static void xprt_autoclose(struct work_struct *work)
 	struct rpc_xprt *xprt =
 		container_of(work, struct rpc_xprt, task_cleanup);
 
-	xprt->ops->close(xprt);
 	clear_bit(XPRT_CLOSE_WAIT, &xprt->state);
+	xprt->ops->close(xprt);
 	xprt_release_write(xprt, NULL);
 }
 
diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index fda8ec8c74c0..ee0715dfc3c7 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -623,24 +623,6 @@ process_status:
 }
 
 /**
- * xs_tcp_shutdown - gracefully shut down a TCP socket
- * @xprt: transport
- *
- * Initiates a graceful shutdown of the TCP socket by calling the
- * equivalent of shutdown(SHUT_RDWR);
- */
-static void xs_tcp_shutdown(struct rpc_xprt *xprt)
-{
-	struct sock_xprt *transport = container_of(xprt, struct sock_xprt, xprt);
-	struct socket *sock = transport->sock;
-
-	if (sock != NULL) {
-		kernel_sock_shutdown(sock, SHUT_RDWR);
-		trace_rpc_socket_shutdown(xprt, sock);
-	}
-}
-
-/**
  * xs_tcp_send_request - write an RPC request to a TCP socket
  * @task: address of RPC task that manages the state of an RPC request
  *
@@ -786,6 +768,7 @@ static void xs_sock_mark_closed(struct rpc_xprt *xprt)
 	xs_sock_reset_connection_flags(xprt);
 	/* Mark transport as closed and wake up all pending tasks */
 	xprt_disconnect_done(xprt);
+	xprt_force_disconnect(xprt);
 }
 
 /**
@@ -2103,6 +2086,27 @@ out:
 	xprt_wake_pending_tasks(xprt, status);
 }
 
+/**
+ * xs_tcp_shutdown - gracefully shut down a TCP socket
+ * @xprt: transport
+ *
+ * Initiates a graceful shutdown of the TCP socket by calling the
+ * equivalent of shutdown(SHUT_RDWR);
+ */
+static void xs_tcp_shutdown(struct rpc_xprt *xprt)
+{
+	struct sock_xprt *transport = container_of(xprt, struct sock_xprt, xprt);
+	struct socket *sock = transport->sock;
+
+	if (sock == NULL)
+		return;
+	if (xprt_connected(xprt)) {
+		kernel_sock_shutdown(sock, SHUT_RDWR);
+		trace_rpc_socket_shutdown(xprt, sock);
+	} else
+		xs_reset_transport(transport);
+}
+
 static int xs_tcp_finish_connecting(struct rpc_xprt *xprt, struct socket *sock)
 {
 	struct sock_xprt *transport = container_of(xprt, struct sock_xprt, xprt);
-- 
2.4.3

-- 
Trond Myklebust
Linux NFS client maintainer, PrimaryData
trond.myklebust@primarydata.com



^ permalink raw reply related	[flat|nested] 77+ messages in thread

* Re: [REGRESSION] NFS is creating a hidden port (left over from xs_bind() )
@ 2015-06-20  0:37                                     ` Steven Rostedt
  0 siblings, 0 replies; 77+ messages in thread
From: Steven Rostedt @ 2015-06-20  0:37 UTC (permalink / raw)
  To: Trond Myklebust
  Cc: Jeff Layton, Eric Dumazet, Anna Schumaker,
	Linux NFS Mailing List, Linux Network Devel Mailing List, LKML,
	Andrew Morton, Bruce James Fields

On Fri, 19 Jun 2015 19:25:59 -0400
Trond Myklebust <trond.myklebust@primarydata.com> wrote:

> On Fri, 2015-06-19 at 18:14 -0400, Steven Rostedt wrote:
> > On Fri, 19 Jun 2015 16:30:18 -0400
> > Trond Myklebust <trond.myklebust@primarydata.com> wrote:
> > 
> > > Steven, how about something like the following patch?
> > > 
> > 
> > OK, the box I'm running this on is using v4.0.5, can you make a patch
> > based on that, as whatever you make needs to go to stable as well.
> 
> Is it causing any other damage than the rkhunter warning you reported?

Well, not that I know of. Are you sure that this port will be
reconnected, and is not just a leak? Not sure if you could waste more
ports this way with connections to other machines. I only have my
wife's box connecting to this server. This server is actually a client to
my other boxes.

Although the rkhunter warning is the only thing that triggers, I still
would think this is a stable fix, especially if the port is leaked and
not taken again.
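
One way to tell those two cases apart would be to watch whether the
bound port gets picked up again once RPC traffic resumes. Roughly (the
port number and mount point below are just placeholders for whatever
unhide-tcp and the mount table show on the affected box):

  PORT=946                     # the port unhide-tcp flagged
  ss -tan | grep ":$PORT"      # nothing listed, yet the port is bound
  ls /mnt/nfs > /dev/null      # generate some RPC traffic on the mount
  ss -tan | grep ":$PORT"      # if the port is reused rather than leaked,
                               # the new connection shows up here with
                               # that source port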

> 
> Sorry. I sent that one off too quickly. Try the following.

This built, will be testing it shortly.

-- Steve

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [REGRESSION] NFS is creating a hidden port (left over from xs_bind() )
  2015-06-20  0:37                                     ` Steven Rostedt
  (?)
@ 2015-06-20  0:50                                       ` Steven Rostedt
  -1 siblings, 0 replies; 77+ messages in thread
From: Steven Rostedt @ 2015-06-20  0:50 UTC (permalink / raw)
  To: Trond Myklebust
  Cc: Jeff Layton, Eric Dumazet, Anna Schumaker,
	Linux NFS Mailing List, Linux Network Devel Mailing List, LKML,
	Andrew Morton, Bruce James Fields

On Fri, 19 Jun 2015 20:37:45 -0400
Steven Rostedt <rostedt@goodmis.org> wrote:


> > Is it causing any other damage than the rkhunter warning you reported?
> 
> Well, not that I know of. Are you sure that this port will be
> reconnected, and is not just a leak? Not sure if you could waste more
> ports this way with connections to other machines. I only have my
> wife's box connecting to this server. This server is actually a client to
> my other boxes.
> 
> Although the rkhunter warning is the only thing that triggers, I still
> would think this is a stable fix, especially if the port is leaked and
> not taken again.

I did some experiments. If I unmount the directories from my wife's
machine and remount them, the port that was hidden is fully closed.
Maybe it's not that big of a deal after all.
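
In shell terms the experiment boils down to something like this (the
export and mount point are placeholders for my actual setup):

  # on this server: note the hidden port
  unhide-tcp
  # on the client box that mounts from it:
  umount /mnt/export
  mount server:/export /mnt/export
  # back on this server: the previously hidden port is no longer reported
  unhide-tcp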

-- Steve

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [REGRESSION] NFS is creating a hidden port (left over from xs_bind() )
@ 2015-06-20  1:27                                     ` Steven Rostedt
  0 siblings, 0 replies; 77+ messages in thread
From: Steven Rostedt @ 2015-06-20  1:27 UTC (permalink / raw)
  To: Trond Myklebust
  Cc: Jeff Layton, Eric Dumazet, Anna Schumaker,
	Linux NFS Mailing List, Linux Network Devel Mailing List, LKML,
	Andrew Morton, Bruce James Fields

On Fri, 19 Jun 2015 19:25:59 -0400
Trond Myklebust <trond.myklebust@primarydata.com> wrote:


> 8<--------------------------------------------------------------
> >From 4876cc779ff525b9c2376d8076edf47815e71f2c Mon Sep 17 00:00:00 2001
> From: Trond Myklebust <trond.myklebust@primarydata.com>
> Date: Fri, 19 Jun 2015 16:17:57 -0400
> Subject: [PATCH v2] SUNRPC: Ensure we release the TCP socket once it has been
>  closed
> 
> This fixes a regression introduced by commit caf4ccd4e88cf2 ("SUNRPC:
> Make xs_tcp_close() do a socket shutdown rather than a sock_release").
> Prior to that commit, the autoclose feature would ensure that an
> idle connection would result in the socket being both disconnected and
> released, whereas now it only gets disconnected.
> 
> While the current behaviour is harmless, it does leave the port bound
> until either RPC traffic resumes or the RPC client is shut down.

Is there a way to test RPC traffic resuming? I'd like to try that before
declaring this bug harmless.

> 
> Reported-by: Steven Rostedt <rostedt@goodmis.org>

The problem appears to go away with this patch.

Tested-by: Steven Rostedt <rostedt@goodmis.org>

Thanks a lot!

-- Steve

> Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
> ---
>  net/sunrpc/xprt.c     |  2 +-
>  net/sunrpc/xprtsock.c | 40 ++++++++++++++++++++++------------------
>  2 files changed, 23 insertions(+), 19 deletions(-)
> 
> diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
> index 3ca31f20b97c..ab5dd621ae0c 100644
> --- a/net/sunrpc/xprt.c
> +++ b/net/sunrpc/xprt.c
> @@ -611,8 +611,8 @@ static void xprt_autoclose(struct work_struct *work)
>  	struct rpc_xprt *xprt =
>  		container_of(work, struct rpc_xprt, task_cleanup);
>  
> -	xprt->ops->close(xprt);
>  	clear_bit(XPRT_CLOSE_WAIT, &xprt->state);
> +	xprt->ops->close(xprt);
>  	xprt_release_write(xprt, NULL);
>  }
>  
> diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
> index fda8ec8c74c0..ee0715dfc3c7 100644
> --- a/net/sunrpc/xprtsock.c
> +++ b/net/sunrpc/xprtsock.c
> @@ -623,24 +623,6 @@ process_status:
>  }
>  
>  /**
> - * xs_tcp_shutdown - gracefully shut down a TCP socket
> - * @xprt: transport
> - *
> - * Initiates a graceful shutdown of the TCP socket by calling the
> - * equivalent of shutdown(SHUT_RDWR);
> - */
> -static void xs_tcp_shutdown(struct rpc_xprt *xprt)
> -{
> -	struct sock_xprt *transport = container_of(xprt, struct sock_xprt, xprt);
> -	struct socket *sock = transport->sock;
> -
> -	if (sock != NULL) {
> -		kernel_sock_shutdown(sock, SHUT_RDWR);
> -		trace_rpc_socket_shutdown(xprt, sock);
> -	}
> -}
> -
> -/**
>   * xs_tcp_send_request - write an RPC request to a TCP socket
>   * @task: address of RPC task that manages the state of an RPC request
>   *
> @@ -786,6 +768,7 @@ static void xs_sock_mark_closed(struct rpc_xprt *xprt)
>  	xs_sock_reset_connection_flags(xprt);
>  	/* Mark transport as closed and wake up all pending tasks */
>  	xprt_disconnect_done(xprt);
> +	xprt_force_disconnect(xprt);
>  }
>  
>  /**
> @@ -2103,6 +2086,27 @@ out:
>  	xprt_wake_pending_tasks(xprt, status);
>  }
>  
> +/**
> + * xs_tcp_shutdown - gracefully shut down a TCP socket
> + * @xprt: transport
> + *
> + * Initiates a graceful shutdown of the TCP socket by calling the
> + * equivalent of shutdown(SHUT_RDWR);
> + */
> +static void xs_tcp_shutdown(struct rpc_xprt *xprt)
> +{
> +	struct sock_xprt *transport = container_of(xprt, struct sock_xprt, xprt);
> +	struct socket *sock = transport->sock;
> +
> +	if (sock == NULL)
> +		return;
> +	if (xprt_connected(xprt)) {
> +		kernel_sock_shutdown(sock, SHUT_RDWR);
> +		trace_rpc_socket_shutdown(xprt, sock);
> +	} else
> +		xs_reset_transport(transport);
> +}
> +
>  static int xs_tcp_finish_connecting(struct rpc_xprt *xprt, struct socket *sock)
>  {
>  	struct sock_xprt *transport = container_of(xprt, struct sock_xprt, xprt);


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [REGRESSION] NFS is creating a hidden port (left over from xs_bind() )
@ 2015-06-20  2:44                                       ` Trond Myklebust
  0 siblings, 0 replies; 77+ messages in thread
From: Trond Myklebust @ 2015-06-20  2:44 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Jeff Layton, Eric Dumazet, Anna Schumaker,
	Linux NFS Mailing List, Linux Network Devel Mailing List, LKML,
	Andrew Morton, Bruce James Fields

On Fri, Jun 19, 2015 at 9:27 PM, Steven Rostedt <rostedt@goodmis.org> wrote:
> On Fri, 19 Jun 2015 19:25:59 -0400
> Trond Myklebust <trond.myklebust@primarydata.com> wrote:
>
>
>> 8<--------------------------------------------------------------
>> >From 4876cc779ff525b9c2376d8076edf47815e71f2c Mon Sep 17 00:00:00 2001
>> From: Trond Myklebust <trond.myklebust@primarydata.com>
>> Date: Fri, 19 Jun 2015 16:17:57 -0400
>> Subject: [PATCH v2] SUNRPC: Ensure we release the TCP socket once it has been
>>  closed
>>
>> This fixes a regression introduced by commit caf4ccd4e88cf2 ("SUNRPC:
>> Make xs_tcp_close() do a socket shutdown rather than a sock_release").
>> Prior to that commit, the autoclose feature would ensure that an
>> idle connection would result in the socket being both disconnected and
>> released, whereas now it only gets disconnected.
>>
>> While the current behaviour is harmless, it does leave the port bound
>> until either RPC traffic resumes or the RPC client is shut down.
>
> Is there a way to test RPC traffic resuming? I'd like to try that before
> declaring this bug harmless.

You should be seeing the same issue if you mount an NFSv3 partition.
After about 5 minutes of inactivity, the client will close down the
connection to the server, and rkhunter should again see the phantom
socket. If you then try to access the partition, the RPC layer should
immediately release the socket and establish a new connection on the
same port.
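
A rough way to exercise that sequence (the export, mount point and exact
idle time are placeholders):

  mount -t nfs -o vers=3 server:/export /mnt/test
  ls /mnt/test > /dev/null     # establish the RPC connection
  sleep 360                    # leave it idle past the ~5 minute timeout
  unhide-tcp                   # the bound-but-closed port should show up
  ls /mnt/test > /dev/null     # resume traffic
  ss -tan | grep ESTAB         # a new connection, which should reuse the
                               # same source port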

Cheers
  Trond

^ permalink raw reply	[flat|nested] 77+ messages in thread

* It's back! (Re: [REGRESSION] NFS is creating a hidden port (left over from xs_bind() ))
  2015-06-20  1:27                                     ` Steven Rostedt
                                                       ` (2 preceding siblings ...)
  (?)
@ 2016-06-22 16:41                                     ` Steven Rostedt
  -1 siblings, 0 replies; 77+ messages in thread
From: Steven Rostedt @ 2016-06-22 16:41 UTC (permalink / raw)
  To: Trond Myklebust
  Cc: Jeff Layton, Eric Dumazet, Anna Schumaker,
	Linux NFS Mailing List, Linux Network Devel Mailing List, LKML,
	Andrew Morton, Bruce James Fields

[-- Attachment #1: Type: text/plain, Size: 3957 bytes --]

I've hit this again. Not sure when it started, but I applied my old
debug trace_printk() patch (attached) and rebooted (4.5.7). I just
tested the latest kernel from Linus's tree (from last night's pull), and
it still gives me the problem.

Here's the trace I have:

    kworker/3:1H-134   [003] ..s.    61.036129: inet_csk_get_port: snum 805
    kworker/3:1H-134   [003] ..s.    61.036135: <stack trace>
 => sched_clock
 => inet_addr_type_table
 => security_capable
 => inet_bind
 => xs_bind
 => release_sock
 => sock_setsockopt
 => __sock_create
 => xs_create_sock.isra.19
 => xs_tcp_setup_socket
 => process_one_work
 => worker_thread
 => worker_thread
 => kthread
 => ret_from_fork
 => kthread
    kworker/3:1H-134   [003] ..s.    61.036136: inet_bind_hash: add 805
    kworker/3:1H-134   [003] ..s.    61.036138: <stack trace>
 => inet_csk_get_port
 => sched_clock
 => inet_addr_type_table
 => security_capable
 => inet_bind
 => xs_bind
 => release_sock
 => sock_setsockopt
 => __sock_create
 => xs_create_sock.isra.19
 => xs_tcp_setup_socket
 => process_one_work
 => worker_thread
 => worker_thread
 => kthread
 => ret_from_fork
 => kthread
    kworker/3:1H-134   [003] ....    61.036139: xs_bind: RPC:       xs_bind 4.136.255.255:805: ok (0)
    kworker/3:1H-134   [003] ....    61.036140: xs_tcp_setup_socket: RPC:       worker connecting xprt ffff880407eca800 via tcp to 192.168.23.22 (port 43651)
    kworker/3:1H-134   [003] ....    61.036162: xs_tcp_setup_socket: RPC:       ffff880407eca800 connect status 115 connected 0 sock state 2
          <idle>-0     [001] ..s.    61.036450: xs_tcp_state_change: RPC:       xs_tcp_state_change client ffff880407eca800...
          <idle>-0     [001] ..s.    61.036452: xs_tcp_state_change: RPC:       state 1 conn 0 dead 0 zapped 1 sk_shutdown 0
    kworker/1:1H-136   [001] ....    61.036476: xprt_connect_status: RPC:    43 xprt_connect_status: retrying
    kworker/1:1H-136   [001] ....    61.036478: xprt_prepare_transmit: RPC:    43 xprt_prepare_transmit
    kworker/1:1H-136   [001] ....    61.036479: xprt_transmit: RPC:    43 xprt_transmit(72)
    kworker/1:1H-136   [001] ....    61.036486: xs_tcp_send_request: RPC:       xs_tcp_send_request(72) = 0
    kworker/1:1H-136   [001] ....    61.036487: xprt_transmit: RPC:    43 xmit complete
          <idle>-0     [001] ..s.    61.036789: xs_tcp_data_ready: RPC:       xs_tcp_data_ready...
    kworker/1:1H-136   [001] ....    61.036798: xs_tcp_data_recv: RPC:       xs_tcp_data_recv started
    kworker/1:1H-136   [001] ....    61.036799: xs_tcp_data_recv: RPC:       reading TCP record fragment of length 24
    kworker/1:1H-136   [001] ....    61.036799: xs_tcp_data_recv: RPC:       reading XID (4 bytes)
    kworker/1:1H-136   [001] ....    61.036800: xs_tcp_data_recv: RPC:       reading request with XID 2f4c3f88
    kworker/1:1H-136   [001] ....    61.036800: xs_tcp_data_recv: RPC:       reading CALL/REPLY flag (4 bytes)
    kworker/1:1H-136   [001] ....    61.036801: xs_tcp_data_recv: RPC:       read reply XID 2f4c3f88
    kworker/1:1H-136   [001] ..s.    61.036801: xs_tcp_data_recv: RPC:       XID 2f4c3f88 read 16 bytes
    kworker/1:1H-136   [001] ..s.    61.036802: xs_tcp_data_recv: RPC:       xprt = ffff880407eca800, tcp_copied = 24, tcp_offset = 24, tcp_reclen = 24
    kworker/1:1H-136   [001] ..s.    61.036802: xprt_complete_rqst: RPC:    43 xid 2f4c3f88 complete (24 bytes received)
    kworker/1:1H-136   [001] ....    61.036803: xs_tcp_data_recv: RPC:       xs_tcp_data_recv done
    kworker/1:1H-136   [001] ....    61.036812: xprt_release: RPC:    43 release request ffff88040b270800


# unhide-tcp 
Unhide-tcp 20130526
Copyright © 2013 Yago Jesus & Patrick Gouin
License GPLv3+ : GNU GPL version 3 or later
http://www.unhide-forensics.info
Used options: 
[*]Starting TCP checking

Found Hidden port that not appears in ss: 805

-- Steve

[-- Attachment #2: debug-hidden-port-4.7.patch --]
[-- Type: text/x-patch, Size: 2378 bytes --]

---
 net/ipv4/inet_connection_sock.c |    4 ++++
 net/ipv4/inet_hashtables.c      |    5 +++++
 net/sunrpc/xprt.c               |    5 +++++
 net/sunrpc/xprtsock.c           |    5 +++++
 5 files changed, 22 insertions(+)

Index: linux-build.git/net/ipv4/inet_connection_sock.c
===================================================================
--- linux-build.git.orig/net/ipv4/inet_connection_sock.c	2016-06-22 11:55:05.952267493 -0400
+++ linux-build.git/net/ipv4/inet_connection_sock.c	2016-06-22 11:56:20.002662092 -0400
@@ -232,6 +232,10 @@ tb_found:
 		}
 	}
 success:
+	if (!current->mm) {
+		trace_printk("snum %d\n", snum);
+		trace_dump_stack(1);
+	}
 	if (!inet_csk(sk)->icsk_bind_hash)
 		inet_bind_hash(sk, tb, port);
 	WARN_ON(inet_csk(sk)->icsk_bind_hash != tb);
Index: linux-build.git/net/ipv4/inet_hashtables.c
===================================================================
--- linux-build.git.orig/net/ipv4/inet_hashtables.c	2016-06-22 11:55:05.952267493 -0400
+++ linux-build.git/net/ipv4/inet_hashtables.c	2016-06-22 11:55:05.948267360 -0400
@@ -93,6 +93,11 @@ void inet_bind_bucket_destroy(struct kme
 void inet_bind_hash(struct sock *sk, struct inet_bind_bucket *tb,
 		    const unsigned short snum)
 {
+	if (!current->mm) {
+		trace_printk("add %d\n", snum);
+		trace_dump_stack(1);
+	}
+
 	inet_sk(sk)->inet_num = snum;
 	sk_add_bind_node(sk, &tb->owners);
 	tb->num_owners++;
Index: linux-build.git/net/sunrpc/xprt.c
===================================================================
--- linux-build.git.orig/net/sunrpc/xprt.c	2016-06-22 11:55:05.952267493 -0400
+++ linux-build.git/net/sunrpc/xprt.c	2016-06-22 11:55:05.948267360 -0400
@@ -54,6 +54,11 @@
 
 #include "sunrpc.h"
 
+#undef dprintk
+#undef dprintk_rcu
+#define dprintk(args...)	trace_printk(args)
+#define dprintk_rcu(args...)	trace_printk(args)
+
 /*
  * Local variables
  */
Index: linux-build.git/net/sunrpc/xprtsock.c
===================================================================
--- linux-build.git.orig/net/sunrpc/xprtsock.c	2016-06-22 11:55:05.952267493 -0400
+++ linux-build.git/net/sunrpc/xprtsock.c	2016-06-22 11:55:05.948267360 -0400
@@ -51,6 +51,11 @@
 
 #include "sunrpc.h"
 
+#undef dprintk
+#undef dprintk_rcu
+#define dprintk(args...)	trace_printk(args)
+#define dprintk_rcu(args...)	trace_printk(args)
+
 static void xs_close(struct rpc_xprt *xprt);
 
 /*

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: It's back! (Re: [REGRESSION] NFS is creating a hidden port (left over from xs_bind() ))
  2018-02-06  9:20   ` Daniel Reichelt
@ 2018-02-06 19:26     ` Trond Myklebust
  0 siblings, 0 replies; 77+ messages in thread
From: Trond Myklebust @ 2018-02-06 19:26 UTC (permalink / raw)
  To: rostedt, hacking; +Cc: linux-kernel, linux-nfs

[-- Attachment #1: Type: text/plain, Size: 3799 bytes --]

On Tue, 2018-02-06 at 10:20 +0100, Daniel Reichelt wrote:
> On 02/06/2018 01:24 AM, Trond Myklebust wrote:
> > Does the following fix the issue?
> > 
> > 8<-----------------------------------------------
> > From 9b30889c548a4d45bfe6226e58de32504c1d682f Mon Sep 17 00:00:00 2001
> > From: Trond Myklebust <trond.myklebust@primarydata.com>
> > Date: Mon, 5 Feb 2018 10:20:06 -0500
> > Subject: [PATCH] SUNRPC: Ensure we always close the socket after a connection
> >  shuts down
> > 
> > Ensure that we release the TCP socket once it is in the TCP_CLOSE or
> > TCP_TIME_WAIT state (and only then) so that we don't confuse rkhunter
> > and its ilk.
> > 
> > Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
> > ---
> >  net/sunrpc/xprtsock.c | 23 ++++++++++-------------
> >  1 file changed, 10 insertions(+), 13 deletions(-)
> > 
> > diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
> > index 18803021f242..5d0108172ed3 100644
> > --- a/net/sunrpc/xprtsock.c
> > +++ b/net/sunrpc/xprtsock.c
> > @@ -807,13 +807,6 @@ static void xs_sock_reset_connection_flags(struct rpc_xprt *xprt)
> >  	smp_mb__after_atomic();
> >  }
> >  
> > -static void xs_sock_mark_closed(struct rpc_xprt *xprt)
> > -{
> > -	xs_sock_reset_connection_flags(xprt);
> > -	/* Mark transport as closed and wake up all pending tasks */
> > -	xprt_disconnect_done(xprt);
> > -}
> > -
> >  /**
> >   * xs_error_report - callback to handle TCP socket state errors
> >   * @sk: socket
> > @@ -833,9 +826,6 @@ static void xs_error_report(struct sock *sk)
> >  	err = -sk->sk_err;
> >  	if (err == 0)
> >  		goto out;
> > -	/* Is this a reset event? */
> > -	if (sk->sk_state == TCP_CLOSE)
> > -		xs_sock_mark_closed(xprt);
> >  	dprintk("RPC:       xs_error_report client %p, error=%d...\n",
> >  			xprt, -err);
> >  	trace_rpc_socket_error(xprt, sk->sk_socket, err);
> > @@ -1655,9 +1645,11 @@ static void xs_tcp_state_change(struct sock *sk)
> >  		if (test_and_clear_bit(XPRT_SOCK_CONNECTING,
> >  					&transport->sock_state))
> >  			xprt_clear_connecting(xprt);
> > +		clear_bit(XPRT_CLOSING, &xprt->state);
> >  		if (sk->sk_err)
> >  			xprt_wake_pending_tasks(xprt, -sk->sk_err);
> > -		xs_sock_mark_closed(xprt);
> > +		/* Trigger the socket release */
> > +		xs_tcp_force_close(xprt);
> >  	}
> >   out:
> >  	read_unlock_bh(&sk->sk_callback_lock);
> > @@ -2265,14 +2257,19 @@ static void xs_tcp_shutdown(struct rpc_xprt *xprt)
> >  {
> >  	struct sock_xprt *transport = container_of(xprt, struct sock_xprt, xprt);
> >  	struct socket *sock = transport->sock;
> > +	int skst = transport->inet ? transport->inet->sk_state : TCP_CLOSE;
> >  
> >  	if (sock == NULL)
> >  		return;
> > -	if (xprt_connected(xprt)) {
> > +	switch (skst) {
> > +	default:
> >  		kernel_sock_shutdown(sock, SHUT_RDWR);
> >  		trace_rpc_socket_shutdown(xprt, sock);
> > -	} else
> > +		break;
> > +	case TCP_CLOSE:
> > +	case TCP_TIME_WAIT:
> >  		xs_reset_transport(transport);
> > +	}
> >  }
> >  
> >  static void xs_tcp_set_socket_timeouts(struct rpc_xprt *xprt,
> > 
> 
> 
> Previously, I've seen hidden ports within 5-6 minutes after re-starting
> the nfsd and re-mounting nfs-exports on clients.
> 
> With this patch applied, I don't see any hidden ports after 15mins. I
> guess it's a valid fix.

For the record, the intention of the patch is not to adjust or correct
any connection timeout values. It is merely to ensure that once the
socket layer has detected the connection breakage, and the socket is
therefore no longer usable by the RPC client, we release the socket.

-- 
Trond Myklebust
Linux NFS client maintainer, PrimaryData
trond.myklebust@primarydata.com

[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: It's back! (Re: [REGRESSION] NFS is creating a hidden port (left over from xs_bind() ))
  2018-02-06  0:24 ` Trond Myklebust
@ 2018-02-06  9:20   ` Daniel Reichelt
  2018-02-06 19:26     ` Trond Myklebust
  0 siblings, 1 reply; 77+ messages in thread
From: Daniel Reichelt @ 2018-02-06  9:20 UTC (permalink / raw)
  To: Trond Myklebust, rostedt; +Cc: linux-kernel, linux-nfs


[-- Attachment #1.1: Type: text/plain, Size: 3159 bytes --]

On 02/06/2018 01:24 AM, Trond Myklebust wrote:
> Does the following fix the issue?
> 
> 8<-----------------------------------------------
> From 9b30889c548a4d45bfe6226e58de32504c1d682f Mon Sep 17 00:00:00 2001
> From: Trond Myklebust <trond.myklebust@primarydata.com>
> Date: Mon, 5 Feb 2018 10:20:06 -0500
> Subject: [PATCH] SUNRPC: Ensure we always close the socket after a connection
>  shuts down
> 
> Ensure that we release the TCP socket once it is in the TCP_CLOSE or
> TCP_TIME_WAIT state (and only then) so that we don't confuse rkhunter
> and its ilk.
> 
> Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
> ---
>  net/sunrpc/xprtsock.c | 23 ++++++++++-------------
>  1 file changed, 10 insertions(+), 13 deletions(-)
> 
> diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
> index 18803021f242..5d0108172ed3 100644
> --- a/net/sunrpc/xprtsock.c
> +++ b/net/sunrpc/xprtsock.c
> @@ -807,13 +807,6 @@ static void xs_sock_reset_connection_flags(struct rpc_xprt *xprt)
>  	smp_mb__after_atomic();
>  }
>  
> -static void xs_sock_mark_closed(struct rpc_xprt *xprt)
> -{
> -	xs_sock_reset_connection_flags(xprt);
> -	/* Mark transport as closed and wake up all pending tasks */
> -	xprt_disconnect_done(xprt);
> -}
> -
>  /**
>   * xs_error_report - callback to handle TCP socket state errors
>   * @sk: socket
> @@ -833,9 +826,6 @@ static void xs_error_report(struct sock *sk)
>  	err = -sk->sk_err;
>  	if (err == 0)
>  		goto out;
> -	/* Is this a reset event? */
> -	if (sk->sk_state == TCP_CLOSE)
> -		xs_sock_mark_closed(xprt);
>  	dprintk("RPC:       xs_error_report client %p, error=%d...\n",
>  			xprt, -err);
>  	trace_rpc_socket_error(xprt, sk->sk_socket, err);
> @@ -1655,9 +1645,11 @@ static void xs_tcp_state_change(struct sock *sk)
>  		if (test_and_clear_bit(XPRT_SOCK_CONNECTING,
>  					&transport->sock_state))
>  			xprt_clear_connecting(xprt);
> +		clear_bit(XPRT_CLOSING, &xprt->state);
>  		if (sk->sk_err)
>  			xprt_wake_pending_tasks(xprt, -sk->sk_err);
> -		xs_sock_mark_closed(xprt);
> +		/* Trigger the socket release */
> +		xs_tcp_force_close(xprt);
>  	}
>   out:
>  	read_unlock_bh(&sk->sk_callback_lock);
> @@ -2265,14 +2257,19 @@ static void xs_tcp_shutdown(struct rpc_xprt *xprt)
>  {
>  	struct sock_xprt *transport = container_of(xprt, struct sock_xprt, xprt);
>  	struct socket *sock = transport->sock;
> +	int skst = transport->inet ? transport->inet->sk_state : TCP_CLOSE;
>  
>  	if (sock == NULL)
>  		return;
> -	if (xprt_connected(xprt)) {
> +	switch (skst) {
> +	default:
>  		kernel_sock_shutdown(sock, SHUT_RDWR);
>  		trace_rpc_socket_shutdown(xprt, sock);
> -	} else
> +		break;
> +	case TCP_CLOSE:
> +	case TCP_TIME_WAIT:
>  		xs_reset_transport(transport);
> +	}
>  }
>  
>  static void xs_tcp_set_socket_timeouts(struct rpc_xprt *xprt,
> 


Previously, I've seen hidden ports within 5-6 minutes after re-starting
the nfsd and re-mounting nfs-exports on clients.

With this patch applied, I don't see any hidden ports after 15mins. I
guess it's a valid fix.
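
A simple way to keep watching for a recurrence is just to re-run the
scanner periodically, e.g.:

  for i in $(seq 15); do date; unhide-tcp; sleep 60; done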


Thank you!

Daniel


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 866 bytes --]

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: It's back! (Re: [REGRESSION] NFS is creating a hidden port (left over from xs_bind() ))
  2018-02-02 21:31 Daniel Reichelt
@ 2018-02-06  0:24 ` Trond Myklebust
  2018-02-06  9:20   ` Daniel Reichelt
  0 siblings, 1 reply; 77+ messages in thread
From: Trond Myklebust @ 2018-02-06  0:24 UTC (permalink / raw)
  To: rostedt, hacking; +Cc: linux-kernel, linux-nfs

[-- Attachment #1: Type: text/plain, Size: 3592 bytes --]

On Fri, 2018-02-02 at 22:31 +0100, Daniel Reichelt wrote:
> Hi Trond, Steven,
> 
> Ever since I switched from Debian Jessie to Stretch last summer, I've
> been seeing the very same hidden ports on an NFS server as described in
> [1], which is a follow-up to [2].
> 
> Your patch ([3], [4]) solved the issue back then. Later on, you changed
> that fix again in [5], which led to the situation we're seeing today.
> 
> Reverting 0b0ab51 fixes the issue for me.
> 
> Let me know if you need more info.
> 
> 
> 
> Thanks
> Daniel
> 
> 
> [1] https://lkml.org/lkml/2016/6/30/341
> [2] https://lkml.org/lkml/2015/6/11/803
> [3] https://lkml.org/lkml/2015/6/19/759
> [4] 4876cc779ff525b9c2376d8076edf47815e71f2c
> [5] 4b0ab51db32eba0f48b7618254742f143364a28d

Does the following fix the issue?

8<-----------------------------------------------
From 9b30889c548a4d45bfe6226e58de32504c1d682f Mon Sep 17 00:00:00 2001
From: Trond Myklebust <trond.myklebust@primarydata.com>
Date: Mon, 5 Feb 2018 10:20:06 -0500
Subject: [PATCH] SUNRPC: Ensure we always close the socket after a connection
 shuts down

Ensure that we release the TCP socket once it is in the TCP_CLOSE or
TCP_TIME_WAIT state (and only then) so that we don't confuse rkhunter
and its ilk.

Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
---
 net/sunrpc/xprtsock.c | 23 ++++++++++-------------
 1 file changed, 10 insertions(+), 13 deletions(-)

diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index 18803021f242..5d0108172ed3 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -807,13 +807,6 @@ static void xs_sock_reset_connection_flags(struct rpc_xprt *xprt)
 	smp_mb__after_atomic();
 }
 
-static void xs_sock_mark_closed(struct rpc_xprt *xprt)
-{
-	xs_sock_reset_connection_flags(xprt);
-	/* Mark transport as closed and wake up all pending tasks */
-	xprt_disconnect_done(xprt);
-}
-
 /**
  * xs_error_report - callback to handle TCP socket state errors
  * @sk: socket
@@ -833,9 +826,6 @@ static void xs_error_report(struct sock *sk)
 	err = -sk->sk_err;
 	if (err == 0)
 		goto out;
-	/* Is this a reset event? */
-	if (sk->sk_state == TCP_CLOSE)
-		xs_sock_mark_closed(xprt);
 	dprintk("RPC:       xs_error_report client %p, error=%d...\n",
 			xprt, -err);
 	trace_rpc_socket_error(xprt, sk->sk_socket, err);
@@ -1655,9 +1645,11 @@ static void xs_tcp_state_change(struct sock *sk)
 		if (test_and_clear_bit(XPRT_SOCK_CONNECTING,
 					&transport->sock_state))
 			xprt_clear_connecting(xprt);
+		clear_bit(XPRT_CLOSING, &xprt->state);
 		if (sk->sk_err)
 			xprt_wake_pending_tasks(xprt, -sk->sk_err);
-		xs_sock_mark_closed(xprt);
+		/* Trigger the socket release */
+		xs_tcp_force_close(xprt);
 	}
  out:
 	read_unlock_bh(&sk->sk_callback_lock);
@@ -2265,14 +2257,19 @@ static void xs_tcp_shutdown(struct rpc_xprt *xprt)
 {
 	struct sock_xprt *transport = container_of(xprt, struct sock_xprt, xprt);
 	struct socket *sock = transport->sock;
+	int skst = transport->inet ? transport->inet->sk_state : TCP_CLOSE;
 
 	if (sock == NULL)
 		return;
-	if (xprt_connected(xprt)) {
+	switch (skst) {
+	default:
 		kernel_sock_shutdown(sock, SHUT_RDWR);
 		trace_rpc_socket_shutdown(xprt, sock);
-	} else
+		break;
+	case TCP_CLOSE:
+	case TCP_TIME_WAIT:
 		xs_reset_transport(transport);
+	}
 }
 
 static void xs_tcp_set_socket_timeouts(struct rpc_xprt *xprt,
-- 
2.14.3

-- 
Trond Myklebust
Linux NFS client maintainer, PrimaryData
trond.myklebust@primarydata.com

[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply related	[flat|nested] 77+ messages in thread

* Re: It's back! (Re: [REGRESSION] NFS is creating a hidden port (left over from xs_bind() ))
@ 2018-02-02 21:31 Daniel Reichelt
  2018-02-06  0:24 ` Trond Myklebust
  0 siblings, 1 reply; 77+ messages in thread
From: Daniel Reichelt @ 2018-02-02 21:31 UTC (permalink / raw)
  To: Trond Myklebust, Steven Rostedt; +Cc: Linux NFS Mailing List, LKML


[-- Attachment #1.1: Type: text/plain, Size: 672 bytes --]

Hi Trond, Steven,

Ever since I switched from Debian Jessie to Stretch last summer, I've
been seeing the very same hidden ports on an NFS server as described in
[1], which is a follow-up to [2].

Your patch ([3], [4]) solved the issue back then. Later on, you changed
that fix again in [5], which led to the situation we're seeing today.

Reverting 0b0ab51 fixes the issue for me.

Let me know if you need more info.



Thanks
Daniel


[1] https://lkml.org/lkml/2016/6/30/341
[2] https://lkml.org/lkml/2015/6/11/803
[3] https://lkml.org/lkml/2015/6/19/759
[4] 4876cc779ff525b9c2376d8076edf47815e71f2c
[5] 4b0ab51db32eba0f48b7618254742f143364a28d


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 866 bytes --]

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: It's back! (Re: [REGRESSION] NFS is creating a hidden port (left over from xs_bind() ))
  2016-06-30 20:07         ` Steven Rostedt
  (?)
@ 2016-06-30 21:56         ` Steven Rostedt
  -1 siblings, 0 replies; 77+ messages in thread
From: Steven Rostedt @ 2016-06-30 21:56 UTC (permalink / raw)
  To: Trond Myklebust
  Cc: Jeff Layton, Eric Dumazet, Schumaker Anna,
	Linux NFS Mailing List, Linux Network Devel Mailing List, LKML,
	Andrew Morton, Fields Bruce

On Thu, 30 Jun 2016 16:07:26 -0400
Steven Rostedt <rostedt@goodmis.org> wrote:

> I can reproduce this by having the client unmount and remount the
> directory.

It gets even more interesting. When I unmount the directory, the hidden
port does not go away. It is still there. But if I mount it again, it
goes away (until it times out again).

Even more info:

When I first mount it, it creates 3 sockets, one of which is
immediately closed:

tcp        0      0 192.168.23.9:892        192.168.23.22:44672     TIME_WAIT   -                   
tcp        0      0 192.168.23.9:2049       192.168.23.22:815       ESTABLISHED -                   
tcp        0      0 192.168.23.9:754        192.168.23.22:44672     ESTABLISHED -                   

(192.168.23.22 is the machine remotely mounting a directory from the
server 192.168.23.9)

The trace of port 892 is this:

   kworker/u32:1-13473 [000] ....  4093.915114: xs_setup_tcp: RPC:       set up xprt to 192.168.23.22 (port 44672) via tcp
   kworker/u32:1-13473 [000] ....  4093.915122: xprt_create_transport: RPC:       created transport ffff8803b1c38000 with 65536 slots
    kworker/0:1H-129   [000] ....  4093.915152: xprt_alloc_slot: RPC:    47 reserved req ffff88040b27ca00 xid c50ccaff
    kworker/0:1H-129   [000] ....  4093.915157: xprt_connect: RPC:    47 xprt_connect xprt ffff8803b1c38000 is not connected
    kworker/0:1H-129   [000] ....  4093.915159: xs_connect: RPC:       xs_connect scheduled xprt ffff8803b1c38000
    kworker/0:1H-129   [000] ..s.  4093.915170: inet_csk_get_port: snum 892
    kworker/0:1H-129   [000] ..s.  4093.915177: <stack trace>
 => sched_clock
 => inet_addr_type_table
 => security_capable
 => inet_bind
 => xs_bind
 => release_sock
 => sock_setsockopt
 => __sock_create
 => xs_create_sock.isra.19
 => xs_tcp_setup_socket
 => process_one_work
 => worker_thread
 => worker_thread
 => kthread
 => ret_from_fork
 => kthread
    kworker/0:1H-129   [000] ..s.  4093.915178: inet_bind_hash: add 892 ffff8803bb9b5cc0
    kworker/0:1H-129   [000] ..s.  4093.915184: <stack trace>
 => inet_csk_get_port
 => sched_clock
 => inet_addr_type_table
 => security_capable
 => inet_bind
 => xs_bind
 => release_sock
 => sock_setsockopt
 => __sock_create
 => xs_create_sock.isra.19
 => xs_tcp_setup_socket
 => process_one_work
 => worker_thread
 => worker_thread
 => kthread
 => ret_from_fork
 => kthread
    kworker/0:1H-129   [000] ....  4093.915185: xs_bind: RPC:       xs_bind 4.136.255.255:892: ok (0)
    kworker/0:1H-129   [000] ....  4093.915186: xs_tcp_setup_socket: RPC:       worker connecting xprt ffff8803b1c38000 via tcp to 192.168.23.22 (port 44672)
    kworker/0:1H-129   [000] ....  4093.915221: xs_tcp_setup_socket: RPC:       ffff8803b1c38000 connect status 115 connected 0 sock state 2
          <idle>-0     [003] ..s.  4093.915434: xs_tcp_state_change: RPC:       xs_tcp_state_change client ffff8803b1c38000...
          <idle>-0     [003] ..s.  4093.915435: xs_tcp_state_change: RPC:       state 1 conn 0 dead 0 zapped 1 sk_shutdown 0
    kworker/3:1H-145   [003] ....  4093.915558: xprt_connect_status: RPC:    47 xprt_connect_status: retrying
    kworker/3:1H-145   [003] ....  4093.915560: xprt_prepare_transmit: RPC:    47 xprt_prepare_transmit
    kworker/3:1H-145   [003] ....  4093.915562: xprt_transmit: RPC:    47 xprt_transmit(72)
    kworker/3:1H-145   [003] ....  4093.915588: xs_tcp_send_request: RPC:       xs_tcp_send_request(72) = 0
    kworker/3:1H-145   [003] ....  4093.915589: xprt_transmit: RPC:    47 xmit complete
          <idle>-0     [003] ..s.  4093.915969: xs_tcp_data_ready: RPC:       xs_tcp_data_ready...
    kworker/3:1H-145   [003] ....  4093.916081: xs_tcp_data_recv: RPC:       xs_tcp_data_recv started
    kworker/3:1H-145   [003] ....  4093.916083: xs_tcp_data_recv: RPC:       reading TCP record fragment of length 24
    kworker/3:1H-145   [003] ....  4093.916084: xs_tcp_data_recv: RPC:       reading XID (4 bytes)
    kworker/3:1H-145   [003] ....  4093.916085: xs_tcp_data_recv: RPC:       reading request with XID c50ccaff
    kworker/3:1H-145   [003] ....  4093.916086: xs_tcp_data_recv: RPC:       reading CALL/REPLY flag (4 bytes)
    kworker/3:1H-145   [003] ....  4093.916087: xs_tcp_data_recv: RPC:       read reply XID c50ccaff
    kworker/3:1H-145   [003] ..s.  4093.916088: xs_tcp_data_recv: RPC:       XID c50ccaff read 16 bytes
    kworker/3:1H-145   [003] ..s.  4093.916089: xs_tcp_data_recv: RPC:       xprt = ffff8803b1c38000, tcp_copied = 24, tcp_offset = 24, tcp_reclen = 24
    kworker/3:1H-145   [003] ..s.  4093.916090: xprt_complete_rqst: RPC:    47 xid c50ccaff complete (24 bytes received)
    kworker/3:1H-145   [003] ....  4093.916091: xs_tcp_data_recv: RPC:       xs_tcp_data_recv done
    kworker/3:1H-145   [003] ....  4093.916098: xprt_release: RPC:    47 release request ffff88040b27ca00
   kworker/u32:1-13473 [002] ....  4093.976056: xprt_destroy: RPC:       destroying transport ffff8803b1c38000
   kworker/u32:1-13473 [002] ....  4093.976068: xs_destroy: RPC:       xs_destroy xprt ffff8803b1c38000
   kworker/u32:1-13473 [002] ....  4093.976069: xs_close: RPC:       xs_close xprt ffff8803b1c38000
   kworker/u32:1-13473 [002] ..s.  4093.976096: xs_tcp_state_change: RPC:       xs_tcp_state_change client ffff8803b1c38000...
   kworker/u32:1-13473 [002] ..s.  4093.976098: xs_tcp_state_change: RPC:       state 4 conn 1 dead 0 zapped 1 sk_shutdown 3
   kworker/u32:1-13473 [002] ....  4093.976103: xprt_disconnect_done: RPC:       disconnected transport ffff8803b1c38000
   kworker/u32:1-13473 [002] ....  4093.976104: xprt_disconnect_done: disconnect transport!
   kworker/u32:1-13473 [002] ....  4093.976113: <stack trace>
 => xs_destroy
 => xprt_switch_free
 => rpc_free_client
 => rpc_release_client
 => rpc_shutdown_client
 => load_balance
 => ttwu_do_wakeup
 => nfsd4_process_cb_update.isra.14
 => __switch_to
 => pick_next_task_fair
 => nfsd4_run_cb_work
 => process_one_work
 => worker_thread
 => worker_thread
 => kthread
 => ret_from_fork
 => kthread


This is immediately followed by the setup of the port that will eventually turn into the hidden port:

   kworker/u32:1-13473 [002] ....  4093.976128: xs_setup_tcp: RPC:       set up xprt to 192.168.23.22 (port 44672) via tcp
   kworker/u32:1-13473 [002] ....  4093.976136: xprt_create_transport: RPC:       created transport ffff8803b8c22000 with 65536 slots
    kworker/2:1H-144   [002] ....  4093.976209: xprt_alloc_slot: RPC:    48 reserved req ffff8803bfe89c00 xid 10c028fe
    kworker/2:1H-144   [002] ....  4093.976213: xprt_connect: RPC:    48 xprt_connect xprt ffff8803b8c22000 is not connected
    kworker/2:1H-144   [002] ....  4093.976215: xs_connect: RPC:       xs_connect scheduled xprt ffff8803b8c22000
    kworker/2:1H-144   [002] ..s.  4093.976231: inet_csk_get_port: snum 754
    kworker/2:1H-144   [002] ..s.  4093.976239: <stack trace>
 => sched_clock
 => inet_addr_type_table
 => security_capable
 => inet_bind
 => xs_bind
 => release_sock
 => sock_setsockopt
 => __sock_create
 => xs_create_sock.isra.19
 => xs_tcp_setup_socket
 => process_one_work
 => worker_thread
 => worker_thread
 => kthread
 => ret_from_fork
 => kthread
    kworker/2:1H-144   [002] ..s.  4093.976239: inet_bind_hash: add 754 ffff8803afc20e40
    kworker/2:1H-144   [002] ..s.  4093.976247: <stack trace>
 => inet_csk_get_port
 => sched_clock
 => inet_addr_type_table
 => security_capable
 => inet_bind
 => xs_bind
 => release_sock
 => sock_setsockopt
 => __sock_create
 => xs_create_sock.isra.19
 => xs_tcp_setup_socket
 => process_one_work
 => worker_thread
 => worker_thread
 => kthread
 => ret_from_fork
 => kthread
    kworker/2:1H-144   [002] ....  4093.976248: xs_bind: RPC:       xs_bind 4.136.255.255:754: ok (0)
    kworker/2:1H-144   [002] ....  4093.976250: xs_tcp_setup_socket: RPC:       worker connecting xprt ffff8803b8c22000 via tcp to 192.168.23.22 (port 44672)
    kworker/2:1H-144   [002] ....  4093.976284: xs_tcp_setup_socket: RPC:       ffff8803b8c22000 connect status 115 connected 0 sock state 2
          <idle>-0     [003] ..s.  4093.976456: xs_tcp_state_change: RPC:       xs_tcp_state_change client ffff8803b8c22000...
          <idle>-0     [003] ..s.  4093.976458: xs_tcp_state_change: RPC:       state 1 conn 0 dead 0 zapped 1 sk_shutdown 0
    kworker/3:1H-145   [003] ....  4093.976588: xprt_connect_status: RPC:    48 xprt_connect_status: retrying
    kworker/3:1H-145   [003] ....  4093.976590: xprt_prepare_transmit: RPC:    48 xprt_prepare_transmit
    kworker/3:1H-145   [003] ....  4093.976604: xprt_transmit: RPC:    48 xprt_transmit(72)
    kworker/3:1H-145   [003] ....  4093.976622: xs_tcp_send_request: RPC:       xs_tcp_send_request(72) = 0
    kworker/3:1H-145   [003] ....  4093.976623: xprt_transmit: RPC:    48 xmit complete
          <idle>-0     [003] ..s.  4093.977040: xs_tcp_data_ready: RPC:       xs_tcp_data_ready...
    kworker/3:1H-145   [003] ....  4093.977151: xs_tcp_data_recv: RPC:       xs_tcp_data_recv started
    kworker/3:1H-145   [003] ....  4093.977153: xs_tcp_data_recv: RPC:       reading TCP record fragment of length 24
    kworker/3:1H-145   [003] ....  4093.977154: xs_tcp_data_recv: RPC:       reading XID (4 bytes)
    kworker/3:1H-145   [003] ....  4093.977155: xs_tcp_data_recv: RPC:       reading request with XID 10c028fe
    kworker/3:1H-145   [003] ....  4093.977156: xs_tcp_data_recv: RPC:       reading CALL/REPLY flag (4 bytes)
    kworker/3:1H-145   [003] ....  4093.977157: xs_tcp_data_recv: RPC:       read reply XID 10c028fe
    kworker/3:1H-145   [003] ..s.  4093.977158: xs_tcp_data_recv: RPC:       XID 10c028fe read 16 bytes
    kworker/3:1H-145   [003] ..s.  4093.977159: xs_tcp_data_recv: RPC:       xprt = ffff8803b8c22000, tcp_copied = 24, tcp_offset = 24, tcp_reclen = 24
    kworker/3:1H-145   [003] ..s.  4093.977160: xprt_complete_rqst: RPC:    48 xid 10c028fe complete (24 bytes received)
    kworker/3:1H-145   [003] ....  4093.977161: xs_tcp_data_recv: RPC:       xs_tcp_data_recv done
    kworker/3:1H-145   [003] ....  4093.977167: xprt_release: RPC:    48 release request ffff8803bfe89c00


That "2049" port is what is used for all transferring of data.

When the FIN/ACK is sent by the client, the socket is destroyed (it can
no longer be used for connections), but the port is not freed up,
because it appears there's still an owner attached to it. That means
this port will *never* be used again. Even if the client unmounts the
directory, that port is still in limbo.
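
(For reference: a bound port is represented by an inet_bind_bucket in
the inet bind-hash code, and the bucket -- and with it the port -- is
only released once its ->owners list is empty. A rough sketch,
paraphrased from memory rather than copied verbatim from this tree:)

struct inet_bind_bucket {
	unsigned short		port;
	struct hlist_node	node;	/* link in the bhash chain */
	struct hlist_head	owners;	/* sockets still holding this port */
	/* other fields omitted */
};

void inet_bind_bucket_destroy(struct kmem_cache *cachep,
			      struct inet_bind_bucket *tb)
{
	/* The bucket (and thus the port) survives as long as any
	 * owner -- including a leaked one -- is still on the list. */
	if (hlist_empty(&tb->owners)) {
		__hlist_del(&tb->node);
		kmem_cache_free(cachep, tb);
	}
}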

When the FIN/ACK comes in, it goes into the TIME_WAIT state here:

    kworker/3:1H-145   [003] ..s.  4394.370019: xs_tcp_state_change: RPC:       xs_tcp_state_change client ffff8803b8c22000...
    kworker/3:1H-145   [003] ..s.  4394.370022: xs_tcp_state_change: RPC:       state 4 conn 1 dead 0 zapped 1 sk_shutdown 3
          <idle>-0     [003] ..s.  4394.370352: xs_tcp_state_change: RPC:       xs_tcp_state_change client ffff8803b8c22000...
          <idle>-0     [003] ..s.  4394.370354: xs_tcp_state_change: RPC:       state 5 conn 0 dead 0 zapped 1 sk_shutdown 3
          <idle>-0     [003] ..s.  4394.370375: tcp_time_wait: moving xs_bind sock to time wait
          <idle>-0     [003] ..s.  4394.370396: <stack trace>
 => tcp_data_queue
 => tcp_rcv_state_process
 => tcp_v4_inbound_md5_hash
 => tcp_v4_do_rcv
 => tcp_v4_rcv
 => ipv4_confirm
 => nf_iterate
 => ip_local_deliver_finish
 => ip_local_deliver
 => ip_local_deliver_finish
 => ip_rcv
 => packet_rcv
 => ip_rcv_finish
 => __netif_receive_skb_core
 => kmem_cache_alloc
 => netif_receive_skb_internal
 => br_pass_frame_up
 => br_flood
 => br_handle_frame
 => br_handle_frame_finish
 => enqueue_task_fair
 => br_handle_frame
 => br_handle_frame
 => find_busiest_group
 => __netif_receive_skb_core
 => inet_gro_receive
 => netif_receive_skb_internal
 => napi_gro_receive
 => e1000_clean_rx_irq
 => e1000_clean
 => net_rx_action
 => __do_softirq
 => irq_exit
 => do_IRQ
 => ret_from_intr
 => cpuidle_enter_state
 => cpuidle_enter_state
 => cpu_startup_entry
 => start_secondary


Then eventually, the socket is freed, and 'netstat' no longer shows it.

          <idle>-0     [003] .Ns.  4454.371802: inet_bind_bucket_destroy: destroy 754 empty=0 ffff8803afc20e40
          <idle>-0     [003] .Ns.  4454.371813: <stack trace>
 => inet_twsk_bind_unhash
 => inet_twsk_kill
 => tw_timer_handler
 => call_timer_fn
 => tw_timer_handler
 => run_timer_softirq
 => __do_softirq
 => irq_exit
 => do_IRQ
 => ret_from_intr
 => cpuidle_enter_state
 => cpuidle_enter_state
 => cpu_startup_entry
 => start_secondary


"empty=0" means the tb->owners associated with the port is not empty, and the
freeing of the port is skipped.

Now when I go and remount the directory from the client, the code finally cleans
up the port:

   kworker/u32:0-25031 [000] ....  4544.674603: xprt_destroy: RPC:       destroying transport ffff8803b8c22000
   kworker/u32:0-25031 [000] ....  4544.674616: xs_destroy: RPC:       xs_destroy xprt ffff8803b8c22000
   kworker/u32:0-25031 [000] ....  4544.674617: xs_close: RPC:       xs_close xprt ffff8803b8c22000
   kworker/u32:0-25031 [000] ..s.  4544.674619: xs_tcp_state_change: RPC:       xs_tcp_state_change client ffff8803b8c22000...
   kworker/u32:0-25031 [000] ..s.  4544.674620: xs_tcp_state_change: RPC:       state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
   kworker/u32:0-25031 [000] ..s.  4544.674621: xprt_disconnect_done: RPC:       disconnected transport ffff8803b8c22000
   kworker/u32:0-25031 [000] ..s.  4544.674621: xprt_disconnect_done: disconnect transport!


   kworker/u32:0-25031 [000] ..s.  4544.674647: inet_bind_bucket_destroy: destroy 754 empty=1 ffff8803afc20e40
   kworker/u32:0-25031 [000] ..s.  4544.674655: <stack trace>
 => inet_put_port
 => tcp_v4_destroy_sock
 => inet_csk_destroy_sock
 => tcp_close
 => inet_release
 => sock_release
 => xs_close
 => xs_destroy
 => xprt_switch_free
 => rpc_free_client
 => rpc_release_client
 => rpc_shutdown_client
 => nfsd4_process_cb_update.isra.14
 => update_curr
 => dequeue_task_fair
 => __switch_to
 => pick_next_task_fair
 => nfsd4_run_cb_work
 => process_one_work
 => worker_thread
 => worker_thread
 => kthread
 => ret_from_fork
 => kthread
   kworker/u32:0-25031 [000] ....  4544.674660: xprt_disconnect_done: RPC:       disconnected transport ffff8803b8c22000

Notice that "empty=1" now, and the port is freed.

Then it goes back to doing everything all over again:

    kworker/3:1H-145   [003] ....  4558.442458: xs_bind: RPC:       xs_bind 4.136.255.255:973: ok (0)
    kworker/3:1H-145   [003] ....  4558.442460: xs_tcp_setup_socket: RPC:       worker connecting xprt ffff8803d7fd3800 via tcp to 192.168.23.22 (port 45075)
    kworker/3:1H-145   [003] ....  4558.442496: xs_tcp_setup_socket: RPC:       ffff8803d7fd3800 connect status 115 connected 0 sock state 2
          <idle>-0     [002] ..s.  4558.442691: xs_tcp_state_change: RPC:       xs_tcp_state_change client ffff8803d7fd3800...
          <idle>-0     [002] ..s.  4558.442693: xs_tcp_state_change: RPC:       state 1 conn 0 dead 0 zapped 1 sk_shutdown 0
    kworker/2:1H-144   [002] ....  4558.442732: xprt_connect_status: RPC:    49 xprt_connect_status: retrying
    kworker/2:1H-144   [002] ....  4558.442734: xprt_prepare_transmit: RPC:    49 xprt_prepare_transmit
    kworker/2:1H-144   [002] ....  4558.442737: xprt_transmit: RPC:    49 xprt_transmit(72)
    kworker/2:1H-144   [002] ....  4558.442753: xs_tcp_send_request: RPC:       xs_tcp_send_request(72) = 0
    kworker/2:1H-144   [002] ....  4558.442754: xprt_transmit: RPC:    49 xmit complete
            nfsd-4382  [002] ..s.  4558.443203: xs_tcp_data_ready: RPC:       xs_tcp_data_ready...
    kworker/2:1H-144   [002] ....  4558.443227: xs_tcp_data_recv: RPC:       xs_tcp_data_recv started
    kworker/2:1H-144   [002] ....  4558.443229: xs_tcp_data_recv: RPC:       reading TCP record fragment of length 24
    kworker/2:1H-144   [002] ....  4558.443230: xs_tcp_data_recv: RPC:       reading XID (4 bytes)
    kworker/2:1H-144   [002] ....  4558.443231: xs_tcp_data_recv: RPC:       reading request with XID e2e1dc21
    kworker/2:1H-144   [002] ....  4558.443232: xs_tcp_data_recv: RPC:       reading CALL/REPLY flag (4 bytes)
    kworker/2:1H-144   [002] ....  4558.443233: xs_tcp_data_recv: RPC:       read reply XID e2e1dc21
    kworker/2:1H-144   [002] ..s.  4558.443235: xs_tcp_data_recv: RPC:       XID e2e1dc21 read 16 bytes
    kworker/2:1H-144   [002] ..s.  4558.443236: xs_tcp_data_recv: RPC:       xprt = ffff8803d7fd3800, tcp_copied = 24, tcp_offset = 24, tcp_reclen = 24
    kworker/2:1H-144   [002] ..s.  4558.443237: xprt_complete_rqst: RPC:    49 xid e2e1dc21 complete (24 bytes received)
    kworker/2:1H-144   [002] ....  4558.443238: xs_tcp_data_recv: RPC:       xs_tcp_data_recv done
    kworker/2:1H-144   [002] ....  4558.443246: xprt_release: RPC:    49 release request ffff8800dba14800
   kworker/u32:1-13473 [003] ....  4558.496850: xprt_destroy: RPC:       destroying transport ffff8803d7fd3800
   kworker/u32:1-13473 [003] ....  4558.496860: xs_destroy: RPC:       xs_destroy xprt ffff8803d7fd3800
   kworker/u32:1-13473 [003] ....  4558.496861: xs_close: RPC:       xs_close xprt ffff8803d7fd3800
   kworker/u32:1-13473 [003] ..s.  4558.496888: xs_tcp_state_change: RPC:       xs_tcp_state_change client ffff8803d7fd3800...
   kworker/u32:1-13473 [003] ..s.  4558.496889: xs_tcp_state_change: RPC:       state 4 conn 1 dead 0 zapped 1 sk_shutdown 3
   kworker/u32:1-13473 [003] ....  4558.496894: xprt_disconnect_done: RPC:       disconnected transport ffff8803d7fd3800
   kworker/u32:1-13473 [003] ....  4558.496895: xprt_disconnect_done: disconnect transport!

Here, 973 is the port that gets added and disconnected right away (and freed).

    kworker/3:1H-145   [003] ....  4558.496991: xs_bind: RPC:       xs_bind 4.136.255.255:688: ok (0)
    kworker/3:1H-145   [003] ....  4558.496993: xs_tcp_setup_socket: RPC:       worker connecting xprt ffff8803bb889000 via tcp to 192.168.23.22 (port 45075)
    kworker/3:1H-145   [003] ....  4558.497024: xs_tcp_setup_socket: RPC:       ffff8803bb889000 connect status 115 connected 0 sock state 2
          <idle>-0     [002] .Ns.  4558.497171: xs_tcp_state_change: RPC:       xs_tcp_state_change client ffff8803bb889000...
          <idle>-0     [002] .Ns.  4558.497173: xs_tcp_state_change: RPC:       state 1 conn 0 dead 0 zapped 1 sk_shutdown 0
    kworker/2:1H-144   [002] ....  4558.497196: xprt_connect_status: RPC:    50 xprt_connect_status: retrying
    kworker/2:1H-144   [002] ....  4558.497197: xprt_prepare_transmit: RPC:    50 xprt_prepare_transmit
    kworker/2:1H-144   [002] ....  4558.497199: xprt_transmit: RPC:    50 xprt_transmit(72)
    kworker/2:1H-144   [002] ....  4558.497210: xs_tcp_send_request: RPC:       xs_tcp_send_request(72) = 0
    kworker/2:1H-144   [002] ....  4558.497210: xprt_transmit: RPC:    50 xmit complete
          <idle>-0     [002] ..s.  4558.497475: xs_tcp_data_ready: RPC:       xs_tcp_data_ready...
    kworker/2:1H-144   [002] ....  4558.497569: xs_tcp_data_recv: RPC:       xs_tcp_data_recv started
    kworker/2:1H-144   [002] ....  4558.497571: xs_tcp_data_recv: RPC:       reading TCP record fragment of length 24
    kworker/2:1H-144   [002] ....  4558.497571: xs_tcp_data_recv: RPC:       reading XID (4 bytes)
    kworker/2:1H-144   [002] ....  4558.497572: xs_tcp_data_recv: RPC:       reading request with XID a4418f34
    kworker/2:1H-144   [002] ....  4558.497573: xs_tcp_data_recv: RPC:       reading CALL/REPLY flag (4 bytes)
    kworker/2:1H-144   [002] ....  4558.497573: xs_tcp_data_recv: RPC:       read reply XID a4418f34
    kworker/2:1H-144   [002] ..s.  4558.497574: xs_tcp_data_recv: RPC:       XID a4418f34 read 16 bytes
    kworker/2:1H-144   [002] ..s.  4558.497575: xs_tcp_data_recv: RPC:       xprt = ffff8803bb889000, tcp_copied = 24, tcp_offset = 24, tcp_reclen = 24
    kworker/2:1H-144   [002] ..s.  4558.497575: xprt_complete_rqst: RPC:    50 xid a4418f34 complete (24 bytes received)
    kworker/2:1H-144   [002] ....  4558.497577: xs_tcp_data_recv: RPC:       xs_tcp_data_recv done
    kworker/2:1H-144   [002] ....  4558.497581: xprt_release: RPC:    50 release request ffff8800db28d800

And 688 will be the port that becomes the new hidden port.

Thus it looks like something is not cleaning up ports properly; there
are some timing issues here.

Any thoughts on why? This is obviously wrong: not only does it waste a
port and the memory to store it, but it also causes rkhunter to report
it, because hiding ports is one of the things rootkits do.

-- Steve

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: It's back! (Re: [REGRESSION] NFS is creating a hidden port (left over from xs_bind() ))
@ 2016-06-30 20:07         ` Steven Rostedt
  0 siblings, 0 replies; 77+ messages in thread
From: Steven Rostedt @ 2016-06-30 20:07 UTC (permalink / raw)
  To: Trond Myklebust
  Cc: Jeff Layton, Eric Dumazet, Schumaker Anna,
	Linux NFS Mailing List, Linux Network Devel Mailing List, LKML,
	Andrew Morton, Fields Bruce

On Thu, 30 Jun 2016 18:30:42 +0000
Trond Myklebust <trondmy@primarydata.com> wrote:


> Wait. So the NFS mount is still active, it’s just that the socket
> disconnected due to no traffic? That should be OK. Granted that the
> port can’t be reused by another process, but you really don’t want
> that: what if there are no other ports available and you start
> writing to a file on the NFS partition?

What would cause the port to be connected to a socket again? I copied a
large file to the nfs mount, and the hidden port is still there?

Remember, this wasn't always the case, the hidden port is a recent
issue.

I ran wireshark on this, and NFS appears to create two ports. One of
them is closed by the client (it sends a FIN/ACK), and that is the port
that lies around, never to be used again; the other port is used for
all connections after that.

When I unmount the NFS directory, the port is finally freed (but has no
socket attached to it). What is the purpose of keeping this port around?

I can reproduce this by having the client unmount and remount the
directory.

-- Steve

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: It's back! (Re: [REGRESSION] NFS is creating a hidden port (left over from xs_bind() ))
  2016-06-30 15:23   ` Steven Rostedt
@ 2016-06-30 18:30       ` Trond Myklebust
  2016-06-30 18:30       ` Trond Myklebust
  1 sibling, 0 replies; 77+ messages in thread
From: Trond Myklebust @ 2016-06-30 18:30 UTC (permalink / raw)
  To: Rostedt Steven
  Cc: Jeff Layton, Eric Dumazet, Schumaker Anna,
	Linux NFS Mailing List, Linux Network Devel Mailing List, LKML,
	Andrew Morton, Fields Bruce


> On Jun 30, 2016, at 11:23, Steven Rostedt <rostedt@goodmis.org> wrote:
> 
> On Thu, 30 Jun 2016 13:17:47 +0000
> Trond Myklebust <trondmy@primarydata.com> wrote:
> 
>>> On Jun 30, 2016, at 08:59, Steven Rostedt <rostedt@goodmis.org> wrote:
>>> 
>>> [ resending as a new email, as I'm assuming people do not sort their
>>> INBOX via last email on thread, thus my last email is sitting in the
>>> bottom of everyone's INBOX ]
>>> 
>>> I've hit this again. Not sure when it started, but I applied my old
>>> debug trace_printk() patch (attached) and rebooted (4.5.7). I just
>>> tested the latest kernel from Linus's tree (from last nights pull), and
>>> it still gives me the problem.
>>> 
>>> Here's the trace I have:
>>> 
>>>   kworker/3:1H-134   [003] ..s.    61.036129: inet_csk_get_port: snum 805
> 
> Here's were the port is taken
> 
>>>   kworker/3:1H-134   [003] ..s.    61.036135: <stack trace>  
>>> => sched_clock
>>> => inet_addr_type_table
>>> => security_capable
>>> => inet_bind
>>> => xs_bind
>>> => release_sock
>>> => sock_setsockopt
>>> => __sock_create
>>> => xs_create_sock.isra.19
>>> => xs_tcp_setup_socket
>>> => process_one_work
>>> => worker_thread
>>> => worker_thread
>>> => kthread
>>> => ret_from_fork
>>> => kthread    
>>>   kworker/3:1H-134   [003] ..s.    61.036136: inet_bind_hash: add 805
>>>   kworker/3:1H-134   [003] ..s.    61.036138: <stack trace>  
>>> => inet_csk_get_port
>>> => sched_clock
>>> => inet_addr_type_table
>>> => security_capable
>>> => inet_bind
>>> => xs_bind
>>> => release_sock
>>> => sock_setsockopt
>>> => __sock_create
>>> => xs_create_sock.isra.19
>>> => xs_tcp_setup_socket
>>> => process_one_work
>>> => worker_thread
>>> => worker_thread
>>> => kthread
>>> => ret_from_fork
>>> => kthread    
>>>   kworker/3:1H-134   [003] ....    61.036139: xs_bind: RPC:       xs_bind 4.136.255.255:805: ok (0)
> 
> Here's where it is bounded.
> 
>>>   kworker/3:1H-134   [003] ....    61.036140: xs_tcp_setup_socket: RPC:       worker connecting xprt ffff880407eca800 via tcp to 192.168.23.22 (port 43651)
>>>   kworker/3:1H-134   [003] ....    61.036162: xs_tcp_setup_socket: RPC:       ffff880407eca800 connect status 115 connected 0 sock state 2
>>>         <idle>-0     [001] ..s.    61.036450: xs_tcp_state_change: RPC:       xs_tcp_state_change client ffff880407eca800...
>>>         <idle>-0     [001] ..s.    61.036452: xs_tcp_state_change: RPC:       state 1 conn 0 dead 0 zapped 1 sk_shutdown 0
>>>   kworker/1:1H-136   [001] ....    61.036476: xprt_connect_status: RPC:    43 xprt_connect_status: retrying
>>>   kworker/1:1H-136   [001] ....    61.036478: xprt_prepare_transmit: RPC:    43 xprt_prepare_transmit
>>>   kworker/1:1H-136   [001] ....    61.036479: xprt_transmit: RPC:    43 xprt_transmit(72)
>>>   kworker/1:1H-136   [001] ....    61.036486: xs_tcp_send_request: RPC:       xs_tcp_send_request(72) = 0
>>>   kworker/1:1H-136   [001] ....    61.036487: xprt_transmit: RPC:    43 xmit complete
>>>         <idle>-0     [001] ..s.    61.036789: xs_tcp_data_ready: RPC:       xs_tcp_data_ready...
>>>   kworker/1:1H-136   [001] ....    61.036798: xs_tcp_data_recv: RPC:       xs_tcp_data_recv started
>>>   kworker/1:1H-136   [001] ....    61.036799: xs_tcp_data_recv: RPC:       reading TCP record fragment of length 24
>>>   kworker/1:1H-136   [001] ....    61.036799: xs_tcp_data_recv: RPC:       reading XID (4 bytes)
>>>   kworker/1:1H-136   [001] ....    61.036800: xs_tcp_data_recv: RPC:       reading request with XID 2f4c3f88
>>>   kworker/1:1H-136   [001] ....    61.036800: xs_tcp_data_recv: RPC:       reading CALL/REPLY flag (4 bytes)
>>>   kworker/1:1H-136   [001] ....    61.036801: xs_tcp_data_recv: RPC:       read reply XID 2f4c3f88
>>>   kworker/1:1H-136   [001] ..s.    61.036801: xs_tcp_data_recv: RPC:       XID 2f4c3f88 read 16 bytes
>>>   kworker/1:1H-136   [001] ..s.    61.036802: xs_tcp_data_recv: RPC:       xprt = ffff880407eca800, tcp_copied = 24, tcp_offset = 24, tcp_reclen = 24
>>>   kworker/1:1H-136   [001] ..s.    61.036802: xprt_complete_rqst: RPC:    43 xid 2f4c3f88 complete (24 bytes received)
>>>   kworker/1:1H-136   [001] ....    61.036803: xs_tcp_data_recv: RPC:       xs_tcp_data_recv done
>>>   kworker/1:1H-136   [001] ....    61.036812: xprt_release: RPC:    43 release request ffff88040b270800
>>> 
>>> 
>>> # unhide-tcp 
>>> Unhide-tcp 20130526
>>> Copyright © 2013 Yago Jesus & Patrick Gouin
>>> License GPLv3+ : GNU GPL version 3 or later
>>> http://www.unhide-forensics.info
>>> Used options: 
>>> [*]Starting TCP checking
>>> 
>>> Found Hidden port that not appears in ss: 805
>>> 
>> 
>> What is a “Hidden port that not appears in ss: 805”, and what does this report mean? Are we failing to close a socket?
> 
> I believe hidden ports are ports that are bound to no socket.
> Basically, a "port leak". Where they are in limbo and can never be
> reused.
> 
> I looked at my past report, and everything is exactly like the issue
> before. When I first boot my box, the port is there, I have the above
> trace. I run netstat -tapn and grep for the port. And it shows that it
> is an established socket between my box and my wife's box (I have a nfs
> mounted file system for her to copy her pictures to my server). After a
> couple of minutes, the port turns from ESTABLISHED to TIME_WAIT, and
> after another minute it disappears. At that moment, the unhide-tcp
> shows the port as hidden.
> 
> When the socket goes away (without releasing the port) I see this in my
> trace:
> 
>    kworker/1:1H-131   [001] ..s.   364.762537: xs_tcp_state_change: RPC:       xs_tcp_state_change client ffff88040ad68800...
>    kworker/1:1H-131   [001] ..s.   364.762539: xs_tcp_state_change: RPC:       state 4 conn 1 dead 0 zapped 1 sk_shutdown 3
>          <idle>-0     [001] ..s.   364.762715: xs_tcp_state_change: RPC:       xs_tcp_state_change client ffff88040ad68800...
>          <idle>-0     [001] ..s.   364.762716: xs_tcp_state_change: RPC:       state 5 conn 0 dead 0 zapped 1 sk_shutdown 3
>          <idle>-0     [001] ..s.   364.762728: xs_tcp_state_change: RPC:       xs_tcp_state_change client ffff88040ad68800...
>          <idle>-0     [001] ..s.   364.762728: xs_tcp_state_change: RPC:       state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
>          <idle>-0     [001] ..s.   364.762729: xprt_disconnect_done: RPC:       disconnected transport ffff88040ad68800
>          <idle>-0     [001] ..s.   364.762730: xs_tcp_state_change: RPC:       xs_tcp_state_change client ffff88040ad68800...
>          <idle>-0     [001] ..s.   364.762730: xs_tcp_state_change: RPC:       state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
>          <idle>-0     [001] ..s.   364.762730: xprt_disconnect_done: RPC:       disconnected transport ffff88040ad68800
> 
> I can add more trace_printk()s if it would help.


Wait. So the NFS mount is still active, it’s just that the socket disconnected due to no traffic? That should be OK. Granted that the port can’t be reused by another process, but you really don’t want that: what if there are no other ports available and you start writing to a file on the NFS partition?

Cheers
  Trond

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: It's back! (Re: [REGRESSION] NFS is creating a hidden port (left over from xs_bind() ))
  2016-06-30 15:23   ` Steven Rostedt
@ 2016-06-30 16:24     ` Steven Rostedt
  2016-06-30 18:30       ` Trond Myklebust
  1 sibling, 0 replies; 77+ messages in thread
From: Steven Rostedt @ 2016-06-30 16:24 UTC (permalink / raw)
  To: Trond Myklebust
  Cc: Jeff Layton, Eric Dumazet, Schumaker Anna,
	Linux NFS Mailing List, Linux Network Devel Mailing List, LKML,
	Andrew Morton, Fields Bruce

On Thu, 30 Jun 2016 11:23:41 -0400
Steven Rostedt <rostedt@goodmis.org> wrote:


> I can add more trace_printk()s if it would help.

I added a trace_printk() in inet_bind_bucket_destroy() to print out
some information on the socket used by xs_bind(), and it shows that the
bucket destroy is called, but the owners list is not empty.



/*
 * Caller must hold hashbucket lock for this tb with local BH disabled
 */
void inet_bind_bucket_destroy(struct kmem_cache *cachep, struct inet_bind_bucket *tb)
{
	if (!current->mm && xs_port == tb->port) {
		trace_printk("destroy %d empty=%d %p\n",
			     tb->port, hlist_empty(&tb->owners), tb);
		trace_dump_stack(1);
	}
	if (hlist_empty(&tb->owners)) {
		__hlist_del(&tb->node);
		kmem_cache_free(cachep, tb);
	}
}

I created "xs_port" to hold the port of the variable used by xs_bind,
and when it is called, the hlist_empty(&tb->owners) returns false.

I'll add more trace_printks to find out where those owners are being
added.
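
(For context, a rough, from-memory sketch -- not a verbatim copy of
this tree -- of where owners come and go: inet_bind_hash() puts the
socket on tb->owners at bind time, and the entry is only removed again
by __inet_put_port() when the socket itself is released, or by
inet_twsk_bind_unhash() when a timewait sock is killed. Locking
omitted:)

void inet_bind_hash(struct sock *sk, struct inet_bind_bucket *tb,
		    const unsigned short snum)
{
	inet_sk(sk)->inet_num = snum;
	sk_add_bind_node(sk, &tb->owners);	/* owner added here */
	inet_csk(sk)->icsk_bind_hash = tb;
}

static void __inet_put_port(struct sock *sk)
{
	struct inet_hashinfo *hashinfo = sk->sk_prot->h.hashinfo;
	struct inet_bind_bucket *tb = inet_csk(sk)->icsk_bind_hash;

	/* Only reached once the socket itself is finally released
	 * (e.g. tcp_v4_destroy_sock() -> inet_put_port()). */
	__sk_del_bind_node(sk);			/* owner removed here */
	inet_csk(sk)->icsk_bind_hash = NULL;
	inet_sk(sk)->inet_num = 0;
	inet_bind_bucket_destroy(hashinfo->bind_bucket_cachep, tb);
}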

-- Steve

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: It's back! (Re: [REGRESSION] NFS is creating a hidden port (left over from xs_bind() ))
  2016-06-30 13:17 ` Trond Myklebust
@ 2016-06-30 15:23   ` Steven Rostedt
  2016-06-30 16:24     ` Steven Rostedt
  2016-06-30 18:30       ` Trond Myklebust
  0 siblings, 2 replies; 77+ messages in thread
From: Steven Rostedt @ 2016-06-30 15:23 UTC (permalink / raw)
  To: Trond Myklebust
  Cc: Jeff Layton, Eric Dumazet, Schumaker Anna,
	Linux NFS Mailing List, Linux Network Devel Mailing List, LKML,
	Andrew Morton, Fields Bruce

On Thu, 30 Jun 2016 13:17:47 +0000
Trond Myklebust <trondmy@primarydata.com> wrote:

> > On Jun 30, 2016, at 08:59, Steven Rostedt <rostedt@goodmis.org> wrote:
> > 
> > [ resending as a new email, as I'm assuming people do not sort their
> >  INBOX via last email on thread, thus my last email is sitting in the
> >  bottom of everyone's INBOX ]
> > 
> > I've hit this again. Not sure when it started, but I applied my old
> > debug trace_printk() patch (attached) and rebooted (4.5.7). I just
> > tested the latest kernel from Linus's tree (from last nights pull), and
> > it still gives me the problem.
> > 
> > Here's the trace I have:
> > 
> >    kworker/3:1H-134   [003] ..s.    61.036129: inet_csk_get_port: snum 805

Here's were the port is taken

> >    kworker/3:1H-134   [003] ..s.    61.036135: <stack trace>  
> > => sched_clock
> > => inet_addr_type_table
> > => security_capable
> > => inet_bind
> > => xs_bind
> > => release_sock
> > => sock_setsockopt
> > => __sock_create
> > => xs_create_sock.isra.19
> > => xs_tcp_setup_socket
> > => process_one_work
> > => worker_thread
> > => worker_thread
> > => kthread
> > => ret_from_fork
> > => kthread    
> >    kworker/3:1H-134   [003] ..s.    61.036136: inet_bind_hash: add 805
> >    kworker/3:1H-134   [003] ..s.    61.036138: <stack trace>  
> > => inet_csk_get_port
> > => sched_clock
> > => inet_addr_type_table
> > => security_capable
> > => inet_bind
> > => xs_bind
> > => release_sock
> > => sock_setsockopt
> > => __sock_create
> > => xs_create_sock.isra.19
> > => xs_tcp_setup_socket
> > => process_one_work
> > => worker_thread
> > => worker_thread
> > => kthread
> > => ret_from_fork
> > => kthread    
> >    kworker/3:1H-134   [003] ....    61.036139: xs_bind: RPC:       xs_bind 4.136.255.255:805: ok (0)

Here's where it is bounded.

> >    kworker/3:1H-134   [003] ....    61.036140: xs_tcp_setup_socket: RPC:       worker connecting xprt ffff880407eca800 via tcp to 192.168.23.22 (port 43651)
> >    kworker/3:1H-134   [003] ....    61.036162: xs_tcp_setup_socket: RPC:       ffff880407eca800 connect status 115 connected 0 sock state 2
> >          <idle>-0     [001] ..s.    61.036450: xs_tcp_state_change: RPC:       xs_tcp_state_change client ffff880407eca800...
> >          <idle>-0     [001] ..s.    61.036452: xs_tcp_state_change: RPC:       state 1 conn 0 dead 0 zapped 1 sk_shutdown 0
> >    kworker/1:1H-136   [001] ....    61.036476: xprt_connect_status: RPC:    43 xprt_connect_status: retrying
> >    kworker/1:1H-136   [001] ....    61.036478: xprt_prepare_transmit: RPC:    43 xprt_prepare_transmit
> >    kworker/1:1H-136   [001] ....    61.036479: xprt_transmit: RPC:    43 xprt_transmit(72)
> >    kworker/1:1H-136   [001] ....    61.036486: xs_tcp_send_request: RPC:       xs_tcp_send_request(72) = 0
> >    kworker/1:1H-136   [001] ....    61.036487: xprt_transmit: RPC:    43 xmit complete
> >          <idle>-0     [001] ..s.    61.036789: xs_tcp_data_ready: RPC:       xs_tcp_data_ready...
> >    kworker/1:1H-136   [001] ....    61.036798: xs_tcp_data_recv: RPC:       xs_tcp_data_recv started
> >    kworker/1:1H-136   [001] ....    61.036799: xs_tcp_data_recv: RPC:       reading TCP record fragment of length 24
> >    kworker/1:1H-136   [001] ....    61.036799: xs_tcp_data_recv: RPC:       reading XID (4 bytes)
> >    kworker/1:1H-136   [001] ....    61.036800: xs_tcp_data_recv: RPC:       reading request with XID 2f4c3f88
> >    kworker/1:1H-136   [001] ....    61.036800: xs_tcp_data_recv: RPC:       reading CALL/REPLY flag (4 bytes)
> >    kworker/1:1H-136   [001] ....    61.036801: xs_tcp_data_recv: RPC:       read reply XID 2f4c3f88
> >    kworker/1:1H-136   [001] ..s.    61.036801: xs_tcp_data_recv: RPC:       XID 2f4c3f88 read 16 bytes
> >    kworker/1:1H-136   [001] ..s.    61.036802: xs_tcp_data_recv: RPC:       xprt = ffff880407eca800, tcp_copied = 24, tcp_offset = 24, tcp_reclen = 24
> >    kworker/1:1H-136   [001] ..s.    61.036802: xprt_complete_rqst: RPC:    43 xid 2f4c3f88 complete (24 bytes received)
> >    kworker/1:1H-136   [001] ....    61.036803: xs_tcp_data_recv: RPC:       xs_tcp_data_recv done
> >    kworker/1:1H-136   [001] ....    61.036812: xprt_release: RPC:    43 release request ffff88040b270800
> > 
> > 
> > # unhide-tcp 
> > Unhide-tcp 20130526
> > Copyright © 2013 Yago Jesus & Patrick Gouin
> > License GPLv3+ : GNU GPL version 3 or later
> > http://www.unhide-forensics.info
> > Used options: 
> > [*]Starting TCP checking
> > 
> > Found Hidden port that not appears in ss: 805
> >   
> 
> What is a “Hidden port that not appears in ss: 805”, and what does this report mean? Are we failing to close a socket?

I believe hidden ports are ports that are bound to no socket.
Basically, a "port leak". Where they are in limbo and can never be
reused.
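
(Roughly what that check boils down to -- a minimal userspace sketch of
the idea, not unhide-tcp's actual code: probe every port with bind()
and compare against what ss/netstat reports; a port that fails with
EADDRINUSE but shows no socket is "hidden". Needs root for ports below
1024.)

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

/* returns 1 if the port is taken, 0 if free, -1 on error */
static int port_in_use(unsigned short port)
{
	struct sockaddr_in addr;
	int fd, ret;

	fd = socket(AF_INET, SOCK_STREAM, 0);
	if (fd < 0)
		return -1;
	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_addr.s_addr = htonl(INADDR_ANY);
	addr.sin_port = htons(port);
	ret = bind(fd, (struct sockaddr *)&addr, sizeof(addr));
	if (ret < 0)
		ret = (errno == EADDRINUSE) ? 1 : -1;
	close(fd);
	return ret;
}

int main(void)
{
	unsigned int p;

	/* Any port reported here but absent from "ss -tan" output
	 * has no visible socket attached to it. */
	for (p = 1; p < 65536; p++)
		if (port_in_use((unsigned short)p) == 1)
			printf("port %u is in use\n", p);
	return 0;
}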

I looked at my past report, and everything is exactly like the issue
before. When I first boot my box, the port is there, I have the above
trace. I run netstat -tapn and grep for the port. And it shows that it
is an established socket between my box and my wife's box (I have a nfs
mounted file system for her to copy her pictures to my server). After a
couple of minutes, the port turns from ESTABLISHED to TIME_WAIT, and
after another minute it disappears. At that moment, the unhide-tcp
shows the port as hidden.

When the socket goes away (without releasing the port) I see this in my
trace:

    kworker/1:1H-131   [001] ..s.   364.762537: xs_tcp_state_change: RPC:       xs_tcp_state_change client ffff88040ad68800...
    kworker/1:1H-131   [001] ..s.   364.762539: xs_tcp_state_change: RPC:       state 4 conn 1 dead 0 zapped 1 sk_shutdown 3
          <idle>-0     [001] ..s.   364.762715: xs_tcp_state_change: RPC:       xs_tcp_state_change client ffff88040ad68800...
          <idle>-0     [001] ..s.   364.762716: xs_tcp_state_change: RPC:       state 5 conn 0 dead 0 zapped 1 sk_shutdown 3
          <idle>-0     [001] ..s.   364.762728: xs_tcp_state_change: RPC:       xs_tcp_state_change client ffff88040ad68800...
          <idle>-0     [001] ..s.   364.762728: xs_tcp_state_change: RPC:       state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
          <idle>-0     [001] ..s.   364.762729: xprt_disconnect_done: RPC:       disconnected transport ffff88040ad68800
          <idle>-0     [001] ..s.   364.762730: xs_tcp_state_change: RPC:       xs_tcp_state_change client ffff88040ad68800...
          <idle>-0     [001] ..s.   364.762730: xs_tcp_state_change: RPC:       state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
          <idle>-0     [001] ..s.   364.762730: xprt_disconnect_done: RPC:       disconnected transport ffff88040ad68800

I can add more trace_printk()s if it would help.

-- Steve

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: It's back! (Re: [REGRESSION] NFS is creating a hidden port (left over from xs_bind() ))
  2016-06-30 12:59 ` Steven Rostedt
  (?)
@ 2016-06-30 13:17 ` Trond Myklebust
  2016-06-30 15:23   ` Steven Rostedt
  -1 siblings, 1 reply; 77+ messages in thread
From: Trond Myklebust @ 2016-06-30 13:17 UTC (permalink / raw)
  To: Rostedt Steven
  Cc: Jeff Layton, Eric Dumazet, Schumaker Anna,
	Linux NFS Mailing List, Linux Network Devel Mailing List, LKML,
	Andrew Morton, Fields Bruce


> On Jun 30, 2016, at 08:59, Steven Rostedt <rostedt@goodmis.org> wrote:
> 
> [ resending as a new email, as I'm assuming people do not sort their
>  INBOX via last email on thread, thus my last email is sitting in the
>  bottom of everyone's INBOX ]
> 
> I've hit this again. Not sure when it started, but I applied my old
> debug trace_printk() patch (attached) and rebooted (4.5.7). I just
> tested the latest kernel from Linus's tree (from last nights pull), and
> it still gives me the problem.
> 
> Here's the trace I have:
> 
>    kworker/3:1H-134   [003] ..s.    61.036129: inet_csk_get_port: snum 805
>    kworker/3:1H-134   [003] ..s.    61.036135: <stack trace>
> => sched_clock
> => inet_addr_type_table
> => security_capable
> => inet_bind
> => xs_bind
> => release_sock
> => sock_setsockopt
> => __sock_create
> => xs_create_sock.isra.19
> => xs_tcp_setup_socket
> => process_one_work
> => worker_thread
> => worker_thread
> => kthread
> => ret_from_fork
> => kthread  
>    kworker/3:1H-134   [003] ..s.    61.036136: inet_bind_hash: add 805
>    kworker/3:1H-134   [003] ..s.    61.036138: <stack trace>
> => inet_csk_get_port
> => sched_clock
> => inet_addr_type_table
> => security_capable
> => inet_bind
> => xs_bind
> => release_sock
> => sock_setsockopt
> => __sock_create
> => xs_create_sock.isra.19
> => xs_tcp_setup_socket
> => process_one_work
> => worker_thread
> => worker_thread
> => kthread
> => ret_from_fork
> => kthread  
>    kworker/3:1H-134   [003] ....    61.036139: xs_bind: RPC:       xs_bind 4.136.255.255:805: ok (0)
>    kworker/3:1H-134   [003] ....    61.036140: xs_tcp_setup_socket: RPC:       worker connecting xprt ffff880407eca800 via tcp to 192.168.23.22 (port 43651)
>    kworker/3:1H-134   [003] ....    61.036162: xs_tcp_setup_socket: RPC:       ffff880407eca800 connect status 115 connected 0 sock state 2
>          <idle>-0     [001] ..s.    61.036450: xs_tcp_state_change: RPC:       xs_tcp_state_change client ffff880407eca800...
>          <idle>-0     [001] ..s.    61.036452: xs_tcp_state_change: RPC:       state 1 conn 0 dead 0 zapped 1 sk_shutdown 0
>    kworker/1:1H-136   [001] ....    61.036476: xprt_connect_status: RPC:    43 xprt_connect_status: retrying
>    kworker/1:1H-136   [001] ....    61.036478: xprt_prepare_transmit: RPC:    43 xprt_prepare_transmit
>    kworker/1:1H-136   [001] ....    61.036479: xprt_transmit: RPC:    43 xprt_transmit(72)
>    kworker/1:1H-136   [001] ....    61.036486: xs_tcp_send_request: RPC:       xs_tcp_send_request(72) = 0
>    kworker/1:1H-136   [001] ....    61.036487: xprt_transmit: RPC:    43 xmit complete
>          <idle>-0     [001] ..s.    61.036789: xs_tcp_data_ready: RPC:       xs_tcp_data_ready...
>    kworker/1:1H-136   [001] ....    61.036798: xs_tcp_data_recv: RPC:       xs_tcp_data_recv started
>    kworker/1:1H-136   [001] ....    61.036799: xs_tcp_data_recv: RPC:       reading TCP record fragment of length 24
>    kworker/1:1H-136   [001] ....    61.036799: xs_tcp_data_recv: RPC:       reading XID (4 bytes)
>    kworker/1:1H-136   [001] ....    61.036800: xs_tcp_data_recv: RPC:       reading request with XID 2f4c3f88
>    kworker/1:1H-136   [001] ....    61.036800: xs_tcp_data_recv: RPC:       reading CALL/REPLY flag (4 bytes)
>    kworker/1:1H-136   [001] ....    61.036801: xs_tcp_data_recv: RPC:       read reply XID 2f4c3f88
>    kworker/1:1H-136   [001] ..s.    61.036801: xs_tcp_data_recv: RPC:       XID 2f4c3f88 read 16 bytes
>    kworker/1:1H-136   [001] ..s.    61.036802: xs_tcp_data_recv: RPC:       xprt = ffff880407eca800, tcp_copied = 24, tcp_offset = 24, tcp_reclen = 24
>    kworker/1:1H-136   [001] ..s.    61.036802: xprt_complete_rqst: RPC:    43 xid 2f4c3f88 complete (24 bytes received)
>    kworker/1:1H-136   [001] ....    61.036803: xs_tcp_data_recv: RPC:       xs_tcp_data_recv done
>    kworker/1:1H-136   [001] ....    61.036812: xprt_release: RPC:    43 release request ffff88040b270800
> 
> 
> # unhide-tcp 
> Unhide-tcp 20130526
> Copyright © 2013 Yago Jesus & Patrick Gouin
> License GPLv3+ : GNU GPL version 3 or later
> http://www.unhide-forensics.info
> Used options: 
> [*]Starting TCP checking
> 
> Found Hidden port that not appears in ss: 805
> 

What is a “Hidden port that not appears in ss: 805”, and what does this report mean? Are we failing to close a socket?

Cheers
  Trond

^ permalink raw reply	[flat|nested] 77+ messages in thread

* It's back! (Re: [REGRESSION] NFS is creating a hidden port (left over from xs_bind() ))
@ 2016-06-30 12:59 ` Steven Rostedt
  0 siblings, 0 replies; 77+ messages in thread
From: Steven Rostedt @ 2016-06-30 12:59 UTC (permalink / raw)
  To: Trond Myklebust
  Cc: Jeff Layton, Eric Dumazet, Anna Schumaker,
	Linux NFS Mailing List, Linux Network Devel Mailing List, LKML,
	Andrew Morton, Bruce James Fields

[-- Attachment #1: Type: text/plain, Size: 4137 bytes --]

[ resending as a new email, as I'm assuming people do not sort their
  INBOX by the last email on a thread, and thus my last email is sitting
  at the bottom of everyone's INBOX ]

I've hit this again. Not sure when it started, but I applied my old
debug trace_printk() patch (attached) and rebooted (4.5.7). I just
tested the latest kernel from Linus's tree (from last night's pull), and
it still gives me the problem.
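
(Quick recap of what the attached debug patch does, for anyone who does
not want to open it: it redirects the sunrpc dprintk()s into the trace
buffer and adds a trace_printk() plus stack dump in the bind path that
only fires for kernel threads. The filter is roughly:)

	/* Kernel threads have no mm, so this only logs binds coming from
	 * kernel context (the kworkers running xs_tcp_setup_socket()),
	 * not ordinary user-space bind() calls.
	 */
	if (!current->mm) {
		trace_printk("add %d\n", snum);
		trace_dump_stack(1);
	}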

Here's the trace I have:

    kworker/3:1H-134   [003] ..s.    61.036129: inet_csk_get_port: snum 805
    kworker/3:1H-134   [003] ..s.    61.036135: <stack trace>
 => sched_clock
 => inet_addr_type_table
 => security_capable
 => inet_bind
 => xs_bind
 => release_sock
 => sock_setsockopt
 => __sock_create
 => xs_create_sock.isra.19
 => xs_tcp_setup_socket
 => process_one_work
 => worker_thread
 => worker_thread
 => kthread
 => ret_from_fork
 => kthread  
    kworker/3:1H-134   [003] ..s.    61.036136: inet_bind_hash: add 805
    kworker/3:1H-134   [003] ..s.    61.036138: <stack trace>
 => inet_csk_get_port
 => sched_clock
 => inet_addr_type_table
 => security_capable
 => inet_bind
 => xs_bind
 => release_sock
 => sock_setsockopt
 => __sock_create
 => xs_create_sock.isra.19
 => xs_tcp_setup_socket
 => process_one_work
 => worker_thread
 => worker_thread
 => kthread
 => ret_from_fork
 => kthread  
    kworker/3:1H-134   [003] ....    61.036139: xs_bind: RPC:       xs_bind 4.136.255.255:805: ok (0)
    kworker/3:1H-134   [003] ....    61.036140: xs_tcp_setup_socket: RPC:       worker connecting xprt ffff880407eca800 via tcp to 192.168.23.22 (port 43651)
    kworker/3:1H-134   [003] ....    61.036162: xs_tcp_setup_socket: RPC:       ffff880407eca800 connect status 115 connected 0 sock state 2
          <idle>-0     [001] ..s.    61.036450: xs_tcp_state_change: RPC:       xs_tcp_state_change client ffff880407eca800...
          <idle>-0     [001] ..s.    61.036452: xs_tcp_state_change: RPC:       state 1 conn 0 dead 0 zapped 1 sk_shutdown 0
    kworker/1:1H-136   [001] ....    61.036476: xprt_connect_status: RPC:    43 xprt_connect_status: retrying
    kworker/1:1H-136   [001] ....    61.036478: xprt_prepare_transmit: RPC:    43 xprt_prepare_transmit
    kworker/1:1H-136   [001] ....    61.036479: xprt_transmit: RPC:    43 xprt_transmit(72)
    kworker/1:1H-136   [001] ....    61.036486: xs_tcp_send_request: RPC:       xs_tcp_send_request(72) = 0
    kworker/1:1H-136   [001] ....    61.036487: xprt_transmit: RPC:    43 xmit complete
          <idle>-0     [001] ..s.    61.036789: xs_tcp_data_ready: RPC:       xs_tcp_data_ready...
    kworker/1:1H-136   [001] ....    61.036798: xs_tcp_data_recv: RPC:       xs_tcp_data_recv started
    kworker/1:1H-136   [001] ....    61.036799: xs_tcp_data_recv: RPC:       reading TCP record fragment of length 24
    kworker/1:1H-136   [001] ....    61.036799: xs_tcp_data_recv: RPC:       reading XID (4 bytes)
    kworker/1:1H-136   [001] ....    61.036800: xs_tcp_data_recv: RPC:       reading request with XID 2f4c3f88
    kworker/1:1H-136   [001] ....    61.036800: xs_tcp_data_recv: RPC:       reading CALL/REPLY flag (4 bytes)
    kworker/1:1H-136   [001] ....    61.036801: xs_tcp_data_recv: RPC:       read reply XID 2f4c3f88
    kworker/1:1H-136   [001] ..s.    61.036801: xs_tcp_data_recv: RPC:       XID 2f4c3f88 read 16 bytes
    kworker/1:1H-136   [001] ..s.    61.036802: xs_tcp_data_recv: RPC:       xprt = ffff880407eca800, tcp_copied = 24, tcp_offset = 24, tcp_reclen = 24
    kworker/1:1H-136   [001] ..s.    61.036802: xprt_complete_rqst: RPC:    43 xid 2f4c3f88 complete (24 bytes received)
    kworker/1:1H-136   [001] ....    61.036803: xs_tcp_data_recv: RPC:       xs_tcp_data_recv done
    kworker/1:1H-136   [001] ....    61.036812: xprt_release: RPC:    43 release request ffff88040b270800
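
(Side note for reading the above: as far as I can tell "connect status
115" is just -EINPROGRESS from the non-blocking connect, and "sock state
2" is SS_CONNECTING, so the connect itself looks normal up to this point.
Values as I read them from the standard headers:)

/* include/uapi/asm-generic/errno.h */
#define EINPROGRESS	115	/* Operation now in progress */
/* include/linux/net.h, enum socket_state (abridged):
 *   SS_FREE = 0, SS_UNCONNECTED = 1, SS_CONNECTING = 2, SS_CONNECTED = 3
 */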


# unhide-tcp 
Unhide-tcp 20130526
Copyright © 2013 Yago Jesus & Patrick Gouin
License GPLv3+ : GNU GPL version 3 or later
http://www.unhide-forensics.info
Used options: 
[*]Starting TCP checking

Found Hidden port that not appears in ss: 805

-- Steve

[-- Attachment #2: debug-hidden-port-4.7.patch --]
[-- Type: text/x-patch, Size: 2378 bytes --]

---
 net/ipv4/inet_connection_sock.c |    4 ++++
 net/ipv4/inet_hashtables.c      |    5 +++++
 net/sunrpc/xprt.c               |    5 +++++
 net/sunrpc/xprtsock.c           |    5 +++++
 4 files changed, 19 insertions(+)

Index: linux-build.git/net/ipv4/inet_connection_sock.c
===================================================================
--- linux-build.git.orig/net/ipv4/inet_connection_sock.c	2016-06-22 11:55:05.952267493 -0400
+++ linux-build.git/net/ipv4/inet_connection_sock.c	2016-06-22 11:56:20.002662092 -0400
@@ -232,6 +232,10 @@ tb_found:
 		}
 	}
 success:
+	if (!current->mm) {
+		trace_printk("snum %d\n", snum);
+		trace_dump_stack(1);
+	}
 	if (!inet_csk(sk)->icsk_bind_hash)
 		inet_bind_hash(sk, tb, port);
 	WARN_ON(inet_csk(sk)->icsk_bind_hash != tb);
Index: linux-build.git/net/ipv4/inet_hashtables.c
===================================================================
--- linux-build.git.orig/net/ipv4/inet_hashtables.c	2016-06-22 11:55:05.952267493 -0400
+++ linux-build.git/net/ipv4/inet_hashtables.c	2016-06-22 11:55:05.948267360 -0400
@@ -93,6 +93,11 @@ void inet_bind_bucket_destroy(struct kme
 void inet_bind_hash(struct sock *sk, struct inet_bind_bucket *tb,
 		    const unsigned short snum)
 {
+	if (!current->mm) {
+		trace_printk("add %d\n", snum);
+		trace_dump_stack(1);
+	}
+
 	inet_sk(sk)->inet_num = snum;
 	sk_add_bind_node(sk, &tb->owners);
 	tb->num_owners++;
Index: linux-build.git/net/sunrpc/xprt.c
===================================================================
--- linux-build.git.orig/net/sunrpc/xprt.c	2016-06-22 11:55:05.952267493 -0400
+++ linux-build.git/net/sunrpc/xprt.c	2016-06-22 11:55:05.948267360 -0400
@@ -54,6 +54,11 @@
 
 #include "sunrpc.h"
 
+#undef dprintk
+#undef dprintk_rcu
+#define dprintk(args...)	trace_printk(args)
+#define dprintk_rcu(args...)	trace_printk(args)
+
 /*
  * Local variables
  */
Index: linux-build.git/net/sunrpc/xprtsock.c
===================================================================
--- linux-build.git.orig/net/sunrpc/xprtsock.c	2016-06-22 11:55:05.952267493 -0400
+++ linux-build.git/net/sunrpc/xprtsock.c	2016-06-22 11:55:05.948267360 -0400
@@ -51,6 +51,11 @@
 
 #include "sunrpc.h"
 
+#undef dprintk
+#undef dprintk_rcu
+#define dprintk(args...)	trace_printk(args)
+#define dprintk_rcu(args...)	trace_printk(args)
+
 static void xs_close(struct rpc_xprt *xprt);
 
 /*

^ permalink raw reply	[flat|nested] 77+ messages in thread


end of thread, other threads:[~2018-02-06 19:26 UTC | newest]

Thread overview: 77+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-06-12  3:49 [REGRESSION] NFS is creating a hidden port (left over from xs_bind() ) Steven Rostedt
2015-06-12 14:10 ` Trond Myklebust
2015-06-12 14:40   ` Eric Dumazet
2015-06-12 14:40     ` Eric Dumazet
2015-06-12 14:57     ` Trond Myklebust
2015-06-12 15:43       ` Eric Dumazet
2015-06-12 15:43         ` Eric Dumazet
2015-06-12 15:34     ` Steven Rostedt
2015-06-12 15:34       ` Steven Rostedt
2015-06-12 15:50       ` Steven Rostedt
2015-06-12 15:50         ` Steven Rostedt
2015-06-12 15:53         ` Steven Rostedt
2015-06-18  3:08         ` Steven Rostedt
2015-06-18  3:08           ` Steven Rostedt
2015-06-18 19:24           ` Trond Myklebust
2015-06-18 19:24             ` Trond Myklebust
2015-06-18 19:49             ` Steven Rostedt
2015-06-18 19:49               ` Steven Rostedt
2015-06-18 22:50               ` Jeff Layton
2015-06-18 22:50                 ` Jeff Layton
2015-06-19  1:08                 ` Steven Rostedt
2015-06-19  1:08                   ` Steven Rostedt
2015-06-19  1:37                   ` Jeff Layton
2015-06-19  3:21                     ` Steven Rostedt
2015-06-19  3:21                       ` Steven Rostedt
2015-06-19 16:25                     ` Steven Rostedt
2015-06-19 17:17                       ` Steven Rostedt
2015-06-19 17:17                         ` Steven Rostedt
2015-06-19 17:17                         ` Steven Rostedt
2015-06-19 17:39                         ` Trond Myklebust
2015-06-19 17:39                           ` Trond Myklebust
2015-06-19 17:39                           ` Trond Myklebust
2015-06-19 19:52                           ` Jeff Layton
2015-06-19 19:52                             ` Jeff Layton
2015-06-19 19:52                             ` Jeff Layton
2015-06-19 20:30                             ` Trond Myklebust
2015-06-19 20:30                               ` Trond Myklebust
2015-06-19 20:30                               ` Trond Myklebust
2015-06-19 21:56                               ` Steven Rostedt
2015-06-19 21:56                                 ` Steven Rostedt
2015-06-19 21:56                                 ` Steven Rostedt
2015-06-19 22:14                               ` Steven Rostedt
2015-06-19 22:14                                 ` Steven Rostedt
2015-06-19 22:14                                 ` Steven Rostedt
2015-06-19 23:25                                 ` Trond Myklebust
2015-06-19 23:25                                   ` Trond Myklebust
2015-06-19 23:25                                   ` Trond Myklebust
2015-06-20  0:37                                   ` Steven Rostedt
2015-06-20  0:37                                     ` Steven Rostedt
2015-06-20  0:37                                     ` Steven Rostedt
2015-06-20  0:50                                     ` Steven Rostedt
2015-06-20  0:50                                       ` Steven Rostedt
2015-06-20  0:50                                       ` Steven Rostedt
2015-06-20  1:27                                   ` Steven Rostedt
2015-06-20  1:27                                     ` Steven Rostedt
2015-06-20  1:27                                     ` Steven Rostedt
2015-06-20  2:44                                     ` Trond Myklebust
2015-06-20  2:44                                       ` Trond Myklebust
2015-06-20  2:44                                       ` Trond Myklebust
2016-06-22 16:41                                     ` It's back! (Re: [REGRESSION] NFS is creating a hidden port (left over from xs_bind() )) Steven Rostedt
2015-06-19 21:50                           ` [REGRESSION] NFS is creating a hidden port (left over from xs_bind() ) Steven Rostedt
2015-06-19 21:50                             ` Steven Rostedt
2015-06-19 21:50                             ` Steven Rostedt
2016-06-30 12:59 It's back! (Re: [REGRESSION] NFS is creating a hidden port (left over from xs_bind() )) Steven Rostedt
2016-06-30 12:59 ` Steven Rostedt
2016-06-30 13:17 ` Trond Myklebust
2016-06-30 15:23   ` Steven Rostedt
2016-06-30 16:24     ` Steven Rostedt
2016-06-30 18:30     ` Trond Myklebust
2016-06-30 18:30       ` Trond Myklebust
2016-06-30 20:07       ` Steven Rostedt
2016-06-30 20:07         ` Steven Rostedt
2016-06-30 21:56         ` Steven Rostedt
2018-02-02 21:31 Daniel Reichelt
2018-02-06  0:24 ` Trond Myklebust
2018-02-06  9:20   ` Daniel Reichelt
2018-02-06 19:26     ` Trond Myklebust
