* 5.13 NFSD: completely broken?
From: Russell King (Oracle) @ 2021-06-30 15:53 UTC
  To: J. Bruce Fields, Chuck Lever; +Cc: linux-nfs

Hi,

I've just upgraded my ARM32 VMs to 5.13, and I'm seeing some really odd
behaviour with NFS exports that are mounted on a Debian stable x86
machine.  Here's an example packet dump:

16:39:55.172494 00:22:68:15:37:dd > 52:54:00:00:00:11, ethertype IPv6 (0x86dd), length 234: fd8f:7570:feb6:1:222:68ff:fe15:37dd.932 > fd8f:7570:feb6:c8:5054:ff:fe00:341.2049: Flags [P.], seq 1027945732:1027945880, ack 3477555770, win 506, options [nop,nop,TS val 3337720655 ecr 372042622], length 148: NFS request xid 822450515 144 getattr fh Unkno/010006002246F4529A1C41AD9C321B1BF54BA2DD
16:39:55.172904 00:22:68:15:37:dd > 52:54:00:00:00:11, ethertype IPv6 (0x86dd), length 234: fd8f:7570:feb6:1:222:68ff:fe15:37dd.932 > fd8f:7570:feb6:c8:5054:ff:fe00:341.2049: Flags [P.], seq 148:296, ack 1, win 506, options [nop,nop,TS val 3337720656 ecr 372042622], length 148: NFS request xid 856004947 144 getattr fh Unkno/010006002246F4529A1C41AD9C321B1BF54BA2DD
16:39:55.173365 52:54:00:00:00:11 > 00:22:68:15:37:dd, ethertype IPv6 (0x86dd), length 86: fd8f:7570:feb6:c8:5054:ff:fe00:341.2049 > fd8f:7570:feb6:1:222:68ff:fe15:37dd.932: Flags [.], ack 148, win 449, options [nop,nop,TS val 372047356 ecr 3337720655], length 0

16:39:55.173383 00:22:68:15:37:dd > 52:54:00:00:00:11, ethertype IPv6 (0x86dd), length 234: fd8f:7570:feb6:1:222:68ff:fe15:37dd.932 > fd8f:7570:feb6:c8:5054:ff:fe00:341.2049: Flags [P.], seq 296:444, ack 1, win 506, options [nop,nop,TS val 3337720656 ecr 372042622], length 148: NFS request xid 872782163 144 getattr fh Unkno/010006002246F4529A1C41AD9C321B1BF54BA2DD
16:39:55.173474 00:22:68:15:37:dd > 52:54:00:00:00:11, ethertype IPv6 (0x86dd), length 234: fd8f:7570:feb6:1:222:68ff:fe15:37dd.932 > fd8f:7570:feb6:c8:5054:ff:fe00:341.2049: Flags [P.], seq 444:592, ack 1, win 506, options [nop,nop,TS val 3337720656 ecr 372047356], length 148: NFS request xid 889559379 144 getattr fh Unkno/010006002246F4529A1C41AD9C321B1BF54BA2DD
16:39:55.173649 00:22:68:15:37:dd > 52:54:00:00:00:11, ethertype IPv6 (0x86dd), length 234: fd8f:7570:feb6:1:222:68ff:fe15:37dd.932 > fd8f:7570:feb6:c8:5054:ff:fe00:341.2049: Flags [P.], seq 592:740, ack 1, win 506, options [nop,nop,TS val 3337720656 ecr 372047356], length 148: NFS request xid 839227731 144 getattr fh Unkno/010006002246F4529A1C41AD9C321B1BF54BA2DD
16:39:55.173708 00:22:68:15:37:dd > 52:54:00:00:00:11, ethertype IPv6 (0x86dd), length 382: fd8f:7570:feb6:1:222:68ff:fe15:37dd.932 > fd8f:7570:feb6:c8:5054:ff:fe00:341.2049: Flags [P.], seq 740:1036, ack 1, win 506, options [nop,nop,TS val 3337720656 ecr 372047356], length 296: NFS request xid 805673299 144 getattr fh Unkno/010006002246F4529A1C41AD9C321B1BF54BA2DD
16:39:55.173830 00:22:68:15:37:dd > 52:54:00:00:00:11, ethertype IPv6 (0x86dd), length 234: fd8f:7570:feb6:1:222:68ff:fe15:37dd.932 > fd8f:7570:feb6:c8:5054:ff:fe00:341.2049: Flags [P.], seq 1036:1184, ack 1, win 506, options [nop,nop,TS val 3337720657 ecr 372047356], length 148: NFS request xid 788896083 144 getattr fh Unkno/010006002246F4529A1C41AD9C321B1BF54BA2DD
16:39:55.173921 00:22:68:15:37:dd > 52:54:00:00:00:11, ethertype IPv6 (0x86dd), length 234: fd8f:7570:feb6:1:222:68ff:fe15:37dd.932 > fd8f:7570:feb6:c8:5054:ff:fe00:341.2049: Flags [P.], seq 1184:1332, ack 1, win 506, options [nop,nop,TS val 3337720657 ecr 372047356], length 148: NFS request xid 755341651 144 getattr fh Unkno/010006002246F4529A1C41AD9C321B1BF54BA2DD
16:39:55.196354 00:22:68:15:37:dd > 52:54:00:00:00:11, ethertype IPv6 (0x86dd), length 234: fd8f:7570:feb6:1:222:68ff:fe15:37dd.932 > fd8f:7570:feb6:c8:5054:ff:fe00:341.2049: Flags [P.], seq 1184:1332, ack 1, win 506, options [nop,nop,TS val 3337720679 ecr 372047356], length 148: NFS request xid 755341651 144 getattr fh Unkno/010006002246F4529A1C41AD9C321B1BF54BA2DD
16:39:55.214728 52:54:00:00:00:11 > 00:22:68:15:37:dd, ethertype IPv6 (0x86dd), length 98: fd8f:7570:feb6:c8:5054:ff:fe00:341.2049 > fd8f:7570:feb6:1:222:68ff:fe15:37dd.932: Flags [.], ack 1332, win 440, options [nop,nop,TS val 372047398 ecr 3337720656,nop,nop,sack 1 {1184:1332}], length 0
16:40:25.892493 00:22:68:15:37:dd > 52:54:00:00:00:11, ethertype IPv6 (0x86dd), length 246: fd8f:7570:feb6:1:222:68ff:fe15:37dd.932 > fd8f:7570:feb6:c8:5054:ff:fe00:341.2049: Flags [P.], seq 1332:1492, ack 1, win 506, options [nop,nop,TS val 3337751375 ecr 372047398], length 160: NFS request xid 671455571 156 access fh Unknown/010006012246F4529A1C41AD9C321B1BF54BA2DD01000200D0F60B4E NFS_ACCESS_READ|NFS_ACCESS_LOOKUP|NFS_ACCESS_MODIFY|NFS_ACCESS_EXTEND|NFS_ACCESS_DELETE
16:40:25.893365 52:54:00:00:00:11 > 00:22:68:15:37:dd, ethertype IPv6 (0x86dd), length 86: fd8f:7570:feb6:c8:5054:ff:fe00:341.2049 > fd8f:7570:feb6:1:222:68ff:fe15:37dd.932: Flags [.], ack 1492, win 439, options [nop,nop,TS val 372078076 ecr 3337751375], length 0

As can be seen, the TCP packets are being acked by the remote host, so the
TCP connection is alive and intact. However, the NFS server itself is
ignoring the requests. On the server, the nfsd processes are all sitting
idle:

...
root      1155  0.0  2.7  32208 27820 ?        Ss   15:43   0:00 /usr/sbin/rpc.mountd --manage-gids
root      1165  0.0  0.0      0     0 ?        S    15:43   0:00 [lockd]
root      1173  0.0  0.0      0     0 ?        S    15:43   0:00 [nfsd]
root      1174  0.0  0.0      0     0 ?        S    15:43   0:00 [nfsd]
root      1175  0.0  0.0      0     0 ?        S    15:43   0:00 [nfsd]
root      1176  0.0  0.0      0     0 ?        S    15:43   0:00 [nfsd]
root      1177  0.0  0.0      0     0 ?        S    15:43   0:00 [nfsd]
root      1178  0.0  0.0      0     0 ?        S    15:43   0:00 [nfsd]
root      1179  0.0  0.0      0     0 ?        S    15:43   0:00 [nfsd]
root      1180  0.0  0.0      0     0 ?        S    15:43   0:00 [nfsd]
statd     1195  0.0  5.1  57012 52528 ?        Ss   15:43   0:00 /sbin/rpc.statd

So it looks like they're basically ignoring everything.

The mount options on the client are:
(rw,relatime,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp6,timeo=600,retrans=2,sec=sys,mountaddr=fd8f:7570:feb6:c8:5054:ff:fe00:341,mountvers=3,mountport=42927,mountproto=udp6,local_lock=none,addr=fd8f:7570:feb6:c8:5054:ff:fe00:341)

Has 5.13 been tested with NFSv3?

Any ideas what's going on?

-- 
RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
FTTP is here! 40Mbps down 10Mbps up. Decent connectivity at last!


* Re: 5.13 NFSD: completely broken?
From: Chuck Lever III @ 2021-06-30 15:55 UTC
  To: Russell King (Oracle); +Cc: Bruce Fields, Linux NFS Mailing List



> On Jun 30, 2021, at 11:53 AM, Russell King (Oracle) <linux@armlinux.org.uk> wrote:
> 
> Hi,
> 
> I've just upgraded my ARM32 VMs to 5.13, and I'm seeing some really odd
> behaviour with NFS exports that are mounted on a Debian stable x86
> machine.  Here's an example packet dump:
> ...
> Has 5.13 been tested with NFSv3?
> 
> Any ideas what's going on?

Yes, fortunately!

Have a look at commit 66d9282523b for a one-liner fix; it should apply
cleanly to v5.13.
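
For anyone following along: assuming that commit is already in Linus'
tree, applying it on top of a v5.13 checkout is typically just (a
command sketch, untested here; use the full commit ID if git reports
the abbreviation as ambiguous):

  $ git checkout -b v5.13-nfsd-fix v5.13
  $ git cherry-pick 66d9282523b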


--
Chuck Lever





* Re: 5.13 NFSD: completely broken?
From: Bruce Fields @ 2021-06-30 16:05 UTC
  To: Chuck Lever III; +Cc: Russell King (Oracle), Linux NFS Mailing List

On Wed, Jun 30, 2021 at 03:55:16PM +0000, Chuck Lever III wrote:
> > On Jun 30, 2021, at 11:53 AM, Russell King (Oracle) <linux@armlinux.org.uk> wrote:
> > Has 5.13 been tested with NFSv3?
> > 
> > Any ideas what's going on?
> 
> Yes, fortunately!
> 
> Have a look at commit 66d9282523b for a one-liner fix; it should apply
> cleanly to v5.13.

So was this a freak event or do we need some process change?

Looks like the problem commit went in 3 days before 5.13.  It was an mm
commit so they're probably not set up to do nfsd testing, but nfsd is
the only user of that function, right?

It says it fixed a bug that could result in writing past the end of an
array, so it made sense that it was urgent.

Sorry, just catching up!

--b.


* Re: 5.13 NFSD: completely broken?
From: Chuck Lever III @ 2021-06-30 16:28 UTC
  To: Bruce Fields; +Cc: Russell King (Oracle), Linux NFS Mailing List



> On Jun 30, 2021, at 12:05 PM, Bruce Fields <bfields@fieldses.org> wrote:
> 
> On Wed, Jun 30, 2021 at 03:55:16PM +0000, Chuck Lever III wrote:
>>> On Jun 30, 2021, at 11:53 AM, Russell King (Oracle) <linux@armlinux.org.uk> wrote:
>>> Has 5.13 been tested with NFSv3?
>>> 
>>> Any ideas what's going on?
>> 
>> Yes, fortunately!
>> 
>> Have a look at commit 66d9282523b for a one-liner fix; it should apply
>> cleanly to v5.13.
> 
> So was this a freak event or do we need some process change?

I would say that it is "somewhat freaky".

In the future we expect there to be more consumers of the
bulk page allocator, so this kind of thing should become
less likely over time.


> Looks like the problem commit went in 3 days before 5.13.  It was an mm
> commit so they're probably not set up to do nfsd testing, but nfsd is
> the only user of that function, right?

NFSD is one of two consumers of that function. The other
is __page_pool_alloc_pages_slow().

The author of __page_pool_alloc_pages_slow() was cc'd on
the broken fix, but I was not. I was cc'd on earlier
traffic regarding the new API because I instigated its
inclusion into the mainline kernel on behalf of NFSD.

Mel does indeed have the ability to test NFSD, and has
been careful in the past about testing before committing.


> It says it fixed a bug that could result in writing past the end of an
> array, so made sense that it was urgent.

No, a prior commit, b08e50dd6448 ("mm/page_alloc:
__alloc_pages_bulk(): do bounds check before accessing
array"), fixed that particular bug. The broken patch
fixed a page leak in the same use case.

I tested the new API with NFSD extensively, and did not
notice page leaks or array overruns (although rq_pages
has an extra element at the end for other reasons).

NFSD does feed full arrays to alloc_pages_bulk_array()
quite often with RDMA, so I think I would have noticed
something. With TCP I think there is at least one missing
page per call, so the common case wouldn't have triggered
this bug at all.

__page_pool_alloc_pages_slow() always passes in an empty
array, so it also would never have hit this bug.
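
To make the calling convention concrete, here is a minimal sketch of
the refill pattern at issue (modeled loosely on the svc layer's
page-array refill loop; the helper name and exact back-off are
illustrative, not NFSD's verbatim code):

#include <linux/gfp.h>      /* alloc_pages_bulk_array() */
#include <linux/jiffies.h>  /* msecs_to_jiffies() */
#include <linux/sched.h>    /* schedule_timeout_uninterruptible() */

/*
 * Some entries of @array may already hold pages retained from a
 * previous request.  alloc_pages_bulk_array() fills only the NULL
 * slots and is expected to return the total number of populated
 * slots.  If it instead under-reports for an already-populated
 * array, "filled < pages" never becomes false and the thread parks
 * in this loop -- idle nfsd threads, requests silently ignored.
 */
static void fill_page_array(struct page **array, unsigned long pages)
{
	unsigned long filled, ret;

	for (filled = 0; filled < pages; filled = ret) {
		ret = alloc_pages_bulk_array(GFP_KERNEL, pages, array);
		if (ret > filled)
			continue;	/* made progress, retry at once */
		/* No progress: back off briefly before retrying. */
		schedule_timeout_uninterruptible(msecs_to_jiffies(500));
	}
}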

The problem was discovered by static analysis, not a
crash report. IMO the fix should have waited for testing
and review by the NFS community. This close to a final
release, the fixes for the array overrun and page leak
could have gone into v5.13.1. I hope automation will
take care of all of this very soon.


--
Chuck Lever





* Re: 5.13 NFSD: completely broken?
From: Russell King (Oracle) @ 2021-06-30 16:40 UTC
  To: Chuck Lever III; +Cc: Bruce Fields, Linux NFS Mailing List

On Wed, Jun 30, 2021 at 03:55:16PM +0000, Chuck Lever III wrote:
> > On Jun 30, 2021, at 11:53 AM, Russell King (Oracle) <linux@armlinux.org.uk> wrote:
> > 
> > Hi,
> > 
> > I've just upgraded my ARM32 VMs to 5.13, and I'm seeing some really odd
> > behaviour with NFS exports that are mounted on a Debian stable x86
> > machine.  Here's an example packet dump:
...
> > Has 5.13 been tested with NFSv3?
> > 
> > Any ideas what's going on?
> 
> Yes, fortunately!
> 
> Have a look at commit 66d9282523b for a one-liner fix; it should apply
> cleanly to v5.13.

Thanks Chuck, that resolves the issue for me.

-- 
RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
FTTP is here! 40Mbps down 10Mbps up. Decent connectivity at last!


* Re: 5.13 NFSD: completely broken?
From: Bruce Fields @ 2021-06-30 18:08 UTC
  To: Chuck Lever III; +Cc: Russell King (Oracle), Linux NFS Mailing List

On Wed, Jun 30, 2021 at 04:28:47PM +0000, Chuck Lever III wrote:
> The problem was discovered by static analysis, not a
> crash report. IMO the fix should have waited for testing
> and review by the NFS community. This close to a final
> release, the fixes for the array overrun and page leak
> could have gone into v5.13.1. I hope automation will
> take care of all of this very soon.

Years ago I had a cron job that fetched a few trees and ran tests on
them nightly.  "Automatic" testing always ends up requiring more manual
intervention than I expect, but maybe I should give that another
shot....

--b.

