From: Chuck Lever III <chuck.lever@oracle.com>
To: Mike Javorski <mike.javorski@gmail.com>, Mel Gorman <mgorman@suse.com>
Cc: Neil Brown <neilb@suse.de>,
	Linux NFS Mailing List <linux-nfs@vger.kernel.org>
Subject: Re: NFS server regression in kernel 5.13 (tested w/ 5.13.9)
Date: Sat, 28 Aug 2021 18:23:05 +0000
Message-ID: <12B831AA-4A4E-4102-ADA3-97B6FA0B119E@oracle.com>
In-Reply-To: <CAOv1SKCjvgSfUoFtufZ5-dB-quG=djnn-UHO286S410aVxrV0Q@mail.gmail.com>



> On Aug 27, 2021, at 11:22 PM, Mike Javorski <mike.javorski@gmail.com> wrote:
> 
> I had some time this evening (and the kernel finally compiled), and
> wanted to get this tested.
> 
> The TL;DR:  Both patches are needed
> 
> Below are the test results from my replication of Neil's test. It is
> readily apparent that both the 5.13.13 kernel AND the 5.13.13 kernel
> with the 82011c80b3ec fix exhibit the randomness in read times that
> was observed. The 5.13.13 kernel with both the 82011c80b3ec and
> f6e70aab9dfe fixes brings the performance back in line with the
> 5.12.15 kernel, which I tested as a baseline.
> 
> Please forgive the inconsistency in sample counts. This was running as
> a while loop, and I just let it go long enough that the behavior was
> consistent. The only change to the VM between tests was the different
> kernel plus a reboot. The testing PC had a consistent workload during the
> entire set of tests.
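> 
> (For reference: each sample below is the wall-clock seconds for one
> pass over the test file on the NFS mount. My actual test is essentially
> a shell while-loop around the md5sum check from Neil's experiment; the
> little C program below is only a sketch of the timed read step, with
> the file name as a placeholder, in case anyone wants to reproduce the
> measurement.)
> 
>   /* Minimal sketch of one sample: a single sequential read of a file
>    * on the NFS mount, with the elapsed time printed in seconds.
>    * Drop caches (or use a fresh file) between runs so the read
>    * actually goes to the server. */
>   #include <fcntl.h>
>   #include <stdio.h>
>   #include <time.h>
>   #include <unistd.h>
> 
>   int main(int argc, char **argv)
>   {
>           static char buf[1 << 20];       /* 1 MiB read buffer */
>           struct timespec t0, t1;
>           ssize_t n;
>           int fd;
> 
>           if (argc != 2) {
>                   fprintf(stderr, "usage: %s <file-on-nfs-mount>\n", argv[0]);
>                   return 1;
>           }
>           fd = open(argv[1], O_RDONLY);
>           if (fd < 0) {
>                   perror("open");
>                   return 1;
>           }
>           clock_gettime(CLOCK_MONOTONIC, &t0);
>           while ((n = read(fd, buf, sizeof(buf))) > 0)
>                   ;                       /* data is discarded */
>           clock_gettime(CLOCK_MONOTONIC, &t1);
>           close(fd);
>           printf("%.3f\n", (t1.tv_sec - t0.tv_sec) +
>                            (t1.tv_nsec - t0.tv_nsec) / 1e9);
>           return n < 0 ? 1 : 0;
>   }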
> 
> Test 0: 5.13.10 (base kernel in VM image, just for kicks)
> ==================================================
> Samples 30
> Min 6.839
> Max 19.998
> Median 9.638
> 75-P 10.898
> 95-P 12.939
> 99-P 18.005
> 
> Test 1: 5.12.15 (known good)
> ==================================================
> Samples 152
> Min 1.997
> Max 2.333
> Median 2.171
> 75-P 2.230
> 95-P 2.286
> 99-P 2.312
> 
> Test 2: 5.13.13 (known bad)
> ==================================================
> Samples 42
> Min 3.587
> Max 15.803
> Median 6.039
> 75-P 6.452
> 95-P 10.293
> 99-P 15.540
> 
> Test 3: 5.13.13 + 82011c80b3ec fix
> ==================================================
> Samples 44
> Min 4.309
> Max 37.040
> Median 6.615
> 75-P 10.224
> 95-P 19.516
> 99-P 36.650
> 
> Test 4: 5.13.13 + 82011c80b3ec fix + f6e70aab9dfe fix
> ==================================================
> Samples 131
> Min 2.013
> Max 2.397
> Median 2.169
> 75-P 2.211
> 95-P 2.283
> 99-P 2.348
> 
> I am going to run the kernel w/ both fixes over the weekend, but
> things look good at this point.
> 
> - mike

I've targeted Neil's fix for the first 5.15-rc NFSD pull request.
I'd like to have Mel's Reviewed-by or Acked-by, though.

I will add a Fixes: tag if Neil doesn't repost (no reason to at
this point), so the fix should get backported automatically to
recent stable kernels.


> On Fri, Aug 27, 2021 at 4:49 PM Chuck Lever III <chuck.lever@oracle.com> wrote:
>> 
>> 
>>> On Aug 27, 2021, at 6:00 PM, Mike Javorski <mike.javorski@gmail.com> wrote:
>>> 
>>> OK, an update. After several hours of spaced-out testing sessions, the
>>> first patch seems to have resolved the issue. There may be a very tiny
>>> bit of lag that still occurs when opening/processing new files, but so
>>> far on this kernel I have not had any multi-second freezes. I am still
>>> waiting on the kernel with Neil's patch to compile (compiling on this
>>> underpowered server so it's taking several hours), but I think the
>>> testing there will just be to confirm that it still works, and then
>>> to test in a memory-constrained VM to see if I can recreate Neil's
>>> experiment. I will likely have to do this over the weekend given the
>>> kernel compile delay plus the fiddling with a VM.
>> 
>> Thanks for your testing!
>> 
>> 
>>> Chuck: I don't mean to overstep bounds, but is it possible to get that
>>> patch pulled into 5.13 stable? That may help things for several people
>>> while 5.14 goes through its shakedown in archlinux prior to release.
>> 
>> The patch had a Fixes: tag, so it should get automatically backported
>> to every kernel that has the broken commit. If you don't see it in
>> a subsequent 5.13 stable kernel, you are free to ask the stable
>> maintainers to consider it.
>> 
>> 
>>> - mike
>>> 
>>> On Fri, Aug 27, 2021 at 10:07 AM Mike Javorski <mike.javorski@gmail.com> wrote:
>>>> 
>>>> Chuck:
>>>> I just booted a 5.13.13 kernel with your suggested patch. No freezes
>>>> on the first test, but that sometimes happens, so I will let the server
>>>> settle some and try it again later in the day (which would also align
>>>> with Neil's comment on memory fragmentation being a contributor).
>>>> 
>>>> Neil:
>>>> I have started a compile with the above kernel + your patch to test
>>>> next unless you or Chuck determine that it isn't needed, or that I
>>>> should test both patches separately. As the above is already merged into
>>>> 5.14, it seemed logical to just add your patch on top.
>>>> 
>>>> I will also try to set up a VM to test your md5sum scenario with the
>>>> various kernels since it's a much faster thing to test.
>>>> 
>>>> - mike
>>>> 
>>>> On Fri, Aug 27, 2021 at 7:13 AM Chuck Lever III <chuck.lever@oracle.com> wrote:
>>>>> 
>>>>> 
>>>>>> On Aug 27, 2021, at 3:14 AM, NeilBrown <neilb@suse.de> wrote:
>>>>>> 
>>>>>> Subject: [PATCH] SUNRPC: don't pause on incomplete allocation
>>>>>> 
>>>>>> alloc_pages_bulk_array() attempts to allocate at least one page based on
>>>>>> the provided pages, and then opportunistically allocates more if that
>>>>>> can be done without dropping the spinlock.
>>>>>> 
>>>>>> So if it returns fewer than requested, that could just mean that it
>>>>>> needed to drop the lock.  In that case, try again immediately.
>>>>>> 
>>>>>> Only pause for a time if no progress could be made.
>>>>> 
>>>>> The case I was worried about was "no pages available on the
>>>>> pcplist", in which case alloc_pages_bulk_array() resorts
>>>>> to calling __alloc_pages() and returns only one new page.
>>>>> 
>>>>> "No progess" would mean even __alloc_pages() failed.
>>>>> 
>>>>> So this patch would behave essentially like the
>>>>> pre-alloc_pages_bulk_array() code: call alloc_page() for
>>>>> each empty struct page in the array without pausing. That
>>>>> seems correct to me.
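>>>>> 
>>>>> (For anyone following along, the pre-bulk-allocator version of this
>>>>> loop looked roughly like the following. I'm paraphrasing from
>>>>> memory, so the details of the error path may not match the old tree
>>>>> exactly.)
>>>>> 
>>>>>   for (i = 0; i < pages; i++)
>>>>>           while (rqstp->rq_pages[i] == NULL) {
>>>>>                   struct page *p = alloc_page(GFP_KERNEL);
>>>>> 
>>>>>                   if (!p) {
>>>>>                           /* A single-page allocation failed
>>>>>                            * outright: back off before retrying. */
>>>>>                           set_current_state(TASK_INTERRUPTIBLE);
>>>>>                           if (signalled() || kthread_should_stop()) {
>>>>>                                   set_current_state(TASK_RUNNING);
>>>>>                                   return -EINTR;
>>>>>                           }
>>>>>                           schedule_timeout(msecs_to_jiffies(500));
>>>>>                   }
>>>>>                   rqstp->rq_pages[i] = p;
>>>>>           }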
>>>>> 
>>>>> 
>>>>> I would add
>>>>> 
>>>>> Fixes: f6e70aab9dfe ("SUNRPC: refresh rq_pages using a bulk page allocator")
>>>>> 
>>>>> 
>>>>>> Signed-off-by: NeilBrown <neilb@suse.de>
>>>>>> ---
>>>>>> net/sunrpc/svc_xprt.c | 7 +++++--
>>>>>> 1 file changed, 5 insertions(+), 2 deletions(-)
>>>>>> 
>>>>>> diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
>>>>>> index d66a8e44a1ae..99268dd95519 100644
>>>>>> --- a/net/sunrpc/svc_xprt.c
>>>>>> +++ b/net/sunrpc/svc_xprt.c
>>>>>> @@ -662,7 +662,7 @@ static int svc_alloc_arg(struct svc_rqst *rqstp)
>>>>>> {
>>>>>>     struct svc_serv *serv = rqstp->rq_server;
>>>>>>     struct xdr_buf *arg = &rqstp->rq_arg;
>>>>>> -     unsigned long pages, filled;
>>>>>> +     unsigned long pages, filled, prev;
>>>>>> 
>>>>>>     pages = (serv->sv_max_mesg + 2 * PAGE_SIZE) >> PAGE_SHIFT;
>>>>>>     if (pages > RPCSVC_MAXPAGES) {
>>>>>> @@ -672,11 +672,14 @@ static int svc_alloc_arg(struct svc_rqst *rqstp)
>>>>>>             pages = RPCSVC_MAXPAGES;
>>>>>>     }
>>>>>> 
>>>>>> -     for (;;) {
>>>>>> +     for (prev = 0;; prev = filled) {
>>>>>>             filled = alloc_pages_bulk_array(GFP_KERNEL, pages,
>>>>>>                                             rqstp->rq_pages);
>>>>>>             if (filled == pages)
>>>>>>                     break;
>>>>>> +             if (filled > prev)
>>>>>> +                     /* Made progress, don't sleep yet */
>>>>>> +                     continue;
>>>>>> 
>>>>>>             set_current_state(TASK_INTERRUPTIBLE);
>>>>>>             if (signalled() || kthread_should_stop()) {
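>>>>> 
>>>>> With this applied, the whole allocation loop reads roughly as below
>>>>> (my reconstruction rather than a copy from the tree, so the tail of
>>>>> the sleep path may differ in detail):
>>>>> 
>>>>>   for (prev = 0;; prev = filled) {
>>>>>           filled = alloc_pages_bulk_array(GFP_KERNEL, pages,
>>>>>                                           rqstp->rq_pages);
>>>>>           if (filled == pages)
>>>>>                   break;
>>>>>           if (filled > prev)
>>>>>                   /* Made progress, don't sleep yet */
>>>>>                   continue;
>>>>> 
>>>>>           /* No progress at all: pause briefly, unless the thread
>>>>>            * is being asked to shut down. */
>>>>>           set_current_state(TASK_INTERRUPTIBLE);
>>>>>           if (signalled() || kthread_should_stop()) {
>>>>>                   set_current_state(TASK_RUNNING);
>>>>>                   return -EINTR;
>>>>>           }
>>>>>           schedule_timeout(msecs_to_jiffies(500));
>>>>>   }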
>>>>> 
>>>>> --
>>>>> Chuck Lever
>>>>> 
>>>>> 
>>>>> 
>> 
>> --
>> Chuck Lever
>> 
>> 
>> 

--
Chuck Lever




