From: Chuck Lever III <chuck.lever@oracle.com>
To: "trondmy@kernel.org" <trondmy@kernel.org>,
Bruce Fields <bfields@redhat.com>
Cc: Linux NFS Mailing List <linux-nfs@vger.kernel.org>
Subject: Re: [PATCH] nfsd: Reduce contention for the nfsd_file nf_rwsem
Date: Tue, 22 Jun 2021 14:51:52 +0000 [thread overview]
Message-ID: <F61924B3-3FC0-4FEF-BEFB-9802D9A852B7@oracle.com> (raw)
In-Reply-To: <20210617232652.264884-1-trondmy@kernel.org>
> On Jun 17, 2021, at 7:26 PM, trondmy@kernel.org wrote:
>
> From: Trond Myklebust <trond.myklebust@hammerspace.com>
>
> When flushing out the unstable file writes as part of a COMMIT call, try
> to perform most of the data writes and waits outside the semaphore.
>
> This means that if the client is sending the COMMIT as part of a memory
> reclaim operation, then it can continue performing I/O, with contention
> for the lock occurring only once the data sync is finished.
>
> Fixes: 5011af4c698a ("nfsd: Fix stable writes")
> Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
I can see the write throughput improvement now.
Tested-by: Chuck Lever <chuck.lever@oracle.com>
This is NFSv3 against an NVMe-backed XFS export, wsize=8192:
v5.13-rc6:

    Command line used: /home/cel/bin/iozone -M -+u -i0 -i1 -s1g -r256k -t12 -I
    Output is in kBytes/sec
    Time Resolution = 0.000001 seconds.
    Processor cache size set to 1024 kBytes.
    Processor cache line size set to 32 bytes.
    File stride size set to 17 * record size.
    Throughput test with 12 processes
    Each process writes a 1048576 kByte file in 256 kByte records

    Children see throughput for 12 initial writers = 416004.59 kB/sec
    Parent sees throughput for 12 initial writers  = 415691.65 kB/sec
    Min throughput per process                     =  34630.36 kB/sec
    Max throughput per process                     =  34703.62 kB/sec
    Avg throughput per process                     =  34667.05 kB/sec
    Min xfer                                       = 1046528.00 kB
    CPU Utilization: Wall time 30.239  CPU time 5.854  CPU utilization 19.36 %

    Children see throughput for 12 rewriters       = 516605.59 kB/sec
    Parent sees throughput for 12 rewriters        = 516530.05 kB/sec
    Min throughput per process                     =  43007.56 kB/sec
    Max throughput per process                     =  43074.50 kB/sec
    Avg throughput per process                     =  43050.47 kB/sec
    Min xfer                                       = 1047040.00 kB
    CPU utilization: Wall time 24.347  CPU time 5.882  CPU utilization 24.16 %
v5.13-rc6 + Trond's patch:

    Command line used: /home/cel/bin/iozone -M -+u -i0 -i1 -s1g -r256k -t12 -I
    Output is in kBytes/sec
    Time Resolution = 0.000001 seconds.
    Processor cache size set to 1024 kBytes.
    Processor cache line size set to 32 bytes.
    File stride size set to 17 * record size.
    Throughput test with 12 processes
    Each process writes a 1048576 kByte file in 256 kByte records

    Children see throughput for 12 initial writers = 434971.09 kB/sec
    Parent sees throughput for 12 initial writers  = 434649.13 kB/sec
    Min throughput per process                     =  36209.41 kB/sec
    Max throughput per process                     =  36287.55 kB/sec
    Avg throughput per process                     =  36247.59 kB/sec
    Min xfer                                       = 1046528.00 kB
    CPU Utilization: Wall time 28.920  CPU time 5.705  CPU utilization 19.73 %

    Children see throughput for 12 rewriters       = 544700.37 kB/sec
    Parent sees throughput for 12 rewriters        = 544623.91 kB/sec
    Min throughput per process                     =  45320.82 kB/sec
    Max throughput per process                     =  45456.07 kB/sec
    Avg throughput per process                     =  45391.70 kB/sec
    Min xfer                                       = 1045504.00 kB
    CPU utilization: Wall time 23.071  CPU time 5.708  CPU utilization 24.74 %
> ---
> fs/nfsd/vfs.c | 18 ++++++++++++++++--
> 1 file changed, 16 insertions(+), 2 deletions(-)
>
> diff --git a/fs/nfsd/vfs.c b/fs/nfsd/vfs.c
> index 15adf1f6ab21..46485c04740d 100644
> --- a/fs/nfsd/vfs.c
> +++ b/fs/nfsd/vfs.c
> @@ -1123,6 +1123,19 @@ nfsd_write(struct svc_rqst *rqstp, struct svc_fh *fhp, loff_t offset,
> }
>
> #ifdef CONFIG_NFSD_V3
> +static int
> +nfsd_filemap_write_and_wait_range(struct nfsd_file *nf, loff_t offset,
> + loff_t end)
> +{
> + struct address_space *mapping = nf->nf_file->f_mapping;
> + int ret = filemap_fdatawrite_range(mapping, offset, end);
> +
> + if (ret)
> + return ret;
> + filemap_fdatawait_range_keep_errors(mapping, offset, end);
> + return 0;
> +}
> +
> /*
> * Commit all pending writes to stable storage.
> *
> @@ -1153,10 +1166,11 @@ nfsd_commit(struct svc_rqst *rqstp, struct svc_fh *fhp,
> if (err)
> goto out;
> if (EX_ISSYNC(fhp->fh_export)) {
> - int err2;
> + int err2 = nfsd_filemap_write_and_wait_range(nf, offset, end);
>
> down_write(&nf->nf_rwsem);
> - err2 = vfs_fsync_range(nf->nf_file, offset, end, 0);
> + if (!err2)
> + err2 = vfs_fsync_range(nf->nf_file, offset, end, 0);
> switch (err2) {
> case 0:
> nfsd_copy_boot_verifier(verf, net_generic(nf->nf_net,
> --
> 2.31.1
>
--
Chuck Lever
Thread overview: 9+ messages
2021-06-17 23:26 [PATCH] nfsd: Reduce contention for the nfsd_file nf_rwsem trondmy
2021-06-18 17:59 ` Chuck Lever III
2021-06-21 16:49 ` Chuck Lever III
2021-06-21 18:00 ` Trond Myklebust
2021-06-21 18:27 ` Chuck Lever III
2021-06-21 19:06 ` Trond Myklebust
2021-06-21 19:35 ` Trond Myklebust
2021-06-22 14:51 ` Chuck Lever III [this message]
2021-06-23 21:38 ` J. Bruce Fields