Linux-Fsdevel Archive on lore.kernel.org
From: Tom Talpey <tom@talpey.com>
To: John Hubbard <jhubbard@nvidia.com>,
	john.hubbard@gmail.com, Andrew Morton <akpm@linux-foundation.org>,
	linux-mm@kvack.org
Cc: Al Viro <viro@zeniv.linux.org.uk>,
	Christian Benvenuti <benve@cisco.com>,
	Christoph Hellwig <hch@infradead.org>,
	Christopher Lameter <cl@linux.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Dave Chinner <david@fromorbit.com>,
	Dennis Dalessandro <dennis.dalessandro@intel.com>,
	Doug Ledford <dledford@redhat.com>, Jan Kara <jack@suse.cz>,
	Jason Gunthorpe <jgg@ziepe.ca>,
	Jerome Glisse <jglisse@redhat.com>,
	Matthew Wilcox <willy@infradead.org>,
	Michal Hocko <mhocko@kernel.org>,
	Mike Rapoport <rppt@linux.ibm.com>,
	Mike Marciniszyn <mike.marciniszyn@intel.com>,
	Ralph Campbell <rcampbell@nvidia.com>,
	LKML <linux-kernel@vger.kernel.org>,
	linux-fsdevel@vger.kernel.org
Subject: Re: [PATCH 0/6] RFC v2: mm: gup/dma tracking
Date: Tue, 5 Feb 2019 08:38:10 -0500
Message-ID: <303ab506-62b7-ee6d-27a0-a818c7ff6473@talpey.com> (raw)
In-Reply-To: <80d503f5-038b-7f0b-90d5-e5b9537ae1df@nvidia.com>

On 2/5/2019 3:22 AM, John Hubbard wrote:
> On 2/4/19 5:41 PM, Tom Talpey wrote:
>> On 2/4/2019 12:21 AM, john.hubbard@gmail.com wrote:
>>> From: John Hubbard <jhubbard@nvidia.com>
>>>
>>>
>>> Performance: here is an fio run on an NVMe drive, using this for the fio
>>> configuration file:
>>>
>>>      [reader]
>>>      direct=1
>>>      ioengine=libaio
>>>      blocksize=4096
>>>      size=1g
>>>      numjobs=1
>>>      rw=read
>>>      iodepth=64
>>>
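
For anyone reproducing this comparison, the job file above can be driven from a
small script that reads the tail latencies straight out of fio's JSON output.
A minimal sketch, assuming fio 3.x (whose JSON reports completion latency in
nanoseconds under clat_ns); the job file name "reader.fio" is only an example:

    import json
    import subprocess

    # Run the job file quoted above and parse fio's JSON output (fio 3.x layout).
    # "reader.fio" is an example name for that job file.
    result = subprocess.run(
        ["fio", "reader.fio", "--output-format=json"],
        check=True, capture_output=True, text=True,
    )
    read_stats = json.loads(result.stdout)["jobs"][0]["read"]

    # Completion-latency percentiles are reported in nanoseconds.
    pct = read_stats["clat_ns"]["percentile"]
    for key in ("99.000000", "99.900000", "99.990000"):
        print(f"clat {key[:5]}th = {pct[key] / 1000:.0f} usec")
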
>>> reader: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
>>> fio-3.3
>>> Starting 1 process
>>> Jobs: 1 (f=1)
>>> reader: (groupid=0, jobs=1): err= 0: pid=7011: Sun Feb  3 20:36:51 2019
>>>     read: IOPS=190k, BW=741MiB/s (778MB/s)(1024MiB/1381msec)
>>>      slat (nsec): min=2716, max=57255, avg=4048.14, stdev=1084.10
>>>      clat (usec): min=20, max=12485, avg=332.63, stdev=191.77
>>>       lat (usec): min=22, max=12498, avg=336.72, stdev=192.07
>>>      clat percentiles (usec):
>>>       |  1.00th=[  322],  5.00th=[  322], 10.00th=[  322], 20.00th=[  326],
>>>       | 30.00th=[  326], 40.00th=[  326], 50.00th=[  326], 60.00th=[  326],
>>>       | 70.00th=[  326], 80.00th=[  330], 90.00th=[  330], 95.00th=[  330],
>>>       | 99.00th=[  478], 99.50th=[  717], 99.90th=[ 1074], 99.95th=[ 1090],
>>>       | 99.99th=[12256]
>>
>> These latencies are concerning. The best results we saw at the end of
>> November (previous approach) were MUCH flatter. These really start
>> spiking at three 9's, and are sky-high at four 9's. The "stdev" values
>> for clat and lat are about 10 times the previous. There's some kind
>> of serious queuing contention here, that wasn't there in November.
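
For scale, it helps to keep in mind how thin that four-9's tail is for this
job; a rough back-of-the-envelope using the size and blocksize from the job
file above:

    # size=1g at blocksize=4096 means 262,144 reads in total, so the
    # 99.99th percentile is set by only the slowest ~26 completions;
    # a short burst of ~12 ms outliers is enough to move it.
    total_ios = (1 * 1024**3) // 4096            # 262144 reads
    tail_ios = total_ios * (100 - 99.99) / 100   # ~26 reads land above the 99.99th
    print(total_ios, round(tail_ios))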
> 
> Hi Tom,
> 
> I think this latency problem is also there in the baseline kernel, but...
> 
>>
>>>     bw (  KiB/s): min=730152, max=776512, per=99.22%, avg=753332.00, stdev=32781.47, samples=2
>>>     iops        : min=182538, max=194128, avg=188333.00, stdev=8195.37, samples=2
>>>    lat (usec)   : 50=0.01%, 100=0.01%, 250=0.07%, 500=99.26%, 750=0.38%
>>>    lat (usec)   : 1000=0.02%
>>>    lat (msec)   : 2=0.24%, 20=0.02%
>>>    cpu          : usr=15.07%, sys=84.13%, ctx=10, majf=0, minf=74
>>
>> System CPU 84% is roughly double the November results of 45%. Ouch.
> 
> That's my fault. First of all, I had a few extra, supposedly minor debug
> settings in the .config, which I'm removing now--I'm doing a proper run
> with the original .config file from November, below. Second, I'm not
> sure I controlled the run carefully enough.
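
As an aside, the heavyweight debug options are easy to spot in a .config
before a run; a quick sketch, where the option names are only common examples
rather than a complete list:

    # Flag debug options that tend to inflate sys CPU if left enabled.
    # The ".config" path and the option list below are examples only.
    SUSPECTS = ("CONFIG_KASAN", "CONFIG_DEBUG_PAGEALLOC",
                "CONFIG_DEBUG_KMEMLEAK", "CONFIG_PROVE_LOCKING",
                "CONFIG_DEBUG_OBJECTS")
    with open(".config") as f:
        enabled = {line.split("=", 1)[0] for line in f
                   if line.rstrip().endswith(("=y", "=m"))}
    for opt in SUSPECTS:
        print(opt, "enabled" if opt in enabled else "not set")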
> 
>>
>> Did you re-run the baseline on the new unpatched base kernel and can
>> we see the before/after?
> 
> Doing that now, I see:
> 
> -- No significant perf difference between before and after, but
> -- Still high clat in the 99.99th
> 
> =======================================================================
> Before: using commit 8834f5600cf3 ("Linux 5.0-rc5")
> ===================================================
> reader: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
> fio-3.3
> Starting 1 process
> Jobs: 1 (f=1)
> reader: (groupid=0, jobs=1): err= 0: pid=1829: Tue Feb  5 00:08:08 2019
>     read: IOPS=193k, BW=753MiB/s (790MB/s)(1024MiB/1359msec)
>      slat (nsec): min=1269, max=40309, avg=1493.66, stdev=534.83
>      clat (usec): min=127, max=12249, avg=329.83, stdev=184.92
>       lat (usec): min=129, max=12256, avg=331.35, stdev=185.06
>      clat percentiles (usec):
>       |  1.00th=[  326],  5.00th=[  326], 10.00th=[  326], 20.00th=[  326],
>       | 30.00th=[  326], 40.00th=[  326], 50.00th=[  326], 60.00th=[  326],
>       | 70.00th=[  326], 80.00th=[  326], 90.00th=[  326], 95.00th=[  326],
>       | 99.00th=[  347], 99.50th=[  519], 99.90th=[  529], 99.95th=[  537],
>       | 99.99th=[12125]
>     bw (  KiB/s): min=755032, max=781472, per=99.57%, avg=768252.00, stdev=18695.90, samples=2
>     iops        : min=188758, max=195368, avg=192063.00, stdev=4673.98, samples=2
>    lat (usec)   : 250=0.08%, 500=99.18%, 750=0.72%
>    lat (msec)   : 20=0.02%
>    cpu          : usr=12.30%, sys=46.83%, ctx=253554, majf=0, minf=74
>    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
>       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
>       issued rwts: total=262144,0,0,0 short=0,0,0,0 dropped=0,0,0,0
>       latency   : target=0, window=0, percentile=100.00%, depth=64
> 
> Run status group 0 (all jobs):
>     READ: bw=753MiB/s (790MB/s), 753MiB/s-753MiB/s (790MB/s-790MB/s), io=1024MiB (1074MB), run=1359-1359msec
> 
> Disk stats (read/write):
>    nvme0n1: ios=221246/0, merge=0/0, ticks=71556/0, in_queue=704, util=91.35%
> 
> =======================================================================
> After:
> =======================================================================
> reader: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
> fio-3.3
> Starting 1 process
> Jobs: 1 (f=1)
> reader: (groupid=0, jobs=1): err= 0: pid=1803: Mon Feb  4 23:58:07 2019
>     read: IOPS=193k, BW=753MiB/s (790MB/s)(1024MiB/1359msec)
>      slat (nsec): min=1276, max=41900, avg=1505.36, stdev=565.26
>      clat (usec): min=177, max=12186, avg=329.88, stdev=184.03
>       lat (usec): min=178, max=12192, avg=331.42, stdev=184.16
>      clat percentiles (usec):
>       |  1.00th=[  326],  5.00th=[  326], 10.00th=[  326], 20.00th=[  326],
>       | 30.00th=[  326], 40.00th=[  326], 50.00th=[  326], 60.00th=[  326],
>       | 70.00th=[  326], 80.00th=[  326], 90.00th=[  326], 95.00th=[  326],
>       | 99.00th=[  359], 99.50th=[  498], 99.90th=[  537], 99.95th=[  627],
>       | 99.99th=[12125]
>     bw (  KiB/s): min=754656, max=781504, per=99.55%, avg=768080.00, stdev=18984.40, samples=2
>     iops        : min=188664, max=195378, avg=192021.00, stdev=4747.51, samples=2
>    lat (usec)   : 250=0.12%, 500=99.40%, 750=0.46%
>    lat (msec)   : 20=0.02%
>    cpu          : usr=12.44%, sys=47.05%, ctx=252127, majf=0, minf=73
>    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
>       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
>       issued rwts: total=262144,0,0,0 short=0,0,0,0 dropped=0,0,0,0
>       latency   : target=0, window=0, percentile=100.00%, depth=64
> 
> Run status group 0 (all jobs):
>     READ: bw=753MiB/s (790MB/s), 753MiB/s-753MiB/s (790MB/s-790MB/s), io=1024MiB (1074MB), run=1359-1359msec
> 
> Disk stats (read/write):
>    nvme0n1: ios=221203/0, merge=0/0, ticks=71291/0, in_queue=704, util=91.19%
> 
> How's this look to you?

Ok, I'm satisfied the four-9's latency spike is not in your code. :-)
Results look good relative to baseline. Thanks for double-checking!
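
For the next spin, the before/after tails are easiest to eyeball side by side.
A rough sketch that reads two saved fio JSON result files (file names are
examples; fio 3.x clat_ns layout assumed, e.g. results saved with
"fio reader.fio --output-format=json --output=FILE"):

    import json
    import sys

    # Load completion-latency percentiles (in usec) from a fio JSON result file.
    def clat_usec(path):
        with open(path) as f:
            pct = json.load(f)["jobs"][0]["read"]["clat_ns"]["percentile"]
        return {k: v / 1000 for k, v in pct.items()}

    before = clat_usec(sys.argv[1])   # e.g. before.json (baseline kernel)
    after = clat_usec(sys.argv[2])    # e.g. after.json  (patched kernel)
    for key in sorted(before, key=float):
        print(f"{float(key):6.2f}th  before={before[key]:8.0f}  "
              f"after={after.get(key, float('nan')):8.0f} usec")

Something like "python3 compare_clat.py before.json after.json" (the script
name is just an example) then prints one line per percentile, which makes a
99.99th jump, or its absence, obvious at a glance.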

Tom.

Thread overview: 26+ messages
2019-02-04  5:21 [PATCH 0/6] RFC v2: mm: gup/dma tracking john.hubbard
2019-02-04  5:21 ` [PATCH 1/6] mm: introduce put_user_page*(), placeholder versions john.hubbard
2019-02-04  5:21 ` [PATCH 2/6] infiniband/mm: convert put_page() to put_user_page*() john.hubbard
2019-02-04  5:21 ` [PATCH 3/6] mm: page_cache_add_speculative(): refactoring john.hubbard
2019-02-04  5:21 ` [PATCH 4/6] mm/gup: track gup-pinned pages john.hubbard
2019-02-04 18:19   ` Matthew Wilcox
2019-02-04 19:11     ` John Hubbard
2019-02-20 19:24   ` Ira Weiny
2019-02-20 20:22     ` John Hubbard
2019-02-04  5:21 ` [PATCH 5/6] mm/gup: /proc/vmstat support for get/put user pages john.hubbard
2019-02-04  5:21 ` [PATCH 6/6] mm/gup: Documentation/vm/get_user_pages.rst, MAINTAINERS john.hubbard
2019-02-05 16:40   ` Mike Rapoport
2019-02-05 21:53     ` John Hubbard
2019-02-04 16:08 ` [PATCH 0/6] RFC v2: mm: gup/dma tracking Christopher Lameter
2019-02-04 16:12   ` Christoph Hellwig
2019-02-04 16:59     ` Christopher Lameter
2019-02-04 17:14 ` Christopher Lameter
2019-02-04 17:51   ` Jason Gunthorpe
2019-02-04 18:21     ` Christopher Lameter
2019-02-04 19:09       ` Matthew Wilcox
2019-02-04 23:35   ` Ira Weiny
2019-02-05 19:30     ` Christopher Lameter
2019-02-05  1:41 ` Tom Talpey
2019-02-05  8:22   ` John Hubbard
2019-02-05 13:38     ` Tom Talpey [this message]
2019-02-05 21:55       ` John Hubbard
