* [PATCH] nfs: writeback pages wait queue
From: Wu Fengguang @ 2011-11-19 12:52 UTC
To: Trond Myklebust; +Cc: linux-nfs, linux-fsdevel, LKML, Feng Tang
The generic writeback routines are departing from congestion_wait()
in favor of get_request_wait(), i.e. waiting on the block request
queues.

Introduce the missing writeback wait queue for NFS; otherwise its
writeback pages will grow greedily, exhausting all PG_dirty pages.

Tests show that it effectively reduces stalls in the disk-network
pipeline, improves performance and reduces delays.
The test cases are basically:

    for run in 1 2 3
      for nr_dd in 1 10 100
        for dirty_thresh in 10M 100M 1000M 2G
          start $nr_dd dd's writing to a 1-disk mem=12G NFS server

During all tests, nfs_congestion_kb is set to 1/8 of dirty_thresh.
3.2.0-rc1 3.2.0-rc1-ioless-full+
(w/o patch) (w/ patch)
----------- ------------------------
20.66 +136.7% 48.90 thresh=1000M/nfs-100dd-1
20.82 +147.5% 51.52 thresh=1000M/nfs-100dd-2
20.57 +129.8% 47.26 thresh=1000M/nfs-100dd-3
35.96 +96.5% 70.67 thresh=1000M/nfs-10dd-1
37.47 +89.1% 70.85 thresh=1000M/nfs-10dd-2
34.55 +106.1% 71.21 thresh=1000M/nfs-10dd-3
58.24 +28.2% 74.63 thresh=1000M/nfs-1dd-1
59.83 +18.6% 70.93 thresh=1000M/nfs-1dd-2
58.30 +31.4% 76.61 thresh=1000M/nfs-1dd-3
23.69 -10.0% 21.33 thresh=100M/nfs-100dd-1
23.59 -1.7% 23.19 thresh=100M/nfs-100dd-2
23.94 -1.0% 23.70 thresh=100M/nfs-100dd-3
27.06 -0.0% 27.06 thresh=100M/nfs-10dd-1
25.43 +4.8% 26.66 thresh=100M/nfs-10dd-2
27.21 -0.8% 26.99 thresh=100M/nfs-10dd-3
53.82 +4.4% 56.17 thresh=100M/nfs-1dd-1
55.80 +4.2% 58.12 thresh=100M/nfs-1dd-2
55.75 +2.9% 57.37 thresh=100M/nfs-1dd-3
15.47 +1.3% 15.68 thresh=10M/nfs-10dd-1
16.09 -3.5% 15.53 thresh=10M/nfs-10dd-2
15.09 -0.9% 14.96 thresh=10M/nfs-10dd-3
26.65 +13.0% 30.10 thresh=10M/nfs-1dd-1
25.09 +7.7% 27.02 thresh=10M/nfs-1dd-2
27.16 +3.3% 28.06 thresh=10M/nfs-1dd-3
27.51 +78.6% 49.11 thresh=2G/nfs-100dd-1
22.46 +131.6% 52.01 thresh=2G/nfs-100dd-2
12.95 +289.8% 50.50 thresh=2G/nfs-100dd-3
42.28 +81.0% 76.52 thresh=2G/nfs-10dd-1
40.33 +78.8% 72.10 thresh=2G/nfs-10dd-2
42.52 +67.6% 71.27 thresh=2G/nfs-10dd-3
62.27 +34.6% 83.84 thresh=2G/nfs-1dd-1
60.10 +35.6% 81.48 thresh=2G/nfs-1dd-2
66.29 +17.5% 77.88 thresh=2G/nfs-1dd-3
1164.97 +41.6% 1649.19 TOTAL write_bw
The local queue time for WRITE RPCs is reduced by several orders of magnitude!
3.2.0-rc1 3.2.0-rc1-ioless-full+
----------- ------------------------
90226.82 -99.9% 92.07 thresh=1000M/nfs-100dd-1
88904.27 -99.9% 80.21 thresh=1000M/nfs-100dd-2
97436.73 -99.9% 87.32 thresh=1000M/nfs-100dd-3
62167.19 -99.3% 444.25 thresh=1000M/nfs-10dd-1
64150.34 -99.2% 539.38 thresh=1000M/nfs-10dd-2
78675.54 -99.3% 540.27 thresh=1000M/nfs-10dd-3
5372.84 +57.8% 8477.45 thresh=1000M/nfs-1dd-1
10245.66 -51.2% 4995.71 thresh=1000M/nfs-1dd-2
4744.06 +109.1% 9919.55 thresh=1000M/nfs-1dd-3
1727.29 -9.6% 1562.16 thresh=100M/nfs-100dd-1
2183.49 +4.4% 2280.21 thresh=100M/nfs-100dd-2
2201.49 +3.7% 2281.92 thresh=100M/nfs-100dd-3
6213.73 +19.9% 7448.13 thresh=100M/nfs-10dd-1
8127.01 +3.2% 8387.06 thresh=100M/nfs-10dd-2
7255.35 +4.4% 7571.11 thresh=100M/nfs-10dd-3
1144.67 +20.4% 1378.01 thresh=100M/nfs-1dd-1
1010.02 +19.0% 1202.22 thresh=100M/nfs-1dd-2
906.33 +15.8% 1049.76 thresh=100M/nfs-1dd-3
642.82 +17.3% 753.80 thresh=10M/nfs-10dd-1
766.82 -21.7% 600.18 thresh=10M/nfs-10dd-2
575.95 +16.5% 670.85 thresh=10M/nfs-10dd-3
21.91 +71.0% 37.47 thresh=10M/nfs-1dd-1
16.70 +105.3% 34.29 thresh=10M/nfs-1dd-2
19.05 -71.3% 5.47 thresh=10M/nfs-1dd-3
123877.11 -99.0% 1187.27 thresh=2G/nfs-100dd-1
122353.65 -98.8% 1505.84 thresh=2G/nfs-100dd-2
101140.82 -98.4% 1641.03 thresh=2G/nfs-100dd-3
78248.51 -98.9% 892.00 thresh=2G/nfs-10dd-1
84589.42 -98.6% 1212.17 thresh=2G/nfs-10dd-2
89684.95 -99.4% 495.28 thresh=2G/nfs-10dd-3
10405.39 -6.9% 9684.57 thresh=2G/nfs-1dd-1
16151.86 -48.5% 8316.69 thresh=2G/nfs-1dd-2
16119.17 -49.0% 8214.84 thresh=2G/nfs-1dd-3
1177306.98 -92.1% 93588.50 TOTAL nfs_write_queue_time
The average COMMIT size is not much affected.
3.2.0-rc1 3.2.0-rc1-ioless-full+
----------- ------------------------
5.56 +44.9% 8.06 thresh=1000M/nfs-100dd-1
4.14 +109.1% 8.67 thresh=1000M/nfs-100dd-2
5.46 +16.3% 6.35 thresh=1000M/nfs-100dd-3
52.04 -8.4% 47.70 thresh=1000M/nfs-10dd-1
52.33 -13.8% 45.09 thresh=1000M/nfs-10dd-2
51.72 -9.2% 46.98 thresh=1000M/nfs-10dd-3
484.63 -8.6% 443.16 thresh=1000M/nfs-1dd-1
492.42 -8.2% 452.26 thresh=1000M/nfs-1dd-2
493.13 -11.4% 437.15 thresh=1000M/nfs-1dd-3
32.52 -72.9% 8.80 thresh=100M/nfs-100dd-1
36.15 +26.1% 45.58 thresh=100M/nfs-100dd-2
38.33 +0.4% 38.49 thresh=100M/nfs-100dd-3
5.67 +0.5% 5.69 thresh=100M/nfs-10dd-1
5.74 -1.1% 5.68 thresh=100M/nfs-10dd-2
5.69 +0.9% 5.74 thresh=100M/nfs-10dd-3
44.91 -1.0% 44.45 thresh=100M/nfs-1dd-1
44.22 -0.6% 43.96 thresh=100M/nfs-1dd-2
44.18 +0.2% 44.28 thresh=100M/nfs-1dd-3
1.42 +1.1% 1.43 thresh=10M/nfs-10dd-1
1.48 +0.3% 1.48 thresh=10M/nfs-10dd-2
1.43 -1.0% 1.42 thresh=10M/nfs-10dd-3
5.51 -6.8% 5.14 thresh=10M/nfs-1dd-1
5.91 -8.1% 5.43 thresh=10M/nfs-1dd-2
5.44 +3.0% 5.61 thresh=10M/nfs-1dd-3
8.80 +6.6% 9.38 thresh=2G/nfs-100dd-1
8.51 +65.2% 14.06 thresh=2G/nfs-100dd-2
15.28 -13.2% 13.27 thresh=2G/nfs-100dd-3
105.12 -24.9% 78.99 thresh=2G/nfs-10dd-1
101.90 -9.1% 92.60 thresh=2G/nfs-10dd-2
106.24 -29.7% 74.65 thresh=2G/nfs-10dd-3
909.85 +0.4% 913.68 thresh=2G/nfs-1dd-1
1030.45 -18.3% 841.68 thresh=2G/nfs-1dd-2
1016.56 -11.6% 898.36 thresh=2G/nfs-1dd-3
5222.74 -10.1% 4695.25 TOTAL nfs_commit_size
And here is the list of overall numbers.
3.2.0-rc1 3.2.0-rc1-ioless-full+
----------- ------------------------
1164.97 +41.6% 1649.19 TOTAL write_bw
54799.00 +25.0% 68500.00 TOTAL nfs_nr_commits
3543263.00 -3.3% 3425418.00 TOTAL nfs_nr_writes
5222.74 -10.1% 4695.25 TOTAL nfs_commit_size
7.62 +89.2% 14.42 TOTAL nfs_write_size
1177306.98 -92.1% 93588.50 TOTAL nfs_write_queue_time
5977.02 -16.0% 5019.34 TOTAL nfs_write_rtt_time
1183360.15 -91.7% 98645.74 TOTAL nfs_write_execute_time
51186.59 -62.5% 19170.98 TOTAL nfs_commit_queue_time
81801.14 +3.6% 84735.19 TOTAL nfs_commit_rtt_time
133015.32 -21.9% 103926.05 TOTAL nfs_commit_execute_time
Feng: throttle at a coarser granularity, once per ->writepages call
rather than once per page, for better performance and to avoid a
throttled-before-send-RPC deadlock.
Signed-off-by: Feng Tang <feng.tang@intel.com>
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
---
fs/nfs/client.c | 2
fs/nfs/write.c | 84 +++++++++++++++++++++++++++++++-----
include/linux/nfs_fs_sb.h | 1
3 files changed, 77 insertions(+), 10 deletions(-)
--- linux-next.orig/fs/nfs/write.c 2011-10-20 23:08:17.000000000 +0800
+++ linux-next/fs/nfs/write.c 2011-10-20 23:45:59.000000000 +0800
@@ -190,11 +190,64 @@ static int wb_priority(struct writeback_
* NFS congestion control
*/
+#define NFS_WAIT_PAGES (1024L >> (PAGE_SHIFT - 10))
int nfs_congestion_kb;
-#define NFS_CONGESTION_ON_THRESH (nfs_congestion_kb >> (PAGE_SHIFT-10))
-#define NFS_CONGESTION_OFF_THRESH \
- (NFS_CONGESTION_ON_THRESH - (NFS_CONGESTION_ON_THRESH >> 2))
+/*
+ * SYNC requests will block on (2*limit) and wakeup on (2*limit-NFS_WAIT_PAGES)
+ * ASYNC requests will block on (limit) and wakeup on (limit - NFS_WAIT_PAGES)
+ * In this way SYNC writes will never be blocked by ASYNC ones.
+ */
+
+static void nfs_set_congested(long nr, struct backing_dev_info *bdi)
+{
+ long limit = nfs_congestion_kb >> (PAGE_SHIFT - 10);
+
+ if (nr > limit && !test_bit(BDI_async_congested, &bdi->state))
+ set_bdi_congested(bdi, BLK_RW_ASYNC);
+ else if (nr > 2 * limit && !test_bit(BDI_sync_congested, &bdi->state))
+ set_bdi_congested(bdi, BLK_RW_SYNC);
+}
+
+static void nfs_wait_congested(int is_sync,
+ struct backing_dev_info *bdi,
+ wait_queue_head_t *wqh)
+{
+ int waitbit = is_sync ? BDI_sync_congested : BDI_async_congested;
+ DEFINE_WAIT(wait);
+
+ if (!test_bit(waitbit, &bdi->state))
+ return;
+
+ for (;;) {
+ prepare_to_wait(&wqh[is_sync], &wait, TASK_UNINTERRUPTIBLE);
+ if (!test_bit(waitbit, &bdi->state))
+ break;
+
+ io_schedule();
+ }
+ finish_wait(&wqh[is_sync], &wait);
+}
+
+static void nfs_wakeup_congested(long nr,
+ struct backing_dev_info *bdi,
+ wait_queue_head_t *wqh)
+{
+ long limit = nfs_congestion_kb >> (PAGE_SHIFT - 10);
+
+ if (nr < 2 * limit - min(limit / 8, NFS_WAIT_PAGES)) {
+ if (test_bit(BDI_sync_congested, &bdi->state))
+ clear_bdi_congested(bdi, BLK_RW_SYNC);
+ if (waitqueue_active(&wqh[BLK_RW_SYNC]))
+ wake_up(&wqh[BLK_RW_SYNC]);
+ }
+ if (nr < limit - min(limit / 8, NFS_WAIT_PAGES)) {
+ if (test_bit(BDI_async_congested, &bdi->state))
+ clear_bdi_congested(bdi, BLK_RW_ASYNC);
+ if (waitqueue_active(&wqh[BLK_RW_ASYNC]))
+ wake_up(&wqh[BLK_RW_ASYNC]);
+ }
+}
static int nfs_set_page_writeback(struct page *page)
{
@@ -205,11 +258,8 @@ static int nfs_set_page_writeback(struct
struct nfs_server *nfss = NFS_SERVER(inode);
page_cache_get(page);
- if (atomic_long_inc_return(&nfss->writeback) >
- NFS_CONGESTION_ON_THRESH) {
- set_bdi_congested(&nfss->backing_dev_info,
- BLK_RW_ASYNC);
- }
+ nfs_set_congested(atomic_long_inc_return(&nfss->writeback),
+ &nfss->backing_dev_info);
}
return ret;
}
@@ -221,8 +271,10 @@ static void nfs_end_page_writeback(struc
end_page_writeback(page);
page_cache_release(page);
- if (atomic_long_dec_return(&nfss->writeback) < NFS_CONGESTION_OFF_THRESH)
- clear_bdi_congested(&nfss->backing_dev_info, BLK_RW_ASYNC);
+
+ nfs_wakeup_congested(atomic_long_dec_return(&nfss->writeback),
+ &nfss->backing_dev_info,
+ nfss->writeback_wait);
}
static struct nfs_page *nfs_find_and_lock_request(struct page *page, bool nonblock)
@@ -323,10 +375,17 @@ static int nfs_writepage_locked(struct p
int nfs_writepage(struct page *page, struct writeback_control *wbc)
{
+ struct inode *inode = page->mapping->host;
+ struct nfs_server *nfss = NFS_SERVER(inode);
int ret;
ret = nfs_writepage_locked(page, wbc);
unlock_page(page);
+
+ nfs_wait_congested(wbc->sync_mode == WB_SYNC_ALL,
+ &nfss->backing_dev_info,
+ nfss->writeback_wait);
+
return ret;
}
@@ -342,6 +401,7 @@ static int nfs_writepages_callback(struc
int nfs_writepages(struct address_space *mapping, struct writeback_control *wbc)
{
struct inode *inode = mapping->host;
+ struct nfs_server *nfss = NFS_SERVER(inode);
unsigned long *bitlock = &NFS_I(inode)->flags;
struct nfs_pageio_descriptor pgio;
int err;
@@ -358,6 +418,10 @@ int nfs_writepages(struct address_space
err = write_cache_pages(mapping, wbc, nfs_writepages_callback, &pgio);
nfs_pageio_complete(&pgio);
+ nfs_wait_congested(wbc->sync_mode == WB_SYNC_ALL,
+ &nfss->backing_dev_info,
+ nfss->writeback_wait);
+
clear_bit_unlock(NFS_INO_FLUSHING, bitlock);
smp_mb__after_clear_bit();
wake_up_bit(bitlock, NFS_INO_FLUSHING);
--- linux-next.orig/include/linux/nfs_fs_sb.h 2011-10-20 23:08:17.000000000 +0800
+++ linux-next/include/linux/nfs_fs_sb.h 2011-10-20 23:45:12.000000000 +0800
@@ -102,6 +102,7 @@ struct nfs_server {
struct nfs_iostats __percpu *io_stats; /* I/O statistics */
struct backing_dev_info backing_dev_info;
atomic_long_t writeback; /* number of writeback pages */
+ wait_queue_head_t writeback_wait[2];
int flags; /* various flags */
unsigned int caps; /* server capabilities */
unsigned int rsize; /* read size */
--- linux-next.orig/fs/nfs/client.c 2011-10-20 23:08:17.000000000 +0800
+++ linux-next/fs/nfs/client.c 2011-10-20 23:45:12.000000000 +0800
@@ -1066,6 +1066,8 @@ static struct nfs_server *nfs_alloc_serv
INIT_LIST_HEAD(&server->layouts);
atomic_set(&server->active, 0);
+ init_waitqueue_head(&server->writeback_wait[BLK_RW_SYNC]);
+ init_waitqueue_head(&server->writeback_wait[BLK_RW_ASYNC]);
server->io_stats = nfs_alloc_iostats();
if (!server->io_stats) {
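To make the watermark hysteresis concrete, here is a minimal user-space
C sketch of the block/wakeup thresholds introduced above (illustrative
only, assuming PAGE_SHIFT=12; it models just the arithmetic, not the
actual bdi bits or wait queues):

    /*
     * Minimal model of the patch's congestion watermarks.
     * Plain C: no bdi state, no wait queues, just the thresholds.
     */
    #include <stdio.h>

    #define PAGE_SHIFT     12
    #define NFS_WAIT_PAGES (1024L >> (PAGE_SHIFT - 10))  /* 1MB worth of pages */

    static long min_long(long a, long b) { return a < b ? a : b; }

    /* Writers block once nr in-flight pages exceeds their threshold. */
    static int should_block(long nr, long limit, int is_sync)
    {
        return nr > (is_sync ? 2 * limit : limit);
    }

    /* Waiters are woken once nr drops a margin below the threshold. */
    static int should_wake(long nr, long limit, int is_sync)
    {
        long thresh = is_sync ? 2 * limit : limit;

        return nr < thresh - min_long(limit / 8, NFS_WAIT_PAGES);
    }

    int main(void)
    {
        long limit = (64 << 10) >> (PAGE_SHIFT - 10); /* 64MB -> 16384 pages */
        long nr;

        for (nr = limit - 512; nr <= 2 * limit + 512; nr += 256)
            printf("nr=%5ld  async block=%d wake=%d  sync block=%d wake=%d\n",
                   nr,
                   should_block(nr, limit, 0), should_wake(nr, limit, 0),
                   should_block(nr, limit, 1), should_wake(nr, limit, 1));
        return 0;
    }

Note how ASYNC writers start blocking at limit while SYNC writers only
block at 2*limit, so heavy background writeback never stalls sync writes.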
* Re: [PATCH] nfs: writeback pages wait queue
From: Wu Fengguang @ 2011-11-20 1:57 UTC
To: Jim Rees; +Cc: Trond Myklebust, linux-nfs, linux-fsdevel, LKML, Feng Tang
Hi Jim,
On Sat, Nov 19, 2011 at 09:44:12PM +0800, Jim Rees wrote:
> Wu Fengguang wrote:
>
> The generic writeback routines are departing from congestion_wait()
> in favor of get_request_wait(), i.e. waiting on the block request
> queues.
>
> Introduce the missing writeback wait queue for NFS; otherwise its
> writeback pages will grow greedily, exhausting all PG_dirty pages.
>
> Tests show that it effectively reduces stalls in the disk-network
> pipeline, improves performance and reduces delays.
>
> This is great stuff. Did you do any tests on long delay paths? I did some
> work on this a few years ago and made some progress but not enough.
Good question! I didn't test fat pipelines, which surely need a
reasonably high nfs_congestion_kb to work well.

However, the chances are good.
nfs_congestion_kb is computed at module load time in
nfs_init_writepagecache():
/*
* NFS congestion size, scale with available memory.
*
* 64MB: 8192k
* 128MB: 11585k
* 256MB: 16384k
* 512MB: 23170k
* 1GB: 32768k
* 2GB: 46340k
* 4GB: 65536k
* 8GB: 92681k
* 16GB: 131072k
*
* This allows larger machines to have larger/more transfers.
* Limit the default to 256M
*/
For a typical mem=4GB client, nfs_congestion_kb=64MB, which is enough
to fill a 100ms * 100MB/s = 10MB network pipeline.

There may be more demanding setups; however, those are rare cases, and
their users should be fully aware of their special requirements and of
the need for some hand tuning, for example:

    echo $((300<<10)) > /proc/sys/fs/nfs/nfs_congestion_kb
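For reference, here is a user-space sketch of the square-root scaling
behind that table; it mirrors nfs_init_writepagecache() as I read it
(16 * int_sqrt(totalram_pages) pages, capped at 256MB), assumes
PAGE_SHIFT=12, and is worth double-checking against your tree:

    /* Reproduce the nfs_congestion_kb scaling table; compile with -lm. */
    #include <stdio.h>
    #include <math.h>

    #define PAGE_SHIFT 12

    static long congestion_kb(long ram_mb)
    {
        long pages = ram_mb << (20 - PAGE_SHIFT);  /* RAM size in pages */
        long kb = (16 * (long)sqrt(pages)) << (PAGE_SHIFT - 10);

        return kb > 256 * 1024 ? 256 * 1024 : kb;  /* 256MB cap */
    }

    int main(void)
    {
        long mb;

        for (mb = 64; mb <= 16384; mb *= 2)
            printf("%6ldMB: %ldk\n", mb, congestion_kb(mb));
        return 0;
    }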
Thanks,
Fengguang
* Re: [PATCH] nfs: writeback pages wait queue
From: Wu Fengguang @ 2011-11-21 7:15 UTC
To: Trond Myklebust; +Cc: linux-nfs, linux-fsdevel, LKML, Feng Tang
> Tests show that it effectively reduces stalls in the disk-network
> pipeline, improves performance and reduces delays.
For the record, the full stats are listed below.
The attached two figures about NFS server disk/network traffics are
taken from cases
snb/thresh=1000M/nfs-1dd-1-3.2.0-rc1
snb/thresh=1000M/nfs-1dd-1-3.2.0-rc1-ioless-full+
The former shows more interleaved disk/network activity (one is mostly
stalled while the other is busy). The latter shows more evenly
distributed disk/net bandwidth, with both disk and net throughput
dropping to 0 less often.
Thanks,
Fengguang
---
wfg@bee /export/writeback% ./compare-nfs.sh -g nfs snb/*/{*-3.2.0-rc1,*-3.2.0-rc1-ioless-full+}
3.2.0-rc1 3.2.0-rc1-ioless-full+
------------------------ ------------------------
20.66 +136.7% 48.90 snb/thresh=1000M/nfs-100dd-1-3.2.0-rc1
20.82 +147.5% 51.52 snb/thresh=1000M/nfs-100dd-2-3.2.0-rc1
20.57 +129.8% 47.26 snb/thresh=1000M/nfs-100dd-3-3.2.0-rc1
35.96 +96.5% 70.67 snb/thresh=1000M/nfs-10dd-1-3.2.0-rc1
37.47 +89.1% 70.85 snb/thresh=1000M/nfs-10dd-2-3.2.0-rc1
34.55 +106.1% 71.21 snb/thresh=1000M/nfs-10dd-3-3.2.0-rc1
58.24 +28.2% 74.63 snb/thresh=1000M/nfs-1dd-1-3.2.0-rc1
59.83 +18.6% 70.93 snb/thresh=1000M/nfs-1dd-2-3.2.0-rc1
58.30 +31.4% 76.61 snb/thresh=1000M/nfs-1dd-3-3.2.0-rc1
23.69 -10.0% 21.33 snb/thresh=100M/nfs-100dd-1-3.2.0-rc1
23.59 -1.7% 23.19 snb/thresh=100M/nfs-100dd-2-3.2.0-rc1
23.94 -1.0% 23.70 snb/thresh=100M/nfs-100dd-3-3.2.0-rc1
27.06 -0.0% 27.06 snb/thresh=100M/nfs-10dd-1-3.2.0-rc1
25.43 +4.8% 26.66 snb/thresh=100M/nfs-10dd-2-3.2.0-rc1
27.21 -0.8% 26.99 snb/thresh=100M/nfs-10dd-3-3.2.0-rc1
53.82 +4.4% 56.17 snb/thresh=100M/nfs-1dd-1-3.2.0-rc1
55.80 +4.2% 58.12 snb/thresh=100M/nfs-1dd-2-3.2.0-rc1
55.75 +2.9% 57.37 snb/thresh=100M/nfs-1dd-3-3.2.0-rc1
15.47 +1.3% 15.68 snb/thresh=10M/nfs-10dd-1-3.2.0-rc1
16.09 -3.5% 15.53 snb/thresh=10M/nfs-10dd-2-3.2.0-rc1
15.09 -0.9% 14.96 snb/thresh=10M/nfs-10dd-3-3.2.0-rc1
26.65 +13.0% 30.10 snb/thresh=10M/nfs-1dd-1-3.2.0-rc1
25.09 +7.7% 27.02 snb/thresh=10M/nfs-1dd-2-3.2.0-rc1
27.16 +3.3% 28.06 snb/thresh=10M/nfs-1dd-3-3.2.0-rc1
27.51 +78.6% 49.11 snb/thresh=2G/nfs-100dd-1-3.2.0-rc1
22.46 +131.6% 52.01 snb/thresh=2G/nfs-100dd-2-3.2.0-rc1
12.95 +289.8% 50.50 snb/thresh=2G/nfs-100dd-3-3.2.0-rc1
42.28 +81.0% 76.52 snb/thresh=2G/nfs-10dd-1-3.2.0-rc1
40.33 +78.8% 72.10 snb/thresh=2G/nfs-10dd-2-3.2.0-rc1
42.52 +67.6% 71.27 snb/thresh=2G/nfs-10dd-3-3.2.0-rc1
62.27 +34.6% 83.84 snb/thresh=2G/nfs-1dd-1-3.2.0-rc1
60.10 +35.6% 81.48 snb/thresh=2G/nfs-1dd-2-3.2.0-rc1
66.29 +17.5% 77.88 snb/thresh=2G/nfs-1dd-3-3.2.0-rc1
1164.97 +41.6% 1649.19 TOTAL write_bw
3.2.0-rc1 3.2.0-rc1-ioless-full+
------------------------ ------------------------
2404.00 +67.7% 4031.00 snb/thresh=1000M/nfs-100dd-1-3.2.0-rc1
3236.00 +22.6% 3968.00 snb/thresh=1000M/nfs-100dd-2-3.2.0-rc1
2435.00 +100.7% 4886.00 snb/thresh=1000M/nfs-100dd-3-3.2.0-rc1
415.00 +113.7% 887.00 snb/thresh=1000M/nfs-10dd-1-3.2.0-rc1
429.00 +119.6% 942.00 snb/thresh=1000M/nfs-10dd-2-3.2.0-rc1
400.00 +127.2% 909.00 snb/thresh=1000M/nfs-10dd-3-3.2.0-rc1
72.00 +40.3% 101.00 snb/thresh=1000M/nfs-1dd-1-3.2.0-rc1
73.00 +28.8% 94.00 snb/thresh=1000M/nfs-1dd-2-3.2.0-rc1
71.00 +47.9% 105.00 snb/thresh=1000M/nfs-1dd-3-3.2.0-rc1
455.00 +233.4% 1517.00 snb/thresh=100M/nfs-100dd-1-3.2.0-rc1
401.00 -21.7% 314.00 snb/thresh=100M/nfs-100dd-2-3.2.0-rc1
390.00 -1.0% 386.00 snb/thresh=100M/nfs-100dd-3-3.2.0-rc1
2875.00 -0.6% 2858.00 snb/thresh=100M/nfs-10dd-1-3.2.0-rc1
2664.00 +5.9% 2820.00 snb/thresh=100M/nfs-10dd-2-3.2.0-rc1
2882.00 -1.8% 2829.00 snb/thresh=100M/nfs-10dd-3-3.2.0-rc1
718.00 +5.4% 757.00 snb/thresh=100M/nfs-1dd-1-3.2.0-rc1
756.00 +4.8% 792.00 snb/thresh=100M/nfs-1dd-2-3.2.0-rc1
756.00 +2.6% 776.00 snb/thresh=100M/nfs-1dd-3-3.2.0-rc1
6570.00 +0.1% 6574.00 snb/thresh=10M/nfs-10dd-1-3.2.0-rc1
6551.00 -3.8% 6305.00 snb/thresh=10M/nfs-10dd-2-3.2.0-rc1
6362.00 -0.0% 6360.00 snb/thresh=10M/nfs-10dd-3-3.2.0-rc1
2895.00 +21.2% 3509.00 snb/thresh=10M/nfs-1dd-1-3.2.0-rc1
2542.00 +17.2% 2978.00 snb/thresh=10M/nfs-1dd-2-3.2.0-rc1
2987.00 +0.3% 2996.00 snb/thresh=10M/nfs-1dd-3-3.2.0-rc1
2140.00 +77.6% 3801.00 snb/thresh=2G/nfs-100dd-1-3.2.0-rc1
1808.00 +41.2% 2552.00 snb/thresh=2G/nfs-100dd-2-3.2.0-rc1
679.00 +293.8% 2674.00 snb/thresh=2G/nfs-100dd-3-3.2.0-rc1
241.00 +141.1% 581.00 snb/thresh=2G/nfs-10dd-1-3.2.0-rc1
237.00 +97.5% 468.00 snb/thresh=2G/nfs-10dd-2-3.2.0-rc1
240.00 +135.4% 565.00 snb/thresh=2G/nfs-10dd-3-3.2.0-rc1
41.00 +34.1% 55.00 snb/thresh=2G/nfs-1dd-1-3.2.0-rc1
35.00 +65.7% 58.00 snb/thresh=2G/nfs-1dd-2-3.2.0-rc1
39.00 +33.3% 52.00 snb/thresh=2G/nfs-1dd-3-3.2.0-rc1
54799.00 +25.0% 68500.00 TOTAL nfs_nr_commits
3.2.0-rc1 3.2.0-rc1-ioless-full+
------------------------ ------------------------
40601.00 -14.0% 34925.00 snb/thresh=1000M/nfs-100dd-1-3.2.0-rc1
41859.00 -12.2% 36761.00 snb/thresh=1000M/nfs-100dd-2-3.2.0-rc1
39947.00 -15.1% 33897.00 snb/thresh=1000M/nfs-100dd-3-3.2.0-rc1
153398.00 -72.0% 42898.00 snb/thresh=1000M/nfs-10dd-1-3.2.0-rc1
149928.00 -65.0% 52422.00 snb/thresh=1000M/nfs-10dd-2-3.2.0-rc1
118064.00 -59.6% 47707.00 snb/thresh=1000M/nfs-10dd-3-3.2.0-rc1
102626.00 +78.2% 182853.00 snb/thresh=1000M/nfs-1dd-1-3.2.0-rc1
111743.00 +81.4% 202708.00 snb/thresh=1000M/nfs-1dd-2-3.2.0-rc1
91838.00 +107.1% 190184.00 snb/thresh=1000M/nfs-1dd-3-3.2.0-rc1
35016.00 +16.7% 40874.00 snb/thresh=100M/nfs-100dd-1-3.2.0-rc1
34778.00 -2.3% 33981.00 snb/thresh=100M/nfs-100dd-2-3.2.0-rc1
35749.00 -1.7% 35131.00 snb/thresh=100M/nfs-100dd-3-3.2.0-rc1
104527.00 -7.4% 96761.00 snb/thresh=100M/nfs-10dd-1-3.2.0-rc1
91568.00 +6.5% 97506.00 snb/thresh=100M/nfs-10dd-2-3.2.0-rc1
99681.00 -5.1% 94596.00 snb/thresh=100M/nfs-10dd-3-3.2.0-rc1
208450.00 -1.0% 206384.00 snb/thresh=100M/nfs-1dd-1-3.2.0-rc1
205944.00 +12.2% 231073.00 snb/thresh=100M/nfs-1dd-2-3.2.0-rc1
190945.00 +21.8% 232662.00 snb/thresh=100M/nfs-1dd-3-3.2.0-rc1
71042.00 -0.1% 70937.00 snb/thresh=10M/nfs-10dd-1-3.2.0-rc1
71398.00 -3.2% 69148.00 snb/thresh=10M/nfs-10dd-2-3.2.0-rc1
70247.00 -1.1% 69504.00 snb/thresh=10M/nfs-10dd-3-3.2.0-rc1
142168.00 +35.0% 191905.00 snb/thresh=10M/nfs-1dd-1-3.2.0-rc1
116012.00 +18.7% 137664.00 snb/thresh=10M/nfs-1dd-2-3.2.0-rc1
145534.00 +5.6% 153619.00 snb/thresh=10M/nfs-1dd-3-3.2.0-rc1
67785.00 -39.1% 41269.00 snb/thresh=2G/nfs-100dd-1-3.2.0-rc1
53151.00 -20.1% 42471.00 snb/thresh=2G/nfs-100dd-2-3.2.0-rc1
73200.00 -39.9% 43988.00 snb/thresh=2G/nfs-100dd-3-3.2.0-rc1
188725.00 -57.4% 80357.00 snb/thresh=2G/nfs-10dd-1-3.2.0-rc1
176809.00 -75.0% 44204.00 snb/thresh=2G/nfs-10dd-2-3.2.0-rc1
173292.00 -44.2% 96636.00 snb/thresh=2G/nfs-10dd-3-3.2.0-rc1
96940.00 +76.7% 171281.00 snb/thresh=2G/nfs-1dd-1-3.2.0-rc1
111572.00 +36.3% 152049.00 snb/thresh=2G/nfs-1dd-2-3.2.0-rc1
128726.00 +29.8% 167063.00 snb/thresh=2G/nfs-1dd-3-3.2.0-rc1
3543263.00 -3.3% 3425418.00 TOTAL nfs_nr_writes
3.2.0-rc1 3.2.0-rc1-ioless-full+
------------------------ ------------------------
5.56 +44.9% 8.06 snb/thresh=1000M/nfs-100dd-1-3.2.0-rc1
4.14 +109.1% 8.67 snb/thresh=1000M/nfs-100dd-2-3.2.0-rc1
5.46 +16.3% 6.35 snb/thresh=1000M/nfs-100dd-3-3.2.0-rc1
52.04 -8.4% 47.70 snb/thresh=1000M/nfs-10dd-1-3.2.0-rc1
52.33 -13.8% 45.09 snb/thresh=1000M/nfs-10dd-2-3.2.0-rc1
51.72 -9.2% 46.98 snb/thresh=1000M/nfs-10dd-3-3.2.0-rc1
484.63 -8.6% 443.16 snb/thresh=1000M/nfs-1dd-1-3.2.0-rc1
492.42 -8.2% 452.26 snb/thresh=1000M/nfs-1dd-2-3.2.0-rc1
493.13 -11.4% 437.15 snb/thresh=1000M/nfs-1dd-3-3.2.0-rc1
32.52 -72.9% 8.80 snb/thresh=100M/nfs-100dd-1-3.2.0-rc1
36.15 +26.1% 45.58 snb/thresh=100M/nfs-100dd-2-3.2.0-rc1
38.33 +0.4% 38.49 snb/thresh=100M/nfs-100dd-3-3.2.0-rc1
5.67 +0.5% 5.69 snb/thresh=100M/nfs-10dd-1-3.2.0-rc1
5.74 -1.1% 5.68 snb/thresh=100M/nfs-10dd-2-3.2.0-rc1
5.69 +0.9% 5.74 snb/thresh=100M/nfs-10dd-3-3.2.0-rc1
44.91 -1.0% 44.45 snb/thresh=100M/nfs-1dd-1-3.2.0-rc1
44.22 -0.6% 43.96 snb/thresh=100M/nfs-1dd-2-3.2.0-rc1
44.18 +0.2% 44.28 snb/thresh=100M/nfs-1dd-3-3.2.0-rc1
1.42 +1.1% 1.43 snb/thresh=10M/nfs-10dd-1-3.2.0-rc1
1.48 +0.3% 1.48 snb/thresh=10M/nfs-10dd-2-3.2.0-rc1
1.43 -1.0% 1.42 snb/thresh=10M/nfs-10dd-3-3.2.0-rc1
5.51 -6.8% 5.14 snb/thresh=10M/nfs-1dd-1-3.2.0-rc1
5.91 -8.1% 5.43 snb/thresh=10M/nfs-1dd-2-3.2.0-rc1
5.44 +3.0% 5.61 snb/thresh=10M/nfs-1dd-3-3.2.0-rc1
8.80 +6.6% 9.38 snb/thresh=2G/nfs-100dd-1-3.2.0-rc1
8.51 +65.2% 14.06 snb/thresh=2G/nfs-100dd-2-3.2.0-rc1
15.28 -13.2% 13.27 snb/thresh=2G/nfs-100dd-3-3.2.0-rc1
105.12 -24.9% 78.99 snb/thresh=2G/nfs-10dd-1-3.2.0-rc1
101.90 -9.1% 92.60 snb/thresh=2G/nfs-10dd-2-3.2.0-rc1
106.24 -29.7% 74.65 snb/thresh=2G/nfs-10dd-3-3.2.0-rc1
909.85 +0.4% 913.68 snb/thresh=2G/nfs-1dd-1-3.2.0-rc1
1030.45 -18.3% 841.68 snb/thresh=2G/nfs-1dd-2-3.2.0-rc1
1016.56 -11.6% 898.36 snb/thresh=2G/nfs-1dd-3-3.2.0-rc1
5222.74 -10.1% 4695.25 TOTAL nfs_commit_size
3.2.0-rc1 3.2.0-rc1-ioless-full+
------------------------ ------------------------
0.33 +182.4% 0.93 snb/thresh=1000M/nfs-100dd-1-3.2.0-rc1
0.32 +192.0% 0.94 snb/thresh=1000M/nfs-100dd-2-3.2.0-rc1
0.33 +175.1% 0.91 snb/thresh=1000M/nfs-100dd-3-3.2.0-rc1
0.14 +600.4% 0.99 snb/thresh=1000M/nfs-10dd-1-3.2.0-rc1
0.15 +441.1% 0.81 snb/thresh=1000M/nfs-10dd-2-3.2.0-rc1
0.18 +410.8% 0.90 snb/thresh=1000M/nfs-10dd-3-3.2.0-rc1
0.34 -28.0% 0.24 snb/thresh=1000M/nfs-1dd-1-3.2.0-rc1
0.32 -34.8% 0.21 snb/thresh=1000M/nfs-1dd-2-3.2.0-rc1
0.38 -36.7% 0.24 snb/thresh=1000M/nfs-1dd-3-3.2.0-rc1
0.42 -22.7% 0.33 snb/thresh=100M/nfs-100dd-1-3.2.0-rc1
0.42 +1.0% 0.42 snb/thresh=100M/nfs-100dd-2-3.2.0-rc1
0.42 +1.1% 0.42 snb/thresh=100M/nfs-100dd-3-3.2.0-rc1
0.16 +7.9% 0.17 snb/thresh=100M/nfs-10dd-1-3.2.0-rc1
0.17 -1.6% 0.16 snb/thresh=100M/nfs-10dd-2-3.2.0-rc1
0.16 +4.4% 0.17 snb/thresh=100M/nfs-10dd-3-3.2.0-rc1
0.15 +5.4% 0.16 snb/thresh=100M/nfs-1dd-1-3.2.0-rc1
0.16 -7.2% 0.15 snb/thresh=100M/nfs-1dd-2-3.2.0-rc1
0.17 -15.6% 0.15 snb/thresh=100M/nfs-1dd-3-3.2.0-rc1
0.13 +1.3% 0.13 snb/thresh=10M/nfs-10dd-1-3.2.0-rc1
0.14 -0.3% 0.14 snb/thresh=10M/nfs-10dd-2-3.2.0-rc1
0.13 +0.0% 0.13 snb/thresh=10M/nfs-10dd-3-3.2.0-rc1
0.11 -16.4% 0.09 snb/thresh=10M/nfs-1dd-1-3.2.0-rc1
0.13 -9.3% 0.12 snb/thresh=10M/nfs-1dd-2-3.2.0-rc1
0.11 -2.2% 0.11 snb/thresh=10M/nfs-1dd-3-3.2.0-rc1
0.28 +210.9% 0.86 snb/thresh=2G/nfs-100dd-1-3.2.0-rc1
0.29 +191.9% 0.85 snb/thresh=2G/nfs-100dd-2-3.2.0-rc1
0.14 +469.1% 0.81 snb/thresh=2G/nfs-100dd-3-3.2.0-rc1
0.13 +325.5% 0.57 snb/thresh=2G/nfs-10dd-1-3.2.0-rc1
0.14 +617.7% 0.98 snb/thresh=2G/nfs-10dd-2-3.2.0-rc1
0.15 +196.6% 0.44 snb/thresh=2G/nfs-10dd-3-3.2.0-rc1
0.38 -23.8% 0.29 snb/thresh=2G/nfs-1dd-1-3.2.0-rc1
0.32 -0.7% 0.32 snb/thresh=2G/nfs-1dd-2-3.2.0-rc1
0.31 -9.2% 0.28 snb/thresh=2G/nfs-1dd-3-3.2.0-rc1
7.62 +89.2% 14.42 TOTAL nfs_write_size
3.2.0-rc1 3.2.0-rc1-ioless-full+
------------------------ ------------------------
90226.82 -99.9% 92.07 snb/thresh=1000M/nfs-100dd-1-3.2.0-rc1
88904.27 -99.9% 80.21 snb/thresh=1000M/nfs-100dd-2-3.2.0-rc1
97436.73 -99.9% 87.32 snb/thresh=1000M/nfs-100dd-3-3.2.0-rc1
62167.19 -99.3% 444.25 snb/thresh=1000M/nfs-10dd-1-3.2.0-rc1
64150.34 -99.2% 539.38 snb/thresh=1000M/nfs-10dd-2-3.2.0-rc1
78675.54 -99.3% 540.27 snb/thresh=1000M/nfs-10dd-3-3.2.0-rc1
5372.84 +57.8% 8477.45 snb/thresh=1000M/nfs-1dd-1-3.2.0-rc1
10245.66 -51.2% 4995.71 snb/thresh=1000M/nfs-1dd-2-3.2.0-rc1
4744.06 +109.1% 9919.55 snb/thresh=1000M/nfs-1dd-3-3.2.0-rc1
1727.29 -9.6% 1562.16 snb/thresh=100M/nfs-100dd-1-3.2.0-rc1
2183.49 +4.4% 2280.21 snb/thresh=100M/nfs-100dd-2-3.2.0-rc1
2201.49 +3.7% 2281.92 snb/thresh=100M/nfs-100dd-3-3.2.0-rc1
6213.73 +19.9% 7448.13 snb/thresh=100M/nfs-10dd-1-3.2.0-rc1
8127.01 +3.2% 8387.06 snb/thresh=100M/nfs-10dd-2-3.2.0-rc1
7255.35 +4.4% 7571.11 snb/thresh=100M/nfs-10dd-3-3.2.0-rc1
1144.67 +20.4% 1378.01 snb/thresh=100M/nfs-1dd-1-3.2.0-rc1
1010.02 +19.0% 1202.22 snb/thresh=100M/nfs-1dd-2-3.2.0-rc1
906.33 +15.8% 1049.76 snb/thresh=100M/nfs-1dd-3-3.2.0-rc1
642.82 +17.3% 753.80 snb/thresh=10M/nfs-10dd-1-3.2.0-rc1
766.82 -21.7% 600.18 snb/thresh=10M/nfs-10dd-2-3.2.0-rc1
575.95 +16.5% 670.85 snb/thresh=10M/nfs-10dd-3-3.2.0-rc1
21.91 +71.0% 37.47 snb/thresh=10M/nfs-1dd-1-3.2.0-rc1
16.70 +105.3% 34.29 snb/thresh=10M/nfs-1dd-2-3.2.0-rc1
19.05 -71.3% 5.47 snb/thresh=10M/nfs-1dd-3-3.2.0-rc1
123877.11 -99.0% 1187.27 snb/thresh=2G/nfs-100dd-1-3.2.0-rc1
122353.65 -98.8% 1505.84 snb/thresh=2G/nfs-100dd-2-3.2.0-rc1
101140.82 -98.4% 1641.03 snb/thresh=2G/nfs-100dd-3-3.2.0-rc1
78248.51 -98.9% 892.00 snb/thresh=2G/nfs-10dd-1-3.2.0-rc1
84589.42 -98.6% 1212.17 snb/thresh=2G/nfs-10dd-2-3.2.0-rc1
89684.95 -99.4% 495.28 snb/thresh=2G/nfs-10dd-3-3.2.0-rc1
10405.39 -6.9% 9684.57 snb/thresh=2G/nfs-1dd-1-3.2.0-rc1
16151.86 -48.5% 8316.69 snb/thresh=2G/nfs-1dd-2-3.2.0-rc1
16119.17 -49.0% 8214.84 snb/thresh=2G/nfs-1dd-3-3.2.0-rc1
1177306.98 -92.1% 93588.50 TOTAL nfs_write_queue_time
3.2.0-rc1 3.2.0-rc1-ioless-full+
------------------------ ------------------------
316.90 -56.7% 137.37 snb/thresh=1000M/nfs-100dd-1-3.2.0-rc1
279.71 -55.0% 125.88 snb/thresh=1000M/nfs-100dd-2-3.2.0-rc1
328.22 -63.4% 120.29 snb/thresh=1000M/nfs-100dd-3-3.2.0-rc1
152.06 -45.7% 82.53 snb/thresh=1000M/nfs-10dd-1-3.2.0-rc1
159.06 -52.7% 75.25 snb/thresh=1000M/nfs-10dd-2-3.2.0-rc1
192.96 -59.0% 79.16 snb/thresh=1000M/nfs-10dd-3-3.2.0-rc1
35.55 -21.2% 28.02 snb/thresh=1000M/nfs-1dd-1-3.2.0-rc1
33.14 -32.9% 22.24 snb/thresh=1000M/nfs-1dd-2-3.2.0-rc1
31.55 -11.1% 28.06 snb/thresh=1000M/nfs-1dd-3-3.2.0-rc1
446.92 +3.2% 461.33 snb/thresh=100M/nfs-100dd-1-3.2.0-rc1
403.82 +12.8% 455.34 snb/thresh=100M/nfs-100dd-2-3.2.0-rc1
410.66 +5.3% 432.30 snb/thresh=100M/nfs-100dd-3-3.2.0-rc1
347.30 +3.8% 360.34 snb/thresh=100M/nfs-10dd-1-3.2.0-rc1
372.89 -7.4% 345.26 snb/thresh=100M/nfs-10dd-2-3.2.0-rc1
352.98 +2.0% 360.11 snb/thresh=100M/nfs-10dd-3-3.2.0-rc1
25.22 +18.9% 29.99 snb/thresh=100M/nfs-1dd-1-3.2.0-rc1
21.90 +7.6% 23.57 snb/thresh=100M/nfs-1dd-2-3.2.0-rc1
23.49 -7.2% 21.79 snb/thresh=100M/nfs-1dd-3-3.2.0-rc1
286.86 +0.4% 288.09 snb/thresh=10M/nfs-10dd-1-3.2.0-rc1
288.38 +5.5% 304.22 snb/thresh=10M/nfs-10dd-2-3.2.0-rc1
299.92 +1.0% 302.95 snb/thresh=10M/nfs-10dd-3-3.2.0-rc1
74.81 -37.8% 46.53 snb/thresh=10M/nfs-1dd-1-3.2.0-rc1
106.27 -23.8% 81.03 snb/thresh=10M/nfs-1dd-2-3.2.0-rc1
76.47 -6.5% 71.48 snb/thresh=10M/nfs-1dd-3-3.2.0-rc1
175.97 -9.8% 158.73 snb/thresh=2G/nfs-100dd-1-3.2.0-rc1
190.32 -11.5% 168.43 snb/thresh=2G/nfs-100dd-2-3.2.0-rc1
81.29 +99.2% 161.97 snb/thresh=2G/nfs-100dd-3-3.2.0-rc1
104.98 -58.2% 43.89 snb/thresh=2G/nfs-10dd-1-3.2.0-rc1
119.66 -42.3% 68.99 snb/thresh=2G/nfs-10dd-2-3.2.0-rc1
126.88 -65.9% 43.32 snb/thresh=2G/nfs-10dd-3-3.2.0-rc1
62.43 -53.5% 29.04 snb/thresh=2G/nfs-1dd-1-3.2.0-rc1
28.99 +8.7% 31.50 snb/thresh=2G/nfs-1dd-2-3.2.0-rc1
19.47 +55.9% 30.37 snb/thresh=2G/nfs-1dd-3-3.2.0-rc1
5977.02 -16.0% 5019.34 TOTAL nfs_write_rtt_time
3.2.0-rc1 3.2.0-rc1-ioless-full+
------------------------ ------------------------
90545.77 -99.7% 229.94 snb/thresh=1000M/nfs-100dd-1-3.2.0-rc1
89185.97 -99.8% 206.68 snb/thresh=1000M/nfs-100dd-2-3.2.0-rc1
97766.88 -99.8% 208.12 snb/thresh=1000M/nfs-100dd-3-3.2.0-rc1
62320.35 -99.2% 528.28 snb/thresh=1000M/nfs-10dd-1-3.2.0-rc1
64310.52 -99.0% 615.99 snb/thresh=1000M/nfs-10dd-2-3.2.0-rc1
78869.77 -99.2% 620.86 snb/thresh=1000M/nfs-10dd-3-3.2.0-rc1
5416.77 +57.1% 8508.91 snb/thresh=1000M/nfs-1dd-1-3.2.0-rc1
10286.87 -51.2% 5021.39 snb/thresh=1000M/nfs-1dd-2-3.2.0-rc1
4784.53 +108.0% 9950.05 snb/thresh=1000M/nfs-1dd-3-3.2.0-rc1
2174.34 -6.9% 2023.56 snb/thresh=100M/nfs-100dd-1-3.2.0-rc1
2587.44 +5.7% 2735.64 snb/thresh=100M/nfs-100dd-2-3.2.0-rc1
2612.29 +3.9% 2714.30 snb/thresh=100M/nfs-100dd-3-3.2.0-rc1
6561.24 +19.0% 7808.62 snb/thresh=100M/nfs-10dd-1-3.2.0-rc1
8500.09 +2.7% 8732.45 snb/thresh=100M/nfs-10dd-2-3.2.0-rc1
7608.54 +4.2% 7931.36 snb/thresh=100M/nfs-10dd-3-3.2.0-rc1
1171.48 +20.3% 1408.94 snb/thresh=100M/nfs-1dd-1-3.2.0-rc1
1033.75 +18.7% 1226.79 snb/thresh=100M/nfs-1dd-2-3.2.0-rc1
931.62 +15.1% 1072.45 snb/thresh=100M/nfs-1dd-3-3.2.0-rc1
929.77 +12.1% 1041.98 snb/thresh=10M/nfs-10dd-1-3.2.0-rc1
1055.30 -14.3% 904.49 snb/thresh=10M/nfs-10dd-2-3.2.0-rc1
875.95 +11.2% 973.88 snb/thresh=10M/nfs-10dd-3-3.2.0-rc1
96.80 -13.1% 84.09 snb/thresh=10M/nfs-1dd-1-3.2.0-rc1
123.09 -6.2% 115.40 snb/thresh=10M/nfs-1dd-2-3.2.0-rc1
95.61 -19.4% 77.06 snb/thresh=10M/nfs-1dd-3-3.2.0-rc1
124054.85 -98.9% 1346.78 snb/thresh=2G/nfs-100dd-1-3.2.0-rc1
122545.76 -98.6% 1674.99 snb/thresh=2G/nfs-100dd-2-3.2.0-rc1
101222.98 -98.2% 1803.76 snb/thresh=2G/nfs-100dd-3-3.2.0-rc1
78354.75 -98.8% 937.22 snb/thresh=2G/nfs-10dd-1-3.2.0-rc1
84710.46 -98.5% 1283.06 snb/thresh=2G/nfs-10dd-2-3.2.0-rc1
89813.36 -99.4% 540.01 snb/thresh=2G/nfs-10dd-3-3.2.0-rc1
10476.93 -7.3% 9717.35 snb/thresh=2G/nfs-1dd-1-3.2.0-rc1
16189.55 -48.4% 8351.81 snb/thresh=2G/nfs-1dd-2-3.2.0-rc1
16146.79 -48.9% 8249.52 snb/thresh=2G/nfs-1dd-3-3.2.0-rc1
1183360.15 -91.7% 98645.74 TOTAL nfs_write_execute_time
3.2.0-rc1 3.2.0-rc1-ioless-full+
------------------------ ------------------------
2262.68 -97.0% 68.18 snb/thresh=1000M/nfs-100dd-1-3.2.0-rc1
1794.04 -99.5% 8.96 snb/thresh=1000M/nfs-100dd-2-3.2.0-rc1
2128.07 -95.2% 101.40 snb/thresh=1000M/nfs-100dd-3-3.2.0-rc1
1614.18 -96.3% 60.53 snb/thresh=1000M/nfs-10dd-1-3.2.0-rc1
1764.72 -94.3% 100.35 snb/thresh=1000M/nfs-10dd-2-3.2.0-rc1
1602.32 -97.7% 37.01 snb/thresh=1000M/nfs-10dd-3-3.2.0-rc1
2000.24 -72.6% 548.77 snb/thresh=1000M/nfs-1dd-1-3.2.0-rc1
2028.37 -69.5% 619.40 snb/thresh=1000M/nfs-1dd-2-3.2.0-rc1
2114.61 -80.3% 417.26 snb/thresh=1000M/nfs-1dd-3-3.2.0-rc1
195.97 -35.4% 126.53 snb/thresh=100M/nfs-100dd-1-3.2.0-rc1
158.77 +9.6% 173.97 snb/thresh=100M/nfs-100dd-2-3.2.0-rc1
140.91 +40.0% 197.30 snb/thresh=100M/nfs-100dd-3-3.2.0-rc1
36.36 -6.1% 34.14 snb/thresh=100M/nfs-10dd-1-3.2.0-rc1
39.42 -9.9% 35.51 snb/thresh=100M/nfs-10dd-2-3.2.0-rc1
38.72 -16.9% 32.19 snb/thresh=100M/nfs-10dd-3-3.2.0-rc1
97.43 -17.2% 80.69 snb/thresh=100M/nfs-1dd-1-3.2.0-rc1
96.45 -20.6% 76.63 snb/thresh=100M/nfs-1dd-2-3.2.0-rc1
102.16 -25.3% 76.35 snb/thresh=100M/nfs-1dd-3-3.2.0-rc1
2.99 -2.0% 2.94 snb/thresh=10M/nfs-10dd-1-3.2.0-rc1
1.61 -0.9% 1.60 snb/thresh=10M/nfs-10dd-2-3.2.0-rc1
1.40 -13.4% 1.21 snb/thresh=10M/nfs-10dd-3-3.2.0-rc1
1.56 -3.3% 1.51 snb/thresh=10M/nfs-1dd-1-3.2.0-rc1
1.65 -0.6% 1.64 snb/thresh=10M/nfs-1dd-2-3.2.0-rc1
1.69 +9.3% 1.85 snb/thresh=10M/nfs-1dd-3-3.2.0-rc1
4122.75 -57.5% 1750.85 snb/thresh=2G/nfs-100dd-1-3.2.0-rc1
3385.95 +25.5% 4249.51 snb/thresh=2G/nfs-100dd-2-3.2.0-rc1
4023.74 +14.6% 4611.05 snb/thresh=2G/nfs-100dd-3-3.2.0-rc1
3242.20 -62.2% 1226.57 snb/thresh=2G/nfs-10dd-1-3.2.0-rc1
2961.05 -74.8% 745.85 snb/thresh=2G/nfs-10dd-2-3.2.0-rc1
3666.47 -71.1% 1059.87 snb/thresh=2G/nfs-10dd-3-3.2.0-rc1
3438.00 -78.1% 753.51 snb/thresh=2G/nfs-1dd-1-3.2.0-rc1
4524.00 -79.2% 940.24 snb/thresh=2G/nfs-1dd-2-3.2.0-rc1
3596.13 -71.4% 1027.63 snb/thresh=2G/nfs-1dd-3-3.2.0-rc1
51186.59 -62.5% 19170.98 TOTAL nfs_commit_queue_time
3.2.0-rc1 3.2.0-rc1-ioless-full+
------------------------ ------------------------
1860.87 -31.5% 1274.40 snb/thresh=1000M/nfs-100dd-1-3.2.0-rc1
1758.72 -26.6% 1291.19 snb/thresh=1000M/nfs-100dd-2-3.2.0-rc1
1906.15 -45.9% 1030.96 snb/thresh=1000M/nfs-100dd-3-3.2.0-rc1
3905.95 +11.3% 4349.24 snb/thresh=1000M/nfs-10dd-1-3.2.0-rc1
3863.32 +2.7% 3968.57 snb/thresh=1000M/nfs-10dd-2-3.2.0-rc1
3698.44 +11.0% 4103.96 snb/thresh=1000M/nfs-10dd-3-3.2.0-rc1
4080.74 +7.1% 4372.00 snb/thresh=1000M/nfs-1dd-1-3.2.0-rc1
3861.75 +21.1% 4674.71 snb/thresh=1000M/nfs-1dd-2-3.2.0-rc1
3844.38 +13.9% 4377.59 snb/thresh=1000M/nfs-1dd-3-3.2.0-rc1
389.74 +16.8% 455.39 snb/thresh=100M/nfs-100dd-1-3.2.0-rc1
348.41 +15.1% 400.95 snb/thresh=100M/nfs-100dd-2-3.2.0-rc1
360.80 +6.6% 384.70 snb/thresh=100M/nfs-100dd-3-3.2.0-rc1
846.22 -0.9% 838.24 snb/thresh=100M/nfs-10dd-1-3.2.0-rc1
878.63 -2.7% 854.63 snb/thresh=100M/nfs-10dd-2-3.2.0-rc1
836.37 +0.5% 840.59 snb/thresh=100M/nfs-10dd-3-3.2.0-rc1
530.48 +0.5% 533.08 snb/thresh=100M/nfs-1dd-1-3.2.0-rc1
521.77 +2.8% 536.20 snb/thresh=100M/nfs-1dd-2-3.2.0-rc1
518.10 +2.9% 532.87 snb/thresh=100M/nfs-1dd-3-3.2.0-rc1
238.28 +3.4% 246.32 snb/thresh=10M/nfs-10dd-1-3.2.0-rc1
238.98 +8.7% 259.86 snb/thresh=10M/nfs-10dd-2-3.2.0-rc1
254.25 -1.2% 251.15 snb/thresh=10M/nfs-10dd-3-3.2.0-rc1
135.25 +1.5% 137.26 snb/thresh=10M/nfs-1dd-1-3.2.0-rc1
139.34 -4.3% 133.34 snb/thresh=10M/nfs-1dd-2-3.2.0-rc1
136.72 +0.6% 137.48 snb/thresh=10M/nfs-1dd-3-3.2.0-rc1
1712.95 -8.8% 1562.92 snb/thresh=2G/nfs-100dd-1-3.2.0-rc1
1521.25 +36.5% 2076.59 snb/thresh=2G/nfs-100dd-2-3.2.0-rc1
1437.56 +34.1% 1927.87 snb/thresh=2G/nfs-100dd-3-3.2.0-rc1
8110.66 -32.3% 5493.17 snb/thresh=2G/nfs-10dd-1-3.2.0-rc1
8286.49 -13.1% 7202.22 snb/thresh=2G/nfs-10dd-2-3.2.0-rc1
6623.11 -16.5% 5532.38 snb/thresh=2G/nfs-10dd-3-3.2.0-rc1
5219.90 +65.6% 8644.65 snb/thresh=2G/nfs-1dd-1-3.2.0-rc1
6789.74 +14.3% 7758.48 snb/thresh=2G/nfs-1dd-2-3.2.0-rc1
6945.82 +23.1% 8552.21 snb/thresh=2G/nfs-1dd-3-3.2.0-rc1
81801.14 +3.6% 84735.19 TOTAL nfs_commit_rtt_time
3.2.0-rc1 3.2.0-rc1-ioless-full+
------------------------ ------------------------
4123.94 -67.4% 1342.77 snb/thresh=1000M/nfs-100dd-1-3.2.0-rc1
3553.22 -63.4% 1300.47 snb/thresh=1000M/nfs-100dd-2-3.2.0-rc1
4034.69 -71.9% 1132.59 snb/thresh=1000M/nfs-100dd-3-3.2.0-rc1
5521.41 -20.1% 4411.29 snb/thresh=1000M/nfs-10dd-1-3.2.0-rc1
5629.16 -27.7% 4070.70 snb/thresh=1000M/nfs-10dd-2-3.2.0-rc1
5302.14 -21.9% 4142.63 snb/thresh=1000M/nfs-10dd-3-3.2.0-rc1
6082.58 -19.1% 4921.35 snb/thresh=1000M/nfs-1dd-1-3.2.0-rc1
5890.41 -10.1% 5294.47 snb/thresh=1000M/nfs-1dd-2-3.2.0-rc1
5960.49 -19.5% 4795.47 snb/thresh=1000M/nfs-1dd-3-3.2.0-rc1
585.80 -0.6% 582.04 snb/thresh=100M/nfs-100dd-1-3.2.0-rc1
507.32 +13.3% 575.00 snb/thresh=100M/nfs-100dd-2-3.2.0-rc1
501.85 +16.0% 582.10 snb/thresh=100M/nfs-100dd-3-3.2.0-rc1
882.89 -1.2% 872.69 snb/thresh=100M/nfs-10dd-1-3.2.0-rc1
918.37 -3.0% 890.44 snb/thresh=100M/nfs-10dd-2-3.2.0-rc1
875.43 -0.3% 873.07 snb/thresh=100M/nfs-10dd-3-3.2.0-rc1
628.33 -2.3% 613.90 snb/thresh=100M/nfs-1dd-1-3.2.0-rc1
618.86 -1.0% 612.97 snb/thresh=100M/nfs-1dd-2-3.2.0-rc1
620.76 -1.8% 609.36 snb/thresh=100M/nfs-1dd-3-3.2.0-rc1
241.39 +3.3% 249.35 snb/thresh=10M/nfs-10dd-1-3.2.0-rc1
240.69 +8.7% 261.56 snb/thresh=10M/nfs-10dd-2-3.2.0-rc1
255.75 -1.3% 252.45 snb/thresh=10M/nfs-10dd-3-3.2.0-rc1
136.85 +1.4% 138.79 snb/thresh=10M/nfs-1dd-1-3.2.0-rc1
141.03 -4.3% 135.01 snb/thresh=10M/nfs-1dd-2-3.2.0-rc1
138.44 +0.7% 139.36 snb/thresh=10M/nfs-1dd-3-3.2.0-rc1
5836.33 -43.2% 3314.23 snb/thresh=2G/nfs-100dd-1-3.2.0-rc1
4907.99 +28.9% 6326.79 snb/thresh=2G/nfs-100dd-2-3.2.0-rc1
5462.84 +19.7% 6539.49 snb/thresh=2G/nfs-100dd-3-3.2.0-rc1
11355.59 -40.8% 6723.06 snb/thresh=2G/nfs-10dd-1-3.2.0-rc1
11250.64 -29.3% 7950.03 snb/thresh=2G/nfs-10dd-2-3.2.0-rc1
10293.16 -35.9% 6594.40 snb/thresh=2G/nfs-10dd-3-3.2.0-rc1
8658.85 +8.5% 9398.44 snb/thresh=2G/nfs-1dd-1-3.2.0-rc1
11314.40 -23.1% 8699.43 snb/thresh=2G/nfs-1dd-2-3.2.0-rc1
10543.72 -9.1% 9580.37 snb/thresh=2G/nfs-1dd-3-3.2.0-rc1
133015.32 -21.9% 103926.05 TOTAL nfs_commit_execute_time
[-- Attachment #2: dstat-nfss-bw.png --]
[-- Type: image/png, Size: 58730 bytes --]
[-- Attachment #3: dstat-nfss-bw.png --]
[-- Type: image/png, Size: 65720 bytes --]