* Reduce latencies for synchronous writes and high I/O priority requests in deadline IO scheduler
From: Corrado Zoccolo @ 2009-04-22 21:07 UTC
  To: jens.axboe, Linux-Kernel


Hi,
the deadline I/O scheduler currently classifies all I/O requests into
only 2 classes: reads (always considered high priority) and writes
(always lower).
The attached patch, intended to reduce latencies for synchronous writes
and high I/O priority requests, introduces more levels of priority:
* real-time reads: highest priority and shortest deadline; can starve
other levels
* synchronous operations (either best-effort reads or RT/BE writes):
mid priority; starvation of the lower level is prevented as usual
* asynchronous operations (async writes and all IDLE class requests):
lowest priority and longest deadline (the mapping is sketched below)
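In code, the classification is equivalent to the following expanded form
of deadline_compute_request_priority() from the attached patch (rewritten
here with explicit branches for readability, but computing the same value):

	/* 0 = async (or IDLE), 1 = sync (BE reads, sync writes), 2 = RT reads */
	static int deadline_compute_request_priority(struct request *req)
	{
		unsigned short class = IOPRIO_PRIO_CLASS(req_get_ioprio(req));
		int level;

		if (class == IOPRIO_CLASS_IDLE)
			return 0;		/* IDLE always gets the longest deadline */

		level = rq_is_sync(req) ? 1 : 0;	/* sync beats async */
		if (rq_data_dir(req) == READ && class == IOPRIO_CLASS_RT)
			level++;		/* RT reads get the shortest deadline */
		return level;
	}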

The patch also introduces some new heuristics:
* for non-rotational devices, reads (within a given priority level)
are issued in FIFO order, to improve the latency perceived by readers
* minimum batch timespan (time quantum): partners with fifo_batch to
improve throughput by sending more consecutive requests together. A
given number of requests will not always take the same time to service
(due to the amount of seeking needed), so fifo_batch must be tuned for
the worst case, while in the best case longer batches would give a
throughput boost.
* the batch start request is chosen fifo_batch/3 requests before the
expired one, to improve fairness for requests with lower start sectors,
which otherwise have a higher probability of missing a deadline than
mid-sector requests (see the sketch after this list).
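Both heuristics can be seen in deadline_dispatch_requests() in the
attached patch; in outline (code reformatted from the patch for
readability):

	/* continue the current batch while fewer than fifo_batch requests
	 * have been dispatched OR the batch has run for less than
	 * time_quantum jiffies; a new batch starts only when both limits
	 * are exceeded */
	if (rq && (dd->batching < dd->fifo_batch ||
		   jiffies - dd->cur_batch_start < dd->time_quantum))
		goto dispatch_request;

and, when a new batch must start from an expired request:

	/* back up at most fifo_batch/3 positions in sector order from the
	 * expired request, so low-sector requests are picked up before
	 * they expire too */
	struct request *nrq = rq_entry_fifo(dd->fifo_list[request_prio].next);
	int batch = dd->fifo_batch / 3;

	rq = nrq;
	while (batch-- && (nrq = deadline_prev_request(nrq)))
		if (request_prio <= deadline_compute_request_priority(nrq))
			rq = nrq;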

I did a few performance comparisons:
* HDD, ext3 partition with data=writeback, tiotest with 32 threads,
each writing 80MB of data

** deadline-original
Tiotest results for 32 concurrent io threads:
,----------------------------------------------------------------------.
| Item                  | Time     | Rate         | Usr CPU  | Sys CPU |
+-----------------------+----------+--------------+----------+---------+
| Write        2560 MBs |  103.0 s |  24.848 MB/s |  10.6 %  | 522.2 % |
| Random Write  125 MBs |   98.8 s |   1.265 MB/s |  -1.6 %  |  16.1 % |
| Read         2560 MBs |  166.2 s |  15.400 MB/s |   4.2 %  |  82.7 % |
| Random Read   125 MBs |  193.3 s |   0.647 MB/s |  -0.8 %  |  14.5 % |
`----------------------------------------------------------------------'
Tiotest latency results:
,-------------------------------------------------------------------------.
| Item         | Average latency | Maximum latency | % >2 sec | % >10 sec |
+--------------+-----------------+-----------------+----------+-----------+
| Write        |        4.122 ms |    17922.920 ms |  0.07980 |   0.00061 |
| Random Write |        0.599 ms |     1245.200 ms |  0.00000 |   0.00000 |
| Read         |        8.032 ms |     1125.759 ms |  0.00000 |   0.00000 |
| Random Read  |      181.968 ms |      972.657 ms |  0.00000 |   0.00000 |
|--------------+-----------------+-----------------+----------+-----------|
| Total        |       10.044 ms |    17922.920 ms |  0.03804 |   0.00029 |
`--------------+-----------------+-----------------+----------+-----------'

** cfq (2.6.30-rc2)
Tiotest results for 32 concurrent io threads:
,----------------------------------------------------------------------.
| Item                  | Time     | Rate         | Usr CPU  | Sys CPU |
+-----------------------+----------+--------------+----------+---------+
| Write        2560 MBs |  132.4 s |  19.342 MB/s |   8.5 %  | 400.4 % |
| Random Write  125 MBs |  107.8 s |   1.159 MB/s |  -1.6 %  |  16.8 % |
| Read         2560 MBs |  107.6 s |  23.788 MB/s |   5.4 %  |  95.7 % |
| Random Read   125 MBs |  158.4 s |   0.789 MB/s |   0.9 %  |   7.7 % |
`----------------------------------------------------------------------'
Tiotest latency results:
,-------------------------------------------------------------------------.
| Item         | Average latency | Maximum latency | % >2 sec | % >10 sec |
+--------------+-----------------+-----------------+----------+-----------+
| Write        |        5.362 ms |    21081.012 ms |  0.09811 |   0.00244 |
| Random Write |       23.310 ms |    31865.095 ms |  0.13437 |   0.06250 |
| Read         |        5.048 ms |     3694.001 ms |  0.15167 |   0.00000 |
| Random Read  |      146.523 ms |     2880.409 ms |  0.52187 |   0.00000 |
|--------------+-----------------+-----------------+----------+-----------|
| Total        |        8.916 ms |    31865.095 ms |  0.13435 |   0.00262 |
`--------------+-----------------+-----------------+----------+-----------'

** deadline-patched
Tiotest results for 32 concurrent io threads:
,----------------------------------------------------------------------.
| Item                  | Time     | Rate         | Usr CPU  | Sys CPU |
+-----------------------+----------+--------------+----------+---------+
| Write        2560 MBs |  105.3 s |  24.301 MB/s |  10.5 %  | 514.8 % |
| Random Write  125 MBs |   95.9 s |   1.304 MB/s |  -1.8 %  |  17.3 % |
| Read         2560 MBs |  165.1 s |  15.507 MB/s |   2.7 %  |  61.9 % |
| Random Read   125 MBs |  110.6 s |   1.130 MB/s |   0.8 %  |  12.2 % |
`----------------------------------------------------------------------'
Tiotest latency results:
,-------------------------------------------------------------------------.
| Item         | Average latency | Maximum latency | % >2 sec | % >10 sec |
+--------------+-----------------+-----------------+----------+-----------+
| Write        |        4.131 ms |    17456.831 ms |  0.08041 |   0.00275 |
| Random Write |        2.780 ms |     5073.180 ms |  0.07500 |   0.00000 |
| Read         |        7.748 ms |      936.499 ms |  0.00000 |   0.00000 |
| Random Read  |      104.849 ms |      695.192 ms |  0.00000 |   0.00000 |
|--------------+-----------------+-----------------+----------+-----------|
| Total        |        8.168 ms |    17456.831 ms |  0.04008 |   0.00131 |
`--------------+-----------------+-----------------+----------+-----------'

* SD card, nilfs2 partition, tiotest with 16 threads, each writing 80MB of data
** cfq(2.6.30-rc2)
Tiotest results for 16 concurrent io threads:
,----------------------------------------------------------------------.
| Item                  | Time     | Rate         | Usr CPU  | Sys CPU |
+-----------------------+----------+--------------+----------+---------+
| Write        1280 MBs |  217.8 s |   5.878 MB/s |   3.7 %  |  92.2 % |
| Random Write   62 MBs |   18.2 s |   3.432 MB/s |  -2.3 %  |  28.7 % |
| Read         1280 MBs |  114.7 s |  11.156 MB/s |   7.3 %  |  76.6 % |
| Random Read    62 MBs |    3.4 s |  18.615 MB/s |  -5.4 %  | 274.2 % |
`----------------------------------------------------------------------'
Tiotest latency results:
,-------------------------------------------------------------------------.
| Item         | Average latency | Maximum latency | % >2 sec | % >10 sec |
+--------------+-----------------+-----------------+----------+-----------+
| Write        |        9.943 ms |    10223.581 ms |  0.14252 |   0.00488 |
| Random Write |       12.287 ms |     5097.196 ms |  0.25625 |   0.00000 |
| Read         |        5.352 ms |     1550.162 ms |  0.00000 |   0.00000 |
| Random Read  |        3.051 ms |     1507.837 ms |  0.00000 |   0.00000 |
|--------------+-----------------+-----------------+----------+-----------|
| Total        |        7.649 ms |    10223.581 ms |  0.07391 |   0.00233 |
`--------------+-----------------+-----------------+----------+-----------'

** deadline-patched:
Tiotest results for 16 concurrent io threads:
,----------------------------------------------------------------------.
| Item                  | Time     | Rate         | Usr CPU  | Sys CPU |
+-----------------------+----------+--------------+----------+---------+
| Write        1280 MBs |  220.9 s |   5.794 MB/s |   4.0 %  |  93.9 % |
| Random Write   62 MBs |   20.5 s |   3.044 MB/s |  -2.2 %  |  24.9 % |
| Read         1280 MBs |  113.2 s |  11.304 MB/s |   6.8 %  |  72.8 % |
| Random Read    62 MBs |    2.9 s |  21.896 MB/s |   5.1 %  | 293.8 % |
`----------------------------------------------------------------------'
Tiotest latency results:
,-------------------------------------------------------------------------.
| Item         | Average latency | Maximum latency | % >2 sec | % >10 sec |
+--------------+-----------------+-----------------+----------+-----------+
| Write        |       10.078 ms |    13303.036 ms |  0.14160 |   0.00031 |
| Random Write |       14.350 ms |     5265.088 ms |  0.40000 |   0.00000 |
| Read         |        5.455 ms |      434.495 ms |  0.00000 |   0.00000 |
| Random Read  |        2.685 ms |       12.652 ms |  0.00000 |   0.00000 |
|--------------+-----------------+-----------------+----------+-----------|
| Total        |        7.801 ms |    13303.036 ms |  0.07682 |   0.00015 |
`--------------+-----------------+-----------------+----------+-----------'

* fsync-tester results, on HDD, empty ext3 partition, mounted with
data=writeback
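(For reference: fsync-tester repeatedly writes a buffer to a file, calls
fsync(), and reports the elapsed wall-clock time while other I/O keeps
the disk busy. A minimal sketch of that timing loop follows; the 1 MB
buffer size, file name, and pacing are illustrative assumptions, not
taken from the actual tool.)

	#include <fcntl.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>
	#include <sys/time.h>
	#include <unistd.h>

	int main(void)
	{
		size_t sz = 1 << 20;		/* assumed 1 MB per iteration */
		char *buf = malloc(sz);
		int fd = open("testfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);
		struct timeval t0, t1;

		if (!buf || fd < 0)
			return 1;
		memset(buf, 'a', sz);
		for (;;) {
			gettimeofday(&t0, NULL);
			if (write(fd, buf, sz) != (ssize_t)sz)	/* buffered write */
				return 1;
			fsync(fd);		/* force the data to disk */
			gettimeofday(&t1, NULL);
			printf("fsync time: %.4f\n",
			       (t1.tv_sec - t0.tv_sec) +
			       (t1.tv_usec - t0.tv_usec) / 1e6);
			sleep(1);		/* pace the iterations */
		}
	}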
** deadline-original:
fsync time: 0.7963
fsync time: 4.5914
fsync time: 4.2347
fsync time: 1.1670
fsync time: 0.8164
fsync time: 1.9783
fsync time: 4.9726
fsync time: 2.4929
fsync time: 2.5448
fsync time: 3.9627
** cfq 2.6.30-rc2
fsync time: 0.0288
fsync time: 0.0528
fsync time: 0.0299
fsync time: 0.0397
fsync time: 0.5720
fsync time: 0.0409
fsync time: 0.0876
fsync time: 0.0294
fsync time: 0.0485
** deadline-patched
fsync time: 0.0772
fsync time: 0.0381
fsync time: 0.0604
fsync time: 0.2923
fsync time: 0.2488
fsync time: 0.0924
fsync time: 0.0144
fsync time: 1.4824
fsync time: 0.0789
fsync time: 0.0565
fsync time: 0.0550
fsync time: 0.0421
** deadline-patched, ionice -c1 (real-time I/O class):
fsync time: 0.2569
fsync time: 0.0500
fsync time: 0.0681
fsync time: 0.2863
fsync time: 0.0140
fsync time: 0.0171
fsync time: 0.1198
fsync time: 0.0530
fsync time: 0.0503
fsync time: 0.0462
fsync time: 0.0484
fsync time: 0.0328
fsync time: 0.0562
fsync time: 0.0451
fsync time: 0.0576
fsync time: 0.0444
fsync time: 0.0469
fsync time: 0.0368
fsync time: 0.2865

Corrado

-- 
__________________________________________________________________________

dott. Corrado Zoccolo                          mailto:czoccolo@gmail.com
PhD - Department of Computer Science - University of Pisa, Italy
--------------------------------------------------------------------------

[-- Attachment #2: deadline-iosched.c.patch --]
[-- Type: application/octet-stream, Size: 17479 bytes --]

diff --git a/block/deadline-iosched.c b/block/deadline-iosched.c
index c4d991d..5222b61 100644
--- a/block/deadline-iosched.c
+++ b/block/deadline-iosched.c
@@ -17,11 +17,12 @@
 /*
  * See Documentation/block/deadline-iosched.txt
  */
-static const int read_expire = HZ / 2;  /* max time before a read is submitted. */
-static const int write_expire = 5 * HZ; /* ditto for writes, these limits are SOFT! */
-static const int writes_starved = 2;    /* max times reads can starve a write */
-static const int fifo_batch = 16;       /* # of sequential requests treated as one
-				     by the above parameters. For throughput. */
+static const int rt_sync_expire = HZ / 8;  /* max time before a real-time sync operation (e.g. read for RT class process) is submitted. */
+static const int sync_expire = HZ / 2;     /* max time before a sync operation (e.g. read) is submitted. */
+static const int async_expire = 5 * HZ;    /* ditto for async operations (e.g. writes), these limits are SOFT! */
+static const int async_starved = 2;        /* max times SYNC can starve ASYNC requests */
+static const int fifo_batch = 16;          /* min # of sequential requests treated as one by the above parameters. For throughput. */
+static const int time_quantum  = HZ / 10;  /* min duration for a batch */
 
 struct deadline_data {
 	/*
@@ -31,27 +32,33 @@ struct deadline_data {
 	/*
 	 * requests (deadline_rq s) are present on both sort_list and fifo_list
 	 */
-	struct rb_root sort_list[2];	
-	struct list_head fifo_list[2];
+	struct rb_root sort_list[2]; /* READ, WRITE */
+	struct list_head fifo_list[3]; /* 0=ASYNC (or IDLE), 1=SYNC (or RT ASYNC), 2=RT SYNC */
 
 	/*
-	 * next in sort order. read, write or both are NULL
+	 * next in sort order.
 	 */
-	struct request *next_rq[2];
+	struct request *next_rq;
 	unsigned int batching;		/* number of sequential requests made */
-	sector_t last_sector;		/* head position */
 	unsigned int starved;		/* times reads have starved writes */
 
 	/*
 	 * settings that change how the i/o scheduler behaves
 	 */
-	int fifo_expire[2];
+	int fifo_expire[3];
+	int time_quantum;
 	int fifo_batch;
-	int writes_starved;
+	int async_starved;
 	int front_merges;
+
+	/*
+	  current batch data & stats
+	 */
+	int cur_batch_prio;
+	unsigned long cur_batch_start;
 };
 
-static void deadline_move_request(struct deadline_data *, struct request *);
+static void deadline_move_request(struct deadline_data *, struct request *, int nonrot);
 
 static inline struct rb_root *
 deadline_rb_root(struct deadline_data *dd, struct request *rq)
@@ -63,7 +70,7 @@ deadline_rb_root(struct deadline_data *dd, struct request *rq)
  * get the request after `rq' in sector-sorted order
  */
 static inline struct request *
-deadline_latter_request(struct request *rq)
+deadline_next_request(struct request *rq)
 {
 	struct rb_node *node = rb_next(&rq->rb_node);
 
@@ -73,27 +80,118 @@ deadline_latter_request(struct request *rq)
 	return NULL;
 }
 
+/*
+ * get the request before `rq' in sector-sorted order
+ */
+static inline struct request *
+deadline_prev_request(struct request *rq)
+{
+	struct rb_node *node = rb_prev(&rq->rb_node);
+
+	if (node)
+		return rb_entry_rq(node);
+
+	return NULL;
+}
+
 static void
-deadline_add_rq_rb(struct deadline_data *dd, struct request *rq)
+deadline_add_rq_rb(struct deadline_data *dd, struct request *rq, int nonrot)
 {
 	struct rb_root *root = deadline_rb_root(dd, rq);
 	struct request *__alias;
 
 	while (unlikely(__alias = elv_rb_add(root, rq)))
-		deadline_move_request(dd, __alias);
+		deadline_move_request(dd, __alias, nonrot);
 }
 
 static inline void
 deadline_del_rq_rb(struct deadline_data *dd, struct request *rq)
 {
-	const int data_dir = rq_data_dir(rq);
-
-	if (dd->next_rq[data_dir] == rq)
-		dd->next_rq[data_dir] = deadline_latter_request(rq);
+	if (dd->next_rq == rq)
+		dd->next_rq = deadline_next_request(rq);
 
 	elv_rb_del(deadline_rb_root(dd, rq), rq);
 }
 
+static void
+list_add_timesorted(struct list_head *q, struct request *rq)
+{
+	struct list_head *entry;
+	int stop_flags = REQ_SOFTBARRIER | REQ_HARDBARRIER | REQ_STARTED;
+	list_for_each_prev(entry, q) {
+		struct request *pos = list_entry_rq(entry);
+		if (pos->cmd_flags & stop_flags)
+			break;
+		if (rq_fifo_time(rq) > rq_fifo_time(pos))
+			break;
+		if (rq_fifo_time(rq) == rq_fifo_time(pos) &&
+		    rq->sector >= pos->sector)
+			break;
+	}
+	list_add(&rq->queuelist, entry);
+}
+
+static int ioprio_lub(unsigned short aprio, unsigned short bprio)
+{
+	unsigned short aclass = IOPRIO_PRIO_CLASS(aprio);
+	unsigned short bclass = IOPRIO_PRIO_CLASS(bprio);
+
+	if (aclass == IOPRIO_CLASS_NONE)
+		return bprio;
+	if (bclass == IOPRIO_CLASS_NONE)
+		return aprio;
+
+	if (aclass == bclass)
+		return min(aprio, bprio);
+	if (aclass > bclass)
+		return bprio;
+	else
+		return aprio;
+}
+
+static void
+deadline_merge_prio_data(struct request_queue *q, struct request *rq)
+{
+	struct task_struct *tsk = current;
+	struct io_context *ioc = get_io_context(GFP_ATOMIC,q->node);
+	int ioprio_class = IOPRIO_CLASS_NONE;
+	int ioprio = IOPRIO_NORM;
+
+	if(ioc) {
+		ioprio_class = task_ioprio_class(ioc);
+	}
+
+	switch (ioprio_class) {
+	default:
+		printk(KERN_ERR "deadline: bad prio %x\n", ioprio_class);
+	case IOPRIO_CLASS_NONE:
+		/*
+		 * no prio set, inherit CPU scheduling settings
+		 */
+		ioprio = task_nice_ioprio(tsk);
+		ioprio_class = task_nice_ioclass(tsk);
+		break;
+	case IOPRIO_CLASS_RT:
+	case IOPRIO_CLASS_BE:
+		ioprio = task_ioprio(ioc);
+		break;
+	case IOPRIO_CLASS_IDLE:
+		ioprio = 7;
+		break;
+	}
+
+	ioprio=IOPRIO_PRIO_VALUE(ioprio_class,ioprio);
+	rq->ioprio=ioprio_lub(rq->ioprio,ioprio);
+}
+
+static int
+deadline_compute_request_priority(struct request *req)
+{
+	unsigned short ioprio_class=IOPRIO_PRIO_CLASS(req_get_ioprio(req));
+	return (ioprio_class!=IOPRIO_CLASS_IDLE)*
+		(!!rq_is_sync(req) + (rq_data_dir(req)==READ)*(ioprio_class==IOPRIO_CLASS_RT));
+}
+
 /*
  * add rq to rbtree and fifo
  */
@@ -101,15 +199,17 @@ static void
 deadline_add_request(struct request_queue *q, struct request *rq)
 {
 	struct deadline_data *dd = q->elevator->elevator_data;
-	const int data_dir = rq_data_dir(rq);
 
-	deadline_add_rq_rb(dd, rq);
+	deadline_merge_prio_data(q,rq);
+	deadline_add_rq_rb(dd, rq, blk_queue_nonrot(q));
 
 	/*
-	 * set expire time and add to fifo list
+	 * set request creation time and add to fifo list
 	 */
-	rq_set_fifo_time(rq, jiffies + dd->fifo_expire[data_dir]);
-	list_add_tail(&rq->queuelist, &dd->fifo_list[data_dir]);
+
+	rq_set_fifo_time(rq, jiffies);
+	
+	list_add_timesorted(&dd->fifo_list[deadline_compute_request_priority(rq)],rq);
 }
 
 /*
@@ -157,14 +257,16 @@ static void deadline_merged_request(struct request_queue *q,
 				    struct request *req, int type)
 {
 	struct deadline_data *dd = q->elevator->elevator_data;
-
 	/*
 	 * if the merge was a front merge, we need to reposition request
 	 */
 	if (type == ELEVATOR_FRONT_MERGE) {
 		elv_rb_del(deadline_rb_root(dd, req), req);
-		deadline_add_rq_rb(dd, req);
+		deadline_add_rq_rb(dd, req, blk_queue_nonrot(q));
 	}
+
+	deadline_merge_prio_data(q,req);
+
 }
 
 static void
@@ -172,7 +274,7 @@ deadline_merged_requests(struct request_queue *q, struct request *req,
 			 struct request *next)
 {
 	/*
-	 * if next expires before rq, assign its expire time to rq
+	 * request that cannot idle. if next expires before rq, assign its expire time to rq
 	 * and move into next position (next will be deleted) in fifo
 	 */
 	if (!list_empty(&req->queuelist) && !list_empty(&next->queuelist)) {
@@ -204,15 +306,33 @@ deadline_move_to_dispatch(struct deadline_data *dd, struct request *rq)
  * move an entry to dispatch queue
  */
 static void
-deadline_move_request(struct deadline_data *dd, struct request *rq)
+deadline_move_request(struct deadline_data *dd, struct request *rq, int nonrot)
 {
 	const int data_dir = rq_data_dir(rq);
+	dd->next_rq = NULL;
+
+	if(data_dir != READ || !nonrot) {
+		int max_search = dd->fifo_batch;
+		/* for rot devices, or writes on non-rot, requests are dispatched in disk order */
+		dd->next_rq = rq;
+		/* try to get requests of at least the same priority as current one */
+		while(max_search-- && (dd->next_rq = deadline_next_request(dd->next_rq)) && dd->cur_batch_prio>deadline_compute_request_priority(dd->next_rq));
+		if(!max_search || !dd->next_rq) { // did not get a next of the same priority, demote batch to lower, and continue in disk order
+			dd->next_rq = deadline_next_request(rq);
+			if(dd->next_rq) dd->cur_batch_prio = deadline_compute_request_priority(dd->next_rq);
+		}
 
-	dd->next_rq[READ] = NULL;
-	dd->next_rq[WRITE] = NULL;
-	dd->next_rq[data_dir] = deadline_latter_request(rq);
-
-	dd->last_sector = rq_end_sector(rq);
+	} else { /* nonrot && data_dir==READ : requests are dispatched in deadline order */
+		struct list_head *entry;
+		list_for_each(entry, &dd->fifo_list[dd->cur_batch_prio]) {
+			struct request *pos = list_entry_rq(entry);
+			if(pos==rq) continue;
+			if(rq_data_dir(pos)==data_dir) { /* find same direction (always READ) */
+				dd->next_rq = pos;
+				break;
+			}
+		}
+	}
 
 	/*
 	 * take it off the sort and fifo list, move
@@ -222,20 +342,16 @@ deadline_move_request(struct deadline_data *dd, struct request *rq)
 }
 
 /*
- * deadline_check_fifo returns 0 if there are no expired requests on the fifo,
- * 1 otherwise. Requires !list_empty(&dd->fifo_list[data_dir])
+ * deadline_check_fifo returns 0 if there are no expired requests on the fifo for given priority,
+ * 1 otherwise. Requires !list_empty(&dd->fifo_list[prio])
  */
-static inline int deadline_check_fifo(struct deadline_data *dd, int ddir)
+static inline int deadline_check_request(struct deadline_data *dd, unsigned prio) 
 {
-	struct request *rq = rq_entry_fifo(dd->fifo_list[ddir].next);
-
+	BUG_ON(list_empty(&dd->fifo_list[prio]));
 	/*
-	 * rq is expired!
+	 * deadline is expired!
 	 */
-	if (time_after(jiffies, rq_fifo_time(rq)))
-		return 1;
-
-	return 0;
+	return time_after(jiffies, dd->fifo_expire[prio] + rq_fifo_time(rq_entry_fifo(dd->fifo_list[prio].next)));
 }
 
 /*
@@ -245,36 +361,31 @@ static inline int deadline_check_fifo(struct deadline_data *dd, int ddir)
 static int deadline_dispatch_requests(struct request_queue *q, int force)
 {
 	struct deadline_data *dd = q->elevator->elevator_data;
-	const int reads = !list_empty(&dd->fifo_list[READ]);
-	const int writes = !list_empty(&dd->fifo_list[WRITE]);
-	struct request *rq;
-	int data_dir;
+	int rt_ = !list_empty(&dd->fifo_list[2]);
+	int sync_ = !list_empty(&dd->fifo_list[1]);
+	int async_ = !list_empty(&dd->fifo_list[0]);
+	struct request *rq = dd->next_rq;
+	int request_prio = dd->cur_batch_prio;
 
-	/*
-	 * batches are currently reads XOR writes
-	 */
-	if (dd->next_rq[WRITE])
-		rq = dd->next_rq[WRITE];
-	else
-		rq = dd->next_rq[READ];
-
-	if (rq && dd->batching < dd->fifo_batch)
+	if (rq && (dd->batching < dd->fifo_batch || jiffies-dd->cur_batch_start < dd->time_quantum)) {
 		/* we have a next request are still entitled to batch */
 		goto dispatch_request;
+	}
 
 	/*
 	 * at this point we are not running a batch. select the appropriate
 	 * data direction (read / write)
 	 */
 
-	if (reads) {
-		BUG_ON(RB_EMPTY_ROOT(&dd->sort_list[READ]));
-
-		if (writes && (dd->starved++ >= dd->writes_starved))
-			goto dispatch_writes;
-
-		data_dir = READ;
+	if (rt_) {
+		request_prio = 2;
+		goto dispatch_find_request;
+	}
 
+	if (sync_) {
+		if (async_ && (dd->starved++ >= dd->async_starved))
+			goto dispatch_async;
+		request_prio = 1;
 		goto dispatch_find_request;
 	}
 
@@ -282,37 +393,44 @@ static int deadline_dispatch_requests(struct request_queue *q, int force)
 	 * there are either no reads or writes have been starved
 	 */
 
-	if (writes) {
-dispatch_writes:
-		BUG_ON(RB_EMPTY_ROOT(&dd->sort_list[WRITE]));
-
+	if (async_) {
+dispatch_async:
 		dd->starved = 0;
-
-		data_dir = WRITE;
-
+		request_prio = 0;
 		goto dispatch_find_request;
 	}
 
+	dd->cur_batch_start=jiffies;
+	dd->batching = 0;
 	return 0;
 
 dispatch_find_request:
+
 	/*
-	 * we are not running a batch, find best request for selected data_dir
+	 * we are not running a batch, find best request for selected request_prio
 	 */
-	if (deadline_check_fifo(dd, data_dir) || !dd->next_rq[data_dir]) {
+	if (!dd->next_rq ||
+	    dd->cur_batch_prio < request_prio ||
+	    deadline_check_request(dd, request_prio)) {
 		/*
-		 * A deadline has expired, the last request was in the other
-		 * direction, or we have run out of higher-sectored requests.
-		 * Start again from the request with the earliest expiry time.
+		 * A deadline has expired, the previous batch had a lower priority,
+		 * or we have run out of higher-sectored requests.
+		 * Start again (a bit before) the request with the earliest expiry time.
 		 */
-		rq = rq_entry_fifo(dd->fifo_list[data_dir].next);
+		struct request * nrq = rq_entry_fifo(dd->fifo_list[request_prio].next);
+		int batch = dd->fifo_batch/3;
+		rq=nrq;
+		while(batch-- && (nrq = deadline_prev_request(nrq)))
+			if(request_prio<=deadline_compute_request_priority(nrq)) rq = nrq;
 	} else {
 		/*
-		 * The last req was the same dir and we have a next request in
+		 * The last batch was same or higher priority and we have a next request in
 		 * sort order. No expired requests so continue on from here.
 		 */
-		rq = dd->next_rq[data_dir];
+		rq = dd->next_rq;
 	}
+	dd->cur_batch_prio = request_prio;
+	dd->cur_batch_start = jiffies;
 
 	dd->batching = 0;
 
@@ -320,26 +438,29 @@ dispatch_request:
 	/*
 	 * rq is the selected appropriate request.
 	 */
-	dd->batching++;
-	deadline_move_request(dd, rq);
 
+	dd->batching++;
+	deadline_move_request(dd, rq, blk_queue_nonrot(q));
 	return 1;
 }
 
 static int deadline_queue_empty(struct request_queue *q)
 {
 	struct deadline_data *dd = q->elevator->elevator_data;
-
-	return list_empty(&dd->fifo_list[WRITE])
-		&& list_empty(&dd->fifo_list[READ]);
+	return list_empty(&dd->fifo_list[0])
+		&& list_empty(&dd->fifo_list[1])
+		&& list_empty(&dd->fifo_list[2]);
 }
 
+
+
 static void deadline_exit_queue(struct elevator_queue *e)
 {
 	struct deadline_data *dd = e->elevator_data;
 
-	BUG_ON(!list_empty(&dd->fifo_list[READ]));
-	BUG_ON(!list_empty(&dd->fifo_list[WRITE]));
+	BUG_ON(!list_empty(&dd->fifo_list[0]));
+	BUG_ON(!list_empty(&dd->fifo_list[1]));
+	BUG_ON(!list_empty(&dd->fifo_list[2]));
 
 	kfree(dd);
 }
@@ -355,13 +476,16 @@ static void *deadline_init_queue(struct request_queue *q)
 	if (!dd)
 		return NULL;
 
-	INIT_LIST_HEAD(&dd->fifo_list[READ]);
-	INIT_LIST_HEAD(&dd->fifo_list[WRITE]);
+	INIT_LIST_HEAD(&dd->fifo_list[0]);
+	INIT_LIST_HEAD(&dd->fifo_list[1]);
+	INIT_LIST_HEAD(&dd->fifo_list[2]);
 	dd->sort_list[READ] = RB_ROOT;
 	dd->sort_list[WRITE] = RB_ROOT;
-	dd->fifo_expire[READ] = read_expire;
-	dd->fifo_expire[WRITE] = write_expire;
-	dd->writes_starved = writes_starved;
+	dd->fifo_expire[0] = async_expire;
+	dd->fifo_expire[1] = sync_expire;
+	dd->fifo_expire[2] = rt_sync_expire;
+	dd->time_quantum = time_quantum;
+	dd->async_starved = async_starved;
 	dd->front_merges = 1;
 	dd->fifo_batch = fifo_batch;
 	return dd;
@@ -395,9 +519,12 @@ static ssize_t __FUNC(struct elevator_queue *e, char *page)		\
 		__data = jiffies_to_msecs(__data);			\
 	return deadline_var_show(__data, (page));			\
 }
-SHOW_FUNCTION(deadline_read_expire_show, dd->fifo_expire[READ], 1);
-SHOW_FUNCTION(deadline_write_expire_show, dd->fifo_expire[WRITE], 1);
-SHOW_FUNCTION(deadline_writes_starved_show, dd->writes_starved, 0);
+SHOW_FUNCTION(deadline_async_expire_show, dd->fifo_expire[0], 1);
+SHOW_FUNCTION(deadline_sync_expire_show, dd->fifo_expire[1], 1);
+SHOW_FUNCTION(deadline_rt_sync_expire_show, dd->fifo_expire[2], 1);
+SHOW_FUNCTION(deadline_time_quantum_show, dd->time_quantum, 1);
+
+SHOW_FUNCTION(deadline_async_starved_show, dd->async_starved, 0);
 SHOW_FUNCTION(deadline_front_merges_show, dd->front_merges, 0);
 SHOW_FUNCTION(deadline_fifo_batch_show, dd->fifo_batch, 0);
 #undef SHOW_FUNCTION
@@ -418,9 +545,12 @@ static ssize_t __FUNC(struct elevator_queue *e, const char *page, size_t count)
 		*(__PTR) = __data;					\
 	return ret;							\
 }
-STORE_FUNCTION(deadline_read_expire_store, &dd->fifo_expire[READ], 0, INT_MAX, 1);
-STORE_FUNCTION(deadline_write_expire_store, &dd->fifo_expire[WRITE], 0, INT_MAX, 1);
-STORE_FUNCTION(deadline_writes_starved_store, &dd->writes_starved, INT_MIN, INT_MAX, 0);
+STORE_FUNCTION(deadline_async_expire_store, &dd->fifo_expire[0], 0, INT_MAX, 1);
+STORE_FUNCTION(deadline_sync_expire_store, &dd->fifo_expire[1], 0, INT_MAX, 1);
+STORE_FUNCTION(deadline_rt_sync_expire_store, &dd->fifo_expire[2], 0, INT_MAX, 1);
+STORE_FUNCTION(deadline_time_quantum_store, &dd->time_quantum, 0, INT_MAX, 1);
+
+STORE_FUNCTION(deadline_async_starved_store, &dd->async_starved, INT_MIN, INT_MAX, 0);
 STORE_FUNCTION(deadline_front_merges_store, &dd->front_merges, 0, 1, 0);
 STORE_FUNCTION(deadline_fifo_batch_store, &dd->fifo_batch, 0, INT_MAX, 0);
 #undef STORE_FUNCTION
@@ -430,9 +560,11 @@ STORE_FUNCTION(deadline_fifo_batch_store, &dd->fifo_batch, 0, INT_MAX, 0);
 				      deadline_##name##_store)
 
 static struct elv_fs_entry deadline_attrs[] = {
-	DD_ATTR(read_expire),
-	DD_ATTR(write_expire),
-	DD_ATTR(writes_starved),
+	DD_ATTR(async_expire),
+	DD_ATTR(sync_expire),
+	DD_ATTR(rt_sync_expire),
+	DD_ATTR(time_quantum),
+	DD_ATTR(async_starved),
 	DD_ATTR(front_merges),
 	DD_ATTR(fifo_batch),
 	__ATTR_NULL


* Re: Reduce latencies for synchronous writes and high I/O priority requests in deadline IO scheduler
From: Paolo Ciarrocchi @ 2009-04-23 11:18 UTC
  To: Corrado Zoccolo; +Cc: jens.axboe, Linux-Kernel

On 4/22/09, Corrado Zoccolo <czoccolo@gmail.com> wrote:
> Hi,
> the deadline I/O scheduler currently classifies all I/O requests into
> only 2 classes: reads (always considered high priority) and writes
> (always lower).
> The attached patch, intended to reduce latencies for synchronous writes
> and high I/O priority requests, introduces more levels of priority:
> * real-time reads: highest priority and shortest deadline; can starve
> other levels
> * synchronous operations (either best-effort reads or RT/BE writes):
> mid priority; starvation of the lower level is prevented as usual
> * asynchronous operations (async writes and all IDLE class requests):
> lowest priority and longest deadline
>


The numbers are impressive.

Do you observe better latencies in normal desktop usage as well?

ciao,
-- 
Paolo
http://paolo.ciarrocchi.googlepages.com/
http://mypage.vodafone.it/


* Re: Reduce latencies for synchronous writes and high I/O priority requests in deadline IO scheduler
From: Jens Axboe @ 2009-04-23 11:28 UTC
  To: Corrado Zoccolo; +Cc: Linux-Kernel

On Wed, Apr 22 2009, Corrado Zoccolo wrote:
> Hi,
> the deadline I/O scheduler currently classifies all I/O requests into
> only 2 classes: reads (always considered high priority) and writes
> (always lower).
> The attached patch, intended to reduce latencies for synchronous writes
> and high I/O priority requests, introduces more levels of priority:
> * real-time reads: highest priority and shortest deadline; can starve
> other levels
> * synchronous operations (either best-effort reads or RT/BE writes):
> mid priority; starvation of the lower level is prevented as usual
> * asynchronous operations (async writes and all IDLE class requests):
> lowest priority and longest deadline
> 
> The patch also introduces some new heuristics:
> * for non-rotational devices, reads (within a given priority level)
> are issued in FIFO order, to improve the latency perceived by readers

Danger danger... I smell nasty heuristics.

> * minimum batch timespan (time quantum): partners with fifo_batch to
> improve throughput by sending more consecutive requests together. A
> given number of requests will not always take the same time to service
> (due to the amount of seeking needed), so fifo_batch must be tuned for
> the worst case, while in the best case longer batches would give a
> throughput boost.
> * the batch start request is chosen fifo_batch/3 requests before the
> expired one, to improve fairness for requests with lower start sectors,
> which otherwise have a higher probability of missing a deadline than
> mid-sector requests.

This is a huge patch, I'm not going to be reviewing this. Make this a
patchset, with each patch making one little change. Then it's easier to
review, and much easier to pick the parts that can go in directly and
leave out the ones that either need more work or are not going to be
merged.

> I did a few performance comparisons:
> * HDD, ext3 partition with data=writeback, tiotest with 32 threads,
> each writing 80MB of data

It doesn't seem to make a whole lot of difference, does it?

> ** deadline-original
> Tiotest results for 32 concurrent io threads:
> ,----------------------------------------------------------------------.
> | Item                  | Time     | Rate         | Usr CPU  | Sys CPU |
> +-----------------------+----------+--------------+----------+---------+
> | Write        2560 MBs |  103.0 s |  24.848 MB/s |  10.6 %  | 522.2 % |
> | Random Write  125 MBs |   98.8 s |   1.265 MB/s |  -1.6 %  |  16.1 % |
> | Read         2560 MBs |  166.2 s |  15.400 MB/s |   4.2 %  |  82.7 % |
> | Random Read   125 MBs |  193.3 s |   0.647 MB/s |  -0.8 %  |  14.5 % |
> `----------------------------------------------------------------------'
> Tiotest latency results:
> ,-------------------------------------------------------------------------.
> | Item         | Average latency | Maximum latency | % >2 sec | % >10 sec |
> +--------------+-----------------+-----------------+----------+-----------+
> | Write        |        4.122 ms |    17922.920 ms |  0.07980 |   0.00061 |
> | Random Write |        0.599 ms |     1245.200 ms |  0.00000 |   0.00000 |
> | Read         |        8.032 ms |     1125.759 ms |  0.00000 |   0.00000 |
> | Random Read  |      181.968 ms |      972.657 ms |  0.00000 |   0.00000 |
> |--------------+-----------------+-----------------+----------+-----------|
> | Total        |       10.044 ms |    17922.920 ms |  0.03804 |   0.00029 |
> `--------------+-----------------+-----------------+----------+-----------'
> 
> ** deadline-patched
> Tiotest results for 32 concurrent io threads:
> ,----------------------------------------------------------------------.
> | Item                  | Time     | Rate         | Usr CPU  | Sys CPU |
> +-----------------------+----------+--------------+----------+---------+
> | Write        2560 MBs |  105.3 s |  24.301 MB/s |  10.5 %  | 514.8 % |
> | Random Write  125 MBs |   95.9 s |   1.304 MB/s |  -1.8 %  |  17.3 % |
> | Read         2560 MBs |  165.1 s |  15.507 MB/s |   2.7 %  |  61.9 % |
> | Random Read   125 MBs |  110.6 s |   1.130 MB/s |   0.8 %  |  12.2 % |
> `----------------------------------------------------------------------'
> Tiotest latency results:
> ,-------------------------------------------------------------------------.
> | Item         | Average latency | Maximum latency | % >2 sec | % >10 sec |
> +--------------+-----------------+-----------------+----------+-----------+
> | Write        |        4.131 ms |    17456.831 ms |  0.08041 |   0.00275 |
> | Random Write |        2.780 ms |     5073.180 ms |  0.07500 |   0.00000 |
> | Read         |        7.748 ms |      936.499 ms |  0.00000 |   0.00000 |
> | Random Read  |      104.849 ms |      695.192 ms |  0.00000 |   0.00000 |
> |--------------+-----------------+-----------------+----------+-----------|
> | Total        |        8.168 ms |    17456.831 ms |  0.04008 |   0.00131 |
> `--------------+-----------------+-----------------+----------+-----------'

Main difference here seems to be random read performance; the rest are
pretty close and could just be noise. Random write is much worse from a
latency viewpoint. Is this just one run, or did you average several?

For something like this, you also need to consider workloads that
consist of processes with different IO patterns running at the same
time. With this tiotest run, you only test sequential readers competing,
then random readers, etc.

So, please, split the big patch into lots of little separate pieces.
Benchmark each one separately, so they each carry their own
justification.

> * fsync-tester results, on HDD, empty ext3 partition, mounted with
> data=writeback
> ** deadline-original:
> fsync time: 0.7963
> fsync time: 4.5914
> fsync time: 4.2347
> fsync time: 1.1670
> fsync time: 0.8164
> fsync time: 1.9783
> fsync time: 4.9726
> fsync time: 2.4929
> fsync time: 2.5448
> fsync time: 3.9627
> ** cfq 2.6.30-rc2
> fsync time: 0.0288
> fsync time: 0.0528
> fsync time: 0.0299
> fsync time: 0.0397
> fsync time: 0.5720
> fsync time: 0.0409
> fsync time: 0.0876
> fsync time: 0.0294
> fsync time: 0.0485
> ** deadline-patched
> fsync time: 0.0772
> fsync time: 0.0381
> fsync time: 0.0604
> fsync time: 0.2923
> fsync time: 0.2488
> fsync time: 0.0924
> fsync time: 0.0144
> fsync time: 1.4824
> fsync time: 0.0789
> fsync time: 0.0565
> fsync time: 0.0550
> fsync time: 0.0421

At least this test looks a lot better!

-- 
Jens Axboe



* Re: Reduce latencies for synchronous writes and high I/O priority requests in deadline IO scheduler
From: Aaron Carroll @ 2009-04-23 11:52 UTC
  To: Corrado Zoccolo; +Cc: jens.axboe, Linux-Kernel

Corrado Zoccolo wrote:
> Hi,
> the deadline I/O scheduler currently classifies all I/O requests into
> only 2 classes: reads (always considered high priority) and writes
> (always lower).
> The attached patch, intended to reduce latencies for synchronous writes

This can be achieved by switching to sync/async rather than read/write.  No
one has shown results where this makes an improvement.  Let us know if
you have a good example.

> and high I/O priority requests, introduces more levels of priority:
> * real-time reads: highest priority and shortest deadline; can starve
> other levels
> * synchronous operations (either best-effort reads or RT/BE writes):
> mid priority; starvation of the lower level is prevented as usual
> * asynchronous operations (async writes and all IDLE class requests):
> lowest priority and longest deadline
> 
> The patch also introduces some new heuristics:
> * for non-rotational devices, reads (within a given priority level)
> are issued in FIFO order, to improve the latency perceived by readers

This might be a good idea.  Can you make this a separate patch?
Is there a good reason not to do the same for writes?

> * minimum batch timespan (time quantum): partners with fifo_batch to
> improve throughput by sending more consecutive requests together. A
> given number of requests will not always take the same time to service
> (due to the amount of seeking needed), so fifo_batch must be tuned for
> the worst case, while in the best case longer batches would give a
> throughput boost.
> * the batch start request is chosen fifo_batch/3 requests before the
> expired one, to improve fairness for requests with lower start sectors,
> which otherwise have a higher probability of missing a deadline than
> mid-sector requests.

I don't like the rest of it.  I use deadline because it's a simple,
no surprises, no bullshit scheduler with reasonably good performance
in all situations.  Is there some reason why CFQ won't work for you?

> [... full tiotest and fsync-tester results snipped; they are quoted
> verbatim from the original message above ...]



* Re: Reduce latencies for synchronous writes and high I/O priority requests in deadline IO scheduler
From: Jens Axboe @ 2009-04-23 12:13 UTC
  To: Aaron Carroll; +Cc: Corrado Zoccolo, Linux-Kernel

On Thu, Apr 23 2009, Aaron Carroll wrote:
> Corrado Zoccolo wrote:
> > Hi,
> > the deadline I/O scheduler currently classifies all I/O requests into
> > only 2 classes: reads (always considered high priority) and writes
> > (always lower).
> > The attached patch, intended to reduce latencies for synchronous writes
> 
> This can be achieved by switching to sync/async rather than read/write.  No
> one has shown results where this makes an improvement.  Let us know if
> you have a good example.
> 
> > and high I/O priority requests, introduces more levels of priority:
> > * real-time reads: highest priority and shortest deadline; can starve
> > other levels
> > * synchronous operations (either best-effort reads or RT/BE writes):
> > mid priority; starvation of the lower level is prevented as usual
> > * asynchronous operations (async writes and all IDLE class requests):
> > lowest priority and longest deadline
> > 
> > The patch also introduces some new heuristics:
> > * for non-rotational devices, reads (within a given priority level)
> > are issued in FIFO order, to improve the latency perceived by readers
> 
> This might be a good idea.  Can you make this a separate patch?
> Is there a good reason not to do the same for writes?
> 
> > * minimum batch timespan (time quantum): partners with fifo_batch to
> > improve throughput by sending more consecutive requests together. A
> > given number of requests will not always take the same time to service
> > (due to the amount of seeking needed), so fifo_batch must be tuned for
> > the worst case, while in the best case longer batches would give a
> > throughput boost.
> > * the batch start request is chosen fifo_batch/3 requests before the
> > expired one, to improve fairness for requests with lower start sectors,
> > which otherwise have a higher probability of missing a deadline than
> > mid-sector requests.
> 
> I don't like the rest of it.  I use deadline because it's a simple,
> no surprises, no bullshit scheduler with reasonably good performance
> in all situations.  Is there some reason why CFQ won't work for you?

Fully agree with that; deadline is not going to be changed radically.
Doing sync/async instead of read/write alone would indeed likely bring
the latency results down; what impact the rest has is unknown.

If CFQ performs poorly for some situations, we fix that.


-- 
Jens Axboe



* Re: Reduce latencies for synchronous writes and high I/O priority requests in deadline IO scheduler
From: Corrado Zoccolo @ 2009-04-23 15:57 UTC
  To: Jens Axboe; +Cc: Linux-Kernel

Hi Jens,

On Thu, Apr 23, 2009 at 1:28 PM, Jens Axboe <jens.axboe@oracle.com> wrote:
> On Wed, Apr 22 2009, Corrado Zoccolo wrote:
>> The patch also introduces some new heuristics:
>> * for non-rotational devices, reads (within a given priority level)
>> are issued in FIFO order, to improve the latency perceived by readers
>
> Danger danger... I smell nasty heuristics.

Ok, I wanted to sneak this heuristic in :), but I can probably drop it
from the initial submission.
The fact is that many people are using the noop scheduler on SSDs,
to get the ultimate performance out of their hardware, and I wanted to
give them a better alternative. CFQ doesn't honor the non-rotational
flag when tag queuing is not supported, so it is not an alternative in
such cases.

>
>> * minimum batch timespan (time quantum): partners with fifo_batch to
>> improve throughput by sending more consecutive requests together. A
>> given number of requests will not always take the same time to service
>> (due to the amount of seeking needed), so fifo_batch must be tuned for
>> the worst case, while in the best case longer batches would give a
>> throughput boost.
>> * the batch start request is chosen fifo_batch/3 requests before the
>> expired one, to improve fairness for requests with lower start sectors,
>> which otherwise have a higher probability of missing a deadline than
>> mid-sector requests.
>
> This is a huge patch, I'm not going to be reviewing this. Make this a
> patchset, with each patch making one little change. Then it's easier to
> review, and much easier to pick the parts that can go in directly and
> leave out the ones that either need more work or are not going to be
> merged.
>
Ok.
I think I can split it into:
* add the new heuristics (so they can be evaluated independently of
the read/write vs. sync/async change)
* read/write becomes sync/async
* add I/O priorities

>> I did a few performance comparisons:
>> * HDD, ext3 partition with data=writeback, tiotest with 32 threads,
>> each writing 80MB of data
>
> It doesn't seem to make a whole lot of difference, does it?

The intent is not to boost throughput, but to reduce sync latency.
The heuristics were added to avoid a throughput regression.

> Main difference here seems to be random read performance; the rest are
> pretty close and could just be noise. Random write is much worse from a
> latency viewpoint. Is this just one run, or did you average several?

The random writes issued by tiotest are async writes, so their latency
is not an issue, and having higher latencies here actually helps to
improve throughput.

>
> For something like this, you also need to consider workloads that
> consist of processes with different IO patterns running at the same
> time. With this tiotest run, you only test sequential readers competing,
> then random readers, etc.

Sure. Do you have any suggestions?
I have another workload, the boot of my netbook, on which the patched
I/O scheduler saves 1s out of 12s.
All the other I/O schedulers, including noop, perform equally badly
(with noop, I think I lose on writes).

> So, please, split the big patch into lots of little separate pieces.
> Benchmark each one separately, so they each carry their own
> justification.

Does a theoretical proof of the unfairness also count as justification
for the fifo_batch/3 backjump?

>> * fsync-tester results, on HDD, empty ext3 partition, mounted with
>
> At least this test looks a lot better!

This is why the subject says "reducing latencies ..." :)
Maybe I should have started the mail with this test, instead of the
other ones showing that there was no throughput regression (which in
principle could happen when trying to reduce latencies).

Thanks,
Corrado


-- 
__________________________________________________________________________

dott. Corrado Zoccolo                          mailto:czoccolo@gmail.com
PhD - Department of Computer Science - University of Pisa, Italy
--------------------------------------------------------------------------


* Re: Reduce latencies for synchronous writes and high I/O priority requests in deadline IO scheduler
From: Corrado Zoccolo @ 2009-04-23 16:10 UTC
  To: Aaron Carroll; +Cc: jens.axboe, Linux-Kernel

On Thu, Apr 23, 2009 at 1:52 PM, Aaron Carroll <aaronc@cse.unsw.edu.au> wrote:
> Corrado Zoccolo wrote:
>> Hi,
>> the deadline I/O scheduler currently classifies all I/O requests into
>> only 2 classes: reads (always considered high priority) and writes
>> (always lower).
>> The attached patch, intended to reduce latencies for synchronous writes
>
> This can be achieved by switching to sync/async rather than read/write.  No
> one has shown results where this makes an improvement.  Let us know if
> you have a good example.

Yes, this is exactly what my patch does, and the fsync-tester numbers
are much better than with baseline deadline, almost comparable to cfq.

>
>> and high I/O priority requests, introduces more levels of priority:
>> * real-time reads: highest priority and shortest deadline; can starve
>> other levels
>> * synchronous operations (either best-effort reads or RT/BE writes):
>> mid priority; starvation of the lower level is prevented as usual
>> * asynchronous operations (async writes and all IDLE class requests):
>> lowest priority and longest deadline
>>
>> The patch also introduces some new heuristics:
>> * for non-rotational devices, reads (within a given priority level)
>> are issued in FIFO order, to improve the latency perceived by readers
>
> This might be a good idea.
I think Jens doesn't like it very much.
> Can you make this a separate patch?
I have an earlier attempt, much simpler, at:
http://lkml.indiana.edu/hypermail/linux/kernel/0904.1/00667.html
> Is there a good reason not to do the same for writes?
Well, in that case you could just use noop.
I found that this scheme outperforms noop. Random writes, in fact,
perform quite badly on most SSDs (unless you use a logging FS like
nilfs2, which transforms them into sequential writes), so having all
the deadline I/O scheduler machinery to merge write requests is much
better. As I said, my patched I/O scheduler outperforms noop in my
normal usage.
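
For non-rotational devices, the read-dispatch idea boils down to
something like the following sketch (hypothetical code, keyed off the
blk_queue_nonrot() flag; it is not the exact code from the patch, and
deadline/starvation handling is omitted):

	static struct request *
	deadline_pick_read(struct deadline_data *dd, struct request_queue *q)
	{
		if (list_empty(&dd->fifo_list[READ]))
			return NULL;
		/* On non-rotational devices, serve reads in arrival (FIFO)
		 * order; on rotating disks, keep the sector-sorted order. */
		if (blk_queue_nonrot(q))
			return rq_entry_fifo(dd->fifo_list[READ].next);
		return dd->next_rq;	/* next request in sort order */
	}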


>> * minimum batch timespan (time quantum): partners with fifo_batch to
>> improve throughput, by sending more consecutive requests together. A
>> given number of requests will not always take the same time (due to
>> amount of seek needed), therefore fifo_batch must be tuned for worst
>> cases, while in best cases, having longer batches would give a
>> throughput boost.
>> * batch start request is chosen fifo_batch/3 requests before the
>> expired one, to improve fairness for requests with lower start sector,
>> that otherwise have higher probability to miss a deadline than
>> mid-sector requests.
>
> I don't like the rest of it.  I use deadline because it's a simple,
> no surprises, no bullshit scheduler with reasonably good performance
> in all situations.  Is there some reason why CFQ won't work for you?

I actually like CFQ, and use it almost everywhere, and switch to
deadline only when submitting a heavy-duty workload (having a SysRq
combination to switch I/O schedulers could sometimes be very handy).

However, on SSDs it's not optimal, so I'm developing this to overcome
those limitations.
In the meantime, I also wanted to overcome deadline's limitations, i.e.
the high latencies on fsync/fdatasync.
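
(For reference, fsync-tester essentially times individual fsync()
calls while other I/O is in flight. A minimal sketch of such a probe,
hypothetical and not the actual tool, writing to a local "testfile":)

	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>
	#include <fcntl.h>
	#include <sys/time.h>

	int main(void)
	{
		char buf[65536];
		struct timeval t0, t1;
		int fd = open("testfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);

		if (fd < 0) {
			perror("open");
			return 1;
		}
		memset(buf, 'a', sizeof(buf));
		for (;;) {
			/* dirty some data, then time how long fsync takes */
			if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf)) {
				perror("write");
				return 1;
			}
			gettimeofday(&t0, NULL);
			if (fsync(fd) < 0) {
				perror("fsync");
				return 1;
			}
			gettimeofday(&t1, NULL);
			printf("fsync time: %.4f\n",
			       (t1.tv_sec - t0.tv_sec) +
			       (t1.tv_usec - t0.tv_usec) / 1e6);
			sleep(1);
		}
	}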

Corrado

-- 
__________________________________________________________________________

dott. Corrado Zoccolo                          mailto:czoccolo@gmail.com
PhD - Department of Computer Science - University of Pisa, Italy
--------------------------------------------------------------------------

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: Reduce latencies for syncronous writes and high I/O priority requests in deadline IO scheduler
  2009-04-23 16:10   ` Corrado Zoccolo
@ 2009-04-23 23:30     ` Aaron Carroll
  2009-04-24  6:13       ` Corrado Zoccolo
  2009-04-24  6:39     ` Jens Axboe
  1 sibling, 1 reply; 14+ messages in thread
From: Aaron Carroll @ 2009-04-23 23:30 UTC (permalink / raw)
  To: Corrado Zoccolo; +Cc: jens.axboe, Linux-Kernel

Hi Corrado,

Corrado Zoccolo wrote:
> On Thu, Apr 23, 2009 at 1:52 PM, Aaron Carroll <aaronc@cse.unsw.edu.au> wrote:
>> Corrado Zoccolo wrote:
>>> Hi,
>>> deadline I/O scheduler currently classifies all I/O requests in only 2
>>> classes, reads (always considered high priority) and writes (always
>>> lower).
>>> The attached patch, intended to reduce latencies for syncronous writes
>> Can be achieved by switching to sync/async rather than read/write.  No
>> one has shown results where this makes an improvement.  Let us know if
>> you have a good example.
> 
> Yes, this is exactly what my patch does, and the numbers for
> fsync-tester are much better than baseline deadline, almost comparable
> with cfq.

The patch does a bunch of other things too.  I can't tell what is due to
the read/write -> sync/async change, and what is due to the rest of it.

>>> and high I/O priority requests, introduces more levels of priorities:
>>> * real time reads: highest priority and shortest deadline, can starve
>>> other levels
>>> * syncronous operations (either best effort reads or RT/BE writes),
>>> mid priority, starvation for lower level is prevented as usual
>>> * asyncronous operations (async writes and all IDLE class requests),
>>> lowest priority and longest deadline
>>>
>>> The patch also introduces some new heuristics:
>>> * for non-rotational devices, reads (within a given priority level)
>>> are issued in FIFO order, to improve the latency perceived by readers
>> This might be a good idea.
> I think Jens doesn't like it very much.

Let's convince him :)

I think a nice way to do this would be to make fifo_batch=1 the default
for nonrot devices.  Of course this will affect writes too...

One problem here is the definition of nonrot.  E.g. if H/W RAID drivers
start setting that flag, it will kill performance.  Sorting is important 
for arrays of rotational disks.

>> Can you make this a separate patch?
> I have an earlier attempt, much simpler, at:
> http://lkml.indiana.edu/hypermail/linux/kernel/0904.1/00667.html
>> Is there a good reason not to do the same for writes?
> Well, in that case you could just use noop.

Noop doesn't merge as well as deadline, nor does it provide read/write
differentiation.  Is there a performance/QoS argument for not doing it?

> I found that this scheme outperforms noop. Random writes, in fact,
> perform quite badly on most SSDs (unless you use a logging FS like
> nilfs2, which transforms them into sequential writes), so having all
> the deadline I/O scheduler machinery to merge write requests is much
> better. As I said, my patched I/O scheduler outperforms noop in my
> normal usage.

You still get the merging... we are only talking about the issue
order here.

>>> * minimum batch timespan (time quantum): partners with fifo_batch to
>>> improve throughput, by sending more consecutive requests together. A
>>> given number of requests will not always take the same time (due to
>>> amount of seek needed), therefore fifo_batch must be tuned for worst
>>> cases, while in best cases, having longer batches would give a
>>> throughput boost.
>>> * batch start request is chosen fifo_batch/3 requests before the
>>> expired one, to improve fairness for requests with lower start sector,
>>> that otherwise have higher probability to miss a deadline than
>>> mid-sector requests.
>> I don't like the rest of it.  I use deadline because it's a simple,
>> no surprises, no bullshit scheduler with reasonably good performance
>> in all situations.  Is there some reason why CFQ won't work for you?
> 
> I actually like CFQ, and use it almost everywhere, and switch to
> deadline only when submitting a heavy-duty workload (having a SysRq
> combination to switch I/O schedulers could sometimes be very handy).
> 
> However, on SSDs it's not optimal, so I'm developing this to overcome
> those limitations.

Is this due to the stall on each batch switch?

> In the meantime, I also wanted to overcome deadline's limitations, i.e.
> the high latencies on fsync/fdatasync.

Did you try dropping the expiry times and/or batch size?


    -- Aaron

> 
> Corrado
> 


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: Reduce latencies for syncronous writes and high I/O priority  requests in deadline IO scheduler
  2009-04-23 23:30     ` Aaron Carroll
@ 2009-04-24  6:13       ` Corrado Zoccolo
  0 siblings, 0 replies; 14+ messages in thread
From: Corrado Zoccolo @ 2009-04-24  6:13 UTC (permalink / raw)
  To: Aaron Carroll; +Cc: jens.axboe, Linux-Kernel

Hi Aaron,
On Fri, Apr 24, 2009 at 1:30 AM, Aaron Carroll <aaronc@cse.unsw.edu.au> wrote:
> Hi Corrado,
>
> Corrado Zoccolo wrote:
>>
>> On Thu, Apr 23, 2009 at 1:52 PM, Aaron Carroll <aaronc@cse.unsw.edu.au>
>> wrote:
>>>
>>> Corrado Zoccolo wrote:
>>>>
>>>> Hi,
>>>> deadline I/O scheduler currently classifies all I/O requests in only 2
>>>> classes, reads (always considered high priority) and writes (always
>>>> lower).
>>>> The attached patch, intended to reduce latencies for syncronous writes
>>>
>>> Can be achieved by switching to sync/async rather than read/write.  No
>>> one has shown results where this makes an improvement.  Let us know if
>>> you have a good example.
>>
>> Yes, this is exactly what my patch does, and the numbers for
>> fsync-tester are much better than baseline deadline, almost comparable
>> with cfq.
>
> The patch does a bunch of other things too.  I can't tell what is due to
> the read/write -> sync/async change, and what is due to the rest of it.

Ok, I got it. I'm splitting it into smaller patches.

>>>> and high I/O priority requests, introduces more levels of priorities:
>>>> * real time reads: highest priority and shortest deadline, can starve
>>>> other levels
>>>> * syncronous operations (either best effort reads or RT/BE writes),
>>>> mid priority, starvation for lower level is prevented as usual
>>>> * asyncronous operations (async writes and all IDLE class requests),
>>>> lowest priority and longest deadline
>>>>
>>>> The patch also introduces some new heuristics:
>>>> * for non-rotational devices, reads (within a given priority level)
>>>> are issued in FIFO order, to improve the latency perceived by readers
>>>
>>> This might be a good idea.
>>
>> I think Jens doesn't like it very much.
>
> Let's convince him :)
>
> I think a nice way to do this would be to make fifo_batch=1 the default
> for nonrot devices.  Of course this will affect writes too...

fifo_batch has various implications, also concerning the alternation
between reads and writes.
Moreover, values that are too low negatively affect merging.
In deadline, merging of writeback requests often happens because the
scheduler is busy handling unrelated requests for some time, so
incoming requests have time to accumulate.

>
> One problem here is the definition of nonrot.  E.g. if H/W RAID drivers
> start setting that flag, it will kill performance.  Sorting is important for
> arrays of rotational disks.
>
The flag should have well-defined semantics.
In a RAID, I think it could be set for the aggregated disk, while the
single disks would be rotational or not depending on their technology.
This could work very well, since each disk would sort only its own
requests, and the scheduler would not waste time on other disks'
requests.
A random-read workload with reads smaller than the RAID stripe would
shine with this.
Clearly, for writes, since multiple disks are touched, the sorting
must be performed at the aggregated-disk level to have some
opportunity of reducing data transfers: this corresponds to what my
patch does.

>>> Can you make this a separate patch?
>>
>> I have an earlier attempt, much simpler, at:
>> http://lkml.indiana.edu/hypermail/linux/kernel/0904.1/00667.html
>>>
>>> Is there a good reason not to do the same for writes?
>>
>> Well, in that case you could just use noop.
>
> Noop doesn't merge as well as deadline, nor does it provide read/write
> differentiation.  Is there a performance/QoS argument for not doing it?

I think only experimentation can tell. But the RAID argument above
could make a case.

>> I found that this scheme outperforms noop. Random writes, in fact,
>> perform quite badly on most SSDs (unless you use a logging FS like
>> nilfs2, which transforms them into sequential writes), so having all
>> the deadline I/O scheduler machinery to merge write requests is much
>> better. As I said, my patched I/O scheduler outperforms noop in my
>> normal usage.
>
> You still get the merging... we are only talking about the issue
> order here.
>

Ditto, more experimentation is needed.

>>>> * minimum batch timespan (time quantum): partners with fifo_batch to
>>>> improve throughput, by sending more consecutive requests together. A
>>>> given number of requests will not always take the same time (due to
>>>> amount of seek needed), therefore fifo_batch must be tuned for worst
>>>> cases, while in best cases, having longer batches would give a
>>>> throughput boost.
>>>> * batch start request is chosen fifo_batch/3 requests before the
>>>> expired one, to improve fairness for requests with lower start sector,
>>>> that otherwise have higher probability to miss a deadline than
>>>> mid-sector requests.
>>>
>>> I don't like the rest of it.  I use deadline because it's a simple,
>>> no surprises, no bullshit scheduler with reasonably good performance
>>> in all situations.  Is there some reason why CFQ won't work for you?
>>
>> I actually like CFQ, and use it almost everywhere, and switch to
>> deadline only when submitting a heavy-duty workload (having a SysRq
>> combination to switch I/O schedulers could sometimes be very handy).
>>
>> However, on SSDs it's not optimal, so I'm developing this to overcome
>> those limitations.
>
> Is this due to the stall on each batch switch?

Possibly (CFQ is too complex to start hacking on without some
experience with something simpler).
AFAIK, the idling should be disabled when nonrot=1, but actually only
if the device supports tag queuing.
I think, however, that the whole machinery of CFQ is too heavy for
non-rotational devices, where a simple FIFO scheme, adjusted with
priorities, can achieve fair handling of requests.
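
A sketch of what such a priority-FIFO scheme might look like
(hypothetical code; anti-starvation for the lower levels, which the
patch does implement via async_starved, is omitted here):

	/*
	 * Serve the oldest request from the highest-priority non-empty
	 * FIFO.  With levels such as RT reads > sync > async, this gives
	 * priority-ordered, arrival-ordered dispatch.
	 */
	static struct request *
	prio_fifo_pick(struct list_head fifo[], int nr_levels)
	{
		int prio;

		for (prio = nr_levels - 1; prio >= 0; prio--)
			if (!list_empty(&fifo[prio]))
				return rq_entry_fifo(fifo[prio].next);
		return NULL;
	}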

>
>> In the meantime, I also wanted to overcome deadline's limitations, i.e.
>> the high latencies on fsync/fdatasync.
>
> Did you try dropping the expiry times and/or batch size?

Yes. The expiry times are soft, so they are often not satisfied.
Dropping the batch size causes a bandwidth drop, which in turn makes
expiry times miss more often due to the longer queues (I'm speaking of
rotational devices here, since the latencies affect them as well).

>
>
>   -- Aaron
>
>>
>> Corrado
>>
>
>



-- 
__________________________________________________________________________

dott. Corrado Zoccolo                          mailto:czoccolo@gmail.com
PhD - Department of Computer Science - University of Pisa, Italy
--------------------------------------------------------------------------

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: Reduce latencies for syncronous writes and high I/O priority requests in deadline IO scheduler
  2009-04-23 16:10   ` Corrado Zoccolo
  2009-04-23 23:30     ` Aaron Carroll
@ 2009-04-24  6:39     ` Jens Axboe
  2009-04-24 16:07       ` Corrado Zoccolo
  1 sibling, 1 reply; 14+ messages in thread
From: Jens Axboe @ 2009-04-24  6:39 UTC (permalink / raw)
  To: Corrado Zoccolo; +Cc: Aaron Carroll, Linux-Kernel

On Thu, Apr 23 2009, Corrado Zoccolo wrote:
> >> * minimum batch timespan (time quantum): partners with fifo_batch to
> >> improve throughput, by sending more consecutive requests together. A
> >> given number of requests will not always take the same time (due to
> >> amount of seek needed), therefore fifo_batch must be tuned for worst
> >> cases, while in best cases, having longer batches would give a
> >> throughput boost.
> >> * batch start request is chosen fifo_batch/3 requests before the
> >> expired one, to improve fairness for requests with lower start sector,
> >> that otherwise have higher probability to miss a deadline than
> >> mid-sector requests.
> >
> > I don't like the rest of it.  I use deadline because it's a simple,
> > no surprises, no bullshit scheduler with reasonably good performance
> > in all situations.  Is there some reason why CFQ won't work for you?
> 
> I actually like CFQ, and use it almost everywhere, and switch to
> deadline only when submitting a heavy-duty workload (having a SysRq
> combination to switch I/O schedulers could sometimes be very handy).
> 
> However, on SSDs it's not optimal, so I'm developing this to overcome
> those limitations.

I find your solution quite confusing - the statement is that CFQ
isn't optimal on SSD, so you modify deadline? ;-)

Most of the "CFQ doesn't work well on SSD" statements are simply wrong.
Now, you seem to have done some testing, so when you say that, you
probably have actual results telling you that this is the
case. But let's attempt to fix that issue, then!

One thing you pointed out is that CFQ doesn't treat the device as a
"real" SSD unless it does queueing. This is very much on purpose, for
two reasons:

1) I have never seen a non-queueing SSD that actually performs well for
   reads-vs-write situations, so CFQ still does idling for those.
2) It's a problem that is going away. SSDs that are coming out today and
   in the future WILL definitely do queuing. We can attribute most of
   the crap behaviour to the lacking jmicron flash controller, which
   also has a crappy SATA interface.

What I am worried about in the future is even faster SSD devices. CFQ is
already down a percent or two when we are doing 100k iops and such, and
this problem will only get worse. So I'm very much interested in speeding up
CFQ for such devices, which I think will mainly be slimming down the IO
path and bypassing much of the (unneeded) complexity for them. The last
thing I want is to have to tell people to use deadline or noop on SSD
devices.

> In the meantime, I also wanted to overcome deadline's limitations, i.e.
> the high latencies on fsync/fdatasync.

This is very much something you could pull out of the patchset and we
could include without much questioning.

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: Reduce latencies for syncronous writes and high I/O priority  requests in deadline IO scheduler
  2009-04-24  6:39     ` Jens Axboe
@ 2009-04-24 16:07       ` Corrado Zoccolo
  2009-04-24 21:37         ` Corrado Zoccolo
  0 siblings, 1 reply; 14+ messages in thread
From: Corrado Zoccolo @ 2009-04-24 16:07 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Aaron Carroll, Linux-Kernel

[-- Attachment #1: Type: text/plain, Size: 4232 bytes --]

On Fri, Apr 24, 2009 at 8:39 AM, Jens Axboe <jens.axboe@oracle.com> wrote:
> I find your solution quite confusing - the statement is that CFQ
> isn't optimal on SSD, so you modify deadline? ;-)

Well, I find CFQ too confusing to start with, so I chose a simpler one.
If I can prove something with deadline, maybe you will decide to
implement it in CFQ as well ;)

>
> Most of the "CFQ doesn't work well on SSD" statements are simply wrong.
> Now, you seem to have done some testing, so when you say that, you
> probably have actual results telling you that this is the
> case. But let's attempt to fix that issue, then!
>
> One thing you pointed out is that CFQ doesn't treat the device as a
> "real" SSD unless it does queueing. This is very much on purpose, for
> two reasons:
>
> 1) I have never seen a non-queueing SSD that actually performs well for
>   reads-vs-write situations, so CFQ still does idling for those.

Does CFQ idle only when switching between reads and writes, or also
when switching between reads from one process and reads from
another?
I think I'll have to instrument CFQ a bit to understand how it works.
Is there a better way than scattering printks all around?

> 2) It's a problem that is going away. SSDs that are coming out today and
>   in the future WILL definitely do queuing. We can attribute most of
>   the crap behaviour to the lacking jmicron flash controller, which
>   also has a crappy SATA interface.

I think SD cards will still be around a lot, and I don't expect them
to have queuing, so some support for them might still be needed.

> What I am worried about in the future is even faster SSD devices. CFQ is
> already down a percent or two when we are doing 100k iops and such, and
> this problem will only get worse. So I'm very much interested in speeding up
> CFQ for such devices, which I think will mainly be slimming down the IO
> path and bypassing much of the (unneeded) complexity for them. The last
> thing I want is to have to tell people to use deadline or noop on SSD
> devices.
>

Totally agree. Having the main I/O scheduler perform well in most
scenarios is surely needed.
But this could be achieved in various ways.
What if the main I/O scheduler had various strategies in its toolbox,
and could switch between them based on the workload or the type of
hardware?
FIFO scheduling for reads could be one such strategy, used only when
the conditions are right for it.
Another possibility is to use auto-tuning strategies, but those are
more difficult to devise and test.

>> In the meantime, I also wanted to overcome deadline's limitations, i.e.
>> the high latencies on fsync/fdatasync.
>
> This is very much something you could pull out of the patchset and we
> could include without much questioning.
>

Ok, this is the first patch of the series, and contains the code cleanup
needed before changing read/write to sync/async. No behavioral change
is introduced by this patch.

I found where the random read performance is gained, but I didn't
include it in this patch, because it requires the sync/async separation
in order not to negatively impact sync write latencies.

If the following new code, which replicates the existing behaviour:
       if (!dd->next_rq
           || rq_data_dir(dd->next_rq) != data_dir
           || deadline_check_fifo(dd, data_dir)) {
                /*
                 * A deadline has expired, the last request was in the other
                 * direction, or we have run out of higher-sectored requests.
is changed to:
       if (!dd->next_rq
           || rq_data_dir(dd->next_rq) > data_dir
           || deadline_check_fifo(dd, data_dir)) {
                /*
                 * A deadline has expired, the last request was less
                 * important (where WRITE is less important than READ),
                 * or we have run out of higher-sectored requests.

you get both higher random read throughput and higher write latencies.
(Since READ < WRITE, the batch now restarts from the FIFO only when the
cached next request is a write while reads were selected; in the
opposite case, a write batch simply continues from the read found in
sort order, so reads keep being serviced at the writes' expense.)

Corrado

> --
> Jens Axboe
>
>

-- 
__________________________________________________________________________

dott. Corrado Zoccolo                          mailto:czoccolo@gmail.com
PhD - Department of Computer Science - University of Pisa, Italy
--------------------------------------------------------------------------

[-- Attachment #2: deadline-patch-cleanup --]
[-- Type: application/octet-stream, Size: 4874 bytes --]

Deadline IOscheduler code cleanup, preparation for sync/async patch

This is the first patch of the series, and contains the code cleanup
needed before changing read/write to sync/async.
No behavioral change is introduced by this patch.

Code cleanups:
* A single next_rq is sufficient.
* we store the fifo insertion time on the request, and compute the
  deadline on the fly, to better handle fifo_expire changes (the fifos
  remain sorted)
* remove an unused field
* deadline_latter_request becomes deadline_next_request.

Signed-off-by: Corrado Zoccolo <czoccolo@gmail.com>

diff --git a/block/deadline-iosched.c b/block/deadline-iosched.c
index c4d991d..5713595 100644
--- a/block/deadline-iosched.c
+++ b/block/deadline-iosched.c
@@ -35,11 +35,10 @@ struct deadline_data {
 	struct list_head fifo_list[2];
 
 	/*
-	 * next in sort order. read, write or both are NULL
+	 * next in sort order.
 	 */
-	struct request *next_rq[2];
+	struct request *next_rq;
 	unsigned int batching;		/* number of sequential requests made */
-	sector_t last_sector;		/* head position */
 	unsigned int starved;		/* times reads have starved writes */
 
 	/*
@@ -63,7 +62,7 @@ deadline_rb_root(struct deadline_data *dd, struct request *rq)
  * get the request after `rq' in sector-sorted order
  */
 static inline struct request *
-deadline_latter_request(struct request *rq)
+deadline_next_request(struct request *rq)
 {
 	struct rb_node *node = rb_next(&rq->rb_node);
 
@@ -86,10 +85,8 @@ deadline_add_rq_rb(struct deadline_data *dd, struct request *rq)
 static inline void
 deadline_del_rq_rb(struct deadline_data *dd, struct request *rq)
 {
-	const int data_dir = rq_data_dir(rq);
-
-	if (dd->next_rq[data_dir] == rq)
-		dd->next_rq[data_dir] = deadline_latter_request(rq);
+	if (dd->next_rq == rq)
+		dd->next_rq = deadline_next_request(rq);
 
 	elv_rb_del(deadline_rb_root(dd, rq), rq);
 }
@@ -101,15 +98,14 @@ static void
 deadline_add_request(struct request_queue *q, struct request *rq)
 {
 	struct deadline_data *dd = q->elevator->elevator_data;
-	const int data_dir = rq_data_dir(rq);
 
 	deadline_add_rq_rb(dd, rq);
 
 	/*
-	 * set expire time and add to fifo list
+	 * set request creation time and add to fifo list
 	 */
-	rq_set_fifo_time(rq, jiffies + dd->fifo_expire[data_dir]);
-	list_add_tail(&rq->queuelist, &dd->fifo_list[data_dir]);
+	rq_set_fifo_time(rq, jiffies);
+	list_add_tail(&rq->queuelist, &dd->fifo_list[rq_data_dir(rq)]);
 }
 
 /*
@@ -206,13 +202,7 @@ deadline_move_to_dispatch(struct deadline_data *dd, struct request *rq)
 static void
 deadline_move_request(struct deadline_data *dd, struct request *rq)
 {
-	const int data_dir = rq_data_dir(rq);
-
-	dd->next_rq[READ] = NULL;
-	dd->next_rq[WRITE] = NULL;
-	dd->next_rq[data_dir] = deadline_latter_request(rq);
-
-	dd->last_sector = rq_end_sector(rq);
+	dd->next_rq = deadline_next_request(rq);
 
 	/*
 	 * take it off the sort and fifo list, move
@@ -227,15 +217,13 @@ deadline_move_request(struct deadline_data *dd, struct request *rq)
  */
 static inline int deadline_check_fifo(struct deadline_data *dd, int ddir)
 {
-	struct request *rq = rq_entry_fifo(dd->fifo_list[ddir].next);
-
+	BUG_ON(list_empty(&dd->fifo_list[ddir]));
 	/*
-	 * rq is expired!
+	 * deadline is expired!
 	 */
-	if (time_after(jiffies, rq_fifo_time(rq)))
-		return 1;
-
-	return 0;
+	return time_after(jiffies, dd->fifo_expire[ddir] +
+			  rq_fifo_time(rq_entry_fifo(dd->fifo_list[ddir].next))
+			  );
 }
 
 /*
@@ -247,20 +235,13 @@ static int deadline_dispatch_requests(struct request_queue *q, int force)
 	struct deadline_data *dd = q->elevator->elevator_data;
 	const int reads = !list_empty(&dd->fifo_list[READ]);
 	const int writes = !list_empty(&dd->fifo_list[WRITE]);
-	struct request *rq;
+	struct request *rq = dd->next_rq;
 	int data_dir;
 
-	/*
-	 * batches are currently reads XOR writes
-	 */
-	if (dd->next_rq[WRITE])
-		rq = dd->next_rq[WRITE];
-	else
-		rq = dd->next_rq[READ];
-
-	if (rq && dd->batching < dd->fifo_batch)
+	if (rq && dd->batching < dd->fifo_batch) {
 		/* we have a next request are still entitled to batch */
 		goto dispatch_request;
+	}
 
 	/*
 	 * at this point we are not running a batch. select the appropriate
@@ -299,7 +280,9 @@ dispatch_find_request:
 	/*
 	 * we are not running a batch, find best request for selected data_dir
 	 */
-	if (deadline_check_fifo(dd, data_dir) || !dd->next_rq[data_dir]) {
+	if (!dd->next_rq
+	    || rq_data_dir(dd->next_rq) != data_dir
+	    || deadline_check_fifo(dd, data_dir)) {
 		/*
 		 * A deadline has expired, the last request was in the other
 		 * direction, or we have run out of higher-sectored requests.
@@ -311,7 +294,7 @@ dispatch_find_request:
 		 * The last req was the same dir and we have a next request in
 		 * sort order. No expired requests so continue on from here.
 		 */
-		rq = dd->next_rq[data_dir];
+		rq = dd->next_rq;
 	}
 
 	dd->batching = 0;

^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: Reduce latencies for syncronous writes and high I/O priority  requests in deadline IO scheduler
  2009-04-24 16:07       ` Corrado Zoccolo
@ 2009-04-24 21:37         ` Corrado Zoccolo
  2009-04-26 12:43           ` Corrado Zoccolo
  0 siblings, 1 reply; 14+ messages in thread
From: Corrado Zoccolo @ 2009-04-24 21:37 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Aaron Carroll, Linux-Kernel

[-- Attachment #1: Type: text/plain, Size: 4854 bytes --]

On Fri, Apr 24, 2009 at 6:07 PM, Corrado Zoccolo <czoccolo@gmail.com> wrote:
> Ok, this is the first patch of the series, and contains the code cleanup
> needed before changing read/write to sync/async. No behavioral change
> is introduced by this patch.
>

And this one is the second patch, which changes read/write to sync vs. async.
fsync-tester results, with

     while : ; do time sh -c "dd if=/dev/zero of=bigfile bs=8M count=256 ; sync; rm bigfile"; done

running in the background:
./fsync-tester
fsync time: 0.6383
fsync time: 0.1835
fsync time: 0.1744
fsync time: 0.1103
fsync time: 0.1535
fsync time: 0.1545
fsync time: 0.1491
fsync time: 0.1524
fsync time: 0.1609
fsync time: 0.1168
fsync time: 0.1458
fsync time: 0.1328
fsync time: 0.1655
fsync time: 0.1731
fsync time: 0.1356
fsync time: 0.1746
fsync time: 0.1166
fsync time: 0.1609
fsync time: 0.1370
fsync time: 0.1379
fsync time: 0.2096
fsync time: 0.1438
fsync time: 0.1652
fsync time: 0.1612
fsync time: 0.1438

compared with original deadline:
fsync time: 0.7963
fsync time: 4.5914
fsync time: 4.2347
fsync time: 1.1670
fsync time: 0.8164
fsync time: 1.9783
fsync time: 4.9726
fsync time: 2.4929
fsync time: 2.5448
fsync time: 3.9627

Usual tio-test, 32 threads 80Mb each:
Tiotest results for 32 concurrent io threads:
,----------------------------------------------------------------------.
| Item                  | Time     | Rate         | Usr CPU  | Sys CPU |
+-----------------------+----------+--------------+----------+---------+
| Write        2560 MBs |  103.6 s |  24.705 MB/s |   9.7 %  | 495.6 % |
| Random Write  125 MBs |   95.9 s |   1.304 MB/s |  -1.9 %  |  23.8 % |
| Read         2560 MBs |  164.3 s |  15.580 MB/s |   3.5 %  |  70.1 % |
| Random Read   125 MBs |  129.7 s |   0.964 MB/s |  -0.0 %  |  16.2 % |
`----------------------------------------------------------------------'
Tiotest latency results:
,-------------------------------------------------------------------------.
| Item         | Average latency | Maximum latency | % >2 sec | % >10 sec |
+--------------+-----------------+-----------------+----------+-----------+
| Write        |        4.040 ms |     8949.999 ms |  0.07614 |   0.00000 |
| Random Write |        1.252 ms |     5439.023 ms |  0.03125 |   0.00000 |
| Read         |        7.920 ms |      792.899 ms |  0.00000 |   0.00000 |
| Random Read  |      123.807 ms |      910.263 ms |  0.00000 |   0.00000 |
|--------------+-----------------+-----------------+----------+-----------|
| Total        |        8.613 ms |     8949.999 ms |  0.03703 |   0.00000 |
`--------------+-----------------+-----------------+----------+-----------'

compared with original deadline:
Tiotest results for 32 concurrent io threads:
,----------------------------------------------------------------------.
| Item                  | Time     | Rate         | Usr CPU  | Sys CPU |
+-----------------------+----------+--------------+----------+---------+
| Write        2560 MBs |  103.0 s |  24.848 MB/s |  10.6 %  | 522.2 % |
| Random Write  125 MBs |   98.8 s |   1.265 MB/s |  -1.6 %  |  16.1 % |
| Read         2560 MBs |  166.2 s |  15.400 MB/s |   4.2 %  |  82.7 % |
| Random Read   125 MBs |  193.3 s |   0.647 MB/s |  -0.8 %  |  14.5 % |
`----------------------------------------------------------------------'
Tiotest latency results:
,-------------------------------------------------------------------------.
| Item         | Average latency | Maximum latency | % >2 sec | % >10 sec |
+--------------+-----------------+-----------------+----------+-----------+
| Write        |        4.122 ms |    17922.920 ms |  0.07980 |   0.00061 |
| Random Write |        0.599 ms |     1245.200 ms |  0.00000 |   0.00000 |
| Read         |        8.032 ms |     1125.759 ms |  0.00000 |   0.00000 |
| Random Read  |      181.968 ms |      972.657 ms |  0.00000 |   0.00000 |
|--------------+-----------------+-----------------+----------+-----------|
| Total        |       10.044 ms |    17922.920 ms |  0.03804 |   0.00029 |
`--------------+-----------------+-----------------+----------+-----------'

We see the improvement on random read (not as pronounced as with the
full set of heuristics, but still noticeable) and a smaller increase in
random write latency, which will do no harm, since we now distinguish
between sync and async requests.
I can test which of the other heuristics provide the remaining few %
of improvement on random read, if you are interested.
Otherwise, I'm more inclined to submit another patch, adding
io-priorities to this series.

-- 
__________________________________________________________________________

dott. Corrado Zoccolo                          mailto:czoccolo@gmail.com
PhD - Department of Computer Science - University of Pisa, Italy
--------------------------------------------------------------------------

[-- Attachment #2: deadline-patch-sync-async --]
[-- Type: application/octet-stream, Size: 11000 bytes --]

Deadline IOscheduler sync/async patch

This is the second patch of the series, and contains the changes
to classify requests as sync/async instead of read/write for deadline
assignment.
Most changes are straightforward.
A few things to note:
* user space tunables change their names accordingly.
* deadline_dispatch_requests now selects requests to dispatch based
  on their belonging to the sync vs. async class. Batch extension rules
  are changed to allow a sync batch to be extended, even if async
  was selected due to async_starved (unless the deadline has expired).
* deadline_move_request becomes more complex, since now the next
  request in disk order may not be suitable if it is of lower priority
  than the current batch. To keep constant time complexity, we put a
  limit on the number of requests that can be skipped while looking
  for one with the correct priority, and we allow for batch priority
  demotion in case a suitable request is not found.


Signed-off-by: Corrado Zoccolo <czoccolo@gmail.com>

diff --git a/block/deadline-iosched.c b/block/deadline-iosched.c
index 5713595..f8ca1a3 100644
--- a/block/deadline-iosched.c
+++ b/block/deadline-iosched.c
@@ -17,9 +17,9 @@
 /*
  * See Documentation/block/deadline-iosched.txt
  */
-static const int read_expire = HZ / 2;  /* max time before a read is submitted. */
-static const int write_expire = 5 * HZ; /* ditto for writes, these limits are SOFT! */
-static const int writes_starved = 2;    /* max times reads can starve a write */
+static const int sync_expire = HZ / 2;     /* max time before a sync operation is submitted. */
+static const int async_expire = 5 * HZ;    /* ditto for async operations, these limits are SOFT! */
+static const int async_starved = 2;        /* max times SYNC can starve ASYNC requests */
 static const int fifo_batch = 16;       /* # of sequential requests treated as one
 				     by the above parameters. For throughput. */
 
@@ -31,8 +31,8 @@ struct deadline_data {
 	/*
 	 * requests (deadline_rq s) are present on both sort_list and fifo_list
 	 */
-	struct rb_root sort_list[2];	
-	struct list_head fifo_list[2];
+	struct rb_root sort_list[2]; /* READ, WRITE */
+	struct list_head fifo_list[2]; /* 0=ASYNC, 1=SYNC */
 
 	/*
 	 * next in sort order.
@@ -46,8 +46,13 @@ struct deadline_data {
 	 */
 	int fifo_expire[2];
 	int fifo_batch;
-	int writes_starved;
+	int async_starved;
 	int front_merges;
+
+	/*
+	  current batch data & stats
+	 */
+	int cur_batch_prio;
 };
 
 static void deadline_move_request(struct deadline_data *, struct request *);
@@ -91,6 +96,12 @@ deadline_del_rq_rb(struct deadline_data *dd, struct request *rq)
 	elv_rb_del(deadline_rb_root(dd, rq), rq);
 }
 
+static int
+deadline_compute_req_priority(struct request *req)
+{
+	return !!rq_is_sync(req);
+}
+
 /*
  * add rq to rbtree and fifo
  */
@@ -105,7 +116,8 @@ deadline_add_request(struct request_queue *q, struct request *rq)
 	 * set request creation time and add to fifo list
 	 */
 	rq_set_fifo_time(rq, jiffies);
-	list_add_tail(&rq->queuelist, &dd->fifo_list[rq_data_dir(rq)]);
+	list_add_tail(&rq->queuelist,
+		      &dd->fifo_list[deadline_compute_req_priority(rq)]);
 }
 
 /*
@@ -202,7 +214,24 @@ deadline_move_to_dispatch(struct deadline_data *dd, struct request *rq)
 static void
 deadline_move_request(struct deadline_data *dd, struct request *rq)
 {
-	dd->next_rq = deadline_next_request(rq);
+	int max_search = dd->fifo_batch;
+
+	dd->next_rq = rq;
+	/* try to get requests of at least the same priority (or above)
+	   and same direction as current one */
+	while (max_search-- &&
+	       (dd->next_rq = deadline_next_request(dd->next_rq)) &&
+	       dd->cur_batch_prio > deadline_compute_req_priority(dd->next_rq))
+		;
+
+	if (!max_search || !dd->next_rq) {
+		/* did not get a next of suitable priority, demote batch to
+		   lower, and continue in disk order */
+		dd->next_rq = deadline_next_request(rq);
+		if (dd->next_rq)
+			dd->cur_batch_prio =
+				deadline_compute_req_priority(dd->next_rq);
+	}
 
 	/*
 	 * take it off the sort and fifo list, move
@@ -212,17 +241,17 @@ deadline_move_request(struct deadline_data *dd, struct request *rq)
 }
 
 /*
- * deadline_check_fifo returns 0 if there are no expired requests on the fifo,
- * 1 otherwise. Requires !list_empty(&dd->fifo_list[data_dir])
+ * deadline_check_fifo returns 0 if there are no expired requests on the fifo
+ * for given priority, 1 otherwise. Requires !list_empty(&dd->fifo_list[prio])
  */
-static inline int deadline_check_fifo(struct deadline_data *dd, int ddir)
+static inline int deadline_check_fifo(struct deadline_data *dd, unsigned prio)
 {
-	BUG_ON(list_empty(&dd->fifo_list[ddir]));
+	BUG_ON(list_empty(&dd->fifo_list[prio]));
 	/*
 	 * deadline is expired!
 	 */
-	return time_after(jiffies, dd->fifo_expire[ddir] +
-			  rq_fifo_time(rq_entry_fifo(dd->fifo_list[ddir].next))
+	return time_after(jiffies, dd->fifo_expire[prio] +
+			  rq_fifo_time(rq_entry_fifo(dd->fifo_list[prio].next))
 			  );
 }
 
@@ -233,10 +262,10 @@ static inline int deadline_check_fifo(struct deadline_data *dd, int ddir)
 static int deadline_dispatch_requests(struct request_queue *q, int force)
 {
 	struct deadline_data *dd = q->elevator->elevator_data;
-	const int reads = !list_empty(&dd->fifo_list[READ]);
-	const int writes = !list_empty(&dd->fifo_list[WRITE]);
+	const int sync_reqs = !list_empty(&dd->fifo_list[1]);
+	const int async_reqs = !list_empty(&dd->fifo_list[0]);
 	struct request *rq = dd->next_rq;
-	int data_dir;
+	int request_prio = dd->cur_batch_prio;
 
 	if (rq && dd->batching < dd->fifo_batch) {
 		/* we have a next request are still entitled to batch */
@@ -248,14 +277,10 @@ static int deadline_dispatch_requests(struct request_queue *q, int force)
 	 * data direction (read / write)
 	 */
 
-	if (reads) {
-		BUG_ON(RB_EMPTY_ROOT(&dd->sort_list[READ]));
-
-		if (writes && (dd->starved++ >= dd->writes_starved))
-			goto dispatch_writes;
-
-		data_dir = READ;
-
+	if (sync_reqs) {
+		if (async_reqs && (dd->starved++ >= dd->async_starved))
+			goto dispatch_async;
+		request_prio = 1;
 		goto dispatch_find_request;
 	}
 
@@ -263,14 +288,10 @@ static int deadline_dispatch_requests(struct request_queue *q, int force)
 	 * there are either no reads or writes have been starved
 	 */
 
-	if (writes) {
-dispatch_writes:
-		BUG_ON(RB_EMPTY_ROOT(&dd->sort_list[WRITE]));
-
+	if (async_reqs) {
+dispatch_async:
 		dd->starved = 0;
-
-		data_dir = WRITE;
-
+		request_prio = 0;
 		goto dispatch_find_request;
 	}
 
@@ -278,25 +299,28 @@ dispatch_writes:
 
 dispatch_find_request:
 	/*
-	 * we are not running a batch, find best request for selected data_dir
+	 * we are not running a batch:
+	 * find best request for selected request_prio
 	 */
 	if (!dd->next_rq
-	    || rq_data_dir(dd->next_rq) != data_dir
-	    || deadline_check_fifo(dd, data_dir)) {
+	    || dd->cur_batch_prio < request_prio
+	    || deadline_check_fifo(dd, request_prio)) {
 		/*
-		 * A deadline has expired, the last request was in the other
-		 * direction, or we have run out of higher-sectored requests.
+		 * A deadline expired, the previous batch had a lower priority,
+		 * or we have run out of higher-sectored requests.
 		 * Start again from the request with the earliest expiry time.
 		 */
-		rq = rq_entry_fifo(dd->fifo_list[data_dir].next);
+		rq = rq_entry_fifo(dd->fifo_list[request_prio].next);
 	} else {
 		/*
-		 * The last req was the same dir and we have a next request in
-		 * sort order. No expired requests so continue on from here.
+		 * The last batch was same or higher priority and we have a
+		 * next request in sort order. No expired requests so continue
+		 * on from here.
 		 */
 		rq = dd->next_rq;
 	}
 
+	dd->cur_batch_prio = request_prio;
 	dd->batching = 0;
 
 dispatch_request:
@@ -313,16 +337,16 @@ static int deadline_queue_empty(struct request_queue *q)
 {
 	struct deadline_data *dd = q->elevator->elevator_data;
 
-	return list_empty(&dd->fifo_list[WRITE])
-		&& list_empty(&dd->fifo_list[READ]);
+	return list_empty(&dd->fifo_list[0])
+		&& list_empty(&dd->fifo_list[1]);
 }
 
 static void deadline_exit_queue(struct elevator_queue *e)
 {
 	struct deadline_data *dd = e->elevator_data;
 
-	BUG_ON(!list_empty(&dd->fifo_list[READ]));
-	BUG_ON(!list_empty(&dd->fifo_list[WRITE]));
+	BUG_ON(!list_empty(&dd->fifo_list[0]));
+	BUG_ON(!list_empty(&dd->fifo_list[1]));
 
 	kfree(dd);
 }
@@ -338,13 +362,13 @@ static void *deadline_init_queue(struct request_queue *q)
 	if (!dd)
 		return NULL;
 
-	INIT_LIST_HEAD(&dd->fifo_list[READ]);
-	INIT_LIST_HEAD(&dd->fifo_list[WRITE]);
+	INIT_LIST_HEAD(&dd->fifo_list[0]);
+	INIT_LIST_HEAD(&dd->fifo_list[1]);
 	dd->sort_list[READ] = RB_ROOT;
 	dd->sort_list[WRITE] = RB_ROOT;
-	dd->fifo_expire[READ] = read_expire;
-	dd->fifo_expire[WRITE] = write_expire;
-	dd->writes_starved = writes_starved;
+	dd->fifo_expire[0] = async_expire;
+	dd->fifo_expire[1] = sync_expire;
+	dd->async_starved = async_starved;
 	dd->front_merges = 1;
 	dd->fifo_batch = fifo_batch;
 	return dd;
@@ -378,9 +402,9 @@ static ssize_t __FUNC(struct elevator_queue *e, char *page)		\
 		__data = jiffies_to_msecs(__data);			\
 	return deadline_var_show(__data, (page));			\
 }
-SHOW_FUNCTION(deadline_read_expire_show, dd->fifo_expire[READ], 1);
-SHOW_FUNCTION(deadline_write_expire_show, dd->fifo_expire[WRITE], 1);
-SHOW_FUNCTION(deadline_writes_starved_show, dd->writes_starved, 0);
+SHOW_FUNCTION(deadline_async_expire_show, dd->fifo_expire[0], 1);
+SHOW_FUNCTION(deadline_sync_expire_show, dd->fifo_expire[1], 1);
+SHOW_FUNCTION(deadline_async_starved_show, dd->async_starved, 0);
 SHOW_FUNCTION(deadline_front_merges_show, dd->front_merges, 0);
 SHOW_FUNCTION(deadline_fifo_batch_show, dd->fifo_batch, 0);
 #undef SHOW_FUNCTION
@@ -401,9 +425,9 @@ static ssize_t __FUNC(struct elevator_queue *e, const char *page, size_t count)
 		*(__PTR) = __data;					\
 	return ret;							\
 }
-STORE_FUNCTION(deadline_read_expire_store, &dd->fifo_expire[READ], 0, INT_MAX, 1);
-STORE_FUNCTION(deadline_write_expire_store, &dd->fifo_expire[WRITE], 0, INT_MAX, 1);
-STORE_FUNCTION(deadline_writes_starved_store, &dd->writes_starved, INT_MIN, INT_MAX, 0);
+STORE_FUNCTION(deadline_async_expire_store, &dd->fifo_expire[0], 0, INT_MAX, 1);
+STORE_FUNCTION(deadline_sync_expire_store, &dd->fifo_expire[1], 0, INT_MAX, 1);
+STORE_FUNCTION(deadline_async_starved_store, &dd->async_starved, INT_MIN, INT_MAX, 0);
 STORE_FUNCTION(deadline_front_merges_store, &dd->front_merges, 0, 1, 0);
 STORE_FUNCTION(deadline_fifo_batch_store, &dd->fifo_batch, 0, INT_MAX, 0);
 #undef STORE_FUNCTION
@@ -413,9 +437,9 @@ STORE_FUNCTION(deadline_fifo_batch_store, &dd->fifo_batch, 0, INT_MAX, 0);
 				      deadline_##name##_store)
 
 static struct elv_fs_entry deadline_attrs[] = {
-	DD_ATTR(read_expire),
-	DD_ATTR(write_expire),
-	DD_ATTR(writes_starved),
+	DD_ATTR(async_expire),
+	DD_ATTR(sync_expire),
+	DD_ATTR(async_starved),
 	DD_ATTR(front_merges),
 	DD_ATTR(fifo_batch),
 	__ATTR_NULL

^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: Reduce latencies for syncronous writes and high I/O priority  requests in deadline IO scheduler
  2009-04-24 21:37         ` Corrado Zoccolo
@ 2009-04-26 12:43           ` Corrado Zoccolo
  2009-05-01 19:30             ` Corrado Zoccolo
  0 siblings, 1 reply; 14+ messages in thread
From: Corrado Zoccolo @ 2009-04-26 12:43 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Aaron Carroll, Linux-Kernel

[-- Attachment #1: Type: text/plain, Size: 6096 bytes --]

Hi Jens,
I found fio, a very handy and complete tool for block I/O performance
testing (kudos to the author), and started doing some thorough testing
of the patch, since I couldn't tune tiotest's behaviour, and I had only
a surface understanding of what was going on.
The test configuration is attached for reference. Each test is run
after dropping the caches. The suffix .2 or .3 indicates the value of
the {writes,async}_starved tunable.

My findings are interesting:
* there is a definite improvement for many readers performing random
reads against one sequential writer (this is the case I think tiotest
was showing, due to the unclear separation - i.e. no fsync - between
tiotest phases). This workload simulates boot on a single-disk
machine, with random reads that represent fault-ins for binaries and
libraries, and sequential writes that represent log updates.

* the improvement is not present if the number of readers is small
(e.g. 4). Performance is then similar to original deadline, which is
far below cfq. The problem appears to be caused by the unfairness
towards low-numbered sectors, and happens only when the random readers
have overlapping reading regions. Let's assume 4 readers, as in my test.
The workload will evolve as follows: a read batch is started from the
request that is first in the FIFO. The probability that the batch
starts at the first read in disk order is 1/4, and the probability
that this read comes first in the next (second) batch is 7/24
(assuming the first reader doesn't post a new request yet). This means
there is a probability of 11/24 that we need more than 2 batches to
service all the initial read requests (and only then do we service the
starved writer: increasing writes_starved in fact improves the reader
bw). If the reading regions overlap, then after the writer is serviced
the FIFO will again be randomly ordered, so the same pattern will repeat.
A perfect scheduler, instead, for each batch in which fewer than
fifo_batch requests are available, should schedule all the available
read requests, i.e. start from the first in disk order instead of the
first in FIFO order (unless a deadline has expired).
This allows the readers to progress much faster. Do you want me to
test such a heuristic? (A sketch of it appears after the test output
below.)
** I think there is also another theoretical bad case in deadline
behaviour, i.e. when all deadlines expire. In that case, it switches
to pure FIFO batch scheduling. Here, too, scheduling all requests in
disk order would instead allow for a faster recovery. Do you think we
should handle this case as well?

* I think that, now that we differentiate between sync and async
writes, we can painlessly increase the async_starved tunable. This
will provide better performance for mixed workloads such as random
readers mixed with a sequential writer. In particular, the 32
readers/1 writer test shows impressive performance: full write
bandwidth is achieved, while reader bandwidth outperforms all other
schedulers, including cfq (which instead completely starves the
writer).

* on my machine, there is a regression on sequential write (2 parallel
sequential writers, instead, give better performance, and 1 seq writer
mixed with many random readers maxes out the write bandwidth).
Interestingly, this regression disappears when I spread some printks
around. It is therefore a timing issue that causes fewer merges to
happen (I think this could be fixed by allowing async writes to be
dispatched only after an initial delay):
# run with printks #

seqwrite: (g=0): rw=write, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
Starting 1 process
Jobs: 1 (f=1): [F] [100.0% done] [     0/     0 kb/s] [eta 00m:00s]
seqwrite: (groupid=0, jobs=1): err= 0: pid=4838
  write: io=1010MiB, bw=30967KiB/s, iops=7560, runt= 34193msec
    clat (usec): min=7, max=4274K, avg=114.53, stdev=13822.22
  cpu          : usr=1.44%, sys=9.47%, ctx=1299, majf=0, minf=154
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=0/258510, short=0/0
     lat (usec): 10=45.00%, 20=52.36%, 50=2.18%, 100=0.05%, 250=0.32%
     lat (usec): 500=0.03%, 750=0.01%, 1000=0.01%
     lat (msec): 2=0.01%, 4=0.01%, 10=0.01%, 100=0.01%, 250=0.02%
     lat (msec): 500=0.01%, 2000=0.01%, >=2000=0.01%

Run status group 0 (all jobs):
  WRITE: io=1010MiB, aggrb=30967KiB/s, minb=30967KiB/s,
maxb=30967KiB/s, mint=34193msec, maxt=34193msec

Disk stats (read/write):
  sda: ios=35/8113, merge=0/250415, ticks=2619/4277418,
in_queue=4280032, util=96.52%

# run without printks #
seqwrite: (g=0): rw=write, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
Starting 1 process
Jobs: 1 (f=1): [F] [100.0% done] [     0/     0 kb/s] [eta 00m:00s]
seqwrite: (groupid=0, jobs=1): err= 0: pid=5311
  write: io=897076KiB, bw=26726KiB/s, iops=6524, runt= 34371msec
    clat (usec): min=7, max=1801K, avg=132.11, stdev=6407.61
  cpu          : usr=1.14%, sys=7.84%, ctx=1272, majf=0, minf=318
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=0/224269, short=0/0
     lat (usec): 10=49.04%, 20=49.05%, 50=1.17%, 100=0.07%, 250=0.51%
     lat (usec): 500=0.02%, 750=0.01%, 1000=0.01%
     lat (msec): 2=0.01%, 4=0.02%, 10=0.01%, 20=0.01%, 50=0.01%
     lat (msec): 100=0.05%, 250=0.03%, 500=0.01%, 2000=0.01%

Run status group 0 (all jobs):
  WRITE: io=897076KiB, aggrb=26726KiB/s, minb=26726KiB/s,
maxb=26726KiB/s, mint=34371msec, maxt=34371msec

Disk stats (read/write):
  sda: ios=218/7041, merge=0/217243, ticks=16638/4254061,
in_queue=4270696, util=98.92%
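
The "perfect scheduler" heuristic mentioned above could look roughly
like this in deadline's dispatch path. This is a hypothetical sketch:
dd->nr_pending[] is an assumed per-direction counter that deadline
does not currently keep.

	/*
	 * When no deadline has expired and fewer than fifo_batch
	 * requests are pending for this direction, start the batch from
	 * the lowest-sectored request, so that no pending request is
	 * skipped by the ascending-sector sweep.
	 */
	if (!deadline_check_fifo(dd, data_dir) &&
	    dd->nr_pending[data_dir] < dd->fifo_batch)
		rq = rb_entry_rq(rb_first(&dd->sort_list[data_dir]));
	else
		rq = rq_entry_fifo(dd->fifo_list[data_dir].next);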

Corrado

-- 
__________________________________________________________________________

dott. Corrado Zoccolo                          mailto:czoccolo@gmail.com
PhD - Department of Computer Science - University of Pisa, Italy
--------------------------------------------------------------------------

[-- Attachment #2: test2.fio --]
[-- Type: application/octet-stream, Size: 5772 bytes --]

[seqread]
directory=/mnt/test/corrado
rw=read
size=1G
runtime=30
ioengine=psync
filename=sread

[seqwrite]
stonewall
directory=/mnt/test/corrado
rw=write
size=1G
runtime=30
ioengine=psync
filename=swrite
end_fsync=1

[parread.0]
stonewall
directory=/mnt/test/corrado
rw=read
size=500M
runtime=30
ioengine=psync
filename=pread.0

[parread.1]
directory=/mnt/test/corrado
rw=read
size=500M
runtime=30
ioengine=psync
filename=pread.1

[parwrite.0]
stonewall
directory=/mnt/test/corrado
rw=write
size=1G
runtime=30
ioengine=psync
filename=pwrite.0
end_fsync=1

[parwrite.1]
directory=/mnt/test/corrado
rw=write
size=1G
runtime=30
ioengine=psync
filename=pwrite.1
end_fsync=1

[randomread2.0]
stonewall
directory=/mnt/test/corrado
rw=randread
size=1G
runtime=30
ioengine=psync
filename=randomread.1.0

[randomread2.1]
directory=/mnt/test/corrado
rw=randread
size=1G
runtime=30
ioengine=psync
filename=randomread.1.0

[randomreadseqwrites4.w]
stonewall
directory=/mnt/test/corrado
rw=write
size=2G
runtime=30
ioengine=psync
filename=swrite.1

[randomreadseqwrites4.0]
directory=/mnt/test/corrado
rw=randread
size=1G
runtime=30
ioengine=psync
filename=randomread.1.0

[randomreadseqwrites4.1]
directory=/mnt/test/corrado
rw=randread
size=1G
runtime=30
ioengine=psync
filename=randomread.1.0

[randomreadseqwrites4.2]
directory=/mnt/test/corrado
rw=randread
size=1G
runtime=30
ioengine=psync
filename=randomread.1.0

[randomreadseqwrites4.3]
directory=/mnt/test/corrado
rw=randread
size=1G
runtime=30
ioengine=psync
filename=randomread.1.0


[randomreadseqwrites5.w]
stonewall
directory=/mnt/test/corrado
rw=write
size=2G
runtime=30
ioengine=psync
filename=swrite.1

[randomreadseqwrites5.0]
directory=/mnt/test/corrado
rw=randread
size=2G
runtime=30
ioengine=psync
filename=randomread.2.0

[randomreadseqwrites5.1]
directory=/mnt/test/corrado
rw=randread
size=2G
runtime=30
ioengine=psync
filename=randomread.2.0

[randomreadseqwrites5.2]
directory=/mnt/test/corrado
rw=randread
size=2G
runtime=30
ioengine=psync
filename=randomread.2.0

[randomreadseqwrites5.3]
directory=/mnt/test/corrado
rw=randread
size=2G
runtime=30
ioengine=psync
filename=randomread.2.0

[randomreadseqwrites5.4]
directory=/mnt/test/corrado
rw=randread
size=2G
runtime=30
ioengine=psync
filename=randomread.2.0

[randomreadseqwrites5.5]
directory=/mnt/test/corrado
rw=randread
size=2G
runtime=30
ioengine=psync
filename=randomread.2.0

[randomreadseqwrites5.6]
directory=/mnt/test/corrado
rw=randread
size=2G
runtime=30
ioengine=psync
filename=randomread.2.0

[randomreadseqwrites5.7]
directory=/mnt/test/corrado
rw=randread
size=2G
runtime=30
ioengine=psync
filename=randomread.2.0

[randomreadseqwrites5.8]
directory=/mnt/test/corrado
rw=randread
size=2G
runtime=30
ioengine=psync
filename=randomread.2.0

[randomreadseqwrites5.9]
directory=/mnt/test/corrado
rw=randread
size=2G
runtime=30
ioengine=psync
filename=randomread.2.0

[randomreadseqwrites5.10]
directory=/mnt/test/corrado
rw=randread
size=2G
runtime=30
ioengine=psync
filename=randomread.2.0

[randomreadseqwrites5.11]
directory=/mnt/test/corrado
rw=randread
size=2G
runtime=30
ioengine=psync
filename=randomread.2.0

[randomreadseqwrites5.12]
directory=/mnt/test/corrado
rw=randread
size=2G
runtime=30
ioengine=psync
filename=randomread.2.0

[randomreadseqwrites5.13]
directory=/mnt/test/corrado
rw=randread
size=2G
runtime=30
ioengine=psync
filename=randomread.2.0

[randomreadseqwrites5.14]
directory=/mnt/test/corrado
rw=randread
size=2G
runtime=30
ioengine=psync
filename=randomread.2.0

[randomreadseqwrites5.15]
directory=/mnt/test/corrado
rw=randread
size=2G
runtime=30
ioengine=psync
filename=randomread.2.0

[randomreadseqwrites5.16]
directory=/mnt/test/corrado
rw=randread
size=2G
runtime=30
ioengine=psync
filename=randomread.2.0

[randomreadseqwrites5.17]
directory=/mnt/test/corrado
rw=randread
size=2G
runtime=30
ioengine=psync
filename=randomread.2.0

[randomreadseqwrites5.18]
directory=/mnt/test/corrado
rw=randread
size=2G
runtime=30
ioengine=psync
filename=randomread.2.0

[randomreadseqwrites5.19]
directory=/mnt/test/corrado
rw=randread
size=2G
runtime=30
ioengine=psync
filename=randomread.2.0

[randomreadseqwrites5.20]
directory=/mnt/test/corrado
rw=randread
size=2G
runtime=30
ioengine=psync
filename=randomread.2.0

[randomreadseqwrites5.21]
directory=/mnt/test/corrado
rw=randread
size=2G
runtime=30
ioengine=psync
filename=randomread.2.0

[randomreadseqwrites5.22]
directory=/mnt/test/corrado
rw=randread
size=2G
runtime=30
ioengine=psync
filename=randomread.2.0

[randomreadseqwrites5.23]
directory=/mnt/test/corrado
rw=randread
size=2G
runtime=30
ioengine=psync
filename=randomread.2.0

[randomreadseqwrites5.24]
directory=/mnt/test/corrado
rw=randread
size=2G
runtime=30
ioengine=psync
filename=randomread.2.0

[randomreadseqwrites5.25]
directory=/mnt/test/corrado
rw=randread
size=2G
runtime=30
ioengine=psync
filename=randomread.2.0

[randomreadseqwrites5.26]
directory=/mnt/test/corrado
rw=randread
size=2G
runtime=30
ioengine=psync
filename=randomread.2.0

[randomreadseqwrites5.27]
directory=/mnt/test/corrado
rw=randread
size=2G
runtime=30
ioengine=psync
filename=randomread.2.0

[randomreadseqwrites5.28]
directory=/mnt/test/corrado
rw=randread
size=2G
runtime=30
ioengine=psync
filename=randomread.2.0

[randomreadseqwrites5.29]
directory=/mnt/test/corrado
rw=randread
size=2G
runtime=30
ioengine=psync
filename=randomread.2.0

[randomreadseqwrites5.30]
directory=/mnt/test/corrado
rw=randread
size=2G
runtime=30
ioengine=psync
filename=randomread.2.0

[randomreadseqwrites5.31]
directory=/mnt/test/corrado
rw=randread
size=2G
runtime=30
ioengine=psync
filename=randomread.2.0

[randomreadseqwrites5.32]
directory=/mnt/test/corrado
rw=randread
size=2G
runtime=30
ioengine=psync
filename=randomread.2.0

[-- Attachment #3: deadline-iosched-orig.2 --]
[-- Type: application/octet-stream, Size: 40765 bytes --]

seqread: (g=0): rw=read, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
seqwrite: (g=1): rw=write, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
parread.0: (g=2): rw=read, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
parread.1: (g=2): rw=read, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
parwrite.0: (g=3): rw=write, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
parwrite.1: (g=3): rw=write, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomread2.0: (g=4): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomread2.1: (g=4): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites4.w: (g=5): rw=write, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites4.0: (g=5): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites4.1: (g=5): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites4.2: (g=5): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites4.3: (g=5): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.w: (g=6): rw=write, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.0: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.1: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.2: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.3: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.4: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.5: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.6: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.7: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.8: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.9: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.10: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.11: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.12: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.13: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.14: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.15: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.16: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.17: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.18: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.19: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.20: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.21: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.22: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.23: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.24: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.25: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.26: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.27: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.28: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.29: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.30: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.31: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.32: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
Starting 47 processes
seqwrite: Laying out IO file(s) (1 file(s) / 1024MiB)
parwrite.0: Laying out IO file(s) (1 file(s) / 1024MiB)
parwrite.1: Laying out IO file(s) (1 file(s) / 1024MiB)
randomreadseqwrites4.w: Laying out IO file(s) (1 file(s) / 2048MiB)
randomreadseqwrites5.w: Laying out IO file(s) (1 file(s) / 2048MiB)

seqread: (groupid=0, jobs=1): err= 0: pid=3802
  read : io=1006MiB, bw=35150KiB/s, iops=8581, runt= 30002msec
    clat (usec): min=2, max=22539, avg=115.14, stdev=703.10
    bw (KiB/s) : min=30548, max=36757, per=100.29%, avg=35251.93, stdev=1604.58
  cpu          : usr=0.96%, sys=4.65%, ctx=7869, majf=0, minf=17
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=257469/0, short=0/0
     lat (usec): 4=79.54%, 10=17.03%, 20=0.27%, 50=0.02%, 100=0.01%
     lat (usec): 250=0.10%, 500=0.01%, 750=0.01%
     lat (msec): 2=0.08%, 4=1.82%, 10=1.07%, 20=0.05%, 50=0.01%
seqwrite: (groupid=1, jobs=1): err= 0: pid=3803
  write: io=1024MiB, bw=31342KiB/s, iops=7652, runt= 34247msec
    clat (usec): min=14, max=4378K, avg=112.72, stdev=13181.06
  cpu          : usr=1.31%, sys=15.55%, ctx=640, majf=0, minf=155
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=0/262061, short=0/0
     lat (usec): 20=80.75%, 50=18.78%, 100=0.09%, 250=0.33%, 500=0.01%
     lat (usec): 750=0.01%, 1000=0.01%
     lat (msec): 2=0.01%, 4=0.01%, 20=0.01%, 100=0.01%, 250=0.03%
     lat (msec): 500=0.01%, 750=0.01%, 2000=0.01%, >=2000=0.01%
parread.0: (groupid=2, jobs=1): err= 0: pid=3808
  read : io=265204KiB, bw=9045KiB/s, iops=2208, runt= 30021msec
    clat (usec): min=2, max=104560, avg=451.42, stdev=4071.93
    bw (KiB/s) : min= 7063, max=11227, per=50.10%, avg=9050.47, stdev=950.30
  cpu          : usr=0.31%, sys=1.29%, ctx=2108, majf=0, minf=19
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=66301/0, short=0/0
     lat (usec): 4=78.43%, 10=18.10%, 20=0.31%, 50=0.01%, 100=0.03%
     lat (usec): 250=0.07%, 500=0.01%, 750=0.01%
     lat (msec): 2=0.07%, 4=1.23%, 10=0.86%, 20=0.03%, 50=0.73%
     lat (msec): 100=0.12%, 250=0.01%
parread.1: (groupid=2, jobs=1): err= 0: pid=3809
  read : io=264436KiB, bw=9025KiB/s, iops=2203, runt= 30002msec
    clat (usec): min=2, max=79666, avg=452.27, stdev=4023.78
    bw (KiB/s) : min= 7017, max=10200, per=50.07%, avg=9044.53, stdev=941.45
  cpu          : usr=0.33%, sys=1.24%, ctx=2081, majf=0, minf=19
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=66109/0, short=0/0
     lat (usec): 4=66.45%, 10=30.13%, 20=0.26%, 50=0.02%, 100=0.04%
     lat (usec): 250=0.07%
     lat (msec): 2=0.09%, 4=1.18%, 10=0.86%, 20=0.03%, 50=0.74%
     lat (msec): 100=0.13%
parwrite.0: (groupid=3, jobs=1): err= 0: pid=3810
  write: io=484512KiB, bw=14177KiB/s, iops=3461, runt= 34994msec
    clat (usec): min=15, max=2846K, avg=248.15, stdev=12722.77
  cpu          : usr=0.86%, sys=9.45%, ctx=374, majf=0, minf=142
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=0/121128, short=0/0
     lat (usec): 20=44.35%, 50=54.30%, 100=0.66%, 250=0.55%, 500=0.01%
     lat (usec): 750=0.01%, 1000=0.01%
     lat (msec): 2=0.02%, 4=0.04%, 10=0.01%, 20=0.01%, 250=0.01%
     lat (msec): 500=0.04%, 750=0.01%, 1000=0.01%, 2000=0.01%, >=2000=0.01%
parwrite.1: (groupid=3, jobs=1): err= 0: pid=3811
  write: io=541852KiB, bw=15855KiB/s, iops=3870, runt= 34995msec
    clat (usec): min=15, max=2906K, avg=220.79, stdev=11894.60
  cpu          : usr=0.89%, sys=10.21%, ctx=631, majf=0, minf=148
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=0/135463, short=0/0
     lat (usec): 20=48.62%, 50=50.45%, 100=0.44%, 250=0.37%, 500=0.01%
     lat (usec): 750=0.01%, 1000=0.01%
     lat (msec): 2=0.03%, 4=0.03%, 10=0.01%, 20=0.01%, 50=0.01%
     lat (msec): 250=0.01%, 500=0.03%, 750=0.01%, 1000=0.01%, 2000=0.01%
     lat (msec): >=2000=0.01%
randomread2.0: (groupid=4, jobs=1): err= 0: pid=3818
  read : io=5248KiB, bw=179KiB/s, iops=43, runt= 30012msec
    clat (usec): min=4, max=253449, avg=22870.82, stdev=14712.39
    bw (KiB/s) : min=   17, max=  244, per=50.76%, avg=180.19, stdev=44.11
  cpu          : usr=0.01%, sys=0.10%, ctx=1433, majf=0, minf=66
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=1312/0, short=0/0
     lat (usec): 10=0.23%
     lat (msec): 10=1.14%, 20=55.87%, 50=40.24%, 100=2.29%, 250=0.15%
     lat (msec): 500=0.08%
randomread2.1: (groupid=4, jobs=1): err= 0: pid=3819
  read : io=5180KiB, bw=176KiB/s, iops=43, runt= 30004msec
    clat (usec): min=4, max=249707, avg=23164.74, stdev=14971.53
    bw (KiB/s) : min=   18, max=  238, per=50.03%, avg=177.60, stdev=45.27
  cpu          : usr=0.00%, sys=0.09%, ctx=1430, majf=0, minf=66
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=1295/0, short=0/0
     lat (usec): 10=0.23%
     lat (msec): 10=0.93%, 20=53.82%, 50=42.08%, 100=2.70%, 250=0.23%
randomreadseqwrites4.w: (groupid=5, jobs=1): err= 0: pid=3820
  write: io=572468KiB, bw=19446KiB/s, iops=4747, runt= 30144msec
    clat (usec): min=15, max=1492K, avg=208.87, stdev=11111.03
  cpu          : usr=0.87%, sys=9.67%, ctx=270, majf=0, minf=133
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=0/143117, short=0/0
     lat (usec): 20=80.40%, 50=19.21%, 100=0.08%, 250=0.25%, 500=0.01%
     lat (usec): 750=0.01%, 1000=0.01%
     lat (msec): 2=0.01%, 4=0.01%, 500=0.03%, 750=0.01%, 2000=0.01%
randomreadseqwrites4.0: (groupid=5, jobs=1): err= 0: pid=3821
  read : io=1164KiB, bw=39KiB/s, iops=9, runt= 30082msec
    clat (msec): min=12, max=260, avg=103.35, stdev=51.57
    bw (KiB/s) : min=   28, max=   97, per=24.60%, avg=38.87, stdev= 9.71
  cpu          : usr=0.01%, sys=0.04%, ctx=291, majf=0, minf=572
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=291/0, short=0/0

     lat (msec): 20=0.34%, 50=29.21%, 100=5.84%, 250=64.26%, 500=0.34%
randomreadseqwrites4.1: (groupid=5, jobs=1): err= 0: pid=3822
  read : io=1192KiB, bw=40KiB/s, iops=9, runt= 30062msec
    clat (msec): min=22, max=240, avg=100.86, stdev=48.57
    bw (KiB/s) : min=   29, max=  112, per=25.36%, avg=40.08, stdev=11.19
  cpu          : usr=0.01%, sys=0.03%, ctx=298, majf=0, minf=588
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=298/0, short=0/0

     lat (msec): 50=29.87%, 100=5.70%, 250=64.43%
randomreadseqwrites4.2: (groupid=5, jobs=1): err= 0: pid=3823
  read : io=1140KiB, bw=38KiB/s, iops=9, runt= 30057msec
    clat (msec): min=19, max=254, avg=105.44, stdev=51.15
    bw (KiB/s) : min=   27, max=   95, per=24.23%, avg=38.28, stdev= 9.48
  cpu          : usr=0.01%, sys=0.05%, ctx=285, majf=0, minf=560
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=285/0, short=0/0

     lat (msec): 20=0.35%, 50=28.42%, 100=4.21%, 250=66.67%, 500=0.35%
randomreadseqwrites4.3: (groupid=5, jobs=1): err= 0: pid=3824
  read : io=1164KiB, bw=39KiB/s, iops=9, runt= 30073msec
    clat (msec): min=21, max=246, avg=103.32, stdev=49.43
    bw (KiB/s) : min=   29, max=  114, per=24.87%, avg=39.30, stdev=11.25
  cpu          : usr=0.01%, sys=0.02%, ctx=292, majf=0, minf=574
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=291/0, short=0/0

     lat (msec): 50=28.87%, 100=4.12%, 250=67.01%
randomreadseqwrites5.w: (groupid=6, jobs=1): err= 0: pid=3830
  write: io=664228KiB, bw=21512KiB/s, iops=5251, runt= 31618msec
    clat (usec): min=5, max=4220K, avg=188.74, stdev=16330.73
  cpu          : usr=1.00%, sys=6.52%, ctx=283, majf=0, minf=134
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=0/166057, short=0/0
     lat (usec): 10=41.66%, 20=54.92%, 50=3.06%, 100=0.10%, 250=0.21%
     lat (usec): 500=0.01%, 750=0.01%, 1000=0.01%
     lat (msec): 2=0.01%, 4=0.01%, 250=0.02%, 500=0.01%, 750=0.01%
     lat (msec): 1000=0.01%, 2000=0.01%, >=2000=0.01%
randomreadseqwrites5.0: (groupid=6, jobs=1): err= 0: pid=3831
  read : io=332KiB, bw=11KiB/s, iops=2, runt= 30300msec
    clat (usec): min=9, max=1783K, avg=365040.31, stdev=377635.37
    bw (KiB/s) : min=    3, max=   35, per=3.60%, avg=12.57, stdev= 8.67
  cpu          : usr=0.00%, sys=0.00%, ctx=82, majf=0, minf=134
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=83/0, short=0/0
     lat (usec): 10=1.20%, 20=13.25%
     lat (msec): 50=6.02%, 100=4.82%, 250=27.71%, 500=20.48%, 750=10.84%
     lat (msec): 1000=8.43%, 2000=7.23%
randomreadseqwrites5.1: (groupid=6, jobs=1): err= 0: pid=3832
  read : io=292KiB, bw=9KiB/s, iops=2, runt= 30415msec
    clat (usec): min=6, max=2208K, avg=416613.11, stdev=415815.13
    bw (KiB/s) : min=    1, max=   26, per=3.01%, avg=10.52, stdev= 6.14
  cpu          : usr=0.00%, sys=0.01%, ctx=83, majf=0, minf=123
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=73/0, short=0/0
     lat (usec): 10=1.37%, 20=6.85%, 50=1.37%
     lat (msec): 20=1.37%, 50=4.11%, 100=8.22%, 250=26.03%, 500=17.81%
     lat (msec): 750=12.33%, 1000=12.33%, 2000=6.85%, >=2000=1.37%
randomreadseqwrites5.2: (groupid=6, jobs=1): err= 0: pid=3833
  read : io=312KiB, bw=10KiB/s, iops=2, runt= 30171msec
    clat (usec): min=7, max=1478K, avg=386789.32, stdev=395263.09
    bw (KiB/s) : min=    3, max=   47, per=3.18%, avg=11.09, stdev= 8.02
  cpu          : usr=0.00%, sys=0.01%, ctx=78, majf=0, minf=112
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=78/0, short=0/0
     lat (usec): 10=5.13%, 20=14.10%, 50=3.85%
     lat (msec): 20=1.28%, 50=2.56%, 100=7.69%, 250=14.10%, 500=19.23%
     lat (msec): 750=12.82%, 1000=7.69%, 2000=11.54%
randomreadseqwrites5.3: (groupid=6, jobs=1): err= 0: pid=3834
  read : io=308KiB, bw=10KiB/s, iops=2, runt= 30093msec
    clat (usec): min=7, max=1495K, avg=390792.22, stdev=387485.25
    bw (KiB/s) : min=    2, max=   39, per=3.10%, avg=10.82, stdev= 8.34
  cpu          : usr=0.00%, sys=0.01%, ctx=80, majf=0, minf=131
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=77/0, short=0/0
     lat (usec): 10=1.30%, 20=16.88%, 50=1.30%
     lat (msec): 50=7.79%, 100=3.90%, 250=16.88%, 500=19.48%, 750=12.99%
     lat (msec): 1000=12.99%, 2000=6.49%
randomreadseqwrites5.4: (groupid=6, jobs=1): err= 0: pid=3835
  read : io=324KiB, bw=11KiB/s, iops=2, runt= 30148msec
    clat (usec): min=6, max=1858K, avg=372174.36, stdev=377025.51
    bw (KiB/s) : min=    2, max=   40, per=3.50%, avg=12.22, stdev= 9.79
  cpu          : usr=0.00%, sys=0.01%, ctx=83, majf=0, minf=129
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=81/0, short=0/0
     lat (usec): 10=3.70%, 20=14.81%
     lat (msec): 50=7.41%, 100=1.23%, 250=17.28%, 500=27.16%, 750=13.58%
     lat (msec): 1000=7.41%, 2000=7.41%
randomreadseqwrites5.5: (groupid=6, jobs=1): err= 0: pid=3836
  read : io=332KiB, bw=11KiB/s, iops=2, runt= 30080msec
    clat (usec): min=7, max=1207K, avg=362392.07, stdev=343666.83
    bw (KiB/s) : min=    3, max=   34, per=3.31%, avg=11.56, stdev= 7.80
  cpu          : usr=0.00%, sys=0.01%, ctx=79, majf=0, minf=126
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=83/0, short=0/0
     lat (usec): 10=3.61%, 20=18.07%
     lat (msec): 50=1.20%, 100=4.82%, 250=19.28%, 500=26.51%, 750=7.23%
     lat (msec): 1000=13.25%, 2000=6.02%
randomreadseqwrites5.6: (groupid=6, jobs=1): err= 0: pid=3837
  read : io=312KiB, bw=10KiB/s, iops=2, runt= 30013msec
    clat (usec): min=7, max=1547K, avg=384758.45, stdev=362855.40
    bw (KiB/s) : min=    3, max=   24, per=3.15%, avg=11.00, stdev= 6.04
  cpu          : usr=0.00%, sys=0.01%, ctx=83, majf=0, minf=120
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=78/0, short=0/0
     lat (usec): 10=6.41%, 20=5.13%, 50=1.28%
     lat (msec): 50=2.56%, 100=8.97%, 250=20.51%, 500=26.92%, 750=12.82%
     lat (msec): 1000=10.26%, 2000=5.13%
randomreadseqwrites5.7: (groupid=6, jobs=1): err= 0: pid=3838
  read : io=296KiB, bw=10KiB/s, iops=2, runt= 30125msec
    clat (usec): min=10, max=1585K, avg=407059.88, stdev=385259.93
    bw (KiB/s) : min=    2, max=   25, per=3.00%, avg=10.45, stdev= 6.62
  cpu          : usr=0.00%, sys=0.00%, ctx=80, majf=0, minf=124
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=74/0, short=0/0
     lat (usec): 20=12.16%, 50=1.35%
     lat (msec): 50=8.11%, 100=5.41%, 250=18.92%, 500=20.27%, 750=12.16%
     lat (msec): 1000=10.81%, 2000=10.81%
randomreadseqwrites5.8: (groupid=6, jobs=1): err= 0: pid=3839
  read : io=324KiB, bw=10KiB/s, iops=2, runt= 30444msec
    clat (usec): min=6, max=1054K, avg=375821.70, stdev=348073.50
    bw (KiB/s) : min=    3, max=   42, per=3.18%, avg=11.09, stdev= 7.44
  cpu          : usr=0.00%, sys=0.02%, ctx=85, majf=0, minf=127
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=81/0, short=0/0
     lat (usec): 10=2.47%, 20=11.11%, 50=1.23%
     lat (msec): 50=7.41%, 100=7.41%, 250=22.22%, 500=12.35%, 750=14.81%
     lat (msec): 1000=16.05%, 2000=4.94%
randomreadseqwrites5.9: (groupid=6, jobs=1): err= 0: pid=3840
  read : io=328KiB, bw=11KiB/s, iops=2, runt= 30057msec
    clat (usec): min=6, max=1107K, avg=366526.44, stdev=328473.35
    bw (KiB/s) : min=    3, max=   36, per=3.23%, avg=11.27, stdev= 8.01
  cpu          : usr=0.00%, sys=0.01%, ctx=82, majf=0, minf=133
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=82/0, short=0/0
     lat (usec): 10=2.44%, 20=12.20%, 50=3.66%
     lat (msec): 50=3.66%, 100=8.54%, 250=12.20%, 500=26.83%, 750=13.41%
     lat (msec): 1000=12.20%, 2000=4.88%
randomreadseqwrites5.10: (groupid=6, jobs=1): err= 0: pid=3841
  read : io=304KiB, bw=10KiB/s, iops=2, runt= 30133msec
    clat (usec): min=10, max=1371K, avg=396468.01, stdev=339582.06
    bw (KiB/s) : min=    3, max=   30, per=3.18%, avg=11.11, stdev= 6.86
  cpu          : usr=0.00%, sys=0.01%, ctx=84, majf=0, minf=121
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=76/0, short=0/0
     lat (usec): 20=10.53%, 50=1.32%
     lat (msec): 20=1.32%, 50=6.58%, 100=5.26%, 250=13.16%, 500=28.95%
     lat (msec): 750=17.11%, 1000=9.21%, 2000=6.58%
randomreadseqwrites5.11: (groupid=6, jobs=1): err= 0: pid=3842
  read : io=324KiB, bw=10KiB/s, iops=2, runt= 30408msec
    clat (usec): min=6, max=1531K, avg=375381.32, stdev=351836.40
    bw (KiB/s) : min=    3, max=   31, per=3.42%, avg=11.94, stdev= 7.33
  cpu          : usr=0.00%, sys=0.00%, ctx=88, majf=0, minf=131
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=81/0, short=0/0
     lat (usec): 10=1.23%, 20=9.88%
     lat (msec): 50=1.23%, 100=9.88%, 250=22.22%, 500=33.33%, 750=7.41%
     lat (msec): 1000=6.17%, 2000=8.64%
randomreadseqwrites5.12: (groupid=6, jobs=1): err= 0: pid=3843
  read : io=308KiB, bw=10KiB/s, iops=2, runt= 30164msec
    clat (usec): min=6, max=1298K, avg=391722.96, stdev=358708.26
    bw (KiB/s) : min=    3, max=   43, per=3.28%, avg=11.44, stdev= 8.25
  cpu          : usr=0.00%, sys=0.01%, ctx=88, majf=0, minf=131
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=77/0, short=0/0
     lat (usec): 10=2.60%, 20=7.79%
     lat (msec): 50=3.90%, 100=12.99%, 250=18.18%, 500=23.38%, 750=14.29%
     lat (msec): 1000=9.09%, 2000=7.79%
randomreadseqwrites5.13: (groupid=6, jobs=1): err= 0: pid=3844
  read : io=316KiB, bw=10KiB/s, iops=2, runt= 30150msec
    clat (usec): min=10, max=1332K, avg=381627.43, stdev=371940.20
    bw (KiB/s) : min=    3, max=   28, per=3.26%, avg=11.36, stdev= 7.56
  cpu          : usr=0.00%, sys=0.01%, ctx=84, majf=0, minf=124
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=79/0, short=0/0
     lat (usec): 20=15.19%, 50=2.53%
     lat (msec): 50=1.27%, 100=6.33%, 250=18.99%, 500=26.58%, 750=12.66%
     lat (msec): 1000=6.33%, 2000=10.13%
randomreadseqwrites5.14: (groupid=6, jobs=1): err= 0: pid=3845
  read : io=308KiB, bw=10KiB/s, iops=2, runt= 30323msec
    clat (usec): min=6, max=1530K, avg=393786.21, stdev=358888.03
    bw (KiB/s) : min=    3, max=   30, per=3.15%, avg=11.00, stdev= 7.38
  cpu          : usr=0.00%, sys=0.00%, ctx=86, majf=0, minf=121
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=77/0, short=0/0
     lat (usec): 10=2.60%, 20=11.69%, 50=1.30%
     lat (msec): 50=6.49%, 100=1.30%, 250=16.88%, 500=27.27%, 750=14.29%
     lat (msec): 1000=11.69%, 2000=6.49%
randomreadseqwrites5.15: (groupid=6, jobs=1): err= 0: pid=3846
  read : io=340KiB, bw=11KiB/s, iops=2, runt= 30118msec
    clat (usec): min=6, max=1312K, avg=354302.99, stdev=315368.16
    bw (KiB/s) : min=    3, max=   27, per=3.41%, avg=11.89, stdev= 6.30
  cpu          : usr=0.00%, sys=0.02%, ctx=88, majf=0, minf=132
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=85/0, short=0/0
     lat (usec): 10=2.35%, 20=12.94%, 50=1.18%
     lat (msec): 50=2.35%, 100=7.06%, 250=18.82%, 500=25.88%, 750=18.82%
     lat (msec): 1000=4.71%, 2000=5.88%
randomreadseqwrites5.16: (groupid=6, jobs=1): err= 0: pid=3847
  read : io=312KiB, bw=10KiB/s, iops=2, runt= 30136msec
    clat (usec): min=7, max=1620K, avg=386325.04, stdev=382732.11
    bw (KiB/s) : min=    2, max=   27, per=3.13%, avg=10.91, stdev= 6.65
  cpu          : usr=0.00%, sys=0.01%, ctx=83, majf=0, minf=118
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=78/0, short=0/0
     lat (usec): 10=2.56%, 20=12.82%
     lat (msec): 50=3.85%, 100=8.97%, 250=14.10%, 500=25.64%, 750=12.82%
     lat (msec): 1000=11.54%, 2000=7.69%
randomreadseqwrites5.17: (groupid=6, jobs=1): err= 0: pid=3848
  read : io=316KiB, bw=10KiB/s, iops=2, runt= 30177msec
    clat (usec): min=11, max=1504K, avg=381965.22, stdev=333049.90
    bw (KiB/s) : min=    2, max=   30, per=3.22%, avg=11.22, stdev= 6.68
  cpu          : usr=0.00%, sys=0.01%, ctx=84, majf=0, minf=123
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=79/0, short=0/0
     lat (usec): 20=6.33%, 50=1.27%, 100=1.27%
     lat (msec): 50=6.33%, 100=10.13%, 250=18.99%, 500=26.58%, 750=13.92%
     lat (msec): 1000=10.13%, 2000=5.06%
randomreadseqwrites5.18: (groupid=6, jobs=1): err= 0: pid=3849
  read : io=284KiB, bw=9KiB/s, iops=2, runt= 30392msec
    clat (usec): min=8, max=1551K, avg=428029.46, stdev=400259.38
    bw (KiB/s) : min=    2, max=   25, per=2.98%, avg=10.39, stdev= 6.16
  cpu          : usr=0.00%, sys=0.01%, ctx=72, majf=0, minf=110
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=71/0, short=0/0
     lat (usec): 10=2.82%, 20=9.86%
     lat (msec): 50=4.23%, 100=2.82%, 250=18.31%, 500=32.39%, 750=9.86%
     lat (msec): 1000=7.04%, 2000=12.68%
randomreadseqwrites5.19: (groupid=6, jobs=1): err= 0: pid=3850
  read : io=328KiB, bw=11KiB/s, iops=2, runt= 30086msec
    clat (usec): min=6, max=1417K, avg=366869.70, stdev=359602.64
    bw (KiB/s) : min=    2, max=   46, per=3.46%, avg=12.08, stdev= 8.78
  cpu          : usr=0.00%, sys=0.01%, ctx=81, majf=0, minf=130
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=82/0, short=0/0
     lat (usec): 10=2.44%, 20=15.85%, 50=1.22%
     lat (msec): 50=3.66%, 100=3.66%, 250=21.95%, 500=20.73%, 750=12.20%
     lat (msec): 1000=12.20%, 2000=6.10%
randomreadseqwrites5.20: (groupid=6, jobs=1): err= 0: pid=3851
  read : io=324KiB, bw=11KiB/s, iops=2, runt= 30140msec
    clat (usec): min=8, max=1540K, avg=372067.28, stdev=323119.36
    bw (KiB/s) : min=    2, max=   22, per=3.06%, avg=10.67, stdev= 5.40
  cpu          : usr=0.00%, sys=0.00%, ctx=85, majf=0, minf=129
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=81/0, short=0/0
     lat (usec): 10=1.23%, 20=9.88%, 50=1.23%
     lat (msec): 50=4.94%, 100=8.64%, 250=18.52%, 500=19.75%, 750=23.46%
     lat (msec): 1000=8.64%, 2000=3.70%
randomreadseqwrites5.21: (groupid=6, jobs=1): err= 0: pid=3852
  read : io=304KiB, bw=10KiB/s, iops=2, runt= 30130msec
    clat (usec): min=5, max=1550K, avg=396417.49, stdev=337139.00
    bw (KiB/s) : min=    2, max=   34, per=2.93%, avg=10.22, stdev= 6.10
  cpu          : usr=0.00%, sys=0.00%, ctx=78, majf=0, minf=117
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=76/0, short=0/0
     lat (usec): 10=5.26%, 20=7.89%, 50=3.95%
     lat (msec): 100=6.58%, 250=14.47%, 500=27.63%, 750=22.37%, 1000=6.58%
     lat (msec): 2000=5.26%
randomreadseqwrites5.22: (groupid=6, jobs=1): err= 0: pid=3853
  read : io=336KiB, bw=11KiB/s, iops=2, runt= 30325msec
    clat (usec): min=6, max=1825K, avg=360977.75, stdev=381519.24
    bw (KiB/s) : min=    2, max=   34, per=3.54%, avg=12.34, stdev= 8.09
  cpu          : usr=0.00%, sys=0.00%, ctx=89, majf=0, minf=126
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=84/0, short=0/0
     lat (usec): 10=2.38%, 20=10.71%
     lat (msec): 50=9.52%, 100=11.90%, 250=10.71%, 500=27.38%, 750=11.90%
     lat (msec): 1000=9.52%, 2000=5.95%
randomreadseqwrites5.23: (groupid=6, jobs=1): err= 0: pid=3854
  read : io=308KiB, bw=10KiB/s, iops=2, runt= 30223msec
    clat (usec): min=8, max=1895K, avg=392480.36, stdev=392357.07
    bw (KiB/s) : min=    3, max=   35, per=3.09%, avg=10.78, stdev= 7.15
  cpu          : usr=0.00%, sys=0.01%, ctx=74, majf=0, minf=111
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=77/0, short=0/0
     lat (usec): 10=1.30%, 20=12.99%, 50=2.60%, 100=1.30%
     lat (msec): 50=2.60%, 100=6.49%, 250=16.88%, 500=25.97%, 750=12.99%
     lat (msec): 1000=6.49%, 2000=10.39%
randomreadseqwrites5.24: (groupid=6, jobs=1): err= 0: pid=3855
  read : io=300KiB, bw=10KiB/s, iops=2, runt= 30134msec
    clat (usec): min=8, max=1629K, avg=401764.49, stdev=343292.98
    bw (KiB/s) : min=    2, max=   24, per=2.91%, avg=10.17, stdev= 5.52
  cpu          : usr=0.00%, sys=0.00%, ctx=83, majf=0, minf=114
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=75/0, short=0/0
     lat (usec): 10=4.00%, 20=5.33%, 50=1.33%
     lat (msec): 50=5.33%, 100=4.00%, 250=18.67%, 500=25.33%, 750=24.00%
     lat (msec): 1000=8.00%, 2000=4.00%
randomreadseqwrites5.25: (groupid=6, jobs=1): err= 0: pid=3856
  read : io=280KiB, bw=9KiB/s, iops=2, runt= 30103msec
    clat (usec): min=10, max=1901K, avg=430019.53, stdev=387445.36
    bw (KiB/s) : min=    2, max=   27, per=2.83%, avg= 9.88, stdev= 5.68
  cpu          : usr=0.00%, sys=0.00%, ctx=70, majf=0, minf=111
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=70/0, short=0/0
     lat (usec): 20=12.86%, 100=1.43%
     lat (msec): 50=1.43%, 100=7.14%, 250=12.86%, 500=27.14%, 750=20.00%
     lat (msec): 1000=10.00%, 2000=7.14%
randomreadseqwrites5.26: (groupid=6, jobs=1): err= 0: pid=3857
  read : io=344KiB, bw=11KiB/s, iops=2, runt= 30202msec
    clat (usec): min=9, max=1891K, avg=351158.86, stdev=369772.36
    bw (KiB/s) : min=    2, max=   30, per=3.57%, avg=12.45, stdev= 7.73
  cpu          : usr=0.00%, sys=0.01%, ctx=87, majf=0, minf=134
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=86/0, short=0/0
     lat (usec): 10=1.16%, 20=13.95%, 50=2.33%
     lat (msec): 50=3.49%, 100=10.47%, 250=18.60%, 500=25.58%, 750=6.98%
     lat (msec): 1000=11.63%, 2000=5.81%
randomreadseqwrites5.27: (groupid=6, jobs=1): err= 0: pid=3858
  read : io=288KiB, bw=9KiB/s, iops=2, runt= 30075msec
    clat (usec): min=7, max=1427K, avg=417682.67, stdev=355239.95
    bw (KiB/s) : min=    2, max=   24, per=2.86%, avg= 9.97, stdev= 5.27
  cpu          : usr=0.00%, sys=0.01%, ctx=79, majf=0, minf=117
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=72/0, short=0/0
     lat (usec): 10=1.39%, 20=9.72%, 100=1.39%
     lat (msec): 50=4.17%, 100=6.94%, 250=12.50%, 500=26.39%, 750=22.22%
     lat (msec): 1000=8.33%, 2000=6.94%
randomreadseqwrites5.28: (groupid=6, jobs=1): err= 0: pid=3859
  read : io=312KiB, bw=10KiB/s, iops=2, runt= 30097msec
    clat (usec): min=6, max=1871K, avg=385825.77, stdev=345585.77
    bw (KiB/s) : min=    2, max=   28, per=3.20%, avg=11.17, stdev= 6.28
  cpu          : usr=0.00%, sys=0.01%, ctx=81, majf=0, minf=163
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=78/0, short=0/0
     lat (usec): 10=2.56%, 20=14.10%
     lat (msec): 50=2.56%, 100=6.41%, 250=14.10%, 500=26.92%, 750=17.95%
     lat (msec): 1000=11.54%, 2000=3.85%
randomreadseqwrites5.29: (groupid=6, jobs=1): err= 0: pid=3860
  read : io=336KiB, bw=11KiB/s, iops=2, runt= 30215msec
    clat (usec): min=8, max=1350K, avg=359670.81, stdev=323956.38
    bw (KiB/s) : min=    3, max=   37, per=3.41%, avg=11.89, stdev= 8.55
  cpu          : usr=0.01%, sys=0.00%, ctx=85, majf=0, minf=128
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=84/0, short=0/0
     lat (usec): 10=1.19%, 20=10.71%, 50=1.19%
     lat (msec): 50=7.14%, 100=8.33%, 250=14.29%, 500=26.19%, 750=17.86%
     lat (msec): 1000=7.14%, 2000=5.95%
randomreadseqwrites5.30: (groupid=6, jobs=1): err= 0: pid=3861
  read : io=316KiB, bw=10KiB/s, iops=2, runt= 30158msec
    clat (usec): min=6, max=1868K, avg=381718.53, stdev=370212.64
    bw (KiB/s) : min=    2, max=   29, per=3.22%, avg=11.24, stdev= 6.77
  cpu          : usr=0.00%, sys=0.01%, ctx=81, majf=0, minf=125
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=79/0, short=0/0
     lat (usec): 10=1.27%, 20=13.92%
     lat (msec): 50=2.53%, 100=8.86%, 250=17.72%, 500=25.32%, 750=13.92%
     lat (msec): 1000=10.13%, 2000=6.33%
randomreadseqwrites5.31: (groupid=6, jobs=1): err= 0: pid=3862
  read : io=324KiB, bw=11KiB/s, iops=2, runt= 30135msec
    clat (usec): min=9, max=1416K, avg=372016.38, stdev=346495.76
    bw (KiB/s) : min=    2, max=   30, per=3.38%, avg=11.79, stdev= 7.17
  cpu          : usr=0.00%, sys=0.01%, ctx=84, majf=0, minf=126
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=81/0, short=0/0
     lat (usec): 10=1.23%, 20=12.35%, 100=1.23%
     lat (msec): 50=4.94%, 100=11.11%, 250=12.35%, 500=25.93%, 750=17.28%
     lat (msec): 1000=7.41%, 2000=6.17%
randomreadseqwrites5.32: (groupid=6, jobs=1): err= 0: pid=3863
  read : io=320KiB, bw=10KiB/s, iops=2, runt= 30311msec
    clat (usec): min=10, max=1478K, avg=378865.47, stdev=386256.97
    bw (KiB/s) : min=    2, max=   32, per=3.18%, avg=11.09, stdev= 7.78
  cpu          : usr=0.00%, sys=0.01%, ctx=83, majf=0, minf=130
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=80/0, short=0/0
     lat (usec): 20=12.50%, 50=1.25%
     lat (msec): 50=8.75%, 100=6.25%, 250=22.50%, 500=17.50%, 750=11.25%
     lat (msec): 1000=8.75%, 2000=11.25%

Run status group 0 (all jobs):
   READ: io=1006MiB, aggrb=35150KiB/s, minb=35150KiB/s, maxb=35150KiB/s, mint=30002msec, maxt=30002msec

Run status group 1 (all jobs):
  WRITE: io=1024MiB, aggrb=31342KiB/s, minb=31342KiB/s, maxb=31342KiB/s, mint=34247msec, maxt=34247msec

Run status group 2 (all jobs):
   READ: io=529640KiB, aggrb=18065KiB/s, minb=9025KiB/s, maxb=9045KiB/s, mint=30002msec, maxt=30021msec

Run status group 3 (all jobs):
  WRITE: io=1002MiB, aggrb=30032KiB/s, minb=14177KiB/s, maxb=15855KiB/s, mint=34994msec, maxt=34995msec

Run status group 4 (all jobs):
   READ: io=10428KiB, aggrb=355KiB/s, minb=176KiB/s, maxb=179KiB/s, mint=30004msec, maxt=30012msec

Run status group 5 (all jobs):
   READ: io=4660KiB, aggrb=158KiB/s, minb=38KiB/s, maxb=40KiB/s, mint=30057msec, maxt=30082msec
  WRITE: io=572468KiB, aggrb=19446KiB/s, minb=19446KiB/s, maxb=19446KiB/s, mint=30144msec, maxt=30144msec

Run status group 6 (all jobs):
   READ: io=10392KiB, aggrb=349KiB/s, minb=9KiB/s, maxb=11KiB/s, mint=30013msec, maxt=30444msec
  WRITE: io=664228KiB, aggrb=21512KiB/s, minb=21512KiB/s, maxb=21512KiB/s, mint=31618msec, maxt=31618msec

Disk stats (read/write):
  sda: ios=19780/25937, merge=382/783135, ticks=1484629/19885547, in_queue=21370159, util=98.89%
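
(For scale, the group 6 summary works out as: 33 random readers at roughly
10.6 KiB/s each, 33 x 10.6 ~= 349 KiB/s aggregate, i.e. all readers combined
get about 349/21512 ~= 1.6% of the bandwidth the lone sequential writer
sustains in the same group.)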

[-- Attachment #4: deadline-iosched-patched.2 --]
[-- Type: application/octet-stream, Size: 40672 bytes --]

seqread: (g=0): rw=read, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
seqwrite: (g=1): rw=write, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
parread.0: (g=2): rw=read, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
parread.1: (g=2): rw=read, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
parwrite.0: (g=3): rw=write, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
parwrite.1: (g=3): rw=write, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomread2.0: (g=4): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomread2.1: (g=4): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites4.w: (g=5): rw=write, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites4.0: (g=5): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites4.1: (g=5): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites4.2: (g=5): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites4.3: (g=5): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.w: (g=6): rw=write, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.0: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.1: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.2: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.3: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.4: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.5: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.6: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.7: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.8: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.9: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.10: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.11: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.12: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.13: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.14: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.15: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.16: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.17: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.18: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.19: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.20: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.21: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.22: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.23: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.24: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.25: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.26: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.27: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.28: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.29: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.30: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.31: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.32: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
Starting 47 processes
parwrite.0: Laying out IO file(s) (1 file(s) / 1024MiB)
parwrite.1: Laying out IO file(s) (1 file(s) / 1024MiB)
randomreadseqwrites4.w: Laying out IO file(s) (1 file(s) / 2048MiB)
randomreadseqwrites5.w: Laying out IO file(s) (1 file(s) / 2048MiB)

seqread: (groupid=0, jobs=1): err= 0: pid=3955
  read : io=1024MiB, bw=36193KiB/s, iops=8836, runt= 29667msec
    clat (usec): min=2, max=27585, avg=112.02, stdev=663.16
    bw (KiB/s) : min=33005, max=37076, per=100.07%, avg=36219.97, stdev=776.76
  cpu          : usr=0.90%, sys=4.09%, ctx=7995, majf=0, minf=16
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=262144/0, short=0/0
     lat (usec): 4=81.79%, 10=14.79%, 20=0.28%, 50=0.01%, 100=0.01%
     lat (usec): 250=0.10%, 500=0.01%, 750=0.01%
     lat (msec): 2=0.08%, 4=1.81%, 10=1.12%, 20=0.01%, 50=0.01%
seqwrite: (groupid=1, jobs=1): err= 0: pid=3956
  write: io=790584KiB, bw=26577KiB/s, iops=6488, runt= 30460msec
    clat (usec): min=7, max=4443K, avg=152.40, stdev=10850.06
  cpu          : usr=1.01%, sys=7.65%, ctx=679, majf=0, minf=305
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=0/197646, short=0/0
     lat (usec): 10=52.54%, 20=46.03%, 50=0.71%, 100=0.06%, 250=0.44%
     lat (usec): 500=0.01%, 750=0.01%, 1000=0.06%
     lat (msec): 2=0.01%, 4=0.02%, 10=0.01%, 20=0.01%, 50=0.01%
     lat (msec): 100=0.06%, 250=0.04%, 500=0.01%, 1000=0.01%, >=2000=0.01%
parread.0: (groupid=2, jobs=1): err= 0: pid=3959
  read : io=284660KiB, bw=9707KiB/s, iops=2369, runt= 30029msec
    clat (usec): min=2, max=60224, avg=420.76, stdev=3597.46
    bw (KiB/s) : min= 7746, max=10464, per=50.12%, avg=9720.76, stdev=601.47
  cpu          : usr=0.20%, sys=1.24%, ctx=2238, majf=0, minf=19
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=71165/0, short=0/0
     lat (usec): 4=82.34%, 10=14.22%, 20=0.29%, 50=0.02%, 100=0.05%
     lat (usec): 250=0.04%, 750=0.01%, 1000=0.01%
     lat (msec): 2=0.07%, 4=1.22%, 10=0.85%, 20=0.01%, 50=0.86%
     lat (msec): 100=0.02%
parread.1: (groupid=2, jobs=1): err= 0: pid=3960
  read : io=284148KiB, bw=9698KiB/s, iops=2367, runt= 30001msec
    clat (usec): min=2, max=68662, avg=421.16, stdev=3619.46
    bw (KiB/s) : min= 7762, max=10683, per=50.08%, avg=9712.64, stdev=631.25
  cpu          : usr=0.27%, sys=1.18%, ctx=2237, majf=0, minf=19
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=71037/0, short=0/0
     lat (usec): 4=82.43%, 10=14.16%, 20=0.26%, 50=0.01%, 100=0.04%
     lat (usec): 250=0.06%, 1000=0.01%
     lat (msec): 2=0.09%, 4=1.18%, 10=0.88%, 20=0.02%, 50=0.85%
     lat (msec): 100=0.02%
parwrite.0: (groupid=3, jobs=1): err= 0: pid=3961
  write: io=552664KiB, bw=16300KiB/s, iops=3979, runt= 34719msec
    clat (usec): min=15, max=1295K, avg=215.76, stdev=9844.47
  cpu          : usr=0.94%, sys=10.24%, ctx=451, majf=0, minf=154
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=0/138166, short=0/0
     lat (usec): 20=52.66%, 50=46.28%, 100=0.48%, 250=0.47%, 500=0.01%
     lat (usec): 750=0.01%, 1000=0.01%
     lat (msec): 2=0.03%, 4=0.02%, 10=0.01%, 20=0.01%, 250=0.01%
     lat (msec): 500=0.04%, 750=0.01%, 1000=0.01%, 2000=0.01%
parwrite.1: (groupid=3, jobs=1): err= 0: pid=3962
  write: io=518472KiB, bw=15377KiB/s, iops=3754, runt= 34525msec
    clat (usec): min=15, max=2560K, avg=247.69, stdev=12455.86
  cpu          : usr=0.93%, sys=9.84%, ctx=459, majf=0, minf=171
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=0/129618, short=0/0
     lat (usec): 20=50.24%, 50=48.63%, 100=0.55%, 250=0.45%, 500=0.01%
     lat (usec): 750=0.01%, 1000=0.01%
     lat (msec): 2=0.03%, 4=0.03%, 10=0.01%, 20=0.01%, 50=0.01%
     lat (msec): 250=0.01%, 500=0.04%, 1000=0.01%, 2000=0.01%, >=2000=0.01%
randomread2.0: (groupid=4, jobs=1): err= 0: pid=3969
  read : io=5368KiB, bw=183KiB/s, iops=44, runt= 30016msec
    clat (usec): min=4, max=341402, avg=22362.37, stdev=16506.02
    bw (KiB/s) : min=   13, max=  244, per=50.36%, avg=183.29, stdev=48.61
  cpu          : usr=0.03%, sys=0.07%, ctx=1461, majf=0, minf=76
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=1342/0, short=0/0
     lat (usec): 10=0.22%
     lat (msec): 10=1.12%, 20=57.97%, 50=37.85%, 100=2.53%, 250=0.15%
     lat (msec): 500=0.15%
randomread2.1: (groupid=4, jobs=1): err= 0: pid=3970
  read : io=5304KiB, bw=181KiB/s, iops=44, runt= 30004msec
    clat (usec): min=5, max=333410, avg=22623.06, stdev=16582.39
    bw (KiB/s) : min=   13, max=  248, per=49.91%, avg=181.66, stdev=50.50
  cpu          : usr=0.01%, sys=0.10%, ctx=1460, majf=0, minf=76
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=1326/0, short=0/0
     lat (usec): 10=0.30%
     lat (msec): 10=0.90%, 20=56.56%, 50=39.74%, 100=2.19%, 250=0.15%
     lat (msec): 500=0.15%
randomreadseqwrites4.w: (groupid=5, jobs=1): err= 0: pid=3971
  write: io=593064KiB, bw=20242KiB/s, iops=4942, runt= 30001msec
    clat (usec): min=15, max=1367K, avg=200.63, stdev=10660.52
  cpu          : usr=0.89%, sys=10.02%, ctx=271, majf=0, minf=134
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=0/148266, short=0/0
     lat (usec): 20=82.18%, 50=17.35%, 100=0.07%, 250=0.33%, 500=0.01%
     lat (usec): 750=0.01%, 1000=0.01%
     lat (msec): 2=0.01%, 4=0.01%, 250=0.01%, 500=0.03%, 750=0.01%
     lat (msec): 2000=0.01%
randomreadseqwrites4.0: (groupid=5, jobs=1): err= 0: pid=3972
  read : io=1132KiB, bw=38KiB/s, iops=9, runt= 30120msec
    clat (msec): min=9, max=347, avg=106.41, stdev=56.87
    bw (KiB/s) : min=   23, max=  109, per=24.35%, avg=37.98, stdev=12.20
  cpu          : usr=0.00%, sys=0.05%, ctx=284, majf=0, minf=549
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=283/0, short=0/0

     lat (msec): 10=0.35%, 20=0.35%, 50=28.27%, 100=7.42%, 250=62.54%
     lat (msec): 500=1.06%
randomreadseqwrites4.1: (groupid=5, jobs=1): err= 0: pid=3973
  read : io=1168KiB, bw=39KiB/s, iops=9, runt= 30105msec
    clat (msec): min=19, max=348, avg=103.08, stdev=52.65
    bw (KiB/s) : min=   23, max=  102, per=25.15%, avg=39.23, stdev=10.35
  cpu          : usr=0.02%, sys=0.03%, ctx=294, majf=0, minf=569
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=292/0, short=0/0

     lat (msec): 20=0.34%, 50=28.42%, 100=10.27%, 250=59.93%, 500=1.03%
randomreadseqwrites4.2: (groupid=5, jobs=1): err= 0: pid=3974
  read : io=1144KiB, bw=38KiB/s, iops=9, runt= 30101msec
    clat (msec): min=16, max=340, avg=105.22, stdev=54.50
    bw (KiB/s) : min=   24, max=   93, per=24.75%, avg=38.60, stdev= 9.30
  cpu          : usr=0.01%, sys=0.03%, ctx=287, majf=0, minf=559
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=286/0, short=0/0

     lat (msec): 20=0.35%, 50=27.62%, 100=6.64%, 250=64.34%, 500=1.05%
randomreadseqwrites4.3: (groupid=5, jobs=1): err= 0: pid=3975
  read : io=1152KiB, bw=39KiB/s, iops=9, runt= 30092msec
    clat (msec): min=22, max=332, avg=104.46, stdev=56.25
    bw (KiB/s) : min=   23, max=  101, per=24.79%, avg=38.68, stdev=10.71
  cpu          : usr=0.00%, sys=0.04%, ctx=288, majf=0, minf=563
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=288/0, short=0/0

     lat (msec): 50=29.17%, 100=7.64%, 250=62.15%, 500=1.04%
randomreadseqwrites5.w: (groupid=6, jobs=1): err= 0: pid=3982
  write: io=114760KiB, bw=3826KiB/s, iops=934, runt= 30709msec
    clat (usec): min=7, max=5482K, avg=1068.68, stdev=65602.45
  cpu          : usr=0.16%, sys=1.08%, ctx=51, majf=0, minf=43
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=0/28690, short=0/0
     lat (usec): 10=28.90%, 20=70.22%, 50=0.46%, 100=0.17%, 250=0.18%
     lat (usec): 750=0.01%, 1000=0.01%
     lat (msec): 2=0.01%, 4=0.01%, 750=0.01%, 2000=0.01%, >=2000=0.02%
randomreadseqwrites5.0: (groupid=6, jobs=1): err= 0: pid=3983
  read : io=420KiB, bw=14KiB/s, iops=3, runt= 30245msec
    clat (usec): min=6, max=1019K, avg=287922.10, stdev=261938.07
    bw (KiB/s) : min=    4, max=   40, per=3.11%, avg=14.13, stdev= 8.02
  cpu          : usr=0.01%, sys=0.01%, ctx=101, majf=0, minf=215
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=105/0, short=0/0
     lat (usec): 10=2.86%, 20=13.33%
     lat (msec): 50=7.62%, 100=7.62%, 250=20.00%, 500=25.71%, 750=16.19%
     lat (msec): 1000=5.71%, 2000=0.95%
randomreadseqwrites5.1: (groupid=6, jobs=1): err= 0: pid=3984
  read : io=348KiB, bw=11KiB/s, iops=2, runt= 30117msec
    clat (usec): min=10, max=1360K, avg=346140.69, stdev=286468.11
    bw (KiB/s) : min=    3, max=   40, per=2.67%, avg=12.14, stdev= 8.31
  cpu          : usr=0.00%, sys=0.01%, ctx=94, majf=0, minf=188
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=87/0, short=0/0
     lat (usec): 20=8.05%
     lat (msec): 20=1.15%, 50=3.45%, 100=9.20%, 250=26.44%, 500=24.14%
     lat (msec): 750=17.24%, 1000=8.05%, 2000=2.30%
randomreadseqwrites5.2: (groupid=6, jobs=1): err= 0: pid=3985
  read : io=408KiB, bw=13KiB/s, iops=3, runt= 30171msec
    clat (usec): min=5, max=1186K, avg=295769.44, stdev=261230.59
    bw (KiB/s) : min=    3, max=   36, per=3.10%, avg=14.10, stdev= 8.13
  cpu          : usr=0.00%, sys=0.01%, ctx=97, majf=0, minf=203
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=102/0, short=0/0
     lat (usec): 10=5.88%, 20=14.71%
     lat (msec): 50=0.98%, 100=9.80%, 250=18.63%, 500=29.41%, 750=15.69%
     lat (msec): 1000=3.92%, 2000=0.98%
randomreadseqwrites5.3: (groupid=6, jobs=1): err= 0: pid=3986
  read : io=396KiB, bw=13KiB/s, iops=3, runt= 30268msec
    clat (usec): min=6, max=1571K, avg=305709.36, stdev=278572.19
    bw (KiB/s) : min=    2, max=   38, per=2.99%, avg=13.61, stdev= 7.54
  cpu          : usr=0.01%, sys=0.01%, ctx=99, majf=0, minf=200
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=99/0, short=0/0
     lat (usec): 10=3.03%, 20=13.13%, 50=3.03%
     lat (msec): 50=4.04%, 100=8.08%, 250=16.16%, 500=29.29%, 750=17.17%
     lat (msec): 1000=5.05%, 2000=1.01%
randomreadseqwrites5.4: (groupid=6, jobs=1): err= 0: pid=3987
  read : io=460KiB, bw=15KiB/s, iops=3, runt= 30148msec
    clat (usec): min=6, max=1086K, avg=262128.40, stdev=245717.33
    bw (KiB/s) : min=    4, max=   38, per=3.53%, avg=16.07, stdev= 9.38
  cpu          : usr=0.00%, sys=0.02%, ctx=108, majf=0, minf=231
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=115/0, short=0/0
     lat (usec): 10=4.35%, 20=13.04%
     lat (msec): 50=6.09%, 100=9.57%, 250=24.35%, 500=27.83%, 750=10.43%
     lat (msec): 1000=2.61%, 2000=1.74%
randomreadseqwrites5.5: (groupid=6, jobs=1): err= 0: pid=3988
  read : io=484KiB, bw=16KiB/s, iops=4, runt= 30064msec
    clat (usec): min=6, max=1158K, avg=248439.89, stdev=243925.53
    bw (KiB/s) : min=    4, max=   37, per=3.65%, avg=16.62, stdev= 8.69
  cpu          : usr=0.00%, sys=0.03%, ctx=112, majf=0, minf=241
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=121/0, short=0/0
     lat (usec): 10=3.31%, 20=16.53%, 50=0.83%
     lat (msec): 50=7.44%, 100=5.79%, 250=23.97%, 500=23.97%, 750=14.88%
     lat (msec): 1000=2.48%, 2000=0.83%
randomreadseqwrites5.6: (groupid=6, jobs=1): err= 0: pid=3989
  read : io=428KiB, bw=14KiB/s, iops=3, runt= 30261msec
    clat (usec): min=6, max=1003K, avg=282788.71, stdev=238253.24
    bw (KiB/s) : min=    4, max=   40, per=3.28%, avg=14.90, stdev= 9.21
  cpu          : usr=0.00%, sys=0.01%, ctx=113, majf=0, minf=226
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=107/0, short=0/0
     lat (usec): 10=3.74%, 20=6.54%
     lat (msec): 50=2.80%, 100=12.15%, 250=29.91%, 500=27.10%, 750=13.08%
     lat (msec): 1000=3.74%, 2000=0.93%
randomreadseqwrites5.7: (groupid=6, jobs=1): err= 0: pid=3990
  read : io=424KiB, bw=14KiB/s, iops=3, runt= 30353msec
    clat (usec): min=6, max=1105K, avg=286324.19, stdev=258249.04
    bw (KiB/s) : min=    4, max=   41, per=3.13%, avg=14.24, stdev= 8.71
  cpu          : usr=0.00%, sys=0.02%, ctx=104, majf=0, minf=216
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=106/0, short=0/0
     lat (usec): 10=3.77%, 20=9.43%, 50=2.83%
     lat (msec): 50=8.49%, 100=7.55%, 250=18.87%, 500=27.36%, 750=14.15%
     lat (msec): 1000=6.60%, 2000=0.94%
randomreadseqwrites5.8: (groupid=6, jobs=1): err= 0: pid=3991
  read : io=412KiB, bw=13KiB/s, iops=3, runt= 30530msec
    clat (usec): min=6, max=1181K, avg=296385.97, stdev=250535.90
    bw (KiB/s) : min=    3, max=   39, per=3.03%, avg=13.80, stdev= 8.49
  cpu          : usr=0.00%, sys=0.02%, ctx=105, majf=0, minf=209
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=103/0, short=0/0
     lat (usec): 10=2.91%, 20=8.74%, 50=0.97%
     lat (msec): 50=4.85%, 100=9.71%, 250=24.27%, 500=28.16%, 750=15.53%
     lat (msec): 1000=2.91%, 2000=1.94%
randomreadseqwrites5.9: (groupid=6, jobs=1): err= 0: pid=3992
  read : io=456KiB, bw=15KiB/s, iops=3, runt= 30179msec
    clat (usec): min=8, max=1095K, avg=264702.34, stdev=246934.38
    bw (KiB/s) : min=    3, max=   44, per=3.48%, avg=15.83, stdev= 9.43
  cpu          : usr=0.00%, sys=0.01%, ctx=111, majf=0, minf=238
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=114/0, short=0/0
     lat (usec): 10=1.75%, 20=14.04%, 50=0.88%
     lat (msec): 50=2.63%, 100=14.91%, 250=21.93%, 500=28.07%, 750=10.53%
     lat (msec): 1000=4.39%, 2000=0.88%
randomreadseqwrites5.10: (groupid=6, jobs=1): err= 0: pid=3993
  read : io=380KiB, bw=12KiB/s, iops=3, runt= 30114msec
    clat (usec): min=5, max=1086K, avg=316968.99, stdev=275498.44
    bw (KiB/s) : min=    3, max=   33, per=2.91%, avg=13.26, stdev= 8.22
  cpu          : usr=0.00%, sys=0.02%, ctx=95, majf=0, minf=195
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=95/0, short=0/0
     lat (usec): 10=4.21%, 20=9.47%, 50=1.05%
     lat (msec): 50=5.26%, 100=4.21%, 250=25.26%, 500=25.26%, 750=15.79%
     lat (msec): 1000=6.32%, 2000=3.16%
randomreadseqwrites5.11: (groupid=6, jobs=1): err= 0: pid=3994
  read : io=488KiB, bw=16KiB/s, iops=4, runt= 30279msec
    clat (usec): min=9, max=1047K, avg=248160.33, stdev=213760.35
    bw (KiB/s) : min=    3, max=   46, per=3.64%, avg=16.55, stdev=10.16
  cpu          : usr=0.01%, sys=0.02%, ctx=124, majf=0, minf=251
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=122/0, short=0/0
     lat (usec): 10=0.82%, 20=13.11%
     lat (msec): 50=4.92%, 100=9.84%, 250=28.69%, 500=28.69%, 750=10.66%
     lat (msec): 1000=2.46%, 2000=0.82%
randomreadseqwrites5.12: (groupid=6, jobs=1): err= 0: pid=3995
  read : io=440KiB, bw=14KiB/s, iops=3, runt= 30086msec
    clat (usec): min=7, max=1348K, avg=273489.72, stdev=261122.22
    bw (KiB/s) : min=    3, max=   43, per=3.41%, avg=15.50, stdev= 9.71
  cpu          : usr=0.00%, sys=0.01%, ctx=116, majf=0, minf=223
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=110/0, short=0/0
     lat (usec): 10=1.82%, 20=8.18%
     lat (msec): 20=0.91%, 50=9.09%, 100=11.82%, 250=26.36%, 500=25.45%
     lat (msec): 750=11.82%, 1000=1.82%, 2000=2.73%
randomreadseqwrites5.13: (groupid=6, jobs=1): err= 0: pid=3996
  read : io=404KiB, bw=13KiB/s, iops=3, runt= 30150msec
    clat (usec): min=9, max=986975, avg=298485.60, stdev=256093.04
    bw (KiB/s) : min=    4, max=   28, per=2.94%, avg=13.38, stdev= 6.23
  cpu          : usr=0.00%, sys=0.00%, ctx=101, majf=0, minf=206
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=101/0, short=0/0
     lat (usec): 10=0.99%, 20=13.86%
     lat (msec): 50=6.93%, 100=4.95%, 250=25.74%, 500=28.71%, 750=8.91%
     lat (msec): 1000=9.90%
randomreadseqwrites5.14: (groupid=6, jobs=1): err= 0: pid=3997
  read : io=372KiB, bw=12KiB/s, iops=3, runt= 30182msec
    clat (usec): min=6, max=1075K, avg=324505.49, stdev=272172.41
    bw (KiB/s) : min=    3, max=   28, per=2.78%, avg=12.66, stdev= 6.86
  cpu          : usr=0.00%, sys=0.01%, ctx=98, majf=0, minf=200
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=93/0, short=0/0
     lat (usec): 10=2.15%, 20=11.83%
     lat (msec): 50=4.30%, 100=6.45%, 250=19.35%, 500=31.18%, 750=16.13%
     lat (msec): 1000=6.45%, 2000=2.15%
randomreadseqwrites5.15: (groupid=6, jobs=1): err= 0: pid=3998
  read : io=400KiB, bw=13KiB/s, iops=3, runt= 30235msec
    clat (usec): min=6, max=1304K, avg=302326.55, stdev=276080.55
    bw (KiB/s) : min=    3, max=   38, per=3.18%, avg=14.45, stdev= 7.89
  cpu          : usr=0.01%, sys=0.01%, ctx=99, majf=0, minf=207
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=100/0, short=0/0
     lat (usec): 10=2.00%, 20=14.00%
     lat (msec): 50=4.00%, 100=8.00%, 250=21.00%, 500=32.00%, 750=13.00%
     lat (msec): 1000=3.00%, 2000=3.00%
randomreadseqwrites5.16: (groupid=6, jobs=1): err= 0: pid=3999
  read : io=364KiB, bw=12KiB/s, iops=3, runt= 30271msec
    clat (usec): min=7, max=1041K, avg=332621.34, stdev=265424.35
    bw (KiB/s) : min=    3, max=   32, per=2.77%, avg=12.62, stdev= 8.07
  cpu          : usr=0.00%, sys=0.01%, ctx=93, majf=0, minf=191
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=91/0, short=0/0
     lat (usec): 10=5.49%, 20=8.79%
     lat (msec): 50=2.20%, 100=4.40%, 250=24.18%, 500=28.57%, 750=18.68%
     lat (msec): 1000=5.49%, 2000=2.20%
randomreadseqwrites5.17: (groupid=6, jobs=1): err= 0: pid=4000
  read : io=352KiB, bw=11KiB/s, iops=2, runt= 30238msec
    clat (usec): min=10, max=913917, avg=343582.52, stdev=247353.42
    bw (KiB/s) : min=    4, max=   31, per=2.66%, avg=12.10, stdev= 6.43
  cpu          : usr=0.00%, sys=0.01%, ctx=95, majf=0, minf=191
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=88/0, short=0/0
     lat (usec): 20=6.82%
     lat (msec): 50=4.55%, 100=13.64%, 250=12.50%, 500=31.82%, 750=25.00%
     lat (msec): 1000=5.68%
randomreadseqwrites5.18: (groupid=6, jobs=1): err= 0: pid=4001
  read : io=356KiB, bw=12KiB/s, iops=2, runt= 30180msec
    clat (usec): min=6, max=1318K, avg=339069.51, stdev=309831.60
    bw (KiB/s) : min=    3, max=   30, per=2.71%, avg=12.35, stdev= 7.41
  cpu          : usr=0.00%, sys=0.01%, ctx=82, majf=0, minf=183
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=89/0, short=0/0
     lat (usec): 10=5.62%, 20=11.24%
     lat (msec): 50=4.49%, 100=6.74%, 250=19.10%, 500=25.84%, 750=14.61%
     lat (msec): 1000=10.11%, 2000=2.25%
randomreadseqwrites5.19: (groupid=6, jobs=1): err= 0: pid=4002
  read : io=404KiB, bw=13KiB/s, iops=3, runt= 30026msec
    clat (usec): min=6, max=966566, avg=297255.63, stdev=261848.50
    bw (KiB/s) : min=    4, max=   38, per=2.99%, avg=13.58, stdev= 7.99
  cpu          : usr=0.00%, sys=0.02%, ctx=99, majf=0, minf=205
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=101/0, short=0/0
     lat (usec): 10=4.95%, 20=11.88%, 50=0.99%, 100=0.99%
     lat (msec): 50=2.97%, 100=5.94%, 250=23.76%, 500=23.76%, 750=17.82%
     lat (msec): 1000=6.93%
randomreadseqwrites5.20: (groupid=6, jobs=1): err= 0: pid=4003
  read : io=416KiB, bw=14KiB/s, iops=3, runt= 30125msec
    clat (usec): min=6, max=1144K, avg=289634.59, stdev=251867.05
    bw (KiB/s) : min=    5, max=   32, per=3.09%, avg=14.08, stdev= 7.12
  cpu          : usr=0.00%, sys=0.01%, ctx=105, majf=0, minf=220
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=104/0, short=0/0
     lat (usec): 10=2.88%, 20=10.58%
     lat (msec): 50=0.96%, 100=9.62%, 250=26.92%, 500=30.77%, 750=12.50%
     lat (msec): 1000=2.88%, 2000=2.88%
randomreadseqwrites5.21: (groupid=6, jobs=1): err= 0: pid=4004
  read : io=456KiB, bw=15KiB/s, iops=3, runt= 30057msec
    clat (usec): min=5, max=1319K, avg=263622.37, stdev=250687.09
    bw (KiB/s) : min=    4, max=   43, per=3.47%, avg=15.79, stdev= 8.87
  cpu          : usr=0.01%, sys=0.01%, ctx=106, majf=0, minf=227
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=114/0, short=0/0
     lat (usec): 10=7.02%, 20=11.40%, 50=0.88%, 100=0.88%
     lat (msec): 50=2.63%, 100=8.77%, 250=21.93%, 500=30.70%, 750=10.53%
     lat (msec): 1000=4.39%, 2000=0.88%
randomreadseqwrites5.22: (groupid=6, jobs=1): err= 0: pid=4005
  read : io=428KiB, bw=14KiB/s, iops=3, runt= 30055msec
    clat (usec): min=6, max=1264K, avg=280851.04, stdev=270680.20
    bw (KiB/s) : min=    4, max=   48, per=3.30%, avg=15.00, stdev= 9.95
  cpu          : usr=0.00%, sys=0.02%, ctx=104, majf=0, minf=223
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=107/0, short=0/0
     lat (usec): 10=5.61%, 20=9.35%
     lat (msec): 50=8.41%, 100=12.15%, 250=17.76%, 500=26.17%, 750=14.95%
     lat (msec): 1000=3.74%, 2000=1.87%
randomreadseqwrites5.23: (groupid=6, jobs=1): err= 0: pid=4006
  read : io=424KiB, bw=14KiB/s, iops=3, runt= 30087msec
    clat (usec): min=6, max=1196K, avg=283807.96, stdev=264492.74
    bw (KiB/s) : min=    4, max=   32, per=3.27%, avg=14.90, stdev= 8.27
  cpu          : usr=0.00%, sys=0.01%, ctx=100, majf=0, minf=212
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=106/0, short=0/0
     lat (usec): 10=5.66%, 20=12.26%, 100=0.94%
     lat (msec): 50=6.60%, 100=6.60%, 250=16.04%, 500=33.96%, 750=12.26%
     lat (msec): 1000=3.77%, 2000=1.89%
randomreadseqwrites5.24: (groupid=6, jobs=1): err= 0: pid=4007
  read : io=388KiB, bw=13KiB/s, iops=3, runt= 30102msec
    clat (usec): min=9, max=1028K, avg=310293.41, stdev=256887.69
    bw (KiB/s) : min=    4, max=   29, per=2.89%, avg=13.15, stdev= 6.40
  cpu          : usr=0.00%, sys=0.01%, ctx=100, majf=0, minf=211
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=97/0, short=0/0
     lat (usec): 10=2.06%, 20=10.31%
     lat (msec): 50=3.09%, 100=12.37%, 250=22.68%, 500=28.87%, 750=15.46%
     lat (msec): 1000=4.12%, 2000=1.03%
randomreadseqwrites5.25: (groupid=6, jobs=1): err= 0: pid=4008
  read : io=444KiB, bw=15KiB/s, iops=3, runt= 30159msec
    clat (usec): min=6, max=1317K, avg=271666.56, stdev=235963.74
    bw (KiB/s) : min=    3, max=   43, per=3.43%, avg=15.61, stdev= 8.83
  cpu          : usr=0.00%, sys=0.00%, ctx=108, majf=0, minf=229
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=111/0, short=0/0
     lat (usec): 10=3.60%, 20=10.81%, 100=0.90%
     lat (msec): 50=3.60%, 100=4.50%, 250=27.93%, 500=34.23%, 750=9.91%
     lat (msec): 1000=3.60%, 2000=0.90%
randomreadseqwrites5.26: (groupid=6, jobs=1): err= 0: pid=4009
  read : io=432KiB, bw=14KiB/s, iops=3, runt= 30360msec
    clat (usec): min=5, max=1647K, avg=281079.92, stdev=283575.41
    bw (KiB/s) : min=    4, max=   34, per=3.39%, avg=15.42, stdev= 8.48
  cpu          : usr=0.00%, sys=0.01%, ctx=103, majf=0, minf=222
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=108/0, short=0/0
     lat (usec): 10=6.48%, 20=9.26%, 50=2.78%
     lat (msec): 50=4.63%, 100=8.33%, 250=25.93%, 500=22.22%, 750=12.96%
     lat (msec): 1000=6.48%, 2000=0.93%
randomreadseqwrites5.27: (groupid=6, jobs=1): err= 0: pid=4010
  read : io=388KiB, bw=13KiB/s, iops=3, runt= 30018msec
    clat (usec): min=6, max=1409K, avg=309434.65, stdev=286935.46
    bw (KiB/s) : min=    3, max=   32, per=3.01%, avg=13.72, stdev= 7.10
  cpu          : usr=0.00%, sys=0.01%, ctx=102, majf=0, minf=201
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=97/0, short=0/0
     lat (usec): 10=1.03%, 20=10.31%, 100=1.03%
     lat (msec): 50=2.06%, 100=16.49%, 250=19.59%, 500=27.84%, 750=15.46%
     lat (msec): 1000=2.06%, 2000=4.12%
randomreadseqwrites5.28: (groupid=6, jobs=1): err= 0: pid=4011
  read : io=372KiB, bw=12KiB/s, iops=3, runt= 30014msec
    clat (usec): min=6, max=1124K, avg=322698.92, stdev=275302.31
    bw (KiB/s) : min=    3, max=   52, per=2.83%, avg=12.87, stdev= 8.50
  cpu          : usr=0.01%, sys=0.01%, ctx=90, majf=0, minf=271
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=93/0, short=0/0
     lat (usec): 10=2.15%, 20=13.98%, 50=1.08%
     lat (msec): 50=3.23%, 100=9.68%, 250=18.28%, 500=25.81%, 750=18.28%
     lat (msec): 1000=6.45%, 2000=1.08%
randomreadseqwrites5.29: (groupid=6, jobs=1): err= 0: pid=4012
  read : io=460KiB, bw=15KiB/s, iops=3, runt= 30133msec
    clat (usec): min=7, max=915334, avg=261998.30, stdev=231751.04
    bw (KiB/s) : min=    5, max=   39, per=3.51%, avg=15.98, stdev= 8.64
  cpu          : usr=0.00%, sys=0.01%, ctx=115, majf=0, minf=238
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=115/0, short=0/0
     lat (usec): 10=2.61%, 20=8.70%, 50=1.74%
     lat (msec): 50=6.09%, 100=9.57%, 250=29.57%, 500=22.61%, 750=14.78%
     lat (msec): 1000=4.35%
randomreadseqwrites5.30: (groupid=6, jobs=1): err= 0: pid=4013
  read : io=420KiB, bw=14KiB/s, iops=3, runt= 30141msec
    clat (usec): min=6, max=1328K, avg=287033.67, stdev=260483.36
    bw (KiB/s) : min=    4, max=   38, per=3.26%, avg=14.85, stdev= 8.60
  cpu          : usr=0.00%, sys=0.01%, ctx=101, majf=0, minf=215
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=105/0, short=0/0
     lat (usec): 10=2.86%, 20=11.43%, 50=0.95%
     lat (msec): 50=4.76%, 100=8.57%, 250=21.90%, 500=31.43%, 750=11.43%
     lat (msec): 1000=4.76%, 2000=1.90%
randomreadseqwrites5.31: (groupid=6, jobs=1): err= 0: pid=4014
  read : io=368KiB, bw=12KiB/s, iops=3, runt= 30074msec
    clat (usec): min=6, max=1357K, avg=326863.84, stdev=298611.17
    bw (KiB/s) : min=    3, max=   32, per=2.82%, avg=12.84, stdev= 7.14
  cpu          : usr=0.00%, sys=0.02%, ctx=96, majf=0, minf=194
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=92/0, short=0/0
     lat (usec): 10=3.26%, 20=8.70%, 50=2.17%
     lat (msec): 50=4.35%, 100=6.52%, 250=27.17%, 500=22.83%, 750=15.22%
     lat (msec): 1000=6.52%, 2000=3.26%
randomreadseqwrites5.32: (groupid=6, jobs=1): err= 0: pid=4015
  read : io=388KiB, bw=13KiB/s, iops=3, runt= 30059msec
    clat (usec): min=6, max=1064K, avg=309857.87, stdev=264025.51
    bw (KiB/s) : min=    3, max=   35, per=2.93%, avg=13.33, stdev= 8.18
  cpu          : usr=0.00%, sys=0.02%, ctx=101, majf=0, minf=205
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=97/0, short=0/0
     lat (usec): 10=2.06%, 20=10.31%
     lat (msec): 50=5.15%, 100=9.28%, 250=20.62%, 500=29.90%, 750=15.46%
     lat (msec): 1000=5.15%, 2000=2.06%

Run status group 0 (all jobs):
   READ: io=1024MiB, aggrb=36193KiB/s, minb=36193KiB/s, maxb=36193KiB/s, mint=29667msec, maxt=29667msec

Run status group 1 (all jobs):
  WRITE: io=790584KiB, aggrb=26577KiB/s, minb=26577KiB/s, maxb=26577KiB/s, mint=30460msec, maxt=30460msec

Run status group 2 (all jobs):
   READ: io=568808KiB, aggrb=19396KiB/s, minb=9698KiB/s, maxb=9707KiB/s, mint=30001msec, maxt=30029msec

Run status group 3 (all jobs):
  WRITE: io=1046MiB, aggrb=31592KiB/s, minb=15377KiB/s, maxb=16300KiB/s, mint=34525msec, maxt=34719msec

Run status group 4 (all jobs):
   READ: io=10672KiB, aggrb=364KiB/s, minb=181KiB/s, maxb=183KiB/s, mint=30004msec, maxt=30016msec

Run status group 5 (all jobs):
   READ: io=4596KiB, aggrb=156KiB/s, minb=38KiB/s, maxb=39KiB/s, mint=30092msec, maxt=30120msec
  WRITE: io=593064KiB, aggrb=20242KiB/s, minb=20242KiB/s, maxb=20242KiB/s, mint=30001msec, maxt=30001msec

Run status group 6 (all jobs):
   READ: io=13580KiB, aggrb=455KiB/s, minb=11KiB/s, maxb=16KiB/s, mint=30014msec, maxt=30530msec
  WRITE: io=114760KiB, aggrb=3826KiB/s, minb=3826KiB/s, maxb=3826KiB/s, mint=30709msec, maxt=30709msec

Disk stats (read/write):
  sda: ios=21096/19668, merge=396/600785, ticks=1504291/16397904, in_queue=17940475, util=99.26%
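
[Editorial note: the job list in the next attachment implies that group 6
("randomreadseqwrites5") consists of one sequential writer plus 33 random
readers, all psync / 4k blocks / iodepth=1, each running for roughly 30
seconds against 2048MiB files (see the "Laying out IO file(s)" lines and the
~30000 msec runtimes above). A minimal fio job-file sketch that would
reproduce that shape is given below; the time_based/runtime=30 settings and
the shared filename are assumptions inferred from the output, not taken from
the original job file.]

; sketch of the group 6 workload, reconstructed from the printed job lines;
; runtime/time_based and the filename are assumed, not from the original
[global]
bs=4k
ioengine=psync
iodepth=1
runtime=30
time_based

[randomreadseqwrites5.w]
rw=write
size=2048m
; hypothetical filename; the output only shows a 2048MiB layout
filename=rrsw5.data

[randomreadseqwrites5.r]
rw=randread
size=2048m
; hypothetical: readers share the writer's file
filename=rrsw5.data
numjobs=33

[Note: since the per-reader names randomreadseqwrites5.0 through .32 appear
individually in the output, the original run most likely used 33 explicit
job sections; numjobs=33 is shown here only for brevity.]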

[-- Attachment #5: deadline-iosched-orig.3 --]
[-- Type: application/octet-stream, Size: 40710 bytes --]

seqread: (g=0): rw=read, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
seqwrite: (g=1): rw=write, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
parread.0: (g=2): rw=read, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
parread.1: (g=2): rw=read, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
parwrite.0: (g=3): rw=write, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
parwrite.1: (g=3): rw=write, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomread2.0: (g=4): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomread2.1: (g=4): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites4.w: (g=5): rw=write, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites4.0: (g=5): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites4.1: (g=5): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites4.2: (g=5): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites4.3: (g=5): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.w: (g=6): rw=write, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.0: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.1: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.2: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.3: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.4: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.5: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.6: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.7: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.8: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.9: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.10: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.11: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.12: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.13: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.14: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.15: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.16: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.17: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.18: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.19: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.20: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.21: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.22: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.23: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.24: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.25: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.26: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.27: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.28: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.29: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.30: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.31: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.32: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
Starting 47 processes
seqwrite: Laying out IO file(s) (1 file(s) / 1024MiB)
parwrite.0: Laying out IO file(s) (1 file(s) / 1024MiB)
parwrite.1: Laying out IO file(s) (1 file(s) / 1024MiB)
randomreadseqwrites4.w: Laying out IO file(s) (1 file(s) / 2048MiB)
randomreadseqwrites5.w: Laying out IO file(s) (1 file(s) / 2048MiB)

seqread: (groupid=0, jobs=1): err= 0: pid=3877
  read : io=1024MiB, bw=35956KiB/s, iops=8778, runt= 29862msec
    clat (usec): min=2, max=22543, avg=112.75, stdev=671.84
    bw (KiB/s) : min=33554, max=36962, per=100.21%, avg=36031.47, stdev=909.94
  cpu          : usr=0.91%, sys=4.47%, ctx=8004, majf=0, minf=17
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=262144/0, short=0/0
     lat (usec): 4=92.30%, 10=4.29%, 20=0.25%, 50=0.01%, 100=0.01%
     lat (usec): 250=0.10%, 500=0.01%, 750=0.01%, 1000=0.01%
     lat (msec): 2=0.08%, 4=1.82%, 10=1.11%, 20=0.02%, 50=0.01%
seqwrite: (groupid=1, jobs=1): err= 0: pid=3878
  write: io=1024MiB, bw=31493KiB/s, iops=7688, runt= 34094msec
    clat (usec): min=15, max=4342K, avg=111.83, stdev=11451.19
  cpu          : usr=1.37%, sys=15.52%, ctx=786, majf=0, minf=181
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=0/262144, short=0/0
     lat (usec): 20=82.41%, 50=17.07%, 100=0.08%, 250=0.36%, 500=0.01%
     lat (usec): 750=0.01%, 1000=0.02%
     lat (msec): 2=0.01%, 4=0.01%, 20=0.01%, 100=0.01%, 250=0.02%
     lat (msec): 500=0.01%, 2000=0.01%, >=2000=0.01%
parread.0: (groupid=2, jobs=1): err= 0: pid=3886
  read : io=258548KiB, bw=8812KiB/s, iops=2151, runt= 30043msec
    clat (usec): min=2, max=179486, avg=463.59, stdev=4281.40
    bw (KiB/s) : min=  498, max=10220, per=50.11%, avg=8824.05, stdev=1549.44
  cpu          : usr=0.20%, sys=1.10%, ctx=2031, majf=0, minf=19
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=64637/0, short=0/0
     lat (usec): 4=89.28%, 10=7.30%, 20=0.27%, 50=0.01%, 100=0.03%
     lat (usec): 250=0.06%
     lat (msec): 2=0.07%, 4=1.23%, 10=0.86%, 20=0.01%, 50=0.72%
     lat (msec): 100=0.15%, 250=0.01%
parread.1: (groupid=2, jobs=1): err= 0: pid=3887
  read : io=258100KiB, bw=8807KiB/s, iops=2150, runt= 30008msec
    clat (usec): min=2, max=177978, avg=463.87, stdev=4266.09
    bw (KiB/s) : min= 1471, max=10328, per=50.22%, avg=8843.46, stdev=1486.12
  cpu          : usr=0.27%, sys=1.08%, ctx=2041, majf=0, minf=19
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=64525/0, short=0/0
     lat (usec): 4=89.03%, 10=7.55%, 20=0.26%, 50=0.01%, 100=0.03%
     lat (usec): 250=0.07%
     lat (msec): 2=0.10%, 4=1.18%, 10=0.86%, 20=0.04%, 50=0.72%
     lat (msec): 100=0.14%, 250=0.01%
parwrite.0: (groupid=3, jobs=1): err= 0: pid=3888
  write: io=531424KiB, bw=15603KiB/s, iops=3809, runt= 34875msec
    clat (usec): min=14, max=1579K, avg=223.54, stdev=9773.40
  cpu          : usr=1.04%, sys=9.70%, ctx=622, majf=0, minf=164
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=0/132856, short=0/0
     lat (usec): 20=52.17%, 50=46.80%, 100=0.49%, 250=0.40%, 500=0.01%
     lat (usec): 750=0.01%, 1000=0.01%
     lat (msec): 2=0.05%, 4=0.02%, 10=0.01%, 50=0.01%, 250=0.01%
     lat (msec): 500=0.04%, 750=0.01%, 2000=0.01%
parwrite.1: (groupid=3, jobs=1): err= 0: pid=3889
  write: io=514932KiB, bw=15543KiB/s, iops=3794, runt= 33924msec
    clat (usec): min=15, max=1521K, avg=232.88, stdev=10100.28
  cpu          : usr=0.98%, sys=9.76%, ctx=571, majf=0, minf=159
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=0/128733, short=0/0
     lat (usec): 20=52.39%, 50=46.53%, 100=0.48%, 250=0.46%, 500=0.01%
     lat (usec): 750=0.01%, 1000=0.01%
     lat (msec): 2=0.05%, 4=0.02%, 10=0.01%, 20=0.01%, 50=0.01%
     lat (msec): 250=0.01%, 500=0.04%, 750=0.01%, 2000=0.01%
randomread2.0: (groupid=4, jobs=1): err= 0: pid=3896
  read : io=5432KiB, bw=185KiB/s, iops=45, runt= 30008msec
    clat (usec): min=4, max=324693, avg=22092.55, stdev=15239.09
    bw (KiB/s) : min=   14, max=  247, per=50.44%, avg=185.60, stdev=48.13
  cpu          : usr=0.03%, sys=0.07%, ctx=1479, majf=0, minf=66
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=1358/0, short=0/0
     lat (usec): 10=0.22%
     lat (msec): 10=1.18%, 20=58.76%, 50=36.97%, 100=2.65%, 250=0.15%
     lat (msec): 500=0.07%
randomread2.1: (groupid=4, jobs=1): err= 0: pid=3897
  read : io=5372KiB, bw=183KiB/s, iops=44, runt= 30021msec
    clat (usec): min=5, max=316703, avg=22349.18, stdev=15168.12
    bw (KiB/s) : min=   15, max=  244, per=49.84%, avg=183.41, stdev=49.30
  cpu          : usr=0.03%, sys=0.13%, ctx=1477, majf=0, minf=66
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=1343/0, short=0/0
     lat (usec): 10=0.30%
     lat (msec): 10=1.04%, 20=57.41%, 50=38.12%, 100=2.90%, 250=0.15%
     lat (msec): 500=0.07%
randomreadseqwrites4.w: (groupid=5, jobs=1): err= 0: pid=3898
  write: io=528788KiB, bw=17912KiB/s, iops=4373, runt= 30229msec
    clat (usec): min=15, max=1583K, avg=226.94, stdev=12483.12
  cpu          : usr=0.76%, sys=8.81%, ctx=294, majf=0, minf=128
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=0/132197, short=0/0
     lat (usec): 20=83.80%, 50=15.75%, 100=0.09%, 250=0.26%, 500=0.01%
     lat (usec): 750=0.01%, 1000=0.02%
     lat (msec): 2=0.03%, 4=0.01%, 250=0.01%, 500=0.03%, 750=0.01%
     lat (msec): 2000=0.01%
randomreadseqwrites4.0: (groupid=5, jobs=1): err= 0: pid=3899
  read : io=1408KiB, bw=47KiB/s, iops=11, runt= 30083msec
    clat (usec): min=6, max=191433, avg=85437.97, stdev=49417.33
    bw (KiB/s) : min=   35, max=   99, per=24.85%, avg=47.72, stdev= 9.53
  cpu          : usr=0.00%, sys=0.05%, ctx=351, majf=0, minf=683
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=352/0, short=0/0
     lat (usec): 10=0.28%
     lat (msec): 10=0.28%, 20=0.28%, 50=44.60%, 100=5.11%, 250=49.43%
randomreadseqwrites4.1: (groupid=5, jobs=1): err= 0: pid=3900
  read : io=1416KiB, bw=48KiB/s, iops=11, runt= 30096msec
    clat (msec): min=20, max=172, avg=84.99, stdev=47.97
    bw (KiB/s) : min=   32, max=  113, per=24.97%, avg=47.94, stdev=10.75
  cpu          : usr=0.01%, sys=0.05%, ctx=355, majf=0, minf=694
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=354/0, short=0/0

     lat (msec): 50=45.20%, 100=4.80%, 250=50.00%
randomreadseqwrites4.2: (groupid=5, jobs=1): err= 0: pid=3901
  read : io=1420KiB, bw=48KiB/s, iops=11, runt= 30111msec
    clat (msec): min=15, max=257, avg=84.80, stdev=50.87
    bw (KiB/s) : min=   33, max=   97, per=24.97%, avg=47.94, stdev= 9.10
  cpu          : usr=0.01%, sys=0.02%, ctx=355, majf=0, minf=690
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=355/0, short=0/0

     lat (msec): 20=0.56%, 50=46.20%, 100=5.63%, 250=47.32%, 500=0.28%
randomreadseqwrites4.3: (groupid=5, jobs=1): err= 0: pid=3902
  read : io=1408KiB, bw=47KiB/s, iops=11, runt= 30092msec
    clat (msec): min=18, max=209, avg=85.46, stdev=48.69
    bw (KiB/s) : min=   34, max=   96, per=24.76%, avg=47.54, stdev= 8.51
  cpu          : usr=0.02%, sys=0.06%, ctx=355, majf=0, minf=680
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=352/0, short=0/0

     lat (msec): 20=0.28%, 50=44.60%, 100=6.25%, 250=48.86%
randomreadseqwrites5.w: (groupid=6, jobs=1): err= 0: pid=3908
  write: io=694220KiB, bw=22173KiB/s, iops=5413, runt= 32060msec
    clat (usec): min=6, max=5169K, avg=183.01, stdev=20549.93
  cpu          : usr=0.92%, sys=7.19%, ctx=362, majf=0, minf=153
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=0/173555, short=0/0
     lat (usec): 10=35.71%, 20=59.48%, 50=4.39%, 100=0.14%, 250=0.22%
     lat (usec): 500=0.01%, 750=0.01%, 1000=0.01%
     lat (msec): 2=0.02%, 4=0.01%, 250=0.02%, 500=0.01%, 1000=0.01%
     lat (msec): 2000=0.01%, >=2000=0.01%
randomreadseqwrites5.0: (groupid=6, jobs=1): err= 0: pid=3909
  read : io=340KiB, bw=11KiB/s, iops=2, runt= 30046msec
    clat (usec): min=7, max=1012K, avg=353467.86, stdev=298733.73
    bw (KiB/s) : min=    4, max=   44, per=2.90%, avg=11.86, stdev= 7.40
  cpu          : usr=0.00%, sys=0.02%, ctx=84, majf=0, minf=98
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=85/0, short=0/0
     lat (usec): 10=4.71%, 20=8.24%, 50=1.18%
     lat (msec): 50=5.88%, 100=3.53%, 250=21.18%, 500=28.24%, 750=11.76%
     lat (msec): 1000=12.94%, 2000=2.35%
randomreadseqwrites5.1: (groupid=6, jobs=1): err= 0: pid=3910
  read : io=352KiB, bw=11KiB/s, iops=2, runt= 30335msec
    clat (usec): min=6, max=1107K, avg=344694.88, stdev=288974.51
    bw (KiB/s) : min=    3, max=   37, per=2.95%, avg=12.08, stdev= 7.50
  cpu          : usr=0.01%, sys=0.00%, ctx=96, majf=0, minf=104
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=88/0, short=0/0
     lat (usec): 10=1.14%, 20=6.82%
     lat (msec): 50=5.68%, 100=10.23%, 250=23.86%, 500=23.86%, 750=19.32%
     lat (msec): 1000=4.55%, 2000=4.55%
randomreadseqwrites5.2: (groupid=6, jobs=1): err= 0: pid=3911
  read : io=432KiB, bw=14KiB/s, iops=3, runt= 30031msec
    clat (usec): min=7, max=1342K, avg=278043.84, stdev=295014.94
    bw (KiB/s) : min=    3, max=   42, per=3.65%, avg=14.95, stdev=10.13
  cpu          : usr=0.00%, sys=0.01%, ctx=102, majf=0, minf=108
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=108/0, short=0/0
     lat (usec): 10=7.41%, 20=12.96%
     lat (msec): 50=5.56%, 100=12.04%, 250=21.30%, 500=20.37%, 750=12.96%
     lat (msec): 1000=4.63%, 2000=2.78%
randomreadseqwrites5.3: (groupid=6, jobs=1): err= 0: pid=3912
  read : io=368KiB, bw=12KiB/s, iops=3, runt= 30084msec
    clat (usec): min=7, max=1260K, avg=326981.10, stdev=314440.53
    bw (KiB/s) : min=    3, max=   35, per=3.22%, avg=13.19, stdev= 8.43
  cpu          : usr=0.01%, sys=0.01%, ctx=93, majf=0, minf=109
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=92/0, short=0/0
     lat (usec): 10=8.70%, 20=11.96%
     lat (msec): 50=2.17%, 100=5.43%, 250=20.65%, 500=26.09%, 750=9.78%
     lat (msec): 1000=10.87%, 2000=4.35%
randomreadseqwrites5.4: (groupid=6, jobs=1): err= 0: pid=3913
  read : io=440KiB, bw=15KiB/s, iops=3, runt= 30034msec
    clat (usec): min=7, max=1284K, avg=273022.29, stdev=252596.45
    bw (KiB/s) : min=    3, max=   36, per=3.74%, avg=15.29, stdev= 9.17
  cpu          : usr=0.00%, sys=0.01%, ctx=105, majf=0, minf=130
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=110/0, short=0/0
     lat (usec): 10=3.64%, 20=11.82%, 50=1.82%
     lat (msec): 50=3.64%, 100=7.27%, 250=28.18%, 500=25.45%, 750=13.64%
     lat (msec): 1000=2.73%, 2000=1.82%
randomreadseqwrites5.5: (groupid=6, jobs=1): err= 0: pid=3914
  read : io=404KiB, bw=13KiB/s, iops=3, runt= 30396msec
    clat (usec): min=6, max=1057K, avg=300926.60, stdev=287897.57
    bw (KiB/s) : min=    3, max=   49, per=3.47%, avg=14.18, stdev= 9.35
  cpu          : usr=0.00%, sys=0.01%, ctx=94, majf=0, minf=132
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=101/0, short=0/0
     lat (usec): 10=2.97%, 20=15.84%, 50=2.97%
     lat (msec): 50=0.99%, 100=10.89%, 250=18.81%, 500=23.76%, 750=14.85%
     lat (msec): 1000=5.94%, 2000=2.97%
randomreadseqwrites5.6: (groupid=6, jobs=1): err= 0: pid=3915
  read : io=368KiB, bw=12KiB/s, iops=3, runt= 30022msec
    clat (usec): min=7, max=1118K, avg=326309.29, stdev=297227.59
    bw (KiB/s) : min=    4, max=   33, per=3.14%, avg=12.83, stdev= 8.24
  cpu          : usr=0.00%, sys=0.01%, ctx=95, majf=0, minf=116
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=92/0, short=0/0
     lat (usec): 10=3.26%, 20=8.70%
     lat (msec): 50=7.61%, 100=7.61%, 250=25.00%, 500=22.83%, 750=11.96%
     lat (msec): 1000=8.70%, 2000=4.35%
randomreadseqwrites5.7: (groupid=6, jobs=1): err= 0: pid=3916
  read : io=448KiB, bw=15KiB/s, iops=3, runt= 30033msec
    clat (usec): min=9, max=1125K, avg=268132.39, stdev=268634.67
    bw (KiB/s) : min=    3, max=   40, per=3.87%, avg=15.85, stdev= 9.65
  cpu          : usr=0.00%, sys=0.01%, ctx=111, majf=0, minf=118
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=112/0, short=0/0
     lat (usec): 10=2.68%, 20=12.50%, 50=0.89%
     lat (msec): 50=7.14%, 100=13.39%, 250=18.75%, 500=25.89%, 750=10.71%
     lat (msec): 1000=6.25%, 2000=1.79%
randomreadseqwrites5.8: (groupid=6, jobs=1): err= 0: pid=3917
  read : io=336KiB, bw=11KiB/s, iops=2, runt= 30159msec
    clat (usec): min=7, max=1173K, avg=359017.18, stdev=312814.11
    bw (KiB/s) : min=    3, max=   35, per=2.79%, avg=11.43, stdev= 7.52
  cpu          : usr=0.00%, sys=0.01%, ctx=87, majf=0, minf=109
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=84/0, short=0/0
     lat (usec): 10=3.57%, 20=10.71%
     lat (msec): 50=3.57%, 100=8.33%, 250=22.62%, 500=21.43%, 750=17.86%
     lat (msec): 1000=7.14%, 2000=4.76%
randomreadseqwrites5.9: (groupid=6, jobs=1): err= 0: pid=3918
  read : io=376KiB, bw=12KiB/s, iops=3, runt= 30316msec
    clat (usec): min=6, max=1036K, avg=322485.24, stdev=288683.45
    bw (KiB/s) : min=    4, max=   37, per=3.18%, avg=13.00, stdev= 9.07
  cpu          : usr=0.00%, sys=0.02%, ctx=93, majf=0, minf=108
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=94/0, short=0/0
     lat (usec): 10=4.26%, 20=13.83%, 50=1.06%
     lat (msec): 50=2.13%, 100=11.70%, 250=14.89%, 500=24.47%, 750=19.15%
     lat (msec): 1000=7.45%, 2000=1.06%
randomreadseqwrites5.10: (groupid=6, jobs=1): err= 0: pid=3919
  read : io=316KiB, bw=10KiB/s, iops=2, runt= 30292msec
    clat (usec): min=6, max=1268K, avg=383426.23, stdev=321301.91
    bw (KiB/s) : min=    3, max=   27, per=2.69%, avg=11.00, stdev= 7.00
  cpu          : usr=0.00%, sys=0.00%, ctx=84, majf=0, minf=98
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=79/0, short=0/0
     lat (usec): 10=5.06%, 20=8.86%
     lat (msec): 50=6.33%, 100=2.53%, 250=18.99%, 500=24.05%, 750=18.99%
     lat (msec): 1000=12.66%, 2000=2.53%
randomreadseqwrites5.11: (groupid=6, jobs=1): err= 0: pid=3920
  read : io=380KiB, bw=12KiB/s, iops=3, runt= 30168msec
    clat (usec): min=7, max=1145K, avg=317534.63, stdev=278705.88
    bw (KiB/s) : min=    4, max=   35, per=3.18%, avg=13.00, stdev= 8.20
  cpu          : usr=0.00%, sys=0.01%, ctx=97, majf=0, minf=118
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=95/0, short=0/0
     lat (usec): 10=1.05%, 20=10.53%
     lat (msec): 20=1.05%, 50=7.37%, 100=10.53%, 250=18.95%, 500=25.26%
     lat (msec): 750=15.79%, 1000=8.42%, 2000=1.05%
randomreadseqwrites5.12: (groupid=6, jobs=1): err= 0: pid=3921
  read : io=336KiB, bw=11KiB/s, iops=2, runt= 30141msec
    clat (usec): min=7, max=1275K, avg=358797.46, stdev=299082.88
    bw (KiB/s) : min=    3, max=   34, per=2.86%, avg=11.69, stdev= 6.88
  cpu          : usr=0.00%, sys=0.02%, ctx=93, majf=0, minf=111
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=84/0, short=0/0
     lat (usec): 10=3.57%, 20=5.95%
     lat (msec): 50=4.76%, 100=7.14%, 250=25.00%, 500=23.81%, 750=19.05%
     lat (msec): 1000=8.33%, 2000=2.38%
randomreadseqwrites5.13: (groupid=6, jobs=1): err= 0: pid=3922
  read : io=356KiB, bw=12KiB/s, iops=2, runt= 30378msec
    clat (usec): min=9, max=1252K, avg=341303.22, stdev=312598.87
    bw (KiB/s) : min=    3, max=   56, per=3.09%, avg=12.63, stdev=10.82
  cpu          : usr=0.00%, sys=0.02%, ctx=89, majf=0, minf=107
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=89/0, short=0/0
     lat (usec): 10=4.49%, 20=11.24%, 50=1.12%
     lat (msec): 50=5.62%, 100=11.24%, 250=19.10%, 500=14.61%, 750=20.22%
     lat (msec): 1000=8.99%, 2000=3.37%
randomreadseqwrites5.14: (groupid=6, jobs=1): err= 0: pid=3923
  read : io=352KiB, bw=11KiB/s, iops=2, runt= 30147msec
    clat (usec): min=5, max=984088, avg=342562.11, stdev=301674.33
    bw (KiB/s) : min=    4, max=   40, per=2.95%, avg=12.08, stdev= 8.51
  cpu          : usr=0.00%, sys=0.00%, ctx=91, majf=0, minf=100
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=88/0, short=0/0
     lat (usec): 10=2.27%, 20=12.50%
     lat (msec): 50=5.68%, 100=12.50%, 250=13.64%, 500=20.45%, 750=22.73%
     lat (msec): 1000=10.23%
randomreadseqwrites5.15: (groupid=6, jobs=1): err= 0: pid=3924
  read : io=356KiB, bw=12KiB/s, iops=2, runt= 30336msec
    clat (usec): min=6, max=1145K, avg=340834.31, stdev=312446.77
    bw (KiB/s) : min=    3, max=   35, per=2.97%, avg=12.16, stdev= 7.40
  cpu          : usr=0.00%, sys=0.01%, ctx=88, majf=0, minf=115
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=89/0, short=0/0
     lat (usec): 10=3.37%, 20=12.36%, 50=1.12%
     lat (msec): 50=5.62%, 100=7.87%, 250=16.85%, 500=25.84%, 750=14.61%
     lat (msec): 1000=7.87%, 2000=4.49%
randomreadseqwrites5.16: (groupid=6, jobs=1): err= 0: pid=3925
  read : io=432KiB, bw=14KiB/s, iops=3, runt= 30155msec
    clat (usec): min=7, max=1076K, avg=279185.06, stdev=269241.95
    bw (KiB/s) : min=    3, max=   37, per=3.64%, avg=14.89, stdev= 8.82
  cpu          : usr=0.00%, sys=0.01%, ctx=107, majf=0, minf=128
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=108/0, short=0/0
     lat (usec): 10=7.41%, 20=8.33%
     lat (msec): 50=7.41%, 100=10.19%, 250=25.00%, 500=23.15%, 750=12.96%
     lat (msec): 1000=2.78%, 2000=2.78%
randomreadseqwrites5.17: (groupid=6, jobs=1): err= 0: pid=3926
  read : io=340KiB, bw=11KiB/s, iops=2, runt= 30061msec
    clat (usec): min=9, max=1102K, avg=353633.76, stdev=287678.55
    bw (KiB/s) : min=    3, max=   30, per=2.75%, avg=11.25, stdev= 6.73
  cpu          : usr=0.00%, sys=0.01%, ctx=92, majf=0, minf=110
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=85/0, short=0/0
     lat (usec): 10=1.18%, 20=5.88%
     lat (msec): 50=2.35%, 100=12.94%, 250=25.88%, 500=20.00%, 750=20.00%
     lat (msec): 1000=10.59%, 2000=1.18%
randomreadseqwrites5.18: (groupid=6, jobs=1): err= 0: pid=3927
  read : io=368KiB, bw=12KiB/s, iops=3, runt= 30156msec
    clat (usec): min=6, max=1293K, avg=327760.97, stdev=310114.02
    bw (KiB/s) : min=    3, max=   57, per=3.11%, avg=12.73, stdev=10.71
  cpu          : usr=0.00%, sys=0.02%, ctx=85, majf=0, minf=112
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=92/0, short=0/0
     lat (usec): 10=7.61%, 20=9.78%
     lat (msec): 50=5.43%, 100=10.87%, 250=15.22%, 500=22.83%, 750=16.30%
     lat (msec): 1000=8.70%, 2000=3.26%
randomreadseqwrites5.19: (groupid=6, jobs=1): err= 0: pid=3928
  read : io=336KiB, bw=11KiB/s, iops=2, runt= 30245msec
    clat (usec): min=6, max=1207K, avg=360038.65, stdev=339188.79
    bw (KiB/s) : min=    3, max=   35, per=2.76%, avg=11.29, stdev= 7.91
  cpu          : usr=0.00%, sys=0.01%, ctx=81, majf=0, minf=98
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=84/0, short=0/0
     lat (usec): 10=3.57%, 20=14.29%, 100=1.19%
     lat (msec): 50=2.38%, 100=11.90%, 250=15.48%, 500=17.86%, 750=16.67%
     lat (msec): 1000=10.71%, 2000=5.95%
randomreadseqwrites5.20: (groupid=6, jobs=1): err= 0: pid=3929
  read : io=352KiB, bw=11KiB/s, iops=2, runt= 30225msec
    clat (usec): min=5, max=1199K, avg=343441.44, stdev=301356.39
    bw (KiB/s) : min=    3, max=   48, per=3.11%, avg=12.74, stdev=10.79
  cpu          : usr=0.00%, sys=0.01%, ctx=89, majf=0, minf=114
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=88/0, short=0/0
     lat (usec): 10=3.41%, 20=7.95%
     lat (msec): 50=6.82%, 100=10.23%, 250=19.32%, 500=23.86%, 750=15.91%
     lat (msec): 1000=9.09%, 2000=3.41%
randomreadseqwrites5.21: (groupid=6, jobs=1): err= 0: pid=3930
  read : io=368KiB, bw=12KiB/s, iops=3, runt= 30004msec
    clat (usec): min=5, max=1229K, avg=326103.54, stdev=286861.54
    bw (KiB/s) : min=    5, max=   49, per=3.05%, avg=12.49, stdev= 9.25
  cpu          : usr=0.00%, sys=0.00%, ctx=92, majf=0, minf=106
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=92/0, short=0/0
     lat (usec): 10=6.52%, 20=8.70%, 50=1.09%
     lat (msec): 50=3.26%, 100=9.78%, 250=18.48%, 500=23.91%, 750=18.48%
     lat (msec): 1000=7.61%, 2000=2.17%
randomreadseqwrites5.22: (groupid=6, jobs=1): err= 0: pid=3931
  read : io=376KiB, bw=12KiB/s, iops=3, runt= 30047msec
    clat (usec): min=5, max=1313K, avg=319621.35, stdev=304905.19
    bw (KiB/s) : min=    3, max=   35, per=3.03%, avg=12.38, stdev= 7.93
  cpu          : usr=0.01%, sys=0.01%, ctx=91, majf=0, minf=116
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=94/0, short=0/0
     lat (usec): 10=5.32%, 20=10.64%
     lat (msec): 50=8.51%, 100=9.57%, 250=19.15%, 500=19.15%, 750=20.21%
     lat (msec): 1000=3.19%, 2000=4.26%
randomreadseqwrites5.23: (groupid=6, jobs=1): err= 0: pid=3932
  read : io=376KiB, bw=12KiB/s, iops=3, runt= 30122msec
    clat (usec): min=7, max=1221K, avg=320423.87, stdev=304694.86
    bw (KiB/s) : min=    4, max=   44, per=3.24%, avg=13.24, stdev=10.13
  cpu          : usr=0.00%, sys=0.02%, ctx=90, majf=0, minf=111
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=94/0, short=0/0
     lat (usec): 10=2.13%, 20=11.70%, 50=3.19%
     lat (msec): 50=6.38%, 100=11.70%, 250=19.15%, 500=19.15%, 750=14.89%
     lat (msec): 1000=9.57%, 2000=2.13%
randomreadseqwrites5.24: (groupid=6, jobs=1): err= 0: pid=3933
  read : io=340KiB, bw=11KiB/s, iops=2, runt= 30189msec
    clat (usec): min=7, max=1224K, avg=355138.34, stdev=333841.35
    bw (KiB/s) : min=    3, max=   40, per=2.91%, avg=11.92, stdev= 8.39
  cpu          : usr=0.00%, sys=0.01%, ctx=88, majf=0, minf=91
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=85/0, short=0/0
     lat (usec): 10=2.35%, 20=9.41%, 50=1.18%
     lat (msec): 50=4.71%, 100=14.12%, 250=21.18%, 500=15.29%, 750=17.65%
     lat (msec): 1000=8.24%, 2000=5.88%
randomreadseqwrites5.25: (groupid=6, jobs=1): err= 0: pid=3934
  read : io=388KiB, bw=13KiB/s, iops=3, runt= 30192msec
    clat (usec): min=6, max=1201K, avg=311231.60, stdev=272368.83
    bw (KiB/s) : min=    4, max=   37, per=3.24%, avg=13.26, stdev= 8.65
  cpu          : usr=0.00%, sys=0.01%, ctx=97, majf=0, minf=118
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=97/0, short=0/0
     lat (usec): 10=3.09%, 20=10.31%, 100=1.03%
     lat (msec): 50=2.06%, 100=11.34%, 250=21.65%, 500=26.80%, 750=14.43%
     lat (msec): 1000=8.25%, 2000=1.03%
randomreadseqwrites5.26: (groupid=6, jobs=1): err= 0: pid=3935
  read : io=360KiB, bw=12KiB/s, iops=2, runt= 30072msec
    clat (usec): min=5, max=1385K, avg=334118.56, stdev=309142.51
    bw (KiB/s) : min=    2, max=   40, per=3.13%, avg=12.82, stdev= 8.20
  cpu          : usr=0.00%, sys=0.01%, ctx=88, majf=0, minf=92
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=90/0, short=0/0
     lat (usec): 10=7.78%, 20=10.00%
     lat (msec): 50=3.33%, 100=8.89%, 250=18.89%, 500=22.22%, 750=18.89%
     lat (msec): 1000=5.56%, 2000=4.44%
randomreadseqwrites5.27: (groupid=6, jobs=1): err= 0: pid=3936
  read : io=328KiB, bw=11KiB/s, iops=2, runt= 30203msec
    clat (usec): min=7, max=1318K, avg=368299.17, stdev=284425.85
    bw (KiB/s) : min=    3, max=   26, per=2.67%, avg=10.92, stdev= 6.60
  cpu          : usr=0.00%, sys=0.01%, ctx=89, majf=0, minf=115
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=82/0, short=0/0
     lat (usec): 10=1.22%, 20=9.76%, 50=1.22%
     lat (msec): 50=2.44%, 100=8.54%, 250=17.07%, 500=29.27%, 750=21.95%
     lat (msec): 1000=6.10%, 2000=2.44%
randomreadseqwrites5.28: (groupid=6, jobs=1): err= 0: pid=3937
  read : io=344KiB, bw=11KiB/s, iops=2, runt= 30080msec
    clat (usec): min=7, max=1052K, avg=349743.62, stdev=290531.04
    bw (KiB/s) : min=    3, max=   35, per=2.83%, avg=11.59, stdev= 6.84
  cpu          : usr=0.00%, sys=0.00%, ctx=86, majf=0, minf=136
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=86/0, short=0/0
     lat (usec): 10=4.65%, 20=8.14%, 50=2.33%
     lat (msec): 50=8.14%, 100=6.98%, 250=15.12%, 500=23.26%, 750=19.77%
     lat (msec): 1000=10.47%, 2000=1.16%
randomreadseqwrites5.29: (groupid=6, jobs=1): err= 0: pid=3938
  read : io=368KiB, bw=12KiB/s, iops=3, runt= 30107msec
    clat (usec): min=9, max=1232K, avg=327229.96, stdev=310066.23
    bw (KiB/s) : min=    3, max=   40, per=3.17%, avg=12.95, stdev= 8.47
  cpu          : usr=0.00%, sys=0.02%, ctx=90, majf=0, minf=119
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=92/0, short=0/0
     lat (usec): 10=1.09%, 20=8.70%, 50=2.17%
     lat (msec): 50=7.61%, 100=13.04%, 250=19.57%, 500=22.83%, 750=15.22%
     lat (msec): 1000=5.43%, 2000=4.35%
randomreadseqwrites5.30: (groupid=6, jobs=1): err= 0: pid=3939
  read : io=388KiB, bw=13KiB/s, iops=3, runt= 30141msec
    clat (usec): min=7, max=1296K, avg=310705.55, stdev=320239.44
    bw (KiB/s) : min=    3, max=   39, per=3.44%, avg=14.08, stdev=10.17
  cpu          : usr=0.00%, sys=0.01%, ctx=97, majf=0, minf=130
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=97/0, short=0/0
     lat (usec): 10=3.09%, 20=10.31%, 50=1.03%
     lat (msec): 50=5.15%, 100=14.43%, 250=25.77%, 500=17.53%, 750=11.34%
     lat (msec): 1000=4.12%, 2000=7.22%
randomreadseqwrites5.31: (groupid=6, jobs=1): err= 0: pid=3940
  read : io=364KiB, bw=12KiB/s, iops=3, runt= 30135msec
    clat (usec): min=6, max=1368K, avg=331133.96, stdev=320957.51
    bw (KiB/s) : min=    4, max=   48, per=3.20%, avg=13.09, stdev= 9.94
  cpu          : usr=0.00%, sys=0.01%, ctx=93, majf=0, minf=114
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=91/0, short=0/0
     lat (usec): 10=4.40%, 20=9.89%
     lat (msec): 20=1.10%, 50=5.49%, 100=9.89%, 250=19.78%, 500=26.37%
     lat (msec): 750=8.79%, 1000=9.89%, 2000=4.40%
randomreadseqwrites5.32: (groupid=6, jobs=1): err= 0: pid=3941
  read : io=376KiB, bw=12KiB/s, iops=3, runt= 30088msec
    clat (usec): min=8, max=1152K, avg=320068.44, stdev=290175.95
    bw (KiB/s) : min=    3, max=   35, per=3.00%, avg=12.26, stdev= 7.39
  cpu          : usr=0.00%, sys=0.00%, ctx=97, majf=0, minf=107
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=94/0, short=0/0
     lat (usec): 10=1.06%, 20=11.70%
     lat (msec): 50=5.32%, 100=8.51%, 250=24.47%, 500=21.28%, 750=18.09%
     lat (msec): 1000=7.45%, 2000=2.13%

Run status group 0 (all jobs):
   READ: io=1024MiB, aggrb=35956KiB/s, minb=35956KiB/s, maxb=35956KiB/s, mint=29862msec, maxt=29862msec

Run status group 1 (all jobs):
  WRITE: io=1024MiB, aggrb=31493KiB/s, minb=31493KiB/s, maxb=31493KiB/s, mint=34094msec, maxt=34094msec

Run status group 2 (all jobs):
   READ: io=516648KiB, aggrb=17609KiB/s, minb=8807KiB/s, maxb=8812KiB/s, mint=30008msec, maxt=30043msec

Run status group 3 (all jobs):
  WRITE: io=1022MiB, aggrb=30723KiB/s, minb=15543KiB/s, maxb=15603KiB/s, mint=33924msec, maxt=34875msec

Run status group 4 (all jobs):
   READ: io=10804KiB, aggrb=368KiB/s, minb=183KiB/s, maxb=185KiB/s, mint=30008msec, maxt=30021msec

Run status group 5 (all jobs):
   READ: io=5652KiB, aggrb=192KiB/s, minb=47KiB/s, maxb=48KiB/s, mint=30083msec, maxt=30111msec
  WRITE: io=528788KiB, aggrb=17912KiB/s, minb=17912KiB/s, maxb=17912KiB/s, mint=30229msec, maxt=30229msec

Run status group 6 (all jobs):
   READ: io=12164KiB, aggrb=409KiB/s, minb=10KiB/s, maxb=15KiB/s, mint=30004msec, maxt=30396msec
  WRITE: io=694220KiB, aggrb=22173KiB/s, minb=22173KiB/s, maxb=22173KiB/s, mint=32060msec, maxt=32060msec

Disk stats (read/write):
  sda: ios=20733/26752, merge=384/804350, ticks=1490733/20829071, in_queue=22334561, util=99.07%
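
Note on reading the fio reports: the per= column is each job's average bandwidth
expressed as a share of its group's aggregate bandwidth. For example, a group 6
reader above averaging 12.74 KiB/s against the group's aggrb=409KiB/s shows
per=3.11%, i.e. 12.74/409 = 3.11%.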

[-- Attachment #6: deadline-iosched-patched.3 --]
[-- Type: application/octet-stream, Size: 40743 bytes --]

seqread: (g=0): rw=read, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
seqwrite: (g=1): rw=write, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
parread.0: (g=2): rw=read, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
parread.1: (g=2): rw=read, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
parwrite.0: (g=3): rw=write, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
parwrite.1: (g=3): rw=write, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomread2.0: (g=4): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomread2.1: (g=4): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites4.w: (g=5): rw=write, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites4.0: (g=5): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites4.1: (g=5): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites4.2: (g=5): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites4.3: (g=5): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.w: (g=6): rw=write, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.0: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.1: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.2: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.3: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.4: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.5: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.6: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.7: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.8: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.9: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.10: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.11: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.12: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.13: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.14: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.15: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.16: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.17: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.18: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.19: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.20: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.21: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.22: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.23: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.24: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.25: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.26: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.27: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.28: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.29: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.30: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.31: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.32: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
Starting 47 processes
parwrite.0: Laying out IO file(s) (1 file(s) / 1024MiB)
parwrite.1: Laying out IO file(s) (1 file(s) / 1024MiB)
randomreadseqwrites4.w: Laying out IO file(s) (1 file(s) / 2048MiB)
randomreadseqwrites5.w: Laying out IO file(s) (1 file(s) / 2048MiB)
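
For reference, the job file driving these 47 processes was not attached. A minimal
fio job file reproducing the group layout above would look roughly like the sketch
below: job names, group boundaries, bs, ioengine, iodepth and file sizes are read
off the headers, while the stonewall placement, the filenames and the ~30s runtime
are guesses.

; hypothetical reconstruction -- not the original job file
[global]
bs=4k
ioengine=psync
iodepth=1
runtime=30

[seqread]
rw=read
size=1024m

[seqwrite]
stonewall
rw=write
size=1024m

[parread.0]
stonewall
rw=read
size=1024m

[parread.1]
rw=read
size=1024m

[parwrite.0]
stonewall
rw=write
size=1024m

[parwrite.1]
rw=write
size=1024m

[randomread2.0]
stonewall
rw=randread
size=1024m

[randomread2.1]
rw=randread
size=1024m

; group 5: one sequential writer plus 4 random readers on a 2 GiB file
[randomreadseqwrites4.w]
stonewall
rw=write
filename=rrsw4.dat
size=2048m

; reader sections .0 through .3 are identical
[randomreadseqwrites4.0]
rw=randread
filename=rrsw4.dat
size=2048m

; group 6: one sequential writer plus 33 random readers on a 2 GiB file
[randomreadseqwrites5.w]
stonewall
rw=write
filename=rrsw5.dat
size=2048m

; reader sections .0 through .32 are identical
[randomreadseqwrites5.0]
rw=randread
filename=rrsw5.dat
size=2048m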

seqread: (groupid=0, jobs=1): err= 0: pid=4028
  read : io=1024MiB, bw=36150KiB/s, iops=8825, runt= 29702msec
    clat (usec): min=2, max=27851, avg=112.05, stdev=664.29
    bw (KiB/s) : min=32939, max=37208, per=100.10%, avg=36185.14, stdev=797.77
  cpu          : usr=0.98%, sys=4.43%, ctx=8005, majf=0, minf=17
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=262144/0, short=0/0
     lat (usec): 4=77.39%, 10=19.18%, 20=0.28%, 50=0.02%, 100=0.01%
     lat (usec): 250=0.10%, 500=0.01%, 750=0.01%, 1000=0.01%
     lat (msec): 2=0.08%, 4=1.82%, 10=1.12%, 20=0.01%, 50=0.01%
seqwrite: (groupid=1, jobs=1): err= 0: pid=4029
  write: io=898580KiB, bw=26806KiB/s, iops=6544, runt= 34325msec
    clat (usec): min=7, max=4035K, avg=132.53, stdev=9208.81
  cpu          : usr=1.09%, sys=7.57%, ctx=912, majf=0, minf=313
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=0/224645, short=0/0
     lat (usec): 10=50.72%, 20=47.73%, 50=0.88%, 100=0.05%, 250=0.41%
     lat (usec): 500=0.01%, 750=0.01%, 1000=0.07%
     lat (msec): 2=0.01%, 4=0.02%, 10=0.01%, 20=0.01%, 50=0.01%
     lat (msec): 100=0.06%, 250=0.04%, 500=0.01%, >=2000=0.01%
parread.0: (groupid=2, jobs=1): err= 0: pid=4032
  read : io=284660KiB, bw=9707KiB/s, iops=2370, runt= 30027msec
    clat (usec): min=2, max=62975, avg=420.66, stdev=3599.00
    bw (KiB/s) : min= 7777, max=10747, per=50.10%, avg=9720.55, stdev=621.97
  cpu          : usr=0.30%, sys=1.17%, ctx=2177, majf=0, minf=19
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=71165/0, short=0/0
     lat (usec): 4=78.74%, 10=17.85%, 20=0.25%, 50=0.02%, 100=0.05%
     lat (usec): 250=0.05%, 1000=0.01%
     lat (msec): 2=0.07%, 4=1.23%, 10=0.85%, 20=0.01%, 50=0.86%
     lat (msec): 100=0.02%
parread.1: (groupid=2, jobs=1): err= 0: pid=4033
  read : io=284276KiB, bw=9702KiB/s, iops=2368, runt= 30003msec
    clat (usec): min=2, max=67393, avg=420.78, stdev=3617.21
    bw (KiB/s) : min= 7792, max=10683, per=50.06%, avg=9712.55, stdev=622.10
  cpu          : usr=0.28%, sys=1.20%, ctx=2172, majf=0, minf=19
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=71069/0, short=0/0
     lat (usec): 4=68.78%, 10=27.78%, 20=0.29%, 50=0.01%, 100=0.06%
     lat (usec): 250=0.05%
     lat (msec): 2=0.09%, 4=1.18%, 10=0.88%, 20=0.02%, 50=0.86%
     lat (msec): 100=0.01%
parwrite.0: (groupid=3, jobs=1): err= 0: pid=4034
  write: io=489540KiB, bw=14423KiB/s, iops=3521, runt= 34754msec
    clat (usec): min=15, max=2639K, avg=262.22, stdev=13412.45
  cpu          : usr=0.97%, sys=9.57%, ctx=438, majf=0, minf=155
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=0/122385, short=0/0
     lat (usec): 20=43.23%, 50=55.48%, 100=0.65%, 250=0.50%, 500=0.01%
     lat (usec): 1000=0.01%
     lat (msec): 2=0.03%, 4=0.03%, 10=0.01%, 20=0.01%, 50=0.01%
     lat (msec): 100=0.01%, 250=0.01%, 500=0.04%, 750=0.01%, 1000=0.01%
     lat (msec): 2000=0.01%, >=2000=0.01%
parwrite.1: (groupid=3, jobs=1): err= 0: pid=4035
  write: io=565920KiB, bw=16783KiB/s, iops=4097, runt= 34528msec
    clat (usec): min=15, max=2219K, avg=209.87, stdev=10337.68
  cpu          : usr=1.06%, sys=10.90%, ctx=524, majf=0, minf=158
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=0/141480, short=0/0
     lat (usec): 20=48.10%, 50=50.73%, 100=0.59%, 250=0.48%, 500=0.01%
     lat (usec): 750=0.01%, 1000=0.01%
     lat (msec): 2=0.03%, 4=0.02%, 10=0.01%, 20=0.01%, 100=0.01%
     lat (msec): 250=0.01%, 500=0.04%, 750=0.01%, 1000=0.01%, 2000=0.01%
     lat (msec): >=2000=0.01%
randomread2.0: (groupid=4, jobs=1): err= 0: pid=4042
  read : io=5216KiB, bw=178KiB/s, iops=43, runt= 30003msec
    clat (usec): min=4, max=111125, avg=23004.09, stdev=11431.51
    bw (KiB/s) : min=   84, max=  241, per=50.35%, avg=177.74, stdev=37.08
  cpu          : usr=0.01%, sys=0.06%, ctx=1423, majf=0, minf=76
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=1304/0, short=0/0
     lat (usec): 10=0.23%
     lat (msec): 10=0.77%, 20=53.37%, 50=42.48%, 100=2.99%, 250=0.15%
randomread2.1: (groupid=4, jobs=1): err= 0: pid=4043
  read : io=5152KiB, bw=175KiB/s, iops=42, runt= 30009msec
    clat (usec): min=5, max=156650, avg=23294.48, stdev=11872.53
    bw (KiB/s) : min=   80, max=  248, per=49.76%, avg=175.64, stdev=37.75
  cpu          : usr=0.01%, sys=0.10%, ctx=1424, majf=0, minf=76
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=1288/0, short=0/0
     lat (usec): 10=0.23%
     lat (msec): 10=0.39%, 20=51.40%, 50=44.95%, 100=2.95%, 250=0.08%
randomreadseqwrites4.w: (groupid=5, jobs=1): err= 0: pid=4044
  write: io=549972KiB, bw=18767KiB/s, iops=4581, runt= 30008msec
    clat (usec): min=15, max=1525K, avg=216.50, stdev=11664.48
  cpu          : usr=0.76%, sys=9.39%, ctx=302, majf=0, minf=143
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=0/137493, short=0/0
     lat (usec): 20=80.85%, 50=18.72%, 100=0.09%, 250=0.28%, 500=0.01%
     lat (usec): 1000=0.01%
     lat (msec): 2=0.01%, 4=0.01%, 100=0.01%, 250=0.01%, 500=0.03%
     lat (msec): 750=0.01%, 2000=0.01%
randomreadseqwrites4.0: (groupid=5, jobs=1): err= 0: pid=4045
  read : io=1440KiB, bw=49KiB/s, iops=11, runt= 30079msec
    clat (usec): min=7, max=441349, avg=83531.32, stdev=53566.03
    bw (KiB/s) : min=   27, max=  104, per=25.09%, avg=48.92, stdev=11.95
  cpu          : usr=0.02%, sys=0.05%, ctx=358, majf=0, minf=697
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=360/0, short=0/0
     lat (usec): 10=0.28%, 20=0.28%
     lat (msec): 50=44.44%, 100=10.00%, 250=43.89%, 500=1.11%
randomreadseqwrites4.1: (groupid=5, jobs=1): err= 0: pid=4046
  read : io=1428KiB, bw=48KiB/s, iops=11, runt= 30071msec
    clat (msec): min=23, max=418, avg=84.21, stdev=53.42
    bw (KiB/s) : min=   22, max=  106, per=24.73%, avg=48.23, stdev=11.85
  cpu          : usr=0.01%, sys=0.04%, ctx=357, majf=0, minf=693
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=357/0, short=0/0
     lat (msec): 50=44.26%, 100=12.61%, 250=42.30%, 500=0.84%
randomreadseqwrites4.2: (groupid=5, jobs=1): err= 0: pid=4047
  read : io=1456KiB, bw=49KiB/s, iops=12, runt= 30059msec
    clat (usec): min=5, max=396442, avg=82555.90, stdev=53955.47
    bw (KiB/s) : min=   29, max=  100, per=25.35%, avg=49.43, stdev=10.74
  cpu          : usr=0.01%, sys=0.04%, ctx=360, majf=0, minf=707
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=364/0, short=0/0
     lat (usec): 10=1.37%
     lat (msec): 20=0.27%, 50=44.23%, 100=9.62%, 250=43.68%, 500=0.82%
randomreadseqwrites4.3: (groupid=5, jobs=1): err= 0: pid=4048
  read : io=1412KiB, bw=48KiB/s, iops=11, runt= 30054msec
    clat (usec): min=5, max=414928, avg=85116.96, stdev=53743.19
    bw (KiB/s) : min=   28, max=  102, per=24.66%, avg=48.10, stdev=11.02
  cpu          : usr=0.03%, sys=0.04%, ctx=355, majf=0, minf=684
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=353/0, short=0/0
     lat (usec): 10=0.28%
     lat (msec): 50=43.91%, 100=8.78%, 250=45.89%, 500=1.13%
randomreadseqwrites5.w: (groupid=6, jobs=1): err= 0: pid=4054
  write: io=1006MiB, bw=35168KiB/s, iops=8585, runt= 30000msec
    clat (usec): min=5, max=4910K, avg=114.70, stdev=14041.51
  cpu          : usr=1.58%, sys=13.48%, ctx=503, majf=0, minf=178
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=0/257579, short=0/0
     lat (usec): 10=20.72%, 20=69.96%, 50=8.93%, 100=0.10%, 250=0.24%
     lat (usec): 500=0.01%, 750=0.01%, 1000=0.01%
     lat (msec): 2=0.01%, 4=0.01%, 10=0.01%, 250=0.03%, 500=0.01%
     lat (msec): 2000=0.01%, >=2000=0.01%
randomreadseqwrites5.0: (groupid=6, jobs=1): err= 0: pid=4055
  read : io=416KiB, bw=14KiB/s, iops=3, runt= 30195msec
    clat (usec): min=6, max=1292K, avg=290322.21, stdev=248848.78
    bw (KiB/s) : min=    4, max=   33, per=3.10%, avg=14.43, stdev= 7.81
  cpu          : usr=0.00%, sys=0.01%, ctx=100, majf=0, minf=89
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=104/0, short=0/0
     lat (usec): 10=4.81%, 20=10.58%, 50=0.96%
     lat (msec): 50=3.85%, 100=8.65%, 250=21.15%, 500=29.81%, 750=17.31%
     lat (msec): 1000=1.92%, 2000=0.96%
randomreadseqwrites5.1: (groupid=6, jobs=1): err= 0: pid=4056
  read : io=352KiB, bw=11KiB/s, iops=2, runt= 30104msec
    clat (usec): min=8, max=1566K, avg=342070.44, stdev=289631.25
    bw (KiB/s) : min=    2, max=   38, per=2.65%, avg=12.36, stdev= 8.17
  cpu          : usr=0.01%, sys=0.02%, ctx=96, majf=0, minf=76
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=88/0, short=0/0
     lat (usec): 10=3.41%, 20=4.55%
     lat (msec): 50=5.68%, 100=9.09%, 250=27.27%, 500=22.73%, 750=21.59%
     lat (msec): 1000=3.41%, 2000=2.27%
randomreadseqwrites5.2: (groupid=6, jobs=1): err= 0: pid=4057
  read : io=416KiB, bw=14KiB/s, iops=3, runt= 30243msec
    clat (usec): min=6, max=1036K, avg=290785.73, stdev=271481.42
    bw (KiB/s) : min=    4, max=   34, per=3.02%, avg=14.08, stdev= 7.69
  cpu          : usr=0.01%, sys=0.00%, ctx=96, majf=0, minf=94
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=104/0, short=0/0
     lat (usec): 10=6.73%, 20=13.46%
     lat (msec): 50=5.77%, 100=6.73%, 250=19.23%, 500=25.00%, 750=16.35%
     lat (msec): 1000=4.81%, 2000=1.92%
randomreadseqwrites5.3: (groupid=6, jobs=1): err= 0: pid=4058
  read : io=400KiB, bw=13KiB/s, iops=3, runt= 30361msec
    clat (usec): min=6, max=966393, avg=303595.46, stdev=267616.19
    bw (KiB/s) : min=    4, max=   36, per=2.95%, avg=13.72, stdev= 8.67
  cpu          : usr=0.00%, sys=0.01%, ctx=100, majf=0, minf=78
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=100/0, short=0/0
     lat (usec): 10=6.00%, 20=13.00%
     lat (msec): 50=6.00%, 100=7.00%, 250=20.00%, 500=21.00%, 750=20.00%
     lat (msec): 1000=7.00%
randomreadseqwrites5.4: (groupid=6, jobs=1): err= 0: pid=4059
  read : io=452KiB, bw=15KiB/s, iops=3, runt= 30366msec
    clat (usec): min=6, max=1301K, avg=268709.80, stdev=263169.21
    bw (KiB/s) : min=    3, max=   56, per=3.38%, avg=15.75, stdev=10.54
  cpu          : usr=0.00%, sys=0.02%, ctx=109, majf=0, minf=84
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=113/0, short=0/0
     lat (usec): 10=3.54%, 20=12.39%, 50=0.88%
     lat (msec): 50=2.65%, 100=11.50%, 250=30.09%, 500=23.01%, 750=10.62%
     lat (msec): 1000=3.54%, 2000=1.77%
randomreadseqwrites5.5: (groupid=6, jobs=1): err= 0: pid=4060
  read : io=560KiB, bw=19KiB/s, iops=4, runt= 30110msec
    clat (usec): min=7, max=908097, avg=215056.74, stdev=211828.74
    bw (KiB/s) : min=    5, max=   50, per=4.24%, avg=19.76, stdev=11.30
  cpu          : usr=0.00%, sys=0.01%, ctx=130, majf=0, minf=105
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=140/0, short=0/0
     lat (usec): 10=4.29%, 20=13.57%, 50=1.43%
     lat (msec): 50=6.43%, 100=12.86%, 250=30.71%, 500=18.57%, 750=10.71%
     lat (msec): 1000=1.43%
randomreadseqwrites5.6: (groupid=6, jobs=1): err= 0: pid=4061
  read : io=372KiB, bw=12KiB/s, iops=3, runt= 30391msec
    clat (usec): min=7, max=1074K, avg=326768.32, stdev=272867.49
    bw (KiB/s) : min=    4, max=   34, per=2.67%, avg=12.43, stdev= 6.65
  cpu          : usr=0.00%, sys=0.02%, ctx=95, majf=0, minf=71
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=93/0, short=0/0
     lat (usec): 10=2.15%, 20=9.68%
     lat (msec): 50=5.38%, 100=10.75%, 250=21.51%, 500=23.66%, 750=19.35%
     lat (msec): 1000=6.45%, 2000=1.08%
randomreadseqwrites5.7: (groupid=6, jobs=1): err= 0: pid=4062
  read : io=412KiB, bw=14KiB/s, iops=3, runt= 30089msec
    clat (usec): min=8, max=1167K, avg=292114.01, stdev=288875.81
    bw (KiB/s) : min=    3, max=   41, per=3.22%, avg=15.03, stdev=10.24
  cpu          : usr=0.00%, sys=0.01%, ctx=103, majf=0, minf=80
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=103/0, short=0/0
     lat (usec): 10=5.83%, 20=9.71%
     lat (msec): 50=4.85%, 100=13.59%, 250=20.39%, 500=22.33%, 750=14.56%
     lat (msec): 1000=4.85%, 2000=3.88%
randomreadseqwrites5.8: (groupid=6, jobs=1): err= 0: pid=4063
  read : io=412KiB, bw=14KiB/s, iops=3, runt= 30085msec
    clat (usec): min=6, max=1087K, avg=292071.97, stdev=254636.00
    bw (KiB/s) : min=    3, max=   40, per=3.05%, avg=14.23, stdev= 8.24
  cpu          : usr=0.00%, sys=0.01%, ctx=106, majf=0, minf=86
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=103/0, short=0/0
     lat (usec): 10=4.85%, 20=7.77%
     lat (msec): 50=4.85%, 100=13.59%, 250=19.42%, 500=30.10%, 750=12.62%
     lat (msec): 1000=5.83%, 2000=0.97%
randomreadseqwrites5.9: (groupid=6, jobs=1): err= 0: pid=4064
  read : io=460KiB, bw=15KiB/s, iops=3, runt= 30261msec
    clat (usec): min=7, max=1043K, avg=263129.09, stdev=244731.00
    bw (KiB/s) : min=    5, max=   57, per=3.55%, avg=16.56, stdev=12.78
  cpu          : usr=0.00%, sys=0.00%, ctx=112, majf=0, minf=86
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=115/0, short=0/0
     lat (usec): 10=4.35%, 20=12.17%
     lat (msec): 50=4.35%, 100=11.30%, 250=24.35%, 500=26.09%, 750=13.04%
     lat (msec): 1000=3.48%, 2000=0.87%
randomreadseqwrites5.10: (groupid=6, jobs=1): err= 0: pid=4065
  read : io=440KiB, bw=14KiB/s, iops=3, runt= 30100msec
    clat (usec): min=6, max=864529, avg=273617.33, stdev=243272.52
    bw (KiB/s) : min=    4, max=   40, per=3.28%, avg=15.29, stdev= 9.33
  cpu          : usr=0.00%, sys=0.00%, ctx=109, majf=0, minf=83
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=110/0, short=0/0
     lat (usec): 10=4.55%, 20=9.09%, 50=0.91%
     lat (msec): 50=9.09%, 100=8.18%, 250=21.82%, 500=23.64%, 750=20.00%
     lat (msec): 1000=2.73%
randomreadseqwrites5.11: (groupid=6, jobs=1): err= 0: pid=4066
  read : io=460KiB, bw=15KiB/s, iops=3, runt= 30426msec
    clat (usec): min=7, max=1180K, avg=264560.63, stdev=219338.21
    bw (KiB/s) : min=    6, max=   33, per=3.27%, avg=15.24, stdev= 6.63
  cpu          : usr=0.00%, sys=0.01%, ctx=112, majf=0, minf=90
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=115/0, short=0/0
     lat (usec): 10=6.09%, 20=7.83%, 50=0.87%
     lat (msec): 50=3.48%, 100=11.30%, 250=21.74%, 500=32.17%, 750=15.65%
     lat (msec): 2000=0.87%
randomreadseqwrites5.12: (groupid=6, jobs=1): err= 0: pid=4067
  read : io=392KiB, bw=13KiB/s, iops=3, runt= 30400msec
    clat (usec): min=8, max=1130K, avg=310192.22, stdev=268287.73
    bw (KiB/s) : min=    4, max=   46, per=2.85%, avg=13.26, stdev=10.06
  cpu          : usr=0.00%, sys=0.00%, ctx=105, majf=0, minf=87
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=98/0, short=0/0
     lat (usec): 10=4.08%, 20=5.10%, 50=1.02%
     lat (msec): 50=7.14%, 100=10.20%, 250=27.55%, 500=19.39%, 750=17.35%
     lat (msec): 1000=7.14%, 2000=1.02%
randomreadseqwrites5.13: (groupid=6, jobs=1): err= 0: pid=4068
  read : io=424KiB, bw=14KiB/s, iops=3, runt= 30231msec
    clat (usec): min=8, max=1127K, avg=285182.15, stdev=239391.58
    bw (KiB/s) : min=    5, max=   35, per=3.13%, avg=14.59, stdev= 8.16
  cpu          : usr=0.00%, sys=0.01%, ctx=106, majf=0, minf=89
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=106/0, short=0/0
     lat (usec): 10=2.83%, 20=11.32%, 50=0.94%
     lat (msec): 50=3.77%, 100=6.60%, 250=25.47%, 500=29.25%, 750=15.09%
     lat (msec): 1000=3.77%, 2000=0.94%
randomreadseqwrites5.14: (groupid=6, jobs=1): err= 0: pid=4069
  read : io=380KiB, bw=12KiB/s, iops=3, runt= 30447msec
    clat (usec): min=6, max=1367K, avg=320482.51, stdev=275598.28
    bw (KiB/s) : min=    4, max=   42, per=2.73%, avg=12.71, stdev= 7.17
  cpu          : usr=0.00%, sys=0.01%, ctx=99, majf=0, minf=81
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=95/0, short=0/0
     lat (usec): 10=5.26%, 20=8.42%, 50=1.05%
     lat (msec): 50=5.26%, 100=4.21%, 250=24.21%, 500=26.32%, 750=17.89%
     lat (msec): 1000=6.32%, 2000=1.05%
randomreadseqwrites5.15: (groupid=6, jobs=1): err= 0: pid=4070
  read : io=464KiB, bw=15KiB/s, iops=3, runt= 30171msec
    clat (usec): min=6, max=1051K, avg=260083.95, stdev=270122.46
    bw (KiB/s) : min=    4, max=   43, per=3.43%, avg=15.97, stdev= 9.56
  cpu          : usr=0.00%, sys=0.00%, ctx=113, majf=0, minf=98
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=116/0, short=0/0
     lat (usec): 10=5.17%, 20=10.34%, 50=0.86%
     lat (msec): 50=10.34%, 100=6.90%, 250=25.86%, 500=24.14%, 750=6.90%
     lat (msec): 1000=6.90%, 2000=2.59%
randomreadseqwrites5.16: (groupid=6, jobs=1): err= 0: pid=4071
  read : io=388KiB, bw=13KiB/s, iops=3, runt= 30254msec
    clat (usec): min=6, max=983838, avg=311883.86, stdev=264207.22
    bw (KiB/s) : min=    4, max=   37, per=2.85%, avg=13.26, stdev= 8.06
  cpu          : usr=0.00%, sys=0.02%, ctx=97, majf=0, minf=73
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=97/0, short=0/0
     lat (usec): 10=4.12%, 20=10.31%
     lat (msec): 50=6.19%, 100=9.28%, 250=15.46%, 500=29.90%, 750=16.49%
     lat (msec): 1000=8.25%
randomreadseqwrites5.17: (groupid=6, jobs=1): err= 0: pid=4072
  read : io=464KiB, bw=15KiB/s, iops=3, runt= 30055msec
    clat (usec): min=8, max=1291K, avg=259076.79, stdev=242813.51
    bw (KiB/s) : min=    4, max=   40, per=3.39%, avg=15.80, stdev= 8.42
  cpu          : usr=0.00%, sys=0.01%, ctx=115, majf=0, minf=89
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=116/0, short=0/0
     lat (usec): 10=3.45%, 20=6.03%, 100=0.86%
     lat (msec): 50=6.90%, 100=15.52%, 250=25.86%, 500=23.28%, 750=12.93%
     lat (msec): 1000=4.31%, 2000=0.86%
randomreadseqwrites5.18: (groupid=6, jobs=1): err= 0: pid=4073
  read : io=400KiB, bw=13KiB/s, iops=3, runt= 30081msec
    clat (usec): min=6, max=1132K, avg=300791.26, stdev=290351.71
    bw (KiB/s) : min=    3, max=   78, per=2.98%, avg=13.87, stdev=12.84
  cpu          : usr=0.00%, sys=0.01%, ctx=93, majf=0, minf=74
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=100/0, short=0/0
     lat (usec): 10=8.00%, 20=8.00%, 50=1.00%
     lat (msec): 50=4.00%, 100=11.00%, 250=22.00%, 500=22.00%, 750=14.00%
     lat (msec): 1000=7.00%, 2000=3.00%
randomreadseqwrites5.19: (groupid=6, jobs=1): err= 0: pid=4074
  read : io=392KiB, bw=13KiB/s, iops=3, runt= 30100msec
    clat (usec): min=6, max=1238K, avg=307129.60, stdev=282656.88
    bw (KiB/s) : min=    5, max=   42, per=2.80%, avg=13.05, stdev= 7.66
  cpu          : usr=0.00%, sys=0.01%, ctx=91, majf=0, minf=75
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=98/0, short=0/0
     lat (usec): 10=7.14%, 20=10.20%, 50=2.04%
     lat (msec): 50=5.10%, 100=6.12%, 250=20.41%, 500=23.47%, 750=18.37%
     lat (msec): 1000=5.10%, 2000=2.04%
randomreadseqwrites5.20: (groupid=6, jobs=1): err= 0: pid=4075
  read : io=448KiB, bw=15KiB/s, iops=3, runt= 30297msec
    clat (usec): min=6, max=1342K, avg=270498.05, stdev=274906.74
    bw (KiB/s) : min=    3, max=   44, per=3.45%, avg=16.05, stdev= 9.94
  cpu          : usr=0.01%, sys=0.01%, ctx=111, majf=0, minf=78
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=112/0, short=0/0
     lat (usec): 10=5.36%, 20=8.93%
     lat (msec): 50=8.04%, 100=10.71%, 250=25.00%, 500=25.00%, 750=10.71%
     lat (msec): 1000=3.57%, 2000=2.68%
randomreadseqwrites5.21: (groupid=6, jobs=1): err= 0: pid=4076
  read : io=452KiB, bw=15KiB/s, iops=3, runt= 30255msec
    clat (usec): min=5, max=885182, avg=267729.25, stdev=239546.65
    bw (KiB/s) : min=    4, max=   44, per=3.28%, avg=15.31, stdev= 8.30
  cpu          : usr=0.00%, sys=0.01%, ctx=106, majf=0, minf=85
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=113/0, short=0/0
     lat (usec): 10=9.73%, 20=7.96%, 50=2.65%
     lat (msec): 50=2.65%, 100=12.39%, 250=15.93%, 500=30.09%, 750=14.16%
     lat (msec): 1000=4.42%
randomreadseqwrites5.22: (groupid=6, jobs=1): err= 0: pid=4077
  read : io=424KiB, bw=14KiB/s, iops=3, runt= 30010msec
    clat (usec): min=6, max=1417K, avg=283097.40, stdev=287325.95
    bw (KiB/s) : min=    2, max=   49, per=3.14%, avg=14.63, stdev=11.22
  cpu          : usr=0.00%, sys=0.01%, ctx=102, majf=0, minf=87
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=106/0, short=0/0
     lat (usec): 10=5.66%, 20=9.43%
     lat (msec): 50=9.43%, 100=15.09%, 250=16.04%, 500=20.75%, 750=16.98%
     lat (msec): 1000=4.72%, 2000=1.89%
randomreadseqwrites5.23: (groupid=6, jobs=1): err= 0: pid=4078
  read : io=408KiB, bw=13KiB/s, iops=3, runt= 30340msec
    clat (usec): min=6, max=1481K, avg=297433.69, stdev=288590.16
    bw (KiB/s) : min=    3, max=   36, per=3.05%, avg=14.22, stdev= 8.07
  cpu          : usr=0.01%, sys=0.01%, ctx=97, majf=0, minf=74
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=102/0, short=0/0
     lat (usec): 10=6.86%, 20=8.82%, 50=0.98%, 100=0.98%
     lat (msec): 50=3.92%, 100=7.84%, 250=23.53%, 500=23.53%, 750=17.65%
     lat (msec): 1000=3.92%, 2000=1.96%
randomreadseqwrites5.24: (groupid=6, jobs=1): err= 0: pid=4079
  read : io=380KiB, bw=12KiB/s, iops=3, runt= 30216msec
    clat (usec): min=5, max=1415K, avg=318046.51, stdev=303694.61
    bw (KiB/s) : min=    2, max=   38, per=2.81%, avg=13.11, stdev= 8.31
  cpu          : usr=0.01%, sys=0.01%, ctx=97, majf=0, minf=83
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=95/0, short=0/0
     lat (usec): 10=6.32%, 20=5.26%, 50=1.05%
     lat (msec): 50=6.32%, 100=14.74%, 250=18.95%, 500=22.11%, 750=16.84%
     lat (msec): 1000=5.26%, 2000=3.16%
randomreadseqwrites5.25: (groupid=6, jobs=1): err= 0: pid=4080
  read : io=448KiB, bw=15KiB/s, iops=3, runt= 30314msec
    clat (usec): min=5, max=1199K, avg=270643.38, stdev=241077.65
    bw (KiB/s) : min=    4, max=   46, per=3.28%, avg=15.31, stdev= 8.96
  cpu          : usr=0.00%, sys=0.01%, ctx=108, majf=0, minf=86
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=112/0, short=0/0
     lat (usec): 10=8.04%, 20=5.36%, 50=1.79%
     lat (msec): 50=3.57%, 100=8.04%, 250=28.57%, 500=28.57%, 750=12.50%
     lat (msec): 1000=1.79%, 2000=1.79%
randomreadseqwrites5.26: (groupid=6, jobs=1): err= 0: pid=4081
  read : io=396KiB, bw=13KiB/s, iops=3, runt= 30049msec
    clat (usec): min=6, max=1468K, avg=303512.39, stdev=307079.65
    bw (KiB/s) : min=    2, max=   35, per=2.99%, avg=13.92, stdev= 8.08
  cpu          : usr=0.00%, sys=0.01%, ctx=94, majf=0, minf=80
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=99/0, short=0/0
     lat (usec): 10=8.08%, 20=10.10%
     lat (msec): 50=2.02%, 100=11.11%, 250=23.23%, 500=24.24%, 750=13.13%
     lat (msec): 1000=4.04%, 2000=4.04%
randomreadseqwrites5.27: (groupid=6, jobs=1): err= 0: pid=4082
  read : io=388KiB, bw=13KiB/s, iops=3, runt= 30260msec
    clat (usec): min=7, max=1284K, avg=311946.13, stdev=255964.11
    bw (KiB/s) : min=    4, max=   46, per=2.96%, avg=13.82, stdev= 8.67
  cpu          : usr=0.00%, sys=0.01%, ctx=99, majf=0, minf=81
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=97/0, short=0/0
     lat (usec): 10=5.15%, 20=6.19%, 100=1.03%
     lat (msec): 50=1.03%, 100=11.34%, 250=22.68%, 500=31.96%, 750=13.40%
     lat (msec): 1000=6.19%, 2000=1.03%
randomreadseqwrites5.28: (groupid=6, jobs=1): err= 0: pid=4083
  read : io=484KiB, bw=16KiB/s, iops=3, runt= 30273msec
    clat (usec): min=4, max=1448K, avg=250177.86, stdev=268023.81
    bw (KiB/s) : min=    2, max=   48, per=3.54%, avg=16.51, stdev=10.90
  cpu          : usr=0.01%, sys=0.01%, ctx=107, majf=0, minf=90
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=121/0, short=0/0
     lat (usec): 10=7.44%, 20=12.40%, 50=1.65%
     lat (msec): 50=4.13%, 100=9.92%, 250=25.62%, 500=19.83%, 750=13.22%
     lat (msec): 1000=4.13%, 2000=1.65%
randomreadseqwrites5.29: (groupid=6, jobs=1): err= 0: pid=4084
  read : io=404KiB, bw=13KiB/s, iops=3, runt= 30058msec
    clat (usec): min=8, max=1060K, avg=297587.16, stdev=263151.04
    bw (KiB/s) : min=    4, max=   30, per=3.00%, avg=14.00, stdev= 7.63
  cpu          : usr=0.00%, sys=0.01%, ctx=102, majf=0, minf=86
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=101/0, short=0/0
     lat (usec): 10=2.97%, 20=7.92%, 50=0.99%
     lat (msec): 50=4.95%, 100=11.88%, 250=21.78%, 500=29.70%, 750=10.89%
     lat (msec): 1000=6.93%, 2000=1.98%
randomreadseqwrites5.30: (groupid=6, jobs=1): err= 0: pid=4085
  read : io=380KiB, bw=12KiB/s, iops=3, runt= 30218msec
    clat (usec): min=6, max=1339K, avg=318064.27, stdev=292540.19
    bw (KiB/s) : min=    3, max=   55, per=2.87%, avg=13.36, stdev= 9.52
  cpu          : usr=0.00%, sys=0.00%, ctx=92, majf=0, minf=77
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=95/0, short=0/0
     lat (usec): 10=5.26%, 20=9.47%
     lat (msec): 50=6.32%, 100=11.58%, 250=18.95%, 500=21.05%, 750=16.84%
     lat (msec): 1000=9.47%, 2000=1.05%
randomreadseqwrites5.31: (groupid=6, jobs=1): err= 0: pid=4086
  read : io=384KiB, bw=13KiB/s, iops=3, runt= 30110msec
    clat (usec): min=7, max=989323, avg=313625.86, stdev=272033.20
    bw (KiB/s) : min=    4, max=   48, per=2.86%, avg=13.32, stdev= 8.77
  cpu          : usr=0.00%, sys=0.01%, ctx=98, majf=0, minf=87
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=96/0, short=0/0
     lat (usec): 10=5.21%, 20=8.33%, 50=1.04%
     lat (msec): 50=5.21%, 100=9.38%, 250=20.83%, 500=23.96%, 750=16.67%
     lat (msec): 1000=9.38%
randomreadseqwrites5.32: (groupid=6, jobs=1): err= 0: pid=4087
  read : io=432KiB, bw=14KiB/s, iops=3, runt= 30144msec
    clat (usec): min=6, max=1026K, avg=279091.51, stdev=261785.37
    bw (KiB/s) : min=    3, max=   56, per=3.26%, avg=15.18, stdev=10.68
  cpu          : usr=0.00%, sys=0.00%, ctx=111, majf=0, minf=78
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=108/0, short=0/0
     lat (usec): 10=4.63%, 20=6.48%
     lat (msec): 50=7.41%, 100=13.89%, 250=26.85%, 500=18.52%, 750=15.74%
     lat (msec): 1000=5.56%, 2000=0.93%

Run status group 0 (all jobs):
   READ: io=1024MiB, aggrb=36150KiB/s, minb=36150KiB/s, maxb=36150KiB/s, mint=29702msec, maxt=29702msec

Run status group 1 (all jobs):
  WRITE: io=898580KiB, aggrb=26806KiB/s, minb=26806KiB/s, maxb=26806KiB/s, mint=34325msec, maxt=34325msec

Run status group 2 (all jobs):
   READ: io=568936KiB, aggrb=19402KiB/s, minb=9702KiB/s, maxb=9707KiB/s, mint=30003msec, maxt=30027msec

Run status group 3 (all jobs):
  WRITE: io=1031MiB, aggrb=31098KiB/s, minb=14423KiB/s, maxb=16783KiB/s, mint=34528msec, maxt=34754msec

Run status group 4 (all jobs):
   READ: io=10368KiB, aggrb=353KiB/s, minb=175KiB/s, maxb=178KiB/s, mint=30003msec, maxt=30009msec

Run status group 5 (all jobs):
   READ: io=5736KiB, aggrb=195KiB/s, minb=48KiB/s, maxb=49KiB/s, mint=30054msec, maxt=30079msec
  WRITE: io=549972KiB, aggrb=18767KiB/s, minb=18767KiB/s, maxb=18767KiB/s, mint=30008msec, maxt=30008msec

Run status group 6 (all jobs):
   READ: io=13884KiB, aggrb=466KiB/s, minb=11KiB/s, maxb=19KiB/s, mint=30010msec, maxt=30447msec
  WRITE: io=1006MiB, aggrb=35168KiB/s, minb=35168KiB/s, maxb=35168KiB/s, mint=30000msec, maxt=30000msec

Disk stats (read/write):
  sda: ios=21367/27198, merge=396/825211, ticks=1503808/21047009, in_queue=22593654, util=99.06%

[-- Attachment #7: cfq --]
[-- Type: application/octet-stream, Size: 40695 bytes --]

seqread: (g=0): rw=read, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
seqwrite: (g=1): rw=write, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
parread.0: (g=2): rw=read, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
parread.1: (g=2): rw=read, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
parwrite.0: (g=3): rw=write, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
parwrite.1: (g=3): rw=write, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomread2.0: (g=4): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomread2.1: (g=4): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites4.w: (g=5): rw=write, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites4.0: (g=5): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites4.1: (g=5): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites4.2: (g=5): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites4.3: (g=5): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.w: (g=6): rw=write, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.0: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.1: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.2: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.3: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.4: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.5: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.6: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.7: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.8: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.9: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.10: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.11: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.12: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.13: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.14: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.15: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.16: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.17: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.18: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.19: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.20: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.21: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.22: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.23: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.24: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.25: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.26: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.27: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.28: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.29: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.30: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.31: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
randomreadseqwrites5.32: (g=6): rw=randread, bs=4K-4K/4K-4K, ioengine=psync, iodepth=1
Starting 47 processes
seqwrite: Laying out IO file(s) (1 file(s) / 1024MiB)
parwrite.0: Laying out IO file(s) (1 file(s) / 1024MiB)
parwrite.1: Laying out IO file(s) (1 file(s) / 1024MiB)
randomreadseqwrites4.w: Laying out IO file(s) (1 file(s) / 2048MiB)
randomreadseqwrites5.w: Laying out IO file(s) (1 file(s) / 2048MiB)

seqread: (groupid=0, jobs=1): err= 0: pid=3710
  read : io=993MiB, bw=34723KiB/s, iops=8477, runt= 30001msec
    clat (usec): min=2, max=34508, avg=116.79, stdev=725.48
    bw (KiB/s) : min=28179, max=37076, per=100.10%, avg=34758.27, stdev=2012.82
  cpu          : usr=0.79%, sys=4.46%, ctx=7789, majf=0, minf=17
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=254333/0, short=0/0
     lat (usec): 4=84.29%, 10=12.29%, 20=0.27%, 50=0.01%, 100=0.01%
     lat (usec): 250=0.09%, 500=0.01%, 750=0.01%
     lat (msec): 2=0.09%, 4=1.82%, 10=1.06%, 20=0.07%, 50=0.01%
seqwrite: (groupid=1, jobs=1): err= 0: pid=3711
  write: io=1003MiB, bw=29403KiB/s, iops=7178, runt= 35754msec
    clat (usec): min=14, max=726007, avg=115.50, stdev=4828.84
  cpu          : usr=1.24%, sys=14.81%, ctx=868, majf=0, minf=260
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=0/256662, short=0/0
     lat (usec): 20=85.97%, 50=13.39%, 100=0.08%, 250=0.41%, 500=0.01%
     lat (usec): 750=0.01%, 1000=0.07%
     lat (msec): 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%
     lat (msec): 100=0.01%, 250=0.04%, 500=0.01%, 750=0.01%
parread.0: (groupid=2, jobs=1): err= 0: pid=3718
  read : io=414196KiB, bw=14097KiB/s, iops=3441, runt= 30085msec
    clat (usec): min=2, max=285807, avg=289.33, stdev=5658.36
    bw (KiB/s) : min= 6403, max=24151, per=51.51%, avg=14381.40, stdev=4571.87
  cpu          : usr=0.37%, sys=1.81%, ctx=3257, majf=0, minf=19
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=103549/0, short=0/0
     lat (usec): 4=83.53%, 10=13.05%, 20=0.26%, 50=0.02%, 100=0.02%
     lat (usec): 250=0.08%, 500=0.01%, 750=0.07%, 1000=0.01%
     lat (msec): 2=0.10%, 4=1.50%, 10=1.16%, 20=0.10%, 50=0.01%
     lat (msec): 100=0.01%, 250=0.08%, 500=0.01%
parread.1: (groupid=2, jobs=1): err= 0: pid=3719
  read : io=406132KiB, bw=13861KiB/s, iops=3384, runt= 30002msec
    clat (usec): min=2, max=301121, avg=294.31, stdev=5761.34
    bw (KiB/s) : min= 8204, max=24282, per=51.04%, avg=14251.19, stdev=3997.73
  cpu          : usr=0.32%, sys=1.83%, ctx=3184, majf=0, minf=19
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=101533/0, short=0/0
     lat (usec): 4=82.82%, 10=13.76%, 20=0.26%, 50=0.02%, 100=0.03%
     lat (usec): 250=0.08%, 500=0.01%, 750=0.07%, 1000=0.01%
     lat (msec): 2=0.10%, 4=1.50%, 10=1.16%, 20=0.10%, 50=0.01%
     lat (msec): 100=0.01%, 250=0.08%, 500=0.01%
parwrite.0: (groupid=3, jobs=1): err= 0: pid=3729
  write: io=341556KiB, bw=9701KiB/s, iops=2368, runt= 36052msec
    clat (usec): min=15, max=1025K, avg=348.93, stdev=13888.01
  cpu          : usr=0.69%, sys=6.65%, ctx=346, majf=0, minf=123
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=0/85389, short=0/0
     lat (usec): 20=37.10%, 50=61.63%, 100=0.74%, 250=0.38%, 500=0.01%
     lat (usec): 750=0.01%, 1000=0.02%
     lat (msec): 2=0.03%, 4=0.03%, 20=0.01%, 50=0.01%, 100=0.01%
     lat (msec): 500=0.02%, 750=0.03%, 1000=0.01%, 2000=0.01%
parwrite.1: (groupid=3, jobs=1): err= 0: pid=3730
  write: io=440288KiB, bw=12087KiB/s, iops=2950, runt= 37300msec
    clat (usec): min=15, max=1150K, avg=276.56, stdev=12433.09
  cpu          : usr=0.72%, sys=7.82%, ctx=463, majf=0, minf=125
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=0/110072, short=0/0
     lat (usec): 20=50.34%, 50=48.58%, 100=0.55%, 250=0.37%, 500=0.01%
     lat (usec): 750=0.01%, 1000=0.02%
     lat (msec): 2=0.05%, 4=0.03%, 10=0.01%, 20=0.01%, 50=0.01%
     lat (msec): 100=0.01%, 250=0.01%, 500=0.02%, 750=0.02%, 1000=0.01%
     lat (msec): 2000=0.01%
randomread2.0: (groupid=4, jobs=1): err= 0: pid=3737
  read : io=8532KiB, bw=291KiB/s, iops=71, runt= 30010msec
    clat (usec): min=4, max=153873, avg=14065.62, stdev=9854.96
    bw (KiB/s) : min=   83, max=  358, per=83.73%, avg=290.55, stdev=61.17
  cpu          : usr=0.04%, sys=0.17%, ctx=2337, majf=0, minf=66
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=2133/0, short=0/0
     lat (usec): 10=0.14%
     lat (msec): 4=0.28%, 10=34.22%, 20=50.63%, 50=13.92%, 100=0.66%
     lat (msec): 250=0.14%
randomread2.1: (groupid=4, jobs=1): err= 0: pid=3738
  read : io=1648KiB, bw=56KiB/s, iops=13, runt= 30018msec
    clat (usec): min=5, max=245581, avg=72853.74, stdev=59127.79
    bw (KiB/s) : min=   31, max=   70, per=16.06%, avg=55.75, stdev=10.07
  cpu          : usr=0.01%, sys=0.03%, ctx=465, majf=0, minf=65
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=412/0, short=0/0
     lat (usec): 10=0.49%
     lat (msec): 4=0.97%, 10=26.70%, 20=14.08%, 50=4.13%, 100=0.73%
     lat (msec): 250=52.91%
randomreadseqwrites4.w: (groupid=5, jobs=1): err= 0: pid=3739
  write: io=160464KiB, bw=5426KiB/s, iops=1324, runt= 30278msec
    clat (usec): min=15, max=21931K, avg=752.99, stdev=114914.30
  cpu          : usr=0.25%, sys=2.50%, ctx=174, majf=0, minf=50
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=0/40116, short=0/0
     lat (usec): 20=82.12%, 50=17.44%, 100=0.24%, 250=0.17%, 500=0.01%
     lat (usec): 750=0.01%, 1000=0.01%
     lat (msec): 2=0.01%, 750=0.01%, >=2000=0.01%
randomreadseqwrites4.0: (groupid=5, jobs=1): err= 0: pid=3740
  read : io=1440KiB, bw=49KiB/s, iops=11, runt= 30027msec
    clat (usec): min=7, max=367530, avg=83383.03, stdev=89629.30
    bw (KiB/s) : min=   31, max=  130, per=12.51%, avg=49.40, stdev=17.87
  cpu          : usr=0.01%, sys=0.03%, ctx=380, majf=0, minf=431
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=360/0, short=0/0
     lat (usec): 10=0.28%
     lat (msec): 4=1.67%, 10=33.89%, 20=20.83%, 50=0.56%, 100=0.56%
     lat (msec): 250=39.17%, 500=3.06%
randomreadseqwrites4.1: (groupid=5, jobs=1): err= 0: pid=3741
  read : io=1328KiB, bw=45KiB/s, iops=11, runt= 30020msec
    clat (msec): min=2, max=355, avg=90.40, stdev=91.68
    bw (KiB/s) : min=   31, max=   83, per=11.38%, avg=44.94, stdev= 7.18
  cpu          : usr=0.00%, sys=0.03%, ctx=337, majf=0, minf=404
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=332/0, short=0/0
     lat (msec): 4=1.51%, 10=32.53%, 20=18.98%, 50=0.60%, 100=0.30%
     lat (msec): 250=42.47%, 500=3.61%
randomreadseqwrites4.2: (groupid=5, jobs=1): err= 0: pid=3742
  read : io=1960KiB, bw=66KiB/s, iops=16, runt= 30040msec
    clat (usec): min=5, max=371183, avg=61282.23, stdev=78706.89
    bw (KiB/s) : min=   31, max=  105, per=16.89%, avg=66.72, stdev=14.23
  cpu          : usr=0.02%, sys=0.04%, ctx=518, majf=0, minf=564
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=490/0, short=0/0
     lat (usec): 10=0.41%
     lat (msec): 4=1.22%, 10=37.96%, 20=27.14%, 50=1.63%, 100=0.20%
     lat (msec): 250=29.39%, 500=2.04%
randomreadseqwrites4.3: (groupid=5, jobs=1): err= 0: pid=3743
  read : io=6864KiB, bw=234KiB/s, iops=57, runt= 30008msec
    clat (usec): min=4, max=363072, avg=17466.15, stdev=24377.68
    bw (KiB/s) : min=   66, max=  325, per=59.36%, avg=234.49, stdev=42.38
  cpu          : usr=0.03%, sys=0.22%, ctx=1788, majf=0, minf=1677
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=1716/0, short=0/0
     lat (usec): 10=0.17%
     lat (msec): 2=0.17%, 4=0.29%, 10=40.21%, 20=48.02%, 50=2.16%
     lat (msec): 100=7.40%, 250=1.46%, 500=0.12%
randomreadseqwrites5.w: (groupid=6, jobs=1): err= 0: pid=3745
  write: io=4KiB, bw=0KiB/s, iops=0, runt= 30420msec
    clat (usec): min=30420K, max=30420K, avg=30419551.00, stdev= 0.00
  cpu          : usr=0.00%, sys=0.00%, ctx=1, majf=0, minf=16
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=0/1, short=0/0
     lat (msec): >=2000=100.00%
randomreadseqwrites5.0: (groupid=6, jobs=1): err= 0: pid=3746
  read : io=272KiB, bw=9KiB/s, iops=2, runt= 30266msec
    clat (usec): min=11, max=1395K, avg=445060.97, stdev=483433.47
    bw (KiB/s) : min=    4, max=   20, per=2.23%, avg= 8.91, stdev= 3.56
  cpu          : usr=0.00%, sys=0.00%, ctx=68, majf=0, minf=140
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=68/0, short=0/0
     lat (usec): 20=13.24%, 50=1.47%
     lat (msec): 10=16.18%, 20=20.59%, 50=1.47%, 750=8.82%, 1000=22.06%
     lat (msec): 2000=16.18%
randomreadseqwrites5.1: (groupid=6, jobs=1): err= 0: pid=3747
  read : io=232KiB, bw=7KiB/s, iops=1, runt= 30013msec
    clat (usec): min=13, max=1622K, avg=517422.86, stdev=497436.35
    bw (KiB/s) : min=    2, max=   15, per=1.90%, avg= 7.61, stdev= 3.21
  cpu          : usr=0.00%, sys=0.01%, ctx=69, majf=0, minf=137
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=58/0, short=0/0
     lat (usec): 20=6.90%, 50=1.72%
     lat (msec): 10=10.34%, 20=25.86%, 100=1.72%, 750=6.90%, 1000=29.31%
     lat (msec): 2000=17.24%
randomreadseqwrites5.2: (groupid=6, jobs=1): err= 0: pid=3748
  read : io=280KiB, bw=9KiB/s, iops=2, runt= 30402msec
    clat (usec): min=10, max=2064K, avg=434280.53, stdev=528700.18
    bw (KiB/s) : min=    3, max=   24, per=2.32%, avg= 9.27, stdev= 5.04
  cpu          : usr=0.00%, sys=0.00%, ctx=66, majf=0, minf=136
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=70/0, short=0/0
     lat (usec): 20=20.00%, 50=4.29%
     lat (msec): 10=20.00%, 20=11.43%, 100=1.43%, 750=7.14%, 1000=14.29%
     lat (msec): 2000=20.00%, >=2000=1.43%
randomreadseqwrites5.3: (groupid=6, jobs=1): err= 0: pid=3749
  read : io=272KiB, bw=9KiB/s, iops=2, runt= 30453msec
    clat (usec): min=9, max=1301K, avg=447805.21, stdev=481915.51
    bw (KiB/s) : min=    4, max=   17, per=2.16%, avg= 8.63, stdev= 3.56
  cpu          : usr=0.00%, sys=0.01%, ctx=70, majf=0, minf=135
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=68/0, short=0/0
     lat (usec): 10=1.47%, 20=13.24%, 50=4.41%
     lat (msec): 10=10.29%, 20=22.06%, 250=1.47%, 750=7.35%, 1000=23.53%
     lat (msec): 2000=16.18%
randomreadseqwrites5.4: (groupid=6, jobs=1): err= 0: pid=3750
  read : io=376KiB, bw=12KiB/s, iops=3, runt= 30297msec
    clat (usec): min=6, max=1293K, avg=322279.04, stdev=444579.28
    bw (KiB/s) : min=    4, max=   35, per=3.05%, avg=12.22, stdev= 7.39
  cpu          : usr=0.00%, sys=0.00%, ctx=105, majf=0, minf=178
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=94/0, short=0/0
     lat (usec): 10=1.06%, 20=15.96%, 50=2.13%
     lat (msec): 10=19.15%, 20=26.60%, 500=1.06%, 750=4.26%, 1000=20.21%
     lat (msec): 2000=9.57%
randomreadseqwrites5.5: (groupid=6, jobs=1): err= 0: pid=3751
  read : io=348KiB, bw=11KiB/s, iops=2, runt= 30289msec
    clat (usec): min=8, max=1868K, avg=348115.52, stdev=483087.13
    bw (KiB/s) : min=    2, max=   30, per=2.92%, avg=11.70, stdev= 6.17
  cpu          : usr=0.00%, sys=0.00%, ctx=92, majf=0, minf=165
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=87/0, short=0/0
     lat (usec): 10=1.15%, 20=16.09%, 50=3.45%
     lat (msec): 4=1.15%, 10=17.24%, 20=25.29%, 500=1.15%, 750=2.30%
     lat (msec): 1000=18.39%, 2000=13.79%
randomreadseqwrites5.6: (groupid=6, jobs=1): err= 0: pid=3752
  read : io=292KiB, bw=9KiB/s, iops=2, runt= 30317msec
    clat (usec): min=10, max=1298K, avg=415273.77, stdev=467021.53
    bw (KiB/s) : min=    4, max=   20, per=2.27%, avg= 9.10, stdev= 4.38
  cpu          : usr=0.01%, sys=0.00%, ctx=85, majf=0, minf=147
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=73/0, short=0/0
     lat (usec): 20=12.33%, 50=1.37%
     lat (msec): 10=13.70%, 20=21.92%, 50=5.48%, 500=2.74%, 750=4.11%
     lat (msec): 1000=24.66%, 2000=13.70%
randomreadseqwrites5.7: (groupid=6, jobs=1): err= 0: pid=3753
  read : io=332KiB, bw=11KiB/s, iops=2, runt= 30040msec
    clat (usec): min=6, max=1795K, avg=361895.05, stdev=479895.09
    bw (KiB/s) : min=    3, max=   20, per=2.75%, avg=11.00, stdev= 4.38
  cpu          : usr=0.00%, sys=0.01%, ctx=106, majf=0, minf=159
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=83/0, short=0/0
     lat (usec): 10=1.20%, 20=12.05%, 50=1.20%
     lat (msec): 10=16.87%, 20=28.92%, 50=2.41%, 500=1.20%, 750=3.61%
     lat (msec): 1000=19.28%, 2000=13.25%
randomreadseqwrites5.8: (groupid=6, jobs=1): err= 0: pid=3754
  read : io=260KiB, bw=8KiB/s, iops=2, runt= 30153msec
    clat (usec): min=10, max=1234K, avg=463861.52, stdev=481799.48
    bw (KiB/s) : min=    3, max=   26, per=2.09%, avg= 8.34, stdev= 4.78
  cpu          : usr=0.00%, sys=0.00%, ctx=78, majf=0, minf=137
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=65/0, short=0/0
     lat (usec): 20=12.31%, 50=1.54%
     lat (msec): 10=12.31%, 20=24.62%, 750=6.15%, 1000=27.69%, 2000=15.38%
randomreadseqwrites5.9: (groupid=6, jobs=1): err= 0: pid=3755
  read : io=352KiB, bw=11KiB/s, iops=2, runt= 30207msec
    clat (usec): min=6, max=1257K, avg=343224.89, stdev=455645.53
    bw (KiB/s) : min=    3, max=   25, per=2.84%, avg=11.34, stdev= 6.29
  cpu          : usr=0.01%, sys=0.02%, ctx=108, majf=0, minf=163
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=88/0, short=0/0
     lat (usec): 10=2.27%, 20=11.36%, 50=4.55%
     lat (msec): 4=1.14%, 10=17.05%, 20=26.14%, 100=1.14%, 500=1.14%
     lat (msec): 750=2.27%, 1000=21.59%, 2000=11.36%
randomreadseqwrites5.10: (groupid=6, jobs=1): err= 0: pid=3756
  read : io=312KiB, bw=10KiB/s, iops=2, runt= 30406msec
    clat (usec): min=9, max=1679K, avg=389790.65, stdev=478818.27
    bw (KiB/s) : min=    3, max=   25, per=2.55%, avg=10.22, stdev= 5.78
  cpu          : usr=0.00%, sys=0.01%, ctx=96, majf=0, minf=160
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=78/0, short=0/0
     lat (usec): 10=1.28%, 20=11.54%
     lat (msec): 4=1.28%, 10=16.67%, 20=26.92%, 500=2.56%, 750=5.13%
     lat (msec): 1000=20.51%, 2000=14.10%
randomreadseqwrites5.11: (groupid=6, jobs=1): err= 0: pid=3757
  read : io=288KiB, bw=9KiB/s, iops=2, runt= 30301msec
    clat (usec): min=13, max=1299K, avg=420818.50, stdev=464627.28
    bw (KiB/s) : min=    4, max=   32, per=2.27%, avg= 9.06, stdev= 6.04
  cpu          : usr=0.00%, sys=0.01%, ctx=90, majf=0, minf=152
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=72/0, short=0/0
     lat (usec): 20=5.56%, 50=2.78%
     lat (msec): 10=18.06%, 20=26.39%, 100=1.39%, 500=1.39%, 750=5.56%
     lat (msec): 1000=26.39%, 2000=12.50%
randomreadseqwrites5.12: (groupid=6, jobs=1): err= 0: pid=3758
  read : io=244KiB, bw=8KiB/s, iops=2, runt= 30182msec
    clat (usec): min=10, max=1242K, avg=494761.18, stdev=479355.70
    bw (KiB/s) : min=    3, max=   17, per=1.95%, avg= 7.81, stdev= 3.04
  cpu          : usr=0.00%, sys=0.00%, ctx=73, majf=0, minf=133
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=61/0, short=0/0
     lat (usec): 20=8.20%, 50=1.64%
     lat (msec): 4=1.64%, 10=13.11%, 20=21.31%, 50=1.64%, 750=4.92%
     lat (msec): 1000=31.15%, 2000=16.39%
randomreadseqwrites5.13: (groupid=6, jobs=1): err= 0: pid=3759
  read : io=380KiB, bw=12KiB/s, iops=3, runt= 30023msec
    clat (usec): min=12, max=1295K, avg=315996.77, stdev=442395.85
    bw (KiB/s) : min=    4, max=   29, per=3.06%, avg=12.25, stdev= 5.58
  cpu          : usr=0.01%, sys=0.01%, ctx=116, majf=0, minf=182
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=95/0, short=0/0
     lat (usec): 20=11.58%, 50=4.21%
     lat (msec): 10=18.95%, 20=28.42%, 50=2.11%, 100=1.05%, 750=6.32%
     lat (msec): 1000=17.89%, 2000=9.47%
randomreadseqwrites5.14: (groupid=6, jobs=1): err= 0: pid=3760
  read : io=328KiB, bw=11KiB/s, iops=2, runt= 30429msec
    clat (usec): min=6, max=1253K, avg=371052.17, stdev=457771.25
    bw (KiB/s) : min=    4, max=   31, per=2.70%, avg=10.79, stdev= 5.74
  cpu          : usr=0.00%, sys=0.01%, ctx=106, majf=0, minf=163
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=82/0, short=0/0
     lat (usec): 10=2.44%, 20=12.20%, 50=1.22%
     lat (msec): 10=10.98%, 20=30.49%, 50=2.44%, 750=7.32%, 1000=19.51%
     lat (msec): 2000=13.41%
randomreadseqwrites5.15: (groupid=6, jobs=1): err= 0: pid=3761
  read : io=324KiB, bw=11KiB/s, iops=2, runt= 30086msec
    clat (usec): min=6, max=1234K, avg=371398.04, stdev=462205.69
    bw (KiB/s) : min=    4, max=   38, per=2.70%, avg=10.81, stdev= 6.97
  cpu          : usr=0.00%, sys=0.00%, ctx=99, majf=0, minf=164
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=81/0, short=0/0
     lat (usec): 10=1.23%, 20=12.35%, 50=1.23%
     lat (msec): 10=17.28%, 20=23.46%, 50=3.70%, 100=1.23%, 750=6.17%
     lat (msec): 1000=20.99%, 2000=12.35%
randomreadseqwrites5.16: (groupid=6, jobs=1): err= 0: pid=3762
  read : io=536KiB, bw=18KiB/s, iops=4, runt= 30248msec
    clat (usec): min=7, max=1609K, avg=225685.59, stdev=414274.46
    bw (KiB/s) : min=    2, max=   68, per=4.36%, avg=17.43, stdev=13.47
  cpu          : usr=0.00%, sys=0.01%, ctx=192, majf=0, minf=261
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=134/0, short=0/0
     lat (usec): 10=1.49%, 20=12.69%, 50=1.49%
     lat (msec): 10=23.13%, 20=37.31%, 50=1.49%, 750=1.49%, 1000=12.69%
     lat (msec): 2000=8.21%
randomreadseqwrites5.17: (groupid=6, jobs=1): err= 0: pid=3763
  read : io=560KiB, bw=19KiB/s, iops=4, runt= 30021msec
    clat (usec): min=11, max=1226K, avg=214391.89, stdev=379349.80
    bw (KiB/s) : min=    5, max=   48, per=4.35%, avg=17.42, stdev=11.68
  cpu          : usr=0.00%, sys=0.02%, ctx=216, majf=0, minf=286
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=140/0, short=0/0
     lat (usec): 20=9.29%, 50=0.71%
     lat (msec): 10=21.43%, 20=43.57%, 50=0.71%, 100=0.71%, 500=1.43%
     lat (msec): 750=2.86%, 1000=12.86%, 2000=6.43%
randomreadseqwrites5.18: (groupid=6, jobs=1): err= 0: pid=3764
  read : io=596KiB, bw=20KiB/s, iops=4, runt= 30232msec
    clat (usec): min=6, max=1177K, avg=202858.74, stdev=373335.57
    bw (KiB/s) : min=    3, max=   55, per=4.62%, avg=18.47, stdev=14.95
  cpu          : usr=0.01%, sys=0.01%, ctx=204, majf=0, minf=290
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=149/0, short=0/0
     lat (usec): 10=4.03%, 20=9.40%, 50=3.36%
     lat (msec): 10=23.49%, 20=36.91%, 100=0.67%, 500=0.67%, 750=4.70%
     lat (msec): 1000=11.41%, 2000=5.37%
randomreadseqwrites5.19: (groupid=6, jobs=1): err= 0: pid=3765
  read : io=288KiB, bw=9KiB/s, iops=2, runt= 30216msec
    clat (usec): min=6, max=1824K, avg=419636.83, stdev=492002.08
    bw (KiB/s) : min=    2, max=   26, per=2.47%, avg= 9.87, stdev= 5.71
  cpu          : usr=0.00%, sys=0.01%, ctx=76, majf=0, minf=146
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=72/0, short=0/0
     lat (usec): 10=4.17%, 20=15.28%, 50=1.39%
     lat (msec): 10=11.11%, 20=20.83%, 50=2.78%, 500=1.39%, 750=6.94%
     lat (msec): 1000=20.83%, 2000=15.28%
randomreadseqwrites5.20: (groupid=6, jobs=1): err= 0: pid=3766
  read : io=280KiB, bw=9KiB/s, iops=2, runt= 30177msec
    clat (usec): min=7, max=1588K, avg=431064.69, stdev=488501.78
    bw (KiB/s) : min=    3, max=   20, per=2.22%, avg= 8.87, stdev= 3.70
  cpu          : usr=0.01%, sys=0.01%, ctx=79, majf=0, minf=147
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=70/0, short=0/0
     lat (usec): 10=1.43%, 20=10.00%
     lat (msec): 10=24.29%, 20=14.29%, 50=2.86%, 250=2.86%, 750=7.14%
     lat (msec): 1000=21.43%, 2000=15.71%
randomreadseqwrites5.21: (groupid=6, jobs=1): err= 0: pid=3767
  read : io=512KiB, bw=17KiB/s, iops=4, runt= 30250msec
    clat (usec): min=6, max=1708K, avg=236283.01, stdev=411234.93
    bw (KiB/s) : min=    4, max=   47, per=4.06%, avg=16.26, stdev=11.75
  cpu          : usr=0.00%, sys=0.02%, ctx=172, majf=0, minf=249
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=128/0, short=0/0
     lat (usec): 10=4.69%, 20=14.06%, 100=0.78%
     lat (msec): 10=18.75%, 20=35.16%, 50=1.56%, 500=0.78%, 750=3.12%
     lat (msec): 1000=12.50%, 2000=8.59%
randomreadseqwrites5.22: (groupid=6, jobs=1): err= 0: pid=3768
  read : io=392KiB, bw=13KiB/s, iops=3, runt= 30280msec
    clat (usec): min=5, max=2265K, avg=308938.50, stdev=482346.89
    bw (KiB/s) : min=    2, max=   53, per=3.33%, avg=13.30, stdev=11.64
  cpu          : usr=0.00%, sys=0.00%, ctx=126, majf=0, minf=196
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=98/0, short=0/0
     lat (usec): 10=6.12%, 20=9.18%, 250=1.02%
     lat (msec): 4=1.02%, 10=15.31%, 20=33.67%, 50=2.04%, 250=1.02%
     lat (msec): 500=1.02%, 750=1.02%, 1000=17.35%, 2000=10.20%, >=2000=1.02%
randomreadseqwrites5.23: (groupid=6, jobs=1): err= 0: pid=3769
  read : io=572KiB, bw=19KiB/s, iops=4, runt= 30400msec
    clat (usec): min=7, max=1225K, avg=212542.97, stdev=380713.45
    bw (KiB/s) : min=    4, max=   62, per=4.61%, avg=18.42, stdev=14.46
  cpu          : usr=0.00%, sys=0.03%, ctx=204, majf=0, minf=284
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=143/0, short=0/0
     lat (usec): 10=2.10%, 20=11.19%, 50=2.10%
     lat (msec): 4=2.80%, 10=21.68%, 20=32.87%, 50=4.20%, 500=0.70%
     lat (msec): 750=4.20%, 1000=12.59%, 2000=5.59%
randomreadseqwrites5.24: (groupid=6, jobs=1): err= 0: pid=3770
  read : io=556KiB, bw=18KiB/s, iops=4, runt= 30054msec
    clat (usec): min=7, max=1232K, avg=216175.13, stdev=384170.67
    bw (KiB/s) : min=    4, max=   50, per=4.41%, avg=17.63, stdev=15.11
  cpu          : usr=0.00%, sys=0.02%, ctx=203, majf=0, minf=275
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=139/0, short=0/0
     lat (usec): 10=3.60%, 20=8.63%, 50=2.16%
     lat (msec): 10=20.86%, 20=40.29%, 50=0.72%, 100=0.72%, 750=2.88%
     lat (msec): 1000=16.55%, 2000=3.60%
randomreadseqwrites5.25: (groupid=6, jobs=1): err= 0: pid=3771
  read : io=288KiB, bw=9KiB/s, iops=2, runt= 30344msec
    clat (usec): min=6, max=1689K, avg=420413.07, stdev=496067.56
    bw (KiB/s) : min=    2, max=   36, per=2.30%, avg= 9.19, stdev= 5.96
  cpu          : usr=0.00%, sys=0.03%, ctx=83, majf=0, minf=141
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=72/0, short=0/0
     lat (usec): 10=1.39%, 20=11.11%
     lat (msec): 10=16.67%, 20=23.61%, 50=4.17%, 750=4.17%, 1000=20.83%
     lat (msec): 2000=18.06%
randomreadseqwrites5.26: (groupid=6, jobs=1): err= 0: pid=3772
  read : io=324KiB, bw=10KiB/s, iops=2, runt= 30231msec
    clat (usec): min=7, max=1294K, avg=373187.78, stdev=459099.40
    bw (KiB/s) : min=    4, max=   36, per=2.63%, avg=10.53, stdev= 6.63
  cpu          : usr=0.01%, sys=0.01%, ctx=95, majf=0, minf=166
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=81/0, short=0/0
     lat (usec): 10=3.70%, 20=13.58%
     lat (msec): 10=22.22%, 20=18.52%, 50=1.23%, 500=1.23%, 750=8.64%
     lat (msec): 1000=18.52%, 2000=12.35%
randomreadseqwrites5.27: (groupid=6, jobs=1): err= 0: pid=3773
  read : io=236KiB, bw=7KiB/s, iops=1, runt= 30275msec
    clat (usec): min=9, max=1285K, avg=513103.81, stdev=471865.81
    bw (KiB/s) : min=    3, max=   17, per=1.86%, avg= 7.44, stdev= 3.23
  cpu          : usr=0.00%, sys=0.00%, ctx=67, majf=0, minf=125
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=59/0, short=0/0
     lat (usec): 10=1.69%, 20=6.78%, 50=1.69%
     lat (msec): 4=1.69%, 10=8.47%, 20=22.03%, 50=1.69%, 500=1.69%
     lat (msec): 750=5.08%, 1000=32.20%, 2000=16.95%
randomreadseqwrites5.28: (groupid=6, jobs=1): err= 0: pid=3774
  read : io=272KiB, bw=9KiB/s, iops=2, runt= 30056msec
    clat (usec): min=6, max=1232K, avg=441954.50, stdev=476694.24
    bw (KiB/s) : min=    4, max=   17, per=2.19%, avg= 8.75, stdev= 3.52
  cpu          : usr=0.00%, sys=0.01%, ctx=70, majf=0, minf=195
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=68/0, short=0/0
     lat (usec): 10=2.94%, 20=14.71%
     lat (msec): 10=17.65%, 20=17.65%, 750=4.41%, 1000=27.94%, 2000=14.71%
randomreadseqwrites5.29: (groupid=6, jobs=1): err= 0: pid=3775
  read : io=296KiB, bw=10KiB/s, iops=2, runt= 30231msec
    clat (usec): min=10, max=1399K, avg=408489.27, stdev=483399.68
    bw (KiB/s) : min=    4, max=   41, per=2.44%, avg= 9.74, stdev= 6.76
  cpu          : usr=0.01%, sys=0.00%, ctx=83, majf=0, minf=152
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=74/0, short=0/0
     lat (usec): 20=9.46%, 50=2.70%
     lat (msec): 10=16.22%, 20=20.27%, 50=8.11%, 100=1.35%, 750=2.70%
     lat (msec): 1000=22.97%, 2000=16.22%
randomreadseqwrites5.30: (groupid=6, jobs=1): err= 0: pid=3776
  read : io=284KiB, bw=9KiB/s, iops=2, runt= 30003msec
    clat (usec): min=10, max=1237K, avg=422545.06, stdev=472551.63
    bw (KiB/s) : min=    4, max=   22, per=2.33%, avg= 9.31, stdev= 4.46
  cpu          : usr=0.00%, sys=0.00%, ctx=77, majf=0, minf=148
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=71/0, short=0/0
     lat (usec): 20=15.49%, 50=1.41%
     lat (msec): 4=1.41%, 10=16.90%, 20=18.31%, 50=1.41%, 750=4.23%
     lat (msec): 1000=28.17%, 2000=12.68%
randomreadseqwrites5.31: (groupid=6, jobs=1): err= 0: pid=3777
  read : io=776KiB, bw=26KiB/s, iops=6, runt= 30176msec
    clat (usec): min=7, max=1211K, avg=155519.71, stdev=335034.58
    bw (KiB/s) : min=    4, max=   74, per=6.72%, avg=26.87, stdev=20.38
  cpu          : usr=0.00%, sys=0.05%, ctx=214, majf=0, minf=343
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=194/0, short=0/0
     lat (usec): 10=2.06%, 20=13.92%, 50=1.03%
     lat (msec): 10=21.65%, 20=37.63%, 50=6.70%, 250=0.52%, 500=0.52%
     lat (msec): 750=3.61%, 1000=6.70%, 2000=5.67%
randomreadseqwrites5.32: (groupid=6, jobs=1): err= 0: pid=3778
  read : io=240KiB, bw=8KiB/s, iops=1, runt= 30077msec
    clat (usec): min=11, max=1997K, avg=501242.13, stdev=536438.64
    bw (KiB/s) : min=    2, max=   19, per=2.10%, avg= 8.40, stdev= 4.06
  cpu          : usr=0.00%, sys=0.01%, ctx=71, majf=0, minf=131
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=60/0, short=0/0
     lat (usec): 20=10.00%, 50=1.67%
     lat (msec): 10=21.67%, 20=15.00%, 50=1.67%, 750=6.67%, 1000=23.33%
     lat (msec): 2000=20.00%

Run status group 0 (all jobs):
   READ: io=993MiB, aggrb=34723KiB/s, minb=34723KiB/s, maxb=34723KiB/s, mint=30001msec, maxt=30001msec

Run status group 1 (all jobs):
  WRITE: io=1003MiB, aggrb=29403KiB/s, minb=29403KiB/s, maxb=29403KiB/s, mint=35754msec, maxt=35754msec

Run status group 2 (all jobs):
   READ: io=820328KiB, aggrb=27921KiB/s, minb=13861KiB/s, maxb=14097KiB/s, mint=30002msec, maxt=30085msec

Run status group 3 (all jobs):
  WRITE: io=781844KiB, aggrb=21464KiB/s, minb=9701KiB/s, maxb=12087KiB/s, mint=36052msec, maxt=37300msec

Run status group 4 (all jobs):
   READ: io=10180KiB, aggrb=347KiB/s, minb=56KiB/s, maxb=291KiB/s, mint=30010msec, maxt=30018msec

Run status group 5 (all jobs):
   READ: io=11592KiB, aggrb=395KiB/s, minb=45KiB/s, maxb=234KiB/s, mint=30008msec, maxt=30040msec
  WRITE: io=160464KiB, aggrb=5426KiB/s, minb=5426KiB/s, maxb=5426KiB/s, mint=30278msec, maxt=30278msec

Run status group 6 (all jobs):
   READ: io=11900KiB, aggrb=400KiB/s, minb=7KiB/s, maxb=26KiB/s, mint=30003msec, maxt=30453msec
  WRITE: io=4KiB, aggrb=0KiB/s, minb=0KiB/s, maxb=0KiB/s, mint=30420msec, maxt=30420msec

Disk stats (read/write):
  sda: ios=23503/15157, merge=448/450552, ticks=1466051/17217190, in_queue=20983028, util=99.27%

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: Reduce latencies for syncronous writes and high I/O priority  requests in deadline IO scheduler
  2009-04-26 12:43           ` Corrado Zoccolo
@ 2009-05-01 19:30             ` Corrado Zoccolo
  0 siblings, 0 replies; 14+ messages in thread
From: Corrado Zoccolo @ 2009-05-01 19:30 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Aaron Carroll, Linux-Kernel

[-- Attachment #1: Type: text/plain, Size: 1775 bytes --]

On Sun, Apr 26, 2009 at 2:43 PM, Corrado Zoccolo <czoccolo@gmail.com> wrote:
> * on my machine, there is a regression on sequential write

I found that the regression was just an artifact of my testing (the
test partition was almost full, and the written files were re-created
at each test, resulting in non-uniform fragmentation across tests).
Changing the test to preallocate the write file as well made it more
repeatable, and with that change the patched and original deadline
perform equally.

Here is the last patch of the series, which adds I/O priority support
to deadline. All requests are sorted into 3 priority levels, from
lowest to highest:
* 0: async reads/writes, and all Idle class requests
* 1: sync Best Effort reads/writes, and sync Real Time writes
* 2: sync Real Time reads.
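
In code, the mapping from a request to its fifo level is roughly the
following (a sketch with illustrative names, assuming the IOPRIO_CLASS_*
constants from include/linux/ioprio.h; the attached patch computes the
same value in deadline_compute_req_priority(), relying on the fact that
reads are always sync in the block layer):

	enum { LVL_ASYNC = 0, LVL_SYNC = 1, LVL_RT_READ = 2 };

	static int fifo_level(int is_sync, int is_read, int prio_class)
	{
		if (prio_class == IOPRIO_CLASS_IDLE)
			return LVL_ASYNC;	/* idle I/O always sorts lowest */
		if (is_read && prio_class == IOPRIO_CLASS_RT)
			return LVL_RT_READ;	/* RT reads can starve the rest */
		return is_sync ? LVL_SYNC : LVL_ASYNC;
	}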

Aaron, I found your previous attempt at modifying deadline to use
sync/async instead of read/write.
My approach is slightly different: I changed only the fifos to
respect the new scheme, while the RB trees are still partitioned as
reads vs. writes.
Since the RB trees drive merging and batch formation, keeping them as
in the original deadline should guarantee the same merge success rate,
and allows longer batches that span priority levels when the requests
at a given level are too few to fully utilize the disk bandwidth (this
is usually the case for writes, where a few sync writes to the journal
are mixed with lots of async writes to the data).
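
Schematically, the resulting split looks like this (field names as in
the attached patch; "..." elides the unchanged fields):

	struct deadline_data {
		...
		struct rb_root sort_list[2];	/* READ, WRITE: merging, batches */
		struct list_head fifo_list[3];	/* 0=async/idle, 1=sync, 2=RT sync */
		...
	};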

Corrado

-- 
__________________________________________________________________________

dott. Corrado Zoccolo                          mailto:czoccolo@gmail.com
PhD - Department of Computer Science - University of Pisa, Italy
--------------------------------------------------------------------------

[-- Attachment #2: deadline-patch-rt --]
[-- Type: application/octet-stream, Size: 7149 bytes --]

Deadline IOscheduler rt patch

This is the third (and last) patch of the series. It propagates the
I/O priority of the submitting process to its requests, and uses this
information to schedule them.

Requests are classified into 3 priority levels, from lowest to highest:
* 0: async reads/writes, and all Idle class requests
* 1: sync Best Effort reads/writes, and sync Real Time writes
* 2: sync Real Time reads.

Signed-off-by: Corrado Zoccolo <czoccolo@gmail.com>

diff --git a/block/deadline-iosched.c b/block/deadline-iosched.c
index 57e67c8..1b9fd51 100644
--- a/block/deadline-iosched.c
+++ b/block/deadline-iosched.c
@@ -17,9 +17,10 @@
 /*
  * See Documentation/block/deadline-iosched.txt
  */
+static const int rt_sync_expire = HZ / 8;  /* max time before a real-time sync operation is submitted. */
 static const int sync_expire = HZ / 2;     /* max time before a sync operation is submitted. */
 static const int async_expire = 5 * HZ;    /* ditto for async operations, these limits are SOFT! */
-static const int async_starved = 2;        /* max times SYNC can starve ASYNC requests */
+static const int async_starved = 3;        /* max times SYNC can starve ASYNC requests */
 static const int fifo_batch = 16;       /* # of sequential requests treated as one
 				     by the above parameters. For throughput. */
 
@@ -32,7 +33,7 @@ struct deadline_data {
 	 * requests (deadline_rq s) are present on both sort_list and fifo_list
 	 */
 	struct rb_root sort_list[2]; /* READ, WRITE */
-	struct list_head fifo_list[2]; /* 0=ASYNC, 1=SYNC */
+	struct list_head fifo_list[3]; /* 0=ASYNC (or IDLE), 1=SYNC (or RT ASYNC), 2=RT SYNC */
 
 	/*
 	 * next in sort order.
@@ -44,7 +45,7 @@ struct deadline_data {
 	/*
 	 * settings that change how the i/o scheduler behaves
 	 */
-	int fifo_expire[2];
+	int fifo_expire[3];
 	int fifo_batch;
 	int async_starved;
 	int front_merges;
@@ -96,10 +97,65 @@ deadline_del_rq_rb(struct deadline_data *dd, struct request *rq)
 	elv_rb_del(deadline_rb_root(dd, rq), rq);
 }
 
+static int ioprio_lub(unsigned short aprio, unsigned short bprio)
+{
+	unsigned short aclass = IOPRIO_PRIO_CLASS(aprio);
+	unsigned short bclass = IOPRIO_PRIO_CLASS(bprio);
+
+	if (aclass == IOPRIO_CLASS_NONE)
+		return bprio;
+	if (bclass == IOPRIO_CLASS_NONE)
+		return aprio;
+
+	if (aclass == bclass)
+		return min(aprio, bprio);
+	if (aclass > bclass)
+		return bprio;
+	else
+		return aprio;
+}
+
+static void
+deadline_merge_prio_data(struct request_queue *q, struct request *rq)
+{
+	struct task_struct *tsk = current;
+	struct io_context *ioc = get_io_context(GFP_ATOMIC, q->node);
+	int ioprio_class = IOPRIO_CLASS_NONE;
+	int ioprio = IOPRIO_NORM;
+
+	if (ioc) {
+		ioprio_class = task_ioprio_class(ioc);
+	}
+
+	switch (ioprio_class) {
+	default:
+		printk(KERN_ERR "deadline: bad prio %x\n", ioprio_class);
+	case IOPRIO_CLASS_NONE:
+		/*
+		 * no prio set, inherit CPU scheduling settings
+		 */
+		ioprio = task_nice_ioprio(tsk);
+		ioprio_class = task_nice_ioclass(tsk);
+		break;
+	case IOPRIO_CLASS_RT:
+	case IOPRIO_CLASS_BE:
+		ioprio = task_ioprio(ioc);
+		break;
+	case IOPRIO_CLASS_IDLE:
+		ioprio = 7;
+		break;
+	}
+
+	ioprio = IOPRIO_PRIO_VALUE(ioprio_class, ioprio);
+	rq->ioprio = ioprio_lub(rq->ioprio, ioprio);
+}
+
 static int
 deadline_compute_req_priority(struct request *req)
 {
-	return !!rq_is_sync(req);
+	unsigned short ioprio_class = IOPRIO_PRIO_CLASS(req_get_ioprio(req));
+	return (ioprio_class != IOPRIO_CLASS_IDLE) *
+		(!!rq_is_sync(req) + (rq_data_dir(req) == READ) * (ioprio_class == IOPRIO_CLASS_RT));
 }
 
 /*
@@ -110,6 +166,7 @@ deadline_add_request(struct request_queue *q, struct request *rq)
 {
 	struct deadline_data *dd = q->elevator->elevator_data;
 
+	deadline_merge_prio_data(q, rq);
 	deadline_add_rq_rb(dd, rq);
 
 	/*
@@ -173,6 +230,8 @@ static void deadline_merged_request(struct request_queue *q,
 		elv_rb_del(deadline_rb_root(dd, req), req);
 		deadline_add_rq_rb(dd, req);
 	}
+
+	deadline_merge_prio_data(q, req);
 }
 
 static void
@@ -262,6 +321,7 @@ static inline int deadline_check_fifo(struct deadline_data *dd, unsigned prio)
 static int deadline_dispatch_requests(struct request_queue *q, int force)
 {
 	struct deadline_data *dd = q->elevator->elevator_data;
+	const int rt_reqs = !list_empty(&dd->fifo_list[2]);
 	const int sync_reqs = !list_empty(&dd->fifo_list[1]);
 	const int async_reqs = !list_empty(&dd->fifo_list[0]);
 	struct request *rq = dd->next_rq;
@@ -277,6 +337,11 @@ static int deadline_dispatch_requests(struct request_queue *q, int force)
 	 * data direction (read / write)
 	 */
 
+	if (rt_reqs) {
+		request_prio = 2;
+		goto dispatch_find_request;
+	}
+
 	if (sync_reqs) {
 		if (async_reqs && (dd->starved++ >= dd->async_starved))
 			goto dispatch_async;
@@ -338,7 +403,8 @@ static int deadline_queue_empty(struct request_queue *q)
 	struct deadline_data *dd = q->elevator->elevator_data;
 
 	return list_empty(&dd->fifo_list[0])
-		&& list_empty(&dd->fifo_list[1]);
+		&& list_empty(&dd->fifo_list[1])
+		&& list_empty(&dd->fifo_list[2]);
 }
 
 static void deadline_exit_queue(struct elevator_queue *e)
@@ -347,6 +413,7 @@ static void deadline_exit_queue(struct elevator_queue *e)
 
 	BUG_ON(!list_empty(&dd->fifo_list[0]));
 	BUG_ON(!list_empty(&dd->fifo_list[1]));
+	BUG_ON(!list_empty(&dd->fifo_list[2]));
 
 	kfree(dd);
 }
@@ -364,10 +431,12 @@ static void *deadline_init_queue(struct request_queue *q)
 
 	INIT_LIST_HEAD(&dd->fifo_list[0]);
 	INIT_LIST_HEAD(&dd->fifo_list[1]);
+	INIT_LIST_HEAD(&dd->fifo_list[2]);
 	dd->sort_list[READ] = RB_ROOT;
 	dd->sort_list[WRITE] = RB_ROOT;
 	dd->fifo_expire[0] = async_expire;
 	dd->fifo_expire[1] = sync_expire;
+	dd->fifo_expire[2] = rt_sync_expire;
 	dd->async_starved = async_starved;
 	dd->front_merges = 1;
 	dd->fifo_batch = fifo_batch;
@@ -404,6 +473,7 @@ static ssize_t __FUNC(struct elevator_queue *e, char *page)		\
 }
 SHOW_FUNCTION(deadline_async_expire_show, dd->fifo_expire[0], 1);
 SHOW_FUNCTION(deadline_sync_expire_show, dd->fifo_expire[1], 1);
+SHOW_FUNCTION(deadline_rt_sync_expire_show, dd->fifo_expire[2], 1);
 SHOW_FUNCTION(deadline_async_starved_show, dd->async_starved, 0);
 SHOW_FUNCTION(deadline_front_merges_show, dd->front_merges, 0);
 SHOW_FUNCTION(deadline_fifo_batch_show, dd->fifo_batch, 0);
@@ -427,6 +497,7 @@ static ssize_t __FUNC(struct elevator_queue *e, const char *page, size_t count)
 }
 STORE_FUNCTION(deadline_async_expire_store, &dd->fifo_expire[0], 0, INT_MAX, 1);
 STORE_FUNCTION(deadline_sync_expire_store, &dd->fifo_expire[1], 0, INT_MAX, 1);
+STORE_FUNCTION(deadline_rt_sync_expire_store, &dd->fifo_expire[2], 0, INT_MAX, 1);
 STORE_FUNCTION(deadline_async_starved_store, &dd->async_starved, INT_MIN, INT_MAX, 0);
 STORE_FUNCTION(deadline_front_merges_store, &dd->front_merges, 0, 1, 0);
 STORE_FUNCTION(deadline_fifo_batch_store, &dd->fifo_batch, 0, INT_MAX, 0);
@@ -439,6 +510,7 @@ STORE_FUNCTION(deadline_fifo_batch_store, &dd->fifo_batch, 0, INT_MAX, 0);
 static struct elv_fs_entry deadline_attrs[] = {
 	DD_ATTR(async_expire),
 	DD_ATTR(sync_expire),
+	DD_ATTR(rt_sync_expire),
 	DD_ATTR(async_starved),
 	DD_ATTR(front_merges),
 	DD_ATTR(fifo_batch),
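
To exercise the new top level from userspace, the reader must run in
the Real Time I/O class. A minimal sketch (assuming the standard ioprio
encoding from include/linux/ioprio.h; glibc has no wrapper, so the raw
syscall is used, just as ionice -c1 would, and the RT class requires
CAP_SYS_ADMIN):

	#include <stdio.h>
	#include <unistd.h>
	#include <sys/syscall.h>

	#define IOPRIO_CLASS_SHIFT	13
	#define IOPRIO_CLASS_RT		1
	#define IOPRIO_WHO_PROCESS	1

	int main(void)
	{
		/* RT class, level 0: sync reads from this process are now
		 * queued on fifo_list[2], whose deadline is rt_sync_expire. */
		int ioprio = (IOPRIO_CLASS_RT << IOPRIO_CLASS_SHIFT) | 0;

		if (syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, 0, ioprio) < 0) {
			perror("ioprio_set");
			return 1;
		}
		/* ... issue latency-sensitive reads here ... */
		return 0;
	}

With the patch applied, the new deadline should also be tunable at
runtime (in milliseconds) through the rt_sync_expire attribute that
DD_ATTR() exports under /sys/block/<dev>/queue/iosched/.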

^ permalink raw reply related	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2009-05-01 19:38 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2009-04-22 21:07 Reduce latencies for syncronous writes and high I/O priority requests in deadline IO scheduler Corrado Zoccolo
2009-04-23 11:18 ` Paolo Ciarrocchi
2009-04-23 11:28 ` Jens Axboe
2009-04-23 15:57   ` Corrado Zoccolo
2009-04-23 11:52 ` Aaron Carroll
2009-04-23 12:13   ` Jens Axboe
2009-04-23 16:10   ` Corrado Zoccolo
2009-04-23 23:30     ` Aaron Carroll
2009-04-24  6:13       ` Corrado Zoccolo
2009-04-24  6:39     ` Jens Axboe
2009-04-24 16:07       ` Corrado Zoccolo
2009-04-24 21:37         ` Corrado Zoccolo
2009-04-26 12:43           ` Corrado Zoccolo
2009-05-01 19:30             ` Corrado Zoccolo
