* [PATCH 0/7] blk-mq request and latency stats
@ 2020-03-09 20:59 Jes Sorensen
  2020-03-09 20:59 ` [PATCH 1/7] block: keep track of per-device io sizes in stats Jes Sorensen
                   ` (7 more replies)
  0 siblings, 8 replies; 9+ messages in thread
From: Jes Sorensen @ 2020-03-09 20:59 UTC (permalink / raw)
  To: linux-block; +Cc: kernel-team, mmullins, josef, Jes Sorensen

From: Jes Sorensen <jsorensen@fb.com>

Hi,

This patchset introduces collection of request size and latency
statistics for blk-mq, using the blk-stat infrastructure.

This was designed to have minimal overhead when not in use. It relies on
blk_rq_stats_sectors() and introduces a sectors counter to struct
blk_rq_stat.

For request sizes it uses 8 buckets per operation type. Latencies are
tracked with usec precision, using 32 buckets per operation type. To
avoid blowing up the size of struct request_queue, I changed it to
dynamically allocate these data structures.

Usage: request stats are enabled like this:
 $ echo 1 > /sys/block/nvme0n1/queue/reqstat
with output reading like this:
 $ cat /sys/block/nvme0n1/queue/stat
 read: 0 0 0 8278016 14270464 29323264 120107008 2069282816
 read reqs: 0 0 0 2021 1531 1377 3229 3627
 write: 4096 0 3072 10903552 9244672 6258688 16584704 2228011008
 write reqs: 8 0 1 2662 898 311 375 4972
 discard: 0 0 0 5242880 5472256 3809280 136880128 830554112
 discard reqs: 0 0 0 1280 515 196 4150 3717

Latency stats are enabled like this:
 $ echo 1 > /sys/block/nvme0n1/queue/latstat
with output reading like this:
 $  cat /sys/block/nvme0n1/queue/latency
 read: 0 0 0 0 4 101 677 5146 1162 2654 1933 832 657 52 8 0 3 2 3 2 0 0 0 0 0 0 0 0 0 0 0 0
 write: 0 0 0 79 2564 2641 8087 6226 1580 4052 498 332 385 365 382 279 323 166 109 119 188 267 0 0 0 0 0 0 0 0 0 0
 discard: 0 0 0 0 0 0 0 17709 698 15 0 1 0 0 3 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

Cheers,
Jes


Jes Sorensen (7):
  block: keep track of per-device io sizes in stats
  block: Use blk-stat infrastructure to collect per queue request stats
  Export block request stats to sysfs
  Expand block stats to export number of requests per bucket
  blk-mq: Only allocate request stat data when it is enabled
  blk-stat: Make bucket function take latency as an additional argument
  block: Introduce blk-mq latency stats

 block/blk-iolatency.c     |   2 +-
 block/blk-mq.c            | 110 ++++++++++++++++++++-
 block/blk-stat.c          |  18 ++--
 block/blk-stat.h          |  12 ++-
 block/blk-sysfs.c         | 195 ++++++++++++++++++++++++++++++++++++++
 block/blk-wbt.c           |   2 +-
 include/linux/blk_types.h |   1 +
 include/linux/blkdev.h    |  13 +++
 8 files changed, 338 insertions(+), 15 deletions(-)

-- 
2.17.1



* [PATCH 1/7] block: keep track of per-device io sizes in stats
  2020-03-09 20:59 [PATCH 0/7] blk-mq request and latency stats Jes Sorensen
@ 2020-03-09 20:59 ` Jes Sorensen
  2020-03-09 20:59 ` [PATCH 2/7] block: Use blk-stat infrastructure to collect per queue request stats Jes Sorensen
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Jes Sorensen @ 2020-03-09 20:59 UTC (permalink / raw)
  To: linux-block; +Cc: kernel-team, mmullins, josef, Jes Sorensen

From: Jes Sorensen <jsorensen@fb.com>

In order to provide blk stat heuristics, we need to track request
sizes per device. This relies on blk_rq_stats_sectors(), introduced in
commit 3d24430694077313c75c6b89f618db09943621e4. Add a field to struct
blk_rq_stat to hold this information so that consumers of the
blk_rq_stat infrastructure can use it.

Based on a previous patch by Josef Bacik <josef@toxicpanda.com>

Signed-off-by: Jes Sorensen <jsorensen@fb.com>
---
 block/blk-iolatency.c     |  2 +-
 block/blk-stat.c          | 14 ++++++++------
 block/blk-stat.h          |  2 +-
 include/linux/blk_types.h |  1 +
 4 files changed, 11 insertions(+), 8 deletions(-)

diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
index c128d50cb410..ca0eba5fedf7 100644
--- a/block/blk-iolatency.c
+++ b/block/blk-iolatency.c
@@ -219,7 +219,7 @@ static inline void latency_stat_record_time(struct iolatency_grp *iolat,
 			stat->ps.missed++;
 		stat->ps.total++;
 	} else
-		blk_rq_stat_add(&stat->rqs, req_time);
+		blk_rq_stat_add(&stat->rqs, req_time, 0);
 	put_cpu_ptr(stat);
 }
 
diff --git a/block/blk-stat.c b/block/blk-stat.c
index 7da302ff88d0..dd5c9c8989a5 100644
--- a/block/blk-stat.c
+++ b/block/blk-stat.c
@@ -21,7 +21,7 @@ struct blk_queue_stats {
 void blk_rq_stat_init(struct blk_rq_stat *stat)
 {
 	stat->min = -1ULL;
-	stat->max = stat->nr_samples = stat->mean = 0;
+	stat->max = stat->nr_samples = stat->mean = stat->sectors = 0;
 	stat->batch = 0;
 }
 
@@ -38,13 +38,15 @@ void blk_rq_stat_sum(struct blk_rq_stat *dst, struct blk_rq_stat *src)
 				dst->nr_samples + src->nr_samples);
 
 	dst->nr_samples += src->nr_samples;
+	dst->sectors += src->sectors;
 }
 
-void blk_rq_stat_add(struct blk_rq_stat *stat, u64 value)
+void blk_rq_stat_add(struct blk_rq_stat *stat, u64 time, u64 sectors)
 {
-	stat->min = min(stat->min, value);
-	stat->max = max(stat->max, value);
-	stat->batch += value;
+	stat->min = min(stat->min, time);
+	stat->max = max(stat->max, time);
+	stat->batch += time;
+	stat->sectors += sectors;
 	stat->nr_samples++;
 }
 
@@ -71,7 +73,7 @@ void blk_stat_add(struct request *rq, u64 now)
 			continue;
 
 		stat = &per_cpu_ptr(cb->cpu_stat, cpu)[bucket];
-		blk_rq_stat_add(stat, value);
+		blk_rq_stat_add(stat, value, blk_rq_stats_sectors(rq));
 	}
 	put_cpu();
 	rcu_read_unlock();
diff --git a/block/blk-stat.h b/block/blk-stat.h
index 17b47a86eefb..ea893c4a9af1 100644
--- a/block/blk-stat.h
+++ b/block/blk-stat.h
@@ -164,7 +164,7 @@ static inline void blk_stat_activate_msecs(struct blk_stat_callback *cb,
 	mod_timer(&cb->timer, jiffies + msecs_to_jiffies(msecs));
 }
 
-void blk_rq_stat_add(struct blk_rq_stat *, u64);
+void blk_rq_stat_add(struct blk_rq_stat *, u64, u64);
 void blk_rq_stat_sum(struct blk_rq_stat *, struct blk_rq_stat *);
 void blk_rq_stat_init(struct blk_rq_stat *);
 
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 70254ae11769..4db37b220367 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -482,6 +482,7 @@ struct blk_rq_stat {
 	u64 max;
 	u32 nr_samples;
 	u64 batch;
+	u64 sectors;
 };
 
 #endif /* __LINUX_BLK_TYPES_H */
-- 
2.17.1



* [PATCH 2/7] block: Use blk-stat infrastructure to collect per queue request stats
  2020-03-09 20:59 [PATCH 0/7] blk-mq request and latency stats Jes Sorensen
  2020-03-09 20:59 ` [PATCH 1/7] block: keep track of per-device io sizes in stats Jes Sorensen
@ 2020-03-09 20:59 ` Jes Sorensen
  2020-03-09 20:59 ` [PATCH 3/7] Export block request stats to sysfs Jes Sorensen
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Jes Sorensen @ 2020-03-09 20:59 UTC (permalink / raw)
  To: linux-block; +Cc: kernel-team, mmullins, josef, Jes Sorensen

From: Jes Sorensen <jsorensen@fb.com>

Track request sectors in 8 buckets for read, write, and discard.

Enable stats by writing 1 to /sys/block/<dev>/queue/reqstat, disable
by writing 0.

Signed-off-by: Jes Sorensen <jsorensen@fb.com>
---
 block/blk-mq.c         | 52 ++++++++++++++++++++++++++++++++++++++++++
 block/blk-stat.h       |  1 +
 block/blk-sysfs.c      | 40 ++++++++++++++++++++++++++++++++
 include/linux/blkdev.h |  7 ++++++
 4 files changed, 100 insertions(+)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index d92088dec6c3..4aff0903546c 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -60,6 +60,52 @@ static int blk_mq_poll_stats_bkt(const struct request *rq)
 	return bucket;
 }
 
+/*
+ * 8 buckets for each of read, write, and discard
+ */
+static int blk_req_stats_bkt(const struct request *rq)
+{
+	int grp, bucket;
+
+	grp = op_stat_group(req_op(rq));
+
+	bucket = grp + 3 * ilog2(blk_rq_stats_sectors(rq));
+
+	if (bucket < 0)
+		return -1;
+	else if (bucket >= BLK_REQ_STATS_BKTS)
+		return grp + BLK_REQ_STATS_BKTS - 3;
+
+	return bucket;
+}
+
+/*
+ * Copy out the stats to their official location
+ */
+static void blk_req_stats_cb(struct blk_stat_callback *cb)
+{
+	struct request_queue *q = cb->data;
+	int bucket;
+
+	for (bucket = 0; bucket < BLK_REQ_STATS_BKTS; bucket++) {
+		if (cb->stat[bucket].nr_samples) {
+			q->req_stat[bucket].sectors +=
+				cb->stat[bucket].sectors;
+			q->req_stat[bucket].nr_samples +=
+				cb->stat[bucket].nr_samples;
+		}
+	}
+
+	if (!blk_stat_is_active(cb))
+		blk_stat_activate_msecs(cb, 100);
+}
+
+void blk_req_stats_free(struct request_queue *q)
+{
+	blk_stat_remove_callback(q, q->reqstat_cb);
+	blk_stat_free_callback(q->reqstat_cb);
+}
+
 /*
  * Check if any of the ctx, dispatch list or elevator
  * have pending work in this hardware queue.
@@ -2910,6 +2956,12 @@ struct request_queue *blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
 	if (!q->nr_hw_queues)
 		goto err_hctxs;
 
+	q->reqstat_cb = blk_stat_alloc_callback(blk_req_stats_cb,
+						blk_req_stats_bkt,
+						BLK_REQ_STATS_BKTS, q);
+	if (!q->reqstat_cb)
+		goto err_hctxs;
+
 	INIT_WORK(&q->timeout_work, blk_mq_timeout_work);
 	blk_queue_rq_timeout(q, set->timeout ? set->timeout : 30 * HZ);
 
diff --git a/block/blk-stat.h b/block/blk-stat.h
index ea893c4a9af1..e592bbf50d38 100644
--- a/block/blk-stat.h
+++ b/block/blk-stat.h
@@ -168,4 +168,5 @@ void blk_rq_stat_add(struct blk_rq_stat *, u64, u64);
 void blk_rq_stat_sum(struct blk_rq_stat *, struct blk_rq_stat *);
 void blk_rq_stat_init(struct blk_rq_stat *);
 
+void blk_req_stats_free(struct request_queue *q);
 #endif
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index fca9b158f4a0..8841146cad54 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -529,6 +529,37 @@ static ssize_t queue_dax_show(struct request_queue *q, char *page)
 	return queue_var_show(blk_queue_dax(q), page);
 }
 
+static ssize_t queue_reqstat_show(struct request_queue *q, char *page)
+{
+	return queue_var_show(test_bit(QUEUE_FLAG_REQSTATS,
+				       &q->queue_flags), page);
+}
+
+static ssize_t queue_reqstat_store(struct request_queue *q, const char *page,
+				    size_t size)
+{
+	unsigned long reqstat_on;
+	ssize_t ret;
+
+	ret = queue_var_store(&reqstat_on, page, size);
+	if (ret < 0)
+		return ret;
+
+	if (reqstat_on) {
+		if (!blk_queue_flag_test_and_set(QUEUE_FLAG_REQSTATS, q))
+			blk_stat_add_callback(q, q->reqstat_cb);
+		if (!blk_stat_is_active(q->reqstat_cb))
+			blk_stat_activate_msecs(q->reqstat_cb, 100);
+	} else {
+		if (test_bit(QUEUE_FLAG_REQSTATS, &q->queue_flags)) {
+			blk_stat_remove_callback(q, q->reqstat_cb);
+			blk_queue_flag_clear(QUEUE_FLAG_REQSTATS, q);
+		}
+	}
+
+	return ret;
+}
+
 static struct queue_sysfs_entry queue_requests_entry = {
 	.attr = {.name = "nr_requests", .mode = 0644 },
 	.show = queue_requests_show,
@@ -727,6 +758,12 @@ static struct queue_sysfs_entry throtl_sample_time_entry = {
 };
 #endif
 
+static struct queue_sysfs_entry queue_reqstat_entry = {
+	.attr = {.name = "reqstat", .mode = 0644 },
+	.show = queue_reqstat_show,
+	.store = queue_reqstat_store,
+};
+
 static struct attribute *queue_attrs[] = {
 	&queue_requests_entry.attr,
 	&queue_ra_entry.attr,
@@ -766,6 +803,7 @@ static struct attribute *queue_attrs[] = {
 #ifdef CONFIG_BLK_DEV_THROTTLING_LOW
 	&throtl_sample_time_entry.attr,
 #endif
+	&queue_reqstat_entry.attr,
 	NULL,
 };
 
@@ -877,6 +915,8 @@ static void __blk_release_queue(struct work_struct *work)
 {
 	struct request_queue *q = container_of(work, typeof(*q), release_work);
 
+	blk_req_stats_free(q);
+
 	if (test_bit(QUEUE_FLAG_POLL_STATS, &q->queue_flags))
 		blk_stat_remove_callback(q, q->poll_cb);
 	blk_stat_free_callback(q->poll_cb);
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 10455b2bbbb4..58abab51ed9f 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -53,6 +53,9 @@ struct blk_stat_callback;
 /* Doing classic polling */
 #define BLK_MQ_POLL_CLASSIC -1
 
+/* Must be consistent with blk_req_stats_bkt() */
+#define BLK_REQ_STATS_BKTS (3 * 8)
+
 /*
  * Maximum number of blkcg policies allowed to be registered concurrently.
  * Defined here to simplify include dependency.
@@ -480,6 +483,9 @@ struct request_queue {
 	struct blk_stat_callback	*poll_cb;
 	struct blk_rq_stat	poll_stat[BLK_MQ_POLL_STATS_BKTS];
 
+	struct blk_stat_callback	*reqstat_cb;
+	struct blk_rq_stat	req_stat[BLK_REQ_STATS_BKTS];
+
 	struct timer_list	timeout;
 	struct work_struct	timeout_work;
 
@@ -612,6 +618,7 @@ struct request_queue {
 #define QUEUE_FLAG_PCI_P2PDMA	25	/* device supports PCI p2p requests */
 #define QUEUE_FLAG_ZONE_RESETALL 26	/* supports Zone Reset All */
 #define QUEUE_FLAG_RQ_ALLOC_TIME 27	/* record rq->alloc_time_ns */
+#define QUEUE_FLAG_REQSTATS	28	/* request stats enabled if set */
 
 #define QUEUE_FLAG_MQ_DEFAULT	((1 << QUEUE_FLAG_IO_STAT) |		\
 				 (1 << QUEUE_FLAG_SAME_COMP))
-- 
2.17.1



* [PATCH 3/7] Export block request stats to sysfs
  2020-03-09 20:59 [PATCH 0/7] blk-mq request and latency stats Jes Sorensen
  2020-03-09 20:59 ` [PATCH 1/7] block: keep track of per-device io sizes in stats Jes Sorensen
  2020-03-09 20:59 ` [PATCH 2/7] block: Use blk-stat infrastructure to collect per queue request stats Jes Sorensen
@ 2020-03-09 20:59 ` Jes Sorensen
  2020-03-09 20:59 ` [PATCH 4/7] Expand block stats to export number of requests per bucket Jes Sorensen
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Jes Sorensen @ 2020-03-09 20:59 UTC (permalink / raw)
  To: linux-block; +Cc: kernel-team, mmullins, josef, Jes Sorensen

From: Jes Sorensen <jsorensen@fb.com>

This exports the number of bytes read/written/discarded per bucket to
sysfs.

Signed-off-by: Jes Sorensen <jsorensen@fb.com>
---
 block/blk-sysfs.c | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index 8841146cad54..44517799a2e8 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -529,6 +529,23 @@ static ssize_t queue_dax_show(struct request_queue *q, char *page)
 	return queue_var_show(blk_queue_dax(q), page);
 }
 
+static ssize_t queue_stat_show(struct request_queue *q, char *p)
+{
+	char name[3][8] = {"read", "write", "discard"};
+	int bkt, off, i;
+
+	off = 0;
+	for (i = 0; i < 3; i++) {
+		off += sprintf(p + off, "%s: ", name[i]);
+		for (bkt = 0; bkt < (BLK_REQ_STATS_BKTS / 3); bkt++) {
+			off += sprintf(p + off, "%llu ",
+				       q->req_stat[i + 3 * bkt].sectors << 9);
+		}
+		off += sprintf(p + off, "\n");
+	}
+	return off;
+}
+
 static ssize_t queue_reqstat_show(struct request_queue *q, char *page)
 {
 	return queue_var_show(test_bit(QUEUE_FLAG_REQSTATS,
@@ -764,6 +781,11 @@ static struct queue_sysfs_entry queue_reqstat_entry = {
 	.store = queue_reqstat_store,
 };
 
+static struct queue_sysfs_entry queue_stat_entry = {
+	.attr = {.name = "stat", .mode = 0444 },
+	.show = queue_stat_show,
+};
+
 static struct attribute *queue_attrs[] = {
 	&queue_requests_entry.attr,
 	&queue_ra_entry.attr,
@@ -804,6 +826,7 @@ static struct attribute *queue_attrs[] = {
 	&throtl_sample_time_entry.attr,
 #endif
 	&queue_reqstat_entry.attr,
+	&queue_stat_entry.attr,
 	NULL,
 };
 
-- 
2.17.1



* [PATCH 4/7] Expand block stats to export number of requests per bucket
  2020-03-09 20:59 [PATCH 0/7] blk-mq request and latency stats Jes Sorensen
                   ` (2 preceding siblings ...)
  2020-03-09 20:59 ` [PATCH 3/7] Export block request stats to sysfs Jes Sorensen
@ 2020-03-09 20:59 ` Jes Sorensen
  2020-03-09 20:59 ` [PATCH 5/7] blk-mq: Only allocate request stat data when it is enabled Jes Sorensen
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Jes Sorensen @ 2020-03-09 20:59 UTC (permalink / raw)
  To: linux-block; +Cc: kernel-team, mmullins, josef, Jes Sorensen

From: Jes Sorensen <jsorensen@fb.com>

Signed-off-by: Jes Sorensen <jsorensen@fb.com>
---
 block/blk-sysfs.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index 44517799a2e8..aeb69c57ffb7 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -542,6 +542,14 @@ static ssize_t queue_stat_show(struct request_queue *q, char *p)
 				       q->req_stat[i + 3 * bkt].sectors << 9);
 		}
 		off += sprintf(p + off, "\n");
+
+		off += sprintf(p + off, "%s reqs: ", name[i]);
+		for (bkt = 0; bkt < (BLK_REQ_STATS_BKTS / 3); bkt++) {
+			off += sprintf(p + off, "%u ",
+				       q->req_stat[i + 3 * bkt].nr_samples);
+		}
+
+		off += sprintf(p + off, "\n");
 	}
 	return off;
 }
-- 
2.17.1



* [PATCH 5/7] blk-mq: Only allocate request stat data when it is enabled
  2020-03-09 20:59 [PATCH 0/7] blk-mq request and latency stats Jes Sorensen
                   ` (3 preceding siblings ...)
  2020-03-09 20:59 ` [PATCH 4/7] Expand block stats to export number of requests per bucket Jes Sorensen
@ 2020-03-09 20:59 ` Jes Sorensen
  2020-03-09 20:59 ` [PATCH 6/7] blk-stat: Make bucket function take latency as an additional argument Jes Sorensen
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Jes Sorensen @ 2020-03-09 20:59 UTC (permalink / raw)
  To: linux-block; +Cc: kernel-team, mmullins, josef, Jes Sorensen

From: Jes Sorensen <jsorensen@fb.com>

When request stats are not enabled, this reduces their footprint in
struct request_queue to two pointers.

Signed-off-by: Jes Sorensen <jsorensen@fb.com>
---
 block/blk-mq.c         | 20 ++++++++++----------
 block/blk-stat.h       |  2 ++
 block/blk-sysfs.c      | 36 ++++++++++++++++++++++++++++++++----
 include/linux/blkdev.h |  2 +-
 4 files changed, 45 insertions(+), 15 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 4aff0903546c..04652e59b0e9 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -63,7 +63,7 @@ static int blk_mq_poll_stats_bkt(const struct request *rq)
 /*
  * 8 buckets for each of read, write, and discard
  */
-static int blk_req_stats_bkt(const struct request *rq)
+int blk_req_stats_bkt(const struct request *rq)
 {
 	int grp, bucket;
 
@@ -82,7 +82,7 @@ static int blk_req_stats_bkt(const struct request *rq)
 /*
  * Copy out the stats to their official location
  */
-static void blk_req_stats_cb(struct blk_stat_callback *cb)
+void blk_req_stats_cb(struct blk_stat_callback *cb)
 {
 	struct request_queue *q = cb->data;
 	int bucket;
@@ -102,8 +102,14 @@ static void blk_req_stats_cb(struct blk_stat_callback *cb)
 
 void blk_req_stats_free(struct request_queue *q)
 {
-	blk_stat_remove_callback(q, q->reqstat_cb);
-	blk_stat_free_callback(q->reqstat_cb);
+	if (test_bit(QUEUE_FLAG_REQSTATS, &q->queue_flags)) {
+		blk_stat_remove_callback(q, q->reqstat_cb);
+		blk_queue_flag_clear(QUEUE_FLAG_REQSTATS, q);
+		blk_stat_free_callback(q->reqstat_cb);
+		q->reqstat_cb = NULL;
+		kfree(q->req_stat);
+		q->req_stat = NULL;
+	}
 }
 
 /*
@@ -2956,12 +2962,6 @@ struct request_queue *blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
 	if (!q->nr_hw_queues)
 		goto err_hctxs;
 
-	q->reqstat_cb = blk_stat_alloc_callback(blk_req_stats_cb,
-						blk_req_stats_bkt,
-						BLK_REQ_STATS_BKTS, q);
-	if (!q->reqstat_cb)
-		goto err_hctxs;
-
 	INIT_WORK(&q->timeout_work, blk_mq_timeout_work);
 	blk_queue_rq_timeout(q, set->timeout ? set->timeout : 30 * HZ);
 
diff --git a/block/blk-stat.h b/block/blk-stat.h
index e592bbf50d38..d23090b53e12 100644
--- a/block/blk-stat.h
+++ b/block/blk-stat.h
@@ -168,5 +168,7 @@ void blk_rq_stat_add(struct blk_rq_stat *, u64, u64);
 void blk_rq_stat_sum(struct blk_rq_stat *, struct blk_rq_stat *);
 void blk_rq_stat_init(struct blk_rq_stat *);
 
+int blk_req_stats_bkt(const struct request *rq);
+void blk_req_stats_cb(struct blk_stat_callback *cb);
 void blk_req_stats_free(struct request_queue *q);
 #endif
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index aeb69c57ffb7..b1469b3ce511 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -16,6 +16,7 @@
 #include "blk-mq.h"
 #include "blk-mq-debugfs.h"
 #include "blk-wbt.h"
+#include "blk-stat.h"
 
 struct queue_sysfs_entry {
 	struct attribute attr;
@@ -534,6 +535,9 @@ static ssize_t queue_stat_show(struct request_queue *q, char *p)
 	char name[3][8] = {"read", "write", "discard"};
 	int bkt, off, i;
 
+	if (!q->req_stat)
+		return -ENODEV;
+
 	off = 0;
 	for (i = 0; i < 3; i++) {
 		off += sprintf(p + off, "%s: ", name[i]);
@@ -571,18 +575,42 @@ static ssize_t queue_reqstat_store(struct request_queue *q, const char *page,
 		return ret;
 
 	if (reqstat_on) {
+		if (!q->req_stat) {
+			q->req_stat = kcalloc(BLK_REQ_STATS_BKTS,
+					      sizeof(struct blk_rq_stat),
+					      GFP_KERNEL);
+			if (!q->req_stat) {
+				ret = -ENOMEM;
+				goto err_out;
+			}
+			q->reqstat_cb =
+				blk_stat_alloc_callback(blk_req_stats_cb,
+							blk_req_stats_bkt,
+							BLK_REQ_STATS_BKTS,
+							q);
+			if (!q->reqstat_cb) {
+				ret = -ENOMEM;
+				goto err_out;
+			}
+		}
 		if (!blk_queue_flag_test_and_set(QUEUE_FLAG_REQSTATS, q))
 			blk_stat_add_callback(q, q->reqstat_cb);
 		if (!blk_stat_is_active(q->reqstat_cb))
 			blk_stat_activate_msecs(q->reqstat_cb, 100);
 	} else {
-		if (test_bit(QUEUE_FLAG_REQSTATS, &q->queue_flags)) {
-			blk_stat_remove_callback(q, q->reqstat_cb);
-			blk_queue_flag_clear(QUEUE_FLAG_REQSTATS, q);
-		}
+		blk_req_stats_free(q);
 	}
 
+ out:
 	return ret;
+ err_out:
+	if (q->reqstat_cb) {
+		blk_stat_free_callback(q->reqstat_cb);
+		q->reqstat_cb = NULL;
+	}
+	kfree(q->req_stat);
+	q->req_stat = NULL;
+	goto out;
 }
 
 static struct queue_sysfs_entry queue_requests_entry = {
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 58abab51ed9f..1731c4ec4d34 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -484,7 +484,7 @@ struct request_queue {
 	struct blk_rq_stat	poll_stat[BLK_MQ_POLL_STATS_BKTS];
 
 	struct blk_stat_callback	*reqstat_cb;
-	struct blk_rq_stat	req_stat[BLK_REQ_STATS_BKTS];
+	struct blk_rq_stat	*req_stat;
 
 	struct timer_list	timeout;
 	struct work_struct	timeout_work;
-- 
2.17.1



* [PATCH 6/7] blk-stat: Make bucket function take latency as an additional argument
  2020-03-09 20:59 [PATCH 0/7] blk-mq request and latency stats Jes Sorensen
                   ` (4 preceding siblings ...)
  2020-03-09 20:59 ` [PATCH 5/7] blk-mq: Only allocate request stat data when it is enabled Jes Sorensen
@ 2020-03-09 20:59 ` Jes Sorensen
  2020-03-09 20:59 ` [PATCH 7/7] block: Introduce blk-mq latency stats Jes Sorensen
  2020-03-18 14:49 ` [PATCH 0/7] blk-mq request and " Jes Sorensen
  7 siblings, 0 replies; 9+ messages in thread
From: Jes Sorensen @ 2020-03-09 20:59 UTC (permalink / raw)
  To: linux-block; +Cc: kernel-team, mmullins, josef, Jes Sorensen

From: Jes Sorensen <jsorensen@fb.com>

This is useful for tracking request latencies, which are introduced
in the following patch.

Signed-off-by: Jes Sorensen <jsorensen@fb.com>
---
 block/blk-mq.c   | 6 +++---
 block/blk-stat.c | 4 ++--
 block/blk-stat.h | 6 +++---
 block/blk-wbt.c  | 2 +-
 4 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 04652e59b0e9..a1e4c444a10b 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -43,7 +43,7 @@
 static void blk_mq_poll_stats_start(struct request_queue *q);
 static void blk_mq_poll_stats_fn(struct blk_stat_callback *cb);
 
-static int blk_mq_poll_stats_bkt(const struct request *rq)
+static int blk_mq_poll_stats_bkt(const struct request *rq, u64 value)
 {
 	int ddir, sectors, bucket;
 
@@ -63,7 +63,7 @@ static int blk_mq_poll_stats_bkt(const struct request *rq)
 /*
  * 8 buckets for each of read, write, and discard
  */
-int blk_req_stats_bkt(const struct request *rq)
+int blk_req_stats_bkt(const struct request *rq, u64 value)
 {
 	int grp, bucket;
 
@@ -3475,7 +3475,7 @@ static unsigned long blk_mq_poll_nsecs(struct request_queue *q,
 	 * than ~10 usec. We do use the stats for the relevant IO size
 	 * if available which does lead to better estimates.
 	 */
-	bucket = blk_mq_poll_stats_bkt(rq);
+	bucket = blk_mq_poll_stats_bkt(rq, 0);
 	if (bucket < 0)
 		return ret;
 
diff --git a/block/blk-stat.c b/block/blk-stat.c
index dd5c9c8989a5..812af84b6d1b 100644
--- a/block/blk-stat.c
+++ b/block/blk-stat.c
@@ -68,7 +68,7 @@ void blk_stat_add(struct request *rq, u64 now)
 		if (!blk_stat_is_active(cb))
 			continue;
 
-		bucket = cb->bucket_fn(rq);
+		bucket = cb->bucket_fn(rq, value);
 		if (bucket < 0)
 			continue;
 
@@ -103,7 +103,7 @@ static void blk_stat_timer_fn(struct timer_list *t)
 
 struct blk_stat_callback *
 blk_stat_alloc_callback(void (*timer_fn)(struct blk_stat_callback *),
-			int (*bucket_fn)(const struct request *),
+			int (*bucket_fn)(const struct request *, u64),
 			unsigned int buckets, void *data)
 {
 	struct blk_stat_callback *cb;
diff --git a/block/blk-stat.h b/block/blk-stat.h
index d23090b53e12..51abad3775a9 100644
--- a/block/blk-stat.h
+++ b/block/blk-stat.h
@@ -37,7 +37,7 @@ struct blk_stat_callback {
 	 * should be accounted under. Return -1 for no bucket for this
 	 * request.
 	 */
-	int (*bucket_fn)(const struct request *);
+	int (*bucket_fn)(const struct request *, u64);
 
 	/**
 	 * @buckets: Number of statistics buckets.
@@ -83,7 +83,7 @@ void blk_stat_enable_accounting(struct request_queue *q);
  */
 struct blk_stat_callback *
 blk_stat_alloc_callback(void (*timer_fn)(struct blk_stat_callback *),
-			int (*bucket_fn)(const struct request *),
+			int (*bucket_fn)(const struct request *, u64),
 			unsigned int buckets, void *data);
 
 /**
@@ -168,7 +168,7 @@ void blk_rq_stat_add(struct blk_rq_stat *, u64, u64);
 void blk_rq_stat_sum(struct blk_rq_stat *, struct blk_rq_stat *);
 void blk_rq_stat_init(struct blk_rq_stat *);
 
-int blk_req_stats_bkt(const struct request *rq);
+int blk_req_stats_bkt(const struct request *rq, u64 value);
 void blk_req_stats_cb(struct blk_stat_callback *cb);
 void blk_req_stats_free(struct request_queue *q);
 #endif
diff --git a/block/blk-wbt.c b/block/blk-wbt.c
index 8641ba9793c5..9593f7ae3e31 100644
--- a/block/blk-wbt.c
+++ b/block/blk-wbt.c
@@ -669,7 +669,7 @@ u64 wbt_default_latency_nsec(struct request_queue *q)
 		return 75000000ULL;
 }
 
-static int wbt_data_dir(const struct request *rq)
+static int wbt_data_dir(const struct request *rq, u64 value)
 {
 	const int op = req_op(rq);
 
-- 
2.17.1



* [PATCH 7/7] block: Introduce blk-mq latency stats
  2020-03-09 20:59 [PATCH 0/7] blk-mq request and latency stats Jes Sorensen
                   ` (5 preceding siblings ...)
  2020-03-09 20:59 ` [PATCH 6/7] blk-stat: Make bucket function take latency as an additional argument Jes Sorensen
@ 2020-03-09 20:59 ` Jes Sorensen
  2020-03-18 14:49 ` [PATCH 0/7] blk-mq request and " Jes Sorensen
  7 siblings, 0 replies; 9+ messages in thread
From: Jes Sorensen @ 2020-03-09 20:59 UTC (permalink / raw)
  To: linux-block; +Cc: kernel-team, mmullins, josef, Jes Sorensen

From: Jes Sorensen <jsorensen@fb.com>

This uses the blk-stat infrastructure to collect latency statistics
for read/write/discard requests. Stats are accounted with usec
precision using 32 buckets per operation type, which should cover
latencies up to approximately 35 minutes in the highest bucket.

Stats are only collected once enabled, and the data structures are
released again when stats for the device are disabled.

This is enabled on a per-device basis by
 $ echo 1 > /sys/block/<device>/queue/latstat
Latency stats are read from /sys/block/<device>/queue/latency:
 $ cat /sys/block/sda/queue/latency
 read: 0 0 0 1 1 29 97 119 120 99 113 126 165 226 266 116 19 17 4 1 0 0 0 0 0 0 0 0 0 0 0 0
 write: 0 0 0 0 0 12 259 91 234 218 347 448 285 564 263 211 319 205 36 6 15 0 0 0 0 0 0 0 0 0 0 0
 discard: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

Signed-off-by: Jes Sorensen <jsorensen@fb.com>
---
 block/blk-mq.c         | 54 ++++++++++++++++++++++++
 block/blk-stat.h       |  3 ++
 block/blk-sysfs.c      | 96 ++++++++++++++++++++++++++++++++++++++++++
 include/linux/blkdev.h |  6 +++
 4 files changed, 159 insertions(+)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index a1e4c444a10b..5472842f6077 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -112,6 +112,60 @@ void blk_req_stats_free(struct request_queue *q)
 	}
 }
 
+/*
+ * With 32 buckets for latencies: the kernel counts latency in ns,
+ * but we bucket in us. This gives us roughly 35 minutes of range;
+ * reducing to 28 buckets would limit it to about 2 minutes.
+ */
+int blk_latency_stats_bkt(const struct request *rq, u64 value)
+{
+	int grp, bucket, index;
+
+	if (!value)
+		return -1;
+
+	grp = op_stat_group(req_op(rq));
+
+	index = ilog2(value / NSEC_PER_USEC);
+	if (index >= (BLK_LATENCY_STATS_BKTS / 3))
+		index = (BLK_LATENCY_STATS_BKTS / 3) - 1;
+
+	bucket = 3 * index + grp;
+
+	return bucket;
+}
+
+/*
+ * Copy out the stats to their official location
+ */
+void blk_latency_stats_cb(struct blk_stat_callback *cb)
+{
+	struct request_queue *q = cb->data;
+	int bucket;
+
+	for (bucket = 0; bucket < BLK_LATENCY_STATS_BKTS; bucket++) {
+		if (cb->stat[bucket].nr_samples) {
+			q->latency_stat[bucket].nr_samples +=
+				cb->stat[bucket].nr_samples;
+		}
+	}
+
+	if (!blk_stat_is_active(cb))
+		blk_stat_activate_msecs(cb, 200);
+}
+
+void blk_latency_stats_free(struct request_queue *q)
+{
+	if (test_bit(QUEUE_FLAG_LATENCYSTATS, &q->queue_flags)) {
+		blk_stat_remove_callback(q, q->latency_cb);
+		blk_queue_flag_clear(QUEUE_FLAG_LATENCYSTATS, q);
+		blk_stat_free_callback(q->latency_cb);
+		q->latency_cb = NULL;
+		kfree(q->latency_stat);
+		q->latency_stat = NULL;
+	}
+}
+
 /*
  * Check if any of the ctx, dispatch list or elevator
  * have pending work in this hardware queue.
diff --git a/block/blk-stat.h b/block/blk-stat.h
index 51abad3775a9..4b991b95271c 100644
--- a/block/blk-stat.h
+++ b/block/blk-stat.h
@@ -171,4 +171,7 @@ void blk_rq_stat_init(struct blk_rq_stat *);
 int blk_req_stats_bkt(const struct request *rq, u64 value);
 void blk_req_stats_cb(struct blk_stat_callback *cb);
 void blk_req_stats_free(struct request_queue *q);
+int blk_latency_stats_bkt(const struct request *rq, u64 value);
+void blk_latency_stats_cb(struct blk_stat_callback *cb);
+void blk_latency_stats_free(struct request_queue *q);
 #endif
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index b1469b3ce511..96ec0191e748 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -613,6 +613,88 @@ static ssize_t queue_reqstat_store(struct request_queue *q, const char *page,
 	goto out;
 }
 
+static ssize_t queue_latency_show(struct request_queue *q, char *p)
+{
+	char name[3][8] = {"read", "write", "discard"};
+	int bkt, off, i;
+
+	if (!q->latency_stat)
+		return -ENODEV;
+
+	/*
+	 * 64 bit decimal is max 20 characters + 1 whitespace, which
+	 * totals a max of 672 characters per read/write/discard line,
+	 * not counting the prefix. This easily keeps us within a 4KB
+	 * page of output.
+	 */
+	off = 0;
+	for (i = 0; i < 3; i++) {
+		off += sprintf(p + off, "%s: ", name[i]);
+		for (bkt = 0; bkt < (BLK_LATENCY_STATS_BKTS / 3); bkt++) {
+			off += sprintf(p + off, "%u ",
+				       q->latency_stat[i + 3 * bkt].nr_samples);
+		}
+
+		off += sprintf(p + off, "\n");
+	}
+	return off;
+}
+
+static ssize_t queue_latstat_show(struct request_queue *q, char *page)
+{
+	return queue_var_show(test_bit(QUEUE_FLAG_LATENCYSTATS,
+				       &q->queue_flags), page);
+}
+
+static ssize_t queue_latstat_store(struct request_queue *q, const char *page,
+				    size_t size)
+{
+	unsigned long latstat_on;
+	ssize_t ret;
+
+	ret = queue_var_store(&latstat_on, page, size);
+	if (ret < 0)
+		return ret;
+
+	if (latstat_on) {
+		if (!q->latency_stat) {
+			q->latency_stat = kcalloc(BLK_LATENCY_STATS_BKTS,
+						  sizeof(struct blk_rq_stat),
+						  GFP_KERNEL);
+			if (!q->latency_stat) {
+				ret = -ENOMEM;
+				goto err_out;
+			}
+			q->latency_cb =
+				blk_stat_alloc_callback(blk_latency_stats_cb,
+							blk_latency_stats_bkt,
+							BLK_LATENCY_STATS_BKTS,
+							q);
+			if (!q->latency_cb) {
+				ret = -ENOMEM;
+				goto err_out;
+			}
+		}
+		if (!blk_queue_flag_test_and_set(QUEUE_FLAG_LATENCYSTATS, q))
+			blk_stat_add_callback(q, q->latency_cb);
+		if (!blk_stat_is_active(q->latency_cb))
+			blk_stat_activate_msecs(q->latency_cb, 100);
+	} else {
+		blk_latency_stats_free(q);
+	}
+
+ out:
+	return ret;
+ err_out:
+	if (q->latency_cb) {
+		blk_stat_free_callback(q->latency_cb);
+		q->latency_cb = NULL;
+	}
+	kfree(q->latency_stat);
+	q->latency_stat = NULL;
+	goto out;
+}
+
 static struct queue_sysfs_entry queue_requests_entry = {
 	.attr = {.name = "nr_requests", .mode = 0644 },
 	.show = queue_requests_show,
@@ -822,6 +904,17 @@ static struct queue_sysfs_entry queue_stat_entry = {
 	.show = queue_stat_show,
 };
 
+static struct queue_sysfs_entry queue_latency_entry = {
+	.attr = {.name = "latency", .mode = 0444 },
+	.show = queue_latency_show,
+};
+
+static struct queue_sysfs_entry queue_latstat_entry = {
+	.attr = {.name = "latstat", .mode = 0644 },
+	.show = queue_latstat_show,
+	.store = queue_latstat_store,
+};
+
 static struct attribute *queue_attrs[] = {
 	&queue_requests_entry.attr,
 	&queue_ra_entry.attr,
@@ -863,6 +956,8 @@ static struct attribute *queue_attrs[] = {
 #endif
 	&queue_reqstat_entry.attr,
 	&queue_stat_entry.attr,
+	&queue_latency_entry.attr,
+	&queue_latstat_entry.attr,
 	NULL,
 };
 
@@ -974,6 +1069,7 @@ static void __blk_release_queue(struct work_struct *work)
 {
 	struct request_queue *q = container_of(work, typeof(*q), release_work);
 
+	blk_latency_stats_free(q);
 	blk_req_stats_free(q);
 
 	if (test_bit(QUEUE_FLAG_POLL_STATS, &q->queue_flags))
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 1731c4ec4d34..fd5fab43bda3 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -56,6 +56,8 @@ struct blk_stat_callback;
 /* Must be consistent with blk_part_stats_bkt() */
 #define BLK_REQ_STATS_BKTS (3 * 8)
 
+#define BLK_LATENCY_STATS_BKTS (3 * 32)
+
 /*
  * Maximum number of blkcg policies allowed to be registered concurrently.
  * Defined here to simplify include dependency.
@@ -486,6 +488,9 @@ struct request_queue {
 	struct blk_stat_callback	*reqstat_cb;
 	struct blk_rq_stat	*req_stat;
 
+	struct blk_stat_callback	*latency_cb;
+	struct blk_rq_stat	*latency_stat;
+
 	struct timer_list	timeout;
 	struct work_struct	timeout_work;
 
@@ -619,6 +624,7 @@ struct request_queue {
 #define QUEUE_FLAG_ZONE_RESETALL 26	/* supports Zone Reset All */
 #define QUEUE_FLAG_RQ_ALLOC_TIME 27	/* record rq->alloc_time_ns */
 #define QUEUE_FLAG_REQSTATS	28	/* request stats enabled if set */
+#define QUEUE_FLAG_LATENCYSTATS	29	/* latency stats enabled if set */
 
 #define QUEUE_FLAG_MQ_DEFAULT	((1 << QUEUE_FLAG_IO_STAT) |		\
 				 (1 << QUEUE_FLAG_SAME_COMP))
-- 
2.17.1



* Re: [PATCH 0/7] blk-mq request and latency stats
  2020-03-09 20:59 [PATCH 0/7] blk-mq request and latency stats Jes Sorensen
                   ` (6 preceding siblings ...)
  2020-03-09 20:59 ` [PATCH 7/7] block: Introduce blk-mq latency stats Jes Sorensen
@ 2020-03-18 14:49 ` Jes Sorensen
  7 siblings, 0 replies; 9+ messages in thread
From: Jes Sorensen @ 2020-03-18 14:49 UTC (permalink / raw)
  To: Jes Sorensen, linux-block; +Cc: kernel-team, mmullins, josef, Jens Axboe

On 3/9/20 4:59 PM, Jes Sorensen wrote:
> From: Jes Sorensen <jsorensen@fb.com>
> 
> Hi,
> 
> This patchset introduces statistics collection of request sizes and
> latencies for blk-mq using the blk-stat infrastructure.

Hi,

Any comments on this?

Thanks,
Jes


> This was designed to have minimal overhead when not in use. It relies on
> blk_rq_stats_sectors() and introduces a sectors counter to struct
> blk_rq_stat.
> 
> For request sizes it uses 8 buckets per operation type. Latencies are
> tracked with microsecond precision, using 32 buckets per operation type.
> To avoid blowing up the size of struct request_queue, I changed it to
> dynamically allocate these data structures.
> 
> Usage: request stats are enabled like this:
>  $ echo 1 > /sys/block/nvme0n1/queue/reqstat
> with output reading like this:
>  $ cat /sys/block/nvme0n1/queue/stat
>  read: 0 0 0 8278016 14270464 29323264 120107008 2069282816
>  read reqs: 0 0 0 2021 1531 1377 3229 3627
>  write: 4096 0 3072 10903552 9244672 6258688 16584704 2228011008
>  write reqs: 8 0 1 2662 898 311 375 4972
>  discard: 0 0 0 5242880 5472256 3809280 136880128 830554112
>  discard reqs: 0 0 0 1280 515 196 4150 3717
> 
> Latency stats are enabled like this:
>  $ echo 1 > /sys/block/nvme0n1/queue/latstat
> with output reading like this:
>  $  cat /sys/block/nvme0n1/queue/latency
>  read: 0 0 0 0 4 101 677 5146 1162 2654 1933 832 657 52 8 0 3 2 3 2 0 0 0 0 0 0 0 0 0 0 0 0
>  write: 0 0 0 79 2564 2641 8087 6226 1580 4052 498 332 385 365 382 279 323 166 109 119 188 267 0 0 0 0 0 0 0 0 0 0
>  discard: 0 0 0 0 0 0 0 17709 698 15 0 1 0 0 3 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
> 
> Cheers,
> Jes
> 
> 
> Jes Sorensen (7):
>   block: keep track of per-device io sizes in stats
>   block: Use blk-stat infrastructure to collect per queue request stats
>   Export block request stats to sysfs
>   Expand block stats to export number of requests per bucket
>   blk-mq: Only allocate request stat data when it is enabled
>   blk-stat: Make bucket function take latency as an additional argument
>   block: Introduce blk-mq latency stats
> 
>  block/blk-iolatency.c     |   2 +-
>  block/blk-mq.c            | 110 ++++++++++++++++++++-
>  block/blk-stat.c          |  18 ++--
>  block/blk-stat.h          |  12 ++-
>  block/blk-sysfs.c         | 195 ++++++++++++++++++++++++++++++++++++++
>  block/blk-wbt.c           |   2 +-
>  include/linux/blk_types.h |   1 +
>  include/linux/blkdev.h    |  13 +++
>  8 files changed, 338 insertions(+), 15 deletions(-)
> 


