ceph-devel.vger.kernel.org archive mirror
* [PATCH 0/4] ceph: add IO size metric support
@ 2021-03-22 12:28 xiubli
  2021-03-22 12:28 ` [PATCH 1/4] ceph: rename the metric helpers xiubli
                   ` (4 more replies)
  0 siblings, 5 replies; 11+ messages in thread
From: xiubli @ 2021-03-22 12:28 UTC
  To: jlayton; +Cc: idryomov, pdonnell, ceph-devel, Xiubo Li

From: Xiubo Li <xiubli@redhat.com>

With this series applied, the debugfs 'metrics' file will show the new
size metrics as follows:

item          total       avg_sz(bytes)   min_sz(bytes)   max_sz(bytes)  total_sz(bytes)
----------------------------------------------------------------------------------------
read          1           10240           10240           10240           10240
write         1           10240           10240           10240           10240
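
The avg_sz column is not stored anywhere; it is derived at read time
from the accumulated counters, as in this excerpt from the
metric_show() hunk in patch 4/4:

	spin_lock(&m->read_metric_lock);
	total = m->total_reads;
	sum_sz = m->read_size_sum;
	avg_sz = total > 0 ? DIV64_U64_ROUND_CLOSEST(sum_sz, total) : 0;
	min_sz = m->read_size_min;
	max_sz = m->read_size_max;
	spin_unlock(&m->read_metric_lock);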



Xiubo Li (4):
  ceph: rename the metric helpers
  ceph: update the __update_latency helper
  ceph: avoid counting the same request twice or more
  ceph: add IO size metrics support

 fs/ceph/addr.c       |  20 +++----
 fs/ceph/debugfs.c    |  49 +++++++++++++----
 fs/ceph/file.c       |  47 ++++++++--------
 fs/ceph/mds_client.c |   2 +-
 fs/ceph/metric.c     | 126 ++++++++++++++++++++++++++++++++-----------
 fs/ceph/metric.h     |  22 +++++---
 6 files changed, 184 insertions(+), 82 deletions(-)

-- 
2.27.0



* [PATCH 1/4] ceph: rename the metric helpers
  2021-03-22 12:28 [PATCH 0/4] ceph: add IO size metric support xiubli
@ 2021-03-22 12:28 ` xiubli
  2021-03-22 12:28 ` [PATCH 2/4] ceph: update the __update_latency helper xiubli
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 11+ messages in thread
From: xiubli @ 2021-03-22 12:28 UTC
  To: jlayton; +Cc: idryomov, pdonnell, ceph-devel, Xiubo Li

From: Xiubo Li <xiubli@redhat.com>

Prepare for the size metrics patches that follow.

URL: https://tracker.ceph.com/issues/49913
Signed-off-by: Xiubo Li <xiubli@redhat.com>
---
 fs/ceph/addr.c       |  8 ++++----
 fs/ceph/debugfs.c    | 12 ++++++------
 fs/ceph/file.c       | 12 ++++++------
 fs/ceph/mds_client.c |  2 +-
 fs/ceph/metric.c     | 24 ++++++++++++------------
 fs/ceph/metric.h     | 12 ++++++------
 6 files changed, 35 insertions(+), 35 deletions(-)

diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index b476133353ae..7c2802758d0e 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -226,7 +226,7 @@ static void finish_netfs_read(struct ceph_osd_request *req)
 	int num_pages;
 	int err = req->r_result;
 
-	ceph_update_read_latency(&fsc->mdsc->metric, req->r_start_latency,
+	ceph_update_read_metrics(&fsc->mdsc->metric, req->r_start_latency,
 				 req->r_end_latency, err);
 
 	dout("%s: result %d subreq->len=%zu i_size=%lld\n", __func__, req->r_result,
@@ -560,7 +560,7 @@ static int writepage_nounlock(struct page *page, struct writeback_control *wbc)
 	if (!err)
 		err = ceph_osdc_wait_request(osdc, req);
 
-	ceph_update_write_latency(&fsc->mdsc->metric, req->r_start_latency,
+	ceph_update_write_metrics(&fsc->mdsc->metric, req->r_start_latency,
 				  req->r_end_latency, err);
 
 	ceph_osdc_put_request(req);
@@ -648,7 +648,7 @@ static void writepages_finish(struct ceph_osd_request *req)
 		ceph_clear_error_write(ci);
 	}
 
-	ceph_update_write_latency(&fsc->mdsc->metric, req->r_start_latency,
+	ceph_update_write_metrics(&fsc->mdsc->metric, req->r_start_latency,
 				  req->r_end_latency, rc);
 
 	/*
@@ -1719,7 +1719,7 @@ int ceph_uninline_data(struct file *filp, struct page *locked_page)
 	if (!err)
 		err = ceph_osdc_wait_request(&fsc->client->osdc, req);
 
-	ceph_update_write_latency(&fsc->mdsc->metric, req->r_start_latency,
+	ceph_update_write_metrics(&fsc->mdsc->metric, req->r_start_latency,
 				  req->r_end_latency, err);
 
 out_put:
diff --git a/fs/ceph/debugfs.c b/fs/ceph/debugfs.c
index 66989c880adb..425f3356332a 100644
--- a/fs/ceph/debugfs.c
+++ b/fs/ceph/debugfs.c
@@ -162,34 +162,34 @@ static int metric_show(struct seq_file *s, void *p)
 	seq_printf(s, "item          total       avg_lat(us)     min_lat(us)     max_lat(us)     stdev(us)\n");
 	seq_printf(s, "-----------------------------------------------------------------------------------\n");
 
-	spin_lock(&m->read_latency_lock);
+	spin_lock(&m->read_metric_lock);
 	total = m->total_reads;
 	sum = m->read_latency_sum;
 	avg = total > 0 ? DIV64_U64_ROUND_CLOSEST(sum, total) : 0;
 	min = m->read_latency_min;
 	max = m->read_latency_max;
 	sq = m->read_latency_sq_sum;
-	spin_unlock(&m->read_latency_lock);
+	spin_unlock(&m->read_metric_lock);
 	CEPH_METRIC_SHOW("read", total, avg, min, max, sq);
 
-	spin_lock(&m->write_latency_lock);
+	spin_lock(&m->write_metric_lock);
 	total = m->total_writes;
 	sum = m->write_latency_sum;
 	avg = total > 0 ? DIV64_U64_ROUND_CLOSEST(sum, total) : 0;
 	min = m->write_latency_min;
 	max = m->write_latency_max;
 	sq = m->write_latency_sq_sum;
-	spin_unlock(&m->write_latency_lock);
+	spin_unlock(&m->write_metric_lock);
 	CEPH_METRIC_SHOW("write", total, avg, min, max, sq);
 
-	spin_lock(&m->metadata_latency_lock);
+	spin_lock(&m->metadata_metric_lock);
 	total = m->total_metadatas;
 	sum = m->metadata_latency_sum;
 	avg = total > 0 ? DIV64_U64_ROUND_CLOSEST(sum, total) : 0;
 	min = m->metadata_latency_min;
 	max = m->metadata_latency_max;
 	sq = m->metadata_latency_sq_sum;
-	spin_unlock(&m->metadata_latency_lock);
+	spin_unlock(&m->metadata_metric_lock);
 	CEPH_METRIC_SHOW("metadata", total, avg, min, max, sq);
 
 	seq_printf(s, "\n");
diff --git a/fs/ceph/file.c b/fs/ceph/file.c
index a6ef1d143308..a27aabcb0e0b 100644
--- a/fs/ceph/file.c
+++ b/fs/ceph/file.c
@@ -895,7 +895,7 @@ static ssize_t ceph_sync_read(struct kiocb *iocb, struct iov_iter *to,
 		if (!ret)
 			ret = ceph_osdc_wait_request(osdc, req);
 
-		ceph_update_read_latency(&fsc->mdsc->metric,
+		ceph_update_read_metrics(&fsc->mdsc->metric,
 					 req->r_start_latency,
 					 req->r_end_latency,
 					 ret);
@@ -1040,10 +1040,10 @@ static void ceph_aio_complete_req(struct ceph_osd_request *req)
 	/* r_start_latency == 0 means the request was not submitted */
 	if (req->r_start_latency) {
 		if (aio_req->write)
-			ceph_update_write_latency(metric, req->r_start_latency,
+			ceph_update_write_metrics(metric, req->r_start_latency,
 						  req->r_end_latency, rc);
 		else
-			ceph_update_read_latency(metric, req->r_start_latency,
+			ceph_update_read_metrics(metric, req->r_start_latency,
 						 req->r_end_latency, rc);
 	}
 
@@ -1293,10 +1293,10 @@ ceph_direct_read_write(struct kiocb *iocb, struct iov_iter *iter,
 			ret = ceph_osdc_wait_request(&fsc->client->osdc, req);
 
 		if (write)
-			ceph_update_write_latency(metric, req->r_start_latency,
+			ceph_update_write_metrics(metric, req->r_start_latency,
 						  req->r_end_latency, ret);
 		else
-			ceph_update_read_latency(metric, req->r_start_latency,
+			ceph_update_read_metrics(metric, req->r_start_latency,
 						 req->r_end_latency, ret);
 
 		size = i_size_read(inode);
@@ -1470,7 +1470,7 @@ ceph_sync_write(struct kiocb *iocb, struct iov_iter *from, loff_t pos,
 		if (!ret)
 			ret = ceph_osdc_wait_request(&fsc->client->osdc, req);
 
-		ceph_update_write_latency(&fsc->mdsc->metric, req->r_start_latency,
+		ceph_update_write_metrics(&fsc->mdsc->metric, req->r_start_latency,
 					  req->r_end_latency, ret);
 out:
 		ceph_osdc_put_request(req);
diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
index d87bd852ed96..73ecb7d128c9 100644
--- a/fs/ceph/mds_client.c
+++ b/fs/ceph/mds_client.c
@@ -3306,7 +3306,7 @@ static void handle_reply(struct ceph_mds_session *session, struct ceph_msg *msg)
 	/* kick calling process */
 	complete_request(mdsc, req);
 
-	ceph_update_metadata_latency(&mdsc->metric, req->r_start_latency,
+	ceph_update_metadata_metrics(&mdsc->metric, req->r_start_latency,
 				     req->r_end_latency, err);
 out:
 	ceph_mdsc_put_request(req);
diff --git a/fs/ceph/metric.c b/fs/ceph/metric.c
index 5ec94bd4c1de..75d309f2fb0c 100644
--- a/fs/ceph/metric.c
+++ b/fs/ceph/metric.c
@@ -183,21 +183,21 @@ int ceph_metric_init(struct ceph_client_metric *m)
 	if (ret)
 		goto err_i_caps_mis;
 
-	spin_lock_init(&m->read_latency_lock);
+	spin_lock_init(&m->read_metric_lock);
 	m->read_latency_sq_sum = 0;
 	m->read_latency_min = KTIME_MAX;
 	m->read_latency_max = 0;
 	m->total_reads = 0;
 	m->read_latency_sum = 0;
 
-	spin_lock_init(&m->write_latency_lock);
+	spin_lock_init(&m->write_metric_lock);
 	m->write_latency_sq_sum = 0;
 	m->write_latency_min = KTIME_MAX;
 	m->write_latency_max = 0;
 	m->total_writes = 0;
 	m->write_latency_sum = 0;
 
-	spin_lock_init(&m->metadata_latency_lock);
+	spin_lock_init(&m->metadata_metric_lock);
 	m->metadata_latency_sq_sum = 0;
 	m->metadata_latency_min = KTIME_MAX;
 	m->metadata_latency_max = 0;
@@ -274,7 +274,7 @@ static inline void __update_latency(ktime_t *totalp, ktime_t *lsump,
 	*sq_sump += sq;
 }
 
-void ceph_update_read_latency(struct ceph_client_metric *m,
+void ceph_update_read_metrics(struct ceph_client_metric *m,
 			      ktime_t r_start, ktime_t r_end,
 			      int rc)
 {
@@ -283,14 +283,14 @@ void ceph_update_read_latency(struct ceph_client_metric *m,
 	if (unlikely(rc < 0 && rc != -ENOENT && rc != -ETIMEDOUT))
 		return;
 
-	spin_lock(&m->read_latency_lock);
+	spin_lock(&m->read_metric_lock);
 	__update_latency(&m->total_reads, &m->read_latency_sum,
 			 &m->read_latency_min, &m->read_latency_max,
 			 &m->read_latency_sq_sum, lat);
-	spin_unlock(&m->read_latency_lock);
+	spin_unlock(&m->read_metric_lock);
 }
 
-void ceph_update_write_latency(struct ceph_client_metric *m,
+void ceph_update_write_metrics(struct ceph_client_metric *m,
 			       ktime_t r_start, ktime_t r_end,
 			       int rc)
 {
@@ -299,14 +299,14 @@ void ceph_update_write_latency(struct ceph_client_metric *m,
 	if (unlikely(rc && rc != -ETIMEDOUT))
 		return;
 
-	spin_lock(&m->write_latency_lock);
+	spin_lock(&m->write_metric_lock);
 	__update_latency(&m->total_writes, &m->write_latency_sum,
 			 &m->write_latency_min, &m->write_latency_max,
 			 &m->write_latency_sq_sum, lat);
-	spin_unlock(&m->write_latency_lock);
+	spin_unlock(&m->write_metric_lock);
 }
 
-void ceph_update_metadata_latency(struct ceph_client_metric *m,
+void ceph_update_metadata_metrics(struct ceph_client_metric *m,
 				  ktime_t r_start, ktime_t r_end,
 				  int rc)
 {
@@ -315,9 +315,9 @@ void ceph_update_metadata_latency(struct ceph_client_metric *m,
 	if (unlikely(rc && rc != -ENOENT))
 		return;
 
-	spin_lock(&m->metadata_latency_lock);
+	spin_lock(&m->metadata_metric_lock);
 	__update_latency(&m->total_metadatas, &m->metadata_latency_sum,
 			 &m->metadata_latency_min, &m->metadata_latency_max,
 			 &m->metadata_latency_sq_sum, lat);
-	spin_unlock(&m->metadata_latency_lock);
+	spin_unlock(&m->metadata_metric_lock);
 }
diff --git a/fs/ceph/metric.h b/fs/ceph/metric.h
index af6038ff39d4..57b5f0ec38be 100644
--- a/fs/ceph/metric.h
+++ b/fs/ceph/metric.h
@@ -108,21 +108,21 @@ struct ceph_client_metric {
 	struct percpu_counter i_caps_hit;
 	struct percpu_counter i_caps_mis;
 
-	spinlock_t read_latency_lock;
+	spinlock_t read_metric_lock;
 	u64 total_reads;
 	ktime_t read_latency_sum;
 	ktime_t read_latency_sq_sum;
 	ktime_t read_latency_min;
 	ktime_t read_latency_max;
 
-	spinlock_t write_latency_lock;
+	spinlock_t write_metric_lock;
 	u64 total_writes;
 	ktime_t write_latency_sum;
 	ktime_t write_latency_sq_sum;
 	ktime_t write_latency_min;
 	ktime_t write_latency_max;
 
-	spinlock_t metadata_latency_lock;
+	spinlock_t metadata_metric_lock;
 	u64 total_metadatas;
 	ktime_t metadata_latency_sum;
 	ktime_t metadata_latency_sq_sum;
@@ -162,13 +162,13 @@ static inline void ceph_update_cap_mis(struct ceph_client_metric *m)
 	percpu_counter_inc(&m->i_caps_mis);
 }
 
-extern void ceph_update_read_latency(struct ceph_client_metric *m,
+extern void ceph_update_read_metrics(struct ceph_client_metric *m,
 				     ktime_t r_start, ktime_t r_end,
 				     int rc);
-extern void ceph_update_write_latency(struct ceph_client_metric *m,
+extern void ceph_update_write_metrics(struct ceph_client_metric *m,
 				      ktime_t r_start, ktime_t r_end,
 				      int rc);
-extern void ceph_update_metadata_latency(struct ceph_client_metric *m,
+extern void ceph_update_metadata_metrics(struct ceph_client_metric *m,
 				         ktime_t r_start, ktime_t r_end,
 					 int rc);
 #endif /* _FS_CEPH_MDS_METRIC_H */
-- 
2.27.0



* [PATCH 2/4] ceph: update the __update_latency helper
  2021-03-22 12:28 [PATCH 0/4] ceph: add IO size metric support xiubli
  2021-03-22 12:28 ` [PATCH 1/4] ceph: rename the metric helpers xiubli
@ 2021-03-22 12:28 ` xiubli
  2021-03-23 12:34   ` Jeff Layton
  2021-03-22 12:28 ` [PATCH 3/4] ceph: avoid counting the same request twice or more xiubli
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 11+ messages in thread
From: xiubli @ 2021-03-22 12:28 UTC
  To: jlayton; +Cc: idryomov, pdonnell, ceph-devel, Xiubo Li

From: Xiubo Li <xiubli@redhat.com>

Let the __update_latency() helper choose the corresponding members
according to the metric_type.

URL: https://tracker.ceph.com/issues/49913
Signed-off-by: Xiubo Li <xiubli@redhat.com>
---
 fs/ceph/metric.c | 58 +++++++++++++++++++++++++++++++++++-------------
 1 file changed, 42 insertions(+), 16 deletions(-)

diff --git a/fs/ceph/metric.c b/fs/ceph/metric.c
index 75d309f2fb0c..d5560ff99a9d 100644
--- a/fs/ceph/metric.c
+++ b/fs/ceph/metric.c
@@ -249,19 +249,51 @@ void ceph_metric_destroy(struct ceph_client_metric *m)
 		ceph_put_mds_session(m->session);
 }
 
-static inline void __update_latency(ktime_t *totalp, ktime_t *lsump,
-				    ktime_t *min, ktime_t *max,
-				    ktime_t *sq_sump, ktime_t lat)
+typedef enum {
+	CEPH_METRIC_READ,
+	CEPH_METRIC_WRITE,
+	CEPH_METRIC_METADATA,
+} metric_type;
+
+static inline void __update_latency(struct ceph_client_metric *m,
+				    metric_type type, ktime_t lat)
 {
+	ktime_t *totalp, *minp, *maxp, *lsump, *sq_sump;
 	ktime_t total, avg, sq, lsum;
 
+	switch (type) {
+	case CEPH_METRIC_READ:
+		totalp = &m->total_reads;
+		lsump = &m->read_latency_sum;
+		minp = &m->read_latency_min;
+		maxp = &m->read_latency_max;
+		sq_sump = &m->read_latency_sq_sum;
+		break;
+	case CEPH_METRIC_WRITE:
+		totalp = &m->total_writes;
+		lsump = &m->write_latency_sum;
+		minp = &m->write_latency_min;
+		maxp = &m->write_latency_max;
+		sq_sump = &m->write_latency_sq_sum;
+		break;
+	case CEPH_METRIC_METADATA:
+		totalp = &m->total_metadatas;
+		lsump = &m->metadata_latency_sum;
+		minp = &m->metadata_latency_min;
+		maxp = &m->metadata_latency_max;
+		sq_sump = &m->metadata_latency_sq_sum;
+		break;
+	default:
+		return;
+	}
+
 	total = ++(*totalp);
 	lsum = (*lsump += lat);
 
-	if (unlikely(lat < *min))
-		*min = lat;
-	if (unlikely(lat > *max))
-		*max = lat;
+	if (unlikely(lat < *minp))
+		*minp = lat;
+	if (unlikely(lat > *maxp))
+		*maxp = lat;
 
 	if (unlikely(total == 1))
 		return;
@@ -284,9 +316,7 @@ void ceph_update_read_metrics(struct ceph_client_metric *m,
 		return;
 
 	spin_lock(&m->read_metric_lock);
-	__update_latency(&m->total_reads, &m->read_latency_sum,
-			 &m->read_latency_min, &m->read_latency_max,
-			 &m->read_latency_sq_sum, lat);
+	__update_latency(m, CEPH_METRIC_READ, lat);
 	spin_unlock(&m->read_metric_lock);
 }
 
@@ -300,9 +330,7 @@ void ceph_update_write_metrics(struct ceph_client_metric *m,
 		return;
 
 	spin_lock(&m->write_metric_lock);
-	__update_latency(&m->total_writes, &m->write_latency_sum,
-			 &m->write_latency_min, &m->write_latency_max,
-			 &m->write_latency_sq_sum, lat);
+	__update_latency(m, CEPH_METRIC_WRITE, lat);
 	spin_unlock(&m->write_metric_lock);
 }
 
@@ -316,8 +344,6 @@ void ceph_update_metadata_metrics(struct ceph_client_metric *m,
 		return;
 
 	spin_lock(&m->metadata_metric_lock);
-	__update_latency(&m->total_metadatas, &m->metadata_latency_sum,
-			 &m->metadata_latency_min, &m->metadata_latency_max,
-			 &m->metadata_latency_sq_sum, lat);
+	__update_latency(m, CEPH_METRIC_METADATA, lat);
 	spin_unlock(&m->metadata_metric_lock);
 }
-- 
2.27.0



* [PATCH 3/4] ceph: avoid counting the same request twice or more
  2021-03-22 12:28 [PATCH 0/4] ceph: add IO size metric support xiubli
  2021-03-22 12:28 ` [PATCH 1/4] ceph: rename the metric helpers xiubli
  2021-03-22 12:28 ` [PATCH 2/4] ceph: update the __update_latency helper xiubli
@ 2021-03-22 12:28 ` xiubli
  2021-03-22 12:28 ` [PATCH 4/4] ceph: add IO size metrics support xiubli
  2021-03-24 15:06 ` [PATCH 0/4] ceph: add IO size metric support Jeff Layton
  4 siblings, 0 replies; 11+ messages in thread
From: xiubli @ 2021-03-22 12:28 UTC
  To: jlayton; +Cc: idryomov, pdonnell, ceph-devel, Xiubo Li

From: Xiubo Li <xiubli@redhat.com>

If the request will be retried (e.g. in the -EOLDSNAPC case), skip
updating the latency metrics until the final attempt completes, so the
same request is not counted twice or more.

Signed-off-by: Xiubo Li <xiubli@redhat.com>
---
 fs/ceph/file.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/fs/ceph/file.c b/fs/ceph/file.c
index a27aabcb0e0b..31542eac7e59 100644
--- a/fs/ceph/file.c
+++ b/fs/ceph/file.c
@@ -1037,16 +1037,6 @@ static void ceph_aio_complete_req(struct ceph_osd_request *req)
 	dout("ceph_aio_complete_req %p rc %d bytes %u\n",
 	     inode, rc, osd_data->bvec_pos.iter.bi_size);
 
-	/* r_start_latency == 0 means the request was not submitted */
-	if (req->r_start_latency) {
-		if (aio_req->write)
-			ceph_update_write_metrics(metric, req->r_start_latency,
-						  req->r_end_latency, rc);
-		else
-			ceph_update_read_metrics(metric, req->r_start_latency,
-						 req->r_end_latency, rc);
-	}
-
 	if (rc == -EOLDSNAPC) {
 		struct ceph_aio_work *aio_work;
 		BUG_ON(!aio_req->write);
@@ -1089,6 +1079,16 @@ static void ceph_aio_complete_req(struct ceph_osd_request *req)
 		}
 	}
 
+	/* r_start_latency == 0 means the request was not submitted */
+	if (req->r_start_latency) {
+		if (aio_req->write)
+			ceph_update_write_metrics(metric, req->r_start_latency,
+						  req->r_end_latency, rc);
+		else
+			ceph_update_read_metrics(metric, req->r_start_latency,
+						 req->r_end_latency, rc);
+	}
+
 	put_bvecs(osd_data->bvec_pos.bvecs, osd_data->num_bvecs,
 		  aio_req->should_dirty);
 	ceph_osdc_put_request(req);
-- 
2.27.0



* [PATCH 4/4] ceph: add IO size metrics support
  2021-03-22 12:28 [PATCH 0/4] ceph: add IO size metric support xiubli
                   ` (2 preceding siblings ...)
  2021-03-22 12:28 ` [PATCH 3/4] ceph: avoid counting the same request twice or more xiubli
@ 2021-03-22 12:28 ` xiubli
  2021-03-23 12:29   ` Jeff Layton
  2021-03-24 15:06 ` [PATCH 0/4] ceph: add IO size metric support Jeff Layton
  4 siblings, 1 reply; 11+ messages in thread
From: xiubli @ 2021-03-22 12:28 UTC
  To: jlayton; +Cc: idryomov, pdonnell, ceph-devel, Xiubo Li

From: Xiubo Li <xiubli@redhat.com>

This will collect the total size of the IOs and calculate the average
size, and will also track the min/max IO sizes.

The debugfs output will show the size metrics in bytes, leaving it to
userspace applications to convert them to whatever unit they need.

URL: https://tracker.ceph.com/issues/49913
Signed-off-by: Xiubo Li <xiubli@redhat.com>
---
 fs/ceph/addr.c    | 14 ++++++++------
 fs/ceph/debugfs.c | 37 +++++++++++++++++++++++++++++++++----
 fs/ceph/file.c    | 23 +++++++++++------------
 fs/ceph/metric.c  | 44 ++++++++++++++++++++++++++++++++++++++++++--
 fs/ceph/metric.h  | 10 ++++++++--
 5 files changed, 102 insertions(+), 26 deletions(-)

diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index 7c2802758d0e..d8a3624bc81d 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -227,7 +227,7 @@ static void finish_netfs_read(struct ceph_osd_request *req)
 	int err = req->r_result;
 
 	ceph_update_read_metrics(&fsc->mdsc->metric, req->r_start_latency,
-				 req->r_end_latency, err);
+				 req->r_end_latency, osd_data->length, err);
 
 	dout("%s: result %d subreq->len=%zu i_size=%lld\n", __func__, req->r_result,
 	     subreq->len, i_size_read(req->r_inode));
@@ -561,7 +561,7 @@ static int writepage_nounlock(struct page *page, struct writeback_control *wbc)
 		err = ceph_osdc_wait_request(osdc, req);
 
 	ceph_update_write_metrics(&fsc->mdsc->metric, req->r_start_latency,
-				  req->r_end_latency, err);
+				  req->r_end_latency, len, err);
 
 	ceph_osdc_put_request(req);
 	if (err == 0)
@@ -636,6 +636,7 @@ static void writepages_finish(struct ceph_osd_request *req)
 	struct ceph_snap_context *snapc = req->r_snapc;
 	struct address_space *mapping = inode->i_mapping;
 	struct ceph_fs_client *fsc = ceph_inode_to_client(inode);
+	unsigned int len = 0;
 	bool remove_page;
 
 	dout("writepages_finish %p rc %d\n", inode, rc);
@@ -648,9 +649,6 @@ static void writepages_finish(struct ceph_osd_request *req)
 		ceph_clear_error_write(ci);
 	}
 
-	ceph_update_write_metrics(&fsc->mdsc->metric, req->r_start_latency,
-				  req->r_end_latency, rc);
-
 	/*
 	 * We lost the cache cap, need to truncate the page before
 	 * it is unlocked, otherwise we'd truncate it later in the
@@ -667,6 +665,7 @@ static void writepages_finish(struct ceph_osd_request *req)
 
 		osd_data = osd_req_op_extent_osd_data(req, i);
 		BUG_ON(osd_data->type != CEPH_OSD_DATA_TYPE_PAGES);
+		len += osd_data->length;
 		num_pages = calc_pages_for((u64)osd_data->alignment,
 					   (u64)osd_data->length);
 		total_pages += num_pages;
@@ -699,6 +698,9 @@ static void writepages_finish(struct ceph_osd_request *req)
 		release_pages(osd_data->pages, num_pages);
 	}
 
+	ceph_update_write_metrics(&fsc->mdsc->metric, req->r_start_latency,
+				  req->r_end_latency, len, rc);
+
 	ceph_put_wrbuffer_cap_refs(ci, total_pages, snapc);
 
 	osd_data = osd_req_op_extent_osd_data(req, 0);
@@ -1720,7 +1722,7 @@ int ceph_uninline_data(struct file *filp, struct page *locked_page)
 		err = ceph_osdc_wait_request(&fsc->client->osdc, req);
 
 	ceph_update_write_metrics(&fsc->mdsc->metric, req->r_start_latency,
-				  req->r_end_latency, err);
+				  req->r_end_latency, len, err);
 
 out_put:
 	ceph_osdc_put_request(req);
diff --git a/fs/ceph/debugfs.c b/fs/ceph/debugfs.c
index 425f3356332a..38b78b45811f 100644
--- a/fs/ceph/debugfs.c
+++ b/fs/ceph/debugfs.c
@@ -127,7 +127,7 @@ static int mdsc_show(struct seq_file *s, void *p)
 	return 0;
 }
 
-#define CEPH_METRIC_SHOW(name, total, avg, min, max, sq) {		\
+#define CEPH_LAT_METRIC_SHOW(name, total, avg, min, max, sq) {		\
 	s64 _total, _avg, _min, _max, _sq, _st;				\
 	_avg = ktime_to_us(avg);					\
 	_min = ktime_to_us(min == KTIME_MAX ? 0 : min);			\
@@ -140,6 +140,12 @@ static int mdsc_show(struct seq_file *s, void *p)
 		   name, total, _avg, _min, _max, _st);			\
 }
 
+#define CEPH_SZ_METRIC_SHOW(name, total, avg, min, max, sum) {		\
+	u64 _min = min == U64_MAX ? 0 : min;				\
+	seq_printf(s, "%-14s%-12lld%-16llu%-16llu%-16llu%llu\n",	\
+		   name, total, avg, _min, max, sum);			\
+}
+
 static int metric_show(struct seq_file *s, void *p)
 {
 	struct ceph_fs_client *fsc = s->private;
@@ -147,6 +153,7 @@ static int metric_show(struct seq_file *s, void *p)
 	struct ceph_client_metric *m = &mdsc->metric;
 	int nr_caps = 0;
 	s64 total, sum, avg, min, max, sq;
+	u64 sum_sz, avg_sz, min_sz, max_sz;
 
 	sum = percpu_counter_sum(&m->total_inodes);
 	seq_printf(s, "item                               total\n");
@@ -170,7 +177,7 @@ static int metric_show(struct seq_file *s, void *p)
 	max = m->read_latency_max;
 	sq = m->read_latency_sq_sum;
 	spin_unlock(&m->read_metric_lock);
-	CEPH_METRIC_SHOW("read", total, avg, min, max, sq);
+	CEPH_LAT_METRIC_SHOW("read", total, avg, min, max, sq);
 
 	spin_lock(&m->write_metric_lock);
 	total = m->total_writes;
@@ -180,7 +187,7 @@ static int metric_show(struct seq_file *s, void *p)
 	max = m->write_latency_max;
 	sq = m->write_latency_sq_sum;
 	spin_unlock(&m->write_metric_lock);
-	CEPH_METRIC_SHOW("write", total, avg, min, max, sq);
+	CEPH_LAT_METRIC_SHOW("write", total, avg, min, max, sq);
 
 	spin_lock(&m->metadata_metric_lock);
 	total = m->total_metadatas;
@@ -190,7 +197,29 @@ static int metric_show(struct seq_file *s, void *p)
 	max = m->metadata_latency_max;
 	sq = m->metadata_latency_sq_sum;
 	spin_unlock(&m->metadata_metric_lock);
-	CEPH_METRIC_SHOW("metadata", total, avg, min, max, sq);
+	CEPH_LAT_METRIC_SHOW("metadata", total, avg, min, max, sq);
+
+	seq_printf(s, "\n");
+	seq_printf(s, "item          total       avg_sz(bytes)   min_sz(bytes)   max_sz(bytes)  total_sz(bytes)\n");
+	seq_printf(s, "----------------------------------------------------------------------------------------\n");
+
+	spin_lock(&m->read_metric_lock);
+	total = m->total_reads;
+	sum_sz = m->read_size_sum;
+	avg_sz = total > 0 ? DIV64_U64_ROUND_CLOSEST(sum_sz, total) : 0;
+	min_sz = m->read_size_min;
+	max_sz = m->read_size_max;
+	spin_unlock(&m->read_metric_lock);
+	CEPH_SZ_METRIC_SHOW("read", total, avg_sz, min_sz, max_sz, sum_sz);
+
+	spin_lock(&m->write_metric_lock);
+	total = m->total_writes;
+	sum_sz = m->write_size_sum;
+	avg_sz = total > 0 ? DIV64_U64_ROUND_CLOSEST(sum_sz, total) : 0;
+	min_sz = m->write_size_min;
+	max_sz = m->write_size_max;
+	spin_unlock(&m->write_metric_lock);
+	CEPH_SZ_METRIC_SHOW("write", total, avg_sz, min_sz, max_sz, sum_sz);
 
 	seq_printf(s, "\n");
 	seq_printf(s, "item          total           miss            hit\n");
diff --git a/fs/ceph/file.c b/fs/ceph/file.c
index 31542eac7e59..db43d2d013b9 100644
--- a/fs/ceph/file.c
+++ b/fs/ceph/file.c
@@ -898,7 +898,7 @@ static ssize_t ceph_sync_read(struct kiocb *iocb, struct iov_iter *to,
 		ceph_update_read_metrics(&fsc->mdsc->metric,
 					 req->r_start_latency,
 					 req->r_end_latency,
-					 ret);
+					 len, ret);
 
 		ceph_osdc_put_request(req);
 
@@ -1030,12 +1030,12 @@ static void ceph_aio_complete_req(struct ceph_osd_request *req)
 	struct ceph_aio_request *aio_req = req->r_priv;
 	struct ceph_osd_data *osd_data = osd_req_op_extent_osd_data(req, 0);
 	struct ceph_client_metric *metric = &ceph_sb_to_mdsc(inode->i_sb)->metric;
+	unsigned int len = osd_data->bvec_pos.iter.bi_size;
 
 	BUG_ON(osd_data->type != CEPH_OSD_DATA_TYPE_BVECS);
 	BUG_ON(!osd_data->num_bvecs);
 
-	dout("ceph_aio_complete_req %p rc %d bytes %u\n",
-	     inode, rc, osd_data->bvec_pos.iter.bi_size);
+	dout("ceph_aio_complete_req %p rc %d bytes %u\n", inode, rc, len);
 
 	if (rc == -EOLDSNAPC) {
 		struct ceph_aio_work *aio_work;
@@ -1053,9 +1053,9 @@ static void ceph_aio_complete_req(struct ceph_osd_request *req)
 	} else if (!aio_req->write) {
 		if (rc == -ENOENT)
 			rc = 0;
-		if (rc >= 0 && osd_data->bvec_pos.iter.bi_size > rc) {
+		if (rc >= 0 && len > rc) {
 			struct iov_iter i;
-			int zlen = osd_data->bvec_pos.iter.bi_size - rc;
+			int zlen = len - rc;
 
 			/*
 			 * If read is satisfied by single OSD request,
@@ -1072,8 +1072,7 @@ static void ceph_aio_complete_req(struct ceph_osd_request *req)
 			}
 
 			iov_iter_bvec(&i, READ, osd_data->bvec_pos.bvecs,
-				      osd_data->num_bvecs,
-				      osd_data->bvec_pos.iter.bi_size);
+				      osd_data->num_bvecs, len);
 			iov_iter_advance(&i, rc);
 			iov_iter_zero(zlen, &i);
 		}
@@ -1083,10 +1082,10 @@ static void ceph_aio_complete_req(struct ceph_osd_request *req)
 	if (req->r_start_latency) {
 		if (aio_req->write)
 			ceph_update_write_metrics(metric, req->r_start_latency,
-						  req->r_end_latency, rc);
+						  req->r_end_latency, len, rc);
 		else
 			ceph_update_read_metrics(metric, req->r_start_latency,
-						 req->r_end_latency, rc);
+						 req->r_end_latency, len, rc);
 	}
 
 	put_bvecs(osd_data->bvec_pos.bvecs, osd_data->num_bvecs,
@@ -1294,10 +1293,10 @@ ceph_direct_read_write(struct kiocb *iocb, struct iov_iter *iter,
 
 		if (write)
 			ceph_update_write_metrics(metric, req->r_start_latency,
-						  req->r_end_latency, ret);
+						  req->r_end_latency, len, ret);
 		else
 			ceph_update_read_metrics(metric, req->r_start_latency,
-						 req->r_end_latency, ret);
+						 req->r_end_latency, len, ret);
 
 		size = i_size_read(inode);
 		if (!write) {
@@ -1471,7 +1470,7 @@ ceph_sync_write(struct kiocb *iocb, struct iov_iter *from, loff_t pos,
 			ret = ceph_osdc_wait_request(&fsc->client->osdc, req);
 
 		ceph_update_write_metrics(&fsc->mdsc->metric, req->r_start_latency,
-					  req->r_end_latency, ret);
+					  req->r_end_latency, len, ret);
 out:
 		ceph_osdc_put_request(req);
 		if (ret != 0) {
diff --git a/fs/ceph/metric.c b/fs/ceph/metric.c
index d5560ff99a9d..ff3c9d5cf9ff 100644
--- a/fs/ceph/metric.c
+++ b/fs/ceph/metric.c
@@ -189,6 +189,9 @@ int ceph_metric_init(struct ceph_client_metric *m)
 	m->read_latency_max = 0;
 	m->total_reads = 0;
 	m->read_latency_sum = 0;
+	m->read_size_min = U64_MAX;
+	m->read_size_max = 0;
+	m->read_size_sum = 0;
 
 	spin_lock_init(&m->write_metric_lock);
 	m->write_latency_sq_sum = 0;
@@ -196,6 +199,9 @@ int ceph_metric_init(struct ceph_client_metric *m)
 	m->write_latency_max = 0;
 	m->total_writes = 0;
 	m->write_latency_sum = 0;
+	m->write_size_min = U64_MAX;
+	m->write_size_max = 0;
+	m->write_size_sum = 0;
 
 	spin_lock_init(&m->metadata_metric_lock);
 	m->metadata_latency_sq_sum = 0;
@@ -306,9 +312,41 @@ static inline void __update_latency(struct ceph_client_metric *m,
 	*sq_sump += sq;
 }
 
+static inline void __update_size(struct ceph_client_metric *m,
+				 metric_type type, unsigned int size)
+{
+	ktime_t total;
+	u64 *minp, *maxp, *sump;
+
+	switch (type) {
+	case CEPH_METRIC_READ:
+		total = m->total_reads;
+		sump = &m->read_size_sum;
+		minp = &m->read_size_min;
+		maxp = &m->read_size_max;
+		break;
+	case CEPH_METRIC_WRITE:
+		total = m->total_writes;
+		sump = &m->write_size_sum;
+		minp = &m->write_size_min;
+		maxp = &m->write_size_max;
+		break;
+	case CEPH_METRIC_METADATA:
+	default:
+		return;
+	}
+
+	*sump += size;
+
+	if (unlikely(size < *minp))
+		*minp = size;
+	if (unlikely(size > *maxp))
+		*maxp = size;
+}
+
 void ceph_update_read_metrics(struct ceph_client_metric *m,
 			      ktime_t r_start, ktime_t r_end,
-			      int rc)
+			      unsigned int size, int rc)
 {
 	ktime_t lat = ktime_sub(r_end, r_start);
 
@@ -317,12 +355,13 @@ void ceph_update_read_metrics(struct ceph_client_metric *m,
 
 	spin_lock(&m->read_metric_lock);
 	__update_latency(m, CEPH_METRIC_READ, lat);
+	__update_size(m, CEPH_METRIC_READ, size);
 	spin_unlock(&m->read_metric_lock);
 }
 
 void ceph_update_write_metrics(struct ceph_client_metric *m,
 			       ktime_t r_start, ktime_t r_end,
-			       int rc)
+			       unsigned int size, int rc)
 {
 	ktime_t lat = ktime_sub(r_end, r_start);
 
@@ -331,6 +370,7 @@ void ceph_update_write_metrics(struct ceph_client_metric *m,
 
 	spin_lock(&m->write_metric_lock);
 	__update_latency(m, CEPH_METRIC_WRITE, lat);
+	__update_size(m, CEPH_METRIC_WRITE, size);
 	spin_unlock(&m->write_metric_lock);
 }
 
diff --git a/fs/ceph/metric.h b/fs/ceph/metric.h
index 57b5f0ec38be..64651b6ac886 100644
--- a/fs/ceph/metric.h
+++ b/fs/ceph/metric.h
@@ -110,6 +110,9 @@ struct ceph_client_metric {
 
 	spinlock_t read_metric_lock;
 	u64 total_reads;
+	u64 read_size_sum;
+	u64 read_size_min;
+	u64 read_size_max;
 	ktime_t read_latency_sum;
 	ktime_t read_latency_sq_sum;
 	ktime_t read_latency_min;
@@ -117,6 +120,9 @@ struct ceph_client_metric {
 
 	spinlock_t write_metric_lock;
 	u64 total_writes;
+	u64 write_size_sum;
+	u64 write_size_min;
+	u64 write_size_max;
 	ktime_t write_latency_sum;
 	ktime_t write_latency_sq_sum;
 	ktime_t write_latency_min;
@@ -164,10 +170,10 @@ static inline void ceph_update_cap_mis(struct ceph_client_metric *m)
 
 extern void ceph_update_read_metrics(struct ceph_client_metric *m,
 				     ktime_t r_start, ktime_t r_end,
-				     int rc);
+				     unsigned int size, int rc);
 extern void ceph_update_write_metrics(struct ceph_client_metric *m,
 				      ktime_t r_start, ktime_t r_end,
-				      int rc);
+				      unsigned int size, int rc);
 extern void ceph_update_metadata_metrics(struct ceph_client_metric *m,
 				         ktime_t r_start, ktime_t r_end,
 					 int rc);
-- 
2.27.0



* Re: [PATCH 4/4] ceph: add IO size metrics support
  2021-03-22 12:28 ` [PATCH 4/4] ceph: add IO size metrics support xiubli
@ 2021-03-23 12:29   ` Jeff Layton
  2021-03-23 13:17     ` Xiubo Li
  0 siblings, 1 reply; 11+ messages in thread
From: Jeff Layton @ 2021-03-23 12:29 UTC
  To: xiubli; +Cc: idryomov, pdonnell, ceph-devel

On Mon, 2021-03-22 at 20:28 +0800, xiubli@redhat.com wrote:
> From: Xiubo Li <xiubli@redhat.com>
> 
> This will collect the total size of the IOs and calculate the average
> size, and will also track the min/max IO sizes.
> 
> The debugfs output will show the size metrics in bytes, leaving it to
> userspace applications to convert them to whatever unit they need.
> 
> URL: https://tracker.ceph.com/issues/49913
> Signed-off-by: Xiubo Li <xiubli@redhat.com>
> ---
>  fs/ceph/addr.c    | 14 ++++++++------
>  fs/ceph/debugfs.c | 37 +++++++++++++++++++++++++++++++++----
>  fs/ceph/file.c    | 23 +++++++++++------------
>  fs/ceph/metric.c  | 44 ++++++++++++++++++++++++++++++++++++++++++--
>  fs/ceph/metric.h  | 10 ++++++++--
>  5 files changed, 102 insertions(+), 26 deletions(-)
> 
> diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
> index 7c2802758d0e..d8a3624bc81d 100644
> --- a/fs/ceph/addr.c
> +++ b/fs/ceph/addr.c
> @@ -227,7 +227,7 @@ static void finish_netfs_read(struct ceph_osd_request *req)
>  	int err = req->r_result;
>  
>  	ceph_update_read_metrics(&fsc->mdsc->metric, req->r_start_latency,
> -				 req->r_end_latency, err);
> +				 req->r_end_latency, osd_data->length, err);
>  
>  	dout("%s: result %d subreq->len=%zu i_size=%lld\n", __func__, req->r_result,
>  	     subreq->len, i_size_read(req->r_inode));
> @@ -561,7 +561,7 @@ static int writepage_nounlock(struct page *page, struct writeback_control *wbc)
>  		err = ceph_osdc_wait_request(osdc, req);
>  
>  	ceph_update_write_metrics(&fsc->mdsc->metric, req->r_start_latency,
> -				  req->r_end_latency, err);
> +				  req->r_end_latency, len, err);
>  
>  	ceph_osdc_put_request(req);
>  	if (err == 0)
> @@ -636,6 +636,7 @@ static void writepages_finish(struct ceph_osd_request *req)
>  	struct ceph_snap_context *snapc = req->r_snapc;
>  	struct address_space *mapping = inode->i_mapping;
>  	struct ceph_fs_client *fsc = ceph_inode_to_client(inode);
> +	unsigned int len = 0;
>  	bool remove_page;
>  
>  	dout("writepages_finish %p rc %d\n", inode, rc);
> @@ -648,9 +649,6 @@ static void writepages_finish(struct ceph_osd_request *req)
>  		ceph_clear_error_write(ci);
>  	}
>  
> -	ceph_update_write_metrics(&fsc->mdsc->metric, req->r_start_latency,
> -				  req->r_end_latency, rc);
> -
>  	/*
>  	 * We lost the cache cap, need to truncate the page before
>  	 * it is unlocked, otherwise we'd truncate it later in the
> @@ -667,6 +665,7 @@ static void writepages_finish(struct ceph_osd_request *req)
>  
>  		osd_data = osd_req_op_extent_osd_data(req, i);
>  		BUG_ON(osd_data->type != CEPH_OSD_DATA_TYPE_PAGES);
> +		len += osd_data->length;
>  		num_pages = calc_pages_for((u64)osd_data->alignment,
>  					   (u64)osd_data->length);
>  		total_pages += num_pages;
> @@ -699,6 +698,9 @@ static void writepages_finish(struct ceph_osd_request *req)
>  		release_pages(osd_data->pages, num_pages);
>  	}
>  
> +	ceph_update_write_metrics(&fsc->mdsc->metric, req->r_start_latency,
> +				  req->r_end_latency, len, rc);
> +
>  	ceph_put_wrbuffer_cap_refs(ci, total_pages, snapc);
>  
>  	osd_data = osd_req_op_extent_osd_data(req, 0);
> @@ -1720,7 +1722,7 @@ int ceph_uninline_data(struct file *filp, struct page *locked_page)
>  		err = ceph_osdc_wait_request(&fsc->client->osdc, req);
>  
>  	ceph_update_write_metrics(&fsc->mdsc->metric, req->r_start_latency,
> -				  req->r_end_latency, err);
> +				  req->r_end_latency, len, err);
>  
>  out_put:
>  	ceph_osdc_put_request(req);
> diff --git a/fs/ceph/debugfs.c b/fs/ceph/debugfs.c
> index 425f3356332a..38b78b45811f 100644
> --- a/fs/ceph/debugfs.c
> +++ b/fs/ceph/debugfs.c
> @@ -127,7 +127,7 @@ static int mdsc_show(struct seq_file *s, void *p)
>  	return 0;
>  }
>  
> -#define CEPH_METRIC_SHOW(name, total, avg, min, max, sq) {		\
> +#define CEPH_LAT_METRIC_SHOW(name, total, avg, min, max, sq) {		\
>  	s64 _total, _avg, _min, _max, _sq, _st;				\
>  	_avg = ktime_to_us(avg);					\
>  	_min = ktime_to_us(min == KTIME_MAX ? 0 : min);			\
> @@ -140,6 +140,12 @@ static int mdsc_show(struct seq_file *s, void *p)
>  		   name, total, _avg, _min, _max, _st);			\
>  }
>  
> +#define CEPH_SZ_METRIC_SHOW(name, total, avg, min, max, sum) {		\
> +	u64 _min = min == U64_MAX ? 0 : min;				\
> +	seq_printf(s, "%-14s%-12lld%-16llu%-16llu%-16llu%llu\n",	\
> +		   name, total, avg, _min, max, sum);			\
> +}
> +
>  static int metric_show(struct seq_file *s, void *p)
>  {
>  	struct ceph_fs_client *fsc = s->private;
> @@ -147,6 +153,7 @@ static int metric_show(struct seq_file *s, void *p)
>  	struct ceph_client_metric *m = &mdsc->metric;
>  	int nr_caps = 0;
>  	s64 total, sum, avg, min, max, sq;
> +	u64 sum_sz, avg_sz, min_sz, max_sz;
>  
>  	sum = percpu_counter_sum(&m->total_inodes);
>  	seq_printf(s, "item                               total\n");
> @@ -170,7 +177,7 @@ static int metric_show(struct seq_file *s, void *p)
>  	max = m->read_latency_max;
>  	sq = m->read_latency_sq_sum;
>  	spin_unlock(&m->read_metric_lock);
> -	CEPH_METRIC_SHOW("read", total, avg, min, max, sq);
> +	CEPH_LAT_METRIC_SHOW("read", total, avg, min, max, sq);
>  
>  	spin_lock(&m->write_metric_lock);
>  	total = m->total_writes;
> @@ -180,7 +187,7 @@ static int metric_show(struct seq_file *s, void *p)
>  	max = m->write_latency_max;
>  	sq = m->write_latency_sq_sum;
>  	spin_unlock(&m->write_metric_lock);
> -	CEPH_METRIC_SHOW("write", total, avg, min, max, sq);
> +	CEPH_LAT_METRIC_SHOW("write", total, avg, min, max, sq);
>  
>  	spin_lock(&m->metadata_metric_lock);
>  	total = m->total_metadatas;
> @@ -190,7 +197,29 @@ static int metric_show(struct seq_file *s, void *p)
>  	max = m->metadata_latency_max;
>  	sq = m->metadata_latency_sq_sum;
>  	spin_unlock(&m->metadata_metric_lock);
> -	CEPH_METRIC_SHOW("metadata", total, avg, min, max, sq);
> +	CEPH_LAT_METRIC_SHOW("metadata", total, avg, min, max, sq);
> +
> +	seq_printf(s, "\n");
> +	seq_printf(s, "item          total       avg_sz(bytes)   min_sz(bytes)   max_sz(bytes)  total_sz(bytes)\n");
> +	seq_printf(s, "----------------------------------------------------------------------------------------\n");
> +
> +	spin_lock(&m->read_metric_lock);
> +	total = m->total_reads;
> +	sum_sz = m->read_size_sum;
> +	avg_sz = total > 0 ? DIV64_U64_ROUND_CLOSEST(sum_sz, total) : 0;
> +	min_sz = m->read_size_min;
> +	max_sz = m->read_size_max;
> +	spin_unlock(&m->read_metric_lock);
> +	CEPH_SZ_METRIC_SHOW("read", total, avg_sz, min_sz, max_sz, sum_sz);
> +
> +	spin_lock(&m->write_metric_lock);
> +	total = m->total_writes;
> +	sum_sz = m->write_size_sum;
> +	avg_sz = total > 0 ? DIV64_U64_ROUND_CLOSEST(sum_sz, total) : 0;
> +	min_sz = m->write_size_min;
> +	max_sz = m->write_size_max;
> +	spin_unlock(&m->write_metric_lock);
> +	CEPH_SZ_METRIC_SHOW("write", total, avg_sz, min_sz, max_sz, sum_sz);
>  
>  	seq_printf(s, "\n");
>  	seq_printf(s, "item          total           miss            hit\n");
> diff --git a/fs/ceph/file.c b/fs/ceph/file.c
> index 31542eac7e59..db43d2d013b9 100644
> --- a/fs/ceph/file.c
> +++ b/fs/ceph/file.c
> @@ -898,7 +898,7 @@ static ssize_t ceph_sync_read(struct kiocb *iocb, struct iov_iter *to,
>  		ceph_update_read_metrics(&fsc->mdsc->metric,
>  					 req->r_start_latency,
>  					 req->r_end_latency,
> -					 ret);
> +					 len, ret);
>  
>  		ceph_osdc_put_request(req);
>  
> @@ -1030,12 +1030,12 @@ static void ceph_aio_complete_req(struct ceph_osd_request *req)
>  	struct ceph_aio_request *aio_req = req->r_priv;
>  	struct ceph_osd_data *osd_data = osd_req_op_extent_osd_data(req, 0);
>  	struct ceph_client_metric *metric = &ceph_sb_to_mdsc(inode->i_sb)->metric;
> +	unsigned int len = osd_data->bvec_pos.iter.bi_size;
>  
>  	BUG_ON(osd_data->type != CEPH_OSD_DATA_TYPE_BVECS);
>  	BUG_ON(!osd_data->num_bvecs);
>  
> -	dout("ceph_aio_complete_req %p rc %d bytes %u\n",
> -	     inode, rc, osd_data->bvec_pos.iter.bi_size);
> +	dout("ceph_aio_complete_req %p rc %d bytes %u\n", inode, rc, len);
>  
>  	if (rc == -EOLDSNAPC) {
>  		struct ceph_aio_work *aio_work;
> @@ -1053,9 +1053,9 @@ static void ceph_aio_complete_req(struct ceph_osd_request *req)
>  	} else if (!aio_req->write) {
>  		if (rc == -ENOENT)
>  			rc = 0;
> -		if (rc >= 0 && osd_data->bvec_pos.iter.bi_size > rc) {
> +		if (rc >= 0 && len > rc) {
>  			struct iov_iter i;
> -			int zlen = osd_data->bvec_pos.iter.bi_size - rc;
> +			int zlen = len - rc;
>  
>  			/*
>  			 * If read is satisfied by single OSD request,
> @@ -1072,8 +1072,7 @@ static void ceph_aio_complete_req(struct ceph_osd_request *req)
>  			}
>  
>  			iov_iter_bvec(&i, READ, osd_data->bvec_pos.bvecs,
> -				      osd_data->num_bvecs,
> -				      osd_data->bvec_pos.iter.bi_size);
> +				      osd_data->num_bvecs, len);
>  			iov_iter_advance(&i, rc);
>  			iov_iter_zero(zlen, &i);
>  		}
> @@ -1083,10 +1082,10 @@ static void ceph_aio_complete_req(struct ceph_osd_request *req)
>  	if (req->r_start_latency) {
>  		if (aio_req->write)
>  			ceph_update_write_metrics(metric, req->r_start_latency,
> -						  req->r_end_latency, rc);
> +						  req->r_end_latency, len, rc);
>  		else
>  			ceph_update_read_metrics(metric, req->r_start_latency,
> -						 req->r_end_latency, rc);
> +						 req->r_end_latency, len, rc);
>  	}
>  
>  	put_bvecs(osd_data->bvec_pos.bvecs, osd_data->num_bvecs,
> @@ -1294,10 +1293,10 @@ ceph_direct_read_write(struct kiocb *iocb, struct iov_iter *iter,
>  
>  		if (write)
>  			ceph_update_write_metrics(metric, req->r_start_latency,
> -						  req->r_end_latency, ret);
> +						  req->r_end_latency, len, ret);
>  		else
>  			ceph_update_read_metrics(metric, req->r_start_latency,
> -						 req->r_end_latency, ret);
> +						 req->r_end_latency, len, ret);
>  
>  		size = i_size_read(inode);
>  		if (!write) {
> @@ -1471,7 +1470,7 @@ ceph_sync_write(struct kiocb *iocb, struct iov_iter *from, loff_t pos,
>  			ret = ceph_osdc_wait_request(&fsc->client->osdc, req);
>  
>  		ceph_update_write_metrics(&fsc->mdsc->metric, req->r_start_latency,
> -					  req->r_end_latency, ret);
> +					  req->r_end_latency, len, ret);
>  out:
>  		ceph_osdc_put_request(req);
>  		if (ret != 0) {
> diff --git a/fs/ceph/metric.c b/fs/ceph/metric.c
> index d5560ff99a9d..ff3c9d5cf9ff 100644
> --- a/fs/ceph/metric.c
> +++ b/fs/ceph/metric.c
> @@ -189,6 +189,9 @@ int ceph_metric_init(struct ceph_client_metric *m)
>  	m->read_latency_max = 0;
>  	m->total_reads = 0;
>  	m->read_latency_sum = 0;
> +	m->read_size_min = U64_MAX;
> +	m->read_size_max = 0;
> +	m->read_size_sum = 0;
>  
>  	spin_lock_init(&m->write_metric_lock);
>  	m->write_latency_sq_sum = 0;
> @@ -196,6 +199,9 @@ int ceph_metric_init(struct ceph_client_metric *m)
>  	m->write_latency_max = 0;
>  	m->total_writes = 0;
>  	m->write_latency_sum = 0;
> +	m->write_size_min = U64_MAX;
> +	m->write_size_max = 0;
> +	m->write_size_sum = 0;
>  
>  	spin_lock_init(&m->metadata_metric_lock);
>  	m->metadata_latency_sq_sum = 0;
> @@ -306,9 +312,41 @@ static inline void __update_latency(struct ceph_client_metric *m,
>  	*sq_sump += sq;
>  }
>  
> +static inline void __update_size(struct ceph_client_metric *m,
> +				 metric_type type, unsigned int size)
> +{
> +	ktime_t total;
> +	u64 *minp, *maxp, *sump;
> +
> +	switch (type) {
> +	case CEPH_METRIC_READ:
> +		total = m->total_reads;
> +		sump = &m->read_size_sum;
> +		minp = &m->read_size_min;
> +		maxp = &m->read_size_max;
> +		break;
> +	case CEPH_METRIC_WRITE:
> +		total = m->total_writes;

"total" and "sump" are unused in this function, aside from the
assignment.
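
A hypothetical cleanup (sketch only, not from this series) would drop
the unused local and keep only the pointers that are dereferenced
later:

	static inline void __update_size(struct ceph_client_metric *m,
					 metric_type type, unsigned int size)
	{
		u64 *minp, *maxp, *sump;

		switch (type) {
		case CEPH_METRIC_READ:
			sump = &m->read_size_sum;
			minp = &m->read_size_min;
			maxp = &m->read_size_max;
			break;
		case CEPH_METRIC_WRITE:
			sump = &m->write_size_sum;
			minp = &m->write_size_min;
			maxp = &m->write_size_max;
			break;
		default:
			return;
		}

		*sump += size;
		if (unlikely(size < *minp))
			*minp = size;
		if (unlikely(size > *maxp))
			*maxp = size;
	}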

> +		sump = &m->write_size_sum;
> +		minp = &m->write_size_min;
> +		maxp = &m->write_size_max;
> +		break;
> +	case CEPH_METRIC_METADATA:
> +	default:
> +		return;
> +	}
> +
> +	*sump += size;
> +
> +	if (unlikely(size < *minp))
> +		*minp = size;
> +	if (unlikely(size > *maxp))
> +		*maxp = size;
> +}
> +
>  void ceph_update_read_metrics(struct ceph_client_metric *m,
>  			      ktime_t r_start, ktime_t r_end,
> -			      int rc)
> +			      unsigned int size, int rc)
>  {
>  	ktime_t lat = ktime_sub(r_end, r_start);
>  
> @@ -317,12 +355,13 @@ void ceph_update_read_metrics(struct ceph_client_metric *m,
>  
>  	spin_lock(&m->read_metric_lock);
>  	__update_latency(m, CEPH_METRIC_READ, lat);
> +	__update_size(m, CEPH_METRIC_READ, size);
>  	spin_unlock(&m->read_metric_lock);
>  }
>  
>  void ceph_update_write_metrics(struct ceph_client_metric *m,
>  			       ktime_t r_start, ktime_t r_end,
> -			       int rc)
> +			       unsigned int size, int rc)
>  {
>  	ktime_t lat = ktime_sub(r_end, r_start);
>  
> @@ -331,6 +370,7 @@ void ceph_update_write_metrics(struct ceph_client_metric *m,
>  
>  	spin_lock(&m->write_metric_lock);
>  	__update_latency(m, CEPH_METRIC_WRITE, lat);
> +	__update_size(m, CEPH_METRIC_WRITE, size);
>  	spin_unlock(&m->write_metric_lock);
>  }
>  
> diff --git a/fs/ceph/metric.h b/fs/ceph/metric.h
> index 57b5f0ec38be..64651b6ac886 100644
> --- a/fs/ceph/metric.h
> +++ b/fs/ceph/metric.h
> @@ -110,6 +110,9 @@ struct ceph_client_metric {
>  
>  	spinlock_t read_metric_lock;
>  	u64 total_reads;
> +	u64 read_size_sum;
> +	u64 read_size_min;
> +	u64 read_size_max;
>  	ktime_t read_latency_sum;
>  	ktime_t read_latency_sq_sum;
>  	ktime_t read_latency_min;
> @@ -117,6 +120,9 @@ struct ceph_client_metric {
>  
>  	spinlock_t write_metric_lock;
>  	u64 total_writes;
> +	u64 write_size_sum;
> +	u64 write_size_min;
> +	u64 write_size_max;
>  	ktime_t write_latency_sum;
>  	ktime_t write_latency_sq_sum;
>  	ktime_t write_latency_min;
> @@ -164,10 +170,10 @@ static inline void ceph_update_cap_mis(struct ceph_client_metric *m)
>  
>  extern void ceph_update_read_metrics(struct ceph_client_metric *m,
>  				     ktime_t r_start, ktime_t r_end,
> -				     int rc);
> +				     unsigned int size, int rc);
>  extern void ceph_update_write_metrics(struct ceph_client_metric *m,
>  				      ktime_t r_start, ktime_t r_end,
> -				      int rc);
> +				      unsigned int size, int rc);
>  extern void ceph_update_metadata_metrics(struct ceph_client_metric *m,
>  				         ktime_t r_start, ktime_t r_end,
>  					 int rc);

-- 
Jeff Layton <jlayton@kernel.org>



* Re: [PATCH 2/4] ceph: update the __update_latency helper
  2021-03-22 12:28 ` [PATCH 2/4] ceph: update the __update_latency helper xiubli
@ 2021-03-23 12:34   ` Jeff Layton
  2021-03-23 13:14     ` Xiubo Li
  0 siblings, 1 reply; 11+ messages in thread
From: Jeff Layton @ 2021-03-23 12:34 UTC
  To: xiubli; +Cc: idryomov, pdonnell, ceph-devel

On Mon, 2021-03-22 at 20:28 +0800, xiubli@redhat.com wrote:
> From: Xiubo Li <xiubli@redhat.com>
> 
> Let the __update_latency() helper choose the corresponding members
> according to the metric_type.
> 
> URL: https://tracker.ceph.com/issues/49913
> Signed-off-by: Xiubo Li <xiubli@redhat.com>
> ---
>  fs/ceph/metric.c | 58 +++++++++++++++++++++++++++++++++++-------------
>  1 file changed, 42 insertions(+), 16 deletions(-)
> 
> diff --git a/fs/ceph/metric.c b/fs/ceph/metric.c
> index 75d309f2fb0c..d5560ff99a9d 100644
> --- a/fs/ceph/metric.c
> +++ b/fs/ceph/metric.c
> @@ -249,19 +249,51 @@ void ceph_metric_destroy(struct ceph_client_metric *m)
>  		ceph_put_mds_session(m->session);
>  }
>  
> -static inline void __update_latency(ktime_t *totalp, ktime_t *lsump,
> -				    ktime_t *min, ktime_t *max,
> -				    ktime_t *sq_sump, ktime_t lat)
> +typedef enum {
> +	CEPH_METRIC_READ,
> +	CEPH_METRIC_WRITE,
> +	CEPH_METRIC_METADATA,
> +} metric_type;
> +
> +static inline void __update_latency(struct ceph_client_metric *m,
> +				    metric_type type, ktime_t lat)
>  {
> +	ktime_t *totalp, *minp, *maxp, *lsump, *sq_sump;
>  	ktime_t total, avg, sq, lsum;
>  
> +	switch (type) {
> +	case CEPH_METRIC_READ:
> +		totalp = &m->total_reads;
> +		lsump = &m->read_latency_sum;
> +		minp = &m->read_latency_min;
> +		maxp = &m->read_latency_max;
> +		sq_sump = &m->read_latency_sq_sum;
> +		break;
> +	case CEPH_METRIC_WRITE:
> +		totalp = &m->total_writes;
> +		lsump = &m->write_latency_sum;
> +		minp = &m->write_latency_min;
> +		maxp = &m->write_latency_max;
> +		sq_sump = &m->write_latency_sq_sum;
> +		break;
> +	case CEPH_METRIC_METADATA:
> +		totalp = &m->total_metadatas;
> +		lsump = &m->metadata_latency_sum;
> +		minp = &m->metadata_latency_min;
> +		maxp = &m->metadata_latency_max;
> +		sq_sump = &m->metadata_latency_sq_sum;
> +		break;
> +	default:
> +		return;
> +	}
> +
>  	total = ++(*totalp);

Why are you adding one to *totalp above? Is that to avoid it being 0? 

>  	lsum = (*lsump += lat);
>  
> 

^^^
Instead of doing all of the above with pointers, why not just add to
total and lsum directly inside the switch statement? This seems like a
lot of pointless indirection.
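
A sketch of that suggestion (hypothetical, untested): do the two
updates per case and keep pointers only for the fields that are still
handled generically below, e.g. for the read case:

	switch (type) {
	case CEPH_METRIC_READ:
		total = ++m->total_reads;
		lsum = (m->read_latency_sum += lat);
		minp = &m->read_latency_min;
		maxp = &m->read_latency_max;
		sq_sump = &m->read_latency_sq_sum;
		break;
	/* CEPH_METRIC_WRITE and CEPH_METRIC_METADATA likewise */
	default:
		return;
	}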

> -	if (unlikely(lat < *min))
> -		*min = lat;
> -	if (unlikely(lat > *max))
> -		*max = lat;
> +	if (unlikely(lat < *minp))
> +		*minp = lat;
> +	if (unlikely(lat > *maxp))
> +		*maxp = lat;
>  
>  	if (unlikely(total == 1))
>  		return;
> @@ -284,9 +316,7 @@ void ceph_update_read_metrics(struct ceph_client_metric *m,
>  		return;
>  
>  	spin_lock(&m->read_metric_lock);
> -	__update_latency(&m->total_reads, &m->read_latency_sum,
> -			 &m->read_latency_min, &m->read_latency_max,
> -			 &m->read_latency_sq_sum, lat);
> +	__update_latency(m, CEPH_METRIC_READ, lat);
>  	spin_unlock(&m->read_metric_lock);
>  }
>  
> @@ -300,9 +330,7 @@ void ceph_update_write_metrics(struct ceph_client_metric *m,
>  		return;
>  
>  	spin_lock(&m->write_metric_lock);
> -	__update_latency(&m->total_writes, &m->write_latency_sum,
> -			 &m->write_latency_min, &m->write_latency_max,
> -			 &m->write_latency_sq_sum, lat);
> +	__update_latency(m, CEPH_METRIC_WRITE, lat);
>  	spin_unlock(&m->write_metric_lock);
>  }
>  
> @@ -316,8 +344,6 @@ void ceph_update_metadata_metrics(struct ceph_client_metric *m,
>  		return;
>  
>  	spin_lock(&m->metadata_metric_lock);
> -	__update_latency(&m->total_metadatas, &m->metadata_latency_sum,
> -			 &m->metadata_latency_min, &m->metadata_latency_max,
> -			 &m->metadata_latency_sq_sum, lat);
> +	__update_latency(m, CEPH_METRIC_METADATA, lat);
>  	spin_unlock(&m->metadata_metric_lock);
>  }

-- 
Jeff Layton <jlayton@kernel.org>



* Re: [PATCH 2/4] ceph: update the __update_latency helper
  2021-03-23 12:34   ` Jeff Layton
@ 2021-03-23 13:14     ` Xiubo Li
  0 siblings, 0 replies; 11+ messages in thread
From: Xiubo Li @ 2021-03-23 13:14 UTC
  To: Jeff Layton; +Cc: idryomov, pdonnell, ceph-devel

On 2021/3/23 20:34, Jeff Layton wrote:
> On Mon, 2021-03-22 at 20:28 +0800, xiubli@redhat.com wrote:
>> From: Xiubo Li <xiubli@redhat.com>
>>
>> Let the __update_latency() helper choose the corresponding members
>> according to the metric_type.
>>
>> URL: https://tracker.ceph.com/issues/49913
>> Signed-off-by: Xiubo Li <xiubli@redhat.com>
>> ---
>>   fs/ceph/metric.c | 58 +++++++++++++++++++++++++++++++++++-------------
>>   1 file changed, 42 insertions(+), 16 deletions(-)
>>
>> diff --git a/fs/ceph/metric.c b/fs/ceph/metric.c
>> index 75d309f2fb0c..d5560ff99a9d 100644
>> --- a/fs/ceph/metric.c
>> +++ b/fs/ceph/metric.c
>> @@ -249,19 +249,51 @@ void ceph_metric_destroy(struct ceph_client_metric *m)
>>   		ceph_put_mds_session(m->session);
>>   }
>>   
>> -static inline void __update_latency(ktime_t *totalp, ktime_t *lsump,
>> -				    ktime_t *min, ktime_t *max,
>> -				    ktime_t *sq_sump, ktime_t lat)
>> +typedef enum {
>> +	CEPH_METRIC_READ,
>> +	CEPH_METRIC_WRITE,
>> +	CEPH_METRIC_METADATA,
>> +} metric_type;
>> +
>> +static inline void __update_latency(struct ceph_client_metric *m,
>> +				    metric_type type, ktime_t lat)
>>   {
>> +	ktime_t *totalp, *minp, *maxp, *lsump, *sq_sump;
>>   	ktime_t total, avg, sq, lsum;
>>   
>> +	switch (type) {
>> +	case CEPH_METRIC_READ:
>> +		totalp = &m->total_reads;
>> +		lsump = &m->read_latency_sum;
>> +		minp = &m->read_latency_min;
>> +		maxp = &m->read_latency_max;
>> +		sq_sump = &m->read_latency_sq_sum;
>> +		break;
>> +	case CEPH_METRIC_WRITE:
>> +		totalp = &m->total_writes;
>> +		lsump = &m->write_latency_sum;
>> +		minp = &m->write_latency_min;
>> +		maxp = &m->write_latency_max;
>> +		sq_sump = &m->write_latency_sq_sum;
>> +		break;
>> +	case CEPH_METRIC_METADATA:
>> +		totalp = &m->total_metadatas;
>> +		lsump = &m->metadata_latency_sum;
>> +		minp = &m->metadata_latency_min;
>> +		maxp = &m->metadata_latency_max;
>> +		sq_sump = &m->metadata_latency_sq_sum;
>> +		break;
>> +	default:
>> +		return;
>> +	}
>> +
>>   	total = ++(*totalp);
> Why are you adding one to *totalp above? Is that to avoid it being 0?

No, in the old code we would increment
total_reads/total_writes/total_metadatas on each call of the
ceph_update_{read/write/metadata}_latency() helpers, and the same
happens here.
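
In other words (sketch reconstructed from the hunks above, not a
verbatim quote of the file): the freshly incremented count is what
feeds the running average used for the variance term:

	total = ++(*totalp);	/* one request accounted per call */
	lsum = (*lsump += lat);
	...
	if (unlikely(total == 1))
		return;
	avg = DIV64_U64_ROUND_CLOSEST(lsum, total);
	sq = lat - avg;
	*sq_sump += sq * sq;	/* stdev is derived from sq_sum later */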


>>   	lsum = (*lsump += lat);
>>   
>>
> ^^^
> Instead of doing all of the above with pointers, why not just add to
> total and lsum directly inside the switch statement? This seems like a
> lot of pointless indirection.

Okay, sounds good, will change it.


>> -	if (unlikely(lat < *min))
>> -		*min = lat;
>> -	if (unlikely(lat > *max))
>> -		*max = lat;
>> +	if (unlikely(lat < *minp))
>> +		*minp = lat;
>> +	if (unlikely(lat > *maxp))
>> +		*maxp = lat;
>>   
>>   	if (unlikely(total == 1))
>>   		return;
>> @@ -284,9 +316,7 @@ void ceph_update_read_metrics(struct ceph_client_metric *m,
>>   		return;
>>   
>>   	spin_lock(&m->read_metric_lock);
>> -	__update_latency(&m->total_reads, &m->read_latency_sum,
>> -			 &m->read_latency_min, &m->read_latency_max,
>> -			 &m->read_latency_sq_sum, lat);
>> +	__update_latency(m, CEPH_METRIC_READ, lat);
>>   	spin_unlock(&m->read_metric_lock);
>>   }
>>   
>>
>> @@ -300,9 +330,7 @@ void ceph_update_write_metrics(struct ceph_client_metric *m,
>>   		return;
>>   
>>   	spin_lock(&m->write_metric_lock);
>> -	__update_latency(&m->total_writes, &m->write_latency_sum,
>> -			 &m->write_latency_min, &m->write_latency_max,
>> -			 &m->write_latency_sq_sum, lat);
>> +	__update_latency(m, CEPH_METRIC_WRITE, lat);
>>   	spin_unlock(&m->write_metric_lock);
>>   }
>>   
>> @@ -316,8 +344,6 @@ void ceph_update_metadata_metrics(struct ceph_client_metric *m,
>>   		return;
>>   
>>   	spin_lock(&m->metadata_metric_lock);
>> -	__update_latency(&m->total_metadatas, &m->metadata_latency_sum,
>> -			 &m->metadata_latency_min, &m->metadata_latency_max,
>> -			 &m->metadata_latency_sq_sum, lat);
>> +	__update_latency(m, CEPH_METRIC_METADATA, lat);
>>   	spin_unlock(&m->metadata_metric_lock);
>>   }




* Re: [PATCH 4/4] ceph: add IO size metrics support
  2021-03-23 12:29   ` Jeff Layton
@ 2021-03-23 13:17     ` Xiubo Li
  0 siblings, 0 replies; 11+ messages in thread
From: Xiubo Li @ 2021-03-23 13:17 UTC
  To: Jeff Layton; +Cc: idryomov, pdonnell, ceph-devel

On 2021/3/23 20:29, Jeff Layton wrote:
> On Mon, 2021-03-22 at 20:28 +0800, xiubli@redhat.com wrote:
>> From: Xiubo Li <xiubli@redhat.com>
>>
>> This will collect the total size of the IOs and calculate the average
>> size, and will also track the min/max IO sizes.
>>
>> The debugfs output will show the size metrics in bytes, leaving it to
>> userspace applications to convert them to whatever unit they need.
>>
>> URL: https://tracker.ceph.com/issues/49913
>> Signed-off-by: Xiubo Li <xiubli@redhat.com>
>> ---
>>   fs/ceph/addr.c    | 14 ++++++++------
>>   fs/ceph/debugfs.c | 37 +++++++++++++++++++++++++++++++++----
>>   fs/ceph/file.c    | 23 +++++++++++------------
>>   fs/ceph/metric.c  | 44 ++++++++++++++++++++++++++++++++++++++++++--
>>   fs/ceph/metric.h  | 10 ++++++++--
>>   5 files changed, 102 insertions(+), 26 deletions(-)
>>
>> diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
>> index 7c2802758d0e..d8a3624bc81d 100644
>> --- a/fs/ceph/addr.c
>> +++ b/fs/ceph/addr.c
>> @@ -227,7 +227,7 @@ static void finish_netfs_read(struct ceph_osd_request *req)
>>   	int err = req->r_result;
>>
>>   	ceph_update_read_metrics(&fsc->mdsc->metric, req->r_start_latency,
>> -				 req->r_end_latency, err);
>> +				 req->r_end_latency, osd_data->length, err);
>>
>>   	dout("%s: result %d subreq->len=%zu i_size=%lld\n", __func__, req->r_result,
>>   	     subreq->len, i_size_read(req->r_inode));
>> @@ -561,7 +561,7 @@ static int writepage_nounlock(struct page *page, struct writeback_control *wbc)
>>   		err = ceph_osdc_wait_request(osdc, req);
>>
>>   	ceph_update_write_metrics(&fsc->mdsc->metric, req->r_start_latency,
>> -				  req->r_end_latency, err);
>> +				  req->r_end_latency, len, err);
>>
>>   	ceph_osdc_put_request(req);
>>   	if (err == 0)
>> @@ -636,6 +636,7 @@ static void writepages_finish(struct ceph_osd_request *req)
>>   	struct ceph_snap_context *snapc = req->r_snapc;
>>   	struct address_space *mapping = inode->i_mapping;
>>   	struct ceph_fs_client *fsc = ceph_inode_to_client(inode);
>> +	unsigned int len = 0;
>>   	bool remove_page;
>>
>>   	dout("writepages_finish %p rc %d\n", inode, rc);
>> @@ -648,9 +649,6 @@ static void writepages_finish(struct ceph_osd_request *req)
>>   		ceph_clear_error_write(ci);
>>   	}
>>
>> -	ceph_update_write_metrics(&fsc->mdsc->metric, req->r_start_latency,
>> -				  req->r_end_latency, rc);
>> -
>>   	/*
>>   	 * We lost the cache cap, need to truncate the page before
>>   	 * it is unlocked, otherwise we'd truncate it later in the
>> @@ -667,6 +665,7 @@ static void writepages_finish(struct ceph_osd_request *req)
>>
>>   		osd_data = osd_req_op_extent_osd_data(req, i);
>>   		BUG_ON(osd_data->type != CEPH_OSD_DATA_TYPE_PAGES);
>> +		len += osd_data->length;
>>   		num_pages = calc_pages_for((u64)osd_data->alignment,
>>   					   (u64)osd_data->length);
>>   		total_pages += num_pages;
>> @@ -699,6 +698,9 @@ static void writepages_finish(struct ceph_osd_request *req)
>>   		release_pages(osd_data->pages, num_pages);
>>   	}
>>
>> +	ceph_update_write_metrics(&fsc->mdsc->metric, req->r_start_latency,
>> +				  req->r_end_latency, len, rc);
>> +
>>   	ceph_put_wrbuffer_cap_refs(ci, total_pages, snapc);
>>
>>   	osd_data = osd_req_op_extent_osd_data(req, 0);
>> @@ -1720,7 +1722,7 @@ int ceph_uninline_data(struct file *filp, struct page *locked_page)
>>   		err = ceph_osdc_wait_request(&fsc->client->osdc, req);
>>
>>   	ceph_update_write_metrics(&fsc->mdsc->metric, req->r_start_latency,
>> -				  req->r_end_latency, err);
>> +				  req->r_end_latency, len, err);
>>
>>   out_put:
>>   	ceph_osdc_put_request(req);
>> diff --git a/fs/ceph/debugfs.c b/fs/ceph/debugfs.c
>> index 425f3356332a..38b78b45811f 100644
>> --- a/fs/ceph/debugfs.c
>> +++ b/fs/ceph/debugfs.c
>> @@ -127,7 +127,7 @@ static int mdsc_show(struct seq_file *s, void *p)
>>   	return 0;
>>   }
>>
>> -#define CEPH_METRIC_SHOW(name, total, avg, min, max, sq) {		\
>> +#define CEPH_LAT_METRIC_SHOW(name, total, avg, min, max, sq) {		\
>>   	s64 _total, _avg, _min, _max, _sq, _st;				\
>>   	_avg = ktime_to_us(avg);					\
>>   	_min = ktime_to_us(min == KTIME_MAX ? 0 : min);			\
>> @@ -140,6 +140,12 @@ static int mdsc_show(struct seq_file *s, void *p)
>>   		   name, total, _avg, _min, _max, _st);			\
>>   }
>>
>> +#define CEPH_SZ_METRIC_SHOW(name, total, avg, min, max, sum) {		\
>> +	u64 _min = min == U64_MAX ? 0 : min;				\
>> +	seq_printf(s, "%-14s%-12lld%-16llu%-16llu%-16llu%llu\n",	\
>> +		   name, total, avg, _min, max, sum);			\
>> +}
>> +
>>   static int metric_show(struct seq_file *s, void *p)
>>   {
>>   	struct ceph_fs_client *fsc = s->private;
>> @@ -147,6 +153,7 @@ static int metric_show(struct seq_file *s, void *p)
>>   	struct ceph_client_metric *m = &mdsc->metric;
>>   	int nr_caps = 0;
>>   	s64 total, sum, avg, min, max, sq;
>> +	u64 sum_sz, avg_sz, min_sz, max_sz;
>>
>>   	sum = percpu_counter_sum(&m->total_inodes);
>>   	seq_printf(s, "item                               total\n");
>> @@ -170,7 +177,7 @@ static int metric_show(struct seq_file *s, void *p)
>>   	max = m->read_latency_max;
>>   	sq = m->read_latency_sq_sum;
>>   	spin_unlock(&m->read_metric_lock);
>> -	CEPH_METRIC_SHOW("read", total, avg, min, max, sq);
>> +	CEPH_LAT_METRIC_SHOW("read", total, avg, min, max, sq);
>>
>>   	spin_lock(&m->write_metric_lock);
>>   	total = m->total_writes;
>> @@ -180,7 +187,7 @@ static int metric_show(struct seq_file *s, void *p)
>>   	max = m->write_latency_max;
>>   	sq = m->write_latency_sq_sum;
>>   	spin_unlock(&m->write_metric_lock);
>> -	CEPH_METRIC_SHOW("write", total, avg, min, max, sq);
>> +	CEPH_LAT_METRIC_SHOW("write", total, avg, min, max, sq);
>>
>>   	spin_lock(&m->metadata_metric_lock);
>>   	total = m->total_metadatas;
>> @@ -190,7 +197,29 @@ static int metric_show(struct seq_file *s, void *p)
>>   	max = m->metadata_latency_max;
>>   	sq = m->metadata_latency_sq_sum;
>>   	spin_unlock(&m->metadata_metric_lock);
>> -	CEPH_METRIC_SHOW("metadata", total, avg, min, max, sq);
>> +	CEPH_LAT_METRIC_SHOW("metadata", total, avg, min, max, sq);
>> +
>> +	seq_printf(s, "\n");
>> +	seq_printf(s, "item          total       avg_sz(bytes)   min_sz(bytes)   max_sz(bytes)  total_sz(bytes)\n");
>> +	seq_printf(s, "----------------------------------------------------------------------------------------\n");
>> +
>> +	spin_lock(&m->read_metric_lock);
>> +	total = m->total_reads;
>> +	sum_sz = m->read_size_sum;
>> +	avg_sz = total > 0 ? DIV64_U64_ROUND_CLOSEST(sum_sz, total) : 0;
>> +	min_sz = m->read_size_min;
>> +	max_sz = m->read_size_max;
>> +	spin_unlock(&m->read_metric_lock);
>> +	CEPH_SZ_METRIC_SHOW("read", total, avg_sz, min_sz, max_sz, sum_sz);
>> +
>> +	spin_lock(&m->write_metric_lock);
>> +	total = m->total_writes;
>> +	sum_sz = m->write_size_sum;
>> +	avg_sz = total > 0 ? DIV64_U64_ROUND_CLOSEST(sum_sz, total) : 0;
>> +	min_sz = m->write_size_min;
>> +	max_sz = m->write_size_max;
>> +	spin_unlock(&m->write_metric_lock);
>> +	CEPH_SZ_METRIC_SHOW("write", total, avg_sz, min_sz, max_sz, sum_sz);
>>
>>   	seq_printf(s, "\n");
>>   	seq_printf(s, "item          total           miss            hit\n");
>> diff --git a/fs/ceph/file.c b/fs/ceph/file.c
>> index 31542eac7e59..db43d2d013b9 100644
>> --- a/fs/ceph/file.c
>> +++ b/fs/ceph/file.c
>> @@ -898,7 +898,7 @@ static ssize_t ceph_sync_read(struct kiocb *iocb, struct iov_iter *to,
>>   		ceph_update_read_metrics(&fsc->mdsc->metric,
>>   					 req->r_start_latency,
>>   					 req->r_end_latency,
>> -					 ret);
>> +					 len, ret);
>>
>>   		ceph_osdc_put_request(req);
>>
>> @@ -1030,12 +1030,12 @@ static void ceph_aio_complete_req(struct ceph_osd_request *req)
>>   	struct ceph_aio_request *aio_req = req->r_priv;
>>   	struct ceph_osd_data *osd_data = osd_req_op_extent_osd_data(req, 0);
>>   	struct ceph_client_metric *metric = &ceph_sb_to_mdsc(inode->i_sb)->metric;
>> +	unsigned int len = osd_data->bvec_pos.iter.bi_size;
>>
>>   	BUG_ON(osd_data->type != CEPH_OSD_DATA_TYPE_BVECS);
>>   	BUG_ON(!osd_data->num_bvecs);
>>
>> -	dout("ceph_aio_complete_req %p rc %d bytes %u\n",
>> -	     inode, rc, osd_data->bvec_pos.iter.bi_size);
>> +	dout("ceph_aio_complete_req %p rc %d bytes %u\n", inode, rc, len);
>>
>>   	if (rc == -EOLDSNAPC) {
>>   		struct ceph_aio_work *aio_work;
>> @@ -1053,9 +1053,9 @@ static void ceph_aio_complete_req(struct ceph_osd_request *req)
>>   	} else if (!aio_req->write) {
>>   		if (rc == -ENOENT)
>>   			rc = 0;
>> -		if (rc >= 0 && osd_data->bvec_pos.iter.bi_size > rc) {
>> +		if (rc >= 0 && len > rc) {
>>   			struct iov_iter i;
>> -			int zlen = osd_data->bvec_pos.iter.bi_size - rc;
>> +			int zlen = len - rc;
>>
>>   			/*
>>   			 * If read is satisfied by single OSD request,
>> @@ -1072,8 +1072,7 @@ static void ceph_aio_complete_req(struct ceph_osd_request *req)
>>   			}
>>
>>   			iov_iter_bvec(&i, READ, osd_data->bvec_pos.bvecs,
>> -				      osd_data->num_bvecs,
>> -				      osd_data->bvec_pos.iter.bi_size);
>> +				      osd_data->num_bvecs, len);
>>   			iov_iter_advance(&i, rc);
>>   			iov_iter_zero(zlen, &i);
>>   		}
>> @@ -1083,10 +1082,10 @@ static void ceph_aio_complete_req(struct ceph_osd_request *req)
>>   	if (req->r_start_latency) {
>>   		if (aio_req->write)
>>   			ceph_update_write_metrics(metric, req->r_start_latency,
>> -						  req->r_end_latency, rc);
>> +						  req->r_end_latency, len, rc);
>>   		else
>>   			ceph_update_read_metrics(metric, req->r_start_latency,
>> -						 req->r_end_latency, rc);
>> +						 req->r_end_latency, len, rc);
>>   	}
>>
>>   	put_bvecs(osd_data->bvec_pos.bvecs, osd_data->num_bvecs,
>> @@ -1294,10 +1293,10 @@ ceph_direct_read_write(struct kiocb *iocb, struct iov_iter *iter,
>>
>>   		if (write)
>>   			ceph_update_write_metrics(metric, req->r_start_latency,
>> -						  req->r_end_latency, ret);
>> +						  req->r_end_latency, len, ret);
>>   		else
>>   			ceph_update_read_metrics(metric, req->r_start_latency,
>> -						 req->r_end_latency, ret);
>> +						 req->r_end_latency, len, ret);
>>
>>   		size = i_size_read(inode);
>>   		if (!write) {
>> @@ -1471,7 +1470,7 @@ ceph_sync_write(struct kiocb *iocb, struct iov_iter *from, loff_t pos,
>>   			ret = ceph_osdc_wait_request(&fsc->client->osdc, req);
>>
>>   		ceph_update_write_metrics(&fsc->mdsc->metric, req->r_start_latency,
>> -					  req->r_end_latency, ret);
>> +					  req->r_end_latency, len, ret);
>>   out:
>>   		ceph_osdc_put_request(req);
>>   		if (ret != 0) {
>> diff --git a/fs/ceph/metric.c b/fs/ceph/metric.c
>> index d5560ff99a9d..ff3c9d5cf9ff 100644
>> --- a/fs/ceph/metric.c
>> +++ b/fs/ceph/metric.c
>> @@ -189,6 +189,9 @@ int ceph_metric_init(struct ceph_client_metric *m)
>>   	m->read_latency_max = 0;
>>   	m->total_reads = 0;
>>   	m->read_latency_sum = 0;
>> +	m->read_size_min = U64_MAX;
>> +	m->read_size_max = 0;
>> +	m->read_size_sum = 0;
>>
>>   	spin_lock_init(&m->write_metric_lock);
>>   	m->write_latency_sq_sum = 0;
>> @@ -196,6 +199,9 @@ int ceph_metric_init(struct ceph_client_metric *m)
>>   	m->write_latency_max = 0;
>>   	m->total_writes = 0;
>>   	m->write_latency_sum = 0;
>> +	m->write_size_min = U64_MAX;
>> +	m->write_size_max = 0;
>> +	m->write_size_sum = 0;
>>
>>   	spin_lock_init(&m->metadata_metric_lock);
>>   	m->metadata_latency_sq_sum = 0;
>> @@ -306,9 +312,41 @@ static inline void __update_latency(struct ceph_client_metric *m,
>>   	*sq_sump += sq;
>>   }
>>
>> +static inline void __update_size(struct ceph_client_metric *m,
>> +				 metric_type type, unsigned int size)
>> +{
>> +	ktime_t total;
>> +	u64 *minp, *maxp, *sump;
>> +
>> +	switch (type) {
>> +	case CEPH_METRIC_READ:
>> +		total = m->total_reads;
>> +		sump = &m->read_size_sum;
>> +		minp = &m->read_size_min;
>> +		maxp = &m->read_size_max;
>> +		break;
>> +	case CEPH_METRIC_WRITE:
>> +		total = m->total_writes;
> "total" and "sump" are unused in this function, aside from the
> assignment.

Will fix it.

I was also thinking of adding the read/write IO speed metrics here 
later; would that make sense?
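
Something like this in metric_show(), derived from the sums we already
track (purely illustrative: ceph_avg_speed() is a made-up name and
overflow handling is ignored):

/* bytes per second from the accumulated size and latency sums */
static u64 ceph_avg_speed(u64 size_sum, ktime_t lat_sum)
{
	s64 us = ktime_to_us(lat_sum);

	if (us <= 0)
		return 0;
	/* note: size_sum * USEC_PER_SEC can overflow for huge sums */
	return div64_u64(size_sum * USEC_PER_SEC, us);
}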


>> +		sump = &m->write_size_sum;
>> +		minp = &m->write_size_min;
>> +		maxp = &m->write_size_max;
>> +		break;
>> +	case CEPH_METRIC_METADATA:
>> +	default:
>> +		return;
>> +	}
>> +
>> +	*sump += size;
>> +
>> +	if (unlikely(size < *minp))
>> +		*minp = size;
>> +	if (unlikely(size > *maxp))
>> +		*maxp = size;
>> +}
>> +
>>   void ceph_update_read_metrics(struct ceph_client_metric *m,
>>   			      ktime_t r_start, ktime_t r_end,
>> -			      int rc)
>> +			      unsigned int size, int rc)
>>   {
>>   	ktime_t lat = ktime_sub(r_end, r_start);
>>
>> @@ -317,12 +355,13 @@ void ceph_update_read_metrics(struct ceph_client_metric *m,
>>
>>   	spin_lock(&m->read_metric_lock);
>>   	__update_latency(m, CEPH_METRIC_READ, lat);
>> +	__update_size(m, CEPH_METRIC_READ, size);
>>   	spin_unlock(&m->read_metric_lock);
>>   }
>>
>>   void ceph_update_write_metrics(struct ceph_client_metric *m,
>>   			       ktime_t r_start, ktime_t r_end,
>> -			       int rc)
>> +			       unsigned int size, int rc)
>>   {
>>   	ktime_t lat = ktime_sub(r_end, r_start);
>>
>> @@ -331,6 +370,7 @@ void ceph_update_write_metrics(struct ceph_client_metric *m,
>>
>>   	spin_lock(&m->write_metric_lock);
>>   	__update_latency(m, CEPH_METRIC_WRITE, lat);
>> +	__update_size(m, CEPH_METRIC_WRITE, size);
>>   	spin_unlock(&m->write_metric_lock);
>>   }
>>
>> diff --git a/fs/ceph/metric.h b/fs/ceph/metric.h
>> index 57b5f0ec38be..64651b6ac886 100644
>> --- a/fs/ceph/metric.h
>> +++ b/fs/ceph/metric.h
>> @@ -110,6 +110,9 @@ struct ceph_client_metric {
>>
>>   	spinlock_t read_metric_lock;
>>   	u64 total_reads;
>> +	u64 read_size_sum;
>> +	u64 read_size_min;
>> +	u64 read_size_max;
>>   	ktime_t read_latency_sum;
>>   	ktime_t read_latency_sq_sum;
>>   	ktime_t read_latency_min;
>> @@ -117,6 +120,9 @@ struct ceph_client_metric {
>>
>>   	spinlock_t write_metric_lock;
>>   	u64 total_writes;
>> +	u64 write_size_sum;
>> +	u64 write_size_min;
>> +	u64 write_size_max;
>>   	ktime_t write_latency_sum;
>>   	ktime_t write_latency_sq_sum;
>>   	ktime_t write_latency_min;
>> @@ -164,10 +170,10 @@ static inline void ceph_update_cap_mis(struct ceph_client_metric *m)
>>
>>   extern void ceph_update_read_metrics(struct ceph_client_metric *m,
>>   				     ktime_t r_start, ktime_t r_end,
>> -				     int rc);
>> +				     unsigned int size, int rc);
>>   extern void ceph_update_write_metrics(struct ceph_client_metric *m,
>>   				      ktime_t r_start, ktime_t r_end,
>> -				      int rc);
>> +				      unsigned int size, int rc);
>>   extern void ceph_update_metadata_metrics(struct ceph_client_metric *m,
>>   				         ktime_t r_start, ktime_t r_end,
>>   					 int rc);
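
(As a quick sanity check against the cover letter's sample output: a
single 10240-byte op gives total = 1 and total_sz = 10240, so avg_sz =
DIV64_U64_ROUND_CLOSEST(10240, 1) = 10240 and min_sz = max_sz = 10240,
matching the table.)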



^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH 0/4] ceph: add IO size metric support
  2021-03-22 12:28 [PATCH 0/4] ceph: add IO size metric support xiubli
                   ` (3 preceding siblings ...)
  2021-03-22 12:28 ` [PATCH 4/4] ceph: add IO size metrics support xiubli
@ 2021-03-24 15:06 ` Jeff Layton
  2021-03-25  0:42   ` Xiubo Li
  4 siblings, 1 reply; 11+ messages in thread
From: Jeff Layton @ 2021-03-24 15:06 UTC (permalink / raw)
  To: xiubli; +Cc: idryomov, pdonnell, ceph-devel

On Mon, 2021-03-22 at 20:28 +0800, xiubli@redhat.com wrote:
> From: Xiubo Li <xiubli@redhat.com>
> 
> Currently it will show as the following:
> 
> item          total       avg_sz(bytes)   min_sz(bytes)   max_sz(bytes)  total_sz(bytes)
> ----------------------------------------------------------------------------------------
> read          1           10240           10240           10240           10240
> write         1           10240           10240           10240           10240
> 
> 
> 
> Xiubo Li (4):
>   ceph: rename the metric helpers
>   ceph: update the __update_latency helper
>   ceph: avoid count the same request twice or more
>   ceph: add IO size metrics support
> 
>  fs/ceph/addr.c       |  20 +++----
>  fs/ceph/debugfs.c    |  49 +++++++++++++----
>  fs/ceph/file.c       |  47 ++++++++--------
>  fs/ceph/mds_client.c |   2 +-
>  fs/ceph/metric.c     | 126 ++++++++++++++++++++++++++++++++-----------
>  fs/ceph/metric.h     |  22 +++++---
>  6 files changed, 184 insertions(+), 82 deletions(-)
> 

I've gone ahead and merged patches 1 and 3 from this series into
ceph-client/testing. 1 was just a trivial renaming that we might as well
get out of the way, and 3 looked like a (minor) bugfix. The other two
still need a bit of work (but nothing major).

Cheers,
-- 
Jeff Layton <jlayton@kernel.org>


^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH 0/4] ceph: add IO size metric support
  2021-03-24 15:06 ` [PATCH 0/4] ceph: add IO size metric support Jeff Layton
@ 2021-03-25  0:42   ` Xiubo Li
  0 siblings, 0 replies; 11+ messages in thread
From: Xiubo Li @ 2021-03-25  0:42 UTC (permalink / raw)
  To: Jeff Layton; +Cc: idryomov, pdonnell, ceph-devel

On 2021/3/24 23:06, Jeff Layton wrote:
> On Mon, 2021-03-22 at 20:28 +0800, xiubli@redhat.com wrote:
>> From: Xiubo Li <xiubli@redhat.com>
>>
>> Currently it will show as the following:
>>
>> item          total       avg_sz(bytes)   min_sz(bytes)   max_sz(bytes)  total_sz(bytes)
>> ----------------------------------------------------------------------------------------
>> read          1           10240           10240           10240           10240
>> write         1           10240           10240           10240           10240
>>
>>
>>
>> Xiubo Li (4):
>>    ceph: rename the metric helpers
>>    ceph: update the __update_latency helper
>>    ceph: avoid count the same request twice or more
>>    ceph: add IO size metrics support
>>
>>   fs/ceph/addr.c       |  20 +++----
>>   fs/ceph/debugfs.c    |  49 +++++++++++++----
>>   fs/ceph/file.c       |  47 ++++++++--------
>>   fs/ceph/mds_client.c |   2 +-
>>   fs/ceph/metric.c     | 126 ++++++++++++++++++++++++++++++++-----------
>>   fs/ceph/metric.h     |  22 +++++---
>>   6 files changed, 184 insertions(+), 82 deletions(-)
>>
> I've gone ahead and merged patches 1 and 3 from this series into
> ceph-client/testing. 1 was just a trivial renaming that we might as well
> get out of the way, and 3 looked like a (minor) bugfix. The other two
> still need a bit of work (but nothing major).

Sure, will fix them and post the v2 later.

Thanks Jeff.


> Cheers,



^ permalink raw reply	[flat|nested] 11+ messages in thread

end of thread, other threads:[~2021-03-25  0:44 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-03-22 12:28 [PATCH 0/4] ceph: add IO size metric support xiubli
2021-03-22 12:28 ` [PATCH 1/4] ceph: rename the metric helpers xiubli
2021-03-22 12:28 ` [PATCH 2/4] ceph: update the __update_latency helper xiubli
2021-03-23 12:34   ` Jeff Layton
2021-03-23 13:14     ` Xiubo Li
2021-03-22 12:28 ` [PATCH 3/4] ceph: avoid count the same request twice or more xiubli
2021-03-22 12:28 ` [PATCH 4/4] ceph: add IO size metrics support xiubli
2021-03-23 12:29   ` Jeff Layton
2021-03-23 13:17     ` Xiubo Li
2021-03-24 15:06 ` [PATCH 0/4] ceph: add IO size metric support Jeff Layton
2021-03-25  0:42   ` Xiubo Li

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).