ceph-devel.vger.kernel.org archive mirror
* [PATCH v2 0/4] ceph: forward average read/write/metadata latency
@ 2021-09-14  8:48 Venky Shankar
  2021-09-14  8:48 ` [PATCH v2 1/4] ceph: use "struct ceph_timespec" for r/w/m latencies Venky Shankar
                   ` (3 more replies)
  0 siblings, 4 replies; 19+ messages in thread
From: Venky Shankar @ 2021-09-14  8:48 UTC (permalink / raw)
  To: jlayton, pdonnell, xiubli; +Cc: ceph-devel, Venky Shankar

v2:
  - based on top of ceph-client/testing branch

Right now, cumulative read/write/metadata latencies are tracked
and periodically forwarded to the MDS. These metrics are not
particularly useful. Much more useful metrics are the average
latency and standard deviation (stdev), which is what this
series of patches tracks and forwards.

The userspace (libcephfs+tool) changes are here:

          https://github.com/ceph/ceph/pull/41397

The math involved in keeping track of the average latency and stdev
seems incorrect, so this series fixes that up too (closely mimicking
how it's done in userspace, with some restrictions obviously) as per:

          NEW_AVG = OLD_AVG + ((latency - OLD_AVG) / total_ops)
          NEW_STDEV = SQRT(((OLD_STDEV + (latency - OLD_AVG)*(latency - NEW_AVG)) / (total_ops - 1)))

Note that the cumulative latencies are still forwarded to the MDS,
but the tool (cephfs-top) ignores them altogether.

Venky Shankar (4):
  ceph: use "struct ceph_timespec" for r/w/m latencies
  ceph: track average/stdev r/w/m latency
  ceph: include average/stddev r/w/m latency in mds metrics
  ceph: use tracked average r/w/m latencies to display metrics in
    debugfs

 fs/ceph/debugfs.c |  20 ++++-----
 fs/ceph/metric.c  | 105 ++++++++++++++++++++++------------------------
 fs/ceph/metric.h  |  68 +++++++++++++++++++-----------
 3 files changed, 104 insertions(+), 89 deletions(-)

-- 
2.31.1


^ permalink raw reply	[flat|nested] 19+ messages in thread

* [PATCH v2 1/4] ceph: use "struct ceph_timespec" for r/w/m latencies
  2021-09-14  8:48 [PATCH v2 0/4] ceph: forward average read/write/metadata latency Venky Shankar
@ 2021-09-14  8:48 ` Venky Shankar
  2021-09-14  8:49 ` [PATCH v2 2/4] ceph: track average/stdev r/w/m latency Venky Shankar
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 19+ messages in thread
From: Venky Shankar @ 2021-09-14  8:48 UTC (permalink / raw)
  To: jlayton, pdonnell, xiubli; +Cc: ceph-devel, Venky Shankar

Use "struct ceph_timespec" to encode the r/w/m latency payloads
instead of open-coded sec/nsec fields.

Signed-off-by: Venky Shankar <vshankar@redhat.com>
---
 fs/ceph/metric.c | 12 ++++++------
 fs/ceph/metric.h | 11 ++++-------
 2 files changed, 10 insertions(+), 13 deletions(-)

diff --git a/fs/ceph/metric.c b/fs/ceph/metric.c
index 04d5df29bbbf..226dc38e2909 100644
--- a/fs/ceph/metric.c
+++ b/fs/ceph/metric.c
@@ -64,8 +64,8 @@ static bool ceph_mdsc_send_metrics(struct ceph_mds_client *mdsc,
 	read->header.data_len = cpu_to_le32(sizeof(*read) - header_len);
 	sum = m->read_latency_sum;
 	jiffies_to_timespec64(sum, &ts);
-	read->sec = cpu_to_le32(ts.tv_sec);
-	read->nsec = cpu_to_le32(ts.tv_nsec);
+	read->lat.tv_sec = cpu_to_le32(ts.tv_sec);
+	read->lat.tv_nsec = cpu_to_le32(ts.tv_nsec);
 	items++;
 
 	/* encode the write latency metric */
@@ -76,8 +76,8 @@ static bool ceph_mdsc_send_metrics(struct ceph_mds_client *mdsc,
 	write->header.data_len = cpu_to_le32(sizeof(*write) - header_len);
 	sum = m->write_latency_sum;
 	jiffies_to_timespec64(sum, &ts);
-	write->sec = cpu_to_le32(ts.tv_sec);
-	write->nsec = cpu_to_le32(ts.tv_nsec);
+	write->lat.tv_sec = cpu_to_le32(ts.tv_sec);
+	write->lat.tv_nsec = cpu_to_le32(ts.tv_nsec);
 	items++;
 
 	/* encode the metadata latency metric */
@@ -88,8 +88,8 @@ static bool ceph_mdsc_send_metrics(struct ceph_mds_client *mdsc,
 	meta->header.data_len = cpu_to_le32(sizeof(*meta) - header_len);
 	sum = m->metadata_latency_sum;
 	jiffies_to_timespec64(sum, &ts);
-	meta->sec = cpu_to_le32(ts.tv_sec);
-	meta->nsec = cpu_to_le32(ts.tv_nsec);
+	meta->lat.tv_sec = cpu_to_le32(ts.tv_sec);
+	meta->lat.tv_nsec = cpu_to_le32(ts.tv_nsec);
 	items++;
 
 	/* encode the dentry lease metric */
diff --git a/fs/ceph/metric.h b/fs/ceph/metric.h
index 0133955a3c6a..103ed736f9d2 100644
--- a/fs/ceph/metric.h
+++ b/fs/ceph/metric.h
@@ -2,7 +2,7 @@
 #ifndef _FS_CEPH_MDS_METRIC_H
 #define _FS_CEPH_MDS_METRIC_H
 
-#include <linux/types.h>
+#include <linux/ceph/types.h>
 #include <linux/percpu_counter.h>
 #include <linux/ktime.h>
 
@@ -60,22 +60,19 @@ struct ceph_metric_cap {
 /* metric read latency header */
 struct ceph_metric_read_latency {
 	struct ceph_metric_header header;
-	__le32 sec;
-	__le32 nsec;
+	struct ceph_timespec lat;
 } __packed;
 
 /* metric write latency header */
 struct ceph_metric_write_latency {
 	struct ceph_metric_header header;
-	__le32 sec;
-	__le32 nsec;
+	struct ceph_timespec lat;
 } __packed;
 
 /* metric metadata latency header */
 struct ceph_metric_metadata_latency {
 	struct ceph_metric_header header;
-	__le32 sec;
-	__le32 nsec;
+	struct ceph_timespec lat;
 } __packed;
 
 /* metric dentry lease header */
-- 
2.31.1



* [PATCH v2 2/4] ceph: track average/stdev r/w/m latency
  2021-09-14  8:48 [PATCH v2 0/4] ceph: forward average read/write/metadata latency Venky Shankar
  2021-09-14  8:48 ` [PATCH v2 1/4] ceph: use "struct ceph_timespec" for r/w/m latencies Venky Shankar
@ 2021-09-14  8:49 ` Venky Shankar
  2021-09-14 12:52   ` Xiubo Li
                     ` (2 more replies)
  2021-09-14  8:49 ` [PATCH v2 3/4] ceph: include average/stddev r/w/m latency in mds metrics Venky Shankar
  2021-09-14  8:49 ` [PATCH v2 4/4] ceph: use tracked average r/w/m latencies to display metrics in debugfs Venky Shankar
  3 siblings, 3 replies; 19+ messages in thread
From: Venky Shankar @ 2021-09-14  8:49 UTC (permalink / raw)
  To: jlayton, pdonnell, xiubli; +Cc: ceph-devel, Venky Shankar

The math involved in tracking average and standard deviation
for r/w/m latencies looks incorrect. Fix that up. Also, change
the variable name that tracks standard deviation (*_sq_sum) to
*_stdev.

Signed-off-by: Venky Shankar <vshankar@redhat.com>
---
 fs/ceph/debugfs.c | 14 +++++-----
 fs/ceph/metric.c  | 70 ++++++++++++++++++++++-------------------------
 fs/ceph/metric.h  |  9 ++++--
 3 files changed, 45 insertions(+), 48 deletions(-)

diff --git a/fs/ceph/debugfs.c b/fs/ceph/debugfs.c
index 38b78b45811f..3abfa7ae8220 100644
--- a/fs/ceph/debugfs.c
+++ b/fs/ceph/debugfs.c
@@ -152,7 +152,7 @@ static int metric_show(struct seq_file *s, void *p)
 	struct ceph_mds_client *mdsc = fsc->mdsc;
 	struct ceph_client_metric *m = &mdsc->metric;
 	int nr_caps = 0;
-	s64 total, sum, avg, min, max, sq;
+	s64 total, sum, avg, min, max, stdev;
 	u64 sum_sz, avg_sz, min_sz, max_sz;
 
 	sum = percpu_counter_sum(&m->total_inodes);
@@ -175,9 +175,9 @@ static int metric_show(struct seq_file *s, void *p)
 	avg = total > 0 ? DIV64_U64_ROUND_CLOSEST(sum, total) : 0;
 	min = m->read_latency_min;
 	max = m->read_latency_max;
-	sq = m->read_latency_sq_sum;
+	stdev = m->read_latency_stdev;
 	spin_unlock(&m->read_metric_lock);
-	CEPH_LAT_METRIC_SHOW("read", total, avg, min, max, sq);
+	CEPH_LAT_METRIC_SHOW("read", total, avg, min, max, stdev);
 
 	spin_lock(&m->write_metric_lock);
 	total = m->total_writes;
@@ -185,9 +185,9 @@ static int metric_show(struct seq_file *s, void *p)
 	avg = total > 0 ? DIV64_U64_ROUND_CLOSEST(sum, total) : 0;
 	min = m->write_latency_min;
 	max = m->write_latency_max;
-	sq = m->write_latency_sq_sum;
+	stdev = m->write_latency_stdev;
 	spin_unlock(&m->write_metric_lock);
-	CEPH_LAT_METRIC_SHOW("write", total, avg, min, max, sq);
+	CEPH_LAT_METRIC_SHOW("write", total, avg, min, max, stdev);
 
 	spin_lock(&m->metadata_metric_lock);
 	total = m->total_metadatas;
@@ -195,9 +195,9 @@ static int metric_show(struct seq_file *s, void *p)
 	avg = total > 0 ? DIV64_U64_ROUND_CLOSEST(sum, total) : 0;
 	min = m->metadata_latency_min;
 	max = m->metadata_latency_max;
-	sq = m->metadata_latency_sq_sum;
+	stdev = m->metadata_latency_stdev;
 	spin_unlock(&m->metadata_metric_lock);
-	CEPH_LAT_METRIC_SHOW("metadata", total, avg, min, max, sq);
+	CEPH_LAT_METRIC_SHOW("metadata", total, avg, min, max, stdev);
 
 	seq_printf(s, "\n");
 	seq_printf(s, "item          total       avg_sz(bytes)   min_sz(bytes)   max_sz(bytes)  total_sz(bytes)\n");
diff --git a/fs/ceph/metric.c b/fs/ceph/metric.c
index 226dc38e2909..6b774b1a88ce 100644
--- a/fs/ceph/metric.c
+++ b/fs/ceph/metric.c
@@ -244,7 +244,8 @@ int ceph_metric_init(struct ceph_client_metric *m)
 		goto err_i_caps_mis;
 
 	spin_lock_init(&m->read_metric_lock);
-	m->read_latency_sq_sum = 0;
+	m->read_latency_stdev = 0;
+	m->avg_read_latency = 0;
 	m->read_latency_min = KTIME_MAX;
 	m->read_latency_max = 0;
 	m->total_reads = 0;
@@ -254,7 +255,8 @@ int ceph_metric_init(struct ceph_client_metric *m)
 	m->read_size_sum = 0;
 
 	spin_lock_init(&m->write_metric_lock);
-	m->write_latency_sq_sum = 0;
+	m->write_latency_stdev = 0;
+	m->avg_write_latency = 0;
 	m->write_latency_min = KTIME_MAX;
 	m->write_latency_max = 0;
 	m->total_writes = 0;
@@ -264,7 +266,8 @@ int ceph_metric_init(struct ceph_client_metric *m)
 	m->write_size_sum = 0;
 
 	spin_lock_init(&m->metadata_metric_lock);
-	m->metadata_latency_sq_sum = 0;
+	m->metadata_latency_stdev = 0;
+	m->avg_metadata_latency = 0;
 	m->metadata_latency_min = KTIME_MAX;
 	m->metadata_latency_max = 0;
 	m->total_metadatas = 0;
@@ -322,20 +325,26 @@ void ceph_metric_destroy(struct ceph_client_metric *m)
 		max = new;			\
 }
 
-static inline void __update_stdev(ktime_t total, ktime_t lsum,
-				  ktime_t *sq_sump, ktime_t lat)
+static inline void __update_latency(ktime_t *ctotal, ktime_t *lsum,
+				    ktime_t *lavg, ktime_t *min, ktime_t *max,
+				    ktime_t *lstdev, ktime_t lat)
 {
-	ktime_t avg, sq;
+	ktime_t total, avg, stdev;
 
-	if (unlikely(total == 1))
-		return;
+	total = ++(*ctotal);
+	*lsum += lat;
+
+	METRIC_UPDATE_MIN_MAX(*min, *max, lat);
 
-	/* the sq is (lat - old_avg) * (lat - new_avg) */
-	avg = DIV64_U64_ROUND_CLOSEST((lsum - lat), (total - 1));
-	sq = lat - avg;
-	avg = DIV64_U64_ROUND_CLOSEST(lsum, total);
-	sq = sq * (lat - avg);
-	*sq_sump += sq;
+	if (unlikely(total == 1)) {
+		*lavg = lat;
+		*lstdev = 0;
+	} else {
+		avg = *lavg + div64_s64(lat - *lavg, total);
+		stdev = *lstdev + (lat - *lavg)*(lat - avg);
+		*lstdev = int_sqrt(div64_u64(stdev, total - 1));
+		*lavg = avg;
+	}
 }
 
 void ceph_update_read_metrics(struct ceph_client_metric *m,
@@ -343,23 +352,18 @@ void ceph_update_read_metrics(struct ceph_client_metric *m,
 			      unsigned int size, int rc)
 {
 	ktime_t lat = ktime_sub(r_end, r_start);
-	ktime_t total;
 
 	if (unlikely(rc < 0 && rc != -ENOENT && rc != -ETIMEDOUT))
 		return;
 
 	spin_lock(&m->read_metric_lock);
-	total = ++m->total_reads;
 	m->read_size_sum += size;
-	m->read_latency_sum += lat;
 	METRIC_UPDATE_MIN_MAX(m->read_size_min,
 			      m->read_size_max,
 			      size);
-	METRIC_UPDATE_MIN_MAX(m->read_latency_min,
-			      m->read_latency_max,
-			      lat);
-	__update_stdev(total, m->read_latency_sum,
-		       &m->read_latency_sq_sum, lat);
+	__update_latency(&m->total_reads, &m->read_latency_sum,
+			 &m->avg_read_latency, &m->read_latency_min,
+			 &m->read_latency_max, &m->read_latency_stdev, lat);
 	spin_unlock(&m->read_metric_lock);
 }
 
@@ -368,23 +372,18 @@ void ceph_update_write_metrics(struct ceph_client_metric *m,
 			       unsigned int size, int rc)
 {
 	ktime_t lat = ktime_sub(r_end, r_start);
-	ktime_t total;
 
 	if (unlikely(rc && rc != -ETIMEDOUT))
 		return;
 
 	spin_lock(&m->write_metric_lock);
-	total = ++m->total_writes;
 	m->write_size_sum += size;
-	m->write_latency_sum += lat;
 	METRIC_UPDATE_MIN_MAX(m->write_size_min,
 			      m->write_size_max,
 			      size);
-	METRIC_UPDATE_MIN_MAX(m->write_latency_min,
-			      m->write_latency_max,
-			      lat);
-	__update_stdev(total, m->write_latency_sum,
-		       &m->write_latency_sq_sum, lat);
+	__update_latency(&m->total_writes, &m->write_latency_sum,
+			 &m->avg_write_latency, &m->write_latency_min,
+			 &m->write_latency_max, &m->write_latency_stdev, lat);
 	spin_unlock(&m->write_metric_lock);
 }
 
@@ -393,18 +392,13 @@ void ceph_update_metadata_metrics(struct ceph_client_metric *m,
 				  int rc)
 {
 	ktime_t lat = ktime_sub(r_end, r_start);
-	ktime_t total;
 
 	if (unlikely(rc && rc != -ENOENT))
 		return;
 
 	spin_lock(&m->metadata_metric_lock);
-	total = ++m->total_metadatas;
-	m->metadata_latency_sum += lat;
-	METRIC_UPDATE_MIN_MAX(m->metadata_latency_min,
-			      m->metadata_latency_max,
-			      lat);
-	__update_stdev(total, m->metadata_latency_sum,
-		       &m->metadata_latency_sq_sum, lat);
+	__update_latency(&m->total_metadatas, &m->metadata_latency_sum,
+			 &m->avg_metadata_latency, &m->metadata_latency_min,
+			 &m->metadata_latency_max, &m->metadata_latency_stdev, lat);
 	spin_unlock(&m->metadata_metric_lock);
 }
diff --git a/fs/ceph/metric.h b/fs/ceph/metric.h
index 103ed736f9d2..a5da21b8f8ed 100644
--- a/fs/ceph/metric.h
+++ b/fs/ceph/metric.h
@@ -138,7 +138,8 @@ struct ceph_client_metric {
 	u64 read_size_min;
 	u64 read_size_max;
 	ktime_t read_latency_sum;
-	ktime_t read_latency_sq_sum;
+	ktime_t avg_read_latency;
+	ktime_t read_latency_stdev;
 	ktime_t read_latency_min;
 	ktime_t read_latency_max;
 
@@ -148,14 +149,16 @@ struct ceph_client_metric {
 	u64 write_size_min;
 	u64 write_size_max;
 	ktime_t write_latency_sum;
-	ktime_t write_latency_sq_sum;
+	ktime_t avg_write_latency;
+	ktime_t write_latency_stdev;
 	ktime_t write_latency_min;
 	ktime_t write_latency_max;
 
 	spinlock_t metadata_metric_lock;
 	u64 total_metadatas;
 	ktime_t metadata_latency_sum;
-	ktime_t metadata_latency_sq_sum;
+	ktime_t avg_metadata_latency;
+	ktime_t metadata_latency_stdev;
 	ktime_t metadata_latency_min;
 	ktime_t metadata_latency_max;
 
-- 
2.31.1



* [PATCH v2 3/4] ceph: include average/stddev r/w/m latency in mds metrics
  2021-09-14  8:48 [PATCH v2 0/4] ceph: forward average read/write/metadata latency Venky Shankar
  2021-09-14  8:48 ` [PATCH v2 1/4] ceph: use "struct ceph_timespec" for r/w/m latencies Venky Shankar
  2021-09-14  8:49 ` [PATCH v2 2/4] ceph: track average/stdev r/w/m latency Venky Shankar
@ 2021-09-14  8:49 ` Venky Shankar
  2021-09-14 13:57   ` Xiubo Li
  2021-09-14  8:49 ` [PATCH v2 4/4] ceph: use tracked average r/w/m latencies to display metrics in debugfs Venky Shankar
  3 siblings, 1 reply; 19+ messages in thread
From: Venky Shankar @ 2021-09-14  8:49 UTC (permalink / raw)
  To: jlayton, pdonnell, xiubli; +Cc: ceph-devel, Venky Shankar

Include the average and stdev r/w/m latencies in the metrics
forwarded to the MDS, bumping each latency payload version to 2.

Also, the use of `jiffies_to_timespec64()` is incorrect since the
latencies are tracked as ktime_t values; switch that to
`ktime_to_timespec64()`.

Signed-off-by: Venky Shankar <vshankar@redhat.com>
---
 fs/ceph/metric.c | 35 +++++++++++++++++++----------------
 fs/ceph/metric.h | 48 +++++++++++++++++++++++++++++++++---------------
 2 files changed, 52 insertions(+), 31 deletions(-)

diff --git a/fs/ceph/metric.c b/fs/ceph/metric.c
index 6b774b1a88ce..78a50bb7bd0f 100644
--- a/fs/ceph/metric.c
+++ b/fs/ceph/metric.c
@@ -8,6 +8,13 @@
 #include "metric.h"
 #include "mds_client.h"
 
+static void to_ceph_timespec(struct ceph_timespec *ts, ktime_t val)
+{
+	struct timespec64 t = ktime_to_timespec64(val);
+	ts->tv_sec = cpu_to_le32(t.tv_sec);
+	ts->tv_nsec = cpu_to_le32(t.tv_nsec);
+}
+
 static bool ceph_mdsc_send_metrics(struct ceph_mds_client *mdsc,
 				   struct ceph_mds_session *s)
 {
@@ -26,7 +33,6 @@ static bool ceph_mdsc_send_metrics(struct ceph_mds_client *mdsc,
 	u64 nr_caps = atomic64_read(&m->total_caps);
 	u32 header_len = sizeof(struct ceph_metric_header);
 	struct ceph_msg *msg;
-	struct timespec64 ts;
 	s64 sum;
 	s32 items = 0;
 	s32 len;
@@ -59,37 +65,34 @@ static bool ceph_mdsc_send_metrics(struct ceph_mds_client *mdsc,
 	/* encode the read latency metric */
 	read = (struct ceph_metric_read_latency *)(cap + 1);
 	read->header.type = cpu_to_le32(CLIENT_METRIC_TYPE_READ_LATENCY);
-	read->header.ver = 1;
+	read->header.ver = 2;
 	read->header.compat = 1;
 	read->header.data_len = cpu_to_le32(sizeof(*read) - header_len);
-	sum = m->read_latency_sum;
-	jiffies_to_timespec64(sum, &ts);
-	read->lat.tv_sec = cpu_to_le32(ts.tv_sec);
-	read->lat.tv_nsec = cpu_to_le32(ts.tv_nsec);
+	to_ceph_timespec(&read->lat, m->read_latency_sum);
+	to_ceph_timespec(&read->avg, m->avg_read_latency);
+	to_ceph_timespec(&read->stdev, m->read_latency_stdev);
 	items++;
 
 	/* encode the write latency metric */
 	write = (struct ceph_metric_write_latency *)(read + 1);
 	write->header.type = cpu_to_le32(CLIENT_METRIC_TYPE_WRITE_LATENCY);
-	write->header.ver = 1;
+	write->header.ver = 2;
 	write->header.compat = 1;
 	write->header.data_len = cpu_to_le32(sizeof(*write) - header_len);
-	sum = m->write_latency_sum;
-	jiffies_to_timespec64(sum, &ts);
-	write->lat.tv_sec = cpu_to_le32(ts.tv_sec);
-	write->lat.tv_nsec = cpu_to_le32(ts.tv_nsec);
+	to_ceph_timespec(&write->lat, m->write_latency_sum);
+	to_ceph_timespec(&write->avg, m->avg_write_latency);
+	to_ceph_timespec(&write->stdev, m->write_latency_stdev);
 	items++;
 
 	/* encode the metadata latency metric */
 	meta = (struct ceph_metric_metadata_latency *)(write + 1);
 	meta->header.type = cpu_to_le32(CLIENT_METRIC_TYPE_METADATA_LATENCY);
-	meta->header.ver = 1;
+	meta->header.ver = 2;
 	meta->header.compat = 1;
 	meta->header.data_len = cpu_to_le32(sizeof(*meta) - header_len);
-	sum = m->metadata_latency_sum;
-	jiffies_to_timespec64(sum, &ts);
-	meta->lat.tv_sec = cpu_to_le32(ts.tv_sec);
-	meta->lat.tv_nsec = cpu_to_le32(ts.tv_nsec);
+	to_ceph_timespec(&meta->lat, m->metadata_latency_sum);
+	to_ceph_timespec(&meta->avg, m->avg_metadata_latency);
+	to_ceph_timespec(&meta->stdev, m->metadata_latency_stdev);
 	items++;
 
 	/* encode the dentry lease metric */
diff --git a/fs/ceph/metric.h b/fs/ceph/metric.h
index a5da21b8f8ed..2dd506dedebf 100644
--- a/fs/ceph/metric.h
+++ b/fs/ceph/metric.h
@@ -19,27 +19,39 @@ enum ceph_metric_type {
 	CLIENT_METRIC_TYPE_OPENED_INODES,
 	CLIENT_METRIC_TYPE_READ_IO_SIZES,
 	CLIENT_METRIC_TYPE_WRITE_IO_SIZES,
-
-	CLIENT_METRIC_TYPE_MAX = CLIENT_METRIC_TYPE_WRITE_IO_SIZES,
+	CLIENT_METRIC_TYPE_AVG_READ_LATENCY,
+	CLIENT_METRIC_TYPE_STDEV_READ_LATENCY,
+	CLIENT_METRIC_TYPE_AVG_WRITE_LATENCY,
+	CLIENT_METRIC_TYPE_STDEV_WRITE_LATENCY,
+	CLIENT_METRIC_TYPE_AVG_METADATA_LATENCY,
+	CLIENT_METRIC_TYPE_STDEV_METADATA_LATENCY,
+
+	CLIENT_METRIC_TYPE_MAX = CLIENT_METRIC_TYPE_STDEV_METADATA_LATENCY,
 };
 
 /*
  * This will always have the highest metric bit value
  * as the last element of the array.
  */
-#define CEPHFS_METRIC_SPEC_CLIENT_SUPPORTED {	\
-	CLIENT_METRIC_TYPE_CAP_INFO,		\
-	CLIENT_METRIC_TYPE_READ_LATENCY,	\
-	CLIENT_METRIC_TYPE_WRITE_LATENCY,	\
-	CLIENT_METRIC_TYPE_METADATA_LATENCY,	\
-	CLIENT_METRIC_TYPE_DENTRY_LEASE,	\
-	CLIENT_METRIC_TYPE_OPENED_FILES,	\
-	CLIENT_METRIC_TYPE_PINNED_ICAPS,	\
-	CLIENT_METRIC_TYPE_OPENED_INODES,	\
-	CLIENT_METRIC_TYPE_READ_IO_SIZES,	\
-	CLIENT_METRIC_TYPE_WRITE_IO_SIZES,	\
-						\
-	CLIENT_METRIC_TYPE_MAX,			\
+#define CEPHFS_METRIC_SPEC_CLIENT_SUPPORTED {	    \
+	CLIENT_METRIC_TYPE_CAP_INFO,		    \
+	CLIENT_METRIC_TYPE_READ_LATENCY,	    \
+	CLIENT_METRIC_TYPE_WRITE_LATENCY,	    \
+	CLIENT_METRIC_TYPE_METADATA_LATENCY,	    \
+	CLIENT_METRIC_TYPE_DENTRY_LEASE,	    \
+	CLIENT_METRIC_TYPE_OPENED_FILES,	    \
+	CLIENT_METRIC_TYPE_PINNED_ICAPS,	    \
+	CLIENT_METRIC_TYPE_OPENED_INODES,	    \
+	CLIENT_METRIC_TYPE_READ_IO_SIZES,	    \
+	CLIENT_METRIC_TYPE_WRITE_IO_SIZES,	    \
+	CLIENT_METRIC_TYPE_AVG_READ_LATENCY,	    \
+	CLIENT_METRIC_TYPE_STDEV_READ_LATENCY,	    \
+	CLIENT_METRIC_TYPE_AVG_WRITE_LATENCY,	    \
+	CLIENT_METRIC_TYPE_STDEV_WRITE_LATENCY,	    \
+	CLIENT_METRIC_TYPE_AVG_METADATA_LATENCY,    \
+	CLIENT_METRIC_TYPE_STDEV_METADATA_LATENCY,  \
+						    \
+	CLIENT_METRIC_TYPE_MAX,			    \
 }
 
 struct ceph_metric_header {
@@ -61,18 +73,24 @@ struct ceph_metric_cap {
 struct ceph_metric_read_latency {
 	struct ceph_metric_header header;
 	struct ceph_timespec lat;
+	struct ceph_timespec avg;
+	struct ceph_timespec stdev;
 } __packed;
 
 /* metric write latency header */
 struct ceph_metric_write_latency {
 	struct ceph_metric_header header;
 	struct ceph_timespec lat;
+	struct ceph_timespec avg;
+	struct ceph_timespec stdev;
 } __packed;
 
 /* metric metadata latency header */
 struct ceph_metric_metadata_latency {
 	struct ceph_metric_header header;
 	struct ceph_timespec lat;
+	struct ceph_timespec avg;
+	struct ceph_timespec stdev;
 } __packed;
 
 /* metric dentry lease header */
-- 
2.31.1



* [PATCH v2 4/4] ceph: use tracked average r/w/m latencies to display metrics in debugfs
  2021-09-14  8:48 [PATCH v2 0/4] ceph: forward average read/write/metadata latency Venky Shankar
                   ` (2 preceding siblings ...)
  2021-09-14  8:49 ` [PATCH v2 3/4] ceph: include average/stddev r/w/m latency in mds metrics Venky Shankar
@ 2021-09-14  8:49 ` Venky Shankar
  3 siblings, 0 replies; 19+ messages in thread
From: Venky Shankar @ 2021-09-14  8:49 UTC (permalink / raw)
  To: jlayton, pdonnell, xiubli; +Cc: ceph-devel, Venky Shankar

Now that the average r/w/m latencies are tracked on each update,
use them directly in debugfs instead of recomputing the average
from the latency sum.

Signed-off-by: Venky Shankar <vshankar@redhat.com>
---
 fs/ceph/debugfs.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/fs/ceph/debugfs.c b/fs/ceph/debugfs.c
index 3abfa7ae8220..970aa04fb04d 100644
--- a/fs/ceph/debugfs.c
+++ b/fs/ceph/debugfs.c
@@ -172,7 +172,7 @@ static int metric_show(struct seq_file *s, void *p)
 	spin_lock(&m->read_metric_lock);
 	total = m->total_reads;
 	sum = m->read_latency_sum;
-	avg = total > 0 ? DIV64_U64_ROUND_CLOSEST(sum, total) : 0;
+	avg = m->avg_read_latency;
 	min = m->read_latency_min;
 	max = m->read_latency_max;
 	stdev = m->read_latency_stdev;
@@ -182,7 +182,7 @@ static int metric_show(struct seq_file *s, void *p)
 	spin_lock(&m->write_metric_lock);
 	total = m->total_writes;
 	sum = m->write_latency_sum;
-	avg = total > 0 ? DIV64_U64_ROUND_CLOSEST(sum, total) : 0;
+	avg = m->avg_write_latency;
 	min = m->write_latency_min;
 	max = m->write_latency_max;
 	stdev = m->write_latency_stdev;
@@ -192,7 +192,7 @@ static int metric_show(struct seq_file *s, void *p)
 	spin_lock(&m->metadata_metric_lock);
 	total = m->total_metadatas;
 	sum = m->metadata_latency_sum;
-	avg = total > 0 ? DIV64_U64_ROUND_CLOSEST(sum, total) : 0;
+	avg = m->avg_metadata_latency;
 	min = m->metadata_latency_min;
 	max = m->metadata_latency_max;
 	stdev = m->metadata_latency_stdev;
-- 
2.31.1



* Re: [PATCH v2 2/4] ceph: track average/stdev r/w/m latency
  2021-09-14  8:49 ` [PATCH v2 2/4] ceph: track average/stdev r/w/m latency Venky Shankar
@ 2021-09-14 12:52   ` Xiubo Li
  2021-09-14 13:03     ` Venky Shankar
  2021-09-14 13:09   ` Xiubo Li
  2021-09-14 13:13   ` Xiubo Li
  2 siblings, 1 reply; 19+ messages in thread
From: Xiubo Li @ 2021-09-14 12:52 UTC (permalink / raw)
  To: Venky Shankar, jlayton, pdonnell; +Cc: ceph-devel


On 9/14/21 4:49 PM, Venky Shankar wrote:
> The math involved in tracking average and standard deviation
> for r/w/m latencies looks incorrect. Fix that up. Also, change
> the variable name that tracks standard deviation (*_sq_sum) to
> *_stdev.
>
> Signed-off-by: Venky Shankar <vshankar@redhat.com>
> ---
>   fs/ceph/debugfs.c | 14 +++++-----
>   fs/ceph/metric.c  | 70 ++++++++++++++++++++++-------------------------
>   fs/ceph/metric.h  |  9 ++++--
>   3 files changed, 45 insertions(+), 48 deletions(-)
>
> diff --git a/fs/ceph/debugfs.c b/fs/ceph/debugfs.c
> index 38b78b45811f..3abfa7ae8220 100644
> --- a/fs/ceph/debugfs.c
> +++ b/fs/ceph/debugfs.c
> @@ -152,7 +152,7 @@ static int metric_show(struct seq_file *s, void *p)
>   	struct ceph_mds_client *mdsc = fsc->mdsc;
>   	struct ceph_client_metric *m = &mdsc->metric;
>   	int nr_caps = 0;
> -	s64 total, sum, avg, min, max, sq;
> +	s64 total, sum, avg, min, max, stdev;
>   	u64 sum_sz, avg_sz, min_sz, max_sz;
>   
>   	sum = percpu_counter_sum(&m->total_inodes);
> @@ -175,9 +175,9 @@ static int metric_show(struct seq_file *s, void *p)
>   	avg = total > 0 ? DIV64_U64_ROUND_CLOSEST(sum, total) : 0;
>   	min = m->read_latency_min;
>   	max = m->read_latency_max;
> -	sq = m->read_latency_sq_sum;
> +	stdev = m->read_latency_stdev;
>   	spin_unlock(&m->read_metric_lock);
> -	CEPH_LAT_METRIC_SHOW("read", total, avg, min, max, sq);
> +	CEPH_LAT_METRIC_SHOW("read", total, avg, min, max, stdev);
>   
>   	spin_lock(&m->write_metric_lock);
>   	total = m->total_writes;
> @@ -185,9 +185,9 @@ static int metric_show(struct seq_file *s, void *p)
>   	avg = total > 0 ? DIV64_U64_ROUND_CLOSEST(sum, total) : 0;
>   	min = m->write_latency_min;
>   	max = m->write_latency_max;
> -	sq = m->write_latency_sq_sum;
> +	stdev = m->write_latency_stdev;
>   	spin_unlock(&m->write_metric_lock);
> -	CEPH_LAT_METRIC_SHOW("write", total, avg, min, max, sq);
> +	CEPH_LAT_METRIC_SHOW("write", total, avg, min, max, stdev);

Hi Venky,

Sorry I missed your v1 patch set.

Previously, "sq_sum" just accumulated the squared deviations, and
the stdev was computed only in CEPH_LAT_METRIC_SHOW() when showing
the metrics in the debug file.

So with this patch I think you also need to fix
CEPH_LAT_METRIC_SHOW() so that the stdev isn't computed twice?

Thanks.

>   
>   	spin_lock(&m->metadata_metric_lock);
>   	total = m->total_metadatas;
> @@ -195,9 +195,9 @@ static int metric_show(struct seq_file *s, void *p)
>   	avg = total > 0 ? DIV64_U64_ROUND_CLOSEST(sum, total) : 0;
>   	min = m->metadata_latency_min;
>   	max = m->metadata_latency_max;
> -	sq = m->metadata_latency_sq_sum;
> +	stdev = m->metadata_latency_stdev;
>   	spin_unlock(&m->metadata_metric_lock);
> -	CEPH_LAT_METRIC_SHOW("metadata", total, avg, min, max, sq);
> +	CEPH_LAT_METRIC_SHOW("metadata", total, avg, min, max, stdev);
>   
>   	seq_printf(s, "\n");
>   	seq_printf(s, "item          total       avg_sz(bytes)   min_sz(bytes)   max_sz(bytes)  total_sz(bytes)\n");
> diff --git a/fs/ceph/metric.c b/fs/ceph/metric.c
> index 226dc38e2909..6b774b1a88ce 100644
> --- a/fs/ceph/metric.c
> +++ b/fs/ceph/metric.c
> @@ -244,7 +244,8 @@ int ceph_metric_init(struct ceph_client_metric *m)
>   		goto err_i_caps_mis;
>   
>   	spin_lock_init(&m->read_metric_lock);
> -	m->read_latency_sq_sum = 0;
> +	m->read_latency_stdev = 0;
> +	m->avg_read_latency = 0;
>   	m->read_latency_min = KTIME_MAX;
>   	m->read_latency_max = 0;
>   	m->total_reads = 0;
> @@ -254,7 +255,8 @@ int ceph_metric_init(struct ceph_client_metric *m)
>   	m->read_size_sum = 0;
>   
>   	spin_lock_init(&m->write_metric_lock);
> -	m->write_latency_sq_sum = 0;
> +	m->write_latency_stdev = 0;
> +	m->avg_write_latency = 0;
>   	m->write_latency_min = KTIME_MAX;
>   	m->write_latency_max = 0;
>   	m->total_writes = 0;
> @@ -264,7 +266,8 @@ int ceph_metric_init(struct ceph_client_metric *m)
>   	m->write_size_sum = 0;
>   
>   	spin_lock_init(&m->metadata_metric_lock);
> -	m->metadata_latency_sq_sum = 0;
> +	m->metadata_latency_stdev = 0;
> +	m->avg_metadata_latency = 0;
>   	m->metadata_latency_min = KTIME_MAX;
>   	m->metadata_latency_max = 0;
>   	m->total_metadatas = 0;
> @@ -322,20 +325,26 @@ void ceph_metric_destroy(struct ceph_client_metric *m)
>   		max = new;			\
>   }
>   
> -static inline void __update_stdev(ktime_t total, ktime_t lsum,
> -				  ktime_t *sq_sump, ktime_t lat)
> +static inline void __update_latency(ktime_t *ctotal, ktime_t *lsum,
> +				    ktime_t *lavg, ktime_t *min, ktime_t *max,
> +				    ktime_t *lstdev, ktime_t lat)
>   {
> -	ktime_t avg, sq;
> +	ktime_t total, avg, stdev;
>   
> -	if (unlikely(total == 1))
> -		return;
> +	total = ++(*ctotal);
> +	*lsum += lat;
> +
> +	METRIC_UPDATE_MIN_MAX(*min, *max, lat);
>   
> -	/* the sq is (lat - old_avg) * (lat - new_avg) */
> -	avg = DIV64_U64_ROUND_CLOSEST((lsum - lat), (total - 1));
> -	sq = lat - avg;
> -	avg = DIV64_U64_ROUND_CLOSEST(lsum, total);
> -	sq = sq * (lat - avg);
> -	*sq_sump += sq;
> +	if (unlikely(total == 1)) {
> +		*lavg = lat;
> +		*lstdev = 0;
> +	} else {
> +		avg = *lavg + div64_s64(lat - *lavg, total);
> +		stdev = *lstdev + (lat - *lavg)*(lat - avg);
> +		*lstdev = int_sqrt(div64_u64(stdev, total - 1));
> +		*lavg = avg;
> +	}
>   }
>   
>   void ceph_update_read_metrics(struct ceph_client_metric *m,
> @@ -343,23 +352,18 @@ void ceph_update_read_metrics(struct ceph_client_metric *m,
>   			      unsigned int size, int rc)
>   {
>   	ktime_t lat = ktime_sub(r_end, r_start);
> -	ktime_t total;
>   
>   	if (unlikely(rc < 0 && rc != -ENOENT && rc != -ETIMEDOUT))
>   		return;
>   
>   	spin_lock(&m->read_metric_lock);
> -	total = ++m->total_reads;
>   	m->read_size_sum += size;
> -	m->read_latency_sum += lat;
>   	METRIC_UPDATE_MIN_MAX(m->read_size_min,
>   			      m->read_size_max,
>   			      size);
> -	METRIC_UPDATE_MIN_MAX(m->read_latency_min,
> -			      m->read_latency_max,
> -			      lat);
> -	__update_stdev(total, m->read_latency_sum,
> -		       &m->read_latency_sq_sum, lat);
> +	__update_latency(&m->total_reads, &m->read_latency_sum,
> +			 &m->avg_read_latency, &m->read_latency_min,
> +			 &m->read_latency_max, &m->read_latency_stdev, lat);
>   	spin_unlock(&m->read_metric_lock);
>   }
>   
> @@ -368,23 +372,18 @@ void ceph_update_write_metrics(struct ceph_client_metric *m,
>   			       unsigned int size, int rc)
>   {
>   	ktime_t lat = ktime_sub(r_end, r_start);
> -	ktime_t total;
>   
>   	if (unlikely(rc && rc != -ETIMEDOUT))
>   		return;
>   
>   	spin_lock(&m->write_metric_lock);
> -	total = ++m->total_writes;
>   	m->write_size_sum += size;
> -	m->write_latency_sum += lat;
>   	METRIC_UPDATE_MIN_MAX(m->write_size_min,
>   			      m->write_size_max,
>   			      size);
> -	METRIC_UPDATE_MIN_MAX(m->write_latency_min,
> -			      m->write_latency_max,
> -			      lat);
> -	__update_stdev(total, m->write_latency_sum,
> -		       &m->write_latency_sq_sum, lat);
> +	__update_latency(&m->total_writes, &m->write_latency_sum,
> +			 &m->avg_write_latency, &m->write_latency_min,
> +			 &m->write_latency_max, &m->write_latency_stdev, lat);
>   	spin_unlock(&m->write_metric_lock);
>   }
>   
> @@ -393,18 +392,13 @@ void ceph_update_metadata_metrics(struct ceph_client_metric *m,
>   				  int rc)
>   {
>   	ktime_t lat = ktime_sub(r_end, r_start);
> -	ktime_t total;
>   
>   	if (unlikely(rc && rc != -ENOENT))
>   		return;
>   
>   	spin_lock(&m->metadata_metric_lock);
> -	total = ++m->total_metadatas;
> -	m->metadata_latency_sum += lat;
> -	METRIC_UPDATE_MIN_MAX(m->metadata_latency_min,
> -			      m->metadata_latency_max,
> -			      lat);
> -	__update_stdev(total, m->metadata_latency_sum,
> -		       &m->metadata_latency_sq_sum, lat);
> +	__update_latency(&m->total_metadatas, &m->metadata_latency_sum,
> +			 &m->avg_metadata_latency, &m->metadata_latency_min,
> +			 &m->metadata_latency_max, &m->metadata_latency_stdev, lat);
>   	spin_unlock(&m->metadata_metric_lock);
>   }
> diff --git a/fs/ceph/metric.h b/fs/ceph/metric.h
> index 103ed736f9d2..a5da21b8f8ed 100644
> --- a/fs/ceph/metric.h
> +++ b/fs/ceph/metric.h
> @@ -138,7 +138,8 @@ struct ceph_client_metric {
>   	u64 read_size_min;
>   	u64 read_size_max;
>   	ktime_t read_latency_sum;
> -	ktime_t read_latency_sq_sum;
> +	ktime_t avg_read_latency;
> +	ktime_t read_latency_stdev;
>   	ktime_t read_latency_min;
>   	ktime_t read_latency_max;
>   
> @@ -148,14 +149,16 @@ struct ceph_client_metric {
>   	u64 write_size_min;
>   	u64 write_size_max;
>   	ktime_t write_latency_sum;
> -	ktime_t write_latency_sq_sum;
> +	ktime_t avg_write_latency;
> +	ktime_t write_latency_stdev;
>   	ktime_t write_latency_min;
>   	ktime_t write_latency_max;
>   
>   	spinlock_t metadata_metric_lock;
>   	u64 total_metadatas;
>   	ktime_t metadata_latency_sum;
> -	ktime_t metadata_latency_sq_sum;
> +	ktime_t avg_metadata_latency;
> +	ktime_t metadata_latency_stdev;
>   	ktime_t metadata_latency_min;
>   	ktime_t metadata_latency_max;
>   


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 2/4] ceph: track average/stdev r/w/m latency
  2021-09-14 12:52   ` Xiubo Li
@ 2021-09-14 13:03     ` Venky Shankar
  0 siblings, 0 replies; 19+ messages in thread
From: Venky Shankar @ 2021-09-14 13:03 UTC (permalink / raw)
  To: Xiubo Li; +Cc: Jeff Layton, Patrick Donnelly, ceph-devel

On Tue, Sep 14, 2021 at 6:23 PM Xiubo Li <xiubli@redhat.com> wrote:
>
>
> On 9/14/21 4:49 PM, Venky Shankar wrote:
> > The math involved in tracking average and standard deviation
> > for r/w/m latencies looks incorrect. Fix that up. Also, change
> > the variable name that tracks standard deviation (*_sq_sum) to
> > *_stdev.
> >
> > Signed-off-by: Venky Shankar <vshankar@redhat.com>
> > ---
> >   fs/ceph/debugfs.c | 14 +++++-----
> >   fs/ceph/metric.c  | 70 ++++++++++++++++++++++-------------------------
> >   fs/ceph/metric.h  |  9 ++++--
> >   3 files changed, 45 insertions(+), 48 deletions(-)
> >
> > diff --git a/fs/ceph/debugfs.c b/fs/ceph/debugfs.c
> > index 38b78b45811f..3abfa7ae8220 100644
> > --- a/fs/ceph/debugfs.c
> > +++ b/fs/ceph/debugfs.c
> > @@ -152,7 +152,7 @@ static int metric_show(struct seq_file *s, void *p)
> >       struct ceph_mds_client *mdsc = fsc->mdsc;
> >       struct ceph_client_metric *m = &mdsc->metric;
> >       int nr_caps = 0;
> > -     s64 total, sum, avg, min, max, sq;
> > +     s64 total, sum, avg, min, max, stdev;
> >       u64 sum_sz, avg_sz, min_sz, max_sz;
> >
> >       sum = percpu_counter_sum(&m->total_inodes);
> > @@ -175,9 +175,9 @@ static int metric_show(struct seq_file *s, void *p)
> >       avg = total > 0 ? DIV64_U64_ROUND_CLOSEST(sum, total) : 0;
> >       min = m->read_latency_min;
> >       max = m->read_latency_max;
> > -     sq = m->read_latency_sq_sum;
> > +     stdev = m->read_latency_stdev;
> >       spin_unlock(&m->read_metric_lock);
> > -     CEPH_LAT_METRIC_SHOW("read", total, avg, min, max, sq);
> > +     CEPH_LAT_METRIC_SHOW("read", total, avg, min, max, stdev);
> >
> >       spin_lock(&m->write_metric_lock);
> >       total = m->total_writes;
> > @@ -185,9 +185,9 @@ static int metric_show(struct seq_file *s, void *p)
> >       avg = total > 0 ? DIV64_U64_ROUND_CLOSEST(sum, total) : 0;
> >       min = m->write_latency_min;
> >       max = m->write_latency_max;
> > -     sq = m->write_latency_sq_sum;
> > +     stdev = m->write_latency_stdev;
> >       spin_unlock(&m->write_metric_lock);
> > -     CEPH_LAT_METRIC_SHOW("write", total, avg, min, max, sq);
> > +     CEPH_LAT_METRIC_SHOW("write", total, avg, min, max, stdev);
>
> Hi Venky,
>
> Sorry, I missed your V1 patch set.
>
> Previously, "sq_sum" only accumulated the sum of squares; the stdev was
> computed from it only when shown in the debugfs file, in
> CEPH_LAT_METRIC_SHOW().
> 
> So with this patch I think you also need to fix CEPH_LAT_METRIC_SHOW();
> there's no need to compute the stdev twice, right?

OK, yeah. I didn't notice that when rebasing this series onto the testing branch.

I'll remove that and resend.

>
> Thanks.
>
> >
> >       spin_lock(&m->metadata_metric_lock);
> >       total = m->total_metadatas;
> > @@ -195,9 +195,9 @@ static int metric_show(struct seq_file *s, void *p)
> >       avg = total > 0 ? DIV64_U64_ROUND_CLOSEST(sum, total) : 0;
> >       min = m->metadata_latency_min;
> >       max = m->metadata_latency_max;
> > -     sq = m->metadata_latency_sq_sum;
> > +     stdev = m->metadata_latency_stdev;
> >       spin_unlock(&m->metadata_metric_lock);
> > -     CEPH_LAT_METRIC_SHOW("metadata", total, avg, min, max, sq);
> > +     CEPH_LAT_METRIC_SHOW("metadata", total, avg, min, max, stdev);
> >
> >       seq_printf(s, "\n");
> >       seq_printf(s, "item          total       avg_sz(bytes)   min_sz(bytes)   max_sz(bytes)  total_sz(bytes)\n");
> > diff --git a/fs/ceph/metric.c b/fs/ceph/metric.c
> > index 226dc38e2909..6b774b1a88ce 100644
> > --- a/fs/ceph/metric.c
> > +++ b/fs/ceph/metric.c
> > @@ -244,7 +244,8 @@ int ceph_metric_init(struct ceph_client_metric *m)
> >               goto err_i_caps_mis;
> >
> >       spin_lock_init(&m->read_metric_lock);
> > -     m->read_latency_sq_sum = 0;
> > +     m->read_latency_stdev = 0;
> > +     m->avg_read_latency = 0;
> >       m->read_latency_min = KTIME_MAX;
> >       m->read_latency_max = 0;
> >       m->total_reads = 0;
> > @@ -254,7 +255,8 @@ int ceph_metric_init(struct ceph_client_metric *m)
> >       m->read_size_sum = 0;
> >
> >       spin_lock_init(&m->write_metric_lock);
> > -     m->write_latency_sq_sum = 0;
> > +     m->write_latency_stdev = 0;
> > +     m->avg_write_latency = 0;
> >       m->write_latency_min = KTIME_MAX;
> >       m->write_latency_max = 0;
> >       m->total_writes = 0;
> > @@ -264,7 +266,8 @@ int ceph_metric_init(struct ceph_client_metric *m)
> >       m->write_size_sum = 0;
> >
> >       spin_lock_init(&m->metadata_metric_lock);
> > -     m->metadata_latency_sq_sum = 0;
> > +     m->metadata_latency_stdev = 0;
> > +     m->avg_metadata_latency = 0;
> >       m->metadata_latency_min = KTIME_MAX;
> >       m->metadata_latency_max = 0;
> >       m->total_metadatas = 0;
> > @@ -322,20 +325,26 @@ void ceph_metric_destroy(struct ceph_client_metric *m)
> >               max = new;                      \
> >   }
> >
> > -static inline void __update_stdev(ktime_t total, ktime_t lsum,
> > -                               ktime_t *sq_sump, ktime_t lat)
> > +static inline void __update_latency(ktime_t *ctotal, ktime_t *lsum,
> > +                                 ktime_t *lavg, ktime_t *min, ktime_t *max,
> > +                                 ktime_t *lstdev, ktime_t lat)
> >   {
> > -     ktime_t avg, sq;
> > +     ktime_t total, avg, stdev;
> >
> > -     if (unlikely(total == 1))
> > -             return;
> > +     total = ++(*ctotal);
> > +     *lsum += lat;
> > +
> > +     METRIC_UPDATE_MIN_MAX(*min, *max, lat);
> >
> > -     /* the sq is (lat - old_avg) * (lat - new_avg) */
> > -     avg = DIV64_U64_ROUND_CLOSEST((lsum - lat), (total - 1));
> > -     sq = lat - avg;
> > -     avg = DIV64_U64_ROUND_CLOSEST(lsum, total);
> > -     sq = sq * (lat - avg);
> > -     *sq_sump += sq;
> > +     if (unlikely(total == 1)) {
> > +             *lavg = lat;
> > +             *lstdev = 0;
> > +     } else {
> > +             avg = *lavg + div64_s64(lat - *lavg, total);
> > +             stdev = *lstdev + (lat - *lavg)*(lat - avg);
> > +             *lstdev = int_sqrt(div64_u64(stdev, total - 1));
> > +             *lavg = avg;
> > +     }
> >   }
> >
> >   void ceph_update_read_metrics(struct ceph_client_metric *m,
> > @@ -343,23 +352,18 @@ void ceph_update_read_metrics(struct ceph_client_metric *m,
> >                             unsigned int size, int rc)
> >   {
> >       ktime_t lat = ktime_sub(r_end, r_start);
> > -     ktime_t total;
> >
> >       if (unlikely(rc < 0 && rc != -ENOENT && rc != -ETIMEDOUT))
> >               return;
> >
> >       spin_lock(&m->read_metric_lock);
> > -     total = ++m->total_reads;
> >       m->read_size_sum += size;
> > -     m->read_latency_sum += lat;
> >       METRIC_UPDATE_MIN_MAX(m->read_size_min,
> >                             m->read_size_max,
> >                             size);
> > -     METRIC_UPDATE_MIN_MAX(m->read_latency_min,
> > -                           m->read_latency_max,
> > -                           lat);
> > -     __update_stdev(total, m->read_latency_sum,
> > -                    &m->read_latency_sq_sum, lat);
> > +     __update_latency(&m->total_reads, &m->read_latency_sum,
> > +                      &m->avg_read_latency, &m->read_latency_min,
> > +                      &m->read_latency_max, &m->read_latency_stdev, lat);
> >       spin_unlock(&m->read_metric_lock);
> >   }
> >
> > @@ -368,23 +372,18 @@ void ceph_update_write_metrics(struct ceph_client_metric *m,
> >                              unsigned int size, int rc)
> >   {
> >       ktime_t lat = ktime_sub(r_end, r_start);
> > -     ktime_t total;
> >
> >       if (unlikely(rc && rc != -ETIMEDOUT))
> >               return;
> >
> >       spin_lock(&m->write_metric_lock);
> > -     total = ++m->total_writes;
> >       m->write_size_sum += size;
> > -     m->write_latency_sum += lat;
> >       METRIC_UPDATE_MIN_MAX(m->write_size_min,
> >                             m->write_size_max,
> >                             size);
> > -     METRIC_UPDATE_MIN_MAX(m->write_latency_min,
> > -                           m->write_latency_max,
> > -                           lat);
> > -     __update_stdev(total, m->write_latency_sum,
> > -                    &m->write_latency_sq_sum, lat);
> > +     __update_latency(&m->total_writes, &m->write_latency_sum,
> > +                      &m->avg_write_latency, &m->write_latency_min,
> > +                      &m->write_latency_max, &m->write_latency_stdev, lat);
> >       spin_unlock(&m->write_metric_lock);
> >   }
> >
> > @@ -393,18 +392,13 @@ void ceph_update_metadata_metrics(struct ceph_client_metric *m,
> >                                 int rc)
> >   {
> >       ktime_t lat = ktime_sub(r_end, r_start);
> > -     ktime_t total;
> >
> >       if (unlikely(rc && rc != -ENOENT))
> >               return;
> >
> >       spin_lock(&m->metadata_metric_lock);
> > -     total = ++m->total_metadatas;
> > -     m->metadata_latency_sum += lat;
> > -     METRIC_UPDATE_MIN_MAX(m->metadata_latency_min,
> > -                           m->metadata_latency_max,
> > -                           lat);
> > -     __update_stdev(total, m->metadata_latency_sum,
> > -                    &m->metadata_latency_sq_sum, lat);
> > +     __update_latency(&m->total_metadatas, &m->metadata_latency_sum,
> > +                      &m->avg_metadata_latency, &m->metadata_latency_min,
> > +                      &m->metadata_latency_max, &m->metadata_latency_stdev, lat);
> >       spin_unlock(&m->metadata_metric_lock);
> >   }
> > diff --git a/fs/ceph/metric.h b/fs/ceph/metric.h
> > index 103ed736f9d2..a5da21b8f8ed 100644
> > --- a/fs/ceph/metric.h
> > +++ b/fs/ceph/metric.h
> > @@ -138,7 +138,8 @@ struct ceph_client_metric {
> >       u64 read_size_min;
> >       u64 read_size_max;
> >       ktime_t read_latency_sum;
> > -     ktime_t read_latency_sq_sum;
> > +     ktime_t avg_read_latency;
> > +     ktime_t read_latency_stdev;
> >       ktime_t read_latency_min;
> >       ktime_t read_latency_max;
> >
> > @@ -148,14 +149,16 @@ struct ceph_client_metric {
> >       u64 write_size_min;
> >       u64 write_size_max;
> >       ktime_t write_latency_sum;
> > -     ktime_t write_latency_sq_sum;
> > +     ktime_t avg_write_latency;
> > +     ktime_t write_latency_stdev;
> >       ktime_t write_latency_min;
> >       ktime_t write_latency_max;
> >
> >       spinlock_t metadata_metric_lock;
> >       u64 total_metadatas;
> >       ktime_t metadata_latency_sum;
> > -     ktime_t metadata_latency_sq_sum;
> > +     ktime_t avg_metadata_latency;
> > +     ktime_t metadata_latency_stdev;
> >       ktime_t metadata_latency_min;
> >       ktime_t metadata_latency_max;
> >
>


-- 
Cheers,
Venky


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 2/4] ceph: track average/stdev r/w/m latency
  2021-09-14  8:49 ` [PATCH v2 2/4] ceph: track average/stdev r/w/m latency Venky Shankar
  2021-09-14 12:52   ` Xiubo Li
@ 2021-09-14 13:09   ` Xiubo Li
  2021-09-14 13:30     ` Venky Shankar
  2021-09-14 13:13   ` Xiubo Li
  2 siblings, 1 reply; 19+ messages in thread
From: Xiubo Li @ 2021-09-14 13:09 UTC (permalink / raw)
  To: Venky Shankar, jlayton, pdonnell; +Cc: ceph-devel


On 9/14/21 4:49 PM, Venky Shankar wrote:
> The math involved in tracking average and standard deviation
> for r/w/m latencies looks incorrect. Fix that up. Also, change
> the variable name that tracks standard deviation (*_sq_sum) to
> *_stdev.
>
> Signed-off-by: Venky Shankar <vshankar@redhat.com>
> ---
>   fs/ceph/debugfs.c | 14 +++++-----
>   fs/ceph/metric.c  | 70 ++++++++++++++++++++++-------------------------
>   fs/ceph/metric.h  |  9 ++++--
>   3 files changed, 45 insertions(+), 48 deletions(-)
>
> diff --git a/fs/ceph/debugfs.c b/fs/ceph/debugfs.c
> index 38b78b45811f..3abfa7ae8220 100644
> --- a/fs/ceph/debugfs.c
> +++ b/fs/ceph/debugfs.c
> @@ -152,7 +152,7 @@ static int metric_show(struct seq_file *s, void *p)
>   	struct ceph_mds_client *mdsc = fsc->mdsc;
>   	struct ceph_client_metric *m = &mdsc->metric;
>   	int nr_caps = 0;
> -	s64 total, sum, avg, min, max, sq;
> +	s64 total, sum, avg, min, max, stdev;
>   	u64 sum_sz, avg_sz, min_sz, max_sz;
>   
>   	sum = percpu_counter_sum(&m->total_inodes);
> @@ -175,9 +175,9 @@ static int metric_show(struct seq_file *s, void *p)
>   	avg = total > 0 ? DIV64_U64_ROUND_CLOSEST(sum, total) : 0;
>   	min = m->read_latency_min;
>   	max = m->read_latency_max;
> -	sq = m->read_latency_sq_sum;
> +	stdev = m->read_latency_stdev;
>   	spin_unlock(&m->read_metric_lock);
> -	CEPH_LAT_METRIC_SHOW("read", total, avg, min, max, sq);
> +	CEPH_LAT_METRIC_SHOW("read", total, avg, min, max, stdev);
>   
>   	spin_lock(&m->write_metric_lock);
>   	total = m->total_writes;
> @@ -185,9 +185,9 @@ static int metric_show(struct seq_file *s, void *p)
>   	avg = total > 0 ? DIV64_U64_ROUND_CLOSEST(sum, total) : 0;
>   	min = m->write_latency_min;
>   	max = m->write_latency_max;
> -	sq = m->write_latency_sq_sum;
> +	stdev = m->write_latency_stdev;
>   	spin_unlock(&m->write_metric_lock);
> -	CEPH_LAT_METRIC_SHOW("write", total, avg, min, max, sq);
> +	CEPH_LAT_METRIC_SHOW("write", total, avg, min, max, stdev);
>   
>   	spin_lock(&m->metadata_metric_lock);
>   	total = m->total_metadatas;
> @@ -195,9 +195,9 @@ static int metric_show(struct seq_file *s, void *p)
>   	avg = total > 0 ? DIV64_U64_ROUND_CLOSEST(sum, total) : 0;
>   	min = m->metadata_latency_min;
>   	max = m->metadata_latency_max;
> -	sq = m->metadata_latency_sq_sum;
> +	stdev = m->metadata_latency_stdev;
>   	spin_unlock(&m->metadata_metric_lock);
> -	CEPH_LAT_METRIC_SHOW("metadata", total, avg, min, max, sq);
> +	CEPH_LAT_METRIC_SHOW("metadata", total, avg, min, max, stdev);
>   
>   	seq_printf(s, "\n");
>   	seq_printf(s, "item          total       avg_sz(bytes)   min_sz(bytes)   max_sz(bytes)  total_sz(bytes)\n");
> diff --git a/fs/ceph/metric.c b/fs/ceph/metric.c
> index 226dc38e2909..6b774b1a88ce 100644
> --- a/fs/ceph/metric.c
> +++ b/fs/ceph/metric.c
> @@ -244,7 +244,8 @@ int ceph_metric_init(struct ceph_client_metric *m)
>   		goto err_i_caps_mis;
>   
>   	spin_lock_init(&m->read_metric_lock);
> -	m->read_latency_sq_sum = 0;
> +	m->read_latency_stdev = 0;
> +	m->avg_read_latency = 0;
>   	m->read_latency_min = KTIME_MAX;
>   	m->read_latency_max = 0;
>   	m->total_reads = 0;
> @@ -254,7 +255,8 @@ int ceph_metric_init(struct ceph_client_metric *m)
>   	m->read_size_sum = 0;
>   
>   	spin_lock_init(&m->write_metric_lock);
> -	m->write_latency_sq_sum = 0;
> +	m->write_latency_stdev = 0;
> +	m->avg_write_latency = 0;
>   	m->write_latency_min = KTIME_MAX;
>   	m->write_latency_max = 0;
>   	m->total_writes = 0;
> @@ -264,7 +266,8 @@ int ceph_metric_init(struct ceph_client_metric *m)
>   	m->write_size_sum = 0;
>   
>   	spin_lock_init(&m->metadata_metric_lock);
> -	m->metadata_latency_sq_sum = 0;
> +	m->metadata_latency_stdev = 0;
> +	m->avg_metadata_latency = 0;
>   	m->metadata_latency_min = KTIME_MAX;
>   	m->metadata_latency_max = 0;
>   	m->total_metadatas = 0;
> @@ -322,20 +325,26 @@ void ceph_metric_destroy(struct ceph_client_metric *m)
>   		max = new;			\
>   }
>   
> -static inline void __update_stdev(ktime_t total, ktime_t lsum,
> -				  ktime_t *sq_sump, ktime_t lat)
> +static inline void __update_latency(ktime_t *ctotal, ktime_t *lsum,
> +				    ktime_t *lavg, ktime_t *min, ktime_t *max,
> +				    ktime_t *lstdev, ktime_t lat)
>   {
> -	ktime_t avg, sq;
> +	ktime_t total, avg, stdev;
>   
> -	if (unlikely(total == 1))
> -		return;
> +	total = ++(*ctotal);
> +	*lsum += lat;
> +
> +	METRIC_UPDATE_MIN_MAX(*min, *max, lat);
>   
> -	/* the sq is (lat - old_avg) * (lat - new_avg) */
> -	avg = DIV64_U64_ROUND_CLOSEST((lsum - lat), (total - 1));
> -	sq = lat - avg;
> -	avg = DIV64_U64_ROUND_CLOSEST(lsum, total);
> -	sq = sq * (lat - avg);
> -	*sq_sump += sq;
> +	if (unlikely(total == 1)) {
> +		*lavg = lat;
> +		*lstdev = 0;
> +	} else {
> +		avg = *lavg + div64_s64(lat - *lavg, total);
> +		stdev = *lstdev + (lat - *lavg)*(lat - avg);
> +		*lstdev = int_sqrt(div64_u64(stdev, total - 1));
> +		*lavg = avg;
> +	}

IMO, this is incorrect; for the math formula, please see:

https://www.investopedia.com/ask/answers/042415/what-difference-between-standard-error-means-and-standard-deviation.asp

The most accurate result would be:

stdev = int_sqrt(sum((X(n) - avg)^2 + (X(n-1) - avg)^2 + ... + (X(1) -
avg)^2) / (n - 1))

while you are computing:

stdev_n = int_sqrt(stdev_(n-1) + (X(n-1) - avg)^2)

The current stdev computation doesn't follow the math formula exactly,
but it is a close approximation. The kernel can't record every latency
sample and recompute the stdev from scratch whenever it is needed,
since that would consume a large amount of memory and CPU.


>   }
>   
>   void ceph_update_read_metrics(struct ceph_client_metric *m,
> @@ -343,23 +352,18 @@ void ceph_update_read_metrics(struct ceph_client_metric *m,
>   			      unsigned int size, int rc)
>   {
>   	ktime_t lat = ktime_sub(r_end, r_start);
> -	ktime_t total;
>   
>   	if (unlikely(rc < 0 && rc != -ENOENT && rc != -ETIMEDOUT))
>   		return;
>   
>   	spin_lock(&m->read_metric_lock);
> -	total = ++m->total_reads;
>   	m->read_size_sum += size;
> -	m->read_latency_sum += lat;
>   	METRIC_UPDATE_MIN_MAX(m->read_size_min,
>   			      m->read_size_max,
>   			      size);
> -	METRIC_UPDATE_MIN_MAX(m->read_latency_min,
> -			      m->read_latency_max,
> -			      lat);
> -	__update_stdev(total, m->read_latency_sum,
> -		       &m->read_latency_sq_sum, lat);
> +	__update_latency(&m->total_reads, &m->read_latency_sum,
> +			 &m->avg_read_latency, &m->read_latency_min,
> +			 &m->read_latency_max, &m->read_latency_stdev, lat);
>   	spin_unlock(&m->read_metric_lock);
>   }
>   
> @@ -368,23 +372,18 @@ void ceph_update_write_metrics(struct ceph_client_metric *m,
>   			       unsigned int size, int rc)
>   {
>   	ktime_t lat = ktime_sub(r_end, r_start);
> -	ktime_t total;
>   
>   	if (unlikely(rc && rc != -ETIMEDOUT))
>   		return;
>   
>   	spin_lock(&m->write_metric_lock);
> -	total = ++m->total_writes;
>   	m->write_size_sum += size;
> -	m->write_latency_sum += lat;
>   	METRIC_UPDATE_MIN_MAX(m->write_size_min,
>   			      m->write_size_max,
>   			      size);
> -	METRIC_UPDATE_MIN_MAX(m->write_latency_min,
> -			      m->write_latency_max,
> -			      lat);
> -	__update_stdev(total, m->write_latency_sum,
> -		       &m->write_latency_sq_sum, lat);
> +	__update_latency(&m->total_writes, &m->write_latency_sum,
> +			 &m->avg_write_latency, &m->write_latency_min,
> +			 &m->write_latency_max, &m->write_latency_stdev, lat);
>   	spin_unlock(&m->write_metric_lock);
>   }
>   
> @@ -393,18 +392,13 @@ void ceph_update_metadata_metrics(struct ceph_client_metric *m,
>   				  int rc)
>   {
>   	ktime_t lat = ktime_sub(r_end, r_start);
> -	ktime_t total;
>   
>   	if (unlikely(rc && rc != -ENOENT))
>   		return;
>   
>   	spin_lock(&m->metadata_metric_lock);
> -	total = ++m->total_metadatas;
> -	m->metadata_latency_sum += lat;
> -	METRIC_UPDATE_MIN_MAX(m->metadata_latency_min,
> -			      m->metadata_latency_max,
> -			      lat);
> -	__update_stdev(total, m->metadata_latency_sum,
> -		       &m->metadata_latency_sq_sum, lat);
> +	__update_latency(&m->total_metadatas, &m->metadata_latency_sum,
> +			 &m->avg_metadata_latency, &m->metadata_latency_min,
> +			 &m->metadata_latency_max, &m->metadata_latency_stdev, lat);
>   	spin_unlock(&m->metadata_metric_lock);
>   }
> diff --git a/fs/ceph/metric.h b/fs/ceph/metric.h
> index 103ed736f9d2..a5da21b8f8ed 100644
> --- a/fs/ceph/metric.h
> +++ b/fs/ceph/metric.h
> @@ -138,7 +138,8 @@ struct ceph_client_metric {
>   	u64 read_size_min;
>   	u64 read_size_max;
>   	ktime_t read_latency_sum;
> -	ktime_t read_latency_sq_sum;
> +	ktime_t avg_read_latency;
> +	ktime_t read_latency_stdev;
>   	ktime_t read_latency_min;
>   	ktime_t read_latency_max;
>   
> @@ -148,14 +149,16 @@ struct ceph_client_metric {
>   	u64 write_size_min;
>   	u64 write_size_max;
>   	ktime_t write_latency_sum;
> -	ktime_t write_latency_sq_sum;
> +	ktime_t avg_write_latency;
> +	ktime_t write_latency_stdev;
>   	ktime_t write_latency_min;
>   	ktime_t write_latency_max;
>   
>   	spinlock_t metadata_metric_lock;
>   	u64 total_metadatas;
>   	ktime_t metadata_latency_sum;
> -	ktime_t metadata_latency_sq_sum;
> +	ktime_t avg_metadata_latency;
> +	ktime_t metadata_latency_stdev;
>   	ktime_t metadata_latency_min;
>   	ktime_t metadata_latency_max;
>   


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 2/4] ceph: track average/stdev r/w/m latency
  2021-09-14  8:49 ` [PATCH v2 2/4] ceph: track average/stdev r/w/m latency Venky Shankar
  2021-09-14 12:52   ` Xiubo Li
  2021-09-14 13:09   ` Xiubo Li
@ 2021-09-14 13:13   ` Xiubo Li
  2021-09-14 13:32     ` Jeff Layton
  2021-09-14 13:32     ` Venky Shankar
  2 siblings, 2 replies; 19+ messages in thread
From: Xiubo Li @ 2021-09-14 13:13 UTC (permalink / raw)
  To: Venky Shankar, jlayton, pdonnell; +Cc: ceph-devel


On 9/14/21 4:49 PM, Venky Shankar wrote:
> The math involved in tracking average and standard deviation
> for r/w/m latencies looks incorrect. Fix that up. Also, change
> the variable name that tracks standard deviation (*_sq_sum) to
> *_stdev.
>
> Signed-off-by: Venky Shankar <vshankar@redhat.com>
> ---
>   fs/ceph/debugfs.c | 14 +++++-----
>   fs/ceph/metric.c  | 70 ++++++++++++++++++++++-------------------------
>   fs/ceph/metric.h  |  9 ++++--
>   3 files changed, 45 insertions(+), 48 deletions(-)
>
> diff --git a/fs/ceph/debugfs.c b/fs/ceph/debugfs.c
> index 38b78b45811f..3abfa7ae8220 100644
> --- a/fs/ceph/debugfs.c
> +++ b/fs/ceph/debugfs.c
> @@ -152,7 +152,7 @@ static int metric_show(struct seq_file *s, void *p)
>   	struct ceph_mds_client *mdsc = fsc->mdsc;
>   	struct ceph_client_metric *m = &mdsc->metric;
>   	int nr_caps = 0;
> -	s64 total, sum, avg, min, max, sq;
> +	s64 total, sum, avg, min, max, stdev;
>   	u64 sum_sz, avg_sz, min_sz, max_sz;
>   
>   	sum = percpu_counter_sum(&m->total_inodes);
> @@ -175,9 +175,9 @@ static int metric_show(struct seq_file *s, void *p)
>   	avg = total > 0 ? DIV64_U64_ROUND_CLOSEST(sum, total) : 0;
>   	min = m->read_latency_min;
>   	max = m->read_latency_max;
> -	sq = m->read_latency_sq_sum;
> +	stdev = m->read_latency_stdev;
>   	spin_unlock(&m->read_metric_lock);
> -	CEPH_LAT_METRIC_SHOW("read", total, avg, min, max, sq);
> +	CEPH_LAT_METRIC_SHOW("read", total, avg, min, max, stdev);
>   
>   	spin_lock(&m->write_metric_lock);
>   	total = m->total_writes;
> @@ -185,9 +185,9 @@ static int metric_show(struct seq_file *s, void *p)
>   	avg = total > 0 ? DIV64_U64_ROUND_CLOSEST(sum, total) : 0;
>   	min = m->write_latency_min;
>   	max = m->write_latency_max;
> -	sq = m->write_latency_sq_sum;
> +	stdev = m->write_latency_stdev;
>   	spin_unlock(&m->write_metric_lock);
> -	CEPH_LAT_METRIC_SHOW("write", total, avg, min, max, sq);
> +	CEPH_LAT_METRIC_SHOW("write", total, avg, min, max, stdev);
>   
>   	spin_lock(&m->metadata_metric_lock);
>   	total = m->total_metadatas;
> @@ -195,9 +195,9 @@ static int metric_show(struct seq_file *s, void *p)
>   	avg = total > 0 ? DIV64_U64_ROUND_CLOSEST(sum, total) : 0;
>   	min = m->metadata_latency_min;
>   	max = m->metadata_latency_max;
> -	sq = m->metadata_latency_sq_sum;
> +	stdev = m->metadata_latency_stdev;
>   	spin_unlock(&m->metadata_metric_lock);
> -	CEPH_LAT_METRIC_SHOW("metadata", total, avg, min, max, sq);
> +	CEPH_LAT_METRIC_SHOW("metadata", total, avg, min, max, stdev);
>   
>   	seq_printf(s, "\n");
>   	seq_printf(s, "item          total       avg_sz(bytes)   min_sz(bytes)   max_sz(bytes)  total_sz(bytes)\n");
> diff --git a/fs/ceph/metric.c b/fs/ceph/metric.c
> index 226dc38e2909..6b774b1a88ce 100644
> --- a/fs/ceph/metric.c
> +++ b/fs/ceph/metric.c
> @@ -244,7 +244,8 @@ int ceph_metric_init(struct ceph_client_metric *m)
>   		goto err_i_caps_mis;
>   
>   	spin_lock_init(&m->read_metric_lock);
> -	m->read_latency_sq_sum = 0;
> +	m->read_latency_stdev = 0;
> +	m->avg_read_latency = 0;
>   	m->read_latency_min = KTIME_MAX;
>   	m->read_latency_max = 0;
>   	m->total_reads = 0;
> @@ -254,7 +255,8 @@ int ceph_metric_init(struct ceph_client_metric *m)
>   	m->read_size_sum = 0;
>   
>   	spin_lock_init(&m->write_metric_lock);
> -	m->write_latency_sq_sum = 0;
> +	m->write_latency_stdev = 0;
> +	m->avg_write_latency = 0;
>   	m->write_latency_min = KTIME_MAX;
>   	m->write_latency_max = 0;
>   	m->total_writes = 0;
> @@ -264,7 +266,8 @@ int ceph_metric_init(struct ceph_client_metric *m)
>   	m->write_size_sum = 0;
>   
>   	spin_lock_init(&m->metadata_metric_lock);
> -	m->metadata_latency_sq_sum = 0;
> +	m->metadata_latency_stdev = 0;
> +	m->avg_metadata_latency = 0;
>   	m->metadata_latency_min = KTIME_MAX;
>   	m->metadata_latency_max = 0;
>   	m->total_metadatas = 0;
> @@ -322,20 +325,26 @@ void ceph_metric_destroy(struct ceph_client_metric *m)
>   		max = new;			\
>   }
>   
> -static inline void __update_stdev(ktime_t total, ktime_t lsum,
> -				  ktime_t *sq_sump, ktime_t lat)
> +static inline void __update_latency(ktime_t *ctotal, ktime_t *lsum,
> +				    ktime_t *lavg, ktime_t *min, ktime_t *max,
> +				    ktime_t *lstdev, ktime_t lat)
>   {
> -	ktime_t avg, sq;
> +	ktime_t total, avg, stdev;
>   
> -	if (unlikely(total == 1))
> -		return;
> +	total = ++(*ctotal);
> +	*lsum += lat;
> +
> +	METRIC_UPDATE_MIN_MAX(*min, *max, lat);
>   
> -	/* the sq is (lat - old_avg) * (lat - new_avg) */
> -	avg = DIV64_U64_ROUND_CLOSEST((lsum - lat), (total - 1));
> -	sq = lat - avg;
> -	avg = DIV64_U64_ROUND_CLOSEST(lsum, total);
> -	sq = sq * (lat - avg);
> -	*sq_sump += sq;
> +	if (unlikely(total == 1)) {
> +		*lavg = lat;
> +		*lstdev = 0;
> +	} else {
> +		avg = *lavg + div64_s64(lat - *lavg, total);
> +		stdev = *lstdev + (lat - *lavg)*(lat - avg);
> +		*lstdev = int_sqrt(div64_u64(stdev, total - 1));

In kernel space, won't it be a little heavy to run int_sqrt() every time
the latency is updated?

@Jeff, any idea ?
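(One way around that would be to keep accumulating the raw M2 term under the
lock and take the square root only when the metric is actually read, e.g. in
metric_show(). The sketch below is user-space code, not the actual patch:
isqrt64() is a stand-in for the kernel's int_sqrt(), and the struct and
function names are invented for the example.)

```c
#include <stdint.h>

/* Bit-by-bit integer square root, standing in for the kernel's int_sqrt() */
static uint64_t isqrt64(uint64_t v)
{
	uint64_t r = 0, bit = 1ULL << 62;

	while (bit > v)
		bit >>= 2;
	while (bit) {
		if (v >= r + bit) {
			v -= r + bit;
			r = (r >> 1) + bit;
		} else {
			r >>= 1;
		}
		bit >>= 2;
	}
	return r;
}

struct lat_metric {
	int64_t total;   /* number of samples */
	int64_t avg;     /* running average */
	int64_t sq_sum;  /* running M2; no sqrt on the hot path */
};

/* Hot path: Welford update only, cheap integer ops */
static void lat_update(struct lat_metric *m, int64_t lat)
{
	int64_t old_avg = m->avg;

	if (++m->total == 1) {
		m->avg = lat;
		return;
	}
	m->avg += (lat - old_avg) / m->total;
	m->sq_sum += (lat - old_avg) * (lat - m->avg);
}

/* Cold path: sqrt only when the metric is displayed */
static uint64_t lat_stdev(const struct lat_metric *m)
{
	if (m->total < 2)
		return 0;
	return isqrt64((uint64_t)(m->sq_sum / (m->total - 1)));
}
```

With latencies in nanoseconds the integer truncation in the per-step
division is negligible relative to the sample values.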


> +		*lavg = avg;
> +	}
>   }
>   
>   void ceph_update_read_metrics(struct ceph_client_metric *m,
> @@ -343,23 +352,18 @@ void ceph_update_read_metrics(struct ceph_client_metric *m,
>   			      unsigned int size, int rc)
>   {
>   	ktime_t lat = ktime_sub(r_end, r_start);
> -	ktime_t total;
>   
>   	if (unlikely(rc < 0 && rc != -ENOENT && rc != -ETIMEDOUT))
>   		return;
>   
>   	spin_lock(&m->read_metric_lock);
> -	total = ++m->total_reads;
>   	m->read_size_sum += size;
> -	m->read_latency_sum += lat;
>   	METRIC_UPDATE_MIN_MAX(m->read_size_min,
>   			      m->read_size_max,
>   			      size);
> -	METRIC_UPDATE_MIN_MAX(m->read_latency_min,
> -			      m->read_latency_max,
> -			      lat);
> -	__update_stdev(total, m->read_latency_sum,
> -		       &m->read_latency_sq_sum, lat);
> +	__update_latency(&m->total_reads, &m->read_latency_sum,
> +			 &m->avg_read_latency, &m->read_latency_min,
> +			 &m->read_latency_max, &m->read_latency_stdev, lat);
>   	spin_unlock(&m->read_metric_lock);
>   }
>   
> @@ -368,23 +372,18 @@ void ceph_update_write_metrics(struct ceph_client_metric *m,
>   			       unsigned int size, int rc)
>   {
>   	ktime_t lat = ktime_sub(r_end, r_start);
> -	ktime_t total;
>   
>   	if (unlikely(rc && rc != -ETIMEDOUT))
>   		return;
>   
>   	spin_lock(&m->write_metric_lock);
> -	total = ++m->total_writes;
>   	m->write_size_sum += size;
> -	m->write_latency_sum += lat;
>   	METRIC_UPDATE_MIN_MAX(m->write_size_min,
>   			      m->write_size_max,
>   			      size);
> -	METRIC_UPDATE_MIN_MAX(m->write_latency_min,
> -			      m->write_latency_max,
> -			      lat);
> -	__update_stdev(total, m->write_latency_sum,
> -		       &m->write_latency_sq_sum, lat);
> +	__update_latency(&m->total_writes, &m->write_latency_sum,
> +			 &m->avg_write_latency, &m->write_latency_min,
> +			 &m->write_latency_max, &m->write_latency_stdev, lat);
>   	spin_unlock(&m->write_metric_lock);
>   }
>   
> @@ -393,18 +392,13 @@ void ceph_update_metadata_metrics(struct ceph_client_metric *m,
>   				  int rc)
>   {
>   	ktime_t lat = ktime_sub(r_end, r_start);
> -	ktime_t total;
>   
>   	if (unlikely(rc && rc != -ENOENT))
>   		return;
>   
>   	spin_lock(&m->metadata_metric_lock);
> -	total = ++m->total_metadatas;
> -	m->metadata_latency_sum += lat;
> -	METRIC_UPDATE_MIN_MAX(m->metadata_latency_min,
> -			      m->metadata_latency_max,
> -			      lat);
> -	__update_stdev(total, m->metadata_latency_sum,
> -		       &m->metadata_latency_sq_sum, lat);
> +	__update_latency(&m->total_metadatas, &m->metadata_latency_sum,
> +			 &m->avg_metadata_latency, &m->metadata_latency_min,
> +			 &m->metadata_latency_max, &m->metadata_latency_stdev, lat);
>   	spin_unlock(&m->metadata_metric_lock);
>   }
> diff --git a/fs/ceph/metric.h b/fs/ceph/metric.h
> index 103ed736f9d2..a5da21b8f8ed 100644
> --- a/fs/ceph/metric.h
> +++ b/fs/ceph/metric.h
> @@ -138,7 +138,8 @@ struct ceph_client_metric {
>   	u64 read_size_min;
>   	u64 read_size_max;
>   	ktime_t read_latency_sum;
> -	ktime_t read_latency_sq_sum;
> +	ktime_t avg_read_latency;
> +	ktime_t read_latency_stdev;
>   	ktime_t read_latency_min;
>   	ktime_t read_latency_max;
>   
> @@ -148,14 +149,16 @@ struct ceph_client_metric {
>   	u64 write_size_min;
>   	u64 write_size_max;
>   	ktime_t write_latency_sum;
> -	ktime_t write_latency_sq_sum;
> +	ktime_t avg_write_latency;
> +	ktime_t write_latency_stdev;
>   	ktime_t write_latency_min;
>   	ktime_t write_latency_max;
>   
>   	spinlock_t metadata_metric_lock;
>   	u64 total_metadatas;
>   	ktime_t metadata_latency_sum;
> -	ktime_t metadata_latency_sq_sum;
> +	ktime_t avg_metadata_latency;
> +	ktime_t metadata_latency_stdev;
>   	ktime_t metadata_latency_min;
>   	ktime_t metadata_latency_max;
>   


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 2/4] ceph: track average/stdev r/w/m latency
  2021-09-14 13:09   ` Xiubo Li
@ 2021-09-14 13:30     ` Venky Shankar
  2021-09-14 13:45       ` Xiubo Li
  0 siblings, 1 reply; 19+ messages in thread
From: Venky Shankar @ 2021-09-14 13:30 UTC (permalink / raw)
  To: Xiubo Li; +Cc: Jeff Layton, Patrick Donnelly, ceph-devel

On Tue, Sep 14, 2021 at 6:39 PM Xiubo Li <xiubli@redhat.com> wrote:
>
>
> On 9/14/21 4:49 PM, Venky Shankar wrote:
> > The math involved in tracking average and standard deviation
> > for r/w/m latencies looks incorrect. Fix that up. Also, change
> > the variable name that tracks standard deviation (*_sq_sum) to
> > *_stdev.
> >
> > Signed-off-by: Venky Shankar <vshankar@redhat.com>
> > ---
> >   fs/ceph/debugfs.c | 14 +++++-----
> >   fs/ceph/metric.c  | 70 ++++++++++++++++++++++-------------------------
> >   fs/ceph/metric.h  |  9 ++++--
> >   3 files changed, 45 insertions(+), 48 deletions(-)
> >
> > diff --git a/fs/ceph/debugfs.c b/fs/ceph/debugfs.c
> > index 38b78b45811f..3abfa7ae8220 100644
> > --- a/fs/ceph/debugfs.c
> > +++ b/fs/ceph/debugfs.c
> > @@ -152,7 +152,7 @@ static int metric_show(struct seq_file *s, void *p)
> >       struct ceph_mds_client *mdsc = fsc->mdsc;
> >       struct ceph_client_metric *m = &mdsc->metric;
> >       int nr_caps = 0;
> > -     s64 total, sum, avg, min, max, sq;
> > +     s64 total, sum, avg, min, max, stdev;
> >       u64 sum_sz, avg_sz, min_sz, max_sz;
> >
> >       sum = percpu_counter_sum(&m->total_inodes);
> > @@ -175,9 +175,9 @@ static int metric_show(struct seq_file *s, void *p)
> >       avg = total > 0 ? DIV64_U64_ROUND_CLOSEST(sum, total) : 0;
> >       min = m->read_latency_min;
> >       max = m->read_latency_max;
> > -     sq = m->read_latency_sq_sum;
> > +     stdev = m->read_latency_stdev;
> >       spin_unlock(&m->read_metric_lock);
> > -     CEPH_LAT_METRIC_SHOW("read", total, avg, min, max, sq);
> > +     CEPH_LAT_METRIC_SHOW("read", total, avg, min, max, stdev);
> >
> >       spin_lock(&m->write_metric_lock);
> >       total = m->total_writes;
> > @@ -185,9 +185,9 @@ static int metric_show(struct seq_file *s, void *p)
> >       avg = total > 0 ? DIV64_U64_ROUND_CLOSEST(sum, total) : 0;
> >       min = m->write_latency_min;
> >       max = m->write_latency_max;
> > -     sq = m->write_latency_sq_sum;
> > +     stdev = m->write_latency_stdev;
> >       spin_unlock(&m->write_metric_lock);
> > -     CEPH_LAT_METRIC_SHOW("write", total, avg, min, max, sq);
> > +     CEPH_LAT_METRIC_SHOW("write", total, avg, min, max, stdev);
> >
> >       spin_lock(&m->metadata_metric_lock);
> >       total = m->total_metadatas;
> > @@ -195,9 +195,9 @@ static int metric_show(struct seq_file *s, void *p)
> >       avg = total > 0 ? DIV64_U64_ROUND_CLOSEST(sum, total) : 0;
> >       min = m->metadata_latency_min;
> >       max = m->metadata_latency_max;
> > -     sq = m->metadata_latency_sq_sum;
> > +     stdev = m->metadata_latency_stdev;
> >       spin_unlock(&m->metadata_metric_lock);
> > -     CEPH_LAT_METRIC_SHOW("metadata", total, avg, min, max, sq);
> > +     CEPH_LAT_METRIC_SHOW("metadata", total, avg, min, max, stdev);
> >
> >       seq_printf(s, "\n");
> >       seq_printf(s, "item          total       avg_sz(bytes)   min_sz(bytes)   max_sz(bytes)  total_sz(bytes)\n");
> > diff --git a/fs/ceph/metric.c b/fs/ceph/metric.c
> > index 226dc38e2909..6b774b1a88ce 100644
> > --- a/fs/ceph/metric.c
> > +++ b/fs/ceph/metric.c
> > @@ -244,7 +244,8 @@ int ceph_metric_init(struct ceph_client_metric *m)
> >               goto err_i_caps_mis;
> >
> >       spin_lock_init(&m->read_metric_lock);
> > -     m->read_latency_sq_sum = 0;
> > +     m->read_latency_stdev = 0;
> > +     m->avg_read_latency = 0;
> >       m->read_latency_min = KTIME_MAX;
> >       m->read_latency_max = 0;
> >       m->total_reads = 0;
> > @@ -254,7 +255,8 @@ int ceph_metric_init(struct ceph_client_metric *m)
> >       m->read_size_sum = 0;
> >
> >       spin_lock_init(&m->write_metric_lock);
> > -     m->write_latency_sq_sum = 0;
> > +     m->write_latency_stdev = 0;
> > +     m->avg_write_latency = 0;
> >       m->write_latency_min = KTIME_MAX;
> >       m->write_latency_max = 0;
> >       m->total_writes = 0;
> > @@ -264,7 +266,8 @@ int ceph_metric_init(struct ceph_client_metric *m)
> >       m->write_size_sum = 0;
> >
> >       spin_lock_init(&m->metadata_metric_lock);
> > -     m->metadata_latency_sq_sum = 0;
> > +     m->metadata_latency_stdev = 0;
> > +     m->avg_metadata_latency = 0;
> >       m->metadata_latency_min = KTIME_MAX;
> >       m->metadata_latency_max = 0;
> >       m->total_metadatas = 0;
> > @@ -322,20 +325,26 @@ void ceph_metric_destroy(struct ceph_client_metric *m)
> >               max = new;                      \
> >   }
> >
> > -static inline void __update_stdev(ktime_t total, ktime_t lsum,
> > -                               ktime_t *sq_sump, ktime_t lat)
> > +static inline void __update_latency(ktime_t *ctotal, ktime_t *lsum,
> > +                                 ktime_t *lavg, ktime_t *min, ktime_t *max,
> > +                                 ktime_t *lstdev, ktime_t lat)
> >   {
> > -     ktime_t avg, sq;
> > +     ktime_t total, avg, stdev;
> >
> > -     if (unlikely(total == 1))
> > -             return;
> > +     total = ++(*ctotal);
> > +     *lsum += lat;
> > +
> > +     METRIC_UPDATE_MIN_MAX(*min, *max, lat);
> >
> > -     /* the sq is (lat - old_avg) * (lat - new_avg) */
> > -     avg = DIV64_U64_ROUND_CLOSEST((lsum - lat), (total - 1));
> > -     sq = lat - avg;
> > -     avg = DIV64_U64_ROUND_CLOSEST(lsum, total);
> > -     sq = sq * (lat - avg);
> > -     *sq_sump += sq;
> > +     if (unlikely(total == 1)) {
> > +             *lavg = lat;
> > +             *lstdev = 0;
> > +     } else {
> > +             avg = *lavg + div64_s64(lat - *lavg, total);
> > +             stdev = *lstdev + (lat - *lavg)*(lat - avg);
> > +             *lstdev = int_sqrt(div64_u64(stdev, total - 1));
> > +             *lavg = avg;
> > +     }
>
> IMO, this is incorrect; for the math formula, please see:
>
> https://www.investopedia.com/ask/answers/042415/what-difference-between-standard-error-means-and-standard-deviation.asp
>
> The most accurate result should be:
>
> stdev = int_sqrt(sum((X(n) - avg)^2, (X(n-1) - avg)^2, ..., (X(1) -
> avg)^2) / (n - 1)).
>
> Whereas what you are computing is:
>
> stdev_n = int_sqrt(stdev_(n-1) + (X(n-1) - avg)^2)

Hmm. The int_sqrt() is probably not needed at this point and can be
done when sending the metric. That would avoid some cycles.

Also, the way avg is calculated is not totally incorrect; however, I
would like to keep it similar to how it's done in libcephfs.

>
> Though the current stdev computing method is not exactly what the math
> formula does, it is close to it, because the kernel can't record all
> the latency values and recompute on demand, which would occupy a large
> amount of memory and CPU resources.

The approach is to calculate the running variance, i.e., compute the
variance as data (latency) samples arrive one at a time.

>
>
> >   }
> >
> >   void ceph_update_read_metrics(struct ceph_client_metric *m,
> > @@ -343,23 +352,18 @@ void ceph_update_read_metrics(struct ceph_client_metric *m,
> >                             unsigned int size, int rc)
> >   {
> >       ktime_t lat = ktime_sub(r_end, r_start);
> > -     ktime_t total;
> >
> >       if (unlikely(rc < 0 && rc != -ENOENT && rc != -ETIMEDOUT))
> >               return;
> >
> >       spin_lock(&m->read_metric_lock);
> > -     total = ++m->total_reads;
> >       m->read_size_sum += size;
> > -     m->read_latency_sum += lat;
> >       METRIC_UPDATE_MIN_MAX(m->read_size_min,
> >                             m->read_size_max,
> >                             size);
> > -     METRIC_UPDATE_MIN_MAX(m->read_latency_min,
> > -                           m->read_latency_max,
> > -                           lat);
> > -     __update_stdev(total, m->read_latency_sum,
> > -                    &m->read_latency_sq_sum, lat);
> > +     __update_latency(&m->total_reads, &m->read_latency_sum,
> > +                      &m->avg_read_latency, &m->read_latency_min,
> > +                      &m->read_latency_max, &m->read_latency_stdev, lat);
> >       spin_unlock(&m->read_metric_lock);
> >   }
> >
> > @@ -368,23 +372,18 @@ void ceph_update_write_metrics(struct ceph_client_metric *m,
> >                              unsigned int size, int rc)
> >   {
> >       ktime_t lat = ktime_sub(r_end, r_start);
> > -     ktime_t total;
> >
> >       if (unlikely(rc && rc != -ETIMEDOUT))
> >               return;
> >
> >       spin_lock(&m->write_metric_lock);
> > -     total = ++m->total_writes;
> >       m->write_size_sum += size;
> > -     m->write_latency_sum += lat;
> >       METRIC_UPDATE_MIN_MAX(m->write_size_min,
> >                             m->write_size_max,
> >                             size);
> > -     METRIC_UPDATE_MIN_MAX(m->write_latency_min,
> > -                           m->write_latency_max,
> > -                           lat);
> > -     __update_stdev(total, m->write_latency_sum,
> > -                    &m->write_latency_sq_sum, lat);
> > +     __update_latency(&m->total_writes, &m->write_latency_sum,
> > +                      &m->avg_write_latency, &m->write_latency_min,
> > +                      &m->write_latency_max, &m->write_latency_stdev, lat);
> >       spin_unlock(&m->write_metric_lock);
> >   }
> >
> > @@ -393,18 +392,13 @@ void ceph_update_metadata_metrics(struct ceph_client_metric *m,
> >                                 int rc)
> >   {
> >       ktime_t lat = ktime_sub(r_end, r_start);
> > -     ktime_t total;
> >
> >       if (unlikely(rc && rc != -ENOENT))
> >               return;
> >
> >       spin_lock(&m->metadata_metric_lock);
> > -     total = ++m->total_metadatas;
> > -     m->metadata_latency_sum += lat;
> > -     METRIC_UPDATE_MIN_MAX(m->metadata_latency_min,
> > -                           m->metadata_latency_max,
> > -                           lat);
> > -     __update_stdev(total, m->metadata_latency_sum,
> > -                    &m->metadata_latency_sq_sum, lat);
> > +     __update_latency(&m->total_metadatas, &m->metadata_latency_sum,
> > +                      &m->avg_metadata_latency, &m->metadata_latency_min,
> > +                      &m->metadata_latency_max, &m->metadata_latency_stdev, lat);
> >       spin_unlock(&m->metadata_metric_lock);
> >   }
> > diff --git a/fs/ceph/metric.h b/fs/ceph/metric.h
> > index 103ed736f9d2..a5da21b8f8ed 100644
> > --- a/fs/ceph/metric.h
> > +++ b/fs/ceph/metric.h
> > @@ -138,7 +138,8 @@ struct ceph_client_metric {
> >       u64 read_size_min;
> >       u64 read_size_max;
> >       ktime_t read_latency_sum;
> > -     ktime_t read_latency_sq_sum;
> > +     ktime_t avg_read_latency;
> > +     ktime_t read_latency_stdev;
> >       ktime_t read_latency_min;
> >       ktime_t read_latency_max;
> >
> > @@ -148,14 +149,16 @@ struct ceph_client_metric {
> >       u64 write_size_min;
> >       u64 write_size_max;
> >       ktime_t write_latency_sum;
> > -     ktime_t write_latency_sq_sum;
> > +     ktime_t avg_write_latency;
> > +     ktime_t write_latency_stdev;
> >       ktime_t write_latency_min;
> >       ktime_t write_latency_max;
> >
> >       spinlock_t metadata_metric_lock;
> >       u64 total_metadatas;
> >       ktime_t metadata_latency_sum;
> > -     ktime_t metadata_latency_sq_sum;
> > +     ktime_t avg_metadata_latency;
> > +     ktime_t metadata_latency_stdev;
> >       ktime_t metadata_latency_min;
> >       ktime_t metadata_latency_max;
> >
>


-- 
Cheers,
Venky


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 2/4] ceph: track average/stdev r/w/m latency
  2021-09-14 13:13   ` Xiubo Li
@ 2021-09-14 13:32     ` Jeff Layton
  2021-09-14 13:32     ` Venky Shankar
  1 sibling, 0 replies; 19+ messages in thread
From: Jeff Layton @ 2021-09-14 13:32 UTC (permalink / raw)
  To: Xiubo Li, Venky Shankar, pdonnell; +Cc: ceph-devel

On Tue, 2021-09-14 at 21:13 +0800, Xiubo Li wrote:
> On 9/14/21 4:49 PM, Venky Shankar wrote:
> > The math involved in tracking average and standard deviation
> > for r/w/m latencies looks incorrect. Fix that up. Also, change
> > the variable name that tracks standard deviation (*_sq_sum) to
> > *_stdev.
> > 
> > Signed-off-by: Venky Shankar <vshankar@redhat.com>
> > ---
> >   fs/ceph/debugfs.c | 14 +++++-----
> >   fs/ceph/metric.c  | 70 ++++++++++++++++++++++-------------------------
> >   fs/ceph/metric.h  |  9 ++++--
> >   3 files changed, 45 insertions(+), 48 deletions(-)
> > 
> > diff --git a/fs/ceph/debugfs.c b/fs/ceph/debugfs.c
> > index 38b78b45811f..3abfa7ae8220 100644
> > --- a/fs/ceph/debugfs.c
> > +++ b/fs/ceph/debugfs.c
> > @@ -152,7 +152,7 @@ static int metric_show(struct seq_file *s, void *p)
> >   	struct ceph_mds_client *mdsc = fsc->mdsc;
> >   	struct ceph_client_metric *m = &mdsc->metric;
> >   	int nr_caps = 0;
> > -	s64 total, sum, avg, min, max, sq;
> > +	s64 total, sum, avg, min, max, stdev;
> >   	u64 sum_sz, avg_sz, min_sz, max_sz;
> >   
> >   	sum = percpu_counter_sum(&m->total_inodes);
> > @@ -175,9 +175,9 @@ static int metric_show(struct seq_file *s, void *p)
> >   	avg = total > 0 ? DIV64_U64_ROUND_CLOSEST(sum, total) : 0;
> >   	min = m->read_latency_min;
> >   	max = m->read_latency_max;
> > -	sq = m->read_latency_sq_sum;
> > +	stdev = m->read_latency_stdev;
> >   	spin_unlock(&m->read_metric_lock);
> > -	CEPH_LAT_METRIC_SHOW("read", total, avg, min, max, sq);
> > +	CEPH_LAT_METRIC_SHOW("read", total, avg, min, max, stdev);
> >   
> >   	spin_lock(&m->write_metric_lock);
> >   	total = m->total_writes;
> > @@ -185,9 +185,9 @@ static int metric_show(struct seq_file *s, void *p)
> >   	avg = total > 0 ? DIV64_U64_ROUND_CLOSEST(sum, total) : 0;
> >   	min = m->write_latency_min;
> >   	max = m->write_latency_max;
> > -	sq = m->write_latency_sq_sum;
> > +	stdev = m->write_latency_stdev;
> >   	spin_unlock(&m->write_metric_lock);
> > -	CEPH_LAT_METRIC_SHOW("write", total, avg, min, max, sq);
> > +	CEPH_LAT_METRIC_SHOW("write", total, avg, min, max, stdev);
> >   
> >   	spin_lock(&m->metadata_metric_lock);
> >   	total = m->total_metadatas;
> > @@ -195,9 +195,9 @@ static int metric_show(struct seq_file *s, void *p)
> >   	avg = total > 0 ? DIV64_U64_ROUND_CLOSEST(sum, total) : 0;
> >   	min = m->metadata_latency_min;
> >   	max = m->metadata_latency_max;
> > -	sq = m->metadata_latency_sq_sum;
> > +	stdev = m->metadata_latency_stdev;
> >   	spin_unlock(&m->metadata_metric_lock);
> > -	CEPH_LAT_METRIC_SHOW("metadata", total, avg, min, max, sq);
> > +	CEPH_LAT_METRIC_SHOW("metadata", total, avg, min, max, stdev);
> >   
> >   	seq_printf(s, "\n");
> >   	seq_printf(s, "item          total       avg_sz(bytes)   min_sz(bytes)   max_sz(bytes)  total_sz(bytes)\n");
> > diff --git a/fs/ceph/metric.c b/fs/ceph/metric.c
> > index 226dc38e2909..6b774b1a88ce 100644
> > --- a/fs/ceph/metric.c
> > +++ b/fs/ceph/metric.c
> > @@ -244,7 +244,8 @@ int ceph_metric_init(struct ceph_client_metric *m)
> >   		goto err_i_caps_mis;
> >   
> >   	spin_lock_init(&m->read_metric_lock);
> > -	m->read_latency_sq_sum = 0;
> > +	m->read_latency_stdev = 0;
> > +	m->avg_read_latency = 0;
> >   	m->read_latency_min = KTIME_MAX;
> >   	m->read_latency_max = 0;
> >   	m->total_reads = 0;
> > @@ -254,7 +255,8 @@ int ceph_metric_init(struct ceph_client_metric *m)
> >   	m->read_size_sum = 0;
> >   
> >   	spin_lock_init(&m->write_metric_lock);
> > -	m->write_latency_sq_sum = 0;
> > +	m->write_latency_stdev = 0;
> > +	m->avg_write_latency = 0;
> >   	m->write_latency_min = KTIME_MAX;
> >   	m->write_latency_max = 0;
> >   	m->total_writes = 0;
> > @@ -264,7 +266,8 @@ int ceph_metric_init(struct ceph_client_metric *m)
> >   	m->write_size_sum = 0;
> >   
> >   	spin_lock_init(&m->metadata_metric_lock);
> > -	m->metadata_latency_sq_sum = 0;
> > +	m->metadata_latency_stdev = 0;
> > +	m->avg_metadata_latency = 0;
> >   	m->metadata_latency_min = KTIME_MAX;
> >   	m->metadata_latency_max = 0;
> >   	m->total_metadatas = 0;
> > @@ -322,20 +325,26 @@ void ceph_metric_destroy(struct ceph_client_metric *m)
> >   		max = new;			\
> >   }
> >   
> > -static inline void __update_stdev(ktime_t total, ktime_t lsum,
> > -				  ktime_t *sq_sump, ktime_t lat)
> > +static inline void __update_latency(ktime_t *ctotal, ktime_t *lsum,
> > +				    ktime_t *lavg, ktime_t *min, ktime_t *max,
> > +				    ktime_t *lstdev, ktime_t lat)
> >   {
> > -	ktime_t avg, sq;
> > +	ktime_t total, avg, stdev;
> >   
> > -	if (unlikely(total == 1))
> > -		return;
> > +	total = ++(*ctotal);
> > +	*lsum += lat;
> > +
> > +	METRIC_UPDATE_MIN_MAX(*min, *max, lat);
> >   
> > -	/* the sq is (lat - old_avg) * (lat - new_avg) */
> > -	avg = DIV64_U64_ROUND_CLOSEST((lsum - lat), (total - 1));
> > -	sq = lat - avg;
> > -	avg = DIV64_U64_ROUND_CLOSEST(lsum, total);
> > -	sq = sq * (lat - avg);
> > -	*sq_sump += sq;
> > +	if (unlikely(total == 1)) {
> > +		*lavg = lat;
> > +		*lstdev = 0;
> > +	} else {
> > +		avg = *lavg + div64_s64(lat - *lavg, total);
> > +		stdev = *lstdev + (lat - *lavg)*(lat - avg);
> > +		*lstdev = int_sqrt(div64_u64(stdev, total - 1));
> 
> In kernel space, won't it be a little heavy to run int_sqrt() every
> time the latency is updated?
> 
> @Jeff, any idea ?
> 
> 

Yeah, I agree...

int_sqrt() doesn't look _too_ awful -- it's mostly shifts and adds. You
can see the code for it in lib/math/int_sqrt.c. This probably ought to
be using int_sqrt64() too since the argument is a 64-bit value.

Still, keeping the amount of work low for each new update is really
better if you can. It would be best to defer as much computation as
possible to when this info is being queried. In many cases, this info
will never be consulted, so we really want to keep its overhead low.

> > +		*lavg = avg;
> > +	}
> >   }
> >   
> >   void ceph_update_read_metrics(struct ceph_client_metric *m,
> > @@ -343,23 +352,18 @@ void ceph_update_read_metrics(struct ceph_client_metric *m,
> >   			      unsigned int size, int rc)
> >   {
> >   	ktime_t lat = ktime_sub(r_end, r_start);
> > -	ktime_t total;
> >   
> >   	if (unlikely(rc < 0 && rc != -ENOENT && rc != -ETIMEDOUT))
> >   		return;
> >   
> >   	spin_lock(&m->read_metric_lock);
> > -	total = ++m->total_reads;
> >   	m->read_size_sum += size;
> > -	m->read_latency_sum += lat;
> >   	METRIC_UPDATE_MIN_MAX(m->read_size_min,
> >   			      m->read_size_max,
> >   			      size);
> > -	METRIC_UPDATE_MIN_MAX(m->read_latency_min,
> > -			      m->read_latency_max,
> > -			      lat);
> > -	__update_stdev(total, m->read_latency_sum,
> > -		       &m->read_latency_sq_sum, lat);
> > +	__update_latency(&m->total_reads, &m->read_latency_sum,
> > +			 &m->avg_read_latency, &m->read_latency_min,
> > +			 &m->read_latency_max, &m->read_latency_stdev, lat);
> >   	spin_unlock(&m->read_metric_lock);
> >   }
> >   
> > @@ -368,23 +372,18 @@ void ceph_update_write_metrics(struct ceph_client_metric *m,
> >   			       unsigned int size, int rc)
> >   {
> >   	ktime_t lat = ktime_sub(r_end, r_start);
> > -	ktime_t total;
> >   
> >   	if (unlikely(rc && rc != -ETIMEDOUT))
> >   		return;
> >   
> >   	spin_lock(&m->write_metric_lock);
> > -	total = ++m->total_writes;
> >   	m->write_size_sum += size;
> > -	m->write_latency_sum += lat;
> >   	METRIC_UPDATE_MIN_MAX(m->write_size_min,
> >   			      m->write_size_max,
> >   			      size);
> > -	METRIC_UPDATE_MIN_MAX(m->write_latency_min,
> > -			      m->write_latency_max,
> > -			      lat);
> > -	__update_stdev(total, m->write_latency_sum,
> > -		       &m->write_latency_sq_sum, lat);
> > +	__update_latency(&m->total_writes, &m->write_latency_sum,
> > +			 &m->avg_write_latency, &m->write_latency_min,
> > +			 &m->write_latency_max, &m->write_latency_stdev, lat);
> >   	spin_unlock(&m->write_metric_lock);
> >   }
> >   
> > @@ -393,18 +392,13 @@ void ceph_update_metadata_metrics(struct ceph_client_metric *m,
> >   				  int rc)
> >   {
> >   	ktime_t lat = ktime_sub(r_end, r_start);
> > -	ktime_t total;
> >   
> >   	if (unlikely(rc && rc != -ENOENT))
> >   		return;
> >   
> >   	spin_lock(&m->metadata_metric_lock);
> > -	total = ++m->total_metadatas;
> > -	m->metadata_latency_sum += lat;
> > -	METRIC_UPDATE_MIN_MAX(m->metadata_latency_min,
> > -			      m->metadata_latency_max,
> > -			      lat);
> > -	__update_stdev(total, m->metadata_latency_sum,
> > -		       &m->metadata_latency_sq_sum, lat);
> > +	__update_latency(&m->total_metadatas, &m->metadata_latency_sum,
> > +			 &m->avg_metadata_latency, &m->metadata_latency_min,
> > +			 &m->metadata_latency_max, &m->metadata_latency_stdev, lat);
> >   	spin_unlock(&m->metadata_metric_lock);
> >   }
> > diff --git a/fs/ceph/metric.h b/fs/ceph/metric.h
> > index 103ed736f9d2..a5da21b8f8ed 100644
> > --- a/fs/ceph/metric.h
> > +++ b/fs/ceph/metric.h
> > @@ -138,7 +138,8 @@ struct ceph_client_metric {
> >   	u64 read_size_min;
> >   	u64 read_size_max;
> >   	ktime_t read_latency_sum;
> > -	ktime_t read_latency_sq_sum;
> > +	ktime_t avg_read_latency;
> > +	ktime_t read_latency_stdev;
> >   	ktime_t read_latency_min;
> >   	ktime_t read_latency_max;
> >   
> > @@ -148,14 +149,16 @@ struct ceph_client_metric {
> >   	u64 write_size_min;
> >   	u64 write_size_max;
> >   	ktime_t write_latency_sum;
> > -	ktime_t write_latency_sq_sum;
> > +	ktime_t avg_write_latency;
> > +	ktime_t write_latency_stdev;
> >   	ktime_t write_latency_min;
> >   	ktime_t write_latency_max;
> >   
> >   	spinlock_t metadata_metric_lock;
> >   	u64 total_metadatas;
> >   	ktime_t metadata_latency_sum;
> > -	ktime_t metadata_latency_sq_sum;
> > +	ktime_t avg_metadata_latency;
> > +	ktime_t metadata_latency_stdev;
> >   	ktime_t metadata_latency_min;
> >   	ktime_t metadata_latency_max;
> >   
> 

-- 
Jeff Layton <jlayton@redhat.com>


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 2/4] ceph: track average/stdev r/w/m latency
  2021-09-14 13:13   ` Xiubo Li
  2021-09-14 13:32     ` Jeff Layton
@ 2021-09-14 13:32     ` Venky Shankar
  1 sibling, 0 replies; 19+ messages in thread
From: Venky Shankar @ 2021-09-14 13:32 UTC (permalink / raw)
  To: Xiubo Li; +Cc: Jeff Layton, Patrick Donnelly, ceph-devel

On Tue, Sep 14, 2021 at 6:43 PM Xiubo Li <xiubli@redhat.com> wrote:
>
>
> On 9/14/21 4:49 PM, Venky Shankar wrote:
> > The math involved in tracking average and standard deviation
> > for r/w/m latencies looks incorrect. Fix that up. Also, change
> > the variable name that tracks standard deviation (*_sq_sum) to
> > *_stdev.
> >
> > Signed-off-by: Venky Shankar <vshankar@redhat.com>
> > ---
> >   fs/ceph/debugfs.c | 14 +++++-----
> >   fs/ceph/metric.c  | 70 ++++++++++++++++++++++-------------------------
> >   fs/ceph/metric.h  |  9 ++++--
> >   3 files changed, 45 insertions(+), 48 deletions(-)
> >
> > diff --git a/fs/ceph/debugfs.c b/fs/ceph/debugfs.c
> > index 38b78b45811f..3abfa7ae8220 100644
> > --- a/fs/ceph/debugfs.c
> > +++ b/fs/ceph/debugfs.c
> > @@ -152,7 +152,7 @@ static int metric_show(struct seq_file *s, void *p)
> >       struct ceph_mds_client *mdsc = fsc->mdsc;
> >       struct ceph_client_metric *m = &mdsc->metric;
> >       int nr_caps = 0;
> > -     s64 total, sum, avg, min, max, sq;
> > +     s64 total, sum, avg, min, max, stdev;
> >       u64 sum_sz, avg_sz, min_sz, max_sz;
> >
> >       sum = percpu_counter_sum(&m->total_inodes);
> > @@ -175,9 +175,9 @@ static int metric_show(struct seq_file *s, void *p)
> >       avg = total > 0 ? DIV64_U64_ROUND_CLOSEST(sum, total) : 0;
> >       min = m->read_latency_min;
> >       max = m->read_latency_max;
> > -     sq = m->read_latency_sq_sum;
> > +     stdev = m->read_latency_stdev;
> >       spin_unlock(&m->read_metric_lock);
> > -     CEPH_LAT_METRIC_SHOW("read", total, avg, min, max, sq);
> > +     CEPH_LAT_METRIC_SHOW("read", total, avg, min, max, stdev);
> >
> >       spin_lock(&m->write_metric_lock);
> >       total = m->total_writes;
> > @@ -185,9 +185,9 @@ static int metric_show(struct seq_file *s, void *p)
> >       avg = total > 0 ? DIV64_U64_ROUND_CLOSEST(sum, total) : 0;
> >       min = m->write_latency_min;
> >       max = m->write_latency_max;
> > -     sq = m->write_latency_sq_sum;
> > +     stdev = m->write_latency_stdev;
> >       spin_unlock(&m->write_metric_lock);
> > -     CEPH_LAT_METRIC_SHOW("write", total, avg, min, max, sq);
> > +     CEPH_LAT_METRIC_SHOW("write", total, avg, min, max, stdev);
> >
> >       spin_lock(&m->metadata_metric_lock);
> >       total = m->total_metadatas;
> > @@ -195,9 +195,9 @@ static int metric_show(struct seq_file *s, void *p)
> >       avg = total > 0 ? DIV64_U64_ROUND_CLOSEST(sum, total) : 0;
> >       min = m->metadata_latency_min;
> >       max = m->metadata_latency_max;
> > -     sq = m->metadata_latency_sq_sum;
> > +     stdev = m->metadata_latency_stdev;
> >       spin_unlock(&m->metadata_metric_lock);
> > -     CEPH_LAT_METRIC_SHOW("metadata", total, avg, min, max, sq);
> > +     CEPH_LAT_METRIC_SHOW("metadata", total, avg, min, max, stdev);
> >
> >       seq_printf(s, "\n");
> >       seq_printf(s, "item          total       avg_sz(bytes)   min_sz(bytes)   max_sz(bytes)  total_sz(bytes)\n");
> > diff --git a/fs/ceph/metric.c b/fs/ceph/metric.c
> > index 226dc38e2909..6b774b1a88ce 100644
> > --- a/fs/ceph/metric.c
> > +++ b/fs/ceph/metric.c
> > @@ -244,7 +244,8 @@ int ceph_metric_init(struct ceph_client_metric *m)
> >               goto err_i_caps_mis;
> >
> >       spin_lock_init(&m->read_metric_lock);
> > -     m->read_latency_sq_sum = 0;
> > +     m->read_latency_stdev = 0;
> > +     m->avg_read_latency = 0;
> >       m->read_latency_min = KTIME_MAX;
> >       m->read_latency_max = 0;
> >       m->total_reads = 0;
> > @@ -254,7 +255,8 @@ int ceph_metric_init(struct ceph_client_metric *m)
> >       m->read_size_sum = 0;
> >
> >       spin_lock_init(&m->write_metric_lock);
> > -     m->write_latency_sq_sum = 0;
> > +     m->write_latency_stdev = 0;
> > +     m->avg_write_latency = 0;
> >       m->write_latency_min = KTIME_MAX;
> >       m->write_latency_max = 0;
> >       m->total_writes = 0;
> > @@ -264,7 +266,8 @@ int ceph_metric_init(struct ceph_client_metric *m)
> >       m->write_size_sum = 0;
> >
> >       spin_lock_init(&m->metadata_metric_lock);
> > -     m->metadata_latency_sq_sum = 0;
> > +     m->metadata_latency_stdev = 0;
> > +     m->avg_metadata_latency = 0;
> >       m->metadata_latency_min = KTIME_MAX;
> >       m->metadata_latency_max = 0;
> >       m->total_metadatas = 0;
> > @@ -322,20 +325,26 @@ void ceph_metric_destroy(struct ceph_client_metric *m)
> >               max = new;                      \
> >   }
> >
> > -static inline void __update_stdev(ktime_t total, ktime_t lsum,
> > -                               ktime_t *sq_sump, ktime_t lat)
> > +static inline void __update_latency(ktime_t *ctotal, ktime_t *lsum,
> > +                                 ktime_t *lavg, ktime_t *min, ktime_t *max,
> > +                                 ktime_t *lstdev, ktime_t lat)
> >   {
> > -     ktime_t avg, sq;
> > +     ktime_t total, avg, stdev;
> >
> > -     if (unlikely(total == 1))
> > -             return;
> > +     total = ++(*ctotal);
> > +     *lsum += lat;
> > +
> > +     METRIC_UPDATE_MIN_MAX(*min, *max, lat);
> >
> > -     /* the sq is (lat - old_avg) * (lat - new_avg) */
> > -     avg = DIV64_U64_ROUND_CLOSEST((lsum - lat), (total - 1));
> > -     sq = lat - avg;
> > -     avg = DIV64_U64_ROUND_CLOSEST(lsum, total);
> > -     sq = sq * (lat - avg);
> > -     *sq_sump += sq;
> > +     if (unlikely(total == 1)) {
> > +             *lavg = lat;
> > +             *lstdev = 0;
> > +     } else {
> > +             avg = *lavg + div64_s64(lat - *lavg, total);
> > +             stdev = *lstdev + (lat - *lavg)*(lat - avg);
> > +             *lstdev = int_sqrt(div64_u64(stdev, total - 1));
>
> In kernel space, won't it be a little heavy to run int_sqrt() every
> time the latency is updated?

It's most likely not needed. We could keep track of the variance (which
doesn't require int_sqrt) and calculate the stdev when sending metrics.
That would be mathematically correct too, as you mentioned.

>
> @Jeff, any idea ?
>
>
> > +             *lavg = avg;
> > +     }
> >   }
> >
> >   void ceph_update_read_metrics(struct ceph_client_metric *m,
> > @@ -343,23 +352,18 @@ void ceph_update_read_metrics(struct ceph_client_metric *m,
> >                             unsigned int size, int rc)
> >   {
> >       ktime_t lat = ktime_sub(r_end, r_start);
> > -     ktime_t total;
> >
> >       if (unlikely(rc < 0 && rc != -ENOENT && rc != -ETIMEDOUT))
> >               return;
> >
> >       spin_lock(&m->read_metric_lock);
> > -     total = ++m->total_reads;
> >       m->read_size_sum += size;
> > -     m->read_latency_sum += lat;
> >       METRIC_UPDATE_MIN_MAX(m->read_size_min,
> >                             m->read_size_max,
> >                             size);
> > -     METRIC_UPDATE_MIN_MAX(m->read_latency_min,
> > -                           m->read_latency_max,
> > -                           lat);
> > -     __update_stdev(total, m->read_latency_sum,
> > -                    &m->read_latency_sq_sum, lat);
> > +     __update_latency(&m->total_reads, &m->read_latency_sum,
> > +                      &m->avg_read_latency, &m->read_latency_min,
> > +                      &m->read_latency_max, &m->read_latency_stdev, lat);
> >       spin_unlock(&m->read_metric_lock);
> >   }
> >
> > @@ -368,23 +372,18 @@ void ceph_update_write_metrics(struct ceph_client_metric *m,
> >                              unsigned int size, int rc)
> >   {
> >       ktime_t lat = ktime_sub(r_end, r_start);
> > -     ktime_t total;
> >
> >       if (unlikely(rc && rc != -ETIMEDOUT))
> >               return;
> >
> >       spin_lock(&m->write_metric_lock);
> > -     total = ++m->total_writes;
> >       m->write_size_sum += size;
> > -     m->write_latency_sum += lat;
> >       METRIC_UPDATE_MIN_MAX(m->write_size_min,
> >                             m->write_size_max,
> >                             size);
> > -     METRIC_UPDATE_MIN_MAX(m->write_latency_min,
> > -                           m->write_latency_max,
> > -                           lat);
> > -     __update_stdev(total, m->write_latency_sum,
> > -                    &m->write_latency_sq_sum, lat);
> > +     __update_latency(&m->total_writes, &m->write_latency_sum,
> > +                      &m->avg_write_latency, &m->write_latency_min,
> > +                      &m->write_latency_max, &m->write_latency_stdev, lat);
> >       spin_unlock(&m->write_metric_lock);
> >   }
> >
> > @@ -393,18 +392,13 @@ void ceph_update_metadata_metrics(struct ceph_client_metric *m,
> >                                 int rc)
> >   {
> >       ktime_t lat = ktime_sub(r_end, r_start);
> > -     ktime_t total;
> >
> >       if (unlikely(rc && rc != -ENOENT))
> >               return;
> >
> >       spin_lock(&m->metadata_metric_lock);
> > -     total = ++m->total_metadatas;
> > -     m->metadata_latency_sum += lat;
> > -     METRIC_UPDATE_MIN_MAX(m->metadata_latency_min,
> > -                           m->metadata_latency_max,
> > -                           lat);
> > -     __update_stdev(total, m->metadata_latency_sum,
> > -                    &m->metadata_latency_sq_sum, lat);
> > +     __update_latency(&m->total_metadatas, &m->metadata_latency_sum,
> > +                      &m->avg_metadata_latency, &m->metadata_latency_min,
> > +                      &m->metadata_latency_max, &m->metadata_latency_stdev, lat);
> >       spin_unlock(&m->metadata_metric_lock);
> >   }
> > diff --git a/fs/ceph/metric.h b/fs/ceph/metric.h
> > index 103ed736f9d2..a5da21b8f8ed 100644
> > --- a/fs/ceph/metric.h
> > +++ b/fs/ceph/metric.h
> > @@ -138,7 +138,8 @@ struct ceph_client_metric {
> >       u64 read_size_min;
> >       u64 read_size_max;
> >       ktime_t read_latency_sum;
> > -     ktime_t read_latency_sq_sum;
> > +     ktime_t avg_read_latency;
> > +     ktime_t read_latency_stdev;
> >       ktime_t read_latency_min;
> >       ktime_t read_latency_max;
> >
> > @@ -148,14 +149,16 @@ struct ceph_client_metric {
> >       u64 write_size_min;
> >       u64 write_size_max;
> >       ktime_t write_latency_sum;
> > -     ktime_t write_latency_sq_sum;
> > +     ktime_t avg_write_latency;
> > +     ktime_t write_latency_stdev;
> >       ktime_t write_latency_min;
> >       ktime_t write_latency_max;
> >
> >       spinlock_t metadata_metric_lock;
> >       u64 total_metadatas;
> >       ktime_t metadata_latency_sum;
> > -     ktime_t metadata_latency_sq_sum;
> > +     ktime_t avg_metadata_latency;
> > +     ktime_t metadata_latency_stdev;
> >       ktime_t metadata_latency_min;
> >       ktime_t metadata_latency_max;
> >
>


-- 
Cheers,
Venky


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 2/4] ceph: track average/stdev r/w/m latency
  2021-09-14 13:30     ` Venky Shankar
@ 2021-09-14 13:45       ` Xiubo Li
  2021-09-14 13:52         ` Xiubo Li
  2021-09-14 13:53         ` Venky Shankar
  0 siblings, 2 replies; 19+ messages in thread
From: Xiubo Li @ 2021-09-14 13:45 UTC (permalink / raw)
  To: Venky Shankar; +Cc: Jeff Layton, Patrick Donnelly, ceph-devel


On 9/14/21 9:30 PM, Venky Shankar wrote:
> On Tue, Sep 14, 2021 at 6:39 PM Xiubo Li <xiubli@redhat.com> wrote:
>>
>> On 9/14/21 4:49 PM, Venky Shankar wrote:
>>> The math involved in tracking average and standard deviation
>>> for r/w/m latencies looks incorrect. Fix that up. Also, change
>>> the variable name that tracks standard deviation (*_sq_sum) to
>>> *_stdev.
>>>
>>> Signed-off-by: Venky Shankar <vshankar@redhat.com>
>>> ---
>>>    fs/ceph/debugfs.c | 14 +++++-----
>>>    fs/ceph/metric.c  | 70 ++++++++++++++++++++++-------------------------
>>>    fs/ceph/metric.h  |  9 ++++--
>>>    3 files changed, 45 insertions(+), 48 deletions(-)
>>>
>>> diff --git a/fs/ceph/debugfs.c b/fs/ceph/debugfs.c
>>> index 38b78b45811f..3abfa7ae8220 100644
>>> --- a/fs/ceph/debugfs.c
>>> +++ b/fs/ceph/debugfs.c
>>> @@ -152,7 +152,7 @@ static int metric_show(struct seq_file *s, void *p)
>>>        struct ceph_mds_client *mdsc = fsc->mdsc;
>>>        struct ceph_client_metric *m = &mdsc->metric;
>>>        int nr_caps = 0;
>>> -     s64 total, sum, avg, min, max, sq;
>>> +     s64 total, sum, avg, min, max, stdev;
>>>        u64 sum_sz, avg_sz, min_sz, max_sz;
>>>
>>>        sum = percpu_counter_sum(&m->total_inodes);
>>> @@ -175,9 +175,9 @@ static int metric_show(struct seq_file *s, void *p)
>>>        avg = total > 0 ? DIV64_U64_ROUND_CLOSEST(sum, total) : 0;
>>>        min = m->read_latency_min;
>>>        max = m->read_latency_max;
>>> -     sq = m->read_latency_sq_sum;
>>> +     stdev = m->read_latency_stdev;
>>>        spin_unlock(&m->read_metric_lock);
>>> -     CEPH_LAT_METRIC_SHOW("read", total, avg, min, max, sq);
>>> +     CEPH_LAT_METRIC_SHOW("read", total, avg, min, max, stdev);
>>>
>>>        spin_lock(&m->write_metric_lock);
>>>        total = m->total_writes;
>>> @@ -185,9 +185,9 @@ static int metric_show(struct seq_file *s, void *p)
>>>        avg = total > 0 ? DIV64_U64_ROUND_CLOSEST(sum, total) : 0;
>>>        min = m->write_latency_min;
>>>        max = m->write_latency_max;
>>> -     sq = m->write_latency_sq_sum;
>>> +     stdev = m->write_latency_stdev;
>>>        spin_unlock(&m->write_metric_lock);
>>> -     CEPH_LAT_METRIC_SHOW("write", total, avg, min, max, sq);
>>> +     CEPH_LAT_METRIC_SHOW("write", total, avg, min, max, stdev);
>>>
>>>        spin_lock(&m->metadata_metric_lock);
>>>        total = m->total_metadatas;
>>> @@ -195,9 +195,9 @@ static int metric_show(struct seq_file *s, void *p)
>>>        avg = total > 0 ? DIV64_U64_ROUND_CLOSEST(sum, total) : 0;
>>>        min = m->metadata_latency_min;
>>>        max = m->metadata_latency_max;
>>> -     sq = m->metadata_latency_sq_sum;
>>> +     stdev = m->metadata_latency_stdev;
>>>        spin_unlock(&m->metadata_metric_lock);
>>> -     CEPH_LAT_METRIC_SHOW("metadata", total, avg, min, max, sq);
>>> +     CEPH_LAT_METRIC_SHOW("metadata", total, avg, min, max, stdev);
>>>
>>>        seq_printf(s, "\n");
>>>        seq_printf(s, "item          total       avg_sz(bytes)   min_sz(bytes)   max_sz(bytes)  total_sz(bytes)\n");
>>> diff --git a/fs/ceph/metric.c b/fs/ceph/metric.c
>>> index 226dc38e2909..6b774b1a88ce 100644
>>> --- a/fs/ceph/metric.c
>>> +++ b/fs/ceph/metric.c
>>> @@ -244,7 +244,8 @@ int ceph_metric_init(struct ceph_client_metric *m)
>>>                goto err_i_caps_mis;
>>>
>>>        spin_lock_init(&m->read_metric_lock);
>>> -     m->read_latency_sq_sum = 0;
>>> +     m->read_latency_stdev = 0;
>>> +     m->avg_read_latency = 0;
>>>        m->read_latency_min = KTIME_MAX;
>>>        m->read_latency_max = 0;
>>>        m->total_reads = 0;
>>> @@ -254,7 +255,8 @@ int ceph_metric_init(struct ceph_client_metric *m)
>>>        m->read_size_sum = 0;
>>>
>>>        spin_lock_init(&m->write_metric_lock);
>>> -     m->write_latency_sq_sum = 0;
>>> +     m->write_latency_stdev = 0;
>>> +     m->avg_write_latency = 0;
>>>        m->write_latency_min = KTIME_MAX;
>>>        m->write_latency_max = 0;
>>>        m->total_writes = 0;
>>> @@ -264,7 +266,8 @@ int ceph_metric_init(struct ceph_client_metric *m)
>>>        m->write_size_sum = 0;
>>>
>>>        spin_lock_init(&m->metadata_metric_lock);
>>> -     m->metadata_latency_sq_sum = 0;
>>> +     m->metadata_latency_stdev = 0;
>>> +     m->avg_metadata_latency = 0;
>>>        m->metadata_latency_min = KTIME_MAX;
>>>        m->metadata_latency_max = 0;
>>>        m->total_metadatas = 0;
>>> @@ -322,20 +325,26 @@ void ceph_metric_destroy(struct ceph_client_metric *m)
>>>                max = new;                      \
>>>    }
>>>
>>> -static inline void __update_stdev(ktime_t total, ktime_t lsum,
>>> -                               ktime_t *sq_sump, ktime_t lat)
>>> +static inline void __update_latency(ktime_t *ctotal, ktime_t *lsum,
>>> +                                 ktime_t *lavg, ktime_t *min, ktime_t *max,
>>> +                                 ktime_t *lstdev, ktime_t lat)
>>>    {
>>> -     ktime_t avg, sq;
>>> +     ktime_t total, avg, stdev;
>>>
>>> -     if (unlikely(total == 1))
>>> -             return;
>>> +     total = ++(*ctotal);
>>> +     *lsum += lat;
>>> +
>>> +     METRIC_UPDATE_MIN_MAX(*min, *max, lat);
>>>
>>> -     /* the sq is (lat - old_avg) * (lat - new_avg) */
>>> -     avg = DIV64_U64_ROUND_CLOSEST((lsum - lat), (total - 1));
>>> -     sq = lat - avg;
>>> -     avg = DIV64_U64_ROUND_CLOSEST(lsum, total);
>>> -     sq = sq * (lat - avg);
>>> -     *sq_sump += sq;
>>> +     if (unlikely(total == 1)) {
>>> +             *lavg = lat;
>>> +             *lstdev = 0;
>>> +     } else {
>>> +             avg = *lavg + div64_s64(lat - *lavg, total);
>>> +             stdev = *lstdev + (lat - *lavg)*(lat - avg);
>>> +             *lstdev = int_sqrt(div64_u64(stdev, total - 1));
>>> +             *lavg = avg;
>>> +     }
>> IMO, this is incorrect, the math formula please see:
>>
>> https://www.investopedia.com/ask/answers/042415/what-difference-between-standard-error-means-and-standard-deviation.asp
>>
>> The most accurate result should be:
>>
>> stdev = int_sqrt(sum((X(n) - avg)^2, (X(n-1) - avg)^2, ..., (X(1) -
>> avg)^2) / (n - 1)).
>>
>> While you are computing it:
>>
>> stdev_n = int_sqrt(stdev_(n-1) + (X(n-1) - avg)^2)
> Hmm. The int_sqrt() is probably not needed at this point and can be
> done when sending the metric. That would avoid some cycles.
>
> Also, the way avg is calculated is not totally incorrect; however, I
> would like to keep it similar to how it's done in libcephfs.

In user space this is very easy to do, but not in kernel space,
especially since there is no floating-point arithmetic there.

Currently the kclient computes the avg as:

avg(n) = (avg(n-1) + latency(n)) / (n), which IMO should be close to the
real avg(n) = sum(latency(n), latency(n-1), ..., latency(1)) / n.

Because it's hard to record all the latency values, this is also what
many other user-space tools do to compute the avg.


>> Though the current stdev computing method is not exactly what the math
>> formula does, it's close to it, because the kernel can't record
>> all the latency values and recompute whenever needed, which would
>> occupy a large amount of memory and CPU resources.
> The approach is to calculate the running variance, i.e., compute the
> variance as data (latency) arrives one item at a time.
>
>>
>>>    }
>>>
>>>    void ceph_update_read_metrics(struct ceph_client_metric *m,
>>> @@ -343,23 +352,18 @@ void ceph_update_read_metrics(struct ceph_client_metric *m,
>>>                              unsigned int size, int rc)
>>>    {
>>>        ktime_t lat = ktime_sub(r_end, r_start);
>>> -     ktime_t total;
>>>
>>>        if (unlikely(rc < 0 && rc != -ENOENT && rc != -ETIMEDOUT))
>>>                return;
>>>
>>>        spin_lock(&m->read_metric_lock);
>>> -     total = ++m->total_reads;
>>>        m->read_size_sum += size;
>>> -     m->read_latency_sum += lat;
>>>        METRIC_UPDATE_MIN_MAX(m->read_size_min,
>>>                              m->read_size_max,
>>>                              size);
>>> -     METRIC_UPDATE_MIN_MAX(m->read_latency_min,
>>> -                           m->read_latency_max,
>>> -                           lat);
>>> -     __update_stdev(total, m->read_latency_sum,
>>> -                    &m->read_latency_sq_sum, lat);
>>> +     __update_latency(&m->total_reads, &m->read_latency_sum,
>>> +                      &m->avg_read_latency, &m->read_latency_min,
>>> +                      &m->read_latency_max, &m->read_latency_stdev, lat);
>>>        spin_unlock(&m->read_metric_lock);
>>>    }
>>>
>>> @@ -368,23 +372,18 @@ void ceph_update_write_metrics(struct ceph_client_metric *m,
>>>                               unsigned int size, int rc)
>>>    {
>>>        ktime_t lat = ktime_sub(r_end, r_start);
>>> -     ktime_t total;
>>>
>>>        if (unlikely(rc && rc != -ETIMEDOUT))
>>>                return;
>>>
>>>        spin_lock(&m->write_metric_lock);
>>> -     total = ++m->total_writes;
>>>        m->write_size_sum += size;
>>> -     m->write_latency_sum += lat;
>>>        METRIC_UPDATE_MIN_MAX(m->write_size_min,
>>>                              m->write_size_max,
>>>                              size);
>>> -     METRIC_UPDATE_MIN_MAX(m->write_latency_min,
>>> -                           m->write_latency_max,
>>> -                           lat);
>>> -     __update_stdev(total, m->write_latency_sum,
>>> -                    &m->write_latency_sq_sum, lat);
>>> +     __update_latency(&m->total_writes, &m->write_latency_sum,
>>> +                      &m->avg_write_latency, &m->write_latency_min,
>>> +                      &m->write_latency_max, &m->write_latency_stdev, lat);
>>>        spin_unlock(&m->write_metric_lock);
>>>    }
>>>
>>> @@ -393,18 +392,13 @@ void ceph_update_metadata_metrics(struct ceph_client_metric *m,
>>>                                  int rc)
>>>    {
>>>        ktime_t lat = ktime_sub(r_end, r_start);
>>> -     ktime_t total;
>>>
>>>        if (unlikely(rc && rc != -ENOENT))
>>>                return;
>>>
>>>        spin_lock(&m->metadata_metric_lock);
>>> -     total = ++m->total_metadatas;
>>> -     m->metadata_latency_sum += lat;
>>> -     METRIC_UPDATE_MIN_MAX(m->metadata_latency_min,
>>> -                           m->metadata_latency_max,
>>> -                           lat);
>>> -     __update_stdev(total, m->metadata_latency_sum,
>>> -                    &m->metadata_latency_sq_sum, lat);
>>> +     __update_latency(&m->total_metadatas, &m->metadata_latency_sum,
>>> +                      &m->avg_metadata_latency, &m->metadata_latency_min,
>>> +                      &m->metadata_latency_max, &m->metadata_latency_stdev, lat);
>>>        spin_unlock(&m->metadata_metric_lock);
>>>    }
>>> diff --git a/fs/ceph/metric.h b/fs/ceph/metric.h
>>> index 103ed736f9d2..a5da21b8f8ed 100644
>>> --- a/fs/ceph/metric.h
>>> +++ b/fs/ceph/metric.h
>>> @@ -138,7 +138,8 @@ struct ceph_client_metric {
>>>        u64 read_size_min;
>>>        u64 read_size_max;
>>>        ktime_t read_latency_sum;
>>> -     ktime_t read_latency_sq_sum;
>>> +     ktime_t avg_read_latency;
>>> +     ktime_t read_latency_stdev;
>>>        ktime_t read_latency_min;
>>>        ktime_t read_latency_max;
>>>
>>> @@ -148,14 +149,16 @@ struct ceph_client_metric {
>>>        u64 write_size_min;
>>>        u64 write_size_max;
>>>        ktime_t write_latency_sum;
>>> -     ktime_t write_latency_sq_sum;
>>> +     ktime_t avg_write_latency;
>>> +     ktime_t write_latency_stdev;
>>>        ktime_t write_latency_min;
>>>        ktime_t write_latency_max;
>>>
>>>        spinlock_t metadata_metric_lock;
>>>        u64 total_metadatas;
>>>        ktime_t metadata_latency_sum;
>>> -     ktime_t metadata_latency_sq_sum;
>>> +     ktime_t avg_metadata_latency;
>>> +     ktime_t metadata_latency_stdev;
>>>        ktime_t metadata_latency_min;
>>>        ktime_t metadata_latency_max;
>>>
>



* Re: [PATCH v2 2/4] ceph: track average/stdev r/w/m latency
  2021-09-14 13:45       ` Xiubo Li
@ 2021-09-14 13:52         ` Xiubo Li
  2021-09-14 14:00           ` Venky Shankar
  2021-09-14 13:53         ` Venky Shankar
  1 sibling, 1 reply; 19+ messages in thread
From: Xiubo Li @ 2021-09-14 13:52 UTC (permalink / raw)
  To: Venky Shankar; +Cc: Jeff Layton, Patrick Donnelly, ceph-devel


On 9/14/21 9:45 PM, Xiubo Li wrote:
>
> On 9/14/21 9:30 PM, Venky Shankar wrote:
>> On Tue, Sep 14, 2021 at 6:39 PM Xiubo Li <xiubli@redhat.com> wrote:
>>>
>>> On 9/14/21 4:49 PM, Venky Shankar wrote:
[...]
> In user space this is very easy to do, but not in kernel space,
> especially since there is no floating-point arithmetic there.
>
As I remember, this was the main reason why I was planning to send the
raw metrics to the MDS and let the MDS do the computing.

So if possible, why not just send the raw data to the MDS and let the
MDS do the stdev computation?


> Currently the kclient computes the avg as:
>
> avg(n) = (avg(n-1) + latency(n)) / (n), which IMO should be close to
> the real avg(n) = sum(latency(n), latency(n-1), ..., latency(1)) / n.
>
> Because it's hard to record all the latency values, this is also what
> many other user-space tools do to compute the avg.
>
>
>>> Though the current stdev computing method is not exactly what the math
>>> formula does, it's close to it, because the kernel can't record
>>> all the latency values and recompute whenever needed, which would
>>> occupy a large amount of memory and CPU resources.
>> The approach is to calculate the running variance, i.e., compute the
>> variance as data (latency) arrives one item at a time.
>>
>>>
>>>>    }
>>>>
>>>>    void ceph_update_read_metrics(struct ceph_client_metric *m,
>>>> @@ -343,23 +352,18 @@ void ceph_update_read_metrics(struct 
>>>> ceph_client_metric *m,
>>>>                              unsigned int size, int rc)
>>>>    {
>>>>        ktime_t lat = ktime_sub(r_end, r_start);
>>>> -     ktime_t total;
>>>>
>>>>        if (unlikely(rc < 0 && rc != -ENOENT && rc != -ETIMEDOUT))
>>>>                return;
>>>>
>>>>        spin_lock(&m->read_metric_lock);
>>>> -     total = ++m->total_reads;
>>>>        m->read_size_sum += size;
>>>> -     m->read_latency_sum += lat;
>>>>        METRIC_UPDATE_MIN_MAX(m->read_size_min,
>>>>                              m->read_size_max,
>>>>                              size);
>>>> -     METRIC_UPDATE_MIN_MAX(m->read_latency_min,
>>>> -                           m->read_latency_max,
>>>> -                           lat);
>>>> -     __update_stdev(total, m->read_latency_sum,
>>>> -                    &m->read_latency_sq_sum, lat);
>>>> +     __update_latency(&m->total_reads, &m->read_latency_sum,
>>>> +                      &m->avg_read_latency, &m->read_latency_min,
>>>> +                      &m->read_latency_max, 
>>>> &m->read_latency_stdev, lat);
>>>>        spin_unlock(&m->read_metric_lock);
>>>>    }
>>>>
>>>> @@ -368,23 +372,18 @@ void ceph_update_write_metrics(struct 
>>>> ceph_client_metric *m,
>>>>                               unsigned int size, int rc)
>>>>    {
>>>>        ktime_t lat = ktime_sub(r_end, r_start);
>>>> -     ktime_t total;
>>>>
>>>>        if (unlikely(rc && rc != -ETIMEDOUT))
>>>>                return;
>>>>
>>>>        spin_lock(&m->write_metric_lock);
>>>> -     total = ++m->total_writes;
>>>>        m->write_size_sum += size;
>>>> -     m->write_latency_sum += lat;
>>>>        METRIC_UPDATE_MIN_MAX(m->write_size_min,
>>>>                              m->write_size_max,
>>>>                              size);
>>>> -     METRIC_UPDATE_MIN_MAX(m->write_latency_min,
>>>> -                           m->write_latency_max,
>>>> -                           lat);
>>>> -     __update_stdev(total, m->write_latency_sum,
>>>> -                    &m->write_latency_sq_sum, lat);
>>>> +     __update_latency(&m->total_writes, &m->write_latency_sum,
>>>> +                      &m->avg_write_latency, &m->write_latency_min,
>>>> +                      &m->write_latency_max, 
>>>> &m->write_latency_stdev, lat);
>>>>        spin_unlock(&m->write_metric_lock);
>>>>    }
>>>>
>>>> @@ -393,18 +392,13 @@ void ceph_update_metadata_metrics(struct 
>>>> ceph_client_metric *m,
>>>>                                  int rc)
>>>>    {
>>>>        ktime_t lat = ktime_sub(r_end, r_start);
>>>> -     ktime_t total;
>>>>
>>>>        if (unlikely(rc && rc != -ENOENT))
>>>>                return;
>>>>
>>>>        spin_lock(&m->metadata_metric_lock);
>>>> -     total = ++m->total_metadatas;
>>>> -     m->metadata_latency_sum += lat;
>>>> -     METRIC_UPDATE_MIN_MAX(m->metadata_latency_min,
>>>> -                           m->metadata_latency_max,
>>>> -                           lat);
>>>> -     __update_stdev(total, m->metadata_latency_sum,
>>>> -                    &m->metadata_latency_sq_sum, lat);
>>>> +     __update_latency(&m->total_metadatas, &m->metadata_latency_sum,
>>>> +                      &m->avg_metadata_latency, 
>>>> &m->metadata_latency_min,
>>>> +                      &m->metadata_latency_max, 
>>>> &m->metadata_latency_stdev, lat);
>>>>        spin_unlock(&m->metadata_metric_lock);
>>>>    }
>>>> diff --git a/fs/ceph/metric.h b/fs/ceph/metric.h
>>>> index 103ed736f9d2..a5da21b8f8ed 100644
>>>> --- a/fs/ceph/metric.h
>>>> +++ b/fs/ceph/metric.h
>>>> @@ -138,7 +138,8 @@ struct ceph_client_metric {
>>>>        u64 read_size_min;
>>>>        u64 read_size_max;
>>>>        ktime_t read_latency_sum;
>>>> -     ktime_t read_latency_sq_sum;
>>>> +     ktime_t avg_read_latency;
>>>> +     ktime_t read_latency_stdev;
>>>>        ktime_t read_latency_min;
>>>>        ktime_t read_latency_max;
>>>>
>>>> @@ -148,14 +149,16 @@ struct ceph_client_metric {
>>>>        u64 write_size_min;
>>>>        u64 write_size_max;
>>>>        ktime_t write_latency_sum;
>>>> -     ktime_t write_latency_sq_sum;
>>>> +     ktime_t avg_write_latency;
>>>> +     ktime_t write_latency_stdev;
>>>>        ktime_t write_latency_min;
>>>>        ktime_t write_latency_max;
>>>>
>>>>        spinlock_t metadata_metric_lock;
>>>>        u64 total_metadatas;
>>>>        ktime_t metadata_latency_sum;
>>>> -     ktime_t metadata_latency_sq_sum;
>>>> +     ktime_t avg_metadata_latency;
>>>> +     ktime_t metadata_latency_stdev;
>>>>        ktime_t metadata_latency_min;
>>>>        ktime_t metadata_latency_max;
>>>>
>>



* Re: [PATCH v2 2/4] ceph: track average/stdev r/w/m latency
  2021-09-14 13:45       ` Xiubo Li
  2021-09-14 13:52         ` Xiubo Li
@ 2021-09-14 13:53         ` Venky Shankar
  2021-09-14 13:58           ` Xiubo Li
  1 sibling, 1 reply; 19+ messages in thread
From: Venky Shankar @ 2021-09-14 13:53 UTC (permalink / raw)
  To: Xiubo Li; +Cc: Jeff Layton, Patrick Donnelly, ceph-devel

On Tue, Sep 14, 2021 at 7:16 PM Xiubo Li <xiubli@redhat.com> wrote:
>
>
> On 9/14/21 9:30 PM, Venky Shankar wrote:
> > On Tue, Sep 14, 2021 at 6:39 PM Xiubo Li <xiubli@redhat.com> wrote:
> >>
> >> On 9/14/21 4:49 PM, Venky Shankar wrote:
> >>> The math involved in tracking average and standard deviation
> >>> for r/w/m latencies looks incorrect. Fix that up. Also, change
> >>> the variable name that tracks standard deviation (*_sq_sum) to
> >>> *_stdev.
> >>>
> >>> Signed-off-by: Venky Shankar <vshankar@redhat.com>
> >>> ---
> >>>    fs/ceph/debugfs.c | 14 +++++-----
> >>>    fs/ceph/metric.c  | 70 ++++++++++++++++++++++-------------------------
> >>>    fs/ceph/metric.h  |  9 ++++--
> >>>    3 files changed, 45 insertions(+), 48 deletions(-)
> >>>
> >>> diff --git a/fs/ceph/debugfs.c b/fs/ceph/debugfs.c
> >>> index 38b78b45811f..3abfa7ae8220 100644
> >>> --- a/fs/ceph/debugfs.c
> >>> +++ b/fs/ceph/debugfs.c
> >>> @@ -152,7 +152,7 @@ static int metric_show(struct seq_file *s, void *p)
> >>>        struct ceph_mds_client *mdsc = fsc->mdsc;
> >>>        struct ceph_client_metric *m = &mdsc->metric;
> >>>        int nr_caps = 0;
> >>> -     s64 total, sum, avg, min, max, sq;
> >>> +     s64 total, sum, avg, min, max, stdev;
> >>>        u64 sum_sz, avg_sz, min_sz, max_sz;
> >>>
> >>>        sum = percpu_counter_sum(&m->total_inodes);
> >>> @@ -175,9 +175,9 @@ static int metric_show(struct seq_file *s, void *p)
> >>>        avg = total > 0 ? DIV64_U64_ROUND_CLOSEST(sum, total) : 0;
> >>>        min = m->read_latency_min;
> >>>        max = m->read_latency_max;
> >>> -     sq = m->read_latency_sq_sum;
> >>> +     stdev = m->read_latency_stdev;
> >>>        spin_unlock(&m->read_metric_lock);
> >>> -     CEPH_LAT_METRIC_SHOW("read", total, avg, min, max, sq);
> >>> +     CEPH_LAT_METRIC_SHOW("read", total, avg, min, max, stdev);
> >>>
> >>>        spin_lock(&m->write_metric_lock);
> >>>        total = m->total_writes;
> >>> @@ -185,9 +185,9 @@ static int metric_show(struct seq_file *s, void *p)
> >>>        avg = total > 0 ? DIV64_U64_ROUND_CLOSEST(sum, total) : 0;
> >>>        min = m->write_latency_min;
> >>>        max = m->write_latency_max;
> >>> -     sq = m->write_latency_sq_sum;
> >>> +     stdev = m->write_latency_stdev;
> >>>        spin_unlock(&m->write_metric_lock);
> >>> -     CEPH_LAT_METRIC_SHOW("write", total, avg, min, max, sq);
> >>> +     CEPH_LAT_METRIC_SHOW("write", total, avg, min, max, stdev);
> >>>
> >>>        spin_lock(&m->metadata_metric_lock);
> >>>        total = m->total_metadatas;
> >>> @@ -195,9 +195,9 @@ static int metric_show(struct seq_file *s, void *p)
> >>>        avg = total > 0 ? DIV64_U64_ROUND_CLOSEST(sum, total) : 0;
> >>>        min = m->metadata_latency_min;
> >>>        max = m->metadata_latency_max;
> >>> -     sq = m->metadata_latency_sq_sum;
> >>> +     stdev = m->metadata_latency_stdev;
> >>>        spin_unlock(&m->metadata_metric_lock);
> >>> -     CEPH_LAT_METRIC_SHOW("metadata", total, avg, min, max, sq);
> >>> +     CEPH_LAT_METRIC_SHOW("metadata", total, avg, min, max, stdev);
> >>>
> >>>        seq_printf(s, "\n");
> >>>        seq_printf(s, "item          total       avg_sz(bytes)   min_sz(bytes)   max_sz(bytes)  total_sz(bytes)\n");
> >>> diff --git a/fs/ceph/metric.c b/fs/ceph/metric.c
> >>> index 226dc38e2909..6b774b1a88ce 100644
> >>> --- a/fs/ceph/metric.c
> >>> +++ b/fs/ceph/metric.c
> >>> @@ -244,7 +244,8 @@ int ceph_metric_init(struct ceph_client_metric *m)
> >>>                goto err_i_caps_mis;
> >>>
> >>>        spin_lock_init(&m->read_metric_lock);
> >>> -     m->read_latency_sq_sum = 0;
> >>> +     m->read_latency_stdev = 0;
> >>> +     m->avg_read_latency = 0;
> >>>        m->read_latency_min = KTIME_MAX;
> >>>        m->read_latency_max = 0;
> >>>        m->total_reads = 0;
> >>> @@ -254,7 +255,8 @@ int ceph_metric_init(struct ceph_client_metric *m)
> >>>        m->read_size_sum = 0;
> >>>
> >>>        spin_lock_init(&m->write_metric_lock);
> >>> -     m->write_latency_sq_sum = 0;
> >>> +     m->write_latency_stdev = 0;
> >>> +     m->avg_write_latency = 0;
> >>>        m->write_latency_min = KTIME_MAX;
> >>>        m->write_latency_max = 0;
> >>>        m->total_writes = 0;
> >>> @@ -264,7 +266,8 @@ int ceph_metric_init(struct ceph_client_metric *m)
> >>>        m->write_size_sum = 0;
> >>>
> >>>        spin_lock_init(&m->metadata_metric_lock);
> >>> -     m->metadata_latency_sq_sum = 0;
> >>> +     m->metadata_latency_stdev = 0;
> >>> +     m->avg_metadata_latency = 0;
> >>>        m->metadata_latency_min = KTIME_MAX;
> >>>        m->metadata_latency_max = 0;
> >>>        m->total_metadatas = 0;
> >>> @@ -322,20 +325,26 @@ void ceph_metric_destroy(struct ceph_client_metric *m)
> >>>                max = new;                      \
> >>>    }
> >>>
> >>> -static inline void __update_stdev(ktime_t total, ktime_t lsum,
> >>> -                               ktime_t *sq_sump, ktime_t lat)
> >>> +static inline void __update_latency(ktime_t *ctotal, ktime_t *lsum,
> >>> +                                 ktime_t *lavg, ktime_t *min, ktime_t *max,
> >>> +                                 ktime_t *lstdev, ktime_t lat)
> >>>    {
> >>> -     ktime_t avg, sq;
> >>> +     ktime_t total, avg, stdev;
> >>>
> >>> -     if (unlikely(total == 1))
> >>> -             return;
> >>> +     total = ++(*ctotal);
> >>> +     *lsum += lat;
> >>> +
> >>> +     METRIC_UPDATE_MIN_MAX(*min, *max, lat);
> >>>
> >>> -     /* the sq is (lat - old_avg) * (lat - new_avg) */
> >>> -     avg = DIV64_U64_ROUND_CLOSEST((lsum - lat), (total - 1));
> >>> -     sq = lat - avg;
> >>> -     avg = DIV64_U64_ROUND_CLOSEST(lsum, total);
> >>> -     sq = sq * (lat - avg);
> >>> -     *sq_sump += sq;
> >>> +     if (unlikely(total == 1)) {
> >>> +             *lavg = lat;
> >>> +             *lstdev = 0;
> >>> +     } else {
> >>> +             avg = *lavg + div64_s64(lat - *lavg, total);
> >>> +             stdev = *lstdev + (lat - *lavg)*(lat - avg);
> >>> +             *lstdev = int_sqrt(div64_u64(stdev, total - 1));
> >>> +             *lavg = avg;
> >>> +     }
> >> IMO, this is incorrect, the math formula please see:
> >>
> >> https://www.investopedia.com/ask/answers/042415/what-difference-between-standard-error-means-and-standard-deviation.asp
> >>
> >> The most accurate result should be:
> >>
> >> stdev = int_sqrt(sum((X(n) - avg)^2, (X(n-1) - avg)^2, ..., (X(1) -
> >> avg)^2) / (n - 1)).
> >>
> >> While you are computing it:
> >>
> >> stdev_n = int_sqrt(stdev_(n-1) + (X(n-1) - avg)^2)
> > Hmm. The int_sqrt() is probably not needed at this point and can be
> > done when sending the metric. That would avoid some cycles.
> >
> > Also, the way avg is calculated is not totally incorrect; however, I
> > would like to keep it similar to how it's done in libcephfs.
>
> In user space this is very easy to do, but not in kernel space,
> especially since there is no floating-point arithmetic there.
>
> Currently the kclient computes the avg as:
>
> avg(n) = (avg(n-1) + latency(n)) / (n), which IMO should be close to
> the real avg(n) = sum(latency(n), latency(n-1), ..., latency(1)) / n.

That's how it's done in libcephfs too.

>
> Because it's hard to record all the latency values, this is also what
> many other user-space tools do to compute the avg.
>
>
> >> Though the current stdev computing method is not exactly what the math
> >> formula does, it's close to it, because the kernel can't record
> >> all the latency values and recompute whenever needed, which would
> >> occupy a large amount of memory and CPU resources.
> > The approach is to calculate the running variance, i.e., compute the
> > variance as data (latency) arrives one item at a time.
> >
> >>
> >>>    }
> >>>
> >>>    void ceph_update_read_metrics(struct ceph_client_metric *m,
> >>> @@ -343,23 +352,18 @@ void ceph_update_read_metrics(struct ceph_client_metric *m,
> >>>                              unsigned int size, int rc)
> >>>    {
> >>>        ktime_t lat = ktime_sub(r_end, r_start);
> >>> -     ktime_t total;
> >>>
> >>>        if (unlikely(rc < 0 && rc != -ENOENT && rc != -ETIMEDOUT))
> >>>                return;
> >>>
> >>>        spin_lock(&m->read_metric_lock);
> >>> -     total = ++m->total_reads;
> >>>        m->read_size_sum += size;
> >>> -     m->read_latency_sum += lat;
> >>>        METRIC_UPDATE_MIN_MAX(m->read_size_min,
> >>>                              m->read_size_max,
> >>>                              size);
> >>> -     METRIC_UPDATE_MIN_MAX(m->read_latency_min,
> >>> -                           m->read_latency_max,
> >>> -                           lat);
> >>> -     __update_stdev(total, m->read_latency_sum,
> >>> -                    &m->read_latency_sq_sum, lat);
> >>> +     __update_latency(&m->total_reads, &m->read_latency_sum,
> >>> +                      &m->avg_read_latency, &m->read_latency_min,
> >>> +                      &m->read_latency_max, &m->read_latency_stdev, lat);
> >>>        spin_unlock(&m->read_metric_lock);
> >>>    }
> >>>
> >>> @@ -368,23 +372,18 @@ void ceph_update_write_metrics(struct ceph_client_metric *m,
> >>>                               unsigned int size, int rc)
> >>>    {
> >>>        ktime_t lat = ktime_sub(r_end, r_start);
> >>> -     ktime_t total;
> >>>
> >>>        if (unlikely(rc && rc != -ETIMEDOUT))
> >>>                return;
> >>>
> >>>        spin_lock(&m->write_metric_lock);
> >>> -     total = ++m->total_writes;
> >>>        m->write_size_sum += size;
> >>> -     m->write_latency_sum += lat;
> >>>        METRIC_UPDATE_MIN_MAX(m->write_size_min,
> >>>                              m->write_size_max,
> >>>                              size);
> >>> -     METRIC_UPDATE_MIN_MAX(m->write_latency_min,
> >>> -                           m->write_latency_max,
> >>> -                           lat);
> >>> -     __update_stdev(total, m->write_latency_sum,
> >>> -                    &m->write_latency_sq_sum, lat);
> >>> +     __update_latency(&m->total_writes, &m->write_latency_sum,
> >>> +                      &m->avg_write_latency, &m->write_latency_min,
> >>> +                      &m->write_latency_max, &m->write_latency_stdev, lat);
> >>>        spin_unlock(&m->write_metric_lock);
> >>>    }
> >>>
> >>> @@ -393,18 +392,13 @@ void ceph_update_metadata_metrics(struct ceph_client_metric *m,
> >>>                                  int rc)
> >>>    {
> >>>        ktime_t lat = ktime_sub(r_end, r_start);
> >>> -     ktime_t total;
> >>>
> >>>        if (unlikely(rc && rc != -ENOENT))
> >>>                return;
> >>>
> >>>        spin_lock(&m->metadata_metric_lock);
> >>> -     total = ++m->total_metadatas;
> >>> -     m->metadata_latency_sum += lat;
> >>> -     METRIC_UPDATE_MIN_MAX(m->metadata_latency_min,
> >>> -                           m->metadata_latency_max,
> >>> -                           lat);
> >>> -     __update_stdev(total, m->metadata_latency_sum,
> >>> -                    &m->metadata_latency_sq_sum, lat);
> >>> +     __update_latency(&m->total_metadatas, &m->metadata_latency_sum,
> >>> +                      &m->avg_metadata_latency, &m->metadata_latency_min,
> >>> +                      &m->metadata_latency_max, &m->metadata_latency_stdev, lat);
> >>>        spin_unlock(&m->metadata_metric_lock);
> >>>    }
> >>> diff --git a/fs/ceph/metric.h b/fs/ceph/metric.h
> >>> index 103ed736f9d2..a5da21b8f8ed 100644
> >>> --- a/fs/ceph/metric.h
> >>> +++ b/fs/ceph/metric.h
> >>> @@ -138,7 +138,8 @@ struct ceph_client_metric {
> >>>        u64 read_size_min;
> >>>        u64 read_size_max;
> >>>        ktime_t read_latency_sum;
> >>> -     ktime_t read_latency_sq_sum;
> >>> +     ktime_t avg_read_latency;
> >>> +     ktime_t read_latency_stdev;
> >>>        ktime_t read_latency_min;
> >>>        ktime_t read_latency_max;
> >>>
> >>> @@ -148,14 +149,16 @@ struct ceph_client_metric {
> >>>        u64 write_size_min;
> >>>        u64 write_size_max;
> >>>        ktime_t write_latency_sum;
> >>> -     ktime_t write_latency_sq_sum;
> >>> +     ktime_t avg_write_latency;
> >>> +     ktime_t write_latency_stdev;
> >>>        ktime_t write_latency_min;
> >>>        ktime_t write_latency_max;
> >>>
> >>>        spinlock_t metadata_metric_lock;
> >>>        u64 total_metadatas;
> >>>        ktime_t metadata_latency_sum;
> >>> -     ktime_t metadata_latency_sq_sum;
> >>> +     ktime_t avg_metadata_latency;
> >>> +     ktime_t metadata_latency_stdev;
> >>>        ktime_t metadata_latency_min;
> >>>        ktime_t metadata_latency_max;
> >>>
> >
>


-- 
Cheers,
Venky



* Re: [PATCH v2 3/4] ceph: include average/stddev r/w/m latency in mds metrics
  2021-09-14  8:49 ` [PATCH v2 3/4] ceph: include average/stddev r/w/m latency in mds metrics Venky Shankar
@ 2021-09-14 13:57   ` Xiubo Li
  0 siblings, 0 replies; 19+ messages in thread
From: Xiubo Li @ 2021-09-14 13:57 UTC (permalink / raw)
  To: Venky Shankar, jlayton, pdonnell; +Cc: ceph-devel


On 9/14/21 4:49 PM, Venky Shankar wrote:
> The use of `jiffies_to_timespec64()` seems incorrect too, switch
> that to `ktime_to_timespec64()`.

I think this was missed after I switched jiffies to ktime for
r_start and r_end in my previous patch set.

This LGTM :-)
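(For context on why jiffies_to_timespec64() was wrong here: r_start/r_end are ktime_t values, i.e. nanosecond counts, while jiffies_to_timespec64() interprets its argument as HZ-based ticks. A user-space sketch of the two conversions, assuming HZ=250 purely for illustration; the struct and function names are simplified stand-ins, not the kernel helpers themselves:)

```c
#include <assert.h>
#include <stdint.h>

#define HZ 250			/* assumed tick rate, for illustration only */
#define NSEC_PER_SEC 1000000000LL

struct ts { int64_t tv_sec; int64_t tv_nsec; };

/* Models ktime_to_timespec64(): a ktime_t is a count of nanoseconds. */
static struct ts ktime_to_ts(int64_t ns)
{
	struct ts t = { ns / NSEC_PER_SEC, ns % NSEC_PER_SEC };
	return t;
}

/* Models jiffies_to_timespec64(): jiffies are ticks of 1/HZ seconds. */
static struct ts jiffies_to_ts(uint64_t j)
{
	struct ts t = { j / HZ, (j % HZ) * (NSEC_PER_SEC / HZ) };
	return t;
}
```

Feeding a nanosecond latency sum through the jiffies conversion inflates the reported seconds by a factor of NSEC_PER_SEC/HZ: a 1.5 s latency (1500000000 ns) comes out as 1.5 s via the ktime conversion but as millions of seconds via the jiffies one.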

> Signed-off-by: Venky Shankar <vshankar@redhat.com>
> ---
>   fs/ceph/metric.c | 35 +++++++++++++++++++----------------
>   fs/ceph/metric.h | 48 +++++++++++++++++++++++++++++++++---------------
>   2 files changed, 52 insertions(+), 31 deletions(-)
>
> diff --git a/fs/ceph/metric.c b/fs/ceph/metric.c
> index 6b774b1a88ce..78a50bb7bd0f 100644
> --- a/fs/ceph/metric.c
> +++ b/fs/ceph/metric.c
> @@ -8,6 +8,13 @@
>   #include "metric.h"
>   #include "mds_client.h"
>   
> +static void to_ceph_timespec(struct ceph_timespec *ts, ktime_t val)
> +{
> +	struct timespec64 t = ktime_to_timespec64(val);
> +	ts->tv_sec = cpu_to_le32(t.tv_sec);
> +	ts->tv_nsec = cpu_to_le32(t.tv_nsec);
> +}
> +
>   static bool ceph_mdsc_send_metrics(struct ceph_mds_client *mdsc,
>   				   struct ceph_mds_session *s)
>   {
> @@ -26,7 +33,6 @@ static bool ceph_mdsc_send_metrics(struct ceph_mds_client *mdsc,
>   	u64 nr_caps = atomic64_read(&m->total_caps);
>   	u32 header_len = sizeof(struct ceph_metric_header);
>   	struct ceph_msg *msg;
> -	struct timespec64 ts;
>   	s64 sum;
>   	s32 items = 0;
>   	s32 len;
> @@ -59,37 +65,34 @@ static bool ceph_mdsc_send_metrics(struct ceph_mds_client *mdsc,
>   	/* encode the read latency metric */
>   	read = (struct ceph_metric_read_latency *)(cap + 1);
>   	read->header.type = cpu_to_le32(CLIENT_METRIC_TYPE_READ_LATENCY);
> -	read->header.ver = 1;
> +	read->header.ver = 2;
>   	read->header.compat = 1;
>   	read->header.data_len = cpu_to_le32(sizeof(*read) - header_len);
> -	sum = m->read_latency_sum;
> -	jiffies_to_timespec64(sum, &ts);
> -	read->lat.tv_sec = cpu_to_le32(ts.tv_sec);
> -	read->lat.tv_nsec = cpu_to_le32(ts.tv_nsec);
> +	to_ceph_timespec(&read->lat, m->read_latency_sum);
> +	to_ceph_timespec(&read->avg, m->avg_read_latency);
> +	to_ceph_timespec(&read->stdev, m->read_latency_stdev);
>   	items++;
>   
>   	/* encode the write latency metric */
>   	write = (struct ceph_metric_write_latency *)(read + 1);
>   	write->header.type = cpu_to_le32(CLIENT_METRIC_TYPE_WRITE_LATENCY);
> -	write->header.ver = 1;
> +	write->header.ver = 2;
>   	write->header.compat = 1;
>   	write->header.data_len = cpu_to_le32(sizeof(*write) - header_len);
> -	sum = m->write_latency_sum;
> -	jiffies_to_timespec64(sum, &ts);
> -	write->lat.tv_sec = cpu_to_le32(ts.tv_sec);
> -	write->lat.tv_nsec = cpu_to_le32(ts.tv_nsec);
> +	to_ceph_timespec(&write->lat, m->write_latency_sum);
> +	to_ceph_timespec(&write->avg, m->avg_write_latency);
> +	to_ceph_timespec(&write->stdev, m->write_latency_stdev);
>   	items++;
>   
>   	/* encode the metadata latency metric */
>   	meta = (struct ceph_metric_metadata_latency *)(write + 1);
>   	meta->header.type = cpu_to_le32(CLIENT_METRIC_TYPE_METADATA_LATENCY);
> -	meta->header.ver = 1;
> +	meta->header.ver = 2;
>   	meta->header.compat = 1;
>   	meta->header.data_len = cpu_to_le32(sizeof(*meta) - header_len);
> -	sum = m->metadata_latency_sum;
> -	jiffies_to_timespec64(sum, &ts);
> -	meta->lat.tv_sec = cpu_to_le32(ts.tv_sec);
> -	meta->lat.tv_nsec = cpu_to_le32(ts.tv_nsec);
> +	to_ceph_timespec(&meta->lat, m->metadata_latency_sum);
> +	to_ceph_timespec(&meta->avg, m->avg_metadata_latency);
> +	to_ceph_timespec(&meta->stdev, m->metadata_latency_stdev);
>   	items++;
>   
>   	/* encode the dentry lease metric */
> diff --git a/fs/ceph/metric.h b/fs/ceph/metric.h
> index a5da21b8f8ed..2dd506dedebf 100644
> --- a/fs/ceph/metric.h
> +++ b/fs/ceph/metric.h
> @@ -19,27 +19,39 @@ enum ceph_metric_type {
>   	CLIENT_METRIC_TYPE_OPENED_INODES,
>   	CLIENT_METRIC_TYPE_READ_IO_SIZES,
>   	CLIENT_METRIC_TYPE_WRITE_IO_SIZES,
> -
> -	CLIENT_METRIC_TYPE_MAX = CLIENT_METRIC_TYPE_WRITE_IO_SIZES,
> +	CLIENT_METRIC_TYPE_AVG_READ_LATENCY,
> +	CLIENT_METRIC_TYPE_STDEV_READ_LATENCY,
> +	CLIENT_METRIC_TYPE_AVG_WRITE_LATENCY,
> +	CLIENT_METRIC_TYPE_STDEV_WRITE_LATENCY,
> +	CLIENT_METRIC_TYPE_AVG_METADATA_LATENCY,
> +	CLIENT_METRIC_TYPE_STDEV_METADATA_LATENCY,
> +
> +	CLIENT_METRIC_TYPE_MAX = CLIENT_METRIC_TYPE_STDEV_METADATA_LATENCY,
>   };
>   
>   /*
>    * This will always have the highest metric bit value
>    * as the last element of the array.
>    */
> -#define CEPHFS_METRIC_SPEC_CLIENT_SUPPORTED {	\
> -	CLIENT_METRIC_TYPE_CAP_INFO,		\
> -	CLIENT_METRIC_TYPE_READ_LATENCY,	\
> -	CLIENT_METRIC_TYPE_WRITE_LATENCY,	\
> -	CLIENT_METRIC_TYPE_METADATA_LATENCY,	\
> -	CLIENT_METRIC_TYPE_DENTRY_LEASE,	\
> -	CLIENT_METRIC_TYPE_OPENED_FILES,	\
> -	CLIENT_METRIC_TYPE_PINNED_ICAPS,	\
> -	CLIENT_METRIC_TYPE_OPENED_INODES,	\
> -	CLIENT_METRIC_TYPE_READ_IO_SIZES,	\
> -	CLIENT_METRIC_TYPE_WRITE_IO_SIZES,	\
> -						\
> -	CLIENT_METRIC_TYPE_MAX,			\
> +#define CEPHFS_METRIC_SPEC_CLIENT_SUPPORTED {	    \
> +	CLIENT_METRIC_TYPE_CAP_INFO,		    \
> +	CLIENT_METRIC_TYPE_READ_LATENCY,	    \
> +	CLIENT_METRIC_TYPE_WRITE_LATENCY,	    \
> +	CLIENT_METRIC_TYPE_METADATA_LATENCY,	    \
> +	CLIENT_METRIC_TYPE_DENTRY_LEASE,	    \
> +	CLIENT_METRIC_TYPE_OPENED_FILES,	    \
> +	CLIENT_METRIC_TYPE_PINNED_ICAPS,	    \
> +	CLIENT_METRIC_TYPE_OPENED_INODES,	    \
> +	CLIENT_METRIC_TYPE_READ_IO_SIZES,	    \
> +	CLIENT_METRIC_TYPE_WRITE_IO_SIZES,	    \
> +	CLIENT_METRIC_TYPE_AVG_READ_LATENCY,	    \
> +	CLIENT_METRIC_TYPE_STDEV_READ_LATENCY,	    \
> +	CLIENT_METRIC_TYPE_AVG_WRITE_LATENCY,	    \
> +	CLIENT_METRIC_TYPE_STDEV_WRITE_LATENCY,	    \
> +	CLIENT_METRIC_TYPE_AVG_METADATA_LATENCY,    \
> +	CLIENT_METRIC_TYPE_STDEV_METADATA_LATENCY,  \
> +						    \
> +	CLIENT_METRIC_TYPE_MAX,			    \
>   }
>   
>   struct ceph_metric_header {
> @@ -61,18 +73,24 @@ struct ceph_metric_cap {
>   struct ceph_metric_read_latency {
>   	struct ceph_metric_header header;
>   	struct ceph_timespec lat;
> +	struct ceph_timespec avg;
> +	struct ceph_timespec stdev;
>   } __packed;
>   
>   /* metric write latency header */
>   struct ceph_metric_write_latency {
>   	struct ceph_metric_header header;
>   	struct ceph_timespec lat;
> +	struct ceph_timespec avg;
> +	struct ceph_timespec stdev;
>   } __packed;
>   
>   /* metric metadata latency header */
>   struct ceph_metric_metadata_latency {
>   	struct ceph_metric_header header;
>   	struct ceph_timespec lat;
> +	struct ceph_timespec avg;
> +	struct ceph_timespec stdev;
>   } __packed;
>   
>   /* metric dentry lease header */



* Re: [PATCH v2 2/4] ceph: track average/stdev r/w/m latency
  2021-09-14 13:53         ` Venky Shankar
@ 2021-09-14 13:58           ` Xiubo Li
  0 siblings, 0 replies; 19+ messages in thread
From: Xiubo Li @ 2021-09-14 13:58 UTC (permalink / raw)
  To: Venky Shankar; +Cc: Jeff Layton, Patrick Donnelly, ceph-devel


On 9/14/21 9:53 PM, Venky Shankar wrote:
> On Tue, Sep 14, 2021 at 7:16 PM Xiubo Li <xiubli@redhat.com> wrote:
>>
>> On 9/14/21 9:30 PM, Venky Shankar wrote:
>>> On Tue, Sep 14, 2021 at 6:39 PM Xiubo Li <xiubli@redhat.com> wrote:
>>>> On 9/14/21 4:49 PM, Venky Shankar wrote:
>>>>> The math involved in tracking average and standard deviation
>>>>> for r/w/m latencies looks incorrect. Fix that up. Also, change
>>>>> the variable name that tracks standard deviation (*_sq_sum) to
>>>>> *_stdev.
>>>>>
>>>>> Signed-off-by: Venky Shankar <vshankar@redhat.com>
>>>>> ---
>>>>>     fs/ceph/debugfs.c | 14 +++++-----
>>>>>     fs/ceph/metric.c  | 70 ++++++++++++++++++++++-------------------------
>>>>>     fs/ceph/metric.h  |  9 ++++--
>>>>>     3 files changed, 45 insertions(+), 48 deletions(-)
>>>>>
>>>>> diff --git a/fs/ceph/debugfs.c b/fs/ceph/debugfs.c
>>>>> index 38b78b45811f..3abfa7ae8220 100644
>>>>> --- a/fs/ceph/debugfs.c
>>>>> +++ b/fs/ceph/debugfs.c
>>>>> @@ -152,7 +152,7 @@ static int metric_show(struct seq_file *s, void *p)
>>>>>         struct ceph_mds_client *mdsc = fsc->mdsc;
>>>>>         struct ceph_client_metric *m = &mdsc->metric;
>>>>>         int nr_caps = 0;
>>>>> -     s64 total, sum, avg, min, max, sq;
>>>>> +     s64 total, sum, avg, min, max, stdev;
>>>>>         u64 sum_sz, avg_sz, min_sz, max_sz;
>>>>>
>>>>>         sum = percpu_counter_sum(&m->total_inodes);
>>>>> @@ -175,9 +175,9 @@ static int metric_show(struct seq_file *s, void *p)
>>>>>         avg = total > 0 ? DIV64_U64_ROUND_CLOSEST(sum, total) : 0;
>>>>>         min = m->read_latency_min;
>>>>>         max = m->read_latency_max;
>>>>> -     sq = m->read_latency_sq_sum;
>>>>> +     stdev = m->read_latency_stdev;
>>>>>         spin_unlock(&m->read_metric_lock);
>>>>> -     CEPH_LAT_METRIC_SHOW("read", total, avg, min, max, sq);
>>>>> +     CEPH_LAT_METRIC_SHOW("read", total, avg, min, max, stdev);
>>>>>
>>>>>         spin_lock(&m->write_metric_lock);
>>>>>         total = m->total_writes;
>>>>> @@ -185,9 +185,9 @@ static int metric_show(struct seq_file *s, void *p)
>>>>>         avg = total > 0 ? DIV64_U64_ROUND_CLOSEST(sum, total) : 0;
>>>>>         min = m->write_latency_min;
>>>>>         max = m->write_latency_max;
>>>>> -     sq = m->write_latency_sq_sum;
>>>>> +     stdev = m->write_latency_stdev;
>>>>>         spin_unlock(&m->write_metric_lock);
>>>>> -     CEPH_LAT_METRIC_SHOW("write", total, avg, min, max, sq);
>>>>> +     CEPH_LAT_METRIC_SHOW("write", total, avg, min, max, stdev);
>>>>>
>>>>>         spin_lock(&m->metadata_metric_lock);
>>>>>         total = m->total_metadatas;
>>>>> @@ -195,9 +195,9 @@ static int metric_show(struct seq_file *s, void *p)
>>>>>         avg = total > 0 ? DIV64_U64_ROUND_CLOSEST(sum, total) : 0;
>>>>>         min = m->metadata_latency_min;
>>>>>         max = m->metadata_latency_max;
>>>>> -     sq = m->metadata_latency_sq_sum;
>>>>> +     stdev = m->metadata_latency_stdev;
>>>>>         spin_unlock(&m->metadata_metric_lock);
>>>>> -     CEPH_LAT_METRIC_SHOW("metadata", total, avg, min, max, sq);
>>>>> +     CEPH_LAT_METRIC_SHOW("metadata", total, avg, min, max, stdev);
>>>>>
>>>>>         seq_printf(s, "\n");
>>>>>         seq_printf(s, "item          total       avg_sz(bytes)   min_sz(bytes)   max_sz(bytes)  total_sz(bytes)\n");
>>>>> diff --git a/fs/ceph/metric.c b/fs/ceph/metric.c
>>>>> index 226dc38e2909..6b774b1a88ce 100644
>>>>> --- a/fs/ceph/metric.c
>>>>> +++ b/fs/ceph/metric.c
>>>>> @@ -244,7 +244,8 @@ int ceph_metric_init(struct ceph_client_metric *m)
>>>>>                 goto err_i_caps_mis;
>>>>>
>>>>>         spin_lock_init(&m->read_metric_lock);
>>>>> -     m->read_latency_sq_sum = 0;
>>>>> +     m->read_latency_stdev = 0;
>>>>> +     m->avg_read_latency = 0;
>>>>>         m->read_latency_min = KTIME_MAX;
>>>>>         m->read_latency_max = 0;
>>>>>         m->total_reads = 0;
>>>>> @@ -254,7 +255,8 @@ int ceph_metric_init(struct ceph_client_metric *m)
>>>>>         m->read_size_sum = 0;
>>>>>
>>>>>         spin_lock_init(&m->write_metric_lock);
>>>>> -     m->write_latency_sq_sum = 0;
>>>>> +     m->write_latency_stdev = 0;
>>>>> +     m->avg_write_latency = 0;
>>>>>         m->write_latency_min = KTIME_MAX;
>>>>>         m->write_latency_max = 0;
>>>>>         m->total_writes = 0;
>>>>> @@ -264,7 +266,8 @@ int ceph_metric_init(struct ceph_client_metric *m)
>>>>>         m->write_size_sum = 0;
>>>>>
>>>>>         spin_lock_init(&m->metadata_metric_lock);
>>>>> -     m->metadata_latency_sq_sum = 0;
>>>>> +     m->metadata_latency_stdev = 0;
>>>>> +     m->avg_metadata_latency = 0;
>>>>>         m->metadata_latency_min = KTIME_MAX;
>>>>>         m->metadata_latency_max = 0;
>>>>>         m->total_metadatas = 0;
>>>>> @@ -322,20 +325,26 @@ void ceph_metric_destroy(struct ceph_client_metric *m)
>>>>>                 max = new;                      \
>>>>>     }
>>>>>
>>>>> -static inline void __update_stdev(ktime_t total, ktime_t lsum,
>>>>> -                               ktime_t *sq_sump, ktime_t lat)
>>>>> +static inline void __update_latency(ktime_t *ctotal, ktime_t *lsum,
>>>>> +                                 ktime_t *lavg, ktime_t *min, ktime_t *max,
>>>>> +                                 ktime_t *lstdev, ktime_t lat)
>>>>>     {
>>>>> -     ktime_t avg, sq;
>>>>> +     ktime_t total, avg, stdev;
>>>>>
>>>>> -     if (unlikely(total == 1))
>>>>> -             return;
>>>>> +     total = ++(*ctotal);
>>>>> +     *lsum += lat;
>>>>> +
>>>>> +     METRIC_UPDATE_MIN_MAX(*min, *max, lat);
>>>>>
>>>>> -     /* the sq is (lat - old_avg) * (lat - new_avg) */
>>>>> -     avg = DIV64_U64_ROUND_CLOSEST((lsum - lat), (total - 1));
>>>>> -     sq = lat - avg;
>>>>> -     avg = DIV64_U64_ROUND_CLOSEST(lsum, total);
>>>>> -     sq = sq * (lat - avg);
>>>>> -     *sq_sump += sq;
>>>>> +     if (unlikely(total == 1)) {
>>>>> +             *lavg = lat;
>>>>> +             *lstdev = 0;
>>>>> +     } else {
>>>>> +             avg = *lavg + div64_s64(lat - *lavg, total);
>>>>> +             stdev = *lstdev + (lat - *lavg)*(lat - avg);
>>>>> +             *lstdev = int_sqrt(div64_u64(stdev, total - 1));
>>>>> +             *lavg = avg;
>>>>> +     }
>>>> IMO, this is incorrect; for the math formula, please see:
>>>>
>>>> https://www.investopedia.com/ask/answers/042415/what-difference-between-standard-error-means-and-standard-deviation.asp
>>>>
>>>> The most accurate result should be:
>>>>
>>>> stdev = int_sqrt(sum((X(n) - avg)^2, (X(n-1) - avg)^2, ..., (X(1) -
>>>> avg)^2) / (n - 1)).
>>>>
>>>> While you are computing it as:
>>>>
>>>> stdev_n = int_sqrt(stdev_(n-1) + (X(n-1) - avg)^2)
>>> Hmm. The int_sqrt() is probably not needed at this point and can be
>>> done when sending the metric. That would avoid some cycles.
>>>
>>> Also, the way the avg is calculated is not totally incorrect; however, I
>>> would like to keep it similar to how it's done in libcephfs.
>> In user space this is very easy to do, but not in kernel space,
>> especially since there is no floating-point computing.
>>
>> Currently the kclient computes the avg as:
>>
>> avg(n) = avg(n-1) + (latency(n) - avg(n-1)) / n; IMO this should be close to the
>> real avg(n) = sum(latency(n), latency(n-1), ..., latency(1)) / n.
> That's how it's done in libcephfs too.

Okay.

>
>> Because it's hard to record all the latency values, this is also what many
>> other user space tools do to compute the avg.
>>
>>
>>>> Though the current stdev computing method is not exactly what the math
>>>> formula does, it's close to it, because the kernel can't record
>>>> all the latency values and compute it whenever needed, which would occupy a
>>>> large amount of memory and CPU resources.
>>> The approach is to calculate the running variance, i.e., compute the
>>> variance as data (latency) arrive one at a time.
>>>
>>>>>     }
>>>>>
>>>>>     void ceph_update_read_metrics(struct ceph_client_metric *m,
>>>>> @@ -343,23 +352,18 @@ void ceph_update_read_metrics(struct ceph_client_metric *m,
>>>>>                               unsigned int size, int rc)
>>>>>     {
>>>>>         ktime_t lat = ktime_sub(r_end, r_start);
>>>>> -     ktime_t total;
>>>>>
>>>>>         if (unlikely(rc < 0 && rc != -ENOENT && rc != -ETIMEDOUT))
>>>>>                 return;
>>>>>
>>>>>         spin_lock(&m->read_metric_lock);
>>>>> -     total = ++m->total_reads;
>>>>>         m->read_size_sum += size;
>>>>> -     m->read_latency_sum += lat;
>>>>>         METRIC_UPDATE_MIN_MAX(m->read_size_min,
>>>>>                               m->read_size_max,
>>>>>                               size);
>>>>> -     METRIC_UPDATE_MIN_MAX(m->read_latency_min,
>>>>> -                           m->read_latency_max,
>>>>> -                           lat);
>>>>> -     __update_stdev(total, m->read_latency_sum,
>>>>> -                    &m->read_latency_sq_sum, lat);
>>>>> +     __update_latency(&m->total_reads, &m->read_latency_sum,
>>>>> +                      &m->avg_read_latency, &m->read_latency_min,
>>>>> +                      &m->read_latency_max, &m->read_latency_stdev, lat);
>>>>>         spin_unlock(&m->read_metric_lock);
>>>>>     }
>>>>>
>>>>> @@ -368,23 +372,18 @@ void ceph_update_write_metrics(struct ceph_client_metric *m,
>>>>>                                unsigned int size, int rc)
>>>>>     {
>>>>>         ktime_t lat = ktime_sub(r_end, r_start);
>>>>> -     ktime_t total;
>>>>>
>>>>>         if (unlikely(rc && rc != -ETIMEDOUT))
>>>>>                 return;
>>>>>
>>>>>         spin_lock(&m->write_metric_lock);
>>>>> -     total = ++m->total_writes;
>>>>>         m->write_size_sum += size;
>>>>> -     m->write_latency_sum += lat;
>>>>>         METRIC_UPDATE_MIN_MAX(m->write_size_min,
>>>>>                               m->write_size_max,
>>>>>                               size);
>>>>> -     METRIC_UPDATE_MIN_MAX(m->write_latency_min,
>>>>> -                           m->write_latency_max,
>>>>> -                           lat);
>>>>> -     __update_stdev(total, m->write_latency_sum,
>>>>> -                    &m->write_latency_sq_sum, lat);
>>>>> +     __update_latency(&m->total_writes, &m->write_latency_sum,
>>>>> +                      &m->avg_write_latency, &m->write_latency_min,
>>>>> +                      &m->write_latency_max, &m->write_latency_stdev, lat);
>>>>>         spin_unlock(&m->write_metric_lock);
>>>>>     }
>>>>>
>>>>> @@ -393,18 +392,13 @@ void ceph_update_metadata_metrics(struct ceph_client_metric *m,
>>>>>                                   int rc)
>>>>>     {
>>>>>         ktime_t lat = ktime_sub(r_end, r_start);
>>>>> -     ktime_t total;
>>>>>
>>>>>         if (unlikely(rc && rc != -ENOENT))
>>>>>                 return;
>>>>>
>>>>>         spin_lock(&m->metadata_metric_lock);
>>>>> -     total = ++m->total_metadatas;
>>>>> -     m->metadata_latency_sum += lat;
>>>>> -     METRIC_UPDATE_MIN_MAX(m->metadata_latency_min,
>>>>> -                           m->metadata_latency_max,
>>>>> -                           lat);
>>>>> -     __update_stdev(total, m->metadata_latency_sum,
>>>>> -                    &m->metadata_latency_sq_sum, lat);
>>>>> +     __update_latency(&m->total_metadatas, &m->metadata_latency_sum,
>>>>> +                      &m->avg_metadata_latency, &m->metadata_latency_min,
>>>>> +                      &m->metadata_latency_max, &m->metadata_latency_stdev, lat);
>>>>>         spin_unlock(&m->metadata_metric_lock);
>>>>>     }
>>>>> diff --git a/fs/ceph/metric.h b/fs/ceph/metric.h
>>>>> index 103ed736f9d2..a5da21b8f8ed 100644
>>>>> --- a/fs/ceph/metric.h
>>>>> +++ b/fs/ceph/metric.h
>>>>> @@ -138,7 +138,8 @@ struct ceph_client_metric {
>>>>>         u64 read_size_min;
>>>>>         u64 read_size_max;
>>>>>         ktime_t read_latency_sum;
>>>>> -     ktime_t read_latency_sq_sum;
>>>>> +     ktime_t avg_read_latency;
>>>>> +     ktime_t read_latency_stdev;
>>>>>         ktime_t read_latency_min;
>>>>>         ktime_t read_latency_max;
>>>>>
>>>>> @@ -148,14 +149,16 @@ struct ceph_client_metric {
>>>>>         u64 write_size_min;
>>>>>         u64 write_size_max;
>>>>>         ktime_t write_latency_sum;
>>>>> -     ktime_t write_latency_sq_sum;
>>>>> +     ktime_t avg_write_latency;
>>>>> +     ktime_t write_latency_stdev;
>>>>>         ktime_t write_latency_min;
>>>>>         ktime_t write_latency_max;
>>>>>
>>>>>         spinlock_t metadata_metric_lock;
>>>>>         u64 total_metadatas;
>>>>>         ktime_t metadata_latency_sum;
>>>>> -     ktime_t metadata_latency_sq_sum;
>>>>> +     ktime_t avg_metadata_latency;
>>>>> +     ktime_t metadata_latency_stdev;
>>>>>         ktime_t metadata_latency_min;
>>>>>         ktime_t metadata_latency_max;
>>>>>
>



* Re: [PATCH v2 2/4] ceph: track average/stdev r/w/m latency
  2021-09-14 13:52         ` Xiubo Li
@ 2021-09-14 14:00           ` Venky Shankar
  2021-09-14 14:10             ` Xiubo Li
  0 siblings, 1 reply; 19+ messages in thread
From: Venky Shankar @ 2021-09-14 14:00 UTC (permalink / raw)
  To: Xiubo Li; +Cc: Jeff Layton, Patrick Donnelly, ceph-devel

On Tue, Sep 14, 2021 at 7:22 PM Xiubo Li <xiubli@redhat.com> wrote:
>
>
> On 9/14/21 9:45 PM, Xiubo Li wrote:
> >
> > On 9/14/21 9:30 PM, Venky Shankar wrote:
> >> On Tue, Sep 14, 2021 at 6:39 PM Xiubo Li <xiubli@redhat.com> wrote:
> >>>
> >>> On 9/14/21 4:49 PM, Venky Shankar wrote:
> [...]
> > In user space this is very easy to do, but not in kernel space,
> > especially since there is no floating-point computing.
> >
> As I remember, this was the main reason why I was planning to send the raw
> metrics to the MDS and let the MDS do the computing.
>
> So if possible, why not just send the raw data to the MDS and let the MDS
> do the stdev computing?

Since metrics are sent each second (I suppose) and there can be N
operations done within that second, what raw data (say for avg/stdev
calculation) would the client send to the MDS?

>
>
> > Currently the kclient computes the avg as:
> >
> > avg(n) = avg(n-1) + (latency(n) - avg(n-1)) / n; IMO this should be close to
> > the real avg(n) = sum(latency(n), latency(n-1), ..., latency(1)) / n.
> >
> > Because it's hard to record all the latency values, this is also what many
> > other user space tools do to compute the avg.
> >
> >
> >>> Though the current stdev computing method is not exactly what the math
> >>> formula does, it's close to it, because the kernel can't record
> >>> all the latency values and compute it whenever needed, which would occupy a
> >>> large amount of memory and CPU resources.
> >> The approach is to calculate the running variance, i.e., compute the
> >> variance as data (latency) arrive one at a time.
> >>
> >>>
> >>>>    }
> >>>>
> >>>>    void ceph_update_read_metrics(struct ceph_client_metric *m,
> >>>> @@ -343,23 +352,18 @@ void ceph_update_read_metrics(struct
> >>>> ceph_client_metric *m,
> >>>>                              unsigned int size, int rc)
> >>>>    {
> >>>>        ktime_t lat = ktime_sub(r_end, r_start);
> >>>> -     ktime_t total;
> >>>>
> >>>>        if (unlikely(rc < 0 && rc != -ENOENT && rc != -ETIMEDOUT))
> >>>>                return;
> >>>>
> >>>>        spin_lock(&m->read_metric_lock);
> >>>> -     total = ++m->total_reads;
> >>>>        m->read_size_sum += size;
> >>>> -     m->read_latency_sum += lat;
> >>>>        METRIC_UPDATE_MIN_MAX(m->read_size_min,
> >>>>                              m->read_size_max,
> >>>>                              size);
> >>>> -     METRIC_UPDATE_MIN_MAX(m->read_latency_min,
> >>>> -                           m->read_latency_max,
> >>>> -                           lat);
> >>>> -     __update_stdev(total, m->read_latency_sum,
> >>>> -                    &m->read_latency_sq_sum, lat);
> >>>> +     __update_latency(&m->total_reads, &m->read_latency_sum,
> >>>> +                      &m->avg_read_latency, &m->read_latency_min,
> >>>> +                      &m->read_latency_max,
> >>>> &m->read_latency_stdev, lat);
> >>>>        spin_unlock(&m->read_metric_lock);
> >>>>    }
> >>>>
> >>>> @@ -368,23 +372,18 @@ void ceph_update_write_metrics(struct
> >>>> ceph_client_metric *m,
> >>>>                               unsigned int size, int rc)
> >>>>    {
> >>>>        ktime_t lat = ktime_sub(r_end, r_start);
> >>>> -     ktime_t total;
> >>>>
> >>>>        if (unlikely(rc && rc != -ETIMEDOUT))
> >>>>                return;
> >>>>
> >>>>        spin_lock(&m->write_metric_lock);
> >>>> -     total = ++m->total_writes;
> >>>>        m->write_size_sum += size;
> >>>> -     m->write_latency_sum += lat;
> >>>>        METRIC_UPDATE_MIN_MAX(m->write_size_min,
> >>>>                              m->write_size_max,
> >>>>                              size);
> >>>> -     METRIC_UPDATE_MIN_MAX(m->write_latency_min,
> >>>> -                           m->write_latency_max,
> >>>> -                           lat);
> >>>> -     __update_stdev(total, m->write_latency_sum,
> >>>> -                    &m->write_latency_sq_sum, lat);
> >>>> +     __update_latency(&m->total_writes, &m->write_latency_sum,
> >>>> +                      &m->avg_write_latency, &m->write_latency_min,
> >>>> +                      &m->write_latency_max,
> >>>> &m->write_latency_stdev, lat);
> >>>>        spin_unlock(&m->write_metric_lock);
> >>>>    }
> >>>>
> >>>> @@ -393,18 +392,13 @@ void ceph_update_metadata_metrics(struct
> >>>> ceph_client_metric *m,
> >>>>                                  int rc)
> >>>>    {
> >>>>        ktime_t lat = ktime_sub(r_end, r_start);
> >>>> -     ktime_t total;
> >>>>
> >>>>        if (unlikely(rc && rc != -ENOENT))
> >>>>                return;
> >>>>
> >>>>        spin_lock(&m->metadata_metric_lock);
> >>>> -     total = ++m->total_metadatas;
> >>>> -     m->metadata_latency_sum += lat;
> >>>> -     METRIC_UPDATE_MIN_MAX(m->metadata_latency_min,
> >>>> -                           m->metadata_latency_max,
> >>>> -                           lat);
> >>>> -     __update_stdev(total, m->metadata_latency_sum,
> >>>> -                    &m->metadata_latency_sq_sum, lat);
> >>>> +     __update_latency(&m->total_metadatas, &m->metadata_latency_sum,
> >>>> +                      &m->avg_metadata_latency,
> >>>> &m->metadata_latency_min,
> >>>> +                      &m->metadata_latency_max,
> >>>> &m->metadata_latency_stdev, lat);
> >>>>        spin_unlock(&m->metadata_metric_lock);
> >>>>    }
> >>>> diff --git a/fs/ceph/metric.h b/fs/ceph/metric.h
> >>>> index 103ed736f9d2..a5da21b8f8ed 100644
> >>>> --- a/fs/ceph/metric.h
> >>>> +++ b/fs/ceph/metric.h
> >>>> @@ -138,7 +138,8 @@ struct ceph_client_metric {
> >>>>        u64 read_size_min;
> >>>>        u64 read_size_max;
> >>>>        ktime_t read_latency_sum;
> >>>> -     ktime_t read_latency_sq_sum;
> >>>> +     ktime_t avg_read_latency;
> >>>> +     ktime_t read_latency_stdev;
> >>>>        ktime_t read_latency_min;
> >>>>        ktime_t read_latency_max;
> >>>>
> >>>> @@ -148,14 +149,16 @@ struct ceph_client_metric {
> >>>>        u64 write_size_min;
> >>>>        u64 write_size_max;
> >>>>        ktime_t write_latency_sum;
> >>>> -     ktime_t write_latency_sq_sum;
> >>>> +     ktime_t avg_write_latency;
> >>>> +     ktime_t write_latency_stdev;
> >>>>        ktime_t write_latency_min;
> >>>>        ktime_t write_latency_max;
> >>>>
> >>>>        spinlock_t metadata_metric_lock;
> >>>>        u64 total_metadatas;
> >>>>        ktime_t metadata_latency_sum;
> >>>> -     ktime_t metadata_latency_sq_sum;
> >>>> +     ktime_t avg_metadata_latency;
> >>>> +     ktime_t metadata_latency_stdev;
> >>>>        ktime_t metadata_latency_min;
> >>>>        ktime_t metadata_latency_max;
> >>>>
> >>
>


-- 
Cheers,
Venky



* Re: [PATCH v2 2/4] ceph: track average/stdev r/w/m latency
  2021-09-14 14:00           ` Venky Shankar
@ 2021-09-14 14:10             ` Xiubo Li
  0 siblings, 0 replies; 19+ messages in thread
From: Xiubo Li @ 2021-09-14 14:10 UTC (permalink / raw)
  To: Venky Shankar; +Cc: Jeff Layton, Patrick Donnelly, ceph-devel


On 9/14/21 10:00 PM, Venky Shankar wrote:
> On Tue, Sep 14, 2021 at 7:22 PM Xiubo Li <xiubli@redhat.com> wrote:
>>
>> On 9/14/21 9:45 PM, Xiubo Li wrote:
>>> On 9/14/21 9:30 PM, Venky Shankar wrote:
>>>> On Tue, Sep 14, 2021 at 6:39 PM Xiubo Li <xiubli@redhat.com> wrote:
>>>>> On 9/14/21 4:49 PM, Venky Shankar wrote:
>> [...]
>>> In user space this is very easy to do, but not in kernel space,
>>> especially since there is no floating-point computing.
>>>
>> As I remember, this was the main reason why I was planning to send the raw
>> metrics to the MDS and let the MDS do the computing.
>>
>> So if possible, why not just send the raw data to the MDS and let the MDS
>> do the stdev computing?
> Since metrics are sent each second (I suppose) and there can be N
> operations done within that second, what raw data (say for avg/stdev
> calculation) would the client send to the MDS?

Yeah.

For example, just send the "sq_sum" and the total number of ops to the
MDS; these should be enough to compute the stdev. Then the MDS or the
cephfs-top tool can just do int_sqrt(sq_sum / total).

I am okay with both and it's up to you, but the stdev could be more
accurate in userspace with floating-point computing.


>
>>
>>> Currently the kclient computes the avg as:
>>>
>>> avg(n) = avg(n-1) + (latency(n) - avg(n-1)) / n; IMO this should be close to
>>> the real avg(n) = sum(latency(n), latency(n-1), ..., latency(1)) / n.
>>>
>>> Because it's hard to record all the latency values, this is also what many
>>> other user space tools do to compute the avg.
>>>
>>>
>>>>> Though the current stdev computing method is not exactly what the math
>>>>> formula does, it's close to it, because the kernel can't record
>>>>> all the latency values and compute it whenever needed, which would occupy a
>>>>> large amount of memory and CPU resources.
>>>> The approach is to calculate the running variance, i.e., compute the
>>>> variance as data (latency) arrive one at a time.
>>>>
>>>>>>     }
>>>>>>
>>>>>>     void ceph_update_read_metrics(struct ceph_client_metric *m,
>>>>>> @@ -343,23 +352,18 @@ void ceph_update_read_metrics(struct
>>>>>> ceph_client_metric *m,
>>>>>>                               unsigned int size, int rc)
>>>>>>     {
>>>>>>         ktime_t lat = ktime_sub(r_end, r_start);
>>>>>> -     ktime_t total;
>>>>>>
>>>>>>         if (unlikely(rc < 0 && rc != -ENOENT && rc != -ETIMEDOUT))
>>>>>>                 return;
>>>>>>
>>>>>>         spin_lock(&m->read_metric_lock);
>>>>>> -     total = ++m->total_reads;
>>>>>>         m->read_size_sum += size;
>>>>>> -     m->read_latency_sum += lat;
>>>>>>         METRIC_UPDATE_MIN_MAX(m->read_size_min,
>>>>>>                               m->read_size_max,
>>>>>>                               size);
>>>>>> -     METRIC_UPDATE_MIN_MAX(m->read_latency_min,
>>>>>> -                           m->read_latency_max,
>>>>>> -                           lat);
>>>>>> -     __update_stdev(total, m->read_latency_sum,
>>>>>> -                    &m->read_latency_sq_sum, lat);
>>>>>> +     __update_latency(&m->total_reads, &m->read_latency_sum,
>>>>>> +                      &m->avg_read_latency, &m->read_latency_min,
>>>>>> +                      &m->read_latency_max,
>>>>>> &m->read_latency_stdev, lat);
>>>>>>         spin_unlock(&m->read_metric_lock);
>>>>>>     }
>>>>>>
>>>>>> @@ -368,23 +372,18 @@ void ceph_update_write_metrics(struct
>>>>>> ceph_client_metric *m,
>>>>>>                                unsigned int size, int rc)
>>>>>>     {
>>>>>>         ktime_t lat = ktime_sub(r_end, r_start);
>>>>>> -     ktime_t total;
>>>>>>
>>>>>>         if (unlikely(rc && rc != -ETIMEDOUT))
>>>>>>                 return;
>>>>>>
>>>>>>         spin_lock(&m->write_metric_lock);
>>>>>> -     total = ++m->total_writes;
>>>>>>         m->write_size_sum += size;
>>>>>> -     m->write_latency_sum += lat;
>>>>>>         METRIC_UPDATE_MIN_MAX(m->write_size_min,
>>>>>>                               m->write_size_max,
>>>>>>                               size);
>>>>>> -     METRIC_UPDATE_MIN_MAX(m->write_latency_min,
>>>>>> -                           m->write_latency_max,
>>>>>> -                           lat);
>>>>>> -     __update_stdev(total, m->write_latency_sum,
>>>>>> -                    &m->write_latency_sq_sum, lat);
>>>>>> +     __update_latency(&m->total_writes, &m->write_latency_sum,
>>>>>> +                      &m->avg_write_latency, &m->write_latency_min,
>>>>>> +                      &m->write_latency_max,
>>>>>> &m->write_latency_stdev, lat);
>>>>>>         spin_unlock(&m->write_metric_lock);
>>>>>>     }
>>>>>>
>>>>>> @@ -393,18 +392,13 @@ void ceph_update_metadata_metrics(struct
>>>>>> ceph_client_metric *m,
>>>>>>                                   int rc)
>>>>>>     {
>>>>>>         ktime_t lat = ktime_sub(r_end, r_start);
>>>>>> -     ktime_t total;
>>>>>>
>>>>>>         if (unlikely(rc && rc != -ENOENT))
>>>>>>                 return;
>>>>>>
>>>>>>         spin_lock(&m->metadata_metric_lock);
>>>>>> -     total = ++m->total_metadatas;
>>>>>> -     m->metadata_latency_sum += lat;
>>>>>> -     METRIC_UPDATE_MIN_MAX(m->metadata_latency_min,
>>>>>> -                           m->metadata_latency_max,
>>>>>> -                           lat);
>>>>>> -     __update_stdev(total, m->metadata_latency_sum,
>>>>>> -                    &m->metadata_latency_sq_sum, lat);
>>>>>> +     __update_latency(&m->total_metadatas, &m->metadata_latency_sum,
>>>>>> +                      &m->avg_metadata_latency,
>>>>>> &m->metadata_latency_min,
>>>>>> +                      &m->metadata_latency_max,
>>>>>> &m->metadata_latency_stdev, lat);
>>>>>>         spin_unlock(&m->metadata_metric_lock);
>>>>>>     }
>>>>>> diff --git a/fs/ceph/metric.h b/fs/ceph/metric.h
>>>>>> index 103ed736f9d2..a5da21b8f8ed 100644
>>>>>> --- a/fs/ceph/metric.h
>>>>>> +++ b/fs/ceph/metric.h
>>>>>> @@ -138,7 +138,8 @@ struct ceph_client_metric {
>>>>>>         u64 read_size_min;
>>>>>>         u64 read_size_max;
>>>>>>         ktime_t read_latency_sum;
>>>>>> -     ktime_t read_latency_sq_sum;
>>>>>> +     ktime_t avg_read_latency;
>>>>>> +     ktime_t read_latency_stdev;
>>>>>>         ktime_t read_latency_min;
>>>>>>         ktime_t read_latency_max;
>>>>>>
>>>>>> @@ -148,14 +149,16 @@ struct ceph_client_metric {
>>>>>>         u64 write_size_min;
>>>>>>         u64 write_size_max;
>>>>>>         ktime_t write_latency_sum;
>>>>>> -     ktime_t write_latency_sq_sum;
>>>>>> +     ktime_t avg_write_latency;
>>>>>> +     ktime_t write_latency_stdev;
>>>>>>         ktime_t write_latency_min;
>>>>>>         ktime_t write_latency_max;
>>>>>>
>>>>>>         spinlock_t metadata_metric_lock;
>>>>>>         u64 total_metadatas;
>>>>>>         ktime_t metadata_latency_sum;
>>>>>> -     ktime_t metadata_latency_sq_sum;
>>>>>> +     ktime_t avg_metadata_latency;
>>>>>> +     ktime_t metadata_latency_stdev;
>>>>>>         ktime_t metadata_latency_min;
>>>>>>         ktime_t metadata_latency_max;
>>>>>>
>




Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-09-14  8:48 [PATCH v2 0/4] ceph: forward average read/write/metadata latency Venky Shankar
2021-09-14  8:48 ` [PATCH v2 1/4] ceph: use "struct ceph_timespec" for r/w/m latencies Venky Shankar
2021-09-14  8:49 ` [PATCH v2 2/4] ceph: track average/stdev r/w/m latency Venky Shankar
2021-09-14 12:52   ` Xiubo Li
2021-09-14 13:03     ` Venky Shankar
2021-09-14 13:09   ` Xiubo Li
2021-09-14 13:30     ` Venky Shankar
2021-09-14 13:45       ` Xiubo Li
2021-09-14 13:52         ` Xiubo Li
2021-09-14 14:00           ` Venky Shankar
2021-09-14 14:10             ` Xiubo Li
2021-09-14 13:53         ` Venky Shankar
2021-09-14 13:58           ` Xiubo Li
2021-09-14 13:13   ` Xiubo Li
2021-09-14 13:32     ` Jeff Layton
2021-09-14 13:32     ` Venky Shankar
2021-09-14  8:49 ` [PATCH v2 3/4] ceph: include average/stddev r/w/m latency in mds metrics Venky Shankar
2021-09-14 13:57   ` Xiubo Li
2021-09-14  8:49 ` [PATCH v2 4/4] ceph: use tracked average r/w/m latencies to display metrics in debugfs Venky Shankar
