From: Xiubo Li <xiubli@redhat.com>
To: Venky Shankar <vshankar@redhat.com>
Cc: Jeff Layton <jlayton@redhat.com>,
	Patrick Donnelly <pdonnell@redhat.com>,
	ceph-devel <ceph-devel@vger.kernel.org>
Subject: Re: [PATCH v2 2/4] ceph: track average/stdev r/w/m latency
Date: Tue, 14 Sep 2021 21:58:39 +0800	[thread overview]
Message-ID: <d9c74fa8-5e0e-d884-a347-6a0ae3d061fb@redhat.com> (raw)
In-Reply-To: <CACPzV1mKmmFttb8X05ePa5WyQN-EFWoH-n9XfEwY5fraush8+A@mail.gmail.com>


On 9/14/21 9:53 PM, Venky Shankar wrote:
> On Tue, Sep 14, 2021 at 7:16 PM Xiubo Li <xiubli@redhat.com> wrote:
>>
>> On 9/14/21 9:30 PM, Venky Shankar wrote:
>>> On Tue, Sep 14, 2021 at 6:39 PM Xiubo Li <xiubli@redhat.com> wrote:
>>>> On 9/14/21 4:49 PM, Venky Shankar wrote:
>>>>> The math involved in tracking average and standard deviation
>>>>> for r/w/m latencies looks incorrect. Fix that up. Also, change
>>>>> the variable name that tracks standard deviation (*_sq_sum) to
>>>>> *_stdev.
>>>>>
>>>>> Signed-off-by: Venky Shankar <vshankar@redhat.com>
>>>>> ---
>>>>>     fs/ceph/debugfs.c | 14 +++++-----
>>>>>     fs/ceph/metric.c  | 70 ++++++++++++++++++++++-------------------------
>>>>>     fs/ceph/metric.h  |  9 ++++--
>>>>>     3 files changed, 45 insertions(+), 48 deletions(-)
>>>>>
>>>>> diff --git a/fs/ceph/debugfs.c b/fs/ceph/debugfs.c
>>>>> index 38b78b45811f..3abfa7ae8220 100644
>>>>> --- a/fs/ceph/debugfs.c
>>>>> +++ b/fs/ceph/debugfs.c
>>>>> @@ -152,7 +152,7 @@ static int metric_show(struct seq_file *s, void *p)
>>>>>         struct ceph_mds_client *mdsc = fsc->mdsc;
>>>>>         struct ceph_client_metric *m = &mdsc->metric;
>>>>>         int nr_caps = 0;
>>>>> -     s64 total, sum, avg, min, max, sq;
>>>>> +     s64 total, sum, avg, min, max, stdev;
>>>>>         u64 sum_sz, avg_sz, min_sz, max_sz;
>>>>>
>>>>>         sum = percpu_counter_sum(&m->total_inodes);
>>>>> @@ -175,9 +175,9 @@ static int metric_show(struct seq_file *s, void *p)
>>>>>         avg = total > 0 ? DIV64_U64_ROUND_CLOSEST(sum, total) : 0;
>>>>>         min = m->read_latency_min;
>>>>>         max = m->read_latency_max;
>>>>> -     sq = m->read_latency_sq_sum;
>>>>> +     stdev = m->read_latency_stdev;
>>>>>         spin_unlock(&m->read_metric_lock);
>>>>> -     CEPH_LAT_METRIC_SHOW("read", total, avg, min, max, sq);
>>>>> +     CEPH_LAT_METRIC_SHOW("read", total, avg, min, max, stdev);
>>>>>
>>>>>         spin_lock(&m->write_metric_lock);
>>>>>         total = m->total_writes;
>>>>> @@ -185,9 +185,9 @@ static int metric_show(struct seq_file *s, void *p)
>>>>>         avg = total > 0 ? DIV64_U64_ROUND_CLOSEST(sum, total) : 0;
>>>>>         min = m->write_latency_min;
>>>>>         max = m->write_latency_max;
>>>>> -     sq = m->write_latency_sq_sum;
>>>>> +     stdev = m->write_latency_stdev;
>>>>>         spin_unlock(&m->write_metric_lock);
>>>>> -     CEPH_LAT_METRIC_SHOW("write", total, avg, min, max, sq);
>>>>> +     CEPH_LAT_METRIC_SHOW("write", total, avg, min, max, stdev);
>>>>>
>>>>>         spin_lock(&m->metadata_metric_lock);
>>>>>         total = m->total_metadatas;
>>>>> @@ -195,9 +195,9 @@ static int metric_show(struct seq_file *s, void *p)
>>>>>         avg = total > 0 ? DIV64_U64_ROUND_CLOSEST(sum, total) : 0;
>>>>>         min = m->metadata_latency_min;
>>>>>         max = m->metadata_latency_max;
>>>>> -     sq = m->metadata_latency_sq_sum;
>>>>> +     stdev = m->metadata_latency_stdev;
>>>>>         spin_unlock(&m->metadata_metric_lock);
>>>>> -     CEPH_LAT_METRIC_SHOW("metadata", total, avg, min, max, sq);
>>>>> +     CEPH_LAT_METRIC_SHOW("metadata", total, avg, min, max, stdev);
>>>>>
>>>>>         seq_printf(s, "\n");
>>>>>         seq_printf(s, "item          total       avg_sz(bytes)   min_sz(bytes)   max_sz(bytes)  total_sz(bytes)\n");
>>>>> diff --git a/fs/ceph/metric.c b/fs/ceph/metric.c
>>>>> index 226dc38e2909..6b774b1a88ce 100644
>>>>> --- a/fs/ceph/metric.c
>>>>> +++ b/fs/ceph/metric.c
>>>>> @@ -244,7 +244,8 @@ int ceph_metric_init(struct ceph_client_metric *m)
>>>>>                 goto err_i_caps_mis;
>>>>>
>>>>>         spin_lock_init(&m->read_metric_lock);
>>>>> -     m->read_latency_sq_sum = 0;
>>>>> +     m->read_latency_stdev = 0;
>>>>> +     m->avg_read_latency = 0;
>>>>>         m->read_latency_min = KTIME_MAX;
>>>>>         m->read_latency_max = 0;
>>>>>         m->total_reads = 0;
>>>>> @@ -254,7 +255,8 @@ int ceph_metric_init(struct ceph_client_metric *m)
>>>>>         m->read_size_sum = 0;
>>>>>
>>>>>         spin_lock_init(&m->write_metric_lock);
>>>>> -     m->write_latency_sq_sum = 0;
>>>>> +     m->write_latency_stdev = 0;
>>>>> +     m->avg_write_latency = 0;
>>>>>         m->write_latency_min = KTIME_MAX;
>>>>>         m->write_latency_max = 0;
>>>>>         m->total_writes = 0;
>>>>> @@ -264,7 +266,8 @@ int ceph_metric_init(struct ceph_client_metric *m)
>>>>>         m->write_size_sum = 0;
>>>>>
>>>>>         spin_lock_init(&m->metadata_metric_lock);
>>>>> -     m->metadata_latency_sq_sum = 0;
>>>>> +     m->metadata_latency_stdev = 0;
>>>>> +     m->avg_metadata_latency = 0;
>>>>>         m->metadata_latency_min = KTIME_MAX;
>>>>>         m->metadata_latency_max = 0;
>>>>>         m->total_metadatas = 0;
>>>>> @@ -322,20 +325,26 @@ void ceph_metric_destroy(struct ceph_client_metric *m)
>>>>>                 max = new;                      \
>>>>>     }
>>>>>
>>>>> -static inline void __update_stdev(ktime_t total, ktime_t lsum,
>>>>> -                               ktime_t *sq_sump, ktime_t lat)
>>>>> +static inline void __update_latency(ktime_t *ctotal, ktime_t *lsum,
>>>>> +                                 ktime_t *lavg, ktime_t *min, ktime_t *max,
>>>>> +                                 ktime_t *lstdev, ktime_t lat)
>>>>>     {
>>>>> -     ktime_t avg, sq;
>>>>> +     ktime_t total, avg, stdev;
>>>>>
>>>>> -     if (unlikely(total == 1))
>>>>> -             return;
>>>>> +     total = ++(*ctotal);
>>>>> +     *lsum += lat;
>>>>> +
>>>>> +     METRIC_UPDATE_MIN_MAX(*min, *max, lat);
>>>>>
>>>>> -     /* the sq is (lat - old_avg) * (lat - new_avg) */
>>>>> -     avg = DIV64_U64_ROUND_CLOSEST((lsum - lat), (total - 1));
>>>>> -     sq = lat - avg;
>>>>> -     avg = DIV64_U64_ROUND_CLOSEST(lsum, total);
>>>>> -     sq = sq * (lat - avg);
>>>>> -     *sq_sump += sq;
>>>>> +     if (unlikely(total == 1)) {
>>>>> +             *lavg = lat;
>>>>> +             *lstdev = 0;
>>>>> +     } else {
>>>>> +             avg = *lavg + div64_s64(lat - *lavg, total);
>>>>> +             stdev = *lstdev + (lat - *lavg)*(lat - avg);
>>>>> +             *lstdev = int_sqrt(div64_u64(stdev, total - 1));
>>>>> +             *lavg = avg;
>>>>> +     }
>>>> IMO, this is incorrect; for the math formula, please see:
>>>>
>>>> https://www.investopedia.com/ask/answers/042415/what-difference-between-standard-error-means-and-standard-deviation.asp
>>>>
>>>> The most accurate result should be:
>>>>
>>>> stdev = int_sqrt(sum((X(n) - avg)^2, (X(n-1) - avg)^2, ...,
>>>> (X(1) - avg)^2) / (n - 1)).
>>>>
>>>> While what you are computing is:
>>>>
>>>> stdev_n = int_sqrt(stdev_(n-1) + (X(n-1) - avg)^2)
>>> Hmm. The int_sqrt() is probably not needed at this point and can be
>>> done when sending the metric. That would avoid some cycles.
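
(Just to illustrate that idea: a rough, untested sketch -- the struct and
helper names below are hypothetical, not the actual fs/ceph code. The hot
path only accumulates the running average and the sum of squared
deviations; int_sqrt64() is applied only when the metric is reported:)

/* Untested sketch; "struct lat_metric" and these helpers are
 * hypothetical, not the real fs/ceph structures. */
#include <linux/kernel.h>	/* int_sqrt64() */
#include <linux/math64.h>	/* div64_s64(), div64_u64() */
#include <linux/ktime.h>

struct lat_metric {
	u64	total;		/* number of samples */
	ktime_t	sum;		/* sum of latencies */
	ktime_t	avg;		/* running average */
	ktime_t	sq_sum;		/* running sum of squared deviations */
};

/* Hot path: accumulate only, no sqrt here. */
static inline void lat_metric_update(struct lat_metric *lm, ktime_t lat)
{
	ktime_t old_avg = lm->avg;

	lm->total++;
	lm->sum += lat;
	lm->avg = old_avg + div64_s64(lat - old_avg, lm->total);
	lm->sq_sum += (lat - old_avg) * (lat - lm->avg);
}

/* Reporting path (debugfs show / sending metrics): sqrt only here. */
static inline ktime_t lat_metric_stdev(const struct lat_metric *lm)
{
	if (lm->total < 2)
		return 0;
	return int_sqrt64(div64_u64(lm->sq_sum, lm->total - 1));
}

This is essentially the Welford-style running variance; the sqrt moves
out of the per-I/O path.
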
>>>
>>> Also, the way avg is calculated is not totally incorrect; however, I
>>> would like to keep it similar to how it's done in libcephfs.
>> In user space this is very easy to do, but not in kernel space,
>> especially since there is no floating-point arithmetic.
>>
>> Currently the kclient computes the avg as:
>>
>> avg(n) = (avg(n-1) + latency(n)) / n; IMO this should be close to the
>> real avg(n) = sum(latency(n), latency(n-1), ..., latency(1)) / n.
> That's how it's done in libcephfs too.

Okay.
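
(FWIW, a quick userspace check -- plain C with hypothetical latency values,
not kernel code -- comparing that incremental update against the direct
sum/count average. With truncating integer division the incremental form
is only an approximation, but it stays very close:)

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* hypothetical latencies in nanoseconds */
	int64_t lat[] = { 120000, 95000, 310000, 87000, 150000, 99000 };
	int64_t n = sizeof(lat) / sizeof(lat[0]);
	int64_t sum = 0, avg = 0;

	for (int64_t i = 0; i < n; i++) {
		sum += lat[i];
		/* incremental update: avg += (x - avg) / count */
		avg += (lat[i] - avg) / (i + 1);
	}

	printf("direct avg      = %lld\n", (long long)(sum / n));
	printf("incremental avg = %lld\n", (long long)avg);
	return 0;
}

On these particular samples both come out to 143500; in general the
incremental value can drift slightly due to the truncation, which is
noise at nanosecond scales.
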

>
>> Because it's hard to record all the latency values, this is also what
>> many other user space tools do to compute the avg.
>>
>>
>>>> Though the current stdev computing method is not exactly what the math
>>>> formula does, it is close to it, because the kernel can't record
>>>> all the latency values and recompute whenever needed; that would occupy
>>>> a large amount of memory and CPU resources.
>>> The approach is to calculate the running variance, i.e., compute the
>>> variance as data (latency) arrives one sample at a time.
>>>
>>>>>     }
>>>>>
>>>>>     void ceph_update_read_metrics(struct ceph_client_metric *m,
>>>>> @@ -343,23 +352,18 @@ void ceph_update_read_metrics(struct ceph_client_metric *m,
>>>>>                               unsigned int size, int rc)
>>>>>     {
>>>>>         ktime_t lat = ktime_sub(r_end, r_start);
>>>>> -     ktime_t total;
>>>>>
>>>>>         if (unlikely(rc < 0 && rc != -ENOENT && rc != -ETIMEDOUT))
>>>>>                 return;
>>>>>
>>>>>         spin_lock(&m->read_metric_lock);
>>>>> -     total = ++m->total_reads;
>>>>>         m->read_size_sum += size;
>>>>> -     m->read_latency_sum += lat;
>>>>>         METRIC_UPDATE_MIN_MAX(m->read_size_min,
>>>>>                               m->read_size_max,
>>>>>                               size);
>>>>> -     METRIC_UPDATE_MIN_MAX(m->read_latency_min,
>>>>> -                           m->read_latency_max,
>>>>> -                           lat);
>>>>> -     __update_stdev(total, m->read_latency_sum,
>>>>> -                    &m->read_latency_sq_sum, lat);
>>>>> +     __update_latency(&m->total_reads, &m->read_latency_sum,
>>>>> +                      &m->avg_read_latency, &m->read_latency_min,
>>>>> +                      &m->read_latency_max, &m->read_latency_stdev, lat);
>>>>>         spin_unlock(&m->read_metric_lock);
>>>>>     }
>>>>>
>>>>> @@ -368,23 +372,18 @@ void ceph_update_write_metrics(struct ceph_client_metric *m,
>>>>>                                unsigned int size, int rc)
>>>>>     {
>>>>>         ktime_t lat = ktime_sub(r_end, r_start);
>>>>> -     ktime_t total;
>>>>>
>>>>>         if (unlikely(rc && rc != -ETIMEDOUT))
>>>>>                 return;
>>>>>
>>>>>         spin_lock(&m->write_metric_lock);
>>>>> -     total = ++m->total_writes;
>>>>>         m->write_size_sum += size;
>>>>> -     m->write_latency_sum += lat;
>>>>>         METRIC_UPDATE_MIN_MAX(m->write_size_min,
>>>>>                               m->write_size_max,
>>>>>                               size);
>>>>> -     METRIC_UPDATE_MIN_MAX(m->write_latency_min,
>>>>> -                           m->write_latency_max,
>>>>> -                           lat);
>>>>> -     __update_stdev(total, m->write_latency_sum,
>>>>> -                    &m->write_latency_sq_sum, lat);
>>>>> +     __update_latency(&m->total_writes, &m->write_latency_sum,
>>>>> +                      &m->avg_write_latency, &m->write_latency_min,
>>>>> +                      &m->write_latency_max, &m->write_latency_stdev, lat);
>>>>>         spin_unlock(&m->write_metric_lock);
>>>>>     }
>>>>>
>>>>> @@ -393,18 +392,13 @@ void ceph_update_metadata_metrics(struct ceph_client_metric *m,
>>>>>                                   int rc)
>>>>>     {
>>>>>         ktime_t lat = ktime_sub(r_end, r_start);
>>>>> -     ktime_t total;
>>>>>
>>>>>         if (unlikely(rc && rc != -ENOENT))
>>>>>                 return;
>>>>>
>>>>>         spin_lock(&m->metadata_metric_lock);
>>>>> -     total = ++m->total_metadatas;
>>>>> -     m->metadata_latency_sum += lat;
>>>>> -     METRIC_UPDATE_MIN_MAX(m->metadata_latency_min,
>>>>> -                           m->metadata_latency_max,
>>>>> -                           lat);
>>>>> -     __update_stdev(total, m->metadata_latency_sum,
>>>>> -                    &m->metadata_latency_sq_sum, lat);
>>>>> +     __update_latency(&m->total_metadatas, &m->metadata_latency_sum,
>>>>> +                      &m->avg_metadata_latency, &m->metadata_latency_min,
>>>>> +                      &m->metadata_latency_max, &m->metadata_latency_stdev, lat);
>>>>>         spin_unlock(&m->metadata_metric_lock);
>>>>>     }
>>>>> diff --git a/fs/ceph/metric.h b/fs/ceph/metric.h
>>>>> index 103ed736f9d2..a5da21b8f8ed 100644
>>>>> --- a/fs/ceph/metric.h
>>>>> +++ b/fs/ceph/metric.h
>>>>> @@ -138,7 +138,8 @@ struct ceph_client_metric {
>>>>>         u64 read_size_min;
>>>>>         u64 read_size_max;
>>>>>         ktime_t read_latency_sum;
>>>>> -     ktime_t read_latency_sq_sum;
>>>>> +     ktime_t avg_read_latency;
>>>>> +     ktime_t read_latency_stdev;
>>>>>         ktime_t read_latency_min;
>>>>>         ktime_t read_latency_max;
>>>>>
>>>>> @@ -148,14 +149,16 @@ struct ceph_client_metric {
>>>>>         u64 write_size_min;
>>>>>         u64 write_size_max;
>>>>>         ktime_t write_latency_sum;
>>>>> -     ktime_t write_latency_sq_sum;
>>>>> +     ktime_t avg_write_latency;
>>>>> +     ktime_t write_latency_stdev;
>>>>>         ktime_t write_latency_min;
>>>>>         ktime_t write_latency_max;
>>>>>
>>>>>         spinlock_t metadata_metric_lock;
>>>>>         u64 total_metadatas;
>>>>>         ktime_t metadata_latency_sum;
>>>>> -     ktime_t metadata_latency_sq_sum;
>>>>> +     ktime_t avg_metadata_latency;
>>>>> +     ktime_t metadata_latency_stdev;
>>>>>         ktime_t metadata_latency_min;
>>>>>         ktime_t metadata_latency_max;
>>>>>
>


Thread overview: 19+ messages
2021-09-14  8:48 [PATCH v2 0/4] ceph: forward average read/write/metadata latency Venky Shankar
2021-09-14  8:48 ` [PATCH v2 1/4] ceph: use "struct ceph_timespec" for r/w/m latencies Venky Shankar
2021-09-14  8:49 ` [PATCH v2 2/4] ceph: track average/stdev r/w/m latency Venky Shankar
2021-09-14 12:52   ` Xiubo Li
2021-09-14 13:03     ` Venky Shankar
2021-09-14 13:09   ` Xiubo Li
2021-09-14 13:30     ` Venky Shankar
2021-09-14 13:45       ` Xiubo Li
2021-09-14 13:52         ` Xiubo Li
2021-09-14 14:00           ` Venky Shankar
2021-09-14 14:10             ` Xiubo Li
2021-09-14 13:53         ` Venky Shankar
2021-09-14 13:58           ` Xiubo Li [this message]
2021-09-14 13:13   ` Xiubo Li
2021-09-14 13:32     ` Jeff Layton
2021-09-14 13:32     ` Venky Shankar
2021-09-14  8:49 ` [PATCH v2 3/4] ceph: include average/stddev r/w/m latency in mds metrics Venky Shankar
2021-09-14 13:57   ` Xiubo Li
2021-09-14  8:49 ` [PATCH v2 4/4] ceph: use tracked average r/w/m latencies to display metrics in debugfs Venky Shankar
