ceph-devel.vger.kernel.org archive mirror
From: Xiubo Li <xiubli@redhat.com>
To: Venky Shankar <vshankar@redhat.com>
Cc: Jeff Layton <jlayton@redhat.com>,
	Patrick Donnelly <pdonnell@redhat.com>,
	ceph-devel <ceph-devel@vger.kernel.org>
Subject: Re: [PATCH v2 2/4] ceph: track average/stdev r/w/m latency
Date: Tue, 14 Sep 2021 22:10:15 +0800	[thread overview]
Message-ID: <ae906a4e-f626-d2ce-d357-e3a48365ee81@redhat.com> (raw)
In-Reply-To: <CACPzV1kYU7fknFmiTqns1iHFEOiKhGYCbcjeCq5wdsb7JT81_A@mail.gmail.com>


On 9/14/21 10:00 PM, Venky Shankar wrote:
> On Tue, Sep 14, 2021 at 7:22 PM Xiubo Li <xiubli@redhat.com> wrote:
>>
>> On 9/14/21 9:45 PM, Xiubo Li wrote:
>>> On 9/14/21 9:30 PM, Venky Shankar wrote:
>>>> On Tue, Sep 14, 2021 at 6:39 PM Xiubo Li <xiubli@redhat.com> wrote:
>>>>> On 9/14/21 4:49 PM, Venky Shankar wrote:
>> [...]
>>> In user space this is very easy to do, but not in kernel space,
>>> especially since there is no floating-point arithmetic available there.
>>>
>> As I remember, this is the main reason why I was planning to send the
>> raw metrics to the MDS and let the MDS do the computation.
>>
>> So if possible, why not just send the raw data to the MDS and let it
>> do the stdev computation?
> Since metrics are sent each second (I suppose) and there can be N
> operations done within that second, what raw data (say for avg/stdev
> calculation) would the client send to the MDS?

Yeah.

For example, just send the "sq_sum" and the total count to the MDS; these 
should be enough to compute the stdev. The MDS or the cephfs-top tool can 
then derive it with int_sqrt(sq_sum / total).
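A minimal sketch of that receiver-side computation (the helper names here are hypothetical, and the integer square root just mimics the spirit of the kernel's int_sqrt()):

```c
#include <assert.h>
#include <stdint.h>

/* Integer square root (Newton's method); illustrative only. */
static uint64_t isqrt64(uint64_t x)
{
	uint64_t r = x, y = (x + 1) / 2;

	while (y < r) {
		r = y;
		y = (r + x / r) / 2;
	}
	return r;
}

/*
 * Hypothetical receiver-side helper: the client ships only sq_sum (the
 * running sum of squared deviations) and total (the sample count), and
 * the MDS or cephfs-top derives the stdev from those two raw values.
 */
static uint64_t stdev_from_raw(uint64_t sq_sum, uint64_t total)
{
	return total ? isqrt64(sq_sum / total) : 0;
}
```

So the wire payload per metric would only need two extra integers, with no per-sample history kept anywhere.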

I am okay with either approach and it's up to you, but the stdev could be 
more accurate in userspace, where floating-point computation is available.


>
>>
>>> Currently the kclient computes the average as:
>>>
>>> avg(n) = (avg(n-1) + latency(n)) / n, which should be close to the
>>> real avg(n) = sum(latency(n), latency(n-1), ..., latency(1)) / n.
>>>
>>> Because it's hard to record all the latency values, this is also what
>>> many other userspace tools do to compute the average.
>>>
>>>
>>>>> Though the current stdev computation is not exactly what the math
>>>>> formula prescribes, it is close to it, because the kernel cannot
>>>>> record all the latency values and recompute on demand; that would
>>>>> consume a large amount of memory and CPU.
>>>> The approach is to calculate the running variance, i.e., to update
>>>> the variance as each latency sample arrives.
>>>>
>>>>>>     }
>>>>>>
>>>>>>     void ceph_update_read_metrics(struct ceph_client_metric *m,
>>>>>> @@ -343,23 +352,18 @@ void ceph_update_read_metrics(struct ceph_client_metric *m,
>>>>>>                               unsigned int size, int rc)
>>>>>>     {
>>>>>>         ktime_t lat = ktime_sub(r_end, r_start);
>>>>>> -     ktime_t total;
>>>>>>
>>>>>>         if (unlikely(rc < 0 && rc != -ENOENT && rc != -ETIMEDOUT))
>>>>>>                 return;
>>>>>>
>>>>>>         spin_lock(&m->read_metric_lock);
>>>>>> -     total = ++m->total_reads;
>>>>>>         m->read_size_sum += size;
>>>>>> -     m->read_latency_sum += lat;
>>>>>>         METRIC_UPDATE_MIN_MAX(m->read_size_min,
>>>>>>                               m->read_size_max,
>>>>>>                               size);
>>>>>> -     METRIC_UPDATE_MIN_MAX(m->read_latency_min,
>>>>>> -                           m->read_latency_max,
>>>>>> -                           lat);
>>>>>> -     __update_stdev(total, m->read_latency_sum,
>>>>>> -                    &m->read_latency_sq_sum, lat);
>>>>>> +     __update_latency(&m->total_reads, &m->read_latency_sum,
>>>>>> +                      &m->avg_read_latency, &m->read_latency_min,
>>>>>> +                      &m->read_latency_max, &m->read_latency_stdev, lat);
>>>>>>         spin_unlock(&m->read_metric_lock);
>>>>>>     }
>>>>>>
>>>>>> @@ -368,23 +372,18 @@ void ceph_update_write_metrics(struct ceph_client_metric *m,
>>>>>>                                unsigned int size, int rc)
>>>>>>     {
>>>>>>         ktime_t lat = ktime_sub(r_end, r_start);
>>>>>> -     ktime_t total;
>>>>>>
>>>>>>         if (unlikely(rc && rc != -ETIMEDOUT))
>>>>>>                 return;
>>>>>>
>>>>>>         spin_lock(&m->write_metric_lock);
>>>>>> -     total = ++m->total_writes;
>>>>>>         m->write_size_sum += size;
>>>>>> -     m->write_latency_sum += lat;
>>>>>>         METRIC_UPDATE_MIN_MAX(m->write_size_min,
>>>>>>                               m->write_size_max,
>>>>>>                               size);
>>>>>> -     METRIC_UPDATE_MIN_MAX(m->write_latency_min,
>>>>>> -                           m->write_latency_max,
>>>>>> -                           lat);
>>>>>> -     __update_stdev(total, m->write_latency_sum,
>>>>>> -                    &m->write_latency_sq_sum, lat);
>>>>>> +     __update_latency(&m->total_writes, &m->write_latency_sum,
>>>>>> +                      &m->avg_write_latency, &m->write_latency_min,
>>>>>> +                      &m->write_latency_max, &m->write_latency_stdev, lat);
>>>>>>         spin_unlock(&m->write_metric_lock);
>>>>>>     }
>>>>>>
>>>>>> @@ -393,18 +392,13 @@ void ceph_update_metadata_metrics(struct ceph_client_metric *m,
>>>>>>                                   int rc)
>>>>>>     {
>>>>>>         ktime_t lat = ktime_sub(r_end, r_start);
>>>>>> -     ktime_t total;
>>>>>>
>>>>>>         if (unlikely(rc && rc != -ENOENT))
>>>>>>                 return;
>>>>>>
>>>>>>         spin_lock(&m->metadata_metric_lock);
>>>>>> -     total = ++m->total_metadatas;
>>>>>> -     m->metadata_latency_sum += lat;
>>>>>> -     METRIC_UPDATE_MIN_MAX(m->metadata_latency_min,
>>>>>> -                           m->metadata_latency_max,
>>>>>> -                           lat);
>>>>>> -     __update_stdev(total, m->metadata_latency_sum,
>>>>>> -                    &m->metadata_latency_sq_sum, lat);
>>>>>> +     __update_latency(&m->total_metadatas, &m->metadata_latency_sum,
>>>>>> +                      &m->avg_metadata_latency, &m->metadata_latency_min,
>>>>>> +                      &m->metadata_latency_max, &m->metadata_latency_stdev, lat);
>>>>>>         spin_unlock(&m->metadata_metric_lock);
>>>>>>     }
>>>>>> diff --git a/fs/ceph/metric.h b/fs/ceph/metric.h
>>>>>> index 103ed736f9d2..a5da21b8f8ed 100644
>>>>>> --- a/fs/ceph/metric.h
>>>>>> +++ b/fs/ceph/metric.h
>>>>>> @@ -138,7 +138,8 @@ struct ceph_client_metric {
>>>>>>         u64 read_size_min;
>>>>>>         u64 read_size_max;
>>>>>>         ktime_t read_latency_sum;
>>>>>> -     ktime_t read_latency_sq_sum;
>>>>>> +     ktime_t avg_read_latency;
>>>>>> +     ktime_t read_latency_stdev;
>>>>>>         ktime_t read_latency_min;
>>>>>>         ktime_t read_latency_max;
>>>>>>
>>>>>> @@ -148,14 +149,16 @@ struct ceph_client_metric {
>>>>>>         u64 write_size_min;
>>>>>>         u64 write_size_max;
>>>>>>         ktime_t write_latency_sum;
>>>>>> -     ktime_t write_latency_sq_sum;
>>>>>> +     ktime_t avg_write_latency;
>>>>>> +     ktime_t write_latency_stdev;
>>>>>>         ktime_t write_latency_min;
>>>>>>         ktime_t write_latency_max;
>>>>>>
>>>>>>         spinlock_t metadata_metric_lock;
>>>>>>         u64 total_metadatas;
>>>>>>         ktime_t metadata_latency_sum;
>>>>>> -     ktime_t metadata_latency_sq_sum;
>>>>>> +     ktime_t avg_metadata_latency;
>>>>>> +     ktime_t metadata_latency_stdev;
>>>>>>         ktime_t metadata_latency_min;
>>>>>>         ktime_t metadata_latency_max;
>>>>>>
>



Thread overview: 19+ messages
2021-09-14  8:48 [PATCH v2 0/4] ceph: forward average read/write/metadata latency Venky Shankar
2021-09-14  8:48 ` [PATCH v2 1/4] ceph: use "struct ceph_timespec" for r/w/m latencies Venky Shankar
2021-09-14  8:49 ` [PATCH v2 2/4] ceph: track average/stdev r/w/m latency Venky Shankar
2021-09-14 12:52   ` Xiubo Li
2021-09-14 13:03     ` Venky Shankar
2021-09-14 13:09   ` Xiubo Li
2021-09-14 13:30     ` Venky Shankar
2021-09-14 13:45       ` Xiubo Li
2021-09-14 13:52         ` Xiubo Li
2021-09-14 14:00           ` Venky Shankar
2021-09-14 14:10             ` Xiubo Li [this message]
2021-09-14 13:53         ` Venky Shankar
2021-09-14 13:58           ` Xiubo Li
2021-09-14 13:13   ` Xiubo Li
2021-09-14 13:32     ` Jeff Layton
2021-09-14 13:32     ` Venky Shankar
2021-09-14  8:49 ` [PATCH v2 3/4] ceph: include average/stddev r/w/m latency in mds metrics Venky Shankar
2021-09-14 13:57   ` Xiubo Li
2021-09-14  8:49 ` [PATCH v2 4/4] ceph: use tracked average r/w/m latencies to display metrics in debugfs Venky Shankar
