From: Xiubo Li <xiubli@redhat.com>
To: Venky Shankar <vshankar@redhat.com>, Jeff Layton <jlayton@redhat.com>
Cc: Patrick Donnelly <pdonnell@redhat.com>,
ceph-devel <ceph-devel@vger.kernel.org>
Subject: Re: [PATCH v3 2/4] ceph: track average/stdev r/w/m latency
Date: Wed, 22 Sep 2021 22:34:28 +0800 [thread overview]
Message-ID: <ba1d57df-f977-4319-a857-c629ea47a818@redhat.com> (raw)
In-Reply-To: <CACPzV1=avKH1TA7EJMRe_ivXxzMWn4ZaVMQVb3Nf6ZbdBa10-A@mail.gmail.com>
On 9/22/21 9:44 PM, Venky Shankar wrote:
> On Wed, Sep 22, 2021 at 6:03 PM Venky Shankar <vshankar@redhat.com> wrote:
>> On Wed, Sep 22, 2021 at 5:47 PM Jeff Layton <jlayton@redhat.com> wrote:
>>> On Tue, 2021-09-21 at 18:37 +0530, Venky Shankar wrote:
>>>> Update the math involved to closely mimic how it's done in
>>>> user land. This does not make a lot of difference to the
>>>> execution speed.
>>>>
>>>> Signed-off-by: Venky Shankar <vshankar@redhat.com>
>>>> ---
>>>> fs/ceph/metric.c | 63 +++++++++++++++++++++---------------------------
>>>> fs/ceph/metric.h | 3 +++
>>>> 2 files changed, 31 insertions(+), 35 deletions(-)
>>>>
>>>> diff --git a/fs/ceph/metric.c b/fs/ceph/metric.c
>>>> index 226dc38e2909..ca758bff69ca 100644
>>>> --- a/fs/ceph/metric.c
>>>> +++ b/fs/ceph/metric.c
>>>> @@ -245,6 +245,7 @@ int ceph_metric_init(struct ceph_client_metric *m)
>>>>
>>>> spin_lock_init(&m->read_metric_lock);
>>>> m->read_latency_sq_sum = 0;
>>>> + m->avg_read_latency = 0;
>>>> m->read_latency_min = KTIME_MAX;
>>>> m->read_latency_max = 0;
>>>> m->total_reads = 0;
>>>> @@ -255,6 +256,7 @@ int ceph_metric_init(struct ceph_client_metric *m)
>>>>
>>>> spin_lock_init(&m->write_metric_lock);
>>>> m->write_latency_sq_sum = 0;
>>>> + m->avg_write_latency = 0;
>>>> m->write_latency_min = KTIME_MAX;
>>>> m->write_latency_max = 0;
>>>> m->total_writes = 0;
>>>> @@ -265,6 +267,7 @@ int ceph_metric_init(struct ceph_client_metric *m)
>>>>
>>>> spin_lock_init(&m->metadata_metric_lock);
>>>> m->metadata_latency_sq_sum = 0;
>>>> + m->avg_metadata_latency = 0;
>>>> m->metadata_latency_min = KTIME_MAX;
>>>> m->metadata_latency_max = 0;
>>>> m->total_metadatas = 0;
>>>> @@ -322,20 +325,25 @@ void ceph_metric_destroy(struct ceph_client_metric *m)
>>>> max = new; \
>>>> }
>>>>
>>>> -static inline void __update_stdev(ktime_t total, ktime_t lsum,
>>>> - ktime_t *sq_sump, ktime_t lat)
>>>> +static inline void __update_latency(ktime_t *ctotal, ktime_t *lsum,
>>>> + ktime_t *lavg, ktime_t *min, ktime_t *max,
>>>> + ktime_t *sum_sq, ktime_t lat)
>>>> {
>>>> - ktime_t avg, sq;
>>>> + ktime_t total, avg;
>>>>
>>>> - if (unlikely(total == 1))
>>>> - return;
>>>> + total = ++(*ctotal);
>>>> + *lsum += lat;
>>>> +
>>>> + METRIC_UPDATE_MIN_MAX(*min, *max, lat);
>>>>
>>>> - /* the sq is (lat - old_avg) * (lat - new_avg) */
>>>> - avg = DIV64_U64_ROUND_CLOSEST((lsum - lat), (total - 1));
>>>> - sq = lat - avg;
>>>> - avg = DIV64_U64_ROUND_CLOSEST(lsum, total);
>>>> - sq = sq * (lat - avg);
>>>> - *sq_sump += sq;
>>>> + if (unlikely(total == 1)) {
>>>> + *lavg = lat;
>>>> + *sum_sq = 0;
>>>> + } else {
>>>> + avg = *lavg + div64_s64(lat - *lavg, total);
>>>> + *sum_sq += (lat - *lavg)*(lat - avg);
>>>> + *lavg = avg;
>>>> + }
>>>> }
>>>>
>>>> void ceph_update_read_metrics(struct ceph_client_metric *m,
>>>> @@ -343,23 +351,18 @@ void ceph_update_read_metrics(struct ceph_client_metric *m,
>>>> unsigned int size, int rc)
>>>> {
>>>> ktime_t lat = ktime_sub(r_end, r_start);
>>>> - ktime_t total;
>>>>
>>>> if (unlikely(rc < 0 && rc != -ENOENT && rc != -ETIMEDOUT))
>>>> return;
>>>>
>>>> spin_lock(&m->read_metric_lock);
>>>> - total = ++m->total_reads;
>>>> m->read_size_sum += size;
>>>> - m->read_latency_sum += lat;
>>>> METRIC_UPDATE_MIN_MAX(m->read_size_min,
>>>> m->read_size_max,
>>>> size);
>>>> - METRIC_UPDATE_MIN_MAX(m->read_latency_min,
>>>> - m->read_latency_max,
>>>> - lat);
>>>> - __update_stdev(total, m->read_latency_sum,
>>>> - &m->read_latency_sq_sum, lat);
>>>> + __update_latency(&m->total_reads, &m->read_latency_sum,
>>>> + &m->avg_read_latency, &m->read_latency_min,
>>>> + &m->read_latency_max, &m->read_latency_sq_sum, lat);
>>> Do we really need to calculate the std deviation on every update? We
>>> have to figure that in most cases, this stuff will be collected but only
>>> seldom viewed.
>>>
>>> ISTM that we ought to collect just the bare minimum of info on each
>>> update, and save the more expensive calculations for the tool presenting
>>> this info.
>> Yeh, that's probably the plan we want going forward when introducing
>> new metrics.
>>
>> FWIW, we could start doing it with this itself. It's just that the
>> user land PRs are approved and those do the way it is done here.
>>
>> I'm ok with moving math crunching to the tool and it should not be a
>> major change to this patchset.
> So, I kind of recall why we did it this way -- metrics exchange
> between MDS and ceph-mgr is restricted to std::pair<uint64_t,
> uint64_t>.
Earlier I pushed a patch to improve this and switch it to a list or
something similar, but it was reverted for some reason.
> That would probably need to be expanded to carry variable-sized
> objects. I remember seeing a PR that does something like that.
> Need to dig it up...
>
> The other way would be to exchange the necessary information needed
> for the calculation as separate types, but that's not really clean
> and results in unnecessary bloat.
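For reference, the incremental update in __update_latency() above is
Welford's online algorithm. A plain user-space sketch of the same math
(hypothetical struct/function names; the integer division truncates the
same way div64_s64 does for positive latencies):

```c
#include <assert.h>
#include <stdint.h>

/* Running count/sum/mean plus sum of squared deviations, one metric. */
struct lat_metric {
	int64_t total;
	int64_t sum;
	int64_t avg;
	int64_t sq_sum;
};

/*
 * Welford's online update, mirroring __update_latency():
 *   avg'    = avg + (x - avg) / n
 *   sq_sum += (x - avg) * (x - avg')
 * so stdev can later be derived as sqrt(sq_sum / (n - 1)).
 */
static void lat_update(struct lat_metric *m, int64_t lat)
{
	int64_t old_avg = m->avg;

	m->total++;
	m->sum += lat;
	if (m->total == 1) {
		m->avg = lat;
		m->sq_sum = 0;
	} else {
		m->avg = old_avg + (lat - old_avg) / m->total;
		m->sq_sum += (lat - old_avg) * (lat - m->avg);
	}
}
```

The nice property is that each sample is folded in with O(1) work and no
per-sample division by the full sum, which is why it suits the hot path.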
>
>>>> spin_unlock(&m->read_metric_lock);
>>>> }
>>>>
>>>> @@ -368,23 +371,18 @@ void ceph_update_write_metrics(struct ceph_client_metric *m,
>>>> unsigned int size, int rc)
>>>> {
>>>> ktime_t lat = ktime_sub(r_end, r_start);
>>>> - ktime_t total;
>>>>
>>>> if (unlikely(rc && rc != -ETIMEDOUT))
>>>> return;
>>>>
>>>> spin_lock(&m->write_metric_lock);
>>>> - total = ++m->total_writes;
>>>> m->write_size_sum += size;
>>>> - m->write_latency_sum += lat;
>>>> METRIC_UPDATE_MIN_MAX(m->write_size_min,
>>>> m->write_size_max,
>>>> size);
>>>> - METRIC_UPDATE_MIN_MAX(m->write_latency_min,
>>>> - m->write_latency_max,
>>>> - lat);
>>>> - __update_stdev(total, m->write_latency_sum,
>>>> - &m->write_latency_sq_sum, lat);
>>>> + __update_latency(&m->total_writes, &m->write_latency_sum,
>>>> + &m->avg_write_latency, &m->write_latency_min,
>>>> + &m->write_latency_max, &m->write_latency_sq_sum, lat);
>>>> spin_unlock(&m->write_metric_lock);
>>>> }
>>>>
>>>> @@ -393,18 +391,13 @@ void ceph_update_metadata_metrics(struct ceph_client_metric *m,
>>>> int rc)
>>>> {
>>>> ktime_t lat = ktime_sub(r_end, r_start);
>>>> - ktime_t total;
>>>>
>>>> if (unlikely(rc && rc != -ENOENT))
>>>> return;
>>>>
>>>> spin_lock(&m->metadata_metric_lock);
>>>> - total = ++m->total_metadatas;
>>>> - m->metadata_latency_sum += lat;
>>>> - METRIC_UPDATE_MIN_MAX(m->metadata_latency_min,
>>>> - m->metadata_latency_max,
>>>> - lat);
>>>> - __update_stdev(total, m->metadata_latency_sum,
>>>> - &m->metadata_latency_sq_sum, lat);
>>>> + __update_latency(&m->total_metadatas, &m->metadata_latency_sum,
>>>> + &m->avg_metadata_latency, &m->metadata_latency_min,
>>>> + &m->metadata_latency_max, &m->metadata_latency_sq_sum, lat);
>>>> spin_unlock(&m->metadata_metric_lock);
>>>> }
>>>> diff --git a/fs/ceph/metric.h b/fs/ceph/metric.h
>>>> index 103ed736f9d2..0af02e212033 100644
>>>> --- a/fs/ceph/metric.h
>>>> +++ b/fs/ceph/metric.h
>>>> @@ -138,6 +138,7 @@ struct ceph_client_metric {
>>>> u64 read_size_min;
>>>> u64 read_size_max;
>>>> ktime_t read_latency_sum;
>>>> + ktime_t avg_read_latency;
>>>> ktime_t read_latency_sq_sum;
>>>> ktime_t read_latency_min;
>>>> ktime_t read_latency_max;
>>>> @@ -148,6 +149,7 @@ struct ceph_client_metric {
>>>> u64 write_size_min;
>>>> u64 write_size_max;
>>>> ktime_t write_latency_sum;
>>>> + ktime_t avg_write_latency;
>>>> ktime_t write_latency_sq_sum;
>>>> ktime_t write_latency_min;
>>>> ktime_t write_latency_max;
>>>> @@ -155,6 +157,7 @@ struct ceph_client_metric {
>>>> spinlock_t metadata_metric_lock;
>>>> u64 total_metadatas;
>>>> ktime_t metadata_latency_sum;
>>>> + ktime_t avg_metadata_latency;
>>>> ktime_t metadata_latency_sq_sum;
>>>> ktime_t metadata_latency_min;
>>>> ktime_t metadata_latency_max;
>>> --
>>> Jeff Layton <jlayton@redhat.com>
>>>
>>
>> --
>> Cheers,
>> Venky
>
>
Thread overview: 12+ messages
2021-09-21 13:07 [PATCH v3 0/4] ceph: forward average read/write/metadata latency Venky Shankar
2021-09-21 13:07 ` [PATCH v3 1/4] ceph: use "struct ceph_timespec" for r/w/m latencies Venky Shankar
2021-09-21 13:07 ` [PATCH v3 2/4] ceph: track average/stdev r/w/m latency Venky Shankar
2021-09-22 12:17 ` Jeff Layton
2021-09-22 12:33 ` Venky Shankar
2021-09-22 13:44 ` Venky Shankar
2021-09-22 14:34 ` Xiubo Li [this message]
2021-09-22 14:37 ` Xiubo Li
2021-09-22 17:10 ` Venky Shankar
2021-09-22 13:45 ` Xiubo Li
2021-09-21 13:07 ` [PATCH v3 3/4] ceph: include average/stddev r/w/m latency in mds metrics Venky Shankar
2021-09-21 13:07 ` [PATCH v3 4/4] ceph: use tracked average r/w/m latencies to display metrics in debugfs Venky Shankar