From: Xiubo Li <xiubli@redhat.com>
To: Jeff Layton <jlayton@kernel.org>
Cc: idryomov@gmail.com, pdonnell@redhat.com, ceph-devel@vger.kernel.org
Subject: Re: [PATCH 2/4] ceph: update the __update_latency helper
Date: Tue, 23 Mar 2021 21:14:41 +0800 [thread overview]
Message-ID: <3eaf5c43-8c81-b116-b500-7fdfb3e3153f@redhat.com> (raw)
In-Reply-To: <c836da61eaba7650538cdfe2b37c8c0214d1312a.camel@kernel.org>
On 2021/3/23 20:34, Jeff Layton wrote:
> On Mon, 2021-03-22 at 20:28 +0800, xiubli@redhat.com wrote:
>> From: Xiubo Li <xiubli@redhat.com>
>>
>> Let the __update_latency() helper choose the corresponding members
>> according to the metric_type.
>>
>> URL: https://tracker.ceph.com/issues/49913
>> Signed-off-by: Xiubo Li <xiubli@redhat.com>
>> ---
>> fs/ceph/metric.c | 58 +++++++++++++++++++++++++++++++++++-------------
>> 1 file changed, 42 insertions(+), 16 deletions(-)
>>
>> diff --git a/fs/ceph/metric.c b/fs/ceph/metric.c
>> index 75d309f2fb0c..d5560ff99a9d 100644
>> --- a/fs/ceph/metric.c
>> +++ b/fs/ceph/metric.c
>> @@ -249,19 +249,51 @@ void ceph_metric_destroy(struct ceph_client_metric *m)
>> ceph_put_mds_session(m->session);
>> }
>>
>>
>> -static inline void __update_latency(ktime_t *totalp, ktime_t *lsump,
>> - ktime_t *min, ktime_t *max,
>> - ktime_t *sq_sump, ktime_t lat)
>> +typedef enum {
>> + CEPH_METRIC_READ,
>> + CEPH_METRIC_WRITE,
>> + CEPH_METRIC_METADATA,
>> +} metric_type;
>> +
>> +static inline void __update_latency(struct ceph_client_metric *m,
>> + metric_type type, ktime_t lat)
>> {
>> + ktime_t *totalp, *minp, *maxp, *lsump, *sq_sump;
>> ktime_t total, avg, sq, lsum;
>>
>> + switch (type) {
>> + case CEPH_METRIC_READ:
>> + totalp = &m->total_reads;
>> + lsump = &m->read_latency_sum;
>> + minp = &m->read_latency_min;
>> + maxp = &m->read_latency_max;
>> + sq_sump = &m->read_latency_sq_sum;
>> + break;
>> + case CEPH_METRIC_WRITE:
>> + totalp = &m->total_writes;
>> + lsump = &m->write_latency_sum;
>> + minp = &m->write_latency_min;
>> + maxp = &m->write_latency_max;
>> + sq_sump = &m->write_latency_sq_sum;
>> + break;
>> + case CEPH_METRIC_METADATA:
>> + totalp = &m->total_metadatas;
>> + lsump = &m->metadata_latency_sum;
>> + minp = &m->metadata_latency_min;
>> + maxp = &m->metadata_latency_max;
>> + sq_sump = &m->metadata_latency_sq_sum;
>> + break;
>> + default:
>> + return;
>> + }
>> +
>> total = ++(*totalp);
> Why are you adding one to *totalp above? Is that to avoid it being 0?
No. The old code also incremented
total_reads/total_writes/total_metadatas on every call of the
ceph_update_{read/write/metadata}_latency() helpers; this keeps the
same behavior.
>> lsum = (*lsump += lat);
>>
> ^^^
> Instead of doing all of the above with pointers, why not just add to
> total and lsum directly inside the switch statement? This seems like a
> lot of pointless indirection.
Okay, sounds good, will change it.
>>
>> - if (unlikely(lat < *min))
>> - *min = lat;
>> - if (unlikely(lat > *max))
>> - *max = lat;
>> + if (unlikely(lat < *minp))
>> + *minp = lat;
>> + if (unlikely(lat > *maxp))
>> + *maxp = lat;
>>
>> if (unlikely(total == 1))
>> return;
>> @@ -284,9 +316,7 @@ void ceph_update_read_metrics(struct ceph_client_metric *m,
>> return;
>>
>> spin_lock(&m->read_metric_lock);
>> - __update_latency(&m->total_reads, &m->read_latency_sum,
>> - &m->read_latency_min, &m->read_latency_max,
>> - &m->read_latency_sq_sum, lat);
>> + __update_latency(m, CEPH_METRIC_READ, lat);
>> spin_unlock(&m->read_metric_lock);
>> }
>>
>> @@ -300,9 +330,7 @@ void ceph_update_write_metrics(struct ceph_client_metric *m,
>> return;
>>
>> spin_lock(&m->write_metric_lock);
>> - __update_latency(&m->total_writes, &m->write_latency_sum,
>> - &m->write_latency_min, &m->write_latency_max,
>> - &m->write_latency_sq_sum, lat);
>> + __update_latency(m, CEPH_METRIC_WRITE, lat);
>> spin_unlock(&m->write_metric_lock);
>> }
>>
>> @@ -316,8 +344,6 @@ void ceph_update_metadata_metrics(struct ceph_client_metric *m,
>> return;
>>
>> spin_lock(&m->metadata_metric_lock);
>> - __update_latency(&m->total_metadatas, &m->metadata_latency_sum,
>> - &m->metadata_latency_min, &m->metadata_latency_max,
>> - &m->metadata_latency_sq_sum, lat);
>> + __update_latency(m, CEPH_METRIC_METADATA, lat);
>> spin_unlock(&m->metadata_metric_lock);
>> }
Thread overview: 11+ messages
2021-03-22 12:28 [PATCH 0/4] ceph: add IO size metric support xiubli
2021-03-22 12:28 ` [PATCH 1/4] ceph: rename the metric helpers xiubli
2021-03-22 12:28 ` [PATCH 2/4] ceph: update the __update_latency helper xiubli
2021-03-23 12:34 ` Jeff Layton
2021-03-23 13:14 ` Xiubo Li [this message]
2021-03-22 12:28 ` [PATCH 3/4] ceph: avoid count the same request twice or more xiubli
2021-03-22 12:28 ` [PATCH 4/4] ceph: add IO size metrics support xiubli
2021-03-23 12:29 ` Jeff Layton
2021-03-23 13:17 ` Xiubo Li
2021-03-24 15:06 ` [PATCH 0/4] ceph: add IO size metric support Jeff Layton
2021-03-25 0:42 ` Xiubo Li