From: Andreas Joachim Peters <Andreas.Joachim.Peters@cern.ch>
To: Milosz Tanski <milosz@adfin.com>, Gregory Farnum <greg@inktank.com>
Cc: Alexandre DERUMIER <aderumier@odiso.com>,
	ceph-devel <ceph-devel@vger.kernel.org>
Subject: RE: CEPH IOPS Baseline Measurements with MemStore
Date: Tue, 24 Jun 2014 12:13:42 +0000	[thread overview]
Message-ID: <3472A07E6605974CBC9BC573F1BC02E4AE745C14@CERNXCHG44.cern.ch> (raw)
In-Reply-To: <CANP1eJG2a1tg6wMs1o1HNL_nn3MWtpduot-X5HRxX6Vi11OBPQ@mail.gmail.com>

I made the same MemStore measurements with the master branch.
It seems that the sharded write queue has no visible performance impact for this low-latency backend.

On the contrary, I observe a general performance regression (e.g. 70 kHz => 44 kHz for rOP) in comparison to firefly.

If I disable the ops tracking, in firefly I move from 75 => 80 kHz, while in master I move from 44 => 84 kHz. Maybe you know where this difference comes from.
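(For anyone reproducing this: the op tracker can be switched off in ceph.conf; the option name below is what I believe the contemporary branches use, assuming it is present in the build being tested:)

```ini
[osd]
# disable per-op event tracking to remove its bookkeeping/locking overhead
osd enable op tracker = false
```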

Attached is the OPS tracking for the idle -t 1 case and the loaded 4x -t 10 case with the master branch.

Is there some presentation/drawing explaining the details of the OP pipelining in the OSD daemon, showing all thread pools and queues, and explaining which tuning parameters modify the behaviour of these threads/queues?

Cheers Andreas.


======================================================================================

Single wOP in flight:

{ "time": "2014-06-24 12:06:20.499832",
                      "event": "initiated"},
                    { "time": "2014-06-24 12:06:20.500019",
                      "event": "reached_pg"},
                    { "time": "2014-06-24 12:06:20.500050",
                      "event": "started"},
                    { "time": "2014-06-24 12:06:20.500056",
                      "event": "started"},
                    { "time": "2014-06-24 12:06:20.500169",
                      "event": "op_applied"},
                    { "time": "2014-06-24 12:06:20.500187",
                      "event": "op_commit"},
                    { "time": "2014-06-24 12:06:20.500194",
                      "event": "commit_sent"},
                    { "time": "2014-06-24 12:06:20.500202",
                      "event": "done"}]]}]}

40 wOPS in flight:
                    { "time": "2014-06-24 12:09:07.313460",
                      "event": "initiated"},
                    { "time": "2014-06-24 12:09:07.316255",
                      "event": "reached_pg"},
                    { "time": "2014-06-24 12:09:07.317314",
                      "event": "started"},
                    { "time": "2014-06-24 12:09:07.317830",
                      "event": "started"},
                    { "time": "2014-06-24 12:09:07.320276",
                      "event": "op_applied"},
                    { "time": "2014-06-24 12:09:07.320346",
                      "event": "op_commit"},
                    { "time": "2014-06-24 12:09:07.320363",
                      "event": "commit_sent"},
                    { "time": "2014-06-24 12:09:07.320372",
                      "event": "done"}]]}]}




________________________________________
From: Milosz Tanski [milosz@adfin.com]
Sent: 23 June 2014 22:33
To: Gregory Farnum
Cc: Alexandre DERUMIER; Andreas Joachim Peters; ceph-devel
Subject: Re: CEPH IOPS Baseline Measurements with MemStore

I'm working on getting mutrace going on the OSD to profile the hot
contented lock paths in master. Hopefully I'll have something soon.

On Mon, Jun 23, 2014 at 1:41 PM, Gregory Farnum <greg@inktank.com> wrote:
> On Fri, Jun 20, 2014 at 12:41 AM, Alexandre DERUMIER
> <aderumier@odiso.com> wrote:
>> There is also a tracker here
>> http://tracker.ceph.com/issues/7191
>> "Replace Mutex to RWLock with fdcache_lock in FileStore"
>>
>> It seems to be done, but I'm not sure it's already in the master branch?
>
> I believe this particular patch is still not merged (reviews etc on it
> and some related things are in progress), but some other pieces of the
> puzzle are in master (but not being backported to Firefly). In
> particular, we've enabled an "ms_fast_dispatch" mechanism which
> directly queues ops from the Pipe thread into the "OpWQ" (rather than
> going through a DispatchQueue priority queue first), and we've sharded
> the OpWQ. In progress but coming soonish are patches that should
> reduce the CPU cost of lfn_find and related FileStore calls, as well
> as sharding the fdcache lock (unless that one's merged already; I
> forget).
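[The sharding idea described above can be sketched in a few lines; this is illustrative pseudocode in Python, not Ceph's actual C++ OpWQ, and the names are made up. The point is that ops are hashed by PG so same-PG ordering is preserved while different PGs no longer contend on one queue lock:]

```python
# Minimal sketch of a sharded op work queue (illustrative, not Ceph code).
# Each shard would be drained by its own worker thread, so ops for
# different PGs do not serialize on a single queue mutex.
import queue

class ShardedOpWQ:
    def __init__(self, num_shards=8):
        self.shards = [queue.Queue() for _ in range(num_shards)]

    def enqueue(self, pg_id, op):
        # ops for the same PG always land on the same shard,
        # preserving per-PG ordering
        self.shards[pg_id % len(self.shards)].put(op)

    def dequeue(self, shard_idx):
        # a shard worker pulls only from its own queue
        return self.shards[shard_idx].get_nowait()
```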
> And it turns out the "xattr spillout" patches to avoid doing so many
> LevelDB accesses were broken, and those are fixed in master (being
> backported to Firefly shortly).
>
> So there's a fair bit of work going on to address most all of those
> noted bottlenecks; if you're interested in it you probably want to run
> tests against master and try to track the conversations on the Tracker
> and ceph-devel. :)
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com



--
Milosz Tanski
CTO
16 East 34th Street, 15th floor
New York, NY 10016

p: 646-253-9055
e: milosz@adfin.com

Thread overview: 14+ messages
2014-06-19  9:05 CEPH IOPS Baseline Measurements with MemStore Andreas Joachim Peters
2014-06-19  9:21 ` Alexandre DERUMIER
2014-06-19  9:29   ` Andreas Joachim Peters
2014-06-19 11:08     ` Alexandre DERUMIER
2014-06-19 22:18       ` Milosz Tanski
2014-06-20  4:35         ` Alexandre DERUMIER
2014-06-20  4:41           ` Alexandre DERUMIER
2014-06-23 17:41             ` Gregory Farnum
2014-06-23 20:33               ` Milosz Tanski
2014-06-24 12:13                 ` Andreas Joachim Peters [this message]
2014-06-24 16:53                   ` Somnath Roy
2014-06-25  2:55                   ` Haomai Wang
2014-06-24  5:55               ` Alexandre DERUMIER
2014-06-20 21:49 ` Andreas Joachim Peters
