From: Varada Kari <Varada.Kari@sandisk.com>
To: Sage Weil <sage@newdream.net>
Cc: Haomai Wang <haomai@xsky.com>,
	"Tang, Haodong" <haodong.tang@intel.com>,
	"ceph-devel@vger.kernel.org" <ceph-devel@vger.kernel.org>
Subject: Re: parallel transaction submit
Date: Thu, 25 Aug 2016 14:26:18 +0000	[thread overview]
Message-ID: <BLUPR0201MB19083965C2FB9F3357BF3961E2ED0@BLUPR0201MB1908.namprd02.prod.outlook.com>
In-Reply-To: <alpine.DEB.2.11.1608251404580.10979@piezo.us.to>

On Thursday 25 August 2016 07:41 PM, Sage Weil wrote:
> On Thu, 25 Aug 2016, Varada Kari wrote:
>> Hi,
>>
>> Increasing the number of kv_sync threads does not yield much of a
>> performance improvement. In the current threading model, the shard
>> worker submits the IO to the block device; completions are handled by
>> the aio_callback thread (of which there is one), which hands the
>> requests to the kv_sync thread. That thread batches the requests and
>> submits them to rocksdb. Because kv_sync batches the submissions, we
>> may observe more time spent in the kv_sync_thread routine, and I
>> haven't observed much of an improvement by adding more threads here.
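
[For context, the batching shape described above looks roughly like the
following simplified sketch; this is not the actual BlueStore code, and
TransContext and the helper functions are stand-ins.]

    #include <condition_variable>
    #include <deque>
    #include <mutex>

    struct TransContext;                       // opaque per-transaction state
    void submit_batch_to_kv(std::deque<TransContext*>& batch);
    void complete(TransContext* txc);          // ack the client write

    std::mutex kv_lock;
    std::condition_variable kv_cond;
    std::deque<TransContext*> kv_queue;

    void kv_sync_thread() {
      std::unique_lock<std::mutex> l(kv_lock);
      for (;;) {
        kv_cond.wait(l, [] { return !kv_queue.empty(); });
        // Drain everything queued so far: one synchronous submit covers
        // the whole batch, which is why extra kv threads add little --
        // the batching already amortizes the sync cost.
        std::deque<TransContext*> batch;
        batch.swap(kv_queue);
        l.unlock();
        submit_batch_to_kv(batch);             // one fsync'd write for N txcs
        for (auto* txc : batch)
          complete(txc);
        l.lock();
      }
    }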
>>
>> However, increasing the number of aio callback threads (this still
>> needs some refinement in how we poll for request completions) and
>> completing the write in the same thread context did improve
>> performance. I don't have numbers for how much, but it is better than
>> having multiple kv_sync threads, which adds one more queue and lock.
>> You can refer to
>> https://github.com/varadakari/ceph/commits/wip-parallel-aiocb (ignore
>> the first commit; it was an attempt to do a sync transaction in the
>> same thread context as the sharded worker, to measure the latency).
> Yeah, I think this is right.  I see two avenues of attack:
>
> - Try to eliminate the handoff to _kv_sync_thread by having the 
> transaction submitted to rocksdb in the calling thread.  This will 
> require a bit of refactoring but I think it's possible. We don't actually 
> want to block, though, so it'll be an async submission, and we'll still 
> need kv_sync_thread just telling rocksdb to commit in a loop and 
> triggering callbacks.  A recent PR sharded the completion finishers so I'm 
> guessing the final step would be some affinity thing that pins the 
> finishers to the same cores as the submitters?
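
[For concreteness, the split described above could look roughly like the
sketch below: callers write to rocksdb asynchronously, and one loop makes
the writes durable and triggers the callbacks. It assumes rocksdb's
SyncWAL(); the bookkeeping around `pending` is invented for illustration.]

    #include <rocksdb/db.h>
    #include <deque>
    #include <mutex>

    extern rocksdb::DB* db;
    struct TransContext { void complete(); };

    std::mutex lock;
    std::deque<TransContext*> pending;     // submitted, not yet durable

    // Runs in the sharded worker (submitter) thread: the transaction is
    // handed to rocksdb right here, with no kv_queue handoff.
    void submit_async(TransContext* txc, rocksdb::WriteBatch* batch) {
      rocksdb::WriteOptions wo;
      wo.sync = false;                     // async: no fsync in this thread
      db->Write(wo, batch);
      std::lock_guard<std::mutex> l(lock);
      pending.push_back(txc);
    }

    // kv_sync_thread reduces to: commit in a loop, trigger callbacks.
    void kv_sync_loop() {
      for (;;) {
        std::deque<TransContext*> durable;
        {
          std::lock_guard<std::mutex> l(lock);
          durable.swap(pending);           // real code would wait on a cond
        }
        if (durable.empty())
          continue;
        db->SyncWAL();                     // one sync covers all prior writes
        for (auto* txc : durable)
          txc->complete();
      }
    }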
I essentially copied what kv_sync_thread does and was able to run
multiple callbacks at the same time. If we can complete the write ack in
the same aio_cb thread instead of handing off to a finisher thread, we
eliminate one thread switch and can have the same number of threads as
shards. We can use the same logic to handle the finishers for the
callback threads (though I am not sure we can process the request by
reading the osr here).
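
Roughly, the shape I am experimenting with looks like the sketch below.
It is simplified, and the names (ack_write and so on) are placeholders
rather than the actual code on the branch:

    #include <libaio.h>

    struct TransContext;
    void ack_write(TransContext* txc);   // run the finisher work inline

    // One of these per shard: reap completions and complete the write in
    // this thread context, saving the queue, lock, and thread switch of a
    // separate finisher handoff.
    void aio_cb_thread(io_context_t ioctx) {
      struct io_event events[16];
      for (;;) {
        int n = io_getevents(ioctx, 1, 16, events, nullptr);  // block for >=1
        for (int i = 0; i < n; ++i) {
          // iocb->data was set to the txc at submit time.
          auto* txc = static_cast<TransContext*>(events[i].data);
          ack_write(txc);                // ack in the aio_cb thread itself
        }
      }
    }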

Varada
> - Shard the io completion (before we submit the kv transaction).  Not sure 
> if we want a thread per shard, or polls at opportunistic/strategic points 
> in code.  The goal would be keeping the processing local to the 
> core/socket (vs the current strategy of a single thread waiting/polling 
> for completions and doing the next phase of work).
>
>> I was exploring a way to have the aio callback thread matched/reserved
>> at the time of io submission, so that we don't need to call
>> io_getevents(): a kind of async callback to a specified thread, so
>> that we can avoid the waiting logic in io_getevents() and process the
>> request in the same thread context. You can refer to
>> http://manpages.ubuntu.com/manpages/wily/man3/io_set_callback.3.html. I
>> don't have working code for this yet. FWIW, it is worth experimenting
>> with to see if it reduces any latency.
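
[For reference, the io_set_callback() flavor from the manpage above looks
roughly like this sketch; fd, buf, and the reap loop are placeholders.
Note that something still has to call io_queue_run(), which is the point
made in the reply below.]

    #include <libaio.h>

    void on_write_done(io_context_t ctx, struct iocb *io, long res, long res2) {
      // Runs in whichever thread calls io_queue_run().
      delete io;
    }

    void submit_with_callback(io_context_t ioctx, int fd, void *buf,
                              size_t len, off_t off) {
      // The iocb must outlive the I/O, hence the heap allocation.
      auto *io = new iocb;
      io_prep_pwrite(io, fd, buf, len, off);
      io_set_callback(io, on_write_done);  // stashes the fn pointer in io->data
      io_submit(ioctx, 1, &io);
    }

    // Some thread still has to reap: io_queue_run() is io_getevents()
    // plus dispatch of the stored callbacks (busy-polls in this sketch).
    void reap_loop(io_context_t ioctx) {
      for (;;)
        io_queue_run(ioctx);
    }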
> I don't think this will help--it just means you're using a layer of the 
> library that's calling getevents for you and calling your callback.
>
> Thanks!
> sage
>
>
>> Varada
>>
>> On Thursday 25 August 2016 01:25 PM, Haomai Wang wrote:
>>> That looks like very little improvement. The rocksdb result meets my
>>> expectation, because rocksdb takes an internal lock for multiple
>>> concurrent sync writes. But the memdb improvement is a little
>>> confusing.
>>>
>>> On Thu, Aug 25, 2016 at 3:45 PM, Tang, Haodong <haodong.tang@intel.com> wrote:
>>>> Hi Sage, Varada
>>>>
>>>> We noticed you are making parallel transaction submits; we also worked out a prototype that looks similar. Here is the link to the implementation: https://github.com/ceph/ceph/pull/10856
>>>>
>>>> Background:
>>>> From the perf counters we added, we found that a lot of time is spent in kv_queue; that is, single-threaded transaction submission cannot keep up with the transactions coming from the OSD.
>>>>
>>>> Implementation:
>>>> The key idea is to use multiple threads and assign each TransContext to one of the processing threads. To parallelize transaction submission, each thread gets its own kv_lock and kv_cond.
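
[A rough sketch of the sharded-submit idea as described above: each
TransContext is assigned to one of N kv threads, each with its own queue,
kv_lock, and kv_cond. The names are illustrative, not the PR's actual
code.]

    #include <condition_variable>
    #include <deque>
    #include <mutex>
    #include <vector>

    struct TransContext { size_t seq; };
    void submit_to_kv(TransContext* txc);   // per-txc rocksdb/memdb submit

    struct KVShard {
      std::mutex kv_lock;                   // the per-thread kv_lock
      std::condition_variable kv_cond;      // the per-thread kv_cond
      std::deque<TransContext*> queue;
    };

    std::vector<KVShard> shards(4);         // N parallel kv submit threads

    void queue_transaction(TransContext* txc) {
      KVShard& s = shards[txc->seq % shards.size()];
      std::lock_guard<std::mutex> l(s.kv_lock);
      s.queue.push_back(txc);
      s.kv_cond.notify_one();
    }

    void kv_shard_thread(KVShard& s) {
      std::unique_lock<std::mutex> l(s.kv_lock);
      for (;;) {
        s.kv_cond.wait(l, [&] { return !s.queue.empty(); });
        std::deque<TransContext*> batch;
        batch.swap(s.queue);
        l.unlock();
        for (auto* txc : batch)
          submit_to_kv(txc);  // with rocksdb this still serializes on its
                              // internal write lock, hence the flat numbers
        l.lock();
      }
    }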
>>>>
>>>> Performance evaluation:
>>>> Test ENV:
>>>>         4 x servers, 4 x clients, 16 x Intel S3700 as block devices, and 4 x Intel P3600 as RocksDB/WAL devices.
>>>> Performance:
>>>> We also ran several quick tests to verify the performance benefit. The results showed that parallel transaction submission brings about a 10% performance improvement with memdb, but little improvement with rocksdb.
>>>>
>>>> What's more, even without parallel transaction submits, we see a performance boost just from switching to MemDB, though a small one.
>>>>
>>>> Test summary:
>>>> QD Scaling Test - 4k Random Write:
>>>>                                        QD = 1   QD = 16   QD = 32   QD = 64   QD = 128
>>>> rocksdb (IOPS)                            682    173000    190000    203000     204000
>>>> memdb (IOPS)                              704    180000    194000    206000     218000
>>>> rocksdb + multiple kv threads (IOPS)        /    164243    167037    180961     201752
>>>> memdb + multiple kv threads (IOPS)          /    176000    200000    221000     227000
>>>>
>>>>
>>>> It seems a single transaction-submit thread becomes the bottleneck when using MemDB.


Thread overview: 6+ messages
2016-08-25  7:45 parallel transaction submit Tang, Haodong
2016-08-25  7:55 ` Haomai Wang
2016-08-25  8:47   ` Varada Kari
2016-08-25 14:11     ` Sage Weil
2016-08-25 14:26       ` Varada Kari [this message]
2016-08-25  8:48   ` Tang, Haodong
