linux-bcache.vger.kernel.org archive mirror
From: Coly Li <colyli@suse.de>
To: "Norman.Kern" <norman.kern@gmx.com>
Cc: linux-block@vger.kernel.org, axboe@kernel.dk,
	linux-bcache@vger.kernel.org
Subject: Re: Large latency with bcache for Ceph OSD
Date: Wed, 24 Feb 2021 16:52:01 +0800
Message-ID: <5867daf1-0960-39aa-1843-1a76c1e9a28d@suse.de>
In-Reply-To: <632258f7-b138-3fba-456b-9da37c1de710@gmx.com>

On 2/22/21 7:48 AM, Norman.Kern wrote:
> Ping.
> 
> I'm confused about SYNC I/O on bcache: why must SYNC I/O be written back
> to the backing device when the cache is persistent? It can cause some latency.
> 
> @Coly, can you help explain why bcache handles O_SYNC like this?
> 
> 

Hmm, normally we won't observe application I/Os being issued to the
backing device except in these cases:
- I/O bypassing the cache due to SSD congestion
- Sequential I/O requests
- Dirty buckets exceeding the cutoff threshold
- Writethrough mode

Did you set the read/write congestion thresholds to 0?
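
Not a definitive checklist, just a minimal sketch of how those knobs can
be inspected from sysfs (<cset-uuid> is a placeholder for your cache set
UUID; the defaults are typically 2000 us for reads and 20000 us for
writes, and writing 0 disables the congestion-based bypass):

  # current congestion thresholds of the cache set
  cat /sys/fs/bcache/<cset-uuid>/congested_read_threshold_us
  cat /sys/fs/bcache/<cset-uuid>/congested_write_threshold_us

  # disable the congestion-based bypass entirely
  echo 0 > /sys/fs/bcache/<cset-uuid>/congested_read_threshold_us
  echo 0 > /sys/fs/bcache/<cset-uuid>/congested_write_threshold_us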

Coly Li

> On 2021/2/18 3:56 PM, Norman.Kern wrote:
>> Hi guys,
>>
>> I am testing Ceph with bcache. I found some I/O with O_SYNC being
>> written back to the HDD, which caused large latency on the HDD. I traced
>> the I/O with iosnoop:
>>
>> ./iosnoop -Q -ts -d '8,192'
>>
>> Tracing block I/O for 1 seconds (buffered)...
>> STARTs          ENDs            COMM         PID    TYPE DEV    BLOCK        BYTES     LATms
>>
>> 1809296.292350  1809296.319052  tp_osd_tp    22191  R    8,192  4578940240   16384     26.70
>> 1809296.292330  1809296.320974  tp_osd_tp    22191  R    8,192  4577938704   16384     28.64
>> 1809296.292614  1809296.323292  tp_osd_tp    22191  R    8,192  4600404304   16384     30.68
>> 1809296.292353  1809296.325300  tp_osd_tp    22191  R    8,192  4578343088   16384     32.95
>> 1809296.292340  1809296.328013  tp_osd_tp    22191  R    8,192  4578055472   16384     35.67
>> 1809296.292606  1809296.330518  tp_osd_tp    22191  R    8,192  4578581648   16384     37.91
>> 1809295.169266  1809296.334041  bstore_kv_fi 17266  WS   8,192  4244996360   4096    1164.78
>> 1809296.292618  1809296.336349  tp_osd_tp    22191  R    8,192  4602631760   16384     43.73
>> 1809296.292618  1809296.338812  tp_osd_tp    22191  R    8,192  4602632976   16384     46.19
>> 1809296.030103  1809296.342780  tp_osd_tp    22180  WS   8,192  4741276048   131072   312.68
>> 1809296.292347  1809296.345045  tp_osd_tp    22191  R    8,192  4609037872   16384     52.70
>> 1809296.292620  1809296.345109  tp_osd_tp    22191  R    8,192  4609037904   16384     52.49
>> 1809296.292612  1809296.347251  tp_osd_tp    22191  R    8,192  4578937616   16384     54.64
>> 1809296.292621  1809296.351136  tp_osd_tp    22191  R    8,192  4612654992   16384     58.51
>> 1809296.292341  1809296.353428  tp_osd_tp    22191  R    8,192  4578220656   16384     61.09
>> 1809296.292342  1809296.353864  tp_osd_tp    22191  R    8,192  4578220880   16384     61.52
>> 1809295.167650  1809296.358510  bstore_kv_fi 17266  WS   8,192  4923695960   4096    1190.86
>> 1809296.292347  1809296.361885  tp_osd_tp    22191  R    8,192  4607437136   16384     69.54
>> 1809296.029363  1809296.367313  tp_osd_tp    22180  WS   8,192  4739824400   98304    337.95
>> 1809296.292349  1809296.370245  tp_osd_tp    22191  R    8,192  4591379888   16384     77.90
>> 1809296.292348  1809296.376273  tp_osd_tp    22191  R    8,192  4591289552   16384     83.92
>> 1809296.292353  1809296.378659  tp_osd_tp    22191  R    8,192  4578248656   16384     86.31
>> 1809296.292619  1809296.384835  tp_osd_tp    22191  R    8,192  4617494160   65536     92.22
>> 1809295.165451  1809296.393715  bstore_kv_fi 17266  WS   8,192  1355703120   4096    1228.26
>> 1809295.168595  1809296.401560  bstore_kv_fi 17266  WS   8,192  1122200      4096    1232.96
>> 1809295.165221  1809296.408018  bstore_kv_fi 17266  WS   8,192  960656       4096    1242.80
>> 1809295.166737  1809296.411505  bstore_kv_fi 17266  WS   8,192  57682504     4096    1244.77
>> 1809296.292352  1809296.418123  tp_osd_tp    22191  R    8,192  4579459056   32768    125.77
>>
>> I'm confused why writes with O_SYNC must be written back to the backing
>> storage device. And after I had used bcache for a while, the latency
>> increased a lot (the SSD is not very busy). Are there any best practices
>> for the configuration?
>>

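For the configuration question in the quoted mail, a rough sketch of the
sysfs statistics and settings worth checking first, to see how much I/O
is bypassing the SSD and going straight to the HDD (assuming the backing
device is exported as bcache0; adjust the name for your setup):

  # how much I/O bypassed the cache since the device was registered
  cat /sys/block/bcache0/bcache/stats_total/bypassed
  cat /sys/block/bcache0/bcache/stats_total/cache_bypass_hits
  cat /sys/block/bcache0/bcache/stats_total/cache_bypass_misses

  # cache mode and sequential cutoff (sequential streams larger than the
  # cutoff bypass the cache; 0 disables the sequential bypass)
  cat /sys/block/bcache0/bcache/cache_mode
  cat /sys/block/bcache0/bcache/sequential_cutoff

  # dirty data currently held on the cache device awaiting writeback
  cat /sys/block/bcache0/bcache/dirty_data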

Thread overview: 11+ messages
2021-02-18  7:56 Large latency with bcache for Ceph OSD Norman.Kern
2021-02-21 23:48 ` Norman.Kern
2021-02-24  8:52   ` Coly Li [this message]
2021-02-25  2:22     ` Norman.Kern
2021-02-25  2:23     ` Norman.Kern
2021-02-25 13:00       ` Norman.Kern
2021-02-25 14:44         ` Coly Li
2021-02-26  8:57           ` Norman.Kern
2021-02-26  9:54             ` Coly Li
2021-03-02  2:03               ` Norman.Kern
2021-03-02  5:30               ` Norman.Kern
