From: "Elliott, Robert (Server Storage)" <Elliott@hp.com>
To: "fio@vger.kernel.org" <fio@vger.kernel.org>
Cc: "dgilbert@interlog.com" <dgilbert@interlog.com>
Subject: lots of closes causing lots of invalidates while running
Date: Fri, 4 Jul 2014 00:20:19 +0000 [thread overview]
Message-ID: <94D0CD8314A33A4D9D801C0FE68B402958B83482@G9W0745.americas.hpqcorp.net> (raw)
While running fio against scsi_debug devices with the scsi-mq.2
tree, Doug Gilbert noticed that fio generates frequent ioctl
calls (e.g., 35 times per second on my system):
[ 1324.777541] sd 5:0:0:0: scsi_debug_ioctl: BLKFLSBUF [0x1261]
[ 1324.782543] sd 5:0:4:0: scsi_debug_ioctl: BLKFLSBUF [0x1261]
[ 1324.800988] sd 5:0:4:0: scsi_debug_ioctl: BLKFLSBUF [0x1261]
[ 1324.802529] sd 5:0:2:0: scsi_debug_ioctl: BLKFLSBUF [0x1261]
[ 1324.805116] sd 5:0:5:0: scsi_debug_ioctl: BLKFLSBUF [0x1261]
[ 1324.811526] sd 5:0:1:0: scsi_debug_ioctl: BLKFLSBUF [0x1261]
[ 1324.813527] sd 5:0:2:0: scsi_debug_ioctl: BLKFLSBUF [0x1261]
They come from fio's invalidate option.
Although the man page says:
invalidate=bool
Invalidate buffer-cache for the file prior
to starting I/O. Default: true.
the invalidations happen on many io_units, not just once
at startup. Setting invalidate=0 makes them go away. However,
the root cause is the repeated closes.
This is the call chain (fio-2.1.10-22-g5eba):
do_io
get_io_u /* Return an io_u to be processed. Gets a buflen and offset, sets direction */
set_io_u_file
get_next_file
__get_next_file
get_next_file_rand
td_io_open_file
file_invalidate_cache
__file_invalidate_cache
blockdev_invalidate_cache
return ioctl(f->fd, BLKFLSBUF);
which causes the Linux block layer to run fsync_bdev and
invalidate_bdev.
The device/file keeps getting closed by backend.c thread_main
in this loop:
while (keep_running(td)) {
...
if (clear_state)
clear_io_state(td);
...
if (...
else
verify_bytes = do_io(td);
clear_state = 1;
...
}
via this call chain:
clear_io_state
close_files
td_io_close_file
so it keeps having to reopen the file, and asks for a flush
each time.
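To make the effect concrete, here is a toy model of the loop above
(all names are illustrative, not fio's actual code): because
clear_io_state() closes the files, every pass through do_io() reopens
them and re-runs file_invalidate_cache(), so the invalidation count
scales with the number of loop iterations instead of staying at one:

```c
#include <stdbool.h>

/* Toy model of backend.c thread_main: count how many times the "file"
 * is opened -- and therefore how many BLKFLSBUF invalidations occur --
 * across loop iterations.  Names are illustrative, not fio's. */

static int open_count;    /* number of invalidations issued */
static bool file_open;

static void td_io_open_file_model(void)
{
	if (!file_open) {
		file_open = true;
		open_count++;    /* file_invalidate_cache() happens here */
	}
}

static void clear_io_state_model(bool close_files)
{
	if (close_files)
		file_open = false;  /* close_files() -> td_io_close_file() */
	/* reset offsets, byte counters, ... */
}

/* Run 'iters' iterations of the main loop; return invalidation count. */
static int run_main_loop(int iters, bool close_each_iteration)
{
	open_count = 0;
	file_open = false;
	bool clear_state = false;
	for (int i = 0; i < iters; i++) {
		if (clear_state)
			clear_io_state_model(close_each_iteration);
		td_io_open_file_model();  /* do_io() -> get_io_u() -> ... */
		clear_state = true;
	}
	return open_count;
}
```

With closing on every iteration the model invalidates once per pass;
leaving the files open across passes yields a single invalidation,
which is what the man page's "prior to starting I/O" wording suggests.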
Are those clear_io_state/close_files calls really intended?
fio script:
[global]
direct=1
ioengine=libaio
norandommap
randrepeat=0
bs=4096
iodepth=96
numjobs=6
runtime=216000
time_based=1
group_reporting
thread
gtod_reduce=1
iodepth_batch=16
iodepth_batch_complete=16
cpus_allowed=0-5
cpus_allowed_policy=split
rw=randread
[4_KiB_RR_drive_ah]
filename=/dev/sdah
---
Rob Elliott HP Server Storage
Thread overview: 4+ messages
2014-07-04 0:20 Elliott, Robert (Server Storage) [this message]
2014-07-04 21:19 ` lots of closes causing lots of invalidates while running Jens Axboe
2014-07-04 21:26 ` Elliott, Robert (Server Storage)
2014-07-04 21:31 ` Jens Axboe