* fio: first direct IO errored
@ 2010-10-30 12:53 Christian Zoffoli
  2010-10-31  1:53 ` Jens Axboe
  0 siblings, 1 reply; 7+ messages in thread
From: Christian Zoffoli @ 2010-10-30 12:53 UTC (permalink / raw)
  To: fio

Hi all,
I'm testing a storage array (14x300GB) with different RAID levels, volume
stripe sizes, and different block sizes and numjobs in fio, but I've
run into a problem testing it with the disks in a RAID6 configuration.

Here is the command line I've used:

-----
fio --filename=/dev/dm-0  --direct=1 --group_reporting  --rw=read
--bs=1k --numjobs=1 --runtime=60  --name="read-bs512-job1"
-----

And here is the error:

-----
read-bs512-job1: (g=0): rw=read, bs=1K-1K/1K-1K, ioengine=sync, iodepth=1
Starting 1 process
fio: first direct IO errored. File system may not support direct IO, or
iomem_align= is bad.
fio: pid=4898, err=22/file:engines/sync.c:62, func=xfer, error=Invalid
argument

read-bs512-job1: (groupid=0, jobs=1): err=22 (file:engines/sync.c:62,
func=xfer, error=Invalid argument): pid=4898
  cpu          : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=50
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=50.0%, 4=50.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w/d: total=1/0/0, short=0/0/0



Run status group 0 (all jobs):

Disk stats (read/write):
  dm-0: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=-nan%, aggrios=0/0, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
    sdb: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
    sdd: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
    sdc: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
    sde: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%

-----


Testing the same volume with RAID10 works as expected.

Is it related to the size of the volume?
In RAID6 it is 3.3TB, versus 2.1TB in RAID10.

I've also tried with --norandommap, but without success.

I also haven't understood what I have to put in iomem_align= to fix the
"alignment" problem.


Best regards,
Christian
-- 
Christian Zoffoli (XMerlin)

"You never change things by fighting the existing reality. To change
something, build a new model that makes the existing model obsolete."
-- R. Buckminster Fuller


* Re: fio: first direct IO errored
  2010-10-30 12:53 fio: first direct IO errored Christian Zoffoli
@ 2010-10-31  1:53 ` Jens Axboe
  2010-10-31 12:13   ` Christian Zoffoli
  0 siblings, 1 reply; 7+ messages in thread
From: Jens Axboe @ 2010-10-31  1:53 UTC (permalink / raw)
  To: Christian Zoffoli; +Cc: fio

On 2010-10-30 08:53, Christian Zoffoli wrote:
> Hi all,
> I'm testing a storage array (14x300GB) with different RAID levels, volume
> stripe sizes, and different block sizes and numjobs in fio, but I've
> run into a problem testing it with the disks in a RAID6 configuration.
> 
> Here is the command line I've used:
> 
> -----
> fio --filename=/dev/dm-0  --direct=1 --group_reporting  --rw=read
> --bs=1k --numjobs=1 --runtime=60  --name="read-bs512-job1"
> -----
> 
> And here is the error:
> 
> -----
> read-bs512-job1: (g=0): rw=read, bs=1K-1K/1K-1K, ioengine=sync, iodepth=1
> Starting 1 process
> fio: first direct IO errored. File system may not support direct IO, or
> iomem_align= is bad.
> fio: pid=4898, err=22/file:engines/sync.c:62, func=xfer, error=Invalid
> argument

So you get -1/EINVAL on the issued IO. For O_DIRECT, this will typically
happen for one (or more) of these reasons:

- The file system doesn't support O_DIRECT, as fio suggests. Since it
  works for your other test, we can safely discount that reason.

- The block size is not a multiple of the block size of the device. I'm
  guessing this is your problem. You are using 1K block sizes; if your
  dm device has a larger block size, this will not work.

It's not likely to be iomem_align; it's more likely to be due to bs=1k
just being too small for your setup. To be sure, please provide the
output of:

$ grep . /sys/block/dm-0/queue/*

for dm-0.
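
Alternatively, blockdev from util-linux can report the relevant sizes
directly; a minimal sketch, assuming a util-linux recent enough to have
the --getpbsz query:

$ blockdev --getss --getpbsz /dev/dm-0

The first value is the logical sector size and the second the physical
block size; direct IO needs to be sized and aligned to a multiple of
the logical sector size.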

-- 
Jens Axboe



* Re: fio: first direct IO errored
  2010-10-31  1:53 ` Jens Axboe
@ 2010-10-31 12:13   ` Christian Zoffoli
  2010-10-31 12:28     ` Jens Axboe
  0 siblings, 1 reply; 7+ messages in thread
From: Christian Zoffoli @ 2010-10-31 12:13 UTC (permalink / raw)
  To: fio; +Cc: Jens Axboe

On 31/10/2010 02:53, Jens Axboe wrote:
[cut]
> 
> So you get -1/EINVAL on the issued IO. For O_DIRECT, this will typically
> happen for one (or more) of these reasons:
> 
> - The file system doesn't support O_DIRECT, as fio suggests. Since it
>   works for your other test, we can safely discount that reason.
> 
> - The block size is not a multiple of the block size of the device. I'm
>   guessing this is your problem. You are using 1K block sizes; if your
>   dm device has a larger block size, this will not work.
> 
> It's not likely to be iomem_align; it's more likely to be due to bs=1k
> just being too small for your setup. To be sure, please provide the
> output of:
> 
> $ grep . /sys/block/dm-0/queue/*
> 
> for dm-0.
> 

Hi Jens

Here is the output:

/sys/block/dm-0/queue/discard_granularity:0
/sys/block/dm-0/queue/discard_max_bytes:0
/sys/block/dm-0/queue/discard_zeroes_data:0
/sys/block/dm-0/queue/hw_sector_size:4096
/sys/block/dm-0/queue/iostats:1
/sys/block/dm-0/queue/logical_block_size:4096
/sys/block/dm-0/queue/max_hw_sectors_kb:32767
/sys/block/dm-0/queue/max_sectors_kb:512
/sys/block/dm-0/queue/max_segment_size:65536
/sys/block/dm-0/queue/max_segments:128
/sys/block/dm-0/queue/minimum_io_size:4096
/sys/block/dm-0/queue/nomerges:0
/sys/block/dm-0/queue/nr_requests:128
/sys/block/dm-0/queue/optimal_io_size:0
/sys/block/dm-0/queue/physical_block_size:4096
/sys/block/dm-0/queue/read_ahead_kb:128
/sys/block/dm-0/queue/rotational:1
/sys/block/dm-0/queue/rq_affinity:1
/sys/block/dm-0/queue/scheduler:noop deadline [cfq]


Thank you very much.

Best regards,
Christian



* Re: fio: first direct IO errored
  2010-10-31 12:13   ` Christian Zoffoli
@ 2010-10-31 12:28     ` Jens Axboe
  2010-10-31 12:38       ` Christian Zoffoli
  0 siblings, 1 reply; 7+ messages in thread
From: Jens Axboe @ 2010-10-31 12:28 UTC (permalink / raw)
  To: Christian Zoffoli; +Cc: fio

On 2010-10-31 08:13, Christian Zoffoli wrote:
> On 31/10/2010 02:53, Jens Axboe wrote:
> [cut]
>>
>> So you get -1/EINVAL on the issued IO. For O_DIRECT, this will typically
>> happen for one (or more) of these reasons:
>>
>> - The file system doesn't support O_DIRECT, as fio suggests. Since it
>>   works for your other test, we can safely discount that reason.
>>
>> - The block size is not a multiple of the block size of the device. I'm
>>   guessing this is your problem. You are using 1K block sizes; if your
>>   dm device has a larger block size, this will not work.
>>
>> It's not likely to be iomem_align; it's more likely to be due to bs=1k
>> just being too small for your setup. To be sure, please provide the
>> output of:
>>
>> $ grep . /sys/block/dm-0/queue/*
>>
>> for dm-0.
>>
> 
> Hi Jens
> 
> Here is the output:
> 
> /sys/block/dm-0/queue/discard_granularity:0
> /sys/block/dm-0/queue/discard_max_bytes:0
> /sys/block/dm-0/queue/discard_zeroes_data:0
> /sys/block/dm-0/queue/hw_sector_size:4096
> /sys/block/dm-0/queue/iostats:1
> /sys/block/dm-0/queue/logical_block_size:4096
> /sys/block/dm-0/queue/max_hw_sectors_kb:32767
> /sys/block/dm-0/queue/max_sectors_kb:512
> /sys/block/dm-0/queue/max_segment_size:65536
> /sys/block/dm-0/queue/max_segments:128
> /sys/block/dm-0/queue/minimum_io_size:4096
> /sys/block/dm-0/queue/nomerges:0
> /sys/block/dm-0/queue/nr_requests:128
> /sys/block/dm-0/queue/optimal_io_size:0
> /sys/block/dm-0/queue/physical_block_size:4096
> /sys/block/dm-0/queue/read_ahead_kb:128
> /sys/block/dm-0/queue/rotational:1
> /sys/block/dm-0/queue/rq_affinity:1
> /sys/block/dm-0/queue/scheduler:noop deadline [cfq]

The exposed hw block size is 4096 bytes, so that is also the minimum
size at which you can issue unbuffered IO. Your test will work if you
change the bs=1k to bs=4k.
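
For example, your original command line with only the block size (and,
for clarity, the job name) changed should then run cleanly:

-----
fio --filename=/dev/dm-0 --direct=1 --group_reporting --rw=read \
    --bs=4k --numjobs=1 --runtime=60 --name="read-bs4k-job1"
-----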

-- 
Jens Axboe



* Re: fio: first direct IO errored
  2010-10-31 12:28     ` Jens Axboe
@ 2010-10-31 12:38       ` Christian Zoffoli
  0 siblings, 0 replies; 7+ messages in thread
From: Christian Zoffoli @ 2010-10-31 12:38 UTC (permalink / raw)
  To: fio; +Cc: Jens Axboe

On 31/10/2010 13:28, Jens Axboe wrote:
[cut]
> The exposed hw block size is 4096 bytes, so that is also the minimum
> size at which you can issue unbuffered IO. Your test will work if you
> change the bs=1k to bs=4k.
> 

Yes, the test works fine with bs=4k, but I also need to test bs=1k and
bs=512 to compare them with the other tests.

I'll try to recreate the RAID with different parameters.

Thank you very much for your help.

Best regards,
Christian



* Re: fio: first direct io errored
  2012-04-19 13:56 fio: first direct io errored bin lin
@ 2012-04-19 18:49 ` Jens Axboe
  0 siblings, 0 replies; 7+ messages in thread
From: Jens Axboe @ 2012-04-19 18:49 UTC (permalink / raw)
  To: bin lin; +Cc: fio

On 2012-04-19 15:56, bin lin wrote:
> Hi, I am new to fio. I have written a read_iolog and used fio to
> replay the trace on the device /dev/sdb1, but it errors out when I
> add --direct=1. The error is as follows:
> 
> replay: (g=0): rw=read, bs=4K-4K/4K-4K, ioengine=sync, iodepth=1
> fio 2.0.6
> Starting 1 process
> fio: first direct IO errored. File system may not support direct IO,
> or iomem_align= is bad.
> fio: pid=6363, err=22/file:engines/sync.c:62, func=xfer, error=Invalid argument
> 
> the trace file is as follows:
> fio version 2 iolog
> /dev/sdb1 add
> /dev/sdb1 open
> /dev/sdb1 read 27267456 8
> /dev/sdb1 read 27267560 8
> /dev/sdb1 read 34933120 8
> /dev/sdb1 read 34933128 8
> /dev/sdb1 read 34933136 8
> /dev/sdb1 read 34933144 8
> /dev/sdb1 read 34604888 8
> /dev/sdb1 close
> 
> The filesystem on /dev/sdb1 is ext3, which supports O_DIRECT, and
> the trace file uses a block size of 512 bytes. I do not know where
> the error is or how to solve it. Thanks.

What job generated that iolog? The reason it fails is that the transfer
size is 8 _bytes_, not sectors. So the above won't work on any device.
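
If those lengths (and offsets) came from a sector-based trace such as
blktrace, a quick conversion to the byte units that fio's version 2
iolog expects could look like this; a sketch, assuming 512-byte sectors
and the hypothetical file names trace.in/trace.out:

$ awk '$2 == "read" || $2 == "write" { print $1, $2, $3*512, $4*512; next }
       { print }' trace.in > trace.out

That would turn e.g. "/dev/sdb1 read 27267456 8" into a 4096-byte read
at the corresponding byte offset.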

-- 
Jens Axboe



* fio: first direct io errored
@ 2012-04-19 13:56 bin lin
  2012-04-19 18:49 ` Jens Axboe
  0 siblings, 1 reply; 7+ messages in thread
From: bin lin @ 2012-04-19 13:56 UTC (permalink / raw)
  To: fio

Hi, I am new to fio. I have written a read_iolog and used fio to replay
the trace on the device /dev/sdb1, but it errors out when I add
--direct=1. The error is as follows:

replay: (g=0): rw=read, bs=4K-4K/4K-4K, ioengine=sync, iodepth=1
fio 2.0.6
Starting 1 process
fio: first direct IO errored. File system may not support direct IO,
or iomem_align= is bad.
fio: pid=6363, err=22/file:engines/sync.c:62, func=xfer, error=Invalid argument

the trace file is as follows:
fio version 2 iolog
/dev/sdb1 add
/dev/sdb1 open
/dev/sdb1 read 27267456 8
/dev/sdb1 read 27267560 8
/dev/sdb1 read 34933120 8
/dev/sdb1 read 34933128 8
/dev/sdb1 read 34933136 8
/dev/sdb1 read 34933144 8
/dev/sdb1 read 34604888 8
/dev/sdb1 close

The filesystem on /dev/sdb1 is ext3, which supports O_DIRECT, and the
trace file uses a block size of 512 bytes. I do not know where the
error is or how to solve it. Thanks.

Best regards, linbin.

