* Helping to model this workload
       [not found] <CE142CC7.127F5%Antonio.Jose.Rodrigues.Neto@netapp.com>
@ 2013-07-23 16:52 ` Neto, Antonio Jose Rodrigues
  2013-07-25 16:45   ` Jens Axboe
  0 siblings, 1 reply; 19+ messages in thread
From: Neto, Antonio Jose Rodrigues @ 2013-07-23 16:52 UTC (permalink / raw)
  To: fio

Hi All

This is neto from Brazil

How are you?

I need to model the following workload:

Sequential Read % 35.0
Sequential Write % 5.0
Random Read % 50.0
Random Write % 10.0
Random Read Working Set(GB) 1000.0
Random Write Working Set(GB) 1000.0
Sequential Read Size(KB) 64KB
Sequential Write Size(KB) 64KB
Random Read Size(KB) 8KB
Random Write Size(KB) 8KB

For the random workload, I thought to use 16 jobs with 2 outstanding I/Os.
For the sequential workload, I cannot use more than one job (because this will
become random) and I need to use 64 outstanding I/Os.

My biggest question is: how do I distribute the percentages in the job file?

Does anyone have any example to share?

Thank you very much

All the best

neto



* Re: Helping to model this workload
  2013-07-23 16:52 ` Helping to model this workload Neto, Antonio Jose Rodrigues
@ 2013-07-25 16:45   ` Jens Axboe
  2013-07-25 16:52     ` Neto, Antonio Jose Rodrigues
  0 siblings, 1 reply; 19+ messages in thread
From: Jens Axboe @ 2013-07-25 16:45 UTC (permalink / raw)
  To: Neto, Antonio Jose Rodrigues; +Cc: fio

On Tue, Jul 23 2013, Neto, Antonio Jose Rodrigues wrote:
> Hi All
> 
> This is neto from Brazil
> 
> How are you?
> 
> I need to model the following workload:
> 
> Sequential Read % 35.0
> Sequential Write % 5.0
> Random Read % 50.0
> Random Write % 10.0
> Random Read Working Set(GB) 1000.0
> Random Write Working Set(GB) 1000.0
> Sequential Read Size(KB) 64KB
> Sequential Write Size(KB) 64KB
> Random Read Size(KB) 8KB
> Random Write Size(KB) 8KB

So that's 85% reads and 15% writes, first part:

rw=randrw
rwmixread=85

and then, of the reads, 50/85 are random and 35/85 sequential, which is
roughly 59% random. Of the writes, 10/15 are random and 5/15 sequential,
roughly 67% random:

percentage_random=59,67

I just added support for the latter; before, it only supported a single
setting covering both reads and writes.

Fio does not support splitting block sizes on a random/sequential basis.
You will have to improvise there.
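
For reference, 50/85 is about 0.59 and 10/15 is about 0.67, which is where the
59,67 comes from. A minimal, untested sketch of those two options together,
assuming the improvisation is to settle on a single 8k block size (the job
name here is arbitrary):

[mixed]
rw=randrw
rwmixread=85
percentage_random=59,67
bs=8k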

-- 
Jens Axboe



* Re: Helping to model this workload
  2013-07-25 16:45   ` Jens Axboe
@ 2013-07-25 16:52     ` Neto, Antonio Jose Rodrigues
  2013-07-25 18:27       ` Jens Axboe
  0 siblings, 1 reply; 19+ messages in thread
From: Neto, Antonio Jose Rodrigues @ 2013-07-25 16:52 UTC (permalink / raw)
  To: Jens Axboe; +Cc: fio


On 7/25/13 12:45 PM, "Jens Axboe" <axboe@kernel.dk> wrote:

>On Tue, Jul 23 2013, Neto, Antonio Jose Rodrigues wrote:
>> Hi All
>> 
>> This is neto from Brazil
>> 
>> How are you?
>> 
>> I need to model the following workload:
>> 
>> Sequential Read % 35.0
>> Sequential Write % 5.0
>> Random Read % 50.0
>> Random Write % 10.0
>> Random Read Working Set(GB) 1000.0
>> Random Write Working Set(GB) 1000.0
>> Sequential Read Size(KB) 64KB
>> Sequential Write Size(KB) 64KB
>> Random Read Size(KB) 8KB
>> Random Write Size(KB) 8KB
>
>So that's 85% reads and 15% writes, first part:
>
>rw=randrw
>rwmixread=85
>
>and then you have 50/85 reads random and 35/85 reads sequential. That's
>roughly 59% reads random. On the writes, you have 10/15 random and 5/15
>sequential. That's roughly 67% writes random:
>
>percentage_random=59,67
>
>The latter I just added support for, before it only support a single
>setting for reads and writes.
>
>Fio does not support splitting block sizes on a random/sequential basis.
>You will have to improvise there.
>
>-- 
>Jens Axboe
>

Hi Jens

This is neto from Brazil

How are you?

Thank you very much.

So basically the option right now is to use either 8KB or 64KB, right?

Something like:

[workload]
bs=8k
ioengine=libaio
iodepth=2
numjobs=64
direct=1
runtime=2400
size=2000g
filename=\\.\PhysicalDrive9
filename=\\.\PhysicalDrive10
rw=randrw
rwmixread=85
percentage_random=59,67

thread
unified_rw_reporting=1
group_reporting=1


Would be nice to have something like:

block_mixed=8192,65536 (random, sequential)


Thank you

neto






* Re: Helping to model this workload
  2013-07-25 16:52     ` Neto, Antonio Jose Rodrigues
@ 2013-07-25 18:27       ` Jens Axboe
  2013-07-25 18:31         ` Neto, Antonio Jose Rodrigues
  0 siblings, 1 reply; 19+ messages in thread
From: Jens Axboe @ 2013-07-25 18:27 UTC (permalink / raw)
  To: Neto, Antonio Jose Rodrigues; +Cc: fio

On Thu, Jul 25 2013, Neto, Antonio Jose Rodrigues wrote:
> 
> On 7/25/13 12:45 PM, "Jens Axboe" <axboe@kernel.dk> wrote:
> 
> >On Tue, Jul 23 2013, Neto, Antonio Jose Rodrigues wrote:
> >> Hi All
> >> 
> >> This is neto from Brazil
> >> 
> >> How are you?
> >> 
> >> I need to model the following workload:
> >> 
> >> Sequential Read % 35.0
> >> Sequential Write % 5.0
> >> Random Read % 50.0
> >> Random Write % 10.0
> >> Random Read Working Set(GB) 1000.0
> >> Random Write Working Set(GB) 1000.0
> >> Sequential Read Size(KB) 64KB
> >> Sequential Write Size(KB) 64KB
> >> Random Read Size(KB) 8KB
> >> Random Write Size(KB) 8KB
> >
> >So that's 85% reads and 15% writes, first part:
> >
> >rw=randrw
> >rwmixread=85
> >
> >and then you have 50/85 reads random and 35/85 reads sequential. That's
> >roughly 59% reads random. On the writes, you have 10/15 random and 5/15
> >sequential. That's roughly 67% writes random:
> >
> >percentage_random=59,67
> >
> >The latter I just added support for, before it only support a single
> >setting for reads and writes.
> >
> >Fio does not support splitting block sizes on a random/sequential basis.
> >You will have to improvise there.
> >
> >-- 
> >Jens Axboe
> >
> 
> Hi Jens
> 
> This is neto from Brazil
> 
> How are you?
> 
> Thank you very much.
> 
> So basically the option right now is to use either 8KB or 64KB right?
> 
> Something like:
> 
> [workload]
> bs=8k
> ioengine=libaio
> iodepth=2
> numjobs=64
> direct=1
> runtime=2400
> size=2000g
> filename=\\.\PhysicalDrive9
> filename=\\.\PhysicalDrive10
> rw=randrw
> rwmixread=85
> percentage_random=59,67
> 
> thread
> unified_rw_reporting=1
> group_reporting=1
> 
> 
> Would be nice to have something like:
> 
> block_mixed=8192,65536 (random, sequential)

That would certainly be easy enough to do. But since it would override
any other blocksize setting, it might be cleaner to just have a boolean
saying whether to interpret these fields as "read,write" or
"sequential,random" instead.

BTW, if you use filename= twice like you do above, only the last one
will be effective.

-- 
Jens Axboe



* Re: Helping to model this workload
  2013-07-25 18:27       ` Jens Axboe
@ 2013-07-25 18:31         ` Neto, Antonio Jose Rodrigues
  2013-07-25 18:42           ` Jens Axboe
  0 siblings, 1 reply; 19+ messages in thread
From: Neto, Antonio Jose Rodrigues @ 2013-07-25 18:31 UTC (permalink / raw)
  To: Jens Axboe; +Cc: fio

On 7/25/13 2:27 PM, "Jens Axboe" <axboe@kernel.dk> wrote:

>On Thu, Jul 25 2013, Neto, Antonio Jose Rodrigues wrote:
>> 
>> On 7/25/13 12:45 PM, "Jens Axboe" <axboe@kernel.dk> wrote:
>> 
>> >On Tue, Jul 23 2013, Neto, Antonio Jose Rodrigues wrote:
>> >> Hi All
>> >> 
>> >> This is neto from Brazil
>> >> 
>> >> How are you?
>> >> 
>> >> I need to model the following workload:
>> >> 
>> >> Sequential Read % 35.0
>> >> Sequential Write % 5.0
>> >> Random Read % 50.0
>> >> Random Write % 10.0
>> >> Random Read Working Set(GB) 1000.0
>> >> Random Write Working Set(GB) 1000.0
>> >> Sequential Read Size(KB) 64KB
>> >> Sequential Write Size(KB) 64KB
>> >> Random Read Size(KB) 8KB
>> >> Random Write Size(KB) 8KB
>> >
>> >So that's 85% reads and 15% writes, first part:
>> >
>> >rw=randrw
>> >rwmixread=85
>> >
>> >and then you have 50/85 reads random and 35/85 reads sequential. That's
>> >roughly 59% reads random. On the writes, you have 10/15 random and 5/15
>> >sequential. That's roughly 67% writes random:
>> >
>> >percentage_random=59,67
>> >
>> >The latter I just added support for, before it only support a single
>> >setting for reads and writes.
>> >
>> >Fio does not support splitting block sizes on a random/sequential
>>basis.
>> >You will have to improvise there.
>> >
>> >-- 
>> >Jens Axboe
>> >
>> 
>> Hi Jens
>> 
>> This is neto from Brazil
>> 
>> How are you?
>> 
>> Thank you very much.
>> 
>> So basically the option right now is to use either 8KB or 64KB right?
>> 
>> Something like:
>> 
>> [workload]
>> bs=8k
>> ioengine=libaio
>> iodepth=2
>> numjobs=64
>> direct=1
>> runtime=2400
>> size=2000g
>> filename=\\.\PhysicalDrive9
>> filename=\\.\PhysicalDrive10
>> rw=randrw
>> rwmixread=85
>> percentage_random=59,67
>> 
>> thread
>> unified_rw_reporting=1
>> group_reporting=1
>> 
>> 
>> Would be nice to have something like:
>> 
>> block_mixed=8192,65536 (random, sequential)
>
>That would certainly easily be feasible. But since that would override
>any other blocksize setting, might be cleaner to just have a boolean
>saying whether to interpret these fields as "read,write" or
>"sequential,random" instead.
>
>BTW, if you use filename= twice like you do above, only the last one
>will be effective.
>
>-- 
>Jens Axboe
>


And what if I do this: file_service_type=random?




* Re: Helping to model this workload
  2013-07-25 18:31         ` Neto, Antonio Jose Rodrigues
@ 2013-07-25 18:42           ` Jens Axboe
  2013-07-25 18:59             ` Neto, Antonio Jose Rodrigues
  0 siblings, 1 reply; 19+ messages in thread
From: Jens Axboe @ 2013-07-25 18:42 UTC (permalink / raw)
  To: Neto, Antonio Jose Rodrigues; +Cc: fio

On Thu, Jul 25 2013, Neto, Antonio Jose Rodrigues wrote:
> >BTW, if you use filename= twice like you do above, only the last one
> >will be effective.
> 
> And if I do this? file_service_type=random

Doesn't matter. You have to give all files in one filename= statement;
they are not additive.

The below should add bs_is_seq_rand support. If you set that to 1, then:

bs=4k,64k

will not mean 4k reads and 64k writes; it will mean 4k sequential and 64k
random instead. Totally untested...


diff --git a/cconv.c b/cconv.c
index 9de4e25..8e7c69e 100644
--- a/cconv.c
+++ b/cconv.c
@@ -123,6 +123,7 @@ void convert_thread_options_to_cpu(struct thread_options *o,
 	o->softrandommap = le32_to_cpu(top->softrandommap);
 	o->bs_unaligned = le32_to_cpu(top->bs_unaligned);
 	o->fsync_on_close = le32_to_cpu(top->fsync_on_close);
+	o->bs_is_seq_rand = le32_to_cpu(top->bs_is_seq_rand);
 	o->random_distribution = le32_to_cpu(top->random_distribution);
 	o->zipf_theta.u.f = fio_uint64_to_double(le64_to_cpu(top->zipf_theta.u.i));
 	o->pareto_h.u.f = fio_uint64_to_double(le64_to_cpu(top->pareto_h.u.i));
@@ -281,6 +282,7 @@ void convert_thread_options_to_net(struct thread_options_pack *top,
 	top->softrandommap = cpu_to_le32(o->softrandommap);
 	top->bs_unaligned = cpu_to_le32(o->bs_unaligned);
 	top->fsync_on_close = cpu_to_le32(o->fsync_on_close);
+	top->bs_is_seq_rand = cpu_to_le32(o->bs_is_seq_rand);
 	top->random_distribution = cpu_to_le32(o->random_distribution);
 	top->zipf_theta.u.i = __cpu_to_le64(fio_double_to_uint64(o->zipf_theta.u.f));
 	top->pareto_h.u.i = __cpu_to_le64(fio_double_to_uint64(o->pareto_h.u.f));
diff --git a/io_u.c b/io_u.c
index 8401719..6537c90 100644
--- a/io_u.c
+++ b/io_u.c
@@ -293,7 +293,8 @@ static int get_next_seq_offset(struct thread_data *td, struct fio_file *f,
 }
 
 static int get_next_block(struct thread_data *td, struct io_u *io_u,
-			  enum fio_ddir ddir, int rw_seq)
+			  enum fio_ddir ddir, int rw_seq,
+			  unsigned int *is_random)
 {
 	struct fio_file *f = io_u->file;
 	uint64_t b, offset;
@@ -305,23 +306,30 @@ static int get_next_block(struct thread_data *td, struct io_u *io_u,
 
 	if (rw_seq) {
 		if (td_random(td)) {
-			if (should_do_random(td, ddir))
+			if (should_do_random(td, ddir)) {
 				ret = get_next_rand_block(td, f, ddir, &b);
-			else {
+				*is_random = 1;
+			} else {
+				*is_random = 0;
 				io_u->flags |= IO_U_F_BUSY_OK;
 				ret = get_next_seq_offset(td, f, ddir, &offset);
 				if (ret)
 					ret = get_next_rand_block(td, f, ddir, &b);
 			}
-		} else
+		} else {
+			*is_random = 0;
 			ret = get_next_seq_offset(td, f, ddir, &offset);
+		}
 	} else {
 		io_u->flags |= IO_U_F_BUSY_OK;
+		*is_random = 0;
 
 		if (td->o.rw_seq == RW_SEQ_SEQ) {
 			ret = get_next_seq_offset(td, f, ddir, &offset);
-			if (ret)
+			if (ret) {
 				ret = get_next_rand_block(td, f, ddir, &b);
+				*is_random = 0;
+			}
 		} else if (td->o.rw_seq == RW_SEQ_IDENT) {
 			if (f->last_start != -1ULL)
 				offset = f->last_start - f->file_offset;
@@ -353,7 +361,8 @@ static int get_next_block(struct thread_data *td, struct io_u *io_u,
  * until we find a free one. For sequential io, just return the end of
  * the last io issued.
  */
-static int __get_next_offset(struct thread_data *td, struct io_u *io_u)
+static int __get_next_offset(struct thread_data *td, struct io_u *io_u,
+			     unsigned int *is_random)
 {
 	struct fio_file *f = io_u->file;
 	enum fio_ddir ddir = io_u->ddir;
@@ -366,7 +375,7 @@ static int __get_next_offset(struct thread_data *td, struct io_u *io_u)
 		td->ddir_seq_nr = td->o.ddir_seq_nr;
 	}
 
-	if (get_next_block(td, io_u, ddir, rw_seq_hit))
+	if (get_next_block(td, io_u, ddir, rw_seq_hit, is_random))
 		return 1;
 
 	if (io_u->offset >= f->io_size) {
@@ -387,16 +396,17 @@ static int __get_next_offset(struct thread_data *td, struct io_u *io_u)
 	return 0;
 }
 
-static int get_next_offset(struct thread_data *td, struct io_u *io_u)
+static int get_next_offset(struct thread_data *td, struct io_u *io_u,
+			   unsigned int *is_random)
 {
 	if (td->flags & TD_F_PROFILE_OPS) {
 		struct prof_io_ops *ops = &td->prof_io_ops;
 
 		if (ops->fill_io_u_off)
-			return ops->fill_io_u_off(td, io_u);
+			return ops->fill_io_u_off(td, io_u, is_random);
 	}
 
-	return __get_next_offset(td, io_u);
+	return __get_next_offset(td, io_u, is_random);
 }
 
 static inline int io_u_fits(struct thread_data *td, struct io_u *io_u,
@@ -407,14 +417,20 @@ static inline int io_u_fits(struct thread_data *td, struct io_u *io_u,
 	return io_u->offset + buflen <= f->io_size + get_start_offset(td);
 }
 
-static unsigned int __get_next_buflen(struct thread_data *td, struct io_u *io_u)
+static unsigned int __get_next_buflen(struct thread_data *td, struct io_u *io_u,
+				      unsigned int is_random)
 {
-	const int ddir = io_u->ddir;
+	int ddir = io_u->ddir;
 	unsigned int buflen = 0;
 	unsigned int minbs, maxbs;
 	unsigned long r, rand_max;
 
-	assert(ddir_rw(ddir));
+	assert(ddir_rw(io_u->ddir));
+
+	if (td->o.bs_is_seq_rand)
+		ddir = is_random ? DDIR_WRITE: DDIR_READ;
+	else
+		ddir = io_u->ddir;
 
 	minbs = td->o.min_bs[ddir];
 	maxbs = td->o.max_bs[ddir];
@@ -471,16 +487,17 @@ static unsigned int __get_next_buflen(struct thread_data *td, struct io_u *io_u)
 	return buflen;
 }
 
-static unsigned int get_next_buflen(struct thread_data *td, struct io_u *io_u)
+static unsigned int get_next_buflen(struct thread_data *td, struct io_u *io_u,
+				    unsigned int is_random)
 {
 	if (td->flags & TD_F_PROFILE_OPS) {
 		struct prof_io_ops *ops = &td->prof_io_ops;
 
 		if (ops->fill_io_u_size)
-			return ops->fill_io_u_size(td, io_u);
+			return ops->fill_io_u_size(td, io_u, is_random);
 	}
 
-	return __get_next_buflen(td, io_u);
+	return __get_next_buflen(td, io_u, is_random);
 }
 
 static void set_rwmix_bytes(struct thread_data *td)
@@ -715,6 +732,8 @@ void requeue_io_u(struct thread_data *td, struct io_u **io_u)
 
 static int fill_io_u(struct thread_data *td, struct io_u *io_u)
 {
+	unsigned int is_random;
+
 	if (td->io_ops->flags & FIO_NOIO)
 		goto out;
 
@@ -740,12 +759,12 @@ static int fill_io_u(struct thread_data *td, struct io_u *io_u)
 	 * No log, let the seq/rand engine retrieve the next buflen and
 	 * position.
 	 */
-	if (get_next_offset(td, io_u)) {
+	if (get_next_offset(td, io_u, &is_random)) {
 		dprint(FD_IO, "io_u %p, failed getting offset\n", io_u);
 		return 1;
 	}
 
-	io_u->buflen = get_next_buflen(td, io_u);
+	io_u->buflen = get_next_buflen(td, io_u, is_random);
 	if (!io_u->buflen) {
 		dprint(FD_IO, "io_u %p, failed getting buflen\n", io_u);
 		return 1;
diff --git a/options.c b/options.c
index 3da376e..1816d0b 100644
--- a/options.c
+++ b/options.c
@@ -1558,6 +1558,17 @@ struct fio_option fio_options[FIO_MAX_OPTS] = {
 		.group	= FIO_OPT_G_INVALID,
 	},
 	{
+		.name	= "bs_is_seq_rand",
+		.lname	= "Block size division is seq/random (not read/write)",
+		.type	= FIO_OPT_BOOL,
+		.off1	= td_var_offset(bs_is_seq_rand),
+		.help	= "Consider any blocksize setting to be sequential,ramdom",
+		.def	= "0",
+		.parent = "blocksize",
+		.category = FIO_OPT_C_IO,
+		.group	= FIO_OPT_G_INVALID,
+	},
+	{
 		.name	= "randrepeat",
 		.lname	= "Random repeatable",
 		.type	= FIO_OPT_BOOL,
diff --git a/profile.h b/profile.h
index 3c8d61f..de35e9b 100644
--- a/profile.h
+++ b/profile.h
@@ -10,8 +10,8 @@ struct prof_io_ops {
 	int (*td_init)(struct thread_data *);
 	void (*td_exit)(struct thread_data *);
 
-	int (*fill_io_u_off)(struct thread_data *, struct io_u *);
-	int (*fill_io_u_size)(struct thread_data *, struct io_u *);
+	int (*fill_io_u_off)(struct thread_data *, struct io_u *, unsigned int *);
+	int (*fill_io_u_size)(struct thread_data *, struct io_u *, unsigned int);
 	struct fio_file *(*get_next_file)(struct thread_data *);
 
 	int (*io_u_lat)(struct thread_data *, uint64_t);
diff --git a/thread_options.h b/thread_options.h
index 32677e2..eaafaee 100644
--- a/thread_options.h
+++ b/thread_options.h
@@ -105,6 +105,7 @@ struct thread_options {
 	unsigned int softrandommap;
 	unsigned int bs_unaligned;
 	unsigned int fsync_on_close;
+	unsigned int bs_is_seq_rand;
 
 	unsigned int random_distribution;
 
@@ -317,6 +318,7 @@ struct thread_options_pack {
 	uint32_t softrandommap;
 	uint32_t bs_unaligned;
 	uint32_t fsync_on_close;
+	uint32_t bs_is_seq_rand;
 
 	uint32_t random_distribution;
 	fio_fp64_t zipf_theta;

-- 
Jens Axboe
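
As a quick way to exercise the patch once it applies, a job along these lines
might do. This is an untested sketch; the scratch file path and run length are
placeholders, and bs=64k,8k together with bs_is_seq_rand=1 would mean 64k for
sequential I/O and 8k for random I/O per the semantics described above:

[seqrand-check]
filename=/tmp/fio-seqrand.test
size=1g
ioengine=libaio
iodepth=2
rw=randrw
rwmixread=85
percentage_random=59,67
bs=64k,8k
bs_is_seq_rand=1
runtime=60
time_based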



* Re: Helping to model this workload
  2013-07-25 18:42           ` Jens Axboe
@ 2013-07-25 18:59             ` Neto, Antonio Jose Rodrigues
  2013-07-25 19:02               ` Jens Axboe
  2013-07-25 19:02               ` Neto, Antonio Jose Rodrigues
  0 siblings, 2 replies; 19+ messages in thread
From: Neto, Antonio Jose Rodrigues @ 2013-07-25 18:59 UTC (permalink / raw)
  To: Jens Axboe; +Cc: fio



On 7/25/13 2:42 PM, "Jens Axboe" <axboe@kernel.dk> wrote:

>On Thu, Jul 25 2013, Neto, Antonio Jose Rodrigues wrote:
>> >BTW, if you use filename= twice like you do above, only the last one
>> >will be effective.
>> 
>> And if I do this? file_service_type=random
>
>Doesn't matter. You have to give all files in one filename= statement,
>there are not additive.
>
>The below should add bs_is_seq_rand support. If you set that to 1, then:
>
>bs=4k,64k
>
>will not be reads 4k and writes 64k, it will be sequential 4k and random
>64k instead. Totally untested...
>
>
>[... quoted patch snipped ...]


Thank you Jens, I appreciate. I will test it.

Question: how can I access multiple devices, for example /dev/sda and
/dev/sdb?

Something like filename=/dev/sda,/dev/sdb?




* Re: Helping to model this workload
  2013-07-25 18:59             ` Neto, Antonio Jose Rodrigues
@ 2013-07-25 19:02               ` Jens Axboe
  2013-07-25 19:02               ` Neto, Antonio Jose Rodrigues
  1 sibling, 0 replies; 19+ messages in thread
From: Jens Axboe @ 2013-07-25 19:02 UTC (permalink / raw)
  To: Neto, Antonio Jose Rodrigues; +Cc: fio

On 07/25/2013 12:59 PM, Neto, Antonio Jose Rodrigues wrote:
> Thank you Jens, I appreciate. I will test it.

It is now committed, fwiw.

> Question: How can I access multiple devices for example /dev/sda and
> /dev/sdb?
> 
> Something like = filename=/dev/sda,/dev/sdb?

It's in the documentation; you do:

filename=/dev/sda:/dev/sdb
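
The same colon-separated form also covers the Windows physical-drive paths
used earlier in this thread, for example (untested):

filename=\\.\PhysicalDrive9:\\.\PhysicalDrive10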

-- 
Jens Axboe



* Re: Helping to model this workload
  2013-07-25 18:59             ` Neto, Antonio Jose Rodrigues
  2013-07-25 19:02               ` Jens Axboe
@ 2013-07-25 19:02               ` Neto, Antonio Jose Rodrigues
  2013-07-25 19:03                 ` Jens Axboe
  1 sibling, 1 reply; 19+ messages in thread
From: Neto, Antonio Jose Rodrigues @ 2013-07-25 19:02 UTC (permalink / raw)
  To: Jens Axboe; +Cc: fio



On 7/25/13 2:59 PM, "Neto, Antonio Jose Rodrigues"
<Antonio.Jose.Rodrigues.Neto@netapp.com> wrote:

>
>
>On 7/25/13 2:42 PM, "Jens Axboe" <axboe@kernel.dk> wrote:
>
>>On Thu, Jul 25 2013, Neto, Antonio Jose Rodrigues wrote:
>>> >BTW, if you use filename= twice like you do above, only the last one
>>> >will be effective.
>>> 
>>> And if I do this? file_service_type=random
>>
>>Doesn't matter. You have to give all files in one filename= statement,
>>there are not additive.
>>
>>The below should add bs_is_seq_rand support. If you set that to 1, then:
>>
>>bs=4k,64k
>>
>>will not be reads 4k and writes 64k, it will be sequential 4k and random
>>64k instead. Totally untested...
>>
>>
>>[... quoted patch snipped ...]
>
>
>Thank you Jens, I appreciate. I will test it.
>
>Question: How can I access multiple devices for example /dev/sda and
>/dev/sdb?
>
>Something like = filename=/dev/sda,/dev/sdb?

Just to confirm:

bs_is_seq_rand=1

bs=64k,4k 

It's 64K for sequential and 4K for random

Right?




* Re: Helping to model this workload
  2013-07-25 19:02               ` Neto, Antonio Jose Rodrigues
@ 2013-07-25 19:03                 ` Jens Axboe
  2013-07-25 19:07                   ` Neto, Antonio Jose Rodrigues
  0 siblings, 1 reply; 19+ messages in thread
From: Jens Axboe @ 2013-07-25 19:03 UTC (permalink / raw)
  To: Neto, Antonio Jose Rodrigues; +Cc: fio

On 07/25/2013 01:02 PM, Neto, Antonio Jose Rodrigues wrote:
> Just to confirm..
> 
> bs_is_seq_rand=1
> 
> bs=64k,4k 
> 
> It's 64K for sequential and 4K for random
> 
> Right?

Correct. If bs_is_seq_rand is set, any READ block size setting is
applied to sequential IO, and any WRITE block size setting is applied to
random IO.

-- 
Jens Axboe



* Re: Helping to model this workload
  2013-07-25 19:03                 ` Jens Axboe
@ 2013-07-25 19:07                   ` Neto, Antonio Jose Rodrigues
  2013-07-25 19:11                     ` Jens Axboe
  0 siblings, 1 reply; 19+ messages in thread
From: Neto, Antonio Jose Rodrigues @ 2013-07-25 19:07 UTC (permalink / raw)
  To: Jens Axboe; +Cc: fio


On 7/25/13 3:03 PM, "Jens Axboe" <axboe@kernel.dk> wrote:

>On 07/25/2013 01:02 PM, Neto, Antonio Jose Rodrigues wrote:
>> Just to confirm..
>> 
>> bs_is_seq_rand=1
>> 
>> bs=64k,4k 
>> 
>> It's 64K for sequential and 4K for random
>> 
>> Right?
>
>Correct. If bs_is_seq_rand is set, any READ block size setting is
>applied to sequential IO, and any WRITE block size setting is applied to
>random IO.
>
>-- 
>Jens Axboe
>

Let me see if I understood:

But Jens, what if I want sequential reads AND writes at 64KB, and random reads
AND writes at 4KB?




* Re: Helping to model this workload
  2013-07-25 19:07                   ` Neto, Antonio Jose Rodrigues
@ 2013-07-25 19:11                     ` Jens Axboe
  2013-07-25 19:50                       ` Neto, Antonio Jose Rodrigues
  0 siblings, 1 reply; 19+ messages in thread
From: Jens Axboe @ 2013-07-25 19:11 UTC (permalink / raw)
  To: Neto, Antonio Jose Rodrigues; +Cc: fio

On 07/25/2013 01:07 PM, Neto, Antonio Jose Rodrigues wrote:
> 
> On 7/25/13 3:03 PM, "Jens Axboe" <axboe@kernel.dk> wrote:
> 
>> On 07/25/2013 01:02 PM, Neto, Antonio Jose Rodrigues wrote:
>>> Just to confirm..
>>>
>>> bs_is_seq_rand=1
>>>
>>> bs=64k,4k 
>>>
>>> It's 64K for sequential and 4K for random
>>>
>>> Right?
>>
>> Correct. If bs_is_seq_rand is set, any READ block size setting is
>> applied to sequential IO, and any WRITE block size setting is applied to
>> random IO.
>>
>> -- 
>> Jens Axboe
>>
> 
> Let me see if I understood:
> 
> But Jens, if I want sequential read AND write 64KB and random read AND
> write 4KB?

Then you'd do:

bs=64k,4k
bs_is_seq_rand=1

and ANY sequential IO will be 64kb, and ANY random IO will be 4kb.

-- 
Jens Axboe



* Re: Helping to model this workload
  2013-07-25 19:11                     ` Jens Axboe
@ 2013-07-25 19:50                       ` Neto, Antonio Jose Rodrigues
  2013-07-26 14:22                         ` Neto, Antonio Jose Rodrigues
  0 siblings, 1 reply; 19+ messages in thread
From: Neto, Antonio Jose Rodrigues @ 2013-07-25 19:50 UTC (permalink / raw)
  To: Jens Axboe; +Cc: fio



On 7/25/13 3:11 PM, "Jens Axboe" <axboe@kernel.dk> wrote:

>On 07/25/2013 01:07 PM, Neto, Antonio Jose Rodrigues wrote:
>> 
>> On 7/25/13 3:03 PM, "Jens Axboe" <axboe@kernel.dk> wrote:
>> 
>>> On 07/25/2013 01:02 PM, Neto, Antonio Jose Rodrigues wrote:
>>>> Just to confirm..
>>>>
>>>> bs_is_seq_rand=1
>>>>
>>>> bs=64k,4k 
>>>>
>>>> It's 64K for sequential and 4K for random
>>>>
>>>> Right?
>>>
>>> Correct. If bs_is_seq_rand is set, any READ block size setting is
>>> applied to sequential IO, and any WRITE block size setting is applied
>>>to
>>> random IO.
>>>
>>> -- 
>>> Jens Axboe
>>>
>> 
>> Let me see if I understood:
>> 
>> But Jens, if I want sequential read AND write 64KB and random read AND
>> write 4KB?
>
>Then you'd do:
>
>bs=64k,4k
>bs_is_seq_rand=1
>
>any ANY sequential IO will be 64kb, and ANY random IO will be 4kb.
>
>-- 
>Jens Axboe
>

This?

[workload]
bs=64k,4k

bs_is_seq_rand=1

ioengine=libaio
iodepth=2
numjobs=64
direct=1
runtime=2400
size=2000g
filename=\\.\PhysicalDrive9:\\.\PhysicalDrive10
rw=randrw
rwmixread=85
percentage_random=59,67
thread
unified_rw_reporting=1
group_reporting=1






* Re: Helping to model this workload
  2013-07-25 19:50                       ` Neto, Antonio Jose Rodrigues
@ 2013-07-26 14:22                         ` Neto, Antonio Jose Rodrigues
  2013-07-26 14:24                           ` Jens Axboe
  0 siblings, 1 reply; 19+ messages in thread
From: Neto, Antonio Jose Rodrigues @ 2013-07-26 14:22 UTC (permalink / raw)
  To: Jens Axboe; +Cc: fio

>>>>>
>>>>>Just to confirm..
>>>>>
>>>>> bs_is_seq_rand=1
>>>>>
>>>>> bs=64k,4k 
>>>>>
>>>>> It's 64K for sequential and 4K for random
>>>>>
>>>>> Right?
>>>>
>>>> Correct. If bs_is_seq_rand is set, any READ block size setting is
>>>> applied to sequential IO, and any WRITE block size setting is applied
>>>>to
>>>> random IO.
>>>>
>>>> -- 
>>>> Jens Axboe
>>>>
>>> 
>>> Let me see if I understood:
>>> 
>>> But Jens, if I want sequential read AND write 64KB and random read AND
>>> write 4KB?
>>
>>Then you'd do:
>>
>>bs=64k,4k
>>bs_is_seq_rand=1
>>
>>any ANY sequential IO will be 64kb, and ANY random IO will be 4kb.
>>
>>-- 
>>Jens Axboe
>>
>
>This?
>
>[workload]
>bs=64k,4k
>
>bs_is_seq_rand=1
>
>ioengine=libaio
>iodepth=2
>numjobs=64
>direct=1
>runtime=2400
>size=2000g
>filename=\\.\PhysicalDrive9:\\.\PhysicalDrive10
>rw=randrw
>rwmixread=85
>percentage_random=59,67
>thread
>unified_rw_reporting=1
>group_reporting=1


Jens

It would be nice if we could specify:
- iodepth for sequential and random separately
- numjobs for sequential and random separately (for example: I would like to
have 16 jobs doing random and only 1 job doing sequential)

Thanks for considering it in the future

All the best my friend

neto




* Re: Helping to model this workload
  2013-07-26 14:22                         ` Neto, Antonio Jose Rodrigues
@ 2013-07-26 14:24                           ` Jens Axboe
  2013-07-26 14:28                             ` Neto, Antonio Jose Rodrigues
  0 siblings, 1 reply; 19+ messages in thread
From: Jens Axboe @ 2013-07-26 14:24 UTC (permalink / raw)
  To: Neto, Antonio Jose Rodrigues; +Cc: fio

On 07/26/2013 08:22 AM, Neto, Antonio Jose Rodrigues wrote:
> Would be nice if we could specify?
> Iodepth for sequential and random

That would be easy enough to add.

> Numjobs for sequential and random (for example: I would like to have 16
> jobs doing random and 1 job only doing sequential)

That's pretty easy to do manually by just having the two sections in the
job file.
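
For what it's worth, the 16-random-jobs / 1-sequential-job split from the start
of the thread might look something like the untested sketch below. The
rwmixread values are back-of-the-envelope from the original table (50/60 and
35/40), and note that with independent sections the overall 85/15 read/write
and random/sequential proportions are no longer enforced; each section simply
runs as fast as it can:

[global]
ioengine=libaio
direct=1
thread
runtime=2400
size=2000g
filename=\\.\PhysicalDrive9:\\.\PhysicalDrive10
unified_rw_reporting=1
group_reporting=1

[random]
rw=randrw
bs=8k
rwmixread=83
iodepth=2
numjobs=16

[sequential]
rw=rw
bs=64k
rwmixread=88
iodepth=64
numjobs=1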

-- 
Jens Axboe



* Re: Helping to model this workload
  2013-07-26 14:24                           ` Jens Axboe
@ 2013-07-26 14:28                             ` Neto, Antonio Jose Rodrigues
  2013-07-26 14:34                               ` Jens Axboe
  0 siblings, 1 reply; 19+ messages in thread
From: Neto, Antonio Jose Rodrigues @ 2013-07-26 14:28 UTC (permalink / raw)
  To: Jens Axboe; +Cc: fio


On 7/26/13 10:24 AM, "Jens Axboe" <axboe@kernel.dk> wrote:

>On 07/26/2013 08:22 AM, Neto, Antonio Jose Rodrigues wrote:
>> Would be nice if we could specify?
>> Iodepth for sequential and random
>
>That would be easy enough to add.
>
>> Numjobs for sequential and random (for example: I would like to have 16
>> jobs doing random and 1 job only doing sequential)
>
>That's pretty easy to do manually by just having the two sections in the
>job file.
>
>-- 
>Jens Axboe
>
>--

Something like this? Do you have an example? I am confused about how two
sections interact with the sequential/random percentage split and the
different block sizes.

>[random]
>bs=64k,4k
>bs_is_seq_rand=1
>ioengine=libaio
>iodepth=2
>numjobs=64
>direct=1
>runtime=2400
>size=2000g
>filename=\\.\PhysicalDrive9:\\.\PhysicalDrive10
>rw=randrw
>rwmixread=85
>percentage_random=59,67
>thread
>unified_rw_reporting=1
>group_reporting=1

>[sequential]
>bs=64k,4k
>bs_is_seq_rand=1
>ioengine=libaio
>iodepth=1
>numjobs=1
>direct=1
>runtime=2400
>size=2000g
>filename=\\.\PhysicalDrive9:\\.\PhysicalDrive10
>rw=randrw
>rwmixread=85
>percentage_random=59,67
>thread
>unified_rw_reporting=1
>group_reporting=1









* Re: Helping to model this workload
  2013-07-26 14:28                             ` Neto, Antonio Jose Rodrigues
@ 2013-07-26 14:34                               ` Jens Axboe
  2013-07-26 14:35                                 ` Neto, Antonio Jose Rodrigues
  0 siblings, 1 reply; 19+ messages in thread
From: Jens Axboe @ 2013-07-26 14:34 UTC (permalink / raw)
  To: Neto, Antonio Jose Rodrigues; +Cc: fio

On 07/26/2013 08:28 AM, Neto, Antonio Jose Rodrigues wrote:
> 
> On 7/26/13 10:24 AM, "Jens Axboe" <axboe@kernel.dk> wrote:
> 
>> On 07/26/2013 08:22 AM, Neto, Antonio Jose Rodrigues wrote:
>>> Would be nice if we could specify?
>>> Iodepth for sequential and random
>>
>> That would be easy enough to add.
>>
>>> Numjobs for sequential and random (for example: I would like to have 16
>>> jobs doing random and 1 job only doing sequential)
>>
>> That's pretty easy to do manually by just having the two sections in the
>> job file.
>>
>> -- 
>> Jens Axboe
>>
>> --
> 
> Something like this? Do you have an example? I am confused with 2 sections
> and the percentage of the distribution of sequential and random plus the
> difference of block size

For multiple sections, you put the shared items in a [global] section.
That section applies to any job in the file. That also gives you a
better overview of the shared and distinct properties of each job.

-- 
Jens Axboe



* Re: Helping to model this workload
  2013-07-26 14:34                               ` Jens Axboe
@ 2013-07-26 14:35                                 ` Neto, Antonio Jose Rodrigues
  2013-07-26 14:39                                   ` Neto, Antonio Jose Rodrigues
  0 siblings, 1 reply; 19+ messages in thread
From: Neto, Antonio Jose Rodrigues @ 2013-07-26 14:35 UTC (permalink / raw)
  To: Jens Axboe; +Cc: fio



On 7/26/13 10:34 AM, "Jens Axboe" <axboe@kernel.dk> wrote:

>On 07/26/2013 08:28 AM, Neto, Antonio Jose Rodrigues wrote:
>> 
>> On 7/26/13 10:24 AM, "Jens Axboe" <axboe@kernel.dk> wrote:
>> 
>>> On 07/26/2013 08:22 AM, Neto, Antonio Jose Rodrigues wrote:
>>>> Would be nice if we could specify?
>>>> Iodepth for sequential and random
>>>
>>> That would be easy enough to add.
>>>
>>>> Numjobs for sequential and random (for example: I would like to have
>>>>16
>>>> jobs doing random and 1 job only doing sequential)
>>>
>>> That's pretty easy to do manually by just having the two sections in
>>>the
>>> job file.
>>>
>>> -- 
>>> Jens Axboe
>>>
>>> --
>> 
>> Something like this? Do you have an example? I am confused with 2
>>sections
>> and the percentage of the distribution of sequential and random plus the
>> difference of block size
>
>For multiple sections, you put the shared items in a [global] section.
>That section applies to any job in the file. That also gives you a
>better overview of the shared and distinct properties of each job.
>
>-- 
>Jens Axboe
>

Let me try




* Re: Helping to model this workload
  2013-07-26 14:35                                 ` Neto, Antonio Jose Rodrigues
@ 2013-07-26 14:39                                   ` Neto, Antonio Jose Rodrigues
  0 siblings, 0 replies; 19+ messages in thread
From: Neto, Antonio Jose Rodrigues @ 2013-07-26 14:39 UTC (permalink / raw)
  To: Jens Axboe; +Cc: fio

>>>>>
>>>>>Would be nice if we could specify?
>>>>> Iodepth for sequential and random
>>>>
>>>> That would be easy enough to add.
>>>>
>>>>> Numjobs for sequential and random (for example: I would like to have
>>>>>16
>>>>> jobs doing random and 1 job only doing sequential)
>>>>
>>>> That's pretty easy to do manually by just having the two sections in
>>>>the
>>>> job file.
>>>>
>>>> -- 
>>>> Jens Axboe
>>>>
>>>> --
>>> 
>>> Something like this? Do you have an example? I am confused with 2
>>>sections
>>> and the percentage of the distribution of sequential and random plus
>>>the
>>> difference of block size
>>
>>For multiple sections, you put the shared items in a [global] section.
>>That section applies to any job in the file. That also gives you a
>>better overview of the shared and distinct properties of each job.
>>
>>-- 
>>Jens Axboe
>>
>
>Let me try
>


This one? How about the reporting? I would love to have unified
reporting per section, like:

Random XYZ IOPS and x latency
Sequential ABC IOPS and y latency


[global]
bs=64k,4k
bs_is_seq_rand=1
ioengine=libaio
direct=1
runtime=2400
size=2000g
filename=\\.\PhysicalDrive9:\\.\PhysicalDrive10
rw=randrw
rwmixread=85
percentage_random=59,67
thread
unified_rw_reporting=1
group_reporting=1

[random]
iodepth=2
numjobs=64

[sequential]
iodepth=1
numjobs=1
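
On the reporting wish above: one possibility might be to start a new reporting
group for the second section, so that with group_reporting the 64 random
clones and the single sequential job are summarized separately instead of
being folded into one group. This is an untested sketch that relies on fio's
new_group option:

[random]
iodepth=2
numjobs=64

[sequential]
new_group
iodepth=1
numjobs=1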




End of thread, newest message: 2013-07-26 14:39 UTC

Thread overview: 19+ messages
     [not found] <CE142CC7.127F5%Antonio.Jose.Rodrigues.Neto@netapp.com>
2013-07-23 16:52 ` Helping to model this workload Neto, Antonio Jose Rodrigues
2013-07-25 16:45   ` Jens Axboe
2013-07-25 16:52     ` Neto, Antonio Jose Rodrigues
2013-07-25 18:27       ` Jens Axboe
2013-07-25 18:31         ` Neto, Antonio Jose Rodrigues
2013-07-25 18:42           ` Jens Axboe
2013-07-25 18:59             ` Neto, Antonio Jose Rodrigues
2013-07-25 19:02               ` Jens Axboe
2013-07-25 19:02               ` Neto, Antonio Jose Rodrigues
2013-07-25 19:03                 ` Jens Axboe
2013-07-25 19:07                   ` Neto, Antonio Jose Rodrigues
2013-07-25 19:11                     ` Jens Axboe
2013-07-25 19:50                       ` Neto, Antonio Jose Rodrigues
2013-07-26 14:22                         ` Neto, Antonio Jose Rodrigues
2013-07-26 14:24                           ` Jens Axboe
2013-07-26 14:28                             ` Neto, Antonio Jose Rodrigues
2013-07-26 14:34                               ` Jens Axboe
2013-07-26 14:35                                 ` Neto, Antonio Jose Rodrigues
2013-07-26 14:39                                   ` Neto, Antonio Jose Rodrigues
