* Stonewalled ?
@ 2015-02-26  4:04 thoms
  2015-02-26  7:11 ` Carl Zwanzig
  0 siblings, 1 reply; 4+ messages in thread
From: thoms @ 2015-02-26  4:04 UTC (permalink / raw)
  To: fio

Hi folks,

I'm running fio-2.2.5 on a Linux x86_64 platform.  This is the first
time I've had to create a job file with an extremely large number of job
sections within the same file (hundreds of jobs).  I need each job to
run sequentially and have included "stonewall" within each section. 
When I execute the job file, I get this error:

  error: maximum number of jobs (2048) reached.

When I reduce the number of job sections in the file to under 200, the
jobs run sequentially as expected.

My understanding of "stonewall" is that it should serialize the running
of each job within a file (or files).  The implication is that fio
shouldn't be evaluating a subsequent job section until after the current
job has fully completed.  But in this case, fio appears to be looking
ahead and ignoring the "stonewall" directive until it exhausts
resources.  This behavior also occurs with the current git HEAD.

Is this a feature or a bug, and is there another way to tell fio to
execute each job sequentially (top to bottom) as it encounters them in
the file?
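For context, the job file in question has the following shape; the
section names and option values below are illustrative, not taken from
the actual file:

```ini
; global defaults shared by every job section (values illustrative)
[global]
filename=/tmp/fio.test
size=64m
direct=1

[job-001]
rw=write
bs=4k
stonewall        ; wait for all previous jobs to finish before starting

[job-002]
rw=randread
bs=4k
stonewall

; ... hundreds more sections like these; each section is still parsed
; and allocated up front even though only one runs at a time
```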

Thanks!

^ permalink raw reply	[flat|nested] 4+ messages in thread

* RE: Stonewalled ?
  2015-02-26  4:04 Stonewalled ? thoms
@ 2015-02-26  7:11 ` Carl Zwanzig
  2015-02-26 14:46   ` thoms
  0 siblings, 1 reply; 4+ messages in thread
From: Carl Zwanzig @ 2015-02-26  7:11 UTC (permalink / raw)
  To: thoms, fio

(sorry about top-posting here)

For things like this, I'd use an external script specifying all params on each command line, and not use a job file at all. Order of execution is guaranteed and there's no limit to the number of jobs. (When fio evaluates the job file it creates all the jobs, but then doesn't let them all start at once.)
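Carl's suggestion can be sketched as a small wrapper script.  The
workload names, block sizes, and target file below are illustrative
rather than from the thread; the point is that the shell, not fio's job
parser, sequences the runs:

```shell
#!/bin/sh
# Run each workload as a separate fio invocation; the shell waits for
# each one to exit before launching the next, so ordering is guaranteed
# and fio only ever sees a single job at a time.
set -e
for bs in 4k 64k 1m; do
    # Illustrative parameters; substitute the real per-job options.
    # Drop the leading "echo" to actually execute the runs.
    echo fio --name="seq-${bs}" --rw=write --bs="${bs}" \
        --size=64m --filename=/tmp/fio.test
done
```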

z!


* Re: Stonewalled ?
  2015-02-26  7:11 ` Carl Zwanzig
@ 2015-02-26 14:46   ` thoms
  2015-02-26 20:50     ` Jens Axboe
  0 siblings, 1 reply; 4+ messages in thread
From: thoms @ 2015-02-26 14:46 UTC (permalink / raw)
  To: Carl Zwanzig, fio


On 02/26/2015 02:11 AM, Carl Zwanzig wrote:
> (sorry about top-posting here)
>
> For things like this, I'd use an external script specifying all params on each command line, and not use a job file at all. Order of execution is guaranteed and there's no limit to the number of jobs. (When fio evaluates the job file it creates all the jobs, but then doesn't let them all start at once.)
>
> z!
> ________

I'm prepared to do that; it's just messy for my particular situation.
Not having looked through the code, I was under the impression that
"stonewall" changed the semantics of that behavior.  Understood...

Thanks for the insight.  Much appreciated...





* Re: Stonewalled ?
  2015-02-26 14:46   ` thoms
@ 2015-02-26 20:50     ` Jens Axboe
  0 siblings, 0 replies; 4+ messages in thread
From: Jens Axboe @ 2015-02-26 20:50 UTC (permalink / raw)
  To: thoms, Carl Zwanzig, fio

On 02/26/2015 07:46 AM, thoms wrote:
> 
> On 02/26/2015 02:11 AM, Carl Zwanzig wrote:
>> (sorry about top-posting here)
>>
>> For things like this, I'd use an external script specifying all params on each command line, and not use a job file at all. Order of execution is guaranteed and there's no limit to the number of jobs. (When fio evaluates the job file it creates all the jobs, but then doesn't let them all start at once.)
>>
>> z!
>> ________
> 
> I'm prepared to do that, it's just messy for my particular situation. 
> Not having looked through the code, I was under the impression that
> "stonewall" changed the semantics of that behavior.  Understood...
> 
> Thanks for the insight.  Much appreciated...

The stonewall does do what you expect.  The problem isn't related to how
many jobs are running at the same time, it's how many jobs have been
added.  Basically it boils down to how big a shm segment fio will
need to run.

You could always just bump REAL_MAX_JOBS in fio.h to something larger
than 2048, that should work on most platforms.
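In fio's source that limit is a compile-time constant, so the change is
a one-line edit along these lines (the exact surrounding context in
fio.h varies across versions, so treat this as a sketch):

```c
/* fio.h -- illustrative sketch; exact context differs by fio version.
 * Raising this constant enlarges the shared-memory segment fio
 * allocates to hold all parsed job sections, at the cost of more
 * memory reserved up front.
 */
#define REAL_MAX_JOBS	4096	/* was 2048 in fio 2.2.x */
```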

-- 
Jens Axboe




end of thread, other threads:[~2015-02-26 20:50 UTC | newest]

Thread overview: 4+ messages
2015-02-26  4:04 Stonewalled ? thoms
2015-02-26  7:11 ` Carl Zwanzig
2015-02-26 14:46   ` thoms
2015-02-26 20:50     ` Jens Axboe