From: "Elliott, Robert (Persistent Memory)" <elliott@hpe.com>
To: Etienne-Hugues Fortin <efortin@cyberspicace.com>,
	"fio@vger.kernel.org" <fio@vger.kernel.org>,
	'Jens Axboe' <axboe@kernel.dk>
Subject: RE: FIO 3.11
Date: Wed, 17 Oct 2018 21:41:40 +0000	[thread overview]
Message-ID: <AT5PR8401MB1169925565E8213E9968F891ABFF0@AT5PR8401MB1169.NAMPRD84.PROD.OUTLOOK.COM> (raw)
In-Reply-To: <d9885cb6-af59-f4c3-d58b-37495e4c0654@kernel.dk>

> First, when we create a profile, my assumption is that if I have two jobs without using 'stonewall',
> each job will run at the same time and will have the same priority. So, if I want to do a job with a
> 50% read/write mix but with 25% random and 75% sequential, I would need a job that looks like this:
...
> If so, outside of trying to work the numjobs required to get about 50% R/W, is there a
> better way of doing this?

The OS controls how different threads progress, and there's nothing keeping them in
lockstep.

These options make each thread implement your desired mix on its own:

       rwmixread=int
              Percentage of a mixed workload that should be reads. Default: 50.

       rwmixwrite=int
              Percentage of a mixed workload that should be writes. If both rwmixread and
              rwmixwrite is given and the values do not add up to 100%, the latter of the two
              will be used to override the first.

       percentage_random=int[,int][,int]
              For a random workload, set what percentage should be random. This defaults
              to 100%, in which case the workload is fully random. It can be set anywhere
              from 0 to 100. Setting it to 0 makes the workload fully sequential. Any setting
              in between results in a mix of sequential and random I/O at the given
              percentages. Comma-separated values may be specified for reads, writes, and
              trims as described in blocksize.
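Putting those options together, a single job can carry the whole mix itself. The sketch below is an assumption about your setup, not from the original mail: the filename, sizes, and runtime are placeholders to adjust for your device.

```ini
; mixed.fio -- hypothetical job file (a sketch; replace filename/size for your target)
[global]
ioengine=libaio
direct=1
bs=4k
runtime=60
time_based

[mixed]
filename=/tmp/fio-testfile   ; assumption: point this at your device or file
size=1g
rw=randrw                    ; mixed read/write workload
rwmixread=50                 ; 50% reads, 50% writes
percentage_random=25         ; 25% random, 75% sequential
```

With this, every I/O issued by the one job follows the 50/50 read/write and 25/75 random/sequential split, so there is no need to balance separate jobs against each other.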

Threads running sequential accesses can easily benefit from each other's cache hits if
any caching or prefetching is done by the drivers or devices involved.  One thread
takes the lead and absorbs the delays, while the others benefit from its work and stay close
behind.  They can take turns, but tend to stay clustered together, which can distort results.
Random accesses avoid that problem, provided the capacity is much larger than any caches.
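If you do want multiple sequential threads without that clustering effect, one approach (my suggestion, not something fio does by default) is to give each job clone its own non-overlapping region with offset_increment, so no thread can ride in another's cache footprint:

```ini
; hypothetical sketch: four sequential readers, each confined to its own quarter
[seq-separated]
filename=/tmp/fio-testfile   ; assumption: point this at your device or file
rw=read
numjobs=4
size=25%                     ; each clone covers 25% of the capacity
offset_increment=25%         ; clone N starts at N*25%, so regions never overlap
```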




Thread overview: 5+ messages
     [not found] <BN7PR08MB409869032E10FEC58C9CBE08D9FF0@BN7PR08MB4098.namprd08.prod.outlook.com>
2018-10-17 19:03 ` Fwd: FIO 3.11 Jens Axboe
2018-10-17 21:41   ` Elliott, Robert (Persistent Memory) [this message]
2018-10-18  0:42     ` Etienne-Hugues Fortin
2018-10-18  4:19     ` Sitsofe Wheeler
2018-10-18 10:26       ` Etienne-Hugues Fortin
