From: "Kudryavtsev, Andrey O" <andrey.o.kudryavtsev@intel.com>
To: "Gruher, Joseph R" <joseph.r.gruher@intel.com>,
	Mauricio Tavares <raubvogel@gmail.com>,
	"fio@vger.kernel.org" <fio@vger.kernel.org>
Subject: Re: CPUs, threads, and speed
Date: Wed, 15 Jan 2020 18:33:01 +0000	[thread overview]
Message-ID: <EE47D240-3BC2-4E47-91A6-7F7943A408CB@intel.com> (raw)
In-Reply-To: <MWHPR11MB16790DF710A8EA7EB9118C37A9370@MWHPR11MB1679.namprd11.prod.outlook.com>

I can clarify that, as I posted the original script on GitHub.
Sequential preconditioning is mandatory for the bandwidth test; random 4k preconditioning is for everything else. For all mixed scenarios the data has to be randomized, since that puts the highest pressure on the drive (and, internally, on the write amplification factor in the case of a NAND SSD). That makes all subsequent benchmarks fair.
On norandommap - I agree in general, but fio's overhead of tracking LBAs impacts performance and extends the pre-fill time.
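
For illustration, a minimal two-pass sketch of that preconditioning order (the device path is the one used elsewhere in this thread; the job names and the iodepth of the random pass are assumptions, not the original GitHub script):

[global]
ioengine=libaio
direct=1
filename=/dev/nvme0n1
size=100%

# Pass 1: sequential 128k fill, the mandatory step before any bandwidth test
[seq-precondition]
bs=128k
rw=write
iodepth=128

# Pass 2: 4k random overwrite for everything else
# stonewall makes this job wait until the sequential pass completes
[rand-precondition]
stonewall
bs=4k
rw=randwrite
iodepth=32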

-- 
Andrey Kudryavtsev, 
SSD Solution Architect
Intel Corp. 


On 1/15/20, 9:29 AM, "fio-owner@vger.kernel.org on behalf of Gruher, Joseph R" <joseph.r.gruher@intel.com> wrote:

    > -----Original Message-----
    > From: fio-owner@vger.kernel.org <fio-owner@vger.kernel.org> On Behalf Of
    > Mauricio Tavares
    > Sent: Wednesday, January 15, 2020 7:51 AM
    > To: fio@vger.kernel.org
    > Subject: CPUs, threads, and speed
    > 
    > Let's say I have a config file to preload a drive that looks like this (stolen from
    > https://github.com/intel/fiovisualizer/blob/master/Workloads/Precondition/fill_4KRandom_NVMe.ini)
    > 
    > [global]
    > name=4k random write 4 ios in the queue in 32 queues
    > filename=/dev/nvme0n1
    > ioengine=libaio
    > direct=1
    > bs=4k
    > rw=randwrite
    > iodepth=4
    > numjobs=32
    > buffered=0
    > size=100%
    > loops=2
    > randrepeat=0
    > norandommap
    > refill_buffers
    > 
    > [job1]
    > 
    > That is taking a ton of time, like days to complete. Is there anything I can do to
    > speed it up?
    
    When you say preload, do you just want to write the full capacity of the drive?  A sequential workload with larger blocks will be faster, like:
    
    [global]
    ioengine=libaio
    thread=1
    direct=1
    bs=128k
    rw=write
    numjobs=1
    iodepth=128
    size=100%
    loops=2
    [job00]
    filename=/dev/nvme0n1
    
    Or, if you have a use case where you specifically want to write it in with 4K blocks, you could probably increase your queue depth well beyond 4 and see an improvement in performance. You also probably don't want to specify norandommap if you're trying to hit every block on the device, since without the map fio doesn't track which LBAs it has written and won't guarantee full coverage.
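    
    For example, a minimal sketch of that 4K variant (iodepth=32 is an illustrative value rather than a tested recommendation; norandommap is dropped so fio tracks coverage and hits every block):
    
    [global]
    ioengine=libaio
    thread=1
    direct=1
    bs=4k
    rw=randwrite
    numjobs=1
    iodepth=32
    size=100%
    loops=2
    [job00]
    filename=/dev/nvme0n1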
    
    -Joe
    



Thread overview: 18+ messages
2020-01-15 15:50 CPUs, threads, and speed Mauricio Tavares
2020-01-15 17:28 ` Gruher, Joseph R
2020-01-15 18:04   ` Andrey Kuzmin
2020-01-15 18:29     ` Mauricio Tavares
2020-01-15 19:00       ` Andrey Kuzmin
2020-01-15 20:36         ` Mauricio Tavares
2020-01-16  6:59           ` Andrey Kuzmin
2020-01-16 16:12             ` Mauricio Tavares
2020-01-16 17:03               ` Andrey Kuzmin
2020-01-16 17:25               ` Jared Walton
2020-01-16 18:39                 ` Andrey Kuzmin
2020-01-16 19:03                   ` Jared Walton
2020-01-17 22:08                     ` Matthew Eaton
2020-01-24 20:39                       ` Mauricio Tavares
2020-01-15 18:33   ` Kudryavtsev, Andrey O [this message]
2020-01-15 21:33 ` Elliott, Robert (Servers)
2020-01-15 22:39   ` Mauricio Tavares
2020-01-16  0:49     ` Ming Lei
