From: Lukas Czerner <lczerner@redhat.com>
To: Jan Tulak <jtulak@redhat.com>
Cc: Ric Wheeler <ricwheeler@gmail.com>, Jens Axboe <axboe@kernel.dk>,
	linux-block@vger.kernel.org,
	Linux FS Devel <linux-fsdevel@vger.kernel.org>,
	Nikolay Borisov <nborisov@suse.com>
Subject: Re: Testing devices for discard support properly
Date: Tue, 7 May 2019 11:40:15 +0200
Message-ID: <20190507094015.hb76w3rjzx7shxjp@work>
In-Reply-To: <CACj3i71HdW0ys_YujGFJkobMmZAZtEPo7B2tgZjEY8oP_T9T6g@mail.gmail.com>

On Tue, May 07, 2019 at 10:48:55AM +0200, Jan Tulak wrote:
> On Tue, May 7, 2019 at 9:10 AM Lukas Czerner <lczerner@redhat.com> wrote:
> >
> > On Mon, May 06, 2019 at 04:56:44PM -0400, Ric Wheeler wrote:
> > >
> ...
> > >
> > > * Whole device discard at the block level, both for a device that has
> > > been completely written and for one that has already been trimmed
> >
> > Yes, useful. Also note that a long time ago, when I did this testing,
> > I noticed that after a discard request, especially after a whole-device
> > discard, the read/write IO performance went down significantly for some
> > drives. I am sure things have changed, but I think it would be
> > interesting to see how it behaves now.
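
Something like the following untested sketch is what I had in mind for
timing a whole-device discard from user space. It is just the BLKDISCARD
ioctl that blkdiscard(8) also uses, and it of course destroys all data
on the device given as argv[1]:

  /* Time one whole-device BLKDISCARD. WARNING: destroys all data. */
  #include <fcntl.h>
  #include <linux/fs.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <time.h>
  #include <unistd.h>

  int main(int argc, char **argv)
  {
          int fd = open(argv[1], O_RDWR);
          uint64_t size, range[2];
          struct timespec t0, t1;

          if (fd < 0 || ioctl(fd, BLKGETSIZE64, &size))
                  return 1;

          range[0] = 0;           /* offset */
          range[1] = size;        /* length: the whole device */

          clock_gettime(CLOCK_MONOTONIC, &t0);
          if (ioctl(fd, BLKDISCARD, range))
                  return 1;
          clock_gettime(CLOCK_MONOTONIC, &t1);

          printf("whole-device discard: %.3f s\n",
                 (t1.tv_sec - t0.tv_sec) +
                 (t1.tv_nsec - t0.tv_nsec) / 1e9);
          close(fd);
          return 0;
  }

Running it once on a fully written device and again right after should
show whether the second, already trimmed pass is effectively free.
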
> >
> > >
> > > * Discard performance at the block level for 4k discards, for a device
> > > that has been completely written, and again for a device that has been
> > > completely discarded.
> > >
> > > * Same test for large discards - say at a megabyte and/or gigabyte size?
> >
> > From my testing (again, it was a long time ago and things have probably
> > changed since then), most of the drives I've seen had largely the same
> > or similar timing for a discard request regardless of its size (hence
> > the conclusion was: the bigger the request, the better). The small
> > variation I did see could also be explained by the kernel implementation
> > and discard_max_bytes limitations.
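
For the size question, the same ioctl in a loop, one request per
power-of-two size, would do. An untested sketch; it assumes the device
was fully written beforehand and is at least 2 GiB, and it walks
forward so every discard hits a fresh range:

  /* Time one BLKDISCARD per size, 4 KiB .. 1 GiB. Destroys data.
   * Note the block layer splits requests above discard_max_bytes,
   * so the largest sizes may turn into several device commands. */
  #include <fcntl.h>
  #include <linux/fs.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <time.h>
  #include <unistd.h>

  int main(int argc, char **argv)
  {
          int fd = open(argv[1], O_RDWR);
          struct timespec t0, t1;
          uint64_t off = 0;

          if (fd < 0)
                  return 1;

          for (uint64_t len = 4096; len <= (1ULL << 30); len <<= 1) {
                  uint64_t range[2] = { off, len };

                  clock_gettime(CLOCK_MONOTONIC, &t0);
                  if (ioctl(fd, BLKDISCARD, range))
                          return 1;
                  clock_gettime(CLOCK_MONOTONIC, &t1);

                  printf("%11llu B: %.6f s\n", (unsigned long long)len,
                         (t1.tv_sec - t0.tv_sec) +
                         (t1.tv_nsec - t0.tv_nsec) / 1e9);
                  off += len;     /* do not overlap ranges */
          }
          close(fd);
          return 0;
  }
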
> >
> > >
> > > * Same test done at the device's optimal discard chunk size and alignment
> > >
> > > Should the discards be done in a random pattern? Or just
> > > sequentially?
> >
> > I think that all of the above will be interesting. However, there are
> > two sides to it. One is pure discard performance, to establish what the
> > expectations could be; the other is "real" workload performance. Since,
> > in my experience, discard can have an impact on drive IO performance
> > beyond what's obvious, testing a mixed workload (IO + discard) is going
> > to be very important as well. And that's where fio workloads can come
> > in (I actually do not know whether fio already supports this or not).
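
(It looks like fio does support trim on Linux these days: rw=trim,
rw=randtrim, and rw=trimwrite for a mixed trim+write load.) Failing
that, here is a crude untested sketch of the kind of mixed test I mean:
interleave sequential writes with discards of the previously written
window and watch the worst write latency per window. argv[1] is a
scratch device of at least 1 GiB, and all data on it is destroyed:

  /* Crude mixed-workload sketch: 1 MiB sequential O_DIRECT writes;
   * after each 64 MiB window, discard the window written before it.
   * WARNING: destroys all data on argv[1]. */
  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <linux/fs.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <time.h>
  #include <unistd.h>

  #define BS  (1 << 20)   /* write size */
  #define WIN 64          /* writes per window */

  static double secs(struct timespec a, struct timespec b)
  {
          return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
  }

  int main(int argc, char **argv)
  {
          int fd = open(argv[1], O_RDWR | O_DIRECT);
          struct timespec t0, t1;
          uint64_t off = 0;
          char *buf;

          if (fd < 0 || posix_memalign((void **)&buf, 4096, BS))
                  return 1;
          memset(buf, 0x5a, BS);

          for (int win = 0; win < 16; win++) {    /* 1 GiB total */
                  double worst = 0;

                  for (int i = 0; i < WIN; i++, off += BS) {
                          clock_gettime(CLOCK_MONOTONIC, &t0);
                          if (pwrite(fd, buf, BS, off) != BS)
                                  return 1;
                          clock_gettime(CLOCK_MONOTONIC, &t1);
                          if (secs(t0, t1) > worst)
                                  worst = secs(t0, t1);
                  }
                  if (win) {      /* discard the window before this one */
                          uint64_t range[2] = { off - 2ULL * WIN * BS,
                                                (uint64_t)WIN * BS };
                          if (ioctl(fd, BLKDISCARD, range))
                                  return 1;
                  }
                  printf("window %2d: worst write %.6f s\n", win, worst);
          }
          close(fd);
          return 0;
  }

If the drive really trims asynchronously, the windows right after a
discard should be the ones showing the latency spikes.
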
> >
> 
> And:
> 
> On Tue, May 7, 2019 at 10:22 AM Nikolay Borisov <nborisov@suse.com> wrote:
> > I have some vague recollection that this was brought up before, but how
> > sure are we that when a discard request is sent down to the disk and a
> > response is returned, the data has actually been discarded? What about
> > NCQ effects, i.e. "instant completion" while doing the work in the
> > background? Or ignoring the discard request altogether?
> 
> 
> As Nikolay writes in the other thread, I too have a feeling that there
> has been a discard-related discussion at LSF/MM before. And if I
> remember correctly, there were hints that drives (sometimes) do an
> asynchronous trim after returning success, which would explain the
> similar time for all sizes and the IO drop after a trim.

Yes, that was definitely the case in the past. It's also why we've
seen IO performance drop after a big (whole-device) discard, as the
device was busy in the background.
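
That drop should be directly measurable: time a sequential read pass
right after a whole-device discard and again after letting the drive
idle for a while. An untested sketch (read-only, so this one at least
is safe to run on an already discarded device):

  /* Read 256 MiB sequentially with O_DIRECT and report throughput;
   * run right after a discard and again after the drive has idled. */
  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <time.h>
  #include <unistd.h>

  #define BS    (1 << 20)
  #define TOTAL (256ULL << 20)

  int main(int argc, char **argv)
  {
          int fd = open(argv[1], O_RDONLY | O_DIRECT);
          struct timespec t0, t1;
          char *buf;

          if (fd < 0 || posix_memalign((void **)&buf, 4096, BS))
                  return 1;

          clock_gettime(CLOCK_MONOTONIC, &t0);
          for (uint64_t off = 0; off < TOTAL; off += BS)
                  if (pread(fd, buf, BS, off) != BS)
                          return 1;
          clock_gettime(CLOCK_MONOTONIC, &t1);

          double s = (t1.tv_sec - t0.tv_sec) +
                     (t1.tv_nsec - t0.tv_nsec) / 1e9;
          printf("%.1f MiB/s\n", (TOTAL >> 20) / s);
          close(fd);
          return 0;
  }
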

However, Nikolay does have a point. IIRC the device is free to ignore
discard requests, and I do not think there is any reliable way to
actually tell that the data was really discarded. I can even imagine a
situation where the device is not going to do anything until it has
passed some threshold of free blocks for wear leveling. If that's the
case, our tests are not going to be very useful unless they stress such
corner cases. But that's just my speculation, so someone with better
knowledge of what vendors are doing might tell us whether it's something
to worry about or not.
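
A pattern test at least shows what the drive admits to. An untested
sketch: write a recognizable pattern, discard it, read it back with
O_DIRECT to bypass the page cache. Note that the result is inconclusive
by design, which is exactly the problem:

  /* Write a pattern, discard it, read it back. Inconclusive: a
   * drive with non-deterministic trim may legally return the old
   * data even after a real discard. Destroys data on argv[1]. */
  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <linux/fs.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <unistd.h>

  #define LEN (1 << 20)   /* 1 MiB test range at offset 0 */

  int main(int argc, char **argv)
  {
          uint64_t range[2] = { 0, LEN };
          int fd = open(argv[1], O_RDWR | O_DIRECT);
          char *wb, *rb;

          if (fd < 0 || posix_memalign((void **)&wb, 4096, LEN) ||
              posix_memalign((void **)&rb, 4096, LEN))
                  return 1;

          memset(wb, 0xa5, LEN);                  /* known pattern */
          if (pwrite(fd, wb, LEN, 0) != LEN)
                  return 1;
          fsync(fd);

          if (ioctl(fd, BLKDISCARD, range))
                  return 1;

          if (pread(fd, rb, LEN, 0) != LEN)
                  return 1;

          if (memcmp(wb, rb, LEN) == 0)
                  puts("pattern intact: ignored, or old data returned");
          else
                  puts("data changed: mapping was at least invalidated");

          close(fd);
          return 0;
  }
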

> 
> So, I think that the mixed workload (IO + discard) is a pretty
> important part of the whole topic, and a pure discard test doesn't
> really tell us anything, at least for some drives.

I think both are important, especially since mixed IO tests are going
to be highly workload-specific.

-Lukas

> 
> Jan
> 
> 
> 
> -- 
> Jan Tulak

