From: Ric Wheeler <ricwheeler@gmail.com>
To: Matthew Wilcox <willy@infradead.org>
Cc: Jaegeuk Kim <jaegeuk.kim@gmail.com>,
	Bart Van Assche <bvanassche@acm.org>,
	lsf-pc@lists.linux-foundation.org,
	Linux FS Devel <linux-fsdevel@vger.kernel.org>,
	linux-block@vger.kernel.org
Subject: Re: [LSF/MM/BPF TOPIC] durability vs performance for flash devices (especially embedded!)
Date: Thu, 10 Jun 2021 13:25:39 -0400	[thread overview]
Message-ID: <97ecb393-db84-c71d-6162-d8309400b0ee@gmail.com> (raw)
In-Reply-To: <YMJGqkwL87KczMS+@casper.infradead.org>

On 6/10/21 1:06 PM, Matthew Wilcox wrote:
> On Thu, Jun 10, 2021 at 12:22:40PM -0400, Ric Wheeler wrote:
>> On 6/9/21 5:32 PM, Jaegeuk Kim wrote:
>>> On Wed, Jun 9, 2021 at 11:47 AM Bart Van Assche <bvanassche@acm.org> wrote:
>>>
>>>      On 6/9/21 11:30 AM, Matthew Wilcox wrote:
>>>      > maybe you should read the paper.
>>>      >
>>>      > " Thiscomparison demonstrates that using F2FS, a flash-friendly file
>>>      > sys-tem, does not mitigate the wear-out problem, except inasmuch asit
>>>      > inadvertently rate limitsallI/O to the device"
>>>
>>>
>>> Do you agree with that statement, based on your insight? At least to me,
>>> that paper misses the fundamental GC problem, which should have been
>>> evaluated with real workloads instead of a simple benchmark generating
>>> only 4KB random writes. And they needed to look in more detail at the
>>> FTL/IO patterns, including UNMAP and LBA alignment between host and
>>> storage, which all affect WAF. Based on that, zoned devices look quite
>>> promising to me, since they address LBA alignment entirely and give the
>>> host SW stack a way to control QoS.
>> Just a note, using a pretty simple and optimal streaming write pattern, I
>> have been able to burn out emmc parts in a little over a week.
>>
>> My test case created a 1GB file (filled with random data, just in case
>> the device was looking for zero blocks to ignore) and then looped,
>> copying and syncing that file until the eMMC device's lifetime was
>> shown as exhausted.
>>
>> This was a clean, best-case sequential write, so this is not just an
>> issue with small, random writes.
> How many LBAs were you using?  My mental model of a FTL (which may
> be out of date) is that it's essentially a log-structured filesystem.
> When there are insufficient empty erase-blocks available, the device
> finds a suitable victim erase-block, copies all the still-live LBAs into
> an active erase-block, updates the FTL and erases the erase-block.
>
> So the key is making sure that LBAs are reused as much as possible.
> Short of modifying a filesystem to make this happen, I force it by
> short-stroking my SSD.  We can model it statistically, but intuitively,
> the more "live" LBAs there are, the higher the write amplification and
> wear on the drive will be, because the victim erase-blocks will have
> more live LBAs to migrate.
>
> This is why the paper intrigued me; it seemed like they were rewriting
> a 100MB file in place.  That _shouldn't_ cause ridiculous wear, unless
> the emmc device was otherwise almost full.

During the test run, I did not look at which LBAs got written to over the 
couple of weeks.

Roughly, I tried to make sure that the file system ranged in fullness from 50% 
to 75% (did not let it get too close to full).
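
The loop itself was nothing fancy - roughly the sketch below (not the exact 
script I ran; the mount point, file size, and the life_time sysfs path and 
values are assumptions from memory and vary by kernel and platform):

#!/usr/bin/env python3
# Sketch of the burn-in loop: seed 1 GiB of random data, then copy and
# sync it repeatedly while watching the eMMC lifetime estimate.
# Mount point, file size, and the life_time sysfs path are assumptions.
import os
import shutil
import subprocess

MOUNT = "/mnt/emmc"                      # test filesystem on the eMMC part
SRC = os.path.join(MOUNT, "seed")
DST = os.path.join(MOUNT, "copy")
LIFE = "/sys/bus/mmc/devices/mmc0:0001/life_time"   # varies by platform

# Random data so the device cannot cheat by recognizing all-zero blocks.
with open(SRC, "wb") as f:
    for _ in range(1024):
        f.write(os.urandom(1024 * 1024))   # 1 GiB total
    f.flush()
    os.fsync(f.fileno())

copies = 0
while True:
    shutil.copyfile(SRC, DST)            # the "cp" step
    subprocess.run(["sync"], check=True) # the "sync" step
    copies += 1
    if copies % 100 == 0:
        status = open(LIFE).read().strip()   # two hex lifetime estimates
        print(copies, status)
        if "0x0b" in status:             # 0x0b = estimated lifetime exceeded
            break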

Any vendor (especially on the low-end parts) might do something really 
primitive, but my hope is similar to what you describe - if there is 
sufficient free space, the firmware should be able to wear level across all 
of the cells in the device. Overwriting in place or writing (and then 
freeing/discarding) each LBA *should* be roughly equivalent. Free space here 
means LBAs that the device does not see as holding valid, un-discarded data.
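
On your point that we can model this statistically: a toy greedy-GC 
simulation (purely illustrative, uniform random overwrites, none of the real 
FTL details) makes the effect of the live-LBA fraction pretty visible:

#!/usr/bin/env python3
# Toy FTL model: uniform random page overwrites, log-structured writes,
# greedy victim selection (fewest live pages).  Counts physical page
# programs vs. host writes to estimate write amplification.
import random
from collections import defaultdict

def simulate_wa(blocks=256, pages_per_block=64, utilization=0.75,
                host_writes=200_000, seed=1):
    rng = random.Random(seed)
    n_lbas = int(blocks * pages_per_block * utilization)

    home = {}                        # lba -> block holding its live copy
    live = defaultdict(set)          # block -> set of live lbas in it
    free = list(range(blocks))
    frontier, used = free.pop(), 0
    physical = 0

    def gc():
        # Greedy GC: erase the closed block with the fewest live pages and
        # re-place its live data (simulation shortcut: erase, then copy).
        victim = min((b for b in range(blocks) if b != frontier),
                     key=lambda b: len(live[b]))
        movers = list(live[victim])
        live[victim].clear()
        free.append(victim)
        for lba in movers:
            place(lba)

    def place(lba):
        # Append one page to the write frontier (host write or GC copy).
        nonlocal frontier, used, physical
        if used == pages_per_block:
            if not free:
                gc()                 # frees exactly one block
            frontier, used = free.pop(), 0
        old = home.get(lba)
        if old is not None:
            live[old].discard(lba)
        home[lba] = frontier
        live[frontier].add(lba)
        used += 1
        physical += 1

    for lba in range(n_lbas):        # initial sequential fill
        place(lba)
    base = physical

    for _ in range(host_writes):     # steady state: random overwrites
        place(rng.randrange(n_lbas))
    return (physical - base) / host_writes

for u in (0.50, 0.75, 0.90):
    print(f"live fraction {u:.2f}: WA ~= {simulate_wa(utilization=u):.2f}")

Running that at 50/75/90% live data shows the knee you describe as the 
victim blocks carry more live pages.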

It is also important to write enough data to flush through any DRAM/SRAM-like 
cache the device might have that could absorb tiny writes.

The parts I played with ranged from roughly 3x write amplification for the 
workload I ran down to more like 1.3x (measured very coarsely from the 
application-level I/O dispatched, so all metadata, etc. counted as "WA" in 
my rough accounting). I was just trying to figure out, for a given IO/fs 
stack, how specific devices handle the user workload.
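
For the coarse measurement, something along these lines gives the host-side 
share of that amplification (fs metadata, journal, etc.); the device-internal 
part has to be inferred from the endurance/life_time numbers. The device name, 
path, and workload here are just placeholders:

#!/usr/bin/env python3
# Coarse host-side write amplification: application bytes written vs.
# sectors the block layer actually sent to the device.  Assumes nothing
# else is writing to the device during the measurement window.
import os

DEV_STAT = "/sys/block/mmcblk0/stat"     # device name is an assumption

def sectors_written():
    # Field 7 of /sys/block/<dev>/stat is sectors written (512-byte units).
    return int(open(DEV_STAT).read().split()[6])

before = sectors_written()

app_bytes = 0
with open("/mnt/emmc/testfile", "wb") as f:
    for _ in range(256):                 # ~256 MB of app-level writes
        buf = os.urandom(1024 * 1024)
        f.write(buf)
        app_bytes += len(buf)
    os.fsync(f.fileno())

after = sectors_written()
host_wa = (after - before) * 512 / app_bytes
print(f"host-side WA (fs metadata, journal, etc.): {host_wa:.2f}x")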

Regards,

Ric


