From: Ric Wheeler <>
To: Jaegeuk Kim <>,
	Bart Van Assche <>
Cc: Matthew Wilcox <>,
	Linux FS Devel <>,
Subject: Re: [LSF/MM/BPF TOPIC] durability vs performance for flash devices (especially embedded!)
Date: Thu, 10 Jun 2021 12:22:40 -0400	[thread overview]
Message-ID: <> (raw)
In-Reply-To: <>

On 6/9/21 5:32 PM, Jaegeuk Kim wrote:
> On Wed, Jun 9, 2021 at 11:47 AM Bart Van Assche <> wrote:
>     On 6/9/21 11:30 AM, Matthew Wilcox wrote:
>     > maybe you should read the paper.
>     >
>     > " Thiscomparison demonstrates that using F2FS, a flash-friendly file
>     > sys-tem, does not mitigate the wear-out problem, except inasmuch asit
>     > inadvertently rate limitsallI/O to the device"
> Do you agree with that statement based on your insight? At least to me, that
> paper is missing the fundamental GC problem which was supposed to be
> evaluated by real workloads instead of using a simple benchmark generating
> 4KB random writes only. And, they should have investigated FTL/IO patterns
> in more detail, including UNMAP and LBA alignment between host and storage,
> which all affect WAF. Based on that, zoned devices look quite promising
> to me, since they can address LBA alignment entirely and give the host
> SW stack a way to control QoS.

Just a note: using a pretty simple and optimal streaming write pattern, I have 
been able to burn out eMMC parts in a little over a week.
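For a rough sense of why a week is plausible, here is a back-of-the-envelope
calculation. The capacity, P/E cycle rating, and write speed below are
illustrative assumptions, not the figures for the actual part tested:

```python
# Assumed figures for a small consumer eMMC part (hypothetical, not
# the device under test): 8 GB capacity, ~3000 program/erase cycles,
# sustained sequential writes at 40 MB/s.
capacity_bytes = 8 * 10**9
pe_cycles = 3000
write_bytes_per_sec = 40 * 10**6

# Total write budget before the NAND is rated as worn out (~24 TB).
total_write_budget = capacity_bytes * pe_cycles

# Days of continuous sequential writing needed to consume that budget.
days_to_exhaust = total_write_budget / write_bytes_per_sec / 86400
print(f"~{days_to_exhaust:.1f} days of continuous writes")
```

Under these assumptions the budget is exhausted in roughly seven days of
nonstop writing, consistent with the observation above.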

My test case creates a 1GB file (filled with random data just in case the 
device was looking for zero blocks to ignore) and then loops, copying and 
syncing that file until the eMMC device reported its lifetime as exhausted.
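That loop can be sketched as a small shell script. The directory, file size,
and loop count below are placeholders for a quick demo, not the exact setup
used; the sysfs path in the comment is where mainline Linux exposes the
eMMC's own wear estimate:

```shell
#!/bin/sh
# Burn-in sketch: point DIR at the eMMC mount point, raise SIZE_MB
# to 1024 for a 1 GB file, and set LOOPS=0 to run until the device
# reports its life time as exhausted.
DIR=/tmp/emmc_burnin   # hypothetical; use your eMMC mount point
SIZE_MB=1              # demo size; 1024 for the 1 GB test file
LOOPS=3                # demo count; 0 = loop until interrupted

mkdir -p "$DIR"

# Fill the source file with random data so the device cannot cheat
# by recognizing and skipping zero-filled blocks.
dd if=/dev/urandom of="$DIR/seed.img" bs=1M count="$SIZE_MB" 2>/dev/null

i=0
while [ "$LOOPS" -eq 0 ] || [ "$i" -lt "$LOOPS" ]; do
    cp "$DIR/seed.img" "$DIR/copy.img"
    sync               # force the copy out to the device
    i=$((i + 1))
    # On real hardware, check the reported wear between passes, e.g.:
    #   cat /sys/bus/mmc/devices/mmc*/life_time
done
echo "completed $i copy+sync passes"
```

Reading `life_time` between passes is what lets the loop's progress be
correlated with the device's own end-of-life estimate.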

This was a clean, best-case sequential write, so this is not just an issue 
with small, random writes.

Of course, it is normal for these parts to wear out eventually, but for the 
super low end parts, any extra writes added by our stack are costly given how 
little life they have....



> The topic has been a long-standing issue in flash area for multiple years and
> it'd be exciting to see any new ideas.
>     It seems like my email was not clear enough? What I tried to make clear
>     is that I think that there is no way to solve the flash wear issue with
>     the traditional block interface. I think that F2FS in combination with
>     the zone interface is an effective solution.
>     What is also relevant in this context is that the "Flash drive lifespan
>     is a problem" paper was published in 2017. I think that the first
>     commercial SSDs with a zone interface became available at a later time
>     (summer of 2020?).
>     Bart.


Thread overview: 14+ messages
2021-06-09 10:53 [LSF/MM/BPF TOPIC] durability vs performance for flash devices (especially embedded!) Ric Wheeler
2021-06-09 18:05 ` Bart Van Assche
2021-06-09 18:30   ` Matthew Wilcox
2021-06-09 18:47     ` Bart Van Assche
2021-06-10  0:16       ` Damien Le Moal
2021-06-10  1:11         ` Ric Wheeler
2021-06-10  1:20       ` Ric Wheeler
2021-06-10 11:07         ` Tim Walker
2021-06-10 16:38           ` Keith Busch
     [not found]       ` <>
2021-06-10 16:22         ` Ric Wheeler [this message]
2021-06-10 17:06           ` Matthew Wilcox
2021-06-10 17:25             ` Ric Wheeler
2021-06-10 17:57           ` Viacheslav Dubeyko
2021-06-13 20:41 ` [LSF/MM/BPF TOPIC] SSDFS: LFS file system without GC operations + NAND flash devices lifetime prolongation Viacheslav Dubeyko
