From: Keith Busch <firstname.lastname@example.org>
To: Tim Walker <email@example.com>
Cc: Ric Wheeler <firstname.lastname@example.org>,
Bart Van Assche <email@example.com>,
Matthew Wilcox <firstname.lastname@example.org>,
Linux FS Devel <email@example.com>,
Subject: Re: [LSF/MM/BPF TOPIC] durability vs performance for flash devices (especially embedded!)
Date: Fri, 11 Jun 2021 01:38:08 +0900
Message-ID: <20210610163808.GA26360@redsun51.ssa.fujisawa.hgst.com>
On Thu, Jun 10, 2021 at 11:07:09AM +0000, Tim Walker wrote:
> Wednesday, June 9, 2021 at 9:20:52 PM Ric Wheeler wrote:
> >On 6/9/21 2:47 PM, Bart Van Assche wrote:
> >> On 6/9/21 11:30 AM, Matthew Wilcox wrote:
> >>> maybe you should read the paper.
> >>> " Thiscomparison demonstrates that using F2FS, a flash-friendly file
> >>> sys-tem, does not mitigate the wear-out problem, except inasmuch asit
> >>> inadvertently rate limitsallI/O to the device"
> >> It seems like my email was not clear enough? What I tried to make clear
> >> is that I think that there is no way to solve the flash wear issue with
> >> the traditional block interface. I think that F2FS in combination with
> >> the zone interface is an effective solution.
> >> What is also relevant in this context is that the "Flash drive lifespan
> >> is a problem" paper was published in 2017. I think that the first
> >> commercial SSDs with a zone interface became available at a later time
> >> (summer of 2020?).
> >> Bart.
> >Just to address the zone interface support: unfortunately, new interfaces
> >take a very long time to make it down into the world of embedded parts
> >(eMMC is super common and very primitive, for example). UFS parts are in
> >higher-end devices; I have not had a chance to look at what they offer.
> For zoned block devices, particularly the sequential write zones,
> maybe it makes more sense for the device to manage wear leveling on a
> zone-by-zone basis. It seems like it could be pretty easy for a device
> to decide which head/die to select for a given zone when the zone is
> initially opened after the last write pointer reset.
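(For reference, the write pointer reset Tim describes is what the kernel
exposes to user space through the blkzoned ioctls in <linux/blkzoned.h> on
recent kernels. The sketch below is only illustrative: the device path is
hypothetical and error handling is minimal. It resets a single zone, after
which the device is free to place that zone's next writes on whichever
die/erase blocks it prefers.)

/* Hedged sketch: reset one zone's write pointer on a zoned block device. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/blkzoned.h>

int main(void)
{
	int fd = open("/dev/nvme0n1", O_RDWR);	/* hypothetical zoned device */
	__u32 zone_sectors = 0;			/* zone size in 512-byte sectors */

	if (fd < 0 || ioctl(fd, BLKGETZONESZ, &zone_sectors) < 0) {
		perror("open/BLKGETZONESZ");
		return 1;
	}

	/* Reset the write pointer of the second zone (zone index 1). */
	struct blk_zone_range range = {
		.sector = zone_sectors,		/* start sector of zone 1 */
		.nr_sectors = zone_sectors,	/* exactly one zone */
	};
	if (ioctl(fd, BLKRESETZONE, &range) < 0)
		perror("BLKRESETZONE");

	close(fd);
	return 0;
}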
I think device-managed wear leveling was the point of zoned SSDs. If the
host were managing that, then that's pretty much an open-channel SSD.
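(As a small aside, whether a given device expects the host to respect zone
semantics at all is visible from user space via the queue/zoned sysfs
attribute, which reports "none", "host-aware" or "host-managed". A minimal
sketch, with a hypothetical device name:)

/* Hedged sketch: print the zone model a block device reports. */
#include <stdio.h>

int main(void)
{
	char model[32];
	FILE *f = fopen("/sys/block/nvme0n1/queue/zoned", "r"); /* hypothetical */

	if (!f) {
		perror("fopen");
		return 1;
	}
	if (fgets(model, sizeof(model), f))
		printf("zone model: %s", model);	/* e.g. "host-managed" */
	fclose(f);
	return 0;
}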
Thread overview: 14+ messages
2021-06-09 10:53 [LSF/MM/BPF TOPIC] durability vs performance for flash devices (especially embedded!) Ric Wheeler
2021-06-09 18:05 ` Bart Van Assche
2021-06-09 18:30 ` Matthew Wilcox
2021-06-09 18:47 ` Bart Van Assche
2021-06-10 0:16 ` Damien Le Moal
2021-06-10 1:11 ` Ric Wheeler
2021-06-10 1:20 ` Ric Wheeler
2021-06-10 11:07 ` Tim Walker
2021-06-10 16:38 ` Keith Busch [this message]
[not found] ` <CAOtxgyeRf=+grEoHxVLEaSM=Yfx4KrSG5q96SmztpoWfP=QrDg@mail.gmail.com>
2021-06-10 16:22 ` Ric Wheeler
2021-06-10 17:06 ` Matthew Wilcox
2021-06-10 17:25 ` Ric Wheeler
2021-06-10 17:57 ` Viacheslav Dubeyko
2021-06-13 20:41 ` [LSF/MM/BPF TOPIC] SSDFS: LFS file system without GC operations + NAND flash devices lifetime prolongation Viacheslav Dubeyko