From: Damien Le Moal <Damien.LeMoal@wdc.com>
To: Jeff Moyer <jmoyer@redhat.com>
Cc: "linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>,
	"linux-xfs@vger.kernel.org" <linux-xfs@vger.kernel.org>,
	Christoph Hellwig <hch@lst.de>,
	Johannes Thumshirn <jthumshirn@suse.de>,
	Hannes Reinecke <hare@suse.de>, Ting Yao <d201577678@hust.edu.cn>
Subject: Re: [PATCH RFC] fs: New zonefs file system
Date: Sat, 20 Jul 2019 01:07:25 +0000
Message-ID: <BYAPR04MB5816A2630B1EBC0696CBEC71E7CA0@BYAPR04MB5816.namprd04.prod.outlook.com>
In-Reply-To: <x49h87iqexz.fsf@segfault.boston.devel.redhat.com>

On 2019/07/19 23:25, Jeff Moyer wrote:
> Hi, Damien,
> 
> Thanks for your well-considered response.
> 
> Damien Le Moal <Damien.LeMoal@wdc.com> writes:
> 
>> Jeff,
>>
>> On 2019/07/18 23:11, Jeff Moyer wrote:
>>> Hi, Damien,
>>>
>>> Did you consider creating a shared library?  I bet that would also
>>> ease application adoption for the use cases you're interested in, and
>>> would have similar performance.
>>>
>>> -Jeff
>>
>> Yes, it would, but to a lesser extent, since system calls would need to be
>> replaced with library calls. Earlier work on LevelDB by Ting used the library
>> approach with libzbc; not quite a "libzonefs", but close enough. Working with
>> the LevelDB code gave me the idea for zonefs. Compared to a library, the added
>> benefits are that language bindings are not a problem and that the code
>> changes needed to support zoned block devices are further simplified. In the
>> case of LevelDB, for instance, C++ is used and file accesses go through
>> streams, which makes using a library a little awkward and requires more
>> changes just for the internal application API itself; the needed changes
>> spread beyond the device access API.
>>
>> This is, I think, the main advantage of this simple in-kernel FS over a
>> library: the developer can focus on the zoned-block-device-specific needs
>> (sequential write pattern and garbage collection) and forget about the device
>> access parts, since the standard system call API can be used.
> 
> OK, I can see how a file system eases adoption across multiple
> languages, and may, in some cases, be easier to adopt by applications.
> However, I'm not a fan of the file system interface for this usage.
> Once you present a file system, there are certain expectations from
> users, and this fs breaks most of them.

Yes, that is true. zonefs differs significantly from regular file systems. But I
would argue that breaking user expectations is acceptable here, because it
happens if and only if the user does not understand the hardware they are
dealing with. I still regularly get emails about mkfs.ext4 not working with SMR
drives :)
In other words, since kernel 4.10 and the exposure of host-managed SMR HDDs as
regular block device files, we have already, in a sense, been breaking legacy
user expectations about what sits behind a device file... So I am not too
worried about this point.

If zonefs makes it into the kernel, I will probably get more "it does not
work!" emails until SMR drive users out there understand what they are dealing
with. We are making a serious effort to document everything related to zoned
block devices; see https://zonedstorage.io. zonefs, if included in the kernel,
will be part of that documentation effort. Of note, this documentation is
external to the kernel; we still need to increase our contribution to the
kernel docs for zoned block devices, and we will.

> I'll throw out another suggestion that may or may not work (I haven't
> given it much thought).  Would it be possible to create a device mapper
> target that would export each zone as a separate block device?  I
> understand that wouldn't help with the write pointer management, but it
> would allow you to create a single "file" for each zone.

Well, I do not think you need a new device mapper target for this. dm-linear
supports zoned block devices and will happily map a single zone and expose a
block device file for it. My problem with this approach is that SMR drives are
huge, and getting bigger. A 15 TB drive has 55380 zones of 256 MB, and upcoming
20 TB drives have more than 75000 zones. Using dm-linear or any per-zone device
mapper target would create huge resource pressure: the memory used per zone
alone would be much higher than with a file system, and the setup would take
far longer to complete than a zonefs mount.
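
To give a sense of the scale, here is a minimal sketch (the 256 MB zone size,
the zone count and /dev/sdb are assumptions taken from the example above) of
the per-zone dm-linear setup this would imply, one dmsetup invocation per zone:

/*
 * Print the dmsetup commands needed to expose every 256 MB zone of a
 * 15 TB SMR drive as its own dm-linear block device: one target per
 * zone, 55380 in total.
 */
#include <stdio.h>

int main(void)
{
	const unsigned long long zone_sectors = 256ULL * 1024 * 1024 / 512;
	const unsigned long nr_zones = 55380;

	for (unsigned long z = 0; z < nr_zones; z++)
		printf("dmsetup create zone%lu --table \"0 %llu linear /dev/sdb %llu\"\n",
		       z, zone_sectors, z * zone_sectors);

	return 0;
}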

>> Another approach I considered is using FUSE, but I went with a regular
>> (albeit simple) in-kernel approach due to performance concerns. While any
>> difference in performance for SMR HDDs would probably not be noticeable,
>> performance would likely be lower for upcoming NVMe Zoned Namespace devices
>> compared to the in-kernel approach.
>>
>> But granted, the arguments I can put forward for an in-kernel FS solution
>> over a userspace shared library are mostly subjective. I think, though, that
>> having support provided directly by the kernel brings zoned block devices
>> into the "mainstream storage options" rather than having them perceived as
>> fringe solutions that need additional libraries to work correctly. Zoned
>> block devices are not going away and may in fact become more mainstream, as
>> implementing higher capacities depends more and more on the sequential write
>> interface.
> 
> A file system like this would further cement in my mind that zoned block
> devices are not mainstream storage options.  I guess this part is
> highly subjective.  :)

Yes, it is subjective :) Many (even large-scale) data centers are already
switching to "all SMR" back-end storage, relying on traditional block devices
(mostly SSDs) for active data. For these systems, SMR is a mainstream solution.

When saying "mainstream", I was referring more to the software needed to use
these drives than to the drives themselves as a solution. zonefs maps the zone
sequential write constraint onto a well-known concept: appending to a file. In
that sense, I think we can consider zonefs progress.
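
To make the append mapping concrete, here is a rough sketch of what application
code could look like. The mount point, the per-zone file name and the
O_APPEND/O_DIRECT write constraints are assumptions for illustration, not a
description of the exact RFC semantics:

#define _GNU_SOURCE	/* for O_DIRECT */
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	/* Hypothetical zone file; actual naming depends on the zonefs layout. */
	int fd = open("/mnt/zonefs/seq/0", O_WRONLY | O_APPEND | O_DIRECT);
	if (fd < 0)
		return 1;

	/* O_DIRECT needs an aligned buffer; 4 KB covers typical block sizes. */
	void *buf;
	if (posix_memalign(&buf, 4096, 4096))
		return 1;
	memset(buf, 0xab, 4096);

	/* The append lands at the zone write pointer, i.e. the end of the file. */
	if (write(fd, buf, 4096) != 4096)
		return 1;

	free(buf);
	close(fd);
	return 0;
}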

Ideally, I would not bother with this at all and would point people to Btrfs
(work in progress) or XFS (next in the pipeline) for using an FS on zoned block
devices. But there is definitely a tendency for many distributed applications
to try to do without any file system at all (Ceph -> BlueStore is one example).
And, as mentioned before, there are other use cases where a fully POSIX
compliant file system is not really necessary (LevelDB, RocksDB). zonefs fits
in the middle ground between removing the normal file system and going to the
raw block device. BlueStore has "bluefs" underneath RocksDB after all: one can
rarely go directly to the raw block device, and in most cases some level of
abstraction (space management) is needed. That is where zonefs fits.

Best regards.

-- 
Damien Le Moal
Western Digital Research
