linux-kernel.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
* [PATCH 0/2] New zonefs file system
@ 2019-12-12 18:38 Damien Le Moal
  2019-12-12 18:38 ` [PATCH 1/2] fs: " Damien Le Moal
                   ` (2 more replies)
  0 siblings, 3 replies; 10+ messages in thread
From: Damien Le Moal @ 2019-12-12 18:38 UTC (permalink / raw)
  To: linux-fsdevel, linux-xfs, linux-kernel, Linus Torvalds
  Cc: Johannes Thumshirn, Naohiro Aota, Darrick J . Wong, Hannes Reinecke

zonefs is a very simple file system exposing each zone of a zoned block
device as a file. Unlike a regular file system with zoned block device
support (e.g. f2fs or the on-going btrfs effort), zonefs does not hide
the sequential write constraint of zoned block devices to the user.
Files representing sequential write zones of the device must be written
sequentially starting from the end of the file (append only writes).

zonefs is not a POSIX compliant file system. It's goal is to simplify
the implementation of zoned block devices support in applications by
replacing raw block device file accesses with a richer file based API,
avoiding relying on direct block device file ioctls which may
be more obscure to developers. One example of this approach is the
implementation of LSM (log-structured merge) tree structures (such as
used in RocksDB and LevelDB) on zoned block devices by allowing SSTables
to be stored in a zone file similarly to a regular file system rather
than as a range of sectors of a zoned device. The introduction of the
higher level construct "one file is one zone" can help reducing the
amount of changes needed in the application while at the same time
allowing the use of zoned block devices with various programming
languages other than C.

zonefs IO management implementation uses the new iomap generic code.

Damien Le Moal (2):
  fs: New zonefs file system
  zonefs: Add documentation

 Documentation/filesystems/zonefs.txt |  150 ++++
 MAINTAINERS                          |   10 +
 fs/Kconfig                           |    1 +
 fs/Makefile                          |    1 +
 fs/zonefs/Kconfig                    |    9 +
 fs/zonefs/Makefile                   |    4 +
 fs/zonefs/super.c                    | 1158 ++++++++++++++++++++++++++
 fs/zonefs/zonefs.h                   |  169 ++++
 include/uapi/linux/magic.h           |    1 +
 9 files changed, 1503 insertions(+)
 create mode 100644 Documentation/filesystems/zonefs.txt
 create mode 100644 fs/zonefs/Kconfig
 create mode 100644 fs/zonefs/Makefile
 create mode 100644 fs/zonefs/super.c
 create mode 100644 fs/zonefs/zonefs.h

-- 
2.23.0


^ permalink raw reply	[flat|nested] 10+ messages in thread

* [PATCH 1/2] fs: New zonefs file system
  2019-12-12 18:38 [PATCH 0/2] New zonefs file system Damien Le Moal
@ 2019-12-12 18:38 ` Damien Le Moal
  2019-12-16  8:36   ` Hannes Reinecke
  2019-12-12 18:38 ` [PATCH 2/2] zonefs: Add documentation Damien Le Moal
  2019-12-16  8:18 ` [PATCH 0/2] New zonefs file system Enrico Weigelt, metux IT consult
  2 siblings, 1 reply; 10+ messages in thread
From: Damien Le Moal @ 2019-12-12 18:38 UTC (permalink / raw)
  To: linux-fsdevel, linux-xfs, linux-kernel, Linus Torvalds
  Cc: Johannes Thumshirn, Naohiro Aota, Darrick J . Wong, Hannes Reinecke

zonefs is a very simple file system exposing each zone of a zoned block
device as a file. Unlike a regular file system with zoned block device
support (e.g. f2fs), zonefs does not hide the sequential write
constraint of zoned block devices to the user. Files representing
sequential write zones of the device must be written sequentially
starting from the end of the file (append only writes).

As such, zonefs is in essence closer to a raw block device access
interface than to a full featured POSIX file system. The goal of zonefs
is to simplify the implementation of zoned block devices support in
applications by replacing raw block device file accesses with a richer
file API, avoiding relying on direct block device file ioctls which may
be more obscure to developers. One example of this approach is the
implementation of LSM (log-structured merge) tree structures (such as
used in RocksDB and LevelDB) on zoned block devices by allowing SSTables
to be stored in a zone file similarly to a regular file system rather
than as a range of sectors of a zoned device. The introduction of the
higher level construct "one file is one zone" can help reducing the
amount of changes needed in the application as well as introducing
support for different application programming languages.

Zonefs on-disk metadata is reduced to an immutable super block to
persistently store a magic number and optional features flags and
values. On mount, zonefs uses blkdev_report_zones() to obtain the device
zone configuration and populates the mount point with a static file tree
solely based on this information. E.g. file sizes come from the device
zone type and write pointer offset managed by the device itself.

The zone files created on mount have the following characteristics.
1) Files representing zones of the same type are grouped together
   under a common sub-directory:
     * For conventional zones, the sub-directory "cnv" is used.
     * For sequential write zones, the sub-directory "seq" is used.
  These two directories are the only directories that exist in zonefs.
  Users cannot create other directories and cannot rename nor delete
  the "cnv" and "seq" sub-directories.
2) The name of zone files is the number of the file within the zone
   type sub-directory, in order of increasing zone start sector.
3) The size of conventional zone files is fixed to the device zone size.
   Conventional zone files cannot be truncated.
4) The size of sequential zone files represent the file's zone write
   pointer position relative to the zone start sector. Truncating these
   files is allowed only down to 0, in wich case, the zone is reset to
   rewind the zone write pointer position to the start of the zone, or
   up to the zone size, in which case the file's zone is transitioned
   to the FULL state (finish zone operation).
5) All read and write operations to files are not allowed beyond the
   file zone size. Any access exceeding the zone size is failed with
   the -EFBIG error.
6) Creating, deleting, renaming or modifying any attribute of files and
   sub-directories is not allowed.

Several optional features of zonefs can be enabled at format time.
* Conventional zone aggregation: ranges of contiguous conventional
  zones can be agregated into a single larger file instead of the
  default one file per zone.
* File ownership: The owner UID and GID of zone files is by default 0
  (root) but can be changed to any valid UID/GID.
* File access permissions: the default 640 access permissions can be
  changed.

The mkzonefs tool is used to format zoned block devices for use with
zonefs. This tool is available on Github at:

git@github.com:damien-lemoal/zonefs-tools.git.

zonefs-tools also includes a test suite which can be run against any
zoned block device, including null_blk block device created with zoned
mode.

Example: the following formats a 15TB host-managed SMR HDD with 256 MB
zones with the conventional zones aggregation feature enabled.

$ sudo mkzonefs -o aggr_cnv /dev/sdX
$ sudo mount -t zonefs /dev/sdX /mnt
$ ls -l /mnt/
total 0
dr-xr-xr-x 2 root root     1 Nov 25 13:23 cnv
dr-xr-xr-x 2 root root 55356 Nov 25 13:23 seq

The size of the zone files sub-directories indicate the number of files
existing for each type of zones. In this example, there is only one
conventional zone file (all conventional zones are agreggated under a
single file).

$ ls -l /mnt/cnv
total 137101312
-rw-r----- 1 root root 140391743488 Nov 25 13:23 0

This aggregated conventional zone file can be used as a regular file.

$ sudo mkfs.ext4 /mnt/cnv/0
$ sudo mount -o loop /mnt/cnv/0 /data

The "seq" sub-directory grouping files for sequential write zones has
in this example 55356 zones.

$ ls -lv /mnt/seq
total 14511243264
-rw-r----- 1 root root 0 Nov 25 13:23 0
-rw-r----- 1 root root 0 Nov 25 13:23 1
-rw-r----- 1 root root 0 Nov 25 13:23 2
...
-rw-r----- 1 root root 0 Nov 25 13:23 55354
-rw-r----- 1 root root 0 Nov 25 13:23 55355

For sequential write zone files, the file size changes as data is
appended at the end of the file, similarly to any regular file system.

$ dd if=/dev/zero of=/mnt/seq/0 bs=4K count=1 conv=notrunc oflag=direct
1+0 records in
1+0 records out
4096 bytes (4.1 kB, 4.0 KiB) copied, 1.05112 s, 3.9 kB/s

$ ls -l /mnt/seq/0
-rw-r----- 1 root root 4096 Nov 25 13:23 /mnt/sdh/seq/0

The written file can be truncated to the zone size, prventing any
further write operation.

$ truncate -s 268435456 /mnt/seq/0
$ ls -l /mnt/seq/0
-rw-r----- 1 root root 268435456 Nov 25 13:49 /mnt/seq/0

Truncation to 0 size allows freeing the file zone storage space and
restart append-writes to the file.

$ truncate -s 0 /mnt/seq/0
$ ls -l /mnt/seq/0
-rw-r----- 1 root root 0 Nov 25 13:49 /mnt/seq/0

Since files are statically mapped to zones on the disk, the number of
blocks of a file as reported by stat() and fstat() indicates the size
of the file zone.

$ stat /mnt/seq/0
  File: /mnt/seq/0
  Size: 0       Blocks: 524288     IO Block: 4096   regular empty file
Device: 870h/2160d      Inode: 50431       Links: 1
Access: (0640/-rw-r-----)  Uid: (    0/    root)   Gid: (    0/  root)
Access: 2019-11-25 13:23:57.048971997 +0900
Modify: 2019-11-25 13:52:25.553805765 +0900
Change: 2019-11-25 13:52:25.553805765 +0900
 Birth: -

The number of blocks of the file ("Blocks") in units of 512B blocks
gives the maximum file size of 524288 * 512 B = 256 MB, corresponding
to the device zone size in this example. Of note is that the "IO block"
field always indicates the minimum IO size for writes and corresponds
to the device physical sector size.

This code contains contributions from:
* Johannes Thumshirn <jthumshirn@suse.de>,
* Darrick J. Wong <darrick.wong@oracle.com>,
* Christoph Hellwig <hch@lst.de>,
* Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com> and
* Ting Yao <tingyao@hust.edu.cn>.

Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
---
 MAINTAINERS                |    9 +
 fs/Kconfig                 |    1 +
 fs/Makefile                |    1 +
 fs/zonefs/Kconfig          |    9 +
 fs/zonefs/Makefile         |    4 +
 fs/zonefs/super.c          | 1158 ++++++++++++++++++++++++++++++++++++
 fs/zonefs/zonefs.h         |  169 ++++++
 include/uapi/linux/magic.h |    1 +
 8 files changed, 1352 insertions(+)
 create mode 100644 fs/zonefs/Kconfig
 create mode 100644 fs/zonefs/Makefile
 create mode 100644 fs/zonefs/super.c
 create mode 100644 fs/zonefs/zonefs.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 02d5278a4c9a..0641167ed2ea 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -18282,6 +18282,15 @@ L:	linux-kernel@vger.kernel.org
 S:	Maintained
 F:	arch/x86/kernel/cpu/zhaoxin.c
 
+ZONEFS FILESYSTEM
+M:	Damien Le Moal <damien.lemoal@wdc.com>
+M:	Naohiro Aota <naohiro.aota@wdc.com>
+R:	Johannes Thumshirn <jth@kernel.org>
+L:	linux-fsdevel@vger.kernel.org
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/dlemoal/zonefs.git
+S:	Maintained
+F:	fs/zonefs/
+
 ZPOOL COMPRESSED PAGE STORAGE API
 M:	Dan Streetman <ddstreet@ieee.org>
 L:	linux-mm@kvack.org
diff --git a/fs/Kconfig b/fs/Kconfig
index 7b623e9fc1b0..a3f97ca2bd46 100644
--- a/fs/Kconfig
+++ b/fs/Kconfig
@@ -40,6 +40,7 @@ source "fs/ocfs2/Kconfig"
 source "fs/btrfs/Kconfig"
 source "fs/nilfs2/Kconfig"
 source "fs/f2fs/Kconfig"
+source "fs/zonefs/Kconfig"
 
 config FS_DAX
 	bool "Direct Access (DAX) support"
diff --git a/fs/Makefile b/fs/Makefile
index 1148c555c4d3..527f228a5e8a 100644
--- a/fs/Makefile
+++ b/fs/Makefile
@@ -133,3 +133,4 @@ obj-$(CONFIG_CEPH_FS)		+= ceph/
 obj-$(CONFIG_PSTORE)		+= pstore/
 obj-$(CONFIG_EFIVAR_FS)		+= efivarfs/
 obj-$(CONFIG_EROFS_FS)		+= erofs/
+obj-$(CONFIG_ZONEFS_FS)		+= zonefs/
diff --git a/fs/zonefs/Kconfig b/fs/zonefs/Kconfig
new file mode 100644
index 000000000000..6490547e9763
--- /dev/null
+++ b/fs/zonefs/Kconfig
@@ -0,0 +1,9 @@
+config ZONEFS_FS
+	tristate "zonefs filesystem support"
+	depends on BLOCK
+	depends on BLK_DEV_ZONED
+	help
+	  zonefs is a simple File System which exposes zones of a zoned block
+	  device as files.
+
+	  If unsure, say N.
diff --git a/fs/zonefs/Makefile b/fs/zonefs/Makefile
new file mode 100644
index 000000000000..75a380aa1ae1
--- /dev/null
+++ b/fs/zonefs/Makefile
@@ -0,0 +1,4 @@
+# SPDX-License-Identifier: GPL-2.0
+obj-$(CONFIG_ZONEFS_FS) += zonefs.o
+
+zonefs-y	:= super.o
diff --git a/fs/zonefs/super.c b/fs/zonefs/super.c
new file mode 100644
index 000000000000..5a2558cae3e3
--- /dev/null
+++ b/fs/zonefs/super.c
@@ -0,0 +1,1158 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Simple zone file system for zoned block devices.
+ *
+ * Copyright (C) 2019 Western Digital Corporation or its affiliates.
+ */
+#include <linux/module.h>
+#include <linux/fs.h>
+#include <linux/magic.h>
+#include <linux/iomap.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/blkdev.h>
+#include <linux/statfs.h>
+#include <linux/writeback.h>
+#include <linux/quotaops.h>
+#include <linux/seq_file.h>
+#include <linux/parser.h>
+#include <linux/uio.h>
+#include <linux/mman.h>
+#include <linux/sched/mm.h>
+#include <linux/crc32.h>
+
+#include "zonefs.h"
+
+static int zonefs_iomap_begin(struct inode *inode, loff_t offset, loff_t length,
+			      unsigned int flags, struct iomap *iomap,
+			      struct iomap *srcmap)
+{
+	struct zonefs_sb_info *sbi = ZONEFS_SB(inode->i_sb);
+	struct zonefs_inode_info *zi = ZONEFS_I(inode);
+	loff_t max_isize = zi->i_max_size;
+	loff_t isize;
+
+	/*
+	 * For sequential zones, enforce direct IO writes. This is already
+	 * checked when writes are issued, so warn about this here if we
+	 * get buffered write to a sequential file inode.
+	 */
+	if (WARN_ON_ONCE(zi->i_ztype == ZONEFS_ZTYPE_SEQ &&
+			 (flags & IOMAP_WRITE) && !(flags & IOMAP_DIRECT)))
+		return -EIO;
+
+	/*
+	 * For all zones, all blocks are always mapped. For sequential zones,
+	 * all blocks after the write pointer (inode size) are always unwritten.
+	 */
+	mutex_lock(&zi->i_truncate_mutex);
+	isize = i_size_read(inode);
+	if (offset >= isize) {
+		length = min(length, max_isize - offset);
+		if (zi->i_ztype == ZONEFS_ZTYPE_CNV)
+			iomap->type = IOMAP_MAPPED;
+		else
+			iomap->type = IOMAP_UNWRITTEN;
+	} else {
+		length = min(length, isize - offset);
+		iomap->type = IOMAP_MAPPED;
+	}
+	mutex_unlock(&zi->i_truncate_mutex);
+
+	iomap->offset = offset & (~sbi->s_blocksize_mask);
+	iomap->length = ((offset + length + sbi->s_blocksize_mask) &
+			 (~sbi->s_blocksize_mask)) - iomap->offset;
+	iomap->bdev = inode->i_sb->s_bdev;
+	iomap->addr = (zi->i_zsector << SECTOR_SHIFT) + iomap->offset;
+
+	return 0;
+}
+
+static const struct iomap_ops zonefs_iomap_ops = {
+	.iomap_begin	= zonefs_iomap_begin,
+};
+
+static int zonefs_readpage(struct file *unused, struct page *page)
+{
+	return iomap_readpage(page, &zonefs_iomap_ops);
+}
+
+static int zonefs_readpages(struct file *unused, struct address_space *mapping,
+			    struct list_head *pages, unsigned int nr_pages)
+{
+	return iomap_readpages(mapping, pages, nr_pages, &zonefs_iomap_ops);
+}
+
+static int zonefs_map_blocks(struct iomap_writepage_ctx *wpc,
+			     struct inode *inode, loff_t offset)
+{
+	if (offset >= wpc->iomap.offset &&
+	    offset < wpc->iomap.offset + wpc->iomap.length)
+		return 0;
+
+	memset(&wpc->iomap, 0, sizeof(wpc->iomap));
+	return zonefs_iomap_begin(inode, offset, ZONEFS_I(inode)->i_max_size,
+				  0, &wpc->iomap, NULL);
+}
+
+static const struct iomap_writeback_ops zonefs_writeback_ops = {
+	.map_blocks		= zonefs_map_blocks,
+};
+
+static int zonefs_writepage(struct page *page, struct writeback_control *wbc)
+{
+	struct iomap_writepage_ctx wpc = { };
+
+	return iomap_writepage(page, wbc, &wpc, &zonefs_writeback_ops);
+}
+
+static int zonefs_writepages(struct address_space *mapping,
+			     struct writeback_control *wbc)
+{
+	struct iomap_writepage_ctx wpc = { };
+
+	return iomap_writepages(mapping, wbc, &wpc, &zonefs_writeback_ops);
+}
+
+static const struct address_space_operations zonefs_file_aops = {
+	.readpage		= zonefs_readpage,
+	.readpages		= zonefs_readpages,
+	.writepage		= zonefs_writepage,
+	.writepages		= zonefs_writepages,
+	.set_page_dirty		= iomap_set_page_dirty,
+	.releasepage		= iomap_releasepage,
+	.invalidatepage		= iomap_invalidatepage,
+	.migratepage		= iomap_migrate_page,
+	.is_partially_uptodate  = iomap_is_partially_uptodate,
+	.error_remove_page	= generic_error_remove_page,
+	.direct_IO		= noop_direct_IO,
+};
+
+static void zonefs_update_stats(struct inode *inode, loff_t new_isize)
+{
+	struct super_block *sb = inode->i_sb;
+	struct zonefs_sb_info *sbi = ZONEFS_SB(sb);
+	loff_t old_isize = i_size_read(inode);
+
+	if (new_isize == old_isize)
+		return;
+
+	spin_lock(&sbi->s_lock);
+
+	if (!new_isize) {
+		/* File truncated to 0 */
+		sbi->s_used_blocks -= old_isize >> sb->s_blocksize_bits;
+	} else if (new_isize > old_isize) {
+		/* File written or truncated to max size */
+		sbi->s_used_blocks +=
+			(new_isize - old_isize) >> sb->s_blocksize_bits;
+	} else {
+		/* Sequential zone files can only grow or be truncated to 0 */
+		WARN_ON(new_isize < old_isize);
+	}
+
+	spin_unlock(&sbi->s_lock);
+}
+
+static int zonefs_seq_file_truncate(struct inode *inode, loff_t isize)
+{
+	struct zonefs_inode_info *zi = ZONEFS_I(inode);
+	loff_t old_isize;
+	enum req_opf op;
+	int ret = 0;
+
+	/*
+	 * For sequential zone files, we can only allow truncating to 0 size,
+	 * which is equivalent to a zone reset, or to the maximum file size,
+	 * which is equivalent toa zone finish.
+	 */
+	if (!isize)
+		op = REQ_OP_ZONE_RESET;
+	else if (isize == zi->i_max_size)
+		op = REQ_OP_ZONE_FINISH;
+	else
+		return -EPERM;
+
+	inode_dio_wait(inode);
+
+	/* Serialize against page faults */
+	down_write(&zi->i_mmap_sem);
+
+	/* Serialize against zonefs_iomap_begin() */
+	mutex_lock(&zi->i_truncate_mutex);
+
+	old_isize = i_size_read(inode);
+	if (isize == old_isize)
+		goto unlock;
+
+	ret = blkdev_zone_mgmt(inode->i_sb->s_bdev, op, zi->i_zsector,
+			       zi->i_max_size >> SECTOR_SHIFT, GFP_NOFS);
+	if (ret) {
+		zonefs_err(inode->i_sb,
+			   "Zone management operation at %llu failed %d",
+			   zi->i_zsector, ret);
+		goto unlock;
+	}
+
+	zonefs_update_stats(inode, isize);
+	truncate_setsize(inode, isize);
+	zi->i_wpoffset = isize;
+
+unlock:
+	mutex_unlock(&zi->i_truncate_mutex);
+	up_write(&zi->i_mmap_sem);
+
+	return ret;
+}
+
+static int zonefs_inode_setattr(struct dentry *dentry, struct iattr *iattr)
+{
+	struct inode *inode = d_inode(dentry);
+	int ret;
+
+	ret = setattr_prepare(dentry, iattr);
+	if (ret)
+		return ret;
+
+	if ((iattr->ia_valid & ATTR_UID &&
+	     !uid_eq(iattr->ia_uid, inode->i_uid)) ||
+	    (iattr->ia_valid & ATTR_GID &&
+	     !gid_eq(iattr->ia_gid, inode->i_gid))) {
+		ret = dquot_transfer(inode, iattr);
+		if (ret)
+			return ret;
+	}
+
+	if (iattr->ia_valid & ATTR_SIZE) {
+		/* The size of conventional zone files cannot be changed */
+		if (ZONEFS_I(inode)->i_ztype == ZONEFS_ZTYPE_CNV)
+			return -EPERM;
+
+		ret = zonefs_seq_file_truncate(inode, iattr->ia_size);
+		if (ret)
+			return ret;
+	}
+
+	setattr_copy(inode, iattr);
+
+	return 0;
+}
+
+static const struct inode_operations zonefs_file_inode_operations = {
+	.setattr	= zonefs_inode_setattr,
+};
+
+static int zonefs_file_fsync(struct file *file, loff_t start, loff_t end,
+			     int datasync)
+{
+	struct inode *inode = file_inode(file);
+	int ret = 0;
+
+	/*
+	 * Since only direct writes are allowed in sequential files, page cache
+	 * flush is needed only for conventional zone files.
+	 */
+	if (ZONEFS_I(inode)->i_ztype == ZONEFS_ZTYPE_CNV) {
+		ret = file_write_and_wait_range(file, start, end);
+		if (ret)
+			return ret;
+		ret = file_check_and_advance_wb_err(file);
+	}
+
+	if (ret == 0)
+		ret = blkdev_issue_flush(inode->i_sb->s_bdev, GFP_KERNEL, NULL);
+
+	return ret;
+}
+
+static vm_fault_t zonefs_filemap_fault(struct vm_fault *vmf)
+{
+	struct zonefs_inode_info *zi = ZONEFS_I(file_inode(vmf->vma->vm_file));
+	vm_fault_t ret;
+
+	down_read(&zi->i_mmap_sem);
+	ret = filemap_fault(vmf);
+	up_read(&zi->i_mmap_sem);
+
+	return ret;
+}
+
+static vm_fault_t zonefs_filemap_page_mkwrite(struct vm_fault *vmf)
+{
+	struct inode *inode = file_inode(vmf->vma->vm_file);
+	struct zonefs_inode_info *zi = ZONEFS_I(inode);
+	vm_fault_t ret;
+
+	sb_start_pagefault(inode->i_sb);
+	file_update_time(vmf->vma->vm_file);
+
+	/* Serialize against truncates */
+	down_read(&zi->i_mmap_sem);
+	ret = iomap_page_mkwrite(vmf, &zonefs_iomap_ops);
+	up_read(&zi->i_mmap_sem);
+
+	sb_end_pagefault(inode->i_sb);
+	return ret;
+}
+
+static const struct vm_operations_struct zonefs_file_vm_ops = {
+	.fault		= zonefs_filemap_fault,
+	.map_pages	= filemap_map_pages,
+	.page_mkwrite	= zonefs_filemap_page_mkwrite,
+};
+
+static int zonefs_file_mmap(struct file *file, struct vm_area_struct *vma)
+{
+	/*
+	 * Conventional zone files can be mmap-ed READ/WRITE.
+	 * For sequential zone files, only readonly mappings are possible.
+	 */
+	if (ZONEFS_I(file_inode(file))->i_ztype == ZONEFS_ZTYPE_SEQ &&
+	    (vma->vm_flags & VM_SHARED) && (vma->vm_flags & VM_MAYWRITE))
+		return -EINVAL;
+
+	file_accessed(file);
+	vma->vm_ops = &zonefs_file_vm_ops;
+
+	return 0;
+}
+
+static loff_t zonefs_file_llseek(struct file *file, loff_t offset, int whence)
+{
+	loff_t isize = i_size_read(file_inode(file));
+
+	/*
+	 * Seeks are limited to below the zone size for conventional zones
+	 * and below the zone write pointer for sequential zones. In both
+	 * cases, this limit is the inode size.
+	 */
+	return generic_file_llseek_size(file, offset, whence, isize, isize);
+}
+
+static ssize_t zonefs_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
+{
+	struct inode *inode = file_inode(iocb->ki_filp);
+	struct zonefs_sb_info *sbi = ZONEFS_SB(inode->i_sb);
+	struct zonefs_inode_info *zi = ZONEFS_I(inode);
+	loff_t max_pos;
+	size_t count;
+	ssize_t ret;
+
+	if (iocb->ki_pos >= zi->i_max_size)
+		return 0;
+
+	if (iocb->ki_flags & IOCB_NOWAIT) {
+		if (!inode_trylock_shared(inode))
+			return -EAGAIN;
+	} else {
+		inode_lock_shared(inode);
+	}
+
+	mutex_lock(&zi->i_truncate_mutex);
+
+	/*
+	 * Limit read operations to written data.
+	 */
+	max_pos = i_size_read(inode);
+	if (iocb->ki_pos >= max_pos) {
+		mutex_unlock(&zi->i_truncate_mutex);
+		ret = 0;
+		goto out;
+	}
+
+	iov_iter_truncate(to, max_pos - iocb->ki_pos);
+
+	mutex_unlock(&zi->i_truncate_mutex);
+
+	count = iov_iter_count(to);
+
+	if (iocb->ki_flags & IOCB_DIRECT) {
+		if ((iocb->ki_pos | count) & sbi->s_blocksize_mask) {
+			ret = -EINVAL;
+			goto out;
+		}
+		file_accessed(iocb->ki_filp);
+		ret = iomap_dio_rw(iocb, to, &zonefs_iomap_ops, NULL,
+				   is_sync_kiocb(iocb));
+	} else {
+		ret = generic_file_read_iter(iocb, to);
+	}
+
+out:
+	inode_unlock_shared(inode);
+
+	return ret;
+}
+
+static int zonefs_report_zones_err_cb(struct blk_zone *zone, unsigned int idx,
+				      void *data)
+{
+	struct inode *inode = data;
+	struct zonefs_inode_info *zi = ZONEFS_I(inode);
+	loff_t pos;
+
+	pos = (zone->wp - zone->start) << SECTOR_SHIFT;
+	zi->i_wpoffset = pos;
+	if (i_size_read(inode) != pos) {
+		zonefs_update_stats(inode, pos);
+		i_size_write(inode, pos);
+	}
+
+	return 0;
+}
+
+/*
+ * When a write error occurs in a sequential zone, the zone write pointer
+ * position must be refreshed to correct the file size and zonefs inode
+ * write pointer offset.
+ */
+static int zonefs_seq_file_write_failed(struct inode *inode, int error)
+{
+	struct zonefs_inode_info *zi = ZONEFS_I(inode);
+	struct super_block *sb = inode->i_sb;
+	sector_t sector = zi->i_zsector;
+	unsigned int nofs_flag;
+	int ret;
+
+	zonefs_warn(sb, "Updating inode zone %llu info\n", sector);
+
+	/*
+	 * blkdev_report_zones() uses GFP_KERNEL by default. Force execution as
+	 * if GFP_NOFS was specified so that it will not end up recursing into
+	 * the FS on memory allocation.
+	 */
+	nofs_flag = memalloc_nofs_save();
+	ret = blkdev_report_zones(sb->s_bdev, sector, 1,
+				  zonefs_report_zones_err_cb, inode);
+	memalloc_nofs_restore(nofs_flag);
+
+	if (ret != 1) {
+		if (!ret)
+			ret = -EIO;
+		zonefs_err(sb, "Get zone %llu report failed %d\n",
+			   sector, ret);
+		return ret;
+	}
+
+	return 0;
+}
+
+static int zonefs_file_dio_write_end(struct kiocb *iocb, ssize_t size, int ret,
+				     unsigned int flags)
+{
+	struct inode *inode = file_inode(iocb->ki_filp);
+	struct zonefs_inode_info *zi = ZONEFS_I(inode);
+
+	if (ret)
+		return ret;
+
+	/*
+	 * Conventional zone file size is fixed to the zone size so there
+	 * is no need to do anything.
+	 */
+	if (zi->i_ztype == ZONEFS_ZTYPE_CNV)
+		return 0;
+
+	mutex_lock(&zi->i_truncate_mutex);
+
+	if (size < 0) {
+		ret = zonefs_seq_file_write_failed(inode, size);
+	} else if (i_size_read(inode) < iocb->ki_pos + size) {
+		zonefs_update_stats(inode, iocb->ki_pos + size);
+		i_size_write(inode, iocb->ki_pos + size);
+	}
+
+	mutex_unlock(&zi->i_truncate_mutex);
+
+	return ret;
+}
+
+static const struct iomap_dio_ops zonefs_dio_ops = {
+	.end_io			= zonefs_file_dio_write_end,
+};
+
+static ssize_t zonefs_file_dio_write(struct kiocb *iocb, struct iov_iter *from)
+{
+	struct inode *inode = file_inode(iocb->ki_filp);
+	struct zonefs_sb_info *sbi = ZONEFS_SB(inode->i_sb);
+	struct zonefs_inode_info *zi = ZONEFS_I(inode);
+	size_t count;
+	ssize_t ret;
+
+	if (iocb->ki_flags & IOCB_NOWAIT) {
+		if (!inode_trylock(inode))
+			return -EAGAIN;
+	} else {
+		inode_lock(inode);
+	}
+
+	ret = generic_write_checks(iocb, from);
+	if (ret <= 0)
+		goto out;
+
+	iov_iter_truncate(from, zi->i_max_size - iocb->ki_pos);
+	count = iov_iter_count(from);
+
+	/*
+	 * Direct writes must be aligned to the block size, that is, the device
+	 * physical sector size, to avoid errors when writing sequential zones
+	 * on 512e devices (512B logical sector, 4KB physical sectors).
+	 */
+	if ((iocb->ki_pos | count) & sbi->s_blocksize_mask) {
+		ret = -EINVAL;
+		goto out;
+	}
+
+	/*
+	 * Enforce sequential writes (append only) in sequential zones.
+	 */
+	mutex_lock(&zi->i_truncate_mutex);
+	if (zi->i_ztype == ZONEFS_ZTYPE_SEQ &&
+	    iocb->ki_pos != zi->i_wpoffset) {
+		zonefs_err(inode->i_sb,
+			   "Unaligned write at %llu + %zu (wp %llu)\n",
+			   iocb->ki_pos, count,
+			   zi->i_wpoffset);
+		mutex_unlock(&zi->i_truncate_mutex);
+		ret = -EINVAL;
+		goto out;
+	}
+	mutex_unlock(&zi->i_truncate_mutex);
+
+	ret = iomap_dio_rw(iocb, from, &zonefs_iomap_ops, &zonefs_dio_ops,
+			   is_sync_kiocb(iocb));
+	if (zi->i_ztype == ZONEFS_ZTYPE_SEQ &&
+	    (ret > 0 || ret == -EIOCBQUEUED)) {
+		if (ret > 0)
+			count = ret;
+		mutex_lock(&zi->i_truncate_mutex);
+		zi->i_wpoffset += count;
+		mutex_unlock(&zi->i_truncate_mutex);
+	}
+
+out:
+	inode_unlock(inode);
+
+	return ret;
+}
+
+static ssize_t zonefs_file_buffered_write(struct kiocb *iocb,
+					  struct iov_iter *from)
+{
+	struct inode *inode = file_inode(iocb->ki_filp);
+	struct zonefs_inode_info *zi = ZONEFS_I(inode);
+	size_t count;
+	ssize_t ret;
+
+	/*
+	 * Direct IO writes are mandatory for sequential zones so that the
+	 * write IO order is preserved.
+	 */
+	if (zi->i_ztype == ZONEFS_ZTYPE_SEQ)
+		return -EIO;
+
+	if (iocb->ki_flags & IOCB_NOWAIT) {
+		if (!inode_trylock(inode))
+			return -EAGAIN;
+	} else {
+		inode_lock(inode);
+	}
+
+	ret = generic_write_checks(iocb, from);
+	if (ret <= 0)
+		goto out;
+
+	iov_iter_truncate(from, zi->i_max_size - iocb->ki_pos);
+	count = iov_iter_count(from);
+
+	ret = iomap_file_buffered_write(iocb, from, &zonefs_iomap_ops);
+	if (ret > 0)
+		iocb->ki_pos += ret;
+
+out:
+	inode_unlock(inode);
+	if (ret > 0)
+		ret = generic_write_sync(iocb, ret);
+
+	return ret;
+}
+
+static ssize_t zonefs_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
+{
+	struct inode *inode = file_inode(iocb->ki_filp);
+
+	/*
+	 * Check that the write operation does not go beyond the zone size.
+	 */
+	if (iocb->ki_pos >= ZONEFS_I(inode)->i_max_size)
+		return -EFBIG;
+
+	if (iocb->ki_flags & IOCB_DIRECT)
+		return zonefs_file_dio_write(iocb, from);
+
+	return zonefs_file_buffered_write(iocb, from);
+}
+
+static const struct file_operations zonefs_file_operations = {
+	.open		= generic_file_open,
+	.fsync		= zonefs_file_fsync,
+	.mmap		= zonefs_file_mmap,
+	.llseek		= zonefs_file_llseek,
+	.read_iter	= zonefs_file_read_iter,
+	.write_iter	= zonefs_file_write_iter,
+	.splice_read	= generic_file_splice_read,
+	.splice_write	= iter_file_splice_write,
+	.iopoll		= iomap_dio_iopoll,
+};
+
+static struct kmem_cache *zonefs_inode_cachep;
+
+static struct inode *zonefs_alloc_inode(struct super_block *sb)
+{
+	struct zonefs_inode_info *zi;
+
+	zi = kmem_cache_alloc(zonefs_inode_cachep, GFP_KERNEL);
+	if (!zi)
+		return NULL;
+
+	inode_init_once(&zi->i_vnode);
+	mutex_init(&zi->i_truncate_mutex);
+	init_rwsem(&zi->i_mmap_sem);
+
+	return &zi->i_vnode;
+}
+
+static void zonefs_free_inode(struct inode *inode)
+{
+	kmem_cache_free(zonefs_inode_cachep, ZONEFS_I(inode));
+}
+
+/*
+ * File system stat.
+ */
+static int zonefs_statfs(struct dentry *dentry, struct kstatfs *buf)
+{
+	struct super_block *sb = dentry->d_sb;
+	struct zonefs_sb_info *sbi = ZONEFS_SB(sb);
+	enum zonefs_ztype t;
+	u64 fsid;
+
+	buf->f_type = ZONEFS_MAGIC;
+	buf->f_bsize = sb->s_blocksize;
+	buf->f_namelen = ZONEFS_NAME_MAX;
+
+	spin_lock(&sbi->s_lock);
+
+	buf->f_blocks = sbi->s_blocks;
+	if (WARN_ON(sbi->s_used_blocks > sbi->s_blocks))
+		buf->f_bfree = 0;
+	else
+		buf->f_bfree = buf->f_blocks - sbi->s_used_blocks;
+	buf->f_bavail = buf->f_bfree;
+
+	for (t = 0; t < ZONEFS_ZTYPE_MAX; t++) {
+		if (sbi->s_nr_files[t])
+			buf->f_files += sbi->s_nr_files[t] + 1;
+	}
+	buf->f_ffree = 0;
+
+	spin_unlock(&sbi->s_lock);
+
+	fsid = le64_to_cpup((void *)sbi->s_uuid.b) ^
+		le64_to_cpup((void *)sbi->s_uuid.b + sizeof(u64));
+	buf->f_fsid.val[0] = (u32)fsid;
+	buf->f_fsid.val[1] = (u32)(fsid >> 32);
+
+	return 0;
+}
+
+static const struct super_operations zonefs_sops = {
+	.alloc_inode	= zonefs_alloc_inode,
+	.free_inode	= zonefs_free_inode,
+	.statfs		= zonefs_statfs,
+};
+
+static void zonefs_init_dir_inode(struct inode *parent, struct inode *inode)
+{
+	inode_init_owner(inode, parent, S_IFDIR | 0555);
+	inode->i_fop = &simple_dir_operations;
+	inode->i_op = &simple_dir_inode_operations;
+	set_nlink(inode, 2);
+	inc_nlink(parent);
+}
+
+static void zonefs_init_file_inode(struct inode *inode, struct blk_zone *zone)
+{
+	struct super_block *sb = inode->i_sb;
+	struct zonefs_sb_info *sbi = ZONEFS_SB(sb);
+	struct zonefs_inode_info *zi = ZONEFS_I(inode);
+	umode_t	perm = sbi->s_perm;
+
+	zi->i_ztype = zonefs_zone_type(zone);
+	zi->i_zsector = zone->start;
+
+	switch (zone->cond) {
+	case BLK_ZONE_COND_OFFLINE:
+		/*
+		 * Disable all accesses and set the file size to 0 for
+		 * offline zones.
+		 */
+		zi->i_wpoffset = 0;
+		zi->i_max_size = 0;
+		perm = 0;
+		break;
+	case BLK_ZONE_COND_READONLY:
+		/* Do not allow writes in read-only zones*/
+		perm &= ~(0222); /* S_IWUGO */
+		/* Fallthrough */
+	default:
+		zi->i_max_size = min_t(loff_t, MAX_LFS_FILESIZE,
+				       zone->len << SECTOR_SHIFT);
+		if (zi->i_ztype == ZONEFS_ZTYPE_CNV)
+			zi->i_wpoffset = zi->i_max_size;
+		else
+			zi->i_wpoffset =
+				(zone->wp - zone->start) << SECTOR_SHIFT;
+		break;
+	}
+
+	inode->i_mode = S_IFREG | perm;
+	inode->i_uid = sbi->s_uid;
+	inode->i_gid = sbi->s_gid;
+	inode->i_size = zi->i_wpoffset;
+	inode->i_blocks = zone->len;
+
+	inode->i_fop = &zonefs_file_operations;
+	inode->i_op = &zonefs_file_inode_operations;
+	inode->i_mapping->a_ops = &zonefs_file_aops;
+
+	sb->s_maxbytes = max(zi->i_max_size, sb->s_maxbytes);
+	sbi->s_blocks += zi->i_max_size >> sb->s_blocksize_bits;
+	sbi->s_used_blocks += zi->i_wpoffset >> sb->s_blocksize_bits;
+}
+
+static struct dentry *zonefs_create_inode(struct dentry *parent,
+					const char *name, struct blk_zone *zone)
+{
+	struct inode *dir = d_inode(parent);
+	struct dentry *dentry;
+	struct inode *inode;
+
+	dentry = d_alloc_name(parent, name);
+	if (!dentry)
+		return NULL;
+
+	inode = new_inode(parent->d_sb);
+	if (!inode)
+		goto out;
+
+	inode->i_ino = get_next_ino();
+	inode->i_ctime = inode->i_mtime = inode->i_atime = dir->i_ctime;
+	if (zone)
+		zonefs_init_file_inode(inode, zone);
+	else
+		zonefs_init_dir_inode(dir, inode);
+	d_add(dentry, inode);
+	dir->i_size++;
+
+	return dentry;
+
+out:
+	dput(dentry);
+
+	return NULL;
+}
+
+static char *zgroups_name[ZONEFS_ZTYPE_MAX] = { "cnv", "seq" };
+
+struct zonefs_zone_data {
+	struct super_block *sb;
+	unsigned int nr_zones[ZONEFS_ZTYPE_MAX];
+	struct blk_zone *zones;
+};
+
+/*
+ * Create a zone group and populate it with zone files.
+ */
+static int zonefs_create_zgroup(struct zonefs_zone_data *zd,
+				enum zonefs_ztype type)
+{
+	struct super_block *sb = zd->sb;
+	struct zonefs_sb_info *sbi = ZONEFS_SB(sb);
+	struct blk_zone *zone, *next, *end;
+	char name[ZONEFS_NAME_MAX];
+	struct dentry *dir;
+	unsigned int n = 0;
+
+	/* If the group is empty, there is nothing to do */
+	if (!zd->nr_zones[type])
+		return 0;
+
+	dir = zonefs_create_inode(sb->s_root, zgroups_name[type], NULL);
+	if (!dir)
+		return -ENOMEM;
+
+	/*
+	 * The first zone contains the super block: skip it.
+	 */
+	end = zd->zones + blkdev_nr_zones(sb->s_bdev->bd_disk);
+	for (zone = &zd->zones[1]; zone < end; zone = next) {
+
+		next = zone + 1;
+		if (zonefs_zone_type(zone) != type)
+			continue;
+
+		/*
+		 * For conventional zones, contiguous zones can be aggregated
+		 * together to form larger files.
+		 * Note that this overwrites the length of the first zone of
+		 * the set of contiguous zones aggregated together.
+		 * Only zones with the same condition can be agreggated so that
+		 * offline zones are excluded and readonly zones are aggregated
+		 * together into a read only file.
+		 */
+		if (type == ZONEFS_ZTYPE_CNV &&
+		    sbi->s_features & ZONEFS_F_AGGRCNV) {
+			for (; next < end; next++) {
+				if (zonefs_zone_type(next) != type ||
+				    next->cond != zone->cond)
+					break;
+				zone->len += next->len;
+			}
+		}
+
+		/*
+		 * Use the file number within its group as file name.
+		 */
+		snprintf(name, ZONEFS_NAME_MAX - 1, "%u", n);
+		if (!zonefs_create_inode(dir, name, zone))
+			return -ENOMEM;
+
+		n++;
+	}
+
+	zonefs_info(sb, "Zone group \"%s\" has %u file%s\n",
+		    zgroups_name[type], n, n > 1 ? "s" : "");
+
+	sbi->s_nr_files[type] = n;
+
+	return 0;
+}
+
+static int zonefs_get_zone_info_cb(struct blk_zone *zone, unsigned int idx,
+				   void *data)
+{
+	struct zonefs_zone_data *zd = data;
+
+	/*
+	 * Count the number of usable zones: the first zone at index 0 contains
+	 * the super block and is ignored.
+	 */
+	switch (zone->type) {
+	case BLK_ZONE_TYPE_CONVENTIONAL:
+		zone->wp = zone->start + zone->len;
+		if (idx)
+			zd->nr_zones[ZONEFS_ZTYPE_CNV]++;
+		break;
+	case BLK_ZONE_TYPE_SEQWRITE_REQ:
+	case BLK_ZONE_TYPE_SEQWRITE_PREF:
+		if (idx)
+			zd->nr_zones[ZONEFS_ZTYPE_SEQ]++;
+		break;
+	default:
+		zonefs_err(zd->sb, "Unsupported zone type 0x%x\n",
+			   zone->type);
+		return -EIO;
+	}
+
+	memcpy(&zd->zones[idx], zone, sizeof(struct blk_zone));
+
+	return 0;
+}
+
+static int zonefs_get_zone_info(struct zonefs_zone_data *zd)
+{
+	struct block_device *bdev = zd->sb->s_bdev;
+	int ret;
+
+	zd->zones = kvcalloc(blkdev_nr_zones(bdev->bd_disk),
+			     sizeof(struct blk_zone), GFP_KERNEL);
+	if (!zd->zones)
+		return -ENOMEM;
+
+	/* Get zones information */
+	ret = blkdev_report_zones(bdev, 0, BLK_ALL_ZONES,
+				  zonefs_get_zone_info_cb, zd);
+	if (ret < 0) {
+		zonefs_err(zd->sb, "Zone report failed %d\n", ret);
+		return ret;
+	}
+
+	if (ret != blkdev_nr_zones(bdev->bd_disk)) {
+		zonefs_err(zd->sb, "Invalid zone report (%d/%u zones)\n",
+			   ret, blkdev_nr_zones(bdev->bd_disk));
+		return -EIO;
+	}
+
+	return 0;
+}
+
+static inline void zonefs_cleanup_zone_info(struct zonefs_zone_data *zd)
+{
+	kvfree(zd->zones);
+}
+
+/*
+ * Read super block information from the device.
+ */
+static int zonefs_read_super(struct super_block *sb)
+{
+	struct zonefs_sb_info *sbi = ZONEFS_SB(sb);
+	struct zonefs_super *super;
+	u32 crc, stored_crc;
+	struct page *page;
+	struct bio_vec bio_vec;
+	struct bio bio;
+	int ret;
+
+	page = alloc_page(GFP_KERNEL);
+	if (!page)
+		return -ENOMEM;
+
+	bio_init(&bio, &bio_vec, 1);
+	bio.bi_iter.bi_sector = 0;
+	bio_set_dev(&bio, sb->s_bdev);
+	bio_set_op_attrs(&bio, REQ_OP_READ, 0);
+	bio_add_page(&bio, page, PAGE_SIZE, 0);
+
+	ret = submit_bio_wait(&bio);
+	if (ret)
+		goto out;
+
+	super = page_address(page);
+
+	stored_crc = super->s_crc;
+	super->s_crc = 0;
+	crc = crc32_le(ZONEFS_MAGIC, (unsigned char *)super,
+		       sizeof(struct zonefs_super));
+	if (crc != stored_crc) {
+		zonefs_err(sb, "Invalid checksum (Expected 0x%08x, got 0x%08x)",
+			   crc, stored_crc);
+		ret = -EIO;
+		goto out;
+	}
+
+	ret = -EINVAL;
+	if (le32_to_cpu(super->s_magic) != ZONEFS_MAGIC)
+		goto out;
+
+	sbi->s_features = le64_to_cpu(super->s_features);
+	if (sbi->s_features & ~ZONEFS_F_DEFINED_FEATURES) {
+		zonefs_err(sb, "Unknown features set 0x%llx\n",
+			   sbi->s_features);
+		goto out;
+	}
+
+	if (sbi->s_features & ZONEFS_F_UID) {
+		sbi->s_uid = make_kuid(current_user_ns(),
+				       le32_to_cpu(super->s_uid));
+		if (!uid_valid(sbi->s_uid)) {
+			zonefs_err(sb, "Invalid UID feature\n");
+			goto out;
+		}
+	}
+
+	if (sbi->s_features & ZONEFS_F_GID) {
+		sbi->s_gid = make_kgid(current_user_ns(),
+				       le32_to_cpu(super->s_gid));
+		if (!gid_valid(sbi->s_gid)) {
+			zonefs_err(sb, "Invalid GID feature\n");
+			goto out;
+		}
+	}
+
+	if (sbi->s_features & ZONEFS_F_PERM)
+		sbi->s_perm = le32_to_cpu(super->s_perm);
+
+	if (memchr_inv(super->s_reserved, 0, sizeof(super->s_reserved))) {
+		zonefs_err(sb, "Reserved area is being used\n");
+		goto out;
+	}
+
+	uuid_copy(&sbi->s_uuid, &super->s_uuid);
+	ret = 0;
+
+out:
+	__free_page(page);
+
+	return ret;
+}
+
+/*
+ * Check that the device is zoned. If it is, get the list of zones and create
+ * sub-directories and files according to the device zone configuration and
+ * format options.
+ */
+static int zonefs_fill_super(struct super_block *sb, void *data, int silent)
+{
+	struct zonefs_zone_data zd;
+	struct zonefs_sb_info *sbi;
+	struct inode *inode;
+	enum zonefs_ztype t;
+	int ret;
+
+	if (!bdev_is_zoned(sb->s_bdev)) {
+		zonefs_err(sb, "Not a zoned block device\n");
+		return -EINVAL;
+	}
+
+	/*
+	 * Initialize super block information: the maximum file size is updated
+	 * when the zone files are created so that the format option
+	 * ZONEFS_F_AGGRCNV which increases the maximum file size of a file
+	 * beyond the zone size is taken into account.
+	 */
+	sbi = kzalloc(sizeof(*sbi), GFP_KERNEL);
+	if (!sbi)
+		return -ENOMEM;
+
+	spin_lock_init(&sbi->s_lock);
+	sb->s_fs_info = sbi;
+	sb->s_magic = ZONEFS_MAGIC;
+	sb->s_maxbytes = 0;
+	sb->s_op = &zonefs_sops;
+	sb->s_time_gran	= 1;
+
+	/*
+	 * The block size is always equal to the device physical sector size to
+	 * ensure that writes on 512e devices (512B logical block and 4KB
+	 * physical block) are always aligned to the device physical blocks
+	 * (as required for writes to sequential zones on ZBC/ZAC disks).
+	 */
+	sb_set_blocksize(sb, bdev_physical_block_size(sb->s_bdev));
+	sbi->s_blocksize_mask = sb->s_blocksize - 1;
+	sbi->s_uid = GLOBAL_ROOT_UID;
+	sbi->s_gid = GLOBAL_ROOT_GID;
+	sbi->s_perm = 0640; /* S_IRUSR | S_IWUSR | S_IRGRP */
+
+	ret = zonefs_read_super(sb);
+	if (ret)
+		return ret;
+
+	memset(&zd, 0, sizeof(struct zonefs_zone_data));
+	zd.sb = sb;
+	ret = zonefs_get_zone_info(&zd);
+	if (ret)
+		goto out;
+
+	zonefs_info(sb, "Mounting %u zones",
+		    blkdev_nr_zones(sb->s_bdev->bd_disk));
+
+	/* Create root directory inode */
+	ret = -ENOMEM;
+	inode = new_inode(sb);
+	if (!inode)
+		goto out;
+
+	inode->i_ino = get_next_ino();
+	inode->i_mode = S_IFDIR | 0755;
+	inode->i_ctime = inode->i_mtime = inode->i_atime = current_time(inode);
+	inode->i_op = &simple_dir_inode_operations;
+	inode->i_fop = &simple_dir_operations;
+	set_nlink(inode, 2);
+
+	sb->s_root = d_make_root(inode);
+	if (!sb->s_root)
+		goto out;
+
+	/* Create and populate files in zone groups directories */
+	for (t = 0; t < ZONEFS_ZTYPE_MAX; t++) {
+		ret = zonefs_create_zgroup(&zd, t);
+		if (ret)
+			break;
+	}
+
+out:
+	zonefs_cleanup_zone_info(&zd);
+
+	return ret;
+}
+
+static struct dentry *zonefs_mount(struct file_system_type *fs_type,
+				   int flags, const char *dev_name, void *data)
+{
+	return mount_bdev(fs_type, flags, dev_name, data, zonefs_fill_super);
+}
+
+static void zonefs_kill_super(struct super_block *sb)
+{
+	struct zonefs_sb_info *sbi = ZONEFS_SB(sb);
+
+	kfree(sbi);
+	if (sb->s_root)
+		d_genocide(sb->s_root);
+	kill_block_super(sb);
+}
+
+/*
+ * File system definition and registration.
+ */
+static struct file_system_type zonefs_type = {
+	.owner		= THIS_MODULE,
+	.name		= "zonefs",
+	.mount		= zonefs_mount,
+	.kill_sb	= zonefs_kill_super,
+	.fs_flags	= FS_REQUIRES_DEV,
+};
+
+static int __init zonefs_init_inodecache(void)
+{
+	zonefs_inode_cachep = kmem_cache_create("zonefs_inode_cache",
+			sizeof(struct zonefs_inode_info), 0,
+			(SLAB_RECLAIM_ACCOUNT | SLAB_MEM_SPREAD | SLAB_ACCOUNT),
+			NULL);
+	if (zonefs_inode_cachep == NULL)
+		return -ENOMEM;
+	return 0;
+}
+
+static void zonefs_destroy_inodecache(void)
+{
+	/*
+	 * Make sure all delayed rcu free inodes are flushed before we
+	 * destroy the inode cache.
+	 */
+	rcu_barrier();
+	kmem_cache_destroy(zonefs_inode_cachep);
+}
+
+static int __init zonefs_init(void)
+{
+	int ret;
+
+	BUILD_BUG_ON(sizeof(struct zonefs_super) != ZONEFS_SUPER_SIZE);
+
+	ret = zonefs_init_inodecache();
+	if (ret)
+		return ret;
+
+	ret = register_filesystem(&zonefs_type);
+	if (ret) {
+		zonefs_destroy_inodecache();
+		return ret;
+	}
+
+	return 0;
+}
+
+static void __exit zonefs_exit(void)
+{
+	zonefs_destroy_inodecache();
+	unregister_filesystem(&zonefs_type);
+}
+
+MODULE_AUTHOR("Damien Le Moal");
+MODULE_DESCRIPTION("Zone file system for zoned block devices");
+MODULE_LICENSE("GPL");
+module_init(zonefs_init);
+module_exit(zonefs_exit);
diff --git a/fs/zonefs/zonefs.h b/fs/zonefs/zonefs.h
new file mode 100644
index 000000000000..0296b3426f7b
--- /dev/null
+++ b/fs/zonefs/zonefs.h
@@ -0,0 +1,169 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Simple zone file system for zoned block devices.
+ *
+ * Copyright (C) 2019 Western Digital Corporation or its affiliates.
+ */
+#ifndef __ZONEFS_H__
+#define __ZONEFS_H__
+
+#include <linux/fs.h>
+#include <linux/magic.h>
+#include <linux/uuid.h>
+#include <linux/mutex.h>
+#include <linux/rwsem.h>
+
+/*
+ * Maximum length of file names: this only needs to be large enough to fit
+ * the zone group directory names and a decimal value of the start sector of
+ * the zones for file names. 16 characters is plenty.
+ */
+#define ZONEFS_NAME_MAX		16
+
+/*
+ * Zone types: ZONEFS_ZTYPE_SEQ is used for all sequential zone types
+ * defined in linux/blkzoned.h, that is, BLK_ZONE_TYPE_SEQWRITE_REQ and
+ * BLK_ZONE_TYPE_SEQWRITE_PREF.
+ */
+enum zonefs_ztype {
+	ZONEFS_ZTYPE_CNV,
+	ZONEFS_ZTYPE_SEQ,
+	ZONEFS_ZTYPE_MAX,
+};
+
+static inline enum zonefs_ztype zonefs_zone_type(struct blk_zone *zone)
+{
+	if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL)
+		return ZONEFS_ZTYPE_CNV;
+	return ZONEFS_ZTYPE_SEQ;
+}
+
+/*
+ * In-memory inode data.
+ */
+struct zonefs_inode_info {
+	struct inode		i_vnode;
+
+	/* File zone type */
+	enum zonefs_ztype	i_ztype;
+
+	/* File zone start sector (512B unit) */
+	sector_t		i_zsector;
+
+	/* File zone write pointer position (sequential zones only) */
+	loff_t			i_wpoffset;
+
+	/* File maximum size */
+	loff_t			i_max_size;
+
+	/*
+	 * To serialise fully against both syscall and mmap based IO and
+	 * sequential file truncation, two locks are used. For serializing
+	 * zonefs_seq_file_truncate() against zonefs_iomap_begin(), that is,
+	 * file truncate operations against block mapping, i_truncate_mutex is
+	 * used. i_truncate_mutex also protects against concurrent accesses
+	 * and changes to the inode private data, and in particular changes to
+	 * a sequential file size on completion of direct IO writes.
+	 * Serialization of mmap read IOs with truncate and syscall IO
+	 * operations is done with i_mmap_sem in addition to i_truncate_mutex.
+	 * Only zonefs_seq_file_truncate() takes both lock (i_mmap_sem first,
+	 * i_truncate_mutex second).
+	 */
+	struct mutex		i_truncate_mutex;
+	struct rw_semaphore	i_mmap_sem;
+};
+
+static inline struct zonefs_inode_info *ZONEFS_I(struct inode *inode)
+{
+	return container_of(inode, struct zonefs_inode_info, i_vnode);
+}
+
+/*
+ * On-disk super block (block 0).
+ */
+#define ZONEFS_SUPER_SIZE	4096
+struct zonefs_super {
+
+	/* Magic number */
+	__le32		s_magic;
+
+	/* Checksum */
+	__le32		s_crc;
+
+	/* Features */
+	__le64		s_features;
+
+	/* 128-bit uuid */
+	uuid_t		s_uuid;
+
+	/* UID/GID to use for files */
+	__le32		s_uid;
+	__le32		s_gid;
+
+	/* File permissions */
+	__le32		s_perm;
+
+	/* Padding to ZONEFS_SUPER_SIZE bytes */
+	__u8		s_reserved[4052];
+
+} __packed;
+
+/*
+ * Feature flags: used on disk in the s_features field of struct zonefs_super
+ * and in-memory in the s_feartures field of struct zonefs_sb_info.
+ */
+enum zonefs_features {
+	/*
+	 * Aggregate contiguous conventional zones into a single file.
+	 */
+	ZONEFS_F_AGGRCNV = 1ULL << 0,
+	/*
+	 * Use super block specified UID for files instead of default.
+	 */
+	ZONEFS_F_UID = 1ULL << 1,
+	/*
+	 * Use super block specified GID for files instead of default.
+	 */
+	ZONEFS_F_GID = 1ULL << 2,
+	/*
+	 * Use super block specified file permissions instead of default 640.
+	 */
+	ZONEFS_F_PERM = 1ULL << 3,
+};
+
+#define ZONEFS_F_DEFINED_FEATURES \
+	(ZONEFS_F_AGGRCNV | ZONEFS_F_UID | ZONEFS_F_GID | ZONEFS_F_PERM)
+
+/*
+ * In-memory Super block information.
+ */
+struct zonefs_sb_info {
+
+	spinlock_t		s_lock;
+
+	unsigned long long	s_features;
+	kuid_t			s_uid;
+	kgid_t			s_gid;
+	umode_t			s_perm;
+	uuid_t			s_uuid;
+	loff_t			s_blocksize_mask;
+
+	unsigned int		s_nr_files[ZONEFS_ZTYPE_MAX];
+
+	loff_t			s_blocks;
+	loff_t			s_used_blocks;
+};
+
+static inline struct zonefs_sb_info *ZONEFS_SB(struct super_block *sb)
+{
+	return sb->s_fs_info;
+}
+
+#define zonefs_info(sb, format, args...)	\
+	pr_info("zonefs (%s): " format, sb->s_id, ## args)
+#define zonefs_err(sb, format, args...)	\
+	pr_err("zonefs (%s) ERROR: " format, sb->s_id, ## args)
+#define zonefs_warn(sb, format, args...)	\
+	pr_warn("zonefs (%s) WARN: " format, sb->s_id, ## args)
+
+#endif
diff --git a/include/uapi/linux/magic.h b/include/uapi/linux/magic.h
index 3ac436376d79..d78064007b17 100644
--- a/include/uapi/linux/magic.h
+++ b/include/uapi/linux/magic.h
@@ -87,6 +87,7 @@
 #define NSFS_MAGIC		0x6e736673
 #define BPF_FS_MAGIC		0xcafe4a11
 #define AAFS_MAGIC		0x5a3c69f0
+#define ZONEFS_MAGIC		0x5a4f4653
 
 /* Since UDF 2.01 is ISO 13346 based... */
 #define UDF_SUPER_MAGIC		0x15013346
-- 
2.23.0


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH 2/2] zonefs: Add documentation
  2019-12-12 18:38 [PATCH 0/2] New zonefs file system Damien Le Moal
  2019-12-12 18:38 ` [PATCH 1/2] fs: " Damien Le Moal
@ 2019-12-12 18:38 ` Damien Le Moal
  2019-12-16  8:38   ` Hannes Reinecke
  2019-12-16  8:18 ` [PATCH 0/2] New zonefs file system Enrico Weigelt, metux IT consult
  2 siblings, 1 reply; 10+ messages in thread
From: Damien Le Moal @ 2019-12-12 18:38 UTC (permalink / raw)
  To: linux-fsdevel, linux-xfs, linux-kernel, Linus Torvalds
  Cc: Johannes Thumshirn, Naohiro Aota, Darrick J . Wong, Hannes Reinecke

Add the new file Documentation/filesystems/zonefs.txt to document zonefs
principles and user-space tool usage.

Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
---
 Documentation/filesystems/zonefs.txt | 150 +++++++++++++++++++++++++++
 MAINTAINERS                          |   1 +
 2 files changed, 151 insertions(+)
 create mode 100644 Documentation/filesystems/zonefs.txt

diff --git a/Documentation/filesystems/zonefs.txt b/Documentation/filesystems/zonefs.txt
new file mode 100644
index 000000000000..e5d798f4087d
--- /dev/null
+++ b/Documentation/filesystems/zonefs.txt
@@ -0,0 +1,150 @@
+ZoneFS - Zone filesystem for Zoned block devices
+
+Overview
+========
+
+zonefs is a very simple file system exposing each zone of a zoned block device
+as a file. Unlike a regular file system with zoned block device support (e.g.
+f2fs), zonefs does not hide the sequential write constraint of zoned block
+devices to the user. Files representing sequential write zones of the device
+must be written sequentially starting from the end of the file (append only
+writes).
+
+As such, zonefs is in essence closer to a raw block device access interface
+than to a full featured POSIX file system. The goal of zonefs is to simplify
+the implementation of zoned block devices support in applications by replacing
+raw block device file accesses with a richer file API, avoiding relying on
+direct block device file ioctls which may be more obscure to developers. One
+example of this approach is the implementation of LSM (log-structured merge)
+tree structures (such as used in RocksDB and LevelDB) on zoned block devices by
+allowing SSTables to be stored in a zone file similarly to a regular file system
+rather than as a range of sectors of the entire disk. The introduction of the
+higher level construct "one file is one zone" can help reducing the amount of
+changes needed in the application as well as introducing support for different
+application programming languages.
+
+zonefs on-disk metadata is reduced to a super block which persistently stores a
+magic number and optional features flags and values. On mount, zonefs uses
+blkdev_report_zones() to obtain the device zone configuration and populates
+the mount point with a static file tree solely based on this information.
+E.g. file sizes come from the device zone type and write pointer offset managed
+by the device itself.
+
+The zone files created on mount have the following characteristics.
+1) Files representing zones of the same type are grouped together
+   under the same sub-directory:
+  * For conventional zones, the sub-directory "cnv" is used.
+  * For sequential write zones, the sub-directory "seq" is used.
+  These two directories are the only directories that exist in zonefs. Users
+  cannot create other directories and cannot rename nor delete the "cnv" and
+  "seq" sub-directories.
+2) The name of zone files is the number of the file within the zone type
+   sub-directory, in order of increasing zone start sector.
+3) The size of conventional zone files is fixed to the device zone size.
+   Conventional zone files cannot be truncated.
+4) The size of sequential zone files represent the file's zone write pointer
+   position relative to the zone start sector. Truncating these files is
+   allowed only down to 0, in wich case, the zone is reset to rewind the file
+   zone write pointer position to the start of the zone, or up to the zone size,
+   in which case the file's zone is transitioned to the FULL state (finish zone
+   operation).
+5) All read and write operations to files are not allowed beyond the file zone
+   size. Any access exceeding the zone size is failed with the -EFBIG error.
+6) Creating, deleting, renaming or modifying any attribute of files and
+   sub-directories is not allowed.
+
+Several optional features of zonefs can be enabled at format time.
+* Conventional zone aggregation: ranges of contiguous conventional zones can be
+  agregated into a single larger file instead of the default one file per zone.
+* File ownership: The owner UID and GID of zone files is by default 0 (root)
+  but can be changed to any valid UID/GID.
+* File access permissions: the default 640 access permissions can be changed.
+
+User Space Tools
+================
+
+The mkzonefs tool is used to format zoned block devices for use with zonefs.
+This tool is available on Github at:
+
+git@github.com:damien-lemoal/zonefs-tools.git.
+
+zonefs-tools also includes a test suite which can be run against any zoned
+block device, including null_blk block device created with zoned mode.
+
+Example: the following formats a 15TB host-managed SMR HDD with 256 MB zones
+with the conventional zones aggregation feature enabled.
+
+# mkzonefs -o aggr_cnv /dev/sdX
+# mount -t zonefs /dev/sdX /mnt
+# ls -l /mnt/
+total 0
+dr-xr-xr-x 2 root root     1 Nov 25 13:23 cnv
+dr-xr-xr-x 2 root root 55356 Nov 25 13:23 seq
+
+The size of the zone files sub-directories indicate the number of files existing
+for each type of zones. In this example, there is only one conventional zone
+file (all conventional zones are agreggated under a single file).
+
+# ls -l /mnt/cnv
+total 137101312
+-rw-r----- 1 root root 140391743488 Nov 25 13:23 0
+
+This aggregated conventional zone file can be used as a regular file.
+
+# mkfs.ext4 /mnt/cnv/0
+# mount -o loop /mnt/cnv/0 /data
+
+The "seq" sub-directory grouping files for sequential write zones has in this
+example 55356 zones.
+
+# ls -lv /mnt/seq
+total 14511243264
+-rw-r----- 1 root root 0 Nov 25 13:23 0
+-rw-r----- 1 root root 0 Nov 25 13:23 1
+-rw-r----- 1 root root 0 Nov 25 13:23 2
+...
+-rw-r----- 1 root root 0 Nov 25 13:23 55354
+-rw-r----- 1 root root 0 Nov 25 13:23 55355
+
+For sequential write zone files, the file size changes as data is appended at
+the end of the file, similarly to any regular file system.
+
+# dd if=/dev/zero of=/mnt/seq/0 bs=4096 count=1 conv=notrunc oflag=direct
+1+0 records in
+1+0 records out
+4096 bytes (4.1 kB, 4.0 KiB) copied, 1.05112 s, 3.9 kB/s
+
+# ls -l /mnt/seq/0
+-rw-r----- 1 root root 4096 Nov 25 13:23 /mnt/sdh/seq/0
+
+The written file can be truncated to the zone size, prventing any further write
+operation.
+
+# truncate -s 268435456 /mnt/seq/0
+# ls -l /mnt/seq/0
+-rw-r----- 1 root root 268435456 Nov 25 13:49 /mnt/seq/0
+
+Truncation to 0 size allows freeing the file zone storage space and restart
+append-writes to the file.
+
+# truncate -s 0 /mnt/seq/0
+# ls -l /mnt/seq/0
+-rw-r----- 1 root root 0 Nov 25 13:49 /mnt/seq/0
+
+Since files are statically mapped to zones on the disk, the number of blocks of
+a file as reported by stat() and fstat() indicates the size of the file zone.
+
+# stat /mnt/seq/0
+  File: /mnt/seq/0
+  Size: 0         	Blocks: 524288     IO Block: 4096   regular empty file
+Device: 870h/2160d	Inode: 50431       Links: 1
+Access: (0640/-rw-r-----)  Uid: (    0/    root)   Gid: (    0/    root)
+Access: 2019-11-25 13:23:57.048971997 +0900
+Modify: 2019-11-25 13:52:25.553805765 +0900
+Change: 2019-11-25 13:52:25.553805765 +0900
+ Birth: -
+
+The number of blocks of the file ("Blocks") in units of 512B blocks gives the
+maximum file size of 524288 * 512 B = 256 MB, corresponding to the device zone
+size in this example. Of note is that the "IO block" field always indicates the
+minimum IO size for writes and corresponds to the device physical sector size.
diff --git a/MAINTAINERS b/MAINTAINERS
index 0641167ed2ea..1c760735e906 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -18290,6 +18290,7 @@ L:	linux-fsdevel@vger.kernel.org
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/dlemoal/zonefs.git
 S:	Maintained
 F:	fs/zonefs/
+F:	Documentation/filesystems/zonefs.txt
 
 ZPOOL COMPRESSED PAGE STORAGE API
 M:	Dan Streetman <ddstreet@ieee.org>
-- 
2.23.0


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* Re: [PATCH 0/2] New zonefs file system
  2019-12-12 18:38 [PATCH 0/2] New zonefs file system Damien Le Moal
  2019-12-12 18:38 ` [PATCH 1/2] fs: " Damien Le Moal
  2019-12-12 18:38 ` [PATCH 2/2] zonefs: Add documentation Damien Le Moal
@ 2019-12-16  8:18 ` Enrico Weigelt, metux IT consult
  2019-12-17  0:05   ` Damien Le Moal
  2 siblings, 1 reply; 10+ messages in thread
From: Enrico Weigelt, metux IT consult @ 2019-12-16  8:18 UTC (permalink / raw)
  To: Damien Le Moal, linux-fsdevel, linux-xfs, linux-kernel, Linus Torvalds
  Cc: Johannes Thumshirn, Naohiro Aota, Darrick J . Wong, Hannes Reinecke

On 12.12.19 19:38, Damien Le Moal wrote:

Hi,

> zonefs is a very simple file system exposing each zone of a zoned block
> device as a file. Unlike a regular file system with zoned block device
> support (e.g. f2fs or the on-going btrfs effort), zonefs does not hide
> the sequential write constraint of zoned block devices to the user.

Just curious: what's the exact definition of "zoned" here ?
Something like partitions ?

Can these files then also serve as block devices for other filesystems ?
Just a funny idea: could we handle partitions by a file system ?

Even more funny idea: give file systems block device ops, so they can
be directly used as such (w/o explicitly using loopdev) ;-)

> Files representing sequential write zones of the device must be written
> sequentially starting from the end of the file (append only writes).

So, these files can only be accessed like a tape ?

Assuming you're working ontop of standard block devices anyways (instead
of tape-like media ;-)) - why introducing such a limitation ?

> zonefs is not a POSIX compliant file system. It's goal is to simplify
> the implementation of zoned block devices support in applications by
> replacing raw block device file accesses with a richer file based API,
> avoiding relying on direct block device file ioctls which may
> be more obscure to developers. 

ioctls ?

Last time I checked, block devices could be easily accessed via plain
file ops (read, write, seek, ...). You can basically treat them just
like big files of fixed size.

> One example of this approach is the
> implementation of LSM (log-structured merge) tree structures (such as
> used in RocksDB and LevelDB)

The same LevelDB as used eg. in Chrome browser, which destroys itself
every time a little temporary problem (eg. disk full) occours ?
If that's the usecase I'd rather use an simple in-memory table instead
and and enough swap, as leveldb isn't reliable enough for persistent
data anyways :p

> on zoned block devices by allowing SSTables
> to be stored in a zone file similarly to a regular file system rather
> than as a range of sectors of a zoned device. The introduction of the
> higher level construct "one file is one zone" can help reducing the
> amount of changes needed in the application while at the same time
> allowing the use of zoned block devices with various programming
> languages other than C.

Why not just simply use files on a suited filesystem (w/ low block io
overhead) or LVM volumes ?


--mtx

-- 
Dringender Hinweis: aufgrund existenzieller Bedrohung durch "Emotet"
sollten Sie *niemals* MS-Office-Dokumente via E-Mail annehmen/öffenen,
selbst wenn diese von vermeintlich vertrauenswürdigen Absendern zu
stammen scheinen. Andernfalls droht Totalschaden.
---
Hinweis: unverschlüsselte E-Mails können leicht abgehört und manipuliert
werden ! Für eine vertrauliche Kommunikation senden Sie bitte ihren
GPG/PGP-Schlüssel zu.
---
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
info@metux.net -- +49-151-27565287

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH 1/2] fs: New zonefs file system
  2019-12-12 18:38 ` [PATCH 1/2] fs: " Damien Le Moal
@ 2019-12-16  8:36   ` Hannes Reinecke
  2019-12-17  0:20     ` Damien Le Moal
  0 siblings, 1 reply; 10+ messages in thread
From: Hannes Reinecke @ 2019-12-16  8:36 UTC (permalink / raw)
  To: Damien Le Moal, linux-fsdevel, linux-xfs, linux-kernel, Linus Torvalds
  Cc: Johannes Thumshirn, Naohiro Aota, Darrick J . Wong

On 12/12/19 7:38 PM, Damien Le Moal wrote:
> zonefs is a very simple file system exposing each zone of a zoned block
> device as a file. Unlike a regular file system with zoned block device
> support (e.g. f2fs), zonefs does not hide the sequential write
> constraint of zoned block devices to the user. Files representing
> sequential write zones of the device must be written sequentially
> starting from the end of the file (append only writes).
> 
> As such, zonefs is in essence closer to a raw block device access
> interface than to a full featured POSIX file system. The goal of zonefs
> is to simplify the implementation of zoned block devices support in
> applications by replacing raw block device file accesses with a richer
> file API, avoiding relying on direct block device file ioctls which may
> be more obscure to developers. One example of this approach is the
> implementation of LSM (log-structured merge) tree structures (such as
> used in RocksDB and LevelDB) on zoned block devices by allowing SSTables
> to be stored in a zone file similarly to a regular file system rather
> than as a range of sectors of a zoned device. The introduction of the
> higher level construct "one file is one zone" can help reducing the
> amount of changes needed in the application as well as introducing
> support for different application programming languages.
> 
> Zonefs on-disk metadata is reduced to an immutable super block to
> persistently store a magic number and optional features flags and
> values. On mount, zonefs uses blkdev_report_zones() to obtain the device
> zone configuration and populates the mount point with a static file tree
> solely based on this information. E.g. file sizes come from the device
> zone type and write pointer offset managed by the device itself.
> 
> The zone files created on mount have the following characteristics.
> 1) Files representing zones of the same type are grouped together
>     under a common sub-directory:
>       * For conventional zones, the sub-directory "cnv" is used.
>       * For sequential write zones, the sub-directory "seq" is used.
>    These two directories are the only directories that exist in zonefs.
>    Users cannot create other directories and cannot rename nor delete
>    the "cnv" and "seq" sub-directories.
> 2) The name of zone files is the number of the file within the zone
>     type sub-directory, in order of increasing zone start sector.
> 3) The size of conventional zone files is fixed to the device zone size.
>     Conventional zone files cannot be truncated.
> 4) The size of sequential zone files represent the file's zone write
>     pointer position relative to the zone start sector. Truncating these
>     files is allowed only down to 0, in wich case, the zone is reset to
>     rewind the zone write pointer position to the start of the zone, or
>     up to the zone size, in which case the file's zone is transitioned
>     to the FULL state (finish zone operation).
> 5) All read and write operations to files are not allowed beyond the
>     file zone size. Any access exceeding the zone size is failed with
>     the -EFBIG error.
> 6) Creating, deleting, renaming or modifying any attribute of files and
>     sub-directories is not allowed.
> 
> Several optional features of zonefs can be enabled at format time.
> * Conventional zone aggregation: ranges of contiguous conventional
>    zones can be agregated into a single larger file instead of the
>    default one file per zone.
> * File ownership: The owner UID and GID of zone files is by default 0
>    (root) but can be changed to any valid UID/GID.
> * File access permissions: the default 640 access permissions can be
>    changed.
> 
> The mkzonefs tool is used to format zoned block devices for use with
> zonefs. This tool is available on Github at:
> 
> git@github.com:damien-lemoal/zonefs-tools.git.
> 
> zonefs-tools also includes a test suite which can be run against any
> zoned block device, including null_blk block device created with zoned
> mode.
> 
> Example: the following formats a 15TB host-managed SMR HDD with 256 MB
> zones with the conventional zones aggregation feature enabled.
> 
> $ sudo mkzonefs -o aggr_cnv /dev/sdX
> $ sudo mount -t zonefs /dev/sdX /mnt
> $ ls -l /mnt/
> total 0
> dr-xr-xr-x 2 root root     1 Nov 25 13:23 cnv
> dr-xr-xr-x 2 root root 55356 Nov 25 13:23 seq
> 
> The size of the zone files sub-directories indicate the number of files
> existing for each type of zones. In this example, there is only one
> conventional zone file (all conventional zones are agreggated under a
> single file).
> 
> $ ls -l /mnt/cnv
> total 137101312
> -rw-r----- 1 root root 140391743488 Nov 25 13:23 0
> 
> This aggregated conventional zone file can be used as a regular file.
> 
> $ sudo mkfs.ext4 /mnt/cnv/0
> $ sudo mount -o loop /mnt/cnv/0 /data
> 
> The "seq" sub-directory grouping files for sequential write zones has
> in this example 55356 zones.
> 
> $ ls -lv /mnt/seq
> total 14511243264
> -rw-r----- 1 root root 0 Nov 25 13:23 0
> -rw-r----- 1 root root 0 Nov 25 13:23 1
> -rw-r----- 1 root root 0 Nov 25 13:23 2
> ...
> -rw-r----- 1 root root 0 Nov 25 13:23 55354
> -rw-r----- 1 root root 0 Nov 25 13:23 55355
> 
> For sequential write zone files, the file size changes as data is
> appended at the end of the file, similarly to any regular file system.
> 
> $ dd if=/dev/zero of=/mnt/seq/0 bs=4K count=1 conv=notrunc oflag=direct
> 1+0 records in
> 1+0 records out
> 4096 bytes (4.1 kB, 4.0 KiB) copied, 1.05112 s, 3.9 kB/s
> 
> $ ls -l /mnt/seq/0
> -rw-r----- 1 root root 4096 Nov 25 13:23 /mnt/sdh/seq/0
> 
> The written file can be truncated to the zone size, prventing any
> further write operation.
> 
> $ truncate -s 268435456 /mnt/seq/0
> $ ls -l /mnt/seq/0
> -rw-r----- 1 root root 268435456 Nov 25 13:49 /mnt/seq/0
> 
> Truncation to 0 size allows freeing the file zone storage space and
> restart append-writes to the file.
> 
> $ truncate -s 0 /mnt/seq/0
> $ ls -l /mnt/seq/0
> -rw-r----- 1 root root 0 Nov 25 13:49 /mnt/seq/0
> 
> Since files are statically mapped to zones on the disk, the number of
> blocks of a file as reported by stat() and fstat() indicates the size
> of the file zone.
> 
> $ stat /mnt/seq/0
>    File: /mnt/seq/0
>    Size: 0       Blocks: 524288     IO Block: 4096   regular empty file
> Device: 870h/2160d      Inode: 50431       Links: 1
> Access: (0640/-rw-r-----)  Uid: (    0/    root)   Gid: (    0/  root)
> Access: 2019-11-25 13:23:57.048971997 +0900
> Modify: 2019-11-25 13:52:25.553805765 +0900
> Change: 2019-11-25 13:52:25.553805765 +0900
>   Birth: -
> 
> The number of blocks of the file ("Blocks") in units of 512B blocks
> gives the maximum file size of 524288 * 512 B = 256 MB, corresponding
> to the device zone size in this example. Of note is that the "IO block"
> field always indicates the minimum IO size for writes and corresponds
> to the device physical sector size.
> 
> This code contains contributions from:
> * Johannes Thumshirn <jthumshirn@suse.de>,
> * Darrick J. Wong <darrick.wong@oracle.com>,
> * Christoph Hellwig <hch@lst.de>,
> * Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com> and
> * Ting Yao <tingyao@hust.edu.cn>.
> 
> Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
> ---
>   MAINTAINERS                |    9 +
>   fs/Kconfig                 |    1 +
>   fs/Makefile                |    1 +
>   fs/zonefs/Kconfig          |    9 +
>   fs/zonefs/Makefile         |    4 +
>   fs/zonefs/super.c          | 1158 ++++++++++++++++++++++++++++++++++++
>   fs/zonefs/zonefs.h         |  169 ++++++
>   include/uapi/linux/magic.h |    1 +
>   8 files changed, 1352 insertions(+)
>   create mode 100644 fs/zonefs/Kconfig
>   create mode 100644 fs/zonefs/Makefile
>   create mode 100644 fs/zonefs/super.c
>   create mode 100644 fs/zonefs/zonefs.h
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 02d5278a4c9a..0641167ed2ea 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -18282,6 +18282,15 @@ L:	linux-kernel@vger.kernel.org
>   S:	Maintained
>   F:	arch/x86/kernel/cpu/zhaoxin.c
>   
> +ZONEFS FILESYSTEM
> +M:	Damien Le Moal <damien.lemoal@wdc.com>
> +M:	Naohiro Aota <naohiro.aota@wdc.com>
> +R:	Johannes Thumshirn <jth@kernel.org>
> +L:	linux-fsdevel@vger.kernel.org
> +T:	git git://git.kernel.org/pub/scm/linux/kernel/git/dlemoal/zonefs.git
> +S:	Maintained
> +F:	fs/zonefs/
> +
>   ZPOOL COMPRESSED PAGE STORAGE API
>   M:	Dan Streetman <ddstreet@ieee.org>
>   L:	linux-mm@kvack.org
> diff --git a/fs/Kconfig b/fs/Kconfig
> index 7b623e9fc1b0..a3f97ca2bd46 100644
> --- a/fs/Kconfig
> +++ b/fs/Kconfig
> @@ -40,6 +40,7 @@ source "fs/ocfs2/Kconfig"
>   source "fs/btrfs/Kconfig"
>   source "fs/nilfs2/Kconfig"
>   source "fs/f2fs/Kconfig"
> +source "fs/zonefs/Kconfig"
>   
>   config FS_DAX
>   	bool "Direct Access (DAX) support"
> diff --git a/fs/Makefile b/fs/Makefile
> index 1148c555c4d3..527f228a5e8a 100644
> --- a/fs/Makefile
> +++ b/fs/Makefile
> @@ -133,3 +133,4 @@ obj-$(CONFIG_CEPH_FS)		+= ceph/
>   obj-$(CONFIG_PSTORE)		+= pstore/
>   obj-$(CONFIG_EFIVAR_FS)		+= efivarfs/
>   obj-$(CONFIG_EROFS_FS)		+= erofs/
> +obj-$(CONFIG_ZONEFS_FS)		+= zonefs/
> diff --git a/fs/zonefs/Kconfig b/fs/zonefs/Kconfig
> new file mode 100644
> index 000000000000..6490547e9763
> --- /dev/null
> +++ b/fs/zonefs/Kconfig
> @@ -0,0 +1,9 @@
> +config ZONEFS_FS
> +	tristate "zonefs filesystem support"
> +	depends on BLOCK
> +	depends on BLK_DEV_ZONED
> +	help
> +	  zonefs is a simple File System which exposes zones of a zoned block
> +	  device as files.
> +
> +	  If unsure, say N.
> diff --git a/fs/zonefs/Makefile b/fs/zonefs/Makefile
> new file mode 100644
> index 000000000000..75a380aa1ae1
> --- /dev/null
> +++ b/fs/zonefs/Makefile
> @@ -0,0 +1,4 @@
> +# SPDX-License-Identifier: GPL-2.0
> +obj-$(CONFIG_ZONEFS_FS) += zonefs.o
> +
> +zonefs-y	:= super.o
> diff --git a/fs/zonefs/super.c b/fs/zonefs/super.c
> new file mode 100644
> index 000000000000..5a2558cae3e3
> --- /dev/null
> +++ b/fs/zonefs/super.c
> @@ -0,0 +1,1158 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Simple zone file system for zoned block devices.
> + *
> + * Copyright (C) 2019 Western Digital Corporation or its affiliates.
> + */
> +#include <linux/module.h>
> +#include <linux/fs.h>
> +#include <linux/magic.h>
> +#include <linux/iomap.h>
> +#include <linux/init.h>
> +#include <linux/slab.h>
> +#include <linux/blkdev.h>
> +#include <linux/statfs.h>
> +#include <linux/writeback.h>
> +#include <linux/quotaops.h>
> +#include <linux/seq_file.h>
> +#include <linux/parser.h>
> +#include <linux/uio.h>
> +#include <linux/mman.h>
> +#include <linux/sched/mm.h>
> +#include <linux/crc32.h>
> +
> +#include "zonefs.h"
> +
> +static int zonefs_iomap_begin(struct inode *inode, loff_t offset, loff_t length,
> +			      unsigned int flags, struct iomap *iomap,
> +			      struct iomap *srcmap)
> +{
> +	struct zonefs_sb_info *sbi = ZONEFS_SB(inode->i_sb);
> +	struct zonefs_inode_info *zi = ZONEFS_I(inode);
> +	loff_t max_isize = zi->i_max_size;
> +	loff_t isize;
> +
> +	/*
> +	 * For sequential zones, enforce direct IO writes. This is already
> +	 * checked when writes are issued, so warn about this here if we
> +	 * get buffered write to a sequential file inode.
> +	 */
> +	if (WARN_ON_ONCE(zi->i_ztype == ZONEFS_ZTYPE_SEQ &&
> +			 (flags & IOMAP_WRITE) && !(flags & IOMAP_DIRECT)))
> +		return -EIO;
> +
> +	/*
> +	 * For all zones, all blocks are always mapped. For sequential zones,
> +	 * all blocks after the write pointer (inode size) are always unwritten.
> +	 */
> +	mutex_lock(&zi->i_truncate_mutex);
> +	isize = i_size_read(inode);
> +	if (offset >= isize) {
> +		length = min(length, max_isize - offset);
> +		if (zi->i_ztype == ZONEFS_ZTYPE_CNV)
> +			iomap->type = IOMAP_MAPPED;
> +		else
> +			iomap->type = IOMAP_UNWRITTEN;
> +	} else {
> +		length = min(length, isize - offset);
> +		iomap->type = IOMAP_MAPPED;
> +	}
> +	mutex_unlock(&zi->i_truncate_mutex);
> +
> +	iomap->offset = offset & (~sbi->s_blocksize_mask);
> +	iomap->length = ((offset + length + sbi->s_blocksize_mask) &
> +			 (~sbi->s_blocksize_mask)) - iomap->offset;
> +	iomap->bdev = inode->i_sb->s_bdev;
> +	iomap->addr = (zi->i_zsector << SECTOR_SHIFT) + iomap->offset;
> +
> +	return 0;
> +}
> +
> +static const struct iomap_ops zonefs_iomap_ops = {
> +	.iomap_begin	= zonefs_iomap_begin,
> +};
> +
This probably shows my complete ignorance, but what is the effect on 
enforcing the direct I/O writes on the pagecache?
IE what happens for buffered reads? Will the pages be invalidated when a 
write has been issued?
Or do we simply rely on upper layers to ensure no concurrent buffered 
and direct I/O is being made?

[ .. ]
> +
> +static int zonefs_seq_file_truncate(struct inode *inode, loff_t isize)
> +{
> +	struct zonefs_inode_info *zi = ZONEFS_I(inode);
> +	loff_t old_isize;
> +	enum req_opf op;
> +	int ret = 0;
> +
> +	/*
> +	 * For sequential zone files, we can only allow truncating to 0 size,
> +	 * which is equivalent to a zone reset, or to the maximum file size,
> +	 * which is equivalent toa zone finish.

Spelling: to a

[ .. ]

Other than that:
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare@suse.de                               +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH 2/2] zonefs: Add documentation
  2019-12-12 18:38 ` [PATCH 2/2] zonefs: Add documentation Damien Le Moal
@ 2019-12-16  8:38   ` Hannes Reinecke
  2019-12-17  0:20     ` Damien Le Moal
  0 siblings, 1 reply; 10+ messages in thread
From: Hannes Reinecke @ 2019-12-16  8:38 UTC (permalink / raw)
  To: Damien Le Moal, linux-fsdevel, linux-xfs, linux-kernel, Linus Torvalds
  Cc: Johannes Thumshirn, Naohiro Aota, Darrick J . Wong

On 12/12/19 7:38 PM, Damien Le Moal wrote:
> Add the new file Documentation/filesystems/zonefs.txt to document zonefs
> principles and user-space tool usage.
> 
> Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
> ---
>   Documentation/filesystems/zonefs.txt | 150 +++++++++++++++++++++++++++
>   MAINTAINERS                          |   1 +
>   2 files changed, 151 insertions(+)
>   create mode 100644 Documentation/filesystems/zonefs.txt
> 
> diff --git a/Documentation/filesystems/zonefs.txt b/Documentation/filesystems/zonefs.txt
> new file mode 100644
> index 000000000000..e5d798f4087d
> --- /dev/null
> +++ b/Documentation/filesystems/zonefs.txt
> @@ -0,0 +1,150 @@
> +ZoneFS - Zone filesystem for Zoned block devices
> +
> +Overview
> +========
> +
> +zonefs is a very simple file system exposing each zone of a zoned block device
> +as a file. Unlike a regular file system with zoned block device support (e.g.
> +f2fs), zonefs does not hide the sequential write constraint of zoned block
> +devices to the user. Files representing sequential write zones of the device
> +must be written sequentially starting from the end of the file (append only
> +writes).
> +
> +As such, zonefs is in essence closer to a raw block device access interface
> +than to a full featured POSIX file system. The goal of zonefs is to simplify
> +the implementation of zoned block devices support in applications by replacing
> +raw block device file accesses with a richer file API, avoiding relying on
> +direct block device file ioctls which may be more obscure to developers. One
> +example of this approach is the implementation of LSM (log-structured merge)
> +tree structures (such as used in RocksDB and LevelDB) on zoned block devices by
> +allowing SSTables to be stored in a zone file similarly to a regular file system
> +rather than as a range of sectors of the entire disk. The introduction of the
> +higher level construct "one file is one zone" can help reducing the amount of
> +changes needed in the application as well as introducing support for different
> +application programming languages.
> +
> +zonefs on-disk metadata is reduced to a super block which persistently stores a
> +magic number and optional features flags and values. On mount, zonefs uses
> +blkdev_report_zones() to obtain the device zone configuration and populates
> +the mount point with a static file tree solely based on this information.
> +E.g. file sizes come from the device zone type and write pointer offset managed
> +by the device itself.
> +
> +The zone files created on mount have the following characteristics.
> +1) Files representing zones of the same type are grouped together
> +   under the same sub-directory:
> +  * For conventional zones, the sub-directory "cnv" is used.
> +  * For sequential write zones, the sub-directory "seq" is used.
> +  These two directories are the only directories that exist in zonefs. Users
> +  cannot create other directories and cannot rename nor delete the "cnv" and
> +  "seq" sub-directories.
> +2) The name of zone files is the number of the file within the zone type
> +   sub-directory, in order of increasing zone start sector.
> +3) The size of conventional zone files is fixed to the device zone size.
> +   Conventional zone files cannot be truncated.
> +4) The size of sequential zone files represent the file's zone write pointer
> +   position relative to the zone start sector. Truncating these files is
> +   allowed only down to 0, in wich case, the zone is reset to rewind the file
> +   zone write pointer position to the start of the zone, or up to the zone size,
> +   in which case the file's zone is transitioned to the FULL state (finish zone
> +   operation).
> +5) All read and write operations to files are not allowed beyond the file zone
> +   size. Any access exceeding the zone size is failed with the -EFBIG error.
> +6) Creating, deleting, renaming or modifying any attribute of files and
> +   sub-directories is not allowed.
> +
> +Several optional features of zonefs can be enabled at format time.
> +* Conventional zone aggregation: ranges of contiguous conventional zones can be
> +  agregated into a single larger file instead of the default one file per zone.
> +* File ownership: The owner UID and GID of zone files is by default 0 (root)
> +  but can be changed to any valid UID/GID.
> +* File access permissions: the default 640 access permissions can be changed.
> +

Please mention the 'direct writes only to sequential zones' restriction.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare@suse.de                               +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH 0/2] New zonefs file system
  2019-12-16  8:18 ` [PATCH 0/2] New zonefs file system Enrico Weigelt, metux IT consult
@ 2019-12-17  0:05   ` Damien Le Moal
  0 siblings, 0 replies; 10+ messages in thread
From: Damien Le Moal @ 2019-12-17  0:05 UTC (permalink / raw)
  To: Enrico Weigelt, metux IT consult, linux-fsdevel, linux-xfs,
	linux-kernel, Linus Torvalds
  Cc: Johannes Thumshirn, Naohiro Aota, Darrick J . Wong, Hannes Reinecke

On 2019/12/16 17:19, Enrico Weigelt, metux IT consult wrote:
> On 12.12.19 19:38, Damien Le Moal wrote:
> 
> Hi,
> 
>> zonefs is a very simple file system exposing each zone of a zoned block
>> device as a file. Unlike a regular file system with zoned block device
>> support (e.g. f2fs or the on-going btrfs effort), zonefs does not hide
>> the sequential write constraint of zoned block devices to the user.
> 
> Just curious: what's the exact definition of "zoned" here ?
> Something like partitions ?

As Carlos commented already, a zoned block device is Linux abstraction
used to handle SMR HDDs (Shingled Magnetic Recording). These disks
expose an LBA range that is divided into zones that can only be written
sequentially for host-managed models. Other models such as host-aware or
drive-managed allow random writes to all zones at the cost of potential
serious performance degradation due to disk internal garbage collection
of zones (similarly to an SSD handling of erase blocks).

While today zoned block devices exist on the market only in the form of
SMR disks, NVMe SSDs will also soon be available with the completion of
the Zoned Namespace specifications.

Zoning of block devices has several advantages: higher capacities for
HDDs and more predictable and lower IO latencies for SSDs (almost no
internal GC/weir leveling needed). But taking full advantage of these
devices require software changes on the host due to the sequential write
constraint imposed by the devices interface.

> Can these files then also serve as block devices for other filesystems ?
> Just a funny idea: could we handle partitions by a file system ?
> 
> Even more funny idea: give file systems block device ops, so they can
> be directly used as such (w/o explicitly using loopdev) ;-)

This is outside the scope of this thread, so let's not start a
discussion about this here. Start a new thread !

>> Files representing sequential write zones of the device must be written
>> sequentially starting from the end of the file (append only writes).
> 
> So, these files can only be accessed like a tape ?

Writes must be sequential within a zone but reads can be random to any
writen LBA.

> Assuming you're working ontop of standard block devices anyways (instead
> of tape-like media ;-)) - why introducing such a limitation ?

See above: the limitation is physical, by the device, so that different
improvements can be achieved depending on the storage medium being used
(increased capacity, lower latencies, lower over provisioning, etc)

> 
>> zonefs is not a POSIX compliant file system. It's goal is to simplify
>> the implementation of zoned block devices support in applications by
>> replacing raw block device file accesses with a richer file based API,
>> avoiding relying on direct block device file ioctls which may
>> be more obscure to developers. 
> 
> ioctls ?
> 
> Last time I checked, block devices could be easily accessed via plain
> file ops (read, write, seek, ...). You can basically treat them just
> like big files of fixed size.

I was not clear, my apologies. I am refering here to the zoned block
device related ioctls defined in include/uapi/linux/blkzoned.h. These
ioctls allow an application to manage the device zones (obtain zone
information, erase zones, etc). These ioctls trigger issuing zone
related commands to the device. These commands are defined by the ZBC
and ZAC standards for SCSI and ATA, and NVMe Zoned Namespace in the very
near future.

>> One example of this approach is the
>> implementation of LSM (log-structured merge) tree structures (such as
>> used in RocksDB and LevelDB)
> 
> The same LevelDB as used eg. in Chrome browser, which destroys itself
> every time a little temporary problem (eg. disk full) occours ?
> If that's the usecase I'd rather use an simple in-memory table instead
> and and enough swap, as leveldb isn't reliable enough for persistent
> data anyways :p

The intent of my comment was not to advocate for or discuss the merits
of any particular KV implementation. I was only pointing out that zonefs
does not come in a void and that we do have use cases for it and did the
work on some user space software to validate it. Leveldb and RocksDB are
the 2 LSM-tree based KV stores we worked on as they are very popular and
widely used.

>> on zoned block devices by allowing SSTables
>> to be stored in a zone file similarly to a regular file system rather
>> than as a range of sectors of a zoned device. The introduction of the
>> higher level construct "one file is one zone" can help reducing the
>> amount of changes needed in the application while at the same time
>> allowing the use of zoned block devices with various programming
>> languages other than C.
> 
> Why not just simply use files on a suited filesystem (w/ low block io
> overhead) or LVM volumes ?

Using a file system compliant with zoned block device constraint such as
f2fs or btrfs (on-going work) is certainly a valid approach. However,
this may not be the most optimal one if the application being used as a
mostly sequential write behavior. LSM-tree based KV stores fall into
this category: SSTables are large (several MB) and always written
sequentially. There are not random writes, which facilitates supporting
directly zoned block devices without the need for a file system which
would add a GC background process and degrade performance. As mentioned
in the cover letter, zonefs goal is to facilitate the implementation of
this support compared toa pure raw block device use.

> 
> 
> --mtx
> 


-- 
Damien Le Moal
Western Digital Research

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH 1/2] fs: New zonefs file system
  2019-12-16  8:36   ` Hannes Reinecke
@ 2019-12-17  0:20     ` Damien Le Moal
  2019-12-17  7:28       ` Hannes Reinecke
  0 siblings, 1 reply; 10+ messages in thread
From: Damien Le Moal @ 2019-12-17  0:20 UTC (permalink / raw)
  To: Hannes Reinecke, linux-fsdevel, linux-xfs, linux-kernel, Linus Torvalds
  Cc: Johannes Thumshirn, Naohiro Aota, Darrick J . Wong

On 2019/12/16 17:36, Hannes Reinecke wrote:
[...]
>> +static int zonefs_iomap_begin(struct inode *inode, loff_t offset, loff_t length,
>> +			      unsigned int flags, struct iomap *iomap,
>> +			      struct iomap *srcmap)
>> +{
>> +	struct zonefs_sb_info *sbi = ZONEFS_SB(inode->i_sb);
>> +	struct zonefs_inode_info *zi = ZONEFS_I(inode);
>> +	loff_t max_isize = zi->i_max_size;
>> +	loff_t isize;
>> +
>> +	/*
>> +	 * For sequential zones, enforce direct IO writes. This is already
>> +	 * checked when writes are issued, so warn about this here if we
>> +	 * get buffered write to a sequential file inode.
>> +	 */
>> +	if (WARN_ON_ONCE(zi->i_ztype == ZONEFS_ZTYPE_SEQ &&
>> +			 (flags & IOMAP_WRITE) && !(flags & IOMAP_DIRECT)))
>> +		return -EIO;
>> +
>> +	/*
>> +	 * For all zones, all blocks are always mapped. For sequential zones,
>> +	 * all blocks after the write pointer (inode size) are always unwritten.
>> +	 */
>> +	mutex_lock(&zi->i_truncate_mutex);
>> +	isize = i_size_read(inode);
>> +	if (offset >= isize) {
>> +		length = min(length, max_isize - offset);
>> +		if (zi->i_ztype == ZONEFS_ZTYPE_CNV)
>> +			iomap->type = IOMAP_MAPPED;
>> +		else
>> +			iomap->type = IOMAP_UNWRITTEN;
>> +	} else {
>> +		length = min(length, isize - offset);
>> +		iomap->type = IOMAP_MAPPED;
>> +	}
>> +	mutex_unlock(&zi->i_truncate_mutex);
>> +
>> +	iomap->offset = offset & (~sbi->s_blocksize_mask);
>> +	iomap->length = ((offset + length + sbi->s_blocksize_mask) &
>> +			 (~sbi->s_blocksize_mask)) - iomap->offset;
>> +	iomap->bdev = inode->i_sb->s_bdev;
>> +	iomap->addr = (zi->i_zsector << SECTOR_SHIFT) + iomap->offset;
>> +
>> +	return 0;
>> +}
>> +
>> +static const struct iomap_ops zonefs_iomap_ops = {
>> +	.iomap_begin	= zonefs_iomap_begin,
>> +};
>> +
> This probably shows my complete ignorance, but what is the effect on 
> enforcing the direct I/O writes on the pagecache?
> IE what happens for buffered reads? Will the pages be invalidated when a 
> write has been issued?

Yes, a direct write issued to a file range that has cached pages result
in these pages to be invalidated. But note that in the case of zonefs,
this can happen only in the case of conventional zones. For sequential
zones, this does not happen: reads can be buffered and cache pages but
only for pages below the write pointer. And writes can only be issued at
the write pointer. So there is never any possible overlap between
buffered reads and direct writes.

> Or do we simply rely on upper layers to ensure no concurrent buffered 
> and direct I/O is being made?

Nope. VFS, or the file system specific implementation, takes care of
that. See generic_file_direct_write() and its call to
invalidate_inode_pages2_range().

> 
> [ .. ]
>> +
>> +static int zonefs_seq_file_truncate(struct inode *inode, loff_t isize)
>> +{
>> +	struct zonefs_inode_info *zi = ZONEFS_I(inode);
>> +	loff_t old_isize;
>> +	enum req_opf op;
>> +	int ret = 0;
>> +
>> +	/*
>> +	 * For sequential zone files, we can only allow truncating to 0 size,
>> +	 * which is equivalent to a zone reset, or to the maximum file size,
>> +	 * which is equivalent toa zone finish.
> 
> Spelling: to a

Good catch. Will fix it. Thanks.

> 
> [ .. ]
> 
> Other than that:
> Reviewed-by: Hannes Reinecke <hare@suse.de>
> 
> Cheers,
> 
> Hannes
> 


-- 
Damien Le Moal
Western Digital Research

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH 2/2] zonefs: Add documentation
  2019-12-16  8:38   ` Hannes Reinecke
@ 2019-12-17  0:20     ` Damien Le Moal
  0 siblings, 0 replies; 10+ messages in thread
From: Damien Le Moal @ 2019-12-17  0:20 UTC (permalink / raw)
  To: Hannes Reinecke, linux-fsdevel, linux-xfs, linux-kernel, Linus Torvalds
  Cc: Johannes Thumshirn, Naohiro Aota, Darrick J . Wong

On 2019/12/16 17:38, Hannes Reinecke wrote:
> On 12/12/19 7:38 PM, Damien Le Moal wrote:
>> Add the new file Documentation/filesystems/zonefs.txt to document zonefs
>> principles and user-space tool usage.
>>
>> Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
>> ---
>>   Documentation/filesystems/zonefs.txt | 150 +++++++++++++++++++++++++++
>>   MAINTAINERS                          |   1 +
>>   2 files changed, 151 insertions(+)
>>   create mode 100644 Documentation/filesystems/zonefs.txt
>>
>> diff --git a/Documentation/filesystems/zonefs.txt b/Documentation/filesystems/zonefs.txt
>> new file mode 100644
>> index 000000000000..e5d798f4087d
>> --- /dev/null
>> +++ b/Documentation/filesystems/zonefs.txt
>> @@ -0,0 +1,150 @@
>> +ZoneFS - Zone filesystem for Zoned block devices
>> +
>> +Overview
>> +========
>> +
>> +zonefs is a very simple file system exposing each zone of a zoned block device
>> +as a file. Unlike a regular file system with zoned block device support (e.g.
>> +f2fs), zonefs does not hide the sequential write constraint of zoned block
>> +devices to the user. Files representing sequential write zones of the device
>> +must be written sequentially starting from the end of the file (append only
>> +writes).
>> +
>> +As such, zonefs is in essence closer to a raw block device access interface
>> +than to a full featured POSIX file system. The goal of zonefs is to simplify
>> +the implementation of zoned block devices support in applications by replacing
>> +raw block device file accesses with a richer file API, avoiding relying on
>> +direct block device file ioctls which may be more obscure to developers. One
>> +example of this approach is the implementation of LSM (log-structured merge)
>> +tree structures (such as used in RocksDB and LevelDB) on zoned block devices by
>> +allowing SSTables to be stored in a zone file similarly to a regular file system
>> +rather than as a range of sectors of the entire disk. The introduction of the
>> +higher level construct "one file is one zone" can help reducing the amount of
>> +changes needed in the application as well as introducing support for different
>> +application programming languages.
>> +
>> +zonefs on-disk metadata is reduced to a super block which persistently stores a
>> +magic number and optional features flags and values. On mount, zonefs uses
>> +blkdev_report_zones() to obtain the device zone configuration and populates
>> +the mount point with a static file tree solely based on this information.
>> +E.g. file sizes come from the device zone type and write pointer offset managed
>> +by the device itself.
>> +
>> +The zone files created on mount have the following characteristics.
>> +1) Files representing zones of the same type are grouped together
>> +   under the same sub-directory:
>> +  * For conventional zones, the sub-directory "cnv" is used.
>> +  * For sequential write zones, the sub-directory "seq" is used.
>> +  These two directories are the only directories that exist in zonefs. Users
>> +  cannot create other directories and cannot rename nor delete the "cnv" and
>> +  "seq" sub-directories.
>> +2) The name of zone files is the number of the file within the zone type
>> +   sub-directory, in order of increasing zone start sector.
>> +3) The size of conventional zone files is fixed to the device zone size.
>> +   Conventional zone files cannot be truncated.
>> +4) The size of sequential zone files represent the file's zone write pointer
>> +   position relative to the zone start sector. Truncating these files is
>> +   allowed only down to 0, in wich case, the zone is reset to rewind the file
>> +   zone write pointer position to the start of the zone, or up to the zone size,
>> +   in which case the file's zone is transitioned to the FULL state (finish zone
>> +   operation).
>> +5) All read and write operations to files are not allowed beyond the file zone
>> +   size. Any access exceeding the zone size is failed with the -EFBIG error.
>> +6) Creating, deleting, renaming or modifying any attribute of files and
>> +   sub-directories is not allowed.
>> +
>> +Several optional features of zonefs can be enabled at format time.
>> +* Conventional zone aggregation: ranges of contiguous conventional zones can be
>> +  agregated into a single larger file instead of the default one file per zone.
>> +* File ownership: The owner UID and GID of zone files is by default 0 (root)
>> +  but can be changed to any valid UID/GID.
>> +* File access permissions: the default 640 access permissions can be changed.
>> +
> 
> Please mention the 'direct writes only to sequential zones' restriction.

Yes, indeed, this is missing. Will add it.

> 
> Cheers,
> 
> Hannes
> 


-- 
Damien Le Moal
Western Digital Research

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH 1/2] fs: New zonefs file system
  2019-12-17  0:20     ` Damien Le Moal
@ 2019-12-17  7:28       ` Hannes Reinecke
  0 siblings, 0 replies; 10+ messages in thread
From: Hannes Reinecke @ 2019-12-17  7:28 UTC (permalink / raw)
  To: Damien Le Moal, linux-fsdevel, linux-xfs, linux-kernel, Linus Torvalds
  Cc: Johannes Thumshirn, Naohiro Aota, Darrick J . Wong

On 12/17/19 1:20 AM, Damien Le Moal wrote:
> On 2019/12/16 17:36, Hannes Reinecke wrote:
> [...]
>>> +static int zonefs_iomap_begin(struct inode *inode, loff_t offset, loff_t length,
>>> +			      unsigned int flags, struct iomap *iomap,
>>> +			      struct iomap *srcmap)
>>> +{
>>> +	struct zonefs_sb_info *sbi = ZONEFS_SB(inode->i_sb);
>>> +	struct zonefs_inode_info *zi = ZONEFS_I(inode);
>>> +	loff_t max_isize = zi->i_max_size;
>>> +	loff_t isize;
>>> +
>>> +	/*
>>> +	 * For sequential zones, enforce direct IO writes. This is already
>>> +	 * checked when writes are issued, so warn about this here if we
>>> +	 * get buffered write to a sequential file inode.
>>> +	 */
>>> +	if (WARN_ON_ONCE(zi->i_ztype == ZONEFS_ZTYPE_SEQ &&
>>> +			 (flags & IOMAP_WRITE) && !(flags & IOMAP_DIRECT)))
>>> +		return -EIO;
>>> +
>>> +	/*
>>> +	 * For all zones, all blocks are always mapped. For sequential zones,
>>> +	 * all blocks after the write pointer (inode size) are always unwritten.
>>> +	 */
>>> +	mutex_lock(&zi->i_truncate_mutex);
>>> +	isize = i_size_read(inode);
>>> +	if (offset >= isize) {
>>> +		length = min(length, max_isize - offset);
>>> +		if (zi->i_ztype == ZONEFS_ZTYPE_CNV)
>>> +			iomap->type = IOMAP_MAPPED;
>>> +		else
>>> +			iomap->type = IOMAP_UNWRITTEN;
>>> +	} else {
>>> +		length = min(length, isize - offset);
>>> +		iomap->type = IOMAP_MAPPED;
>>> +	}
>>> +	mutex_unlock(&zi->i_truncate_mutex);
>>> +
>>> +	iomap->offset = offset & (~sbi->s_blocksize_mask);
>>> +	iomap->length = ((offset + length + sbi->s_blocksize_mask) &
>>> +			 (~sbi->s_blocksize_mask)) - iomap->offset;
>>> +	iomap->bdev = inode->i_sb->s_bdev;
>>> +	iomap->addr = (zi->i_zsector << SECTOR_SHIFT) + iomap->offset;
>>> +
>>> +	return 0;
>>> +}
>>> +
>>> +static const struct iomap_ops zonefs_iomap_ops = {
>>> +	.iomap_begin	= zonefs_iomap_begin,
>>> +};
>>> +
>> This probably shows my complete ignorance, but what is the effect on
>> enforcing the direct I/O writes on the pagecache?
>> IE what happens for buffered reads? Will the pages be invalidated when a
>> write has been issued?
> 
> Yes, a direct write issued to a file range that has cached pages result
> in these pages to be invalidated. But note that in the case of zonefs,
> this can happen only in the case of conventional zones. For sequential
> zones, this does not happen: reads can be buffered and cache pages but
> only for pages below the write pointer. And writes can only be issued at
> the write pointer. So there is never any possible overlap between
> buffered reads and direct writes.
> 
Oh, indeed, you are correct. That's indeed easy then.

>> Or do we simply rely on upper layers to ensure no concurrent buffered
>> and direct I/O is being made?
> 
> Nope. VFS, or the file system specific implementation, takes care of
> that. See generic_file_direct_write() and its call to
> invalidate_inode_pages2_range().
> 
Of course.
One could even say: not applicable, as it won't happen.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare@suse.de                               +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2019-12-17  7:28 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-12-12 18:38 [PATCH 0/2] New zonefs file system Damien Le Moal
2019-12-12 18:38 ` [PATCH 1/2] fs: " Damien Le Moal
2019-12-16  8:36   ` Hannes Reinecke
2019-12-17  0:20     ` Damien Le Moal
2019-12-17  7:28       ` Hannes Reinecke
2019-12-12 18:38 ` [PATCH 2/2] zonefs: Add documentation Damien Le Moal
2019-12-16  8:38   ` Hannes Reinecke
2019-12-17  0:20     ` Damien Le Moal
2019-12-16  8:18 ` [PATCH 0/2] New zonefs file system Enrico Weigelt, metux IT consult
2019-12-17  0:05   ` Damien Le Moal

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).