* [PATCH 00/10] dm: zoned block device support
@ 2017-04-21  3:55 damien.lemoal
  2017-04-21  3:55 ` [PATCH 01/10] dm-table: Introduce DM_TARGET_ZONED_HM feature damien.lemoal
                   ` (11 more replies)
  0 siblings, 12 replies; 18+ messages in thread
From: damien.lemoal @ 2017-04-21  3:55 UTC (permalink / raw)
  To: dm-devel, Mike Snitzer, Alasdair Kergon
  Cc: Hannes Reinecke, Christoph Hellwig, Bart Van Assche, linux-block,
	Damien Le Moal

From: Damien Le Moal <damien.lemoal@wdc.com>

This series introduces zoned block device support to the device mapper
infrastructure. Patches are as follows:

- Patch 1: Add a new target type feature flag to indicate if a target type
  supports host-managed zoned block devices. This prevents using these drives
  with the current target types, since none of them implement the support
  required to operate properly with these drives.
- Patch 2: If a target device is a zoned block device, check that the range of
  LBAs mapped is aligned to the device zone size and that the device start
  offset also aligns to zone boundaries. This is necessary for the correct
  execution of zone reset and zone report operations.
- Patch 3: Check that the different target devices of a table have compatible
  zone sizes and models. This is necessary for target types that expose a zone
  model different from the underlying device.
- Patch 4: Fix handling of REQ_OP_ZONE_RESET bios
- Patch 5: Fix handling of REQ_OP_ZONE_REPORT bios
- Patch 6: Introduce a new helper function to reverse map a device zone report
  to the target LBA range
- Patch 7: Add support for host-managed zoned block devices to dm-flakey. This
  is necessary for testing file systems that natively support these drives
  (e.g. f2fs).
- Patch 8: Add support for zoned block devices to dm-linear. This can have
  useful applications during development and testing (e.g. allowing the
  creation of smaller zoned devices with different combinations and positions
  of zones). There are also interesting applications in production, for
  instance the ability to aggregate conventional zones of different drives to
  create a regular disk. A sketch of the target-side changes this requires is
  shown after this list.
- Patch 9: Add sequential write enforcement to dm_kcopyd_copy so that
  sequential zones of a host-managed zoned block device can be specified as
  destinations.
- Patch 10: New dm-zoned target type (this was already sent for review twice).
  This resend adds modifications suggested by Hannes to implement reclaim
  using dm-kcopyd. dm-zoned depends on patch 9.
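
To illustrate how patches 1, 6 and 8 fit together from a target's point of
view, here is a minimal sketch of a hypothetical bio-based target modeled on
the dm-linear changes in this series. The target name "zexample" and its
context structure are made up; DM_TARGET_ZONED_HM and dm_remap_zone_report()
are the helpers introduced by patches 1 and 6, and dm_target_offset() is the
existing device mapper helper:

#include <linux/module.h>
#include <linux/device-mapper.h>

struct zexample_ctx {
	struct dm_dev	*dev;	/* backing zoned block device */
	sector_t	start;	/* start offset within that device */
};

static int zexample_map(struct dm_target *ti, struct bio *bio)
{
	struct zexample_ctx *zc = ti->private;

	bio->bi_bdev = zc->dev->bdev;
	/* Zone reset bios carry no sectors but their position must be remapped */
	if (bio_sectors(bio) || bio_op(bio) == REQ_OP_ZONE_RESET)
		bio->bi_iter.bi_sector =
			zc->start + dm_target_offset(ti, bio->bi_iter.bi_sector);

	return DM_MAPIO_REMAPPED;
}

static int zexample_end_io(struct dm_target *ti, struct bio *bio, int error)
{
	struct zexample_ctx *zc = ti->private;

	/* Remap the zone descriptors of a report reply back to the dm device */
	if (!error && bio_op(bio) == REQ_OP_ZONE_REPORT)
		dm_remap_zone_report(ti, bio, zc->start);

	return error;
}

static struct target_type zexample_target = {
	.name     = "zexample",
	.version  = {1, 0, 0},
	.features = DM_TARGET_ZONED_HM,	/* allow host-managed zoned devices */
	.module   = THIS_MODULE,
	.map      = zexample_map,
	.end_io   = zexample_end_io,
	/* .ctr, .dtr, .iterate_devices, etc. omitted */
};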

As always, comments and reviews are welcome.

Damien Le Moal (10):
  dm-table: Introduce DM_TARGET_ZONED_HM feature
  dm-table: Check device area zone alignment
  dm-table: Check block devices zone model compatibility
  dm: Fix REQ_OP_ZONE_RESET bio handling
  dm: Fix REQ_OP_ZONE_REPORT bio handling
  dm: Introduce dm_remap_zone_report()
  dm-flakey: Add support for zoned block devices
  dm-linear: Add support for zoned block devices
  dm-kcopyd: Add sequential write feature
  dm-zoned: Drive-managed zoned block device target

 Documentation/device-mapper/dm-zoned.txt |  154 +++
 drivers/md/Kconfig                       |   19 +
 drivers/md/Makefile                      |    2 +
 drivers/md/dm-flakey.c                   |   21 +-
 drivers/md/dm-kcopyd.c                   |   68 +-
 drivers/md/dm-linear.c                   |   14 +-
 drivers/md/dm-table.c                    |  145 ++
 drivers/md/dm-zoned-io.c                 |  998 ++++++++++++++
 drivers/md/dm-zoned-metadata.c           | 2195 ++++++++++++++++++++++++++++++
 drivers/md/dm-zoned-reclaim.c            |  535 ++++++++
 drivers/md/dm-zoned.h                    |  528 +++++++
 drivers/md/dm.c                          |   93 +-
 include/linux/device-mapper.h            |   16 +
 include/linux/dm-kcopyd.h                |    1 +
 14 files changed, 4783 insertions(+), 6 deletions(-)
 create mode 100644 Documentation/device-mapper/dm-zoned.txt
 create mode 100644 drivers/md/dm-zoned-io.c
 create mode 100644 drivers/md/dm-zoned-metadata.c
 create mode 100644 drivers/md/dm-zoned-reclaim.c
 create mode 100644 drivers/md/dm-zoned.h

-- 
2.9.3

^ permalink raw reply	[flat|nested] 18+ messages in thread

* [PATCH 01/10] dm-table: Introduce DM_TARGET_ZONED_HM feature
  2017-04-21  3:55 [PATCH 00/10] dm: zoned block device support damien.lemoal
@ 2017-04-21  3:55 ` damien.lemoal
  2017-04-21  3:55 ` [PATCH 02/10] dm-table: Check device area zone alignment damien.lemoal
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 18+ messages in thread
From: damien.lemoal @ 2017-04-21  3:55 UTC (permalink / raw)
  To: dm-devel, Mike Snitzer, Alasdair Kergon
  Cc: Hannes Reinecke, Christoph Hellwig, Bart Van Assche, linux-block,
	Damien Le Moal

From: Damien Le Moal <damien.lemoal@wdc.com>

The target drivers currently available will not operate correctly if a
table target maps onto a host-managed zoned block device.

To avoid problems, this patch introduces the new feature flag
DM_TARGET_ZONED_HM for a target driver to explicitly state that it
supports host-managed zoned block devices. This feature is checked
in dm_get_device() to prevent the addition to a table of a target
mapping to a host-managed zoned block device if the target type does
not have the feature enabled.

Note that as host-aware zoned block devices are backward compatible
with regular block devices, they can be used by any of the current
target types. This new feature is thus restricted to host-managed
zoned block devices.

Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
---
 drivers/md/dm-table.c         | 23 +++++++++++++++++++++++
 include/linux/device-mapper.h |  6 ++++++
 2 files changed, 29 insertions(+)

diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index 3ad16d9..06d3b7b 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -388,6 +388,24 @@ dev_t dm_get_dev_t(const char *path)
 EXPORT_SYMBOL_GPL(dm_get_dev_t);
 
 /*
+ * Check if the target supports host-managed zoned block devices.
+ */
+static bool device_supported(struct dm_target *ti, struct dm_dev *dev)
+{
+	struct block_device *bdev = dev->bdev;
+	char b[BDEVNAME_SIZE];
+
+	if (bdev_zoned_model(bdev) == BLK_ZONED_HM &&
+	    !dm_target_zoned_hm(ti->type)) {
+		DMWARN("%s: Unsupported host-managed zoned block device %s",
+		       dm_device_name(ti->table->md), bdevname(bdev, b));
+		return false;
+	}
+
+	return true;
+}
+
+/*
  * Add a device to the list, or just increment the usage count if
  * it's already present.
  */
@@ -426,6 +444,11 @@ int dm_get_device(struct dm_target *ti, const char *path, fmode_t mode,
 	}
 	atomic_inc(&dd->count);
 
+	if (!device_supported(ti, dd->dm_dev)) {
+		dm_put_device(ti, dd->dm_dev);
+		return -ENOTSUPP;
+	}
+
 	*result = dd->dm_dev;
 	return 0;
 }
diff --git a/include/linux/device-mapper.h b/include/linux/device-mapper.h
index a7e6903..b3c2408 100644
--- a/include/linux/device-mapper.h
+++ b/include/linux/device-mapper.h
@@ -214,6 +214,12 @@ struct target_type {
 #define dm_target_is_wildcard(type)	((type)->features & DM_TARGET_WILDCARD)
 
 /*
+ * Indicates that a target supports host-managed zoned block devices.
+ */
+#define DM_TARGET_ZONED_HM		0x00000010
+#define dm_target_zoned_hm(type)	((type)->features & DM_TARGET_ZONED_HM)
+
+/*
  * Some targets need to be sent the same WRITE bio severals times so
  * that they can send copies of it to different devices.  This function
  * examines any supplied bio and returns the number of copies of it the
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 02/10] dm-table: Check device area zone alignment
  2017-04-21  3:55 [PATCH 00/10] dm: zoned block device support damien.lemoal
  2017-04-21  3:55 ` [PATCH 01/10] dm-table: Introduce DM_TARGET_ZONED_HM feature damien.lemoal
@ 2017-04-21  3:55 ` damien.lemoal
  2017-04-21  3:55   ` damien.lemoal
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 18+ messages in thread
From: damien.lemoal @ 2017-04-21  3:55 UTC (permalink / raw)
  To: dm-devel, Mike Snitzer, Alasdair Kergon
  Cc: Hannes Reinecke, Christoph Hellwig, Bart Van Assche, linux-block,
	Damien Le Moal

From: Damien Le Moal <damien.lemoal@wdc.com>

If a target maps to a zoned block device, check that the device area is
aligned on zone boundaries to avoid problems with REQ_OP_ZONE_RESET
operations (resetting a partially mapped sequential zone would not be
possible). This also greatly facilitates the processing of zone reports
with REQ_OP_ZONE_REPORT bios.
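
As a concrete example (assuming 512 B logical sectors and the common 256 MB
zone size, i.e. 524288 sectors per zone): a table entry mapping 524288 sectors
starting at sector 524288 of the zoned device passes the check, while one
starting at sector 1000, or one whose non zone-aligned length ends in the
middle of the device, is rejected.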

Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
---
 drivers/md/dm-table.c | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index 06d3b7b..6947f0f 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -339,6 +339,33 @@ static int device_area_is_invalid(struct dm_target *ti, struct dm_dev *dev,
 		return 1;
 	}
 
+	/*
+	 * If the target is mapped to a zoned block device, check
+	 * that the device zones are not partially mapped.
+	 */
+	if (bdev_zoned_model(bdev) != BLK_ZONED_NONE) {
+		unsigned int zone_sectors = bdev_zone_sectors(bdev);
+
+		if (start & (zone_sectors - 1)) {
+			DMWARN("%s: start=%llu not aligned to h/w "
+			       "zone size %u of %s",
+			       dm_device_name(ti->table->md),
+			       (unsigned long long)start,
+			       zone_sectors, bdevname(bdev, b));
+			return 1;
+		}
+
+		if (start + len < dev_size &&
+		    len & (zone_sectors - 1)) {
+			DMWARN("%s: len=%llu not aligned to h/w "
+			       "zone size %u of %s",
+			       dm_device_name(ti->table->md),
+			       (unsigned long long)len,
+			       zone_sectors, bdevname(bdev, b));
+			return 1;
+		}
+	}
+
 	return 0;
 }
 
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 03/10] dm-table: Check block devices zone model compatibility
  2017-04-21  3:55 [PATCH 00/10] dm: zoned block device support damien.lemoal
@ 2017-04-21  3:55   ` damien.lemoal
  2017-04-21  3:55 ` [PATCH 02/10] dm-table: Check device area zone alignment damien.lemoal
                     ` (10 subsequent siblings)
  11 siblings, 0 replies; 18+ messages in thread
From: damien.lemoal @ 2017-04-21  3:55 UTC (permalink / raw)
  To: dm-devel, Mike Snitzer, Alasdair Kergon
  Cc: Hannes Reinecke, Christoph Hellwig, Bart Van Assche, linux-block,
	Damien Le Moal

From: Damien Le Moal <damien.lemoal@wdc.com>

When setting the dm device queue limits, several possibilities exist
for zoned block devices:
1) The dm target driver may want to expose a different zone model (e.g.
host-managed device emulation or a regular block device on top of
host-managed zoned block devices)
2) Expose the underlying zone model of the devices as-is

To allow both cases, the underlying block device zone model must be set
in the target limits in dm_set_device_limits() and the compatibility of
all devices must be checked, similarly to the logical block size
alignment check. For this last check, introduce the function
validate_hardware_zone_model() to verify that all targets of a table
have the same zone model and that the zone sizes of the target devices
are all equal.
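
For example (a hypothetical table): concatenating a host-managed SMR drive
with a regular drive through dm-linear would be rejected by this check, while
a target that emulates a regular block device on top of a host-managed drive
(as dm-zoned in patch 10 does) can override the zone model in its io_hints
method so that the table validates.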

Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
---
 drivers/md/dm-table.c | 95 +++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 95 insertions(+)

diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index 6947f0f..4683cb6 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -505,6 +505,8 @@ static int dm_set_device_limits(struct dm_target *ti, struct dm_dev *dev,
 		       q->limits.alignment_offset,
 		       (unsigned long long) start << SECTOR_SHIFT);
 
+	limits->zoned = bdev_zoned_model(bdev);
+
 	return 0;
 }
 
@@ -720,6 +722,96 @@ static int validate_hardware_logical_block_alignment(struct dm_table *table,
 	return 0;
 }
 
+/*
+ * Check a device's table for compatibility between the zoned devices used by
+ * the table targets. The zone model may come directly from a target block
+ * device or may have been set by the target using the io_hints method.
+ * Overall, if any of the table target devices is advertised as a zoned
+ * block device, then all target devices must be advertised with the same
+ * zone model and all must have the same zone size.
+ */
+static int validate_hardware_zone_model(struct dm_table *table,
+					struct queue_limits *limits)
+{
+	struct dm_target *ti;
+	struct queue_limits ti_limits;
+	unsigned int zone_sectors = limits->chunk_sectors;
+	unsigned int num_targets = dm_table_get_num_targets(table);
+	int zone_model = -1;
+	unsigned int i = 0;
+
+	if (!num_targets)
+		return 0;
+
+	/*
+	 * Check each entry in the table in turn.
+	 */
+	while (i < num_targets) {
+
+		ti = dm_table_get_target(table, i);
+
+		/* Get the target device limits */
+		blk_set_stacking_limits(&ti_limits);
+		if (ti->type->iterate_devices)
+			ti->type->iterate_devices(ti, dm_set_device_limits,
+						  &ti_limits);
+
+		/*
+		 * Let the target driver change the hardware limits, and
+		 * in particular the zone model if needed.
+		 */
+		if (ti->type->io_hints)
+			ti->type->io_hints(ti, &ti_limits);
+
+		/* Check zone model compatibility */
+		if (zone_model == -1)
+			zone_model = ti_limits.zoned;
+		if (ti_limits.zoned != zone_model) {
+			zone_model = -1;
+			break;
+		}
+
+		if (zone_model != BLK_ZONED_NONE) {
+			/* Check zone size validity and compatibility */
+			if (!zone_sectors ||
+			    !is_power_of_2(zone_sectors))
+				break;
+			if (ti_limits.chunk_sectors != zone_sectors) {
+				zone_sectors = ti_limits.chunk_sectors;
+				break;
+			}
+		}
+
+		i++;
+
+	}
+
+	if (i < num_targets) {
+		if (zone_model == -1)
+			DMWARN("%s: table line %u (start sect %llu len %llu) "
+			       "has an incompatible zone model",
+			       dm_device_name(table->md), i,
+			       (unsigned long long) ti->begin,
+			       (unsigned long long) ti->len);
+		else
+			DMWARN("%s: table line %u (start sect %llu len %llu) "
+			       "has an incompatible zone size %u",
+			       dm_device_name(table->md), i,
+			       (unsigned long long) ti->begin,
+			       (unsigned long long) ti->len,
+			       zone_sectors);
+		return -EINVAL;
+	}
+
+	if (zone_model == BLK_ZONED_HA ||
+	    zone_model == BLK_ZONED_HM) {
+		limits->zoned = zone_model;
+		limits->chunk_sectors = zone_sectors;
+	}
+
+	return 0;
+}
+
 int dm_table_add_target(struct dm_table *t, const char *type,
 			sector_t start, sector_t len, char *params)
 {
@@ -1432,6 +1524,9 @@ int dm_calculate_queue_limits(struct dm_table *table,
 			       (unsigned long long) ti->len);
 	}
 
+	if (validate_hardware_zone_model(table, limits))
+		return -EINVAL;
+
 	return validate_hardware_logical_block_alignment(table, limits);
 }
 
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 04/10] dm: Fix REQ_OP_ZONE_RESET bio handling
  2017-04-21  3:55 [PATCH 00/10] dm: zoned block device support damien.lemoal
                   ` (2 preceding siblings ...)
  2017-04-21  3:55   ` damien.lemoal
@ 2017-04-21  3:55 ` damien.lemoal
  2017-04-21  3:55 ` [PATCH 05/10] dm: Fix REQ_OP_ZONE_REPORT " damien.lemoal
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 18+ messages in thread
From: damien.lemoal @ 2017-04-21  3:55 UTC (permalink / raw)
  To: dm-devel, Mike Snitzer, Alasdair Kergon
  Cc: Hannes Reinecke, Christoph Hellwig, Bart Van Assche, linux-block,
	Damien Le Moal

From: Damien Le Moal <damien.lemoal@wdc.com>

The REQ_OP_ZONE_RESET bio has no payload and zero sectors. Its position
is the only information used to indicate the zone to reset on the
device. Due to its zero length, this bio is not cloned and sent to the
target through the non-flush case in __split_and_process_bio().
Add an additional case in that function to call
__split_and_process_non_flush() without checking the clone info size.

Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
---
 drivers/md/dm.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index dfb7597..1d98035 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1318,6 +1318,10 @@ static void __split_and_process_bio(struct mapped_device *md,
 		ci.sector_count = 0;
 		error = __send_empty_flush(&ci);
 		/* dec_pending submits any data associated with flush */
+	} else if (bio_op(bio) == REQ_OP_ZONE_RESET) {
+		ci.bio = bio;
+		ci.sector_count = 0;
+		error = __split_and_process_non_flush(&ci);
 	} else {
 		ci.bio = bio;
 		ci.sector_count = bio_sectors(bio);
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 05/10] dm: Fix REQ_OP_ZONE_REPORT bio handling
  2017-04-21  3:55 [PATCH 00/10] dm: zoned block device support damien.lemoal
                   ` (3 preceding siblings ...)
  2017-04-21  3:55 ` [PATCH 04/10] dm: Fix REQ_OP_ZONE_RESET bio handling damien.lemoal
@ 2017-04-21  3:55 ` damien.lemoal
  2017-04-21  3:55 ` [PATCH 06/10] dm: Introduce dm_remap_zone_report() damien.lemoal
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 18+ messages in thread
From: damien.lemoal @ 2017-04-21  3:55 UTC (permalink / raw)
  To: dm-devel, Mike Snitzer, Alasdair Kergon
  Cc: Hannes Reinecke, Christoph Hellwig, Bart Van Assche, linux-block,
	Damien Le Moal

From: Damien Le Moal <damien.lemoal@wdc.com>

A REQ_OP_ZONE_REPORT bio is not a medium access command. Its number of
sectors indicates the maximum size allowed for the report reply, not an
amount of sectors accessed on the device. REQ_OP_ZONE_REPORT bios should
thus not be split according to the target device maximum I/O length but
passed as-is. Note that it is the responsibility of the target to remap
and format the report reply.

Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
---
 drivers/md/dm.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 1d98035..cd44928 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1098,7 +1098,8 @@ static int clone_bio(struct dm_target_io *tio, struct bio *bio,
 			return r;
 	}
 
-	bio_advance(clone, to_bytes(sector - clone->bi_iter.bi_sector));
+	if (bio_op(bio) != REQ_OP_ZONE_REPORT)
+		bio_advance(clone, to_bytes(sector - clone->bi_iter.bi_sector));
 	clone->bi_iter.bi_size = to_bytes(len);
 
 	if (bio_integrity(bio))
@@ -1275,7 +1276,11 @@ static int __split_and_process_non_flush(struct clone_info *ci)
 	if (!dm_target_is_valid(ti))
 		return -EIO;
 
-	len = min_t(sector_t, max_io_len(ci->sector, ti), ci->sector_count);
+	if (bio_op(bio) == REQ_OP_ZONE_REPORT)
+		len = ci->sector_count;
+	else
+		len = min_t(sector_t, max_io_len(ci->sector, ti),
+			    ci->sector_count);
 
 	r = __clone_and_map_data_bio(ci, ti, ci->sector, &len);
 	if (r < 0)
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 06/10] dm: Introduce dm_remap_zone_report()
  2017-04-21  3:55 [PATCH 00/10] dm: zoned block device support damien.lemoal
                   ` (4 preceding siblings ...)
  2017-04-21  3:55 ` [PATCH 05/10] dm: Fix REQ_OP_ZONE_REPORT " damien.lemoal
@ 2017-04-21  3:55 ` damien.lemoal
  2017-04-21  3:55 ` [PATCH 07/10] dm-flakey: Add support for zoned block devices damien.lemoal
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 18+ messages in thread
From: damien.lemoal @ 2017-04-21  3:55 UTC (permalink / raw)
  To: dm-devel, Mike Snitzer, Alasdair Kergon
  Cc: Hannes Reinecke, Christoph Hellwig, Bart Van Assche, linux-block,
	Damien Le Moal

From: Damien Le Moal <damien.lemoal@wdc.com>

A target driver supporting zoned block devices and exposing the mapped
device as such may receive REQ_OP_ZONE_REPORT bios issued by the user to
determine the mapped device zone configuration. To properly process such
a request, the target driver may need to remap the zone descriptors
provided in the report reply. The helper function dm_remap_zone_report()
does this generically using only the target start offset and length, and
the start offset within the target device.

dm_remap_zone_report() will remap the start sector of all zones
reported. If the report includes sequential zones, the write pointer
position of these zones will also be remapped.
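
For example (hypothetical numbers): if a target starts at sector
ti->begin = 2097152 of the dm device and maps the target device starting at
sector start = 4194304, a zone reported by the device at sector 4718592
appears in the remapped reply at sector 4718592 + 2097152 - 4194304 = 2621440.
The write pointer of a sequential zone is shifted by the same amount, except
for empty and full zones, whose write pointer is recomputed from the remapped
zone start.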

Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
---
 drivers/md/dm.c               | 80 +++++++++++++++++++++++++++++++++++++++++++
 include/linux/device-mapper.h | 10 ++++++
 2 files changed, 90 insertions(+)

diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index cd44928..1f6558e 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -975,6 +975,86 @@ void dm_accept_partial_bio(struct bio *bio, unsigned n_sectors)
 }
 EXPORT_SYMBOL_GPL(dm_accept_partial_bio);
 
+#ifdef CONFIG_BLK_DEV_ZONED
+/*
+ * The zone descriptors obtained with a zone report indicate
+ * zone positions within the target device. The zone descriptors
+ * must be remapped to match their position within the dm device.
+ * A target may call dm_remap_zone_report after completion of a
+ * REQ_OP_ZONE_REPORT bio to remap the zone descriptors obtained
+ * from the target device mapping to the dm device.
+ */
+void dm_remap_zone_report(struct dm_target *ti, struct bio *bio, sector_t start)
+{
+	struct dm_target_io *tio =
+		container_of(bio, struct dm_target_io, clone);
+	struct bio *report_bio = tio->io->bio;
+	struct blk_zone_report_hdr *hdr = NULL;
+	struct blk_zone *zone;
+	unsigned int nr_rep = 0;
+	unsigned int ofst;
+	struct bio_vec bvec;
+	struct bvec_iter iter;
+	void *addr;
+
+	if (bio->bi_error)
+		return;
+
+	/*
+	 * Remap the start sector of the reported zones. For sequential zones,
+	 * also remap the write pointer position.
+	 */
+	bio_for_each_segment(bvec, report_bio, iter) {
+
+		addr = kmap_atomic(bvec.bv_page);
+
+		/* Remember the report header in the first page */
+		if (!hdr) {
+			hdr = addr;
+			ofst = sizeof(struct blk_zone_report_hdr);
+		} else {
+			ofst = 0;
+		}
+
+		/* Set zones start sector */
+		while (hdr->nr_zones && ofst < bvec.bv_len) {
+			zone = addr + ofst;
+			if (zone->start >= start + ti->len) {
+				hdr->nr_zones = 0;
+				break;
+			}
+			zone->start = zone->start + ti->begin - start;
+			if (zone->type != BLK_ZONE_TYPE_CONVENTIONAL) {
+				if (zone->cond == BLK_ZONE_COND_FULL)
+					zone->wp = zone->start + zone->len;
+				else if (zone->cond == BLK_ZONE_COND_EMPTY)
+					zone->wp = zone->start;
+				else
+					zone->wp = zone->wp + ti->begin - start;
+			}
+			ofst += sizeof(struct blk_zone);
+			hdr->nr_zones--;
+			nr_rep++;
+		}
+
+		if (addr != hdr)
+			kunmap_atomic(addr);
+
+		if (!hdr->nr_zones)
+			break;
+
+	}
+
+	if (hdr) {
+		hdr->nr_zones = nr_rep;
+		kunmap_atomic(hdr);
+	}
+
+	bio_advance(report_bio, report_bio->bi_iter.bi_size);
+}
+EXPORT_SYMBOL_GPL(dm_remap_zone_report);
+#endif
+
 /*
  * Flush current->bio_list when the target map method blocks.
  * This fixes deadlocks in snapshot and possibly in other targets.
diff --git a/include/linux/device-mapper.h b/include/linux/device-mapper.h
index b3c2408..d21c761 100644
--- a/include/linux/device-mapper.h
+++ b/include/linux/device-mapper.h
@@ -433,6 +433,16 @@ struct gendisk *dm_disk(struct mapped_device *md);
 int dm_suspended(struct dm_target *ti);
 int dm_noflush_suspending(struct dm_target *ti);
 void dm_accept_partial_bio(struct bio *bio, unsigned n_sectors);
+#ifdef CONFIG_BLK_DEV_ZONED
+void dm_remap_zone_report(struct dm_target *ti, struct bio *bio,
+			  sector_t start);
+#else
+static inline void dm_remap_zone_report(struct dm_target *ti, struct bio *bio,
+					sector_t start)
+{
+	bio->bi_error = -ENOTSUPP;
+}
+#endif
 union map_info *dm_get_rq_mapinfo(struct request *rq);
 
 struct queue_limits *dm_get_queue_limits(struct mapped_device *md);
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 07/10] dm-flakey: Add support for zoned block devices
  2017-04-21  3:55 [PATCH 00/10] dm: zoned block device support damien.lemoal
                   ` (5 preceding siblings ...)
  2017-04-21  3:55 ` [PATCH 06/10] dm: Introduce dm_remap_zone_report() damien.lemoal
@ 2017-04-21  3:55 ` damien.lemoal
  2017-04-21  3:55 ` [PATCH 08/10] dm-linear: " damien.lemoal
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 18+ messages in thread
From: damien.lemoal @ 2017-04-21  3:55 UTC (permalink / raw)
  To: dm-devel, Mike Snitzer, Alasdair Kergon
  Cc: Hannes Reinecke, Christoph Hellwig, Bart Van Assche, linux-block,
	Damien Le Moal

From: Damien Le Moal <damien.lemoal@wdc.com>

With the development of file system support for zoned block devices
(e.g. f2fs), having dm-flakey support for these devices is interesting
to improve testing.

This patch adds support for zoned block devices in dm-flakey, both
host-aware and host-managed. The target type feature is set to
DM_TARGET_ZONED_HM to indicate support for the host-managed model. The
rest of the support adds hooks for the remapping of REQ_OP_ZONE_RESET
and REQ_OP_ZONE_REPORT bios. Additionally, in the bio completion path,
(backward) remapping of a zone report reply is also added.

Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
---
 drivers/md/dm-flakey.c | 21 ++++++++++++++++++++-
 1 file changed, 20 insertions(+), 1 deletion(-)

diff --git a/drivers/md/dm-flakey.c b/drivers/md/dm-flakey.c
index 13305a1..b419c85 100644
--- a/drivers/md/dm-flakey.c
+++ b/drivers/md/dm-flakey.c
@@ -251,6 +251,8 @@ static int flakey_ctr(struct dm_target *ti, unsigned int argc, char **argv)
 	return 0;
 
 bad:
+	if (fc->dev)
+		dm_put_device(ti, fc->dev);
 	kfree(fc);
 	return r;
 }
@@ -275,7 +277,7 @@ static void flakey_map_bio(struct dm_target *ti, struct bio *bio)
 	struct flakey_c *fc = ti->private;
 
 	bio->bi_bdev = fc->dev->bdev;
-	if (bio_sectors(bio))
+	if (bio_sectors(bio) || bio_op(bio) == REQ_OP_ZONE_RESET)
 		bio->bi_iter.bi_sector =
 			flakey_map_sector(ti, bio->bi_iter.bi_sector);
 }
@@ -306,6 +308,14 @@ static int flakey_map(struct dm_target *ti, struct bio *bio)
 	struct per_bio_data *pb = dm_per_bio_data(bio, sizeof(struct per_bio_data));
 	pb->bio_submitted = false;
 
+	/* Do not fail reset zone */
+	if (bio_op(bio) == REQ_OP_ZONE_RESET)
+		goto map_bio;
+
+	/* We need to remap reported zones, so remember the BIO iter */
+	if (bio_op(bio) == REQ_OP_ZONE_REPORT)
+		goto map_bio;
+
 	/* Are we alive ? */
 	elapsed = (jiffies - fc->start_time) / HZ;
 	if (elapsed % (fc->up_interval + fc->down_interval) >= fc->up_interval) {
@@ -363,6 +373,14 @@ static int flakey_end_io(struct dm_target *ti, struct bio *bio, int error)
 	struct flakey_c *fc = ti->private;
 	struct per_bio_data *pb = dm_per_bio_data(bio, sizeof(struct per_bio_data));
 
+	if (bio_op(bio) == REQ_OP_ZONE_RESET)
+		return error;
+
+	if (bio_op(bio) == REQ_OP_ZONE_REPORT) {
+		dm_remap_zone_report(ti, bio, fc->start);
+		return error;
+	}
+
 	if (!error && pb->bio_submitted && (bio_data_dir(bio) == READ)) {
 		if (fc->corrupt_bio_byte && (fc->corrupt_bio_rw == READ) &&
 		    all_corrupt_bio_flags_match(bio, fc)) {
@@ -446,6 +464,7 @@ static int flakey_iterate_devices(struct dm_target *ti, iterate_devices_callout_
 static struct target_type flakey_target = {
 	.name   = "flakey",
 	.version = {1, 4, 0},
+	.features = DM_TARGET_ZONED_HM,
 	.module = THIS_MODULE,
 	.ctr    = flakey_ctr,
 	.dtr    = flakey_dtr,
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 08/10] dm-linear: Add support for zoned block devices
  2017-04-21  3:55 [PATCH 00/10] dm: zoned block device support damien.lemoal
                   ` (6 preceding siblings ...)
  2017-04-21  3:55 ` [PATCH 07/10] dm-flakey: Add support for zoned block devices damien.lemoal
@ 2017-04-21  3:55 ` damien.lemoal
  2017-04-21  3:55 ` [PATCH 09/10] dm-kcopyd: Add sequential write feature damien.lemoal
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 18+ messages in thread
From: damien.lemoal @ 2017-04-21  3:55 UTC (permalink / raw)
  To: dm-devel, Mike Snitzer, Alasdair Kergon
  Cc: Hannes Reinecke, Christoph Hellwig, Bart Van Assche, linux-block,
	Damien Le Moal

From: Damien Le Moal <damien.lemoal@wdc.com>

Add support for zoned block devices by allowing targets mapped to
host-managed zoned block devices, remapping REQ_OP_ZONE_RESET bios and
post-processing (reply remapping) REQ_OP_ZONE_REPORT bios.

Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
---
 drivers/md/dm-linear.c | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/drivers/md/dm-linear.c b/drivers/md/dm-linear.c
index 4788b0b..9c4debd 100644
--- a/drivers/md/dm-linear.c
+++ b/drivers/md/dm-linear.c
@@ -87,7 +87,7 @@ static void linear_map_bio(struct dm_target *ti, struct bio *bio)
 	struct linear_c *lc = ti->private;
 
 	bio->bi_bdev = lc->dev->bdev;
-	if (bio_sectors(bio))
+	if (bio_sectors(bio) || bio_op(bio) == REQ_OP_ZONE_RESET)
 		bio->bi_iter.bi_sector =
 			linear_map_sector(ti, bio->bi_iter.bi_sector);
 }
@@ -99,6 +99,16 @@ static int linear_map(struct dm_target *ti, struct bio *bio)
 	return DM_MAPIO_REMAPPED;
 }
 
+static int linear_end_io(struct dm_target *ti, struct bio *bio, int error)
+{
+	struct linear_c *lc = ti->private;
+
+	if (!error && bio_op(bio) == REQ_OP_ZONE_REPORT)
+		dm_remap_zone_report(ti, bio, lc->start);
+
+	return error;
+}
+
 static void linear_status(struct dm_target *ti, status_type_t type,
 			  unsigned status_flags, char *result, unsigned maxlen)
 {
@@ -162,10 +172,12 @@ static long linear_direct_access(struct dm_target *ti, sector_t sector,
 static struct target_type linear_target = {
 	.name   = "linear",
 	.version = {1, 3, 0},
+	.features = DM_TARGET_ZONED_HM,
 	.module = THIS_MODULE,
 	.ctr    = linear_ctr,
 	.dtr    = linear_dtr,
 	.map    = linear_map,
+	.end_io = linear_end_io,
 	.status = linear_status,
 	.prepare_ioctl = linear_prepare_ioctl,
 	.iterate_devices = linear_iterate_devices,
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 09/10] dm-kcopyd: Add sequential write feature
  2017-04-21  3:55 [PATCH 00/10] dm: zoned block device support damien.lemoal
                   ` (7 preceding siblings ...)
  2017-04-21  3:55 ` [PATCH 08/10] dm-linear: " damien.lemoal
@ 2017-04-21  3:55 ` damien.lemoal
  2017-04-21  3:55 ` [PATCH 10/10] dm-zoned: Drive-managed zoned block device target damien.lemoal
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 18+ messages in thread
From: damien.lemoal @ 2017-04-21  3:55 UTC (permalink / raw)
  To: dm-devel, Mike Snitzer, Alasdair Kergon
  Cc: Hannes Reinecke, Christoph Hellwig, Bart Van Assche, linux-block,
	Damien Le Moal

From: Damien Le Moal <damien.lemoal@wdc.com>

When copying blocks to host-managed zoned block devices, writes must be
sequential. However, dm_kcopyd_copy() does not guarantee this, as writes
are issued in the completion order of reads, and reads may complete out
of order despite being issued sequentially.

Fix this by introducing the DM_KCOPYD_WRITE_SEQ flag. This can be
specified by the user when calling dm_kcopyd_copy() and is set
automatically if one of the destinations is a host-managed zoned block
device. For a split job, the master job maintains the write position at
which writes must be issued. This is checked by the pop() function,
which is modified to not return any write I/O sub-job that is not at the
correct write position.

When DM_KCOPYD_WRITE_SEQ is specified for a job, errors cannot be
ignored and the flag DM_KCOPYD_IGNORE_ERROR is ignored, even if
specified by the user.
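
As a usage illustration (not part of the patch), a caller could request
sequential writes explicitly as in the following sketch. The helper name,
regions and completion callback are hypothetical; for a host-managed
destination the flag would be set automatically by dm_kcopyd_copy() anyway:

#include <linux/dm-io.h>
#include <linux/dm-kcopyd.h>

/*
 * Hypothetical helper: copy a single region, forcing sequential writes.
 * Useful for a host-aware destination; for a host-managed destination
 * dm_kcopyd_copy() sets DM_KCOPYD_WRITE_SEQ on its own.
 */
static int copy_region_seq(struct dm_kcopyd_client *kc,
			   struct dm_io_region *src,
			   struct dm_io_region *dst,
			   dm_kcopyd_notify_fn done_fn, void *context)
{
	unsigned long flags = 0;

	set_bit(DM_KCOPYD_WRITE_SEQ, &flags);
	/* DM_KCOPYD_IGNORE_ERROR would be cleared internally anyway */

	return dm_kcopyd_copy(kc, src, 1, dst, flags, done_fn, context);
}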

Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
---
 drivers/md/dm-kcopyd.c    | 68 +++++++++++++++++++++++++++++++++++++++++++++--
 include/linux/dm-kcopyd.h |  1 +
 2 files changed, 67 insertions(+), 2 deletions(-)

diff --git a/drivers/md/dm-kcopyd.c b/drivers/md/dm-kcopyd.c
index 9e9d04cb..477846e 100644
--- a/drivers/md/dm-kcopyd.c
+++ b/drivers/md/dm-kcopyd.c
@@ -356,6 +356,7 @@ struct kcopyd_job {
 	struct mutex lock;
 	atomic_t sub_jobs;
 	sector_t progress;
+	sector_t write_ofst;
 
 	struct kcopyd_job *master_job;
 };
@@ -386,6 +387,34 @@ void dm_kcopyd_exit(void)
  * Functions to push and pop a job onto the head of a given job
  * list.
  */
+static struct kcopyd_job *pop_io_job(struct list_head *jobs,
+				     struct dm_kcopyd_client *kc)
+{
+	struct kcopyd_job *job;
+
+	/*
+	 * For I/O jobs, pop any read, any write without sequential write
+	 * constraint and sequential writes that are at the right position.
+	 */
+	list_for_each_entry(job, jobs, list) {
+
+		if (job->rw == READ ||
+		    !test_bit(DM_KCOPYD_WRITE_SEQ, &job->flags)) {
+			list_del(&job->list);
+			return job;
+		}
+
+		if (job->write_ofst == job->master_job->write_ofst) {
+			job->master_job->write_ofst += job->source.count;
+			list_del(&job->list);
+			return job;
+		}
+
+	}
+
+	return NULL;
+}
+
 static struct kcopyd_job *pop(struct list_head *jobs,
 			      struct dm_kcopyd_client *kc)
 {
@@ -395,8 +424,12 @@ static struct kcopyd_job *pop(struct list_head *jobs,
 	spin_lock_irqsave(&kc->job_lock, flags);
 
 	if (!list_empty(jobs)) {
-		job = list_entry(jobs->next, struct kcopyd_job, list);
-		list_del(&job->list);
+		if (jobs == &kc->io_jobs) {
+			job = pop_io_job(jobs, kc);
+		} else {
+			job = list_entry(jobs->next, struct kcopyd_job, list);
+			list_del(&job->list);
+		}
 	}
 	spin_unlock_irqrestore(&kc->job_lock, flags);
 
@@ -506,6 +539,14 @@ static int run_io_job(struct kcopyd_job *job)
 		.client = job->kc->io_client,
 	};
 
+	/*
+	 * If we need to write sequentially and some reads or writes failed,
+	 * no point in continuing.
+	 */
+	if (test_bit(DM_KCOPYD_WRITE_SEQ, &job->flags) &&
+	    job->master_job->write_err)
+		return -EIO;
+
 	io_job_start(job->kc->throttle);
 
 	if (job->rw == READ)
@@ -655,6 +696,7 @@ static void segment_complete(int read_err, unsigned long write_err,
 		int i;
 
 		*sub_job = *job;
+		sub_job->write_ofst = progress;
 		sub_job->source.sector += progress;
 		sub_job->source.count = count;
 
@@ -723,6 +765,27 @@ int dm_kcopyd_copy(struct dm_kcopyd_client *kc, struct dm_io_region *from,
 	job->num_dests = num_dests;
 	memcpy(&job->dests, dests, sizeof(*dests) * num_dests);
 
+	/*
+	 * If one of the destinations is a host-managed zoned block device,
+	 * we need to write sequentially. If one of the destinations is a
+	 * host-aware device, then leave it to the caller to choose what to do.
+	 */
+	if (!test_bit(DM_KCOPYD_WRITE_SEQ, &job->flags)) {
+		for (i = 0; i < job->num_dests; i++) {
+			if (bdev_zoned_model(dests[i].bdev) == BLK_ZONED_HM) {
+				set_bit(DM_KCOPYD_WRITE_SEQ, &job->flags);
+				break;
+			}
+		}
+	}
+
+	/*
+	 * If we need to write sequentially, errors cannot be ignored.
+	 */
+	if (test_bit(DM_KCOPYD_WRITE_SEQ, &job->flags) &&
+	    test_bit(DM_KCOPYD_IGNORE_ERROR, &job->flags))
+		clear_bit(DM_KCOPYD_IGNORE_ERROR, &job->flags);
+
 	if (from) {
 		job->source = *from;
 		job->pages = NULL;
@@ -746,6 +809,7 @@ int dm_kcopyd_copy(struct dm_kcopyd_client *kc, struct dm_io_region *from,
 	job->fn = fn;
 	job->context = context;
 	job->master_job = job;
+	job->write_ofst = 0;
 
 	if (job->source.count <= SUB_JOB_SIZE)
 		dispatch_job(job);
diff --git a/include/linux/dm-kcopyd.h b/include/linux/dm-kcopyd.h
index f486d63..cfac858 100644
--- a/include/linux/dm-kcopyd.h
+++ b/include/linux/dm-kcopyd.h
@@ -20,6 +20,7 @@
 #define DM_KCOPYD_MAX_REGIONS 8
 
 #define DM_KCOPYD_IGNORE_ERROR 1
+#define DM_KCOPYD_WRITE_SEQ    2
 
 struct dm_kcopyd_throttle {
 	unsigned throttle;
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 10/10] dm-zoned: Drive-managed zoned block device target
  2017-04-21  3:55 [PATCH 00/10] dm: zoned block device support damien.lemoal
                   ` (8 preceding siblings ...)
  2017-04-21  3:55 ` [PATCH 09/10] dm-kcopyd: Add sequential write feature damien.lemoal
@ 2017-04-21  3:55 ` damien.lemoal
  2017-04-28 21:14   ` Bart Van Assche
  2017-04-24  6:24   ` Hannes Reinecke
  2017-04-27  4:22 ` Damien Le Moal
  11 siblings, 1 reply; 18+ messages in thread
From: damien.lemoal @ 2017-04-21  3:55 UTC (permalink / raw)
  To: dm-devel, Mike Snitzer, Alasdair Kergon
  Cc: linux-block, Bart Van Assche, Damien Le Moal, Christoph Hellwig

From: Damien Le Moal <damien.lemoal@wdc.com>

The dm-zoned device mapper target provides transparent write access
to zoned block devices (ZBC and ZAC compliant block devices).
dm-zoned hides from the device user (a file system or an application
doing raw block device accesses) any constraint imposed on write
requests by the device, equivalent to a drive-managed zoned block
device model.

Write requests are processed using a combination of on-disk buffering
using the device conventional zones and direct in-place processing for
requests aligned to a zone sequential write pointer position.
A background reclaim process implemented using dm_kcopyd_copy ensures
that conventional zones are always available for executing unaligned
write requests. The reclaim process overhead is minimized by managing
buffer zones in a least-recently-written order and first targeting the
oldest buffer zones. Doing so, blocks under regular write access (such
as metadata blocks of a file system) remain stored in conventional
zones, resulting in no apparent overhead.

The dm-zoned implementation focuses on simplicity and on minimizing overhead
(CPU, memory and storage overhead). For a 10TB host-managed disk with
256 MB zones, dm-zoned memory usage per disk instance is at most 4.5 MB
and as little as 5 zones will be used internally for storing metadata
and performing buffer zone reclaim operations. This is achieved using
zone level indirection rather than a full block indirection system for
managing block movement between zones.
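
As a rough sanity check of these figures (assuming binary units): a 10 TB
drive with 256 MB zones contains about 40960 zones, so 4.5 MB of per-instance
memory amounts to roughly 115 bytes of in-memory state per zone, which is what
zone level indirection makes possible compared to per-block mapping.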

dm-zoned primarily targets host-managed zoned block devices, but it can
also be used with host-aware device models to mitigate potential
device-side performance degradation due to excessive random writing.

dm-zoned target devices can be formatted and checked using the dmzadm
utility available at:

https://github.com/hgst/dm-zoned-tools

Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
---
 Documentation/device-mapper/dm-zoned.txt |  154 +++
 drivers/md/Kconfig                       |   19 +
 drivers/md/Makefile                      |    2 +
 drivers/md/dm-zoned-io.c                 |  998 ++++++++++++++
 drivers/md/dm-zoned-metadata.c           | 2195 ++++++++++++++++++++++++++++++
 drivers/md/dm-zoned-reclaim.c            |  535 ++++++++
 drivers/md/dm-zoned.h                    |  528 +++++++
 7 files changed, 4431 insertions(+)
 create mode 100644 Documentation/device-mapper/dm-zoned.txt
 create mode 100644 drivers/md/dm-zoned-io.c
 create mode 100644 drivers/md/dm-zoned-metadata.c
 create mode 100644 drivers/md/dm-zoned-reclaim.c
 create mode 100644 drivers/md/dm-zoned.h

diff --git a/Documentation/device-mapper/dm-zoned.txt b/Documentation/device-mapper/dm-zoned.txt
new file mode 100644
index 0000000..d41f597
--- /dev/null
+++ b/Documentation/device-mapper/dm-zoned.txt
@@ -0,0 +1,154 @@
+dm-zoned
+========
+
+The dm-zoned device mapper exposes a zoned block device (ZBC and ZAC compliant
+devices) as a regular block device without any write pattern constraint. In
+effect, it implements a drive-managed zoned block device which hides from the
+user (a file system or an application doing raw block device accesses) the
+sequential write constraints of host-managed zoned block devices and can
+mitigate the potential device-side performance degradation due to excessive
+random writes on host-aware zoned block devices.
+
+For a more detailed description of the zoned block device models and
+their constraints see (for SCSI devices):
+
+http://www.t10.org/drafts.htm#ZBC_Family
+
+and (for ATA devices):
+
+http://www.t13.org/Documents/UploadedDocuments/docs2015/
+di537r05-Zoned_Device_ATA_Command_Set_ZAC.pdf
+
+dm-zoned implementation is simple and minimizes system overhead (CPU and
+memory usage as well as storage capacity loss). For a 10TB host-managed disk
+with 256 MB zones, dm-zoned memory usage per disk instance is at most 4.5 MB
+and as little as 5 zones will be used internally for storing metadata and
+performing reclaim operations.
+
+dm-zoned target devices can be formatted and checked using the dmzadm utility
+available at:
+
+https://github.com/hgst/dm-zoned-tools
+
+Algorithm
+=========
+
+dm-zoned implements an on-disk buffering scheme to handle non-sequential write
+accesses to the sequential zones of a zoned block device. Conventional zones
+are used for caching as well as for storing internal metadata.
+
+The zones of the device are separated into 2 types:
+
+1) Metadata zones: these are conventional zones used to store metadata.
+Metadata zones are not reported as usable capacity to the user.
+
+2) Data zones: all remaining zones, the vast majority of which will be
+sequential zones used exclusively to store user data. The conventional zones
+of the device may also be used for buffering user random writes. Data in
+these zones may be directly mapped to the conventional zone, but may later
+be moved to a sequential zone so that the conventional zone can be reused
+for buffering
+incoming random writes.
+
+dm-zoned exposes a logical device with a sector size of 4096 bytes,
+irrespective of the physical sector size of the backend zoned block device
+being used. This allows reducing the amount of metadata needed to manage valid
+blocks (blocks written).
+
+The on-disk metadata format is as follows:
+
+1) The first block of the first conventional zone found contains the
+super block which describes the amount and position on disk of metadata blocks.
+
+2) Following the super block, a set of blocks is used to describe the mapping
+of the logical device blocks. The mapping is done per chunk of blocks, with
+the chunk size equal to the device zone size. The mapping table is
+indexed by chunk number and each mapping entry indicates the zone number of
+the device storing the chunk of data. Each mapping entry may also indicate
+the zone number of a conventional zone used to buffer random modifications
+to the data zone.
+
+3) A set of blocks used to store bitmaps indicating the validity of blocks in
+the data zones follows the mapping table. A valid block is defined as a block
+that was written and not discarded. For a buffered data chunk, a block is
+valid either in the data zone mapping the chunk or in the chunk's buffer
+zone, but never in both.
+
+For a logical chunk mapped to a conventional zone, all write operations are
+processed by directly writing to the zone. If the mapping zone is a
+sequential zone, the write operation is processed directly if and only if
+the write offset within the logical chunk is equal to the write pointer offset
+within the sequential data zone (i.e. the write operation is aligned on the
+zone write pointer). Otherwise, write operations are processed indirectly
+using a buffer zone. In such case, an unused conventional zone is allocated
+and assigned to the chunk being accessed. Writing a block to the buffer zone
+of a chunk will automatically invalidate the same block in the sequential zone
+mapping the chunk. If all blocks of the sequential zone become invalid, the
+zone is freed and the chunk buffer zone becomes the primary zone mapping the
+chunk, resulting in native random write performance similar to a regular
+block device.
+
+Read operations are processed according to the block validity information
+provided by the bitmaps. Valid blocks are read either from the sequential zone
+mapping a chunk, or if the chunk is buffered, from the buffer zone assigned.
+If the accessed chunk has no mapping, or the accessed blocks are invalid, the
+read buffer is zeroed and the read operation terminated.
+
+After some time, the limited number of conventional zones available may be
+exhausted (all used to map chunks or buffer sequential zones) and unaligned
+writes to unbuffered chunks become impossible. To avoid this situation, a
+reclaim process regularly scans used conventional zones and tries to reclaim
+the least recently used ones by copying the valid blocks of the buffer zone
+to a free sequential zone. Once the copy completes, the chunk mapping is
+updated to point to the sequential zone and the buffer zone is freed for reuse.
+
+Metadata Protection
+===================
+
+To protect metadata against corruption in case of sudden power loss or system
+crash, 2 sets of metadata zones are used. One set, the primary set, is used as
+the main metadata region, while the secondary set is used as a staging area.
+Modified metadata are first written to the secondary set and validated by
+updating the super block in the secondary set, indicating, using a generation
+counter, that this set contains the newest metadata. Once this operation
+completes, in-place updates of metadata blocks can be done in the primary
+metadata set, ensuring that one of the sets is always consistent (all
+modifications committed or none at all). Flush operations are used as a commit
+point. Upon reception of a flush request, metadata modification activity is
+temporarily blocked (for both incoming BIO processing and reclaim) and all
+dirty metadata blocks are staged and updated. Normal operation is then resumed.
+Metadata flush thus only temporarily delays write and discard requests. Read
+requests can be concurrently processed while metadata flush is being executed.
+
+Usage
+=====
+
+A zoned block device must first be formatted using the dmzadm tool. This will
+analyze the device zone configuration, determine where to place the metadata
+sets on the device and initialize the metadata sets.
+
+Ex:
+
+dmzadm --format /dev/sdxx
+
+For a formatted device, the target can be created normally with the dmsetup
+utility. The only parameter that dm-zoned requires is the device name.
+
+Example scripts
+===============
+
+[[
+#!/bin/sh
+
+if [ $# -ne 1 ]; then
+	echo "Usage: $0 <Zoned device path>"
+	exit 1
+fi
+
+dev="${1}"
+shift
+
+modprobe dm-zoned
+
+echo "0 `blockdev --getsize ${dev}` dm-zoned ${dev}" | dmsetup create dmz-`basename ${dev}`
+]]
+
diff --git a/drivers/md/Kconfig b/drivers/md/Kconfig
index b7767da..a537a73 100644
--- a/drivers/md/Kconfig
+++ b/drivers/md/Kconfig
@@ -508,4 +508,23 @@ config DM_LOG_WRITES
 
 	  If unsure, say N.
 
+config DM_ZONED
+	tristate "Drive-managed zoned block device target support"
+	depends on BLK_DEV_DM
+	depends on BLK_DEV_ZONED
+	---help---
+	  This device-mapper target takes a host-managed or host-aware zoned
+	  block device and exposes most of its capacity as a regular block
+	  device (drive-managed zoned block device) without any write
+	  constraint. This is mainly intended for use with file systems that
+	  do not natively support zoned block devices but still want to
+	  benefit from the increased capacity offered by SMR disks. Other uses
+	  by applications accessing raw block devices (for example object
+	  stores) are also possible.
+
+	  To compile this code as a module, choose M here: the module will
+	  be called dm-zoned.
+
+	  If unsure, say N.
+
 endif # MD
diff --git a/drivers/md/Makefile b/drivers/md/Makefile
index 3cbda1a..f42dfcc 100644
--- a/drivers/md/Makefile
+++ b/drivers/md/Makefile
@@ -19,6 +19,7 @@ dm-era-y	+= dm-era-target.o
 dm-verity-y	+= dm-verity-target.o
 md-mod-y	+= md.o bitmap.o
 raid456-y	+= raid5.o raid5-cache.o
+dm-zoned-y	+= dm-zoned-io.o dm-zoned-metadata.o dm-zoned-reclaim.o
 
 # Note: link order is important.  All raid personalities
 # and must come before md.o, as they each initialise 
@@ -59,6 +60,7 @@ obj-$(CONFIG_DM_CACHE_SMQ)	+= dm-cache-smq.o
 obj-$(CONFIG_DM_CACHE_CLEANER)	+= dm-cache-cleaner.o
 obj-$(CONFIG_DM_ERA)		+= dm-era.o
 obj-$(CONFIG_DM_LOG_WRITES)	+= dm-log-writes.o
+obj-$(CONFIG_DM_ZONED)		+= dm-zoned.o
 
 ifeq ($(CONFIG_DM_UEVENT),y)
 dm-mod-objs			+= dm-uevent.o
diff --git a/drivers/md/dm-zoned-io.c b/drivers/md/dm-zoned-io.c
new file mode 100644
index 0000000..4b39730
--- /dev/null
+++ b/drivers/md/dm-zoned-io.c
@@ -0,0 +1,998 @@
+/*
+ * Drive-managed zoned block device target
+ * Copyright (C) 2017 Western Digital Corporation or its affiliates.
+ *
+ * Written by: Damien Le Moal <damien.lemoal@wdc.com>
+ *
+ * This software is distributed under the terms of the GNU General Public
+ * License version 2, or any later version, "as is," without technical
+ * support, and WITHOUT ANY WARRANTY, without even the implied warranty
+ * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#include <linux/module.h>
+
+#include "dm-zoned.h"
+
+/*
+ * Target BIO completion.
+ */
+static inline void dmz_bio_end(struct bio *bio, int err)
+{
+	struct dmz_bioctx *bioctx =
+		dm_per_bio_data(bio, sizeof(struct dmz_bioctx));
+
+	if (atomic_dec_and_test(&bioctx->ref)) {
+		struct dmz_target *dmz = bioctx->target;
+
+		/* User BIO Completed */
+		if (bioctx->zone)
+			dmz_deactivate_zone(dmz, bioctx->zone);
+		atomic_dec(&dmz->bio_count);
+		bio->bi_error = bioctx->error;
+		bio_endio(bio);
+	}
+}
+
+/*
+ * Partial/internal BIO completion callback.
+ * This terminates the user target BIO when there
+ * are no more references to its context.
+ */
+static void dmz_bio_end_io(struct bio *bio)
+{
+	struct dmz_bioctx *bioctx = bio->bi_private;
+	int err = bio->bi_error;
+
+	if (err) {
+		struct dm_zone *zone = bioctx->zone;
+
+		bioctx->error = err;
+		if (bio_op(bio) == REQ_OP_WRITE &&
+		    dmz_is_seq(zone))
+			set_bit(DMZ_SEQ_WRITE_ERR, &zone->flags);
+	}
+
+	dmz_bio_end(bioctx->bio, err);
+
+	bio_put(bio);
+
+}
+
+/*
+ * Issue a BIO to a zone. The BIO may only partially process the
+ * original target BIO.
+ */
+static int dmz_submit_bio(struct dmz_target *dmz, struct dm_zone *zone,
+			  struct bio *dmz_bio,
+			  sector_t chunk_block, unsigned int nr_blocks)
+{
+	struct dmz_bioctx *bioctx
+		= dm_per_bio_data(dmz_bio, sizeof(struct dmz_bioctx));
+	unsigned int nr_sectors = dmz_blk2sect(nr_blocks);
+	unsigned int size = nr_sectors << SECTOR_SHIFT;
+	struct bio *clone;
+
+	clone = bio_clone_fast(dmz_bio, GFP_NOIO, dmz->bio_set);
+	if (!clone)
+		return -ENOMEM;
+
+	/* Setup the clone */
+	clone->bi_bdev = dmz->zbd;
+	clone->bi_opf = dmz_bio->bi_opf;
+	clone->bi_iter.bi_sector =
+		dmz_start_sect(dmz, zone) + dmz_blk2sect(chunk_block);
+	clone->bi_iter.bi_size = size;
+	clone->bi_end_io = dmz_bio_end_io;
+	clone->bi_private = bioctx;
+
+	bio_advance(dmz_bio, size);
+
+	/* Submit the clone */
+	atomic_inc(&bioctx->ref);
+	generic_make_request(clone);
+
+	return 0;
+}
+
+/*
+ * Zero out pages of discarded blocks accessed by a read BIO.
+ */
+static void dmz_handle_read_zero(struct dmz_target *dmz, struct bio *bio,
+				 sector_t chunk_block, unsigned int nr_blocks)
+{
+	unsigned int size = nr_blocks << DMZ_BLOCK_SHIFT;
+
+	dmz_dev_debug(dmz,
+		      "=> ZERO READ chunk %llu -> block %llu, %u blocks\n",
+		      (unsigned long long)dmz_bio_chunk(dmz, bio),
+		      (unsigned long long)chunk_block,
+		      nr_blocks);
+
+	/* Clear nr_blocks */
+	swap(bio->bi_iter.bi_size, size);
+	zero_fill_bio(bio);
+	swap(bio->bi_iter.bi_size, size);
+
+	bio_advance(bio, size);
+}
+
+/*
+ * Process a read BIO.
+ */
+static int dmz_handle_read(struct dmz_target *dmz, struct dm_zone *zone,
+			   struct bio *bio)
+{
+	sector_t block = dmz_bio_block(bio);
+	unsigned int nr_blocks = dmz_bio_blocks(bio);
+	sector_t chunk_block = dmz_chunk_block(dmz, block);
+	sector_t end_block = chunk_block + nr_blocks;
+	struct dm_zone *rzone, *bzone;
+	int ret;
+
+	/* Reads of unmapped chunks only need to zero the BIO buffer */
+	if (!zone) {
+		dmz_handle_read_zero(dmz, bio, chunk_block, nr_blocks);
+		return 0;
+	}
+
+	dmz_dev_debug(dmz, "READ %s zone %u, block %llu, %u blocks\n",
+		      (dmz_is_rnd(zone) ? "RND" : "SEQ"), dmz_id(dmz, zone),
+		      (unsigned long long)chunk_block, nr_blocks);
+
+	/* Check block validity to determine the read location */
+	bzone = zone->bzone;
+	while (chunk_block < end_block) {
+
+		nr_blocks = 0;
+		if (dmz_is_rnd(zone)
+		    || chunk_block < zone->wp_block) {
+			/* Test block validity in the data zone */
+			ret = dmz_block_valid(dmz, zone, chunk_block);
+			if (ret < 0)
+				return ret;
+			if (ret > 0) {
+				/* Read data zone blocks */
+				nr_blocks = ret;
+				rzone = zone;
+			}
+		}
+
+		/*
+		 * No valid blocks found in the data zone.
+		 * Check the buffer zone, if there is one.
+		 */
+		if (!nr_blocks && bzone) {
+			ret = dmz_block_valid(dmz, bzone, chunk_block);
+			if (ret < 0)
+				return ret;
+			if (ret > 0) {
+				/* Read buffer zone blocks */
+				nr_blocks = ret;
+				rzone = bzone;
+			}
+		}
+
+		if (nr_blocks) {
+
+			/* Valid blocks found: read them */
+			nr_blocks = min_t(unsigned int, nr_blocks,
+					  end_block - chunk_block);
+
+			dmz_dev_debug(dmz,
+				"=> %s READ zone %u, block %llu, %u blocks\n",
+				(dmz_is_buf(rzone) ? "BUF" : "DATA"),
+				dmz_id(dmz, rzone),
+				(unsigned long long)chunk_block,
+				nr_blocks);
+
+			ret = dmz_submit_bio(dmz, rzone, bio,
+					     chunk_block, nr_blocks);
+			if (ret)
+				return ret;
+			chunk_block += nr_blocks;
+
+		} else {
+
+			/* No valid blocks: zero out the current BIO block */
+			dmz_handle_read_zero(dmz, bio, chunk_block, 1);
+			chunk_block++;
+
+		}
+
+	}
+
+	return 0;
+}
+
+/*
+ * Write blocks directly in a data zone, at the write pointer.
+ * If a buffer zone is assigned, invalidate the corresponding
+ * blocks in the buffer zone.
+ */
+static int dmz_handle_direct_write(struct dmz_target *dmz,
+				   struct dm_zone *zone, struct bio *bio,
+				   sector_t chunk_block,
+				   unsigned int nr_blocks)
+{
+	struct dm_zone *bzone = zone->bzone;
+	int ret;
+
+	dmz_dev_debug(dmz, "WRITE %s zone %u, block %llu, %u blocks\n",
+		      (dmz_is_rnd(zone) ? "RND" : "SEQ"), dmz_id(dmz, zone),
+		      (unsigned long long)chunk_block, nr_blocks);
+
+	if (dmz_is_readonly(zone))
+		return -EROFS;
+
+	/* Submit write */
+	ret = dmz_submit_bio(dmz, zone, bio, chunk_block, nr_blocks);
+	if (ret)
+		return -EIO;
+
+	if (dmz_is_seq(zone))
+		zone->wp_block += nr_blocks;
+
+	/*
+	 * Validate the blocks in the data zone and invalidate
+	 * in the buffer zone, if there is one.
+	 */
+	ret = dmz_validate_blocks(dmz, zone, chunk_block, nr_blocks);
+	if (ret == 0 && bzone)
+		ret = dmz_invalidate_blocks(dmz, bzone, chunk_block, nr_blocks);
+
+	return ret;
+}
+
+/*
+ * Write blocks in the buffer zone of @zone.
+ * If no buffer zone is assigned yet, get one.
+ * Called with @zone write locked.
+ */
+static int dmz_handle_buffered_write(struct dmz_target *dmz,
+				     struct dm_zone *zone, struct bio *bio,
+				     sector_t chunk_block,
+				     unsigned int nr_blocks)
+{
+	struct dm_zone *bzone = zone->bzone;
+	int ret;
+
+	if (!bzone) {
+		/* Get a buffer zone */
+		bzone = dmz_get_chunk_buffer(dmz, zone);
+		if (!bzone)
+			return -ENOSPC;
+	}
+
+	dmz_dev_debug(dmz, "WRITE BUF zone %u, block %llu, %u blocks\n",
+		      dmz_id(dmz, bzone), (unsigned long long)chunk_block,
+		      nr_blocks);
+
+	if (dmz_is_readonly(bzone))
+		return -EROFS;
+
+	/* Submit write */
+	ret = dmz_submit_bio(dmz, bzone, bio, chunk_block, nr_blocks);
+	if (ret)
+		return -EIO;
+
+	/*
+	 * Validate the blocks in the buffer zone
+	 * and invalidate in the data zone.
+	 */
+	ret = dmz_validate_blocks(dmz, bzone, chunk_block, nr_blocks);
+	if (ret == 0 && chunk_block < zone->wp_block)
+		ret = dmz_invalidate_blocks(dmz, zone, chunk_block, nr_blocks);
+
+	return ret;
+}
+
+/*
+ * Process a write BIO.
+ */
+static int dmz_handle_write(struct dmz_target *dmz, struct dm_zone *zone,
+			    struct bio *bio)
+{
+	sector_t block = dmz_bio_block(bio);
+	unsigned int nr_blocks = dmz_bio_blocks(bio);
+	sector_t chunk_block = dmz_chunk_block(dmz, block);
+
+	if (!zone)
+		return -ENOSPC;
+
+	if (dmz_is_rnd(zone) ||
+	    chunk_block == zone->wp_block)
+		/*
+		 * The zone is a random zone, or it is a sequential
+		 * zone and the BIO is aligned with the zone write
+		 * pointer: write directly to the zone.
+		 */
+		return dmz_handle_direct_write(dmz, zone, bio,
+					       chunk_block, nr_blocks);
+
+	/*
+	 * This is an unaligned write in a sequential zone:
+	 * use buffered write.
+	 */
+	return dmz_handle_buffered_write(dmz, zone, bio,
+					 chunk_block, nr_blocks);
+}
+
+/*
+ * Process a discard BIO.
+ */
+static int dmz_handle_discard(struct dmz_target *dmz, struct dm_zone *zone,
+			      struct bio *bio)
+{
+	sector_t block = dmz_bio_block(bio);
+	unsigned int nr_blocks = dmz_bio_blocks(bio);
+	sector_t chunk_block = dmz_chunk_block(dmz, block);
+	int ret = 0;
+
+	/* For unmapped chunks, there is nothing to do */
+	if (!zone)
+		return 0;
+
+	if (dmz_is_readonly(zone))
+		return -EROFS;
+
+	dmz_dev_debug(dmz,
+		      "DISCARD chunk %llu -> zone %u, block %llu, %u blocks\n",
+		      (unsigned long long)dmz_bio_chunk(dmz, bio),
+		      dmz_id(dmz, zone),
+		      (unsigned long long)chunk_block, nr_blocks);
+
+	/*
+	 * Invalidate blocks in the data zone and its
+	 * buffer zone if one is mapped.
+	 */
+	if (dmz_is_rnd(zone) ||
+	    chunk_block < zone->wp_block)
+		ret = dmz_invalidate_blocks(dmz, zone,
+					    chunk_block, nr_blocks);
+	if (ret == 0 && zone->bzone)
+		ret = dmz_invalidate_blocks(dmz, zone->bzone,
+					    chunk_block, nr_blocks);
+
+	return ret;
+}
+
+/*
+ * Process a BIO.
+ */
+static void dmz_handle_bio(struct dmz_target *dmz, struct dm_chunk_work *cw,
+			   struct bio *bio)
+{
+	struct dmz_bioctx *bioctx =
+		dm_per_bio_data(bio, sizeof(struct dmz_bioctx));
+	struct dm_zone *zone;
+	int ret;
+
+	down_read(&dmz->mblk_sem);
+
+	/*
+	 * Get the data zone mapping the chunk. There may be no
+	 * mapping for read and discard. If a mapping is obtained,
+	 * the zone returned will be set to the active state.
+	 */
+	zone = dmz_get_chunk_mapping(dmz, dmz_bio_chunk(dmz, bio),
+				     bio_op(bio));
+	if (IS_ERR(zone)) {
+		dmz_bio_end(bio, PTR_ERR(zone));
+		goto out;
+	}
+
+	/* Process the BIO */
+	if (zone) {
+		dmz_activate_zone(dmz, zone);
+		bioctx->zone = zone;
+	}
+
+	switch (bio_op(bio)) {
+	case REQ_OP_READ:
+		ret = dmz_handle_read(dmz, zone, bio);
+		break;
+	case REQ_OP_WRITE:
+		ret = dmz_handle_write(dmz, zone, bio);
+		break;
+	case REQ_OP_DISCARD:
+		ret = dmz_handle_discard(dmz, zone, bio);
+		break;
+	default:
+		dmz_dev_err(dmz,
+			    "Unsupported BIO operation 0x%x\n",
+			    bio_op(bio));
+		ret = -EIO;
+	}
+
+	dmz_bio_end(bio, ret);
+
+	/*
+	 * Release the chunk mapping. This will check that the mapping
+	 * is still valid, that is, that the zone used still has valid blocks.
+	 */
+	if (zone)
+		dmz_put_chunk_mapping(dmz, zone);
+
+out:
+	up_read(&dmz->mblk_sem);
+}
+
+/*
+ * Increment a chunk reference counter.
+ */
+static inline void dmz_get_chunk_work(struct dm_chunk_work *cw)
+{
+	atomic_inc(&cw->refcount);
+}
+
+/*
+ * Decrement a chunk work reference count and
+ * free it if it becomes 0.
+ */
+static void dmz_put_chunk_work(struct dm_chunk_work *cw)
+{
+	if (atomic_dec_and_test(&cw->refcount)) {
+		atomic_dec(&cw->target->nr_active_chunks);
+		radix_tree_delete(&cw->target->chunk_rxtree, cw->chunk);
+		kfree(cw);
+	}
+}
+
+/*
+ * Chunk BIO work function.
+ */
+static void dmz_chunk_work(struct work_struct *work)
+{
+	struct dm_chunk_work *cw =
+		container_of(work, struct dm_chunk_work, work);
+	struct dmz_target *dmz = cw->target;
+	struct bio *bio;
+
+	mutex_lock(&dmz->chunk_lock);
+
+	/* Process the chunk BIOs */
+	while ((bio = bio_list_pop(&cw->bio_list))) {
+
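+		/*
+		 * Drop the chunk lock while handling the BIO: handling
+		 * may block on metadata I/O or wait for a free zone.
+		 */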
+		mutex_unlock(&dmz->chunk_lock);
+		dmz_handle_bio(dmz, cw, bio);
+		mutex_lock(&dmz->chunk_lock);
+
+		dmz_put_chunk_work(cw);
+
+	}
+
+	/*
+	 * Queueing the work took an extra reference on the chunk
+	 * work, so drop that reference here.
+	 */
+	dmz_put_chunk_work(cw);
+
+	mutex_unlock(&dmz->chunk_lock);
+}
+
+/*
+ * Flush work.
+ */
+static void dmz_flush_work(struct work_struct *work)
+{
+	struct dmz_target *dmz =
+		container_of(work, struct dmz_target, flush_work.work);
+	struct bio *bio;
+	int ret;
+
+	/* Flush metablocks */
+	ret = dmz_flush_mblocks(dmz);
+
+	/* Process queued flush requests */
+	while (1) {
+
+		spin_lock(&dmz->flush_lock);
+		bio = bio_list_pop(&dmz->flush_list);
+		spin_unlock(&dmz->flush_lock);
+
+		if (!bio)
+			break;
+
+		/* Complete the flush request */
+		dmz_bio_end(bio, ret);
+
+	}
+
+	queue_delayed_work(dmz->flush_wq, &dmz->flush_work,
+			   DMZ_FLUSH_PERIOD);
+}
+
+/*
+ * Get a chunk work and start it to process a new BIO.
+ * If the BIO chunk has no work yet, create one.
+ */
+static void dmz_queue_chunk_work(struct dmz_target *dmz,
+				 struct bio *bio)
+{
+	unsigned int chunk = dmz_bio_chunk(dmz, bio);
+	struct dm_chunk_work *cw;
+
+	mutex_lock(&dmz->chunk_lock);
+
+	/* Get the BIO chunk work. If one is not active yet, create one */
+	cw = radix_tree_lookup(&dmz->chunk_rxtree, chunk);
+	if (!cw) {
+		int ret;
+
+		/* Create a new chunk work */
+		cw = kmalloc(sizeof(struct dm_chunk_work), GFP_NOFS);
+		if (!cw)
+			goto out;
+
+		INIT_WORK(&cw->work, dmz_chunk_work);
+		atomic_set(&cw->refcount, 0);
+		cw->target = dmz;
+		cw->chunk = chunk;
+		bio_list_init(&cw->bio_list);
+
+		ret = radix_tree_insert(&dmz->chunk_rxtree, chunk, cw);
+		if (unlikely(ret != 0)) {
+			kfree(cw);
+			cw = NULL;
+			goto out;
+		}
+
+		atomic_inc(&dmz->nr_active_chunks);
+	}
+
+	bio_list_add(&cw->bio_list, bio);
+	dmz_get_chunk_work(cw);
+
+	if (queue_work(dmz->chunk_wq, &cw->work))
+		dmz_get_chunk_work(cw);
+
+out:
+	mutex_unlock(&dmz->chunk_lock);
+}
+
+/*
+ * Process a new BIO.
+ */
+static int dmz_map(struct dm_target *ti, struct bio *bio)
+{
+	struct dmz_target *dmz = ti->private;
+	struct dmz_bioctx *bioctx
+		= dm_per_bio_data(bio, sizeof(struct dmz_bioctx));
+	sector_t sector = bio->bi_iter.bi_sector;
+	unsigned int nr_sectors = bio_sectors(bio);
+	sector_t chunk_sector;
+
+	dmz_dev_debug(dmz,
+		"BIO sector %llu + %u => chunk %llu, block %llu, %u blocks\n",
+		(u64)sector, nr_sectors,
+		(u64)dmz_bio_chunk(dmz, bio),
+		(u64)dmz_chunk_block(dmz, dmz_bio_block(bio)),
+		(unsigned int)dmz_bio_blocks(bio));
+
+	bio->bi_bdev = dmz->zbd;
+
+	if (!nr_sectors &&
+	    (bio_op(bio) != REQ_OP_FLUSH) &&
+	    (bio_op(bio) != REQ_OP_WRITE))
+		return DM_MAPIO_REMAPPED;
+
+	/* The BIO should be block aligned */
+	if ((nr_sectors & DMZ_BLOCK_SECTORS_MASK) ||
+	    (sector & DMZ_BLOCK_SECTORS_MASK)) {
+		dmz_dev_err(dmz,
+			    "Unaligned BIO sector %llu, len %u\n",
+			    (u64)sector,
+			    nr_sectors);
+		return -EIO;
+	}
+
+	/* Initialize the BIO context */
+	bioctx->target = dmz;
+	bioctx->zone = NULL;
+	bioctx->bio = bio;
+	atomic_set(&bioctx->ref, 1);
+	bioctx->error = 0;
+
+	atomic_inc(&dmz->bio_count);
+	dmz->atime = jiffies;
+
+	/* Set the BIO pending in the flush list */
+	if (bio_op(bio) == REQ_OP_FLUSH ||
+	    (!nr_sectors && bio_op(bio) == REQ_OP_WRITE)) {
+		spin_lock(&dmz->flush_lock);
+		bio_list_add(&dmz->flush_list, bio);
+		spin_unlock(&dmz->flush_lock);
+		dmz_trigger_flush(dmz);
+		return DM_MAPIO_SUBMITTED;
+	}
+
+	/* Split zone BIOs to fit entirely into a zone */
+	chunk_sector = dmz_chunk_sector(dmz, sector);
+	if (chunk_sector + nr_sectors > dmz->zone_nr_sectors)
+		dm_accept_partial_bio(bio,
+				      dmz->zone_nr_sectors - chunk_sector);
+
+	/* Now ready to handle this BIO */
+	dmz_queue_chunk_work(dmz, bio);
+
+	return DM_MAPIO_SUBMITTED;
+}
+
+/*
+ * Setup target.
+ */
+static int dmz_ctr(struct dm_target *ti, unsigned int argc, char **argv)
+{
+	struct dmz_target *dmz;
+	int ret;
+
+	/* Check arguments */
+	if (argc != 1) {
+		ti->error = "Invalid argument count";
+		return -EINVAL;
+	}
+
+	/* Allocate and initialize the target descriptor */
+	dmz = kzalloc(sizeof(struct dmz_target), GFP_KERNEL);
+	if (!dmz) {
+		ti->error = "Allocate target descriptor failed";
+		return -ENOMEM;
+	}
+
+	/* Get the target device */
+	ret = dm_get_device(ti, argv[0], dm_table_get_mode(ti->table),
+			    &dmz->ddev);
+	if (ret != 0) {
+		ti->error = "Get target device failed";
+		goto err;
+	}
+
+	dmz->zbd = dmz->ddev->bdev;
+	if (!bdev_is_zoned(dmz->zbd)) {
+		ti->error = "Not a zoned block device";
+		ret = -EINVAL;
+		goto err;
+	}
+
+	dmz->zbd_capacity = i_size_read(dmz->zbd->bd_inode) >> SECTOR_SHIFT;
+	if (ti->begin || (ti->len != dmz->zbd_capacity)) {
+		ti->error = "Partial mapping not supported";
+		ret = -EINVAL;
+		goto err;
+	}
+
+	(void)bdevname(dmz->zbd, dmz->zbd_name);
+	dmz->zbdq = bdev_get_queue(dmz->zbd);
+
+	dmz->mblk_rbtree = RB_ROOT;
+	init_rwsem(&dmz->mblk_sem);
+	spin_lock_init(&dmz->mblk_lock);
+	INIT_LIST_HEAD(&dmz->mblk_lru_list);
+	INIT_LIST_HEAD(&dmz->mblk_dirty_list);
+
+	mutex_init(&dmz->map_lock);
+	atomic_set(&dmz->dz_unmap_nr_rnd, 0);
+	INIT_LIST_HEAD(&dmz->dz_unmap_rnd_list);
+	INIT_LIST_HEAD(&dmz->dz_map_rnd_list);
+
+	atomic_set(&dmz->dz_unmap_nr_seq, 0);
+	INIT_LIST_HEAD(&dmz->dz_unmap_seq_list);
+	INIT_LIST_HEAD(&dmz->dz_map_seq_list);
+
+	init_waitqueue_head(&dmz->dz_free_wq);
+
+	atomic_set(&dmz->nr_reclaim_seq_zones, 0);
+	INIT_LIST_HEAD(&dmz->reclaim_seq_zones_list);
+
+	ret = dmz_init_meta(dmz);
+	if (ret != 0) {
+		ti->error = "Metadata initialization failed";
+		goto err;
+	}
+
+	/* Set target (no write same support) */
+	ti->private = dmz;
+	ti->max_io_len = dmz->zone_nr_sectors << 9;
+	ti->num_flush_bios = 1;
+	ti->num_discard_bios = 1;
+	ti->per_io_data_size = sizeof(struct dmz_bioctx);
+	ti->flush_supported = true;
+	ti->discards_supported = true;
+	ti->split_discard_bios = true;
+	ti->discard_zeroes_data_unsupported = false;
+
+	/* The exposed capacity covers all the chunks that can be mapped */
+	ti->len = dmz->nr_chunks * dmz->zone_nr_sectors;
+
+	/* Zone BIO */
+	atomic_set(&dmz->bio_count, 0);
+	dmz->atime = jiffies;
+	dmz->bio_set = bioset_create_nobvec(DMZ_MIN_BIOS, 0);
+	if (!dmz->bio_set) {
+		ti->error = "Create BIO set failed";
+		ret = -ENOMEM;
+		goto err;
+	}
+
+	/* Chunk BIO work */
+	mutex_init(&dmz->chunk_lock);
+	atomic_set(&dmz->nr_active_chunks, 0);
+	INIT_RADIX_TREE(&dmz->chunk_rxtree, GFP_NOFS);
+	dmz->chunk_wq = alloc_workqueue("dmz_cwq_%s",
+					WQ_MEM_RECLAIM | WQ_UNBOUND,
+					0, dmz->zbd_name);
+	if (!dmz->chunk_wq) {
+		ti->error = "Create chunk workqueue failed";
+		ret = -ENOMEM;
+		goto err;
+	}
+
+	/* Flush work */
+	spin_lock_init(&dmz->flush_lock);
+	bio_list_init(&dmz->flush_list);
+	INIT_DELAYED_WORK(&dmz->flush_work, dmz_flush_work);
+	dmz->flush_wq = alloc_ordered_workqueue("dmz_fwq_%s", WQ_MEM_RECLAIM,
+						dmz->zbd_name);
+	if (!dmz->flush_wq) {
+		ti->error = "Create flush workqueue failed";
+		ret = -ENOMEM;
+		goto err;
+	}
+	mod_delayed_work(dmz->flush_wq, &dmz->flush_work, DMZ_FLUSH_PERIOD);
+
+	/* Reclaim kcopyd client */
+	dmz->reclaim_kc = dm_kcopyd_client_create(&dmz->reclaim_throttle);
+	if (IS_ERR(dmz->reclaim_kc)) {
+		ti->error = "Create kcopyd client failed";
+		ret = PTR_ERR(dmz->reclaim_kc);
+		dmz->reclaim_kc = NULL;
+		goto err;
+	}
+
+	/* Reclaim work */
+	INIT_DELAYED_WORK(&dmz->reclaim_work, dmz_reclaim_work);
+	dmz->reclaim_wq = alloc_ordered_workqueue("dmz_rwq_%s", WQ_MEM_RECLAIM,
+						  dmz->zbd_name);
+	if (!dmz->reclaim_wq) {
+		ti->error = "Create reclaim workqueue failed";
+		ret = -ENOMEM;
+		goto err;
+	}
+
+	dmz_dev_info(dmz,
+		"Target device: %llu 512-byte logical sectors (%llu blocks)\n",
+		(unsigned long long)ti->len,
+		(unsigned long long)dmz_sect2blk(ti->len));
+
+	dmz_trigger_reclaim(dmz);
+
+	return 0;
+
+err:
+	if (dmz->ddev) {
+		if (dmz->reclaim_wq)
+			destroy_workqueue(dmz->reclaim_wq);
+		if (dmz->reclaim_kc)
+			dm_kcopyd_client_destroy(dmz->reclaim_kc);
+		if (dmz->flush_wq)
+			destroy_workqueue(dmz->flush_wq);
+		if (dmz->chunk_wq)
+			destroy_workqueue(dmz->chunk_wq);
+		if (dmz->bio_set)
+			bioset_free(dmz->bio_set);
+		dmz_cleanup_meta(dmz);
+		dm_put_device(ti, dmz->ddev);
+	}
+
+	kfree(dmz);
+
+	return ret;
+}
+
+/*
+ * Cleanup target.
+ */
+static void dmz_dtr(struct dm_target *ti)
+{
+	struct dmz_target *dmz = ti->private;
+
+	dmz_dev_info(dmz, "Removing target device\n");
+
+	flush_workqueue(dmz->chunk_wq);
+	destroy_workqueue(dmz->chunk_wq);
+
+	cancel_delayed_work_sync(&dmz->reclaim_work);
+	destroy_workqueue(dmz->reclaim_wq);
+	dm_kcopyd_client_destroy(dmz->reclaim_kc);
+
+	cancel_delayed_work_sync(&dmz->flush_work);
+	destroy_workqueue(dmz->flush_wq);
+
+	dmz_flush_mblocks(dmz);
+
+	bioset_free(dmz->bio_set);
+
+	dmz_cleanup_meta(dmz);
+
+	dm_put_device(ti, dmz->ddev);
+
+	kfree(dmz);
+}
+
+/*
+ * Setup target request queue limits.
+ */
+static void dmz_io_hints(struct dm_target *ti,
+			 struct queue_limits *limits)
+{
+	struct dmz_target *dmz = ti->private;
+	unsigned int chunk_sectors = dmz->zone_nr_sectors;
+
+	limits->logical_block_size = DMZ_BLOCK_SIZE;
+	limits->physical_block_size = DMZ_BLOCK_SIZE;
+
+	blk_limits_io_min(limits, DMZ_BLOCK_SIZE);
+	blk_limits_io_opt(limits, DMZ_BLOCK_SIZE);
+
+	limits->discard_alignment = DMZ_BLOCK_SIZE;
+	limits->discard_granularity = DMZ_BLOCK_SIZE;
+	limits->max_discard_sectors = chunk_sectors;
+	limits->max_hw_discard_sectors = chunk_sectors;
+	limits->discard_zeroes_data = 1;
+
+	/* FS hint to try to align to the device zone size */
+	limits->chunk_sectors = chunk_sectors;
+	limits->max_sectors = chunk_sectors;
+
+	/* We are exposing a drive-managed zone model */
+	limits->zoned = BLK_ZONED_NONE;
+}
+
+/*
+ * Pass on ioctl to the backend device.
+ */
+static int dmz_prepare_ioctl(struct dm_target *ti,
+			     struct block_device **bdev, fmode_t *mode)
+{
+	struct dmz_target *dmz = ti->private;
+
+	*bdev = dmz->zbd;
+
+	return 0;
+}
+
+/*
+ * Stop reclaim before suspend.
+ */
+static void dmz_presuspend(struct dm_target *ti)
+{
+	struct dmz_target *dmz = ti->private;
+
+	dmz_dev_debug(dmz, "Pre-suspend\n");
+
+	/* Enter suspend state */
+	set_bit(DMZ_SUSPENDED, &dmz->flags);
+	smp_mb__after_atomic();
+
+	/* Stop reclaim */
+	cancel_delayed_work_sync(&dmz->reclaim_work);
+}
+
+/*
+ * Restart reclaim if suspend failed.
+ */
+static void dmz_presuspend_undo(struct dm_target *ti)
+{
+	struct dmz_target *dmz = ti->private;
+
+	dmz_dev_debug(dmz, "Pre-suspend undo\n");
+
+	/* Clear suspend state */
+	clear_bit_unlock(DMZ_SUSPENDED, &dmz->flags);
+	smp_mb__after_atomic();
+
+	/* Restart reclaim */
+	mod_delayed_work(dmz->reclaim_wq, &dmz->reclaim_work, 0);
+}
+
+/*
+ * Flush all outstanding work before suspending.
+ */
+static void dmz_postsuspend(struct dm_target *ti)
+{
+	struct dmz_target *dmz = ti->private;
+
+	dmz_dev_debug(dmz, "Post-suspend\n");
+
+	/* Wait for pending work to complete */
+	flush_workqueue(dmz->chunk_wq);
+	flush_workqueue(dmz->flush_wq);
+}
+
+/*
+ * Refresh zone information before resuming.
+ */
+static int dmz_preresume(struct dm_target *ti)
+{
+	struct dmz_target *dmz = ti->private;
+
+	if (!test_bit(DMZ_SUSPENDED, &dmz->flags))
+		return 0;
+
+	dmz_dev_debug(dmz, "Pre-resume\n");
+
+	/* Refresh zone information */
+	return dmz_resume_meta(dmz);
+}
+
+/*
+ * Resume.
+ */
+static void dmz_resume(struct dm_target *ti)
+{
+	struct dmz_target *dmz = ti->private;
+
+	if (!test_bit(DMZ_SUSPENDED, &dmz->flags))
+		return;
+
+	dmz_dev_debug(dmz, "Resume\n");
+
+	/* Clear suspend state */
+	clear_bit_unlock(DMZ_SUSPENDED, &dmz->flags);
+	smp_mb__after_atomic();
+
+	/* Restart reclaim */
+	mod_delayed_work(dmz->reclaim_wq, &dmz->reclaim_work, 0);
+}
+
+static int dmz_iterate_devices(struct dm_target *ti,
+			       iterate_devices_callout_fn fn, void *data)
+{
+	struct dmz_target *dmz = ti->private;
+	sector_t offset = dmz->zbd_capacity -
+		((sector_t)dmz->nr_chunks * dmz->zone_nr_sectors);
+
+	return fn(ti, dmz->ddev, offset, ti->len, data);
+}
+
+static struct target_type dmz_type = {
+	.name		 = "dm-zoned",
+	.version	 = {1, 0, 0},
+	.features	 = DM_TARGET_SINGLETON | DM_TARGET_ZONED_HM,
+	.module		 = THIS_MODULE,
+	.ctr		 = dmz_ctr,
+	.dtr		 = dmz_dtr,
+	.map		 = dmz_map,
+	.io_hints	 = dmz_io_hints,
+	.prepare_ioctl	 = dmz_prepare_ioctl,
+	.presuspend	 = dmz_presuspend,
+	.presuspend_undo = dmz_presuspend_undo,
+	.postsuspend	 = dmz_postsuspend,
+	.preresume	 = dmz_preresume,
+	.resume		 = dmz_resume,
+	.iterate_devices = dmz_iterate_devices,
+};
+
+static int __init dmz_init(void)
+{
+	dmz_info("Zoned block device target (C) Western Digital\n");
+
+	return dm_register_target(&dmz_type);
+}
+
+static void __exit dmz_exit(void)
+{
+	dm_unregister_target(&dmz_type);
+}
+
+module_init(dmz_init);
+module_exit(dmz_exit);
+
+MODULE_DESCRIPTION(DM_NAME " target for zoned block devices");
+MODULE_AUTHOR("Damien Le Moal <damien.lemoal@wdc.com>");
+MODULE_LICENSE("GPL");
diff --git a/drivers/md/dm-zoned-metadata.c b/drivers/md/dm-zoned-metadata.c
new file mode 100644
index 0000000..a6c7d9b
--- /dev/null
+++ b/drivers/md/dm-zoned-metadata.c
@@ -0,0 +1,2195 @@
+/*
+ * Drive-managed zoned block device target
+ * Copyright (C) 2017 Western Digital Corporation or its affiliates.
+ *
+ * Written by: Damien Le Moal <damien.lemoal@wdc.com>
+ *
+ * This software is distributed under the terms of the GNU General Public
+ * License version 2, or any later version, "as is," without technical
+ * support, and WITHOUT ANY WARRANTY, without even the implied warranty
+ * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#include <linux/module.h>
+#include <linux/crc32.h>
+
+#include "dm-zoned.h"
+
+/*
+ * Allocate a metadata block.
+ */
+static struct dmz_mblock *dmz_alloc_mblock(struct dmz_target *dmz,
+					   sector_t mblk_no)
+{
+	struct dmz_mblock *mblk = NULL;
+
+	/* See if we can reuse cached blocks */
+	if (dmz->max_nr_mblks &&
+	    atomic_read(&dmz->nr_mblks) > dmz->max_nr_mblks) {
+
+		spin_lock(&dmz->mblk_lock);
+
+		if (list_empty(&dmz->mblk_lru_list) &&
+		    !list_empty(&dmz->mblk_dirty_list))
+			/* Cleanup dirty blocks */
+			dmz_trigger_flush(dmz);
+
+		mblk = list_first_entry_or_null(&dmz->mblk_lru_list,
+						struct dmz_mblock, link);
+		if (mblk) {
+			list_del_init(&mblk->link);
+			rb_erase(&mblk->node, &dmz->mblk_rbtree);
+			mblk->no = mblk_no;
+		}
+
+		spin_unlock(&dmz->mblk_lock);
+
+		if (mblk)
+			return mblk;
+	}
+
+	/* Allocate a new block */
+	mblk = kmalloc(sizeof(struct dmz_mblock), GFP_NOIO);
+	if (!mblk)
+		return NULL;
+
+	mblk->page = alloc_page(GFP_NOIO);
+	if (!mblk->page) {
+		kfree(mblk);
+		return NULL;
+	}
+
+	RB_CLEAR_NODE(&mblk->node);
+	INIT_LIST_HEAD(&mblk->link);
+	atomic_set(&mblk->ref, 0);
+	mblk->state = 0;
+	mblk->no = mblk_no;
+	mblk->data = page_address(mblk->page);
+
+	atomic_inc(&dmz->nr_mblks);
+
+	return mblk;
+}
+
+/*
+ * Free a metadata block.
+ */
+static void dmz_free_mblock(struct dmz_target *dmz, struct dmz_mblock *mblk)
+{
+	__free_pages(mblk->page, 0);
+	kfree(mblk);
+
+	atomic_dec(&dmz->nr_mblks);
+}
+
+/*
+ * Insert a metadata block in the rbtree.
+ */
+static void dmz_insert_mblock(struct dmz_target *dmz,
+			      struct dmz_mblock *mblk)
+{
+	struct rb_root *root = &dmz->mblk_rbtree;
+	struct rb_node **new = &(root->rb_node), *parent = NULL;
+	struct dmz_mblock *b;
+
+	/* Figure out where to put the new node */
+	while (*new) {
+		b = container_of(*new, struct dmz_mblock, node);
+		parent = *new;
+		new = (b->no < mblk->no) ?
+			&((*new)->rb_left) : &((*new)->rb_right);
+	}
+
+	/* Add new node and rebalance tree */
+	rb_link_node(&mblk->node, parent, new);
+	rb_insert_color(&mblk->node, root);
+}
+
+/*
+ * Lookup a metadata block in the rbtree.
+ */
+static struct dmz_mblock *dmz_lookup_mblock(struct dmz_target *dmz,
+					    sector_t mblk_no)
+{
+	struct rb_root *root = &dmz->mblk_rbtree;
+	struct rb_node *node = root->rb_node;
+	struct dmz_mblock *mblk;
+
+	while (node) {
+		mblk = container_of(node, struct dmz_mblock, node);
+		if (mblk->no == mblk_no)
+			return mblk;
+		node = (mblk->no < mblk_no) ? node->rb_left : node->rb_right;
+	}
+
+	return NULL;
+}
+
+/*
+ * Metadata block BIO end callback.
+ */
+static void dmz_mblock_bio_end_io(struct bio *bio)
+{
+	struct dmz_mblock *mblk = bio->bi_private;
+	int flag;
+
+	if (bio->bi_error)
+		set_bit(DMZ_META_ERROR, &mblk->state);
+
+	if (bio_op(bio) == REQ_OP_WRITE)
+		flag = DMZ_META_WRITING;
+	else
+		flag = DMZ_META_READING;
+
+	clear_bit_unlock(flag, &mblk->state);
+	smp_mb__after_atomic();
+	wake_up_bit(&mblk->state, flag);
+
+	bio_put(bio);
+}
+
+/*
+ * Read a metadata block from disk.
+ */
+static struct dmz_mblock *dmz_fetch_mblock(struct dmz_target *dmz,
+					   sector_t mblk_no)
+{
+	struct dmz_mblock *mblk;
+	sector_t block = dmz->sb[dmz->mblk_primary].block + mblk_no;
+	struct bio *bio;
+
+	/* Get block and insert it */
+	mblk = dmz_alloc_mblock(dmz, mblk_no);
+	if (!mblk)
+		return NULL;
+
+	/* Allocate the read BIO before the block is visible in the rbtree */
+	bio = bio_alloc(GFP_NOIO, 1);
+	if (!bio) {
+		dmz_free_mblock(dmz, mblk);
+		return NULL;
+	}
+
+	spin_lock(&dmz->mblk_lock);
+	atomic_inc(&mblk->ref);
+	set_bit(DMZ_META_READING, &mblk->state);
+	dmz_insert_mblock(dmz, mblk);
+	spin_unlock(&dmz->mblk_lock);
+
+	bio->bi_iter.bi_sector = dmz_blk2sect(block);
+	bio->bi_bdev = dmz->zbd;
+	bio->bi_private = mblk;
+	bio->bi_end_io = dmz_mblock_bio_end_io;
+	bio_set_op_attrs(bio, REQ_OP_READ, REQ_META | REQ_PRIO);
+	bio_add_page(bio, mblk->page, DMZ_BLOCK_SIZE, 0);
+	submit_bio(bio);
+
+	return mblk;
+}
+
+/*
+ * Shrink the metadata block cache by freeing unused cached blocks.
+ */
+static void dmz_shrink_mblock_cache(struct dmz_target *dmz, bool idle)
+{
+	struct dmz_mblock *mblk;
+	unsigned int nr_mblks;
+
+	if (!dmz->max_nr_mblks)
+		return;
+
+	if (idle)
+		nr_mblks = dmz->min_nr_mblks;
+	else
+		nr_mblks = dmz->max_nr_mblks;
+
+	while (atomic_read(&dmz->nr_mblks) > nr_mblks &&
+	       !list_empty(&dmz->mblk_lru_list)) {
+		mblk = list_first_entry(&dmz->mblk_lru_list,
+					struct dmz_mblock, link);
+		list_del_init(&mblk->link);
+		rb_erase(&mblk->node, &dmz->mblk_rbtree);
+		dmz_free_mblock(dmz, mblk);
+	}
+}
+
+/*
+ * Release a metadata block.
+ */
+static void dmz_release_mblock(struct dmz_target *dmz, struct dmz_mblock *mblk)
+{
+	if (!mblk)
+		return;
+
+	spin_lock(&dmz->mblk_lock);
+
+	if (atomic_dec_and_test(&mblk->ref)) {
+		if (test_bit(DMZ_META_ERROR, &mblk->state)) {
+			rb_erase(&mblk->node, &dmz->mblk_rbtree);
+			dmz_free_mblock(dmz, mblk);
+		} else if (!test_bit(DMZ_META_DIRTY, &mblk->state)) {
+			list_add_tail(&mblk->link, &dmz->mblk_lru_list);
+		}
+		dmz_shrink_mblock_cache(dmz, false);
+	}
+
+	spin_unlock(&dmz->mblk_lock);
+}
+
+/*
+ * Get a metadata block from the rbtree. If the block
+ * is not present, read it from disk.
+ */
+static struct dmz_mblock *dmz_get_mblock(struct dmz_target *dmz,
+					 sector_t mblk_no)
+{
+	struct dmz_mblock *mblk;
+
+	/* Check rbtree */
+	spin_lock(&dmz->mblk_lock);
+	mblk = dmz_lookup_mblock(dmz, mblk_no);
+	if (mblk) {
+		/* Cache hit: remove block from LRU list */
+		if (atomic_inc_return(&mblk->ref) == 1 &&
+		    !test_bit(DMZ_META_DIRTY, &mblk->state))
+			list_del_init(&mblk->link);
+	}
+	spin_unlock(&dmz->mblk_lock);
+
+	if (!mblk) {
+		/* Cache miss: read the block from disk */
+		mblk = dmz_fetch_mblock(dmz, mblk_no);
+		if (!mblk)
+			return ERR_PTR(-ENOMEM);
+	}
+
+	/* Wait for any ongoing read I/O and check for errors */
+	wait_on_bit_io(&mblk->state, DMZ_META_READING,
+		       TASK_UNINTERRUPTIBLE);
+	if (test_bit(DMZ_META_ERROR, &mblk->state)) {
+		dmz_release_mblock(dmz, mblk);
+		return ERR_PTR(-EIO);
+	}
+
+	return mblk;
+}
+
+/*
+ * Mark a metadata block dirty.
+ */
+static void dmz_dirty_mblock(struct dmz_target *dmz, struct dmz_mblock *mblk)
+{
+	spin_lock(&dmz->mblk_lock);
+	if (!test_and_set_bit(DMZ_META_DIRTY, &mblk->state))
+		list_add_tail(&mblk->link, &dmz->mblk_dirty_list);
+	spin_unlock(&dmz->mblk_lock);
+}
+
+/*
+ * Issue a metadata block write BIO.
+ */
+static void dmz_write_mblock(struct dmz_target *dmz, struct dmz_mblock *mblk,
+			     unsigned int set)
+{
+	sector_t block = dmz->sb[set].block + mblk->no;
+	struct bio *bio;
+
+	bio = bio_alloc(GFP_NOIO, 1);
+	if (!bio) {
+		set_bit(DMZ_META_ERROR, &mblk->state);
+		return;
+	}
+
+	set_bit(DMZ_META_WRITING, &mblk->state);
+
+	bio->bi_iter.bi_sector = dmz_blk2sect(block);
+	bio->bi_bdev = dmz->zbd;
+	bio->bi_private = mblk;
+	bio->bi_end_io = dmz_mblock_bio_end_io;
+	bio_set_op_attrs(bio, REQ_OP_WRITE, REQ_META | REQ_PRIO);
+	bio_add_page(bio, mblk->page, DMZ_BLOCK_SIZE, 0);
+	submit_bio(bio);
+}
+
+/*
+ * Sync read/write a block.
+ */
+static int dmz_rdwr_block_sync(struct dmz_target *dmz, int op, sector_t block,
+			       struct page *page)
+{
+	struct bio *bio;
+	int ret;
+
+	bio = bio_alloc(GFP_NOIO, 1);
+	if (!bio)
+		return -ENOMEM;
+
+	bio->bi_iter.bi_sector = dmz_blk2sect(block);
+	bio->bi_bdev = dmz->zbd;
+	bio_set_op_attrs(bio, op, REQ_SYNC | REQ_META | REQ_PRIO);
+	bio_add_page(bio, page, DMZ_BLOCK_SIZE, 0);
+	ret = submit_bio_wait(bio);
+	bio_put(bio);
+
+	return ret;
+}
+
+/*
+ * Write super block of the specified metadata set.
+ */
+static int dmz_write_sb(struct dmz_target *dmz, unsigned int set)
+{
+	sector_t block = dmz->sb[set].block;
+	struct dmz_mblock *mblk = dmz->sb[set].mblk;
+	struct dmz_super *sb = dmz->sb[set].sb;
+	u64 sb_gen = dmz->sb_gen + 1;
+	int ret;
+
+	sb->magic = cpu_to_le32(DMZ_MAGIC);
+	sb->version = cpu_to_le32(DMZ_META_VER);
+
+	sb->gen = cpu_to_le64(sb_gen);
+
+	sb->sb_block = cpu_to_le64(block);
+	sb->nr_meta_blocks = cpu_to_le32(dmz->nr_meta_blocks);
+	sb->nr_reserved_seq = cpu_to_le32(dmz->nr_reserved_seq);
+	sb->nr_chunks = cpu_to_le32(dmz->nr_chunks);
+
+	sb->nr_map_blocks = cpu_to_le32(dmz->nr_map_blocks);
+	sb->nr_bitmap_blocks = cpu_to_le32(dmz->nr_bitmap_blocks);
+
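+	/* Checksum the entire block, seeding the CRC with the generation */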
+	sb->crc = 0;
+	sb->crc = cpu_to_le32(crc32_le(sb_gen,
+				       (unsigned char *)sb, DMZ_BLOCK_SIZE));
+
+	ret = dmz_rdwr_block_sync(dmz, REQ_OP_WRITE, block, mblk->page);
+	if (ret == 0)
+		ret = blkdev_issue_flush(dmz->zbd, GFP_KERNEL, NULL);
+
+	return ret;
+}
+
+/*
+ * Write dirty metadata blocks to the specified set.
+ */
+static int dmz_write_dirty_mblocks(struct dmz_target *dmz,
+				   struct list_head *write_list,
+				   unsigned int set)
+{
+	struct dmz_mblock *mblk;
+	struct blk_plug plug;
+	int ret = 0;
+
+	/* Issue writes */
+	blk_start_plug(&plug);
+	list_for_each_entry(mblk, write_list, link)
+		dmz_write_mblock(dmz, mblk, set);
+	blk_finish_plug(&plug);
+
+	/* Wait for completion */
+	list_for_each_entry(mblk, write_list, link) {
+		wait_on_bit_io(&mblk->state, DMZ_META_WRITING,
+			       TASK_UNINTERRUPTIBLE);
+		if (test_bit(DMZ_META_ERROR, &mblk->state)) {
+			dmz_dev_err(dmz, "Write metablock %u/%llu failed\n",
+				    set, (u64)mblk->no);
+			clear_bit(DMZ_META_ERROR, &mblk->state);
+			ret = -EIO;
+		}
+	}
+
+	/* Flush drive cache (this will also sync data) */
+	if (ret == 0)
+		ret = blkdev_issue_flush(dmz->zbd, GFP_KERNEL, NULL);
+
+	return ret;
+}
+
+/*
+ * Log dirty metadata blocks.
+ */
+static int dmz_log_dirty_mblocks(struct dmz_target *dmz,
+				 struct list_head *write_list)
+{
+	unsigned int log_set = dmz->mblk_primary ^ 0x1;
+	int ret;
+
+	dmz_dev_debug(dmz, "Log metadata to set %u, gen %llu\n",
+		      log_set, dmz->sb_gen + 1);
+
+	/* Write dirty blocks to the log */
+	ret = dmz_write_dirty_mblocks(dmz, write_list, log_set);
+	if (ret)
+		return ret;
+
+	/*
+	 * No error so far: now validate the log by updating the
+	 * log index super block generation.
+	 */
+	ret = dmz_write_sb(dmz, log_set);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+/*
+ * Flush dirty metadata blocks.
+ */
+int dmz_flush_mblocks(struct dmz_target *dmz)
+{
+	struct dmz_mblock *mblk;
+	struct list_head write_list;
+	int ret;
+
+	INIT_LIST_HEAD(&write_list);
+
+	/*
+	 * Prevent zone BIOs and reclaim from running. This ensures
+	 * exclusive access to the metadata.
+	 */
+	down_write(&dmz->mblk_sem);
+
+	/* If there are no dirty metadata blocks, just flush the device cache */
+	if (list_empty(&dmz->mblk_dirty_list)) {
+		ret = blkdev_issue_flush(dmz->zbd, GFP_KERNEL, NULL);
+		goto out;
+	}
+
+	/*
+	 * The primary metadata set is still clean. Keep it this way until
+	 * all updates are successful in the secondary set. That is, use
+	 * the secondary set as a log.
+	 */
+	list_splice_init(&dmz->mblk_dirty_list, &write_list);
+	ret = dmz_log_dirty_mblocks(dmz, &write_list);
+	if (ret)
+		goto out;
+
+	/*
+	 * The log is on disk. It is now safe to update in place
+	 * in the primary metadata set.
+	 */
+	dmz_dev_debug(dmz, "Commit metadata to set %u, gen %llu\n",
+		      dmz->mblk_primary, dmz->sb_gen + 1);
+	ret = dmz_write_dirty_mblocks(dmz, &write_list, dmz->mblk_primary);
+	if (ret)
+		goto out;
+
+	ret = dmz_write_sb(dmz, dmz->mblk_primary);
+	if (ret)
+		goto out;
+
+	while (!list_empty(&write_list)) {
+		mblk = list_first_entry(&write_list,
+					struct dmz_mblock, link);
+		list_del_init(&mblk->link);
+
+		clear_bit(DMZ_META_DIRTY, &mblk->state);
+		if (atomic_read(&mblk->ref) == 0)
+			list_add_tail(&mblk->link, &dmz->mblk_lru_list);
+
+	}
+
+	dmz->sb_gen++;
+
+out:
+	if (ret && !list_empty(&write_list))
+		list_splice(&write_list, &dmz->mblk_dirty_list);
+
+	/* If we were idle for a full flush period, shrink the cache */
+	if (time_is_before_jiffies(dmz->atime + DMZ_FLUSH_PERIOD * HZ))
+		dmz_shrink_mblock_cache(dmz, true);
+
+	up_write(&dmz->mblk_sem);
+
+	return ret;
+}
+
+/*
+ * Check super block.
+ */
+static int dmz_check_sb(struct dmz_target *dmz, struct dmz_super *sb)
+{
+	unsigned int nr_meta_zones, nr_data_zones;
+	u32 crc, stored_crc;
+	u64 gen;
+
+	gen = le64_to_cpu(sb->gen);
+	stored_crc = le32_to_cpu(sb->crc);
+	sb->crc = 0;
+	crc = crc32_le(gen, (unsigned char *)sb, DMZ_BLOCK_SIZE);
+	if (crc != stored_crc) {
+		dmz_dev_err(dmz,
+			    "Invalid checksum (needed 0x%08x, got 0x%08x)\n",
+			    crc, stored_crc);
+		return -ENXIO;
+	}
+
+	if (le32_to_cpu(sb->magic) != DMZ_MAGIC) {
+		dmz_dev_err(dmz,
+			    "Invalid meta magic (need 0x%08x, got 0x%08x)\n",
+			    DMZ_MAGIC, le32_to_cpu(sb->magic));
+		return -ENXIO;
+	}
+
+	if (le32_to_cpu(sb->version) != DMZ_META_VER) {
+		dmz_dev_err(dmz, "Invalid meta version (need %d, got %d)\n",
+			    DMZ_META_VER, le32_to_cpu(sb->version));
+		return -ENXIO;
+	}
+
+	nr_meta_zones =
+		(le32_to_cpu(sb->nr_meta_blocks) + dmz->zone_nr_blocks - 1)
+		>> dmz->zone_nr_blocks_shift;
+	if (!nr_meta_zones ||
+	    nr_meta_zones >= dmz->nr_rnd_zones) {
+		dmz_dev_err(dmz, "Invalid number of metadata blocks\n");
+		return -ENXIO;
+	}
+
+	if (!le32_to_cpu(sb->nr_reserved_seq) ||
+	    le32_to_cpu(sb->nr_reserved_seq) >=
+	    (dmz->nr_useable_zones - nr_meta_zones)) {
+		dmz_dev_err(dmz,
+			    "Invalid number of reserved sequential zones\n");
+		return -ENXIO;
+	}
+
+	nr_data_zones = dmz->nr_useable_zones -
+		(nr_meta_zones * 2 + le32_to_cpu(sb->nr_reserved_seq));
+	if (le32_to_cpu(sb->nr_chunks) > nr_data_zones) {
+		dmz_dev_err(dmz, "Invalid number of chunks %u / %u\n",
+			    le32_to_cpu(sb->nr_chunks), nr_data_zones);
+		return -ENXIO;
+	}
+
+	/* OK */
+	dmz->nr_meta_blocks = le32_to_cpu(sb->nr_meta_blocks);
+	dmz->nr_reserved_seq = le32_to_cpu(sb->nr_reserved_seq);
+	dmz->nr_chunks = le32_to_cpu(sb->nr_chunks);
+	dmz->nr_map_blocks = le32_to_cpu(sb->nr_map_blocks);
+	dmz->nr_bitmap_blocks = le32_to_cpu(sb->nr_bitmap_blocks);
+	dmz->nr_meta_zones = nr_meta_zones;
+	dmz->nr_data_zones = nr_data_zones;
+
+	return 0;
+}
+
+/*
+ * Read the first or second super block from disk.
+ */
+static int dmz_read_sb(struct dmz_target *dmz, unsigned int set)
+{
+	return dmz_rdwr_block_sync(dmz, REQ_OP_READ,
+				   dmz->sb[set].block,
+				   dmz->sb[set].mblk->page);
+}
+
+/*
+ * Determine the position of the secondary super blocks on disk.
+ * This is used only if a corruption of the primary super block
+ * is detected.
+ */
+static int dmz_lookup_secondary_sb(struct dmz_target *dmz)
+{
+	struct dmz_mblock *mblk;
+	int i;
+
+	/* Allocate a block */
+	mblk = dmz_alloc_mblock(dmz, 0);
+	if (!mblk)
+		return -ENOMEM;
+
+	dmz->sb[1].mblk = mblk;
+	dmz->sb[1].sb = mblk->data;
+
+	/* Bad first super block: search for the second one */
+	dmz->sb[1].block = dmz->sb[0].block + dmz->zone_nr_blocks;
+	for (i = 0; i < dmz->nr_rnd_zones - 1; i++) {
+		if (dmz_read_sb(dmz, 1) != 0)
+			break;
+		if (le32_to_cpu(dmz->sb[1].sb->magic) == DMZ_MAGIC)
+			return 0;
+		dmz->sb[1].block += dmz->zone_nr_blocks;
+	}
+
+	dmz_free_mblock(dmz, mblk);
+	dmz->sb[1].mblk = NULL;
+
+	return -EIO;
+}
+
+/*
+ * Allocate a metadata block and read a super block from disk.
+ */
+static int dmz_get_sb(struct dmz_target *dmz, unsigned int set)
+{
+	struct dmz_mblock *mblk;
+	int ret;
+
+	/* Allocate a block */
+	mblk = dmz_alloc_mblock(dmz, 0);
+	if (!mblk)
+		return -ENOMEM;
+
+	dmz->sb[set].mblk = mblk;
+	dmz->sb[set].sb = mblk->data;
+
+	/* Read super block */
+	ret = dmz_read_sb(dmz, set);
+	if (ret) {
+		dmz_free_mblock(dmz, mblk);
+		dmz->sb[set].mblk = NULL;
+		return ret;
+	}
+
+	return 0;
+}
+
+/*
+ * Recover a metadata set.
+ */
+static int dmz_recover_mblocks(struct dmz_target *dmz, unsigned int dst_set)
+{
+	unsigned int src_set = dst_set ^ 0x1;
+	struct page *page;
+	int i, ret;
+
+	dmz_dev_warn(dmz, "Metadata set %u invalid: recovering\n",
+		     dst_set);
+
+	if (dst_set == 0)
+		dmz->sb[0].block = dmz_start_block(dmz, dmz->sb_zone);
+	else
+		dmz->sb[1].block = dmz->sb[0].block +
+			(dmz->nr_meta_zones * dmz->zone_nr_blocks);
+
+	page = alloc_page(GFP_KERNEL);
+	if (!page)
+		return -ENOMEM;
+
+	/* Copy metadata blocks */
+	for (i = 1; i < dmz->nr_meta_blocks; i++) {
+		ret = dmz_rdwr_block_sync(dmz, REQ_OP_READ,
+					  dmz->sb[src_set].block + i,
+					  page);
+		if (ret)
+			goto out;
+		ret = dmz_rdwr_block_sync(dmz, REQ_OP_WRITE,
+					  dmz->sb[dst_set].block + i,
+					  page);
+		if (ret)
+			goto out;
+	}
+
+	/* Finalize with the super block */
+	if (!dmz->sb[dst_set].mblk) {
+		dmz->sb[dst_set].mblk = dmz_alloc_mblock(dmz, 0);
+		if (!dmz->sb[dst_set].mblk) {
+			ret = -ENOMEM;
+			goto out;
+		}
+		dmz->sb[dst_set].sb = dmz->sb[dst_set].mblk->data;
+	}
+
+	ret = dmz_write_sb(dmz, dst_set);
+
+out:
+	__free_pages(page, 0);
+
+	return ret;
+}
+
+/*
+ * Read the super blocks from disk and select the valid metadata set.
+ */
+static int dmz_load_sb(struct dmz_target *dmz)
+{
+	bool sb_good[2] = {false, false};
+	u64 sb_gen[2] = {0, 0};
+	int ret;
+
+	/* Read and check the primary super block */
+	dmz->sb[0].block = dmz_start_block(dmz, dmz->sb_zone);
+	ret = dmz_get_sb(dmz, 0);
+	if (ret) {
+		dmz_dev_err(dmz, "Read primary super block failed\n");
+		return ret;
+	}
+
+	ret = dmz_check_sb(dmz, dmz->sb[0].sb);
+
+	/* Read and check secondary super block */
+	if (ret == 0) {
+		sb_good[0] = true;
+		dmz->sb[1].block = dmz->sb[0].block +
+			(dmz->nr_meta_zones * dmz->zone_nr_blocks);
+		ret = dmz_get_sb(dmz, 1);
+	} else {
+		ret = dmz_lookup_secondary_sb(dmz);
+	}
+	if (ret) {
+		dmz_dev_err(dmz, "Read secondary super block failed\n");
+		return ret;
+	}
+
+	ret = dmz_check_sb(dmz, dmz->sb[1].sb);
+	if (ret == 0)
+		sb_good[1] = true;
+
+	/* Use highest generation sb first */
+	if (!sb_good[0] && !sb_good[1]) {
+		dmz_dev_err(dmz, "No valid super block found\n");
+		return -EIO;
+	}
+
+	if (sb_good[0])
+		sb_gen[0] = le64_to_cpu(dmz->sb[0].sb->gen);
+	else
+		ret = dmz_recover_mblocks(dmz, 0);
+
+	if (sb_good[1])
+		sb_gen[1] = le64_to_cpu(dmz->sb[1].sb->gen);
+	else
+		ret = dmz_recover_mblocks(dmz, 1);
+
+	if (ret) {
+		dmz_dev_err(dmz, "Recovery failed\n");
+		return -EIO;
+	}
+
+	if (sb_gen[0] >= sb_gen[1]) {
+		dmz->sb_gen = sb_gen[0];
+		dmz->mblk_primary = 0;
+	} else {
+		dmz->sb_gen = sb_gen[1];
+		dmz->mblk_primary = 1;
+	}
+
+	dmz_dev_debug(dmz, "Using super block %u (gen %llu)\n",
+		      dmz->mblk_primary, dmz->sb_gen);
+
+	return 0;
+}
+
+/*
+ * Initialize a zone descriptor.
+ */
+static int dmz_init_zone(struct dmz_target *dmz, struct dm_zone *zone,
+			 struct blk_zone *blkz)
+{
+	/* Ignore a possible smaller last (runt) zone */
+	if (blkz->len != dmz->zone_nr_sectors) {
+		if (blkz->start + blkz->len == dmz->zbd_capacity)
+			return 0;
+		return -ENXIO;
+	}
+
+	INIT_LIST_HEAD(&zone->link);
+	atomic_set(&zone->refcount, 0);
+	zone->chunk = DMZ_MAP_UNMAPPED;
+
+	if (blkz->type == BLK_ZONE_TYPE_CONVENTIONAL) {
+		set_bit(DMZ_RND, &zone->flags);
+		dmz->nr_rnd_zones++;
+	} else if (blkz->type == BLK_ZONE_TYPE_SEQWRITE_REQ ||
+		   blkz->type == BLK_ZONE_TYPE_SEQWRITE_PREF) {
+		set_bit(DMZ_SEQ, &zone->flags);
+	} else {
+		return -ENXIO;
+	}
+
+	if (blkz->cond == BLK_ZONE_COND_OFFLINE)
+		set_bit(DMZ_OFFLINE, &zone->flags);
+	else if (blkz->cond == BLK_ZONE_COND_READONLY)
+		set_bit(DMZ_READ_ONLY, &zone->flags);
+
+	if (dmz_is_rnd(zone))
+		zone->wp_block = 0;
+	else
+		zone->wp_block = dmz_sect2blk(blkz->wp - blkz->start);
+
+	if (!dmz_is_offline(zone) && !dmz_is_readonly(zone)) {
+		dmz->nr_useable_zones++;
+		if (dmz_is_rnd(zone)) {
+			dmz->nr_rnd_zones++;
+			if (!dmz->sb_zone) {
+				/* Super block zone */
+				dmz->sb_zone = zone;
+			}
+		}
+	}
+
+	return 0;
+}
+
+/*
+ * Free zones descriptors.
+ */
+static void dmz_drop_zones(struct dmz_target *dmz)
+{
+	kfree(dmz->zones);
+	dmz->zones = NULL;
+}
+
+/*
+ * Allocate and initialize zone descriptors using the zone
+ * information from disk.
+ */
+static int dmz_init_zones(struct dmz_target *dmz)
+{
+	struct dm_zone *zone;
+	struct blk_zone *blkz;
+	unsigned int nr_blkz;
+	sector_t sector = 0;
+	int i, ret = 0;
+
+	/* Init */
+	dmz->zone_nr_sectors = dmz->zbdq->limits.chunk_sectors;
+	dmz->zone_nr_sectors_shift = ilog2(dmz->zone_nr_sectors);
+
+	dmz->zone_nr_blocks = dmz_sect2blk(dmz->zone_nr_sectors);
+	dmz->zone_nr_blocks_shift = ilog2(dmz->zone_nr_blocks);
+
+	dmz->zone_bitmap_size = dmz->zone_nr_blocks >> 3;
+	dmz->zone_nr_bitmap_blocks =
+		dmz->zone_bitmap_size >> DMZ_BLOCK_SHIFT;
+
+	dmz->nr_zones = (dmz->zbd_capacity + dmz->zone_nr_sectors - 1)
+		>> dmz->zone_nr_sectors_shift;
+
+	/* Allocate zone array */
+	dmz->zones = kcalloc(dmz->nr_zones, sizeof(struct dm_zone), GFP_KERNEL);
+	if (!dmz->zones)
+		return -ENOMEM;
+
+	dmz_dev_info(dmz, "Using %zu B for zone information\n",
+		     sizeof(struct dm_zone) * dmz->nr_zones);
+
+	/* Get zone information */
+	nr_blkz = DMZ_REPORT_NR_ZONES;
+	blkz = kcalloc(nr_blkz, sizeof(struct blk_zone), GFP_KERNEL);
+	if (!blkz) {
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	/*
+	 * Get zone information and initialize zone descriptors.
+	 * At the same time, determine where the super block
+	 * should be: first block of the first randomly writable
+	 * zone.
+	 */
+	zone = dmz->zones;
+	while (sector < dmz->zbd_capacity) {
+
+		/* Get zone information */
+		nr_blkz = DMZ_REPORT_NR_ZONES;
+		ret = blkdev_report_zones(dmz->zbd, sector,
+					  blkz, &nr_blkz,
+					  GFP_KERNEL);
+		if (ret) {
+			dmz_dev_err(dmz, "Report zones failed %d\n", ret);
+			goto out;
+		}
+		if (!nr_blkz)
+			break;
+
+		/* Process report */
+		for (i = 0; i < nr_blkz; i++) {
+			ret = dmz_init_zone(dmz, zone, &blkz[i]);
+			if (ret)
+				goto out;
+			sector += dmz->zone_nr_sectors;
+			zone++;
+		}
+
+	}
+
+	/* The entire zone configuration of the disk should now be known */
+	if (sector < dmz->zbd_capacity) {
+		dmz_dev_err(dmz, "Failed to get zone information\n");
+		ret = -ENXIO;
+		goto out;
+	}
+
+out:
+	kfree(blkz);
+
+	if (ret)
+		dmz_drop_zones(dmz);
+
+	return ret;
+}
+
+/*
+ * Update a zone's information.
+ */
+static int dmz_update_zone(struct dmz_target *dmz, struct dm_zone *zone)
+{
+	unsigned int nr_blkz = 1;
+	struct blk_zone blkz;
+	int ret;
+
+	/* Get zone information from disk */
+	ret = blkdev_report_zones(dmz->zbd, dmz_start_sect(dmz, zone),
+				  &blkz, &nr_blkz,
+				  GFP_KERNEL);
+	if (ret) {
+		dmz_dev_err(dmz, "Get zone %u report failed\n",
+			    dmz_id(dmz, zone));
+		return ret;
+	}
+
+	clear_bit(DMZ_OFFLINE, &zone->flags);
+	clear_bit(DMZ_READ_ONLY, &zone->flags);
+	if (blkz.cond == BLK_ZONE_COND_OFFLINE)
+		set_bit(DMZ_OFFLINE, &zone->flags);
+	else if (blkz.cond == BLK_ZONE_COND_READONLY)
+		set_bit(DMZ_READ_ONLY, &zone->flags);
+
+	if (dmz_is_seq(zone))
+		zone->wp_block = dmz_sect2blk(blkz.wp - blkz.start);
+	else
+		zone->wp_block = 0;
+
+	return 0;
+}
+
+/*
+ * Check a zone write pointer position when the zone is marked
+ * with the sequential write error flag.
+ */
+static int dmz_handle_seq_write_err(struct dmz_target *dmz,
+				    struct dm_zone *zone)
+{
+	unsigned int wp = 0;
+	int ret = 0;
+
+	wp = zone->wp_block;
+	ret = dmz_update_zone(dmz, zone);
+	if (ret != 0)
+		return ret;
+
+	dmz_dev_warn(dmz, "Processing zone %u write error (zone wp %u/%u)\n",
+		     dmz_id(dmz, zone), zone->wp_block, wp);
+
+	if (zone->wp_block < wp)
+		dmz_invalidate_blocks(dmz, zone,
+				      zone->wp_block,
+				      wp - zone->wp_block);
+
+	return 0;
+}
+
+/*
+ * Check zone information after a resume.
+ */
+static int dmz_check_zones(struct dmz_target *dmz)
+{
+	struct dm_zone *zone;
+	sector_t wp_block;
+	unsigned int i;
+	int ret;
+
+	/* Check zones */
+	for (i = 0; i < dmz->nr_zones; i++) {
+
+		zone = dmz_get(dmz, i);
+		if (!zone) {
+			dmz_dev_err(dmz, "Unable to get zone %u\n", i);
+			return -EIO;
+		}
+
+		wp_block = zone->wp_block;
+
+		ret = dmz_update_zone(dmz, zone);
+		if (ret) {
+			dmz_dev_err(dmz, "Broken zone %u\n", i);
+			return ret;
+		}
+
+		if (dmz_is_offline(zone)) {
+			dmz_dev_warn(dmz, "Zone %u is offline\n", i);
+			continue;
+		}
+
+		/* Check write pointer */
+		if (!dmz_is_seq(zone))
+			zone->wp_block = 0;
+		else if (zone->wp_block != wp_block) {
+			dmz_dev_err(dmz, "Zone %u: Invalid wp (%llu / %llu)\n",
+				    i, (u64)zone->wp_block, (u64)wp_block);
+			zone->wp_block = wp_block;
+			dmz_invalidate_blocks(dmz, zone, zone->wp_block,
+					dmz->zone_nr_blocks - zone->wp_block);
+		}
+
+	}
+
+	return 0;
+}
+
+/*
+ * Reset a zone write pointer.
+ */
+static int dmz_reset_zone(struct dmz_target *dmz, struct dm_zone *zone)
+{
+	int ret;
+
+	/*
+	 * Ignore offline zones, read only zones,
+	 * and conventional zones.
+	 */
+	if (dmz_is_offline(zone) ||
+	    dmz_is_readonly(zone) ||
+	    dmz_is_rnd(zone))
+		return 0;
+
+	if (!dmz_is_empty(zone) || dmz_seq_write_err(zone)) {
+		ret = blkdev_reset_zones(dmz->zbd,
+					 dmz_start_sect(dmz, zone),
+					 dmz->zone_nr_sectors,
+					 GFP_KERNEL);
+		if (ret) {
+			dmz_dev_err(dmz, "Reset zone %u failed %d\n",
+				    dmz_id(dmz, zone), ret);
+			return ret;
+		}
+	}
+
+	/* Clear write error bit and rewind write pointer position */
+	clear_bit(DMZ_SEQ_WRITE_ERR, &zone->flags);
+	zone->wp_block = 0;
+
+	return 0;
+}
+
+static void dmz_get_zone_weight(struct dmz_target *dmz, struct dm_zone *zone);
+
+/*
+ * Initialize chunk mapping.
+ */
+static int dmz_load_mapping(struct dmz_target *dmz)
+{
+	struct dm_zone *dzone, *bzone;
+	struct dmz_mblock *dmap_mblk = NULL;
+	struct dmz_map *dmap;
+	unsigned int i = 0, e = 0, chunk = 0;
+	unsigned int dzone_id;
+	unsigned int bzone_id;
+
+	/* Metadata block array for the chunk mapping table */
+	dmz->dz_map_mblk = kcalloc(dmz->nr_map_blocks,
+				   sizeof(struct dmz_mblock *),
+				   GFP_KERNEL);
+	if (!dmz->dz_map_mblk)
+		return -ENOMEM;
+
+	/* Get chunk mapping table blocks and initialize zone mapping */
+	while (chunk < dmz->nr_chunks) {
+
+		if (!dmap_mblk) {
+			/* Get mapping block */
+			dmap_mblk = dmz_get_mblock(dmz, i + 1);
+			if (IS_ERR(dmap_mblk))
+				return PTR_ERR(dmap_mblk);
+			dmz->dz_map_mblk[i] = dmap_mblk;
+			dmap = (struct dmz_map *) dmap_mblk->data;
+			i++;
+			e = 0;
+		}
+
+		/* Check data zone */
+		dzone_id = le32_to_cpu(dmap[e].dzone_id);
+		if (dzone_id == DMZ_MAP_UNMAPPED)
+			goto next;
+
+		if (dzone_id >= dmz->nr_zones) {
+			dmz_dev_err(dmz,
+				"Chunk %u mapping: invalid data zone ID %u\n",
+				chunk, dzone_id);
+			return -EIO;
+		}
+
+		dzone = dmz_get(dmz, dzone_id);
+		set_bit(DMZ_DATA, &dzone->flags);
+		dzone->chunk = chunk;
+		dmz_get_zone_weight(dmz, dzone);
+
+		if (dmz_is_rnd(dzone))
+			list_add_tail(&dzone->link, &dmz->dz_map_rnd_list);
+		else
+			list_add_tail(&dzone->link, &dmz->dz_map_seq_list);
+
+		/* Check buffer zone */
+		bzone_id = le32_to_cpu(dmap[e].bzone_id);
+		if (bzone_id == DMZ_MAP_UNMAPPED)
+			goto next;
+
+		if (bzone_id >= dmz->nr_zones) {
+			dmz_dev_err(dmz,
+				"Chunk %u mapping: invalid buffer zone ID %u\n",
+				chunk, bzone_id);
+			return -EIO;
+		}
+
+		bzone = dmz_get(dmz, bzone_id);
+		if (!dmz_is_rnd(bzone)) {
+			dmz_dev_err(dmz,
+				"Chunk %u mapping: invalid buffer zone %u\n",
+				chunk, bzone_id);
+			return -EIO;
+		}
+
+		set_bit(DMZ_DATA, &bzone->flags);
+		set_bit(DMZ_BUF, &bzone->flags);
+		bzone->chunk = chunk;
+		bzone->bzone = dzone;
+		dzone->bzone = bzone;
+		dmz_get_zone_weight(dmz, bzone);
+		list_add_tail(&bzone->link, &dmz->dz_map_rnd_list);
+
+next:
+		chunk++;
+		e++;
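+		/* All entries of this mapping block used: get the next one */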
+		if (e >= DMZ_MAP_ENTRIES)
+			dmap_mblk = NULL;
+
+	}
+
+	/*
+	 * At this point, only meta zones and mapped data zones were
+	 * fully initialized. All remaining zones are unmapped data
+	 * zones. Finish initializing those here.
+	 */
+	for (i = 0; i < dmz->nr_zones; i++) {
+
+		dzone = dmz_get(dmz, i);
+		if (dmz_is_meta(dzone))
+			continue;
+
+		if (dmz_is_rnd(dzone))
+			dmz->dz_nr_rnd++;
+		else
+			dmz->dz_nr_seq++;
+
+		if (dmz_is_data(dzone))
+			/* Already initialized */
+			continue;
+
+		/* Unmapped data zone */
+		set_bit(DMZ_DATA, &dzone->flags);
+		dzone->chunk = DMZ_MAP_UNMAPPED;
+		if (dmz_is_rnd(dzone)) {
+			list_add_tail(&dzone->link,
+				      &dmz->dz_unmap_rnd_list);
+			atomic_inc(&dmz->dz_unmap_nr_rnd);
+		} else if (atomic_read(&dmz->nr_reclaim_seq_zones) <
+			   dmz->nr_reserved_seq) {
+			list_add_tail(&dzone->link,
+				      &dmz->reclaim_seq_zones_list);
+			atomic_inc(&dmz->nr_reclaim_seq_zones);
+			dmz->dz_nr_seq--;
+		} else {
+			list_add_tail(&dzone->link,
+				      &dmz->dz_unmap_seq_list);
+			atomic_inc(&dmz->dz_unmap_nr_seq);
+		}
+	}
+
+	return 0;
+}
+
+/*
+ * Set a data chunk mapping.
+ */
+static void dmz_set_chunk_mapping(struct dmz_target *dmz,
+				  unsigned int chunk,
+				  unsigned int dzone_id,
+				  unsigned int bzone_id)
+{
+	struct dmz_mblock *dmap_mblk =
+		dmz->dz_map_mblk[chunk >> DMZ_MAP_ENTRIES_SHIFT];
+	struct dmz_map *dmap = (struct dmz_map *) dmap_mblk->data;
+	int map_idx = chunk & DMZ_MAP_ENTRIES_MASK;
+
+	dmap[map_idx].dzone_id = cpu_to_le32(dzone_id);
+	dmap[map_idx].bzone_id = cpu_to_le32(bzone_id);
+	dmz_dirty_mblock(dmz, dmap_mblk);
+}
+
+/*
+ * The list of mapped zones is maintained in LRU order.
+ * This rotates a zone to the end of its map list.
+ */
+static void __dmz_lru_zone(struct dmz_target *dmz,
+			   struct dm_zone *zone)
+{
+	if (list_empty(&zone->link))
+		return;
+
+	list_del_init(&zone->link);
+	if (dmz_is_seq(zone))
+		/* LRU rotate sequential zone */
+		list_add_tail(&zone->link, &dmz->dz_map_seq_list);
+	else
+		/* LRU rotate random zone */
+		list_add_tail(&zone->link, &dmz->dz_map_rnd_list);
+}
+
+/*
+ * Rotate a data zone, and its buffer zone if one is assigned,
+ * to the end of their respective LRU map lists.
+ */
+static void dmz_lru_zone(struct dmz_target *dmz,
+			 struct dm_zone *zone)
+{
+	__dmz_lru_zone(dmz, zone);
+	if (zone->bzone)
+		__dmz_lru_zone(dmz, zone->bzone);
+}
+
+/*
+ * Wait for any zone to be freed.
+ */
+static void dmz_wait_for_free_zones(struct dmz_target *dmz)
+{
+	DEFINE_WAIT(wait);
+
+	dmz_trigger_reclaim(dmz);
+
+	prepare_to_wait(&dmz->dz_free_wq, &wait, TASK_UNINTERRUPTIBLE);
+	dmz_unlock_map(dmz);
+	up_read(&dmz->mblk_sem);
+
+	io_schedule_timeout(HZ);
+
+	down_read(&dmz->mblk_sem);
+	dmz_lock_map(dmz);
+	finish_wait(&dmz->dz_free_wq, &wait);
+}
+
+/*
+ * Wait for a zone reclaim to complete.
+ */
+static void dmz_wait_for_reclaim(struct dmz_target *dmz,
+				 struct dm_zone *zone)
+{
+	dmz_unlock_map(dmz);
+	wait_on_bit_timeout(&zone->flags, DMZ_RECLAIM,
+			    TASK_UNINTERRUPTIBLE,
+			    HZ);
+	dmz_lock_map(dmz);
+}
+
+/*
+ * Activate a zone (increment its reference count).
+ */
+void dmz_activate_zone(struct dmz_target *dmz, struct dm_zone *zone)
+{
+	set_bit(DMZ_ACTIVE, &zone->flags);
+	atomic_inc(&zone->refcount);
+}
+
+/*
+ * Deactivate a zone. This decrements the zone reference counter
+ * and clears the active state of the zone once the count reaches 0,
+ * indicating that all BIOs directed to the zone have completed.
+ */
+void dmz_deactivate_zone(struct dmz_target *dmz, struct dm_zone *zone)
+{
+	if (atomic_dec_and_test(&zone->refcount)) {
+		WARN_ON(!test_bit(DMZ_ACTIVE, &zone->flags));
+		clear_bit_unlock(DMZ_ACTIVE, &zone->flags);
+		smp_mb__after_atomic();
+	}
+}
+
+/*
+ * Get the zone mapping a chunk, if the chunk is mapped already.
+ * If no mapping exists and the operation is WRITE, a zone is
+ * allocated and used to map the chunk.
+ * The zone returned will be set to the active state.
+ */
+struct dm_zone *dmz_get_chunk_mapping(struct dmz_target *dmz,
+				      unsigned int chunk, int op)
+{
+	struct dmz_mblock *dmap_mblk =
+		dmz->dz_map_mblk[chunk >> DMZ_MAP_ENTRIES_SHIFT];
+	struct dmz_map *dmap = (struct dmz_map *) dmap_mblk->data;
+	int dmap_idx = chunk & DMZ_MAP_ENTRIES_MASK;
+	unsigned int dzone_id;
+	struct dm_zone *dzone = NULL;
+	int ret = 0;
+
+	dmz_lock_map(dmz);
+
+again:
+
+	/* Get the chunk mapping */
+	dzone_id = le32_to_cpu(dmap[dmap_idx].dzone_id);
+	if (dzone_id == DMZ_MAP_UNMAPPED) {
+
+		/*
+		 * Reads and discards of unmapped chunks are fine. But
+		 * for writes we need a mapping, so get one.
+		 */
+		if (op != REQ_OP_WRITE)
+			goto out;
+
+		/* Allocate a random zone */
+		dzone = dmz_alloc_zone(dmz, DMZ_ALLOC_RND);
+		if (!dzone) {
+			dmz_wait_for_free_zones(dmz);
+			goto again;
+		}
+
+		dmz_map_zone(dmz, dzone, chunk);
+
+	} else {
+
+		/* The chunk is already mapped: get the mapping zone */
+		dzone = dmz_get(dmz, dzone_id);
+		if (dzone->chunk != chunk) {
+			dzone = ERR_PTR(-EIO);
+			goto out;
+		}
+
+		/* Repair write pointer if the sequential dzone has error */
+		if (dmz_seq_write_err(dzone)) {
+			ret = dmz_handle_seq_write_err(dmz, dzone);
+			if (ret) {
+				dzone = ERR_PTR(-EIO);
+				goto out;
+			}
+			clear_bit(DMZ_SEQ_WRITE_ERR, &dzone->flags);
+		}
+	}
+
+	/*
+	 * If the zone is being reclaimed, the chunk mapping may change
+	 * to a different zone. So wait for reclaim and retry. Otherwise,
+	 * activate the zone (this will prevent reclaim from touching it).
+	 */
+	if (dmz_in_reclaim(dzone)) {
+		dmz_wait_for_reclaim(dmz, dzone);
+		goto again;
+	}
+	dmz_activate_zone(dmz, dzone);
+	dmz_lru_zone(dmz, dzone);
+
+out:
+	dmz_unlock_map(dmz);
+
+	return dzone;
+}
+
+/*
+ * Writes and discards change the block validity of data zones and their buffer
+ * zones. Check here that valid blocks are still present. If all blocks are
+ * invalid, the zones can be unmapped on the fly without waiting for reclaim
+ * to do it.
+ */
+void dmz_put_chunk_mapping(struct dmz_target *dmz, struct dm_zone *dzone)
+{
+	struct dm_zone *bzone;
+
+	dmz_lock_map(dmz);
+
+	bzone = dzone->bzone;
+	if (bzone) {
+		if (dmz_weight(bzone)) {
+			dmz_lru_zone(dmz, bzone);
+		} else {
+			/* Empty buffer zone: reclaim it */
+			dmz_unmap_zone(dmz, bzone);
+			dmz_free_zone(dmz, bzone);
+			bzone = NULL;
+		}
+	}
+
+	/* Deactivate the data zone */
+	dmz_deactivate_zone(dmz, dzone);
+	if (dmz_is_active(dzone) || bzone || dmz_weight(dzone)) {
+		dmz_lru_zone(dmz, dzone);
+	} else {
+		/* Unbuffered inactive empty data zone: reclaim it */
+		dmz_unmap_zone(dmz, dzone);
+		dmz_free_zone(dmz, dzone);
+	}
+
+	dmz_unlock_map(dmz);
+}
+
+/*
+ * Allocate and map a random zone to buffer a chunk
+ * already mapped to a sequential zone.
+ */
+struct dm_zone *dmz_get_chunk_buffer(struct dmz_target *dmz,
+				     struct dm_zone *dzone)
+{
+	struct dm_zone *bzone;
+	unsigned int chunk;
+
+	dmz_lock_map(dmz);
+
+	chunk = dzone->chunk;
+
+	/* Allocate a random zone */
+	do {
+		bzone = dmz_alloc_zone(dmz, DMZ_ALLOC_RND);
+		if (!bzone)
+			dmz_wait_for_free_zones(dmz);
+	} while (!bzone);
+
+	/* Update the chunk mapping */
+	dmz_set_chunk_mapping(dmz, chunk,
+			      dmz_id(dmz, dzone),
+			      dmz_id(dmz, bzone));
+
+	set_bit(DMZ_BUF, &bzone->flags);
+	bzone->chunk = chunk;
+	bzone->bzone = dzone;
+	dzone->bzone = bzone;
+	list_add_tail(&bzone->link, &dmz->dz_map_rnd_list);
+
+	dmz_unlock_map(dmz);
+
+	return bzone;
+}
+
+/*
+ * Get an unmapped (free) zone.
+ * This must be called with the mapping lock held.
+ */
+struct dm_zone *dmz_alloc_zone(struct dmz_target *dmz, unsigned long flags)
+{
+	struct list_head *list;
+	struct dm_zone *zone;
+
+	if (flags & DMZ_ALLOC_RND)
+		list = &dmz->dz_unmap_rnd_list;
+	else
+		list = &dmz->dz_unmap_seq_list;
+
+again:
+	if (list_empty(list)) {
+
+		/*
+		 * No free zone: if this is for reclaim, allow using the
+		 * reserved sequential zones.
+		 */
+		if (!(flags & DMZ_ALLOC_RECLAIM) ||
+		    list_empty(&dmz->reclaim_seq_zones_list))
+			return NULL;
+
+		zone = list_first_entry(&dmz->reclaim_seq_zones_list,
+					struct dm_zone, link);
+		list_del_init(&zone->link);
+		atomic_dec(&dmz->nr_reclaim_seq_zones);
+		return zone;
+
+	}
+
+	zone = list_first_entry(list, struct dm_zone, link);
+	list_del_init(&zone->link);
+
+	if (dmz_is_rnd(zone))
+		atomic_dec(&dmz->dz_unmap_nr_rnd);
+	else
+		atomic_dec(&dmz->dz_unmap_nr_seq);
+
+	if (dmz_is_offline(zone)) {
+		dmz_dev_warn(dmz, "Zone %u is offline\n",
+			     dmz_id(dmz, zone));
+		zone = NULL;
+		goto again;
+	}
+
+	if (dmz_should_reclaim(dmz))
+		dmz_trigger_reclaim(dmz);
+
+	return zone;
+}
+
+/*
+ * Free a zone.
+ * This must be called with the mapping lock held.
+ */
+void dmz_free_zone(struct dmz_target *dmz, struct dm_zone *zone)
+{
+	/* If this is a sequential zone, reset it */
+	if (dmz_is_seq(zone))
+		dmz_reset_zone(dmz, zone);
+
+	/* Return the zone to its type unmap list */
+	if (dmz_is_rnd(zone)) {
+		list_add_tail(&zone->link, &dmz->dz_unmap_rnd_list);
+		atomic_inc(&dmz->dz_unmap_nr_rnd);
+	} else if (atomic_read(&dmz->nr_reclaim_seq_zones) <
+		   dmz->nr_reserved_seq) {
+		list_add_tail(&zone->link, &dmz->reclaim_seq_zones_list);
+		atomic_inc(&dmz->nr_reclaim_seq_zones);
+	} else {
+		list_add_tail(&zone->link, &dmz->dz_unmap_seq_list);
+		atomic_inc(&dmz->dz_unmap_nr_seq);
+	}
+
+	wake_up_all(&dmz->dz_free_wq);
+}
+
+/*
+ * Map a chunk to a zone.
+ * This must be called with the mapping lock held.
+ */
+void dmz_map_zone(struct dmz_target *dmz, struct dm_zone *dzone,
+		  unsigned int chunk)
+{
+
+	/* Set the chunk mapping */
+	dmz_set_chunk_mapping(dmz, chunk,
+			      dmz_id(dmz, dzone),
+			      DMZ_MAP_UNMAPPED);
+	dzone->chunk = chunk;
+	if (dmz_is_rnd(dzone))
+		list_add_tail(&dzone->link, &dmz->dz_map_rnd_list);
+	else
+		list_add_tail(&dzone->link, &dmz->dz_map_seq_list);
+}
+
+/*
+ * Unmap a zone.
+ * This must be called with the mapping lock held.
+ */
+void dmz_unmap_zone(struct dmz_target *dmz, struct dm_zone *zone)
+{
+	unsigned int chunk = zone->chunk;
+	unsigned int dzone_id;
+
+	if (chunk == DMZ_MAP_UNMAPPED)
+		/* Already unmapped */
+		return;
+
+	if (test_and_clear_bit(DMZ_BUF, &zone->flags)) {
+
+		/*
+		 * Unmapping the chunk buffer zone: clear only
+		 * the chunk buffer mapping
+		 */
+		dzone_id = dmz_id(dmz, zone->bzone);
+		zone->bzone->bzone = NULL;
+		zone->bzone = NULL;
+
+	} else {
+
+		/*
+		 * Unmapping the chunk data zone: the zone must
+		 * not be buffered.
+		 */
+		if (WARN_ON(zone->bzone)) {
+			zone->bzone->bzone = NULL;
+			zone->bzone = NULL;
+		}
+		dzone_id = DMZ_MAP_UNMAPPED;
+
+	}
+
+	dmz_set_chunk_mapping(dmz, chunk, dzone_id,
+			      DMZ_MAP_UNMAPPED);
+
+	zone->chunk = DMZ_MAP_UNMAPPED;
+	list_del_init(&zone->link);
+}
+
+/*
+ * Set @nr_bits bits in @bitmap starting from @bit.
+ * Return the number of bits changed from 0 to 1.
+ */
+static unsigned int dmz_set_bits(unsigned long *bitmap,
+				 unsigned int bit, unsigned int nr_bits)
+{
+	unsigned long *addr;
+	unsigned int end = bit + nr_bits;
+	unsigned int n = 0;
+
+	while (bit < end) {
+
+		if (((bit & (BITS_PER_LONG - 1)) == 0) &&
+		    ((end - bit) >= BITS_PER_LONG)) {
+			/* Try to set the whole word at once */
+			addr = bitmap + BIT_WORD(bit);
+			if (*addr == 0) {
+				*addr = ULONG_MAX;
+				n += BITS_PER_LONG;
+				bit += BITS_PER_LONG;
+				continue;
+			}
+		}
+
+		if (!test_and_set_bit(bit, bitmap))
+			n++;
+		bit++;
+
+	}
+
+	return n;
+
+}
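+
+/*
+ * Example of the word-at-a-time fast path above, assuming 64-bit longs:
+ * setting 256 bits starting at bit 128 covers bits 128..383. Bit 128 is
+ * word aligned and at least BITS_PER_LONG bits remain, so the four words
+ * BIT_WORD(128) = 2 up to 5 are each written as ULONG_MAX in a single
+ * store when they are still zero, instead of 256 test_and_set_bit() calls.
+ */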
+
+/*
+ * Get the bitmap block storing the bit for chunk_block in zone.
+ */
+static struct dmz_mblock *dmz_get_bitmap(struct dmz_target *dmz,
+					 struct dm_zone *zone,
+					 sector_t chunk_block)
+{
+	sector_t bitmap_block = 1 + dmz->nr_map_blocks
+		+ (sector_t)(dmz_id(dmz, zone) * dmz->zone_nr_bitmap_blocks)
+		+ (chunk_block >> DMZ_BLOCK_SHIFT_BITS);
+
+	return dmz_get_mblock(dmz, bitmap_block);
+}
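+
+/*
+ * Worked example for the block calculation above (the numbers are purely
+ * illustrative): with nr_map_blocks = 8 and zone_nr_bitmap_blocks = 16,
+ * the bitmap of zone 3 starts at metadata block 1 + 8 + 3 * 16 = 57, and
+ * chunk_block 100000 of that zone falls into bitmap block
+ * 57 + (100000 >> DMZ_BLOCK_SHIFT_BITS) = 57 + 3 = 60.
+ */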
+
+/*
+ * Copy the bitmap of from_zone to the bitmap of to_zone.
+ */
+int dmz_valid_copy(struct dmz_target *dmz, struct dm_zone *from_zone,
+		   struct dm_zone *to_zone)
+{
+	struct dmz_mblock *from_mblk, *to_mblk;
+	sector_t chunk_block = 0;
+
+	/* Get the zones bitmap blocks */
+	while (chunk_block < dmz->zone_nr_blocks) {
+
+		from_mblk = dmz_get_bitmap(dmz, from_zone, chunk_block);
+		if (IS_ERR(from_mblk))
+			return PTR_ERR(from_mblk);
+		to_mblk = dmz_get_bitmap(dmz, to_zone, chunk_block);
+		if (IS_ERR(to_mblk)) {
+			dmz_release_mblock(dmz, from_mblk);
+			return PTR_ERR(to_mblk);
+		}
+
+		memcpy(to_mblk->data, from_mblk->data, DMZ_BLOCK_SIZE);
+		dmz_dirty_mblock(dmz, to_mblk);
+
+		dmz_release_mblock(dmz, to_mblk);
+		dmz_release_mblock(dmz, from_mblk);
+
+		chunk_block += DMZ_BLOCK_SIZE_BITS;
+
+	}
+
+	to_zone->weight = from_zone->weight;
+
+	return 0;
+}
+
+/*
+ * Merge the valid blocks of from_zone into the bitmap of to_zone.
+ */
+int dmz_valid_merge(struct dmz_target *dmz, struct dm_zone *from_zone,
+		    struct dm_zone *to_zone, sector_t chunk_block)
+{
+	unsigned int nr_blocks;
+	int ret;
+
+	/* Get the zones bitmap blocks */
+	while (chunk_block < dmz->zone_nr_blocks) {
+
+		/* Get a valid region from the source zone */
+		ret = dmz_first_valid_block(dmz, from_zone, &chunk_block);
+		if (ret < 0)
+			return ret;
+
+		/* Are we done? */
+		nr_blocks = ret;
+		if (!nr_blocks)
+			return 0;
+
+		ret = dmz_validate_blocks(dmz, to_zone, chunk_block, nr_blocks);
+		if (ret != 0)
+			return ret;
+
+		chunk_block += nr_blocks;
+
+	}
+
+	return 0;
+}
+
+/*
+ * Validate all the blocks in the range [block..block+nr_blocks-1].
+ */
+int dmz_validate_blocks(struct dmz_target *dmz, struct dm_zone *zone,
+			sector_t chunk_block, unsigned int nr_blocks)
+{
+	unsigned int count, bit, nr_bits;
+	struct dmz_mblock *mblk;
+	unsigned int n = 0;
+
+	dmz_dev_debug(dmz, "=> VALIDATE zone %u, block %llu, %u blocks\n",
+		      dmz_id(dmz, zone), (u64)chunk_block, nr_blocks);
+
+	WARN_ON(chunk_block + nr_blocks > dmz->zone_nr_blocks);
+
+	while (nr_blocks) {
+
+		/* Get bitmap block */
+		mblk = dmz_get_bitmap(dmz, zone, chunk_block);
+		if (IS_ERR(mblk))
+			return PTR_ERR(mblk);
+
+		/* Set bits */
+		bit = chunk_block & DMZ_BLOCK_MASK_BITS;
+		nr_bits = min(nr_blocks, DMZ_BLOCK_SIZE_BITS - bit);
+
+		count = dmz_set_bits((unsigned long *) mblk->data,
+				     bit, nr_bits);
+		if (count) {
+			dmz_dirty_mblock(dmz, mblk);
+			n += count;
+		}
+		dmz_release_mblock(dmz, mblk);
+
+		nr_blocks -= nr_bits;
+		chunk_block += nr_bits;
+
+	}
+
+	if (likely(zone->weight + n <= dmz->zone_nr_blocks)) {
+		zone->weight += n;
+	} else {
+		dmz_dev_warn(dmz, "Zone %u: weight %u should be <= %llu\n",
+			     dmz_id(dmz, zone), zone->weight,
+			     (u64)dmz->zone_nr_blocks - n);
+		zone->weight = dmz->zone_nr_blocks;
+	}
+
+	return 0;
+}
+
+/*
+ * Clear nr_bits bits in bitmap starting from bit.
+ * Return the number of bits cleared.
+ */
+static int dmz_clear_bits(unsigned long *bitmap, int bit, int nr_bits)
+{
+	unsigned long *addr;
+	int end = bit + nr_bits;
+	int n = 0;
+
+	while (bit < end) {
+
+		if (((bit & (BITS_PER_LONG - 1)) == 0) &&
+		    ((end - bit) >= BITS_PER_LONG)) {
+			/* Try to clear whole word at once */
+			addr = bitmap + BIT_WORD(bit);
+			if (*addr == ULONG_MAX) {
+				*addr = 0;
+				n += BITS_PER_LONG;
+				bit += BITS_PER_LONG;
+				continue;
+			}
+		}
+
+		if (test_and_clear_bit(bit, bitmap))
+			n++;
+		bit++;
+
+	}
+
+	return n;
+
+}
+
+/*
+ * Invalidate all the blocks in the range [block..block+nr_blocks-1].
+ */
+int dmz_invalidate_blocks(struct dmz_target *dmz, struct dm_zone *zone,
+			  sector_t chunk_block, unsigned int nr_blocks)
+{
+	unsigned int count, bit, nr_bits;
+	struct dmz_mblock *mblk;
+	unsigned int n = 0;
+
+	dmz_dev_debug(dmz, "=> INVALIDATE zone %u, block %llu, %u blocks\n",
+		      dmz_id(dmz, zone), (u64)chunk_block, nr_blocks);
+
+	WARN_ON(chunk_block + nr_blocks > dmz->zone_nr_blocks);
+
+	while (nr_blocks) {
+
+		/* Get bitmap block */
+		mblk = dmz_get_bitmap(dmz, zone, chunk_block);
+		if (IS_ERR(mblk))
+			return PTR_ERR(mblk);
+
+		/* Clear bits */
+		bit = chunk_block & DMZ_BLOCK_MASK_BITS;
+		nr_bits = min(nr_blocks, DMZ_BLOCK_SIZE_BITS - bit);
+
+		count = dmz_clear_bits((unsigned long *) mblk->data,
+				       bit, nr_bits);
+		if (count) {
+			dmz_dirty_mblock(dmz, mblk);
+			n += count;
+		}
+		dmz_release_mblock(dmz, mblk);
+
+		nr_blocks -= nr_bits;
+		chunk_block += nr_bits;
+
+	}
+
+	if (zone->weight >= n) {
+		zone->weight -= n;
+	} else {
+		dmz_dev_warn(dmz, "Zone %u: weight %u should be >= %u\n",
+			     dmz_id(dmz, zone), zone->weight, n);
+		zone->weight = 0;
+	}
+
+	return 0;
+}
+
+/*
+ * Get a block bit value.
+ */
+static int dmz_test_block(struct dmz_target *dmz, struct dm_zone *zone,
+			  sector_t chunk_block)
+{
+	struct dmz_mblock *mblk;
+	int ret;
+
+	WARN_ON(chunk_block >= dmz->zone_nr_blocks);
+
+	/* Get bitmap block */
+	mblk = dmz_get_bitmap(dmz, zone, chunk_block);
+	if (IS_ERR(mblk))
+		return PTR_ERR(mblk);
+
+	/* Get offset */
+	ret = test_bit(chunk_block & DMZ_BLOCK_MASK_BITS,
+		       (unsigned long *) mblk->data) != 0;
+
+	dmz_release_mblock(dmz, mblk);
+
+	return ret;
+}
+
+/*
+ * Return the number of blocks from chunk_block to the first block with a bit
+ * value specified by set. Search at most nr_blocks blocks from chunk_block.
+ */
+static int dmz_to_next_set_block(struct dmz_target *dmz, struct dm_zone *zone,
+				 sector_t chunk_block, unsigned int nr_blocks,
+				 int set)
+{
+	struct dmz_mblock *mblk;
+	unsigned int bit, set_bit, nr_bits;
+	unsigned long *bitmap;
+	int n = 0;
+
+	WARN_ON(chunk_block + nr_blocks > dmz->zone_nr_blocks);
+
+	while (nr_blocks) {
+
+		/* Get bitmap block */
+		mblk = dmz_get_bitmap(dmz, zone, chunk_block);
+		if (IS_ERR(mblk))
+			return PTR_ERR(mblk);
+
+		/* Get offset */
+		bitmap = (unsigned long *) mblk->data;
+		bit = chunk_block & DMZ_BLOCK_MASK_BITS;
+		nr_bits = min(nr_blocks, DMZ_BLOCK_SIZE_BITS - bit);
+		if (set)
+			set_bit = find_next_bit(bitmap,
+						DMZ_BLOCK_SIZE_BITS,
+						bit);
+		else
+			set_bit = find_next_zero_bit(bitmap,
+						     DMZ_BLOCK_SIZE_BITS,
+						     bit);
+		dmz_release_mblock(dmz, mblk);
+
+		n += set_bit - bit;
+		if (set_bit < DMZ_BLOCK_SIZE_BITS)
+			break;
+
+		nr_blocks -= nr_bits;
+		chunk_block += nr_bits;
+
+	}
+
+	return n;
+}
+
+/*
+ * Test if chunk_block is valid. If it is, the number of consecutive
+ * valid blocks from chunk_block will be returned.
+ */
+int dmz_block_valid(struct dmz_target *dmz, struct dm_zone *zone,
+		    sector_t chunk_block)
+{
+	int valid;
+
+	/* Test block */
+	valid = dmz_test_block(dmz, zone, chunk_block);
+	if (valid <= 0)
+		return valid;
+
+	/* The block is valid: get the number of valid blocks from block */
+	return dmz_to_next_set_block(dmz, zone, chunk_block,
+				     dmz->zone_nr_blocks - chunk_block,
+				     0);
+}
+
+/*
+ * Find the first valid block from @chunk_block in @zone.
+ * If such a block is found, its number is returned using
+ * @chunk_block and the total number of valid blocks from @chunk_block
+ * is returned.
+ */
+int dmz_first_valid_block(struct dmz_target *dmz, struct dm_zone *zone,
+			  sector_t *chunk_block)
+{
+	sector_t start_block = *chunk_block;
+	int ret;
+
+	ret = dmz_to_next_set_block(dmz, zone, start_block,
+				    dmz->zone_nr_blocks - start_block, 1);
+	if (ret < 0)
+		return ret;
+
+	start_block += ret;
+	*chunk_block = start_block;
+
+	return dmz_to_next_set_block(dmz, zone, start_block,
+				     dmz->zone_nr_blocks - start_block, 0);
+}
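+
+/*
+ * Example of the return convention above: if blocks 10..19 are the first
+ * valid blocks at or after *chunk_block == 4, the call sets *chunk_block
+ * to 10 and returns 10 (the length of that valid range). A return value
+ * of 0 means that the zone has no valid block after *chunk_block.
+ */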
+
+/*
+ * Count the number of bits set starting from bit up to bit + nr_bits - 1.
+ */
+static int dmz_count_bits(void *bitmap, int bit, int nr_bits)
+{
+	unsigned long *addr;
+	int end = bit + nr_bits;
+	int n = 0;
+
+	while (bit < end) {
+
+		if (((bit & (BITS_PER_LONG - 1)) == 0) &&
+		    ((end - bit) >= BITS_PER_LONG)) {
+			addr = (unsigned long *)bitmap + BIT_WORD(bit);
+			if (*addr == ULONG_MAX) {
+				n += BITS_PER_LONG;
+				bit += BITS_PER_LONG;
+				continue;
+			}
+		}
+
+		if (test_bit(bit, bitmap))
+			n++;
+		bit++;
+
+	}
+
+	return n;
+
+}
+
+/*
+ * Get a zone weight.
+ */
+static void dmz_get_zone_weight(struct dmz_target *dmz, struct dm_zone *zone)
+{
+	struct dmz_mblock *mblk;
+	sector_t chunk_block = 0;
+	unsigned int bit, nr_bits;
+	unsigned int nr_blocks = dmz->zone_nr_blocks;
+	void *bitmap;
+	int n = 0;
+
+	while (nr_blocks) {
+
+		/* Get bitmap block */
+		mblk = dmz_get_bitmap(dmz, zone, chunk_block);
+		if (IS_ERR(mblk)) {
+			n = 0;
+			break;
+		}
+
+		/* Count bits in this block */
+		bitmap = mblk->data;
+		bit = chunk_block & DMZ_BLOCK_MASK_BITS;
+		nr_bits = min(nr_blocks, DMZ_BLOCK_SIZE_BITS - bit);
+		n += dmz_count_bits(bitmap, bit, nr_bits);
+
+		dmz_release_mblock(dmz, mblk);
+
+		nr_blocks -= nr_bits;
+		chunk_block += nr_bits;
+
+	}
+
+	zone->weight = n;
+}
+
+/*
+ * Initialize the target metadata.
+ */
+int dmz_init_meta(struct dmz_target *dmz)
+{
+	unsigned int i, zid;
+	struct dm_zone *zone;
+	int ret;
+
+	/* Initialize zone descriptors */
+	ret = dmz_init_zones(dmz);
+	if (ret)
+		goto out;
+
+	/* Get super block */
+	ret = dmz_load_sb(dmz);
+	if (ret)
+		goto out;
+
+	/* Set metadata zones starting from sb_zone */
+	zid = dmz_id(dmz, dmz->sb_zone);
+	for (i = 0; i < dmz->nr_meta_zones << 1; i++) {
+		zone = dmz_get(dmz, zid + i);
+		if (!dmz_is_rnd(zone)) {
+			ret = -ENXIO;
+			goto out;
+		}
+		set_bit(DMZ_META, &zone->flags);
+	}
+
+	/*
+	 * Cache size boundaries: allow at least 2 super blocks, the chunk map
+	 * blocks and enough blocks to be able to cache the bitmap blocks of
+	 * up to 16 zones when idle (min_nr_mblks). Otherwise, if busy, allow
+	 * the cache to add 512 more metadata blocks.
+	 */
+	dmz->min_nr_mblks = 2 + dmz->nr_map_blocks +
+		dmz->zone_nr_bitmap_blocks * 16;
+	dmz->max_nr_mblks = dmz->min_nr_mblks + 512;
+
+	/* Load mapping table */
+	ret = dmz_load_mapping(dmz);
+	if (ret)
+		goto out;
+
+	dmz_dev_info(dmz, "Host-%s zoned block device\n",
+		     bdev_zoned_model(dmz->zbd) == BLK_ZONED_HA ?
+		     "aware" : "managed");
+	dmz_dev_info(dmz, "  %llu 512-byte logical sectors\n",
+		     (u64)dmz->nr_zones
+		     << dmz->zone_nr_sectors_shift);
+	dmz_dev_info(dmz, "  %u zones of %llu 512-byte logical sectors\n",
+		     dmz->nr_zones,
+		     (u64)dmz->zone_nr_sectors);
+	dmz_dev_info(dmz, "  %u metadata zones\n",
+		     dmz->nr_meta_zones * 2);
+	dmz_dev_info(dmz, "  %u data zones for %u chunks\n",
+		     dmz->nr_data_zones,
+		     dmz->nr_chunks);
+	dmz_dev_info(dmz, "    %u random zones (%u unmapped)\n",
+		     dmz->dz_nr_rnd,
+		     atomic_read(&dmz->dz_unmap_nr_rnd));
+	dmz_dev_info(dmz, "    %u sequential zones (%u unmapped)\n",
+		     dmz->dz_nr_seq,
+		     atomic_read(&dmz->dz_unmap_nr_seq));
+	dmz_dev_info(dmz, "  %u reserved sequential data zones\n",
+		     dmz->nr_reserved_seq);
+
+	dmz_dev_debug(dmz, "Format:\n");
+	dmz_dev_debug(dmz, "%u metadata blocks per set (%u max cache)\n",
+		      dmz->nr_meta_blocks,
+		      dmz->max_nr_mblks);
+	dmz_dev_debug(dmz, "  %u data zone mapping blocks\n",
+		      dmz->nr_map_blocks);
+	dmz_dev_debug(dmz, "  %u bitmap blocks\n",
+		      dmz->nr_bitmap_blocks);
+
+out:
+	if (ret)
+		dmz_cleanup_meta(dmz);
+
+	return ret;
+}
+
+/*
+ * Cleanup the target metadata resources.
+ */
+void dmz_cleanup_meta(struct dmz_target *dmz)
+{
+	struct rb_root *root = &dmz->mblk_rbtree;
+	struct dmz_mblock *mblk, *next;
+	int i;
+
+	/* Release zone mapping resources */
+	if (dmz->dz_map_mblk) {
+		for (i = 0; i < dmz->nr_map_blocks; i++)
+			dmz_release_mblock(dmz, dmz->dz_map_mblk[i]);
+		kfree(dmz->dz_map_mblk);
+		dmz->dz_map_mblk = NULL;
+	}
+
+	/* Release super blocks */
+	for (i = 0; i < 2; i++) {
+		if (dmz->sb[i].mblk) {
+			dmz_free_mblock(dmz, dmz->sb[i].mblk);
+			dmz->sb[i].mblk = NULL;
+		}
+	}
+
+	/* Free cached blocks */
+	while (!list_empty(&dmz->mblk_dirty_list)) {
+		mblk = list_first_entry(&dmz->mblk_dirty_list,
+					struct dmz_mblock, link);
+		dmz_dev_warn(dmz, "mblock %llu still in dirty list (ref %u)\n",
+			     (u64)mblk->no,
+			     atomic_read(&mblk->ref));
+		list_del_init(&mblk->link);
+		rb_erase(&mblk->node, &dmz->mblk_rbtree);
+		dmz_free_mblock(dmz, mblk);
+	}
+
+	while (!list_empty(&dmz->mblk_lru_list)) {
+		mblk = list_first_entry(&dmz->mblk_lru_list,
+					struct dmz_mblock, link);
+		list_del_init(&mblk->link);
+		rb_erase(&mblk->node, &dmz->mblk_rbtree);
+		dmz_free_mblock(dmz, mblk);
+	}
+
+	/* Sanity checks: the mblock rbtree should now be empty */
+	rbtree_postorder_for_each_entry_safe(mblk, next, root, node) {
+		dmz_dev_warn(dmz, "mblock %llu ref %u still in rbtree\n",
+			     (u64)mblk->no,
+			     atomic_read(&mblk->ref));
+		atomic_set(&mblk->ref, 0);
+		dmz_free_mblock(dmz, mblk);
+	}
+
+	/* Free the zone descriptors */
+	dmz_drop_zones(dmz);
+}
+
+/*
+ * Check metadata on resume.
+ */
+int dmz_resume_meta(struct dmz_target *dmz)
+{
+	return dmz_check_zones(dmz);
+}
+
diff --git a/drivers/md/dm-zoned-reclaim.c b/drivers/md/dm-zoned-reclaim.c
new file mode 100644
index 0000000..aa76692
--- /dev/null
+++ b/drivers/md/dm-zoned-reclaim.c
@@ -0,0 +1,535 @@
+/*
+ * Drive-managed zoned block device target
+ * Copyright (C) 2017 Western Digital Corporation or its affiliates.
+ *
+ * Written by: Damien Le Moal <damien.lemoal@wdc.com>
+ *
+ * This software is distributed under the terms of the GNU General Public
+ * License version 2, or any later version, "as is," without technical
+ * support, and WITHOUT ANY WARRANTY, without even the implied warranty
+ * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#include <linux/module.h>
+
+#include "dm-zoned.h"
+
+/*
+ * Align a sequential zone write pointer to chunk_block.
+ */
+static int dmz_reclaim_align_wp(struct dmz_target *dmz, struct dm_zone *zone,
+				sector_t chunk_block)
+{
+	sector_t wp_block = zone->wp_block;
+	unsigned int nr_blocks;
+	int ret;
+
+	if (wp_block > chunk_block)
+		return -EIO;
+
+	/*
+	 * Zeroout the space between the write
+	 * pointer and the requested position.
+	 */
+	nr_blocks = chunk_block - zone->wp_block;
+	if (!nr_blocks)
+		return 0;
+
+	ret = blkdev_issue_zeroout(dmz->zbd,
+			dmz_start_sect(dmz, zone) + dmz_blk2sect(wp_block),
+			dmz_blk2sect(nr_blocks),
+			GFP_NOFS, false);
+	if (ret) {
+		dmz_dev_err(dmz,
+			    "Align zone %u wp %llu to +%u blocks failed %d\n",
+			    dmz_id(dmz, zone),
+			    (unsigned long long)wp_block,
+			    nr_blocks,
+			    ret);
+		return ret;
+	}
+
+	zone->wp_block += nr_blocks;
+
+	return 0;
+}
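+
+/*
+ * Example: if the destination zone write pointer is at block 100 and the
+ * next valid blocks to copy start at chunk block 160, blocks 100..159 are
+ * zeroed out first so that the copy itself stays a sequential write and
+ * the write pointer moves to block 160.
+ */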
+
+/*
+ * dm_kcopyd_copy notification.
+ */
+static void dmz_reclaim_copy_end(int read_err, unsigned long write_err,
+				 void *context)
+{
+	struct dmz_target *dmz = context;
+
+	if (read_err || write_err)
+		dmz->reclaim_err = -EIO;
+	else
+		dmz->reclaim_err = 0;
+
+	clear_bit_unlock(DMZ_RECLAIM_COPY, &dmz->flags);
+	smp_mb__after_atomic();
+	wake_up_bit(&dmz->flags, DMZ_RECLAIM_COPY);
+}
+
+/*
+ * Copy valid blocks of src_zone into dst_zone.
+ */
+static int dmz_reclaim_copy(struct dmz_target *dmz,
+			    struct dm_zone *src_zone, struct dm_zone *dst_zone)
+{
+	struct dm_io_region src, dst;
+	sector_t block = 0, end_block;
+	sector_t nr_blocks;
+	sector_t src_zone_block;
+	sector_t dst_zone_block;
+	unsigned long flags = 0;
+	int ret;
+
+	if (dmz_is_seq(src_zone))
+		end_block = src_zone->wp_block;
+	else
+		end_block = dmz->zone_nr_blocks;
+	src_zone_block = dmz_start_block(dmz, src_zone);
+	dst_zone_block = dmz_start_block(dmz, dst_zone);
+
+	if (dmz_is_seq(dst_zone))
+		set_bit(DM_KCOPYD_WRITE_SEQ, &flags);
+
+	while (block < end_block) {
+
+		/* Get a valid region from the source zone */
+		ret = dmz_first_valid_block(dmz, src_zone, &block);
+		if (ret < 0)
+			return ret;
+
+		/* Are we done? */
+		nr_blocks = ret;
+		if (!nr_blocks)
+			return 0;
+
+		/*
+		 * If we are writing in a sequential zone, we must make sure
+		 * that writes are sequential, so zero out any hole
+		 * between writes.
+		 */
+		if (dmz_is_seq(dst_zone)) {
+			ret = dmz_reclaim_align_wp(dmz, dst_zone, block);
+			if (ret)
+				return ret;
+		}
+
+		src.bdev = dmz->zbd;
+		src.sector = dmz_blk2sect(src_zone_block + block);
+		src.count = dmz_blk2sect(nr_blocks);
+
+		dst.bdev = dmz->zbd;
+		dst.sector = dmz_blk2sect(dst_zone_block + block);
+		dst.count = src.count;
+
+		dmz_dev_debug(dmz,
+			      "Reclaim: Copy %s zone %u, block %llu+%llu to %s zone %u\n",
+			      dmz_is_rnd(src_zone) ? "RND" : "SEQ",
+			      dmz_id(dmz, src_zone),
+			      (unsigned long long)block,
+			      (unsigned long long)nr_blocks,
+			      dmz_is_rnd(dst_zone) ? "RND" : "SEQ",
+			      dmz_id(dmz, dst_zone));
+
+		/* Copy the valid region */
+		set_bit(DMZ_RECLAIM_COPY, &dmz->flags);
+		ret = dm_kcopyd_copy(dmz->reclaim_kc, &src, 1, &dst, flags,
+				     dmz_reclaim_copy_end, dmz);
+		if (ret != 0)
+			return ret;
+
+		/* Wait for copy to complete */
+		wait_on_bit_io(&dmz->flags, DMZ_RECLAIM_COPY,
+			       TASK_UNINTERRUPTIBLE);
+		if (dmz->reclaim_err)
+			return dmz->reclaim_err;
+
+		if (dmz_is_seq(dst_zone))
+			dst_zone->wp_block += nr_blocks;
+
+		block += nr_blocks;
+
+	}
+
+	return 0;
+}
+
+/*
+ * Clear a zone reclaim flag.
+ */
+static inline void dmz_reclaim_put_zone(struct dmz_target *dmz,
+					struct dm_zone *zone)
+{
+	WARN_ON(dmz_is_active(zone));
+	WARN_ON(!dmz_in_reclaim(zone));
+
+	clear_bit_unlock(DMZ_RECLAIM, &zone->flags);
+	smp_mb__after_atomic();
+	wake_up_bit(&zone->flags, DMZ_RECLAIM);
+}
+
+/*
+ * Move valid blocks of dzone's buffer zone into dzone (after its write pointer)
+ * and free the buffer zone.
+ */
+static int dmz_reclaim_buf(struct dmz_target *dmz, struct dm_zone *dzone)
+{
+	struct dm_zone *bzone = dzone->bzone;
+	sector_t chunk_block = dzone->wp_block;
+	int ret;
+
+	dmz_dev_debug(dmz,
+		      "Chunk %u, move buf zone %u (weight %u) to data zone %u (weight %u)\n",
+		      dzone->chunk, dmz_id(dmz, bzone), dmz_weight(bzone),
+		      dmz_id(dmz, dzone), dmz_weight(dzone));
+
+	/* Flush the buffer zone into the data zone */
+	ret = dmz_reclaim_copy(dmz, bzone, dzone);
+	if (ret < 0)
+		return ret;
+
+	down_read(&dmz->mblk_sem);
+
+	/* Validate copied blocks */
+	ret = dmz_valid_merge(dmz, bzone, dzone, chunk_block);
+	if (ret == 0) {
+		/* Free the buffer zone */
+		dmz_invalidate_zone(dmz, bzone);
+		dmz_lock_map(dmz);
+		dmz_unmap_zone(dmz, bzone);
+		dmz_reclaim_put_zone(dmz, dzone);
+		dmz_free_zone(dmz, bzone);
+		dmz_unlock_map(dmz);
+	}
+
+	up_read(&dmz->mblk_sem);
+
+	return ret;
+}
+
+/*
+ * Merge valid blocks of dzone into its buffer zone and free dzone.
+ */
+static int dmz_reclaim_seq_data(struct dmz_target *dmz, struct dm_zone *dzone)
+{
+	unsigned int chunk = dzone->chunk;
+	struct dm_zone *bzone = dzone->bzone;
+	int ret = 0;
+
+	dmz_dev_debug(dmz,
+		      "Chunk %u, move data zone %u (weight %u) to buf zone %u (weight %u)\n",
+		      chunk, dmz_id(dmz, dzone), dmz_weight(dzone),
+		      dmz_id(dmz, bzone), dmz_weight(bzone));
+
+	/* Flush data zone into the buffer zone */
+	ret = dmz_reclaim_copy(dmz, dzone, bzone);
+	if (ret < 0)
+		return ret;
+
+	down_read(&dmz->mblk_sem);
+
+	/* Validate copied blocks */
+	ret = dmz_valid_merge(dmz, dzone, bzone, 0);
+	if (ret == 0) {
+		/*
+		 * Free the data zone and remap the chunk to
+		 * the buffer zone.
+		 */
+		dmz_invalidate_zone(dmz, dzone);
+		dmz_lock_map(dmz);
+		dmz_unmap_zone(dmz, bzone);
+		dmz_unmap_zone(dmz, dzone);
+		dmz_reclaim_put_zone(dmz, dzone);
+		dmz_free_zone(dmz, dzone);
+		dmz_map_zone(dmz, bzone, chunk);
+		dmz_unlock_map(dmz);
+	}
+
+	up_read(&dmz->mblk_sem);
+
+	return ret;
+}
+
+/*
+ * Move valid blocks of the random data zone dzone into a free sequential zone.
+ * Once blocks are moved, remap the chunk to the sequential zone.
+ */
+static int dmz_reclaim_rnd_data(struct dmz_target *dmz, struct dm_zone *dzone)
+{
+	unsigned int chunk = dzone->chunk;
+	struct dm_zone *szone = NULL;
+	int ret;
+
+	/* Get a free sequential zone */
+	dmz_lock_map(dmz);
+	szone = dmz_alloc_zone(dmz, DMZ_ALLOC_RECLAIM);
+	dmz_unlock_map(dmz);
+	if (!szone)
+		return -ENOSPC;
+
+	dmz_dev_debug(dmz,
+		      "Chunk %u, move rnd zone %u (weight %u) to seq zone %u\n",
+		      chunk, dmz_id(dmz, dzone), dmz_weight(dzone),
+		      dmz_id(dmz, szone));
+
+	/* Flush the random data zone into the sequential zone */
+	ret = dmz_reclaim_copy(dmz, dzone, szone);
+
+	down_read(&dmz->mblk_sem);
+
+	if (ret == 0)
+		/* Validate copied blocks */
+		ret = dmz_valid_copy(dmz, dzone, szone);
+
+	if (ret) {
+		/* Free the sequential zone */
+		dmz_lock_map(dmz);
+		dmz_free_zone(dmz, szone);
+		dmz_unlock_map(dmz);
+	} else {
+		/* Free the data zone and remap the chunk */
+		dmz_invalidate_zone(dmz, dzone);
+		dmz_lock_map(dmz);
+		dmz_unmap_zone(dmz, dzone);
+		dmz_reclaim_put_zone(dmz, dzone);
+		dmz_free_zone(dmz, dzone);
+		dmz_map_zone(dmz, szone, chunk);
+		dmz_unlock_map(dmz);
+	}
+
+	up_read(&dmz->mblk_sem);
+
+	return ret;
+}
+
+/*
+ * Reclaim an empty zone.
+ */
+static void dmz_reclaim_empty(struct dmz_target *dmz, struct dm_zone *dzone)
+{
+	down_read(&dmz->mblk_sem);
+	dmz_lock_map(dmz);
+	dmz_unmap_zone(dmz, dzone);
+	dmz_reclaim_put_zone(dmz, dzone);
+	dmz_free_zone(dmz, dzone);
+	dmz_unlock_map(dmz);
+	up_read(&dmz->mblk_sem);
+}
+
+/*
+ * Lock a zone for reclaim. Returns 0 if the zone cannot be locked (it is
+ * active or already being reclaimed) and 1 otherwise.
+ */
+static inline int dmz_reclaim_lock_zone(struct dmz_target *dmz,
+					struct dm_zone *zone)
+{
+	/* Active zones cannot be reclaimed */
+	if (dmz_is_active(zone))
+		return 0;
+
+	return !test_and_set_bit(DMZ_RECLAIM, &zone->flags);
+}
+
+/*
+ * Select a random zone for reclaim.
+ */
+static struct dm_zone *dmz_reclaim_get_rnd_zone(struct dmz_target *dmz)
+{
+	struct dm_zone *dzone = NULL;
+	struct dm_zone *zone;
+
+	if (list_empty(&dmz->dz_map_rnd_list))
+		return NULL;
+
+	list_for_each_entry(zone, &dmz->dz_map_rnd_list, link) {
+		if (dmz_is_buf(zone))
+			dzone = zone->bzone;
+		else
+			dzone = zone;
+		if (dmz_reclaim_lock_zone(dmz, dzone))
+			return dzone;
+	}
+
+	return NULL;
+}
+
+/*
+ * Select a buffered sequential zone for reclaim.
+ */
+static struct dm_zone *dmz_reclaim_get_seq_zone(struct dmz_target *dmz)
+{
+	struct dm_zone *zone;
+
+	if (list_empty(&dmz->dz_map_seq_list))
+		return NULL;
+
+	list_for_each_entry(zone, &dmz->dz_map_seq_list, link) {
+		if (!zone->bzone)
+			continue;
+		if (dmz_reclaim_lock_zone(dmz, zone))
+			return zone;
+	}
+
+	return NULL;
+}
+
+/*
+ * Select a zone for reclaim.
+ */
+static struct dm_zone *dmz_reclaim_get_zone(struct dmz_target *dmz)
+{
+	struct dm_zone *zone = NULL;
+
+	/*
+	 * Search for a zone candidate to reclaim: 2 cases are possible.
+	 * (1) There are no free sequential zones. Then a random data zone
+	 *     cannot be reclaimed. So choose a sequential zone to reclaim so
+	 *     that afterward a random zone can be reclaimed.
+	 * (2) At least one free sequential zone is available, then choose
+	 *     the oldest random zone (data or buffer) that can be locked.
+	 */
+	dmz_lock_map(dmz);
+	if (list_empty(&dmz->reclaim_seq_zones_list))
+		zone = dmz_reclaim_get_seq_zone(dmz);
+	else
+		zone = dmz_reclaim_get_rnd_zone(dmz);
+	dmz_unlock_map(dmz);
+
+	return zone;
+}
+
+/*
+ * Find a reclaim candidate zone and reclaim it.
+ */
+static void dmz_reclaim(struct dmz_target *dmz)
+{
+	struct dm_zone *dzone;
+	struct dm_zone *rzone;
+	unsigned long start;
+	int ret;
+
+	/* Get a data zone */
+	dzone = dmz_reclaim_get_zone(dmz);
+	if (!dzone)
+		return;
+
+	start = jiffies;
+
+	if (dmz_is_rnd(dzone)) {
+
+		rzone = dzone;
+		if (!dmz_weight(dzone)) {
+			/* Empty zone */
+			dmz_reclaim_empty(dmz, dzone);
+			ret = 0;
+		} else {
+			/*
+			 * Reclaim the random data zone by moving its
+			 * valid data blocks to a free sequential zone.
+			 */
+			ret = dmz_reclaim_rnd_data(dmz, dzone);
+		}
+
+	} else {
+
+		struct dm_zone *bzone = dzone->bzone;
+		sector_t chunk_block = 0;
+
+		ret = dmz_first_valid_block(dmz, bzone, &chunk_block);
+		if (ret < 0)
+			goto out;
+
+		if (chunk_block >= dzone->wp_block) {
+			/*
+			 * Valid blocks in the buffer zone are after
+			 * the data zone write pointer: copy them there.
+			 */
+			ret = dmz_reclaim_buf(dmz, dzone);
+			rzone = bzone;
+		} else {
+			/*
+			 * Reclaim the data zone by merging it into the
+			 * buffer zone so that the buffer zone itself can
+			 * be later reclaimed.
+			 */
+			ret = dmz_reclaim_seq_data(dmz, dzone);
+			rzone = dzone;
+		}
+
+	}
+
+out:
+	if (ret) {
+		dmz_reclaim_put_zone(dmz, dzone);
+		return;
+	}
+
+	dmz_dev_debug(dmz, "Reclaimed zone %u in %u ms\n",
+		      dmz_id(dmz, rzone), jiffies_to_msecs(jiffies - start));
+
+	dmz_trigger_flush(dmz);
+}
+
+/*
+ * Zone reclaim work.
+ */
+void dmz_reclaim_work(struct work_struct *work)
+{
+	struct dmz_target *dmz =
+		container_of(work, struct dmz_target, reclaim_work.work);
+	unsigned long next_reclaim = DMZ_RECLAIM_PERIOD;
+	unsigned int unmap_nr_rnd = atomic_read(&dmz->dz_unmap_nr_rnd);
+	unsigned int throttle, unmap_perc;
+
+	/* If there are still plenty of random zones, do not reclaim */
+	unmap_perc = unmap_nr_rnd * 100 / dmz->dz_nr_rnd;
+	if (unmap_perc >= DMZ_RECLAIM_HIGH_FREE_RND)
+		goto out;
+
+	/*
+	 * If we are not idle and still have unmapped random zones,
+	 * do not reclaim.
+	 */
+	if (!dmz_idle(dmz) && unmap_perc > DMZ_RECLAIM_LOW_FREE_RND)
+		goto out;
+
+	/*
+	 * We need to start reclaiming random zones: set up zone copy
+	 * throttling to either go fast if we are very low on random zones
+	 * and slower if some free random zones are still available, so that
+	 * the impact on the user workload is kept as small as possible.
+	 */
+	if (dmz_idle(dmz) ||
+	    unmap_nr_rnd < atomic_read(&dmz->nr_active_chunks))
+		/* Idle or very low: go fast */
+		throttle = 100;
+	else
+		/* Busy but we still have some random zone: go slower */
+		throttle = min(75U, 100U - unmap_perc / 2);
+	dmz->reclaim_throttle.throttle = throttle;
+
+	dmz_dev_debug(dmz,
+		      "Reclaim (%u): %s (%u BIOs, %u active chunks), %u%% free rnd zones (%u/%u)\n",
+		      dmz->reclaim_throttle.throttle,
+		      (dmz_idle(dmz) ? "Idle" : "Busy"),
+		      atomic_read(&dmz->bio_count),
+		      atomic_read(&dmz->nr_active_chunks),
+		      unmap_perc,
+		      unmap_nr_rnd, dmz->dz_nr_rnd);
+
+	dmz_reclaim(dmz);
+
+	if ((dmz_should_reclaim(dmz)
+	     && atomic_read(&dmz->nr_reclaim_seq_zones)))
+		/* Run again immediately */
+		next_reclaim = 0;
+
+out:
+	dmz_schedule_reclaim(dmz, next_reclaim);
+}
+
diff --git a/drivers/md/dm-zoned.h b/drivers/md/dm-zoned.h
new file mode 100644
index 0000000..bd4b455
--- /dev/null
+++ b/drivers/md/dm-zoned.h
@@ -0,0 +1,528 @@
+/*
+ * Drive-managed zoned block device target
+ * Copyright (C) 2017 Western Digital Corporation or its affiliates.
+ *
+ * This software is distributed under the terms of the GNU General Public
+ * License version 2, or any later version, "as is," without technical
+ * support, and WITHOUT ANY WARRANTY, without even the implied warranty
+ * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ */
+#include <linux/types.h>
+#include <linux/blkdev.h>
+#include <linux/device-mapper.h>
+#include <linux/dm-kcopyd.h>
+#include <linux/list.h>
+#include <linux/spinlock.h>
+#include <linux/mutex.h>
+#include <linux/workqueue.h>
+#include <linux/rwsem.h>
+#include <linux/rbtree.h>
+#include <linux/radix-tree.h>
+
+#ifndef __DM_ZONED_H__
+#define __DM_ZONED_H__
+
+/*
+ * Metadata version.
+ */
+#define DMZ_META_VER	1
+
+/*
+ * On-disk super block magic.
+ */
+#define DMZ_MAGIC	((((unsigned int)('D')) << 24) | \
+			 (((unsigned int)('Z')) << 16) | \
+			 (((unsigned int)('B')) <<  8) | \
+			 ((unsigned int)('D')))
+
+/*
+ * On disk super block.
+ * The super block uses only 512 B but occupies a full 4KB block on disk.
+ * It is followed on disk by the chunk to zone mapping table and the bitmap
+ * blocks indicating zone block validity.
+ * The overall resulting metadata format is:
+ *    (1) Super block (1 block)
+ *    (2) Chunk mapping table (nr_map_blocks)
+ *    (3) Bitmap blocks (nr_bitmap_blocks)
+ * All metadata blocks are stored in conventional zones, starting from the
+ * first conventional zone found on disk.
+ */
+struct dmz_super {
+
+	/* Magic number */
+	__le32		magic;			/*   4 */
+
+	/* Metadata version number */
+	__le32		version;		/*   8 */
+
+	/* Generation number */
+	__le64		gen;			/*  16 */
+
+	/* This block number */
+	__le64		sb_block;		/*  24 */
+
+	/* The number of metadata blocks, including this super block */
+	__le32		nr_meta_blocks;		/*  28 */
+
+	/* The number of sequential zones reserved for reclaim */
+	__le32		nr_reserved_seq;	/*  32 */
+
+	/* The number of entries in the mapping table */
+	__le32		nr_chunks;		/*  36 */
+
+	/* The number of blocks used for the chunk mapping table */
+	__le32		nr_map_blocks;		/*  40 */
+
+	/* The number of blocks used for the block bitmaps */
+	__le32		nr_bitmap_blocks;	/*  44 */
+
+	/* Checksum */
+	__le32		crc;			/*  48 */
+
+	/* Padding to full 512B sector */
+	u8		reserved[464];		/* 512 */
+
+};
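+
+/*
+ * Example layout (illustrative numbers only): a super block advertising
+ * nr_map_blocks = 8 and nr_bitmap_blocks = 1024 describes a metadata set
+ * of nr_meta_blocks = 1 + 8 + 1024 blocks laid out as:
+ *    block  0          : super block
+ *    blocks 1 .. 8     : chunk mapping table
+ *    blocks 9 .. 1032  : zone block bitmaps
+ */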
+
+/*
+ * Chunk mapping entry: entries are indexed by chunk number
+ * and give the zone ID (dzone_id) mapping the chunk on disk.
+ * This zone may be sequential or random. If it is a sequential
+ * zone, a second zone (bzone_id) used as a write buffer may
+ * also be specified. This second zone will always be a randomly
+ * writeable zone.
+ */
+struct dmz_map {
+	__le32			dzone_id;
+	__le32			bzone_id;
+};
+
+/*
+ * dm-zoned creates block devices with 4KB blocks, always.
+ */
+#define DMZ_BLOCK_SHIFT		12
+#define DMZ_BLOCK_SIZE		(1 << DMZ_BLOCK_SHIFT)
+#define DMZ_BLOCK_MASK		(DMZ_BLOCK_SIZE - 1)
+
+#define DMZ_BLOCK_SHIFT_BITS	(DMZ_BLOCK_SHIFT + 3)
+#define DMZ_BLOCK_SIZE_BITS	(1 << DMZ_BLOCK_SHIFT_BITS)
+#define DMZ_BLOCK_MASK_BITS	(DMZ_BLOCK_SIZE_BITS - 1)
+
+#define DMZ_BLOCK_SECTORS_SHIFT	(DMZ_BLOCK_SHIFT - SECTOR_SHIFT)
+#define DMZ_BLOCK_SECTORS	(DMZ_BLOCK_SIZE >> SECTOR_SHIFT)
+#define DMZ_BLOCK_SECTORS_MASK	(DMZ_BLOCK_SECTORS - 1)
+
+/*
+ * Chunk mapping table metadata: 512 8-byte entries per 4KB block.
+ */
+#define DMZ_MAP_ENTRIES		(DMZ_BLOCK_SIZE / sizeof(struct dmz_map))
+#define DMZ_MAP_ENTRIES_SHIFT	(ilog2(DMZ_MAP_ENTRIES))
+#define DMZ_MAP_ENTRIES_MASK	(DMZ_MAP_ENTRIES - 1)
+#define DMZ_MAP_UNMAPPED	UINT_MAX
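+
+/*
+ * Illustrative lookup of the mapping entry of a chunk, assuming the cached
+ * chunk map blocks are available through the dz_map_mblk array indexed by
+ * map block number:
+ *
+ *	struct dmz_mblock *mblk =
+ *		dmz->dz_map_mblk[chunk >> DMZ_MAP_ENTRIES_SHIFT];
+ *	struct dmz_map *dmap = (struct dmz_map *) mblk->data;
+ *	unsigned int dzone_id =
+ *		le32_to_cpu(dmap[chunk & DMZ_MAP_ENTRIES_MASK].dzone_id);
+ *
+ *	if (dzone_id == DMZ_MAP_UNMAPPED)
+ *		... the chunk has no data zone assigned yet ...
+ */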
+
+/*
+ * Block <-> 512B sector conversion.
+ */
+#define dmz_blk2sect(b)		((b) << DMZ_BLOCK_SECTORS_SHIFT)
+#define dmz_sect2blk(s)		((s) >> DMZ_BLOCK_SECTORS_SHIFT)
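+
+/*
+ * Example: a 4KB block is 8 512B sectors, so dmz_blk2sect(16) == 128 and
+ * dmz_sect2blk(129) == 16, with the in-block remainder given by
+ * (s) & DMZ_BLOCK_SECTORS_MASK (here 1).
+ */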
+
+#define DMZ_MIN_BIOS		8192
+
+/*
+ * The size of a zone report in number of zones.
+ * This results in 4096*64B=256KB report zones commands.
+ */
+#define DMZ_REPORT_NR_ZONES	4096
+
+/*
+ * Zone flags.
+ */
+enum {
+
+	/* Zone write type */
+	DMZ_RND,
+	DMZ_SEQ,
+
+	/* Zone critical condition */
+	DMZ_OFFLINE,
+	DMZ_READ_ONLY,
+
+	/* How the zone is being used */
+	DMZ_META,
+	DMZ_DATA,
+	DMZ_BUF,
+
+	/* Zone internal state */
+	DMZ_ACTIVE,
+	DMZ_RECLAIM,
+	DMZ_SEQ_WRITE_ERR,
+
+};
+
+/*
+ * Zone descriptor.
+ */
+struct dm_zone {
+
+	/* For listing the zone depending on its state */
+	struct list_head	link;
+
+	/* Zone type and state */
+	unsigned long		flags;
+
+	/* Zone activation reference count */
+	atomic_t		refcount;
+
+	/* Zone write pointer block (relative to the zone start block) */
+	unsigned int		wp_block;
+
+	/* Zone weight (number of valid blocks in the zone) */
+	unsigned int		weight;
+
+	/* The chunk that the zone maps */
+	unsigned int		chunk;
+
+	/*
+	 * For a sequential data zone, pointer to the random zone
+	 * used as a buffer for processing unaligned writes.
+	 * For a buffer zone, this points back to the data zone.
+	 */
+	struct dm_zone		*bzone;
+
+};
+
+/*
+ * Metadata block descriptor (for cached metadata blocks).
+ */
+struct dmz_mblock {
+
+	struct rb_node		node;
+	struct list_head	link;
+	sector_t		no;
+	atomic_t		ref;
+	unsigned long		state;
+	struct page		*page;
+	void			*data;
+
+};
+
+/*
+ * Super block information (one per metadata set).
+ */
+struct dmz_sb {
+	sector_t		block;
+	struct dmz_mblock	*mblk;
+	struct dmz_super	*sb;
+};
+
+/*
+ * Metadata block state flags.
+ */
+enum {
+	DMZ_META_DIRTY,
+	DMZ_META_READING,
+	DMZ_META_WRITING,
+	DMZ_META_ERROR,
+};
+
+/*
+ * Target flags.
+ */
+enum {
+	DMZ_RECLAIM_COPY,
+	DMZ_SUSPENDED,
+};
+
+/*
+ * Target descriptor.
+ */
+struct dmz_target {
+
+	struct dm_dev		*ddev;
+
+	/* Zoned block device information */
+	char			zbd_name[BDEVNAME_SIZE];
+	struct block_device	*zbd;
+	sector_t		zbd_capacity;
+	struct request_queue	*zbdq;
+	unsigned long		flags;
+
+	unsigned int		nr_zones;
+	unsigned int		nr_useable_zones;
+	unsigned int		nr_meta_blocks;
+	unsigned int		nr_meta_zones;
+	unsigned int		nr_data_zones;
+	unsigned int		nr_rnd_zones;
+	unsigned int		nr_reserved_seq;
+	unsigned int		nr_chunks;
+
+	sector_t		zone_nr_sectors;
+	unsigned int		zone_nr_sectors_shift;
+
+	sector_t		zone_nr_blocks;
+	sector_t		zone_nr_blocks_shift;
+
+	sector_t		zone_bitmap_size;
+	unsigned int		zone_nr_bitmap_blocks;
+
+	unsigned int		nr_bitmap_blocks;
+	unsigned int		nr_map_blocks;
+
+	/* Zone information array */
+	struct dm_zone		*zones;
+
+	/* For metadata handling */
+	struct dm_zone		*sb_zone;
+	struct dmz_sb		sb[2];
+	unsigned int		mblk_primary;
+	u64			sb_gen;
+	unsigned int		min_nr_mblks;
+	unsigned int		max_nr_mblks;
+	atomic_t		nr_mblks;
+	struct rw_semaphore	mblk_sem;
+	spinlock_t		mblk_lock;
+	struct rb_root		mblk_rbtree;
+	struct list_head	mblk_lru_list;
+	struct list_head	mblk_dirty_list;
+
+	/* Zone allocation management */
+	struct mutex		map_lock;
+	struct dmz_mblock	**dz_map_mblk;
+	unsigned int		dz_nr_rnd;
+	atomic_t		dz_unmap_nr_rnd;
+	struct list_head	dz_unmap_rnd_list;
+	struct list_head	dz_map_rnd_list;
+
+	unsigned int		dz_nr_seq;
+	atomic_t		dz_unmap_nr_seq;
+	struct list_head	dz_unmap_seq_list;
+	struct list_head	dz_map_seq_list;
+
+	wait_queue_head_t	dz_free_wq;
+
+	/* For chunk work */
+	struct mutex		chunk_lock;
+	struct radix_tree_root	chunk_rxtree;
+	struct workqueue_struct *chunk_wq;
+	atomic_t		nr_active_chunks;
+
+	/* For chunk BIOs to zones */
+	struct bio_set		*bio_set;
+	atomic_t		bio_count;
+	unsigned long		atime;
+
+	/* For flush */
+	spinlock_t		flush_lock;
+	struct bio_list		flush_list;
+	struct delayed_work	flush_work;
+	struct workqueue_struct *flush_wq;
+
+	/* For reclaim */
+	struct delayed_work	reclaim_work;
+	struct workqueue_struct *reclaim_wq;
+	atomic_t		nr_reclaim_seq_zones;
+	struct list_head	reclaim_seq_zones_list;
+	struct dm_kcopyd_client	*reclaim_kc;
+	struct dm_kcopyd_throttle reclaim_throttle;
+	int			reclaim_err;
+
+};
+
+/*
+ * Chunk work descriptor.
+ */
+struct dm_chunk_work {
+	struct work_struct	work;
+	atomic_t		refcount;
+	struct dmz_target	*target;
+	unsigned int		chunk;
+	struct bio_list		bio_list;
+};
+
+#define dmz_id(dmz, z)		((unsigned int)((z) - (dmz)->zones))
+#define dmz_get(dmz, z)		(&(dmz)->zones[z])
+#define dmz_start_sect(dmz, z)	(dmz_id(dmz, z) << (dmz)->zone_nr_sectors_shift)
+#define dmz_start_block(dmz, z)	(dmz_id(dmz, z) << (dmz)->zone_nr_blocks_shift)
+#define dmz_is_rnd(z)		test_bit(DMZ_RND, &(z)->flags)
+#define dmz_is_seq(z)		test_bit(DMZ_SEQ, &(z)->flags)
+#define dmz_is_empty(z)		((z)->wp_block == 0)
+#define dmz_is_offline(z)	test_bit(DMZ_OFFLINE, &(z)->flags)
+#define dmz_is_readonly(z)	test_bit(DMZ_READ_ONLY, &(z)->flags)
+#define dmz_is_active(z)	test_bit(DMZ_ACTIVE, &(z)->flags)
+#define dmz_in_reclaim(z)	test_bit(DMZ_RECLAIM, &(z)->flags)
+#define dmz_seq_write_err(z)	test_bit(DMZ_SEQ_WRITE_ERR, &(z)->flags)
+
+#define dmz_is_meta(z)		test_bit(DMZ_META, &(z)->flags)
+#define dmz_is_buf(z)		test_bit(DMZ_BUF, &(z)->flags)
+#define dmz_is_data(z)		test_bit(DMZ_DATA, &(z)->flags)
+
+#define dmz_weight(z)		((z)->weight)
+
+#define dmz_chunk_sector(dmz, s) ((s) & ((dmz)->zone_nr_sectors - 1))
+#define dmz_chunk_block(dmz, b)	((b) & ((dmz)->zone_nr_blocks - 1))
+
+#define dmz_bio_block(bio)	dmz_sect2blk((bio)->bi_iter.bi_sector)
+#define dmz_bio_blocks(bio)	dmz_sect2blk(bio_sectors(bio))
+#define dmz_bio_chunk(dmz, bio)	((bio)->bi_iter.bi_sector >> \
+				 (dmz)->zone_nr_sectors_shift)
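+
+/*
+ * Example with 256MB zones (zone_nr_sectors = 524288, so a shift of 19,
+ * and zone_nr_blocks = 65536): a BIO starting at 512B sector 1048584 maps
+ * to chunk dmz_bio_chunk() = 1048584 >> 19 = 2, its first 4KB block is
+ * dmz_bio_block() = 1048584 >> 3 = 131073, and the block offset within
+ * the chunk is dmz_chunk_block() = 131073 & 65535 = 1.
+ */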
+
+#define dmz_lock_map(dmz)	mutex_lock(&(dmz)->map_lock)
+#define dmz_unlock_map(dmz)	mutex_unlock(&(dmz)->map_lock)
+
+/*
+ * Flush interval (in jiffies).
+ */
+#define DMZ_FLUSH_PERIOD	(10 * HZ)
+
+/*
+ * Trigger flush.
+ */
+static inline void dmz_trigger_flush(struct dmz_target *dmz)
+{
+	mod_delayed_work(dmz->flush_wq, &dmz->flush_work, 0);
+}
+
+/*
+ * Number of seconds without BIO to consider the target device idle.
+ */
+#define DMZ_IDLE_PERIOD		(10UL * HZ)
+
+/*
+ * Zone reclaim check period.
+ */
+#define DMZ_RECLAIM_PERIOD	(HZ)
+
+/*
+ * Percentage of unmapped (free) random zones below which reclaim starts
+ * even if the device is not idle.
+ */
+#define DMZ_RECLAIM_LOW_FREE_RND	50
+
+/*
+ * Percentage of unmapped (free) random zones above which reclaim stops
+ * even if the device is idle.
+ */
+#define DMZ_RECLAIM_HIGH_FREE_RND	75
+
+/*
+ * Test if the target device is idle.
+ */
+static inline int dmz_idle(struct dmz_target *dmz)
+{
+	return atomic_read(&(dmz)->bio_count) == 0 &&
+		time_is_before_jiffies(dmz->atime + DMZ_IDLE_PERIOD);
+}
+
+/*
+ * Test if triggering reclaim is necessary.
+ */
+static inline bool dmz_should_reclaim(struct dmz_target *dmz)
+{
+	unsigned int unmap_rnd = atomic_read(&dmz->dz_unmap_nr_rnd);
+
+	if (dmz_idle(dmz) && unmap_rnd < dmz->dz_nr_rnd)
+		return true;
+
+	/* Is the percentage of unmapped random zones low? */
+	return ((unmap_rnd * 100) / dmz->dz_nr_rnd) <= DMZ_RECLAIM_LOW_FREE_RND;
+}
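+
+/*
+ * Example: with dz_nr_rnd = 100 random zones, reclaim is considered
+ * necessary once 50 or fewer of them are unmapped (50% is at or below
+ * DMZ_RECLAIM_LOW_FREE_RND), or as soon as any random zone is mapped
+ * when the target is idle.
+ */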
+
+/*
+ * Schedule reclaim (delay in jiffies).
+ */
+static inline void dmz_schedule_reclaim(struct dmz_target *dmz,
+					unsigned long delay)
+{
+	mod_delayed_work(dmz->reclaim_wq, &dmz->reclaim_work, delay);
+}
+
+/*
+ * Trigger reclaim.
+ */
+static inline void dmz_trigger_reclaim(struct dmz_target *dmz)
+{
+	dmz_schedule_reclaim(dmz, 0);
+}
+
+extern void dmz_reclaim_work(struct work_struct *work);
+
+/*
+ * Zone BIO context.
+ */
+struct dmz_bioctx {
+	struct dmz_target	*target;
+	struct dm_zone		*zone;
+	struct bio		*bio;
+	atomic_t		ref;
+	int			error;
+};
+
+#define dmz_info(format, args...)		\
+	pr_info("dm-zoned: " format,		\
+	## args)
+
+#define dmz_dev_info(dmz, format, args...)	\
+	pr_info("dm-zoned (%s): " format,	\
+	       (dmz)->zbd_name, ## args)
+
+#define dmz_dev_err(dmz, format, args...)	\
+	pr_err("dm-zoned (%s): " format,	\
+	       (dmz)->zbd_name, ## args)
+
+#define dmz_dev_warn(dmz, format, args...)	\
+	pr_warn("dm-zoned (%s): " format,	\
+		(dmz)->zbd_name, ## args)
+
+#define dmz_dev_debug(dmz, format, args...)	\
+	pr_debug("dm-zoned (%s): " format,	\
+		 (dmz)->zbd_name, ## args)
+
+extern int dmz_init_meta(struct dmz_target *dmz);
+extern int dmz_resume_meta(struct dmz_target *dmz);
+extern void dmz_cleanup_meta(struct dmz_target *dmz);
+
+extern int dmz_flush_mblocks(struct dmz_target *dmz);
+
+#define DMZ_ALLOC_RND		0x01
+#define DMZ_ALLOC_RECLAIM	0x02
+
+struct dm_zone *dmz_alloc_zone(struct dmz_target *dmz, unsigned long flags);
+extern void dmz_free_zone(struct dmz_target *dmz, struct dm_zone *zone);
+
+extern void dmz_map_zone(struct dmz_target *dmz, struct dm_zone *zone,
+			 unsigned int chunk);
+extern void dmz_unmap_zone(struct dmz_target *dmz, struct dm_zone *zone);
+
+extern void dmz_activate_zone(struct dmz_target *dmz, struct dm_zone *zone);
+extern void dmz_deactivate_zone(struct dmz_target *dmz, struct dm_zone *zone);
+
+extern struct dm_zone *dmz_get_chunk_mapping(struct dmz_target *dmz,
+					     unsigned int chunk, int op);
+extern void dmz_put_chunk_mapping(struct dmz_target *dmz,
+				  struct dm_zone *zone);
+
+extern struct dm_zone *dmz_get_chunk_buffer(struct dmz_target *dmz,
+					    struct dm_zone *dzone);
+
+extern int dmz_valid_copy(struct dmz_target *dmz, struct dm_zone *from_zone,
+			  struct dm_zone *to_zone);
+extern int dmz_valid_merge(struct dmz_target *dmz, struct dm_zone *from_zone,
+			   struct dm_zone *to_zone, sector_t chunk_block);
+
+extern int dmz_validate_blocks(struct dmz_target *dmz, struct dm_zone *zone,
+			       sector_t chunk_block, unsigned int nr_blocks);
+extern int dmz_invalidate_blocks(struct dmz_target *dmz, struct dm_zone *zone,
+				 sector_t chunk_block, unsigned int nr_blocks);
+static inline int dmz_invalidate_zone(struct dmz_target *dmz,
+				      struct dm_zone *zone)
+{
+	return dmz_invalidate_blocks(dmz, zone, 0, dmz->zone_nr_blocks);
+}
+
+extern int dmz_block_valid(struct dmz_target *dmz, struct dm_zone *zone,
+			   sector_t chunk_block);
+
+extern int dmz_first_valid_block(struct dmz_target *dmz, struct dm_zone *zone,
+				 sector_t *chunk_block);
+
+#endif /* __DM_ZONED_H__ */
-- 
2.9.3


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* Re: [PATCH 00/10] dm: zoned block device support
@ 2017-04-24  6:24   ` Hannes Reinecke
  0 siblings, 0 replies; 18+ messages in thread
From: Hannes Reinecke @ 2017-04-24  6:24 UTC (permalink / raw)
  To: damien.lemoal, dm-devel, Mike Snitzer, Alasdair Kergon
  Cc: Christoph Hellwig, Bart Van Assche, linux-block

On 04/21/2017 05:55 AM, damien.lemoal@wdc.com wrote:
> From: Damien Le Moal <damien.lemoal@wdc.com>
> 
> This series introduces zoned block device support to the device mapper
> infrastructure. Pathces are as follows:
> 
> - Patch 1: Add a new target type feature flag to indicate if a target type
>   supports host-managed zoned block devices. This prevents using these drives
>   with the current target types since none of them have the proper support
>   implemented and will not operate properly with these drives.
> - Patch 2: If a target device is a zoned block device, check that the range of
>   LBAs mapped is aligned to the device zone size and that the device start
>   offset also aligns to zone boundaries. This is necessary for zone reset and
>   zone report correct execution.
> - Patch 3: Check that the different target devices of a table have compatible
>   zone sizes and models. This is necessary for target types that expose a zone
>   model different from the underlying device.
> - Patch 4: Fix handling of REQ_OP_ZONE_RESET bios
> - Patch 5: Fix handling of REQ_OP_ZONE_REPORT bios
> - Patch 6: Introduce a new helper function to reverse map a device zone report
>   to the target LBA range
> - Patch 7: Add support for host-managed zoned block devices to dm-flakey. This
>   is necessary for testing file systems supporting natively these drives (e.g.
>   f2fs).
> - Patch 8: Add support for for zoned block devices to dm-linear. This can have
>   useful applications during development and testing (e.g. allow creating
>   smaller zoned devices with different combinations and positions of zones).
>   There are also interesting applications for production, for instance, the
>   ability to aggregate conventional zones of different drives to create a
>   regular disk.
> - Patch 9: Add sequential write enforcement to dm_kcopyd_copy so that
>   sequential zones of a host-managed zoned block device can be specified as
>   destinations.
> - Patch 10: New dm-zoned target type (this was already sent for review twice).
>   This resend adds modifications suggested by Hannes to implement reclaim
>   using dm-kcopyd. dm-zoned depends on patch 9.
> 
> As always, comments and reviews are welcome.
> 
> Damien Le Moal (10):
>   dm-table: Introduce DM_TARGET_ZONED_HM feature
>   dm-table: Check device area zone alignment
>   dm-table: Check block devices zone model compatibility
>   dm: Fix REQ_OP_ZONE_RESET bio handling
>   dm: Fix REQ_OP_ZONE_REPORT bio handling
>   dm: Introduce dm_remap_zone_report()
>   dm-flakey: Add support for zoned block devices
>   dm-linear: Add support for zoned block devices
>   dm-kcopyd: Add sequential write feature
>   dm-zoned: Drive-managed zoned block device target
> 
>  Documentation/device-mapper/dm-zoned.txt |  154 +++
>  drivers/md/Kconfig                       |   19 +
>  drivers/md/Makefile                      |    2 +
>  drivers/md/dm-flakey.c                   |   21 +-
>  drivers/md/dm-kcopyd.c                   |   68 +-
>  drivers/md/dm-linear.c                   |   14 +-
>  drivers/md/dm-table.c                    |  145 ++
>  drivers/md/dm-zoned-io.c                 |  998 ++++++++++++++
>  drivers/md/dm-zoned-metadata.c           | 2195 ++++++++++++++++++++++++++++++
>  drivers/md/dm-zoned-reclaim.c            |  535 ++++++++
>  drivers/md/dm-zoned.h                    |  528 +++++++
>  drivers/md/dm.c                          |   93 +-
>  include/linux/device-mapper.h            |   16 +
>  include/linux/dm-kcopyd.h                |    1 +
>  14 files changed, 4783 insertions(+), 6 deletions(-)
>  create mode 100644 Documentation/device-mapper/dm-zoned.txt
>  create mode 100644 drivers/md/dm-zoned-io.c
>  create mode 100644 drivers/md/dm-zoned-metadata.c
>  create mode 100644 drivers/md/dm-zoned-reclaim.c
>  create mode 100644 drivers/md/dm-zoned.h
> 
Very nice.

You can add my

Reviewed-by: Hannes Reinecke <hare@suse.com>

to the whole series.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 00/10] dm: zoned block device support
  2017-04-24  6:24   ` Hannes Reinecke
  (?)
@ 2017-04-24  7:52   ` Damien Le Moal
  -1 siblings, 0 replies; 18+ messages in thread
From: Damien Le Moal @ 2017-04-24  7:52 UTC (permalink / raw)
  To: Hannes Reinecke, dm-devel, Mike Snitzer, Alasdair Kergon
  Cc: Christoph Hellwig, Bart Van Assche, linux-block

Hannes,

On 4/24/17 15:24, Hannes Reinecke wrote:
> On 04/21/2017 05:55 AM, damien.lemoal@wdc.com wrote:
>> From: Damien Le Moal <damien.lemoal@wdc.com>
>>
>> This series introduces zoned block device support to the device mapper
>> infrastructure. Pathces are as follows:
>>
>> - Patch 1: Add a new target type feature flag to indicate if a target type
>>   supports host-managed zoned block devices. This prevents using these drives
>>   with the current target types since none of them have the proper support
>>   implemented and will not operate properly with these drives.
>> - Patch 2: If a target device is a zoned block device, check that the range of
>>   LBAs mapped is aligned to the device zone size and that the device start
>>   offset also aligns to zone boundaries. This is necessary for zone reset and
>>   zone report correct execution.
>> - Patch 3: Check that the different target devices of a table have compatible
>>   zone sizes and models. This is necessary for target types that expose a zone
>>   model different from the underlying device.
>> - Patch 4: Fix handling of REQ_OP_ZONE_RESET bios
>> - Patch 5: Fix handling of REQ_OP_ZONE_REPORT bios
>> - Patch 6: Introduce a new helper function to reverse map a device zone report
>>   to the target LBA range
>> - Patch 7: Add support for host-managed zoned block devices to dm-flakey. This
>>   is necessary for testing file systems supporting natively these drives (e.g.
>>   f2fs).
>> - Patch 8: Add support for for zoned block devices to dm-linear. This can have
>>   useful applications during development and testing (e.g. allow creating
>>   smaller zoned devices with different combinations and positions of zones).
>>   There are also interesting applications for production, for instance, the
>>   ability to aggregate conventional zones of different drives to create a
>>   regular disk.
>> - Patch 9: Add sequential write enforcement to dm_kcopyd_copy so that
>>   sequential zones of a host-managed zoned block device can be specified as
>>   destinations.
>> - Patch 10: New dm-zoned target type (this was already sent for review twice).
>>   This resend adds modifications suggested by Hannes to implement reclaim
>>   using dm-kcopyd. dm-zoned depends on patch 9.
>>
>> As always, comments and reviews are welcome.
>>
>> Damien Le Moal (10):
>>   dm-table: Introduce DM_TARGET_ZONED_HM feature
>>   dm-table: Check device area zone alignment
>>   dm-table: Check block devices zone model compatibility
>>   dm: Fix REQ_OP_ZONE_RESET bio handling
>>   dm: Fix REQ_OP_ZONE_REPORT bio handling
>>   dm: Introduce dm_remap_zone_report()
>>   dm-flakey: Add support for zoned block devices
>>   dm-linear: Add support for zoned block devices
>>   dm-kcopyd: Add sequential write feature
>>   dm-zoned: Drive-managed zoned block device target
>>
>>  Documentation/device-mapper/dm-zoned.txt |  154 +++
>>  drivers/md/Kconfig                       |   19 +
>>  drivers/md/Makefile                      |    2 +
>>  drivers/md/dm-flakey.c                   |   21 +-
>>  drivers/md/dm-kcopyd.c                   |   68 +-
>>  drivers/md/dm-linear.c                   |   14 +-
>>  drivers/md/dm-table.c                    |  145 ++
>>  drivers/md/dm-zoned-io.c                 |  998 ++++++++++++++
>>  drivers/md/dm-zoned-metadata.c           | 2195 ++++++++++++++++++++++++++++++
>>  drivers/md/dm-zoned-reclaim.c            |  535 ++++++++
>>  drivers/md/dm-zoned.h                    |  528 +++++++
>>  drivers/md/dm.c                          |   93 +-
>>  include/linux/device-mapper.h            |   16 +
>>  include/linux/dm-kcopyd.h                |    1 +
>>  14 files changed, 4783 insertions(+), 6 deletions(-)
>>  create mode 100644 Documentation/device-mapper/dm-zoned.txt
>>  create mode 100644 drivers/md/dm-zoned-io.c
>>  create mode 100644 drivers/md/dm-zoned-metadata.c
>>  create mode 100644 drivers/md/dm-zoned-reclaim.c
>>  create mode 100644 drivers/md/dm-zoned.h
>>
> Very nice.
> 
> You can add my
> 
> Reviewed-by: Hannes Reinecke <hare@suse.com>
> 
> to the whole series.

Thank you for the review.

Best regards.

-- 
Damien Le Moal, Ph.D.
Sr. Manager, System Software Research Group,
Western Digital Corporation
Damien.LeMoal@wdc.com
(+81) 0466-98-3593 (ext. 513593)
1 kirihara-cho, Fujisawa,
Kanagawa, 252-0888 Japan
www.wdc.com, www.hgst.com

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 00/10] dm: zoned block device support
  2017-04-21  3:55 [PATCH 00/10] dm: zoned block device support damien.lemoal
                   ` (10 preceding siblings ...)
  2017-04-24  6:24   ` Hannes Reinecke
@ 2017-04-27  4:22 ` Damien Le Moal
  11 siblings, 0 replies; 18+ messages in thread
From: Damien Le Moal @ 2017-04-27  4:22 UTC (permalink / raw)
  To: dm-devel, Mike Snitzer, Alasdair Kergon
  Cc: Hannes Reinecke, Christoph Hellwig, Bart Van Assche, linux-block

Hello Mike,

Any comments on this series?
Would it be possible to consider it for 4.12?

Best regards.

On 4/21/17 12:55, damien.lemoal@wdc.com wrote:
> From: Damien Le Moal <damien.lemoal@wdc.com>
> 
> This series introduces zoned block device support to the device mapper
> infrastructure. Patches are as follows:
> 
> - Patch 1: Add a new target type feature flag to indicate if a target type
>   supports host-managed zoned block devices. This prevents using these drives
>   with the current target types, since none of them implements the proper
>   support and they would not operate properly with these drives.
> - Patch 2: If a target device is a zoned block device, check that the range of
>   LBAs mapped is aligned to the device zone size and that the device start
>   offset also aligns to zone boundaries. This is necessary for correct
>   execution of zone reset and zone report.
> - Patch 3: Check that the different target devices of a table have compatible
>   zone sizes and models. This is necessary for target types that expose a zone
>   model different from the underlying device.
> - Patch 4: Fix handling of REQ_OP_ZONE_RESET bios
> - Patch 5: Fix handling of REQ_OP_ZONE_REPORT bios
> - Patch 6: Introduce a new helper function to reverse map a device zone report
>   to the target LBA range
> - Patch 7: Add support for host-managed zoned block devices to dm-flakey. This
>   is necessary for testing file systems that natively support these drives
>   (e.g. f2fs).
> - Patch 8: Add support for zoned block devices to dm-linear. This can have
>   useful applications during development and testing (e.g. allow creating
>   smaller zoned devices with different combinations and positions of zones).
>   There are also interesting applications for production, for instance, the
>   ability to aggregate conventional zones of different drives to create a
>   regular disk.
> - Patch 9: Add sequential write enforcement to dm_kcopyd_copy so that
>   sequential zones of a host-managed zoned block device can be specified as
>   destinations.
> - Patch 10: New dm-zoned target type (this was already sent for review twice).
>   This resend adds modifications suggested by Hannes to implement reclaim
>   using dm-kcopyd. dm-zoned depends on patch 9.
> 
> As always, comments and reviews are welcome.
> 
> Damien Le Moal (10):
>   dm-table: Introduce DM_TARGET_ZONED_HM feature
>   dm-table: Check device area zone alignment
>   dm-table: Check block devices zone model compatibility
>   dm: Fix REQ_OP_ZONE_RESET bio handling
>   dm: Fix REQ_OP_ZONE_REPORT bio handling
>   dm: Introduce dm_remap_zone_report()
>   dm-flakey: Add support for zoned block devices
>   dm-linear: Add support for zoned block devices
>   dm-kcopyd: Add sequential write feature
>   dm-zoned: Drive-managed zoned block device target
> 
>  Documentation/device-mapper/dm-zoned.txt |  154 +++
>  drivers/md/Kconfig                       |   19 +
>  drivers/md/Makefile                      |    2 +
>  drivers/md/dm-flakey.c                   |   21 +-
>  drivers/md/dm-kcopyd.c                   |   68 +-
>  drivers/md/dm-linear.c                   |   14 +-
>  drivers/md/dm-table.c                    |  145 ++
>  drivers/md/dm-zoned-io.c                 |  998 ++++++++++++++
>  drivers/md/dm-zoned-metadata.c           | 2195 ++++++++++++++++++++++++++++++
>  drivers/md/dm-zoned-reclaim.c            |  535 ++++++++
>  drivers/md/dm-zoned.h                    |  528 +++++++
>  drivers/md/dm.c                          |   93 +-
>  include/linux/device-mapper.h            |   16 +
>  include/linux/dm-kcopyd.h                |    1 +
>  14 files changed, 4783 insertions(+), 6 deletions(-)
>  create mode 100644 Documentation/device-mapper/dm-zoned.txt
>  create mode 100644 drivers/md/dm-zoned-io.c
>  create mode 100644 drivers/md/dm-zoned-metadata.c
>  create mode 100644 drivers/md/dm-zoned-reclaim.c
>  create mode 100644 drivers/md/dm-zoned.h
> 

-- 
Damien Le Moal, Ph.D.
Sr Manager, System Software Group,
Western Digital Research
Damien.LeMoal@wdc.com
Tel: (+81) 0466-98-3593 (Ext. 51-3593)
1 kirihara-cho, Fujisawa, Kanagawa, 252-0888 Japan
www.wdc.com, www.hgst.com

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 10/10] dm-zoned: Drive-managed zoned block device target
  2017-04-21  3:55 ` [PATCH 10/10] dm-zoned: Drive-managed zoned block device target damien.lemoal
@ 2017-04-28 21:14   ` Bart Van Assche
  0 siblings, 0 replies; 18+ messages in thread
From: Bart Van Assche @ 2017-04-28 21:14 UTC (permalink / raw)
  To: dm-devel, Damien Le Moal; +Cc: hch

On Fri, 2017-04-21 at 12:55 +0900, damien.lemoal@wdc.com wrote:
> +static void dmz_shrink_mblock_cache(struct dmz_target *dmz, bool idle)
> +{
> +       struct dmz_mblock *mblk;
> +       unsigned int nr_mblks;
> +
> +       if (!dmz->max_nr_mblks)
> +               return;
> +
> +       if (idle)
> +               nr_mblks = dmz->min_nr_mblks;
> +       else
> +               nr_mblks = dmz->max_nr_mblks;
> +
> +       while (atomic_read(&dmz->nr_mblks) > nr_mblks &&
> +              !list_empty(&dmz->mblk_lru_list)) {
> +               mblk = list_first_entry(&dmz->mblk_lru_list,
> +                                       struct dmz_mblock, link);
> +               list_del_init(&mblk->link);
> +               rb_erase(&mblk->node, &dmz->mblk_rbtree);
> +               dmz_free_mblock(dmz, mblk);
> +       }
> +}

(off-list)

Hello Damien,

Is mblk_lru_list perhaps a cache that should be freed under memory pressure?
If so, when you repost this patch series please add a shrinker (struct shrinker
+ register_shrinker()) so that this memory can be released when memory pressure
becomes too high.
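
For reference, a minimal sketch of what such a hookup could look like. It
assumes a new "struct shrinker mblk_shrinker" field in struct dmz_target and
a hypothetical dmz_free_mblocks() helper that frees at most the requested
number of blocks from the tail of the LRU (with the metadata lock held) and
returns how many it actually freed:

static unsigned long dmz_mblk_cache_count(struct shrinker *shrink,
					  struct shrink_control *sc)
{
	struct dmz_target *dmz = container_of(shrink, struct dmz_target,
					      mblk_shrinker);

	/* Report the current cache size; the scan callback decides
	 * how many blocks to actually drop. */
	return atomic_read(&dmz->nr_mblks);
}

static unsigned long dmz_mblk_cache_scan(struct shrinker *shrink,
					 struct shrink_control *sc)
{
	struct dmz_target *dmz = container_of(shrink, struct dmz_target,
					      mblk_shrinker);

	/* Free up to sc->nr_to_scan cached metadata blocks */
	return dmz_free_mblocks(dmz, sc->nr_to_scan);
}

	/* In the target constructor, once the metadata cache is set up
	 * (return value check omitted in this sketch): */
	dmz->mblk_shrinker.count_objects = dmz_mblk_cache_count;
	dmz->mblk_shrinker.scan_objects = dmz_mblk_cache_scan;
	dmz->mblk_shrinker.seeks = DEFAULT_SEEKS;
	register_shrinker(&dmz->mblk_shrinker);

dmz_shrink_mblock_cache() could then be kept for the idle-time trimming down
to min_nr_mblks, with the VM driving reclaim under memory pressure through
the shrinker.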

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 03/10] dm-table: Check block devices zone model compatibility
  2017-04-21  3:55   ` damien.lemoal
  (?)
@ 2017-04-29  3:15   ` Bart Van Assche
  -1 siblings, 0 replies; 18+ messages in thread
From: Bart Van Assche @ 2017-04-29  3:15 UTC (permalink / raw)
  To: dm-devel, agk, Damien Le Moal, snitzer; +Cc: Bart Van Assche, hch

On Fri, 2017-04-21 at 12:55 +0900, damien.lemoal@wdc.com wrote:
> [ ... ]
> +static int validate_hardware_zone_model(struct dm_table *table,
> +					struct queue_limits *limits)
> +{
> +	[ ... ]
> +	unsigned int i = 0;
> +	[ ... ]
> +	while (i < num_targets) {
> +             [ ... ]
> +		i++;
> +	}

Hello Damien,

A minor comment: maybe it's more appropriate to implement this loop as a
for-loop?
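
Something like this, purely as a sketch of the shape (the per-target checks
elided above stay unchanged in the loop body):

	for (i = 0; i < num_targets; i++) {
		/* per-target zone model / zone size checks as above */
	}

That also lets the separate "i = 0" initialization and the trailing "i++"
go away, making it obvious at a glance that i is only the loop counter.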

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 18+ messages in thread

end of thread, other threads:[~2017-04-29  3:15 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-04-21  3:55 [PATCH 00/10] dm: zoned block device support damien.lemoal
2017-04-21  3:55 ` [PATCH 01/10] dm-table: Introduce DM_TARGET_ZONED_HM feature damien.lemoal
2017-04-21  3:55 ` [PATCH 02/10] dm-table: Check device area zone alignment damien.lemoal
2017-04-21  3:55 ` [PATCH 03/10] dm-table: Check block devices zone model compatibility damien.lemoal
2017-04-21  3:55   ` damien.lemoal
2017-04-29  3:15   ` Bart Van Assche
2017-04-21  3:55 ` [PATCH 04/10] dm: Fix REQ_OP_ZONE_RESET bio handling damien.lemoal
2017-04-21  3:55 ` [PATCH 05/10] dm: Fix REQ_OP_ZONE_REPORT " damien.lemoal
2017-04-21  3:55 ` [PATCH 06/10] dm: Introduce dm_remap_zone_report() damien.lemoal
2017-04-21  3:55 ` [PATCH 07/10] dm-flakey: Add support for zoned block devices damien.lemoal
2017-04-21  3:55 ` [PATCH 08/10] dm-linear: " damien.lemoal
2017-04-21  3:55 ` [PATCH 09/10] dm-kcopyd: Add sequential write feature damien.lemoal
2017-04-21  3:55 ` [PATCH 10/10] dm-zoned: Drive-managed zoned block device target damien.lemoal
2017-04-28 21:14   ` Bart Van Assche
2017-04-24  6:24 ` [PATCH 00/10] dm: zoned block device support Hannes Reinecke
2017-04-24  6:24   ` Hannes Reinecke
2017-04-24  7:52   ` Damien Le Moal
2017-04-27  4:22 ` Damien Le Moal
