* [RFC PATCH 0/1] dm: add clone target
@ 2019-07-09 14:15 Nikos Tsironis
  2019-07-09 14:15 ` [RFC PATCH 1/1] " Nikos Tsironis
From: Nikos Tsironis @ 2019-07-09 14:15 UTC
  To: snitzer, agk, dm-devel; +Cc: vkoukis, ntsironis, iliastsi

This patch adds the dm-clone target, which allows cloning of arbitrary
block devices.

dm-clone produces a one-to-one copy of an existing, read-only device
(origin) into a writable device (clone): It presents a virtual block
device which makes all data appear immediately, and redirects reads and
writes accordingly.

The main use case of dm-clone is to clone a potentially remote,
high-latency, read-only, archival-type block device into a writable,
fast, primary-type device for fast, low-latency I/O. The cloned device
is visible/mountable immediately and the copy of the origin device to
the clone device happens in the background, in parallel with user I/O.

For example, one could restore an application backup from a read-only
copy, accessible through a network storage protocol (NBD, Fibre Channel,
iSCSI, AoE, etc.), into a local SSD or NVMe device, and start using the
device immediately, without waiting for the restore to complete.

When the cloning completes, the dm-clone table can be removed altogether
and be replaced, e.g., by a linear table, mapping directly to the clone
device.

dm-clone is optimized for small, random writes, with size equal to
dm-clone's block/region size, e.g., 4K.

For more information regarding dm-clone's operation, please read the
attached documentation.

A preliminary test suite for dm-clone can be found at
https://github.com/arrikto/device-mapper-test-suite/tree/feature-dm-clone

Nikos Tsironis (1):
  dm: add clone target

 Documentation/device-mapper/dm-clone.rst |  334 +++++
 drivers/md/Kconfig                       |   13 +
 drivers/md/Makefile                      |    2 +
 drivers/md/dm-clone-metadata.c           |  991 +++++++++++++
 drivers/md/dm-clone-metadata.h           |  158 +++
 drivers/md/dm-clone-target.c             | 2244 ++++++++++++++++++++++++++++++
 6 files changed, 3742 insertions(+)
 create mode 100644 Documentation/device-mapper/dm-clone.rst
 create mode 100644 drivers/md/dm-clone-metadata.c
 create mode 100644 drivers/md/dm-clone-metadata.h
 create mode 100644 drivers/md/dm-clone-target.c

-- 
2.11.0


* [RFC PATCH 1/1] dm: add clone target
  2019-07-09 14:15 [RFC PATCH 0/1] dm: add clone target Nikos Tsironis
@ 2019-07-09 14:15 ` Nikos Tsironis
  2019-07-09 21:28   ` Heinz Mauelshagen
  2019-08-29 16:19   ` Mike Snitzer
From: Nikos Tsironis @ 2019-07-09 14:15 UTC
  To: snitzer, agk, dm-devel; +Cc: vkoukis, ntsironis, iliastsi

Add the dm-clone target, which allows cloning of arbitrary block
devices.

dm-clone produces a one-to-one copy of an existing, read-only device
(origin) into a writable device (clone): It presents a virtual block
device which makes all data appear immediately, and redirects reads and
writes accordingly.

The main use case of dm-clone is to clone a potentially remote,
high-latency, read-only, archival-type block device into a writable,
fast, primary-type device for fast, low-latency I/O. The cloned device
is visible/mountable immediately and the copy of the origin device to
the clone device happens in the background, in parallel with user I/O.

When the cloning completes, the dm-clone table can be removed altogether
and be replaced, e.g., by a linear table, mapping directly to the clone
device.

For further information and examples of how to use dm-clone, please read
Documentation/device-mapper/dm-clone.rst

Suggested-by: Vangelis Koukis <vkoukis@arrikto.com>
Co-developed-by: Ilias Tsitsimpis <iliastsi@arrikto.com>
Signed-off-by: Ilias Tsitsimpis <iliastsi@arrikto.com>
Signed-off-by: Nikos Tsironis <ntsironis@arrikto.com>
---
 Documentation/device-mapper/dm-clone.rst |  334 +++++
 drivers/md/Kconfig                       |   13 +
 drivers/md/Makefile                      |    2 +
 drivers/md/dm-clone-metadata.c           |  991 +++++++++++++
 drivers/md/dm-clone-metadata.h           |  158 +++
 drivers/md/dm-clone-target.c             | 2244 ++++++++++++++++++++++++++++++
 6 files changed, 3742 insertions(+)
 create mode 100644 Documentation/device-mapper/dm-clone.rst
 create mode 100644 drivers/md/dm-clone-metadata.c
 create mode 100644 drivers/md/dm-clone-metadata.h
 create mode 100644 drivers/md/dm-clone-target.c

diff --git a/Documentation/device-mapper/dm-clone.rst b/Documentation/device-mapper/dm-clone.rst
new file mode 100644
index 000000000000..948b7ce31ce3
--- /dev/null
+++ b/Documentation/device-mapper/dm-clone.rst
@@ -0,0 +1,334 @@
+.. SPDX-License-Identifier: GPL-2.0-only
+
+========
+dm-clone
+========
+
+Introduction
+============
+
+dm-clone is a device mapper target which produces a one-to-one copy of an
+existing, read-only device (origin) into a writable device (clone): It presents
+a virtual block device which makes all data appear immediately, and redirects
+reads and writes accordingly.
+
+The main use case of dm-clone is to clone a potentially remote, high-latency,
+read-only, archival-type block device into a writable, fast, primary-type device
+for fast, low-latency I/O. The cloned device is visible/mountable immediately
+and the copy of the origin device to the clone device happens in the background,
+in parallel with user I/O.
+
+For example, one could restore an application backup from a read-only copy,
+accessible through a network storage protocol (NBD, Fibre Channel, iSCSI, AoE,
+etc.), into a local SSD or NVMe device, and start using the device immediately,
+without waiting for the restore to complete.
+
+When the cloning completes, the dm-clone table can be removed altogether and be
+replaced, e.g., by a linear table, mapping directly to the clone device.
+
+The dm-clone target reuses the metadata library used by the thin-provisioning
+target.
+
+Glossary
+========
+
+   Region
+     A fixed-sized block. The unit of hydration.
+
+   Hydration
+     The process of filling a region of the clone device with data from the same
+     region of the origin device, i.e., copying the region from the origin to
+     the clone device.
+
+Once a region gets hydrated, we redirect all I/O targeting it to the clone
+device.
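+
+For example, with a region size of 8 sectors (4KB), sector 1024 of the device
+falls in region 1024 / 8 = 128. Once region 128 has been hydrated, all reads
+and writes to that sector are served by the clone device.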
+
+Design
+======
+
+Sub-devices
+-----------
+
+The target is constructed by passing three devices to it (along with other
+parameters detailed later):
+
+1. An origin device - the read-only device that gets cloned and source of the
+   hydration.
+
+2. A clone device - the destination of the hydration, which will become a clone
+   of the origin.
+
+3. A small metadata device - it records which regions/blocks are already valid
+   in the clone device, i.e., which regions have already been hydrated, or have
+   been written to directly, via user I/O.
+
+The size of the clone device must be at least equal to the size of the origin
+device.
+
+Regions
+-------
+
+dm-clone divides the origin and clone devices into fixed-sized blocks, called
+regions. Regions are the unit of hydration, i.e., the minimum amount of data
+copied from the origin to the clone device.
+
+The region size is configurable when you first create the dm-clone device. The
+recommended region size is the same as the file system block size, which is
+usually 4KB. The region size must be a power of two between 8 sectors (4KB) and
+2097152 sectors (1GB).
+
+Reads and writes from/to hydrated regions are serviced from the clone device.
+
+A read to a not yet hydrated region is serviced directly from the origin device.
+
+A write to a not-yet-hydrated region is delayed until the corresponding region
+has been hydrated; the hydration of the region starts immediately.
+
+Note that a write request with size equal to region size will skip copying of
+the corresponding region from the origin device and overwrite the region of the
+clone device directly.
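+
+For example, with a 4KB region size, a region-aligned 4KB write to a
+not-yet-hydrated region overwrites the region on the clone device directly and
+marks it as hydrated, without any copying from the origin.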
+
+Discards
+--------
+
+dm-clone interprets a discard request to a range that hasn't been hydrated yet
+as a hint to skip hydration of the regions covered by the request, i.e., it
+skips copying their data from the origin to the clone device, and only updates
+its metadata.
+
+If the clone device supports discards, then by default dm-clone will pass down
+discard requests to it.
+
+Background Hydration
+--------------------
+
+dm-clone copies continuously from the origin to the clone device, until all of
+the device has been copied.
+
+Copying data from the origin to the clone device uses bandwidth. The user can
+set a throttle to limit the amount of copying occurring at any one time.
+Moreover, dm-clone takes user I/O traffic to the devices into account and
+pauses the background hydration when there is I/O in flight.
+
+A message `hydration_threshold <#sectors>` can be used to set the maximum
+number of sectors copied at any one time; the default is 2048 sectors (1MB).
+
+dm-clone employs dm-kcopyd for copying portions of the origin device to the
+clone device. By default, we issue copy requests of size equal to the region
+size. A message `hydration_block_size <#sectors>` can be used to tune the size
+of these copy requests. Increasing the hydration block size results in dm-clone
+trying to batch together contiguous regions, so we copy the data in blocks of
+this size.
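+
+For example, to raise the hydration threshold to 4096 sectors (2MB) and to
+copy data in blocks of 1MB, one could send the following messages (the values
+are illustrative, for a dm-clone device named `clone`):
+
+  ::
+
+   dmsetup message clone 0 hydration_threshold 4096
+   dmsetup message clone 0 hydration_block_size 2048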
+
+When the hydration of the clone device finishes, a dm event will be sent to user
+space.
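+
+User space can block waiting for this event using dmsetup's event interface,
+e.g. (a sketch; in practice the device's current event number should be passed
+to `dmsetup wait`, so the event is not missed):
+
+  ::
+
+   dmsetup wait clone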
+
+Updating on-disk metadata
+-------------------------
+
+On-disk metadata is committed every time a FLUSH or FUA bio is written. If no
+such requests are made then commits will occur every second. This means the
+dm-clone device behaves like a physical disk that has a volatile write cache. If
+power is lost you may lose some recent writes. The metadata should always be
+consistent in spite of any crash.
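+
+For example, calling fsync(2) on an open file descriptor of the dm-clone
+device issues a FLUSH, forcing a commit of any pending metadata updates.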
+
+Target Interface
+================
+
+Constructor
+-----------
+
+  ::
+
+   clone <metadata dev> <clone dev> <origin dev> <region size>
+         [<#feature args> [<feature arg>]* [<#core args> [<core arg>]*]]
+
+ =============== ==============================================================
+ metadata dev    Fast device holding the persistent metadata
+ clone dev       The destination device, where the origin will be cloned
+ origin dev      Read only device containing the data that gets cloned
+ region size     The size of a region in sectors
+
+ #feature args   Number of feature arguments passed
+ feature args    no_hydration or no_discard_passdown
+
+ #core args      An even number of arguments corresponding to key/value pairs
+                 passed to dm-clone
+ core args       Key/value pairs passed to dm-clone, e.g. `hydration_threshold
+                 2048`
+ =============== ==============================================================
+
+Optional feature arguments are:
+
+ ==================== =========================================================
+ no_hydration         Create a dm-clone instance with background hydration
+                      disabled
+ no_discard_passdown  Disable passing down discards to the clone device
+ ==================== =========================================================
+
+Optional core arguments are:
+
+ ================================ ==============================================
+ hydration_threshold <#sectors>   Maximum number of sectors being copied from
+                                  the origin to the clone device at any one
+                                  time, during background hydration.
+ hydration_block_size <#sectors>  During background hydration, try to batch
+                                  together contiguous regions, so we copy data
+                                  from the origin to the clone device in blocks
+                                  of this size.
+ ================================ ==============================================
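+
+For example, the following command (with illustrative device names and size)
+creates a clone with discard passdown disabled and a hydration threshold of
+4096 sectors (2MB):
+
+  ::
+
+   dmsetup create clone --table "0 1048576000 clone $metadata_dev $clone_dev \
+     $origin_dev 8 1 no_discard_passdown 2 hydration_threshold 4096"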
+
+Status
+------
+
+  ::
+
+   <metadata block size> <#used metadata blocks>/<#total metadata blocks>
+   <region size> <#hydrated regions>/<#total regions> <#hydrating regions>
+   <#feature args> <feature args>* <#core args> <core args>*
+   <clone metadata mode>
+
+ ======================= =======================================================
+ metadata block size     Fixed block size for each metadata block in sectors
+ #used metadata blocks   Number of metadata blocks used
+ #total metadata blocks  Total number of metadata blocks
+ region size             Configurable region size for the device in sectors
+ #hydrated regions       Number of regions that have finished hydrating
+ #total regions          Total number of regions to hydrate
+ #hydrating regions      Number of regions currently hydrating
+ #feature args           Number of feature arguments to follow
+ feature args            Feature arguments, e.g. `no_hydration`
+ #core args              Even number of core arguments to follow
+ core args               Key/value pairs for tuning the core, e.g.
+                         `hydration_threshold 2048`
+ clone metadata mode     ro if read-only, rw if read-write
+
+                         In serious cases where even a read-only mode is deemed
+                         unsafe no further I/O will be permitted and the status
+                         will just contain the string 'Fail'. If the metadata
+                         mode changes, a dm event will be sent to user space.
+ ======================= =======================================================
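+
+For example, the status of a partially hydrated device might look as follows
+(the field values are illustrative):
+
+  ::
+
+   8 45/262144 8 65536000/131072000 16 0 4 hydration_threshold 2048 hydration_block_size 8 rw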
+
+Messages
+--------
+
+  `disable_hydration`
+      Disable the background hydration of the clone device.
+
+  `enable_hydration`
+      Enable the background hydration of the clone device.
+
+  `hydration_threshold <#sectors>`
+      Set background hydration threshold.
+
+  `hydration_block_size <#sectors>`
+      Set background hydration block size.
+
+Examples
+========
+
+Clone a device containing a file system
+---------------------------------------
+
+1. Create the dm-clone device.
+
+   ::
+
+    dmsetup create clone --table "0 1048576000 clone $metadata_dev $clone_dev \
+      $origin_dev 8 1 no_hydration"
+
+2. Mount the device and trim the file system. dm-clone interprets the discards
+   sent by the file system, so it will not hydrate the unused space.
+
+   ::
+
+    mount /dev/mapper/clone /mnt/cloned-fs
+    fstrim /mnt/cloned-fs
+
+3. Enable background hydration of the clone device.
+
+   ::
+
+    dmsetup message clone 0 enable_hydration
+
+4. When the hydration finishes, we can replace the dm-clone table with a linear
+   table.
+
+   ::
+
+    dmsetup suspend clone
+    dmsetup load clone --table "0 1048576000 linear $clone_dev 0"
+    dmsetup resume clone
+
+   The metadata device is no longer needed and can be safely discarded or reused
+   for other purposes.
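+
+To monitor the progress of the hydration, one can periodically poll the status
+line and watch the `<#hydrated regions>/<#total regions>` counters, e.g.:
+
+  ::
+
+   watch dmsetup status clone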
+
+Known issues
+============
+
+1. We redirect reads to not-yet-cloned regions to the origin device. If
+   reading the origin device has high latency and the user repeatedly reads from
+   the same regions, this behaviour could degrade performance. We should use
+   these reads as hints to hydrate the relevant regions sooner. Currently, we
+   rely on the page cache to cache these regions, so we hopefully don't end up
+   reading them multiple times from the origin.
+
+2. Release in-core resources, i.e., the bitmaps tracking which blocks are
+   cloned, after the cloning has finished.
+
+3. During background hydration, if we fail to read the origin or write to the
+   clone device, we print an error message, but the cloning process continues
+   indefinitely, until it succeeds. We should stop the background hydration
+   after a number of failures and emit a dm event for user space to notice.
+
+Why not...?
+===========
+
+We explored the following alternatives before implementing dm-clone:
+
+1. Use dm-cache with cache size equal to the origin and implement a new cloning
+   policy:
+
+   * The resulting cache device is not a one-to-one mirror of the origin device
+     and thus we cannot remove the cache device once cloning completes.
+
+   * dm-cache writes to the origin device, which violates our requirement that
+     the origin device must be treated as read-only.
+
+   * Caching is semantically different from cloning.
+
+2. Use dm-snapshot with a COW device as large as the origin:
+
+   * dm-snapshot stores its metadata in the COW device, so the resulting device
+     is not a one-to-one mirror of the origin.
+
+   * No background copying mechanism.
+
+   * dm-snapshot needs to commit its metadata whenever a pending exception
+     completes, to ensure snapshot consistency. In the case of cloning, we don't
+     need to be so strict and can rely on committing metadata every time a FLUSH
+     or FUA bio is written, or periodically, like dm-thin and dm-cache do. This
+     improves the performance significantly.
+
+3. Use dm-mirror: The mirror target has a background copying/mirroring
+   mechanism, but it writes to all mirrors, thus violating our requirement that
+   the origin device must be treated as read-only.
+
+4. Use dm-thin's external snapshot functionality. This approach is the most
+   promising among all alternatives, as the thinly-provisioned volume is a
+   one-to-one mirror of the origin and handles reads and writes to
+   un-provisioned/not-yet-cloned areas the same way as dm-clone does.
+
+   Still:
+
+   * There is no background copying mechanism, though one could be implemented.
+
+   * Most importantly, we want to support arbitrary block devices as the
+     destination of the cloning process and not restrict ourselves to
+     thinly-provisioned volumes. Thin-provisioning has an inherent metadata
+     overhead, for maintaining the thin volume mappings, which significantly
+     degrades performance.
+
+   Moreover, cloning a device shouldn't force the use of thin-provisioning. On
+   the other hand, if we wish to use thin provisioning, we can just use a thin
+   LV as dm-clone's destination/clone device.
diff --git a/drivers/md/Kconfig b/drivers/md/Kconfig
index 45254b3ef715..15e6dedf24ea 100644
--- a/drivers/md/Kconfig
+++ b/drivers/md/Kconfig
@@ -346,6 +346,19 @@ config DM_ERA
          over time.  Useful for maintaining cache coherency when using
          vendor snapshots.
 
+config DM_CLONE
+       tristate "Clone target (EXPERIMENTAL)"
+       depends on BLK_DEV_DM
+       default n
+       select DM_PERSISTENT_DATA
+       ---help---
+         dm-clone produces a one-to-one copy of an existing, read-only device
+         (origin) into a writable device (clone). The cloned device is
+         visible/mountable immediately and the copy of the origin device to the
+         clone device happens in the background, in parallel with user I/O.
+
+         If unsure, say N.
+
 config DM_MIRROR
        tristate "Mirror target"
        depends on BLK_DEV_DM
diff --git a/drivers/md/Makefile b/drivers/md/Makefile
index be7a6eb92abc..b3296e3a7116 100644
--- a/drivers/md/Makefile
+++ b/drivers/md/Makefile
@@ -18,6 +18,7 @@ dm-cache-y	+= dm-cache-target.o dm-cache-metadata.o dm-cache-policy.o \
 		    dm-cache-background-tracker.o
 dm-cache-smq-y   += dm-cache-policy-smq.o
 dm-era-y	+= dm-era-target.o
+dm-clone-y	+= dm-clone-target.o dm-clone-metadata.o
 dm-verity-y	+= dm-verity-target.o
 md-mod-y	+= md.o md-bitmap.o
 raid456-y	+= raid5.o raid5-cache.o raid5-ppl.o
@@ -65,6 +66,7 @@ obj-$(CONFIG_DM_VERITY)		+= dm-verity.o
 obj-$(CONFIG_DM_CACHE)		+= dm-cache.o
 obj-$(CONFIG_DM_CACHE_SMQ)	+= dm-cache-smq.o
 obj-$(CONFIG_DM_ERA)		+= dm-era.o
+obj-$(CONFIG_DM_CLONE)		+= dm-clone.o
 obj-$(CONFIG_DM_LOG_WRITES)	+= dm-log-writes.o
 obj-$(CONFIG_DM_INTEGRITY)	+= dm-integrity.o
 obj-$(CONFIG_DM_ZONED)		+= dm-zoned.o
diff --git a/drivers/md/dm-clone-metadata.c b/drivers/md/dm-clone-metadata.c
new file mode 100644
index 000000000000..db2f86d8356b
--- /dev/null
+++ b/drivers/md/dm-clone-metadata.c
@@ -0,0 +1,991 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2019 Arrikto, Inc. All Rights Reserved.
+ */
+
+#include <linux/err.h>
+#include <linux/slab.h>
+#include <linux/rwsem.h>
+#include <linux/bitops.h>
+#include <linux/bitmap.h>
+#include <linux/device-mapper.h>
+
+#include "persistent-data/dm-bitset.h"
+#include "persistent-data/dm-space-map.h"
+#include "persistent-data/dm-block-manager.h"
+#include "persistent-data/dm-transaction-manager.h"
+
+#include "dm-clone-metadata.h"
+
+#define DM_MSG_PREFIX "clone metadata"
+
+#define SUPERBLOCK_LOCATION 0
+#define SUPERBLOCK_MAGIC 0x8af27f64
+#define SUPERBLOCK_CSUM_XOR 257649492
+
+#define DM_CLONE_MAX_CONCURRENT_LOCKS 5
+
+#define UUID_LEN 16
+
+/* Min and max dm-clone metadata versions supported */
+#define DM_CLONE_MIN_METADATA_VERSION 1
+#define DM_CLONE_MAX_METADATA_VERSION 1
+
+/*
+ * On-disk metadata layout
+ */
+struct superblock_disk {
+	__le32 csum;
+	__le32 flags;
+	__le64 blocknr;
+
+	__u8 uuid[UUID_LEN];
+	__le64 magic;
+	__le32 version;
+
+	__u8 metadata_space_map_root[SPACE_MAP_ROOT_SIZE];
+
+	__le64 region_size;
+	__le64 target_size;
+
+	__le64 bitset_root;
+} __packed;
+
+/*
+ * Region and Dirty bitmaps.
+ *
+ * dm-clone logically splits the origin and clone devices in regions of fixed
+ * size. The clone device's regions are gradually hydrated, i.e., we copy
+ * (clone) the origin's regions to the clone device. Eventually, all regions
+ * will get hydrated and all I/O will be served from the clone device.
+ *
+ * We maintain an on-disk bitmap which tracks the state of each of the clone
+ * device's regions, i.e., whether they are hydrated or not.
+ *
+ * To save constantly doing look ups on disk we keep an in core copy of the
+ * on-disk bitmap, the region_map.
+ *
+ * To further reduce metadata I/O overhead we use a second bitmap, the dmap
+ * (dirty bitmap), which tracks the dirty words, i.e. longs, of the region_map.
+ *
+ * When a region finishes hydrating dm-clone calls
+ * dm_clone_set_region_hydrated(), or for discard requests
+ * dm_clone_cond_set_range(), which sets the corresponding bits in region_map
+ * and dmap.
+ *
+ * During a metadata commit we scan the dmap for dirty region_map words (longs)
+ * and update accordingly the on-disk metadata. Thus, we don't have to flush to
+ * disk the whole region_map. We can just flush the dirty region_map words.
+ *
+ * We use a dirty bitmap, which is smaller than the original region_map, to
+ * reduce the amount of memory accesses during a metadata commit. As dm-bitset
+ * accesses the on-disk bitmap in 64-bit word granularity, there is no
+ * significant benefit in tracking the dirty region_map bits with a smaller
+ * granularity.
+ *
+ * We could update directly the on-disk bitmap, when dm-clone calls either
+ * dm_clone_set_region_hydrated() or dm_clone_cond_set_range(), but this would
+ * insert significant metadata I/O overhead in dm-clone's I/O path. Also, as
+ * these two functions don't block, we can call them in interrupt context,
+ * e.g., in a hooked overwrite bio's completion routine, and further reduce the
+ * I/O completion latency.
+ *
+ * We maintain two dirty bitmaps. During a metadata commit we atomically swap
+ * the currently used dmap with the unused one. This allows the metadata update
+ * functions to run concurrently with an ongoing commit.
+ */
+struct dirty_map {
+	unsigned long *dirty_words;
+	unsigned int changed;
+};
+
+struct dm_clone_metadata {
+	/* The metadata block device */
+	struct block_device *bdev;
+
+	sector_t target_size;
+	sector_t region_size;
+	unsigned long nr_regions;
+	unsigned long nr_words;
+
+	/* Spinlock protecting the region and dirty bitmaps. */
+	spinlock_t bitmap_lock;
+	struct dirty_map dmap[2];
+	struct dirty_map *current_dmap;
+
+	/*
+	 * In core copy of the on-disk bitmap to save constantly doing look ups
+	 * on disk.
+	 */
+	unsigned long *region_map;
+
+	/* Protected by bitmap_lock */
+	unsigned int read_only;
+
+	struct dm_block_manager *bm;
+	struct dm_space_map *sm;
+	struct dm_transaction_manager *tm;
+
+	struct rw_semaphore lock;
+
+	struct dm_disk_bitset bitset_info;
+	dm_block_t bitset_root;
+
+	/*
+	 * Reading the space map root can fail, so we read it into this
+	 * buffer before the superblock is locked and updated.
+	 */
+	__u8 metadata_space_map_root[SPACE_MAP_ROOT_SIZE];
+
+	bool hydration_done:1;
+	bool fail_io:1;
+};
+
+/*---------------------------------------------------------------------------*/
+
+/*
+ * Superblock validation.
+ */
+static void sb_prepare_for_write(struct dm_block_validator *v,
+				 struct dm_block *b, size_t sb_block_size)
+{
+	struct superblock_disk *sb;
+	u32 csum;
+
+	sb = dm_block_data(b);
+	sb->blocknr = cpu_to_le64(dm_block_location(b));
+
+	csum = dm_bm_checksum(&sb->flags, sb_block_size - sizeof(__le32),
+			      SUPERBLOCK_CSUM_XOR);
+	sb->csum = cpu_to_le32(csum);
+}
+
+static int sb_check(struct dm_block_validator *v, struct dm_block *b,
+		    size_t sb_block_size)
+{
+	struct superblock_disk *sb;
+	u32 csum, metadata_version;
+
+	sb = dm_block_data(b);
+
+	if (dm_block_location(b) != le64_to_cpu(sb->blocknr)) {
+		DMERR("Superblock check failed: blocknr %llu, expected %llu",
+		      le64_to_cpu(sb->blocknr),
+		      (unsigned long long)dm_block_location(b));
+		return -ENOTBLK;
+	}
+
+	if (le64_to_cpu(sb->magic) != SUPERBLOCK_MAGIC) {
+		DMERR("Superblock check failed: magic %llu, expected %llu",
+		      le64_to_cpu(sb->magic),
+		      (unsigned long long)SUPERBLOCK_MAGIC);
+		return -EILSEQ;
+	}
+
+	csum = dm_bm_checksum(&sb->flags, sb_block_size - sizeof(__le32),
+			      SUPERBLOCK_CSUM_XOR);
+
+	if (sb->csum != cpu_to_le32(csum)) {
+		DMERR("Superblock check failed: checksum %u, expected %u",
+		      csum, le32_to_cpu(sb->csum));
+		return -EILSEQ;
+	}
+
+	/* Check metadata version */
+	metadata_version = le32_to_cpu(sb->version);
+
+	if (metadata_version < DM_CLONE_MIN_METADATA_VERSION ||
+	    metadata_version > DM_CLONE_MAX_METADATA_VERSION) {
+		DMERR("Clone metadata version %u found, but only versions between %u and %u supported.",
+		      metadata_version, DM_CLONE_MIN_METADATA_VERSION,
+		      DM_CLONE_MAX_METADATA_VERSION);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static struct dm_block_validator sb_validator = {
+	.name = "superblock",
+	.prepare_for_write = sb_prepare_for_write,
+	.check = sb_check
+};
+
+/*
+ * Check if the superblock is formatted or not. We consider the superblock to
+ * be formatted if we find non-zero bytes in it.
+ */
+static int __superblock_all_zeroes(struct dm_block_manager *bm, bool *formatted)
+{
+	int r;
+	unsigned int i, nr_words;
+	struct dm_block *sblock;
+	__le64 *data_le, zero = cpu_to_le64(0);
+
+	/*
+	 * We don't use a validator here because the superblock could be all
+	 * zeroes.
+	 */
+	r = dm_bm_read_lock(bm, SUPERBLOCK_LOCATION, NULL, &sblock);
+
+	if (r) {
+		DMERR("Failed to read_lock superblock");
+		return r;
+	}
+
+	data_le = dm_block_data(sblock);
+	*formatted = false;
+
+	/* This assumes that the block size is a multiple of 8 bytes */
+	BUG_ON(dm_bm_block_size(bm) % sizeof(__le64));
+	nr_words = dm_bm_block_size(bm) / sizeof(__le64);
+	for (i = 0; i < nr_words; i++) {
+		if (data_le[i] != zero) {
+			*formatted = true;
+			break;
+		}
+	}
+
+	dm_bm_unlock(sblock);
+
+	return 0;
+}
+
+/*---------------------------------------------------------------------------*/
+
+/*
+ * Low-level metadata handling.
+ */
+static inline int superblock_read_lock(struct dm_clone_metadata *md,
+				       struct dm_block **sblock)
+{
+	return dm_bm_read_lock(md->bm, SUPERBLOCK_LOCATION, &sb_validator, sblock);
+}
+
+static inline int superblock_write_lock(struct dm_clone_metadata *md,
+					struct dm_block **sblock)
+{
+	return dm_bm_write_lock(md->bm, SUPERBLOCK_LOCATION, &sb_validator, sblock);
+}
+
+static inline int superblock_write_lock_zero(struct dm_clone_metadata *md,
+					     struct dm_block **sblock)
+{
+	return dm_bm_write_lock_zero(md->bm, SUPERBLOCK_LOCATION, &sb_validator, sblock);
+}
+
+static int __copy_sm_root(struct dm_clone_metadata *md)
+{
+	int r;
+	size_t root_size;
+
+	r = dm_sm_root_size(md->sm, &root_size);
+
+	if (r)
+		return r;
+
+	return dm_sm_copy_root(md->sm, &md->metadata_space_map_root, root_size);
+}
+
+/* Save dm-clone metadata in superblock */
+static void __prepare_superblock(struct dm_clone_metadata *md,
+				 struct superblock_disk *sb)
+{
+	sb->flags = cpu_to_le32(0UL);
+
+	/* FIXME: UUID is currently unused */
+	memset(sb->uuid, 0, sizeof(sb->uuid));
+
+	sb->magic = cpu_to_le64(SUPERBLOCK_MAGIC);
+	sb->version = cpu_to_le32(DM_CLONE_MAX_METADATA_VERSION);
+
+	/* Save the metadata space_map root */
+	memcpy(&sb->metadata_space_map_root, &md->metadata_space_map_root,
+	       sizeof(md->metadata_space_map_root));
+
+	sb->region_size = cpu_to_le64(md->region_size);
+	sb->target_size = cpu_to_le64(md->target_size);
+	sb->bitset_root = cpu_to_le64(md->bitset_root);
+}
+
+static int __open_metadata(struct dm_clone_metadata *md)
+{
+	int r;
+	struct dm_block *sblock;
+	struct superblock_disk *sb;
+
+	r = superblock_read_lock(md, &sblock);
+
+	if (r) {
+		DMERR("Failed to read_lock superblock");
+		return r;
+	}
+
+	sb = dm_block_data(sblock);
+
+	/* Verify that target_size and region_size haven't changed. */
+	if (md->region_size != le64_to_cpu(sb->region_size) ||
+	    md->target_size != le64_to_cpu(sb->target_size)) {
+		DMERR("Region and/or target size don't match the ones in metadata");
+		r = -EINVAL;
+		goto out_with_lock;
+	}
+
+	r = dm_tm_open_with_sm(md->bm, SUPERBLOCK_LOCATION,
+			       sb->metadata_space_map_root,
+			       sizeof(sb->metadata_space_map_root),
+			       &md->tm, &md->sm);
+
+	if (r) {
+		DMERR("dm_tm_open_with_sm failed");
+		goto out_with_lock;
+	}
+
+	dm_disk_bitset_init(md->tm, &md->bitset_info);
+	md->bitset_root = le64_to_cpu(sb->bitset_root);
+
+out_with_lock:
+	dm_bm_unlock(sblock);
+
+	return r;
+}
+
+static int __format_metadata(struct dm_clone_metadata *md)
+{
+	int r;
+	struct dm_block *sblock;
+	struct superblock_disk *sb;
+
+	r = dm_tm_create_with_sm(md->bm, SUPERBLOCK_LOCATION, &md->tm, &md->sm);
+
+	if (r) {
+		DMERR("Failed to create transaction manager");
+		return r;
+	}
+
+	dm_disk_bitset_init(md->tm, &md->bitset_info);
+
+	r = dm_bitset_empty(&md->bitset_info, &md->bitset_root);
+
+	if (r) {
+		DMERR("Failed to create empty on-disk bitset");
+		goto err_with_tm;
+	}
+
+	r = dm_bitset_resize(&md->bitset_info, md->bitset_root, 0,
+			     md->nr_regions, false, &md->bitset_root);
+
+	if (r) {
+		DMERR("Failed to resize on-disk bitset to %lu entries", md->nr_regions);
+		goto err_with_tm;
+	}
+
+	/* Flush to disk all blocks, except the superblock */
+	r = dm_tm_pre_commit(md->tm);
+
+	if (r) {
+		DMERR("dm_tm_pre_commit failed");
+		goto err_with_tm;
+	}
+
+	r = __copy_sm_root(md);
+
+	if (r) {
+		DMERR("__copy_sm_root failed");
+		goto err_with_tm;
+	}
+
+	r = superblock_write_lock_zero(md, &sblock);
+
+	if (r) {
+		DMERR("Failed to write_lock superblock");
+		goto err_with_tm;
+	}
+
+	sb = dm_block_data(sblock);
+	__prepare_superblock(md, sb);
+	r = dm_tm_commit(md->tm, sblock);
+
+	if (r) {
+		DMERR("Failed to commit superblock");
+		goto err_with_tm;
+	}
+
+	return 0;
+
+err_with_tm:
+	dm_sm_destroy(md->sm);
+	dm_tm_destroy(md->tm);
+
+	return r;
+}
+
+static int __open_or_format_metadata(struct dm_clone_metadata *md, bool may_format_device)
+{
+	int r;
+	bool formatted = false;
+
+	r = __superblock_all_zeroes(md->bm, &formatted);
+
+	if (r)
+		return r;
+
+	if (!formatted)
+		return may_format_device ? __format_metadata(md) : -EPERM;
+
+	return __open_metadata(md);
+}
+
+static int __create_persistent_data_structures(struct dm_clone_metadata *md,
+					       bool may_format_device)
+{
+	int r;
+
+	/* Create block manager */
+	md->bm = dm_block_manager_create(md->bdev,
+					 DM_CLONE_METADATA_BLOCK_SIZE << SECTOR_SHIFT,
+					 DM_CLONE_MAX_CONCURRENT_LOCKS);
+
+	if (IS_ERR(md->bm)) {
+		DMERR("Failed to create block manager");
+		return PTR_ERR(md->bm);
+	}
+
+	r = __open_or_format_metadata(md, may_format_device);
+
+	if (r)
+		dm_block_manager_destroy(md->bm);
+
+	return r;
+}
+
+static void __destroy_persistent_data_structures(struct dm_clone_metadata *md)
+{
+	dm_sm_destroy(md->sm);
+	dm_tm_destroy(md->tm);
+	dm_block_manager_destroy(md->bm);
+}
+
+/*---------------------------------------------------------------------------*/
+
+static size_t bitmap_size(unsigned long nr_bits)
+{
+	return BITS_TO_LONGS(nr_bits) * sizeof(long);
+}
+
+static int dirty_map_init(struct dm_clone_metadata *md)
+{
+	md->dmap[0].changed = 0;
+	md->dmap[0].dirty_words = vzalloc(bitmap_size(md->nr_words));
+
+	if (!md->dmap[0].dirty_words) {
+		DMERR("Failed to allocate dirty bitmap");
+		return -ENOMEM;
+	}
+
+	md->dmap[1].changed = 0;
+	md->dmap[1].dirty_words = vzalloc(bitmap_size(md->nr_words));
+
+	if (!md->dmap[1].dirty_words) {
+		DMERR("Failed to allocate dirty bitmap");
+		vfree(md->dmap[0].dirty_words);
+		return -ENOMEM;
+	}
+
+	md->current_dmap = &md->dmap[0];
+
+	return 0;
+}
+
+static void dirty_map_exit(struct dm_clone_metadata *md)
+{
+	vfree(md->dmap[0].dirty_words);
+	vfree(md->dmap[1].dirty_words);
+}
+
+static int __load_bitset_in_core(struct dm_clone_metadata *md)
+{
+	int r;
+	unsigned long i;
+	struct dm_bitset_cursor c;
+
+	/* Flush bitset cache */
+	r = dm_bitset_flush(&md->bitset_info, md->bitset_root, &md->bitset_root);
+
+	if (r)
+		return r;
+
+	r = dm_bitset_cursor_begin(&md->bitset_info, md->bitset_root, md->nr_regions, &c);
+
+	if (r)
+		return r;
+
+	for (i = 0; ; i++) {
+		if (dm_bitset_cursor_get_value(&c))
+			__set_bit(i, md->region_map);
+		else
+			__clear_bit(i, md->region_map);
+
+		if (i >= (md->nr_regions - 1))
+			break;
+
+		r = dm_bitset_cursor_next(&c);
+
+		if (r)
+			break;
+	}
+
+	dm_bitset_cursor_end(&c);
+
+	return r;
+}
+
+struct dm_clone_metadata *dm_clone_metadata_open(struct block_device *bdev,
+						 sector_t target_size,
+						 sector_t region_size)
+{
+	int r;
+	struct dm_clone_metadata *md;
+
+	md = kzalloc(sizeof(*md), GFP_KERNEL);
+
+	if (!md) {
+		DMERR("Failed to allocate memory for dm-clone metadata");
+		return ERR_PTR(-ENOMEM);
+	}
+
+	md->bdev = bdev;
+	md->target_size = target_size;
+	md->region_size = region_size;
+	md->nr_regions = dm_sector_div_up(md->target_size, md->region_size);
+	md->nr_words = BITS_TO_LONGS(md->nr_regions);
+
+	init_rwsem(&md->lock);
+	spin_lock_init(&md->bitmap_lock);
+	md->read_only = 0;
+	md->fail_io = false;
+	md->hydration_done = false;
+
+	md->region_map = vmalloc(bitmap_size(md->nr_regions));
+
+	if (!md->region_map) {
+		DMERR("Failed to allocate memory for region bitmap");
+		r = -ENOMEM;
+		goto out_with_md;
+	}
+
+	r = __create_persistent_data_structures(md, true);
+
+	if (r)
+		goto out_with_region_map;
+
+	r = __load_bitset_in_core(md);
+
+	if (r) {
+		DMERR("Failed to load on-disk region map");
+		goto out_with_pds;
+	}
+
+	r = dirty_map_init(md);
+
+	if (r)
+		goto out_with_pds;
+
+	if (bitmap_full(md->region_map, md->nr_regions))
+		md->hydration_done = true;
+
+	return md;
+
+out_with_pds:
+	__destroy_persistent_data_structures(md);
+
+out_with_region_map:
+	vfree(md->region_map);
+
+out_with_md:
+	kfree(md);
+
+	return ERR_PTR(r);
+}
+
+void dm_clone_metadata_close(struct dm_clone_metadata *md)
+{
+	if (!md->fail_io)
+		__destroy_persistent_data_structures(md);
+
+	dirty_map_exit(md);
+	vfree(md->region_map);
+	kfree(md);
+}
+
+bool dm_clone_is_hydration_done(struct dm_clone_metadata *md)
+{
+	return md->hydration_done;
+}
+
+bool dm_clone_is_region_hydrated(struct dm_clone_metadata *md, unsigned long region_nr)
+{
+	return dm_clone_is_hydration_done(md) || test_bit(region_nr, md->region_map);
+}
+
+bool dm_clone_is_range_hydrated(struct dm_clone_metadata *md,
+				unsigned long start, unsigned long nr_regions)
+{
+	unsigned long bit;
+
+	if (dm_clone_is_hydration_done(md))
+		return true;
+
+	bit = find_next_zero_bit(md->region_map, md->nr_regions, start);
+
+	return (bit >= (start + nr_regions));
+}
+
+unsigned long dm_clone_nr_of_hydrated_regions(struct dm_clone_metadata *md)
+{
+	return bitmap_weight(md->region_map, md->nr_regions);
+}
+
+unsigned long dm_clone_find_next_unhydrated_region(struct dm_clone_metadata *md,
+						   unsigned long start)
+{
+	return find_next_zero_bit(md->region_map, md->nr_regions, start);
+}
+
+static int __update_metadata_word(struct dm_clone_metadata *md, unsigned long word)
+{
+	int r;
+	unsigned long index = word * BITS_PER_LONG;
+	unsigned long max_index = min(md->nr_regions, (word + 1) * BITS_PER_LONG);
+
+	while (index < max_index) {
+		if (test_bit(index, md->region_map)) {
+			r = dm_bitset_set_bit(&md->bitset_info, md->bitset_root,
+					      index, &md->bitset_root);
+
+			if (r) {
+				DMERR("dm_bitset_set_bit failed");
+				return r;
+			}
+		}
+		index++;
+	}
+
+	return 0;
+}
+
+static int __metadata_commit(struct dm_clone_metadata *md)
+{
+	int r;
+	struct dm_block *sblock;
+	struct superblock_disk *sb;
+
+	/* Flush bitset cache */
+	r = dm_bitset_flush(&md->bitset_info, md->bitset_root, &md->bitset_root);
+
+	if (r) {
+		DMERR("dm_bitset_flush failed");
+		return r;
+	}
+
+	/* Flush to disk all blocks, except the superblock */
+	r = dm_tm_pre_commit(md->tm);
+
+	if (r) {
+		DMERR("dm_tm_pre_commit failed");
+		return r;
+	}
+
+	/* Save the space map root in md->metadata_space_map_root */
+	r = __copy_sm_root(md);
+
+	if (r) {
+		DMERR("__copy_sm_root failed");
+		return r;
+	}
+
+	/* Lock the superblock */
+	r = superblock_write_lock_zero(md, &sblock);
+
+	if (r) {
+		DMERR("Failed to write_lock superblock");
+		return r;
+	}
+
+	/* Save the metadata in superblock */
+	sb = dm_block_data(sblock);
+	__prepare_superblock(md, sb);
+
+	/* Unlock superblock and commit it to disk */
+	r = dm_tm_commit(md->tm, sblock);
+
+	if (r) {
+		DMERR("Failed to commit superblock");
+		return r;
+	}
+
+	/*
+	 * FIXME: Find a more efficient way to check if the hydration is done.
+	 */
+	if (bitmap_full(md->region_map, md->nr_regions))
+		md->hydration_done = true;
+
+	return 0;
+}
+
+static int __flush_dmap(struct dm_clone_metadata *md, struct dirty_map *dmap)
+{
+	int r;
+	unsigned long word, flags;
+
+	word = 0;
+	do {
+		word = find_next_bit(dmap->dirty_words, md->nr_words, word);
+
+		if (word == md->nr_words)
+			break;
+
+		r = __update_metadata_word(md, word);
+
+		if (r)
+			return r;
+
+		__clear_bit(word, dmap->dirty_words);
+		word++;
+	} while (word < md->nr_words);
+
+	r = __metadata_commit(md);
+
+	if (r)
+		return r;
+
+	/* Update the changed flag */
+	spin_lock_irqsave(&md->bitmap_lock, flags);
+	dmap->changed = 0;
+	spin_unlock_irqrestore(&md->bitmap_lock, flags);
+
+	return 0;
+}
+
+int dm_clone_metadata_commit(struct dm_clone_metadata *md)
+{
+	int r = -EPERM;
+	unsigned long flags;
+	struct dirty_map *dmap, *next_dmap;
+
+	down_write(&md->lock);
+
+	if (md->fail_io || dm_bm_is_read_only(md->bm))
+		goto out;
+
+	/* Get current dirty bitmap */
+	dmap = md->current_dmap;
+
+	/* Get next dirty bitmap */
+	next_dmap = (dmap == &md->dmap[0]) ? &md->dmap[1] : &md->dmap[0];
+
+	/*
+	 * The last commit failed, so we don't have a clean dirty-bitmap to
+	 * use.
+	 */
+	if (WARN_ON(next_dmap->changed)) {
+		r = -EINVAL;
+		goto out;
+	}
+
+	/* Swap dirty bitmaps */
+	spin_lock_irqsave(&md->bitmap_lock, flags);
+	md->current_dmap = next_dmap;
+	spin_unlock_irqrestore(&md->bitmap_lock, flags);
+
+	/*
+	 * No one is accessing the old dirty bitmap anymore, so we can flush
+	 * it.
+	 */
+	r = __flush_dmap(md, dmap);
+out:
+	up_write(&md->lock);
+
+	return r;
+}
+
+int dm_clone_set_region_hydrated(struct dm_clone_metadata *md, unsigned long region_nr)
+{
+	int r = 0;
+	struct dirty_map *dmap;
+	unsigned long word, flags;
+
+	word = region_nr / BITS_PER_LONG;
+
+	spin_lock_irqsave(&md->bitmap_lock, flags);
+
+	if (md->read_only) {
+		r = -EPERM;
+		goto out;
+	}
+
+	dmap = md->current_dmap;
+
+	__set_bit(word, dmap->dirty_words);
+	__set_bit(region_nr, md->region_map);
+	dmap->changed = 1;
+
+out:
+	spin_unlock_irqrestore(&md->bitmap_lock, flags);
+
+	return r;
+}
+
+int dm_clone_cond_set_range(struct dm_clone_metadata *md, unsigned long start,
+			    unsigned long nr_regions)
+{
+	int r = 0;
+	struct dirty_map *dmap;
+	unsigned long word, region_nr, flags;
+
+	spin_lock_irqsave(&md->bitmap_lock, flags);
+
+	if (md->read_only) {
+		r = -EPERM;
+		goto out;
+	}
+
+	dmap = md->current_dmap;
+	for (region_nr = start; region_nr < (start + nr_regions); region_nr++) {
+		if (!test_bit(region_nr, md->region_map)) {
+			word = region_nr / BITS_PER_LONG;
+			__set_bit(word, dmap->dirty_words);
+			__set_bit(region_nr, md->region_map);
+			dmap->changed = 1;
+		}
+	}
+
+out:
+	spin_unlock_irqrestore(&md->bitmap_lock, flags);
+
+	return r;
+}
+
+/*
+ * WARNING: This must not be called concurrently with either
+ * dm_clone_set_region_hydrated() or dm_clone_cond_set_range(), as it changes
+ * md->region_map without taking the md->bitmap_lock spinlock. The only
+ * exception is after setting the metadata to read-only mode, using
+ * dm_clone_metadata_set_read_only().
+ *
+ * We don't take the spinlock because __load_bitset_in_core() does I/O, so it
+ * may block.
+ */
+int dm_clone_reload_in_core_bitset(struct dm_clone_metadata *md)
+{
+	int r = -EINVAL;
+
+	down_write(&md->lock);
+
+	if (md->fail_io)
+		goto out;
+
+	r = __load_bitset_in_core(md);
+
+out:
+	up_write(&md->lock);
+
+	return r;
+}
+
+bool dm_clone_changed_this_transaction(struct dm_clone_metadata *md)
+{
+	bool r;
+	unsigned long flags;
+
+	spin_lock_irqsave(&md->bitmap_lock, flags);
+	r = md->dmap[0].changed || md->dmap[1].changed;
+	spin_unlock_irqrestore(&md->bitmap_lock, flags);
+
+	return r;
+}
+
+int dm_clone_metadata_abort(struct dm_clone_metadata *md)
+{
+	int r = -EPERM;
+
+	down_write(&md->lock);
+
+	if (md->fail_io || dm_bm_is_read_only(md->bm))
+		goto out;
+
+	__destroy_persistent_data_structures(md);
+
+	r = __create_persistent_data_structures(md, false);
+
+	/* If something went wrong we can neither write nor read the metadata */
+	if (r)
+		md->fail_io = true;
+
+out:
+	up_write(&md->lock);
+
+	return r;
+}
+
+void dm_clone_metadata_set_read_only(struct dm_clone_metadata *md)
+{
+	unsigned long flags;
+
+	down_write(&md->lock);
+
+	spin_lock_irqsave(&md->bitmap_lock, flags);
+	md->read_only = 1;
+	spin_unlock_irqrestore(&md->bitmap_lock, flags);
+
+	if (!md->fail_io)
+		dm_bm_set_read_only(md->bm);
+
+	up_write(&md->lock);
+}
+
+void dm_clone_metadata_set_read_write(struct dm_clone_metadata *md)
+{
+	unsigned long flags;
+
+	down_write(&md->lock);
+
+	spin_lock_irqsave(&md->bitmap_lock, flags);
+	md->read_only = 0;
+	spin_unlock_irqrestore(&md->bitmap_lock, flags);
+
+	if (!md->fail_io)
+		dm_bm_set_read_write(md->bm);
+
+	up_write(&md->lock);
+}
+
+int dm_clone_get_free_metadata_block_count(struct dm_clone_metadata *md,
+					   dm_block_t *result)
+{
+	int r = -EINVAL;
+
+	down_read(&md->lock);
+
+	if (!md->fail_io)
+		r = dm_sm_get_nr_free(md->sm, result);
+
+	up_read(&md->lock);
+
+	return r;
+}
+
+int dm_clone_get_metadata_dev_size(struct dm_clone_metadata *md,
+				   dm_block_t *result)
+{
+	int r = -EINVAL;
+
+	down_read(&md->lock);
+
+	if (!md->fail_io)
+		r = dm_sm_get_nr_blocks(md->sm, result);
+
+	up_read(&md->lock);
+
+	return r;
+}
diff --git a/drivers/md/dm-clone-metadata.h b/drivers/md/dm-clone-metadata.h
new file mode 100644
index 000000000000..fdfbd6f1cbdb
--- /dev/null
+++ b/drivers/md/dm-clone-metadata.h
@@ -0,0 +1,158 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2019 Arrikto, Inc. All Rights Reserved.
+ */
+
+#ifndef DM_CLONE_METADATA_H
+#define DM_CLONE_METADATA_H
+
+#include "persistent-data/dm-block-manager.h"
+#include "persistent-data/dm-space-map-metadata.h"
+
+#define DM_CLONE_METADATA_BLOCK_SIZE DM_SM_METADATA_BLOCK_SIZE
+
+/*
+ * The metadata device is currently limited in size.
+ */
+#define DM_CLONE_METADATA_MAX_SECTORS DM_SM_METADATA_MAX_SECTORS
+
+/*
+ * A metadata device larger than 16GB triggers a warning.
+ */
+#define DM_CLONE_METADATA_MAX_SECTORS_WARNING (16 * (1024 * 1024 * 1024 >> SECTOR_SHIFT))
+
+#define SPACE_MAP_ROOT_SIZE 128
+
+/* dm-clone metadata */
+struct dm_clone_metadata;
+
+/*
+ * Set region status to hydrated.
+ *
+ * @md: The dm-clone metadata
+ * @region_nr: The region number
+ *
+ * This function doesn't block, so it's safe to call it from interrupt context.
+ */
+int dm_clone_set_region_hydrated(struct dm_clone_metadata *md, unsigned long region_nr);
+
+/*
+ * Set status of all regions in the provided range to hydrated, if not already
+ * hydrated.
+ *
+ * @md: The dm-clone metadata
+ * @start: Starting region number
+ * @nr_regions: Number of regions in the range
+ *
+ * This function doesn't block, so it's safe to call it from interrupt context.
+ */
+int dm_clone_cond_set_range(struct dm_clone_metadata *md, unsigned long start,
+			    unsigned long nr_regions);
+
+/*
+ * Read existing or create fresh metadata.
+ *
+ * @bdev: The device storing the metadata
+ * @target_size: The target size
+ * @region_size: The region size
+ *
+ * @returns: The dm-clone metadata
+ *
+ * This function reads the superblock of @bdev and checks if it's all zeroes.
+ * If it is, it formats @bdev and creates fresh metadata. If it isn't, it
+ * validates the metadata stored in @bdev.
+ */
+struct dm_clone_metadata *dm_clone_metadata_open(struct block_device *bdev,
+						 sector_t target_size,
+						 sector_t region_size);
+
+/*
+ * Free the resources related to metadata management.
+ */
+void dm_clone_metadata_close(struct dm_clone_metadata *md);
+
+/*
+ * Commit dm-clone metadata to disk.
+ */
+int dm_clone_metadata_commit(struct dm_clone_metadata *md);
+
+/*
+ * Reload the in core copy of the on-disk bitmap.
+ *
+ * This should be used after aborting a metadata transaction and setting the
+ * metadata to read-only, to invalidate the in-core cache and make it match the
+ * on-disk metadata.
+ *
+ * WARNING: It must not be called concurrently with either
+ * dm_clone_set_region_hydrated() or dm_clone_cond_set_range(), as it updates
+ * the region bitmap without taking the relevant spinlock. We don't take the
+ * spinlock because dm_clone_reload_in_core_bitset() does I/O, so it may block.
+ *
+ * But, it's safe to use it after calling dm_clone_metadata_set_read_only(),
+ * because the latter sets the metadata to read-only mode. Both
+ * dm_clone_set_region_hydrated() and dm_clone_cond_set_range() refuse to touch
+ * the region bitmap, after calling dm_clone_metadata_set_read_only().
+ */
+int dm_clone_reload_in_core_bitset(struct dm_clone_metadata *md);
+
+/*
+ * Check whether dm-clone's metadata changed this transaction.
+ */
+bool dm_clone_changed_this_transaction(struct dm_clone_metadata *md);
+
+/*
+ * Abort current metadata transaction and rollback metadata to the last
+ * committed transaction.
+ */
+int dm_clone_metadata_abort(struct dm_clone_metadata *md);
+
+/*
+ * Switches metadata to a read-only mode. Once read-only mode has been entered
+ * the following functions will return -EPERM:
+ *
+ *   dm_clone_metadata_commit()
+ *   dm_clone_set_region_hydrated()
+ *   dm_clone_cond_set_range()
+ *   dm_clone_metadata_abort()
+ */
+void dm_clone_metadata_set_read_only(struct dm_clone_metadata *md);
+void dm_clone_metadata_set_read_write(struct dm_clone_metadata *md);
+
+/*
+ * Returns true if the hydration of the clone device is finished.
+ */
+bool dm_clone_is_hydration_done(struct dm_clone_metadata *md);
+
+/*
+ * Returns true if region @region_nr is hydrated.
+ */
+bool dm_clone_is_region_hydrated(struct dm_clone_metadata *md, unsigned long region_nr);
+
+/*
+ * Returns true if all the regions in the range are hydrated.
+ */
+bool dm_clone_is_range_hydrated(struct dm_clone_metadata *md,
+				unsigned long start, unsigned long nr_regions);
+
+/*
+ * Returns the number of hydrated regions.
+ */
+unsigned long dm_clone_nr_of_hydrated_regions(struct dm_clone_metadata *md);
+
+/*
+ * Returns the first unhydrated region with region_nr >= @start
+ */
+unsigned long dm_clone_find_next_unhydrated_region(struct dm_clone_metadata *md,
+						   unsigned long start);
+
+/*
+ * Get the number of free metadata blocks.
+ */
+int dm_clone_get_free_metadata_block_count(struct dm_clone_metadata *md, dm_block_t *result);
+
+/*
+ * Get the total number of metadata blocks.
+ */
+int dm_clone_get_metadata_dev_size(struct dm_clone_metadata *md, dm_block_t *result);
+
+#endif /* DM_CLONE_METADATA_H */
diff --git a/drivers/md/dm-clone-target.c b/drivers/md/dm-clone-target.c
new file mode 100644
index 000000000000..2ce7524616f8
--- /dev/null
+++ b/drivers/md/dm-clone-target.c
@@ -0,0 +1,2244 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2019 Arrikto, Inc. All Rights Reserved.
+ */
+
+#include <linux/bio.h>
+#include <linux/err.h>
+#include <linux/hash.h>
+#include <linux/list.h>
+#include <linux/log2.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/wait.h>
+#include <linux/dm-io.h>
+#include <linux/mutex.h>
+#include <linux/atomic.h>
+#include <linux/bitops.h>
+#include <linux/blkdev.h>
+#include <linux/kdev_t.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/jiffies.h>
+#include <linux/mempool.h>
+#include <linux/spinlock.h>
+#include <linux/blk_types.h>
+#include <linux/dm-kcopyd.h>
+#include <linux/workqueue.h>
+#include <linux/backing-dev.h>
+#include <linux/device-mapper.h>
+
+#include "dm.h"
+#include "dm-clone-metadata.h"
+
+#define DM_MSG_PREFIX "clone"
+
+/*
+ * Minimum and maximum allowed region sizes
+ */
+#define MIN_REGION_SIZE (1 << 3)  /* 4KB */
+#define MAX_REGION_SIZE (1 << 21) /* 1GB */
+
+#define MIN_HYDRATIONS 256 /* Size of hydration mempool */
+#define DEFAULT_HYDRATION_THRESHOLD 2048 /* 1MB */
+#define DEFAULT_HYDRATION_BATCH_SIZE 1 /* Hydrate in batches of 1 region */
+
+#define COMMIT_PERIOD HZ /* 1 sec */
+
+/*
+ * Hydration hash table size: 1 << HASH_TABLE_BITS
+ */
+#define HASH_TABLE_BITS 15
+
+DECLARE_DM_KCOPYD_THROTTLE_WITH_MODULE_PARM(clone_copy_throttle,
+	"A percentage of time allocated for hydrating regions");
+
+/* Slab cache for struct dm_clone_region_hydration */
+static struct kmem_cache *_hydration_cache;
+
+/* dm-clone metadata modes */
+enum clone_metadata_mode {
+	CM_WRITE,		/* metadata may be changed */
+	CM_READ_ONLY,		/* metadata may not be changed */
+	CM_FAIL,		/* all metadata I/O fails */
+};
+
+struct hash_table_bucket;
+
+struct clone {
+	struct dm_target *ti;
+	struct dm_target_callbacks callbacks;
+
+	struct dm_dev *metadata_dev;
+	struct dm_dev *clone_dev;
+	struct dm_dev *origin_dev;
+
+	unsigned long nr_regions;
+	sector_t region_size;
+	unsigned int region_shift;
+
+	/*
+	 * A metadata commit and the actions taken in case it fails should run
+	 * as a single atomic step.
+	 */
+	struct mutex commit_lock;
+
+	struct dm_clone_metadata *md;
+
+	/* Region hydration hash table */
+	struct hash_table_bucket *ht;
+
+	atomic_t ios_in_flight;
+
+	wait_queue_head_t hydration_stopped;
+
+	mempool_t hydration_pool;
+
+	unsigned long last_commit_jiffies;
+
+	/*
+	 * We defer incoming WRITE bios for regions that are not hydrated,
+	 * until after these regions have been hydrated.
+	 *
+	 * Also, we defer REQ_FUA and REQ_PREFLUSH bios, until after the
+	 * metadata have been committed.
+	 */
+	spinlock_t lock;
+	struct bio_list deferred_bios;
+	struct bio_list deferred_discard_bios;
+	struct bio_list deferred_flush_bios;
+	struct bio_list deferred_flush_completions;
+
+	sector_t hydration_threshold;
+
+	/* Number of regions to batch together during hydration. */
+	unsigned int hydration_batch_size;
+
+	/* Which region to hydrate next */
+	unsigned long hydration_offset;
+
+	atomic_t hydrations_in_flight;
+
+	/*
+	 * Save a copy of the table line rather than reconstructing it for the
+	 * status.
+	 */
+	unsigned int nr_ctr_args;
+	const char **ctr_args;
+
+	struct workqueue_struct *wq;
+	struct work_struct worker;
+	struct delayed_work waker;
+
+	struct dm_kcopyd_client *kcopyd_client;
+
+	enum clone_metadata_mode mode;
+	unsigned long flags;
+};
+
+/*
+ * dm-clone flags
+ */
+#define DM_CLONE_DISCARD_PASSDOWN 0
+#define DM_CLONE_HYDRATION_ENABLED 1
+#define DM_CLONE_HYDRATION_SUSPENDED 2
+
+/*---------------------------------------------------------------------------*/
+
+/*
+ * Metadata failure handling.
+ */
+static enum clone_metadata_mode get_clone_mode(struct clone *clone)
+{
+	return READ_ONCE(clone->mode);
+}
+
+static const char *clone_device_name(struct clone *clone)
+{
+	return dm_table_device_name(clone->ti->table);
+}
+
+static void __set_clone_mode(struct clone *clone, enum clone_metadata_mode new_mode)
+{
+	const char *descs[] = {
+		"read-write",
+		"read-only",
+		"fail"
+	};
+
+	enum clone_metadata_mode old_mode = get_clone_mode(clone);
+
+	/* Never move out of fail mode */
+	if (old_mode == CM_FAIL)
+		new_mode = CM_FAIL;
+
+	switch (new_mode) {
+	case CM_FAIL:
+	case CM_READ_ONLY:
+		dm_clone_metadata_set_read_only(clone->md);
+		break;
+
+	case CM_WRITE:
+		dm_clone_metadata_set_read_write(clone->md);
+		break;
+	}
+
+	WRITE_ONCE(clone->mode, new_mode);
+
+	if (new_mode != old_mode) {
+		dm_table_event(clone->ti->table);
+		DMINFO("%s: Switching to %s mode", clone_device_name(clone),
+		       descs[(int)new_mode]);
+	}
+}
+
+static void __abort_transaction(struct clone *clone)
+{
+	const char *dev_name = clone_device_name(clone);
+
+	if (get_clone_mode(clone) >= CM_READ_ONLY)
+		return;
+
+	DMERR("%s: Aborting current metadata transaction", dev_name);
+	if (dm_clone_metadata_abort(clone->md)) {
+		DMERR("%s: Failed to abort metadata transaction", dev_name);
+		__set_clone_mode(clone, CM_FAIL);
+	}
+}
+
+static void __reload_in_core_bitset(struct clone *clone)
+{
+	const char *dev_name = clone_device_name(clone);
+
+	if (get_clone_mode(clone) == CM_FAIL)
+		return;
+
+	/* Reload the on-disk bitset */
+	DMINFO("%s: Reloading on-disk bitmap", dev_name);
+	if (dm_clone_reload_in_core_bitset(clone->md)) {
+		DMERR("%s: Failed to reload on-disk bitmap", dev_name);
+		__set_clone_mode(clone, CM_FAIL);
+	}
+}
+
+static void __metadata_operation_failed(struct clone *clone, const char *op, int r)
+{
+	DMERR("%s: Metadata operation `%s' failed: error = %d",
+	      clone_device_name(clone), op, r);
+
+	__abort_transaction(clone);
+	__set_clone_mode(clone, CM_READ_ONLY);
+
+	/*
+	 * dm_clone_reload_in_core_bitset() may run concurrently with either
+	 * dm_clone_set_region_hydrated() or dm_clone_cond_set_range(), but
+	 * it's safe as we have already set the metadata to read-only mode.
+	 */
+	__reload_in_core_bitset(clone);
+}
+
+/*---------------------------------------------------------------------------*/
+
+/* Wake up anyone waiting for region hydrations to stop */
+static inline void wakeup_hydration_waiters(struct clone *clone)
+{
+	wake_up_all(&clone->hydration_stopped);
+}
+
+static inline void wake_worker(struct clone *clone)
+{
+	queue_work(clone->wq, &clone->worker);
+}
+
+/*---------------------------------------------------------------------------*/
+
+/*
+ * bio helper functions.
+ */
+static inline void remap_to_origin(struct clone *clone, struct bio *bio)
+{
+	bio_set_dev(bio, clone->origin_dev->bdev);
+}
+
+static inline void remap_to_clone(struct clone *clone, struct bio *bio)
+{
+	bio_set_dev(bio, clone->clone_dev->bdev);
+}
+
+static bool bio_triggers_commit(struct clone *clone, struct bio *bio)
+{
+	return op_is_flush(bio->bi_opf) &&
+		dm_clone_changed_this_transaction(clone->md);
+}
+
+/* Get the address of the region in sectors */
+static inline sector_t region_to_sector(struct clone *clone, unsigned long region_nr)
+{
+	return (region_nr << clone->region_shift);
+}
+
+/* Get the region number of the bio */
+static inline unsigned long bio_to_region(struct clone *clone, struct bio *bio)
+{
+	return (bio->bi_iter.bi_sector >> clone->region_shift);
+}
+
+/*
+ * Get the region range covered by the bio.
+ *
+ * The start of the range is rounded up and its end is rounded down, so the
+ * range includes only the regions fully covered by the bio.
+ */
+static void bio_region_range(struct clone *clone, struct bio *bio,
+			     unsigned long *rs, unsigned long *re)
+{
+	*rs = dm_sector_div_up(bio->bi_iter.bi_sector, clone->region_size);
+	*re = bio_end_sector(bio) >> clone->region_shift;
+}
+
+/* Check whether a bio overwrites a region */
+static inline bool is_overwrite_bio(struct clone *clone, struct bio *bio)
+{
+	return (bio_data_dir(bio) == WRITE && bio_sectors(bio) == clone->region_size);
+}
+
+static void fail_bios(struct bio_list *bios, blk_status_t status)
+{
+	struct bio *bio;
+
+	while ((bio = bio_list_pop(bios))) {
+		bio->bi_status = status;
+		bio_endio(bio);
+	}
+}
+
+static void submit_bios(struct bio_list *bios)
+{
+	struct bio *bio;
+	struct blk_plug plug;
+
+	blk_start_plug(&plug);
+
+	while ((bio = bio_list_pop(bios)))
+		generic_make_request(bio);
+
+	blk_finish_plug(&plug);
+}
+
+/*
+ * Submit bio to the underlying device.
+ *
+ * If the bio triggers a commit, delay it, until after the metadata have been
+ * committed.
+ *
+ * NOTE: The bio remapping must be performed by the caller.
+ */
+static void issue_bio(struct clone *clone, struct bio *bio)
+{
+	unsigned long flags;
+
+	if (!bio_triggers_commit(clone, bio)) {
+		generic_make_request(bio);
+		return;
+	}
+
+	/*
+	 * If the metadata mode is RO or FAIL we won't be able to commit the
+	 * metadata, so we complete the bio with an error.
+	 */
+	if (unlikely(get_clone_mode(clone) >= CM_READ_ONLY)) {
+		bio_io_error(bio);
+		return;
+	}
+
+	/*
+	 * Batch together any bios that trigger commits and then issue a single
+	 * commit for them in process_deferred_flush_bios().
+	 */
+	spin_lock_irqsave(&clone->lock, flags);
+	bio_list_add(&clone->deferred_flush_bios, bio);
+	spin_unlock_irqrestore(&clone->lock, flags);
+
+	wake_worker(clone);
+}
+
+/*
+ * Remap bio to the clone device and submit it.
+ *
+ * If the bio triggers a commit, delay it, until after the metadata have been
+ * committed.
+ */
+static void remap_and_issue(struct clone *clone, struct bio *bio)
+{
+	remap_to_clone(clone, bio);
+	issue_bio(clone, bio);
+}
+
+/*
+ * Issue bios that have been deferred until after their region has finished
+ * hydrating.
+ *
+ * We delegate the bio submission to the worker thread, so this is safe to call
+ * from interrupt context.
+ */
+static void issue_deferred_bios(struct clone *clone, struct bio_list *bios)
+{
+	struct bio *bio;
+	unsigned long flags;
+	struct bio_list flush_bios = BIO_EMPTY_LIST;
+	struct bio_list normal_bios = BIO_EMPTY_LIST;
+
+	if (bio_list_empty(bios))
+		return;
+
+	while ((bio = bio_list_pop(bios))) {
+		if (bio_triggers_commit(clone, bio))
+			bio_list_add(&flush_bios, bio);
+		else
+			bio_list_add(&normal_bios, bio);
+	}
+
+	spin_lock_irqsave(&clone->lock, flags);
+	bio_list_merge(&clone->deferred_bios, &normal_bios);
+	bio_list_merge(&clone->deferred_flush_bios, &flush_bios);
+	spin_unlock_irqrestore(&clone->lock, flags);
+
+	wake_worker(clone);
+}
+
+static void complete_overwrite_bio(struct clone *clone, struct bio *bio)
+{
+	unsigned long flags;
+
+	/*
+	 * If the bio has the REQ_FUA flag set we must commit the metadata
+	 * before signaling its completion.
+	 *
+	 * complete_overwrite_bio() is only called by hydration_complete(),
+	 * after having successfully updated the metadata. This means we don't
+	 * need to call dm_clone_changed_this_transaction() to check if the
+	 * metadata has changed and thus we can avoid taking the metadata spin
+	 * lock.
+	 */
+	if (!(bio->bi_opf & REQ_FUA)) {
+		bio_endio(bio);
+		return;
+	}
+
+	/*
+	 * If the metadata mode is RO or FAIL we won't be able to commit the
+	 * metadata, so we complete the bio with an error.
+	 */
+	if (unlikely(get_clone_mode(clone) >= CM_READ_ONLY)) {
+		bio_io_error(bio);
+		return;
+	}
+
+	/*
+	 * Batch together any bios that trigger commits and then issue a single
+	 * commit for them in process_deferred_flush_bios().
+	 */
+	spin_lock_irqsave(&clone->lock, flags);
+	bio_list_add(&clone->deferred_flush_completions, bio);
+	spin_unlock_irqrestore(&clone->lock, flags);
+
+	wake_worker(clone);
+}
+
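+/* Trim the bio to start at @sector and span @len sectors */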
+static void trim_bio(struct bio *bio, sector_t sector, unsigned int len)
+{
+	bio->bi_iter.bi_sector = sector;
+	bio->bi_iter.bi_size = to_bytes(len);
+}
+
+static void complete_discard_bio(struct clone *clone, struct bio *bio, bool success)
+{
+	unsigned long rs, re;
+
+	/*
+	 * If the clone device supports discards, remap and trim the discard
+	 * bio and pass it down. Otherwise complete the bio immediately.
+	 */
+	if (test_bit(DM_CLONE_DISCARD_PASSDOWN, &clone->flags) && success) {
+		remap_to_clone(clone, bio);
+		bio_region_range(clone, bio, &rs, &re);
+		trim_bio(bio, rs << clone->region_shift,
+			 (re - rs) << clone->region_shift);
+
+		generic_make_request(bio);
+	} else {
+		bio_endio(bio);
+	}
+}
+
+static void process_discard_bio(struct clone *clone, struct bio *bio)
+{
+	unsigned long rs, re, flags;
+
+	bio_region_range(clone, bio, &rs, &re);
+	BUG_ON(re > clone->nr_regions);
+
+	/* No regions are fully covered by the discard; complete it */
+	if (unlikely(rs >= re)) {
+		bio_endio(bio);
+		return;
+	}
+
+	/*
+	 * The covered regions are already hydrated so we just need to pass
+	 * down the discard.
+	 */
+	if (dm_clone_is_range_hydrated(clone->md, rs, re - rs)) {
+		complete_discard_bio(clone, bio, true);
+		return;
+	}
+
+	/*
+	 * If the metadata mode is RO or FAIL we won't be able to update the
+	 * metadata for the regions covered by the discard so we just ignore
+	 * it.
+	 */
+	if (unlikely(get_clone_mode(clone) >= CM_READ_ONLY)) {
+		bio_endio(bio);
+		return;
+	}
+
+	/*
+	 * Defer discard processing.
+	 */
+	spin_lock_irqsave(&clone->lock, flags);
+	bio_list_add(&clone->deferred_discard_bios, bio);
+	spin_unlock_irqrestore(&clone->lock, flags);
+
+	wake_worker(clone);
+}
+
+/*---------------------------------------------------------------------------*/
+
+/*
+ * dm-clone region hydrations.
+ */
+struct dm_clone_region_hydration {
+	struct clone *clone;
+	unsigned long region_nr;
+
+	struct bio *overwrite_bio;
+	bio_end_io_t *overwrite_bio_end_io;
+
+	struct bio_list deferred_bios;
+
+	blk_status_t status;
+
+	/* Used by hydration batching */
+	struct list_head list;
+
+	/* Used by hydration hash table */
+	struct hlist_node h;
+};
+
+/*
+ * Hydration hash table implementation.
+ *
+ * Ideally we would like to use list_bl, which uses bit spin locks and employs
+ * the least significant bit of the list head to lock the corresponding bucket,
+ * reducing the memory overhead for the locks. But, currently, list_bl and bit
+ * spin locks don't support IRQ safe versions. Since we have to take the lock
+ * in both process and interrupt context, we must fall back to using regular
+ * spin locks; one per hash table bucket.
+ */
+struct hash_table_bucket {
+	struct hlist_head head;
+
+	/* Spinlock protecting the bucket */
+	spinlock_t lock;
+};
+
+#define bucket_lock_irqsave(bucket, flags) \
+	spin_lock_irqsave(&(bucket)->lock, flags)
+
+#define bucket_unlock_irqrestore(bucket, flags) \
+	spin_unlock_irqrestore(&(bucket)->lock, flags)
+
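+/*
+ * Allocate the hydration hash table: (1 << HASH_TABLE_BITS) buckets, each
+ * protected by its own spinlock.
+ */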
+static int hash_table_init(struct clone *clone)
+{
+	unsigned int i, sz;
+	struct hash_table_bucket *bucket;
+
+	sz = 1 << HASH_TABLE_BITS;
+
+	clone->ht = vmalloc(sz * sizeof(struct hash_table_bucket));
+
+	if (!clone->ht)
+		return -ENOMEM;
+
+	for (i = 0; i < sz; i++) {
+		bucket = clone->ht + i;
+
+		INIT_HLIST_HEAD(&bucket->head);
+		spin_lock_init(&bucket->lock);
+	}
+
+	return 0;
+}
+
+static void hash_table_exit(struct clone *clone)
+{
+	vfree(clone->ht);
+}
+
+static struct hash_table_bucket *get_hash_table_bucket(struct clone *clone,
+						       unsigned long region_nr)
+{
+	return &clone->ht[hash_long(region_nr, HASH_TABLE_BITS)];
+}
+
+/*
+ * Search hash table for a hydration with hd->region_nr == region_nr
+ *
+ * NOTE: Must be called with the bucket lock held
+ */
+static struct dm_clone_region_hydration *__hash_find(struct hash_table_bucket *bucket,
+						     unsigned long region_nr)
+{
+	struct dm_clone_region_hydration *hd;
+
+	hlist_for_each_entry(hd, &bucket->head, h) {
+		if (hd->region_nr == region_nr)
+			return hd;
+	}
+
+	return NULL;
+}
+
+/*
+ * Insert a hydration into the hash table.
+ *
+ * NOTE: Must be called with the bucket lock held.
+ */
+static inline void __insert_region_hydration(struct hash_table_bucket *bucket,
+					     struct dm_clone_region_hydration *hd)
+{
+	hlist_add_head(&hd->h, &bucket->head);
+}
+
+/*
+ * This function inserts a hydration into the hash table, unless someone else
+ * managed to insert a hydration for the same region first. In the latter case
+ * it returns the existing hydration descriptor for this region.
+ *
+ * NOTE: Must be called with the bucket lock held.
+ */
+static struct dm_clone_region_hydration *
+__find_or_insert_region_hydration(struct hash_table_bucket *bucket,
+				  struct dm_clone_region_hydration *hd)
+{
+	struct dm_clone_region_hydration *hd2;
+
+	hd2 = __hash_find(bucket, hd->region_nr);
+
+	if (hd2)
+		return hd2;
+
+	__insert_region_hydration(bucket, hd);
+
+	return hd;
+}
+
+/*---------------------------------------------------------------------------*/
+
+/* Allocate a hydration */
+static struct dm_clone_region_hydration *alloc_hydration(struct clone *clone)
+{
+	struct dm_clone_region_hydration *hd;
+
+	/*
+	 * Allocate a hydration from the hydration mempool.
+	 * This might block but it can't fail.
+	 */
+	hd = mempool_alloc(&clone->hydration_pool, GFP_NOIO);
+
+	hd->clone = clone;
+
+	return hd;
+}
+
+static inline void free_hydration(struct dm_clone_region_hydration *hd)
+{
+	mempool_free(hd, &hd->clone->hydration_pool);
+}
+
+/* Initialize a hydration */
+static void hydration_init(struct dm_clone_region_hydration *hd, unsigned long region_nr)
+{
+	hd->region_nr = region_nr;
+
+	hd->overwrite_bio = NULL;
+
+	bio_list_init(&hd->deferred_bios);
+
+	hd->status = 0;
+
+	INIT_LIST_HEAD(&hd->list);
+	INIT_HLIST_NODE(&hd->h);
+}
+
+/*---------------------------------------------------------------------------*/
+
+/*
+ * Update dm-clone's metadata after a region has finished hydrating and remove
+ * hydration from the hash table.
+ */
+static int hydration_update_metadata(struct dm_clone_region_hydration *hd)
+{
+	int r = 0;
+	unsigned long flags;
+	struct hash_table_bucket *bucket;
+	struct clone *clone = hd->clone;
+
+	if (unlikely(get_clone_mode(clone) >= CM_READ_ONLY))
+		r = -EPERM;
+
+	/* Update the metadata */
+	if (likely(!r) && hd->status == BLK_STS_OK)
+		r = dm_clone_set_region_hydrated(clone->md, hd->region_nr);
+
+	bucket = get_hash_table_bucket(clone, hd->region_nr);
+
+	/* Remove hydration from hash table */
+	bucket_lock_irqsave(bucket, flags);
+	hlist_del(&hd->h);
+	bucket_unlock_irqrestore(bucket, flags);
+
+	return r;
+}
+
+/*
+ * Complete a region's hydration:
+ *
+ *	1. Update dm-clone's metadata.
+ *	2. Remove hydration from hash table.
+ *	3. Complete overwrite bio.
+ *	4. Issue deferred bios.
+ *	5. If this was the last hydration, wake up anyone waiting for
+ *	   hydrations to finish.
+ */
+static void hydration_complete(struct dm_clone_region_hydration *hd)
+{
+	int r;
+	blk_status_t status;
+	struct clone *clone = hd->clone;
+
+	r = hydration_update_metadata(hd);
+
+	if (hd->status == BLK_STS_OK && likely(!r)) {
+		if (hd->overwrite_bio)
+			complete_overwrite_bio(clone, hd->overwrite_bio);
+
+		issue_deferred_bios(clone, &hd->deferred_bios);
+	} else {
+		status = r ? BLK_STS_IOERR : hd->status;
+
+		if (hd->overwrite_bio)
+			bio_list_add(&hd->deferred_bios, hd->overwrite_bio);
+
+		fail_bios(&hd->deferred_bios, status);
+	}
+
+	free_hydration(hd);
+
+	if (atomic_dec_and_test(&clone->hydrations_in_flight))
+		wakeup_hydration_waiters(clone);
+}
+
+static void hydration_kcopyd_callback(int read_err, unsigned long write_err, void *context)
+{
+	blk_status_t status;
+
+	struct dm_clone_region_hydration *tmp, *hd = context;
+	struct clone *clone = hd->clone;
+
+	LIST_HEAD(batched_hydrations);
+
+	if (read_err || write_err) {
+		DMERR_LIMIT("%s: hydration failed", clone_device_name(clone));
+		status = BLK_STS_IOERR;
+	} else {
+		status = BLK_STS_OK;
+	}
+	list_splice_tail(&hd->list, &batched_hydrations);
+
+	hd->status = status;
+	hydration_complete(hd);
+
+	/* Complete batched hydrations */
+	list_for_each_entry_safe(hd, tmp, &batched_hydrations, list) {
+		hd->status = status;
+		hydration_complete(hd);
+	}
+
+	/* Continue background hydration, if there is no I/O in-flight */
+	if (test_bit(DM_CLONE_HYDRATION_ENABLED, &clone->flags) &&
+	    !atomic_read(&clone->ios_in_flight))
+		wake_worker(clone);
+}
+
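+/*
+ * Copy @nr_regions contiguous regions, starting at hd->region_nr, from the
+ * origin to the clone device using kcopyd.
+ */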
+static void hydration_copy(struct dm_clone_region_hydration *hd, unsigned int nr_regions)
+{
+	unsigned long region_start, region_end;
+	sector_t tail_size, region_size, total_size;
+	struct dm_io_region from, to;
+	struct clone *clone = hd->clone;
+
+	region_size = clone->region_size;
+	region_start = hd->region_nr;
+	region_end = region_start + nr_regions - 1;
+
+	total_size = (nr_regions - 1) << clone->region_shift;
+
+	if (region_end == clone->nr_regions - 1) {
+		/*
+		 * The last region of the target might be smaller than
+		 * region_size.
+		 */
+		tail_size = clone->ti->len & (region_size - 1);
+		if (!tail_size)
+			tail_size = region_size;
+	} else {
+		tail_size = region_size;
+	}
+
+	total_size += tail_size;
+
+	from.bdev = clone->origin_dev->bdev;
+	from.sector = region_to_sector(clone, region_start);
+	from.count = total_size;
+
+	to.bdev = clone->clone_dev->bdev;
+	to.sector = from.sector;
+	to.count = from.count;
+
+	/* Issue copy */
+	atomic_add(nr_regions, &clone->hydrations_in_flight);
+	dm_kcopyd_copy(clone->kcopyd_client, &from, 1, &to, 0,
+		       hydration_kcopyd_callback, hd);
+}
+
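+/*
+ * Endio for overwrite bios: restore the original bi_end_io and complete the
+ * hydration of the overwritten region.
+ */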
+static void overwrite_endio(struct bio *bio)
+{
+	struct dm_clone_region_hydration *hd = bio->bi_private;
+
+	bio->bi_end_io = hd->overwrite_bio_end_io;
+	hd->status = bio->bi_status;
+
+	hydration_complete(hd);
+}
+
+static void hydration_overwrite(struct dm_clone_region_hydration *hd, struct bio *bio)
+{
+	/*
+	 * We don't need to save and restore bio->bi_private because device
+	 * mapper core generates a new bio for us to use, with clean
+	 * bi_private.
+	 */
+	hd->overwrite_bio = bio;
+	hd->overwrite_bio_end_io = bio->bi_end_io;
+
+	bio->bi_end_io = overwrite_endio;
+	bio->bi_private = hd;
+
+	atomic_inc(&hd->clone->hydrations_in_flight);
+	generic_make_request(bio);
+}
+
+/*
+ * Hydrate bio's region.
+ *
+ * This function starts the hydration of the bio's region and puts the bio in
+ * the list of deferred bios for this region. If, by the time this function is
+ * called, the region has already finished hydrating, the bio is submitted to
+ * the clone device instead.
+ *
+ * NOTE: The bio remapping must be performed by the caller.
+ */
+static void hydrate_bio_region(struct clone *clone, struct bio *bio)
+{
+	unsigned long flags;
+	unsigned long region_nr;
+	struct hash_table_bucket *bucket;
+	struct dm_clone_region_hydration *hd, *hd2;
+
+	region_nr = bio_to_region(clone, bio);
+	bucket = get_hash_table_bucket(clone, region_nr);
+
+	bucket_lock_irqsave(bucket, flags);
+
+	hd = __hash_find(bucket, region_nr);
+
+	/* Someone else is hydrating the region */
+	if (hd) {
+		bio_list_add(&hd->deferred_bios, bio);
+		bucket_unlock_irqrestore(bucket, flags);
+		return;
+	}
+
+	/* The region has been hydrated */
+	if (dm_clone_is_region_hydrated(clone->md, region_nr)) {
+		bucket_unlock_irqrestore(bucket, flags);
+		issue_bio(clone, bio);
+		return;
+	}
+
+	/*
+	 * We must allocate a hydration descriptor and start the hydration of
+	 * the corresponding region.
+	 */
+	bucket_unlock_irqrestore(bucket, flags);
+
+	hd = alloc_hydration(clone);
+	hydration_init(hd, region_nr);
+
+	bucket_lock_irqsave(bucket, flags);
+
+	/* Check if the region has been hydrated in the meantime. */
+	if (dm_clone_is_region_hydrated(clone->md, region_nr)) {
+		bucket_unlock_irqrestore(bucket, flags);
+		free_hydration(hd);
+		issue_bio(clone, bio);
+		return;
+	}
+
+	hd2 = __find_or_insert_region_hydration(bucket, hd);
+
+	/* Someone else started the region's hydration. */
+	if (hd2 != hd) {
+		bio_list_add(&hd2->deferred_bios, bio);
+		bucket_unlock_irqrestore(bucket, flags);
+		free_hydration(hd);
+		return;
+	}
+
+	/*
+	 * If the metadata mode is RO or FAIL then there is no point starting a
+	 * hydration, since we will not be able to update the metadata when the
+	 * hydration finishes.
+	 */
+	if (unlikely(get_clone_mode(clone) >= CM_READ_ONLY)) {
+		hlist_del(&hd->h);
+		bucket_unlock_irqrestore(bucket, flags);
+		free_hydration(hd);
+		bio_io_error(bio);
+		return;
+	}
+
+	/*
+	 * Start region hydration.
+	 *
+	 * If a bio overwrites a region, i.e., its size is equal to the
+	 * region's size, then we don't need to copy the region from
+	 * the origin to the clone device.
+	 */
+	if (is_overwrite_bio(clone, bio)) {
+		bucket_unlock_irqrestore(bucket, flags);
+		hydration_overwrite(hd, bio);
+	} else {
+		bio_list_add(&hd->deferred_bios, bio);
+		bucket_unlock_irqrestore(bucket, flags);
+
+		hydration_copy(hd, 1);
+	}
+}
+
+/*---------------------------------------------------------------------------*/
+
+/*
+ * Background hydrations.
+ */
+
+/*
+ * Batch region hydrations.
+ *
+ * To better utilize device bandwidth we batch together the hydration of
+ * adjacent regions. This allows us to use small region sizes, e.g., 4KB, which
+ * is good for small, random write performance (because of the overwriting of
+ * un-hydrated regions) and at the same time issue big copy requests to kcopyd
+ * to achieve high hydration bandwidth.
+ */
+struct batch_info {
+	struct dm_clone_region_hydration *head;
+	unsigned int nr_batched_regions;
+};
+
+static void __batch_hydration(struct batch_info *batch,
+			      struct dm_clone_region_hydration *hd)
+{
+	struct clone *clone = hd->clone;
+	unsigned int max_batch_size = READ_ONCE(clone->hydration_batch_size);
+
+	if (batch->head) {
+		/* Try to extend the current batch */
+		if (batch->nr_batched_regions < max_batch_size &&
+		    (batch->head->region_nr + batch->nr_batched_regions) == hd->region_nr) {
+			list_add_tail(&hd->list, &batch->head->list);
+			batch->nr_batched_regions++;
+			hd = NULL;
+		}
+
+		/* Check if we should issue the current batch */
+		if (batch->nr_batched_regions >= max_batch_size || hd) {
+			hydration_copy(batch->head, batch->nr_batched_regions);
+			batch->head = NULL;
+			batch->nr_batched_regions = 0;
+		}
+	}
+
+	if (!hd)
+		return;
+
+	/* We treat max batch sizes of zero and one equivalently */
+	if (max_batch_size <= 1) {
+		hydration_copy(hd, 1);
+		return;
+	}
+
+	/* Start a new batch */
+	BUG_ON(!list_empty(&hd->list));
+	batch->head = hd;
+	batch->nr_batched_regions = 1;
+}
+
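+/*
+ * Find the next unhydrated region, starting the search at @offset, insert a
+ * hydration for it in the hash table and batch it for copying. Returns the
+ * offset at which to continue the search.
+ */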
+static unsigned long __start_next_hydration(struct clone *clone,
+					    unsigned long offset,
+					    struct batch_info *batch)
+{
+	unsigned long flags;
+	struct hash_table_bucket *bucket;
+	struct dm_clone_region_hydration *hd;
+	unsigned long nr_regions = clone->nr_regions;
+
+	hd = alloc_hydration(clone);
+
+	/* Try to find a region to hydrate. */
+	do {
+		offset = dm_clone_find_next_unhydrated_region(clone->md, offset);
+		if (offset == nr_regions)
+			break;
+
+		bucket = get_hash_table_bucket(clone, offset);
+		bucket_lock_irqsave(bucket, flags);
+
+		if (!dm_clone_is_region_hydrated(clone->md, offset) &&
+		    !__hash_find(bucket, offset)) {
+			hydration_init(hd, offset);
+			__insert_region_hydration(bucket, hd);
+			bucket_unlock_irqrestore(bucket, flags);
+
+			/* Batch hydration */
+			__batch_hydration(batch, hd);
+
+			return (offset + 1);
+		}
+
+		bucket_unlock_irqrestore(bucket, flags);
+
+	} while (++offset < nr_regions);
+
+	if (hd)
+		free_hydration(hd);
+
+	return offset;
+}
+
+/*
+ * This function searches for regions that still reside in the origin device
+ * and starts their hydration.
+ */
+static void do_hydration(struct clone *clone)
+{
+	sector_t current_volume;
+	unsigned long offset, nr_regions = clone->nr_regions;
+
+	struct batch_info batch = {
+		.head = NULL,
+		.nr_batched_regions = 0,
+	};
+
+	if (unlikely(get_clone_mode(clone) >= CM_READ_ONLY))
+		return;
+
+	if (dm_clone_is_hydration_done(clone->md))
+		return;
+
+	/*
+	 * Avoid race with device suspension.
+	 */
+	atomic_inc(&clone->hydrations_in_flight);
+
+	/*
+	 * Make sure atomic_inc() is ordered before test_bit(), otherwise we
+	 * might race with clone_postsuspend() and start a region hydration
+	 * after the target has been suspended.
+	 *
+	 * This is paired with the smp_mb__after_atomic() in
+	 * clone_postsuspend().
+	 */
+	smp_mb__after_atomic();
+
+	offset = clone->hydration_offset;
+	while (likely(!test_bit(DM_CLONE_HYDRATION_SUSPENDED, &clone->flags)) &&
+	       !atomic_read(&clone->ios_in_flight) &&
+	       test_bit(DM_CLONE_HYDRATION_ENABLED, &clone->flags) &&
+	       offset < nr_regions) {
+		current_volume = atomic_read(&clone->hydrations_in_flight);
+		current_volume += batch.nr_batched_regions;
+		current_volume <<= clone->region_shift;
+
+		if (current_volume > READ_ONCE(clone->hydration_threshold))
+			break;
+
+		offset = __start_next_hydration(clone, offset, &batch);
+	}
+
+	if (batch.head)
+		hydration_copy(batch.head, batch.nr_batched_regions);
+
+	if (offset >= nr_regions)
+		offset = 0;
+
+	clone->hydration_offset = offset;
+
+	if (atomic_dec_and_test(&clone->hydrations_in_flight))
+		wakeup_hydration_waiters(clone);
+}
+
+/*---------------------------------------------------------------------------*/
+
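+/*
+ * Check whether more than COMMIT_PERIOD jiffies have elapsed since the last
+ * metadata commit.
+ */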
+static bool need_commit_due_to_time(struct clone *clone)
+{
+	return !time_in_range(jiffies, clone->last_commit_jiffies,
+			      clone->last_commit_jiffies + COMMIT_PERIOD);
+}
+
+/*
+ * A non-zero return indicates read-only or fail mode.
+ */
+static int commit_metadata(struct clone *clone)
+{
+	int r = 0;
+
+	mutex_lock(&clone->commit_lock);
+
+	if (!dm_clone_changed_this_transaction(clone->md))
+		goto out;
+
+	if (unlikely(get_clone_mode(clone) >= CM_READ_ONLY)) {
+		r = -EPERM;
+		goto out;
+	}
+
+	r = dm_clone_metadata_commit(clone->md);
+
+	if (unlikely(r)) {
+		__metadata_operation_failed(clone, "dm_clone_metadata_commit", r);
+		goto out;
+	}
+
+	if (dm_clone_is_hydration_done(clone->md))
+		dm_table_event(clone->ti->table);
+
+out:
+	mutex_unlock(&clone->commit_lock);
+
+	return r;
+}
+
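+/*
+ * Process the discards deferred by process_discard_bio(): mark the regions
+ * they fully cover as hydrated in the metadata and then pass them down to
+ * the clone device, or complete them, if passdown is disabled.
+ */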
+static void process_deferred_discards(struct clone *clone)
+{
+	int r = -EPERM;
+	struct bio *bio;
+	struct blk_plug plug;
+	unsigned long rs, re, flags;
+	struct bio_list discards = BIO_EMPTY_LIST;
+
+	spin_lock_irqsave(&clone->lock, flags);
+	bio_list_merge(&discards, &clone->deferred_discard_bios);
+	bio_list_init(&clone->deferred_discard_bios);
+	spin_unlock_irqrestore(&clone->lock, flags);
+
+	if (bio_list_empty(&discards))
+		return;
+
+	if (unlikely(get_clone_mode(clone) >= CM_READ_ONLY))
+		goto out;
+
+	/* Update the metadata */
+	bio_list_for_each(bio, &discards) {
+		bio_region_range(clone, bio, &rs, &re);
+		/*
+		 * A discard request might cover regions that have already been
+		 * hydrated. There is no need to update the metadata for these
+		 * regions.
+		 */
+		r = dm_clone_cond_set_range(clone->md, rs, re - rs);
+
+		if (unlikely(r))
+			break;
+	}
+
+out:
+	blk_start_plug(&plug);
+
+	while ((bio = bio_list_pop(&discards)))
+		complete_discard_bio(clone, bio, r == 0);
+
+	blk_finish_plug(&plug);
+}
+
+static void process_deferred_bios(struct clone *clone)
+{
+	unsigned long flags;
+	struct bio_list bios = BIO_EMPTY_LIST;
+
+	spin_lock_irqsave(&clone->lock, flags);
+	bio_list_merge(&bios, &clone->deferred_bios);
+	bio_list_init(&clone->deferred_bios);
+	spin_unlock_irqrestore(&clone->lock, flags);
+
+	if (bio_list_empty(&bios))
+		return;
+
+	submit_bios(&bios);
+}
+
+static void process_deferred_flush_bios(struct clone *clone)
+{
+	struct bio *bio;
+	unsigned long flags;
+	struct bio_list bios = BIO_EMPTY_LIST;
+	struct bio_list bio_completions = BIO_EMPTY_LIST;
+
+	/*
+	 * If there are any deferred flush bios, we must commit the metadata
+	 * before issuing them or signaling their completion.
+	 */
+	spin_lock_irqsave(&clone->lock, flags);
+	bio_list_merge(&bios, &clone->deferred_flush_bios);
+	bio_list_init(&clone->deferred_flush_bios);
+
+	bio_list_merge(&bio_completions, &clone->deferred_flush_completions);
+	bio_list_init(&clone->deferred_flush_completions);
+	spin_unlock_irqrestore(&clone->lock, flags);
+
+	if (bio_list_empty(&bios) && bio_list_empty(&bio_completions) &&
+	    !(dm_clone_changed_this_transaction(clone->md) && need_commit_due_to_time(clone)))
+		return;
+
+	if (commit_metadata(clone)) {
+		bio_list_merge(&bios, &bio_completions);
+
+		while ((bio = bio_list_pop(&bios)))
+			bio_io_error(bio);
+
+		return;
+	}
+
+	clone->last_commit_jiffies = jiffies;
+
+	while ((bio = bio_list_pop(&bio_completions)))
+		bio_endio(bio);
+
+	while ((bio = bio_list_pop(&bios)))
+		generic_make_request(bio);
+}
+
+static void do_worker(struct work_struct *work)
+{
+	struct clone *clone = container_of(work, typeof(*clone), worker);
+
+	process_deferred_bios(clone);
+	process_deferred_discards(clone);
+
+	/*
+	 * process_deferred_flush_bios():
+	 *
+	 *   - Commit metadata
+	 *
+	 *   - Process deferred REQ_FUA completions
+	 *
+	 *   - Process deferred REQ_PREFLUSH bios
+	 */
+	process_deferred_flush_bios(clone);
+
+	/* Background hydration */
+	do_hydration(clone);
+}
+
+/*
+ * Commit periodically so that not too much unwritten data builds up.
+ *
+ * Also, restart background hydration, if it has been stopped by in-flight I/O.
+ */
+static void do_waker(struct work_struct *work)
+{
+	struct clone *clone = container_of(to_delayed_work(work), struct clone, waker);
+
+	wake_worker(clone);
+	queue_delayed_work(clone->wq, &clone->waker, COMMIT_PERIOD);
+}
+
+/*---------------------------------------------------------------------------*/
+
+/*
+ * Target methods
+ */
+static int clone_map(struct dm_target *ti, struct bio *bio)
+{
+	struct clone *clone = ti->private;
+	unsigned long region_nr;
+
+	atomic_inc(&clone->ios_in_flight);
+
+	if (unlikely(get_clone_mode(clone) == CM_FAIL))
+		return DM_MAPIO_KILL;
+
+	/*
+	 * REQ_PREFLUSH bios carry no data:
+	 *
+	 * - Commit metadata, if changed
+	 *
+	 * - Pass down to clone device
+	 */
+	if (bio->bi_opf & REQ_PREFLUSH) {
+		remap_and_issue(clone, bio);
+		return DM_MAPIO_SUBMITTED;
+	}
+
+	bio->bi_iter.bi_sector = dm_target_offset(ti, bio->bi_iter.bi_sector);
+
+	/*
+	 * dm-clone interprets discards and performs a fast hydration of the
+	 * discarded regions, i.e., we skip the copy from the origin device and
+	 * just mark the regions as hydrated.
+	 */
+	if (bio_op(bio) == REQ_OP_DISCARD) {
+		process_discard_bio(clone, bio);
+		return DM_MAPIO_SUBMITTED;
+	}
+
+	/*
+	 * If the bio's region is hydrated, redirect it to the clone device.
+	 *
+	 * If the region is not hydrated and the bio is a READ, redirect it to
+	 * the origin device.
+	 *
+	 * Else, defer WRITE bio until after its region has been hydrated and
+	 * start the region's hydration immediately.
+	 */
+	region_nr = bio_to_region(clone, bio);
+	if (dm_clone_is_region_hydrated(clone->md, region_nr)) {
+		remap_and_issue(clone, bio);
+		return DM_MAPIO_SUBMITTED;
+	} else if (bio_data_dir(bio) == READ) {
+		remap_to_origin(clone, bio);
+		return DM_MAPIO_REMAPPED;
+	}
+
+	remap_to_clone(clone, bio);
+	hydrate_bio_region(clone, bio);
+
+	return DM_MAPIO_SUBMITTED;
+}
+
+static int clone_endio(struct dm_target *ti, struct bio *bio, blk_status_t *error)
+{
+	struct clone *clone = ti->private;
+
+	atomic_dec(&clone->ios_in_flight);
+
+	return DM_ENDIO_DONE;
+}
+
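+/* Emit the current feature arguments; used by clone_status() */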
+static void emit_flags(struct clone *clone, char *result, unsigned int maxlen,
+		       ssize_t *sz_ptr)
+{
+	ssize_t sz = *sz_ptr;
+	unsigned int count;
+
+	count = !test_bit(DM_CLONE_HYDRATION_ENABLED, &clone->flags);
+	count += !test_bit(DM_CLONE_DISCARD_PASSDOWN, &clone->flags);
+
+	DMEMIT("%u ", count);
+
+	if (!test_bit(DM_CLONE_HYDRATION_ENABLED, &clone->flags))
+		DMEMIT("no_hydration ");
+
+	if (!test_bit(DM_CLONE_DISCARD_PASSDOWN, &clone->flags))
+		DMEMIT("no_discard_passdown ");
+
+	*sz_ptr = sz;
+}
+
+static void emit_core_args(struct clone *clone, char *result,
+			   unsigned int maxlen, ssize_t *sz_ptr)
+{
+	ssize_t sz = *sz_ptr;
+	unsigned int count = 4;
+
+	DMEMIT("%u hydration_threshold %llu hydration_block_size %llu ", count,
+	       (unsigned long long)READ_ONCE(clone->hydration_threshold),
+	       (unsigned long long)READ_ONCE(clone->hydration_batch_size) << clone->region_shift);
+
+	*sz_ptr = sz;
+}
+
+/*
+ * Status format:
+ *
+ * <metadata block size> <#used metadata blocks>/<#total metadata blocks>
+ * <clone region size> <#hydrated regions>/<#total regions> <#hydrating regions>
+ * <#features> <features>* <#core args> <core args>* <clone metadata mode>
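+ *
+ * An illustrative status line (all values are hypothetical):
+ *
+ *   8 72/4096 8 0/2097152 0 0 4 hydration_threshold 2048 hydration_block_size 8 rw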
+ */
+static void clone_status(struct dm_target *ti, status_type_t type,
+			 unsigned int status_flags, char *result,
+			 unsigned int maxlen)
+{
+	int r;
+	unsigned int i;
+	ssize_t sz = 0;
+	dm_block_t nr_free_metadata_blocks = 0;
+	dm_block_t nr_metadata_blocks = 0;
+	char buf[BDEVNAME_SIZE];
+	struct clone *clone = ti->private;
+
+	switch (type) {
+	case STATUSTYPE_INFO:
+		if (get_clone_mode(clone) == CM_FAIL) {
+			DMEMIT("Fail");
+			break;
+		}
+
+		/* Commit to ensure statistics aren't out-of-date */
+		if (!(status_flags & DM_STATUS_NOFLUSH_FLAG) && !dm_suspended(ti))
+			(void) commit_metadata(clone);
+
+		r = dm_clone_get_free_metadata_block_count(clone->md, &nr_free_metadata_blocks);
+
+		if (r) {
+			DMERR("%s: dm_clone_get_free_metadata_block_count returned %d",
+			      clone_device_name(clone), r);
+			goto error;
+		}
+
+		r = dm_clone_get_metadata_dev_size(clone->md, &nr_metadata_blocks);
+
+		if (r) {
+			DMERR("%s: dm_clone_get_metadata_dev_size returned %d",
+			      clone_device_name(clone), r);
+			goto error;
+		}
+
+		DMEMIT("%u %llu/%llu %llu %lu/%lu %u ",
+		       DM_CLONE_METADATA_BLOCK_SIZE,
+		       (unsigned long long)(nr_metadata_blocks - nr_free_metadata_blocks),
+		       (unsigned long long)nr_metadata_blocks,
+		       (unsigned long long)clone->region_size,
+		       dm_clone_nr_of_hydrated_regions(clone->md),
+		       clone->nr_regions,
+		       atomic_read(&clone->hydrations_in_flight));
+
+		emit_flags(clone, result, maxlen, &sz);
+		emit_core_args(clone, result, maxlen, &sz);
+
+		switch (get_clone_mode(clone)) {
+		case CM_WRITE:
+			DMEMIT("rw");
+			break;
+		case CM_READ_ONLY:
+			DMEMIT("ro");
+			break;
+		case CM_FAIL:
+			DMEMIT("Fail");
+		}
+
+		break;
+
+	case STATUSTYPE_TABLE:
+		format_dev_t(buf, clone->metadata_dev->bdev->bd_dev);
+		DMEMIT("%s ", buf);
+
+		format_dev_t(buf, clone->clone_dev->bdev->bd_dev);
+		DMEMIT("%s ", buf);
+
+		format_dev_t(buf, clone->origin_dev->bdev->bd_dev);
+		DMEMIT("%s", buf);
+
+		for (i = 0; i < clone->nr_ctr_args; i++)
+			DMEMIT(" %s", clone->ctr_args[i]);
+	}
+
+	return;
+
+error:
+	DMEMIT("Error");
+}
+
+static int clone_is_congested(struct dm_target_callbacks *cb, int bdi_bits)
+{
+	struct request_queue *clone_q, *origin_q;
+	struct clone *clone = container_of(cb, struct clone, callbacks);
+
+	origin_q = bdev_get_queue(clone->origin_dev->bdev);
+	clone_q = bdev_get_queue(clone->clone_dev->bdev);
+
+	return (bdi_congested(clone_q->backing_dev_info, bdi_bits) |
+		bdi_congested(origin_q->backing_dev_info, bdi_bits));
+}
+
+static sector_t get_dev_size(struct dm_dev *dev)
+{
+	return i_size_read(dev->bdev->bd_inode) >> SECTOR_SHIFT;
+}
+
+/*---------------------------------------------------------------------------*/
+
+/*
+ * Construct a clone device mapping:
+ *
+ * clone <metadata dev> <clone dev> <origin dev> <region size>
+ *	[<#feature args> [<feature arg>]* [<#core args> [key value]*]]
+ *
+ * metadata dev: Fast device holding the persistent metadata
+ * clone dev: The destination device, which will become a clone of the origin
+ * origin dev: The read-only source device that gets cloned
+ * region size: dm-clone unit size in sectors
+ *
+ * #feature args: Number of feature arguments passed
+ * feature args: E.g. no_hydration, no_discard_passdown
+ *
+ * #core arguments: An even number of core arguments
+ * core arguments: Key/value pairs for tuning the core
+ *		   E.g. 'hydration_threshold 8192'
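+ *
+ * An illustrative table line (device names are hypothetical):
+ *
+ *	clone /dev/sdc /dev/sdb /dev/sda 8 1 no_hydration 2 hydration_threshold 2048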
+ */
+static int parse_feature_args(struct dm_arg_set *as, struct clone *clone)
+{
+	int r;
+	unsigned int argc;
+	const char *arg_name;
+	struct dm_target *ti = clone->ti;
+
+	const struct dm_arg args = {
+		.min = 0,
+		.max = 2,
+		.error = "Invalid number of feature arguments"
+	};
+
+	/* No feature arguments supplied */
+	if (!as->argc)
+		return 0;
+
+	r = dm_read_arg_group(&args, as, &argc, &ti->error);
+
+	if (r)
+		return r;
+
+	while (argc) {
+		arg_name = dm_shift_arg(as);
+		argc--;
+
+		if (!strcasecmp(arg_name, "no_hydration")) {
+			__clear_bit(DM_CLONE_HYDRATION_ENABLED, &clone->flags);
+		} else if (!strcasecmp(arg_name, "no_discard_passdown")) {
+			__clear_bit(DM_CLONE_DISCARD_PASSDOWN, &clone->flags);
+		} else {
+			ti->error = "Invalid feature argument";
+			return -EINVAL;
+		}
+	}
+
+	return 0;
+}
+
+static int parse_core_args(struct dm_arg_set *as, struct clone *clone)
+{
+	int r;
+	unsigned int argc;
+	unsigned long value;
+	const char *arg_name;
+	struct dm_target *ti = clone->ti;
+
+	const struct dm_arg args = {
+		.min = 0,
+		.max = 4,
+		.error = "Invalid number of core arguments"
+	};
+
+	/* Initialize core arguments */
+	clone->hydration_batch_size = DEFAULT_HYDRATION_BATCH_SIZE;
+	clone->hydration_threshold = max_t(sector_t, DEFAULT_HYDRATION_THRESHOLD,
+					   clone->region_size);
+
+	/* No core arguments supplied */
+	if (!as->argc)
+		return 0;
+
+	r = dm_read_arg_group(&args, as, &argc, &ti->error);
+
+	if (r)
+		return r;
+
+	if (argc & 1) {
+		ti->error = "Number of core arguments must be even";
+		return -EINVAL;
+	}
+
+	while (argc) {
+		arg_name = dm_shift_arg(as);
+		argc -= 2;
+
+		if (!strcasecmp(arg_name, "hydration_threshold")) {
+			if (kstrtoul(dm_shift_arg(as), 10, &value)) {
+				ti->error = "Invalid value for argument `hydration_threshold'";
+				return -EINVAL;
+			}
+			clone->hydration_threshold = value;
+		} else if (!strcasecmp(arg_name, "hydration_block_size")) {
+			if (kstrtoul(dm_shift_arg(as), 10, &value)) {
+				ti->error = "Invalid value for argument `hydration_block_size'";
+				return -EINVAL;
+			}
+			clone->hydration_batch_size = value >> clone->region_shift;
+		} else {
+			ti->error = "Invalid core argument";
+			return -EINVAL;
+		}
+	}
+
+	return 0;
+}
+
+static int parse_region_size(struct clone *clone, struct dm_arg_set *as, char **error)
+{
+	int r;
+	unsigned int region_size;
+	struct dm_arg arg;
+
+	arg.min = MIN_REGION_SIZE;
+	arg.max = MAX_REGION_SIZE;
+	arg.error = "Invalid region size";
+
+	r = dm_read_arg(&arg, as, &region_size, error);
+
+	if (r)
+		return r;
+
+	/* Check region size is a power of 2 */
+	if (!is_power_of_2(region_size)) {
+		*error = "Region size is not a power of 2";
+		return -EINVAL;
+	}
+
+	/* Validate the region size against the device logical block size */
+	if (region_size % (bdev_logical_block_size(clone->origin_dev->bdev) >> 9) ||
+	    region_size % (bdev_logical_block_size(clone->clone_dev->bdev) >> 9)) {
+		*error = "Region size is not a multiple of device logical block size";
+		return -EINVAL;
+	}
+
+	clone->region_size = region_size;
+
+	return 0;
+}
+
+static int validate_nr_regions(unsigned long n, char **error)
+{
+	/*
+	 * dm_bitset restricts us to 2^32 regions. test_bit & co. restrict us
+	 * further to 2^31 regions.
+	 */
+	if (n > (1UL << 31)) {
+		*error = "Too many regions. Consider increasing the region size";
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int parse_metadata_dev(struct clone *clone, struct dm_arg_set *as, char **error)
+{
+	int r;
+	sector_t metadata_dev_size;
+	char b[BDEVNAME_SIZE];
+
+	r = dm_get_device(clone->ti, dm_shift_arg(as), FMODE_READ | FMODE_WRITE,
+			  &clone->metadata_dev);
+
+	if (r) {
+		*error = "Error opening metadata device";
+		return r;
+	}
+
+	metadata_dev_size = get_dev_size(clone->metadata_dev);
+
+	if (metadata_dev_size > DM_CLONE_METADATA_MAX_SECTORS_WARNING)
+		DMWARN("Metadata device %s is larger than %u sectors: excess space will not be used.",
+		       bdevname(clone->metadata_dev->bdev, b), DM_CLONE_METADATA_MAX_SECTORS);
+
+	return 0;
+}
+
+static int parse_clone_dev(struct clone *clone, struct dm_arg_set *as, char **error)
+{
+	int r;
+	sector_t clone_dev_size;
+
+	r = dm_get_device(clone->ti, dm_shift_arg(as), FMODE_READ | FMODE_WRITE,
+			  &clone->clone_dev);
+
+	if (r) {
+		*error = "Error opening clone device";
+		return r;
+	}
+
+	clone_dev_size = get_dev_size(clone->clone_dev);
+
+	if (clone_dev_size < clone->ti->len) {
+		dm_put_device(clone->ti, clone->clone_dev);
+
+		*error = "Device size larger than clone device";
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int parse_origin_dev(struct clone *clone, struct dm_arg_set *as, char **error)
+{
+	int r;
+	sector_t origin_dev_size;
+
+	r = dm_get_device(clone->ti, dm_shift_arg(as), FMODE_READ,
+			  &clone->origin_dev);
+
+	if (r) {
+		*error = "Error opening origin device";
+		return r;
+	}
+
+	origin_dev_size = get_dev_size(clone->origin_dev);
+
+	if (origin_dev_size < clone->ti->len) {
+		dm_put_device(clone->ti, clone->origin_dev);
+
+		*error = "Device size larger than origin device";
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
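+/* Keep a copy of the constructor arguments, to be emitted by clone_status() */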
+static int copy_ctr_args(struct clone *clone, int argc, const char **argv, char **error)
+{
+	unsigned int i;
+	const char **copy;
+
+	copy = kcalloc(argc, sizeof(*copy), GFP_KERNEL);
+
+	if (!copy)
+		goto error;
+
+	for (i = 0; i < argc; i++) {
+		copy[i] = kstrdup(argv[i], GFP_KERNEL);
+
+		if (!copy[i]) {
+			while (i--)
+				kfree(copy[i]);
+			kfree(copy);
+			goto error;
+		}
+	}
+
+	clone->nr_ctr_args = argc;
+	clone->ctr_args = copy;
+
+	return 0;
+
+error:
+	*error = "Failed to allocate memory for table line";
+	return -ENOMEM;
+}
+
+static int clone_ctr(struct dm_target *ti, unsigned int argc, char **argv)
+{
+	int r;
+	struct clone *clone;
+	struct dm_arg_set as;
+
+	if (argc < 4) {
+		ti->error = "Invalid number of arguments";
+		return -EINVAL;
+	}
+
+	as.argc = argc;
+	as.argv = argv;
+
+	clone = kzalloc(sizeof(*clone), GFP_KERNEL);
+
+	if (!clone) {
+		ti->error = "Failed to allocate clone structure";
+		return -ENOMEM;
+	}
+
+	clone->ti = ti;
+
+	/* Initialize dm-clone flags */
+	__set_bit(DM_CLONE_HYDRATION_ENABLED, &clone->flags);
+	__set_bit(DM_CLONE_HYDRATION_SUSPENDED, &clone->flags);
+	__set_bit(DM_CLONE_DISCARD_PASSDOWN, &clone->flags);
+
+	r = parse_metadata_dev(clone, &as, &ti->error);
+
+	if (r)
+		goto out_with_clone;
+
+	r = parse_clone_dev(clone, &as, &ti->error);
+
+	if (r)
+		goto out_with_meta_dev;
+
+	r = parse_origin_dev(clone, &as, &ti->error);
+
+	if (r)
+		goto out_with_clone_dev;
+
+	r = parse_region_size(clone, &as, &ti->error);
+
+	if (r)
+		goto out_with_origin_dev;
+
+	clone->region_shift = __ffs(clone->region_size);
+	clone->nr_regions = dm_sector_div_up(ti->len, clone->region_size);
+
+	r = validate_nr_regions(clone->nr_regions, &ti->error);
+
+	if (r)
+		goto out_with_origin_dev;
+
+	r = dm_set_target_max_io_len(ti, clone->region_size);
+
+	if (r) {
+		ti->error = "Failed to set max io len";
+		goto out_with_origin_dev;
+	}
+
+	r = parse_feature_args(&as, clone);
+
+	if (r)
+		goto out_with_origin_dev;
+
+	r = parse_core_args(&as, clone);
+
+	if (r)
+		goto out_with_origin_dev;
+
+	/* Load metadata */
+	clone->md = dm_clone_metadata_open(clone->metadata_dev->bdev, ti->len,
+					   clone->region_size);
+
+	if (IS_ERR(clone->md)) {
+		ti->error = "Failed to load metadata";
+		r = PTR_ERR(clone->md);
+		goto out_with_origin_dev;
+	}
+
+	__set_clone_mode(clone, CM_WRITE);
+
+	if (get_clone_mode(clone) != CM_WRITE) {
+		ti->error = "Unable to get write access to metadata, please check/repair metadata";
+		r = -EPERM;
+		goto out_with_metadata;
+	}
+
+	clone->last_commit_jiffies = jiffies;
+
+	/* Allocate hydration hash table */
+	r = hash_table_init(clone);
+
+	if (r) {
+		ti->error = "Failed to allocate hydration hash table";
+		goto out_with_metadata;
+	}
+
+	atomic_set(&clone->ios_in_flight, 0);
+	init_waitqueue_head(&clone->hydration_stopped);
+	spin_lock_init(&clone->lock);
+	bio_list_init(&clone->deferred_bios);
+	bio_list_init(&clone->deferred_discard_bios);
+	bio_list_init(&clone->deferred_flush_bios);
+	bio_list_init(&clone->deferred_flush_completions);
+	clone->hydration_offset = 0;
+	atomic_set(&clone->hydrations_in_flight, 0);
+
+	clone->wq = alloc_workqueue("dm-" DM_MSG_PREFIX, WQ_MEM_RECLAIM, 0);
+
+	if (!clone->wq) {
+		ti->error = "Failed to allocate workqueue";
+		r = -ENOMEM;
+		goto out_with_ht;
+	}
+
+	INIT_WORK(&clone->worker, do_worker);
+	INIT_DELAYED_WORK(&clone->waker, do_waker);
+
+	clone->kcopyd_client = dm_kcopyd_client_create(&dm_kcopyd_throttle);
+
+	if (IS_ERR(clone->kcopyd_client)) {
+		r = PTR_ERR(clone->kcopyd_client);
+		goto out_with_wq;
+	}
+
+	r = mempool_init_slab_pool(&clone->hydration_pool, MIN_HYDRATIONS,
+				   _hydration_cache);
+
+	if (r) {
+		ti->error = "Failed to create dm_clone_region_hydration memory pool";
+		goto out_with_kcopyd;
+	}
+
+	/* Save a copy of the table line */
+	r = copy_ctr_args(clone, argc - 3, (const char **)argv + 3, &ti->error);
+
+	if (r)
+		goto out_with_mempool;
+
+	mutex_init(&clone->commit_lock);
+	clone->callbacks.congested_fn = clone_is_congested;
+	dm_table_add_target_callbacks(ti->table, &clone->callbacks);
+
+	/* Enable flushes */
+	ti->num_flush_bios = 1;
+	ti->flush_supported = true;
+
+	/* Enable discards */
+	ti->discards_supported = true;
+	ti->num_discard_bios = 1;
+
+	ti->private = clone;
+
+	return 0;
+
+out_with_mempool:
+	mempool_exit(&clone->hydration_pool);
+
+out_with_kcopyd:
+	dm_kcopyd_client_destroy(clone->kcopyd_client);
+
+out_with_wq:
+	destroy_workqueue(clone->wq);
+
+out_with_ht:
+	hash_table_exit(clone);
+
+out_with_metadata:
+	dm_clone_metadata_close(clone->md);
+
+out_with_origin_dev:
+	dm_put_device(ti, clone->origin_dev);
+
+out_with_clone_dev:
+	dm_put_device(ti, clone->clone_dev);
+
+out_with_meta_dev:
+	dm_put_device(ti, clone->metadata_dev);
+
+out_with_clone:
+	kfree(clone);
+
+	return r;
+}
+
+static void clone_dtr(struct dm_target *ti)
+{
+	unsigned int i;
+	struct clone *clone = ti->private;
+
+	mutex_destroy(&clone->commit_lock);
+
+	for (i = 0; i < clone->nr_ctr_args; i++)
+		kfree(clone->ctr_args[i]);
+	kfree(clone->ctr_args);
+
+	mempool_exit(&clone->hydration_pool);
+	dm_kcopyd_client_destroy(clone->kcopyd_client);
+	destroy_workqueue(clone->wq);
+	hash_table_exit(clone);
+	dm_clone_metadata_close(clone->md);
+	dm_put_device(ti, clone->origin_dev);
+	dm_put_device(ti, clone->clone_dev);
+	dm_put_device(ti, clone->metadata_dev);
+
+	kfree(clone);
+}
+
+/*---------------------------------------------------------------------------*/
+
+static void clone_postsuspend(struct dm_target *ti)
+{
+	struct clone *clone = ti->private;
+
+	/*
+	 * To successfully suspend the device:
+	 *
+	 *	- We cancel the delayed work for periodic commits and wait for
+	 *	  it to finish.
+	 *
+	 *	- We stop the background hydration, i.e. we prevent new region
+	 *	  hydrations from starting.
+	 *
+	 *	- We wait for any in-flight hydrations to finish.
+	 *
+	 *	- We flush the workqueue.
+	 *
+	 *	- We commit the metadata.
+	 */
+	cancel_delayed_work_sync(&clone->waker);
+
+	set_bit(DM_CLONE_HYDRATION_SUSPENDED, &clone->flags);
+
+	/*
+	 * Make sure set_bit() is ordered before atomic_read(), otherwise we
+	 * might race with do_hydration() and miss some started region
+	 * hydrations.
+	 *
+	 * This is paired with smp_mb__after_atomic() in do_hydration().
+	 */
+	smp_mb__after_atomic();
+
+	wait_event(clone->hydration_stopped, !atomic_read(&clone->hydrations_in_flight));
+	flush_workqueue(clone->wq);
+
+	(void) commit_metadata(clone);
+}
+
+static void clone_resume(struct dm_target *ti)
+{
+	struct clone *clone = ti->private;
+
+	clear_bit(DM_CLONE_HYDRATION_SUSPENDED, &clone->flags);
+	do_waker(&clone->waker.work);
+}
+
+static bool bdev_supports_discards(struct block_device *bdev)
+{
+	struct request_queue *q = bdev_get_queue(bdev);
+
+	return (q && blk_queue_discard(q));
+}
+
+/*
+ * If discard_passdown was enabled verify that the clone device supports
+ * discards. Disable discard_passdown if not.
+ */
+static void disable_passdown_if_not_supported(struct clone *clone)
+{
+	struct block_device *clone_dev = clone->clone_dev->bdev;
+	struct queue_limits *clone_limits = &bdev_get_queue(clone_dev)->limits;
+	const char *reason = NULL;
+	char buf[BDEVNAME_SIZE];
+
+	if (!test_bit(DM_CLONE_DISCARD_PASSDOWN, &clone->flags))
+		return;
+
+	if (!bdev_supports_discards(clone_dev))
+		reason = "discard unsupported";
+	else if (clone_limits->max_discard_sectors < clone->region_size)
+		reason = "max discard sectors smaller than a region";
+
+	if (reason) {
+		DMWARN("Clone device (%s) %s: Disabling discard passdown.",
+		       bdevname(clone_dev, buf), reason);
+		clear_bit(DM_CLONE_DISCARD_PASSDOWN, &clone->flags);
+	}
+}
+
+static void set_discard_limits(struct clone *clone, struct queue_limits *limits)
+{
+	struct block_device *clone_bdev = clone->clone_dev->bdev;
+	struct queue_limits *clone_limits = &bdev_get_queue(clone_bdev)->limits;
+
+	if (!test_bit(DM_CLONE_DISCARD_PASSDOWN, &clone->flags)) {
+		/* No passdown is done so we set our own virtual limits */
+		limits->discard_granularity = clone->region_size << SECTOR_SHIFT;
+		limits->max_discard_sectors = round_down(UINT_MAX >> SECTOR_SHIFT, clone->region_size);
+		return;
+	}
+
+	/*
+	 * clone_iterate_devices() is stacking both the origin and clone device
+	 * limits but discards aren't passed to the origin device, so inherit
+	 * clone's limits.
+	 */
+	limits->max_discard_sectors = clone_limits->max_discard_sectors;
+	limits->max_hw_discard_sectors = clone_limits->max_hw_discard_sectors;
+	limits->discard_granularity = clone_limits->discard_granularity;
+	limits->discard_alignment = clone_limits->discard_alignment;
+	limits->discard_misaligned = clone_limits->discard_misaligned;
+	limits->max_discard_segments = clone_limits->max_discard_segments;
+}
+
+static void clone_io_hints(struct dm_target *ti, struct queue_limits *limits)
+{
+	struct clone *clone = ti->private;
+	u64 io_opt_sectors = limits->io_opt >> SECTOR_SHIFT;
+
+	/*
+	 * If the system-determined stacked limits are compatible with
+	 * dm-clone's region size (io_opt is a factor) do not override them.
+	 */
+	if (io_opt_sectors < clone->region_size ||
+	    do_div(io_opt_sectors, clone->region_size)) {
+		blk_limits_io_min(limits, clone->region_size << SECTOR_SHIFT);
+		blk_limits_io_opt(limits, clone->region_size << SECTOR_SHIFT);
+	}
+
+	disable_passdown_if_not_supported(clone);
+	set_discard_limits(clone, limits);
+}
+
+static int clone_iterate_devices(struct dm_target *ti,
+				 iterate_devices_callout_fn fn, void *data)
+{
+	int ret;
+	struct clone *clone = ti->private;
+	struct dm_dev *clone_dev = clone->clone_dev;
+	struct dm_dev *origin_dev = clone->origin_dev;
+
+	ret = fn(ti, origin_dev, 0, ti->len, data);
+	if (!ret)
+		ret = fn(ti, clone_dev, 0, ti->len, data);
+	return ret;
+}
+
+/*
+ * dm-clone message functions.
+ */
+static void set_hydration_threshold(struct clone *clone, sector_t threshold)
+{
+	WRITE_ONCE(clone->hydration_threshold, threshold);
+
+	/*
+	 * If user space sets hydration_threshold to a value lower than the
+	 * region size then the hydration will stop. If at a later time
+	 * hydration_threshold is increased to a value greater or equal to the
+	 * region size we must restart the hydration process by waking up the
+	 * worker.
+	 */
+	wake_worker(clone);
+}
+
+static void set_hydration_batch_size(struct clone *clone, sector_t batch_sectors)
+{
+	WRITE_ONCE(clone->hydration_batch_size, batch_sectors >> clone->region_shift);
+}
+
+static void enable_hydration(struct clone *clone)
+{
+	if (!test_and_set_bit(DM_CLONE_HYDRATION_ENABLED, &clone->flags))
+		wake_worker(clone);
+}
+
+static void disable_hydration(struct clone *clone)
+{
+	clear_bit(DM_CLONE_HYDRATION_ENABLED, &clone->flags);
+}
+
+static int clone_message(struct dm_target *ti, unsigned int argc, char **argv,
+			 char *result, unsigned int maxlen)
+{
+	struct clone *clone = ti->private;
+	unsigned long value;
+
+	if (!argc)
+		return -EINVAL;
+
+	if (!strcasecmp(argv[0], "enable_hydration")) {
+		enable_hydration(clone);
+		return 0;
+	}
+
+	if (!strcasecmp(argv[0], "disable_hydration")) {
+		disable_hydration(clone);
+		return 0;
+	}
+
+	if (argc != 2)
+		goto bad_message;
+
+	if (!strcasecmp(argv[0], "hydration_threshold")) {
+		if (kstrtoul(argv[1], 10, &value))
+			return -EINVAL;
+
+		set_hydration_threshold(clone, value);
+
+		return 0;
+	}
+
+	if (!strcasecmp(argv[0], "hydration_block_size")) {
+		if (kstrtoul(argv[1], 10, &value))
+			return -EINVAL;
+
+		set_hydration_batch_size(clone, value);
+
+		return 0;
+	}
+
+bad_message:
+	DMERR("%s: Unsupported message `%s'", clone_device_name(clone), argv[0]);
+	return -EINVAL;
+}
+
+static struct target_type clone_target = {
+	.name = "clone",
+	.version = {1, 0, 0},
+	.module = THIS_MODULE,
+	.ctr = clone_ctr,
+	.dtr =  clone_dtr,
+	.map = clone_map,
+	.end_io = clone_endio,
+	.postsuspend = clone_postsuspend,
+	.resume = clone_resume,
+	.status = clone_status,
+	.message = clone_message,
+	.io_hints = clone_io_hints,
+	.iterate_devices = clone_iterate_devices,
+};
+
+/*---------------------------------------------------------------------------*/
+
+/* Module functions */
+static int __init dm_clone_init(void)
+{
+	int r;
+
+	_hydration_cache = KMEM_CACHE(dm_clone_region_hydration, 0);
+
+	if (!_hydration_cache)
+		return -ENOMEM;
+
+	r = dm_register_target(&clone_target);
+
+	if (r < 0) {
+		DMERR("Failed to register clone target");
+		kmem_cache_destroy(_hydration_cache);
+		return r;
+	}
+
+	return 0;
+}
+
+static void __exit dm_clone_exit(void)
+{
+	dm_unregister_target(&clone_target);
+
+	kmem_cache_destroy(_hydration_cache);
+	_hydration_cache = NULL;
+}
+
+/* Module hooks */
+module_init(dm_clone_init);
+module_exit(dm_clone_exit);
+
+MODULE_DESCRIPTION(DM_NAME " clone target");
+MODULE_AUTHOR("Nikos Tsironis <ntsironis@arrikto.com>");
+MODULE_LICENSE("GPL");
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: [RFC PATCH 1/1] dm: add clone target
  2019-07-09 14:15 ` [RFC PATCH 1/1] " Nikos Tsironis
@ 2019-07-09 21:28   ` Heinz Mauelshagen
  2019-07-10 18:45     ` Nikos Tsironis
  2019-08-29 16:19   ` Mike Snitzer
  1 sibling, 1 reply; 14+ messages in thread
From: Heinz Mauelshagen @ 2019-07-09 21:28 UTC (permalink / raw)
  To: Nikos Tsironis, snitzer, agk, dm-devel; +Cc: vkoukis, iliastsi

Hi Nikos,

what is the crucial factor your target offers vs. resynchronizing such a
latency-distinct 2-legged mirror with a read-write snapshot (local, fast
exception store) on top, tearing the mirror down keeping the local leg once
fully in sync, and merging the snapshot back into it?

Heinz

On 7/9/19 4:15 PM, Nikos Tsironis wrote:
> Add the dm-clone target, which allows cloning of arbitrary block
> devices.
>
> dm-clone produces a one-to-one copy of an existing, read-only device
> (origin) into a writable device (clone): It presents a virtual block
> device which makes all data appear immediately, and redirects reads and
> writes accordingly.
>
> The main use case of dm-clone is to clone a potentially remote,
> high-latency, read-only, archival-type block device into a writable,
> fast, primary-type device for fast, low-latency I/O. The cloned device
> is visible/mountable immediately and the copy of the origin device to
> the clone device happens in the background, in parallel with user I/O.
>
> When the cloning completes, the dm-clone table can be removed altogether
> and be replaced, e.g., by a linear table, mapping directly to the clone
> device.
>
> For further information and examples of how to use dm-clone, please read
> Documentation/device-mapper/dm-clone.rst
>
> Suggested-by: Vangelis Koukis <vkoukis@arrikto.com>
> Co-developed-by: Ilias Tsitsimpis <iliastsi@arrikto.com>
> Signed-off-by: Ilias Tsitsimpis <iliastsi@arrikto.com>
> Signed-off-by: Nikos Tsironis <ntsironis@arrikto.com>
> ---
>   Documentation/device-mapper/dm-clone.rst |  334 +++++
>   drivers/md/Kconfig                       |   13 +
>   drivers/md/Makefile                      |    2 +
>   drivers/md/dm-clone-metadata.c           |  991 +++++++++++++
>   drivers/md/dm-clone-metadata.h           |  158 +++
>   drivers/md/dm-clone-target.c             | 2244 ++++++++++++++++++++++++++++++
>   6 files changed, 3742 insertions(+)
>   create mode 100644 Documentation/device-mapper/dm-clone.rst
>   create mode 100644 drivers/md/dm-clone-metadata.c
>   create mode 100644 drivers/md/dm-clone-metadata.h
>   create mode 100644 drivers/md/dm-clone-target.c
>
> diff --git a/Documentation/device-mapper/dm-clone.rst b/Documentation/device-mapper/dm-clone.rst
> new file mode 100644
> index 000000000000..948b7ce31ce3
> --- /dev/null
> +++ b/Documentation/device-mapper/dm-clone.rst
> @@ -0,0 +1,334 @@
> +.. SPDX-License-Identifier: GPL-2.0-only
> +
> +========
> +dm-clone
> +========
> +
> +Introduction
> +============
> +
> +dm-clone is a device mapper target which produces a one-to-one copy of an
> +existing, read-only device (origin) into a writable device (clone): It presents
> +a virtual block device which makes all data appear immediately, and redirects
> +reads and writes accordingly.
> +
> +The main use case of dm-clone is to clone a potentially remote, high-latency,
> +read-only, archival-type block device into a writable, fast, primary-type device
> +for fast, low-latency I/O. The cloned device is visible/mountable immediately
> +and the copy of the origin device to the clone device happens in the background,
> +in parallel with user I/O.
> +
> +For example, one could restore an application backup from a read-only copy,
> +accessible through a network storage protocol (NBD, Fibre Channel, iSCSI, AoE,
> +etc.), into a local SSD or NVMe device, and start using the device immediately,
> +without waiting for the restore to complete.
> +
> +When the cloning completes, the dm-clone table can be removed altogether and be
> +replaced, e.g., by a linear table, mapping directly to the clone device.
> +
> +The dm-clone target reuses the metadata library used by the thin-provisioning
> +target.
> +
> +Glossary
> +========
> +
> +   Region
> +     A fixed-size block. The unit of hydration.
> +
> +   Hydration
> +     The process of filling a region of the clone device with data from the same
> +     region of the origin device, i.e., copying the region from the origin to
> +     the clone device.
> +
> +Once a region gets hydrated, we redirect all I/O targeting it to the clone
> +device.
> +
> +Design
> +======
> +
> +Sub-devices
> +-----------
> +
> +The target is constructed by passing three devices to it (along with other
> +parameters detailed later):
> +
> +1. An origin device - the read-only device that gets cloned and source of the
> +   hydration.
> +
> +2. A clone device - the destination of the hydration, which will become a clone
> +   of the origin.
> +
> +3. A small metadata device - it records which regions/blocks are already valid
> +   in the clone device, i.e., which regions have already been hydrated, or have
> +   been written to directly, via user I/O.
> +
> +The size of the clone device must be at least equal to the size of the origin
> +device.
> +
> +Regions
> +-------
> +
> +dm-clone divides the origin and clone devices into fixed-size blocks, called
> +regions. Regions are the unit of hydration, i.e., the minimum amount of data
> +copied from the origin to the clone device.
> +
> +The region size is configurable when you first create the dm-clone device. The
> +recommended region size is the same as the file system block size, which usually
> +is 4KB. The region size must be a power of two between 8 sectors (4KB) and
> +2097152 sectors (1GB).
> +
> +Reads and writes from/to hydrated regions are serviced from the clone device.
> +
> +A read to a not yet hydrated region is serviced directly from the origin device.
> +
> +A write to a not yet hydrated region is delayed until the corresponding
> +region has been hydrated; the hydration of that region starts immediately.
> +
> +Note that a write request with size equal to region size will skip copying of
> +the corresponding region from the origin device and overwrite the region of the
> +clone device directly.
> +
> +Discards
> +--------
> +
> +dm-clone interprets a discard request to a range that hasn't been hydrated yet
> +as a hint to skip hydration of the regions covered by the request, i.e., it
> +skips copying the region's data from the origin to the clone device, and only
> +updates its metadata.
> +
> +If the clone device supports discards, then by default dm-clone will pass down
> +discard requests to it.
> +
> +Background Hydration
> +--------------------
> +
> +dm-clone copies continuously from the origin to the clone device, until all of
> +the device has been copied.
> +
> +Copying data from the origin to the clone device uses bandwidth. The user can
> +set a throttle to prevent more than a certain amount of copying occurring at any
> +one time. Moreover, dm-clone takes into account user I/O traffic going to the
> +devices and pauses the background hydration when there is I/O in-flight.
> +
> +A message `hydration_threshold <#sectors>` can be used to set the maximum
> +number of sectors copied at any one time, the default being 2048 sectors (1MB).
> +
> +dm-clone employs dm-kcopyd for copying portions of the origin device to the
> +clone device. By default, we issue copy requests of size equal to the region
> +size. A message `hydration_block_size <#sectors>` can be used to tune the size
> +of these copy requests. Increasing the hydration block size results in dm-clone
> +trying to batch together contiguous regions, so we copy the data in blocks of
> +this size.
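> +
> +For example, assuming a dm-clone device named 'clone', one could raise the
> +hydration threshold to 4096 sectors (2MB) and batch hydrations in blocks of
> +128 sectors (64KB):
> +
> +  ::
> +
> +   dmsetup message clone 0 hydration_threshold 4096
> +   dmsetup message clone 0 hydration_block_size 128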
> +
> +When the hydration of the clone device finishes, a dm event will be sent to user
> +space.
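> +
> +User space can block until this event arrives, e.g., with dmsetup (again
> +assuming a device named 'clone'):
> +
> +  ::
> +
> +   # Returns when the next dm event is raised; check the status afterwards,
> +   # since metadata mode changes also generate dm events.
> +   dmsetup wait clone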
> +
> +Updating on-disk metadata
> +-------------------------
> +
> +On-disk metadata is committed every time a FLUSH or FUA bio is written. If no
> +such requests are made then commits will occur every second. This means the
> +dm-clone device behaves like a physical disk that has a volatile write cache. If
> +power is lost you may lose some recent writes. The metadata should always be
> +consistent in spite of any crash.
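> +
> +For example, if a file system is mounted on top of the dm-clone device, an
> +fsync results in a FLUSH/FUA bio reaching dm-clone and thus forces a metadata
> +commit:
> +
> +  ::
> +
> +   # Assuming a file system mounted at /mnt/cloned-fs; conv=fsync makes dd
> +   # fsync the file, flushing down to the dm-clone device.
> +   dd if=/dev/zero of=/mnt/cloned-fs/file bs=4k count=1 conv=fsync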
> +
> +Target Interface
> +================
> +
> +Constructor
> +-----------
> +
> +  ::
> +
> +   clone <metadata dev> <clone dev> <origin dev> <region size>
> +         [<#feature args> [<feature arg>]* [<#core args> [<core arg>]*]]
> +
> + =============== ==============================================================
> + metadata dev    Fast device holding the persistent metadata
> + clone dev       The destination device, where the origin will be cloned
> + origin dev      Read-only device containing the data that gets cloned
> + region size     The size of a region in sectors
> +
> + #feature args   Number of feature arguments passed
> + feature args    no_hydration or no_discard_passdown
> +
> + #core args      An even number of arguments corresponding to key/value pairs
> +                 passed to dm-clone
> + core args       Key/value pairs passed to dm-clone, e.g. `hydration_threshold
> +                 2048`
> + =============== ==============================================================
> +
> +Optional feature arguments are:
> +
> + ==================== =========================================================
> + no_hydration         Create a dm-clone instance with background hydration
> +                      disabled
> + no_discard_passdown  Disable passing down discards to the clone device
> + ==================== =========================================================
> +
> +Optional core arguments are:
> +
> + ================================ ==============================================
> + hydration_threshold <#sectors>   Maximum number of sectors being copied from
> +                                  the origin to the clone device at any one
> +                                  time, during background hydration.
> + hydration_block_size <#sectors>  During background hydration, try to batch
> +                                  together contiguous regions, so we copy data
> +                                  from the origin to the clone device in blocks
> +                                  of this size.
> + ================================ ==============================================
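> +
> +As a minimal illustration, assuming hypothetical device nodes, the following
> +creates a clone of a ~500GB (1048576000 sectors) device with a 4KB region
> +size, background hydration disabled, and a hydration threshold of 1024
> +sectors:
> +
> +  ::
> +
> +   dmsetup create clone --table "0 1048576000 clone /dev/sdc /dev/sdb \
> +     /dev/nbd0 8 1 no_hydration 2 hydration_threshold 1024"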
> +
> +Status
> +------
> +
> +  ::
> +
> +   <metadata block size> <#used metadata blocks>/<#total metadata blocks>
> +   <region size> <#hydrated regions>/<#total regions> <#hydrating regions>
> +   <#feature args> <feature args>* <#core args> <core args>*
> +   <clone metadata mode>
> +
> + ======================= =======================================================
> + metadata block size     Fixed block size for each metadata block in sectors
> + #used metadata blocks   Number of metadata blocks used
> + #total metadata blocks  Total number of metadata blocks
> + region size             Configurable region size for the device in sectors
> + #hydrated regions       Number of regions that have finished hydrating
> + #total regions          Total number of regions to hydrate
> + #hydrating regions      Number of regions currently hydrating
> + #feature args           Number of feature arguments to follow
> + feature args            Feature arguments, e.g. `no_hydration`
> + #core args              Even number of core arguments to follow
> + core args               Key/value pairs for tuning the core, e.g.
> +                         `hydration_threshold 2048`
> + clone metadata mode     ro if read-only, rw if read-write
> +
> +                         In serious cases where even a read-only mode is deemed
> +                         unsafe no further I/O will be permitted and the status
> +                         will just contain the string 'Fail'. If the metadata
> +                         mode changes, a dm event will be sent to user space.
> + ======================= =======================================================
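> +
> +A hypothetical status line for a partially hydrated ~500GB device, created
> +with a 4KB region size and `hydration_threshold 2048`, might look like:
> +
> +  ::
> +
> +   $ dmsetup status clone
> +   0 1048576000 clone 8 23/4096 8 73728/131072000 4 0 2 hydration_threshold 2048 rw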
> +
> +Messages
> +--------
> +
> +  `disable_hydration`
> +      Disable the background hydration of the clone device.
> +
> +  `enable_hydration`
> +      Enable the background hydration of the clone device.
> +
> +  `hydration_threshold <#sectors>`
> +      Set background hydration threshold.
> +
> +  `hydration_block_size <#sectors>`
> +      Set background hydration block size.
> +
> +Examples
> +========
> +
> +Clone a device containing a file system
> +---------------------------------------
> +
> +1. Create the dm-clone device.
> +
> +   ::
> +
> +    dmsetup create clone --table "0 1048576000 clone $metadata_dev $clone_dev \
> +      $origin_dev 8 1 no_hydration"
> +
> +2. Mount the device and trim the file system. dm-clone interprets the discards
> +   sent by the file system, so it will not hydrate the unused space.
> +
> +   ::
> +
> +    mount /dev/mapper/clone /mnt/cloned-fs
> +    fstrim /mnt/cloned-fs
> +
> +3. Enable background hydration of the clone device.
> +
> +   ::
> +
> +    dmsetup message clone 0 enable_hydration
> +
> +4. When the hydration finishes, we can replace the dm-clone table with a linear
> +   table.
> +
> +   ::
> +
> +    dmsetup suspend clone
> +    dmsetup load clone --table "0 1048576000 linear $clone_dev 0"
> +    dmsetup resume clone
> +
> +   The metadata device is no longer needed and can be safely discarded or reused
> +   for other purposes.
> +
> +Known issues
> +============
> +
> +1. We redirect reads to not-yet-cloned regions to the origin device. If
> +   reading the origin device has high latency and the user repeatedly reads from
> +   the same regions, this behaviour could degrade performance. We should use
> +   these reads as hints to hydrate the relevant regions sooner. Currently, we
> +   rely on the page cache to cache these regions, so we hopefully don't end up
> +   reading them multiple times from the origin.
> +
> +2. Release in-core resources, i.e., the bitmaps tracking which blocks are
> +   cloned, after the cloning has finished.
> +
> +3. During background hydration, if we fail to read the origin or write to the
> +   clone device, we print an error message, but the cloning process continues
> +   indefinitely, until it succeeds. We should stop the background hydration
> +   after a number of failures and emit a dm event for user space to notice.
> +
> +Why not...?
> +===========
> +
> +We explored the following alternatives before implementing dm-clone:
> +
> +1. Use dm-cache with cache size equal to origin and implement a new cloning
> +   policy:
> +
> +   * The resulting cache device is not a one-to-one mirror of the origin device
> +     and thus we cannot remove the cache device once cloning completes.
> +
> +   * dm-cache writes to the origin device, which violates our requirement that
> +     the origin device must be treated as read-only.
> +
> +   * Caching is semantically different from cloning.
> +
> +2. Use dm-snapshot with a COW device equal to the origin:
> +
> +   * dm-snapshot stores its metadata in the COW device, so the resulting device
> +     is not a one-to-one mirror of the origin.
> +
> +   * No background copying mechanism.
> +
> +   * dm-snapshot needs to commit its metadata whenever a pending exception
> +     completes, to ensure snapshot consistency. In the case of cloning, we don't
> +     need to be so strict and can rely on committing metadata every time a FLUSH
> +     or FUA bio is written, or periodically, like dm-thin and dm-cache do. This
> +     improves the performance significantly.
> +
> +3. Use dm-mirror: The mirror target has a background copying/mirroring
> +   mechanism, but it writes to all mirrors, thus violating our requirement that
> +   the origin device must be treated as read-only.
> +
> +4. Use dm-thin's external snapshot functionality. This approach is the most
> +   promising among all alternatives, as the thinly-provisioned volume is a
> +   one-to-one mirror of the origin and handles reads and writes to
> +   un-provisioned/not-yet-cloned areas the same way as dm-clone does.
> +
> +   Still:
> +
> +   * There is no background copying mechanism, though one could be implemented.
> +
> +   * Most importantly, we want to support arbitrary block devices as the
> +     destination of the cloning process and not restrict ourselves to
> +     thinly-provisioned volumes. Thin-provisioning has an inherent metadata
> +     overhead, for maintaining the thin volume mappings, which significantly
> +     degrades performance.
> +
> +   Moreover, cloning a device shouldn't force the use of thin-provisioning. On
> +   the other hand, if we wish to use thin provisioning, we can just use a thin
> +   LV as dm-clone's destination/clone device.
> diff --git a/drivers/md/Kconfig b/drivers/md/Kconfig
> index 45254b3ef715..15e6dedf24ea 100644
> --- a/drivers/md/Kconfig
> +++ b/drivers/md/Kconfig
> @@ -346,6 +346,19 @@ config DM_ERA
>            over time.  Useful for maintaining cache coherency when using
>            vendor snapshots.
>   
> +config DM_CLONE
> +       tristate "Clone target (EXPERIMENTAL)"
> +       depends on BLK_DEV_DM
> +       default n
> +       select DM_PERSISTENT_DATA
> +       ---help---
> +         dm-clone produces a one-to-one copy of an existing, read-only device
> +         (origin) into a writable device (clone). The cloned device is
> +         visible/mountable immediately and the copy of the origin device to the
> +         clone device happens in the background, in parallel with user I/O.
> +
> +         If unsure, say N.
> +
>   config DM_MIRROR
>          tristate "Mirror target"
>          depends on BLK_DEV_DM
> diff --git a/drivers/md/Makefile b/drivers/md/Makefile
> index be7a6eb92abc..b3296e3a7116 100644
> --- a/drivers/md/Makefile
> +++ b/drivers/md/Makefile
> @@ -18,6 +18,7 @@ dm-cache-y	+= dm-cache-target.o dm-cache-metadata.o dm-cache-policy.o \
>   		    dm-cache-background-tracker.o
>   dm-cache-smq-y   += dm-cache-policy-smq.o
>   dm-era-y	+= dm-era-target.o
> +dm-clone-y	+= dm-clone-target.o dm-clone-metadata.o
>   dm-verity-y	+= dm-verity-target.o
>   md-mod-y	+= md.o md-bitmap.o
>   raid456-y	+= raid5.o raid5-cache.o raid5-ppl.o
> @@ -65,6 +66,7 @@ obj-$(CONFIG_DM_VERITY)		+= dm-verity.o
>   obj-$(CONFIG_DM_CACHE)		+= dm-cache.o
>   obj-$(CONFIG_DM_CACHE_SMQ)	+= dm-cache-smq.o
>   obj-$(CONFIG_DM_ERA)		+= dm-era.o
> +obj-$(CONFIG_DM_CLONE)		+= dm-clone.o
>   obj-$(CONFIG_DM_LOG_WRITES)	+= dm-log-writes.o
>   obj-$(CONFIG_DM_INTEGRITY)	+= dm-integrity.o
>   obj-$(CONFIG_DM_ZONED)		+= dm-zoned.o
> diff --git a/drivers/md/dm-clone-metadata.c b/drivers/md/dm-clone-metadata.c
> new file mode 100644
> index 000000000000..db2f86d8356b
> --- /dev/null
> +++ b/drivers/md/dm-clone-metadata.c
> @@ -0,0 +1,991 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * Copyright (C) 2019 Arrikto, Inc. All Rights Reserved.
> + */
> +
> +#include <linux/err.h>
> +#include <linux/slab.h>
> +#include <linux/rwsem.h>
> +#include <linux/bitops.h>
> +#include <linux/bitmap.h>
> +#include <linux/device-mapper.h>
> +
> +#include "persistent-data/dm-bitset.h"
> +#include "persistent-data/dm-space-map.h"
> +#include "persistent-data/dm-block-manager.h"
> +#include "persistent-data/dm-transaction-manager.h"
> +
> +#include "dm-clone-metadata.h"
> +
> +#define DM_MSG_PREFIX "clone metadata"
> +
> +#define SUPERBLOCK_LOCATION 0
> +#define SUPERBLOCK_MAGIC 0x8af27f64
> +#define SUPERBLOCK_CSUM_XOR 257649492
> +
> +#define DM_CLONE_MAX_CONCURRENT_LOCKS 5
> +
> +#define UUID_LEN 16
> +
> +/* Min and max dm-clone metadata versions supported */
> +#define DM_CLONE_MIN_METADATA_VERSION 1
> +#define DM_CLONE_MAX_METADATA_VERSION 1
> +
> +/*
> + * On-disk metadata layout
> + */
> +struct superblock_disk {
> +	__le32 csum;
> +	__le32 flags;
> +	__le64 blocknr;
> +
> +	__u8 uuid[UUID_LEN];
> +	__le64 magic;
> +	__le32 version;
> +
> +	__u8 metadata_space_map_root[SPACE_MAP_ROOT_SIZE];
> +
> +	__le64 region_size;
> +	__le64 target_size;
> +
> +	__le64 bitset_root;
> +} __packed;
> +
> +/*
> + * Region and Dirty bitmaps.
> + *
> + * dm-clone logically splits the origin and clone devices in regions of fixed
> + * size. The clone device's regions are gradually hydrated, i.e., we copy
> + * (clone) the origin's regions to the clone device. Eventually, all regions
> + * will get hydrated and all I/O will be served from the clone device.
> + *
> + * We maintain an on-disk bitmap which tracks the state of each of the clone
> + * device's regions, i.e., whether they are hydrated or not.
> + *
> + * To avoid constantly doing lookups on disk, we keep an in-core copy of the
> + * on-disk bitmap, the region_map.
> + *
> + * To further reduce metadata I/O overhead we use a second bitmap, the dmap
> + * (dirty bitmap), which tracks the dirty words, i.e. longs, of the region_map.
> + *
> + * When a region finishes hydrating dm-clone calls
> + * dm_clone_set_region_hydrated(), or for discard requests
> + * dm_clone_cond_set_range(), which sets the corresponding bits in region_map
> + * and dmap.
> + *
> + * During a metadata commit we scan the dmap for dirty region_map words (longs)
> + * and update accordingly the on-disk metadata. Thus, we don't have to flush to
> + * disk the whole region_map. We can just flush the dirty region_map words.
> + *
> + * We use a dirty bitmap, which is smaller than the original region_map, to
> + * reduce the amount of memory accesses during a metadata commit. As dm-bitset
> + * accesses the on-disk bitmap in 64-bit word granularity, there is no
> + * significant benefit in tracking the dirty region_map bits with a smaller
> + * granularity.
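> + *
> + * For example, with 64-bit longs a single dmap bit covers 64 region_map bits,
> + * i.e., 64 regions, or 256KB of the clone device for a 4KB region size.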
> + *
> + * We could update the on-disk bitmap directly, when dm-clone calls either
> + * dm_clone_set_region_hydrated() or dm_clone_cond_set_range(), but this would
> + * insert significant metadata I/O overhead in dm-clone's I/O path. Also, as
> + * these two functions don't block, we can call them in interrupt context,
> + * e.g., in a hooked overwrite bio's completion routine, and further reduce
> + * the I/O completion latency.
> + *
> + * We maintain two dirty bitmaps. During a metadata commit we atomically swap
> + * the currently used dmap with the unused one. This allows the metadata update
> + * functions to run concurrently with an ongoing commit.
> + */
> +struct dirty_map {
> +	unsigned long *dirty_words;
> +	unsigned int changed;
> +};
> +
> +struct dm_clone_metadata {
> +	/* The metadata block device */
> +	struct block_device *bdev;
> +
> +	sector_t target_size;
> +	sector_t region_size;
> +	unsigned long nr_regions;
> +	unsigned long nr_words;
> +
> +	/* Spinlock protecting the region and dirty bitmaps. */
> +	spinlock_t bitmap_lock;
> +	struct dirty_map dmap[2];
> +	struct dirty_map *current_dmap;
> +
> +	/*
> +	 * In-core copy of the on-disk bitmap, to avoid constantly doing
> +	 * lookups on disk.
> +	 */
> +	unsigned long *region_map;
> +
> +	/* Protected by bitmap_lock */
> +	unsigned int read_only;
> +
> +	struct dm_block_manager *bm;
> +	struct dm_space_map *sm;
> +	struct dm_transaction_manager *tm;
> +
> +	struct rw_semaphore lock;
> +
> +	struct dm_disk_bitset bitset_info;
> +	dm_block_t bitset_root;
> +
> +	/*
> +	 * Reading the space map root can fail, so we read it into this
> +	 * buffer before the superblock is locked and updated.
> +	 */
> +	__u8 metadata_space_map_root[SPACE_MAP_ROOT_SIZE];
> +
> +	bool hydration_done:1;
> +	bool fail_io:1;
> +};
> +
> +/*---------------------------------------------------------------------------*/
> +
> +/*
> + * Superblock validation.
> + */
> +static void sb_prepare_for_write(struct dm_block_validator *v,
> +				 struct dm_block *b, size_t sb_block_size)
> +{
> +	struct superblock_disk *sb;
> +	u32 csum;
> +
> +	sb = dm_block_data(b);
> +	sb->blocknr = cpu_to_le64(dm_block_location(b));
> +
> +	csum = dm_bm_checksum(&sb->flags, sb_block_size - sizeof(__le32),
> +			      SUPERBLOCK_CSUM_XOR);
> +	sb->csum = cpu_to_le32(csum);
> +}
> +
> +static int sb_check(struct dm_block_validator *v, struct dm_block *b,
> +		    size_t sb_block_size)
> +{
> +	struct superblock_disk *sb;
> +	u32 csum, metadata_version;
> +
> +	sb = dm_block_data(b);
> +
> +	if (dm_block_location(b) != le64_to_cpu(sb->blocknr)) {
> +		DMERR("Superblock check failed: blocknr %llu, expected %llu",
> +		      le64_to_cpu(sb->blocknr),
> +		      (unsigned long long)dm_block_location(b));
> +		return -ENOTBLK;
> +	}
> +
> +	if (le64_to_cpu(sb->magic) != SUPERBLOCK_MAGIC) {
> +		DMERR("Superblock check failed: magic %llu, expected %llu",
> +		      le64_to_cpu(sb->magic),
> +		      (unsigned long long)SUPERBLOCK_MAGIC);
> +		return -EILSEQ;
> +	}
> +
> +	csum = dm_bm_checksum(&sb->flags, sb_block_size - sizeof(__le32),
> +			      SUPERBLOCK_CSUM_XOR);
> +
> +	if (sb->csum != cpu_to_le32(csum)) {
> +		DMERR("Superblock check failed: checksum %u, expected %u",
> +		      csum, le32_to_cpu(sb->csum));
> +		return -EILSEQ;
> +	}
> +
> +	/* Check metadata version */
> +	metadata_version = le32_to_cpu(sb->version);
> +
> +	if (metadata_version < DM_CLONE_MIN_METADATA_VERSION ||
> +	    metadata_version > DM_CLONE_MAX_METADATA_VERSION) {
> +		DMERR("Clone metadata version %u found, but only versions between %u and %u supported.",
> +		      metadata_version, DM_CLONE_MIN_METADATA_VERSION,
> +		      DM_CLONE_MAX_METADATA_VERSION);
> +		return -EINVAL;
> +	}
> +
> +	return 0;
> +}
> +
> +static struct dm_block_validator sb_validator = {
> +	.name = "superblock",
> +	.prepare_for_write = sb_prepare_for_write,
> +	.check = sb_check
> +};
> +
> +/*
> + * Check if the superblock is formatted or not. We consider the superblock to
> + * be formatted if we find non-zero bytes in it.
> + */
> +static int __superblock_all_zeroes(struct dm_block_manager *bm, bool *formatted)
> +{
> +	int r;
> +	unsigned int i, nr_words;
> +	struct dm_block *sblock;
> +	__le64 *data_le, zero = cpu_to_le64(0);
> +
> +	/*
> +	 * We don't use a validator here because the superblock could be all
> +	 * zeroes.
> +	 */
> +	r = dm_bm_read_lock(bm, SUPERBLOCK_LOCATION, NULL, &sblock);
> +
> +	if (r) {
> +		DMERR("Failed to read_lock superblock");
> +		return r;
> +	}
> +
> +	data_le = dm_block_data(sblock);
> +	*formatted = false;
> +
> +	/* This assumes that the block size is a multiple of 8 bytes */
> +	BUG_ON(dm_bm_block_size(bm) % sizeof(__le64));
> +	nr_words = dm_bm_block_size(bm) / sizeof(__le64);
> +	for (i = 0; i < nr_words; i++) {
> +		if (data_le[i] != zero) {
> +			*formatted = true;
> +			break;
> +		}
> +	}
> +
> +	dm_bm_unlock(sblock);
> +
> +	return 0;
> +}
> +
> +/*---------------------------------------------------------------------------*/
> +
> +/*
> + * Low-level metadata handling.
> + */
> +static inline int superblock_read_lock(struct dm_clone_metadata *md,
> +				       struct dm_block **sblock)
> +{
> +	return dm_bm_read_lock(md->bm, SUPERBLOCK_LOCATION, &sb_validator, sblock);
> +}
> +
> +static inline int superblock_write_lock(struct dm_clone_metadata *md,
> +					struct dm_block **sblock)
> +{
> +	return dm_bm_write_lock(md->bm, SUPERBLOCK_LOCATION, &sb_validator, sblock);
> +}
> +
> +static inline int superblock_write_lock_zero(struct dm_clone_metadata *md,
> +					     struct dm_block **sblock)
> +{
> +	return dm_bm_write_lock_zero(md->bm, SUPERBLOCK_LOCATION, &sb_validator, sblock);
> +}
> +
> +static int __copy_sm_root(struct dm_clone_metadata *md)
> +{
> +	int r;
> +	size_t root_size;
> +
> +	r = dm_sm_root_size(md->sm, &root_size);
> +
> +	if (r)
> +		return r;
> +
> +	return dm_sm_copy_root(md->sm, &md->metadata_space_map_root, root_size);
> +}
> +
> +/* Save dm-clone metadata in superblock */
> +static void __prepare_superblock(struct dm_clone_metadata *md,
> +				 struct superblock_disk *sb)
> +{
> +	sb->flags = cpu_to_le32(0UL);
> +
> +	/* FIXME: UUID is currently unused */
> +	memset(sb->uuid, 0, sizeof(sb->uuid));
> +
> +	sb->magic = cpu_to_le64(SUPERBLOCK_MAGIC);
> +	sb->version = cpu_to_le32(DM_CLONE_MAX_METADATA_VERSION);
> +
> +	/* Save the metadata space_map root */
> +	memcpy(&sb->metadata_space_map_root, &md->metadata_space_map_root,
> +	       sizeof(md->metadata_space_map_root));
> +
> +	sb->region_size = cpu_to_le64(md->region_size);
> +	sb->target_size = cpu_to_le64(md->target_size);
> +	sb->bitset_root = cpu_to_le64(md->bitset_root);
> +}
> +
> +static int __open_metadata(struct dm_clone_metadata *md)
> +{
> +	int r;
> +	struct dm_block *sblock;
> +	struct superblock_disk *sb;
> +
> +	r = superblock_read_lock(md, &sblock);
> +
> +	if (r) {
> +		DMERR("Failed to read_lock superblock");
> +		return r;
> +	}
> +
> +	sb = dm_block_data(sblock);
> +
> +	/* Verify that target_size and region_size haven't changed. */
> +	if (md->region_size != le64_to_cpu(sb->region_size) ||
> +	    md->target_size != le64_to_cpu(sb->target_size)) {
> +		DMERR("Region and/or target size don't match the ones in metadata");
> +		r = -EINVAL;
> +		goto out_with_lock;
> +	}
> +
> +	r = dm_tm_open_with_sm(md->bm, SUPERBLOCK_LOCATION,
> +			       sb->metadata_space_map_root,
> +			       sizeof(sb->metadata_space_map_root),
> +			       &md->tm, &md->sm);
> +
> +	if (r) {
> +		DMERR("dm_tm_open_with_sm failed");
> +		goto out_with_lock;
> +	}
> +
> +	dm_disk_bitset_init(md->tm, &md->bitset_info);
> +	md->bitset_root = le64_to_cpu(sb->bitset_root);
> +
> +out_with_lock:
> +	dm_bm_unlock(sblock);
> +
> +	return r;
> +}
> +
> +static int __format_metadata(struct dm_clone_metadata *md)
> +{
> +	int r;
> +	struct dm_block *sblock;
> +	struct superblock_disk *sb;
> +
> +	r = dm_tm_create_with_sm(md->bm, SUPERBLOCK_LOCATION, &md->tm, &md->sm);
> +
> +	if (r) {
> +		DMERR("Failed to create transaction manager");
> +		return r;
> +	}
> +
> +	dm_disk_bitset_init(md->tm, &md->bitset_info);
> +
> +	r = dm_bitset_empty(&md->bitset_info, &md->bitset_root);
> +
> +	if (r) {
> +		DMERR("Failed to create empty on-disk bitset");
> +		goto err_with_tm;
> +	}
> +
> +	r = dm_bitset_resize(&md->bitset_info, md->bitset_root, 0,
> +			     md->nr_regions, false, &md->bitset_root);
> +
> +	if (r) {
> +		DMERR("Failed to resize on-disk bitset to %lu entries", md->nr_regions);
> +		goto err_with_tm;
> +	}
> +
> +	/* Flush to disk all blocks, except the superblock */
> +	r = dm_tm_pre_commit(md->tm);
> +
> +	if (r) {
> +		DMERR("dm_tm_pre_commit failed");
> +		goto err_with_tm;
> +	}
> +
> +	r = __copy_sm_root(md);
> +
> +	if (r) {
> +		DMERR("__copy_sm_root failed");
> +		goto err_with_tm;
> +	}
> +
> +	r = superblock_write_lock_zero(md, &sblock);
> +
> +	if (r) {
> +		DMERR("Failed to write_lock superblock");
> +		goto err_with_tm;
> +	}
> +
> +	sb = dm_block_data(sblock);
> +	__prepare_superblock(md, sb);
> +	r = dm_tm_commit(md->tm, sblock);
> +
> +	if (r) {
> +		DMERR("Failed to commit superblock");
> +		goto err_with_tm;
> +	}
> +
> +	return 0;
> +
> +err_with_tm:
> +	dm_sm_destroy(md->sm);
> +	dm_tm_destroy(md->tm);
> +
> +	return r;
> +}
> +
> +static int __open_or_format_metadata(struct dm_clone_metadata *md, bool may_format_device)
> +{
> +	int r;
> +	bool formatted = false;
> +
> +	r = __superblock_all_zeroes(md->bm, &formatted);
> +
> +	if (r)
> +		return r;
> +
> +	if (!formatted)
> +		return may_format_device ? __format_metadata(md) : -EPERM;
> +
> +	return __open_metadata(md);
> +}
> +
> +static int __create_persistent_data_structures(struct dm_clone_metadata *md,
> +					       bool may_format_device)
> +{
> +	int r;
> +
> +	/* Create block manager */
> +	md->bm = dm_block_manager_create(md->bdev,
> +					 DM_CLONE_METADATA_BLOCK_SIZE << SECTOR_SHIFT,
> +					 DM_CLONE_MAX_CONCURRENT_LOCKS);
> +
> +	if (IS_ERR(md->bm)) {
> +		DMERR("Failed to create block manager");
> +		return PTR_ERR(md->bm);
> +	}
> +
> +	r = __open_or_format_metadata(md, may_format_device);
> +
> +	if (r)
> +		dm_block_manager_destroy(md->bm);
> +
> +	return r;
> +}
> +
> +static void __destroy_persistent_data_structures(struct dm_clone_metadata *md)
> +{
> +	dm_sm_destroy(md->sm);
> +	dm_tm_destroy(md->tm);
> +	dm_block_manager_destroy(md->bm);
> +}
> +
> +/*---------------------------------------------------------------------------*/
> +
> +static size_t bitmap_size(unsigned long nr_bits)
> +{
> +	return BITS_TO_LONGS(nr_bits) * sizeof(long);
> +}
> +
> +static int dirty_map_init(struct dm_clone_metadata *md)
> +{
> +	md->dmap[0].changed = 0;
> +	md->dmap[0].dirty_words = vzalloc(bitmap_size(md->nr_words));
> +
> +	if (!md->dmap[0].dirty_words) {
> +		DMERR("Failed to allocate dirty bitmap");
> +		return -ENOMEM;
> +	}
> +
> +	md->dmap[1].changed = 0;
> +	md->dmap[1].dirty_words = vzalloc(bitmap_size(md->nr_words));
> +
> +	if (!md->dmap[1].dirty_words) {
> +		DMERR("Failed to allocate dirty bitmap");
> +		vfree(md->dmap[0].dirty_words);
> +		return -ENOMEM;
> +	}
> +
> +	md->current_dmap = &md->dmap[0];
> +
> +	return 0;
> +}
> +
> +static void dirty_map_exit(struct dm_clone_metadata *md)
> +{
> +	vfree(md->dmap[0].dirty_words);
> +	vfree(md->dmap[1].dirty_words);
> +}
> +
> +static int __load_bitset_in_core(struct dm_clone_metadata *md)
> +{
> +	int r;
> +	unsigned long i;
> +	struct dm_bitset_cursor c;
> +
> +	/* Flush bitset cache */
> +	r = dm_bitset_flush(&md->bitset_info, md->bitset_root, &md->bitset_root);
> +
> +	if (r)
> +		return r;
> +
> +	r = dm_bitset_cursor_begin(&md->bitset_info, md->bitset_root, md->nr_regions, &c);
> +
> +	if (r)
> +		return r;
> +
> +	for (i = 0; ; i++) {
> +		if (dm_bitset_cursor_get_value(&c))
> +			__set_bit(i, md->region_map);
> +		else
> +			__clear_bit(i, md->region_map);
> +
> +		if (i >= (md->nr_regions - 1))
> +			break;
> +
> +		r = dm_bitset_cursor_next(&c);
> +
> +		if (r)
> +			break;
> +	}
> +
> +	dm_bitset_cursor_end(&c);
> +
> +	return r;
> +}
> +
> +struct dm_clone_metadata *dm_clone_metadata_open(struct block_device *bdev,
> +						 sector_t target_size,
> +						 sector_t region_size)
> +{
> +	int r;
> +	struct dm_clone_metadata *md;
> +
> +	md = kzalloc(sizeof(*md), GFP_KERNEL);
> +
> +	if (!md) {
> +		DMERR("Failed to allocate memory for dm-clone metadata");
> +		return ERR_PTR(-ENOMEM);
> +	}
> +
> +	md->bdev = bdev;
> +	md->target_size = target_size;
> +	md->region_size = region_size;
> +	md->nr_regions = dm_sector_div_up(md->target_size, md->region_size);
> +	md->nr_words = BITS_TO_LONGS(md->nr_regions);
> +
> +	init_rwsem(&md->lock);
> +	spin_lock_init(&md->bitmap_lock);
> +	md->read_only = 0;
> +	md->fail_io = false;
> +	md->hydration_done = false;
> +
> +	md->region_map = vmalloc(bitmap_size(md->nr_regions));
> +
> +	if (!md->region_map) {
> +		DMERR("Failed to allocate memory for region bitmap");
> +		r = -ENOMEM;
> +		goto out_with_md;
> +	}
> +
> +	r = __create_persistent_data_structures(md, true);
> +
> +	if (r)
> +		goto out_with_region_map;
> +
> +	r = __load_bitset_in_core(md);
> +
> +	if (r) {
> +		DMERR("Failed to load on-disk region map");
> +		goto out_with_pds;
> +	}
> +
> +	r = dirty_map_init(md);
> +
> +	if (r)
> +		goto out_with_pds;
> +
> +	if (bitmap_full(md->region_map, md->nr_regions))
> +		md->hydration_done = true;
> +
> +	return md;
> +
> +out_with_pds:
> +	__destroy_persistent_data_structures(md);
> +
> +out_with_region_map:
> +	vfree(md->region_map);
> +
> +out_with_md:
> +	kfree(md);
> +
> +	return ERR_PTR(r);
> +}
> +
> +void dm_clone_metadata_close(struct dm_clone_metadata *md)
> +{
> +	if (!md->fail_io)
> +		__destroy_persistent_data_structures(md);
> +
> +	dirty_map_exit(md);
> +	vfree(md->region_map);
> +	kfree(md);
> +}
> +
> +bool dm_clone_is_hydration_done(struct dm_clone_metadata *md)
> +{
> +	return md->hydration_done;
> +}
> +
> +bool dm_clone_is_region_hydrated(struct dm_clone_metadata *md, unsigned long region_nr)
> +{
> +	return dm_clone_is_hydration_done(md) || test_bit(region_nr, md->region_map);
> +}
> +
> +bool dm_clone_is_range_hydrated(struct dm_clone_metadata *md,
> +				unsigned long start, unsigned long nr_regions)
> +{
> +	unsigned long bit;
> +
> +	if (dm_clone_is_hydration_done(md))
> +		return true;
> +
> +	bit = find_next_zero_bit(md->region_map, md->nr_regions, start);
> +
> +	return (bit >= (start + nr_regions));
> +}
> +
> +unsigned long dm_clone_nr_of_hydrated_regions(struct dm_clone_metadata *md)
> +{
> +	return bitmap_weight(md->region_map, md->nr_regions);
> +}
> +
> +unsigned long dm_clone_find_next_unhydrated_region(struct dm_clone_metadata *md,
> +						   unsigned long start)
> +{
> +	return find_next_zero_bit(md->region_map, md->nr_regions, start);
> +}
> +
> +static int __update_metadata_word(struct dm_clone_metadata *md, unsigned long word)
> +{
> +	int r;
> +	unsigned long index = word * BITS_PER_LONG;
> +	unsigned long max_index = min(md->nr_regions, (word + 1) * BITS_PER_LONG);
> +
> +	while (index < max_index) {
> +		if (test_bit(index, md->region_map)) {
> +			r = dm_bitset_set_bit(&md->bitset_info, md->bitset_root,
> +					      index, &md->bitset_root);
> +
> +			if (r) {
> +				DMERR("dm_bitset_set_bit failed");
> +				return r;
> +			}
> +		}
> +		index++;
> +	}
> +
> +	return 0;
> +}
> +
> +static int __metadata_commit(struct dm_clone_metadata *md)
> +{
> +	int r;
> +	struct dm_block *sblock;
> +	struct superblock_disk *sb;
> +
> +	/* Flush bitset cache */
> +	r = dm_bitset_flush(&md->bitset_info, md->bitset_root, &md->bitset_root);
> +
> +	if (r) {
> +		DMERR("dm_bitset_flush failed");
> +		return r;
> +	}
> +
> +	/* Flush to disk all blocks, except the superblock */
> +	r = dm_tm_pre_commit(md->tm);
> +
> +	if (r) {
> +		DMERR("dm_tm_pre_commit failed");
> +		return r;
> +	}
> +
> +	/* Save the space map root in md->metadata_space_map_root */
> +	r = __copy_sm_root(md);
> +
> +	if (r) {
> +		DMERR("__copy_sm_root failed");
> +		return r;
> +	}
> +
> +	/* Lock the superblock */
> +	r = superblock_write_lock_zero(md, &sblock);
> +
> +	if (r) {
> +		DMERR("Failed to write_lock superblock");
> +		return r;
> +	}
> +
> +	/* Save the metadata in superblock */
> +	sb = dm_block_data(sblock);
> +	__prepare_superblock(md, sb);
> +
> +	/* Unlock superblock and commit it to disk */
> +	r = dm_tm_commit(md->tm, sblock);
> +
> +	if (r) {
> +		DMERR("Failed to commit superblock");
> +		return r;
> +	}
> +
> +	/*
> +	 * FIXME: Find a more efficient way to check if the hydration is done.
> +	 */
> +	if (bitmap_full(md->region_map, md->nr_regions))
> +		md->hydration_done = true;
> +
> +	return 0;
> +}
> +
> +static int __flush_dmap(struct dm_clone_metadata *md, struct dirty_map *dmap)
> +{
> +	int r;
> +	unsigned long word, flags;
> +
> +	word = 0;
> +	do {
> +		word = find_next_bit(dmap->dirty_words, md->nr_words, word);
> +
> +		if (word == md->nr_words)
> +			break;
> +
> +		r = __update_metadata_word(md, word);
> +
> +		if (r)
> +			return r;
> +
> +		__clear_bit(word, dmap->dirty_words);
> +		word++;
> +	} while (word < md->nr_words);
> +
> +	r = __metadata_commit(md);
> +
> +	if (r)
> +		return r;
> +
> +	/* Update the changed flag */
> +	spin_lock_irqsave(&md->bitmap_lock, flags);
> +	dmap->changed = 0;
> +	spin_unlock_irqrestore(&md->bitmap_lock, flags);
> +
> +	return 0;
> +}
> +
> +int dm_clone_metadata_commit(struct dm_clone_metadata *md)
> +{
> +	int r = -EPERM;
> +	unsigned long flags;
> +	struct dirty_map *dmap, *next_dmap;
> +
> +	down_write(&md->lock);
> +
> +	if (md->fail_io || dm_bm_is_read_only(md->bm))
> +		goto out;
> +
> +	/* Get current dirty bitmap */
> +	dmap = md->current_dmap;
> +
> +	/* Get next dirty bitmap */
> +	next_dmap = (dmap == &md->dmap[0]) ? &md->dmap[1] : &md->dmap[0];
> +
> +	/*
> +	 * The last commit failed, so we don't have a clean dirty-bitmap to
> +	 * use.
> +	 */
> +	if (WARN_ON(next_dmap->changed)) {
> +		r = -EINVAL;
> +		goto out;
> +	}
> +
> +	/* Swap dirty bitmaps */
> +	spin_lock_irqsave(&md->bitmap_lock, flags);
> +	md->current_dmap = next_dmap;
> +	spin_unlock_irqrestore(&md->bitmap_lock, flags);
> +
> +	/*
> +	 * No one is accessing the old dirty bitmap anymore, so we can flush
> +	 * it.
> +	 */
> +	r = __flush_dmap(md, dmap);
> +out:
> +	up_write(&md->lock);
> +
> +	return r;
> +}
> +
> +int dm_clone_set_region_hydrated(struct dm_clone_metadata *md, unsigned long region_nr)
> +{
> +	int r = 0;
> +	struct dirty_map *dmap;
> +	unsigned long word, flags;
> +
> +	word = region_nr / BITS_PER_LONG;
> +
> +	spin_lock_irqsave(&md->bitmap_lock, flags);
> +
> +	if (md->read_only) {
> +		r = -EPERM;
> +		goto out;
> +	}
> +
> +	dmap = md->current_dmap;
> +
> +	__set_bit(word, dmap->dirty_words);
> +	__set_bit(region_nr, md->region_map);
> +	dmap->changed = 1;
> +
> +out:
> +	spin_unlock_irqrestore(&md->bitmap_lock, flags);
> +
> +	return r;
> +}
> +
> +int dm_clone_cond_set_range(struct dm_clone_metadata *md, unsigned long start,
> +			    unsigned long nr_regions)
> +{
> +	int r = 0;
> +	struct dirty_map *dmap;
> +	unsigned long word, region_nr, flags;
> +
> +	spin_lock_irqsave(&md->bitmap_lock, flags);
> +
> +	if (md->read_only) {
> +		r = -EPERM;
> +		goto out;
> +	}
> +
> +	dmap = md->current_dmap;
> +	for (region_nr = start; region_nr < (start + nr_regions); region_nr++) {
> +		if (!test_bit(region_nr, md->region_map)) {
> +			word = region_nr / BITS_PER_LONG;
> +			__set_bit(word, dmap->dirty_words);
> +			__set_bit(region_nr, md->region_map);
> +			dmap->changed = 1;
> +		}
> +	}
> +
> +out:
> +	spin_unlock_irqrestore(&md->bitmap_lock, flags);
> +
> +	return r;
> +}
> +
> +/*
> + * WARNING: This must not be called concurrently with either
> + * dm_clone_set_region_hydrated() or dm_clone_cond_set_range(), as it changes
> + * md->region_map without taking the md->bitmap_lock spinlock. The only
> + * exception is after setting the metadata to read-only mode, using
> + * dm_clone_metadata_set_read_only().
> + *
> + * We don't take the spinlock because __load_bitset_in_core() does I/O, so it
> + * may block.
> + */
> +int dm_clone_reload_in_core_bitset(struct dm_clone_metadata *md)
> +{
> +	int r = -EINVAL;
> +
> +	down_write(&md->lock);
> +
> +	if (md->fail_io)
> +		goto out;
> +
> +	r = __load_bitset_in_core(md);
> +
> +out:
> +	up_write(&md->lock);
> +
> +	return r;
> +}
> +
> +bool dm_clone_changed_this_transaction(struct dm_clone_metadata *md)
> +{
> +	bool r;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&md->bitmap_lock, flags);
> +	r = md->dmap[0].changed || md->dmap[1].changed;
> +	spin_unlock_irqrestore(&md->bitmap_lock, flags);
> +
> +	return r;
> +}
> +
> +int dm_clone_metadata_abort(struct dm_clone_metadata *md)
> +{
> +	int r = -EPERM;
> +
> +	down_write(&md->lock);
> +
> +	if (md->fail_io || dm_bm_is_read_only(md->bm))
> +		goto out;
> +
> +	__destroy_persistent_data_structures(md);
> +
> +	r = __create_persistent_data_structures(md, false);
> +
> +	/* If something went wrong we can neither write nor read the metadata */
> +	if (r)
> +		md->fail_io = true;
> +
> +out:
> +	up_write(&md->lock);
> +
> +	return r;
> +}
> +
> +void dm_clone_metadata_set_read_only(struct dm_clone_metadata *md)
> +{
> +	unsigned long flags;
> +
> +	down_write(&md->lock);
> +
> +	spin_lock_irqsave(&md->bitmap_lock, flags);
> +	md->read_only = 1;
> +	spin_unlock_irqrestore(&md->bitmap_lock, flags);
> +
> +	if (!md->fail_io)
> +		dm_bm_set_read_only(md->bm);
> +
> +	up_write(&md->lock);
> +}
> +
> +void dm_clone_metadata_set_read_write(struct dm_clone_metadata *md)
> +{
> +	unsigned long flags;
> +
> +	down_write(&md->lock);
> +
> +	spin_lock_irqsave(&md->bitmap_lock, flags);
> +	md->read_only = 0;
> +	spin_unlock_irqrestore(&md->bitmap_lock, flags);
> +
> +	if (!md->fail_io)
> +		dm_bm_set_read_write(md->bm);
> +
> +	up_write(&md->lock);
> +}
> +
> +int dm_clone_get_free_metadata_block_count(struct dm_clone_metadata *md,
> +					   dm_block_t *result)
> +{
> +	int r = -EINVAL;
> +
> +	down_read(&md->lock);
> +
> +	if (!md->fail_io)
> +		r = dm_sm_get_nr_free(md->sm, result);
> +
> +	up_read(&md->lock);
> +
> +	return r;
> +}
> +
> +int dm_clone_get_metadata_dev_size(struct dm_clone_metadata *md,
> +				   dm_block_t *result)
> +{
> +	int r = -EINVAL;
> +
> +	down_read(&md->lock);
> +
> +	if (!md->fail_io)
> +		r = dm_sm_get_nr_blocks(md->sm, result);
> +
> +	up_read(&md->lock);
> +
> +	return r;
> +}
> diff --git a/drivers/md/dm-clone-metadata.h b/drivers/md/dm-clone-metadata.h
> new file mode 100644
> index 000000000000..fdfbd6f1cbdb
> --- /dev/null
> +++ b/drivers/md/dm-clone-metadata.h
> @@ -0,0 +1,158 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/*
> + * Copyright (C) 2019 Arrikto, Inc. All Rights Reserved.
> + */
> +
> +#ifndef DM_CLONE_METADATA_H
> +#define DM_CLONE_METADATA_H
> +
> +#include "persistent-data/dm-block-manager.h"
> +#include "persistent-data/dm-space-map-metadata.h"
> +
> +#define DM_CLONE_METADATA_BLOCK_SIZE DM_SM_METADATA_BLOCK_SIZE
> +
> +/*
> + * The metadata device is currently limited in size.
> + */
> +#define DM_CLONE_METADATA_MAX_SECTORS DM_SM_METADATA_MAX_SECTORS
> +
> +/*
> + * A metadata device larger than 16GB triggers a warning.
> + */
> +#define DM_CLONE_METADATA_MAX_SECTORS_WARNING (16 * (1024 * 1024 * 1024 >> SECTOR_SHIFT))
> +
> +#define SPACE_MAP_ROOT_SIZE 128
> +
> +/* dm-clone metadata */
> +struct dm_clone_metadata;
> +
> +/*
> + * Set region status to hydrated.
> + *
> + * @md: The dm-clone metadata
> + * @region_nr: The region number
> + *
> + * This function doesn't block, so it's safe to call it from interrupt context.
> + */
> +int dm_clone_set_region_hydrated(struct dm_clone_metadata *md, unsigned long region_nr);
> +
> +/*
> + * Set status of all regions in the provided range to hydrated, if not already
> + * hydrated.
> + *
> + * @md: The dm-clone metadata
> + * @start: Starting region number
> + * @nr_regions: Number of regions in the range
> + *
> + * This function doesn't block, so it's safe to call it from interrupt context.
> + */
> +int dm_clone_cond_set_range(struct dm_clone_metadata *md, unsigned long start,
> +			    unsigned long nr_regions);
> +
> +/*
> + * Read existing or create fresh metadata.
> + *
> + * @bdev: The device storing the metadata
> + * @target_size: The target size
> + * @region_size: The region size
> + *
> + * @returns: The dm-clone metadata
> + *
> + * This function reads the superblock of @bdev and checks if it's all zeroes.
> + * If it is, it formats @bdev and creates fresh metadata. If it isn't, it
> + * validates the metadata stored in @bdev.
> + */
> +struct dm_clone_metadata *dm_clone_metadata_open(struct block_device *bdev,
> +						 sector_t target_size,
> +						 sector_t region_size);
> +
> +/*
> + * Free the resources related to metadata management.
> + */
> +void dm_clone_metadata_close(struct dm_clone_metadata *md);
> +
> +/*
> + * Commit dm-clone metadata to disk.
> + */
> +int dm_clone_metadata_commit(struct dm_clone_metadata *md);
> +
> +/*
> + * Reload the in core copy of the on-disk bitmap.
> + *
> + * This should be used after aborting a metadata transaction and setting the
> + * metadata to read-only, to invalidate the in-core cache and make it match the
> + * on-disk metadata.
> + *
> + * WARNING: It must not be called concurrently with either
> + * dm_clone_set_region_hydrated() or dm_clone_cond_set_range(), as it updates
> + * the region bitmap without taking the relevant spinlock. We don't take the
> + * spinlock because dm_clone_reload_in_core_bitset() does I/O, so it may block.
> + *
> + * But, it's safe to use it after calling dm_clone_metadata_set_read_only(),
> + * because the latter sets the metadata to read-only mode. Both
> + * dm_clone_set_region_hydrated() and dm_clone_cond_set_range() refuse to touch
> + * the region bitmap, after calling dm_clone_metadata_set_read_only().
> + */
> +int dm_clone_reload_in_core_bitset(struct dm_clone_metadata *md);
> +
> +/*
> + * Check whether dm-clone's metadata changed this transaction.
> + */
> +bool dm_clone_changed_this_transaction(struct dm_clone_metadata *md);
> +
> +/*
> + * Abort current metadata transaction and rollback metadata to the last
> + * committed transaction.
> + */
> +int dm_clone_metadata_abort(struct dm_clone_metadata *md);
> +
> +/*
> + * Switches metadata to read-only mode. Once read-only mode has been entered
> + * the following functions will return -EPERM:
> + *
> + *   dm_clone_metadata_commit()
> + *   dm_clone_set_region_hydrated()
> + *   dm_clone_cond_set_range()
> + *   dm_clone_metadata_abort()
> + */
> +void dm_clone_metadata_set_read_only(struct dm_clone_metadata *md);
> +void dm_clone_metadata_set_read_write(struct dm_clone_metadata *md);
> +
> +/*
> + * Returns true if the hydration of the clone device is finished.
> + */
> +bool dm_clone_is_hydration_done(struct dm_clone_metadata *md);
> +
> +/*
> + * Returns true if region @region_nr is hydrated.
> + */
> +bool dm_clone_is_region_hydrated(struct dm_clone_metadata *md, unsigned long region_nr);
> +
> +/*
> + * Returns true if all the regions in the range are hydrated.
> + */
> +bool dm_clone_is_range_hydrated(struct dm_clone_metadata *md,
> +				unsigned long start, unsigned long nr_regions);
> +
> +/*
> + * Returns the number of hydrated regions.
> + */
> +unsigned long dm_clone_nr_of_hydrated_regions(struct dm_clone_metadata *md);
> +
> +/*
> + * Returns the first unhydrated region with region_nr >= @start
> + */
> +unsigned long dm_clone_find_next_unhydrated_region(struct dm_clone_metadata *md,
> +						   unsigned long start);
> +
> +/*
> + * Get the number of free metadata blocks.
> + */
> +int dm_clone_get_free_metadata_block_count(struct dm_clone_metadata *md, dm_block_t *result);
> +
> +/*
> + * Get the total number of metadata blocks.
> + */
> +int dm_clone_get_metadata_dev_size(struct dm_clone_metadata *md, dm_block_t *result);
> +
> +#endif /* DM_CLONE_METADATA_H */
> diff --git a/drivers/md/dm-clone-target.c b/drivers/md/dm-clone-target.c
> new file mode 100644
> index 000000000000..2ce7524616f8
> --- /dev/null
> +++ b/drivers/md/dm-clone-target.c
> @@ -0,0 +1,2244 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * Copyright (C) 2019 Arrikto, Inc. All Rights Reserved.
> + */
> +
> +#include <linux/bio.h>
> +#include <linux/err.h>
> +#include <linux/hash.h>
> +#include <linux/list.h>
> +#include <linux/log2.h>
> +#include <linux/init.h>
> +#include <linux/slab.h>
> +#include <linux/wait.h>
> +#include <linux/dm-io.h>
> +#include <linux/mutex.h>
> +#include <linux/atomic.h>
> +#include <linux/bitops.h>
> +#include <linux/blkdev.h>
> +#include <linux/kdev_t.h>
> +#include <linux/kernel.h>
> +#include <linux/module.h>
> +#include <linux/jiffies.h>
> +#include <linux/mempool.h>
> +#include <linux/spinlock.h>
> +#include <linux/blk_types.h>
> +#include <linux/dm-kcopyd.h>
> +#include <linux/workqueue.h>
> +#include <linux/backing-dev.h>
> +#include <linux/device-mapper.h>
> +
> +#include "dm.h"
> +#include "dm-clone-metadata.h"
> +
> +#define DM_MSG_PREFIX "clone"
> +
> +/*
> + * Minimum and maximum allowed region sizes
> + */
> +#define MIN_REGION_SIZE (1 << 3)  /* 4KB */
> +#define MAX_REGION_SIZE (1 << 21) /* 1GB */
> +
> +#define MIN_HYDRATIONS 256 /* Size of hydration mempool */
> +#define DEFAULT_HYDRATION_THRESHOLD 2048 /* 1MB */
> +#define DEFAULT_HYDRATION_BATCH_SIZE 1 /* Hydrate in batches of 1 region */
> +
> +#define COMMIT_PERIOD HZ /* 1 sec */
> +
> +/*
> + * Hydration hash table size: 1 << HASH_TABLE_BITS
> + */
> +#define HASH_TABLE_BITS 15
> +
> +DECLARE_DM_KCOPYD_THROTTLE_WITH_MODULE_PARM(clone_copy_throttle,
> +	"A percentage of time allocated for hydrating regions");
> +
> +/* Slab cache for struct dm_clone_region_hydration */
> +static struct kmem_cache *_hydration_cache;
> +
> +/* dm-clone metadata modes */
> +enum clone_metadata_mode {
> +	CM_WRITE,		/* metadata may be changed */
> +	CM_READ_ONLY,		/* metadata may not be changed */
> +	CM_FAIL,		/* all metadata I/O fails */
> +};
> +
> +struct hash_table_bucket;
> +
> +struct clone {
> +	struct dm_target *ti;
> +	struct dm_target_callbacks callbacks;
> +
> +	struct dm_dev *metadata_dev;
> +	struct dm_dev *clone_dev;
> +	struct dm_dev *origin_dev;
> +
> +	unsigned long nr_regions;
> +	sector_t region_size;
> +	unsigned int region_shift;
> +
> +	/*
> +	 * A metadata commit and the actions taken in case it fails should run
> +	 * as a single atomic step.
> +	 */
> +	struct mutex commit_lock;
> +
> +	struct dm_clone_metadata *md;
> +
> +	/* Region hydration hash table */
> +	struct hash_table_bucket *ht;
> +
> +	atomic_t ios_in_flight;
> +
> +	wait_queue_head_t hydration_stopped;
> +
> +	mempool_t hydration_pool;
> +
> +	unsigned long last_commit_jiffies;
> +
> +	/*
> +	 * We defer incoming WRITE bios for regions that are not hydrated
> +	 * until after these regions have been hydrated.
> +	 *
> +	 * Also, we defer REQ_FUA and REQ_PREFLUSH bios until after the
> +	 * metadata have been committed.
> +	 */
> +	spinlock_t lock;
> +	struct bio_list deferred_bios;
> +	struct bio_list deferred_discard_bios;
> +	struct bio_list deferred_flush_bios;
> +	struct bio_list deferred_flush_completions;
> +
> +	sector_t hydration_threshold;
> +
> +	/* Number of regions to batch together during hydration. */
> +	unsigned int hydration_batch_size;
> +
> +	/* Which region to hydrate next */
> +	unsigned long hydration_offset;
> +
> +	atomic_t hydrations_in_flight;
> +
> +	/*
> +	 * Save a copy of the table line rather than reconstructing it for the
> +	 * status.
> +	 */
> +	unsigned int nr_ctr_args;
> +	const char **ctr_args;
> +
> +	struct workqueue_struct *wq;
> +	struct work_struct worker;
> +	struct delayed_work waker;
> +
> +	struct dm_kcopyd_client *kcopyd_client;
> +
> +	enum clone_metadata_mode mode;
> +	unsigned long flags;
> +};
> +
> +/*
> + * dm-clone flags
> + */
> +#define DM_CLONE_DISCARD_PASSDOWN 0
> +#define DM_CLONE_HYDRATION_ENABLED 1
> +#define DM_CLONE_HYDRATION_SUSPENDED 2
> +
> +/*---------------------------------------------------------------------------*/
> +
> +/*
> + * Metadata failure handling.
> + */
> +static enum clone_metadata_mode get_clone_mode(struct clone *clone)
> +{
> +	return READ_ONCE(clone->mode);
> +}
> +
> +static const char *clone_device_name(struct clone *clone)
> +{
> +	return dm_table_device_name(clone->ti->table);
> +}
> +
> +static void __set_clone_mode(struct clone *clone, enum clone_metadata_mode new_mode)
> +{
> +	const char *descs[] = {
> +		"read-write",
> +		"read-only",
> +		"fail"
> +	};
> +
> +	enum clone_metadata_mode old_mode = get_clone_mode(clone);
> +
> +	/* Never move out of fail mode */
> +	if (old_mode == CM_FAIL)
> +		new_mode = CM_FAIL;
> +
> +	switch (new_mode) {
> +	case CM_FAIL:
> +	case CM_READ_ONLY:
> +		dm_clone_metadata_set_read_only(clone->md);
> +		break;
> +
> +	case CM_WRITE:
> +		dm_clone_metadata_set_read_write(clone->md);
> +		break;
> +	}
> +
> +	WRITE_ONCE(clone->mode, new_mode);
> +
> +	if (new_mode != old_mode) {
> +		dm_table_event(clone->ti->table);
> +		DMINFO("%s: Switching to %s mode", clone_device_name(clone),
> +		       descs[(int)new_mode]);
> +	}
> +}
> +
> +static void __abort_transaction(struct clone *clone)
> +{
> +	const char *dev_name = clone_device_name(clone);
> +
> +	if (get_clone_mode(clone) >= CM_READ_ONLY)
> +		return;
> +
> +	DMERR("%s: Aborting current metadata transaction", dev_name);
> +	if (dm_clone_metadata_abort(clone->md)) {
> +		DMERR("%s: Failed to abort metadata transaction", dev_name);
> +		__set_clone_mode(clone, CM_FAIL);
> +	}
> +}
> +
> +static void __reload_in_core_bitset(struct clone *clone)
> +{
> +	const char *dev_name = clone_device_name(clone);
> +
> +	if (get_clone_mode(clone) == CM_FAIL)
> +		return;
> +
> +	/* Reload the on-disk bitset */
> +	DMINFO("%s: Reloading on-disk bitmap", dev_name);
> +	if (dm_clone_reload_in_core_bitset(clone->md)) {
> +		DMERR("%s: Failed to reload on-disk bitmap", dev_name);
> +		__set_clone_mode(clone, CM_FAIL);
> +	}
> +}
> +
> +static void __metadata_operation_failed(struct clone *clone, const char *op, int r)
> +{
> +	DMERR("%s: Metadata operation `%s' failed: error = %d",
> +	      clone_device_name(clone), op, r);
> +
> +	__abort_transaction(clone);
> +	__set_clone_mode(clone, CM_READ_ONLY);
> +
> +	/*
> +	 * dm_clone_reload_in_core_bitset() may run concurrently with either
> +	 * dm_clone_set_region_hydrated() or dm_clone_cond_set_range(), but
> +	 * it's safe as we have already set the metadata to read-only mode.
> +	 */
> +	__reload_in_core_bitset(clone);
> +}
> +
> +/*---------------------------------------------------------------------------*/
> +
> +/* Wake up anyone waiting for region hydrations to stop */
> +static inline void wakeup_hydration_waiters(struct clone *clone)
> +{
> +	wake_up_all(&clone->hydration_stopped);
> +}
> +
> +static inline void wake_worker(struct clone *clone)
> +{
> +	queue_work(clone->wq, &clone->worker);
> +}
> +
> +/*---------------------------------------------------------------------------*/
> +
> +/*
> + * bio helper functions.
> + */
> +static inline void remap_to_origin(struct clone *clone, struct bio *bio)
> +{
> +	bio_set_dev(bio, clone->origin_dev->bdev);
> +}
> +
> +static inline void remap_to_clone(struct clone *clone, struct bio *bio)
> +{
> +	bio_set_dev(bio, clone->clone_dev->bdev);
> +}
> +
> +static bool bio_triggers_commit(struct clone *clone, struct bio *bio)
> +{
> +	return op_is_flush(bio->bi_opf) &&
> +		dm_clone_changed_this_transaction(clone->md);
> +}
> +
> +/* Get the address of the region in sectors */
> +static inline sector_t region_to_sector(struct clone *clone, unsigned long region_nr)
> +{
> +	return (region_nr << clone->region_shift);
> +}
> +
> +/* Get the region number of the bio */
> +static inline unsigned long bio_to_region(struct clone *clone, struct bio *bio)
> +{
> +	return (bio->bi_iter.bi_sector >> clone->region_shift);
> +}
> +
> +/* Get the region range covered by the bio */
> +static void bio_region_range(struct clone *clone, struct bio *bio,
> +			     unsigned long *rs, unsigned long *re)
> +{
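> +	/*
> +	 * Round the start up and the end down, so that regions only partially
> +	 * covered by the bio are excluded: e.g., with a region size of 8
> +	 * sectors, a bio spanning sectors [4, 20) yields *rs = 1 and *re = 2.
> +	 */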
> +	*rs = dm_sector_div_up(bio->bi_iter.bi_sector, clone->region_size);
> +	*re = bio_end_sector(bio) >> clone->region_shift;
> +}
> +
> +/* Check whether a bio overwrites a region */
> +static inline bool is_overwrite_bio(struct clone *clone, struct bio *bio)
> +{
> +	return (bio_data_dir(bio) == WRITE && bio_sectors(bio) == clone->region_size);
> +}
> +
> +static void fail_bios(struct bio_list *bios, blk_status_t status)
> +{
> +	struct bio *bio;
> +
> +	while ((bio = bio_list_pop(bios))) {
> +		bio->bi_status = status;
> +		bio_endio(bio);
> +	}
> +}
> +
> +static void submit_bios(struct bio_list *bios)
> +{
> +	struct bio *bio;
> +	struct blk_plug plug;
> +
> +	blk_start_plug(&plug);
> +
> +	while ((bio = bio_list_pop(bios)))
> +		generic_make_request(bio);
> +
> +	blk_finish_plug(&plug);
> +}
> +
> +/*
> + * Submit bio to the underlying device.
> + *
> + * If the bio triggers a commit, delay it until after the metadata have been
> + * committed.
> + *
> + * NOTE: The bio remapping must be performed by the caller.
> + */
> +static void issue_bio(struct clone *clone, struct bio *bio)
> +{
> +	unsigned long flags;
> +
> +	if (!bio_triggers_commit(clone, bio)) {
> +		generic_make_request(bio);
> +		return;
> +	}
> +
> +	/*
> +	 * If the metadata mode is RO or FAIL we won't be able to commit the
> +	 * metadata, so we complete the bio with an error.
> +	 */
> +	if (unlikely(get_clone_mode(clone) >= CM_READ_ONLY)) {
> +		bio_io_error(bio);
> +		return;
> +	}
> +
> +	/*
> +	 * Batch together any bios that trigger commits and then issue a single
> +	 * commit for them in process_deferred_flush_bios().
> +	 */
> +	spin_lock_irqsave(&clone->lock, flags);
> +	bio_list_add(&clone->deferred_flush_bios, bio);
> +	spin_unlock_irqrestore(&clone->lock, flags);
> +
> +	wake_worker(clone);
> +}
> +
> +/*
> + * Remap bio to the clone device and submit it.
> + *
> + * If the bio triggers a commit, delay it until after the metadata have been
> + * committed.
> + */
> +static void remap_and_issue(struct clone *clone, struct bio *bio)
> +{
> +	remap_to_clone(clone, bio);
> +	issue_bio(clone, bio);
> +}
> +
> +/*
> + * Issue bios that have been deferred until after their region has finished
> + * hydrating.
> + *
> + * We delegate the bio submission to the worker thread, so this is safe to call
> + * from interrupt context.
> + */
> +static void issue_deferred_bios(struct clone *clone, struct bio_list *bios)
> +{
> +	struct bio *bio;
> +	unsigned long flags;
> +	struct bio_list flush_bios = BIO_EMPTY_LIST;
> +	struct bio_list normal_bios = BIO_EMPTY_LIST;
> +
> +	if (bio_list_empty(bios))
> +		return;
> +
> +	while ((bio = bio_list_pop(bios))) {
> +		if (bio_triggers_commit(clone, bio))
> +			bio_list_add(&flush_bios, bio);
> +		else
> +			bio_list_add(&normal_bios, bio);
> +	}
> +
> +	spin_lock_irqsave(&clone->lock, flags);
> +	bio_list_merge(&clone->deferred_bios, &normal_bios);
> +	bio_list_merge(&clone->deferred_flush_bios, &flush_bios);
> +	spin_unlock_irqrestore(&clone->lock, flags);
> +
> +	wake_worker(clone);
> +}
> +
> +static void complete_overwrite_bio(struct clone *clone, struct bio *bio)
> +{
> +	unsigned long flags;
> +
> +	/*
> +	 * If the bio has the REQ_FUA flag set we must commit the metadata
> +	 * before signaling its completion.
> +	 *
> +	 * complete_overwrite_bio() is only called by hydration_complete(),
> +	 * after having successfully updated the metadata. This means we don't
> +	 * need to call dm_clone_changed_this_transaction() to check if the
> +	 * metadata has changed and thus we can avoid taking the metadata spin
> +	 * lock.
> +	 */
> +	if (!(bio->bi_opf & REQ_FUA)) {
> +		bio_endio(bio);
> +		return;
> +	}
> +
> +	/*
> +	 * If the metadata mode is RO or FAIL we won't be able to commit the
> +	 * metadata, so we complete the bio with an error.
> +	 */
> +	if (unlikely(get_clone_mode(clone) >= CM_READ_ONLY)) {
> +		bio_io_error(bio);
> +		return;
> +	}
> +
> +	/*
> +	 * Batch together any bios that trigger commits and then issue a single
> +	 * commit for them in process_deferred_flush_bios().
> +	 */
> +	spin_lock_irqsave(&clone->lock, flags);
> +	bio_list_add(&clone->deferred_flush_completions, bio);
> +	spin_unlock_irqrestore(&clone->lock, flags);
> +
> +	wake_worker(clone);
> +}
> +
> +static void trim_bio(struct bio *bio, sector_t sector, unsigned int len)
> +{
> +	bio->bi_iter.bi_sector = sector;
> +	bio->bi_iter.bi_size = to_bytes(len);
> +}
> +
> +static void complete_discard_bio(struct clone *clone, struct bio *bio, bool success)
> +{
> +	unsigned long rs, re;
> +
> +	/*
> +	 * If the clone device supports discards, remap and trim the discard
> +	 * bio and pass it down. Otherwise complete the bio immediately.
> +	 */
> +	if (test_bit(DM_CLONE_DISCARD_PASSDOWN, &clone->flags) && success) {
> +		remap_to_clone(clone, bio);
> +		bio_region_range(clone, bio, &rs, &re);
> +		trim_bio(bio, rs << clone->region_shift,
> +			 (re - rs) << clone->region_shift);
> +
> +		generic_make_request(bio);
> +	} else {
> +		bio_endio(bio);
> +	}
> +}
> +
> +static void process_discard_bio(struct clone *clone, struct bio *bio)
> +{
> +	unsigned long rs, re, flags;
> +
> +	bio_region_range(clone, bio, &rs, &re);
> +	BUG_ON(re > clone->nr_regions);
> +
> +	if (unlikely(rs == re)) {
> +		bio_endio(bio);
> +		return;
> +	}
> +
> +	/*
> +	 * The covered regions are already hydrated so we just need to pass
> +	 * down the discard.
> +	 */
> +	if (dm_clone_is_range_hydrated(clone->md, rs, re - rs)) {
> +		complete_discard_bio(clone, bio, true);
> +		return;
> +	}
> +
> +	/*
> +	 * If the metadata mode is RO or FAIL we won't be able to update the
> +	 * metadata for the regions covered by the discard so we just ignore
> +	 * it.
> +	 */
> +	if (unlikely(get_clone_mode(clone) >= CM_READ_ONLY)) {
> +		bio_endio(bio);
> +		return;
> +	}
> +
> +	/*
> +	 * Defer discard processing.
> +	 */
> +	spin_lock_irqsave(&clone->lock, flags);
> +	bio_list_add(&clone->deferred_discard_bios, bio);
> +	spin_unlock_irqrestore(&clone->lock, flags);
> +
> +	wake_worker(clone);
> +}
> +
> +/*---------------------------------------------------------------------------*/
> +
> +/*
> + * dm-clone region hydrations.
> + */
> +struct dm_clone_region_hydration {
> +	struct clone *clone;
> +	unsigned long region_nr;
> +
> +	struct bio *overwrite_bio;
> +	bio_end_io_t *overwrite_bio_end_io;
> +
> +	struct bio_list deferred_bios;
> +
> +	blk_status_t status;
> +
> +	/* Used by hydration batching */
> +	struct list_head list;
> +
> +	/* Used by hydration hash table */
> +	struct hlist_node h;
> +};
> +
> +/*
> + * Hydration hash table implementation.
> + *
> + * Ideally we would like to use list_bl, which uses bit spin locks and employs
> + * the least significant bit of the list head to lock the corresponding bucket,
> + * reducing the memory overhead for the locks. But, currently, list_bl and bit
> + * spin locks don't support IRQ safe versions. Since we have to take the lock
> + * in both process and interrupt context, we must fall back to using regular
> + * spin locks; one per hash table bucket.
> + */
> +struct hash_table_bucket {
> +	struct hlist_head head;
> +
> +	/* Spinlock protecting the bucket */
> +	spinlock_t lock;
> +};
> +
> +#define bucket_lock_irqsave(bucket, flags) \
> +	spin_lock_irqsave(&(bucket)->lock, flags)
> +
> +#define bucket_unlock_irqrestore(bucket, flags) \
> +	spin_unlock_irqrestore(&(bucket)->lock, flags)
> +
> +static int hash_table_init(struct clone *clone)
> +{
> +	unsigned int i, sz;
> +	struct hash_table_bucket *bucket;
> +
> +	sz = 1 << HASH_TABLE_BITS;
> +
> +	clone->ht = vmalloc(sz * sizeof(struct hash_table_bucket));
> +
> +	if (!clone->ht)
> +		return -ENOMEM;
> +
> +	for (i = 0; i < sz; i++) {
> +		bucket = clone->ht + i;
> +
> +		INIT_HLIST_HEAD(&bucket->head);
> +		spin_lock_init(&bucket->lock);
> +	}
> +
> +	return 0;
> +}
> +
> +static void hash_table_exit(struct clone *clone)
> +{
> +	vfree(clone->ht);
> +}
> +
> +static struct hash_table_bucket *get_hash_table_bucket(struct clone *clone,
> +						       unsigned long region_nr)
> +{
> +	return &clone->ht[hash_long(region_nr, HASH_TABLE_BITS)];
> +}
> +
> +/*
> + * Search hash table for a hydration with hd->region_nr == region_nr
> + *
> + * NOTE: Must be called with the bucket lock held
> + */
> +static struct dm_clone_region_hydration *__hash_find(struct hash_table_bucket *bucket,
> +						     unsigned long region_nr)
> +{
> +	struct dm_clone_region_hydration *hd;
> +
> +	hlist_for_each_entry(hd, &bucket->head, h) {
> +		if (hd->region_nr == region_nr)
> +			return hd;
> +	}
> +
> +	return NULL;
> +}
> +
> +/*
> + * Insert a hydration into the hash table.
> + *
> + * NOTE: Must be called with the bucket lock held.
> + */
> +static inline void __insert_region_hydration(struct hash_table_bucket *bucket,
> +					     struct dm_clone_region_hydration *hd)
> +{
> +	hlist_add_head(&hd->h, &bucket->head);
> +}
> +
> +/*
> + * This function inserts a hydration into the hash table, unless someone else
> + * managed to insert a hydration for the same region first. In the latter case
> + * it returns the existing hydration descriptor for this region.
> + *
> + * NOTE: Must be called with the bucket lock held.
> + */
> +static struct dm_clone_region_hydration *
> +__find_or_insert_region_hydration(struct hash_table_bucket *bucket,
> +				  struct dm_clone_region_hydration *hd)
> +{
> +	struct dm_clone_region_hydration *hd2;
> +
> +	hd2 = __hash_find(bucket, hd->region_nr);
> +
> +	if (hd2)
> +		return hd2;
> +
> +	__insert_region_hydration(bucket, hd);
> +
> +	return hd;
> +}
> +
> +/*---------------------------------------------------------------------------*/
> +
> +/* Allocate a hydration */
> +static struct dm_clone_region_hydration *alloc_hydration(struct clone *clone)
> +{
> +	struct dm_clone_region_hydration *hd;
> +
> +	/*
> +	 * Allocate a hydration from the hydration mempool.
> +	 * This might block but it can't fail.
> +	 */
> +	hd = mempool_alloc(&clone->hydration_pool, GFP_NOIO);
> +
> +	hd->clone = clone;
> +
> +	return hd;
> +}
> +
> +static inline void free_hydration(struct dm_clone_region_hydration *hd)
> +{
> +	mempool_free(hd, &hd->clone->hydration_pool);
> +}
> +
> +/* Initialize a hydration */
> +static void hydration_init(struct dm_clone_region_hydration *hd, unsigned long region_nr)
> +{
> +	hd->region_nr = region_nr;
> +
> +	hd->overwrite_bio = NULL;
> +
> +	bio_list_init(&hd->deferred_bios);
> +
> +	hd->status = 0;
> +
> +	INIT_LIST_HEAD(&hd->list);
> +	INIT_HLIST_NODE(&hd->h);
> +}
> +
> +/*---------------------------------------------------------------------------*/
> +
> +/*
> + * Update dm-clone's metadata after a region has finished hydrating and remove
> + * hydration from the hash table.
> + */
> +static int hydration_update_metadata(struct dm_clone_region_hydration *hd)
> +{
> +	int r = 0;
> +	unsigned long flags;
> +	struct hash_table_bucket *bucket;
> +	struct clone *clone = hd->clone;
> +
> +	if (unlikely(get_clone_mode(clone) >= CM_READ_ONLY))
> +		r = -EPERM;
> +
> +	/* Update the metadata */
> +	if (likely(!r) && hd->status == BLK_STS_OK)
> +		r = dm_clone_set_region_hydrated(clone->md, hd->region_nr);
> +
> +	bucket = get_hash_table_bucket(clone, hd->region_nr);
> +
> +	/* Remove hydration from hash table */
> +	bucket_lock_irqsave(bucket, flags);
> +	hlist_del(&hd->h);
> +	bucket_unlock_irqrestore(bucket, flags);
> +
> +	return r;
> +}
> +
> +/*
> + * Complete a region's hydration:
> + *
> + *	1. Update dm-clone's metadata.
> + *	2. Remove hydration from hash table.
> + *	3. Complete overwrite bio.
> + *	4. Issue deferred bios.
> + *	5. If this was the last hydration, wake up anyone waiting for
> + *	   hydrations to finish.
> + */
> +static void hydration_complete(struct dm_clone_region_hydration *hd)
> +{
> +	int r;
> +	blk_status_t status;
> +	struct clone *clone = hd->clone;
> +
> +	r = hydration_update_metadata(hd);
> +
> +	if (hd->status == BLK_STS_OK && likely(!r)) {
> +		if (hd->overwrite_bio)
> +			complete_overwrite_bio(clone, hd->overwrite_bio);
> +
> +		issue_deferred_bios(clone, &hd->deferred_bios);
> +	} else {
> +		status = r ? BLK_STS_IOERR : hd->status;
> +
> +		if (hd->overwrite_bio)
> +			bio_list_add(&hd->deferred_bios, hd->overwrite_bio);
> +
> +		fail_bios(&hd->deferred_bios, status);
> +	}
> +
> +	free_hydration(hd);
> +
> +	if (atomic_dec_and_test(&clone->hydrations_in_flight))
> +		wakeup_hydration_waiters(clone);
> +}
> +
> +static void hydration_kcopyd_callback(int read_err, unsigned long write_err, void *context)
> +{
> +	blk_status_t status;
> +
> +	struct dm_clone_region_hydration *tmp, *hd = context;
> +	struct clone *clone = hd->clone;
> +
> +	LIST_HEAD(batched_hydrations);
> +
> +	if (read_err || write_err) {
> +		DMERR_LIMIT("%s: hydration failed", clone_device_name(clone));
> +		status = BLK_STS_IOERR;
> +	} else {
> +		status = BLK_STS_OK;
> +	}
> +	list_splice_tail(&hd->list, &batched_hydrations);
> +
> +	hd->status = status;
> +	hydration_complete(hd);
> +
> +	/* Complete batched hydrations */
> +	list_for_each_entry_safe(hd, tmp, &batched_hydrations, list) {
> +		hd->status = status;
> +		hydration_complete(hd);
> +	}
> +
> +	/* Continue background hydration, if there is no I/O in-flight */
> +	if (test_bit(DM_CLONE_HYDRATION_ENABLED, &clone->flags) &&
> +	    !atomic_read(&clone->ios_in_flight))
> +		wake_worker(clone);
> +}
> +
> +static void hydration_copy(struct dm_clone_region_hydration *hd, unsigned int nr_regions)
> +{
> +	unsigned long region_start, region_end;
> +	sector_t tail_size, region_size, total_size;
> +	struct dm_io_region from, to;
> +	struct clone *clone = hd->clone;
> +
> +	region_size = clone->region_size;
> +	region_start = hd->region_nr;
> +	region_end = region_start + nr_regions - 1;
> +
> +	total_size = (nr_regions - 1) << clone->region_shift;
> +
> +	if (region_end == clone->nr_regions - 1) {
> +		/*
> +		 * The last region of the target might be smaller than
> +		 * region_size.
> +		 */
> +		tail_size = clone->ti->len & (region_size - 1);
> +		if (!tail_size)
> +			tail_size = region_size;
> +	} else {
> +		tail_size = region_size;
> +	}
> +
> +	total_size += tail_size;
> +
> +	from.bdev = clone->origin_dev->bdev;
> +	from.sector = region_to_sector(clone, region_start);
> +	from.count = total_size;
> +
> +	to.bdev = clone->clone_dev->bdev;
> +	to.sector = from.sector;
> +	to.count = from.count;
> +
> +	/* Issue copy */
> +	atomic_add(nr_regions, &clone->hydrations_in_flight);
> +	dm_kcopyd_copy(clone->kcopyd_client, &from, 1, &to, 0,
> +		       hydration_kcopyd_callback, hd);
> +}
> +
> +static void overwrite_endio(struct bio *bio)
> +{
> +	struct dm_clone_region_hydration *hd = bio->bi_private;
> +
> +	bio->bi_end_io = hd->overwrite_bio_end_io;
> +	hd->status = bio->bi_status;
> +
> +	hydration_complete(hd);
> +}
> +
> +static void hydration_overwrite(struct dm_clone_region_hydration *hd, struct bio *bio)
> +{
> +	/*
> +	 * We don't need to save and restore bio->bi_private because device
> +	 * mapper core generates a new bio for us to use, with clean
> +	 * bi_private.
> +	 */
> +	hd->overwrite_bio = bio;
> +	hd->overwrite_bio_end_io = bio->bi_end_io;
> +
> +	bio->bi_end_io = overwrite_endio;
> +	bio->bi_private = hd;
> +
> +	atomic_inc(&hd->clone->hydrations_in_flight);
> +	generic_make_request(bio);
> +}
> +
> +/*
> + * Hydrate bio's region.
> + *
> + * This function starts the hydration of the bio's region and puts the bio in
> + * the list of deferred bios for this region. If, by the time this function is
> + * called, the region has finished hydrating, the bio is submitted directly to
> + * the clone device.
> + *
> + * NOTE: The bio remapping must be performed by the caller.
> + */
> +static void hydrate_bio_region(struct clone *clone, struct bio *bio)
> +{
> +	unsigned long flags;
> +	unsigned long region_nr;
> +	struct hash_table_bucket *bucket;
> +	struct dm_clone_region_hydration *hd, *hd2;
> +
> +	region_nr = bio_to_region(clone, bio);
> +	bucket = get_hash_table_bucket(clone, region_nr);
> +
> +	bucket_lock_irqsave(bucket, flags);
> +
> +	hd = __hash_find(bucket, region_nr);
> +
> +	/* Someone else is hydrating the region */
> +	if (hd) {
> +		bio_list_add(&hd->deferred_bios, bio);
> +		bucket_unlock_irqrestore(bucket, flags);
> +		return;
> +	}
> +
> +	/* The region has been hydrated */
> +	if (dm_clone_is_region_hydrated(clone->md, region_nr)) {
> +		bucket_unlock_irqrestore(bucket, flags);
> +		issue_bio(clone, bio);
> +		return;
> +	}
> +
> +	/*
> +	 * We must allocate a hydration descriptor and start the hydration of
> +	 * the corresponding region.
> +	 */
> +	bucket_unlock_irqrestore(bucket, flags);
> +
> +	hd = alloc_hydration(clone);
> +	hydration_init(hd, region_nr);
> +
> +	bucket_lock_irqsave(bucket, flags);
> +
> +	/* Check if the region has been hydrated in the meantime. */
> +	if (dm_clone_is_region_hydrated(clone->md, region_nr)) {
> +		bucket_unlock_irqrestore(bucket, flags);
> +		free_hydration(hd);
> +		issue_bio(clone, bio);
> +		return;
> +	}
> +
> +	hd2 = __find_or_insert_region_hydration(bucket, hd);
> +
> +	/* Someone else started the region's hydration. */
> +	if (hd2 != hd) {
> +		bio_list_add(&hd2->deferred_bios, bio);
> +		bucket_unlock_irqrestore(bucket, flags);
> +		free_hydration(hd);
> +		return;
> +	}
> +
> +	/*
> +	 * If the metadata mode is RO or FAIL then there is no point starting a
> +	 * hydration, since we will not be able to update the metadata when the
> +	 * hydration finishes.
> +	 */
> +	if (unlikely(get_clone_mode(clone) >= CM_READ_ONLY)) {
> +		hlist_del(&hd->h);
> +		bucket_unlock_irqrestore(bucket, flags);
> +		free_hydration(hd);
> +		bio_io_error(bio);
> +		return;
> +	}
> +
> +	/*
> +	 * Start region hydration.
> +	 *
> +	 * If a bio overwrites a region, i.e., its size is equal to the
> +	 * region's size, then we don't need to copy the region from
> +	 * the origin to the clone device.
> +	 */
> +	if (is_overwrite_bio(clone, bio)) {
> +		bucket_unlock_irqrestore(bucket, flags);
> +		hydration_overwrite(hd, bio);
> +	} else {
> +		bio_list_add(&hd->deferred_bios, bio);
> +		bucket_unlock_irqrestore(bucket, flags);
> +
> +		hydration_copy(hd, 1);
> +	}
> +}
> +
> +/*---------------------------------------------------------------------------*/
> +
> +/*
> + * Background hydrations.
> + */
> +
> +/*
> + * Batch region hydrations.
> + *
> + * To better utilize device bandwidth we batch together the hydration of
> + * adjacent regions. This allows us to use small region sizes, e.g., 4KB, which
> + * is good for small, random write performance (because of the overwriting of
> + * un-hydrated regions) and at the same time issue big copy requests to kcopyd
> + * to achieve high hydration bandwidth.
> + */
> +struct batch_info {
> +	struct dm_clone_region_hydration *head;
> +	unsigned int nr_batched_regions;
> +};
> +
> +static void __batch_hydration(struct batch_info *batch,
> +			      struct dm_clone_region_hydration *hd)
> +{
> +	struct clone *clone = hd->clone;
> +	unsigned int max_batch_size = READ_ONCE(clone->hydration_batch_size);
> +
> +	if (batch->head) {
> +		/* Try to extend the current batch */
> +		if (batch->nr_batched_regions < max_batch_size &&
> +		    (batch->head->region_nr + batch->nr_batched_regions) == hd->region_nr) {
> +			list_add_tail(&hd->list, &batch->head->list);
> +			batch->nr_batched_regions++;
> +			hd = NULL;
> +		}
> +
> +		/* Check if we should issue the current batch */
> +		if (batch->nr_batched_regions >= max_batch_size || hd) {
> +			hydration_copy(batch->head, batch->nr_batched_regions);
> +			batch->head = NULL;
> +			batch->nr_batched_regions = 0;
> +		}
> +	}
> +
> +	if (!hd)
> +		return;
> +
> +	/* We treat max batch sizes of zero and one equivalently */
> +	if (max_batch_size <= 1) {
> +		hydration_copy(hd, 1);
> +		return;
> +	}
> +
> +	/* Start a new batch */
> +	BUG_ON(!list_empty(&hd->list));
> +	batch->head = hd;
> +	batch->nr_batched_regions = 1;
> +}
> +
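> +/*
> + * Illustrative example (editor's note, not part of the original patch):
> + * with max_batch_size == 4 and background hydrations arriving for
> + * regions 10, 11, 12 and then 14:
> + *
> + *   region 10 -> starts a new batch           (head = 10, size = 1)
> + *   region 11 -> adjacent, extends the batch  (head = 10, size = 2)
> + *   region 12 -> adjacent, extends the batch  (head = 10, size = 3)
> + *   region 14 -> not adjacent: the batch is issued as a single
> + *                hydration_copy(head, 3) call and region 14 starts a
> + *                new batch of its own.
> + */
> +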
> +static unsigned long __start_next_hydration(struct clone *clone,
> +					    unsigned long offset,
> +					    struct batch_info *batch)
> +{
> +	unsigned long flags;
> +	struct hash_table_bucket *bucket;
> +	struct dm_clone_region_hydration *hd;
> +	unsigned long nr_regions = clone->nr_regions;
> +
> +	hd = alloc_hydration(clone);
> +
> +	/* Try to find a region to hydrate. */
> +	do {
> +		offset = dm_clone_find_next_unhydrated_region(clone->md, offset);
> +		if (offset == nr_regions)
> +			break;
> +
> +		bucket = get_hash_table_bucket(clone, offset);
> +		bucket_lock_irqsave(bucket, flags);
> +
> +		if (!dm_clone_is_region_hydrated(clone->md, offset) &&
> +		    !__hash_find(bucket, offset)) {
> +			hydration_init(hd, offset);
> +			__insert_region_hydration(bucket, hd);
> +			bucket_unlock_irqrestore(bucket, flags);
> +
> +			/* Batch hydration */
> +			__batch_hydration(batch, hd);
> +
> +			return (offset + 1);
> +		}
> +
> +		bucket_unlock_irqrestore(bucket, flags);
> +
> +	} while (++offset < nr_regions);
> +
> +	if (hd)
> +		free_hydration(hd);
> +
> +	return offset;
> +}
> +
> +/*
> + * This function searches for regions that still reside in the origin device
> + * and starts their hydration.
> + */
> +static void do_hydration(struct clone *clone)
> +{
> +	sector_t current_volume;
> +	unsigned long offset, nr_regions = clone->nr_regions;
> +
> +	struct batch_info batch = {
> +		.head = NULL,
> +		.nr_batched_regions = 0,
> +	};
> +
> +	if (unlikely(get_clone_mode(clone) >= CM_READ_ONLY))
> +		return;
> +
> +	if (dm_clone_is_hydration_done(clone->md))
> +		return;
> +
> +	/*
> +	 * Avoid race with device suspension.
> +	 */
> +	atomic_inc(&clone->hydrations_in_flight);
> +
> +	/*
> +	 * Make sure atomic_inc() is ordered before test_bit(), otherwise we
> +	 * might race with clone_postsuspend() and start a region hydration
> +	 * after the target has been suspended.
> +	 *
> +	 * This is paired with the smp_mb__after_atomic() in
> +	 * clone_postsuspend().
> +	 */
> +	smp_mb__after_atomic();
> +
> +	offset = clone->hydration_offset;
> +	while (likely(!test_bit(DM_CLONE_HYDRATION_SUSPENDED, &clone->flags)) &&
> +	       !atomic_read(&clone->ios_in_flight) &&
> +	       test_bit(DM_CLONE_HYDRATION_ENABLED, &clone->flags) &&
> +	       offset < nr_regions) {
> +		current_volume = atomic_read(&clone->hydrations_in_flight);
> +		current_volume += batch.nr_batched_regions;
> +		current_volume <<= clone->region_shift;
> +
> +		if (current_volume > READ_ONCE(clone->hydration_threshold))
> +			break;
> +
> +		offset = __start_next_hydration(clone, offset, &batch);
> +	}
> +
> +	if (batch.head)
> +		hydration_copy(batch.head, batch.nr_batched_regions);
> +
> +	if (offset >= nr_regions)
> +		offset = 0;
> +
> +	clone->hydration_offset = offset;
> +
> +	if (atomic_dec_and_test(&clone->hydrations_in_flight))
> +		wakeup_hydration_waiters(clone);
> +}
> +
> +/*---------------------------------------------------------------------------*/
> +
> +static bool need_commit_due_to_time(struct clone *clone)
> +{
> +	return !time_in_range(jiffies, clone->last_commit_jiffies,
> +			      clone->last_commit_jiffies + COMMIT_PERIOD);
> +}
> +
> +/*
> + * A non-zero return indicates read-only or fail mode.
> + */
> +static int commit_metadata(struct clone *clone)
> +{
> +	int r = 0;
> +
> +	mutex_lock(&clone->commit_lock);
> +
> +	if (!dm_clone_changed_this_transaction(clone->md))
> +		goto out;
> +
> +	if (unlikely(get_clone_mode(clone) >= CM_READ_ONLY)) {
> +		r = -EPERM;
> +		goto out;
> +	}
> +
> +	r = dm_clone_metadata_commit(clone->md);
> +
> +	if (unlikely(r)) {
> +		__metadata_operation_failed(clone, "dm_clone_metadata_commit", r);
> +		goto out;
> +	}
> +
> +	if (dm_clone_is_hydration_done(clone->md))
> +		dm_table_event(clone->ti->table);
> +
> +out:
> +	mutex_unlock(&clone->commit_lock);
> +
> +	return r;
> +}
> +
> +static void process_deferred_discards(struct clone *clone)
> +{
> +	int r = -EPERM;
> +	struct bio *bio;
> +	struct blk_plug plug;
> +	unsigned long rs, re, flags;
> +	struct bio_list discards = BIO_EMPTY_LIST;
> +
> +	spin_lock_irqsave(&clone->lock, flags);
> +	bio_list_merge(&discards, &clone->deferred_discard_bios);
> +	bio_list_init(&clone->deferred_discard_bios);
> +	spin_unlock_irqrestore(&clone->lock, flags);
> +
> +	if (bio_list_empty(&discards))
> +		return;
> +
> +	if (unlikely(get_clone_mode(clone) >= CM_READ_ONLY))
> +		goto out;
> +
> +	/* Update the metadata */
> +	bio_list_for_each(bio, &discards) {
> +		bio_region_range(clone, bio, &rs, &re);
> +		/*
> +		 * A discard request might cover regions that have been already
> +		 * hydrated. There is no need to update the metadata for these
> +		 * regions.
> +		 */
> +		r = dm_clone_cond_set_range(clone->md, rs, re - rs);
> +
> +		if (unlikely(r))
> +			break;
> +	}
> +
> +out:
> +	blk_start_plug(&plug);
> +
> +	while ((bio = bio_list_pop(&discards)))
> +		complete_discard_bio(clone, bio, r == 0);
> +
> +	blk_finish_plug(&plug);
> +}
> +
> +static void process_deferred_bios(struct clone *clone)
> +{
> +	unsigned long flags;
> +	struct bio_list bios = BIO_EMPTY_LIST;
> +
> +	spin_lock_irqsave(&clone->lock, flags);
> +	bio_list_merge(&bios, &clone->deferred_bios);
> +	bio_list_init(&clone->deferred_bios);
> +	spin_unlock_irqrestore(&clone->lock, flags);
> +
> +	if (bio_list_empty(&bios))
> +		return;
> +
> +	submit_bios(&bios);
> +}
> +
> +static void process_deferred_flush_bios(struct clone *clone)
> +{
> +	struct bio *bio;
> +	unsigned long flags;
> +	struct bio_list bios = BIO_EMPTY_LIST;
> +	struct bio_list bio_completions = BIO_EMPTY_LIST;
> +
> +	/*
> +	 * If there are any deferred flush bios, we must commit the metadata
> +	 * before issuing them or signaling their completion.
> +	 */
> +	spin_lock_irqsave(&clone->lock, flags);
> +	bio_list_merge(&bios, &clone->deferred_flush_bios);
> +	bio_list_init(&clone->deferred_flush_bios);
> +
> +	bio_list_merge(&bio_completions, &clone->deferred_flush_completions);
> +	bio_list_init(&clone->deferred_flush_completions);
> +	spin_unlock_irqrestore(&clone->lock, flags);
> +
> +	if (bio_list_empty(&bios) && bio_list_empty(&bio_completions) &&
> +	    !(dm_clone_changed_this_transaction(clone->md) && need_commit_due_to_time(clone)))
> +		return;
> +
> +	if (commit_metadata(clone)) {
> +		bio_list_merge(&bios, &bio_completions);
> +
> +		while ((bio = bio_list_pop(&bios)))
> +			bio_io_error(bio);
> +
> +		return;
> +	}
> +
> +	clone->last_commit_jiffies = jiffies;
> +
> +	while ((bio = bio_list_pop(&bio_completions)))
> +		bio_endio(bio);
> +
> +	while ((bio = bio_list_pop(&bios)))
> +		generic_make_request(bio);
> +}
> +
> +static void do_worker(struct work_struct *work)
> +{
> +	struct clone *clone = container_of(work, typeof(*clone), worker);
> +
> +	process_deferred_bios(clone);
> +	process_deferred_discards(clone);
> +
> +	/*
> +	 * process_deferred_flush_bios():
> +	 *
> +	 *   - Commit metadata
> +	 *
> +	 *   - Process deferred REQ_FUA completions
> +	 *
> +	 *   - Process deferred REQ_PREFLUSH bios
> +	 */
> +	process_deferred_flush_bios(clone);
> +
> +	/* Background hydration */
> +	do_hydration(clone);
> +}
> +
> +/*
> + * Commit periodically so that not too much unwritten data builds up.
> + *
> + * Also, restart background hydration, if it has been stopped by in-flight I/O.
> + */
> +static void do_waker(struct work_struct *work)
> +{
> +	struct clone *clone = container_of(to_delayed_work(work), struct clone, waker);
> +
> +	wake_worker(clone);
> +	queue_delayed_work(clone->wq, &clone->waker, COMMIT_PERIOD);
> +}
> +
> +/*---------------------------------------------------------------------------*/
> +
> +/*
> + * Target methods
> + */
> +static int clone_map(struct dm_target *ti, struct bio *bio)
> +{
> +	struct clone *clone = ti->private;
> +	unsigned long region_nr;
> +
> +	atomic_inc(&clone->ios_in_flight);
> +
> +	if (unlikely(get_clone_mode(clone) == CM_FAIL))
> +		return DM_MAPIO_KILL;
> +
> +	/*
> +	 * REQ_PREFLUSH bios carry no data:
> +	 *
> +	 * - Commit metadata, if changed
> +	 *
> +	 * - Pass down to clone device
> +	 */
> +	if (bio->bi_opf & REQ_PREFLUSH) {
> +		remap_and_issue(clone, bio);
> +		return DM_MAPIO_SUBMITTED;
> +	}
> +
> +	bio->bi_iter.bi_sector = dm_target_offset(ti, bio->bi_iter.bi_sector);
> +
> +	/*
> +	 * dm-clone interprets discards and performs a fast hydration of the
> +	 * discarded regions, i.e., we skip the copy from the origin device and
> +	 * just mark the regions as hydrated.
> +	 */
> +	if (bio_op(bio) == REQ_OP_DISCARD) {
> +		process_discard_bio(clone, bio);
> +		return DM_MAPIO_SUBMITTED;
> +	}
> +
> +	/*
> +	 * If the bio's region is hydrated, redirect it to the clone device.
> +	 *
> +	 * If the region is not hydrated and the bio is a READ, redirect it to
> +	 * the origin device.
> +	 *
> +	 * Else, defer WRITE bio until after its region has been hydrated and
> +	 * start the region's hydration immediately.
> +	 */
> +	region_nr = bio_to_region(clone, bio);
> +	if (dm_clone_is_region_hydrated(clone->md, region_nr)) {
> +		remap_and_issue(clone, bio);
> +		return DM_MAPIO_SUBMITTED;
> +	} else if (bio_data_dir(bio) == READ) {
> +		remap_to_origin(clone, bio);
> +		return DM_MAPIO_REMAPPED;
> +	}
> +
> +	remap_to_clone(clone, bio);
> +	hydrate_bio_region(clone, bio);
> +
> +	return DM_MAPIO_SUBMITTED;
> +}
> +
> +static int clone_endio(struct dm_target *ti, struct bio *bio, blk_status_t *error)
> +{
> +	struct clone *clone = ti->private;
> +
> +	atomic_dec(&clone->ios_in_flight);
> +
> +	return DM_ENDIO_DONE;
> +}
> +
> +static void emit_flags(struct clone *clone, char *result, unsigned int maxlen,
> +		       ssize_t *sz_ptr)
> +{
> +	ssize_t sz = *sz_ptr;
> +	unsigned int count;
> +
> +	count = !test_bit(DM_CLONE_HYDRATION_ENABLED, &clone->flags);
> +	count += !test_bit(DM_CLONE_DISCARD_PASSDOWN, &clone->flags);
> +
> +	DMEMIT("%u ", count);
> +
> +	if (!test_bit(DM_CLONE_HYDRATION_ENABLED, &clone->flags))
> +		DMEMIT("no_hydration ");
> +
> +	if (!test_bit(DM_CLONE_DISCARD_PASSDOWN, &clone->flags))
> +		DMEMIT("no_discard_passdown ");
> +
> +	*sz_ptr = sz;
> +}
> +
> +static void emit_core_args(struct clone *clone, char *result,
> +			   unsigned int maxlen, ssize_t *sz_ptr)
> +{
> +	ssize_t sz = *sz_ptr;
> +	unsigned int count = 4;
> +
> +	DMEMIT("%u hydration_threshold %llu hydration_block_size %llu ", count,
> +	       (unsigned long long)READ_ONCE(clone->hydration_threshold),
> +	       (unsigned long long)READ_ONCE(clone->hydration_batch_size) << clone->region_shift);
> +
> +	*sz_ptr = sz;
> +}
> +
> +/*
> + * Status format:
> + *
> + * <metadata block size> <#used metadata blocks>/<#total metadata blocks>
> + * <clone region size> <#hydrated regions>/<#total regions> <#hydrating regions>
> + * <#features> <features>* <#core args> <core args>* <clone metadata mode>
> + */
> +static void clone_status(struct dm_target *ti, status_type_t type,
> +			 unsigned int status_flags, char *result,
> +			 unsigned int maxlen)
> +{
> +	int r;
> +	unsigned int i;
> +	ssize_t sz = 0;
> +	dm_block_t nr_free_metadata_blocks = 0;
> +	dm_block_t nr_metadata_blocks = 0;
> +	char buf[BDEVNAME_SIZE];
> +	struct clone *clone = ti->private;
> +
> +	switch (type) {
> +	case STATUSTYPE_INFO:
> +		if (get_clone_mode(clone) == CM_FAIL) {
> +			DMEMIT("Fail");
> +			break;
> +		}
> +
> +		/* Commit to ensure statistics aren't out-of-date */
> +		if (!(status_flags & DM_STATUS_NOFLUSH_FLAG) && !dm_suspended(ti))
> +			(void) commit_metadata(clone);
> +
> +		r = dm_clone_get_free_metadata_block_count(clone->md, &nr_free_metadata_blocks);
> +
> +		if (r) {
> +			DMERR("%s: dm_clone_get_free_metadata_block_count returned %d",
> +			      clone_device_name(clone), r);
> +			goto error;
> +		}
> +
> +		r = dm_clone_get_metadata_dev_size(clone->md, &nr_metadata_blocks);
> +
> +		if (r) {
> +			DMERR("%s: dm_clone_get_metadata_dev_size returned %d",
> +			      clone_device_name(clone), r);
> +			goto error;
> +		}
> +
> +		DMEMIT("%u %llu/%llu %llu %lu/%lu %u ",
> +		       DM_CLONE_METADATA_BLOCK_SIZE,
> +		       (unsigned long long)(nr_metadata_blocks - nr_free_metadata_blocks),
> +		       (unsigned long long)nr_metadata_blocks,
> +		       (unsigned long long)clone->region_size,
> +		       dm_clone_nr_of_hydrated_regions(clone->md),
> +		       clone->nr_regions,
> +		       atomic_read(&clone->hydrations_in_flight));
> +
> +		emit_flags(clone, result, maxlen, &sz);
> +		emit_core_args(clone, result, maxlen, &sz);
> +
> +		switch (get_clone_mode(clone)) {
> +		case CM_WRITE:
> +			DMEMIT("rw");
> +			break;
> +		case CM_READ_ONLY:
> +			DMEMIT("ro");
> +			break;
> +		case CM_FAIL:
> +			DMEMIT("Fail");
> +		}
> +
> +		break;
> +
> +	case STATUSTYPE_TABLE:
> +		format_dev_t(buf, clone->metadata_dev->bdev->bd_dev);
> +		DMEMIT("%s ", buf);
> +
> +		format_dev_t(buf, clone->clone_dev->bdev->bd_dev);
> +		DMEMIT("%s ", buf);
> +
> +		format_dev_t(buf, clone->origin_dev->bdev->bd_dev);
> +		DMEMIT("%s", buf);
> +
> +		for (i = 0; i < clone->nr_ctr_args; i++)
> +			DMEMIT(" %s", clone->ctr_args[i]);
> +	}
> +
> +	return;
> +
> +error:
> +	DMEMIT("Error");
> +}
> +
> +static int clone_is_congested(struct dm_target_callbacks *cb, int bdi_bits)
> +{
> +	struct request_queue *clone_q, *origin_q;
> +	struct clone *clone = container_of(cb, struct clone, callbacks);
> +
> +	origin_q = bdev_get_queue(clone->origin_dev->bdev);
> +	clone_q = bdev_get_queue(clone->clone_dev->bdev);
> +
> +	return (bdi_congested(clone_q->backing_dev_info, bdi_bits) |
> +		bdi_congested(origin_q->backing_dev_info, bdi_bits));
> +}
> +
> +static sector_t get_dev_size(struct dm_dev *dev)
> +{
> +	return i_size_read(dev->bdev->bd_inode) >> SECTOR_SHIFT;
> +}
> +
> +/*---------------------------------------------------------------------------*/
> +
> +/*
> + * Construct a clone device mapping:
> + *
> + * clone <metadata dev> <clone dev> <origin dev> <region size>
> + *	[<#feature args> [<feature arg>]* [<#core args> [key value]*]]
> + *
> + * metadata dev: Fast device holding the persistent metadata
> + * clone dev: The destination device, which will become a clone of the origin
> + * origin dev: The read-only source device that gets cloned
> + * region size: dm-clone unit size in sectors
> + *
> + * #feature args: Number of feature arguments passed
> + * feature args: E.g. no_hydration, no_discard_passdown
> + *
> + * #core arguments: An even number of core arguments
> + * core arguments: Key/value pairs for tuning the core
> + *		   E.g. 'hydration_threshold 8192'
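> + *
> + * Illustrative example (editor's note; device names and sizes are made
> + * up, with an 8-sector, i.e. 4K, region size):
> + *
> + *   dmsetup create clone --table "0 1048576000 clone /dev/nvme0n1p1 \
> + *     /dev/nvme0n1p2 /dev/sdb 8 1 no_discard_passdown"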
> + */
> +static int parse_feature_args(struct dm_arg_set *as, struct clone *clone)
> +{
> +	int r;
> +	unsigned int argc;
> +	const char *arg_name;
> +	struct dm_target *ti = clone->ti;
> +
> +	const struct dm_arg args = {
> +		.min = 0,
> +		.max = 2,
> +		.error = "Invalid number of feature arguments"
> +	};
> +
> +	/* No feature arguments supplied */
> +	if (!as->argc)
> +		return 0;
> +
> +	r = dm_read_arg_group(&args, as, &argc, &ti->error);
> +
> +	if (r)
> +		return r;
> +
> +	while (argc) {
> +		arg_name = dm_shift_arg(as);
> +		argc--;
> +
> +		if (!strcasecmp(arg_name, "no_hydration")) {
> +			__clear_bit(DM_CLONE_HYDRATION_ENABLED, &clone->flags);
> +		} else if (!strcasecmp(arg_name, "no_discard_passdown")) {
> +			__clear_bit(DM_CLONE_DISCARD_PASSDOWN, &clone->flags);
> +		} else {
> +			ti->error = "Invalid feature argument";
> +			return -EINVAL;
> +		}
> +	}
> +
> +	return 0;
> +}
> +
> +static int parse_core_args(struct dm_arg_set *as, struct clone *clone)
> +{
> +	int r;
> +	unsigned int argc;
> +	unsigned long value;
> +	const char *arg_name;
> +	struct dm_target *ti = clone->ti;
> +
> +	const struct dm_arg args = {
> +		.min = 0,
> +		.max = 4,
> +		.error = "Invalid number of core arguments"
> +	};
> +
> +	/* Initialize core arguments */
> +	clone->hydration_batch_size = DEFAULT_HYDRATION_BATCH_SIZE;
> +	clone->hydration_threshold = max_t(sector_t, DEFAULT_HYDRATION_THRESHOLD,
> +					   clone->region_size);
> +
> +	/* No core arguments supplied */
> +	if (!as->argc)
> +		return 0;
> +
> +	r = dm_read_arg_group(&args, as, &argc, &ti->error);
> +
> +	if (r)
> +		return r;
> +
> +	if (argc & 1) {
> +		ti->error = "Number of core arguments must be even";
> +		return -EINVAL;
> +	}
> +
> +	while (argc) {
> +		arg_name = dm_shift_arg(as);
> +		argc -= 2;
> +
> +		if (!strcasecmp(arg_name, "hydration_threshold")) {
> +			if (kstrtoul(dm_shift_arg(as), 10, &value)) {
> +				ti->error = "Invalid value for argument `hydration_threshold'";
> +				return -EINVAL;
> +			}
> +			clone->hydration_threshold = value;
> +		} else if (!strcasecmp(arg_name, "hydration_block_size")) {
> +			if (kstrtoul(dm_shift_arg(as), 10, &value)) {
> +				ti->error = "Invalid value for argument `hydration_block_size'";
> +				return -EINVAL;
> +			}
> +			clone->hydration_batch_size = value >> clone->region_shift;
> +		} else {
> +			ti->error = "Invalid core argument";
> +			return -EINVAL;
> +		}
> +	}
> +
> +	return 0;
> +}
> +
> +static int parse_region_size(struct clone *clone, struct dm_arg_set *as, char **error)
> +{
> +	int r;
> +	unsigned int region_size;
> +	struct dm_arg arg;
> +
> +	arg.min = MIN_REGION_SIZE;
> +	arg.max = MAX_REGION_SIZE;
> +	arg.error = "Invalid region size";
> +
> +	r = dm_read_arg(&arg, as, &region_size, error);
> +
> +	if (r)
> +		return r;
> +
> +	/* Check region size is a power of 2 */
> +	if (!is_power_of_2(region_size)) {
> +		*error = "Region size is not a power of 2";
> +		return -EINVAL;
> +	}
> +
> +	/* Validate the region size against the device logical block size */
> +	if (region_size % (bdev_logical_block_size(clone->origin_dev->bdev) >> 9) ||
> +	    region_size % (bdev_logical_block_size(clone->clone_dev->bdev) >> 9)) {
> +		*error = "Region size is not a multiple of device logical block size";
> +		return -EINVAL;
> +	}
> +
> +	clone->region_size = region_size;
> +
> +	return 0;
> +}
> +
> +static int validate_nr_regions(unsigned long n, char **error)
> +{
> +	/*
> +	 * dm_bitset restricts us to 2^32 regions. test_bit & co. restrict us
> +	 * further to 2^31 regions.
> +	 */
> +	if (n > (1UL << 31)) {
> +		*error = "Too many regions. Consider increasing the region size";
> +		return -EINVAL;
> +	}
> +
> +	return 0;
> +}
> +
> +static int parse_metadata_dev(struct clone *clone, struct dm_arg_set *as, char **error)
> +{
> +	int r;
> +	sector_t metadata_dev_size;
> +	char b[BDEVNAME_SIZE];
> +
> +	r = dm_get_device(clone->ti, dm_shift_arg(as), FMODE_READ | FMODE_WRITE,
> +			  &clone->metadata_dev);
> +
> +	if (r) {
> +		*error = "Error opening metadata device";
> +		return r;
> +	}
> +
> +	metadata_dev_size = get_dev_size(clone->metadata_dev);
> +
> +	if (metadata_dev_size > DM_CLONE_METADATA_MAX_SECTORS_WARNING)
> +		DMWARN("Metadata device %s is larger than %u sectors: excess space will not be used.",
> +		       bdevname(clone->metadata_dev->bdev, b), DM_CLONE_METADATA_MAX_SECTORS);
> +
> +	return 0;
> +}
> +
> +static int parse_clone_dev(struct clone *clone, struct dm_arg_set *as, char **error)
> +{
> +	int r;
> +	sector_t clone_dev_size;
> +
> +	r = dm_get_device(clone->ti, dm_shift_arg(as), FMODE_READ | FMODE_WRITE,
> +			  &clone->clone_dev);
> +
> +	if (r) {
> +		*error = "Error opening clone device";
> +		return r;
> +	}
> +
> +	clone_dev_size = get_dev_size(clone->clone_dev);
> +
> +	if (clone_dev_size < clone->ti->len) {
> +		dm_put_device(clone->ti, clone->clone_dev);
> +
> +		*error = "Device size larger than clone device";
> +		return -EINVAL;
> +	}
> +
> +	return 0;
> +}
> +
> +static int parse_origin_dev(struct clone *clone, struct dm_arg_set *as, char **error)
> +{
> +	int r;
> +	sector_t origin_dev_size;
> +
> +	r = dm_get_device(clone->ti, dm_shift_arg(as), FMODE_READ,
> +			  &clone->origin_dev);
> +
> +	if (r) {
> +		*error = "Error opening origin device";
> +		return r;
> +	}
> +
> +	origin_dev_size = get_dev_size(clone->origin_dev);
> +
> +	if (origin_dev_size < clone->ti->len) {
> +		dm_put_device(clone->ti, clone->origin_dev);
> +
> +		*error = "Device size larger than origin device";
> +		return -EINVAL;
> +	}
> +
> +	return 0;
> +}
> +
> +static int copy_ctr_args(struct clone *clone, int argc, const char **argv, char **error)
> +{
> +	unsigned int i;
> +	const char **copy;
> +
> +	copy = kcalloc(argc, sizeof(*copy), GFP_KERNEL);
> +
> +	if (!copy)
> +		goto error;
> +
> +	for (i = 0; i < argc; i++) {
> +		copy[i] = kstrdup(argv[i], GFP_KERNEL);
> +
> +		if (!copy[i]) {
> +			while (i--)
> +				kfree(copy[i]);
> +			kfree(copy);
> +			goto error;
> +		}
> +	}
> +
> +	clone->nr_ctr_args = argc;
> +	clone->ctr_args = copy;
> +
> +	return 0;
> +
> +error:
> +	*error = "Failed to allocate memory for table line";
> +	return -ENOMEM;
> +}
> +
> +static int clone_ctr(struct dm_target *ti, unsigned int argc, char **argv)
> +{
> +	int r;
> +	struct clone *clone;
> +	struct dm_arg_set as;
> +
> +	if (argc < 4) {
> +		ti->error = "Invalid number of arguments";
> +		return -EINVAL;
> +	}
> +
> +	as.argc = argc;
> +	as.argv = argv;
> +
> +	clone = kzalloc(sizeof(*clone), GFP_KERNEL);
> +
> +	if (!clone) {
> +		ti->error = "Failed to allocate clone structure";
> +		return -ENOMEM;
> +	}
> +
> +	clone->ti = ti;
> +
> +	/* Initialize dm-clone flags */
> +	__set_bit(DM_CLONE_HYDRATION_ENABLED, &clone->flags);
> +	__set_bit(DM_CLONE_HYDRATION_SUSPENDED, &clone->flags);
> +	__set_bit(DM_CLONE_DISCARD_PASSDOWN, &clone->flags);
> +
> +	r = parse_metadata_dev(clone, &as, &ti->error);
> +
> +	if (r)
> +		goto out_with_clone;
> +
> +	r = parse_clone_dev(clone, &as, &ti->error);
> +
> +	if (r)
> +		goto out_with_meta_dev;
> +
> +	r = parse_origin_dev(clone, &as, &ti->error);
> +
> +	if (r)
> +		goto out_with_clone_dev;
> +
> +	r = parse_region_size(clone, &as, &ti->error);
> +
> +	if (r)
> +		goto out_with_origin_dev;
> +
> +	clone->region_shift = __ffs(clone->region_size);
> +	clone->nr_regions = dm_sector_div_up(ti->len, clone->region_size);
> +
> +	r = validate_nr_regions(clone->nr_regions, &ti->error);
> +
> +	if (r)
> +		goto out_with_origin_dev;
> +
> +	r = dm_set_target_max_io_len(ti, clone->region_size);
> +
> +	if (r) {
> +		ti->error = "Failed to set max io len";
> +		goto out_with_origin_dev;
> +	}
> +
> +	r = parse_feature_args(&as, clone);
> +
> +	if (r)
> +		goto out_with_origin_dev;
> +
> +	r = parse_core_args(&as, clone);
> +
> +	if (r)
> +		goto out_with_origin_dev;
> +
> +	/* Load metadata */
> +	clone->md = dm_clone_metadata_open(clone->metadata_dev->bdev, ti->len,
> +					   clone->region_size);
> +
> +	if (IS_ERR(clone->md)) {
> +		ti->error = "Failed to load metadata";
> +		r = PTR_ERR(clone->md);
> +		goto out_with_origin_dev;
> +	}
> +
> +	__set_clone_mode(clone, CM_WRITE);
> +
> +	if (get_clone_mode(clone) != CM_WRITE) {
> +		ti->error = "Unable to get write access to metadata, please check/repair metadata";
> +		r = -EPERM;
> +		goto out_with_metadata;
> +	}
> +
> +	clone->last_commit_jiffies = jiffies;
> +
> +	/* Allocate hydration hash table */
> +	r = hash_table_init(clone);
> +
> +	if (r) {
> +		ti->error = "Failed to allocate hydration hash table";
> +		goto out_with_metadata;
> +	}
> +
> +	atomic_set(&clone->ios_in_flight, 0);
> +	init_waitqueue_head(&clone->hydration_stopped);
> +	spin_lock_init(&clone->lock);
> +	bio_list_init(&clone->deferred_bios);
> +	bio_list_init(&clone->deferred_discard_bios);
> +	bio_list_init(&clone->deferred_flush_bios);
> +	bio_list_init(&clone->deferred_flush_completions);
> +	clone->hydration_offset = 0;
> +	atomic_set(&clone->hydrations_in_flight, 0);
> +
> +	clone->wq = alloc_workqueue("dm-" DM_MSG_PREFIX, WQ_MEM_RECLAIM, 0);
> +
> +	if (!clone->wq) {
> +		ti->error = "Failed to allocate workqueue";
> +		r = -ENOMEM;
> +		goto out_with_ht;
> +	}
> +
> +	INIT_WORK(&clone->worker, do_worker);
> +	INIT_DELAYED_WORK(&clone->waker, do_waker);
> +
> +	clone->kcopyd_client = dm_kcopyd_client_create(&dm_kcopyd_throttle);
> +
> +	if (IS_ERR(clone->kcopyd_client)) {
> +		r = PTR_ERR(clone->kcopyd_client);
> +		goto out_with_wq;
> +	}
> +
> +	r = mempool_init_slab_pool(&clone->hydration_pool, MIN_HYDRATIONS,
> +				   _hydration_cache);
> +
> +	if (r) {
> +		ti->error = "Failed to create dm_clone_region_hydration memory pool";
> +		goto out_with_kcopyd;
> +	}
> +
> +	/* Save a copy of the table line */
> +	r = copy_ctr_args(clone, argc - 3, (const char **)argv + 3, &ti->error);
> +
> +	if (r)
> +		goto out_with_mempool;
> +
> +	mutex_init(&clone->commit_lock);
> +	clone->callbacks.congested_fn = clone_is_congested;
> +	dm_table_add_target_callbacks(ti->table, &clone->callbacks);
> +
> +	/* Enable flushes */
> +	ti->num_flush_bios = 1;
> +	ti->flush_supported = true;
> +
> +	/* Enable discards */
> +	ti->discards_supported = true;
> +	ti->num_discard_bios = 1;
> +
> +	ti->private = clone;
> +
> +	return 0;
> +
> +out_with_mempool:
> +	mempool_exit(&clone->hydration_pool);
> +
> +out_with_kcopyd:
> +	dm_kcopyd_client_destroy(clone->kcopyd_client);
> +
> +out_with_wq:
> +	destroy_workqueue(clone->wq);
> +
> +out_with_ht:
> +	hash_table_exit(clone);
> +
> +out_with_metadata:
> +	dm_clone_metadata_close(clone->md);
> +
> +out_with_origin_dev:
> +	dm_put_device(ti, clone->origin_dev);
> +
> +out_with_clone_dev:
> +	dm_put_device(ti, clone->clone_dev);
> +
> +out_with_meta_dev:
> +	dm_put_device(ti, clone->metadata_dev);
> +
> +out_with_clone:
> +	kfree(clone);
> +
> +	return r;
> +}
> +
> +static void clone_dtr(struct dm_target *ti)
> +{
> +	unsigned int i;
> +	struct clone *clone = ti->private;
> +
> +	mutex_destroy(&clone->commit_lock);
> +
> +	for (i = 0; i < clone->nr_ctr_args; i++)
> +		kfree(clone->ctr_args[i]);
> +	kfree(clone->ctr_args);
> +
> +	mempool_exit(&clone->hydration_pool);
> +	dm_kcopyd_client_destroy(clone->kcopyd_client);
> +	destroy_workqueue(clone->wq);
> +	hash_table_exit(clone);
> +	dm_clone_metadata_close(clone->md);
> +	dm_put_device(ti, clone->origin_dev);
> +	dm_put_device(ti, clone->clone_dev);
> +	dm_put_device(ti, clone->metadata_dev);
> +
> +	kfree(clone);
> +}
> +
> +/*---------------------------------------------------------------------------*/
> +
> +static void clone_postsuspend(struct dm_target *ti)
> +{
> +	struct clone *clone = ti->private;
> +
> +	/*
> +	 * To successfully suspend the device:
> +	 *
> +	 *	- We cancel the delayed work for periodic commits and wait for
> +	 *	  it to finish.
> +	 *
> +	 *	- We stop the background hydration, i.e. we prevent new region
> +	 *	  hydrations from starting.
> +	 *
> +	 *	- We wait for any in-flight hydrations to finish.
> +	 *
> +	 *	- We flush the workqueue.
> +	 *
> +	 *	- We commit the metadata.
> +	 */
> +	cancel_delayed_work_sync(&clone->waker);
> +
> +	set_bit(DM_CLONE_HYDRATION_SUSPENDED, &clone->flags);
> +
> +	/*
> +	 * Make sure set_bit() is ordered before atomic_read(), otherwise we
> +	 * might race with do_hydration() and miss some started region
> +	 * hydrations.
> +	 *
> +	 * This is paired with smp_mb__after_atomic() in do_hydration().
> +	 */
> +	smp_mb__after_atomic();
> +
> +	wait_event(clone->hydration_stopped, !atomic_read(&clone->hydrations_in_flight));
> +	flush_workqueue(clone->wq);
> +
> +	(void) commit_metadata(clone);
> +}
> +
> +static void clone_resume(struct dm_target *ti)
> +{
> +	struct clone *clone = ti->private;
> +
> +	clear_bit(DM_CLONE_HYDRATION_SUSPENDED, &clone->flags);
> +	do_waker(&clone->waker.work);
> +}
> +
> +static bool bdev_supports_discards(struct block_device *bdev)
> +{
> +	struct request_queue *q = bdev_get_queue(bdev);
> +
> +	return (q && blk_queue_discard(q));
> +}
> +
> +/*
> + * If discard_passdown was enabled, verify that the clone device supports
> + * discards. Disable discard_passdown if not.
> + */
> +static void disable_passdown_if_not_supported(struct clone *clone)
> +{
> +	struct block_device *clone_dev = clone->clone_dev->bdev;
> +	struct queue_limits *clone_limits = &bdev_get_queue(clone_dev)->limits;
> +	const char *reason = NULL;
> +	char buf[BDEVNAME_SIZE];
> +
> +	if (!test_bit(DM_CLONE_DISCARD_PASSDOWN, &clone->flags))
> +		return;
> +
> +	if (!bdev_supports_discards(clone_dev))
> +		reason = "discard unsupported";
> +	else if (clone_limits->max_discard_sectors < clone->region_size)
> +		reason = "max discard sectors smaller than a region";
> +
> +	if (reason) {
> +		DMWARN("Clone device (%s) %s: Disabling discard passdown.",
> +		       bdevname(clone_dev, buf), reason);
> +		clear_bit(DM_CLONE_DISCARD_PASSDOWN, &clone->flags);
> +	}
> +}
> +
> +static void set_discard_limits(struct clone *clone, struct queue_limits *limits)
> +{
> +	struct block_device *clone_bdev = clone->clone_dev->bdev;
> +	struct queue_limits *clone_limits = &bdev_get_queue(clone_bdev)->limits;
> +
> +	if (!test_bit(DM_CLONE_DISCARD_PASSDOWN, &clone->flags)) {
> +		/* No passdown is done so we set our own virtual limits */
> +		limits->discard_granularity = clone->region_size << SECTOR_SHIFT;
> +		limits->max_discard_sectors = round_down(UINT_MAX >> SECTOR_SHIFT, clone->region_size);
> +		return;
> +	}
> +
> +	/*
> +	 * clone_iterate_devices() is stacking both the origin and clone device
> +	 * limits but discards aren't passed to the origin device, so inherit
> +	 * clone's limits.
> +	 */
> +	limits->max_discard_sectors = clone_limits->max_discard_sectors;
> +	limits->max_hw_discard_sectors = clone_limits->max_hw_discard_sectors;
> +	limits->discard_granularity = clone_limits->discard_granularity;
> +	limits->discard_alignment = clone_limits->discard_alignment;
> +	limits->discard_misaligned = clone_limits->discard_misaligned;
> +	limits->max_discard_segments = clone_limits->max_discard_segments;
> +}
> +
> +static void clone_io_hints(struct dm_target *ti, struct queue_limits *limits)
> +{
> +	struct clone *clone = ti->private;
> +	u64 io_opt_sectors = limits->io_opt >> SECTOR_SHIFT;
> +
> +	/*
> +	 * If the system-determined stacked limits are compatible with
> +	 * dm-clone's region size (io_opt is a factor) do not override them.
> +	 */
> +	if (io_opt_sectors < clone->region_size ||
> +	    do_div(io_opt_sectors, clone->region_size)) {
> +		blk_limits_io_min(limits, clone->region_size << SECTOR_SHIFT);
> +		blk_limits_io_opt(limits, clone->region_size << SECTOR_SHIFT);
> +	}
> +
> +	disable_passdown_if_not_supported(clone);
> +	set_discard_limits(clone, limits);
> +}
> +
> +static int clone_iterate_devices(struct dm_target *ti,
> +				 iterate_devices_callout_fn fn, void *data)
> +{
> +	int ret;
> +	struct clone *clone = ti->private;
> +	struct dm_dev *clone_dev = clone->clone_dev;
> +	struct dm_dev *origin_dev = clone->origin_dev;
> +
> +	ret = fn(ti, origin_dev, 0, ti->len, data);
> +	if (!ret)
> +		ret = fn(ti, clone_dev, 0, ti->len, data);
> +	return ret;
> +}
> +
> +/*
> + * dm-clone message functions.
> + */
> +static void set_hydration_threshold(struct clone *clone, sector_t threshold)
> +{
> +	WRITE_ONCE(clone->hydration_threshold, threshold);
> +
> +	/*
> +	 * If user space sets hydration_threshold to a value lower than the
> +	 * region size then the hydration will stop. If at a later time
> +	 * hydration_threshold is increased to a value greater or equal to the
> +	 * region size we must restart the hydration process by waking up the
> +	 * worker.
> +	 */
> +	wake_worker(clone);
> +}
> +
> +static void set_hydration_batch_size(struct clone *clone, sector_t batch_sectors)
> +{
> +	WRITE_ONCE(clone->hydration_batch_size, batch_sectors >> clone->region_shift);
> +}
> +
> +static void enable_hydration(struct clone *clone)
> +{
> +	if (!test_and_set_bit(DM_CLONE_HYDRATION_ENABLED, &clone->flags))
> +		wake_worker(clone);
> +}
> +
> +static void disable_hydration(struct clone *clone)
> +{
> +	clear_bit(DM_CLONE_HYDRATION_ENABLED, &clone->flags);
> +}
> +
> +static int clone_message(struct dm_target *ti, unsigned int argc, char **argv,
> +			 char *result, unsigned int maxlen)
> +{
> +	struct clone *clone = ti->private;
> +	unsigned long value;
> +
> +	if (!argc)
> +		return -EINVAL;
> +
> +	if (!strcasecmp(argv[0], "enable_hydration")) {
> +		enable_hydration(clone);
> +		return 0;
> +	}
> +
> +	if (!strcasecmp(argv[0], "disable_hydration")) {
> +		disable_hydration(clone);
> +		return 0;
> +	}
> +
> +	if (argc != 2)
> +		goto bad_message;
> +
> +	if (!strcasecmp(argv[0], "hydration_threshold")) {
> +		if (kstrtoul(argv[1], 10, &value))
> +			return -EINVAL;
> +
> +		set_hydration_threshold(clone, value);
> +
> +		return 0;
> +	}
> +
> +	if (!strcasecmp(argv[0], "hydration_block_size")) {
> +		if (kstrtoul(argv[1], 10, &value))
> +			return -EINVAL;
> +
> +		set_hydration_batch_size(clone, value);
> +
> +		return 0;
> +	}
> +
> +bad_message:
> +	DMERR("%s: Unsupported message `%s'", clone_device_name(clone), argv[0]);
> +	return -EINVAL;
> +}
> +
> +static struct target_type clone_target = {
> +	.name = "clone",
> +	.version = {1, 0, 0},
> +	.module = THIS_MODULE,
> +	.ctr = clone_ctr,
> +	.dtr =  clone_dtr,
> +	.map = clone_map,
> +	.end_io = clone_endio,
> +	.postsuspend = clone_postsuspend,
> +	.resume = clone_resume,
> +	.status = clone_status,
> +	.message = clone_message,
> +	.io_hints = clone_io_hints,
> +	.iterate_devices = clone_iterate_devices,
> +};
> +
> +/*---------------------------------------------------------------------------*/
> +
> +/* Module functions */
> +static int __init dm_clone_init(void)
> +{
> +	int r;
> +
> +	_hydration_cache = KMEM_CACHE(dm_clone_region_hydration, 0);
> +
> +	if (!_hydration_cache)
> +		return -ENOMEM;
> +
> +	r = dm_register_target(&clone_target);
> +
> +	if (r < 0) {
> +		DMERR("Failed to register clone target");
> +		kmem_cache_destroy(_hydration_cache);
> +		return r;
> +	}
> +
> +	return 0;
> +}
> +
> +static void __exit dm_clone_exit(void)
> +{
> +	dm_unregister_target(&clone_target);
> +
> +	kmem_cache_destroy(_hydration_cache);
> +	_hydration_cache = NULL;
> +}
> +
> +/* Module hooks */
> +module_init(dm_clone_init);
> +module_exit(dm_clone_exit);
> +
> +MODULE_DESCRIPTION(DM_NAME " clone target");
> +MODULE_AUTHOR("Nikos Tsironis <ntsironis@arrikto.com>");
> +MODULE_LICENSE("GPL");

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [RFC PATCH 1/1] dm: add clone target
  2019-07-09 21:28   ` Heinz Mauelshagen
@ 2019-07-10 18:45     ` Nikos Tsironis
  2019-07-17 14:41       ` Heinz Mauelshagen
  0 siblings, 1 reply; 14+ messages in thread
From: Nikos Tsironis @ 2019-07-10 18:45 UTC (permalink / raw)
  To: Heinz Mauelshagen, snitzer, agk, dm-devel; +Cc: vkoukis, iliastsi

On 7/10/19 12:28 AM, Heinz Mauelshagen wrote:
> Hi Nikos,
> 
> what is the crucial factor your target offers vs. resynchronizing such a
> latency-distinct 2-legged mirror with a read-write snapshot (local, fast
> exception store) on top, tearing the mirror down keeping the local leg
> once fully in sync, and merging the snapshot back into it?
> 
> Heinz
> 

Hi Heinz,

The most significant benefits of dm-clone over the solution you propose
are better performance, no need for extra COW space, no need to merge
back a snapshot, and the ability to skip syncing the unused space of a
file system.

1. In order to ensure snapshot consistency, dm-snapshot needs to
   commit a completed exception, before signaling the completion of the
   write that triggered it to upper layers.

   The persistent exception store commits exceptions every time a
   metadata area is filled or when there are no more exceptions
   in-flight. For a 4K chunk size we have 256 exceptions per metadata
   area, so the best case scenario is one commit per 256 writes. Here I
   assume a write with size equal to the chunk size of dm-snapshot,
   e.g., 4K, so there is no COW overhead, and that we write to new
   chunks, so we need to allocate new exceptions.

   Part of committing the metadata is flushing the cache of the
   underlying device, if there is one. We have seen SSDs which can
   sustain hundreds of thousands of random write IOPS, but they take up
   to 8ms to flush their cache. In such a case, flushing the SSD cache
   every few writes significantly degrades performance.

   Moreover, dm-snapshot forces exceptions to complete in the order they
   were allocated, to avoid snapshot space leak on crash (commit
   230c83afdd9cd). This inserts further latency in exception completions
   and thus user write completions.

   On the other hand, when cloning a device we don't need to be so
   strict and can rely on committing the metadata every time a FLUSH or
   FUA bio is written, or periodically, like dm-thin and dm-cache do.

   dm-clone does exactly that. When a region/chunk is cloned or
   over-written by a write, we just set a bit in the relevant in-core
   bitmap. The metadata are committed once every second or when we
   receive a FLUSH or FUA bio.

   This improves performance significantly and results in increased IOPS
   and reduced latency, especially in cases where flushing the disk
   cache is very expensive.

2. For large devices, e.g. multi terabyte disks, resynchronizing the
   local leg can take a lot of time. If the application running over the
   local device is write-heavy, dm-snapshot will end up allocating a
   large number of exceptions. This increases the number of hash table
   collisions and thus increases the time we need to do a hash table
   lookup.

   dm-snapshot needs to look up the exception hash tables in order to
   service an I/O, so this increases latency and degrades performance.

   On the other hand, dm-clone is just testing a bit to see if a region
   is cloned or not and decides what to do based on that test (see the
   sketch right after this list).

3. With dm-clone there is no need to reserve extra COW space for
   temporarily storing the written data, while the clone device is
   syncing. Nor would one need to worry about monitoring and expanding
   the COW device to prevent it from filling up.

4. With dm-clone there is no need to merge back potentially several
   gigabytes once cloning/syncing completes. We also avoid the relevant
   performance degradation incurred by the merging process. Writes just
   go directly to the clone device.

5. dm-clone implements support for discards, so it can skip
   cloning/syncing the relevant regions. In the case of a large block
   device which contains a filesystem with empty space, e.g. a 2TB
   device containing 500GB of useful data in a filesystem, this can
   significantly reduce the time needed to sync/clone.
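
To make the difference in point 2 concrete, below is a minimal
user-space sketch of the two hot-path lookups. The names are invented
for illustration; neither function is actual code from either target:

#include <stdbool.h>
#include <stdint.h>
#include <limits.h>

#define BITS_PER_WORD (sizeof(unsigned long) * CHAR_BIT)

/* dm-clone style lookup: one bit per region, a single O(1) test. */
static bool region_is_hydrated(const unsigned long *bitmap,
			       unsigned long region_nr)
{
	return bitmap[region_nr / BITS_PER_WORD] &
	       (1UL << (region_nr % BITS_PER_WORD));
}

/*
 * dm-snapshot style lookup: walk a hash bucket of exceptions; the walk
 * gets longer as allocated exceptions accumulate.
 */
struct exception {
	uint64_t old_chunk;
	struct exception *next;
};

static struct exception *lookup_exception(struct exception *bucket_head,
					  uint64_t chunk)
{
	struct exception *e;

	for (e = bucket_head; e; e = e->next)
		if (e->old_chunk == chunk)
			return e;

	return NULL;
}

The bitmap test costs the same handful of instructions no matter how
much has been written, whereas the bucket walk grows with the number of
allocated exceptions.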

This was a rather long email, but I hope it makes clearer the
significant benefits of dm-clone over using dm-snapshot, and our
rationale behind the decision to implement a new target.

I would be more than happy to continue the conversation and focus on any
other questions you may have.

Thanks,
Nikos

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [RFC PATCH 1/1] dm: add clone target
  2019-07-10 18:45     ` Nikos Tsironis
@ 2019-07-17 14:41       ` Heinz Mauelshagen
  2019-07-22 20:16         ` Nikos Tsironis
  0 siblings, 1 reply; 14+ messages in thread
From: Heinz Mauelshagen @ 2019-07-17 14:41 UTC (permalink / raw)
  To: Nikos Tsironis, snitzer, agk, dm-devel; +Cc: vkoukis, iliastsi

Hi Nikos,

thanks for elaborating on those details.

Hash table collisions, exception store entry commit overhead,
SSD cache flush issues etc. are all valid points relative to performance
and work set footprints in general.

Do you have any performance numbers for your solution vs.
a snapshot one showing the approach is actually superior
in real configurations?

I'm asking this particularly in the context of your remark

"A write to a not yet hydrated region will be delayed until the 
corresponding
region has been hydrated and the hydration of the region starts 
immediately."

which'll cause a potentially large working set of delayed writes unless
those writes cover the whole, eventually larger-than-4K, region.
How does your 'clone' target perform in such heavy write situations?

In general, performance and storage footprint test results based on the
same set of read/write tests, including heavy loads with region size
variations, run on 'clone' and 'snapshot' would help your point.
and 'snapshot' would help your point.

Heinz

On 7/10/19 8:45 PM, Nikos Tsironis wrote:
> On 7/10/19 12:28 AM, Heinz Mauelshagen wrote:
>> Hi Nikos,
>>
>> what is the crucial factor your target offers vs. resynchronizing such a
>> latency distinct
>> 2-legged mirror with a read-write snapshot (local, fast exception store)
>> on top, tearing the
>> mirror down keeping the local leg once fully in sync and merging the
>> snapshot back into it?
>>
>> Heinz
>>
> Hi Heinz,
>
> The most significant benefits of dm-clone over the solution you propose
> is significantly better performance, no need for extra COW space, no
> need to merge back a snapshot, and the ability to skip syncing the
> unused space of a file system.
>
> 1. In order to ensure snapshot consistency, dm-snapshot needs to
>     commit a completed exception, before signaling the completion of the
>     write that triggered it to upper layers.
>
>     The persistent exception store commits exceptions every time a
>     metadata area is filled or when there are no more exceptions
>     in-flight. For a 4K chunk size we have 256 exceptions per metadata
>     area, so the best case scenario is one commit per 256 writes. Here I
>     assume a write with size equal to the chunk size of dm-snapshot,
>     e.g., 4K, so there is no COW overhead, and that we write to new
>     chunks, so we need to allocate new exceptions.
>
>     Part of committing the metadata is flushing the cache of the
>     underlying device, if there is one. We have seen SSDs which can
>     sustain hundreds of thousands of random write IOPS, but they take up
>     to 8ms to flush their cache. In such a case, flushing the SSD cache
>     every few writes significantly degrades performance.
>
>     Moreover, dm-snapshot forces exceptions to complete in the order they
>     were allocated, to avoid snapshot space leak on crash (commit
>     230c83afdd9cd). This inserts further latency in exception completions
>     and thus user write completions.
>
>     On the other hand, when cloning a device we don't need to be so
>     strict and can rely on committing the metadata every time a FLUSH or
>     FUA bio is written, or periodically, like dm-thin and dm-cache do.
>
>     dm-clone does exactly that. When a region/chunk is cloned or
>     over-written by a write, we just set a bit in the relevant in-core
>     bitmap. The metadata are committed once every second or when we
>     receive a FLUSH or FUA bio.
>
>     This improves performance significantly and results in increased IOPS
>     and reduced latency, especially in cases where flushing the disk
>     cache is very expensive.
>
> 2. For large devices, e.g. multi terabyte disks, resynchronizing the
>     local leg can take a lot of time. If the application running over the
>     local device is write-heavy, dm-snapshot will end up allocating a
>     large number of exceptions. This increases the number of hash table
>     collisions and thus increases the time we need to do a hash table
>     lookup.
>
>     dm-snapshot needs to look up the exception hash tables in order to
>     service an I/O, so this increases latency and degrades performance.
>
>     On the other hand, dm-clone is just testing a bit to see if a region
>     is cloned or not and decides what to do based on that test.
>
> 3. With dm-clone there is no need to reserve extra COW space for
>     temporarily storing the written data, while the clone device is
>     syncing. Nor would one need to worry about monitoring and expanding
>     the COW device to prevent it from filling up.
>
> 4. With dm-clone there is no need to merge back potentially several
>     gigabytes once cloning/syncing completes. We also avoid the relevant
>     performance degradation incurred by the merging process. Writes just
>     go directly to the clone device.
>
> 5. dm-clone implements support for discards, so it can skip
>     cloning/syncing the relevant regions. In the case of a large block
>     device which contains a filesystem with empty space, e.g. a 2TB
>     device containing 500GB of useful data in a filesystem, this can
>     significantly reduce the time needed to sync/clone.
>
> This was a rather long email, but I hope it makes clearer the
> significant benefits of dm-clone over using dm-snapshot, and our
> rationale behind the decision to implement a new target.
>
> I would be more than happy to continue the conversation and focus on any
> other questions you may have.
>
> Thanks,
> Nikos

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [RFC PATCH 1/1] dm: add clone target
  2019-07-17 14:41       ` Heinz Mauelshagen
@ 2019-07-22 20:16         ` Nikos Tsironis
  2019-07-29 21:20           ` Heinz Mauelshagen
  0 siblings, 1 reply; 14+ messages in thread
From: Nikos Tsironis @ 2019-07-22 20:16 UTC (permalink / raw)
  To: Heinz Mauelshagen, snitzer, agk, dm-devel; +Cc: vkoukis, iliastsi

On 7/17/19 5:41 PM, Heinz Mauelshagen wrote:
> Hi Nikos,
> 
> thanks for elaborating on those details.
> 
> Hash table collisions, exception store entry commit overhead,
> SSD cache flush issues etc. are all valid points relative to performance
> and work set footprints in general.
> 
> Do you have any performance numbers for your solution vs.
> a snapshot one showing the approach is actually superior in
> real configurations?

Hi Heinz,

Please see below for detailed benchmark results.

> 
> I'm asking this particularly in the context of your remark
> 
> "A write to a not yet hydrated region will be delayed until the 
> corresponding
> region has been hydrated and the hydration of the region starts 
> immediately."
> 
> which'll cause a potentially large working set of delayed writes
> unless those writes cover the whole region, which may well be larger
> than 4K.
> How does your 'clone' target perform on such heavy write situations?
> 

This situation occurs only when the writes are smaller than the region
size of dm-clone. E.g., if the user sets a region size of 64K and issues
4K writes.

In this case, we experience a performance drop due to COW. This is true
for _both_ dm-snapshot and dm-clone and is _unavoidable_.

But the common case will be setting a region size equal to the file
system block size, e.g., 4K, and thus avoiding the COW overhead. Note
that LVM snapshots _already_ use 4K as the _default_ chunk size.

Nevertheless, even for larger region/chunk sizes, dm-clone outperforms
the dm-snapshot based solution, as is evident from the following
performance measurements.

> In general, performance and storage footprint results from the same
> set of read/write tests, including heavy loads with region size
> variations, run on both 'clone' and 'snapshot' would help your point.
> 
> Heinz
> 

I used fio to run a series of read and write tests that compare the
performance of dm-clone against your proposed dm-snapshot over dm-raid
solution.

I used a 375GB spinning disk as the origin device storing the data to be
cloned and a 375GB SSD as the clone device and for storing both
dm-clone's metadata and dm-snapshot's exceptions (COW space).

dm-clone stack (dmsetup ls --tree)
==================================

clone (254:3)
 ├─source--vg-origin--lv (254:2)
 │  └─ (8:16)
 ├─dest--vg-clone--lv (254:0)
 │  └─ (259:0)
 └─dest--vg-meta--lv (254:1)
    └─ (259:0)

dm-snapshot + dm-raid stack (dmsetup ls --tree)
===============================================

mirrorvg-snap (254:7)
 ├─mirrorvg-snap-cow (254:6)
 │  └─ (259:0)
 └─mirrorvg-raid1--lv-real (254:5)
    ├─mirrorvg-raid1--lv_rimage_1 (254:3)
    │  └─ (259:0)
    ├─mirrorvg-raid1--lv_rmeta_1 (254:2)
    │  └─ (259:0)
    ├─mirrorvg-raid1--lv_rimage_0 (254:1)
    │  └─ (8:16)
    └─mirrorvg-raid1--lv_rmeta_0 (254:0)
       └─ (8:16)
mirrorvg-raid1--lv (254:4)
 └─mirrorvg-raid1--lv-real (254:5)
    ├─mirrorvg-raid1--lv_rimage_1 (254:3)
    │  └─ (259:0)
    ├─mirrorvg-raid1--lv_rmeta_1 (254:2)
    │  └─ (259:0)
    ├─mirrorvg-raid1--lv_rimage_0 (254:1)
    │  └─ (8:16)
    └─mirrorvg-raid1--lv_rmeta_0 (254:0)
       └─ (8:16)

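For reference, the dm-snapshot + dm-raid stack above could be assembled
with LVM roughly as follows. This is only a sketch: the VG name, LV
sizes and snapshot chunk size are illustrative, not the exact commands
used.

  lvcreate --type raid1 -m 1 -L 300G -n raid1-lv mirrorvg
  lvcreate --snapshot --chunksize 4k -L 50G -n snap mirrorvg/raid1-lv
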
fio configuration
=================

1. Random Read/Write latency benchmark

  ioengine=psync, bs=4K, numjobs=1, direct=1, timeout=90, time_based=1,
  rw=randwrite/randread

2. Random Read/Write IOPS benchmark

  ioengine=libaio, bs=4K, numjobs=1, direct=1, iodepth=32, timeout=90,
  time_based=1, rw=randwrite/randread

3. Sequential Read/Write Bandwidth

  ioengine=libaio, bs=256K, numjobs=1, direct=1, iodepth=32, timeout=90,
  time_based=1, rw=write/read

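As a concrete example, a minimal fio job file for benchmark 1 might look
as follows. The device path is illustrative, and recent fio versions
spell the 'timeout' option 'runtime':

  [global]
  ioengine=psync
  bs=4k
  numjobs=1
  direct=1
  runtime=90
  time_based

  [randwrite-latency]
  filename=/dev/mapper/clone
  rw=randwrite
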
Baseline
========

As a reference, the benchmark results for the raw devices:

+--------+--------------------+-----------------+--------------+
| device | rand-write latency | rand-write IOPS | seq-write BW |
+--------+--------------------+-----------------+--------------+
|  HDD   |      701 usec      |       1425      |   120 MB/s   |
|  SSD   |     72.6 usec      |      64490      |   390 MB/s   |
+--------+--------------------+-----------------+--------------+

+--------+-------------------+----------------+-------------+
| device | rand-read latency | rand-read IOPS | seq-read BW |
+--------+-------------------+----------------+-------------+
|  HDD   |      1.4 msec     |      712       |   120 MB/s  |
|  SSD   |      122 usec     |     150920     |   701 MB/s  |
+--------+-------------------+----------------+-------------+

dm-clone vs dm-snapshot+dm-raid
===============================

Latency benchmark
-----------------

1. Random write latency

+-------------------+-----------+-------------+
| region/chunk size |  dm-clone | dm-snapshot |
+-------------------+-----------+-------------+
|        4 KB       | 75.7 usec |   6.8 msec  |
|        8 KB       |  1.9 msec |  17.7 msec  |
|       16 KB       |  2.1 msec |  15.8 msec  |
|       32 KB       |  2.2 msec |  33.6 msec  |
|       64 KB       |  2.6 msec |  31.2 msec  |
|       128 KB      |  3.8 msec |  35.7 msec  |
+-------------------+-----------+-------------+

* dm-snapshot+dm-raid has 7.5 to 90 times _more_ write latency than
  dm-clone.

* For the common case of a 4 KB region/chunk size, dm-clone has minimal
  overhead over the SSD device.

* Even for region/chunk sizes greater than 4KB dm-clone's overhead is
  minimal compared to dm-snapshot+dm-raid.

2. Random read latency

+-------------------+----------+-------------+
| region/chunk size | dm-clone | dm-snapshot |
+-------------------+----------+-------------+
|        4 KB       | 1.5 msec |  10.7 msec  |
|        8 KB       | 1.5 msec |   9.7 msec  |
|       16 KB       | 1.5 msec |  11.9 msec  |
|       32 KB       | 1.5 msec |  28.6 msec  |
|       64 KB       | 1.5 msec |  27.5 msec  |
|       128 KB      | 1.5 msec |  27.3 msec  |
+-------------------+----------+-------------+

* dm-snapshot+dm-raid has 6.5 to 19 times _more_ read latency than
  dm-clone.

* For all region/chunk sizes dm-clone has minimal overhead over the HDD
  device.

IOPS benchmark
--------------

1. Random write IOPS

+-------------------+----------+-------------+
| region/chunk size | dm-clone | dm-snapshot |
+-------------------+----------+-------------+
|        4 KB       |  62347   |     3758    |
|        8 KB       |   696    |     388     |
|       16 KB       |   667    |     217     |
|       32 KB       |   614    |     207     |
|       64 KB       |   531    |     186     |
|       128 KB      |   417    |     159     |
+-------------------+----------+-------------+

* dm-clone achieves 1.8 to 16.6 times _more_ IOPS than
  dm-snapshot+dm-raid.

* For the common case of a 4 KB region/chunk size, dm-clone has minimal
  overhead over the SSD device.

* Even for region/chunk sizes greater than 4KB dm-clone achieves
  significantly more IOPS than dm-snapshot+dm-raid.

2. Random read IOPS

+-------------------+----------+-------------+
| region/chunk size | dm-clone | dm-snapshot |
+-------------------+----------+-------------+
|        4 KB       |   767    |     680     |
|        8 KB       |   714    |     677     |
|       16 KB       |   715    |     338     |
|       32 KB       |   717    |     338     |
|       64 KB       |   720    |     338     |
|       128 KB      |   724    |     339     |
+-------------------+----------+-------------+

* dm-clone achieves 1.1 to 2.1 times _more_ IOPS than
  dm-snapshot+dm-raid.

Bandwidth benchmark
-------------------

1. Sequential write BW

+-------------------+------------+-------------+
| region/chunk size |  dm-clone  | dm-snapshot |
+-------------------+------------+-------------+
|        4 KB       | 389.4 MB/s |  135.3 MB/s |
|        8 KB       | 390.5 MB/s |  231.7 MB/s |
|       16 KB       | 390.5 MB/s |  213.1 MB/s |
|       32 KB       | 390.4 MB/s |  214.0 MB/s |
|       64 KB       | 390.3 MB/s |  214.0 MB/s |
|       128 KB      | 390.5 MB/s |  211.3 MB/s |
+-------------------+------------+-------------+

* dm-clone achieves 1.7 to 2.9 times more write BW than
  dm-snapshot+dm-raid.

* For all region/chunk sizes dm-clone achieves the same write BW as the
  SSD device.

2. Sequential read BW

+-------------------+------------+-------------+
| region/chunk size |  dm-clone  | dm-snapshot |
+-------------------+------------+-------------+
|        4 KB       | 442.8 MB/s |  217.3 MB/s |
|        8 KB       | 443.8 MB/s |  288.8 MB/s |
|       16 KB       | 443.8 MB/s |  275.3 MB/s |
|       32 KB       | 443.8 MB/s |  276.1 MB/s |
|       64 KB       | 443.6 MB/s |  276.1 MB/s |
|       128 KB      | 443.6 MB/s |  275.2 MB/s |
+-------------------+------------+-------------+

* dm-clone achieves 1.5 to 2 times more read BW than
  dm-snapshot+dm-raid.

Metadata/Storage overhead
=========================

dm-clone had a _maximum_ metadata overhead of around 20 MB for all
benchmarks. As dm-clone doesn't require any extra COW space for
temporarily storing the written data (writes just go directly to the
clone device), this is the _only_ storage overhead incurred by dm-clone,
irrespective of the amount of written data.

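As a back-of-the-envelope check, assuming roughly one bit of state per
region, the 629145600-sector test device with a 4K region size gives:

  629145600 sectors / 8 sectors per region = 78643200 regions
  78643200 bits / 8 bits per byte          = ~9.4 MiB of bitmap state

so ~20 MB of on-disk metadata (bitmap plus superblock and transactional
overhead) is consistent with the figure above.
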
On the other hand, the COW space utilization of dm-snapshot, for the
bandwidth benchmarks, varied from 11.95 GB to 20.41 GB, depending on the
amount of written data.

I want to emphasize that after the cloning/syncing is complete we have
to merge this multi-gigabyte COW space back to the clone/destination
device. This will cause _further_ performance degradation, which is
_not_ reflected in the above performance measurements, but _will_ be
present in real workloads, if the dm-snapshot based solution is used.

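With the LVM-based stack, that merge step would be something along the
lines of:

  lvconvert --merge mirrorvg/snap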

To summarize, dm-clone performs _significantly_ better than a
dm-snapshot based solution in all respects (latency, IOPS, BW), and with
a _fraction_ of the storage/metadata overhead.

If you have any more questions, I would be more than happy to discuss
them with you.

Thanks,
Nikos

> On 7/10/19 8:45 PM, Nikos Tsironis wrote:
>> [...]

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [RFC PATCH 1/1] dm: add clone target
  2019-07-22 20:16         ` Nikos Tsironis
@ 2019-07-29 21:20           ` Heinz Mauelshagen
  2019-07-30 10:13             ` Nikos Tsironis
  0 siblings, 1 reply; 14+ messages in thread
From: Heinz Mauelshagen @ 2019-07-29 21:20 UTC (permalink / raw)
  To: Nikos Tsironis, snitzer, agk, dm-devel; +Cc: vkoukis, iliastsi

Hi Nikos,

thanks for providing these benchmarks, which seem to confirm the
advantages of clone vs. a snapshot/raid1 stack.

Can you please provide 'dmsetup table' for both configurations for 
completeness?

Heinz

On 7/22/19 10:16 PM, Nikos Tsironis wrote:
> [...]

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [RFC PATCH 1/1] dm: add clone target
  2019-07-29 21:20           ` Heinz Mauelshagen
@ 2019-07-30 10:13             ` Nikos Tsironis
  2019-08-27 14:09               ` Nikos Tsironis
  0 siblings, 1 reply; 14+ messages in thread
From: Nikos Tsironis @ 2019-07-30 10:13 UTC (permalink / raw)
  To: Heinz Mauelshagen, snitzer, agk, dm-devel; +Cc: vkoukis, iliastsi

On 7/30/19 12:20 AM, Heinz Mauelshagen wrote:
> Hi Nikos,
> 
> thanks for providing these benchmarks, which seem to confirm the
> advantages of clone vs. a snapshot/raid1 stack.
> 
> Can you please provide 'dmsetup table' for both configurations for 
> completeness?
> 
> Heinz
> 

Hi Heinz,

Yes, of course. The 'dmsetup table' output below is for the 4K
region/chunk size benchmark. The output for the rest of the benchmarks
is the same, with only the region/chunk sizes of dm-clone and
dm-snapshot changed.

dm-clone stack (dmsetup table)
==============================

source--vg-origin--lv: 0 629145600 linear 8:16 2048
dest--vg-meta--lv: 0 65536 linear 259:0 629147648
clone: 0 629145600 clone 254:1 254:0 254:2 8
dest--vg-clone--lv: 0 629145600 linear 259:0 2048

dm-snapshot + dm-raid stack (dmsetup table)
===========================================

mirrorvg-snap-cow: 0 104857600 linear 259:0 629155840
mirrorvg-raid1--lv_rimage_1: 0 629145600 linear 259:0 10240
mirrorvg-snap: 0 629145600 snapshot 254:5 254:6 P 8
mirrorvg-raid1--lv_rimage_0: 0 629145600 linear 8:16 10240
mirrorvg-raid1--lv-real: 0 629145600 raid raid1 3 0 region_size 1024 2 254:0 254:1 254:2 254:3
mirrorvg-raid1--lv: 0 629145600 snapshot-origin 254:5
mirrorvg-raid1--lv_rmeta_1: 0 8192 linear 259:0 2048
mirrorvg-raid1--lv_rmeta_0: 0 8192 linear 8:16 2048

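For reference, the fields of the 'clone' line above follow the target's
constructor format: <start> <len> clone <metadata dev> <destination dev>
<source dev> <region size in sectors>, with 8 sectors corresponding to
the 4K region size. As a sketch, the same device could be assembled by
hand with:

  dmsetup create clone --table \
    "0 629145600 clone 254:1 254:0 254:2 8"
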
Nikos

> On 7/22/19 10:16 PM, Nikos Tsironis wrote:
>> [...]

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [RFC PATCH 1/1] dm: add clone target
  2019-07-30 10:13             ` Nikos Tsironis
@ 2019-08-27 14:09               ` Nikos Tsironis
  2019-08-27 15:34                 ` Mike Snitzer
  0 siblings, 1 reply; 14+ messages in thread
From: Nikos Tsironis @ 2019-08-27 14:09 UTC (permalink / raw)
  To: Heinz Mauelshagen, snitzer, agk, dm-devel; +Cc: vkoukis, iliastsi

Hello,

This is a kind reminder for this patch set. I'm bumping this thread to
solicit your feedback.

Following the discussion with Heinz, I have provided extensive
benchmarks that show dm-clone's significant performance advantage over
a dm-snapshot/dm-raid1 stack.

How can we move forward with the review of dm-clone, so it can
eventually be merged upstream?

Looking forward to your feedback,

Nikos

On 7/30/19 1:13 PM, Nikos Tsironis wrote:
> On 7/30/19 12:20 AM, Heinz Mauelshagen wrote:
>> Hi Nikos,
>>
>> thanks for providing these benchmarks, which seem to confirm the
>> advantages of clone vs. a snapshot/raid1 stack.
>>
>> Can you please provide 'dmsetup table' for both configurations for 
>> completeness?
>>
>> Heinz
>>
> 
> Hi Heinz,
> 
> Yes, of course. The 'dmsetup table' output below is for the 4K
> region/chunk size benchmark. The output for the rest of the benchmarks
> is the same, with only the region/chunk sizes of dm-clone and
> dm-snapshot changed.
> 
> dm-clone stack (dmsetup table)
> ==============================
> 
> source--vg-origin--lv: 0 629145600 linear 8:16 2048
> dest--vg-meta--lv: 0 65536 linear 259:0 629147648
> clone: 0 629145600 clone 254:1 254:0 254:2 8
> dest--vg-clone--lv: 0 629145600 linear 259:0 2048
> 
> dm-snapshot + dm-raid stack (dmsetup table)
> ===========================================
> 
> mirrorvg-snap-cow: 0 104857600 linear 259:0 629155840
> mirrorvg-raid1--lv_rimage_1: 0 629145600 linear 259:0 10240
> mirrorvg-snap: 0 629145600 snapshot 254:5 254:6 P 8
> mirrorvg-raid1--lv_rimage_0: 0 629145600 linear 8:16 10240
> mirrorvg-raid1--lv-real: 0 629145600 raid raid1 3 0 region_size 1024 2 254:0 254:1 254:2 254:3
> mirrorvg-raid1--lv: 0 629145600 snapshot-origin 254:5
> mirrorvg-raid1--lv_rmeta_1: 0 8192 linear 259:0 2048
> mirrorvg-raid1--lv_rmeta_0: 0 8192 linear 8:16 2048
> 
> Nikos
> 
>> On 7/22/19 10:16 PM, Nikos Tsironis wrote:
>>> On 7/17/19 5:41 PM, Heinz Mauelshagen wrote:
>>>> Hi Nikos,
>>>>
>>>> thanks for elaborating on those details.
>>>>
>>>> Hash table collisions, exception store entry commit overhead,
>>>> SSD cache flush issues etc. are all valid points relative to performance
>>>> and work set footprints in general.
>>>>
>>>> Do you have any performance numbers for your solution vs.
>>>> a snapshot one showing the approach is actually superior in
>>>> real configurations?
>>> Hi Heinz,
>>>
>>> Please see below for detailed benchmark results.
>>>
>>>> I'm asking this particularly in the context of your remark
>>>>
>>>> "A write to a not yet hydrated region will be delayed until the
>>>> corresponding
>>>> region has been hydrated and the hydration of the region starts
>>>> immediately."
>>>>
>>>> which'll cause a potentially large working set of delayed writes unless
>>>> those
>>>> cover the whole eventually larger than 4K region.
>>>> How does your 'clone' target perform on such heavy write situations?
>>>>
>>> This situation occurs only when the writes are smaller than the region
>>> size of dm-clone. E.g., if the user sets a region size of 64K and issues
>>> 4K writes.
>>>
>>> In this case, we experience a performance drop due to COW. This is true
>>> for _both_ dm-snapshot and dm-clone and is _unavoidable_.
>>>
>>> But, the common case will be setting a region size equal to the file
>>> system block size, e.g., 4K, and thus avoiding the COW overhead. Note
>>> that LVM snapshots _already_ use 4K as the _default_ chunk size.
>>>
>>> Nevertheless, even for larger region/chunk sizes, dm-clone outperforms
>>> the dm-snapshot based solution, as is evident from the following
>>> performance measurements.
>>>
>>>> In general, performance and storage footprint test results based on the
>>>> same set
>>>> of read/write tests including heavy loads with region size variations
>>>> run on 'clone'
>>>> and 'snapshot' would help your point.
>>>>
>>>> Heinz
>>>>
>>> I used fio to run a series of read and write tests that compare the
>>> performance of dm-clone against your proposed dm-snapshot over dm-raid
>>> solution.
>>>
>>> I used a 375GB spinning disk as the origin device storing the data to be
>>> cloned and a 375GB SSD as the clone device and for storing both
>>> dm-clone's metadata and dm-snapshot's exceptions (COW space).
>>>
>>> dm-clone stack (dmsetup ls --tree)
>>> ==================================
>>>
>>> clone (254:3)
>>>   ├─source--vg-origin--lv (254:2)
>>>   │  └─ (8:16)
>>>   ├─dest--vg-clone--lv (254:0)
>>>   │  └─ (259:0)
>>>   └─dest--vg-meta--lv (254:1)
>>>      └─ (259:0)
>>>
>>> dm-snapshot + dm-raid stack (dmsetup ls --tree)
>>> ===============================================
>>>
>>> mirrorvg-snap (254:7)
>>>   ├─mirrorvg-snap-cow (254:6)
>>>   │  └─ (259:0)
>>>   └─mirrorvg-raid1--lv-real (254:5)
>>>      ├─mirrorvg-raid1--lv_rimage_1 (254:3)
>>>      │  └─ (259:0)
>>>      ├─mirrorvg-raid1--lv_rmeta_1 (254:2)
>>>      │  └─ (259:0)
>>>      ├─mirrorvg-raid1--lv_rimage_0 (254:1)
>>>      │  └─ (8:16)
>>>      └─mirrorvg-raid1--lv_rmeta_0 (254:0)
>>>         └─ (8:16)
>>> mirrorvg-raid1--lv (254:4)
>>>   └─mirrorvg-raid1--lv-real (254:5)
>>>      ├─mirrorvg-raid1--lv_rimage_1 (254:3)
>>>      │  └─ (259:0)
>>>      ├─mirrorvg-raid1--lv_rmeta_1 (254:2)
>>>      │  └─ (259:0)
>>>      ├─mirrorvg-raid1--lv_rimage_0 (254:1)
>>>      │  └─ (8:16)
>>>      └─mirrorvg-raid1--lv_rmeta_0 (254:0)
>>>         └─ (8:16)
>>>
>>> fio configuration
>>> =================
>>>
>>> 1. Random Read/Write latency benchmark
>>>
>>>    ioengine=psync, bs=4K, numjobs=1, direct=1, timeout=90, time_based=1,
>>>    rw=randwrite/randread
>>>
>>> 2. Random Read/Write IOPS benchmark
>>>
>>>    ioengine=libaio, bs=4K, numjobs=1, direct=1, iodepth=32, timeout=90,
>>>    time_based=1, rw=randwrite/randread
>>>
>>> 3. Sequential Read/Write Bandwidth
>>>
>>>    ioengine=libaio, bs=256K, numjobs=1, direct=1, iodepth=32, timeout=90,
>>>    time_based=1, rw=write/read
>>>
>>> Baseline
>>> ========
>>>
>>> As a reference, the benchmark results for the raw devices:
>>>
>>> +--------+--------------------+-----------------+--------------+
>>> | device | rand-write latency | rand-write IOPS | seq-write BW |
>>> +--------+--------------------+-----------------+--------------+
>>> |  HDD   |      701 usec      |       1425      |   120 MB/s   |
>>> |  SSD   |     72.6 usec      |      64490      |   390 MB/s   |
>>> +--------+--------------------+-----------------+--------------+
>>>
>>> +--------+-------------------+----------------+-------------+
>>> | device | rand-read latency | rand-read IOPS | seq-read BW |
>>> +--------+-------------------+----------------+-------------+
>>> |  HDD   |      1.4 msec     |      712       |   120 MB/s  |
>>> |  SSD   |      122 usec     |     150920     |   701 MB/s  |
>>> +--------+-------------------+----------------+-------------+
>>>
>>> dm-clone vs dm-snapshot+dm-raid
>>> ===============================
>>>
>>> Latency benchmark
>>> -----------------
>>>
>>> 1. Random write latency
>>>
>>> +-------------------+-----------+-------------+
>>> | region/chunk size |  dm-clone | dm-snapshot |
>>> +-------------------+-----------+-------------+
>>> |        4 KB       | 75.7 usec |   6.8 msec  |
>>> |        8 KB       |  1.9 msec |  17.7 msec  |
>>> |       16 KB       |  2.1 msec |  15.8 msec  |
>>> |       32 KB       |  2.2 msec |  33.6 msec  |
>>> |       64 KB       |  2.6 msec |  31.2 msec  |
>>> |       128 KB      |  3.8 msec |  35.7 msec  |
>>> +-------------------+-----------+-------------+
>>>
>>> * dm-snapshot+dm-raid has 7.5 to 90 times _higher_ write latency than
>>>    dm-clone.
>>>
>>> * For the common case of a 4 KB region/chunk size, dm-clone has minimal
>>>    overhead over the SSD device.
>>>
>>> * Even for region/chunk sizes greater than 4KB dm-clone's overhead is
>>>    minimal compared to dm-snapshot+dm-raid.
>>>
>>> 2. Random read latency
>>>
>>> +-------------------+----------+-------------+
>>> | region/chunk size | dm-clone | dm-snapshot |
>>> +-------------------+----------+-------------+
>>> |        4 KB       | 1.5 msec |  10.7 msec  |
>>> |        8 KB       | 1.5 msec |   9.7 msec  |
>>> |       16 KB       | 1.5 msec |  11.9 msec  |
>>> |       32 KB       | 1.5 msec |  28.6 msec  |
>>> |       64 KB       | 1.5 msec |  27.5 msec  |
>>> |       128 KB      | 1.5 msec |  27.3 msec  |
>>> +-------------------+----------+-------------+
>>>
>>> * dm-snapshot+dm-raid has 6.5 to 19 times _higher_ read latency than
>>>    dm-clone.
>>>
>>> * For all region/chunk sizes dm-clone has minimal overhead over the HDD
>>>    device.
>>>
>>> IOPS benchmark
>>> --------------
>>>
>>> 1. Random write IOPS
>>>
>>> +-------------------+----------+-------------+
>>> | region/chunk size | dm-clone | dm-snapshot |
>>> +-------------------+----------+-------------+
>>> |        4 KB       |  62347   |     3758    |
>>> |        8 KB       |   696    |     388     |
>>> |       16 KB       |   667    |     217     |
>>> |       32 KB       |   614    |     207     |
>>> |       64 KB       |   531    |     186     |
>>> |       128 KB      |   417    |     159     |
>>> +-------------------+----------+-------------+
>>>
>>> * dm-clone achieves 1.8 to 16.6 times _more_ IOPS than
>>>    dm-snapshot+dm-raid.
>>>
>>> * For the common case of a 4 KB region/chunk size, dm-clone has minimal
>>>    overhead over the SSD device.
>>>
>>> * Even for region/chunk sizes greater than 4KB dm-clone achieves
>>>    significantly more IOPS than dm-snapshot+dm-raid.
>>>
>>> 2. Random read IOPS
>>>
>>> +-------------------+----------+-------------+
>>> | region/chunk size | dm-clone | dm-snapshot |
>>> +-------------------+----------+-------------+
>>> |        4 KB       |   767    |     680     |
>>> |        8 KB       |   714    |     677     |
>>> |       16 KB       |   715    |     338     |
>>> |       32 KB       |   717    |     338     |
>>> |       64 KB       |   720    |     338     |
>>> |       128 KB      |   724    |     339     |
>>> +-------------------+----------+-------------+
>>>
>>> * dm-clone achieves 1.1 to 2.1 times _more_ IOPS than
>>>    dm-snapshot+dm-raid.
>>>
>>> Bandwidth benchmark
>>> -------------------
>>>
>>> 1. Sequential write BW
>>>
>>> +-------------------+------------+-------------+
>>> | region/chunk size |  dm-clone  | dm-snapshot |
>>> +-------------------+------------+-------------+
>>> |        4 KB       | 389.4 MB/s |  135.3 MB/s |
>>> |        8 KB       | 390.5 MB/s |  231.7 MB/s |
>>> |       16 KB       | 390.5 MB/s |  213.1 MB/s |
>>> |       32 KB       | 390.4 MB/s |  214.0 MB/s |
>>> |       64 KB       | 390.3 MB/s |  214.0 MB/s |
>>> |       128 KB      | 390.5 MB/s |  211.3 MB/s |
>>> +-------------------+------------+-------------+
>>>
>>> * dm-clone achieves 1.7 to 2.9 times more write BW than
>>>    dm-snapshot+dm-raid.
>>>
>>> * For all region/chunk sizes dm-clone achieves the same write BW as the
>>>    SSD device.
>>>
>>> 2. Sequential read BW
>>>
>>> +-------------------+------------+-------------+
>>> | region/chunk size |  dm-clone  | dm-snapshot |
>>> +-------------------+------------+-------------+
>>> |        4 KB       | 442.8 MB/s |  217.3 MB/s |
>>> |        8 KB       | 443.8 MB/s |  288.8 MB/s |
>>> |       16 KB       | 443.8 MB/s |  275.3 MB/s |
>>> |       32 KB       | 443.8 MB/s |  276.1 MB/s |
>>> |       64 KB       | 443.6 MB/s |  276.1 MB/s |
>>> |       128 KB      | 443.6 MB/s |  275.2 MB/s |
>>> +-------------------+------------+-------------+
>>>
>>> * dm-clone achieves 1.5 to 2 times more read BW than
>>>    dm-snapshot+dm-raid.
>>>
>>> Metadata/Storage overhead
>>> =========================
>>>
>>> dm-clone had a _maximum_ metadata overhead of around 20 MB for all
>>> benchmarks; at the smallest region size of 4K, the region bitmap
>>> alone, at one bit per region of the 375GB device, accounts for
>>> roughly 11 MB of that. As dm-clone doesn't require any extra COW
>>> space for temporarily storing the written data (writes just go
>>> directly to the clone device), this is the _only_ storage overhead
>>> incurred by dm-clone, irrespective of the amount of written data.
>>>
>>> On the other hand, the COW space utilization of dm-snapshot, for the
>>> bandwidth benchmarks, varied from 11.95 GB to 20.41 GB, depending on the
>>> amount of written data.
>>>
>>> I want to emphasize that after the cloning/syncing is complete we have
>>> to merge this multi-gigabyte COW space back to the clone/destination
>>> device. This will cause _further_ performance degradation, which is
>>> _not_ reflected in the above performance measurements, but _will_ be
>>> present in real workloads, if the dm-snapshot based solution is used.
>>>
>>>
>>> To summarize, dm-clone performs _significantly_ better than a
>>> dm-snapshot based solution, on all aspects (latency, IOPS, BW), and with
>>> a _fraction_ of the storage/metadata overhead.
>>>
>>> If you have any more questions, I would be more than happy to discuss
>>> them with you.
>>>
>>> Thanks,
>>> Nikos
>>>
>>>> On 7/10/19 8:45 PM, Nikos Tsironis wrote:
>>>>> On 7/10/19 12:28 AM, Heinz Mauelshagen wrote:
>>>>>> Hi Nikos,
>>>>>>
>>>>>> what is the crucial factor your target offers vs. resynchronizing
>>>>>> such a latency-distinct 2-legged mirror with a read-write snapshot
>>>>>> (local, fast exception store) on top, tearing the mirror down,
>>>>>> keeping the local leg once fully in sync, and merging the snapshot
>>>>>> back into it?
>>>>>>
>>>>>> Heinz
>>>>>>
>>>>> Hi Heinz,
>>>>>
>>>>> The most significant benefits of dm-clone over the solution you propose
>>>>> are significantly better performance, no need for extra COW space, no
>>>>> need to merge back a snapshot, and the ability to skip syncing the
>>>>> unused space of a file system.
>>>>>
>>>>> 1. In order to ensure snapshot consistency, dm-snapshot needs to
>>>>>      commit a completed exception, before signaling the completion of the
>>>>>      write that triggered it to upper layers.
>>>>>
>>>>>      The persistent exception store commits exceptions every time a
>>>>>      metadata area is filled or when there are no more exceptions
>>>>>      in-flight. Each on-disk exception is a 16-byte old/new chunk
>>>>>      pair and a metadata area spans a single chunk, so for a 4K
>>>>>      chunk size we have 4096 / 16 = 256 exceptions per metadata
>>>>>      area, and the best case scenario is one commit per 256 writes.
>>>>>      Here I assume a write with size equal to the chunk size of
>>>>>      dm-snapshot, e.g., 4K, so there is no COW overhead, and that we
>>>>>      write to new chunks, so we need to allocate new exceptions.
>>>>>
>>>>>      Part of committing the metadata is flushing the cache of the
>>>>>      underlying device, if there is one. We have seen SSDs which can
>>>>>      sustain hundreds of thousands of random write IOPS, but they take up
>>>>>      to 8ms to flush their cache. In such a case, flushing the SSD cache
>>>>>      every few writes significantly degrades performance.
>>>>>
>>>>>      Moreover, dm-snapshot forces exceptions to complete in the order they
>>>>>      were allocated, to avoid a snapshot space leak on crash (commit
>>>>>      230c83afdd9cd). This adds further latency to exception completions
>>>>>      and thus to user write completions.
>>>>>
>>>>>      On the other hand, when cloning a device we don't need to be so
>>>>>      strict and can rely on committing the metadata every time a FLUSH or
>>>>>      FUA bio is written, or periodically, like dm-thin and dm-cache do.
>>>>>
>>>>>      dm-clone does exactly that. When a region/chunk is cloned or
>>>>>      over-written by a write, we just set a bit in the relevant in-core
>>>>>      bitmap. The metadata are committed once every second or when we
>>>>>      receive a FLUSH or FUA bio.
>>>>>
>>>>>      This improves performance significantly and results in increased IOPS
>>>>>      and reduced latency, especially in cases where flushing the disk
>>>>>      cache is very expensive.
>>>>>
>>>>> 2. For large devices, e.g., multi-terabyte disks, resynchronizing the
>>>>>      local leg can take a lot of time. If the application running over the
>>>>>      local device is write-heavy, dm-snapshot will end up allocating a
>>>>>      large number of exceptions. This increases the number of hash table
>>>>>      collisions and thus increases the time we need to do a hash table
>>>>>      lookup.
>>>>>
>>>>>      dm-snapshot needs to look up the exception hash tables in order to
>>>>>      service an I/O, so this increases latency and degrades performance.
>>>>>
>>>>>      On the other hand, dm-clone just tests a bit to see whether a
>>>>>      region has been cloned, and decides what to do based on that
>>>>>      test.
>>>>>
>>>>> 3. With dm-clone there is no need to reserve extra COW space for
>>>>>      temporarily storing the written data, while the clone device is
>>>>>      syncing. Nor would one need to worry about monitoring and expanding
>>>>>      the COW device to prevent it from filling up.
>>>>>
>>>>> 4. With dm-clone there is no need to merge back potentially several
>>>>>      gigabytes once cloning/syncing completes. We also avoid the relevant
>>>>>      performance degradation incurred by the merging process. Writes just
>>>>>      go directly to the clone device.
>>>>>
>>>>> 5. dm-clone implements support for discards, so it can skip
>>>>>      cloning/syncing the relevant regions. In the case of a large block
>>>>>      device which contains a filesystem with empty space, e.g. a 2TB
>>>>>      device containing 500GB of useful data in a filesystem, this can
>>>>>      significantly reduce the time needed to sync/clone.
>>>>>
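>>>>> As a sketch of that last point, once a filesystem on the clone
>>>>> device is mounted, running something like
>>>>>
>>>>>     fstrim -v /mnt/clone
>>>>>
>>>>> (mount point illustrative) discards the unused space, so the
>>>>> corresponding regions are skipped by the background copying.
>>>>>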
>>>>> This was a rather long email, but I hope it makes clearer the
>>>>> significant benefits of dm-clone over using dm-snapshot, and our
>>>>> rationale behind the decision to implement a new target.
>>>>>
>>>>> I would be more than happy to continue the conversation and focus on any
>>>>> other questions you may have.
>>>>>
>>>>> Thanks,
>>>>> Nikos
>>> --
>>> dm-devel mailing list
>>> dm-devel@redhat.com
>>> https://www.redhat.com/mailman/listinfo/dm-devel
>>

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [RFC PATCH 1/1] dm: add clone target
  2019-08-27 14:09               ` Nikos Tsironis
@ 2019-08-27 15:34                 ` Mike Snitzer
  2019-08-28 14:23                   ` Nikos Tsironis
  0 siblings, 1 reply; 14+ messages in thread
From: Mike Snitzer @ 2019-08-27 15:34 UTC (permalink / raw)
  To: Nikos Tsironis; +Cc: vkoukis, Heinz Mauelshagen, dm-devel, agk, iliastsi

On Tue, Aug 27 2019 at 10:09am -0400,
Nikos Tsironis <ntsironis@arrikto.com> wrote:

> Hello,
> 
> This is a kind reminder for this patch set. I'm bumping this thread to
> solicit your feedback.
> 
> Following the discussion with Heinz, I have provided extensive
> benchmarks that show dm-clone's significant performance increase
> compared to a dm-snapshot/dm-raid1 stack.
> 
> How can we move forward with the review of dm-clone, so it can
> eventually be merged upstream?
> 
> Looking forward to your feedback,

I actually pulled it into my local dm-5.4 branch yesterday and have
started reviewing.  Firrst pass it looks like you've got solid code; a
lot of familiar code patterns too (barrowed from thinp, etc).

But the first thing that is tripping me up is the name "dm-clone"
considering how cloning is so fundamental to all DM.  The second term
that is just awkward is "hydration".  But that is just my initial
thought.  I'll need the rest of the week to really dig in and have more
constructive feedback for you.

Thanks for the ping; wasn't needed in this instance but it never hurts.

Mike

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [RFC PATCH 1/1] dm: add clone target
  2019-08-27 15:34                 ` Mike Snitzer
@ 2019-08-28 14:23                   ` Nikos Tsironis
  0 siblings, 0 replies; 14+ messages in thread
From: Nikos Tsironis @ 2019-08-28 14:23 UTC (permalink / raw)
  To: Mike Snitzer; +Cc: vkoukis, Heinz Mauelshagen, dm-devel, agk, iliastsi

On 8/27/19 6:34 PM, Mike Snitzer wrote:
> On Tue, Aug 27 2019 at 10:09am -0400,
> Nikos Tsironis <ntsironis@arrikto.com> wrote:
> 
>> Hello,
>>
>> This is a kind reminder for this patch set. I'm bumping this thread to
>> solicit your feedback.
>>
>> Following the discussion with Heinz, I have provided extensive
>> benchmarks that show dm-clone's significant performance increase
>> compared to a dm-snapshot/dm-raid1 stack.
>>
>> How can we move forward with the review of dm-clone, so it can
>> eventually be merged upstream?
>>
>> Looking forward to your feedback,
> 
> I actually pulled it into my local dm-5.4 branch yesterday and have
> started reviewing.  First pass it looks like you've got solid code; a
> lot of familiar code patterns too (borrowed from thinp, etc).
> 
> But the first thing that is tripping me up is the name "dm-clone"
> considering how cloning is so fundamental to all DM.  The second term
> that is just awkward is "hydration".  But that is just my initial
> thought.  I'll need the rest of the week to really dig in and have more
> constructive feedback for you.
> 

Hi Mike,

Thank you for your prompt response and also thank you in advance for all
the effort you will put in reviewing dm-clone.

Looking forward to your feedback,

Nikos

> Thanks for the ping; wasn't needed in this instance but it never hurts.
> 
> Mike
> 

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [RFC PATCH 1/1] dm: add clone target
  2019-07-09 14:15 ` [RFC PATCH 1/1] " Nikos Tsironis
  2019-07-09 21:28   ` Heinz Mauelshagen
@ 2019-08-29 16:19   ` Mike Snitzer
  2019-08-31  9:55     ` Nikos Tsironis
  1 sibling, 1 reply; 14+ messages in thread
From: Mike Snitzer @ 2019-08-29 16:19 UTC (permalink / raw)
  To: Nikos Tsironis; +Cc: vkoukis, dm-devel, agk, iliastsi

On Tue, Jul 09 2019 at 10:15am -0400,
Nikos Tsironis <ntsironis@arrikto.com> wrote:

> Add the dm-clone target, which allows cloning of arbitrary block
> devices.
> 
> dm-clone produces a one-to-one copy of an existing, read-only device
> (origin) into a writable device (clone): It presents a virtual block
> device which makes all data appear immediately, and redirects reads and
> writes accordingly.
> 
> The main use case of dm-clone is to clone a potentially remote,
> high-latency, read-only, archival-type block device into a writable,
> fast, primary-type device for fast, low-latency I/O. The cloned device
> is visible/mountable immediately and the copy of the origin device to
> the clone device happens in the background, in parallel with user I/O.
> 
> When the cloning completes, the dm-clone table can be removed altogether
> and be replaced, e.g., by a linear table, mapping directly to the clone
> device.
> 
> For further information and examples of how to use dm-clone, please read
> Documentation/device-mapper/dm-clone.rst
> 
> Suggested-by: Vangelis Koukis <vkoukis@arrikto.com>
> Co-developed-by: Ilias Tsitsimpis <iliastsi@arrikto.com>
> Signed-off-by: Ilias Tsitsimpis <iliastsi@arrikto.com>
> Signed-off-by: Nikos Tsironis <ntsironis@arrikto.com>
> ---
>  Documentation/device-mapper/dm-clone.rst |  334 +++++
>  drivers/md/Kconfig                       |   13 +
>  drivers/md/Makefile                      |    2 +
>  drivers/md/dm-clone-metadata.c           |  991 +++++++++++++
>  drivers/md/dm-clone-metadata.h           |  158 +++
>  drivers/md/dm-clone-target.c             | 2244 ++++++++++++++++++++++++++++++
>  6 files changed, 3742 insertions(+)
>  create mode 100644 Documentation/device-mapper/dm-clone.rst
>  create mode 100644 drivers/md/dm-clone-metadata.c
>  create mode 100644 drivers/md/dm-clone-metadata.h
>  create mode 100644 drivers/md/dm-clone-target.c
> 
> diff --git a/Documentation/device-mapper/dm-clone.rst b/Documentation/device-mapper/dm-clone.rst
> new file mode 100644
> index 000000000000..948b7ce31ce3
> --- /dev/null
> +++ b/Documentation/device-mapper/dm-clone.rst
> @@ -0,0 +1,334 @@
> +.. SPDX-License-Identifier: GPL-2.0-only
> +
> +========
> +dm-clone
> +========
> +
> +Introduction
> +============
> +
> +dm-clone is a device mapper target which produces a one-to-one copy of an
> +existing, read-only device (origin) into a writable device (clone): It presents
> +a virtual block device which makes all data appear immediately, and redirects
> +reads and writes accordingly.
> +
> +The main use case of dm-clone is to clone a potentially remote, high-latency,
> +read-only, archival-type block device into a writable, fast, primary-type device
> +for fast, low-latency I/O. The cloned device is visible/mountable immediately
> +and the copy of the origin device to the clone device happens in the background,
> +in parallel with user I/O.
> +
> +For example, one could restore an application backup from a read-only copy,
> +accessible through a network storage protocol (NBD, Fibre Channel, iSCSI, AoE,
> +etc.), into a local SSD or NVMe device, and start using the device immediately,
> +without waiting for the restore to complete.
> +
> +When the cloning completes, the dm-clone table can be removed altogether and be
> +replaced, e.g., by a linear table, mapping directly to the clone device.
> +
> +The dm-clone target reuses the metadata library used by the thin-provisioning
> +target.
> +
> +Glossary
> +========
> +
> +   Region
> +     A fixed sized block. The unit of hydration.
> +
> +   Hydration
> +     The process of filling a region of the clone device with data from the same
> +     region of the origin device, i.e., copying the region from the origin to
> +     the clone device.
> +
> +Once a region gets hydrated we redirect all I/O regarding it to the clone
> +device.

There is a lot of awkward jargon that you're mixing into this target.

Why "region" and not "block"?  I can let "region" go but please be
consistent (don't fall back to calling a region a block anywhere).

Why "hydration"?  Just needed to call it _something_?  I can let it go
as long as it is a construct internal to the target's implementation.  I
see no point making consumers of this target, that copies data from a
source to destination, have to call something "hydration".

And while we're at it why "origin" device instead of "source"?
Why "clone" device instead of "dest" or "destination"?

I can give the target name "dm-clone" a pass.. but dm-copy is less
opaque IMHO.. I could go either way on those.


> +Background Hydration
> +--------------------
> +
> +dm-clone copies continuously from the origin to the clone device, until all of
> +the device has been copied.
> +
> +Copying data from the origin to the clone device uses bandwidth. The user can
> +set a throttle to prevent more than a certain amount of copying occurring at any
> +one time. Moreover, dm-clone takes into account user I/O traffic going to the
> +devices and pauses the background hydration when there is I/O in-flight.
> +
> +A message `hydration_threshold <#sectors>` can be used to set the maximum number
> +of sectors being copied, the default being 2048 sectors (1MB).

Think this should really be expressed in multiples of a region, e.g.:
copy_threshold <# regions> (or clone_threshold)

> +dm-clone employs dm-kcopyd for copying portions of the origin device to the
> +clone device. By default, we issue copy requests of size equal to the region
> +size. A message `hydration_block_size <#sectors>` can be used to tune the size
> +of these copy requests. Increasing the hydration block size results in dm-clone
> +trying to batch together contiguous regions, so we copy the data in blocks of
> +this size.

It is awkward to have 'hydration_block_size' vs target ctr
provided "region size".  copy_batch_size <# regions>?  (or
clone_batch_size)?
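
(For context, with the RFC's current naming these knobs would be tuned
with messages along the lines of the following sketch, assuming a map
named "clone" and with both values in sectors:

    dmsetup message clone 0 hydration_threshold 2048
    dmsetup message clone 0 hydration_block_size 64

Expressing both in regions would read more naturally.)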

Please take care of the external facing documentation to not use
"hydration".  Of all the naming I dislike it the most.. sorry.

Also, please fold the following patch in before making any edits to the
.c files for v2.

This review pass is the most trivial of the high level review, I'll be
drilling in on other aspects of the implementation now.  But I suspect
you've done a solid job with those details (based on what I've seen so
far).

Thanks,
Mike

From 5cc5479c68f87876d7aaf796d611a69d8e645618 Mon Sep 17 00:00:00 2001
From: Mike Snitzer <snitzer@redhat.com>
Date: Tue, 27 Aug 2019 17:00:24 -0400
Subject: [PATCH] dm clone: remove needless empty newlines

Signed-off-by: Mike Snitzer <snitzer@redhat.com>
---
 drivers/md/dm-clone-metadata.c | 35 +++---------------------
 drivers/md/dm-clone-target.c   | 61 +++---------------------------------------
 2 files changed, 7 insertions(+), 89 deletions(-)

diff --git a/drivers/md/dm-clone-metadata.c b/drivers/md/dm-clone-metadata.c
index db2f86d8356b..77480e94532b 100644
--- a/drivers/md/dm-clone-metadata.c
+++ b/drivers/md/dm-clone-metadata.c
@@ -184,7 +184,6 @@ static int sb_check(struct dm_block_validator *v, struct dm_block *b,
 
 	csum = dm_bm_checksum(&sb->flags, sb_block_size - sizeof(__le32),
 			      SUPERBLOCK_CSUM_XOR);
-
 	if (sb->csum != cpu_to_le32(csum)) {
 		DMERR("Superblock check failed: checksum %u, expected %u",
 		      csum, le32_to_cpu(sb->csum));
@@ -193,7 +192,6 @@ static int sb_check(struct dm_block_validator *v, struct dm_block *b,
 
 	/* Check metadata version */
 	metadata_version = le32_to_cpu(sb->version);
-
 	if (metadata_version < DM_CLONE_MIN_METADATA_VERSION ||
 	    metadata_version > DM_CLONE_MAX_METADATA_VERSION) {
 		DMERR("Clone metadata version %u found, but only versions between %u and %u supported.",
@@ -227,7 +225,6 @@ static int __superblock_all_zeroes(struct dm_block_manager *bm, bool *formatted)
 	 * zeroes.
 	 */
 	r = dm_bm_read_lock(bm, SUPERBLOCK_LOCATION, NULL, &sblock);
-
 	if (r) {
 		DMERR("Failed to read_lock superblock");
 		return r;
@@ -280,7 +277,6 @@ static int __copy_sm_root(struct dm_clone_metadata *md)
 	size_t root_size;
 
 	r = dm_sm_root_size(md->sm, &root_size);
-
 	if (r)
 		return r;
 
@@ -357,7 +353,6 @@ static int __format_metadata(struct dm_clone_metadata *md)
 	struct superblock_disk *sb;
 
 	r = dm_tm_create_with_sm(md->bm, SUPERBLOCK_LOCATION, &md->tm, &md->sm);
-
 	if (r) {
 		DMERR("Failed to create transaction manager");
 		return r;
@@ -366,7 +361,6 @@ static int __format_metadata(struct dm_clone_metadata *md)
 	dm_disk_bitset_init(md->tm, &md->bitset_info);
 
 	r = dm_bitset_empty(&md->bitset_info, &md->bitset_root);
-
 	if (r) {
 		DMERR("Failed to create empty on-disk bitset");
 		goto err_with_tm;
@@ -374,7 +368,6 @@ static int __format_metadata(struct dm_clone_metadata *md)
 
 	r = dm_bitset_resize(&md->bitset_info, md->bitset_root, 0,
 			     md->nr_regions, false, &md->bitset_root);
-
 	if (r) {
 		DMERR("Failed to resize on-disk bitset to %lu entries", md->nr_regions);
 		goto err_with_tm;
@@ -382,21 +375,18 @@ static int __format_metadata(struct dm_clone_metadata *md)
 
 	/* Flush to disk all blocks, except the superblock */
 	r = dm_tm_pre_commit(md->tm);
-
 	if (r) {
 		DMERR("dm_tm_pre_commit failed");
 		goto err_with_tm;
 	}
 
 	r = __copy_sm_root(md);
-
 	if (r) {
 		DMERR("__copy_sm_root failed");
 		goto err_with_tm;
 	}
 
 	r = superblock_write_lock_zero(md, &sblock);
-
 	if (r) {
 		DMERR("Failed to write_lock superblock");
 		goto err_with_tm;
@@ -405,7 +395,6 @@ static int __format_metadata(struct dm_clone_metadata *md)
 	sb = dm_block_data(sblock);
 	__prepare_superblock(md, sb);
 	r = dm_tm_commit(md->tm, sblock);
-
 	if (r) {
 		DMERR("Failed to commit superblock");
 		goto err_with_tm;
@@ -426,7 +415,6 @@ static int __open_or_format_metadata(struct dm_clone_metadata *md, bool may_form
 	bool formatted = false;
 
 	r = __superblock_all_zeroes(md->bm, &formatted);
-
 	if (r)
 		return r;
 
@@ -445,14 +433,12 @@ static int __create_persistent_data_structures(struct dm_clone_metadata *md,
 	md->bm = dm_block_manager_create(md->bdev,
 					 DM_CLONE_METADATA_BLOCK_SIZE << SECTOR_SHIFT,
 					 DM_CLONE_MAX_CONCURRENT_LOCKS);
-
 	if (IS_ERR(md->bm)) {
 		DMERR("Failed to create block manager");
 		return PTR_ERR(md->bm);
 	}
 
 	r = __open_or_format_metadata(md, may_format_device);
-
 	if (r)
 		dm_block_manager_destroy(md->bm);
 
@@ -511,12 +497,10 @@ int __load_bitset_in_core(struct dm_clone_metadata *md)
 
 	/* Flush bitset cache */
 	r = dm_bitset_flush(&md->bitset_info, md->bitset_root, &md->bitset_root);
-
 	if (r)
 		return r;
 
 	r = dm_bitset_cursor_begin(&md->bitset_info, md->bitset_root, md->nr_regions, &c);
-
 	if (r)
 		return r;
 
@@ -548,7 +532,6 @@ struct dm_clone_metadata *dm_clone_metadata_open(struct block_device *bdev,
 	struct dm_clone_metadata *md;
 
 	md = kzalloc(sizeof(*md), GFP_KERNEL);
-
 	if (!md) {
 		DMERR("Failed to allocate memory for dm-clone metadata");
 		return ERR_PTR(-ENOMEM);
@@ -567,7 +550,6 @@ struct dm_clone_metadata *dm_clone_metadata_open(struct block_device *bdev,
 	md->hydration_done = false;
 
 	md->region_map = vmalloc(bitmap_size(md->nr_regions));
-
 	if (!md->region_map) {
 		DMERR("Failed to allocate memory for region bitmap");
 		r = -ENOMEM;
@@ -575,19 +557,16 @@ struct dm_clone_metadata *dm_clone_metadata_open(struct block_device *bdev,
 	}
 
 	r = __create_persistent_data_structures(md, true);
-
 	if (r)
 		goto out_with_region_map;
 
 	r = __load_bitset_in_core(md);
-
 	if (r) {
 		DMERR("Failed to load on-disk region map");
 		goto out_with_pds;
 	}
 
 	r = dirty_map_init(md);
-
 	if (r)
 		goto out_with_pds;
 
@@ -682,7 +661,6 @@ int __metadata_commit(struct dm_clone_metadata *md)
 
 	/* Flush bitset cache */
 	r = dm_bitset_flush(&md->bitset_info, md->bitset_root, &md->bitset_root);
-
 	if (r) {
 		DMERR("dm_bitset_flush failed");
 		return r;
@@ -690,7 +668,6 @@ int __metadata_commit(struct dm_clone_metadata *md)
 
 	/* Flush to disk all blocks, except the superblock */
 	r = dm_tm_pre_commit(md->tm);
-
 	if (r) {
 		DMERR("dm_tm_pre_commit failed");
 		return r;
@@ -698,7 +675,6 @@ int __metadata_commit(struct dm_clone_metadata *md)
 
 	/* Save the space map root in md->metadata_space_map_root */
 	r = __copy_sm_root(md);
-
 	if (r) {
 		DMERR("__copy_sm_root failed");
 		return r;
@@ -706,7 +682,6 @@ int __metadata_commit(struct dm_clone_metadata *md)
 
 	/* Lock the superblock */
 	r = superblock_write_lock_zero(md, &sblock);
-
 	if (r) {
 		DMERR("Failed to write_lock superblock");
 		return r;
@@ -718,7 +693,6 @@ int __metadata_commit(struct dm_clone_metadata *md)
 
 	/* Unlock superblock and commit it to disk */
 	r = dm_tm_commit(md->tm, sblock);
-
 	if (r) {
 		DMERR("Failed to commit superblock");
 		return r;
@@ -859,7 +833,6 @@ int dm_clone_cond_set_range(struct dm_clone_metadata *md, unsigned long start,
 			dmap->changed = 1;
 		}
 	}
-
 out:
 	spin_unlock_irqrestore(&md->bitmap_lock, flags);
 
@@ -886,7 +859,6 @@ int dm_clone_reload_in_core_bitset(struct dm_clone_metadata *md)
 		goto out;
 
 	r = __load_bitset_in_core(md);
-
 out:
 	up_write(&md->lock);
 
@@ -917,11 +889,10 @@ int dm_clone_metadata_abort(struct dm_clone_metadata *md)
 	__destroy_persistent_data_structures(md);
 
 	r = __create_persistent_data_structures(md, false);
-
-	/* If something went wrong we can neither write nor read the metadata */
-	if (r)
+	if (r) {
+		/* If something went wrong we can neither write nor read the metadata */
 		md->fail_io = true;
-
+	}
 out:
 	up_write(&md->lock);
 
diff --git a/drivers/md/dm-clone-target.c b/drivers/md/dm-clone-target.c
index 2ce7524616f8..d10b149b54f5 100644
--- a/drivers/md/dm-clone-target.c
+++ b/drivers/md/dm-clone-target.c
@@ -459,11 +459,9 @@ static void complete_discard_bio(struct clone *clone, struct bio *bio, bool succ
 		bio_region_range(clone, bio, &rs, &re);
 		trim_bio(bio, rs << clone->region_shift,
 			 (re - rs) << clone->region_shift);
-
 		generic_make_request(bio);
-	} else {
+	} else
 		bio_endio(bio);
-	}
 }
 
 static void process_discard_bio(struct clone *clone, struct bio *bio)
@@ -561,7 +559,6 @@ static int hash_table_init(struct clone *clone)
 	sz = 1 << HASH_TABLE_BITS;
 
 	clone->ht = vmalloc(sz * sizeof(struct hash_table_bucket));
-
 	if (!clone->ht)
 		return -ENOMEM;
 
@@ -629,7 +626,6 @@ __find_or_insert_region_hydration(struct hash_table_bucket *bucket,
 	struct dm_clone_region_hydration *hd2;
 
 	hd2 = __hash_find(bucket, hd->region_nr);
-
 	if (hd2)
 		return hd2;
 
@@ -650,7 +646,6 @@ static struct dm_clone_region_hydration *alloc_hydration(struct clone *clone)
 	 * This might block but it can't fail.
 	 */
 	hd = mempool_alloc(&clone->hydration_pool, GFP_NOIO);
-
 	hd->clone = clone;
 
 	return hd;
@@ -665,11 +660,8 @@ static inline void free_hydration(struct dm_clone_region_hydration *hd)
 static void hydration_init(struct dm_clone_region_hydration *hd, unsigned long region_nr)
 {
 	hd->region_nr = region_nr;
-
 	hd->overwrite_bio = NULL;
-
 	bio_list_init(&hd->deferred_bios);
-
 	hd->status = 0;
 
 	INIT_LIST_HEAD(&hd->list);
@@ -867,16 +859,15 @@ static void hydrate_bio_region(struct clone *clone, struct bio *bio)
 	bucket_lock_irqsave(bucket, flags);
 
 	hd = __hash_find(bucket, region_nr);
-
-	/* Someone else is hydrating the region */
 	if (hd) {
+		/* Someone else is hydrating the region */
 		bio_list_add(&hd->deferred_bios, bio);
 		bucket_unlock_irqrestore(bucket, flags);
 		return;
 	}
 
-	/* The region has been hydrated */
 	if (dm_clone_is_region_hydrated(clone->md, region_nr)) {
+		/* The region has been hydrated */
 		bucket_unlock_irqrestore(bucket, flags);
 		issue_bio(clone, bio);
 		return;
@@ -902,9 +893,8 @@ static void hydrate_bio_region(struct clone *clone, struct bio *bio)
 	}
 
 	hd2 = __find_or_insert_region_hydration(bucket, hd);
-
-	/* Someone else started the region's hydration. */
 	if (hd2 != hd) {
+		/* Someone else started the region's hydration. */
 		bio_list_add(&hd2->deferred_bios, bio);
 		bucket_unlock_irqrestore(bucket, flags);
 		free_hydration(hd);
@@ -937,7 +927,6 @@ static void hydrate_bio_region(struct clone *clone, struct bio *bio)
 	} else {
 		bio_list_add(&hd->deferred_bios, bio);
 		bucket_unlock_irqrestore(bucket, flags);
-
 		hydration_copy(hd, 1);
 	}
 }
@@ -1138,7 +1127,6 @@ static int commit_metadata(struct clone *clone)
 
 	if (dm_clone_is_hydration_done(clone->md))
 		dm_table_event(clone->ti->table);
-
 out:
 	mutex_unlock(&clone->commit_lock);
 
@@ -1177,13 +1165,10 @@ static void process_deferred_discards(struct clone *clone)
 		if (unlikely(r))
 			break;
 	}
-
 out:
 	blk_start_plug(&plug);
-
 	while ((bio = bio_list_pop(&discards)))
 		complete_discard_bio(clone, bio, r == 0);
-
 	blk_finish_plug(&plug);
 }
 
@@ -1530,7 +1515,6 @@ static int parse_feature_args(struct dm_arg_set *as, struct clone *clone)
 		return 0;
 
 	r = dm_read_arg_group(&args, as, &argc, &ti->error);
-
 	if (r)
 		return r;
 
@@ -1575,7 +1559,6 @@ static int parse_core_args(struct dm_arg_set *as, struct clone *clone)
 		return 0;
 
 	r = dm_read_arg_group(&args, as, &argc, &ti->error);
-
 	if (r)
 		return r;
 
@@ -1620,7 +1603,6 @@ static int parse_region_size(struct clone *clone, struct dm_arg_set *as, char **
 	arg.error = "Invalid region size";
 
 	r = dm_read_arg(&arg, as, &region_size, error);
-
 	if (r)
 		return r;
 
@@ -1664,14 +1646,12 @@ static int parse_metadata_dev(struct clone *clone, struct dm_arg_set *as, char *
 
 	r = dm_get_device(clone->ti, dm_shift_arg(as), FMODE_READ | FMODE_WRITE,
 			  &clone->metadata_dev);
-
 	if (r) {
 		*error = "Error opening metadata device";
 		return r;
 	}
 
 	metadata_dev_size = get_dev_size(clone->metadata_dev);
-
 	if (metadata_dev_size > DM_CLONE_METADATA_MAX_SECTORS_WARNING)
 		DMWARN("Metadata device %s is larger than %u sectors: excess space will not be used.",
 		       bdevname(clone->metadata_dev->bdev, b), DM_CLONE_METADATA_MAX_SECTORS);
@@ -1686,17 +1666,14 @@ static int parse_clone_dev(struct clone *clone, struct dm_arg_set *as, char **er
 
 	r = dm_get_device(clone->ti, dm_shift_arg(as), FMODE_READ | FMODE_WRITE,
 			  &clone->clone_dev);
-
 	if (r) {
 		*error = "Error opening clone device";
 		return r;
 	}
 
 	clone_dev_size = get_dev_size(clone->clone_dev);
-
 	if (clone_dev_size < clone->ti->len) {
 		dm_put_device(clone->ti, clone->clone_dev);
-
 		*error = "Device size larger than clone device";
 		return -EINVAL;
 	}
@@ -1711,17 +1688,14 @@ static int parse_origin_dev(struct clone *clone, struct dm_arg_set *as, char **e
 
 	r = dm_get_device(clone->ti, dm_shift_arg(as), FMODE_READ,
 			  &clone->origin_dev);
-
 	if (r) {
 		*error = "Error opening origin device";
 		return r;
 	}
 
 	origin_dev_size = get_dev_size(clone->origin_dev);
-
 	if (origin_dev_size < clone->ti->len) {
 		dm_put_device(clone->ti, clone->origin_dev);
-
 		*error = "Device size larger than origin device";
 		return -EINVAL;
 	}
@@ -1735,7 +1709,6 @@ static int copy_ctr_args(struct clone *clone, int argc, const char **argv, char
 	const char **copy;
 
 	copy = kcalloc(argc, sizeof(*copy), GFP_KERNEL);
-
 	if (!copy)
 		goto error;
 
@@ -1752,7 +1725,6 @@ static int copy_ctr_args(struct clone *clone, int argc, const char **argv, char
 
 	clone->nr_ctr_args = argc;
 	clone->ctr_args = copy;
-
 	return 0;
 
 error:
@@ -1775,7 +1747,6 @@ static int clone_ctr(struct dm_target *ti, unsigned int argc, char **argv)
 	as.argv = argv;
 
 	clone = kzalloc(sizeof(*clone), GFP_KERNEL);
-
 	if (!clone) {
 		ti->error = "Failed to allocate clone structure";
 		return -ENOMEM;
@@ -1789,22 +1760,18 @@ static int clone_ctr(struct dm_target *ti, unsigned int argc, char **argv)
 	__set_bit(DM_CLONE_DISCARD_PASSDOWN, &clone->flags);
 
 	r = parse_metadata_dev(clone, &as, &ti->error);
-
 	if (r)
 		goto out_with_clone;
 
 	r = parse_clone_dev(clone, &as, &ti->error);
-
 	if (r)
 		goto out_with_meta_dev;
 
 	r = parse_origin_dev(clone, &as, &ti->error);
-
 	if (r)
 		goto out_with_clone_dev;
 
 	r = parse_region_size(clone, &as, &ti->error);
-
 	if (r)
 		goto out_with_origin_dev;
 
@@ -1812,31 +1779,26 @@ static int clone_ctr(struct dm_target *ti, unsigned int argc, char **argv)
 	clone->nr_regions = dm_sector_div_up(ti->len, clone->region_size);
 
 	r = validate_nr_regions(clone->nr_regions, &ti->error);
-
 	if (r)
 		goto out_with_origin_dev;
 
 	r = dm_set_target_max_io_len(ti, clone->region_size);
-
 	if (r) {
 		ti->error = "Failed to set max io len";
 		goto out_with_origin_dev;
 	}
 
 	r = parse_feature_args(&as, clone);
-
 	if (r)
 		goto out_with_origin_dev;
 
 	r = parse_core_args(&as, clone);
-
 	if (r)
 		goto out_with_origin_dev;
 
 	/* Load metadata */
 	clone->md = dm_clone_metadata_open(clone->metadata_dev->bdev, ti->len,
 					   clone->region_size);
-
 	if (IS_ERR(clone->md)) {
 		ti->error = "Failed to load metadata";
 		r = PTR_ERR(clone->md);
@@ -1855,7 +1817,6 @@ static int clone_ctr(struct dm_target *ti, unsigned int argc, char **argv)
 
 	/* Allocate hydration hash table */
 	r = hash_table_init(clone);
-
 	if (r) {
 		ti->error = "Failed to allocate hydration hash table";
 		goto out_with_metadata;
@@ -1872,7 +1833,6 @@ static int clone_ctr(struct dm_target *ti, unsigned int argc, char **argv)
 	atomic_set(&clone->hydrations_in_flight, 0);
 
 	clone->wq = alloc_workqueue("dm-" DM_MSG_PREFIX, WQ_MEM_RECLAIM, 0);
-
 	if (!clone->wq) {
 		ti->error = "Failed to allocate workqueue";
 		r = -ENOMEM;
@@ -1883,7 +1843,6 @@ static int clone_ctr(struct dm_target *ti, unsigned int argc, char **argv)
 	INIT_DELAYED_WORK(&clone->waker, do_waker);
 
 	clone->kcopyd_client = dm_kcopyd_client_create(&dm_kcopyd_throttle);
-
 	if (IS_ERR(clone->kcopyd_client)) {
 		r = PTR_ERR(clone->kcopyd_client);
 		goto out_with_wq;
@@ -1891,7 +1850,6 @@ static int clone_ctr(struct dm_target *ti, unsigned int argc, char **argv)
 
 	r = mempool_init_slab_pool(&clone->hydration_pool, MIN_HYDRATIONS,
 				   _hydration_cache);
-
 	if (r) {
 		ti->error = "Failed to create dm_clone_region_hydration memory pool";
 		goto out_with_kcopyd;
@@ -1899,7 +1857,6 @@ static int clone_ctr(struct dm_target *ti, unsigned int argc, char **argv)
 
 	/* Save a copy of the table line */
 	r = copy_ctr_args(clone, argc - 3, (const char **)argv + 3, &ti->error);
-
 	if (r)
 		goto out_with_mempool;
 
@@ -1921,28 +1878,20 @@ static int clone_ctr(struct dm_target *ti, unsigned int argc, char **argv)
 
 out_with_mempool:
 	mempool_exit(&clone->hydration_pool);
-
 out_with_kcopyd:
 	dm_kcopyd_client_destroy(clone->kcopyd_client);
-
 out_with_wq:
 	destroy_workqueue(clone->wq);
-
 out_with_ht:
 	hash_table_exit(clone);
-
 out_with_metadata:
 	dm_clone_metadata_close(clone->md);
-
 out_with_origin_dev:
 	dm_put_device(ti, clone->origin_dev);
-
 out_with_clone_dev:
 	dm_put_device(ti, clone->clone_dev);
-
 out_with_meta_dev:
 	dm_put_device(ti, clone->metadata_dev);
-
 out_with_clone:
 	kfree(clone);
 
@@ -2213,12 +2162,10 @@ static int __init dm_clone_init(void)
 	int r;
 
 	_hydration_cache = KMEM_CACHE(dm_clone_region_hydration, 0);
-
 	if (!_hydration_cache)
 		return -ENOMEM;
 
 	r = dm_register_target(&clone_target);
-
 	if (r < 0) {
 		DMERR("Failed to register clone target");
 		return r;
-- 
2.15.0

^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: [RFC PATCH 1/1] dm: add clone target
  2019-08-29 16:19   ` Mike Snitzer
@ 2019-08-31  9:55     ` Nikos Tsironis
  2019-09-04 15:01       ` Mike Snitzer
  0 siblings, 1 reply; 14+ messages in thread
From: Nikos Tsironis @ 2019-08-31  9:55 UTC (permalink / raw)
  To: Mike Snitzer; +Cc: vkoukis, dm-devel, agk, iliastsi

On 8/29/19 7:19 PM, Mike Snitzer wrote:
> On Tue, Jul 09 2019 at 10:15am -0400,
> Nikos Tsironis <ntsironis@arrikto.com> wrote:
> 
>> Add the dm-clone target, which allows cloning of arbitrary block
>> devices.
>>
>> dm-clone produces a one-to-one copy of an existing, read-only device
>> (origin) into a writable device (clone): It presents a virtual block
>> device which makes all data appear immediately, and redirects reads and
>> writes accordingly.
>>
>> The main use case of dm-clone is to clone a potentially remote,
>> high-latency, read-only, archival-type block device into a writable,
>> fast, primary-type device for fast, low-latency I/O. The cloned device
>> is visible/mountable immediately and the copy of the origin device to
>> the clone device happens in the background, in parallel with user I/O.
>>
>> When the cloning completes, the dm-clone table can be removed altogether
>> and be replaced, e.g., by a linear table, mapping directly to the clone
>> device.
>>
>> For further information and examples of how to use dm-clone, please read
>> Documentation/device-mapper/dm-clone.rst
>>
>> Suggested-by: Vangelis Koukis <vkoukis@arrikto.com>
>> Co-developed-by: Ilias Tsitsimpis <iliastsi@arrikto.com>
>> Signed-off-by: Ilias Tsitsimpis <iliastsi@arrikto.com>
>> Signed-off-by: Nikos Tsironis <ntsironis@arrikto.com>
>> ---
>>  Documentation/device-mapper/dm-clone.rst |  334 +++++
>>  drivers/md/Kconfig                       |   13 +
>>  drivers/md/Makefile                      |    2 +
>>  drivers/md/dm-clone-metadata.c           |  991 +++++++++++++
>>  drivers/md/dm-clone-metadata.h           |  158 +++
>>  drivers/md/dm-clone-target.c             | 2244 ++++++++++++++++++++++++++++++
>>  6 files changed, 3742 insertions(+)
>>  create mode 100644 Documentation/device-mapper/dm-clone.rst
>>  create mode 100644 drivers/md/dm-clone-metadata.c
>>  create mode 100644 drivers/md/dm-clone-metadata.h
>>  create mode 100644 drivers/md/dm-clone-target.c
>>
>> diff --git a/Documentation/device-mapper/dm-clone.rst b/Documentation/device-mapper/dm-clone.rst
>> new file mode 100644
>> index 000000000000..948b7ce31ce3
>> --- /dev/null
>> +++ b/Documentation/device-mapper/dm-clone.rst
>> @@ -0,0 +1,334 @@
>> +.. SPDX-License-Identifier: GPL-2.0-only
>> +
>> +========
>> +dm-clone
>> +========
>> +
>> +Introduction
>> +============
>> +
>> +dm-clone is a device mapper target which produces a one-to-one copy of an
>> +existing, read-only device (origin) into a writable device (clone): It presents
>> +a virtual block device which makes all data appear immediately, and redirects
>> +reads and writes accordingly.
>> +
>> +The main use case of dm-clone is to clone a potentially remote, high-latency,
>> +read-only, archival-type block device into a writable, fast, primary-type device
>> +for fast, low-latency I/O. The cloned device is visible/mountable immediately
>> +and the copy of the origin device to the clone device happens in the background,
>> +in parallel with user I/O.
>> +
>> +For example, one could restore an application backup from a read-only copy,
>> +accessible through a network storage protocol (NBD, Fibre Channel, iSCSI, AoE,
>> +etc.), into a local SSD or NVMe device, and start using the device immediately,
>> +without waiting for the restore to complete.
>> +
>> +When the cloning completes, the dm-clone table can be removed altogether and be
>> +replaced, e.g., by a linear table, mapping directly to the clone device.
>> +
>> +The dm-clone target reuses the metadata library used by the thin-provisioning
>> +target.
>> +
>> +Glossary
>> +========
>> +
>> +   Region
>> +     A fixed sized block. The unit of hydration.
>> +
>> +   Hydration
>> +     The process of filling a region of the clone device with data from the same
>> +     region of the origin device, i.e., copying the region from the origin to
>> +     the clone device.
>> +
>> +Once a region gets hydrated we redirect all I/O regarding it to the clone
>> +device.
> 
> There is a lot of awkward jargon that you're mixing into this target.
> 
> Why "region" and not "block"?  I can let "region" go but please be
> consistent (don't fall back to calling a region a block anywhere).
> 

I used the term "region" to avoid confusion with a device's
logical/physical block size. A "region" is the unit of copying from the
source to the destination device. dm-raid, also, uses the term "region".

But you are right that I should be consistent and never fall back to
calling it a block. I will fix this in v2.

> Why "hydration"?  Just needed to call it _something_?  I can let it go
> as long as it is a construct internal to the target's implementation.  I
> see no point making consumers of this target, that copies data from a
> source to destination, have to call something "hydration".
> 

Hydration refers to the process of filling an object, a region in the
case of dm-clone, with data from a data source, which is the source
device in our case.

Please see the below links for a more detailed definition of the term:

https://stackoverflow.com/questions/6991135/what-does-it-mean-to-hydrate-an-object/20787106#20787106
https://www.snaplogic.com/glossary/data-hydration

I think the term "hydration" is a good fit for what dm-clone is doing,
but if you insist I can change it to "background copying" both in the
user-facing documentation and internally.

Please let me know what you think.

> And while we're at it why "origin" device instead of "source"?
> Why "clone" device instead of "dest" or "destination"?
> 

You are right. The terms "source" and "destination" are better and less
confusing than "origin" and "clone". I will rename both of these to
"source" and "destination" in v2.

> I can give the target name "dm-clone" a pass.. but dm-copy is less
> opaque IMHO.. I could go either way on those.
> 

I think the term "clone" describes the functionality of the target
better than the term "copy". Even if we disable the background copying,
the target exposes a "clone" of the source device, which can be used for
I/O right away, even if no regions have been cloned/copied to the
destination device yet.

Moreover, the term "clone" better describes the intended use case of the
target, i.e., to clone a read-only snapshot to a writable block device.

> 
>> +Background Hydration
>> +--------------------
>> +
>> +dm-clone copies continuously from the origin to the clone device, until all of
>> +the device has been copied.
>> +
>> +Copying data from the origin to the clone device uses bandwidth. The user can
>> +set a throttle to prevent more than a certain amount of copying occurring at any
>> +one time. Moreover, dm-clone takes into account user I/O traffic going to the
>> +devices and pauses the background hydration when there is I/O in-flight.
>> +
>> +A message `hydration_threshold <#sectors>` can be used to set the maximum number
>> +of sectors being copied, the default being 2048 sectors (1MB).
> 
> Think this should really be expressed in multiples of a region, e.g.:
> copy_threshold <# regions> (or clone_threshold)
> 

Ack, I will fix it in v2.

>> +dm-clone employs dm-kcopyd for copying portions of the origin device to the
>> +clone device. By default, we issue copy requests of size equal to the region
>> +size. A message `hydration_block_size <#sectors>` can be used to tune the size
>> +of these copy requests. Increasing the hydration block size results in dm-clone
>> +trying to batch together contiguous regions, so we copy the data in blocks of
>> +this size.
> 
> It is awkward to have 'hydration_block_size' vs target ctr
> provided "region size".  copy_batch_size <# regions>?  (or
> clone_batch_size)?
> 

You are right, I will fix this also in v2.

> Please take care of the external facing documentation to not use
> "hydration".  Of all the naming I dislike it the most.. sorry.
> 
> Also, please fold the following patch in before making any edits to the
> .c files for v2.
> 

Yes, of course. Thank you for the patch.

Nikos

> This review pass is the most trivial of the high level review, I'll be
> drilling in on other aspects of the implementation now.  But I suspect
> you've done a solid job with those details (based on what I've seen so
> far).
> 
> Thanks,
> Mike
> 

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [RFC PATCH 1/1] dm: add clone target
  2019-08-31  9:55     ` Nikos Tsironis
@ 2019-09-04 15:01       ` Mike Snitzer
  0 siblings, 0 replies; 14+ messages in thread
From: Mike Snitzer @ 2019-09-04 15:01 UTC (permalink / raw)
  To: Nikos Tsironis; +Cc: vkoukis, dm-devel, agk, iliastsi

On Sat, Aug 31 2019 at  5:55am -0400,
Nikos Tsironis <ntsironis@arrikto.com> wrote:

> On 8/29/19 7:19 PM, Mike Snitzer wrote:
> > On Tue, Jul 09 2019 at 10:15am -0400,
> > Nikos Tsironis <ntsironis@arrikto.com> wrote:
> > 
> >> Add the dm-clone target, which allows cloning of arbitrary block
> >> devices.
> >>
> >> dm-clone produces a one-to-one copy of an existing, read-only device
> >> (origin) into a writable device (clone): It presents a virtual block
> >> device which makes all data appear immediately, and redirects reads and
> >> writes accordingly.
> >>
> >> The main use case of dm-clone is to clone a potentially remote,
> >> high-latency, read-only, archival-type block device into a writable,
> >> fast, primary-type device for fast, low-latency I/O. The cloned device
> >> is visible/mountable immediately and the copy of the origin device to
> >> the clone device happens in the background, in parallel with user I/O.
> >>
> >> When the cloning completes, the dm-clone table can be removed altogether
> >> and be replaced, e.g., by a linear table, mapping directly to the clone
> >> device.
> >>
> >> For further information and examples of how to use dm-clone, please read
> >> Documentation/device-mapper/dm-clone.rst
> >>
> >> Suggested-by: Vangelis Koukis <vkoukis@arrikto.com>
> >> Co-developed-by: Ilias Tsitsimpis <iliastsi@arrikto.com>
> >> Signed-off-by: Ilias Tsitsimpis <iliastsi@arrikto.com>
> >> Signed-off-by: Nikos Tsironis <ntsironis@arrikto.com>
> >> ---
> >>  Documentation/device-mapper/dm-clone.rst |  334 +++++
> >>  drivers/md/Kconfig                       |   13 +
> >>  drivers/md/Makefile                      |    2 +
> >>  drivers/md/dm-clone-metadata.c           |  991 +++++++++++++
> >>  drivers/md/dm-clone-metadata.h           |  158 +++
> >>  drivers/md/dm-clone-target.c             | 2244 ++++++++++++++++++++++++++++++
> >>  6 files changed, 3742 insertions(+)
> >>  create mode 100644 Documentation/device-mapper/dm-clone.rst
> >>  create mode 100644 drivers/md/dm-clone-metadata.c
> >>  create mode 100644 drivers/md/dm-clone-metadata.h
> >>  create mode 100644 drivers/md/dm-clone-target.c
> >>
> >> diff --git a/Documentation/device-mapper/dm-clone.rst b/Documentation/device-mapper/dm-clone.rst
> >> new file mode 100644
> >> index 000000000000..948b7ce31ce3
> >> --- /dev/null
> >> +++ b/Documentation/device-mapper/dm-clone.rst
> >> @@ -0,0 +1,334 @@
> >> +.. SPDX-License-Identifier: GPL-2.0-only
> >> +
> >> +========
> >> +dm-clone
> >> +========
> >> +
> >> +Introduction
> >> +============
> >> +
> >> +dm-clone is a device mapper target which produces a one-to-one copy of an
> >> +existing, read-only device (origin) into a writable device (clone): It presents
> >> +a virtual block device which makes all data appear immediately, and redirects
> >> +reads and writes accordingly.
> >> +
> >> +The main use case of dm-clone is to clone a potentially remote, high-latency,
> >> +read-only, archival-type block device into a writable, fast, primary-type device
> >> +for fast, low-latency I/O. The cloned device is visible/mountable immediately
> >> +and the copy of the origin device to the clone device happens in the background,
> >> +in parallel with user I/O.
> >> +
> >> +For example, one could restore an application backup from a read-only copy,
> >> +accessible through a network storage protocol (NBD, Fibre Channel, iSCSI, AoE,
> >> +etc.), into a local SSD or NVMe device, and start using the device immediately,
> >> +without waiting for the restore to complete.
> >> +
> >> +When the cloning completes, the dm-clone table can be removed altogether and be
> >> +replaced, e.g., by a linear table, mapping directly to the clone device.
> >> +
> >> +The dm-clone target reuses the metadata library used by the thin-provisioning
> >> +target.
> >> +
> >> +Glossary
> >> +========
> >> +
> >> +   Region
> >> +     A fixed sized block. The unit of hydration.
> >> +
> >> +   Hydration
> >> +     The process of filling a region of the clone device with data from the same
> >> +     region of the origin device, i.e., copying the region from the origin to
> >> +     the clone device.
> >> +
> >> +Once a region gets hydrated we redirect all I/O regarding it to the clone
> >> +device.
> > 
> > There is a lot of awkward jargon that you're mixing into this target.
> > 
> > Why "region" and not "block"?  I can let "region" go but please be
> > consistent (don't fall back to calling a region a block anywhere).
> > 
> 
> I used the term "region" to avoid confusion with a device's
> logical/physical block size. A "region" is the unit of copying from the
> source to the destination device. dm-raid, also, uses the term "region".
> 
> But you are right that I should be consistent and never fall back to
> calling it a block. I will fix this in v2.
> 
> > Why "hydration"?  Just needed to call it _something_?  I can let it go
> > as long as it is a construct internal to the target's implementation.  I
> > see no point making consumers of this target, that copies data from a
> > source to destination, have to call something "hydration".
> > 
> 
> Hydration refers to the process of filling an object, a region in the
> case of dm-clone, with data from a data source, which is the source
> device in our case.
> 
> Please see the below links for a more detailed definition of the term:
> 
> https://stackoverflow.com/questions/6991135/what-does-it-mean-to-hydrate-an-object/20787106#20787106
> https://www.snaplogic.com/glossary/data-hydration
> 
> I think the term "hydration" is a good fit for what dm-clone is doing,
> but if you insist I can change it to "background copying" both in the
> user-facing documentation and internally.

OK, hydration it is... ;)

> Please let me know what you think.
> 
> > And while we're at it why "origin" device instead of "source"?
> > Why "clone" device instead of "dest" or "destination"?
> > 
> 
> You are right. The terms "source" and "destination" are better and less
> confusing than "origin" and "clone". I will rename both of these to
> "source" and "destination" in v2.
> 
> > I can give the target name "dm-clone" a pass.. but dm-copy is less
> > opaque IMHO.. I could go either way on those.
> > 
> 
> I think the term "clone" describes the functionality of the target
> better than the term "copy". Even if we disable the background copying,
> the target exposes a "clone" of the source device, which can be used for
> I/O right away, even if no regions have been cloned/copied to the
> destination device yet.
> 
> Moreover, the term "clone" better describes the intended use case of the
> target, i.e., to clone a read-only snapshot to a writable block device.

Sure, I'm fine with clone.. not a big deal really.

> >> +Background Hydration
> >> +--------------------
> >> +
> >> +dm-clone copies continuously from the origin to the clone device, until all of
> >> +the device has been copied.
> >> +
> >> +Copying data from the origin to the clone device uses bandwidth. The user can
> >> +set a throttle to prevent more than a certain amount of copying occurring at any
> >> +one time. Moreover, dm-clone takes into account user I/O traffic going to the
> >> +devices and pauses the background hydration when there is I/O in-flight.
> >> +
> >> +A message `hydration_threshold <#sectors>` can be used to set the maximum number
> >> +of sectors being copied, the default being 2048 sectors (1MB).
> > 
> > Think this should really be expressed in multiples of a region, e.g.:
> > copy_threshold <# regions> (or clone_threshold)
> > 
> 
> Ack, I will fix it in v2.
> 
> >> +dm-clone employs dm-kcopyd for copying portions of the origin device to the
> >> +clone device. By default, we issue copy requests of size equal to the region
> >> +size. A message `hydration_block_size <#sectors>` can be used to tune the size
> >> +of these copy requests. Increasing the hydration block size results in dm-clone
> >> +trying to batch together contiguous regions, so we copy the data in blocks of
> >> +this size.
> > 
> > It is awkward to have 'hydration_block_size' vs target ctr
> > provided "region size".  copy_batch_size <# regions>?  (or
> > clone_batch_size)?
> > 
> 
> You are right, I will fix this also in v2.
> 
> > Please take care of the external facing documentation to not use
> > "hydration".  Of all the naming I dislike it the most.. sorry.
> > 
> > Also, please fold the following patch in before making any edits to the
> > .c files for v2.
> > 
> 
> Yes, of course. Thank you for the patch.

I look forward to your v2, thanks.

Mike

^ permalink raw reply	[flat|nested] 14+ messages in thread


Thread overview: 14+ messages
2019-07-09 14:15 [RFC PATCH 0/1] dm: add clone target Nikos Tsironis
2019-07-09 14:15 ` [RFC PATCH 1/1] " Nikos Tsironis
2019-07-09 21:28   ` Heinz Mauelshagen
2019-07-10 18:45     ` Nikos Tsironis
2019-07-17 14:41       ` Heinz Mauelshagen
2019-07-22 20:16         ` Nikos Tsironis
2019-07-29 21:20           ` Heinz Mauelshagen
2019-07-30 10:13             ` Nikos Tsironis
2019-08-27 14:09               ` Nikos Tsironis
2019-08-27 15:34                 ` Mike Snitzer
2019-08-28 14:23                   ` Nikos Tsironis
2019-08-29 16:19   ` Mike Snitzer
2019-08-31  9:55     ` Nikos Tsironis
2019-09-04 15:01       ` Mike Snitzer
