* [PATCH] dm: verity target
@ 2012-03-02  0:33 Mandeep Singh Baines
  2012-03-02 16:08 ` Mandeep Singh Baines
  2012-03-04 19:18 ` [PATCH] dm: remake of the " Mikulas Patocka
  0 siblings, 2 replies; 34+ messages in thread
From: Mandeep Singh Baines @ 2012-03-02  0:33 UTC (permalink / raw)
  To: linux-kernel, dm-devel, Alasdair G Kergon
  Cc: Mandeep Singh Baines, Will Drewry, Elly Jones, Milan Broz,
	Olof Johansson, Steffen Klassert, Andrew Morton, Mikulas Patocka

The verity target provides transparent integrity checking of block devices
using a cryptographic digest.

dm-verity is meant to be set up as part of a verified boot path.  This
may be anything ranging from a boot using tboot or trustedgrub to just
booting from a known-good device (like a USB drive or CD).

dm-verity is part of ChromeOS's verified boot path. It is used to verify
the integrity of the root filesystem on boot. The root filesystem is
mounted on a dm-verity partition which transparently verifies each block
with a bootloader verified hash passed into the kernel at boot.

Changes in V5:
* https://lkml.org/lkml/2012/2/29/421 (Mikulas Patocka)
  * Fixed off-by-one error.
  * Added support for filesystems bigger than 4G (bug fix).
* https://lkml.org/lkml/2012/2/29/426 (Andrew Morton)
  * Fixed checkpatch errors/warning.
  * Made code cpu-hotplug-aware.
  * Remove NULL check before calling kfree.
  * No longer checking __GFP_WAIT allocations.
  * Propagate io->error instead of always EIO.
  * Remove unneeded and undesirable casts of void.
  * Use DMERR_LIMIT on io errors to avoid spamming dmesg.
  * Flush workqueue on rmmod.
Changes in V4:
* Discussion over phone (Alasdair G Kergon)
 * copy _ioctl fix from dm-linear
 * verity_status format fixes to match dm conventions
 * s/dm-bht/verity_tree
 * put everything into dm-verity.c
 * ctr changed to dm conventions
 * use hex2bin
 * use conventional dm names for functions
  * s/dm_//
  * for example: verity_ctr versus dm_verity_ctr
 * use per_cpu API
Changes in V3:
* Discussion over irc (Alasdair G Kergon)
  * Implement ioctl hook
Changes in V2:
* https://lkml.org/lkml/2011/11/10/85 (Steffen Klassert)
  * Use shash API instead of older hash API

Signed-off-by: Will Drewry <wad@chromium.org>
Signed-off-by: Elly Jones <ellyjones@chromium.org>
Signed-off-by: Mandeep Singh Baines <msb@chromium.org>
Cc: Alasdair G Kergon <agk@redhat.com>
Cc: Milan Broz <mbroz@redhat.com>
Cc: Olof Johansson <olofj@chromium.org>
Cc: Steffen Klassert <steffen.klassert@secunet.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Mikulas Patocka <mpatocka@redhat.com>
Cc: dm-devel@redhat.com
---
 Documentation/device-mapper/verity.txt |  149 ++++
 drivers/md/Kconfig                     |   16 +
 drivers/md/Makefile                    |    1 +
 drivers/md/dm-verity.c                 | 1367 ++++++++++++++++++++++++++++++++
 4 files changed, 1533 insertions(+), 0 deletions(-)
 create mode 100644 Documentation/device-mapper/verity.txt
 create mode 100644 drivers/md/dm-verity.c

diff --git a/Documentation/device-mapper/verity.txt b/Documentation/device-mapper/verity.txt
new file mode 100644
index 0000000..b631f12
--- /dev/null
+++ b/Documentation/device-mapper/verity.txt
@@ -0,0 +1,149 @@
+dm-verity
+==========
+
+Device-Mapper's "verity" target provides transparent integrity checking of
+block devices using a cryptographic digest provided by the kernel crypto API.
+This target is read-only.
+
+Parameters:
+    <version> <dev> <hash_dev> <hash_start> <block_size> <alg> <digest> <salt>
+
+<version>
+    This is the version number of the on-disk format. Currently, there is
+    only version 0.
+
+<dev>
+    This is the device that is going to be integrity checked.  It may be
+    a subset of the full device as specified to dmsetup (start sector and count).
+    It may be specified as a path, like /dev/sdaX, or a device number,
+    <major>:<minor>.
+
+<hash_dev>
+    This is the device that supplies the hash tree data.  It may be
+    specified similarly to the device path and may be the same device.  If the
+    same device is used, the hash offset should be outside of the dm-verity
+    configured device size.
+
+<hash_start>
+    This is the offset, in 512-byte sectors, from the start of hash_dev to
+    the root block of the hash tree.
+
+<block_size>
+    The size of a hash block. Also, the size of a block to be hashed.
+
+<alg>
+    The cryptographic hash algorithm used for this device.  This should
+    be the name of the algorithm, like "sha1".
+
+<digest>
+    The hexadecimal encoding of the cryptographic hash of all of the
+    neighboring nodes at the first level of the tree.  This hash should be
+    trusted, as there is no other source of authenticity beyond this point.
+
+<salt>
+    The hexadecimal encoding of the salt value.
+
+Theory of operation
+===================
+
+dm-verity is meant to be set up as part of a verified boot path.  This
+may be anything ranging from a boot using tboot or trustedgrub to just
+booting from a known-good device (like a USB drive or CD).
+
+When a dm-verity device is configured, it is expected that the caller
+has been authenticated in some way (cryptographic signatures, etc).
+After instantiation, all hashes will be verified on-demand during
+disk access.  If they cannot be verified up to the root node of the
+tree, the root hash, then the I/O will fail.  This should identify
+tampering with any data on the device, including the hash data.
+
+Cryptographic hashes are used to assert the integrity of the device on a
+per-block basis.  This allows for a lightweight hash computation on first read
+into the page cache.  Block hashes are stored linearly, aligned to the
+nearest page-sized block.
+
+Hash Tree
+---------
+
+Each node in the tree is a cryptographic hash.  If it is a leaf node, the hash
+is of some block data on disk.  If it is an intermediary node, then the hash is
+of a number of child nodes.
+
+Each entry in the tree is a collection of neighboring nodes that fit in one
+block.  The number is determined by block_size and the size of the
+selected cryptographic digest algorithm.  The hashes are linearly ordered in
+this entry and any unaligned trailing space is ignored but included when
+calculating the parent node.
+
+The tree looks something like:
+
+alg = sha256, num_blocks = 32768, block_size = 4096
+
+                                 [   root    ]
+                                /    . . .    \
+                     [entry_0]                 [entry_1]
+                    /  . . .  \                 . . .   \
+         [entry_0_0]   . . .  [entry_0_127]    . . . .  [entry_1_127]
+           / ... \             /   . . .  \             /           \
+     blk_0 ... blk_127  blk_16256   blk_16383      blk_32640 . . . blk_32767
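+
+For example (an illustrative sketch mirroring the driver's computation,
+not part of the patch itself), the geometry above follows from:
+
+[[
+  /* sha256 => 32-byte digests; 4096 / 32 = 128 hashes fit per entry. */
+  unsigned int node_count_shift = fls(block_size / digest_size) - 1; /* 7 */
+  unsigned int node_count = 1 << node_count_shift;                 /* 128 */
+  /* 32768 leaves need 256 leaf entries, 2 entries above them, and one
+   * root block: depth = DIV_ROUND_UP(fls64(32767), 7)
+   *                   = DIV_ROUND_UP(15, 7) = 3.
+   */
+  int depth = DIV_ROUND_UP(fls64(num_blocks - 1), node_count_shift);
+]]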
+
+On-disk format
+==============
+
+Below is the recommended on-disk format. The verity kernel code does not
+read the on-disk header. It only reads the hash blocks which directly
+follow the header. It is expected that a user-space tool will verify the
+integrity of the verity_header and then invoke dmsetup with the correct
+parameters. Alternatively, the header can be omitted and the dmsetup
+parameters can be passed via the kernel command-line in a rooted chain
+of trust where the command-line is verified.
+
+The on-disk format is especially useful in cases where the hash blocks
+are on a separate partition. The magic number allows easy identification
+of the partition contents. Alternatively, the hash blocks can be stored
+in the same partition as the data to be verified. In such a configuration
+the filesystem on the partition would be sized a little smaller than
+the full partition, leaving room for the hash blocks.
+
+struct verity_header {
+       uint64_t magic = 0x7665726974790a00;
+       uint32_t version;
+       uint32_t block_size;
+       char digest[128]; /* in hex-ascii, null-terminated or 128-bytes */
+       char salt[128]; /* in hex-ascii, null-terminated or 128-bytes */
+}
+
+struct verity_header_block {
+	struct verity_header;
+	char unused[block_size - sizeof(struct verity_header) - sizeof(sig)];
+	char sig[128]; /* in hex-ascii, null-terminated or 128-bytes */
+}
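+
+A userspace checker might validate the magic along these lines (a sketch
+only; the on-disk byte order of the fields is an assumption here):
+
+[[
+  struct verity_header hdr;
+
+  if (pread(fd, &hdr, sizeof(hdr), 0) != (ssize_t)sizeof(hdr))
+          return -EIO;
+  if (hdr.magic != 0x7665726974790a00ULL)  /* "verity\n\0" */
+          return -EINVAL;  /* not a verity hash area */
+]]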
+
+Directly following the header are the hash blocks which are stored a depth
+at a time (starting from the root), sorted in order of increasing index.
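+
+The starting sector of each level (relative to hash_start) can therefore
+be computed as in this sketch, which mirrors what
+verity_tree_initialize_entries() does in the driver:
+
+[[
+  sector_t sector = 0;
+  for (depth = 0; depth < tree_depth; depth++) {
+          levels[depth].sector = sector;
+          /* each level is levels[depth].count blocks of block_size bytes */
+          sector += levels[depth].count * to_sector(block_size);
+  }
+]]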
+
+Usage
+=====
+
+The API provides mechanisms for reading and verifying a tree. When reading, all
+required hash tree data should be populated for a block before attempting a
+verify.  This can be done by calling verity_tree_populate().  When all data is
+ready, a call to verity_tree_verify_block() will perform both the direct block
+hash check and the hashes of the parent and neighboring nodes where needed to
+ensure validity up to the root hash.  Note, verity_tree_set_digest() should be
+called before any verification attempts occur.
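+
+A minimal call sequence might look as follows (a sketch; it assumes the
+read_cb completions have already fired via verity_tree_read_completed()):
+
+[[
+  verity_tree_set_digest(vt, root_hex);    /* trusted root hash, from boot */
+  verity_tree_populate(vt, io_ctx, block); /* issue hash block reads */
+  /* ... wait until verity_tree_is_populated(vt, block) ... */
+  if (verity_tree_verify_block(vt, block, page, offset))
+          /* hash mismatch: fail the I/O */;
+]]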
+
+Example
+=======
+
+Set up a device:
+[[
+  dmsetup create vroot --table \
+    "0 204800 verity 0 /dev/sda1 /dev/sda2 0 4096 sha256 "\
+    "4392712ba01368efdf14b05c76f9e4df0d53664630b5d48632ed17a137f39076 "\
+    "1234000000000000000000000000000000000000000000000000000000000000"
+]]
+
+A command line tool is available to compute the hash tree and return the
+root hash value.
+  http://git.chromium.org/cgi-bin/gitweb.cgi?p=dm-verity.git;a=tree
diff --git a/drivers/md/Kconfig b/drivers/md/Kconfig
index faa4741..b8bb690 100644
--- a/drivers/md/Kconfig
+++ b/drivers/md/Kconfig
@@ -370,4 +370,20 @@ config DM_FLAKEY
        ---help---
          A target that intermittently fails I/O for debugging purposes.
 
+config DM_VERITY
+        tristate "Verity target support"
+        depends on BLK_DEV_DM
+        select CRYPTO
+        select CRYPTO_HASH
+        ---help---
+          This device-mapper target allows you to create a device that
+          transparently integrity checks the data on it. You'll need to
+          activate the digests you're going to use in the cryptoapi
+          configuration.
+
+          To compile this code as a module, choose M here: the module will
+          be called dm-verity.
+
+          If unsure, say N.
+
 endif # MD
diff --git a/drivers/md/Makefile b/drivers/md/Makefile
index 046860c..70a29af 100644
--- a/drivers/md/Makefile
+++ b/drivers/md/Makefile
@@ -39,6 +39,7 @@ obj-$(CONFIG_DM_SNAPSHOT)	+= dm-snapshot.o
 obj-$(CONFIG_DM_PERSISTENT_DATA)	+= persistent-data/
 obj-$(CONFIG_DM_MIRROR)		+= dm-mirror.o dm-log.o dm-region-hash.o
 obj-$(CONFIG_DM_LOG_USERSPACE)	+= dm-log-userspace.o
+obj-$(CONFIG_DM_VERITY)         += dm-verity.o
 obj-$(CONFIG_DM_ZERO)		+= dm-zero.o
 obj-$(CONFIG_DM_RAID)	+= dm-raid.o
 obj-$(CONFIG_DM_THIN_PROVISIONING)	+= dm-thin-pool.o
diff --git a/drivers/md/dm-verity.c b/drivers/md/dm-verity.c
new file mode 100644
index 0000000..b72f350
--- /dev/null
+++ b/drivers/md/dm-verity.c
@@ -0,0 +1,1367 @@
+/*
+ * Originally based on dm-crypt.c,
+ * Copyright (C) 2003 Christophe Saout <christophe@saout.de>
+ * Copyright (C) 2004 Clemens Fruhwirth <clemens@endorphin.org>
+ * Copyright (C) 2006-2008 Red Hat, Inc. All rights reserved.
+ * Copyright (C) 2012 The Chromium OS Authors <chromium-os-dev@chromium.org>
+ *                    All Rights Reserved.
+ *
+ * This file is released under the GPLv2.
+ *
+ * Implements a verifying transparent block device.
+ * See Documentation/device-mapper/verity.txt
+ */
+#include <crypto/hash.h>
+#include <linux/atomic.h>
+#include <linux/bio.h>
+#include <linux/blkdev.h>
+#include <linux/genhd.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/mempool.h>
+#include <linux/module.h>
+#include <linux/percpu.h>
+#include <linux/workqueue.h>
+#include <linux/device-mapper.h>
+
+
+#define DM_MSG_PREFIX "verity"
+
+
+/* Helper for printing sector_t */
+#define ULL(x) ((unsigned long long)(x))
+
+#define MIN_IOS 32
+#define MIN_BIOS (MIN_IOS * 2)
+
+/* To avoid allocating memory for digests dynamically, we just set a
+ * max size to use for now.
+ */
+#define VERITY_MAX_DIGEST_SIZE 64   /* Supports up to 512-bit digests */
+#define VERITY_SALT_SIZE       32   /* 256 bits of salt is a lot */
+
+/* UNALLOCATED, PENDING, READY, and VERIFIED are valid states. All other
+ * values are entry-related return codes.
+ */
+#define VERITY_TREE_ENTRY_VERIFIED 8  /* 'nodes' checked against parent */
+#define VERITY_TREE_ENTRY_READY 4  /* 'nodes' is loaded and available */
+#define VERITY_TREE_ENTRY_PENDING 2  /* 'nodes' is being loaded */
+#define VERITY_TREE_ENTRY_UNALLOCATED 0 /* untouched */
+#define VERITY_TREE_ENTRY_ERROR -1 /* entry is unsuitable for use */
+#define VERITY_TREE_ENTRY_ERROR_IO -2 /* I/O error on load */
+
+/* Additional possible return codes */
+#define VERITY_TREE_ENTRY_ERROR_MISMATCH -3 /* Digest mismatch */
+
+
+struct verity_io {
+	struct dm_target *target;
+	struct bio *bio;
+	struct delayed_work work;
+	unsigned int flags;
+
+	int error;
+	atomic_t pending;
+
+	u64 block;  /* aligned block index */
+	u64 count;  /* aligned count in blocks */
+};
+
+/* verity_tree_entry
+ * Contains verity_tree->node_count tree nodes at a given tree depth.
+ * state is used to transactionally assure that data is paged in
+ * from disk.  Unless verity_tree kept running crypto contexts for each
+ * level, we need to load in the data for on-demand verification.
+ */
+struct verity_tree_entry {
+	atomic_t state; /* see defines */
+	/* Keeping an extra pointer per entry wastes up to ~33k of
+	 * memory if 1M blocks are used (or ~66k on a 64-bit arch)
+	 */
+	struct verity_io *io_context;  /* Reserve a pointer for use during io */
+	/* data should only be non-NULL if fully populated. */
+	void *nodes;  /* The hash data used to verify the children.
+		       * Guaranteed to be page-aligned.
+		       */
+};
+
+/* verity_tree_level
+ * Contains an array of entries which represent a page of hashes where
+ * each hash is a node in the tree at the given tree depth/level.
+ */
+struct verity_tree_level {
+	struct verity_tree_entry *entries;  /* array of entries of tree nodes */
+	unsigned int count;  /* number of entries at this level */
+	sector_t sector;  /* starting sector for this level */
+};
+
+/* Read callback: opaque context, start sector, destination buffer,
+ * sector count, and the entry being populated.
+ */
+typedef int (*verity_tree_callback)(void *,  /* external context */
+			      sector_t,  /* start sector */
+			      u8 *,  /* destination page */
+			      sector_t,  /* num sectors */
+			      struct verity_tree_entry *);
+/* verity_tree - Device mapper block hash tree
+ * verity_tree provides a fixed interface for comparing data blocks
+ * against cryptographic hashes stored in a hash tree. It
+ * optimizes the tree structure for storage on disk.
+ *
+ * The tree is built from the bottom up.  A collection of data,
+ * external to the tree, is hashed and these hashes are stored
+ * as the blocks in the tree.  For some number of these hashes,
+ * a parent node is created by hashing them.  These steps are
+ * repeated.
+ */
+struct verity_tree {
+	/* Configured values */
+	int depth;  /* Depth of the tree including the root */
+	unsigned int block_size;  /* Size of a hash block */
+	u64 block_count;  /* Number of blocks hashed */
+	char hash_alg[CRYPTO_MAX_ALG_NAME];
+	u8 salt[VERITY_SALT_SIZE];
+
+	/* Computed values */
+	unsigned int node_count;  /* Data size (in hashes) for each entry */
+	unsigned int node_count_shift;  /* log2(node_count) */
+	struct crypto_shash *tfm; /* hash for this device */
+	unsigned int hash_desc_size;
+	sector_t sectors;  /* Number of disk sectors used */
+	u8 digest[VERITY_MAX_DIGEST_SIZE];
+	unsigned int digest_size;
+
+	struct verity_tree_level *levels;
+
+	/* Callback for reading from the hash device */
+	verity_tree_callback read_cb;
+};
+
+/* per-requested-bio private data */
+enum verity_io_flags {
+	VERITY_IOFLAGS_CLONED = 0x1,	/* original bio has been cloned */
+};
+
+struct verity_config {
+	struct dm_dev *dev;
+	sector_t start;
+	sector_t size;
+
+	struct dm_dev *hash_dev;
+	sector_t hash_start;
+
+	struct verity_tree vt;
+
+	/* Pool required for io contexts */
+	mempool_t *io_pool;
+	/* Pool and bios required for making sure that backing device reads are
+	 * in PAGE_SIZE increments.
+	 */
+	struct bio_set *bs;
+
+	char hash_alg[CRYPTO_MAX_ALG_NAME];
+};
+
+
+static struct kmem_cache *_verity_io_pool;
+static struct workqueue_struct *kveritydq, *kverityd_ioq;
+
+static DEFINE_PER_CPU(struct shash_desc *, verity_hash_desc);
+static DEFINE_PER_CPU(unsigned int, verity_hash_size);
+
+static void kverityd_verify(struct work_struct *work);
+static void kverityd_io(struct work_struct *work);
+static void kverityd_io_vt_populate(struct verity_io *io);
+static void kverityd_io_vt_populate_end(struct bio *, int error);
+
+
+/*
+ * Utilities
+ */
+
+static void bin2hex(char *dst, const u8 *src, size_t count)
+{
+	while (count-- > 0) {
+		sprintf(dst, "%02hhx", (int)*src);
+		dst += 2;
+		src++;
+	}
+}
+
+/*
+ * Verity Tree
+ */
+
+/* Functions for converting indices to nodes. */
+
+static inline unsigned int verity_tree_get_level_shift(struct verity_tree *vt,
+						  int depth)
+{
+	return (vt->depth - depth) * vt->node_count_shift;
+}
+
+/* For the given depth, this is the entry index containing @leaf.  The same
+ * value computed at depth+1 (mod node_count) is the node index within the
+ * entry at @depth.
+ */
+static inline u64 verity_tree_index_at_level(struct verity_tree *vt,
+					     int depth, u64 leaf)
+{
+	return leaf >> verity_tree_get_level_shift(vt, depth);
+}
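+
+/* Illustrative numbers: with node_count_shift = 7 and vt->depth = 3, leaf
+ * 16300 lives in entry 16300 >> 7 = 127 at depth 2, entry 16300 >> 14 = 0
+ * at depth 1, and entry 0 at depth 0 (whose hashes the root digest covers).
+ */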
+
+static inline struct verity_tree_entry *verity_tree_get_entry(
+		struct verity_tree *vt,
+		int depth, u64 block)
+{
+	u64 index = verity_tree_index_at_level(vt, depth, block);
+	struct verity_tree_level *level = &vt->levels[depth];
+
+	return &level->entries[index];
+}
+
+static inline void *verity_tree_get_node(struct verity_tree *vt,
+					 struct verity_tree_entry *entry,
+					 int depth, unsigned int block)
+{
+	u64 index = verity_tree_index_at_level(vt, depth, block);
+	unsigned int node_index = (unsigned int)index % vt->node_count;
+
+	return entry->nodes + (node_index * vt->digest_size);
+}
+
+/**
+ * verity_tree_compute_hash: hashes one block of data, followed by the salt
+ */
+static int verity_tree_compute_hash(struct verity_tree *vt, struct page *pg,
+				    unsigned int offset, u8 *digest)
+{
+	struct shash_desc **hash_descp = &__get_cpu_var(verity_hash_desc);
+	unsigned int *hash_sizep = &__get_cpu_var(verity_hash_size);
+	struct shash_desc *hash_desc;
+	void *data;
+	int err;
+
+	if (!*hash_descp || *hash_sizep < vt->hash_desc_size) {
+		kfree(*hash_descp);
+		*hash_descp = kmalloc(vt->hash_desc_size, GFP_KERNEL);
+		if (!*hash_descp)
+			return -ENOMEM;
+		*hash_sizep = vt->hash_desc_size;
+	}
+	hash_desc = *hash_descp;
+	hash_desc->tfm = vt->tfm;
+	hash_desc->flags = 0x0;
+
+	if (crypto_shash_init(hash_desc)) {
+		DMCRIT("failed to reinitialize crypto hash (proc:%d)",
+			smp_processor_id());
+		return -EINVAL;
+	}
+	data = kmap_atomic(pg);
+	err = crypto_shash_update(hash_desc, data + offset, vt->block_size);
+	kunmap_atomic(data);
+	if (err) {
+		DMCRIT("crypto_hash_update failed");
+		return -EINVAL;
+	}
+	if (crypto_shash_update(hash_desc, vt->salt, sizeof(vt->salt))) {
+		DMCRIT("crypto_hash_update failed");
+		return -EINVAL;
+	}
+	if (crypto_shash_final(hash_desc, digest)) {
+		DMCRIT("crypto_hash_final failed");
+		return -EINVAL;
+	}
+
+	return 0;
+}
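+
+/* Note (informational): the digest computed above is H(block_data || salt);
+ * a userspace tree generator must hash in the same order for its stored
+ * nodes to match.
+ */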
+
+static int verity_tree_initialize_entries(struct verity_tree *vt)
+{
+	/* last represents the index of the last digest store in the tree.
+	 * By walking the tree with that index, it is possible to compute the
+	 * total number of entries at each level.
+	 *
+	 * Since each entry will contain up to |node_count| nodes of the tree,
+	 * it is possible that the last index may not be at the end of a given
+	 * entry->nodes.  In that case, it is assumed the value is padded.
+	 *
+	 * Note, we treat both the tree root (1 hash) and the tree leaves
+	 * independently from the vt data structures.  Logically, the root is
+	 * depth=-1 and the block layer level is depth=vt->depth
+	 */
+	u64 last = vt->block_count - 1;
+	int depth;
+
+	/* check that the largest level->count can't result in an int overflow
+	 * on allocation or sector calculation.
+	 */
+	if (((last >> vt->node_count_shift) + 1) >
+	    UINT_MAX / max_t(unsigned long,
+			     sizeof(struct verity_tree_entry),
+			     (unsigned long)to_sector(vt->block_size))) {
+		DMCRIT("required entries %llu is too large", vt->block_count);
+		return -EINVAL;
+	}
+
+	/* Track the current sector location for each level so we don't have to
+	 * compute it during traversals.
+	 */
+	vt->sectors = 0;
+	for (depth = 0; depth < vt->depth; ++depth) {
+		struct verity_tree_level *level = &vt->levels[depth];
+
+		level->count = verity_tree_index_at_level(vt, depth, last) + 1;
+		level->entries = kcalloc(level->count,
+					 sizeof(struct verity_tree_entry),
+					 GFP_KERNEL);
+		if (!level->entries) {
+			DMERR("failed to allocate entries for depth %d", depth);
+			return -ENOMEM;
+		}
+		level->sector = vt->sectors;
+		vt->sectors += level->count * to_sector(vt->block_size);
+	}
+
+	return 0;
+}
+
+/**
+ * verity_tree_create - prepares @vt for use
+ * @vt:	          pointer to the verity_tree to initialize
+ * @block_count:  the number of block hashes / tree leaves
+ * @block_size:   size in bytes of a hash block
+ * @alg_name:	  crypto hash algorithm name
+ *
+ * Returns 0 on success.
+ *
+ * Callers can offset into devices by storing the data in the io callbacks.
+ */
+static int verity_tree_create(struct verity_tree *vt, u64 block_count,
+			      unsigned int block_size, const char *alg_name)
+{
+	int status = 0;
+
+	vt->block_size = block_size;
+	/* Verify that PAGE_SIZE >= block_size >= SECTOR_SIZE. */
+	if ((block_size > PAGE_SIZE) ||
+	    (PAGE_SIZE % block_size) ||
+	    (to_sector(block_size) == 0))
+		return -EINVAL;
+
+	vt->tfm = crypto_alloc_shash(alg_name, 0, 0);
+	if (IS_ERR(vt->tfm)) {
+		DMERR("failed to allocate crypto hash '%s'", alg_name);
+		return -ENOMEM;
+	}
+	vt->hash_desc_size = sizeof(struct shash_desc) +
+		crypto_shash_descsize(vt->tfm);
+
+	vt->digest_size = crypto_shash_digestsize(vt->tfm);
+	/* We expect to be able to pack >=2 hashes into a block */
+	if (block_size / vt->digest_size < 2) {
+		DMERR("too few hashes fit in a block");
+		status = -EINVAL;
+		goto bad_arg;
+	}
+
+	if (vt->digest_size > VERITY_MAX_DIGEST_SIZE) {
+		DMERR("VERITY_MAX_DIGEST_SIZE too small for digest");
+		status = -EINVAL;
+		goto bad_arg;
+	}
+
+	/* Configure the tree */
+	vt->block_count = block_count;
+	if (block_count == 0) {
+		DMERR("block_count must be non-zero");
+		status = -EINVAL;
+		goto bad_arg;
+	}
+
+	/* Each verity_tree_entry->nodes is one block.  The node code tracks
+	 * how many nodes fit into one entry where a node is a single
+	 * hash (message digest).
+	 */
+	vt->node_count_shift = fls(block_size / vt->digest_size) - 1;
+	/* Round down to the nearest power of two.  This makes indexing
+	 * into the tree much less painful.
+	 */
+	vt->node_count = 1 << vt->node_count_shift;
+
+	/* This is unlikely to happen, but with 64k pages, who knows. */
+	if (vt->node_count > UINT_MAX / vt->digest_size) {
+		DMERR("node_count * hash_len exceeds UINT_MAX!");
+		status = -EINVAL;
+		goto bad_arg;
+	}
+
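+	/* Illustrative example: 32768 blocks of 4 KiB hashed with sha256
+	 * give node_count_shift = 7, so
+	 * depth = DIV_ROUND_UP(fls64(32767), 7) = DIV_ROUND_UP(15, 7) = 3.
+	 */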
+	vt->depth = DIV_ROUND_UP(fls64(block_count - 1), vt->node_count_shift);
+
+	/* Ensure that we can safely shift by this value. */
+	if (vt->depth * vt->node_count_shift >= sizeof(unsigned int) * 8) {
+		DMERR("specified depth and node_count_shift is too large");
+		status = -EINVAL;
+		goto bad_arg;
+	}
+
+	/* Allocate levels. Each level of the tree may have an arbitrary number
+	 * of verity_tree_entry structs.  Each entry contains node_count nodes.
+	 * Each node in the tree is a cryptographic digest of either node_count
+	 * nodes on the subsequent level or of a specific block on disk.
+	 */
+	vt->levels = kcalloc(vt->depth,
+			     sizeof(struct verity_tree_level), GFP_KERNEL);
+	if (!vt->levels) {
+		DMERR("failed to allocate tree levels");
+		status = -ENOMEM;
+		goto bad_arg;
+	}
+
+	vt->read_cb = NULL;
+
+	status = verity_tree_initialize_entries(vt);
+	if (status)
+		goto bad_entries_alloc;
+
+	/* We compute depth such that there is only 1 block at level 0. */
+	BUG_ON(vt->levels[0].count != 1);
+
+	return 0;
+
+bad_entries_alloc:
+	while (vt->depth-- > 0)
+		kfree(vt->levels[vt->depth].entries);
+	kfree(vt->levels);
+bad_arg:
+	crypto_free_shash(vt->tfm);
+	return status;
+}
+
+/**
+ * verity_tree_read_completed
+ * @entry:   pointer to the entry that's been loaded
+ * @status:  I/O status. Non-zero is failure.
+ * MUST always be called after a read_cb completes.
+ */
+static void verity_tree_read_completed(struct verity_tree_entry *entry,
+				       int status)
+{
+	if (status) {
+		DMCRIT("an I/O error occurred while reading entry");
+		atomic_set(&entry->state, VERITY_TREE_ENTRY_ERROR_IO);
+		return;
+	}
+	BUG_ON(atomic_read(&entry->state) != VERITY_TREE_ENTRY_PENDING);
+	atomic_set(&entry->state, VERITY_TREE_ENTRY_READY);
+}
+
+/**
+ * verity_tree_verify_block - checks that all path nodes for @block are valid
+ * @vt:	     pointer to a verity_tree_create()d vt
+ * @block:   specific block data is expected from
+ * @pg:	     page holding the block data
+ * @offset:  offset into the page
+ *
+ * Returns 0 on success, VERITY_TREE_ENTRY_ERROR_MISMATCH on error.
+ */
+static int verity_tree_verify_block(struct verity_tree *vt, unsigned int block,
+				    struct page *pg, unsigned int offset)
+{
+	int state, depth = vt->depth;
+	u8 digest[VERITY_MAX_DIGEST_SIZE];
+	struct verity_tree_entry *entry;
+	void *node;
+
+	do {
+		/* Need to check that the hash of the current block is accurate
+		 * in its parent.
+		 */
+		entry = verity_tree_get_entry(vt, depth - 1, block);
+		state = atomic_read(&entry->state);
+		/* This call is only safe if all nodes along the path
+		 * are already populated (i.e. READY) via verity_tree_populate.
+		 */
+		BUG_ON(state < VERITY_TREE_ENTRY_READY);
+		node = verity_tree_get_node(vt, entry, depth, block);
+
+		if (verity_tree_compute_hash(vt, pg, offset, digest) ||
+		    memcmp(digest, node, vt->digest_size))
+			goto mismatch;
+
+		/* Keep the containing block of hashes to be verified in the
+		 * next pass.
+		 */
+		pg = virt_to_page(entry->nodes);
+		offset = offset_in_page(entry->nodes);
+	} while (--depth > 0 && state != VERITY_TREE_ENTRY_VERIFIED);
+
+	if (depth == 0 && state != VERITY_TREE_ENTRY_VERIFIED) {
+		if (verity_tree_compute_hash(vt, pg, offset, digest) ||
+		    memcmp(digest, vt->digest, vt->digest_size))
+			goto mismatch;
+		atomic_set(&entry->state, VERITY_TREE_ENTRY_VERIFIED);
+	}
+
+	/* Mark path to leaf as verified. */
+	for (depth++; depth < vt->depth; depth++) {
+		entry = verity_tree_get_entry(vt, depth, block);
+		/* At this point, entry can only be in VERIFIED or READY state.
+		 * So it is safe to use atomic_set instead of atomic_cmpxchg.
+		 */
+		atomic_set(&entry->state, VERITY_TREE_ENTRY_VERIFIED);
+	}
+
+	return 0;
+
+mismatch:
+	DMERR_LIMIT("verify_path: failed to verify hash (d=%d,bi=%u)",
+		    depth, block);
+	return VERITY_TREE_ENTRY_ERROR_MISMATCH;
+}
+
+/**
+ * verity_tree_is_populated - check that nodes needed to verify a given
+ *                            block are all ready
+ * @vt:	    pointer to a verity_tree_create()d vt
+ * @block:  specific block data is expected from
+ *
+ * Callers may wish to call verity_tree_is_populated() when checking an io
+ * for which entries were already pending.
+ */
+static bool verity_tree_is_populated(struct verity_tree *vt, unsigned int block)
+{
+	int depth;
+
+	for (depth = vt->depth - 1; depth >= 0; depth--) {
+		struct verity_tree_entry *entry;
+		entry = verity_tree_get_entry(vt, depth, block);
+		if (atomic_read(&entry->state) < VERITY_TREE_ENTRY_READY)
+			return false;
+	}
+
+	return true;
+}
+
+/**
+ * verity_tree_populate - reads entries from disk needed to verify a given block
+ * @vt:     pointer to a verity_tree_create()d vt
+ * @ctx:    context used for all read_cb calls on this request
+ * @block:  specific block data is expected from
+ *
+ * Returns negative value on error. Returns 0 on success.
+ */
+static int verity_tree_populate(struct verity_tree *vt, void *ctx,
+				unsigned int block)
+{
+	int depth, state;
+
+	BUG_ON(block >= vt->block_count);
+
+	for (depth = vt->depth - 1; depth >= 0; --depth) {
+		struct verity_tree_level *level;
+		struct verity_tree_entry *entry;
+		u64 index;
+
+		index = verity_tree_index_at_level(vt, depth, block);
+		level = &vt->levels[depth];
+		entry = verity_tree_get_entry(vt, depth, block);
+		state = atomic_cmpxchg(&entry->state,
+				       VERITY_TREE_ENTRY_UNALLOCATED,
+				       VERITY_TREE_ENTRY_PENDING);
+		if (state == VERITY_TREE_ENTRY_VERIFIED)
+			break;
+		if (state <= VERITY_TREE_ENTRY_ERROR)
+			goto error_state;
+		if (state != VERITY_TREE_ENTRY_UNALLOCATED)
+			continue;
+
+		/* Current entry is claimed for allocation and loading */
+		entry->nodes = kmalloc(vt->block_size, GFP_NOIO);
+		if (!entry->nodes) {
+			atomic_set(&entry->state, VERITY_TREE_ENTRY_ERROR);
+			goto error_state;
+		}
+
+		vt->read_cb(ctx,
+			    level->sector + to_sector(index * vt->block_size),
+			    entry->nodes, to_sector(vt->block_size), entry);
+	}
+
+	return 0;
+
+error_state:
+	DMCRIT("block %u at depth %d is in an error state", block, depth);
+	return -EPERM;
+}
+
+/**
+ * verity_tree_destroy - cleans up all memory used by @vt
+ * @vt:	 pointer to a verity_tree_create()d vt
+ */
+static void verity_tree_destroy(struct verity_tree *vt)
+{
+	int depth;
+
+	for (depth = 0; depth < vt->depth; depth++) {
+		struct verity_tree_entry *entry = vt->levels[depth].entries;
+		struct verity_tree_entry *entry_end = entry +
+			vt->levels[depth].count;
+		for (; entry < entry_end; ++entry)
+			kfree(entry->nodes);
+		kfree(vt->levels[depth].entries);
+	}
+	kfree(vt->levels);
+	crypto_free_shash(vt->tfm);
+}
+
+/*
+ * Verity Tree Accessors
+ */
+
+/**
+ * verity_tree_set_digest - sets an unverified root digest hash from hex
+ * @vt:	     pointer to a verity_tree_create()d vt
+ * @digest:  string containing the digest in hex
+ * Returns non-zero on error.
+ */
+static int verity_tree_set_digest(struct verity_tree *vt, const char *digest)
+{
+	/* Make sure we have at least the bytes expected */
+	if (strnlen(digest, vt->digest_size * 2) != vt->digest_size * 2) {
+		DMERR("root digest length does not match hash algorithm");
+		return -1;
+	}
+	return hex2bin(vt->digest, digest, vt->digest_size);
+}
+
+/**
+ * verity_tree_digest - returns root digest in hex
+ * @vt:	     pointer to a verity_tree_create()d vt
+ * @digest:  buffer to put digest into, of length VERITY_MAX_DIGEST_SIZE * 2 + 1.
+ */
+int verity_tree_digest(struct verity_tree *vt, char *digest)
+{
+	bin2hex(digest, vt->digest, vt->digest_size);
+	return 0;
+}
+
+/**
+ * verity_tree_set_salt - sets the salt
+ * @vt:    pointer to a verity_tree_create()d vt
+ * @salt:  string containing the salt in hex
+ * Returns non-zero on error.
+ */
+int verity_tree_set_salt(struct verity_tree *vt, const char *salt)
+{
+	size_t saltlen = min(strlen(salt) / 2, sizeof(vt->salt));
+	memset(vt->salt, 0, sizeof(vt->salt));
+	return hex2bin(vt->salt, salt, saltlen);
+}
+
+
+/**
+ * verity_tree_salt - returns the salt in hex
+ * @vt:    pointer to a verity_tree_create()d vt
+ * @salt:  buffer to put salt into, of length VERITY_SALT_SIZE * 2 + 1.
+ */
+int verity_tree_salt(struct verity_tree *vt, char *salt)
+{
+	bin2hex(salt, vt->salt, sizeof(vt->salt));
+	return 0;
+}
+
+/*
+ * Allocation and utility functions
+ */
+
+static void kverityd_src_io_read_end(struct bio *clone, int error);
+
+/* Shared destructor for all internal bios */
+static void verity_bio_destructor(struct bio *bio)
+{
+	struct verity_io *io = bio->bi_private;
+	struct verity_config *vc = io->target->private;
+	bio_free(bio, vc->bs);
+}
+
+static struct bio *verity_alloc_bioset(struct verity_config *vc, gfp_t gfp_mask,
+				       int nr_iovecs)
+{
+	return bio_alloc_bioset(gfp_mask, nr_iovecs, vc->bs);
+}
+
+static struct verity_io *verity_io_alloc(struct dm_target *ti,
+					    struct bio *bio)
+{
+	struct verity_config *vc = ti->private;
+	sector_t sector = bio->bi_sector - ti->begin;
+	struct verity_io *io;
+	u64 tmp;
+
+	io = mempool_alloc(vc->io_pool, GFP_NOIO);
+	io->flags = 0;
+	io->target = ti;
+	io->bio = bio;
+	io->error = 0;
+
+	/* Adjust the sector by the virtual starting sector */
+	tmp = (u64)to_bytes(1) * sector;
+	do_div(tmp, vc->vt.block_size);
+	io->block = tmp;
+	io->count = bio->bi_size / vc->vt.block_size;
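+	/* Illustrative: a 16 KiB bio at sector 256 with 4 KiB blocks maps
+	 * to block = 256 * 512 / 4096 = 32 and count = 4.
+	 */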
+
+	atomic_set(&io->pending, 0);
+
+	return io;
+}
+
+static struct bio *verity_bio_clone(struct verity_io *io)
+{
+	struct verity_config *vc = io->target->private;
+	struct bio *bio = io->bio;
+	struct bio *clone = verity_alloc_bioset(vc, GFP_NOIO, bio->bi_max_vecs);
+
+	__bio_clone(clone, bio);
+	clone->bi_private = io;
+	clone->bi_end_io  = kverityd_src_io_read_end;
+	clone->bi_bdev    = vc->dev->bdev;
+	clone->bi_sector += vc->start - io->target->begin;
+	clone->bi_destructor = verity_bio_destructor;
+
+	return clone;
+}
+
+/*
+ * Reverse flow of requests into the device.
+ *
+ * (Start at the bottom with verity_map and work your way upward).
+ */
+
+static void verity_inc_pending(struct verity_io *io);
+
+static void verity_return_bio_to_caller(struct verity_io *io)
+{
+	struct verity_config *vc = io->target->private;
+
+	bio_endio(io->bio, io->error);
+	mempool_free(io, vc->io_pool);
+}
+
+/* Check for any missing vt hashes. */
+static bool verity_is_vt_populated(struct verity_io *io)
+{
+	struct verity_config *vc = io->target->private;
+	u64 block;
+
+	for (block = io->block; block < io->block + io->count; ++block)
+		if (!verity_tree_is_populated(&vc->vt, block))
+			return false;
+
+	return true;
+}
+
+/* verity_dec_pending manages the lifetime of all verity_io structs.
+ * Non-bug error handling and all passage from workqueue to workqueue
+ * are centralized through this interface.
+ */
+static void verity_dec_pending(struct verity_io *io)
+{
+	if (!atomic_dec_and_test(&io->pending))
+		goto done;
+
+	if (unlikely(io->error))
+		goto io_error;
+
+	/* I/Os that were pending may now be ready */
+	if (verity_is_vt_populated(io)) {
+		INIT_DELAYED_WORK(&io->work, kverityd_verify);
+		queue_delayed_work(kveritydq, &io->work, 0);
+	} else {
+		INIT_DELAYED_WORK(&io->work, kverityd_io);
+		queue_delayed_work(kverityd_ioq, &io->work, HZ/10);
+	}
+
+done:
+	return;
+
+io_error:
+	verity_return_bio_to_caller(io);
+}
+
+/* Walks the data set and computes the hash of the data read from the
+ * untrusted source device.  The computed hash is then passed to verity-tree
+ * for verification.
+ */
+static int verity_verify(struct verity_config *vc,
+			 struct verity_io *io)
+{
+	unsigned int block_size = vc->vt.block_size;
+	struct bio *bio = io->bio;
+	u64 block = io->block;
+	unsigned int idx;
+	int r;
+
+	for (idx = bio->bi_idx; idx < bio->bi_vcnt; idx++) {
+		struct bio_vec *bv = bio_iovec_idx(bio, idx);
+		unsigned int offset = bv->bv_offset;
+		unsigned int len = bv->bv_len;
+
+		BUG_ON(offset % block_size);
+		BUG_ON(len % block_size);
+
+		while (len) {
+			r = verity_tree_verify_block(&vc->vt, block,
+						bv->bv_page, offset);
+			if (r)
+				goto bad_return;
+
+			offset += block_size;
+			len -= block_size;
+			block++;
+			cond_resched();
+		}
+	}
+
+	return 0;
+
+bad_return:
+	/* verity_tree functions aren't expected to return errno friendly
+	 * values.  They are converted here for uniformity.
+	 */
+	if (r > 0) {
+		DMERR("Pending data for block %llu seen at verify", ULL(block));
+		r = -EBUSY;
+	} else {
+		DMERR_LIMIT("Block hash does not match!");
+		r = -EACCES;
+	}
+	return r;
+}
+
+/* Services the verify workqueue */
+static void kverityd_verify(struct work_struct *work)
+{
+	struct delayed_work *dwork = container_of(work, struct delayed_work,
+						  work);
+	struct verity_io *io = container_of(dwork, struct verity_io,
+					    work);
+	struct verity_config *vc = io->target->private;
+
+	io->error = verity_verify(vc, io);
+
+	/* Free up the bio and tag with the return value */
+	verity_return_bio_to_caller(io);
+}
+
+/* Asynchronously called upon the completion of verity-tree I/O. The status
+ * of the operation is passed back to verity-tree and the next steps are
+ * decided by verity_dec_pending.
+ */
+static void kverityd_io_vt_populate_end(struct bio *bio, int error)
+{
+	struct verity_tree_entry *entry = bio->bi_private;
+	struct verity_io *io = entry->io_context;
+
+	/* Tell the tree to atomically update now that we've populated
+	 * the given entry.
+	 */
+	verity_tree_read_completed(entry, error);
+
+	/* Clean up for reuse when reading data to be checked */
+	bio->bi_vcnt = 0;
+	bio->bi_io_vec->bv_offset = 0;
+	bio->bi_io_vec->bv_len = 0;
+	bio->bi_io_vec->bv_page = NULL;
+	/* Restore the private data to I/O so the destructor can be shared. */
+	bio->bi_private = io;
+	bio_put(bio);
+
+	/* We bail but assume the tree has been marked bad. */
+	if (unlikely(error)) {
+		DMERR("Failed to read for sector %llu (%u)",
+		      ULL(io->bio->bi_sector), io->bio->bi_size);
+		io->error = error;
+		/* Pass through the error to verity_dec_pending below */
+	}
+	/* When pending = 0, it will transition to reading real data */
+	verity_dec_pending(io);
+}
+
+/* Called by verity-tree (via verity_tree_populate), this function provides
+ * the message digests to verity-tree that are stored on disk.
+ */
+static int kverityd_vt_read_callback(void *ctx, sector_t start, u8 *dst,
+				      sector_t count,
+				      struct verity_tree_entry *entry)
+{
+	struct verity_io *io = ctx;  /* I/O for this batch */
+	struct verity_config *vc;
+	struct bio *bio;
+
+	vc = io->target->private;
+
+	/* The I/O context is nested inside the entry so that we don't need one
+	 * io context per page read.
+	 */
+	entry->io_context = ctx;
+
+	/* We should only get page size requests at present. */
+	verity_inc_pending(io);
+	bio = verity_alloc_bioset(vc, GFP_NOIO, 1);
+	bio->bi_private = entry;
+	bio->bi_idx = 0;
+	bio->bi_size = vc->vt.block_size;
+	bio->bi_sector = vc->hash_start + start;
+	bio->bi_bdev = vc->hash_dev->bdev;
+	bio->bi_end_io = kverityd_io_vt_populate_end;
+	bio->bi_rw = REQ_META;
+	/* Only need to free the bio since the page is managed by vt */
+	bio->bi_destructor = verity_bio_destructor;
+	bio->bi_vcnt = 1;
+	bio->bi_io_vec->bv_offset = offset_in_page(dst);
+	bio->bi_io_vec->bv_len = to_bytes(count);
+	/* dst is guaranteed to be a page_pool allocation */
+	bio->bi_io_vec->bv_page = virt_to_page(dst);
+	/* Track that this I/O is in use.  There should be no risk of the io
+	 * being removed prior since this is called synchronously.
+	 */
+	generic_make_request(bio);
+	return 0;
+}
+
+/* Submits an io request for each missing block of block hashes.
+ * The last one to return will then enqueue this on the io workqueue.
+ */
+static void kverityd_io_vt_populate(struct verity_io *io)
+{
+	struct verity_config *vc = io->target->private;
+	u64 block;
+
+	for (block = io->block; block < io->block + io->count; ++block) {
+		int ret = verity_tree_populate(&vc->vt, io, block);
+
+		if (ret < 0) {
+			/* verity_dec_pending will handle the error case. */
+			io->error = ret;
+			break;
+		}
+	}
+}
+
+/* Asynchronously called upon the completion of I/O issued
+ * from kverityd_src_io_read. verity_dec_pending() acts as
+ * the scheduler/flow manager.
+ */
+static void kverityd_src_io_read_end(struct bio *clone, int error)
+{
+	struct verity_io *io = clone->bi_private;
+
+	if (unlikely(!bio_flagged(clone, BIO_UPTODATE) && !error))
+		error = -EIO;
+
+	if (unlikely(error)) {
+		DMERR_LIMIT("Error occurred: %d (%llu, %u)",
+			    error, ULL(clone->bi_sector), clone->bi_size);
+		io->error = error;
+	}
+
+	/* Release the clone; this keeps the block layer from leaving
+	 * offsets, etc. in unexpected states.
+	 */
+	bio_put(clone);
+
+	verity_dec_pending(io);
+}
+
+/* If not yet underway, an I/O request will be issued to the vc->dev
+ * device for the data needed. It is cloned to avoid unexpected changes
+ * to the original bio struct.
+ */
+static void kverityd_src_io_read(struct verity_io *io)
+{
+	struct bio *clone;
+
+	/* Check if the read is already issued. */
+	if (io->flags & VERITY_IOFLAGS_CLONED)
+		return;
+
+	io->flags |= VERITY_IOFLAGS_CLONED;
+
+	/* Clone the bio. The block layer may modify the bvec array. */
+	clone = verity_bio_clone(io);
+	if (unlikely(!clone)) {
+		io->error = -ENOMEM;
+		return;
+	}
+
+	verity_inc_pending(io);
+
+	generic_make_request(clone);
+}
+
+/* kverityd_io services the I/O workqueue. For each pass through
+ * the I/O workqueue, a call to populate both the origin drive
+ * data and the hash tree data is made.
+ */
+static void kverityd_io(struct work_struct *work)
+{
+	struct delayed_work *dwork = container_of(work, struct delayed_work,
+						  work);
+	struct verity_io *io = container_of(dwork, struct verity_io,
+					    work);
+
+	/* Issue requests asynchronously. */
+	verity_inc_pending(io);
+	kverityd_src_io_read(io);
+	kverityd_io_vt_populate(io);
+	verity_dec_pending(io);
+}
+
+/* Paired with verity_dec_pending, the pending value in the io dictates the
+ * lifetime of a request and when it is ready to be processed on the
+ * workqueues.
+ */
+static void verity_inc_pending(struct verity_io *io)
+{
+	atomic_inc(&io->pending);
+}
+
+/* Block-level requests start here. */
+static int verity_map(struct dm_target *ti, struct bio *bio,
+		      union map_info *map_context)
+{
+	struct verity_io *io;
+	struct verity_config *vc;
+	struct request_queue *r_queue;
+
+	if (unlikely(!ti)) {
+		DMERR("dm_target was NULL");
+		return -EIO;
+	}
+
+	vc = ti->private;
+	r_queue = bdev_get_queue(vc->dev->bdev);
+
+	if (bio_data_dir(bio) == WRITE) {
+		/* If we silently drop writes, then the VFS layer will cache
+		 * the write and persist it in memory. While it doesn't change
+		 * the underlying storage, it still may be contrary to the
+		 * behavior expected by a verified, read-only device.
+		 */
+		DMWARN_LIMIT("write request received. rejecting with -EIO.");
+		return -EIO;
+	} else {
+		/* Queue up the request to be verified */
+		io = verity_io_alloc(ti, bio);
+		if (!io) {
+			DMERR_LIMIT("Failed to allocate and init IO data");
+			return DM_MAPIO_REQUEUE;
+		}
+		INIT_DELAYED_WORK(&io->work, kverityd_io);
+		queue_delayed_work(kverityd_ioq, &io->work, 0);
+	}
+
+	return DM_MAPIO_SUBMITTED;
+}
+
+/*
+ * Non-block interfaces and device-mapper specific code
+ */
+
+/*
+ * Verity target parameters:
+ *
+ * <version> <dev> <hash_dev> <hash_start> <block_size> <alg> <digest> <salt>
+ *
+ * version:        version of the hash tree on-disk format
+ * dev:            device to verify
+ * hash_dev:       device hashtree is stored on
+ * hash_start:     start address of hashes
+ * block_size:     size of a hash block
+ * alg:            hash algorithm
+ * digest:         toplevel hash of the tree
+ * salt:           salt
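+ *
+ * Example table line (illustrative values):
+ *   0 204800 verity 0 /dev/sda1 /dev/sda2 0 4096 sha256 <digest> <salt>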
+ */
+static int verity_ctr(struct dm_target *ti, unsigned int argc, char **argv)
+{
+	struct verity_config *vc = NULL;
+	const char *dev, *hash_dev, *alg, *digest, *salt;
+	unsigned long hash_start, block_size, version;
+	sector_t blocks;
+	int ret;
+
+	if (argc != 8) {
+		ti->error = "Invalid argument count";
+		return -EINVAL;
+	}
+
+	if (kstrtoul(argv[0], 10, &version) || (version != 0)) {
+		ti->error = "Invalid version";
+		return -EINVAL;
+	}
+	dev = argv[1];
+	hash_dev = argv[2];
+	if (kstrtoul(argv[3], 10, &hash_start)) {
+		ti->error = "Invalid hash_start";
+		return -EINVAL;
+	}
+	if (kstrtoul(argv[4], 10, &block_size) || (block_size > UINT_MAX)) {
+		ti->error = "Invalid block_size";
+		return -EINVAL;
+	}
+	alg = argv[5];
+	digest = argv[6];
+	salt = argv[7];
+
+	/* The device mapper device should be setup read-only */
+	if ((dm_table_get_mode(ti->table) & ~FMODE_READ) != 0) {
+		ti->error = "Must be created readonly.";
+		return -EINVAL;
+	}
+
+	vc = kzalloc(sizeof(*vc), GFP_KERNEL);
+	if (!vc)
+		return -ENOMEM;
+
+	/* Calculate the blocks from the given device size */
+	vc->size = ti->len;
+	blocks = to_bytes(vc->size) / block_size;
+	if (verity_tree_create(&vc->vt, blocks, block_size, alg)) {
+		DMERR("failed to create required vt");
+		goto bad_vt;
+	}
+	if (verity_tree_set_digest(&vc->vt, digest)) {
+		DMERR("digest error");
+		goto bad_digest;
+	}
+	verity_tree_set_salt(&vc->vt, salt);
+	vc->vt.read_cb = kverityd_vt_read_callback;
+
+	vc->start = 0;
+	/* We only ever grab the device in read-only mode. */
+	ret = dm_get_device(ti, dev, dm_table_get_mode(ti->table), &vc->dev);
+	if (ret) {
+		DMERR("Failed to acquire device '%s': %d", dev, ret);
+		ti->error = "Device lookup failed";
+		goto bad_verity_dev;
+	}
+
+	if ((to_bytes(vc->start) % block_size) ||
+	    (to_bytes(vc->size) % block_size)) {
+		ti->error = "Device must be block_size divisible/aligned";
+		goto bad_hash_start;
+	}
+
+	vc->hash_start = (sector_t)hash_start;
+
+	/*
+	 * Note, dev == hash_dev is okay as long as the size of
+	 *       ti->len passed to device mapper does not include
+	 *       the hashes.
+	 */
+	if (dm_get_device(ti, hash_dev,
+			  dm_table_get_mode(ti->table), &vc->hash_dev)) {
+		ti->error = "Hash device lookup failed";
+		goto bad_hash_dev;
+	}
+
+	if (snprintf(vc->hash_alg, CRYPTO_MAX_ALG_NAME, "%s", alg) >=
+	    CRYPTO_MAX_ALG_NAME) {
+		ti->error = "Hash algorithm name is too long";
+		goto bad_hash;
+	}
+
+	vc->io_pool = mempool_create_slab_pool(MIN_IOS, _verity_io_pool);
+	if (!vc->io_pool) {
+		ti->error = "Cannot allocate verity io mempool";
+		goto bad_slab_pool;
+	}
+
+	vc->bs = bioset_create(MIN_BIOS, 0);
+	if (!vc->bs) {
+		ti->error = "Cannot allocate verity bioset";
+		goto bad_bs;
+	}
+
+	ti->private = vc;
+
+	return 0;
+
+bad_bs:
+	mempool_destroy(vc->io_pool);
+bad_slab_pool:
+bad_hash:
+	dm_put_device(ti, vc->hash_dev);
+bad_hash_dev:
+bad_hash_start:
+	dm_put_device(ti, vc->dev);
+bad_vt:
+bad_digest:
+bad_verity_dev:
+	kfree(vc);   /* hash is not secret so no need to zero */
+	return -EINVAL;
+}
+
+static void verity_dtr(struct dm_target *ti)
+{
+	struct verity_config *vc = ti->private;
+
+	bioset_free(vc->bs);
+	mempool_destroy(vc->io_pool);
+	verity_tree_destroy(&vc->vt);
+	dm_put_device(ti, vc->hash_dev);
+	dm_put_device(ti, vc->dev);
+	kfree(vc);
+}
+
+static int verity_ioctl(struct dm_target *ti, unsigned int cmd,
+			unsigned long arg)
+{
+	struct verity_config *vc = ti->private;
+	struct dm_dev *dev = vc->dev;
+	int r = 0;
+
+	/*
+	 * Only pass ioctls through if the device sizes match exactly.
+	 */
+	if (vc->start ||
+	    ti->len != i_size_read(dev->bdev->bd_inode) >> SECTOR_SHIFT)
+		r = scsi_verify_blk_ioctl(NULL, cmd);
+
+	return r ? : __blkdev_driver_ioctl(dev->bdev, dev->mode, cmd, arg);
+}
+
+static int verity_status(struct dm_target *ti, status_type_t type,
+			char *result, unsigned int maxlen)
+{
+	struct verity_config *vc = ti->private;
+	char digest[VERITY_MAX_DIGEST_SIZE * 2 + 1] = { 0 };
+	char salt[VERITY_SALT_SIZE * 2 + 1] = { 0 };
+	unsigned int sz = 0;
+
+	verity_tree_digest(&vc->vt, digest);
+	verity_tree_salt(&vc->vt, salt);
+
+	switch (type) {
+	case STATUSTYPE_INFO:
+		result[0] = '\0';
+		break;
+	case STATUSTYPE_TABLE:
+		/* Emit version first (only 0 exists) so the table line
+		 * matches the ctr arguments.
+		 */
+		DMEMIT("0 %s %s %llu %llu %s %s %s",
+		       vc->dev->name,
+		       vc->hash_dev->name,
+		       ULL(vc->hash_start),
+		       ULL(vc->vt.block_size),
+		       vc->hash_alg,
+		       digest,
+		       salt);
+		break;
+	}
+	return 0;
+}
+
+static int verity_merge(struct dm_target *ti, struct bvec_merge_data *bvm,
+		       struct bio_vec *biovec, int max_size)
+{
+	struct verity_config *vc = ti->private;
+	struct request_queue *q = bdev_get_queue(vc->dev->bdev);
+
+	if (!q->merge_bvec_fn)
+		return max_size;
+
+	bvm->bi_bdev = vc->dev->bdev;
+	bvm->bi_sector = vc->start + bvm->bi_sector - ti->begin;
+
+	/* Optionally, this could just return 0 to stick to single pages. */
+	return min(max_size, q->merge_bvec_fn(q, bvm, biovec));
+}
+
+static int verity_iterate_devices(struct dm_target *ti,
+				 iterate_devices_callout_fn fn, void *data)
+{
+	struct verity_config *vc = ti->private;
+
+	return fn(ti, vc->dev, vc->start, ti->len, data);
+}
+
+static void verity_io_hints(struct dm_target *ti,
+			    struct queue_limits *limits)
+{
+	struct verity_config *vc = ti->private;
+	unsigned int block_size = vc->vt.block_size;
+
+	limits->logical_block_size = block_size;
+	limits->physical_block_size = block_size;
+	blk_limits_io_min(limits, block_size);
+}
+
+static struct target_type verity_target = {
+	.name   = "verity",
+	.version = {0, 1, 0},
+	.module = THIS_MODULE,
+	.ctr    = verity_ctr,
+	.dtr    = verity_dtr,
+	.ioctl  = verity_ioctl,
+	.map    = verity_map,
+	.merge  = verity_merge,
+	.status = verity_status,
+	.iterate_devices = verity_iterate_devices,
+	.io_hints = verity_io_hints,
+};
+
+#define VERITY_WQ_FLAGS (WQ_CPU_INTENSIVE|WQ_HIGHPRI)
+
+static int __init verity_init(void)
+{
+	int r = -ENOMEM;
+
+	_verity_io_pool = KMEM_CACHE(verity_io, 0);
+	if (!_verity_io_pool) {
+		DMERR("failed to allocate pool verity_io");
+		goto bad_io_pool;
+	}
+
+	kverityd_ioq = alloc_workqueue("kverityd_io", VERITY_WQ_FLAGS, 1);
+	if (!kverityd_ioq) {
+		DMERR("failed to create workqueue kverityd_ioq");
+		goto bad_io_queue;
+	}
+
+	kveritydq = alloc_workqueue("kverityd", VERITY_WQ_FLAGS, 1);
+	if (!kveritydq) {
+		DMERR("failed to create workqueue kveritydq");
+		goto bad_verify_queue;
+	}
+
+	r = dm_register_target(&verity_target);
+	if (r < 0) {
+		DMERR("register failed %d", r);
+		goto register_failed;
+	}
+
+	DMINFO("version %u.%u.%u loaded", verity_target.version[0],
+	       verity_target.version[1], verity_target.version[2]);
+
+	return r;
+
+register_failed:
+	destroy_workqueue(kveritydq);
+bad_verify_queue:
+	destroy_workqueue(kverityd_ioq);
+bad_io_queue:
+	kmem_cache_destroy(_verity_io_pool);
+bad_io_pool:
+	return r;
+}
+
+static void __exit verity_exit(void)
+{
+	int cpu;
+
+	flush_workqueue(kverityd_ioq);
+	flush_workqueue(kveritydq);
+
+	for_each_possible_cpu(cpu)
+		kfree(__get_cpu_var(verity_hash_desc));
+
+	destroy_workqueue(kveritydq);
+	destroy_workqueue(kverityd_ioq);
+
+	dm_unregister_target(&verity_target);
+	kmem_cache_destroy(_verity_io_pool);
+}
+
+module_init(verity_init);
+module_exit(verity_exit);
+
+MODULE_AUTHOR("The Chromium OS Authors <chromium-os-dev@chromium.org>");
+MODULE_DESCRIPTION(DM_NAME " target for transparent disk integrity checking");
+MODULE_LICENSE("GPL");
-- 
1.7.7.3



* [PATCH] dm: verity target
  2012-03-02  0:33 [PATCH] dm: verity target Mandeep Singh Baines
@ 2012-03-02 16:08 ` Mandeep Singh Baines
  2012-03-04 19:18 ` [PATCH] dm: remake of the " Mikulas Patocka
  1 sibling, 0 replies; 34+ messages in thread
From: Mandeep Singh Baines @ 2012-03-02 16:08 UTC (permalink / raw)
  To: Alasdair G Kergon, dm-devel, linux-kernel, Andrew Morton,
	Mikulas Patocka
  Cc: Mandeep Singh Baines, Will Drewry, Elly Jones, Milan Broz,
	Olof Johansson, Steffen Klassert

The verity target provides transparent integrity checking of block devices
using a cryptographic digest.

dm-verity is meant to be set up as part of a verified boot path.  This
may be anything ranging from a boot using tboot or trustedgrub to just
booting from a known-good device (like a USB drive or CD).

dm-verity is part of ChromeOS's verified boot path. It is used to verify
the integrity of the root filesystem on boot. The root filesystem is
mounted on a dm-verity partition which transparently verifies each block
with a bootloader verified hash passed into the kernel at boot.

Changes in V6:
  * Fixed bug in rmmod. Was freeing the same object NR_CPUS times.
  * Fixed example in documentation.
Changes in V5:
* https://lkml.org/lkml/2012/2/29/421 (Mikulas Patocka)
  * Fixed off-by-one error.
  * Added support for filesystems bigger than 4G (bug fix).
* https://lkml.org/lkml/2012/2/29/426 (Andrew Morton)
  * Fixed checkpatch errors/warning.
  * Made code cpu-hotplug-aware.
  * Remove NULL check before calling kfree.
  * No longer checking __GFP_WAIT allocations.
  * Propagate io->error instead of always EIO.
  * Remove unneeded and undesirable casts of void.
  * Use DMERR_LIMIT on io errors to avoid spamming dmesg.
  * Flush workqueue on rmmod.
Changes in V4:
* Discussion over phone (Alasdair G Kergon)
 * copy _ioctl fix from dm-linear
 * verity_status format fixes to match dm conventions
 * s/dm-bht/verity_tree
 * put everything into dm-verity.c
 * ctr changed to dm conventions
 * use hex2bin
 * use conventional dm names for functions
  * s/dm_//
  * for example: verity_ctr versus dm_verity_ctr
 * use per_cpu API
Changes in V3:
* Discussion over irc (Alasdair G Kergon)
  * Implement ioctl hook
Changes in V2:
* https://lkml.org/lkml/2011/11/10/85 (Steffen Klassert)
  * Use shash API instead of older hash API

Signed-off-by: Will Drewry <wad@chromium.org>
Signed-off-by: Elly Jones <ellyjones@chromium.org>
Signed-off-by: Mandeep Singh Baines <msb@chromium.org>
Cc: Alasdair G Kergon <agk@redhat.com>
Cc: Milan Broz <mbroz@redhat.com>
Cc: Olof Johansson <olofj@chromium.org>
Cc: Steffen Klassert <steffen.klassert@secunet.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Mikulas Patocka <mpatocka@redhat.com>
Cc: dm-devel@redhat.com
---
 Documentation/device-mapper/verity.txt |  151 ++++
 drivers/md/Kconfig                     |   16 +
 drivers/md/Makefile                    |    1 +
 drivers/md/dm-verity.c                 | 1366 ++++++++++++++++++++++++++++++++
 4 files changed, 1534 insertions(+), 0 deletions(-)
 create mode 100644 Documentation/device-mapper/verity.txt
 create mode 100644 drivers/md/dm-verity.c

diff --git a/Documentation/device-mapper/verity.txt b/Documentation/device-mapper/verity.txt
new file mode 100644
index 0000000..6729102
--- /dev/null
+++ b/Documentation/device-mapper/verity.txt
@@ -0,0 +1,151 @@
+dm-verity
+==========
+
+Device-Mapper's "verity" target provides transparent integrity checking of
+block devices using a cryptographic digest provided by the kernel crypto API.
+This target is read-only.
+
+Parameters:
+    <version> <dev> <hash_dev> <hash_start> <block_size> <alg> <digest> <salt>
+
+<version>
+    This is the version number of the on-disk format. Currently, there is
+    only version 0.
+
+<dev>
+    This is the device that is going to be integrity checked.  It may be
+    a subset of the full device as specified to dmsetup (start sector and count).
+    It may be specified as a path, like /dev/sdaX, or a device number,
+    <major>:<minor>.
+
+<hash_dev>
+    This is the device that supplies the hash tree data.  It may be
+    specified similarly to the device path and may be the same device.  If the
+    same device is used, the hash offset should be outside of the dm-verity
+    configured device size.
+
+<hash_start>
+    This is the offset, in 512-byte sectors, from the start of hash_dev to
+    the root block of the hash tree.
+
+<block_size>
+    The size of a hash block. Also, the size of a block to be hashed.
+
+<alg>
+    The cryptographic hash algorithm used for this device.  This should
+    be the name of the algorithm, like "sha1".
+
+<digest>
+    The hexadecimal encoding of the cryptographic hash of all of the
+    neighboring nodes at the first level of the tree.  This hash should be
+    trusted, as there is no other source of authenticity beyond this point.
+
+<salt>
+    The hexadecimal encoding of the salt value.
+
+Theory of operation
+===================
+
+dm-verity is meant to be set up as part of a verified boot path.  This
+may be anything ranging from a boot using tboot or trustedgrub to just
+booting from a known-good device (like a USB drive or CD).
+
+When a dm-verity device is configured, it is expected that the caller
+has been authenticated in some way (cryptographic signatures, etc).
+After instantiation, all hashes will be verified on-demand during
+disk access.  If they cannot be verified up to the root node of the
+tree, the root hash, then the I/O will fail.  This should identify
+tampering with any data on the device, including the hash data.
+
+Cryptographic hashes are used to assert the integrity of the device on a
+per-block basis.  This allows for a lightweight hash computation on first read
+into the page cache.  Block hashes are stored linearly, aligned to the
+nearest page-sized block.
+
+Hash Tree
+---------
+
+Each node in the tree is a cryptographic hash.  If it is a leaf node, the hash
+is of some block data on disk.  If it is an intermediary node, then the hash is
+of a number of child nodes.
+
+Each entry in the tree is a collection of neighboring nodes that fit in one
+block.  The number is determined by block_size and the size of the
+selected cryptographic digest algorithm.  The hashes are linearly ordered in
+this entry and any unaligned trailing space is ignored but included when
+calculating the parent node.
+
+The tree looks something like:
+
+alg = sha256, num_blocks = 32768, block_size = 4096
+
+                                 [   root    ]
+                                /    . . .    \
+                     [entry_0]                 [entry_1]
+                    /  . . .  \                 . . .   \
+         [entry_0_0]   . . .  [entry_0_127]    . . . .  [entry_1_127]
+           / ... \             /   . . .  \             /           \
+     blk_0 ... blk_127  blk_16256   blk_16383      blk_32640 . . . blk_32767
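+
+As an illustration, here is a minimal user-space sketch (not part of the
+kernel driver; it merely mirrors the driver's geometry calculation) that
+derives the node count, tree depth and per-level entry counts for the
+example above:
+
+  #include <stdio.h>
+
+  static int fls64_(unsigned long long x)
+  {
+          int r = 0;
+          while (x) { x >>= 1; r++; }
+          return r;
+  }
+
+  int main(void)
+  {
+          unsigned block_size = 4096, digest_size = 32;  /* sha256 */
+          unsigned long long last = 32768 - 1;  /* num_blocks - 1 */
+          /* hashes per entry, rounded down to a power of two */
+          int shift = fls64_(block_size / digest_size) - 1;
+          /* DIV_ROUND_UP(fls64(last), shift) */
+          int depth = (fls64_(last) + shift - 1) / shift, d;
+
+          printf("node_count=%u depth=%d\n", 1u << shift, depth);
+          for (d = 0; d < depth; d++)  /* depth 0 is the root entry */
+                  printf("level %d: %llu entries\n", d,
+                         (last >> ((depth - d) * shift)) + 1);
+          return 0;
+  }
+
+This prints node_count=128 and depth=3, with 1, 2 and 256 entries at
+levels 0 through 2, matching the diagram.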
+
+On-disk format
+==============
+
+Below is the recommended on-disk format. The verity kernel code does not
+read the on-disk header. It only reads the hash blocks which directly
+follow the header. It is expected that a user-space tool will verify the
+integrity of the verity_header and then call dmsetup with the correct
+parameters. Alternatively, the header can be omitted and the dmsetup
+parameters can be passed via the kernel command line in a rooted chain
+of trust where the command line is verified.
+
+The on-disk format is especially useful in cases where the hash blocks
+are on a separate partition. The magic number allows easy identification
+of the partition contents. Alternatively, the hash blocks can be stored
+in the same partition as the data to be verified. In such a configuration
+the filesystem on the partition would be sized a little smaller than
+the full partition, leaving room for the hash blocks.
+
+struct verity_header {
+       uint64_t magic = 0x7665726974790a00;
+       uint32_t version;
+       uint32_t block_size;
+       char digest[128]; /* in hex-ascii, null-terminated or 128 bytes */
+       char salt[128]; /* in hex-ascii, null-terminated or 128 bytes */
+}
+
+struct verity_header_block {
+       struct verity_header;
+       char unused[block_size - sizeof(struct verity_header) - sizeof(sig)];
+       char sig[128]; /* in hex-ascii, null-terminated or 128 bytes */
+}
+
+Directly following the header are the hash blocks which are stored a depth
+at a time (starting from the root), sorted in order of increasing index.
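+
+For example, with the geometry from the sketch above (depth 3, 4096-byte
+blocks), the root hash block occupies 512-byte sectors [0, 8) relative to
+<hash_start>, the two level-1 blocks occupy sectors [8, 24), and the 256
+level-2 blocks occupy sectors [24, 2072).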
+
+Usage
+=====
+
+The API provides mechanisms for reading and verifying a tree. When reading, all
+required data for the hash tree should be populated for a block before
+attempting a verify.  This can be done by calling verity_tree_populate().  When
+all data is ready, a call to verity_tree_verify_block() with the expected hash
+value will perform both the direct block hash check and the hashes of the
+parent and neighboring nodes where needed to ensure validity up to the root
+hash.  Note, verity_tree_set_digest() should be called before any verification
+attempts occur.
+
+Example
+=======
+
+Set up a device:
+[[
+  dmsetup create vroot --readonly --table \
+    "0 `blockdev --getsize /dev/sda1` "\
+    "verity 0 /dev/sda1 /dev/sda2 0 4096 sha256 "\
+    "4392712ba01368efdf14b05c76f9e4df0d53664630b5d48632ed17a137f39076 "\
+    "1234000000000000000000000000000000000000000000000000000000000000"
+]]
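+
+The digest above is 64 hex characters because sha256 produces a 32-byte
+digest; the salt is taken as up to 32 bytes of hex and zero-padded by the
+kernel if shorter.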
+
+A command line tool is available to compute the hash tree and return the
+root hash value.
+  http://git.chromium.org/cgi-bin/gitweb.cgi?p=dm-verity.git;a=tree
diff --git a/drivers/md/Kconfig b/drivers/md/Kconfig
index faa4741..b8bb690 100644
--- a/drivers/md/Kconfig
+++ b/drivers/md/Kconfig
@@ -370,4 +370,20 @@ config DM_FLAKEY
        ---help---
          A target that intermittently fails I/O for debugging purposes.
 
+config DM_VERITY
+	tristate "Verity target support"
+	depends on BLK_DEV_DM
+	select CRYPTO
+	select CRYPTO_HASH
+	---help---
+	  This device-mapper target allows you to create a device that
+	  transparently integrity checks the data on it. You'll need to
+	  activate the digests you're going to use in the cryptoapi
+	  configuration.
+
+	  To compile this code as a module, choose M here: the module will
+	  be called dm-verity.
+
+	  If unsure, say N.
+
 endif # MD
diff --git a/drivers/md/Makefile b/drivers/md/Makefile
index 046860c..70a29af 100644
--- a/drivers/md/Makefile
+++ b/drivers/md/Makefile
@@ -39,6 +39,7 @@ obj-$(CONFIG_DM_SNAPSHOT)	+= dm-snapshot.o
 obj-$(CONFIG_DM_PERSISTENT_DATA)	+= persistent-data/
 obj-$(CONFIG_DM_MIRROR)		+= dm-mirror.o dm-log.o dm-region-hash.o
 obj-$(CONFIG_DM_LOG_USERSPACE)	+= dm-log-userspace.o
+obj-$(CONFIG_DM_VERITY)	+= dm-verity.o
 obj-$(CONFIG_DM_ZERO)		+= dm-zero.o
 obj-$(CONFIG_DM_RAID)	+= dm-raid.o
 obj-$(CONFIG_DM_THIN_PROVISIONING)	+= dm-thin-pool.o
diff --git a/drivers/md/dm-verity.c b/drivers/md/dm-verity.c
new file mode 100644
index 0000000..3f9fed9
--- /dev/null
+++ b/drivers/md/dm-verity.c
@@ -0,0 +1,1366 @@
+/*
+ * Originally based on dm-crypt.c,
+ * Copyright (C) 2003 Christophe Saout <christophe@saout.de>
+ * Copyright (C) 2004 Clemens Fruhwirth <clemens@endorphin.org>
+ * Copyright (C) 2006-2008 Red Hat, Inc. All rights reserved.
+ * Copyright (C) 2012 The Chromium OS Authors <chromium-os-dev@chromium.org>
+ *                    All Rights Reserved.
+ *
+ * This file is released under the GPLv2.
+ *
+ * Implements a verifying transparent block device.
+ * See Documentation/device-mapper/verity.txt
+ */
+#include <crypto/hash.h>
+#include <linux/atomic.h>
+#include <linux/bio.h>
+#include <linux/blkdev.h>
+#include <linux/genhd.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/mempool.h>
+#include <linux/module.h>
+#include <linux/percpu.h>
+#include <linux/workqueue.h>
+#include <linux/device-mapper.h>
+
+
+#define DM_MSG_PREFIX "verity"
+
+
+/* Helper for printing sector_t */
+#define ULL(x) ((unsigned long long)(x))
+
+#define MIN_IOS 32
+#define MIN_BIOS (MIN_IOS * 2)
+
+/* To avoid allocating memory for digest tests, we just set up a
+ * max to use for now.
+ */
+#define VERITY_MAX_DIGEST_SIZE 64   /* Supports up to 512-bit digests */
+#define VERITY_SALT_SIZE       32   /* 256 bits of salt is a lot */
+
+/* UNALLOCATED, PENDING, READY, and VERIFIED are valid states. All other
+ * values are entry-related return codes.
+ */
+#define VERITY_TREE_ENTRY_VERIFIED 8  /* 'nodes' checked against parent */
+#define VERITY_TREE_ENTRY_READY 4  /* 'nodes' is loaded and available */
+#define VERITY_TREE_ENTRY_PENDING 2  /* 'nodes' is being loaded */
+#define VERITY_TREE_ENTRY_UNALLOCATED 0 /* untouched */
+#define VERITY_TREE_ENTRY_ERROR -1 /* entry is unsuitable for use */
+#define VERITY_TREE_ENTRY_ERROR_IO -2 /* I/O error on load */
+
+/* Additional possible return codes */
+#define VERITY_TREE_ENTRY_ERROR_MISMATCH -3 /* Digest mismatch */
+
+
+struct verity_io {
+	struct dm_target *target;
+	struct bio *bio;
+	struct delayed_work work;
+	unsigned int flags;
+
+	int error;
+	atomic_t pending;
+
+	u64 block;  /* aligned block index */
+	u64 count;  /* aligned count in blocks */
+};
+
+/* verity_tree_entry
+ * Contains verity_tree->node_count tree nodes at a given tree depth.
+ * state is used to transactionally assure that data is paged in
+ * from disk.  Since verity_tree does not keep running crypto contexts for
+ * each level, the data must be loaded for on-demand verification.
+ */
+struct verity_tree_entry {
+	atomic_t state; /* see defines */
+	/* Keeping an extra pointer per entry wastes up to ~33kB of
+	 * memory if 1M blocks are used (or ~66kB on a 64-bit arch)
+	 */
+	struct verity_io *io_context;  /* Reserve a pointer for use during io */
+	/* data should only be non-NULL if fully populated. */
+	void *nodes;  /* The hash data used to verify the children.
+		       * Guaranteed to be page-aligned.
+		       */
+};
+
+/* verity_tree_level
+ * Contains an array of entries which represent a page of hashes where
+ * each hash is a node in the tree at the given tree depth/level.
+ */
+struct verity_tree_level {
+	struct verity_tree_entry *entries;  /* array of entries of tree nodes */
+	unsigned int count;  /* number of entries at this level */
+	sector_t sector;  /* starting sector for this level */
+};
+
+/* opaque context, start, databuf, sector_count */
+typedef int (*verity_tree_callback)(void *,  /* external context */
+			      sector_t,  /* start sector */
+			      u8 *,  /* destination page */
+			      sector_t,  /* num sectors */
+			      struct verity_tree_entry *);
+/* verity_tree - Device mapper block hash tree
+ * verity_tree provides a fixed interface for comparing data blocks
+ * against cryptographic hashes stored in a hash tree. It
+ * optimizes the tree structure for storage on disk.
+ *
+ * The tree is built from the bottom up.  A collection of data,
+ * external to the tree, is hashed and these hashes are stored
+ * as the blocks in the tree.  For some number of these hashes,
+ * a parent node is created by hashing them.  These steps are
+ * repeated.
+ */
+struct verity_tree {
+	/* Configured values */
+	int depth;  /* Depth of the tree including the root */
+	unsigned int block_size;  /* Size of a hash block */
+	u64 block_count;  /* Number of blocks hashed */
+	char hash_alg[CRYPTO_MAX_ALG_NAME];
+	u8 salt[VERITY_SALT_SIZE];
+
+	/* Computed values */
+	unsigned int node_count;  /* Data size (in hashes) for each entry */
+	unsigned int node_count_shift;  /* log2(node_count) */
+	struct crypto_shash *tfm; /* hash for this device */
+	unsigned int hash_desc_size;
+	sector_t sectors;  /* Number of disk sectors used */
+	u8 digest[VERITY_MAX_DIGEST_SIZE];
+	unsigned int digest_size;
+
+	struct verity_tree_level *levels;
+
+	/* Callback for reading from the hash device */
+	verity_tree_callback read_cb;
+};
+
+/* per-requested-bio private data */
+enum verity_io_flags {
+	VERITY_IOFLAGS_CLONED = 0x1,	/* original bio has been cloned */
+};
+
+struct verity_config {
+	struct dm_dev *dev;
+	sector_t start;
+	sector_t size;
+
+	struct dm_dev *hash_dev;
+	sector_t hash_start;
+
+	struct verity_tree vt;
+
+	/* Pool required for io contexts */
+	mempool_t *io_pool;
+	/* Pool and bios required for making sure that backing device reads are
+	 * in PAGE_SIZE increments.
+	 */
+	struct bio_set *bs;
+
+	char hash_alg[CRYPTO_MAX_ALG_NAME];
+};
+
+
+static struct kmem_cache *_verity_io_pool;
+static struct workqueue_struct *kveritydq, *kverityd_ioq;
+
+static DEFINE_PER_CPU(struct shash_desc *, verity_hash_desc);
+static DEFINE_PER_CPU(unsigned int, verity_hash_size);
+
+static void kverityd_verify(struct work_struct *work);
+static void kverityd_io(struct work_struct *work);
+static void kverityd_io_vt_populate(struct verity_io *io);
+static void kverityd_io_vt_populate_end(struct bio *, int error);
+
+
+/*
+ * Utilities
+ */
+
+static void bin2hex(char *dst, const u8 *src, size_t count)
+{
+	while (count-- > 0) {
+		sprintf(dst, "%02hhx", *src);
+		dst += 2;
+		src++;
+	}
+}
+
+/*
+ * Verity Tree
+ */
+
+/* Functions for converting indices to nodes. */
+
+static inline unsigned int verity_tree_get_level_shift(struct verity_tree *vt,
+						  int depth)
+{
+	return (vt->depth - depth) * vt->node_count_shift;
+}
+
+/* For the given depth, this is the entry index.  At depth+1 it is the node
+ * index for depth.
+ */
+static inline u64 verity_tree_index_at_level(struct verity_tree *vt,
+					     int depth, u64 leaf)
+{
+	return leaf >> verity_tree_get_level_shift(vt, depth);
+}
+
+static inline struct verity_tree_entry *verity_tree_get_entry(
+		struct verity_tree *vt,
+		int depth, u64 block)
+{
+	u64 index = verity_tree_index_at_level(vt, depth, block);
+	struct verity_tree_level *level = &vt->levels[depth];
+
+	return &level->entries[index];
+}
+
+static inline void *verity_tree_get_node(struct verity_tree *vt,
+					 struct verity_tree_entry *entry,
+					 int depth, unsigned int block)
+{
+	u64 index = verity_tree_index_at_level(vt, depth, block);
+	unsigned int node_index = (unsigned int)index % vt->node_count;
+
+	return entry->nodes + (node_index * vt->digest_size);
+}
+
+/**
+ * verity_tree_compute_hash - hashes one block of data followed by the salt
+ */
+static int verity_tree_compute_hash(struct verity_tree *vt, struct page *pg,
+				    unsigned int offset, u8 *digest)
+{
+	struct shash_desc **hash_descp = &__get_cpu_var(verity_hash_desc);
+	unsigned int *hash_sizep = &__get_cpu_var(verity_hash_size);
+	struct shash_desc *hash_desc;
+	void *data;
+	int err;
+
+	if (!*hash_descp || *hash_sizep < vt->hash_desc_size) {
+		kfree(*hash_descp);
+		*hash_descp = kmalloc(vt->hash_desc_size, GFP_KERNEL);
+		if (!*hash_descp)
+			return -ENOMEM;
+		*hash_sizep = vt->hash_desc_size;
+	}
+	hash_desc = *hash_descp;
+	hash_desc->tfm = vt->tfm;
+	hash_desc->flags = 0x0;
+
+	if (crypto_shash_init(hash_desc)) {
+		DMCRIT("failed to reinitialize crypto hash (proc:%d)",
+			smp_processor_id());
+		return -EINVAL;
+	}
+	data = kmap_atomic(pg);
+	err = crypto_shash_update(hash_desc, data + offset, vt->block_size);
+	kunmap_atomic(data);
+	if (err) {
+		DMCRIT("crypto_hash_update failed");
+		return -EINVAL;
+	}
+	if (crypto_shash_update(hash_desc, vt->salt, sizeof(vt->salt))) {
+		DMCRIT("crypto_hash_update failed");
+		return -EINVAL;
+	}
+	if (crypto_shash_final(hash_desc, digest)) {
+		DMCRIT("crypto_hash_final failed");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int verity_tree_initialize_entries(struct verity_tree *vt)
+{
+	/* last represents the index of the last digest stored in the tree.
+	 * By walking the tree with that index, it is possible to compute the
+	 * total number of entries at each level.
+	 *
+	 * Since each entry will contain up to |node_count| nodes of the tree,
+	 * it is possible that the last index may not be at the end of a given
+	 * entry->nodes.  In that case, it is assumed the value is padded.
+	 *
+	 * Note, we treat both the tree root (1 hash) and the tree leaves
+	 * independently from the vt data structures.  Logically, the root is
+	 * depth=-1 and the block layer level is depth=vt->depth
+	 */
+	u64 last = vt->block_count - 1;
+	int depth;
+
+	/* check that the largest level->count can't result in an int overflow
+	 * on allocation or sector calculation.
+	 */
+	if (((last >> vt->node_count_shift) + 1) >
+	    UINT_MAX / max_t(unsigned long,
+			     sizeof(struct verity_tree_entry),
+			     (unsigned long)to_sector(vt->block_size))) {
+		DMCRIT("required entries %llu is too large", vt->block_count);
+		return -EINVAL;
+	}
+
+	/* Track the current sector location for each level so we don't have to
+	 * compute it during traversals.
+	 */
+	vt->sectors = 0;
+	for (depth = 0; depth < vt->depth; ++depth) {
+		struct verity_tree_level *level = &vt->levels[depth];
+
+		level->count = verity_tree_index_at_level(vt, depth, last) + 1;
+		level->entries = kcalloc(level->count,
+					 sizeof(struct verity_tree_entry),
+					 GFP_KERNEL);
+		if (!level->entries) {
+			DMERR("failed to allocate entries for depth %d", depth);
+			return -ENOMEM;
+		}
+		level->sector = vt->sectors;
+		vt->sectors += level->count * to_sector(vt->block_size);
+	}
+
+	return 0;
+}
+
+/**
+ * verity_tree_create - prepares @vt for use
+ * @vt:	          pointer to the verity_tree to initialize
+ * @block_count:  the number of block hashes / tree leaves
+ * @block_size:   size in bytes of a hash block (and of a data block)
+ * @alg_name:	  crypto hash algorithm name
+ *
+ * Returns 0 on success.
+ *
+ * Callers can offset into devices by storing the data in the io callbacks.
+ */
+static int verity_tree_create(struct verity_tree *vt, u64 block_count,
+			      unsigned int block_size, const char *alg_name)
+{
+	int status = 0;
+
+	vt->block_size = block_size;
+	/* Verify that PAGE_SIZE >= block_size >= SECTOR_SIZE. */
+	if ((block_size > PAGE_SIZE) ||
+	    (PAGE_SIZE % block_size) ||
+	    (to_sector(block_size) == 0))
+		return -EINVAL;
+
+	vt->tfm = crypto_alloc_shash(alg_name, 0, 0);
+	if (IS_ERR(vt->tfm)) {
+		DMERR("failed to allocate crypto hash '%s'", alg_name);
+		return -ENOMEM;
+	}
+	vt->hash_desc_size = sizeof(struct shash_desc) +
+		crypto_shash_descsize(vt->tfm);
+
+	vt->digest_size = crypto_shash_digestsize(vt->tfm);
+	/* We expect to be able to pack >=2 hashes into a block */
+	if (block_size / vt->digest_size < 2) {
+		DMERR("too few hashes fit in a block");
+		status = -EINVAL;
+		goto bad_arg;
+	}
+
+	if (vt->digest_size > VERITY_MAX_DIGEST_SIZE) {
+		DMERR("VERITY_MAX_DIGEST_SIZE too small for digest");
+		status = -EINVAL;
+		goto bad_arg;
+	}
+
+	/* Configure the tree */
+	vt->block_count = block_count;
+	if (block_count == 0) {
+		DMERR("block_count must be non-zero");
+		status = -EINVAL;
+		goto bad_arg;
+	}
+
+	/* Each verity_tree_entry->nodes is one block.  The node code tracks
+	 * how many nodes fit into one entry where a node is a single
+	 * hash (message digest).
+	 */
+	vt->node_count_shift = fls(block_size / vt->digest_size) - 1;
+	/* Round down to the nearest power of two.  This makes indexing
+	 * into the tree much less painful.
+	 */
+	vt->node_count = 1 << vt->node_count_shift;
+
+	/* This is unlikely to happen, but with 64k pages, who knows. */
+	if (vt->node_count > UINT_MAX / vt->digest_size) {
+		DMERR("node_count * hash_len exceeds UINT_MAX!");
+		status = -EINVAL;
+		goto bad_arg;
+	}
+
+	vt->depth = DIV_ROUND_UP(fls64(block_count - 1), vt->node_count_shift);
+
+	/* Ensure that we can safely shift by this value. */
+	if (vt->depth * vt->node_count_shift >= sizeof(unsigned int) * 8) {
+		DMERR("specified depth and node_count_shift is too large");
+		status = -EINVAL;
+		goto bad_arg;
+	}
+
+	/* Allocate levels. Each level of the tree may have an arbitrary number
+	 * of verity_tree_entry structs.  Each entry contains node_count nodes.
+	 * Each node in the tree is a cryptographic digest of either node_count
+	 * nodes on the subsequent level or of a specific block on disk.
+	 */
+	vt->levels = kcalloc(vt->depth,
+			     sizeof(struct verity_tree_level), GFP_KERNEL);
+	if (!vt->levels) {
+		status = -ENOMEM;
+		goto bad_arg;
+	}
+
+	vt->read_cb = NULL;
+
+	status = verity_tree_initialize_entries(vt);
+	if (status)
+		goto bad_entries_alloc;
+
+	/* We compute depth such that there is only 1 block at level 0. */
+	BUG_ON(vt->levels[0].count != 1);
+
+	return 0;
+
+bad_entries_alloc:
+	while (vt->depth-- > 0)
+		kfree(vt->levels[vt->depth].entries);
+	kfree(vt->levels);
+bad_arg:
+	crypto_free_shash(vt->tfm);
+	return status;
+}
+
+/**
+ * verity_tree_read_completed
+ * @entry:   pointer to the entry that's been loaded
+ * @status:  I/O status. Non-zero is failure.
+ * MUST always be called after a read_cb completes.
+ */
+static void verity_tree_read_completed(struct verity_tree_entry *entry,
+				       int status)
+{
+	if (status) {
+		DMCRIT("an I/O error occurred while reading entry");
+		atomic_set(&entry->state, VERITY_TREE_ENTRY_ERROR_IO);
+		return;
+	}
+	BUG_ON(atomic_read(&entry->state) != VERITY_TREE_ENTRY_PENDING);
+	atomic_set(&entry->state, VERITY_TREE_ENTRY_READY);
+}
+
+/**
+ * verity_tree_verify_block - checks that all path nodes for @block are valid
+ * @vt:	     pointer to a verity_tree_create()d vt
+ * @block:   specific block data is expected from
+ * @pg:	     page holding the block data
+ * @offset:  offset into the page
+ *
+ * Returns 0 on success, VERITY_TREE_ENTRY_ERROR_MISMATCH on error.
+ */
+static int verity_tree_verify_block(struct verity_tree *vt, unsigned int block,
+				    struct page *pg, unsigned int offset)
+{
+	int state, depth = vt->depth;
+	u8 digest[VERITY_MAX_DIGEST_SIZE];
+	struct verity_tree_entry *entry;
+	void *node;
+
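+	/* Walk from the leaf entry toward the root, checking each level's
+	 * hash block against its parent.  Stop early once an entry on the
+	 * path is already VERIFIED, as everything above it was checked
+	 * on a previous request.
+	 */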
+	do {
+		/* Need to check that the hash of the current block is accurate
+		 * in its parent.
+		 */
+		entry = verity_tree_get_entry(vt, depth - 1, block);
+		state = atomic_read(&entry->state);
+		/* This call is only safe if all nodes along the path
+		 * are already populated (i.e. READY) via verity_tree_populate.
+		 */
+		BUG_ON(state < VERITY_TREE_ENTRY_READY);
+		node = verity_tree_get_node(vt, entry, depth, block);
+
+		if (verity_tree_compute_hash(vt, pg, offset, digest) ||
+		    memcmp(digest, node, vt->digest_size))
+			goto mismatch;
+
+		/* Keep the containing block of hashes to be verified in the
+		 * next pass.
+		 */
+		pg = virt_to_page(entry->nodes);
+		offset = offset_in_page(entry->nodes);
+	} while (--depth > 0 && state != VERITY_TREE_ENTRY_VERIFIED);
+
+	if (depth == 0 && state != VERITY_TREE_ENTRY_VERIFIED) {
+		if (verity_tree_compute_hash(vt, pg, offset, digest) ||
+		    memcmp(digest, vt->digest, vt->digest_size))
+			goto mismatch;
+		atomic_set(&entry->state, VERITY_TREE_ENTRY_VERIFIED);
+	}
+
+	/* Mark path to leaf as verified. */
+	for (depth++; depth < vt->depth; depth++) {
+		entry = verity_tree_get_entry(vt, depth, block);
+		/* At this point, entry can only be in VERIFIED or READY state.
+		 * So it is safe to use atomic_set instead of atomic_cmpxchg.
+		 */
+		atomic_set(&entry->state, VERITY_TREE_ENTRY_VERIFIED);
+	}
+
+	return 0;
+
+mismatch:
+	DMERR_LIMIT("verify_path: failed to verify hash (d=%d,bi=%u)",
+		    depth, block);
+	return VERITY_TREE_ENTRY_ERROR_MISMATCH;
+}
+
+/**
+ * verity_tree_is_populated - check that nodes needed to verify a given
+ *                            block are all ready
+ * @vt:	    pointer to a verity_tree_create()d vt
+ * @block:  specific block data is expected from
+ *
+ * Callers may wish to call verity_tree_is_populated() when checking an io
+ * for which entries were already pending.
+ */
+static bool verity_tree_is_populated(struct verity_tree *vt, unsigned int block)
+{
+	int depth;
+
+	for (depth = vt->depth - 1; depth >= 0; depth--) {
+		struct verity_tree_entry *entry;
+		entry = verity_tree_get_entry(vt, depth, block);
+		if (atomic_read(&entry->state) < VERITY_TREE_ENTRY_READY)
+			return false;
+	}
+
+	return true;
+}
+
+/**
+ * verity_tree_populate - reads entries from disk needed to verify a given block
+ * @vt:     pointer to a verity_tree_create()d vt
+ * @ctx:    context used for all read_cb calls on this request
+ * @block:  specific block data is expected from
+ *
+ * Returns negative value on error. Returns 0 on success.
+ */
+static int verity_tree_populate(struct verity_tree *vt, void *ctx,
+				unsigned int block)
+{
+	int depth, state;
+
+	BUG_ON(block >= vt->block_count);
+
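+	/* Claim each entry on the path with UNALLOCATED -> PENDING.  An
+	 * entry that is already VERIFIED implies the rest of the path up
+	 * to the root is verified too; any entry in an error state aborts.
+	 */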
+	for (depth = vt->depth - 1; depth >= 0; --depth) {
+		struct verity_tree_level *level;
+		struct verity_tree_entry *entry;
+		u64 index;
+
+		index = verity_tree_index_at_level(vt, depth, block);
+		level = &vt->levels[depth];
+		entry = verity_tree_get_entry(vt, depth, block);
+		state = atomic_cmpxchg(&entry->state,
+				       VERITY_TREE_ENTRY_UNALLOCATED,
+				       VERITY_TREE_ENTRY_PENDING);
+		if (state == VERITY_TREE_ENTRY_VERIFIED)
+			break;
+		if (state <= VERITY_TREE_ENTRY_ERROR)
+			goto error_state;
+		if (state != VERITY_TREE_ENTRY_UNALLOCATED)
+			continue;
+
+		/* Current entry is claimed for allocation and loading */
+		entry->nodes = kmalloc(vt->block_size, GFP_NOIO);
+		if (!entry->nodes) {
+			atomic_set(&entry->state, VERITY_TREE_ENTRY_ERROR);
+			return -ENOMEM;
+		}
+
+		vt->read_cb(ctx,
+			    level->sector + to_sector(index * vt->block_size),
+			    entry->nodes, to_sector(vt->block_size), entry);
+	}
+
+	return 0;
+
+error_state:
+	DMCRIT("block %u at depth %d is in an error state", block, depth);
+	return -EPERM;
+}
+
+/**
+ * verity_tree_destroy - cleans up all memory used by @vt
+ * @vt:	 pointer to a verity_tree_create()d vt
+ */
+static void verity_tree_destroy(struct verity_tree *vt)
+{
+	int depth;
+
+	for (depth = 0; depth < vt->depth; depth++) {
+		struct verity_tree_entry *entry = vt->levels[depth].entries;
+		struct verity_tree_entry *entry_end = entry +
+			vt->levels[depth].count;
+		for (; entry < entry_end; ++entry)
+			kfree(entry->nodes);
+		kfree(vt->levels[depth].entries);
+	}
+	kfree(vt->levels);
+	crypto_free_shash(vt->tfm);
+}
+
+/*
+ * Verity Tree Accessors
+ */
+
+/**
+ * verity_tree_set_digest - sets an unverified root digest hash from hex
+ * @vt:	     pointer to a verity_tree_create()d vt
+ * @digest:  string containing the digest in hex
+ * Returns non-zero on error.
+ */
+static int verity_tree_set_digest(struct verity_tree *vt, const char *digest)
+{
+	/* Make sure we have at least the bytes expected */
+	if (strnlen(digest, vt->digest_size * 2) != vt->digest_size * 2) {
+		DMERR("root digest length does not match hash algorithm");
+		return -1;
+	}
+	return hex2bin(vt->digest, digest, vt->digest_size);
+}
+
+/**
+ * verity_tree_digest - returns root digest in hex
+ * @vt:	     pointer to a verity_tree_create()d vt
+ * @digest:  buffer to put into, of length VERITY_MAX_DIGEST_SIZE * 2 + 1.
+ */
+static int verity_tree_digest(struct verity_tree *vt, char *digest)
+{
+	bin2hex(digest, vt->digest, vt->digest_size);
+	return 0;
+}
+
+/**
+ * verity_tree_set_salt - sets the salt
+ * @vt:    pointer to a verity_tree_create()d vt
+ * @salt:  string containing the salt in hex
+ * Returns non-zero on error.
+ */
+static int verity_tree_set_salt(struct verity_tree *vt, const char *salt)
+{
+	size_t saltlen = min(strlen(salt) / 2, sizeof(vt->salt));
+	memset(vt->salt, 0, sizeof(vt->salt));
+	return hex2bin(vt->salt, salt, saltlen);
+}
+
+
+/**
+ * verity_tree_salt - returns the salt in hex
+ * @vt:    pointer to a verity_tree_create()d vt
+ * @salt:  buffer to put salt into, of length VERITY_SALT_SIZE * 2 + 1.
+ */
+static int verity_tree_salt(struct verity_tree *vt, char *salt)
+{
+	bin2hex(salt, vt->salt, sizeof(vt->salt));
+	return 0;
+}
+
+/*
+ * Allocation and utility functions
+ */
+
+static void kverityd_src_io_read_end(struct bio *clone, int error);
+
+/* Shared destructor for all internal bios */
+static void verity_bio_destructor(struct bio *bio)
+{
+	struct verity_io *io = bio->bi_private;
+	struct verity_config *vc = io->target->private;
+	bio_free(bio, vc->bs);
+}
+
+static struct bio *verity_alloc_bioset(struct verity_config *vc, gfp_t gfp_mask,
+				       int nr_iovecs)
+{
+	return bio_alloc_bioset(gfp_mask, nr_iovecs, vc->bs);
+}
+
+static struct verity_io *verity_io_alloc(struct dm_target *ti,
+					    struct bio *bio)
+{
+	struct verity_config *vc = ti->private;
+	sector_t sector = bio->bi_sector - ti->begin;
+	struct verity_io *io;
+	u64 tmp;
+
+	io = mempool_alloc(vc->io_pool, GFP_NOIO);
+	io->flags = 0;
+	io->target = ti;
+	io->bio = bio;
+	io->error = 0;
+
+	/* Convert the target-relative sector to a block index */
+	tmp = (u64)to_bytes(1) * sector;
+	do_div(tmp, vc->vt.block_size);
+	io->block = tmp;
+	io->count = bio->bi_size / vc->vt.block_size;
+
+	atomic_set(&io->pending, 0);
+
+	return io;
+}
+
+static struct bio *verity_bio_clone(struct verity_io *io)
+{
+	struct verity_config *vc = io->target->private;
+	struct bio *bio = io->bio;
+	struct bio *clone = verity_alloc_bioset(vc, GFP_NOIO, bio->bi_max_vecs);
+
+	__bio_clone(clone, bio);
+	clone->bi_private = io;
+	clone->bi_end_io  = kverityd_src_io_read_end;
+	clone->bi_bdev    = vc->dev->bdev;
+	clone->bi_sector += vc->start - io->target->begin;
+	clone->bi_destructor = verity_bio_destructor;
+
+	return clone;
+}
+
+/*
+ * Reverse flow of requests into the device.
+ *
+ * (Start at the bottom with verity_map and work your way upward).
+ */
+
+static void verity_inc_pending(struct verity_io *io);
+
+static void verity_return_bio_to_caller(struct verity_io *io)
+{
+	struct verity_config *vc = io->target->private;
+
+	bio_endio(io->bio, io->error);
+	mempool_free(io, vc->io_pool);
+}
+
+/* Check for any missing vt hashes. */
+static bool verity_is_vt_populated(struct verity_io *io)
+{
+	struct verity_config *vc = io->target->private;
+	u64 block;
+
+	for (block = io->block; block < io->block + io->count; ++block)
+		if (!verity_tree_is_populated(&vc->vt, block))
+			return false;
+
+	return true;
+}
+
+/* verity_dec_pending manages the lifetime of all verity_io structs.
+ * Non-bug error handling is centralized through this interface, as is
+ * all passage from workqueue to workqueue.
+ */
+static void verity_dec_pending(struct verity_io *io)
+{
+	if (!atomic_dec_and_test(&io->pending))
+		goto done;
+
+	if (unlikely(io->error))
+		goto io_error;
+
+	/* I/Os that were pending may now be ready */
+	if (verity_is_vt_populated(io)) {
+		INIT_DELAYED_WORK(&io->work, kverityd_verify);
+		queue_delayed_work(kveritydq, &io->work, 0);
+	} else {
+		INIT_DELAYED_WORK(&io->work, kverityd_io);
+		queue_delayed_work(kverityd_ioq, &io->work, HZ/10);
+	}
+
+done:
+	return;
+
+io_error:
+	verity_return_bio_to_caller(io);
+}
+
+/* Walks the data set and computes the hash of the data read from the
+ * untrusted source device.  The computed hash is then passed to verity-tree
+ * for verification.
+ */
+static int verity_verify(struct verity_config *vc,
+			 struct verity_io *io)
+{
+	unsigned int block_size = vc->vt.block_size;
+	struct bio *bio = io->bio;
+	u64 block = io->block;
+	unsigned int idx;
+	int r;
+
+	for (idx = bio->bi_idx; idx < bio->bi_vcnt; idx++) {
+		struct bio_vec *bv = bio_iovec_idx(bio, idx);
+		unsigned int offset = bv->bv_offset;
+		unsigned int len = bv->bv_len;
+
+		BUG_ON(offset % block_size);
+		BUG_ON(len % block_size);
+
+		while (len) {
+			r = verity_tree_verify_block(&vc->vt, block,
+						bv->bv_page, offset);
+			if (r)
+				goto bad_return;
+
+			offset += block_size;
+			len -= block_size;
+			block++;
+			cond_resched();
+		}
+	}
+
+	return 0;
+
+bad_return:
+	/* verity_tree functions aren't expected to return errno friendly
+	 * values.  They are converted here for uniformity.
+	 */
+	if (r > 0) {
+		DMERR("Pending data for block %llu seen at verify", ULL(block));
+		r = -EBUSY;
+	} else {
+		DMERR_LIMIT("Block hash does not match!");
+		r = -EACCES;
+	}
+	return r;
+}
+
+/* Services the verify workqueue */
+static void kverityd_verify(struct work_struct *work)
+{
+	struct delayed_work *dwork = container_of(work, struct delayed_work,
+						  work);
+	struct verity_io *io = container_of(dwork, struct verity_io,
+					    work);
+	struct verity_config *vc = io->target->private;
+
+	io->error = verity_verify(vc, io);
+
+	/* Free up the bio and tag with the return value */
+	verity_return_bio_to_caller(io);
+}
+
+/* Asynchronously called upon the completion of verity-tree I/O. The status
+ * of the operation is passed back to verity-tree and the next steps are
+ * decided by verity_dec_pending.
+ */
+static void kverityd_io_vt_populate_end(struct bio *bio, int error)
+{
+	struct verity_tree_entry *entry = bio->bi_private;
+	struct verity_io *io = entry->io_context;
+
+	/* Tell the tree to atomically update now that we've populated
+	 * the given entry.
+	 */
+	verity_tree_read_completed(entry, error);
+
+	/* Clean up for reuse when reading data to be checked */
+	bio->bi_vcnt = 0;
+	bio->bi_io_vec->bv_offset = 0;
+	bio->bi_io_vec->bv_len = 0;
+	bio->bi_io_vec->bv_page = NULL;
+	/* Restore the private data to I/O so the destructor can be shared. */
+	bio->bi_private = io;
+	bio_put(bio);
+
+	/* We bail but assume the tree has been marked bad. */
+	if (unlikely(error)) {
+		DMERR("Failed to read for sector %llu (%u)",
+		      ULL(io->bio->bi_sector), io->bio->bi_size);
+		io->error = error;
+		/* Pass through the error to verity_dec_pending below */
+	}
+	/* When pending = 0, it will transition to reading real data */
+	verity_dec_pending(io);
+}
+
+/* Called by verity-tree (via verity_tree_populate), this function provides
+ * the message digests to verity-tree that are stored on disk.
+ */
+static int kverityd_vt_read_callback(void *ctx, sector_t start, u8 *dst,
+				      sector_t count,
+				      struct verity_tree_entry *entry)
+{
+	struct verity_io *io = ctx;  /* I/O for this batch */
+	struct verity_config *vc;
+	struct bio *bio;
+
+	vc = io->target->private;
+
+	/* The I/O context is nested inside the entry so that we don't need one
+	 * io context per page read.
+	 */
+	entry->io_context = ctx;
+
+	/* We should only get block-sized requests at present. */
+	verity_inc_pending(io);
+	bio = verity_alloc_bioset(vc, GFP_NOIO, 1);
+	bio->bi_private = entry;
+	bio->bi_idx = 0;
+	bio->bi_size = vc->vt.block_size;
+	bio->bi_sector = vc->hash_start + start;
+	bio->bi_bdev = vc->hash_dev->bdev;
+	bio->bi_end_io = kverityd_io_vt_populate_end;
+	bio->bi_rw = REQ_META;
+	/* Only need to free the bio since the page is managed by vt */
+	bio->bi_destructor = verity_bio_destructor;
+	bio->bi_vcnt = 1;
+	bio->bi_io_vec->bv_offset = offset_in_page(dst);
+	bio->bi_io_vec->bv_len = to_bytes(count);
+	/* dst is guaranteed to be a page_pool allocation */
+	bio->bi_io_vec->bv_page = virt_to_page(dst);
+	/* Track that this I/O is in use.  There should be no risk of the io
+	 * being removed prior since this is called synchronously.
+	 */
+	generic_make_request(bio);
+	return 0;
+}
+
+/* Submits an io request for each missing block of block hashes.
+ * The last one to return will then enqueue this on the io workqueue.
+ */
+static void kverityd_io_vt_populate(struct verity_io *io)
+{
+	struct verity_config *vc = io->target->private;
+	u64 block;
+
+	for (block = io->block; block < io->block + io->count; ++block) {
+		int ret = verity_tree_populate(&vc->vt, io, block);
+
+		if (ret < 0) {
+			/* verity_dec_pending will handle the error case. */
+			io->error = ret;
+			break;
+		}
+	}
+}
+
+/* Asynchronously called upon the completion of I/O issued
+ * from kverityd_src_io_read. verity_dec_pending() acts as
+ * the scheduler/flow manager.
+ */
+static void kverityd_src_io_read_end(struct bio *clone, int error)
+{
+	struct verity_io *io = clone->bi_private;
+
+	if (unlikely(!bio_flagged(clone, BIO_UPTODATE) && !error))
+		error = -EIO;
+
+	if (unlikely(error)) {
+		DMERR_LIMIT("Error occurred: %d (%llu, %u)",
+			    error, ULL(clone->bi_sector), clone->bi_size);
+		io->error = error;
+	}
+
+	/* Release the clone; this keeps the block layer from leaving
+	 * offsets, etc. in unexpected states.
+	 */
+	bio_put(clone);
+
+	verity_dec_pending(io);
+}
+
+/* If not yet underway, an I/O request will be issued to the vc->dev
+ * device for the data needed. It is cloned to avoid unexpected changes
+ * to the original bio struct.
+ */
+static void kverityd_src_io_read(struct verity_io *io)
+{
+	struct bio *clone;
+
+	/* Check if the read is already issued. */
+	if (io->flags & VERITY_IOFLAGS_CLONED)
+		return;
+
+	io->flags |= VERITY_IOFLAGS_CLONED;
+
+	/* Clone the bio. The block layer may modify the bvec array. */
+	clone = verity_bio_clone(io);
+	if (unlikely(!clone)) {
+		io->error = -ENOMEM;
+		return;
+	}
+
+	verity_inc_pending(io);
+
+	generic_make_request(clone);
+}
+
+/* kverityd_io services the I/O workqueue. For each pass through
+ * the I/O workqueue, a call to populate both the origin drive
+ * data and the hash tree data is made.
+ */
+static void kverityd_io(struct work_struct *work)
+{
+	struct delayed_work *dwork = container_of(work, struct delayed_work,
+						  work);
+	struct verity_io *io = container_of(dwork, struct verity_io,
+					    work);
+
+	/* Issue requests asynchronously. */
+	verity_inc_pending(io);
+	kverityd_src_io_read(io);
+	kverityd_io_vt_populate(io);
+	verity_dec_pending(io);
+}
+
+/* Paired with verity_dec_pending, the pending value in the io dictates the
+ * lifetime of a request and when it is ready to be processed on the
+ * workqueues.
+ */
+static void verity_inc_pending(struct verity_io *io)
+{
+	atomic_inc(&io->pending);
+}
+
+/* Block-level requests start here. */
+static int verity_map(struct dm_target *ti, struct bio *bio,
+		      union map_info *map_context)
+{
+	struct verity_io *io;
+	struct verity_config *vc;
+	struct request_queue *r_queue;
+
+	if (unlikely(!ti)) {
+		DMERR("dm_target was NULL");
+		return -EIO;
+	}
+
+	vc = ti->private;
+	r_queue = bdev_get_queue(vc->dev->bdev);
+
+	if (bio_data_dir(bio) == WRITE) {
+		/* If we silently drop writes, then the VFS layer will cache
+		 * the write and persist it in memory. While it doesn't change
+		 * the underlying storage, it still may be contrary to the
+		 * behavior expected by a verified, read-only device.
+		 */
+		DMWARN_LIMIT("write request received. rejecting with -EIO.");
+		return -EIO;
+	} else {
+		/* Queue up the request to be verified */
+		io = verity_io_alloc(ti, bio);
+		if (!io) {
+			DMERR_LIMIT("Failed to allocate and init IO data");
+			return DM_MAPIO_REQUEUE;
+		}
+		INIT_DELAYED_WORK(&io->work, kverityd_io);
+		queue_delayed_work(kverityd_ioq, &io->work, 0);
+	}
+
+	return DM_MAPIO_SUBMITTED;
+}
+
+/*
+ * Non-block interfaces and device-mapper specific code
+ */
+
+/*
+ * Verity target parameters:
+ *
+ * <version> <dev> <hash_dev> <hash_start> <block_size> <alg> <digest> <salt>
+ *
+ * version:        version of the hash tree on-disk format
+ * dev:            device to verify
+ * hash_dev:       device hashtree is stored on
+ * hash_start:     start address of hashes
+ * block_size:     size of a hash block
+ * alg:            hash algorithm
+ * digest:         toplevel hash of the tree
+ * salt:           salt
+ */
+static int verity_ctr(struct dm_target *ti, unsigned int argc, char **argv)
+{
+	struct verity_config *vc = NULL;
+	const char *dev, *hash_dev, *alg, *digest, *salt;
+	unsigned long hash_start, block_size, version;
+	sector_t blocks;
+	int ret;
+
+	if (argc != 8) {
+		ti->error = "Invalid argument count";
+		return -EINVAL;
+	}
+
+	if (kstrtoul(argv[0], 10, &version) || (version != 0)) {
+		ti->error = "Invalid version";
+		return -EINVAL;
+	}
+	dev = argv[1];
+	hash_dev = argv[2];
+	if (kstrtoul(argv[3], 10, &hash_start)) {
+		ti->error = "Invalid hash_start";
+		return -EINVAL;
+	}
+	if (kstrtoul(argv[4], 10, &block_size) || (block_size > UINT_MAX)) {
+		ti->error = "Invalid block_size";
+		return -EINVAL;
+	}
+	alg = argv[5];
+	digest = argv[6];
+	salt = argv[7];
+
+	/* The device mapper device should be setup read-only */
+	if ((dm_table_get_mode(ti->table) & ~FMODE_READ) != 0) {
+		ti->error = "Must be created readonly.";
+		return -EINVAL;
+	}
+
+	vc = kzalloc(sizeof(*vc), GFP_KERNEL);
+	if (!vc)
+		return -ENOMEM;
+
+	/* Calculate the blocks from the given device size */
+	vc->size = ti->len;
+	blocks = to_bytes(vc->size) / block_size;
+	if (verity_tree_create(&vc->vt, blocks, block_size, alg)) {
+		DMERR("failed to create required vt");
+		goto bad_vt;
+	}
+	if (verity_tree_set_digest(&vc->vt, digest)) {
+		DMERR("digest error");
+		goto bad_digest;
+	}
+	verity_tree_set_salt(&vc->vt, salt);
+	vc->vt.read_cb = kverityd_vt_read_callback;
+
+	vc->start = 0;
+	/* We only ever grab the device in read-only mode. */
+	ret = dm_get_device(ti, dev, dm_table_get_mode(ti->table), &vc->dev);
+	if (ret) {
+		DMERR("Failed to acquire device '%s': %d", dev, ret);
+		ti->error = "Device lookup failed";
+		goto bad_verity_dev;
+	}
+
+	if ((to_bytes(vc->start) % block_size) ||
+	    (to_bytes(vc->size) % block_size)) {
+		ti->error = "Device must be block_size divisble/aligned";
+		goto bad_hash_start;
+	}
+
+	vc->hash_start = (sector_t)hash_start;
+
+	/*
+	 * Note, dev == hash_dev is okay as long as the size of
+	 *       ti->len passed to device mapper does not include
+	 *       the hashes.
+	 */
+	if (dm_get_device(ti, hash_dev,
+			  dm_table_get_mode(ti->table), &vc->hash_dev)) {
+		ti->error = "Hash device lookup failed";
+		goto bad_hash_dev;
+	}
+
+	if (snprintf(vc->hash_alg, CRYPTO_MAX_ALG_NAME, "%s", alg) >=
+	    CRYPTO_MAX_ALG_NAME) {
+		ti->error = "Hash algorithm name is too long";
+		goto bad_hash;
+	}
+
+	vc->io_pool = mempool_create_slab_pool(MIN_IOS, _verity_io_pool);
+	if (!vc->io_pool) {
+		ti->error = "Cannot allocate verity io mempool";
+		goto bad_slab_pool;
+	}
+
+	vc->bs = bioset_create(MIN_BIOS, 0);
+	if (!vc->bs) {
+		ti->error = "Cannot allocate verity bioset";
+		goto bad_bs;
+	}
+
+	ti->private = vc;
+
+	return 0;
+
+bad_bs:
+	mempool_destroy(vc->io_pool);
+bad_slab_pool:
+bad_hash:
+	dm_put_device(ti, vc->hash_dev);
+bad_hash_dev:
+bad_hash_start:
+	dm_put_device(ti, vc->dev);
+bad_vt:
+bad_digest:
+bad_verity_dev:
+	kfree(vc);   /* hash is not secret so no need to zero */
+	return -EINVAL;
+}
+
+static void verity_dtr(struct dm_target *ti)
+{
+	struct verity_config *vc = ti->private;
+
+	bioset_free(vc->bs);
+	mempool_destroy(vc->io_pool);
+	verity_tree_destroy(&vc->vt);
+	dm_put_device(ti, vc->hash_dev);
+	dm_put_device(ti, vc->dev);
+	kfree(vc);
+}
+
+static int verity_ioctl(struct dm_target *ti, unsigned int cmd,
+			unsigned long arg)
+{
+	struct verity_config *vc = ti->private;
+	struct dm_dev *dev = vc->dev;
+	int r = 0;
+
+	/*
+	 * Only pass ioctls through if the device sizes match exactly.
+	 */
+	if (vc->start ||
+	    ti->len != i_size_read(dev->bdev->bd_inode) >> SECTOR_SHIFT)
+		r = scsi_verify_blk_ioctl(NULL, cmd);
+
+	return r ? : __blkdev_driver_ioctl(dev->bdev, dev->mode, cmd, arg);
+}
+
+static int verity_status(struct dm_target *ti, status_type_t type,
+			char *result, unsigned int maxlen)
+{
+	struct verity_config *vc = ti->private;
+	char digest[VERITY_MAX_DIGEST_SIZE * 2 + 1] = { 0 };
+	char salt[VERITY_SALT_SIZE * 2 + 1] = { 0 };
+	unsigned int sz = 0;
+
+	verity_tree_digest(&vc->vt, digest);
+	verity_tree_salt(&vc->vt, salt);
+
+	switch (type) {
+	case STATUSTYPE_INFO:
+		result[0] = '\0';
+		break;
+	case STATUSTYPE_TABLE:
+		DMEMIT("0 %s %s %llu %llu %s %s %s",
+		       vc->dev->name,
+		       vc->hash_dev->name,
+		       ULL(vc->hash_start),
+		       ULL(vc->vt.block_size),
+		       vc->hash_alg,
+		       digest,
+		       salt);
+		break;
+	}
+	return 0;
+}
+
+static int verity_merge(struct dm_target *ti, struct bvec_merge_data *bvm,
+		       struct bio_vec *biovec, int max_size)
+{
+	struct verity_config *vc = ti->private;
+	struct request_queue *q = bdev_get_queue(vc->dev->bdev);
+
+	if (!q->merge_bvec_fn)
+		return max_size;
+
+	bvm->bi_bdev = vc->dev->bdev;
+	bvm->bi_sector = vc->start + bvm->bi_sector - ti->begin;
+
+	/* Optionally, this could just return 0 to stick to single pages. */
+	return min(max_size, q->merge_bvec_fn(q, bvm, biovec));
+}
+
+static int verity_iterate_devices(struct dm_target *ti,
+				 iterate_devices_callout_fn fn, void *data)
+{
+	struct verity_config *vc = ti->private;
+
+	return fn(ti, vc->dev, vc->start, ti->len, data);
+}
+
+static void verity_io_hints(struct dm_target *ti,
+			    struct queue_limits *limits)
+{
+	struct verity_config *vc = ti->private;
+	unsigned int block_size = vc->vt.block_size;
+
+	limits->logical_block_size = block_size;
+	limits->physical_block_size = block_size;
+	blk_limits_io_min(limits, block_size);
+}
+
+static struct target_type verity_target = {
+	.name   = "verity",
+	.version = {0, 1, 0},
+	.module = THIS_MODULE,
+	.ctr    = verity_ctr,
+	.dtr    = verity_dtr,
+	.ioctl  = verity_ioctl,
+	.map    = verity_map,
+	.merge  = verity_merge,
+	.status = verity_status,
+	.iterate_devices = verity_iterate_devices,
+	.io_hints = verity_io_hints,
+};
+
+#define VERITY_WQ_FLAGS (WQ_CPU_INTENSIVE|WQ_HIGHPRI)
+
+static int __init verity_init(void)
+{
+	int r = -ENOMEM;
+
+	_verity_io_pool = KMEM_CACHE(verity_io, 0);
+	if (!_verity_io_pool) {
+		DMERR("failed to allocate pool verity_io");
+		goto bad_io_pool;
+	}
+
+	kverityd_ioq = alloc_workqueue("kverityd_io", VERITY_WQ_FLAGS, 1);
+	if (!kverityd_ioq) {
+		DMERR("failed to create workqueue kverityd_ioq");
+		goto bad_io_queue;
+	}
+
+	kveritydq = alloc_workqueue("kverityd", VERITY_WQ_FLAGS, 1);
+	if (!kveritydq) {
+		DMERR("failed to create workqueue kveritydq");
+		goto bad_verify_queue;
+	}
+
+	r = dm_register_target(&verity_target);
+	if (r < 0) {
+		DMERR("register failed %d", r);
+		goto register_failed;
+	}
+
+	DMINFO("version %u.%u.%u loaded", verity_target.version[0],
+	       verity_target.version[1], verity_target.version[2]);
+
+	return r;
+
+register_failed:
+	destroy_workqueue(kveritydq);
+bad_verify_queue:
+	destroy_workqueue(kverityd_ioq);
+bad_io_queue:
+	kmem_cache_destroy(_verity_io_pool);
+bad_io_pool:
+	return r;
+}
+
+static void __exit verity_exit(void)
+{
+	int cpu;
+
+	flush_workqueue(kverityd_ioq);
+	flush_workqueue(kveritydq);
+	destroy_workqueue(kveritydq);
+	destroy_workqueue(kverityd_ioq);
+
+	for_each_possible_cpu(cpu)
+		kfree(per_cpu(verity_hash_desc, cpu));
+
+	dm_unregister_target(&verity_target);
+	kmem_cache_destroy(_verity_io_pool);
+}
+
+module_init(verity_init);
+module_exit(verity_exit);
+
+MODULE_AUTHOR("The Chromium OS Authors <chromium-os-dev@chromium.org>");
+MODULE_DESCRIPTION(DM_NAME " target for transparent disk integrity checking");
+MODULE_LICENSE("GPL");
-- 
1.7.7.3


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH] dm: remake of the verity target
  2012-03-02  0:33 [PATCH] dm: verity target Mandeep Singh Baines
  2012-03-02 16:08 ` Mandeep Singh Baines
@ 2012-03-04 19:18 ` Mikulas Patocka
  2012-03-04 19:35   ` userspace hashing utility for dm-verity Mikulas Patocka
  2012-03-06 21:59   ` [PATCH] dm: remake of the verity target Mandeep Singh Baines
  1 sibling, 2 replies; 34+ messages in thread
From: Mikulas Patocka @ 2012-03-04 19:18 UTC (permalink / raw)
  To: Mandeep Singh Baines
  Cc: linux-kernel, dm-devel, Alasdair G Kergon, Will Drewry,
	Elly Jones, Milan Broz, Olof Johansson, Steffen Klassert,
	Andrew Morton

Hi

Here I'm posting a remake of the dm-verity target originally developed by 
Mandeep Singh Baines. It has a compatible target line syntax, so it can be 
used as a drop-in replacement.

The major difference is that this driver uses dm-bufio to manage a cache of 
hash blocks in memory. We talked with Alasdair about using dm-bufio for 
caching and I concluded that it is simpler to rewrite the code rather than 
transform the original Google code with patches.

Because of dm-bufio, memory consumption is not dependent on device size 
(if the system starts running out of memory, dm-bufio discards cached 
blocks loaded in memory).

This implementation is smaller because, unlike the original implementation, 
it doesn't create persistent in-memory structures.

This implementation is faster than the original. It uses clustered 
prefetch to prefetch several hash blocks at once, greatly improving 
performance if the data partition and hash partition are located on the 
same disk (the prefetch cluster can be set with the prefetch_cluster module 
parameter).
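
For example, with the default 256kB cluster and 4096-byte blocks, the
cluster is 64 hash blocks, so a request whose bottom-level hash blocks
are 100-101 prefetches hash blocks 64-127 in one go.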

Mikulas

---

Disk: Maxtor Atlas 15k2 146GB, 3.8GB partition

The following tests were made:
dd if=/dev/mapper/verity of=/dev/null bs=1M
fsck.ext2 -fn -C 0 /dev/mapper/verity
fio --rw=randread --size=200M --bsrange=1k-128k --filename=/dev/mapper/verity --name=job1 --name=job2 --name=job3 --name=job4

raw partition:
dd:			42s
fsck:			11s
fio:			13s

original google dm-verity implementation:
dd (first time):	571s
dd (next time):		46s
fsck (first time):	39s
fsck (next time):	10s
fio (first time):	26s
fio (next time):	24s

my dm-verity implementation:
dd (first time):	45s
dd (next time):		43s
fsck (first time):	11s
fsck (next time):	11s
fio (first time):	14s
fio (next time):	14s

---

Remake of the google dm-verity patch.

Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>

---
 drivers/md/Kconfig     |   17 +
 drivers/md/Makefile    |    1 
 drivers/md/dm-bufio.c  |   97 ++++--
 drivers/md/dm-bufio.h  |    8 
 drivers/md/dm-verity.c |  785 +++++++++++++++++++++++++++++++++++++++++++++++++
 5 files changed, 887 insertions(+), 21 deletions(-)

Index: linux-3.3-rc5-fast/drivers/md/Kconfig
===================================================================
--- linux-3.3-rc5-fast.orig/drivers/md/Kconfig	2012-03-03 05:19:33.000000000 +0100
+++ linux-3.3-rc5-fast/drivers/md/Kconfig	2012-03-03 19:39:30.000000000 +0100
@@ -388,4 +388,21 @@ config DM_FLAKEY
        ---help---
          A target that intermittently fails I/O for debugging purposes.
 
+config DM_VERITY
+	tristate "Verity target support"
+	depends on BLK_DEV_DM
+	select CRYPTO
+	select CRYPTO_HASH
+	select DM_BUFIO
+	---help---
+	  This device-mapper target allows you to create a device that
+	  transparently integrity checks the data on it. You'll need to
+	  activate the digests you're going to use in the cryptoapi
+	  configuration.
+
+	  To compile this code as a module, choose M here: the module will
+	  be called dm-verity.
+
+	  If unsure, say N.
+
 endif # MD
Index: linux-3.3-rc5-fast/drivers/md/Makefile
===================================================================
--- linux-3.3-rc5-fast.orig/drivers/md/Makefile	2012-03-03 05:19:33.000000000 +0100
+++ linux-3.3-rc5-fast/drivers/md/Makefile	2012-03-03 19:39:30.000000000 +0100
@@ -29,6 +29,7 @@ obj-$(CONFIG_MD_FAULTY)		+= faulty.o
 obj-$(CONFIG_BLK_DEV_MD)	+= md-mod.o
 obj-$(CONFIG_BLK_DEV_DM)	+= dm-mod.o
 obj-$(CONFIG_DM_BUFIO)		+= dm-bufio.o
+obj-$(CONFIG_DM_VERITY)		+= dm-verity.o
 obj-$(CONFIG_DM_CRYPT)		+= dm-crypt.o
 obj-$(CONFIG_DM_DELAY)		+= dm-delay.o
 obj-$(CONFIG_DM_FLAKEY)		+= dm-flakey.o
Index: linux-3.3-rc5-fast/drivers/md/dm-verity.c
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ linux-3.3-rc5-fast/drivers/md/dm-verity.c	2012-03-03 05:51:10.000000000 +0100
@@ -0,0 +1,785 @@
+/*
+ * Copyright (C) 2012 Red Hat, Inc.
+ *
+ * Author: Mikulas Patocka <mpatocka@redhat.com>
+ *
+ * Based on Chromium dm-verity driver (C) 2011 The Chromium OS Authors
+ *
+ * This file is released under the GPLv2.
+ *
+ * Device mapper target parameters:
+ *	<version>	0
+ *	<data device>
+ *	<hash device>
+ *	<hash start>	(typically 0)
+ *	<block size>	(typically 4096)
+ *	<algorithm>
+ *	<digest>
+ *	optional parameters:
+ *		<salt> (should have 32 bytes for compatibility with Google code)
+ *		<hash block size> (by default it is the same as data block size)
+ *
+ * In the file "/sys/module/dm_verity/parameters/prefetch_cluster" you can set
+ * the default prefetch value. Data are read in "prefetch_cluster" chunks from
+ * the hash device. The prefetch cluster greatly improves performance when the
+ * data and hash blocks are on different partitions of the same disk.
+ */
+
+#include <linux/module.h>
+#include <linux/device-mapper.h>
+#include <crypto/hash.h>
+#include "dm-bufio.h"
+
+#define DM_MSG_PREFIX			"verity"
+
+#define DM_VERITY_IO_VEC_INLINE		16
+#define DM_VERITY_MEMPOOL_SIZE		4
+#define DM_VERITY_PREFETCH_SIZE		262144
+
+#define DM_VERITY_MAX_LEVELS		63
+
+static unsigned prefetch_cluster = DM_VERITY_PREFETCH_SIZE;
+
+module_param_named(prefetch_cluster, prefetch_cluster, uint, S_IRUGO | S_IWUSR);
+
+struct dm_verity {
+	struct dm_dev *data_dev;
+	struct dm_dev *hash_dev;
+	struct dm_target *ti;
+	struct dm_bufio_client *bufio;
+	char *alg_name;
+	struct crypto_shash *tfm;
+	u8 *root_digest;
+	u8 *salt;
+	unsigned salt_size;
+	sector_t data_start;
+	sector_t hash_start;
+	sector_t data_blocks;
+	sector_t hash_blocks;
+	unsigned char data_dev_block_bits;
+	unsigned char hash_dev_block_bits;
+	unsigned char hash_per_block_bits;
+	unsigned char levels;
+	unsigned digest_size;
+	unsigned shash_descsize;
+
+	mempool_t *io_mempool;
+	mempool_t *vec_mempool;
+
+	struct workqueue_struct *verify_wq;
+
+	sector_t hash_level_block[DM_VERITY_MAX_LEVELS];
+};
+
+struct dm_verity_io {
+	struct dm_verity *v;
+	struct bio *bio;
+	bio_end_io_t *orig_bi_end_io;
+	void *orig_bi_private;
+	sector_t block;
+	unsigned n_blocks;
+	struct bio_vec *io_vec;
+	unsigned io_vec_size;
+	struct work_struct work;
+	struct bio_vec io_vec_inline[DM_VERITY_IO_VEC_INLINE];
+	/* u8 hash_desc[crypto_shash_descsize(v->tfm)]; */
+	/* u8 real_digest[v->digest_size]; */
+	/* u8 want_digest[v->digest_size]; */
+};
+
+static struct shash_desc *io_hash_desc(struct dm_verity *v, struct dm_verity_io *io)
+{
+	return (struct shash_desc *)(io + 1);
+}
+
+static u8 *io_real_digest(struct dm_verity *v, struct dm_verity_io *io)
+{
+	return (u8 *)(io + 1) + v->shash_descsize;
+}
+
+static u8 *io_want_digest(struct dm_verity *v, struct dm_verity_io *io)
+{
+	return (u8 *)(io + 1) + v->shash_descsize + v->digest_size;
+}
+
+struct buffer_aux {
+	int hash_verified;
+};
+
+static void dm_bufio_alloc_callback(struct dm_buffer *buf)
+{
+	struct buffer_aux *aux = dm_bufio_get_aux_data(buf);
+	aux->hash_verified = 0;
+}
+
+static sector_t verity_map_sector(struct dm_verity *v, sector_t bi_sector)
+{
+	return v->data_start + dm_target_offset(v->ti, bi_sector);
+}
+
+static sector_t verity_position_at_level(struct dm_verity *v, sector_t block,
+					 int level)
+{
+	return block >> (level * v->hash_per_block_bits);
+}
+
+static void verity_hash_at_level(struct dm_verity *v, sector_t block, int level,
+				 sector_t *hash_block, unsigned *offset)
+{
+	sector_t position = verity_position_at_level(v, block, level);
+
+	*hash_block = v->hash_level_block[level] + (position >> v->hash_per_block_bits);
+	if (offset)
+		*offset = v->digest_size * (position & ((1 << v->hash_per_block_bits) - 1));
+}
+
+static int verity_verify_level(struct dm_verity_io *io, sector_t block,
+			       int level, int skip_unverified)
+{
+	struct dm_verity *v = io->v;
+	struct dm_buffer *buf;
+	struct buffer_aux *aux;
+	u8 *data;
+	int r;
+	sector_t hash_block;
+	unsigned offset;
+
+	verity_hash_at_level(v, block, level, &hash_block, &offset);
+
+	data = dm_bufio_read(v->bufio, hash_block, &buf);
+	if (unlikely(IS_ERR(data)))
+		return PTR_ERR(data);
+
+	aux = dm_bufio_get_aux_data(buf);
+
+	if (!aux->hash_verified) {
+		struct shash_desc *desc;
+		u8 *result;
+
+		if (skip_unverified) {
+			r = 1;
+			goto release_ret_r;
+		}
+
+		desc = io_hash_desc(v, io);
+		desc->tfm = v->tfm;
+		desc->flags = CRYPTO_TFM_REQ_MAY_SLEEP;
+		r = crypto_shash_init(desc);
+		if (r < 0) {
+			DMERR("crypto_shash_init failed: %d", r);
+			goto release_ret_r;
+		}
+
+		r = crypto_shash_update(desc, data, 1 << v->hash_dev_block_bits);
+		if (r < 0) {
+			DMERR("crypto_shash_update failed: %d", r);
+			goto release_ret_r;
+		}
+
+		r = crypto_shash_update(desc, v->salt, v->salt_size);
+		if (r < 0) {
+			DMERR("crypto_shash_update failed: %d", r);
+			goto release_ret_r;
+		}
+
+		result = io_real_digest(v, io);
+		r = crypto_shash_final(desc, result);
+		if (r < 0) {
+			DMERR("crypto_shash_final failed: %d", r);
+			goto release_ret_r;
+		}
+		if (unlikely(memcmp(result, io_want_digest(v, io), v->digest_size))) {
+			DMERR_LIMIT("metadata block %llu is corrupted",
+				(unsigned long long)hash_block);
+			r = -EIO;
+			goto release_ret_r;
+		} else
+			aux->hash_verified = 1;
+	}
+
+	data += offset;
+
+	memcpy(io_want_digest(v, io), data, v->digest_size);
+
+	dm_bufio_release(buf);
+	return 0;
+
+release_ret_r:
+	dm_bufio_release(buf);
+	return r;
+}
+
+static int verity_verify_io(struct dm_verity_io *io)
+{
+	struct dm_verity *v = io->v;
+	unsigned b;
+	int i;
+	unsigned vector = 0, offset = 0;
+	for (b = 0; b < io->n_blocks; b++) {
+		struct shash_desc *desc;
+		u8 *result;
+		int r;
+		unsigned todo;
+
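+		/* Fast path: if the lowest-level hash block covering this
+		 * data block was already verified, just take the expected
+		 * digest from it; otherwise verify from the root down.
+		 */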
+		if (likely(v->levels)) {
+			int r = verity_verify_level(io, io->block + b, 0, 1);
+			if (likely(!r))
+				goto test_block_hash;
+			if (r < 0)
+				return r;
+		}
+
+		memcpy(io_want_digest(v, io), v->root_digest, v->digest_size);
+
+		for (i = v->levels - 1; i >= 0; i--) {
+			int r = verity_verify_level(io, io->block + b, i, 0);
+			if (unlikely(r))
+				return r;
+		}
+
+test_block_hash:
+		desc = io_hash_desc(v, io);
+		desc->tfm = v->tfm;
+		desc->flags = CRYPTO_TFM_REQ_MAY_SLEEP;
+		r = crypto_shash_init(desc);
+		if (r < 0) {
+			DMERR("crypto_shash_init failed: %d", r);
+			return r;
+		}
+
+		todo = 1 << v->data_dev_block_bits;
+		do {
+			struct bio_vec *bv;
+			u8 *page;
+			unsigned len;
+
+			BUG_ON(vector >= io->io_vec_size);
+			bv = &io->io_vec[vector];
+			page = kmap_atomic(bv->bv_page, KM_USER0);
+			len = bv->bv_len - offset;
+			if (likely(len >= todo))
+				len = todo;
+			r = crypto_shash_update(desc,
+					page + bv->bv_offset + offset, len);
+			kunmap_atomic(page, KM_USER0);
+			if (r < 0) {
+				DMERR("crypto_shash_update failed: %d", r);
+				return r;
+			}
+			offset += len;
+			if (likely(offset == bv->bv_len)) {
+				offset = 0;
+				vector++;
+			}
+			todo -= len;
+		} while (todo);
+
+		r = crypto_shash_update(desc, v->salt, v->salt_size);
+		if (r < 0) {
+			DMERR("crypto_shash_update failed: %d", r);
+			return r;
+		}
+
+		result = io_real_digest(v, io);
+		r = crypto_shash_final(desc, result);
+		if (r < 0) {
+			DMERR("crypto_shash_final failed: %d", r);
+			return r;
+		}
+		if (unlikely(memcmp(result, io_want_digest(v, io), v->digest_size))) {
+			DMERR_LIMIT("data block %llu is corrupted",
+				(unsigned long long)(io->block + b));
+			return -EIO;
+		}
+	}
+	BUG_ON(vector != io->io_vec_size);
+	BUG_ON(offset);
+	return 0;
+}
+
+static void verity_finish_io(struct dm_verity_io *io, int error)
+{
+	struct bio *bio = io->bio;
+	struct dm_verity *v = io->v;
+
+	bio->bi_end_io = io->orig_bi_end_io;
+	bio->bi_private = io->orig_bi_private;
+
+	if (io->io_vec != io->io_vec_inline)
+		mempool_free(io->io_vec, v->vec_mempool);
+	mempool_free(io, v->io_mempool);
+
+	bio_endio(bio, error);
+}
+
+static void verity_work(struct work_struct *w)
+{
+	struct dm_verity_io *io = container_of(w, struct dm_verity_io, work);
+
+	verity_finish_io(io, verity_verify_io(io));
+}
+
+static void verity_end_io(struct bio *bio, int error)
+{
+	struct dm_verity_io *io = bio->bi_private;
+	if (error) {
+		verity_finish_io(io, error);
+		return;
+	}
+
+	INIT_WORK(&io->work, verity_work);
+	queue_work(io->v->verify_wq, &io->work);
+}
+
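+/*
+ * Prefetch the hash blocks below the top level that this io will need.
+ * At the lowest level the range is widened to prefetch_cluster boundaries
+ * so that neighbouring requests share one large sequential read.
+ */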
+static void verity_prefetch_io(struct dm_verity *v, struct dm_verity_io *io)
+{
+	int i;
+	for (i = v->levels - 2; i >= 0; i--) {
+		sector_t hash_block_start;
+		sector_t hash_block_end;
+		verity_hash_at_level(v, io->block, i, &hash_block_start, NULL);
+		verity_hash_at_level(v, io->block + io->n_blocks - 1, i, &hash_block_end, NULL);
+		if (!i) {
+			unsigned cluster = prefetch_cluster;
+			/* barrier to stop GCC from re-reading prefetch_cluster */
+			barrier();
+			cluster >>= v->data_dev_block_bits;
+			if (unlikely(!cluster))
+				goto no_prefetch_cluster;
+			if (unlikely(cluster & (cluster - 1)))
+				cluster = 1 << (fls(cluster) - 1);
+
+			hash_block_start &= ~(sector_t)(cluster - 1);
+			hash_block_end |= cluster - 1;
+			if (unlikely(hash_block_end >= v->hash_blocks))
+				hash_block_end = v->hash_blocks - 1;
+		}
+no_prefetch_cluster:
+		dm_bufio_prefetch(v->bufio, hash_block_start,
+					hash_block_end - hash_block_start + 1);
+	}
+}
+
+static int verity_map(struct dm_target *ti, struct bio *bio,
+		      union map_info *map_context)
+{
+	struct dm_verity *v = ti->private;
+	struct dm_verity_io *io;
+
+	if (((unsigned)bio->bi_sector | bio_sectors(bio)) &
+	    ((1 << (v->data_dev_block_bits - SECTOR_SHIFT)) - 1)) {
+		DMERR_LIMIT("unaligned io");
+		return -EIO;
+	}
+
+	if ((bio->bi_sector + bio_sectors(bio)) >>
+	    (v->data_dev_block_bits - SECTOR_SHIFT) > v->data_blocks) {
+		DMERR_LIMIT("io out of range");
+		return -EIO;
+	}
+
+	if (bio_data_dir(bio) == WRITE)
+		return -EIO;
+
+	io = mempool_alloc(v->io_mempool, GFP_NOIO);
+	io->v = v;
+	io->bio = bio;
+	io->orig_bi_end_io = bio->bi_end_io;
+	io->orig_bi_private = bio->bi_private;
+	io->block = bio->bi_sector >> (v->data_dev_block_bits - SECTOR_SHIFT);
+	io->n_blocks = bio->bi_size >> v->data_dev_block_bits;
+
+	bio->bi_end_io = verity_end_io;
+	bio->bi_private = io;
+	bio->bi_bdev = v->data_dev->bdev;
+	bio->bi_sector = verity_map_sector(v, bio->bi_sector);
+
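+	/*
+	 * Save a copy of the bio vector: the lower layers may advance the
+	 * bio's own index and vector during the read, and verification
+	 * runs afterwards from the workqueue.
+	 */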
+	io->io_vec_size = bio->bi_vcnt - bio->bi_idx;
+	if (io->io_vec_size < DM_VERITY_IO_VEC_INLINE)
+		io->io_vec = io->io_vec_inline;
+	else
+		io->io_vec = mempool_alloc(v->vec_mempool, GFP_NOIO);
+	memcpy(io->io_vec, bio_iovec(bio),
+	       io->io_vec_size * sizeof(struct bio_vec));
+
+	verity_prefetch_io(v, io);
+
+	generic_make_request(bio);
+
+	return DM_MAPIO_SUBMITTED;
+}
+
+static int verity_status(struct dm_target *ti, status_type_t type,
+			 char *result, unsigned maxlen)
+{
+	struct dm_verity *v = ti->private;
+	unsigned sz = 0;
+	unsigned x;
+
+	switch (type) {
+	case STATUSTYPE_INFO:
+		result[0] = 0;
+		break;
+	case STATUSTYPE_TABLE:
+		DMEMIT("%u %s %s %llu %u %s ",
+			0,
+			v->data_dev->name,
+			v->hash_dev->name,
+			(unsigned long long)v->hash_start << (v->hash_dev_block_bits - SECTOR_SHIFT),
+			1 << v->data_dev_block_bits,
+			v->alg_name
+			);
+		for (x = 0; x < v->digest_size; x++)
+			DMEMIT("%02x", v->root_digest[x]);
+		DMEMIT(" ");
+		if (!v->salt_size)
+			DMEMIT("-");
+		else
+			for (x = 0; x < v->salt_size; x++)
+				DMEMIT("%02x", v->salt[x]);
+		DMEMIT(" %u", 1 << v->hash_dev_block_bits);
+		break;
+	}
+	return 0;
+}
+
+static int verity_ioctl(struct dm_target *ti, unsigned cmd,
+			unsigned long arg)
+{
+	struct dm_verity *v = ti->private;
+	int r = 0;
+
+	if (v->data_start ||
+	    ti->len != i_size_read(v->data_dev->bdev->bd_inode) >> SECTOR_SHIFT)
+		r = scsi_verify_blk_ioctl(NULL, cmd);
+
+	return r ? : __blkdev_driver_ioctl(v->data_dev->bdev, v->data_dev->mode,
+				     cmd, arg);
+}
+
+static int verity_merge(struct dm_target *ti, struct bvec_merge_data *bvm,
+			struct bio_vec *biovec, int max_size)
+{
+	struct dm_verity *v = ti->private;
+	struct request_queue *q = bdev_get_queue(v->data_dev->bdev);
+
+	if (!q->merge_bvec_fn)
+		return max_size;
+
+	bvm->bi_bdev = v->data_dev->bdev;
+	bvm->bi_sector = verity_map_sector(v, bvm->bi_sector);
+
+	return min(max_size, q->merge_bvec_fn(q, bvm, biovec));
+}
+
+static int verity_iterate_devices(struct dm_target *ti,
+				  iterate_devices_callout_fn fn, void *data)
+{
+	struct dm_verity *v = ti->private;
+	return fn(ti, v->data_dev, v->data_start, ti->len, data);
+}
+
+static void verity_io_hints(struct dm_target *ti, struct queue_limits *limits)
+{
+	struct dm_verity *v = ti->private;
+
+	if (limits->logical_block_size < 1 << v->data_dev_block_bits)
+		limits->logical_block_size = 1 << v->data_dev_block_bits;
+	if (limits->physical_block_size < 1 << v->data_dev_block_bits)
+		limits->physical_block_size = 1 << v->data_dev_block_bits;
+	blk_limits_io_min(limits, limits->logical_block_size);
+}
+
+static void verity_dtr(struct dm_target *ti);
+
+static int verity_ctr(struct dm_target *ti, unsigned argc, char **argv)
+{
+	struct dm_verity *v;
+	unsigned num;
+	unsigned long long hs;
+	int r;
+	int i;
+	sector_t hash_position;
+	char dummy;
+
+	v = kzalloc(sizeof(struct dm_verity), GFP_KERNEL);
+	if (!v) {
+		ti->error = "Cannot allocate verity structure";
+		return -ENOMEM;
+	}
+	ti->private = v;
+	v->ti = ti;
+
+	if ((dm_table_get_mode(ti->table) & ~FMODE_READ) != 0) {
+		ti->error = "Device must be readonly";
+		r = -EINVAL;
+		goto bad;
+	}
+
+	if (argc < 7) {
+		ti->error = "Not enough arguments";
+		r = -EINVAL;
+		goto bad;
+	}
+
+	if (sscanf(argv[0], "%u%c", &num, &dummy) != 1 ||
+	    num != 0) {
+		ti->error = "Invalid version";
+		r = -EINVAL;
+		goto bad;
+	}
+
+	r = dm_get_device(ti, argv[1], FMODE_READ, &v->data_dev);
+	if (r) {
+		ti->error = "Data device lookup failed";
+		goto bad;
+	}
+
+	r = dm_get_device(ti, argv[2], FMODE_READ, &v->hash_dev);
+	if (r) {
+		ti->error = "Hash device lookup failed";
+		goto bad;
+	}
+
+	if (sscanf(argv[3], "%llu%c", &hs, &dummy) != 1 ||
+	    hs != (sector_t)hs) {
+		ti->error = "Invalid hash start";
+		r = -EINVAL;
+		goto bad;
+	}
+
+	if (sscanf(argv[4], "%u%c", &num, &dummy) != 1 ||
+	    !num || (num & (num - 1)) ||
+	    num < bdev_logical_block_size(v->data_dev->bdev) ||
+	    num > PAGE_SIZE) {
+		ti->error = "Invalid data device block size";
+		r = -EINVAL;
+		goto bad;
+	}
+	v->data_dev_block_bits = ffs(num) - 1;
+	v->hash_dev_block_bits = ffs(num) - 1;
+
+	v->alg_name = kstrdup(argv[5], GFP_KERNEL);
+	if (!v->alg_name) {
+		ti->error = "Cannot allocate algorithm name";
+		r = -ENOMEM;
+		goto bad;
+	}
+
+	v->tfm = crypto_alloc_shash(v->alg_name, 0, 0);
+	if (IS_ERR(v->tfm)) {
+		ti->error = "Cannot initialize hash function";
+		r = PTR_ERR(v->tfm);
+		v->tfm = NULL;
+		goto bad;
+	}
+	v->digest_size = crypto_shash_digestsize(v->tfm);
+	if ((1 << v->hash_dev_block_bits) < v->digest_size * 2) {
+		ti->error = "Digest size too big";
+		r = -EINVAL;
+		goto bad;
+	}
+	v->shash_descsize =
+		sizeof(struct shash_desc) + crypto_shash_descsize(v->tfm);
+
+	v->root_digest = kmalloc(v->digest_size, GFP_KERNEL);
+	if (!v->root_digest) {
+		ti->error = "Cannot allocate root digest";
+		r = -ENOMEM;
+		goto bad;
+	}
+	if (strlen(argv[6]) != v->digest_size * 2 ||
+	    hex2bin(v->root_digest, argv[6], v->digest_size)) {
+		ti->error = "Invalid root digest";
+		r = -EINVAL;
+		goto bad;
+	}
+
+	if (argc > 7 && strcmp(argv[7], "-")) {
+		v->salt_size = strlen(argv[7]) / 2;
+		v->salt = kmalloc(v->salt_size, GFP_KERNEL);
+		if (!v->salt) {
+			ti->error = "Cannot allocate salt";
+			r = -ENOMEM;
+			goto bad;
+		}
+		if (strlen(argv[7]) != v->salt_size * 2 ||
+		    hex2bin(v->salt, argv[7], v->salt_size)) {
+			ti->error = "Invalid salt";
+			r = -EINVAL;
+			goto bad;
+		}
+	}
+
+	if (argc > 8) {
+		if (sscanf(argv[8], "%u%c", &num, &dummy) != 1 ||
+		    !num || (num & (num - 1)) ||
+		    num < bdev_logical_block_size(v->hash_dev->bdev) ||
+		    num > INT_MAX) {
+			ti->error = "Invalid hash device block size";
+			r = -EINVAL;
+			goto bad;
+		}
+		v->hash_dev_block_bits = ffs(num) - 1;
+	}
+
+	if (hs & ((1 << (v->hash_dev_block_bits - SECTOR_SHIFT)) - 1)) {
+		ti->error = "Hash start not aligned on block boundary";
+		r = -EINVAL;
+		goto bad;
+	}
+	v->hash_start = hs >> (v->hash_dev_block_bits - SECTOR_SHIFT);
+
+	if (ti->len > i_size_read(v->data_dev->bdev->bd_inode) >> SECTOR_SHIFT) {
+		ti->error = "Data device is too small";
+		r = -EINVAL;
+		goto bad;
+	}
+
+	if (ti->len & ((1 << (v->data_dev_block_bits - SECTOR_SHIFT)) - 1)) {
+		ti->error = "Data device length is not aligned to block size";
+		r = -EINVAL;
+		goto bad;
+	}
+
+	v->data_blocks = ti->len >> (v->data_dev_block_bits - SECTOR_SHIFT);
+
+	v->hash_per_block_bits =
+		fls((1 << v->hash_dev_block_bits) / v->digest_size) - 1;
+
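+	/*
+	 * Count the tree levels: keep adding levels until the index of the
+	 * last data block, shifted right by hash_per_block_bits per level,
+	 * collapses to zero, i.e. until one top-level block covers all of
+	 * the data blocks.
+	 */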
+	v->levels = 0;
+	if (v->data_blocks)
+		while (v->hash_per_block_bits * v->levels < 64 &&
+		       (unsigned long long)(v->data_blocks - 1) >>
+		       (v->hash_per_block_bits * v->levels))
+			v->levels++;
+
+	if (v->levels > DM_VERITY_MAX_LEVELS) {
+		ti->error = "Too many tree levels";
+		r = -E2BIG;
+		goto bad;
+	}
+
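+	/*
+	 * Lay the tree out on the hash device from the top level down;
+	 * hash_level_block[i] is the first block of level i.
+	 */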
+	hash_position = v->hash_start;
+	for (i = v->levels - 1; i >= 0; i--) {
+		sector_t s;
+		v->hash_level_block[i] = hash_position;
+		s = verity_position_at_level(v, v->data_blocks, i);
+		s = (s >> v->hash_per_block_bits) +
+		    !!(s & ((1 << v->hash_per_block_bits) - 1));
+		if (hash_position + s < hash_position) {
+			ti->error = "Hash device offset overflow";
+			r = -E2BIG;
+			goto bad;
+		}
+		hash_position += s;
+	}
+	v->hash_blocks = hash_position;
+
+	v->bufio = dm_bufio_client_create(v->hash_dev->bdev,
+		1 << v->hash_dev_block_bits, 1, sizeof(struct buffer_aux),
+		dm_bufio_alloc_callback, NULL);
+	if (IS_ERR(v->bufio)) {
+		ti->error = "Cannot initialize dm-bufio";
+		r = PTR_ERR(v->bufio);
+		v->bufio = NULL;
+		goto bad;
+	}
+
+	if (dm_bufio_get_device_size(v->bufio) < v->hash_blocks) {
+		ti->error = "Hash device is too small";
+		r = -E2BIG;
+		goto bad;
+	}
+
+	v->io_mempool = mempool_create_kmalloc_pool(DM_VERITY_MEMPOOL_SIZE,
+	  sizeof(struct dm_verity_io) + v->shash_descsize + v->digest_size * 2);
+	if (!v->io_mempool) {
+		ti->error = "Cannot allocate io mempool";
+		r = -ENOMEM;
+		goto bad;
+	}
+
+	v->vec_mempool = mempool_create_kmalloc_pool(DM_VERITY_MEMPOOL_SIZE,
+					BIO_MAX_PAGES * sizeof(struct bio_vec));
+	if (!v->vec_mempool) {
+		ti->error = "Cannot allocate vector mempool";
+		r = -ENOMEM;
+		goto bad;
+	}
+
+	/* WQ_UNBOUND greatly improves performance when running on ramdisk */
+	v->verify_wq = alloc_workqueue("verityd",
+				       WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM |
+				       WQ_UNBOUND, num_online_cpus());
+	if (!v->verify_wq) {
+		ti->error = "Cannot allocate workqueue";
+		r = -ENOMEM;
+		goto bad;
+	}
+
+	return 0;
+
+bad:
+	verity_dtr(ti);
+	return r;
+}
+
+static void verity_dtr(struct dm_target *ti)
+{
+	struct dm_verity *v = ti->private;
+
+	if (v->verify_wq)
+		destroy_workqueue(v->verify_wq);
+	if (v->vec_mempool)
+		mempool_destroy(v->vec_mempool);
+	if (v->io_mempool)
+		mempool_destroy(v->io_mempool);
+	if (v->bufio)
+		dm_bufio_client_destroy(v->bufio);
+	kfree(v->salt);
+	kfree(v->root_digest);
+	if (v->tfm)
+		crypto_free_shash(v->tfm);
+	kfree(v->alg_name);
+	if (v->hash_dev)
+		dm_put_device(ti, v->hash_dev);
+	if (v->data_dev)
+		dm_put_device(ti, v->data_dev);
+	kfree(v);
+}
+
+static struct target_type verity_target = {
+	.name		= "verity",
+	.version	= {1, 0, 0},
+	.module		= THIS_MODULE,
+	.ctr		= verity_ctr,
+	.dtr		= verity_dtr,
+	.map		= verity_map,
+	.status		= verity_status,
+	.ioctl		= verity_ioctl,
+	.merge		= verity_merge,
+	.iterate_devices = verity_iterate_devices,
+	.io_hints	= verity_io_hints,
+};
+
+static int __init dm_verity_init(void)
+{
+	int r;
+	r = dm_register_target(&verity_target);
+	if (r < 0)
+		DMERR("register failed %d", r);
+	return r;
+}
+
+static void __exit dm_verity_exit(void)
+{
+	dm_unregister_target(&verity_target);
+}
+
+module_init(dm_verity_init);
+module_exit(dm_verity_exit);
+
+MODULE_AUTHOR("Mikulas Patocka <mpatocka@redhat.com>");
+MODULE_DESCRIPTION(DM_NAME " target for transparent disk integrity checking");
+MODULE_LICENSE("GPL");
+
Index: linux-3.3-rc5-fast/drivers/md/dm-bufio.c
===================================================================
--- linux-3.3-rc5-fast.orig/drivers/md/dm-bufio.c	2012-03-03 05:19:33.000000000 +0100
+++ linux-3.3-rc5-fast/drivers/md/dm-bufio.c	2012-03-03 05:19:35.000000000 +0100
@@ -579,7 +579,7 @@ static void write_endio(struct bio *bio,
 	struct dm_buffer *b = container_of(bio, struct dm_buffer, bio);
 
 	b->write_error = error;
-	if (error) {
+	if (unlikely(error)) {
 		struct dm_bufio_client *c = b->c;
 		(void)cmpxchg(&c->async_write_error, 0, error);
 	}
@@ -698,13 +698,20 @@ static void __wait_for_free_buffer(struc
 	dm_bufio_lock(c);
 }
 
+enum new_flag {
+	NF_FRESH = 0,
+	NF_READ = 1,
+	NF_GET = 2,
+	NF_PREFETCH = 3
+};
+
 /*
  * Allocate a new buffer. If the allocation is not possible, wait until
  * some other thread frees a buffer.
  *
  * May drop the lock and regain it.
  */
-static struct dm_buffer *__alloc_buffer_wait_no_callback(struct dm_bufio_client *c)
+static struct dm_buffer *__alloc_buffer_wait_no_callback(struct dm_bufio_client *c, enum new_flag nf)
 {
 	struct dm_buffer *b;
 
@@ -727,6 +734,9 @@ static struct dm_buffer *__alloc_buffer_
 				return b;
 		}
 
+		if (nf == NF_PREFETCH)
+			return NULL;
+
 		if (!list_empty(&c->reserved_buffers)) {
 			b = list_entry(c->reserved_buffers.next,
 				       struct dm_buffer, lru_list);
@@ -744,9 +754,12 @@ static struct dm_buffer *__alloc_buffer_
 	}
 }
 
-static struct dm_buffer *__alloc_buffer_wait(struct dm_bufio_client *c)
+static struct dm_buffer *__alloc_buffer_wait(struct dm_bufio_client *c, enum new_flag nf)
 {
-	struct dm_buffer *b = __alloc_buffer_wait_no_callback(c);
+	struct dm_buffer *b = __alloc_buffer_wait_no_callback(c, nf);
+
+	if (!b)
+		return NULL;
 
 	if (c->alloc_callback)
 		c->alloc_callback(b);
@@ -866,15 +879,8 @@ static struct dm_buffer *__find(struct d
  * Getting a buffer
  *--------------------------------------------------------------*/
 
-enum new_flag {
-	NF_FRESH = 0,
-	NF_READ = 1,
-	NF_GET = 2
-};
-
 static struct dm_buffer *__bufio_new(struct dm_bufio_client *c, sector_t block,
-				     enum new_flag nf, struct dm_buffer **bp,
-				     int *need_submit)
+				     enum new_flag nf, int *need_submit)
 {
 	struct dm_buffer *b, *new_b = NULL;
 
@@ -882,6 +888,19 @@ static struct dm_buffer *__bufio_new(str
 
 	b = __find(c, block);
 	if (b) {
+found_buffer:
+		if (nf == NF_PREFETCH)
+			return NULL;
+		/*
+		 * Note: it is essential that we don't wait for the buffer to be
+		 * read if dm_bufio_get function is used. Both dm_bufio_get and
+		 * dm_bufio_prefetch can be used in the driver request routine.
+		 * If the user called both dm_bufio_prefetch and dm_bufio_get on
+		 * the same buffer, it would deadlock if we waited.
+		 */
+		if (nf == NF_GET && unlikely(test_bit(B_READING, &b->state)))
+			return NULL;
+
 		b->hold_count++;
 		__relink_lru(b, test_bit(B_DIRTY, &b->state) ||
 			     test_bit(B_WRITING, &b->state));
@@ -891,7 +910,9 @@ static struct dm_buffer *__bufio_new(str
 	if (nf == NF_GET)
 		return NULL;
 
-	new_b = __alloc_buffer_wait(c);
+	new_b = __alloc_buffer_wait(c, nf);
+	if (!new_b)
+		return NULL;
 
 	/*
 	 * We've had a period where the mutex was unlocked, so need to
@@ -900,10 +921,7 @@ static struct dm_buffer *__bufio_new(str
 	b = __find(c, block);
 	if (b) {
 		__free_buffer_wake(new_b);
-		b->hold_count++;
-		__relink_lru(b, test_bit(B_DIRTY, &b->state) ||
-			     test_bit(B_WRITING, &b->state));
-		return b;
+		goto found_buffer;
 	}
 
 	__check_watermark(c);
@@ -957,7 +975,7 @@ static void *new_read(struct dm_bufio_cl
 	struct dm_buffer *b;
 
 	dm_bufio_lock(c);
-	b = __bufio_new(c, block, nf, bp, &need_submit);
+	b = __bufio_new(c, block, nf, &need_submit);
 	dm_bufio_unlock(c);
 
 	if (!b || IS_ERR(b))
@@ -997,6 +1015,8 @@ void *dm_bufio_read(struct dm_bufio_clie
 }
 EXPORT_SYMBOL_GPL(dm_bufio_read);
 
+static void dm_bufio_release_unlocked(struct dm_buffer *b);
+
 void *dm_bufio_new(struct dm_bufio_client *c, sector_t block,
 		   struct dm_buffer **bp)
 {
@@ -1006,13 +1026,35 @@ void *dm_bufio_new(struct dm_bufio_clien
 }
 EXPORT_SYMBOL_GPL(dm_bufio_new);
 
-void dm_bufio_release(struct dm_buffer *b)
+void dm_bufio_prefetch(struct dm_bufio_client *c,
+		       sector_t block, unsigned n_blocks)
 {
-	struct dm_bufio_client *c = b->c;
+	struct blk_plug plug;
 
+	blk_start_plug(&plug);
 	dm_bufio_lock(c);
 
-	BUG_ON(test_bit(B_READING, &b->state));
+	for (; n_blocks--; block++) {
+		int need_submit;
+		struct dm_buffer *b;
+		b = __bufio_new(c, block, NF_PREFETCH, &need_submit);
+		if (unlikely(b != NULL)) {
+			if (need_submit)
+				submit_io(b, READ, b->block, read_endio);
+			dm_bufio_release_unlocked(b);
+		}
+
+	}
+
+	dm_bufio_unlock(c);
+	blk_finish_plug(&plug);
+}
+EXPORT_SYMBOL(dm_bufio_prefetch);
+
+static void dm_bufio_release_unlocked(struct dm_buffer *b)
+{
+	struct dm_bufio_client *c = b->c;
+
 	BUG_ON(!b->hold_count);
 
 	b->hold_count--;
@@ -1025,12 +1067,23 @@ void dm_bufio_release(struct dm_buffer *
 		 * invalid buffer.
 		 */
 		if ((b->read_error || b->write_error) &&
+		    !test_bit(B_READING, &b->state) &&
 		    !test_bit(B_WRITING, &b->state) &&
 		    !test_bit(B_DIRTY, &b->state)) {
 			__unlink_buffer(b);
 			__free_buffer_wake(b);
 		}
 	}
+}
+
+void dm_bufio_release(struct dm_buffer *b)
+{
+	struct dm_bufio_client *c = b->c;
+
+	dm_bufio_lock(c);
+
+	BUG_ON(test_bit(B_READING, &b->state));
+	dm_bufio_release_unlocked(b);
 
 	dm_bufio_unlock(c);
 }
@@ -1042,6 +1095,8 @@ void dm_bufio_mark_buffer_dirty(struct d
 
 	dm_bufio_lock(c);
 
+	BUG_ON(test_bit(B_READING, &b->state));
+
 	if (!test_and_set_bit(B_DIRTY, &b->state))
 		__relink_lru(b, LIST_DIRTY);
 
Index: linux-3.3-rc5-fast/drivers/md/dm-bufio.h
===================================================================
--- linux-3.3-rc5-fast.orig/drivers/md/dm-bufio.h	2012-03-03 05:19:33.000000000 +0100
+++ linux-3.3-rc5-fast/drivers/md/dm-bufio.h	2012-03-03 05:19:35.000000000 +0100
@@ -63,6 +63,14 @@ void *dm_bufio_new(struct dm_bufio_clien
 		   struct dm_buffer **bp);
 
 /*
+ * Prefetch the specified blocks to the cache.
+ * The function starts to read the blocks and returns without waiting for
+ * I/O to finish.
+ */
+void dm_bufio_prefetch(struct dm_bufio_client *c,
+		       sector_t block, unsigned n_blocks);
+
+/*
  * Release a reference obtained with dm_bufio_{read,get,new}. The data
  * pointer and dm_buffer pointer is no longer valid after this call.
  */

^ permalink raw reply	[flat|nested] 34+ messages in thread

* userspace hashing utility for dm-verity
  2012-03-04 19:18 ` [PATCH] dm: remake of the " Mikulas Patocka
@ 2012-03-04 19:35   ` Mikulas Patocka
  2012-03-06 21:59   ` [PATCH] dm: remake of the verity target Mandeep Singh Baines
  1 sibling, 0 replies; 34+ messages in thread
From: Mikulas Patocka @ 2012-03-04 19:35 UTC (permalink / raw)
  To: Mandeep Singh Baines
  Cc: linux-kernel, dm-devel, Alasdair G Kergon, Will Drewry,
	Elly Jones, Milan Broz, Olof Johansson, Steffen Klassert,
	Andrew Morton

This is a userspace utility that creates or verifies hashes for the
verity target.

The original utility was created by Google and it is located at 
http://git.chromium.org/chromiumos/platform/dm-verity.git

The original utility has some problems:
* the code is overengineered: they took the kernel code and built an
  emulation layer in userspace that emulates some of the kernel
  functions
* it doesn't use library implementations of hash functions; instead it
  provides its own md5, sha1, sha256 and sha512 implementations
* it is not portable (produces bad results on big-endian machines)

This is a much smaller implementation that is portable and uses the
OpenSSL crypto library.

This code creates a format compatible with the original Google code under
these conditions:
- data block size and hash block size are 4096
- salt has exactly 32 bytes (64 hex digits)

Example use:
Create a filesystem on /dev/sdc2 and fill it with some data. The block size
must be 4096. Unmount the filesystem.

Run: ./verity -c /dev/sdc2 /dev/sdc3 sha256 --salt
1234000000000000000000000000000000000000000000000000000000000000
- This creates the hash tree on /dev/sdc3 and prints the root hash.

Run: dmsetup -r create verity --table "0 `blockdev --getsize /dev/sdc2` 
verity 0 /dev/sdc2 /dev/sdc3 0 4096 sha256 
f4c97f1f2e6b6757d033b2062268e0b6b42cbe4ff5a5fcc0017305b145cae4de 
1234000000000000000000000000000000000000000000000000000000000000 "
(note: use the real hash reported by the "verity" tool instead of f4c9...)

mount -o ro -t ext2 /dev/mapper/verity /mnt/test

Now the device is mounted and the dm-verity target is verifying the hashes.
All data integrity depends only on the root hash
(f4c97f1f2e6b6757d033b2062268e0b6b42cbe4ff5a5fcc0017305b145cae4de) and the
salt (1234000000000000000000000000000000000000000000000000000000000000).
If either the data or the hash partition becomes silently corrupted and you
read the affected data, dm-verity will return -EIO.
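
For reference, every node in the tree, leaf or interior, is digested as
hash(block_data || salt); this is what makes the utility and the kernel
target interoperate. A minimal sketch of that computation, using the same
old-style OpenSSL EVP calls as the utility below (the helper name is mine,
not part of the utility):

static int block_digest(const EVP_MD *md,
			const unsigned char *block, size_t block_size,
			const unsigned char *salt, size_t salt_size,
			unsigned char *out)
{
	EVP_MD_CTX ctx;
	EVP_MD_CTX_init(&ctx);
	if (EVP_DigestInit_ex(&ctx, md, NULL) != 1 ||
	    EVP_DigestUpdate(&ctx, block, block_size) != 1 ||
	    EVP_DigestUpdate(&ctx, salt, salt_size) != 1 ||
	    EVP_DigestFinal_ex(&ctx, out, NULL) != 1)
		return -1;
	if (EVP_MD_CTX_cleanup(&ctx) != 1)
		return -1;
	return 0;
}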

Mikulas

---

/* link with -lpopt -lcrypto */

#define _FILE_OFFSET_BITS	64

#include <stdio.h>
#include <stdlib.h>
#include <stdarg.h>
#include <unistd.h>
#include <errno.h>
#include <string.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/ioctl.h>
#include <sys/mount.h>
#include <popt.h>
#include <openssl/evp.h>

#define DEFAULT_BLOCK_SIZE	4096
#define DM_VERITY_MAX_LEVELS	63

#define MODE_VERIFY	0
#define MODE_CREATE	1

static int mode = -1;

static const char *data_device;
static const char *hash_device;
static const char *hash_algorithm;
static const char *root_hash;

static int data_block_size = 0;
static int hash_block_size = 0;
static long long hash_start = 0;
static long long data_blocks = 0;
static const char *salt_string = NULL;

static FILE *data_file;
static FILE *hash_file;

static off_t data_file_blocks;
static off_t hash_file_blocks;
static off_t used_hash_blocks;

static const EVP_MD *evp;

static unsigned char *root_hash_bytes;
static unsigned char *calculated_digest;

static unsigned char *salt_bytes;
static unsigned salt_size;

static unsigned digest_size;
static unsigned char levels;
static unsigned char hash_per_block_bits;

static off_t hash_level_block[DM_VERITY_MAX_LEVELS];
static off_t hash_level_size[DM_VERITY_MAX_LEVELS];

static int retval = 0;

static void help(poptContext popt_context,
		enum poptCallbackReason reason,
		struct poptOption *key,
		const char *arg,
		void *data)
{
	poptPrintHelp(popt_context, stdout, 0);
	exit(0);
}

static struct poptOption popt_help_options[] = {
	{ NULL,			0,	POPT_ARG_CALLBACK, help, 0, NULL, NULL },
	{ "help",		'?',	POPT_ARG_NONE, NULL, 0, "Show help", NULL },
	POPT_TABLEEND
};

static struct poptOption popt_options[] = {
	{ NULL,			'\0', POPT_ARG_INCLUDE_TABLE, popt_help_options, 0, NULL, NULL },
	{ "create",		'c',	POPT_ARG_VAL, &mode, MODE_CREATE, "Create hash", NULL },
	{ "verify",		'v',	POPT_ARG_VAL, &mode, MODE_VERIFY, "Verify integrity", NULL },
	{ "data-block-size",	0, 	POPT_ARG_INT, &data_block_size, 0, "Block size on the data device", "bytes" },
	{ "hash-block-size",	0, 	POPT_ARG_INT, &hash_block_size, 0, "Block size on the hash device", "bytes" },
	{ "hash-start",		0,	POPT_ARG_LONGLONG, &hash_start, 0, "Starting sector on the hash device", "sectors" },
	{ "data-blocks",	0,	POPT_ARG_LONGLONG, &data_blocks, 0, "The number of blocks in the data file", "blocks" },
	{ "salt",		0,	POPT_ARG_STRING, &salt_string, 0, "Salt", "hex string" },
	POPT_TABLEEND
};

static void exit_err(const char *msg, ...)
{
	va_list args;
	va_start(args, msg);
	vfprintf(stderr, msg, args);
	va_end(args);
	fputc('\n', stderr);
	exit(2);
}

static void stream_err(FILE *f, const char *msg)
{
	if (ferror(f)) {
		perror(msg);
		exit(2);
	} else if (feof(f)) {
		exit_err("eof on %s", msg);
	} else {
		exit_err("unknown error on %s", msg);
	}
}

static void *xmalloc(size_t s)
{
	void *ptr = malloc(s);
	if (!ptr) exit_err("out of memory");
	return ptr;
}

static off_t get_size(FILE *f, const char *name)
{
	struct stat st;
	int h = fileno(f);
	if (h < 0) {
		perror("fileno");
		exit(2);
	}
	if (fstat(h, &st)) {
		perror("fstat");
		exit(2);
	}
	if (S_ISREG(st.st_mode)) {
		return st.st_size;
	} else if (S_ISBLK(st.st_mode)) {
		unsigned long long size64;
		unsigned long sizeul;
		if (!ioctl(h, BLKGETSIZE64, &size64)) {
			return_size64:
			if ((off_t)size64 < 0 || (off_t)size64 != size64) {
				size_overflow:
				exit_err("%s: device size overflow", name);
			}
			return size64;
		}
		if (!ioctl(h, BLKGETSIZE, &sizeul)) {
			size64 = (unsigned long long)sizeul * 512;
			if (size64 / 512 != sizeul) goto size_overflow;
			goto return_size64;
		}
		perror("BLKGETSIZE");
		exit(2);
	} else {
		exit_err("%s is not a file or a block device", name);
	}
	return -1;	/* never reached, shut up warning */
}

static void block_fseek(FILE *f, off_t block, int block_size)
{
	unsigned long long pos = (unsigned long long)block * block_size;
	if (pos / block_size != block ||
	    (off_t)pos < 0 ||
	    (off_t)pos != pos)
		exit_err("seek position overflow");
	if (fseeko(f, pos, SEEK_SET)) {
		perror("fseek");
		exit(2);
	}
}

static off_t verity_position_at_level(off_t block, int level)
{
	return block >> (level * hash_per_block_bits);
}

static void calculate_positions(void)
{
	unsigned long long hash_position;
	int i;
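
	/*
	 * Level 0 holds the digests of the data blocks and level i holds
	 * the digests of the level i - 1 blocks. The levels are laid out
	 * on the hash device from the top level down, so hash_level_block[i]
	 * is the first block of level i.
	 */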

	hash_per_block_bits = 0;
	while (((hash_block_size / digest_size) >> hash_per_block_bits) > 1)
		hash_per_block_bits++;
	if (!hash_per_block_bits)
		exit_err("at least two hashes must fit in a hash file block");
	levels = 0;

	if (data_file_blocks) {
		while (hash_per_block_bits * levels < 64 &&
		       (unsigned long long)(data_file_blocks - 1) >>
		       (hash_per_block_bits * levels))
			levels++;
	}

	if (levels > DM_VERITY_MAX_LEVELS) exit_err("too many tree levels");

	hash_position = hash_start * 512 / hash_block_size;
	for (i = levels - 1; i >= 0; i--) {
		off_t s;
		hash_level_block[i] = hash_position;
		s = verity_position_at_level(data_file_blocks, i);
		s = (s >> hash_per_block_bits) +
		    !!(s & ((1 << hash_per_block_bits) - 1));
		hash_level_size[i] = s;
		if (hash_position + s < hash_position ||
		    (off_t)(hash_position + s) < 0 ||
		    (off_t)(hash_position + s) != hash_position + s)
			exit_err("hash device offset overflow");
		hash_position += s;
	}
	used_hash_blocks = hash_position;
}

static void create_or_verify_stream(FILE *rd, FILE *wr, int block_size, off_t blocks)
{
	unsigned char *left_block = xmalloc(hash_block_size);
	unsigned char *data_buffer = xmalloc(block_size);
	unsigned char *read_digest = mode == MODE_VERIFY ? xmalloc(digest_size) : NULL;
	off_t blocks_to_write = (blocks >> hash_per_block_bits) +
				!!(blocks & ((1 << hash_per_block_bits) - 1));
	EVP_MD_CTX ctx;
	EVP_MD_CTX_init(&ctx);
	memset(left_block, 0, hash_block_size);
	while (blocks_to_write--) {
		unsigned x;
		unsigned left_bytes;
		for (x = 0; x < 1 << hash_per_block_bits; x++) {
			if (!blocks)
				break;
			blocks--;
			if (fread(data_buffer, block_size, 1, rd) != 1)
				stream_err(rd, "read");
			if (EVP_DigestInit_ex(&ctx, evp, NULL) != 1)
				exit_err("EVP_DigestInit_ex failed");
			if (EVP_DigestUpdate(&ctx, data_buffer, block_size) != 1)
				exit_err("EVP_DigestUpdate failed");
			if (EVP_DigestUpdate(&ctx, salt_bytes, salt_size) != 1)
				exit_err("EVP_DigestUpdate failed");
			if (EVP_DigestFinal_ex(&ctx, calculated_digest, NULL) != 1)
				exit_err("EVP_DigestFinal_ex failed");
			if (!wr)
				break;
			if (mode == MODE_VERIFY) {
				if (fread(read_digest, digest_size, 1, wr) != 1)
					stream_err(wr, "read");
				if (memcmp(read_digest, calculated_digest, digest_size)) {
					retval = 1;
					fprintf(stderr, "verification failed at position %lld in %s file\n", (long long)ftello(rd) - block_size, rd == data_file ? "data" : "metadata");
				}
			} else {
				if (fwrite(calculated_digest, digest_size, 1, wr) != 1)
					stream_err(wr, "write");
			}
		}
		left_bytes = hash_block_size - x * digest_size;
		if (left_bytes && wr) {
			if (mode == MODE_VERIFY) {
				if (fread(left_block, left_bytes, 1, wr) != 1)
					stream_err(wr, "read");
				for (x = 0; x < left_bytes; x++) if (left_block[x]) {
					retval = 1;
					fprintf(stderr, "spare area is not zeroed at position %lld\n", (long long)ftello(wr) - left_bytes);
				}
			} else {
				if (fwrite(left_block, left_bytes, 1, wr) != 1)
					stream_err(wr, "write");
			}
		}
	}
	if (mode != MODE_VERIFY && wr) {
		if (fflush(wr)) {
			perror("fflush");
			exit(1);
		}
		if (ferror(wr)) {
			stream_err(wr, "write");
		}
	}
	if (EVP_MD_CTX_cleanup(&ctx) != 1)
		exit_err("EVP_MD_CTX_cleanup failed");
	free(left_block);
	free(data_buffer);
	if (mode == MODE_VERIFY) free(read_digest);
}

static void create_or_verify(void)
{
	int i;
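	/*
	 * Build or check the tree bottom-up: level 0 is hashed from the
	 * data device and each higher level from the level below it; the
	 * final pass hashes the top level with wr == NULL, leaving the
	 * root hash in calculated_digest.
	 */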
	for (i = 0; i < levels; i++) {
		block_fseek(hash_file, hash_level_block[i], hash_block_size);
		if (!i) {
			block_fseek(data_file, 0, data_block_size);
			create_or_verify_stream(data_file, hash_file, data_block_size, data_file_blocks);
		} else {
			FILE *hash_file_2 = fopen(hash_device, "r");
			if (!hash_file_2) {
				perror(hash_device);
				exit(2);
			}
			block_fseek(hash_file_2, hash_level_block[i - 1], hash_block_size);
			create_or_verify_stream(hash_file_2, hash_file, hash_block_size, hash_level_size[i - 1]);
			fclose(hash_file_2);
		}
	}

	if (levels) {
		block_fseek(hash_file, hash_level_block[levels - 1], hash_block_size);
		create_or_verify_stream(hash_file, NULL, hash_block_size, 1);
	} else {
		block_fseek(data_file, 0, data_block_size);
		create_or_verify_stream(data_file, NULL, data_block_size, data_file_blocks);
	}

	if (mode == MODE_VERIFY) {
		if (memcmp(calculated_digest, root_hash_bytes, digest_size)) {
			fprintf(stderr, "verification failed in the root block\n");
			retval = 1;
		}
		if (!retval)
			fprintf(stderr, "hash successfully verified\n");
	} else {
		if (fsync(fileno(hash_file))) {
			perror("fsync");
			exit(1);
		}
		printf("hash device size: %llu\n", (unsigned long long)used_hash_blocks * hash_block_size);
		printf("data block size %u, hash block size %u, %u tree levels\n", data_block_size, hash_block_size, levels);
		printf("root hash: ");
		for (i = 0; i < digest_size; i++) printf("%02x", calculated_digest[i]);
		printf("\n");
		printf("device mapper target line: 0 %llu verity 0 %s %s %llu %u %s ",
			(unsigned long long)data_file_blocks * data_block_size / 512,
			data_device,
			hash_device,
			hash_start,
			data_block_size,
			hash_algorithm
			);
		for (i = 0; i < digest_size; i++) printf("%02x", calculated_digest[i]);
		printf(" ");
		if (!salt_size) printf("-");
		else for (i = 0; i < salt_size; i++) printf("%02x", salt_bytes[i]);
		if (hash_block_size != data_block_size) printf(" %u", hash_block_size);
		printf("\n");
		if (data_block_size == 4096 && hash_block_size == 4096 && salt_size == 32)
			printf("compatible with the original Google code\n");
		else {
			printf("incompatible with the original Google code:\n");
			if (!(data_block_size == 4096 && hash_block_size == 4096)) printf("\tdata and hash block size must be 4096\n");
			if (!(salt_size == 32)) printf("\tsalt must have exactly 32 bytes (64 hex digits)\n");
		}
	}
}

static void get_hex(const char *string, unsigned char **result, size_t len, const char *description)
{
	size_t rl = strlen(string);
	unsigned u;
	if (strspn(string, "0123456789ABCDEFabcdef") != rl)
		exit_err("invalid %s", description);
	if (rl != len * 2)
		exit_err("invalid length of %s", description);
	*result = xmalloc(len);
	memset(*result, 0, len);
	for (u = 0; u < rl; u++) {
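		/* map one hex digit to its value; works for 0-9, a-f and A-F */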
		unsigned char c = (string[u] & 15) + (string[u] > '9' ? 9 : 0);
		(*result)[u / 2] |= c << (((u & 1) ^ 1) << 2);
	}
}

int main(int argc, const char **argv)
{
	poptContext popt_context;
	int r;
	const char *s;

	popt_context = poptGetContext("verity", argc, argv, popt_options, 0);

	poptSetOtherOptionHelp(popt_context, "[-c | -v] <data device> <hash device> <algorithm> [<root hash> if verifying] [OPTION...]");

	if (argc <= 1) {
		poptPrintHelp(popt_context, stdout, 0);
		exit(1);
	}

	r = poptGetNextOpt(popt_context);
	if (r < -1) exit_err("bad option %s", poptBadOption(popt_context, 0));

	if (mode < 0) exit_err("verify or create mode not specified");

	if (!data_block_size) data_block_size = DEFAULT_BLOCK_SIZE;
	if (!hash_block_size) hash_block_size = data_block_size;

	if (data_block_size <= 0 || (data_block_size & (data_block_size - 1)))
		exit_err("invalid data block size");

	if (hash_block_size <= 0 || (hash_block_size & (hash_block_size - 1)))
		exit_err("invalid hash block size");

	if (hash_start < 0 ||
	    (unsigned long long)hash_start * 512 / 512 != hash_start) exit_err("invalid hash start");
	if (data_blocks < 0 || (off_t)data_blocks < 0 || (off_t)data_blocks != data_blocks) exit_err("invalid number of data blocks");

	data_device = poptGetArg(popt_context);
	if (!data_device) exit_err("data device is missing");

	hash_device = poptGetArg(popt_context);
	if (!hash_device) exit_err("metadata device is missing");

	hash_algorithm = poptGetArg(popt_context);
	if (!hash_algorithm) exit_err("hash algorithm not specified");

	if (mode == MODE_VERIFY) {
		root_hash = poptGetArg(popt_context);
		if (!root_hash) exit_err("root hash not specified");
	}

	s = poptGetArg(popt_context);
	if (s) exit_err("extra argument %s", s);

	data_file = fopen(data_device, "r");
	if (!data_file) {
		perror(data_device);
		exit(2);
	}

	hash_file = fopen(hash_device, mode == MODE_VERIFY ? "r" : "r+");
	if (!hash_file && errno == ENOENT && mode != MODE_VERIFY)
		hash_file = fopen(hash_device, "w+");
	if (!hash_file) {
		perror(hash_device);
		exit(2);
	}

	data_file_blocks = get_size(data_file, data_device) / data_block_size;
	hash_file_blocks = get_size(hash_file, hash_device) / hash_block_size;

	if ((unsigned long long)hash_start * 512 % hash_block_size) exit_err("hash start not aligned on block size");
	if (data_file_blocks < data_blocks) exit_err("data file is too small");
	if (data_blocks) data_file_blocks = data_blocks;

	OpenSSL_add_all_digests();
	evp = EVP_get_digestbyname(hash_algorithm);
	if (!evp) exit_err("hash algorithm %s not found", hash_algorithm);
	digest_size = EVP_MD_size(evp);

	salt_size = 0;
	if (salt_string && *salt_string) {
		salt_size = strlen(salt_string) / 2;
		get_hex(salt_string, &salt_bytes, salt_size, "salt");
	}

	calculated_digest = xmalloc(digest_size);

	if (mode == MODE_VERIFY) {
		get_hex(root_hash, &root_hash_bytes, digest_size, "root_hash");
	}

	calculate_positions();

	create_or_verify();

	fclose(data_file);
	fclose(hash_file);

	if (salt_size)
		free(salt_bytes);
	free(calculated_digest);
	if (mode == MODE_VERIFY)
		free(root_hash_bytes);
	poptFreeContext(popt_context);

	return retval;
}

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH] dm: remake of the verity target
  2012-03-04 19:18 ` [PATCH] dm: remake of the " Mikulas Patocka
  2012-03-04 19:35   ` userspace hashing utility for dm-verity Mikulas Patocka
@ 2012-03-06 21:59   ` Mandeep Singh Baines
  2012-03-08 22:21     ` workqueues and percpu (was: [PATCH] dm: remake of the verity target) Mikulas Patocka
  2012-03-13 22:20     ` [PATCH] dm: remake of the verity target Mikulas Patocka
  1 sibling, 2 replies; 34+ messages in thread
From: Mandeep Singh Baines @ 2012-03-06 21:59 UTC (permalink / raw)
  To: Mikulas Patocka
  Cc: Mandeep Singh Baines, linux-kernel, dm-devel, Alasdair G Kergon,
	Will Drewry, Elly Jones, Milan Broz, Olof Johansson,
	Steffen Klassert, Andrew Morton

Mikulas Patocka (mpatocka@redhat.com) wrote:
> Hi
> 
> Here I'm posting a remake of the dm-verity target originally developed by 
> Mandeep Singh Baines. It has a compatible target line syntax, so it can be 
> used as a drop-in replacement.
> 
> The major difference is that this driver uses dm-bufio to manage cache of 
> hash blocks in memory. We talked with Alasdair about using dm-bufio for 
> caching and I concluded that it is simpler to rewrite the code rather than 
> transform the original Google code with patches.
> 
> Because of dm-bufio, memory consumption is not dependent on device size 
> (if the system starts running out of memory, dm-bufio discards cached 
> blocks loaded in memory).
> 
> This implementation is smaller, because (unlike the original 
> implementation), it doesn't create persistent in-memory structures.
> 
> This implementation is faster than the original. It uses clustered 
> prefetch to prefetch several hash blocks at once, greatly improving 
> performance if the data partition and hash partition are located on the 
> same disk (the prefetch cluster can be set with prefetch_cluster module 
> parameter).
> 
> Mikulas
> 

Hi Mikulas,

This is some nice work. I like that you've been able to abstract a lot
of the hash buffer management with dm-bufio. You got rid of the I/O queue.
I've been meaning to do that for a while. The prefetch is also nice.
We planned to do this but I decided not to do it now in order to get the
base functionality in:

http://crosbug.com/25441

However, there are some things that I don't like. I don't like writing
comments either, but you have none. You also removed our documentation.
You are allocating a complete shash_desc per I/O; we only allocate one
per CPU. We short-circuit the hash check at any level; your
implementation can only short-circuit at the lowest level.
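
For illustration, per-CPU allocation of the descriptor could look roughly
like this (a sketch with a hypothetical hash_desc field, not the actual
Chromium code):

	/* in the constructor: one descriptor per CPU, sized for the tfm */
	v->hash_desc = __alloc_percpu(sizeof(struct shash_desc) +
				      crypto_shash_descsize(v->tfm),
				      __alignof__(struct shash_desc));
	if (!v->hash_desc)
		return -ENOMEM;

	/* in the verify path, on a CPU-bound worker: */
	struct shash_desc *desc = this_cpu_ptr(v->hash_desc);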

I'd like to propose that we get the version we sent upstream and then work
together on adding some of your enhancements incrementally. Other than
the changes we've made to cleanup for upstreaming, the version I
submitted is the code we are using in production.

I'm happy to add prefetch now if that is required for merging.

What do you think?

Regards,
Mandeep

> ---
> 
> Disk: Maxtor Atlas 15k2 146GB, 3.8GB partition
> 
> The following tests were made:
> dd if=/dev/mapper/verity of=/dev/null bs=1M
> fsck.ext2 -fn -C 0 /dev/mapper/verity
> fio --rw=randread --size=200M --bsrange=1k-128k --filename=/dev/mapper/verity --name=job1 --name=job2 --name=job3 --name=job4
> 
> raw partition:
> dd:			42s
> fsck:			11s
> fio:			13s
> 
> original google dm-verity implementation:
> dd (first time):	571s
> dd (next time):		46s
> fsck (first time):	39s
> fsck (next time):	10s
> fio (first time):	26s
> fio (next time):	24s
> 
> my dm-verity implementation:
> dd (first time):	45s
> dd (next time):		43s
> fsck (first time):	11s
> fsck (next time):	11s
> fio (first time):	14s
> fio (next time):	14s
> 
> ---
> 
> Remake of the google dm-verity patch.
> 
> Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
> 
> ---
>  drivers/md/Kconfig     |   17 +
>  drivers/md/Makefile    |    1 
>  drivers/md/dm-bufio.c  |   97 ++++--
>  drivers/md/dm-bufio.h  |    8 
>  drivers/md/dm-verity.c |  785 +++++++++++++++++++++++++++++++++++++++++++++++++
>  5 files changed, 887 insertions(+), 21 deletions(-)
> 
> Index: linux-3.3-rc5-fast/drivers/md/Kconfig
> ===================================================================
> --- linux-3.3-rc5-fast.orig/drivers/md/Kconfig	2012-03-03 05:19:33.000000000 +0100
> +++ linux-3.3-rc5-fast/drivers/md/Kconfig	2012-03-03 19:39:30.000000000 +0100
> @@ -388,4 +388,21 @@ config DM_FLAKEY
>         ---help---
>           A target that intermittently fails I/O for debugging purposes.
>  
> +config DM_VERITY
> +	tristate "Verity target support"
> +	depends on BLK_DEV_DM
> +	select CRYPTO
> +	select CRYPTO_HASH
> +	select DM_BUFIO
> +	---help---
> +	  This device-mapper target allows you to create a device that
> +	  transparently integrity checks the data on it. You'll need to
> +	  activate the digests you're going to use in the cryptoapi
> +	  configuration.
> +
> +	  To compile this code as a module, choose M here: the module will
> +	  be called dm-verity.
> +
> +	  If unsure, say N.
> +
>  endif # MD
> Index: linux-3.3-rc5-fast/drivers/md/Makefile
> ===================================================================
> --- linux-3.3-rc5-fast.orig/drivers/md/Makefile	2012-03-03 05:19:33.000000000 +0100
> +++ linux-3.3-rc5-fast/drivers/md/Makefile	2012-03-03 19:39:30.000000000 +0100
> @@ -29,6 +29,7 @@ obj-$(CONFIG_MD_FAULTY)		+= faulty.o
>  obj-$(CONFIG_BLK_DEV_MD)	+= md-mod.o
>  obj-$(CONFIG_BLK_DEV_DM)	+= dm-mod.o
>  obj-$(CONFIG_DM_BUFIO)		+= dm-bufio.o
> +obj-$(CONFIG_DM_VERITY)		+= dm-verity.o
>  obj-$(CONFIG_DM_CRYPT)		+= dm-crypt.o
>  obj-$(CONFIG_DM_DELAY)		+= dm-delay.o
>  obj-$(CONFIG_DM_FLAKEY)		+= dm-flakey.o
> Index: linux-3.3-rc5-fast/drivers/md/dm-verity.c
> ===================================================================
> --- /dev/null	1970-01-01 00:00:00.000000000 +0000
> +++ linux-3.3-rc5-fast/drivers/md/dm-verity.c	2012-03-03 05:51:10.000000000 +0100
> @@ -0,0 +1,785 @@
> +/*
> + * Copyright (C) 2012 Red Hat, Inc.
> + *
> + * Author: Mikulas Patocka <mpatocka@redhat.com>
> + *
> + * Based on Chromium dm-verity driver (C) 2011 The Chromium OS Authors
> + *
> + * This file is released under the GPLv2.
> + *
> + * Device mapper target parameters:
> + *	<version>	0
> + *	<data device>
> + *	<hash device>
> + *	<hash start>	(typically 0)
> + *	<block size>	(typically 4096)
> + *	<algorithm>
> + *	<digest>
> + *	optional parameters:
> + *		<salt> (should have 32 bytes for compatibility with Google code)
> + *		<hash block size> (by default it is the same as data block size)
> + *
> + * In the file "/sys/module/dm_verity/parameters/prefetch_cluster" you can set
> + * default prefetch value. Data are read in "prefetch_cluster" chunks from the
> + * hash device. Prefetch cluster greatly improves performance when data and hash
> + * are on the same disk on different partitions.
> + */
> +
> +#include <linux/module.h>
> +#include <linux/device-mapper.h>
> +#include <crypto/hash.h>
> +#include "dm-bufio.h"
> +
> +#define DM_MSG_PREFIX			"verity"
> +
> +#define DM_VERITY_IO_VEC_INLINE		16
> +#define DM_VERITY_MEMPOOL_SIZE		4
> +#define DM_VERITY_PREFETCH_SIZE		262144
> +
> +#define DM_VERITY_MAX_LEVELS		63
> +
> +static unsigned prefetch_cluster = DM_VERITY_PREFETCH_SIZE;
> +
> +module_param_named(prefetch_cluster, prefetch_cluster, uint, S_IRUGO | S_IWUSR);
> +
> +struct dm_verity {
> +	struct dm_dev *data_dev;
> +	struct dm_dev *hash_dev;
> +	struct dm_target *ti;
> +	struct dm_bufio_client *bufio;
> +	char *alg_name;
> +	struct crypto_shash *tfm;
> +	u8 *root_digest;
> +	u8 *salt;
> +	unsigned salt_size;
> +	sector_t data_start;
> +	sector_t hash_start;
> +	sector_t data_blocks;
> +	sector_t hash_blocks;
> +	unsigned char data_dev_block_bits;
> +	unsigned char hash_dev_block_bits;
> +	unsigned char hash_per_block_bits;
> +	unsigned char levels;
> +	unsigned digest_size;
> +	unsigned shash_descsize;
> +
> +	mempool_t *io_mempool;
> +	mempool_t *vec_mempool;
> +
> +	struct workqueue_struct *verify_wq;
> +
> +	sector_t hash_level_block[DM_VERITY_MAX_LEVELS];
> +};
> +
> +struct dm_verity_io {
> +	struct dm_verity *v;
> +	struct bio *bio;
> +	bio_end_io_t *orig_bi_end_io;
> +	void *orig_bi_private;
> +	sector_t block;
> +	unsigned n_blocks;
> +	struct bio_vec *io_vec;
> +	unsigned io_vec_size;
> +	struct work_struct work;
> +	struct bio_vec io_vec_inline[DM_VERITY_IO_VEC_INLINE];
> +	/* u8 hash_desc[crypto_shash_descsize(v->tfm)]; */
> +	/* u8 real_digest[v->digest_size]; */
> +	/* u8 want_digest[v->digest_size]; */
> +};
> +
> +static struct shash_desc *io_hash_desc(struct dm_verity *v, struct dm_verity_io *io)
> +{
> +	return (struct shash_desc *)(io + 1);
> +}
> +
> +static u8 *io_real_digest(struct dm_verity *v, struct dm_verity_io *io)
> +{
> +	return (u8 *)(io + 1) + v->shash_descsize;
> +}
> +
> +static u8 *io_want_digest(struct dm_verity *v, struct dm_verity_io *io)
> +{
> +	return (u8 *)(io + 1) + v->shash_descsize + v->digest_size;
> +}
> +
> +struct buffer_aux {
> +	int hash_verified;
> +};
> +
> +static void dm_bufio_alloc_callback(struct dm_buffer *buf)
> +{
> +	struct buffer_aux *aux = dm_bufio_get_aux_data(buf);
> +	aux->hash_verified = 0;
> +}
> +
> +static sector_t verity_map_sector(struct dm_verity *v, sector_t bi_sector)
> +{
> +	return v->data_start + dm_target_offset(v->ti, bi_sector);
> +}
> +
> +static sector_t verity_position_at_level(struct dm_verity *v, sector_t block,
> +					 int level)
> +{
> +	return block >> (level * v->hash_per_block_bits);
> +}
> +
> +static void verity_hash_at_level(struct dm_verity *v, sector_t block, int level,
> +				 sector_t *hash_block, unsigned *offset)
> +{
> +	sector_t position = verity_position_at_level(v, block, level);
> +
> +	*hash_block = v->hash_level_block[level] + (position >> v->hash_per_block_bits);
> +	if (offset)
> +		*offset = v->digest_size * (position & ((1 << v->hash_per_block_bits) - 1));
> +}
> +
> +static int verity_verify_level(struct dm_verity_io *io, sector_t block,
> +			       int level, int skip_unverified)
> +{
> +	struct dm_verity *v = io->v;
> +	struct dm_buffer *buf;
> +	struct buffer_aux *aux;
> +	u8 *data;
> +	int r;
> +	sector_t hash_block;
> +	unsigned offset;
> +
> +	verity_hash_at_level(v, block, level, &hash_block, &offset);
> +
> +	data = dm_bufio_read(v->bufio, hash_block, &buf);
> +	if (unlikely(IS_ERR(data)))
> +		return PTR_ERR(data);
> +
> +	aux = dm_bufio_get_aux_data(buf);
> +
> +	if (!aux->hash_verified) {
> +		struct shash_desc *desc;
> +		u8 *result;
> +
> +		if (skip_unverified) {
> +			r = 1;
> +			goto release_ret_r;
> +		}
> +
> +		desc = io_hash_desc(v, io);
> +		desc->tfm = v->tfm;
> +		desc->flags = CRYPTO_TFM_REQ_MAY_SLEEP;
> +		r = crypto_shash_init(desc);
> +		if (r < 0) {
> +			DMERR("crypto_shash_init failed: %d", r);
> +			goto release_ret_r;
> +		}
> +
> +		r = crypto_shash_update(desc, data, 1 << v->hash_dev_block_bits);
> +		if (r < 0) {
> +			DMERR("crypto_shash_update failed: %d", r);
> +			goto release_ret_r;
> +		}
> +
> +		r = crypto_shash_update(desc, v->salt, v->salt_size);
> +		if (r < 0) {
> +			DMERR("crypto_shash_update failed: %d", r);
> +			goto release_ret_r;
> +		}
> +
> +		result = io_real_digest(v, io);
> +		r = crypto_shash_final(desc, result);
> +		if (r < 0) {
> +			DMERR("crypto_shash_final failed: %d", r);
> +			goto release_ret_r;
> +		}
> +		if (unlikely(memcmp(result, io_want_digest(v, io), v->digest_size))) {
> +			DMERR_LIMIT("metadata block %llu is corrupted",
> +				(unsigned long long)hash_block);
> +			r = -EIO;
> +			goto release_ret_r;
> +		} else
> +			aux->hash_verified = 1;
> +	}
> +
> +	data += offset;
> +
> +	memcpy(io_want_digest(v, io), data, v->digest_size);
> +
> +	dm_bufio_release(buf);
> +	return 0;
> +
> +release_ret_r:
> +	dm_bufio_release(buf);
> +	return r;
> +}
> +
> +static int verity_verify_io(struct dm_verity_io *io)
> +{
> +	struct dm_verity *v = io->v;
> +	unsigned b;
> +	int i;
> +	unsigned vector = 0, offset = 0;
> +	for (b = 0; b < io->n_blocks; b++) {
> +		struct shash_desc *desc;
> +		u8 *result;
> +		int r;
> +		unsigned todo;
> +
> +		if (likely(v->levels)) {
> +			int r = verity_verify_level(io, io->block + b, 0, 1);
> +			if (likely(!r))
> +				goto test_block_hash;
> +			if (r < 0)
> +				return r;
> +		}
> +
> +		memcpy(io_want_digest(v, io), v->root_digest, v->digest_size);
> +
> +		for (i = v->levels - 1; i >= 0; i--) {
> +			int r = verity_verify_level(io, io->block + b, i, 0);
> +			if (unlikely(r))
> +				return r;
> +		}
> +
> +test_block_hash:
> +		desc = io_hash_desc(v, io);
> +		desc->tfm = v->tfm;
> +		desc->flags = CRYPTO_TFM_REQ_MAY_SLEEP;
> +		r = crypto_shash_init(desc);
> +		if (r < 0) {
> +			DMERR("crypto_shash_init failed: %d", r);
> +			return r;
> +		}
> +
> +		todo = 1 << v->data_dev_block_bits;
> +		do {
> +			struct bio_vec *bv;
> +			u8 *page;
> +			unsigned len;
> +
> +			BUG_ON(vector >= io->io_vec_size);
> +			bv = &io->io_vec[vector];
> +			page = kmap_atomic(bv->bv_page, KM_USER0);
> +			len = bv->bv_len - offset;
> +			if (likely(len >= todo))
> +				len = todo;
> +			r = crypto_shash_update(desc,
> +					page + bv->bv_offset + offset, len);
> +			kunmap_atomic(page, KM_USER0);
> +			if (r < 0) {
> +				DMERR("crypto_shash_update failed: %d", r);
> +				return r;
> +			}
> +			offset += len;
> +			if (likely(offset == bv->bv_len)) {
> +				offset = 0;
> +				vector++;
> +			}
> +			todo -= len;
> +		} while (todo);
> +
> +		r = crypto_shash_update(desc, v->salt, v->salt_size);
> +		if (r < 0) {
> +			DMERR("crypto_shash_update failed: %d", r);
> +			return r;
> +		}
> +
> +		result = io_real_digest(v, io);
> +		r = crypto_shash_final(desc, result);
> +		if (r < 0) {
> +			DMERR("crypto_shash_final failed: %d", r);
> +			return r;
> +		}
> +		if (unlikely(memcmp(result, io_want_digest(v, io), v->digest_size))) {
> +			DMERR_LIMIT("data block %llu is corrupted",
> +				(unsigned long long)(io->block + b));
> +			return -EIO;
> +		}
> +	}
> +	BUG_ON(vector != io->io_vec_size);
> +	BUG_ON(offset);
> +	return 0;
> +}
> +
> +static void verity_finish_io(struct dm_verity_io *io, int error)
> +{
> +	struct bio *bio = io->bio;
> +	struct dm_verity *v = io->v;
> +
> +	bio->bi_end_io = io->orig_bi_end_io;
> +	bio->bi_private = io->orig_bi_private;
> +
> +	if (io->io_vec != io->io_vec_inline)
> +		mempool_free(io->io_vec, v->vec_mempool);
> +	mempool_free(io, v->io_mempool);
> +
> +	bio_endio(bio, error);
> +}
> +
> +static void verity_work(struct work_struct *w)
> +{
> +	struct dm_verity_io *io = container_of(w, struct dm_verity_io, work);
> +
> +	verity_finish_io(io, verity_verify_io(io));
> +}
> +
> +static void verity_end_io(struct bio *bio, int error)
> +{
> +	struct dm_verity_io *io = bio->bi_private;
> +	if (error) {
> +		verity_finish_io(io, error);
> +		return;
> +	}
> +
> +	INIT_WORK(&io->work, verity_work);
> +	queue_work(io->v->verify_wq, &io->work);
> +}
> +
> +static void verity_prefetch_io(struct dm_verity *v, struct dm_verity_io *io)
> +{
> +	int i;
> +	for (i = v->levels - 2; i >= 0; i--) {
> +		sector_t hash_block_start;
> +		sector_t hash_block_end;
> +		verity_hash_at_level(v, io->block, i, &hash_block_start, NULL);
> +		verity_hash_at_level(v, io->block + io->n_blocks - 1, i, &hash_block_end, NULL);
> +		if (!i) {
> +			unsigned cluster = prefetch_cluster;
> +			/* barrier to stop GCC from re-reading prefetch_cluster */
> +			barrier();
> +			cluster >>= v->data_dev_block_bits;
> +			if (unlikely(!cluster))
> +				goto no_prefetch_cluster;
> +			if (unlikely(cluster & (cluster - 1)))
> +				cluster = 1 << (fls(cluster) - 1);
> +
> +			hash_block_start &= ~(sector_t)(cluster - 1);
> +			hash_block_end |= cluster - 1;
> +			if (unlikely(hash_block_end >= v->hash_blocks))
> +				hash_block_end = v->hash_blocks - 1;
> +		}
> +no_prefetch_cluster:
> +		dm_bufio_prefetch(v->bufio, hash_block_start,
> +					hash_block_end - hash_block_start + 1);
> +	}
> +}
> +
> +static int verity_map(struct dm_target *ti, struct bio *bio,
> +		      union map_info *map_context)
> +{
> +	struct dm_verity *v = ti->private;
> +	struct dm_verity_io *io;
> +
> +	if (((unsigned)bio->bi_sector | bio_sectors(bio)) &
> +	    ((1 << (v->data_dev_block_bits - SECTOR_SHIFT)) - 1)) {
> +		DMERR_LIMIT("unaligned io");
> +		return -EIO;
> +	}
> +
> +	if ((bio->bi_sector + bio_sectors(bio)) >>
> +	    (v->data_dev_block_bits - SECTOR_SHIFT) > v->data_blocks) {
> +		DMERR_LIMIT("io out of range");
> +		return -EIO;
> +	}
> +
> +	if (bio_data_dir(bio) == WRITE)
> +		return -EIO;
> +
> +	io = mempool_alloc(v->io_mempool, GFP_NOIO);
> +	io->v = v;
> +	io->bio = bio;
> +	io->orig_bi_end_io = bio->bi_end_io;
> +	io->orig_bi_private = bio->bi_private;
> +	io->block = bio->bi_sector >> (v->data_dev_block_bits - SECTOR_SHIFT);
> +	io->n_blocks = bio->bi_size >> v->data_dev_block_bits;
> +
> +	bio->bi_end_io = verity_end_io;
> +	bio->bi_private = io;
> +	bio->bi_bdev = v->data_dev->bdev;
> +	bio->bi_sector = verity_map_sector(v, bio->bi_sector);
> +
> +	io->io_vec_size = bio->bi_vcnt - bio->bi_idx;
> +	if (io->io_vec_size < DM_VERITY_IO_VEC_INLINE)
> +		io->io_vec = io->io_vec_inline;
> +	else
> +		io->io_vec = mempool_alloc(v->vec_mempool, GFP_NOIO);
> +	memcpy(io->io_vec, bio_iovec(bio),
> +	       io->io_vec_size * sizeof(struct bio_vec));
> +
> +	verity_prefetch_io(v, io);
> +
> +	generic_make_request(bio);
> +
> +	return DM_MAPIO_SUBMITTED;
> +}
> +
> +static int verity_status(struct dm_target *ti, status_type_t type,
> +			 char *result, unsigned maxlen)
> +{
> +	struct dm_verity *v = ti->private;
> +	unsigned sz = 0;
> +	unsigned x;
> +
> +	switch (type) {
> +	case STATUSTYPE_INFO:
> +		result[0] = 0;
> +		break;
> +	case STATUSTYPE_TABLE:
> +		DMEMIT("%u %s %s %llu %u %s ",
> +			0,
> +			v->data_dev->name,
> +			v->hash_dev->name,
> +			(unsigned long long)v->hash_start << (v->hash_dev_block_bits - SECTOR_SHIFT),
> +			1 << v->data_dev_block_bits,
> +			v->alg_name
> +			);
> +		for (x = 0; x < v->digest_size; x++)
> +			DMEMIT("%02x", v->root_digest[x]);
> +		DMEMIT(" ");
> +		if (!v->salt_size)
> +			DMEMIT("-");
> +		else
> +			for (x = 0; x < v->salt_size; x++)
> +				DMEMIT("%02x", v->salt[x]);
> +		DMEMIT(" %u", 1 << v->hash_dev_block_bits);
> +		break;
> +	}
> +	return 0;
> +}
> +
> +static int verity_ioctl(struct dm_target *ti, unsigned cmd,
> +			unsigned long arg)
> +{
> +	struct dm_verity *v = ti->private;
> +	int r = 0;
> +
> +	if (v->data_start ||
> +	    ti->len != i_size_read(v->data_dev->bdev->bd_inode) >> SECTOR_SHIFT)
> +		r = scsi_verify_blk_ioctl(NULL, cmd);
> +
> +	return r ? : __blkdev_driver_ioctl(v->data_dev->bdev, v->data_dev->mode,
> +				     cmd, arg);
> +}
> +
> +static int verity_merge(struct dm_target *ti, struct bvec_merge_data *bvm,
> +			struct bio_vec *biovec, int max_size)
> +{
> +	struct dm_verity *v = ti->private;
> +	struct request_queue *q = bdev_get_queue(v->data_dev->bdev);
> +
> +	if (!q->merge_bvec_fn)
> +		return max_size;
> +
> +	bvm->bi_bdev = v->data_dev->bdev;
> +	bvm->bi_sector = verity_map_sector(v, bvm->bi_sector);
> +
> +	return min(max_size, q->merge_bvec_fn(q, bvm, biovec));
> +}
> +
> +static int verity_iterate_devices(struct dm_target *ti,
> +				  iterate_devices_callout_fn fn, void *data)
> +{
> +	struct dm_verity *v = ti->private;
> +	return fn(ti, v->data_dev, v->data_start, ti->len, data);
> +}
> +
> +static void verity_io_hints(struct dm_target *ti, struct queue_limits *limits)
> +{
> +	struct dm_verity *v = ti->private;
> +
> +	if (limits->logical_block_size < 1 << v->data_dev_block_bits)
> +		limits->logical_block_size = 1 << v->data_dev_block_bits;
> +	if (limits->physical_block_size < 1 << v->data_dev_block_bits)
> +		limits->physical_block_size = 1 << v->data_dev_block_bits;
> +	blk_limits_io_min(limits, limits->logical_block_size);
> +}
> +
> +static void verity_dtr(struct dm_target *ti);
> +
> +static int verity_ctr(struct dm_target *ti, unsigned argc, char **argv)
> +{
> +	struct dm_verity *v;
> +	unsigned num;
> +	unsigned long long hs;
> +	int r;
> +	int i;
> +	sector_t hash_position;
> +	char dummy;
> +
> +	v = kzalloc(sizeof(struct dm_verity), GFP_KERNEL);
> +	if (!v) {
> +		ti->error = "Cannot allocate verity structure";
> +		return -ENOMEM;
> +	}
> +	ti->private = v;
> +	v->ti = ti;
> +
> +	if ((dm_table_get_mode(ti->table) & ~FMODE_READ) != 0) {
> +		ti->error = "Device must be readonly";
> +		r = -EINVAL;
> +		goto bad;
> +	}
> +
> +	if (argc < 7) {
> +		ti->error = "Not enough arguments";
> +		r = -EINVAL;
> +		goto bad;
> +	}
> +
> +	if (sscanf(argv[0], "%u%c", &num, &dummy) != 1 ||
> +	    num != 0) {
> +		ti->error = "Invalid version";
> +		r = -EINVAL;
> +		goto bad;
> +	}
> +
> +	r = dm_get_device(ti, argv[1], FMODE_READ, &v->data_dev);
> +	if (r) {
> +		ti->error = "Data device lookup failed";
> +		goto bad;
> +	}
> +
> +	r = dm_get_device(ti, argv[2], FMODE_READ, &v->hash_dev);
> +	if (r) {
> +		ti->error = "Hash device lookup failed";
> +		goto bad;
> +	}
> +
> +	if (sscanf(argv[3], "%llu%c", &hs, &dummy) != 1 ||
> +	    hs != (sector_t)hs) {
> +		ti->error = "Invalid hash start";
> +		r = -EINVAL;
> +		goto bad;
> +	}
> +
> +	if (sscanf(argv[4], "%u%c", &num, &dummy) != 1 ||
> +	    !num || (num & (num - 1)) ||
> +	    num < bdev_logical_block_size(v->data_dev->bdev) ||
> +	    num > PAGE_SIZE) {
> +		ti->error = "Invalid data device block size";
> +		r = -EINVAL;
> +		goto bad;
> +	}
> +	v->data_dev_block_bits = ffs(num) - 1;
> +	v->hash_dev_block_bits = ffs(num) - 1;
> +
> +	v->alg_name = kstrdup(argv[5], GFP_KERNEL);
> +	if (!v->alg_name) {
> +		ti->error = "Cannot allocate algorithm name";
> +		r = -ENOMEM;
> +		goto bad;
> +	}
> +
> +	v->tfm = crypto_alloc_shash(v->alg_name, 0, 0);
> +	if (IS_ERR(v->tfm)) {
> +		ti->error = "Cannot initialize hash function";
> +		r = PTR_ERR(v->tfm);
> +		v->tfm = NULL;
> +		goto bad;
> +	}
> +	v->digest_size = crypto_shash_digestsize(v->tfm);
> +	if ((1 << v->hash_dev_block_bits) < v->digest_size * 2) {
> +		ti->error = "Digest size too big";
> +		r = -EINVAL;
> +		goto bad;
> +	}
> +	v->shash_descsize =
> +		sizeof(struct shash_desc) + crypto_shash_descsize(v->tfm);
> +
> +	v->root_digest = kmalloc(v->digest_size, GFP_KERNEL);
> +	if (!v->root_digest) {
> +		ti->error = "Cannot allocate root digest";
> +		r = -ENOMEM;
> +		goto bad;
> +	}
> +	if (strlen(argv[6]) != v->digest_size * 2 ||
> +	    hex2bin(v->root_digest, argv[6], v->digest_size)) {
> +		ti->error = "Invalid root digest";
> +		r = -EINVAL;
> +		goto bad;
> +	}
> +
> +	if (argc > 7 && strcmp(argv[7], "-")) {
> +		v->salt_size = strlen(argv[7]) / 2;
> +		v->salt = kmalloc(v->salt_size, GFP_KERNEL);
> +		if (!v->salt) {
> +			ti->error = "Cannot allocate salt";
> +			r = -ENOMEM;
> +			goto bad;
> +		}
> +		if (strlen(argv[7]) != v->salt_size * 2 ||
> +		    hex2bin(v->salt, argv[7], v->salt_size)) {
> +			ti->error = "Invalid salt";
> +			r = -EINVAL;
> +			goto bad;
> +		}
> +	}
> +
> +	if (argc > 8) {
> +		if (sscanf(argv[8], "%u%c", &num, &dummy) != 1 ||
> +		    !num || (num & (num - 1)) ||
> +		    num < bdev_logical_block_size(v->hash_dev->bdev) ||
> +		    num > INT_MAX) {
> +			ti->error = "Invalid hash device block size";
> +			r = -EINVAL;
> +			goto bad;
> +		}
> +		v->hash_dev_block_bits = ffs(num) - 1;
> +	}
> +
> +	if (hs & ((1 << (v->hash_dev_block_bits - SECTOR_SHIFT)) - 1)) {
> +		ti->error = "Hash start not aligned on block boundary";
> +		r = -EINVAL;
> +		goto bad;
> +	}
> +	v->hash_start = hs >> (v->hash_dev_block_bits - SECTOR_SHIFT);
> +
> +	if (ti->len > i_size_read(v->data_dev->bdev->bd_inode) >> SECTOR_SHIFT) {
> +		ti->error = "Data device is too small";
> +		r = -EINVAL;
> +		goto bad;
> +	}
> +
> +	if (ti->len & ((1 << (v->data_dev_block_bits - SECTOR_SHIFT)) - 1)) {
> +		ti->error = "Data device length is not aligned to block size";
> +		r = -EINVAL;
> +		goto bad;
> +	}
> +
> +	v->data_blocks = ti->len >> (v->data_dev_block_bits - SECTOR_SHIFT);
> +
> +	v->hash_per_block_bits =
> +		fls((1 << v->hash_dev_block_bits) / v->digest_size) - 1;
> +
> +	v->levels = 0;
> +	if (v->data_blocks)
> +		while (v->hash_per_block_bits * v->levels < 64 &&
> +		       (unsigned long long)(v->data_blocks - 1) >>
> +		       (v->hash_per_block_bits * v->levels))
> +			v->levels++;
> +
> +	if (v->levels > DM_VERITY_MAX_LEVELS) {
> +		ti->error = "Too many tree levels";
> +		r = -E2BIG;
> +		goto bad;
> +	}
> +
> +	hash_position = v->hash_start;
> +	for (i = v->levels - 1; i >= 0; i--) {
> +		sector_t s;
> +		v->hash_level_block[i] = hash_position;
> +		s = verity_position_at_level(v, v->data_blocks, i);
> +		s = (s >> v->hash_per_block_bits) +
> +		    !!(s & ((1 << v->hash_per_block_bits) - 1));
> +		if (hash_position + s < hash_position) {
> +			ti->error = "Hash device offset overflow";
> +			r = -E2BIG;
> +			goto bad;
> +		}
> +		hash_position += s;
> +	}
> +	v->hash_blocks = hash_position;
> +
> +	v->bufio = dm_bufio_client_create(v->hash_dev->bdev,
> +		1 << v->hash_dev_block_bits, 1, sizeof(struct buffer_aux),
> +		dm_bufio_alloc_callback, NULL);
> +	if (IS_ERR(v->bufio)) {
> +		ti->error = "Cannot initialize dm-bufio";
> +		r = PTR_ERR(v->bufio);
> +		v->bufio = NULL;
> +		goto bad;
> +	}
> +
> +	if (dm_bufio_get_device_size(v->bufio) < v->hash_blocks) {
> +		ti->error = "Hash device is too small";
> +		r = -E2BIG;
> +		goto bad;
> +	}
> +
> +	v->io_mempool = mempool_create_kmalloc_pool(DM_VERITY_MEMPOOL_SIZE,
> +	  sizeof(struct dm_verity_io) + v->shash_descsize + v->digest_size * 2);
> +	if (!v->io_mempool) {
> +		ti->error = "Cannot allocate io mempool";
> +		r = -ENOMEM;
> +		goto bad;
> +	}
> +
> +	v->vec_mempool = mempool_create_kmalloc_pool(DM_VERITY_MEMPOOL_SIZE,
> +					BIO_MAX_PAGES * sizeof(struct bio_vec));
> +	if (!v->vec_mempool) {
> +		ti->error = "Cannot allocate vector mempool";
> +		r = -ENOMEM;
> +		goto bad;
> +	}
> +
> +	/*v->verify_wq = alloc_workqueue("verityd", WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM, 1);*/
> +	/* WQ_UNBOUND greatly improves performance when running on ramdisk */
> +	v->verify_wq = alloc_workqueue("verityd", WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM | WQ_UNBOUND, num_online_cpus());
> +	if (!v->verify_wq) {
> +		ti->error = "Cannot allocate workqueue";
> +		r = -ENOMEM;
> +		goto bad;
> +	}
> +
> +	return 0;
> +
> +bad:
> +	verity_dtr(ti);
> +	return r;
> +}
> +
> +static void verity_dtr(struct dm_target *ti)
> +{
> +	struct dm_verity *v = ti->private;
> +
> +	if (v->verify_wq)
> +		destroy_workqueue(v->verify_wq);
> +	if (v->vec_mempool)
> +		mempool_destroy(v->vec_mempool);
> +	if (v->io_mempool)
> +		mempool_destroy(v->io_mempool);
> +	if (v->bufio)
> +		dm_bufio_client_destroy(v->bufio);
> +	kfree(v->salt);
> +	kfree(v->root_digest);
> +	if (v->tfm)
> +		crypto_free_shash(v->tfm);
> +	kfree(v->alg_name);
> +	if (v->hash_dev)
> +		dm_put_device(ti, v->hash_dev);
> +	if (v->data_dev)
> +		dm_put_device(ti, v->data_dev);
> +	kfree(v);
> +}
> +
> +static struct target_type verity_target = {
> +	.name		= "verity",
> +	.version	= {1, 0, 0},
> +	.module		= THIS_MODULE,
> +	.ctr		= verity_ctr,
> +	.dtr		= verity_dtr,
> +	.map		= verity_map,
> +	.status		= verity_status,
> +	.ioctl		= verity_ioctl,
> +	.merge		= verity_merge,
> +	.iterate_devices = verity_iterate_devices,
> +	.io_hints	= verity_io_hints,
> +};
> +
> +static int __init dm_verity_init(void)
> +{
> +	int r;
> +	r = dm_register_target(&verity_target);
> +	if (r < 0)
> +		DMERR("register failed %d", r);
> +	return r;
> +}
> +
> +static void __exit dm_verity_exit(void)
> +{
> +	dm_unregister_target(&verity_target);
> +}
> +
> +module_init(dm_verity_init);
> +module_exit(dm_verity_exit);
> +
> +MODULE_AUTHOR("Mikulas Patocka <mpatocka@redhat.com>");
> +MODULE_DESCRIPTION(DM_NAME " target for transparent disk integrity checking");
> +MODULE_LICENSE("GPL");
> +
> Index: linux-3.3-rc5-fast/drivers/md/dm-bufio.c
> ===================================================================
> --- linux-3.3-rc5-fast.orig/drivers/md/dm-bufio.c	2012-03-03 05:19:33.000000000 +0100
> +++ linux-3.3-rc5-fast/drivers/md/dm-bufio.c	2012-03-03 05:19:35.000000000 +0100
> @@ -579,7 +579,7 @@ static void write_endio(struct bio *bio,
>  	struct dm_buffer *b = container_of(bio, struct dm_buffer, bio);
>  
>  	b->write_error = error;
> -	if (error) {
> +	if (unlikely(error)) {
>  		struct dm_bufio_client *c = b->c;
>  		(void)cmpxchg(&c->async_write_error, 0, error);
>  	}
> @@ -698,13 +698,20 @@ static void __wait_for_free_buffer(struc
>  	dm_bufio_lock(c);
>  }
>  
> +enum new_flag {
> +	NF_FRESH = 0,
> +	NF_READ = 1,
> +	NF_GET = 2,
> +	NF_PREFETCH = 3
> +};
> +
>  /*
>   * Allocate a new buffer. If the allocation is not possible, wait until
>   * some other thread frees a buffer.
>   *
>   * May drop the lock and regain it.
>   */
> -static struct dm_buffer *__alloc_buffer_wait_no_callback(struct dm_bufio_client *c)
> +static struct dm_buffer *__alloc_buffer_wait_no_callback(struct dm_bufio_client *c, enum new_flag nf)
>  {
>  	struct dm_buffer *b;
>  
> @@ -727,6 +734,9 @@ static struct dm_buffer *__alloc_buffer_
>  				return b;
>  		}
>  
> +		if (nf == NF_PREFETCH)
> +			return NULL;
> +
>  		if (!list_empty(&c->reserved_buffers)) {
>  			b = list_entry(c->reserved_buffers.next,
>  				       struct dm_buffer, lru_list);
> @@ -744,9 +754,12 @@ static struct dm_buffer *__alloc_buffer_
>  	}
>  }
>  
> -static struct dm_buffer *__alloc_buffer_wait(struct dm_bufio_client *c)
> +static struct dm_buffer *__alloc_buffer_wait(struct dm_bufio_client *c, enum new_flag nf)
>  {
> -	struct dm_buffer *b = __alloc_buffer_wait_no_callback(c);
> +	struct dm_buffer *b = __alloc_buffer_wait_no_callback(c, nf);
> +
> +	if (!b)
> +		return NULL;
>  
>  	if (c->alloc_callback)
>  		c->alloc_callback(b);
> @@ -866,15 +879,8 @@ static struct dm_buffer *__find(struct d
>   * Getting a buffer
>   *--------------------------------------------------------------*/
>  
> -enum new_flag {
> -	NF_FRESH = 0,
> -	NF_READ = 1,
> -	NF_GET = 2
> -};
> -
>  static struct dm_buffer *__bufio_new(struct dm_bufio_client *c, sector_t block,
> -				     enum new_flag nf, struct dm_buffer **bp,
> -				     int *need_submit)
> +				     enum new_flag nf, int *need_submit)
>  {
>  	struct dm_buffer *b, *new_b = NULL;
>  
> @@ -882,6 +888,19 @@ static struct dm_buffer *__bufio_new(str
>  
>  	b = __find(c, block);
>  	if (b) {
> +found_buffer:
> +		if (nf == NF_PREFETCH)
> +			return NULL;
> +		/*
> +		 * Note: it is essential that we don't wait for the buffer to be
> +		 * read if dm_bufio_get function is used. Both dm_bufio_get and
> +		 * dm_bufio_prefetch can be used in the driver request routine.
> +		 * If the user called both dm_bufio_prefetch and dm_bufio_get on
> +		 * the same buffer, it would deadlock if we waited.
> +		 */
> +		if (nf == NF_GET && unlikely(test_bit(B_READING, &b->state)))
> +			return NULL;
> +
>  		b->hold_count++;
>  		__relink_lru(b, test_bit(B_DIRTY, &b->state) ||
>  			     test_bit(B_WRITING, &b->state));
> @@ -891,7 +910,9 @@ static struct dm_buffer *__bufio_new(str
>  	if (nf == NF_GET)
>  		return NULL;
>  
> -	new_b = __alloc_buffer_wait(c);
> +	new_b = __alloc_buffer_wait(c, nf);
> +	if (!new_b)
> +		return NULL;
>  
>  	/*
>  	 * We've had a period where the mutex was unlocked, so need to
> @@ -900,10 +921,7 @@ static struct dm_buffer *__bufio_new(str
>  	b = __find(c, block);
>  	if (b) {
>  		__free_buffer_wake(new_b);
> -		b->hold_count++;
> -		__relink_lru(b, test_bit(B_DIRTY, &b->state) ||
> -			     test_bit(B_WRITING, &b->state));
> -		return b;
> +		goto found_buffer;
>  	}
>  
>  	__check_watermark(c);
> @@ -957,7 +975,7 @@ static void *new_read(struct dm_bufio_cl
>  	struct dm_buffer *b;
>  
>  	dm_bufio_lock(c);
> -	b = __bufio_new(c, block, nf, bp, &need_submit);
> +	b = __bufio_new(c, block, nf, &need_submit);
>  	dm_bufio_unlock(c);
>  
>  	if (!b || IS_ERR(b))
> @@ -997,6 +1015,8 @@ void *dm_bufio_read(struct dm_bufio_clie
>  }
>  EXPORT_SYMBOL_GPL(dm_bufio_read);
>  
> +static void dm_bufio_release_unlocked(struct dm_buffer *b);
> +
>  void *dm_bufio_new(struct dm_bufio_client *c, sector_t block,
>  		   struct dm_buffer **bp)
>  {
> @@ -1006,13 +1026,35 @@ void *dm_bufio_new(struct dm_bufio_clien
>  }
>  EXPORT_SYMBOL_GPL(dm_bufio_new);
>  
> -void dm_bufio_release(struct dm_buffer *b)
> +void dm_bufio_prefetch(struct dm_bufio_client *c,
> +		       sector_t block, unsigned n_blocks)
>  {
> -	struct dm_bufio_client *c = b->c;
> +	struct blk_plug plug;
>  
> +	blk_start_plug(&plug);
>  	dm_bufio_lock(c);
>  
> -	BUG_ON(test_bit(B_READING, &b->state));
> +	for (; n_blocks--; block++) {
> +		int need_submit;
> +		struct dm_buffer *b;
> +		b = __bufio_new(c, block, NF_PREFETCH, &need_submit);
> +		if (unlikely(b != NULL)) {
> +			if (need_submit)
> +				submit_io(b, READ, b->block, read_endio);
> +			dm_bufio_release_unlocked(b);
> +		}
> +
> +	}
> +
> +	dm_bufio_unlock(c);
> +	blk_finish_plug(&plug);
> +}
> +EXPORT_SYMBOL(dm_bufio_prefetch);
> +
> +static void dm_bufio_release_unlocked(struct dm_buffer *b)
> +{
> +	struct dm_bufio_client *c = b->c;
> +
>  	BUG_ON(!b->hold_count);
>  
>  	b->hold_count--;
> @@ -1025,12 +1067,23 @@ void dm_bufio_release(struct dm_buffer *
>  		 * invalid buffer.
>  		 */
>  		if ((b->read_error || b->write_error) &&
> +		    !test_bit(B_READING, &b->state) &&
>  		    !test_bit(B_WRITING, &b->state) &&
>  		    !test_bit(B_DIRTY, &b->state)) {
>  			__unlink_buffer(b);
>  			__free_buffer_wake(b);
>  		}
>  	}
> +}
> +
> +void dm_bufio_release(struct dm_buffer *b)
> +{
> +	struct dm_bufio_client *c = b->c;
> +
> +	dm_bufio_lock(c);
> +
> +	BUG_ON(test_bit(B_READING, &b->state));
> +	dm_bufio_release_unlocked(b);
>  
>  	dm_bufio_unlock(c);
>  }
> @@ -1042,6 +1095,8 @@ void dm_bufio_mark_buffer_dirty(struct d
>  
>  	dm_bufio_lock(c);
>  
> +	BUG_ON(test_bit(B_READING, &b->state));
> +
>  	if (!test_and_set_bit(B_DIRTY, &b->state))
>  		__relink_lru(b, LIST_DIRTY);
>  
> Index: linux-3.3-rc5-fast/drivers/md/dm-bufio.h
> ===================================================================
> --- linux-3.3-rc5-fast.orig/drivers/md/dm-bufio.h	2012-03-03 05:19:33.000000000 +0100
> +++ linux-3.3-rc5-fast/drivers/md/dm-bufio.h	2012-03-03 05:19:35.000000000 +0100
> @@ -63,6 +63,14 @@ void *dm_bufio_new(struct dm_bufio_clien
>  		   struct dm_buffer **bp);
>  
>  /*
> + * Prefetch the specified blocks to the cache.
> + * The function starts to read the blocks and returns without waiting for
> + * I/O to finish.
> + */
> +void dm_bufio_prefetch(struct dm_bufio_client *c,
> +		       sector_t block, unsigned n_blocks);
> +
> +/*
>   * Release a reference obtained with dm_bufio_{read,get,new}. The data
>   * pointer and dm_buffer pointer is no longer valid after this call.
>   */
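
For reference, the intended usage pattern of the new call (a sketch only;
the function and its arguments here are illustrative):

#include <linux/err.h>
#include "dm-bufio.h"

static int example_read(struct dm_bufio_client *c, sector_t block,
			unsigned n_blocks)
{
	struct dm_buffer *buf;
	void *data;

	/* start reads for all the blocks we will need; does not wait */
	dm_bufio_prefetch(c, block, n_blocks);

	/* this now finds the read already in flight or cached */
	data = dm_bufio_read(c, block, &buf);
	if (IS_ERR(data))
		return PTR_ERR(data);

	/* ... use the buffer contents ... */

	dm_bufio_release(buf);
	return 0;
}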


* workqueues and percpu (was: [PATCH] dm: remake of the verity target)
  2012-03-06 21:59   ` [PATCH] dm: remake of the verity target Mandeep Singh Baines
@ 2012-03-08 22:21     ` Mikulas Patocka
  2012-03-08 22:39       ` Andrew Morton
  2012-03-13 22:20     ` [PATCH] dm: remake of the verity target Mikulas Patocka
  1 sibling, 1 reply; 34+ messages in thread
From: Mikulas Patocka @ 2012-03-08 22:21 UTC (permalink / raw)
  To: Mandeep Singh Baines
  Cc: linux-kernel, dm-devel, Alasdair G Kergon, Will Drewry,
	Elly Jones, Milan Broz, Olof Johansson, Steffen Klassert,
	Andrew Morton



On Tue, 6 Mar 2012, Mandeep Singh Baines wrote:

> You are
> allocated a complete shash_desc per I/O. We only allocate one per CPU.

I looked at it --- and using percpu variables in workqueues isn't safe 
because the workqueue can change CPU if the CPU is hot-unplugged.

dm-crypt has the same bug --- it also uses workqueue with per-cpu 
variables and assumes that the CPU doesn't change for a single work item.

This program shows that work executed in a workqueue can be switched to a 
different CPU.

I'm wondering how much other kernel code assumes that workqueues are bound 
to a specific CPU, which isn't true if we unplug that CPU.

Mikulas

---

/*
 * A proof of concept that a work item executed on a workqueue may change CPU
 * when CPU hot-unplugging is used.
 * Compile this as a module and run:
 * insmod test.ko; sleep 1; echo 0 >/sys/devices/system/cpu/cpu1/online
 * You see that the work item starts executing on CPU 1 and ends up executing
 * on different CPU, usually 0.
 */

#include <linux/module.h>
#include <linux/delay.h>

static struct workqueue_struct *wq;
static struct work_struct work;

static void do_work(struct work_struct *w)
{
	printk("starting work on cpu %d\n", smp_processor_id());
	msleep(10000);
	printk("finishing work on cpu %d\n", smp_processor_id());
}

static int __init test_init(void)
{
	printk("module init\n");
	wq = alloc_workqueue("testd", WQ_MEM_RECLAIM | WQ_CPU_INTENSIVE, 1);
	if (!wq) {
		printk("alloc_workqueue failed\n");
		return -ENOMEM;
	}
	INIT_WORK(&work, do_work);
	queue_work_on(1, wq, &work);
	return 0;
}

static void __exit test_exit(void)
{
	destroy_workqueue(wq);
	printk("module exit\n");
}

module_init(test_init)
module_exit(test_exit)
MODULE_LICENSE("GPL");


* Re: workqueues and percpu (was: [PATCH] dm: remake of the verity target)
  2012-03-08 22:21     ` workqueues and percpu (was: [PATCH] dm: remake of the verity target) Mikulas Patocka
@ 2012-03-08 22:39       ` Andrew Morton
  2012-03-08 23:15         ` Tejun Heo
  0 siblings, 1 reply; 34+ messages in thread
From: Andrew Morton @ 2012-03-08 22:39 UTC (permalink / raw)
  To: Mikulas Patocka
  Cc: Mandeep Singh Baines, linux-kernel, dm-devel, Alasdair G Kergon,
	Will Drewry, Elly Jones, Milan Broz, Olof Johansson,
	Steffen Klassert, Tejun Heo, Rusty Russell

On Thu, 8 Mar 2012 17:21:53 -0500 
Mikulas Patocka <mpatocka@redhat.com> wrote:

> 
> 
> On Tue, 6 Mar 2012, Mandeep Singh Baines wrote:
> 
> > You are
> > allocated a complete shash_desc per I/O. We only allocate one per CPU.
> 
> I looked at it --- and using percpu variables in workqueues isn't safe 
> because the workqueue can change CPU if the CPU is hot-unplugged.
> 
> dm-crypt has the same bug --- it also uses workqueue with per-cpu 
> variables and assumes that the CPU doesn't change for a single work item.
> 
> This program shows that work executed in a workqueue can be switched to a 
> different CPU.
> 
> I'm wondering how much other kernel code assumes that workqueues are bound 
> to a specific CPU, which isn't true if we unplug that CPU.

ugh.

We really don't want to have to avoid using workqueues because of some
daft issue with CPU hot-unplug.  And yes, there are assumptions in various
work handlers that they will be pinned to a single CPU.  Finding and fixing
those assumptions would be painful.

Heck, even debug_smp_processor_id() can be wrong in the presence of the
cpu-unplug thing.

I'm not sure what we can do about it really, apart from blocking unplug
until all the target CPU's workqueues have been cleared.  And/or refusing
to unplug a CPU until all pinned-to-that-cpu kernel threads have been
shut down or pinned elsewhere (which is the same thing, only more
general).

Tejun, is this new behaviour?  I do recall that a long time ago we
wrestled with unplug-vs-worker-threads and I ended up OK with the
result, but I forget what it was.  IIRC Rusty was involved.


That being said, I don't think it's worth compromising the DM code
because of this workqueue wart: lots of other code has the same wart,
and we should find a centralised fix for it.

> /*
>  * A proof of concept that a work item executed on a workqueue may change CPU
>  * when CPU hot-unplugging is used.
>  * Compile this as a module and run:
>  * insmod test.ko; sleep 1; echo 0 >/sys/devices/system/cpu/cpu1/online
>  * You see that the work item starts executing on CPU 1 and ends up executing
>  * on different CPU, usually 0.
>  */
> 
> #include <linux/module.h>
> #include <linux/delay.h>
> 
> static struct workqueue_struct *wq;
> static struct work_struct work;
> 
> static void do_work(struct work_struct *w)
> {
> 	printk("starting work on cpu %d\n", smp_processor_id());
> 	msleep(10000);
> 	printk("finishing work on cpu %d\n", smp_processor_id());
> }
> 
> static int __init test_init(void)
> {
> 	printk("module init\n");
> 	wq = alloc_workqueue("testd", WQ_MEM_RECLAIM | WQ_CPU_INTENSIVE, 1);
> 	if (!wq) {
> 		printk("alloc_workqueue failed\n");
> 		return -ENOMEM;
> 	}
> 	INIT_WORK(&work, do_work);
> 	queue_work_on(1, wq, &work);
> 	return 0;
> }
> 
> static void __exit test_exit(void)
> {
> 	destroy_workqueue(wq);
> 	printk("module exit\n");
> }
> 
> module_init(test_init)
> module_exit(test_exit)
> MODULE_LICENSE("GPL");


* Re: workqueues and percpu (was: [PATCH] dm: remake of the verity target)
  2012-03-08 22:39       ` Andrew Morton
@ 2012-03-08 23:15         ` Tejun Heo
  2012-03-08 23:30           ` Andrew Morton
  2012-03-09 21:15           ` Mandeep Singh Baines
  0 siblings, 2 replies; 34+ messages in thread
From: Tejun Heo @ 2012-03-08 23:15 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mikulas Patocka, Mandeep Singh Baines, linux-kernel, dm-devel,
	Alasdair G Kergon, Will Drewry, Elly Jones, Milan Broz,
	Olof Johansson, Steffen Klassert, Rusty Russell

Hello,

On Thu, Mar 08, 2012 at 02:39:09PM -0800, Andrew Morton wrote:
> > I looked at it --- and using percpu variables in workqueues isn't safe 
> > because the workqueue can change CPU if the CPU is hot-unplugged.

Generally, I don't think removing preemption enable/disable around
percpu variable access is a worthwhile optimization unless it's on a
really really really hot path.  We'll eventually add debug annotations
to percpu accessors and the ones used outside proper preemption
protections would need to be updated anyway.

> > dm-crypt has the same bug --- it also uses workqueue with per-cpu 
> > variables and assumes that the CPU doesn't change for a single work item.
> > 
> > This program shows that work executed in a workqueue can be switched to a 
> > different CPU.
> > 
> > I'm wondering how much other kernel code assumes that workqueues are bound 
> > to a specific CPU, which isn't true if we unplug that CPU.
> 
> ugh.
> 
> We really don't want to have to avoid using workqueues because of some
> daft issue with CPU hot-unplug.

Using or not using wq is orthogonal tho.  Using kthreads directly
requires hooking into CPU hotplug callbacks and one might as well call
flush_work_sync() from there instead of shutting down kthread.

> And yes, there are assumptions in various work handlers that they
> will be pinned to a single CPU.  Finding and fixing those
> assumptions would be painful.
>
> Heck, even debug_smp_processor_id() can be wrong in the presence of the
> cpu-unplug thing.

Yeah, that's a generic problem with cpu unplug.

> I'm not sure what we can do about it really, apart from blocking unplug
> until all the target CPU's workqueues have been cleared.  And/or refusing
> to unplug a CPU until all pinned-to-that-cpu kernel threads have been
> shut down or pinned elsewhere (which is the same thing, only more
> general).
> 
> Tejun, is this new behaviour?  I do recall that a long time ago we
> wrestled with unplug-vs-worker-threads and I ended up OK with the
> result, but I forget what it was.  IIRC Rusty was involved.

Unfortunately, yes, this is a new behavior.  Before, we could have
unbound delays during unplug from work items.  Now, we have CPU
affinity assumption breakage.  The behavior change was primarily to
allow long running work items to use regular workqueues without
worrying about inducing delay across cpu hotplug operations, which is
important as it's also used on suspend / hibernation, especially on
mobile platforms.

During the cmwq conversion, I ended up auditing a lot of (I think I
went through most of them) workqueue users and IIRC there weren't too
many which required stable affinity.

> That being said, I don't think it's worth compromising the DM code
> because of this workqueue wart: lots of other code has the same wart,
> and we should find a centralised fix for it.

Probably the best way to solve this is introducing a pinned attribute
for workqueues and having them drained automatically on cpu hotplug
events.  It'll require auditing workqueue users but I guess we'll just
have to do it given that we need to actually distinguish the ones that
need to be pinned.  Or maybe we can use explicit queue_work_on() to
distinguish the ones which require pinning.

Another approach would be requiring all workqueues to be drained on
cpu offlining and requiring any work item which may stall to use
unbound wq.  IMHO, picking out the ones which may stall would be much
less obvious than the ones which require cpu pinning.

Better ideas?

Thanks.

-- 
tejun


* Re: workqueues and percpu (was: [PATCH] dm: remake of the verity target)
  2012-03-08 23:15         ` Tejun Heo
@ 2012-03-08 23:30           ` Andrew Morton
  2012-03-09  0:33             ` Tejun Heo
  2012-03-09 21:15           ` Mandeep Singh Baines
  1 sibling, 1 reply; 34+ messages in thread
From: Andrew Morton @ 2012-03-08 23:30 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Mikulas Patocka, Mandeep Singh Baines, linux-kernel, dm-devel,
	Alasdair G Kergon, Will Drewry, Elly Jones, Milan Broz,
	Olof Johansson, Steffen Klassert, Rusty Russell

On Thu, 8 Mar 2012 15:15:21 -0800
Tejun Heo <tj@kernel.org> wrote:

> > I'm not sure what we can do about it really, apart from blocking unplug
> > until all the target CPU's workqueues have been cleared.  And/or refusing
> > to unplug a CPU until all pinned-to-that-cpu kernel threads have been
> > shut down or pinned elsewhere (which is the same thing, only more
> > general).
> > 
> > Tejun, is this new behaviour?  I do recall that a long time ago we
> > wrestled with unplug-vs-worker-threads and I ended up OK with the
> > result, but I forget what it was.  IIRC Rusty was involved.
> 
> Unfortunately, yes, this is a new behavior.  Before, we could have
> unbound delays during unplug from work items.  Now, we have CPU
> affinity assumption breakage.

Ow, didn't know that.

>  The behavior change was primarily to
> allow long running work items to use regular workqueues without
> worrying about inducing delay across cpu hotplug operations, which is
> important as it's also used on suspend / hibernation, especially on
> mobile platforms.

Well..  why did we want to support these long-running work items? 
They're abusive, aren't they?  Where are they?

> During the cmwq conversion, I ended up auditing a lot of (I think I
> went through most of them) workqueue users and IIRC there weren't too
> many which required stable affinity.
> 
> > That being said, I don't think it's worth compromising the DM code
> > because of this workqueue wart: lots of other code has the same wart,
> > and we should find a centralised fix for it.
> 
> Probably the best way to solve this is introducing a pinned attribute
> for workqueues and having them drained automatically on cpu hotplug
> events.  It'll require auditing workqueue users but I guess we'll just
> have to do it given that we need to actually distinguish the ones that
> need to be pinned.

That will make future use of workqueues more complex and people will
screw it up.

>  Or maybe we can use explicit queue_work_on() to distinguish
> the ones which require pinning.
> 
> Another approach would be requiring all workqueues to be drained on
> cpu offlining and requiring any work item which may stall to use
> unbound wq.  IMHO, picking out the ones which may stall would be much
> less obvious than the ones which require cpu pinning.

I'd be surprised if it's *that* hard to find and fix the long-running
work items.  Hopefully most of them are already using
create_freezable_workqueue() or create_singlethread_workqueue().

I wonder if there's some debug code we can put in workqueue.c to detect
when a pinned work item takes "too long".



* Re: workqueues and percpu (was: [PATCH] dm: remake of the verity target)
  2012-03-08 23:30           ` Andrew Morton
@ 2012-03-09  0:33             ` Tejun Heo
  2012-03-09  0:51               ` Tejun Heo
  0 siblings, 1 reply; 34+ messages in thread
From: Tejun Heo @ 2012-03-09  0:33 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mikulas Patocka, Mandeep Singh Baines, linux-kernel, dm-devel,
	Alasdair G Kergon, Will Drewry, Elly Jones, Milan Broz,
	Olof Johansson, Steffen Klassert, Rusty Russell

Hello, Andrew.

On Thu, Mar 08, 2012 at 03:30:48PM -0800, Andrew Morton wrote:
> >  The behavior change was primarily to
> > allow long running work items to use regular workqueues without
> > worrying about inducing delay across cpu hotplug operations, which is
> > important as it's also used on suspend / hibernation, especially on
> > mobile platforms.
> 
> Well..  why did we want to support these long-running work items? 
> They're abusive, aren't they?  Where are they?

The rationale was two-fold.  One was that using kthread directly is
inefficient and difficult.  We end up with a lot of mostly idle
kthreads lying around and w/ increasing number of cores, creating them
per-cpu becomes problematic.  On certain setups, we were reaching task
limit during boot, so having an easy to use worker pool mechanism is
necessary.  We already had workqueue, so it was logical to extend wq
to support that.

Also, on auditing kthread users, a lot of them were (and still are)
racy around kthread_should_stop() handling.  It requires careful
synchronization to avoid missing the event: kthread_stop() just sets
the should-stop flag and wakes up the kthread once.  Many users simply
forget to consider the synchronization requirements.
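
For illustration, the race-free idiom looks something like this (just a
sketch; have_work() and do_one_item() stand in for whatever condition
and processing the driver actually has):

#include <linux/kthread.h>
#include <linux/sched.h>

static bool have_work(void);	/* stand-in: the driver's wakeup condition */
static void do_one_item(void);	/* stand-in: the driver's actual work */

static int my_thread_fn(void *data)
{
	while (!kthread_should_stop()) {
		/*
		 * Mark ourselves sleeping *before* re-checking the stop
		 * flag. kthread_stop() sets the flag and then wakes the
		 * task, so a wakeup arriving here moves us back to
		 * TASK_RUNNING and schedule() returns instead of
		 * sleeping forever.
		 */
		set_current_state(TASK_INTERRUPTIBLE);
		if (!kthread_should_stop() && !have_work())
			schedule();
		__set_current_state(TASK_RUNNING);

		while (have_work())
			do_one_item();
	}
	return 0;
}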

Another side was that "long-running" isn't obvious at all.  Many
workqueue items are used because they require a sleepable context for
synchronization and while they usually don't consume a large amount of
time, there are occasions where certain locking takes way longer
through a chain of dependencies.  This was mostly visible through the
system workqueue getting stalled.

> > Another approach would be requiring all workqueues to be drained on
> > cpu offlining and requiring any work item which may stall to use
> > unbound wq.  IMHO, picking out the ones which may stall would be much
> > less obvious than the ones which require cpu pinning.
> 
> I'd be surprised if it's *that* hard to find and fix the long-running
> work items.  Hopefully most of them are already using
> create_freezable_workqueue() or create_singlethread_workqueue().
> 
> I wonder if there's some debug code we can put in workqueue.c to detect
> when a pinned work item takes "too long".

Yes, we can go either way, but I think it would be easier to weed out
the ones with pinned assumptions, as they are usually much less
common, more obvious and probably easier to detect automatically
(ie. trigger a warning in debug_smp_processor_id() if running as an
un-pinned work item).

ISTR there was something already broken about having a specific CPU
assumption w/ workqueue even before cmwq when using queue_work_on(),
unless it was explicitly synchronizing using a cpu hotplug callback.
Hmmm... what was it... I think it was that there was no protection
against queueing on the workqueue of a dead CPU, and the workqueue was
flushed only once during cpu shutdown, meaning that queue_work_on() or
requeued work items could end up queued on a workqueue of a dead
CPU.

Thanks.

-- 
tejun


* Re: workqueues and percpu (was: [PATCH] dm: remake of the verity target)
  2012-03-09  0:33             ` Tejun Heo
@ 2012-03-09  0:51               ` Tejun Heo
  0 siblings, 0 replies; 34+ messages in thread
From: Tejun Heo @ 2012-03-09  0:51 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mikulas Patocka, Mandeep Singh Baines, linux-kernel, dm-devel,
	Alasdair G Kergon, Will Drewry, Elly Jones, Milan Broz,
	Olof Johansson, Steffen Klassert, Rusty Russell

Adding a bit..

On Thu, Mar 08, 2012 at 04:33:09PM -0800, Tejun Heo wrote:
> ISTR there was something already broken about having a specific CPU
> assumption w/ workqueue even before cmwq when using queue_work_on(),
> unless it was explicitly synchronizing using a cpu hotplug callback.
> Hmmm... what was it... I think it was that there was no protection
> against queueing on the workqueue of a dead CPU, and the workqueue was
> flushed only once during cpu shutdown, meaning that queue_work_on() or
> requeued work items could end up queued on a workqueue of a dead
> CPU.

I think the crux of the problem is that we didn't have the interface
to indicate the intention of workqueue users.  Per-cpu workqueues were
the normal ones, and the per-cpuness was used both as an optimization
(local queueing is much cheaper and a work item is likely to access
the same stuff its queuer was accessing) and for pinning.  Single-threaded
workqueues were used for both non-reentrancy and resource
optimization.

For the short term, the easiest fix would be adding flush_work_sync()
from cpu hotplug callback for the pinned ones.  For the longer term, I
think the most natural fix would be making work items queued with
explicit queue_work_on() handled differently and adding debug code to
enforce it.

Thanks.

-- 
tejun


* Re: workqueues and percpu (was: [PATCH] dm: remake of the verity target)
  2012-03-08 23:15         ` Tejun Heo
  2012-03-08 23:30           ` Andrew Morton
@ 2012-03-09 21:15           ` Mandeep Singh Baines
  2012-03-09 21:20             ` Tejun Heo
  1 sibling, 1 reply; 34+ messages in thread
From: Mandeep Singh Baines @ 2012-03-09 21:15 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Andrew Morton, Mikulas Patocka, Mandeep Singh Baines,
	linux-kernel, dm-devel, Alasdair G Kergon, Will Drewry,
	Elly Jones, Milan Broz, Olof Johansson, Steffen Klassert,
	Rusty Russell

Tejun Heo (tj@kernel.org) wrote:
> Hello,
> 
> On Thu, Mar 08, 2012 at 02:39:09PM -0800, Andrew Morton wrote:
> > > I looked at it --- and using percpu variables in workqueues isn't safe 
> > > because the workqueue can change CPU if the CPU is hot-unplugged.
> 
> Generally, I don't think removing preemption enable/disable around
> percpu variable access is a worthwhile optimization unless it's on a
> really really really hot path.  We'll eventually add debug annotations
> to percpu accessors and the ones used outside proper preemption
> protections would need to be updated anyway.
> 

In this case, I need the per-cpu data for the duration of calculating
a cryptographic hash on a 4K page of data. That's a long time to disable
pre-emption.

I could fix the bug temporarily by adding get/put for the per_cpu data
but would that be acceptable? I'm not sure what the OK limit is for how
long one can disable preemption. An alternative fix would be to not allow
CONFIG_VERITY when CONFIG_HOTPLUG_CPU is set. Once workqueues are fixed,
I could remove that restriction.
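
Roughly what I have in mind for the get/put variant (a sketch only;
percpu_hash_desc stands in for our per-cpu descriptor allocation, and
I'm glossing over the crypto_shash_descsize() tail space it needs):

#include <crypto/hash.h>
#include <linux/percpu.h>

static struct shash_desc __percpu *percpu_hash_desc;	/* stand-in */

static int hash_page(struct crypto_shash *tfm, const void *data, u8 *result)
{
	struct shash_desc *desc;
	int r;

	desc = get_cpu_ptr(percpu_hash_desc);	/* disables preemption */
	desc->tfm = tfm;
	desc->flags = 0;
	/* the entire 4K hash now runs with preemption disabled */
	r = crypto_shash_digest(desc, data, PAGE_SIZE, result);
	put_cpu_ptr(percpu_hash_desc);		/* re-enables preemption */

	return r;
}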

Thoughts?

> > > dm-crypt has the same bug --- it also uses workqueue with per-cpu 
> > > variables and assumes that the CPU doesn't change for a single work item.
> > > 
> > > This program shows that work executed in a workqueue can be switched to a 
> > > different CPU.
> > > 
> > > I'm wondering how much other kernel code assumes that workqueues are bound 
> > > to a specific CPU, which isn't true if we unplug that CPU.
> > 
> > ugh.
> > 
> > We really don't want to have to avoid using workqueues because of some
> > daft issue with CPU hot-unplug.
> 
> Using or not using wq is orthogonal tho.  Using kthreads directly
> requires hooking into CPU hotplug callbacks and one might as well call
> flush_work_sync() from there instead of shutting down kthread.
> 
> > And yes, there are assumptions in various work handlers that they
> > will be pinned to a single CPU.  Finding and fixing those
> > assumptions would be painful.
> >
> > Heck, even debug_smp_processor_id() can be wrong in the presence of the
> > cpu-unplug thing.
> 
> Yeah, that's a generic problem with cpu unplug.
> 
> > I'm not sure what we can do about it really, apart from blocking unplug
> > until all the target CPU's workqueues have been cleared.  And/or refusing
> > to unplug a CPU until all pinned-to-that-cpu kernel threads have been
> > shut down or pinned elsewhere (which is the same thing, only more
> > general).
> > 
> > Tejun, is this new behaviour?  I do recall that a long time ago we
> > wrestled with unplug-vs-worker-threads and I ended up OK with the
> > result, but I forget what it was.  IIRC Rusty was involved.
> 
> Unfortunately, yes, this is a new behavior.  Before, we could have
> unbound delays during unplug from work items.  Now, we have CPU
> affinity assumption breakage.  The behavior change was primarily to
> allow long running work items to use regular workqueues without
> worrying about inducing delay across cpu hotplug operations, which is
> important as it's also used on suspend / hibernation, especially on
> mobile platforms.
> 
> During the cmwq conversion, I ended up auditing a lot of (I think I
> went through most of them) workqueue users and IIRC there weren't too
> many which required stable affinity.
> 
> > That being said, I don't think it's worth compromising the DM code
> > because of this workqueue wart: lots of other code has the same wart,
> > and we should find a centralised fix for it.
> 
> Probably the best way to solve this is introducing a pinned attribute
> for workqueues and having them drained automatically on cpu hotplug
> events.  It'll require auditing workqueue users but I guess we'll just
> have to do it given that we need to actually distinguish the ones that
> need to be pinned.  Or maybe we can use explicit queue_work_on() to
> distinguish the ones which require pinning.
> 
> Another approach would be requiring all workqueues to be drained on
> cpu offlining and requiring any work item which may stall to use
> unbound wq.  IMHO, picking out the ones which may stall would be much
> less obvious than the ones which require cpu pinning.
> 
> Better ideas?
> 
> Thanks.
> 
> -- 
> tejun


* Re: workqueues and percpu (was: [PATCH] dm: remake of the verity target)
  2012-03-09 21:15           ` Mandeep Singh Baines
@ 2012-03-09 21:20             ` Tejun Heo
  2012-03-09 22:06               ` Mandeep Singh Baines
  0 siblings, 1 reply; 34+ messages in thread
From: Tejun Heo @ 2012-03-09 21:20 UTC (permalink / raw)
  To: Mandeep Singh Baines
  Cc: Andrew Morton, Mikulas Patocka, linux-kernel, dm-devel,
	Alasdair G Kergon, Will Drewry, Elly Jones, Milan Broz,
	Olof Johansson, Steffen Klassert, Rusty Russell

Hello,

On Fri, Mar 09, 2012 at 01:15:12PM -0800, Mandeep Singh Baines wrote:
> In this case, I need the per-cpu data for the duration of calculating
> a cryptographic hash on a 4K page of data. That's a long time to disable
> pre-emption.

How long are we talking about?  Tens of microsecs, tens of millisecs?

> I could fix the bug temporarily by adding get/put for the per_cpu data
> but would that be acceptable? I'm not sure what the OK limit is for how
> long one can disable preemption. An alternative fix would be to not allow
> CONFIG_VERITY when CONFIG_HOTPLUG_CPU is set. Once workqueues are fixed,
> I could remove that restriction.

I think the right thing to do for now is to add cpu hotplug notifier
and do flush_work_sync() on the work item.  We can later move that
logic into workqueue and remove it from crypto.

Thanks.

-- 
tejun


* Re: workqueues and percpu (was: [PATCH] dm: remake of the verity target)
  2012-03-09 21:20             ` Tejun Heo
@ 2012-03-09 22:06               ` Mandeep Singh Baines
  2012-08-14 17:54                 ` Tejun Heo
  0 siblings, 1 reply; 34+ messages in thread
From: Mandeep Singh Baines @ 2012-03-09 22:06 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Mandeep Singh Baines, Andrew Morton, Mikulas Patocka,
	linux-kernel, dm-devel, Alasdair G Kergon, Will Drewry,
	Elly Jones, Milan Broz, Olof Johansson, Steffen Klassert,
	Rusty Russell

Tejun Heo (tj@kernel.org) wrote:
> Hello,
> 
> On Fri, Mar 09, 2012 at 01:15:12PM -0800, Mandeep Singh Baines wrote:
> > In this case, I need the per-cpu data for the duration of calculating
> > a cryptographic hash on a 4K page of data. That's a long time to disable
> > pre-emption.
> 
> How long are we talking about?  Tens of microsecs, tens of millisecs?
> 

It depends on the H/W. We are running on Atom w/ PREEMPT_DESKTOP so
we aren't pre-emptible even now. But I think there is interest from other
folks in embedded to use this work, so it could potentially be millisecs
on a slower few-hundred-MHz embedded processor.

> > I could fix the bug temporarily by adding get/put for the per_cpu data
> > but would that be acceptable? I'm not sure what the OK limit is for how
> > long one can disable preemption. An alternative fix would be to not allow
> > CONFIG_VERITY when CONFIG_HOTPLUG_CPU is set. Once workqueues are fixed,
> > I could remove that restriction.
> 
> I think the right thing to do for now is to add cpu hotplug notifier
> and do flush_work_sync() on the work item.  We can later move that
> logic into workqueue and remove it from crypto.
> 

That seems like the correct solution. I will implement that.
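
Something along these lines, I suppose (a sketch only; since our work
items are allocated per-bio I'd flush the whole workqueue rather than
individual items; verify_wq and the registration comments below are
illustrative):

#include <linux/cpu.h>
#include <linux/notifier.h>
#include <linux/workqueue.h>

static struct workqueue_struct *verify_wq;	/* stand-in: from the ctr */

/*
 * Drain in-flight verify work before a CPU goes down so that no work
 * item has its per-cpu state pulled out from under it. This doesn't
 * close the window for work queued after the flush; that part would be
 * the eventual workqueue-core fix.
 */
static int verity_cpu_callback(struct notifier_block *nb,
			       unsigned long action, void *hcpu)
{
	switch (action) {
	case CPU_DOWN_PREPARE:
	case CPU_DOWN_PREPARE_FROZEN:
		flush_workqueue(verify_wq);
		break;
	}
	return NOTIFY_OK;
}

static struct notifier_block verity_cpu_notifier = {
	.notifier_call = verity_cpu_callback,
};

/* register_hotcpu_notifier(&verity_cpu_notifier) at init,
   unregister_hotcpu_notifier(&verity_cpu_notifier) at exit */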

Thanks,
Mandeep

> Thanks.
> 
> -- 
> tejun


* Re: [PATCH] dm: remake of the verity target
  2012-03-06 21:59   ` [PATCH] dm: remake of the verity target Mandeep Singh Baines
  2012-03-08 22:21     ` workqueues and percpu (was: [PATCH] dm: remake of the verity target) Mikulas Patocka
@ 2012-03-13 22:20     ` Mikulas Patocka
  2012-03-14 21:13       ` Will Drewry
                         ` (2 more replies)
  1 sibling, 3 replies; 34+ messages in thread
From: Mikulas Patocka @ 2012-03-13 22:20 UTC (permalink / raw)
  To: Mandeep Singh Baines
  Cc: linux-kernel, dm-devel, Alasdair G Kergon, Will Drewry,
	Elly Jones, Milan Broz, Olof Johansson, Steffen Klassert,
	Andrew Morton

Hi

> Hi Mikulas,
> 
> This is some nice work. I like that you've been able to abstract a lot
> of the hash buffer management with dm-bufio. You got rid of the I/O queue.
> I've been meaning to do that for a while. The prefetch is also nice.
> We planned to do this but I decided to not do it now in order to get the
> base functionality in:
> 
> http://crosbug.com/25441
> 
> However, there are some things that I don't like. I don't like writing
> comments either, but you have none. You also removed our documentation.
> You are

I added some comments. As for documentation, it's OK to use the
documentation from your patch, because the on-disk format and the target
arguments are the same (with the enhancement that my code supports
different data and metadata block sizes and a variable-length salt).

> allocated a complete shash_desc per I/O. We only allocate one per CPU.

The hash of a 4k block takes 174000 cycles, so trying to optimize away
a memory latency of about 250 cycles doesn't make much sense.

I actually observed better performance using verity on a ramdisk with
the workqueue unbound from specific CPUs. The reason is that the ramdisk
bio completion routine always runs on the same CPU (the one that
submitted the request), so with a bound workqueue everything was
executing on one CPU. With an unbound workqueue, I got parallelism.

> We short-circuit the hash at any level. Your implementation can only
> short-circuit at the lowest level.

It short-circuits the hash at all levels. If the function
"verity_verify_level" finds that "aux->hash_verified" is non-zero, it
doesn't do any hashing; it just copies the hash for the lower level. My
implementation walks the tree from the top to the bottom, but it doesn't
do hash verification if the same block has been verified before.

All this tree-walking from the root to the bottom is 50 times faster than
the actual hashing of the data block (I measured that), so there's not
much point in trying to optimize it. I did a simple optimization (don't
walk the tree if the lowest block is already verified) and I don't need
to do anything more complicated, given that it can't improve things by
more than 2%.

> I'd like to propose that we get the version we sent upstream and then work
> together on adding some of your enhancements incrementally.

If you add dm-bufio support, you end up deleting the majority of the
original code anyway. That's why I wrote it from scratch and why I didn't
attempt to morph your code.

It's simpler to write the code from scratch and it is also less bug-prone. 

> Other than
> the changes we've made to cleanup for upstreaming, the version I
> submitted is the code we are using in production.
> 
> I'm happy to add prefetch now if that is required for merging.
> 
> What do you think?
> 
> Regards,
> Mandeep

This is the version with comments added:

Mikulas

----

Remake of the google dm-verity patch.

Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>

---
 drivers/md/Kconfig     |   17 
 drivers/md/Makefile    |    1 
 drivers/md/dm-verity.c |  851 +++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 869 insertions(+)

Index: linux-3.3-rc6-fast/drivers/md/Kconfig
===================================================================
--- linux-3.3-rc6-fast.orig/drivers/md/Kconfig	2012-03-13 21:46:03.000000000 +0100
+++ linux-3.3-rc6-fast/drivers/md/Kconfig	2012-03-13 21:46:05.000000000 +0100
@@ -404,4 +404,21 @@ config DM_VERITY2
 
           If unsure, say N.
 
+config DM_VERITY
+	tristate "Verity target support"
+	depends on BLK_DEV_DM
+	select CRYPTO
+	select CRYPTO_HASH
+	select DM_BUFIO
+	---help---
+	  This device-mapper target allows you to create a device that
+	  transparently checks the integrity of the data on it. You'll need to
+	  activate the digests you're going to use in the cryptoapi
+	  configuration.
+
+	  To compile this code as a module, choose M here: the module will
+	  be called dm-verity.
+
+	  If unsure, say N.
+
 endif # MD
Index: linux-3.3-rc6-fast/drivers/md/Makefile
===================================================================
--- linux-3.3-rc6-fast.orig/drivers/md/Makefile	2012-03-13 21:46:03.000000000 +0100
+++ linux-3.3-rc6-fast/drivers/md/Makefile	2012-03-13 21:46:05.000000000 +0100
@@ -29,6 +29,7 @@ obj-$(CONFIG_MD_FAULTY)		+= faulty.o
 obj-$(CONFIG_BLK_DEV_MD)	+= md-mod.o
 obj-$(CONFIG_BLK_DEV_DM)	+= dm-mod.o
 obj-$(CONFIG_DM_BUFIO)		+= dm-bufio.o
+obj-$(CONFIG_DM_VERITY)		+= dm-verity.o
 obj-$(CONFIG_DM_CRYPT)		+= dm-crypt.o
 obj-$(CONFIG_DM_DELAY)		+= dm-delay.o
 obj-$(CONFIG_DM_FLAKEY)		+= dm-flakey.o
Index: linux-3.3-rc6-fast/drivers/md/dm-verity.c
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ linux-3.3-rc6-fast/drivers/md/dm-verity.c	2012-03-13 22:02:05.000000000 +0100
@@ -0,0 +1,851 @@
+/*
+ * Copyright (C) 2012 Red Hat, Inc.
+ *
+ * Author: Mikulas Patocka <mpatocka@redhat.com>
+ *
+ * Based on Chromium dm-verity driver (C) 2011 The Chromium OS Authors
+ *
+ * This file is released under the GPLv2.
+ *
+ * Device mapper target parameters:
+ *	<version>	0
+ *	<data device>
+ *	<hash device>
+ *	<hash start>	(typically 0)
+ *	<block size>	(typically 4096)
+ *	<algorithm>
+ *	<digest>
+ *	optional parameters:
+ *		<salt> (should have 32 bytes for compatibility with Google code)
+ *		<hash block size> (by default it is the same as data block size)
+ *
+ * In the file "/sys/module/dm_verity/parameters/prefetch_cluster" you can set
+ * the default prefetch value. Data are read in "prefetch_cluster" chunks from
+ * the hash device. A larger prefetch cluster greatly improves performance when
+ * data and hash are on the same disk in different partitions.
+ */
+
+#include <linux/module.h>
+#include <linux/device-mapper.h>
+#include <crypto/hash.h>
+#include "dm-bufio.h"
+
+#define DM_MSG_PREFIX			"verity"
+
+#define DM_VERITY_IO_VEC_INLINE		16
+#define DM_VERITY_MEMPOOL_SIZE		4
+#define DM_VERITY_DEFAULT_PREFETCH_SIZE	262144
+
+#define DM_VERITY_MAX_LEVELS		63
+
+static unsigned prefetch_cluster = DM_VERITY_DEFAULT_PREFETCH_SIZE;
+
+module_param_named(prefetch_cluster, prefetch_cluster, uint, S_IRUGO | S_IWUSR);
+
+struct dm_verity {
+	struct dm_dev *data_dev;
+	struct dm_dev *hash_dev;
+	struct dm_target *ti;
+	struct dm_bufio_client *bufio;
+	char *alg_name;
+	struct crypto_shash *tfm;
+	u8 *root_digest;	/* digest of the root block */
+	u8 *salt;		/* salt, its size is salt_size */
+	unsigned salt_size;
+	sector_t data_start;	/* data offset in 512-byte sectors */
+	sector_t hash_start;	/* hash start in blocks */
+	sector_t data_blocks;	/* the number of data blocks */
+	sector_t hash_blocks;	/* the number of hash blocks */
+	unsigned char data_dev_block_bits;	/* log2(data blocksize) */
+	unsigned char hash_dev_block_bits;	/* log2(hash blocksize) */
+	unsigned char hash_per_block_bits;	/* log2(hashes in hash block) */
+	unsigned char levels;	/* the number of tree levels */
+	unsigned digest_size;	/* digest size for the current hash algorithm */
+	unsigned shash_descsize;/* the size of temporary space for crypto */
+
+	mempool_t *io_mempool;	/* mempool of struct dm_verity_io */
+	mempool_t *vec_mempool;	/* mempool of bio vector */
+
+	struct workqueue_struct *verify_wq;
+
+	/* starting blocks for each tree level. 0 is the lowest level. */
+	sector_t hash_level_block[DM_VERITY_MAX_LEVELS];
+};
+
+struct dm_verity_io {
+	struct dm_verity *v;
+	struct bio *bio;
+
+	/* original values of bio->bi_end_io and bio->bi_private */
+	bio_end_io_t *orig_bi_end_io;
+	void *orig_bi_private;
+
+	sector_t block;
+	unsigned n_blocks;
+
+	/* saved bio vector */
+	struct bio_vec *io_vec;
+	unsigned io_vec_size;
+
+	struct work_struct work;
+
+	/* a space for short vectors; longer vectors are allocated separately */
+	struct bio_vec io_vec_inline[DM_VERITY_IO_VEC_INLINE];
+
+	/* variable-size fields, accessible with functions
+		io_hash_desc, io_real_digest, io_want_digest */
+	/* u8 hash_desc[crypto_shash_descsize(v->tfm)]; */
+	/* u8 real_digest[v->digest_size]; */
+	/* u8 want_digest[v->digest_size]; */
+};
+
+static struct shash_desc *io_hash_desc(struct dm_verity *v, struct dm_verity_io *io)
+{
+	return (struct shash_desc *)(io + 1);
+}
+
+static u8 *io_real_digest(struct dm_verity *v, struct dm_verity_io *io)
+{
+	return (u8 *)(io + 1) + v->shash_descsize;
+}
+
+static u8 *io_want_digest(struct dm_verity *v, struct dm_verity_io *io)
+{
+	return (u8 *)(io + 1) + v->shash_descsize + v->digest_size;
+}
+
+/*
+ * Auxiliary structure appended to each dm-bufio buffer. If the value
+ * hash_verified is nonzero, hash of the block has been verified.
+ *
+ * There is no lock around this value; a race condition can at worst cause
+ * multiple processes to verify the hash of the same buffer simultaneously.
+ * This condition is harmless, so we don't need locking.
+ */
+struct buffer_aux {
+	int hash_verified;
+};
+
+/*
+ * Initialize struct buffer_aux for a freshly created buffer.
+ */
+static void dm_bufio_alloc_callback(struct dm_buffer *buf)
+{
+	struct buffer_aux *aux = dm_bufio_get_aux_data(buf);
+	aux->hash_verified = 0;
+}
+
+/*
+ * Translate input sector number to the sector number on the target device.
+ */
+static sector_t verity_map_sector(struct dm_verity *v, sector_t bi_sector)
+{
+	return v->data_start + dm_target_offset(v->ti, bi_sector);
+}
+
+/*
+ * Return hash position of a specified block at a specified tree level
+ * (0 is the lowest level).
+ * The lowest "hash_per_block_bits" bits of the result denote the hash position
+ * inside a hash block. The remaining bits denote the location of the hash block.
+ */
+static sector_t verity_position_at_level(struct dm_verity *v, sector_t block,
+					 int level)
+{
+	return block >> (level * v->hash_per_block_bits);
+}
+
+static void verity_hash_at_level(struct dm_verity *v, sector_t block, int level,
+				 sector_t *hash_block, unsigned *offset)
+{
+	sector_t position = verity_position_at_level(v, block, level);
+
+	*hash_block = v->hash_level_block[level] + (position >> v->hash_per_block_bits);
+	if (offset)
+		*offset = v->digest_size * (position & ((1 << v->hash_per_block_bits) - 1));
+}
+
+/*
+ * Verify hash of a metadata block pertaining to the specified data block
+ * ("block" argument) at a specified level ("level" argument).
+ *
+ * On successful return, io_want_digest(v, io) contains the hash value for
+ * a lower tree level or for the data block (if we're at the lowest level).
+ *
+ * If "skip_unverified" is true, unverified buffer is skipped an 1 is returned.
+ * If "skip_unverified" is false, unverified buffer is hashed and verified
+ * against current value of io_want_digest(v, io).
+ */
+static int verity_verify_level(struct dm_verity_io *io, sector_t block,
+			       int level, bool skip_unverified)
+{
+	struct dm_verity *v = io->v;
+	struct dm_buffer *buf;
+	struct buffer_aux *aux;
+	u8 *data;
+	int r;
+	sector_t hash_block;
+	unsigned offset;
+
+	verity_hash_at_level(v, block, level, &hash_block, &offset);
+
+	data = dm_bufio_read(v->bufio, hash_block, &buf);
+	if (unlikely(IS_ERR(data)))
+		return PTR_ERR(data);
+
+	aux = dm_bufio_get_aux_data(buf);
+
+	if (!aux->hash_verified) {
+		struct shash_desc *desc;
+		u8 *result;
+
+		if (skip_unverified) {
+			r = 1;
+			goto release_ret_r;
+		}
+
+		desc = io_hash_desc(v, io);
+		desc->tfm = v->tfm;
+		desc->flags = CRYPTO_TFM_REQ_MAY_SLEEP;
+		r = crypto_shash_init(desc);
+		if (r < 0) {
+			DMERR("crypto_shash_init failed: %d", r);
+			goto release_ret_r;
+		}
+
+		r = crypto_shash_update(desc, data, 1 << v->hash_dev_block_bits);
+		if (r < 0) {
+			DMERR("crypto_shash_update failed: %d", r);
+			goto release_ret_r;
+		}
+
+		r = crypto_shash_update(desc, v->salt, v->salt_size);
+		if (r < 0) {
+			DMERR("crypto_shash_update failed: %d", r);
+			goto release_ret_r;
+		}
+
+		result = io_real_digest(v, io);
+		r = crypto_shash_final(desc, result);
+		if (r < 0) {
+			DMERR("crypto_shash_final failed: %d", r);
+			goto release_ret_r;
+		}
+		if (unlikely(memcmp(result, io_want_digest(v, io), v->digest_size))) {
+			DMERR_LIMIT("metadata block %llu is corrupted",
+				(unsigned long long)hash_block);
+			r = -EIO;
+			goto release_ret_r;
+		} else
+			aux->hash_verified = 1;
+	}
+
+	data += offset;
+
+	memcpy(io_want_digest(v, io), data, v->digest_size);
+
+	dm_bufio_release(buf);
+	return 0;
+
+release_ret_r:
+	dm_bufio_release(buf);
+	return r;
+}
+
+/*
+ * Verify one "dm_verity_io" structure.
+ */
+static int verity_verify_io(struct dm_verity_io *io)
+{
+	struct dm_verity *v = io->v;
+	unsigned b;
+	int i;
+	unsigned vector = 0, offset = 0;
+	for (b = 0; b < io->n_blocks; b++) {
+		struct shash_desc *desc;
+		u8 *result;
+		int r;
+		unsigned todo;
+
+		if (likely(v->levels)) {
+			/*
+			 * First, we try to get the requested hash for
+			 * the current block. If the hash block itself is
+			 * verified, zero is returned. If it isn't, this
+			 * function returns 1 and we fall back to whole
+			 * chain verification.
+			 */
+			int r = verity_verify_level(io, io->block + b, 0, true);
+			if (likely(!r))
+				goto test_block_hash;
+			if (r < 0)
+				return r;
+		}
+
+		memcpy(io_want_digest(v, io), v->root_digest, v->digest_size);
+
+		for (i = v->levels - 1; i >= 0; i--) {
+			int r = verity_verify_level(io, io->block + b, i, false);
+			if (unlikely(r))
+				return r;
+		}
+
+test_block_hash:
+		desc = io_hash_desc(v, io);
+		desc->tfm = v->tfm;
+		desc->flags = CRYPTO_TFM_REQ_MAY_SLEEP;
+		r = crypto_shash_init(desc);
+		if (r < 0) {
+			DMERR("crypto_shash_init failed: %d", r);
+			return r;
+		}
+
+		todo = 1 << v->data_dev_block_bits;
+		do {
+			struct bio_vec *bv;
+			u8 *page;
+			unsigned len;
+
+			BUG_ON(vector >= io->io_vec_size);
+			bv = &io->io_vec[vector];
+			page = kmap_atomic(bv->bv_page, KM_USER0);
+			len = bv->bv_len - offset;
+			if (likely(len >= todo))
+				len = todo;
+			r = crypto_shash_update(desc,
+					page + bv->bv_offset + offset, len);
+			kunmap_atomic(page, KM_USER0);
+			if (r < 0) {
+				DMERR("crypto_shash_update failed: %d", r);
+				return r;
+			}
+			offset += len;
+			if (likely(offset == bv->bv_len)) {
+				offset = 0;
+				vector++;
+			}
+			todo -= len;
+		} while (todo);
+
+		r = crypto_shash_update(desc, v->salt, v->salt_size);
+		if (r < 0) {
+			DMERR("crypto_shash_update failed: %d", r);
+			return r;
+		}
+
+		result = io_real_digest(v, io);
+		r = crypto_shash_final(desc, result);
+		if (r < 0) {
+			DMERR("crypto_shash_final failed: %d", r);
+			return r;
+		}
+		if (unlikely(memcmp(result, io_want_digest(v, io), v->digest_size))) {
+			DMERR_LIMIT("data block %llu is corrupted",
+				(unsigned long long)(io->block + b));
+			return -EIO;
+		}
+	}
+	BUG_ON(vector != io->io_vec_size);
+	BUG_ON(offset);
+	return 0;
+}
+
+/*
+ * End one "io" structure with a given error.
+ */
+static void verity_finish_io(struct dm_verity_io *io, int error)
+{
+	struct bio *bio = io->bio;
+	struct dm_verity *v = io->v;
+
+	bio->bi_end_io = io->orig_bi_end_io;
+	bio->bi_private = io->orig_bi_private;
+
+	if (io->io_vec != io->io_vec_inline)
+		mempool_free(io->io_vec, v->vec_mempool);
+	mempool_free(io, v->io_mempool);
+
+	bio_endio(bio, error);
+}
+
+static void verity_work(struct work_struct *w)
+{
+	struct dm_verity_io *io = container_of(w, struct dm_verity_io, work);
+
+	verity_finish_io(io, verity_verify_io(io));
+}
+
+static void verity_end_io(struct bio *bio, int error)
+{
+	struct dm_verity_io *io = bio->bi_private;
+	if (error) {
+		verity_finish_io(io, error);
+		return;
+	}
+
+	INIT_WORK(&io->work, verity_work);
+	queue_work(io->v->verify_wq, &io->work);
+}
+
+/*
+ * Prefetch buffers for the specified io.
+ * The root buffer is not prefetched, it is assumed that it will be cached
+ * all the time.
+ */
+static void verity_prefetch_io(struct dm_verity *v, struct dm_verity_io *io)
+{
+	int i;
+	for (i = v->levels - 2; i >= 0; i--) {
+		sector_t hash_block_start;
+		sector_t hash_block_end;
+		verity_hash_at_level(v, io->block, i, &hash_block_start, NULL);
+		verity_hash_at_level(v, io->block + io->n_blocks - 1, i, &hash_block_end, NULL);
+		if (!i) {
+			unsigned cluster = prefetch_cluster;
+			/* barrier to stop GCC from re-reading prefetch_cluster */
+			barrier();
+			cluster >>= v->data_dev_block_bits;
+			if (unlikely(!cluster))
+				goto no_prefetch_cluster;
+			if (unlikely(cluster & (cluster - 1)))
+				cluster = 1 << (fls(cluster) - 1);
+
+			hash_block_start &= ~(sector_t)(cluster - 1);
+			hash_block_end |= cluster - 1;
+			if (unlikely(hash_block_end >= v->hash_blocks))
+				hash_block_end = v->hash_blocks - 1;
+		}
+no_prefetch_cluster:
+		dm_bufio_prefetch(v->bufio, hash_block_start,
+					hash_block_end - hash_block_start + 1);
+	}
+}
+
+/*
+ * Bio map function. It allocates dm_verity_io structure and bio vector and
+ * fills them. Then it issues prefetches and the I/O.
+ */
+static int verity_map(struct dm_target *ti, struct bio *bio,
+		      union map_info *map_context)
+{
+	struct dm_verity *v = ti->private;
+	struct dm_verity_io *io;
+
+	if (((unsigned)bio->bi_sector | bio_sectors(bio)) &
+	    ((1 << (v->data_dev_block_bits - SECTOR_SHIFT)) - 1)) {
+		DMERR_LIMIT("unaligned io");
+		return -EIO;
+	}
+
+	if ((bio->bi_sector + bio_sectors(bio)) >>
+	    (v->data_dev_block_bits - SECTOR_SHIFT) > v->data_blocks) {
+		DMERR_LIMIT("io out of range");
+		return -EIO;
+	}
+
+	if (bio_data_dir(bio) == WRITE)
+		return -EIO;
+
+	io = mempool_alloc(v->io_mempool, GFP_NOIO);
+	io->v = v;
+	io->bio = bio;
+	io->orig_bi_end_io = bio->bi_end_io;
+	io->orig_bi_private = bio->bi_private;
+	io->block = bio->bi_sector >> (v->data_dev_block_bits - SECTOR_SHIFT);
+	io->n_blocks = bio->bi_size >> v->data_dev_block_bits;
+
+	bio->bi_end_io = verity_end_io;
+	bio->bi_private = io;
+	bio->bi_bdev = v->data_dev->bdev;
+	bio->bi_sector = verity_map_sector(v, bio->bi_sector);
+
+	io->io_vec_size = bio->bi_vcnt - bio->bi_idx;
+	if (io->io_vec_size < DM_VERITY_IO_VEC_INLINE)
+		io->io_vec = io->io_vec_inline;
+	else
+		io->io_vec = mempool_alloc(v->vec_mempool, GFP_NOIO);
+	memcpy(io->io_vec, bio_iovec(bio),
+	       io->io_vec_size * sizeof(struct bio_vec));
+
+	verity_prefetch_io(v, io);
+
+	generic_make_request(bio);
+
+	return DM_MAPIO_SUBMITTED;
+}
+
+static int verity_status(struct dm_target *ti, status_type_t type,
+			 char *result, unsigned maxlen)
+{
+	struct dm_verity *v = ti->private;
+	unsigned sz = 0;
+	unsigned x;
+
+	switch (type) {
+	case STATUSTYPE_INFO:
+		result[0] = 0;
+		break;
+	case STATUSTYPE_TABLE:
+		DMEMIT("%u %s %s %llu %u %s ",
+			0,
+			v->data_dev->name,
+			v->hash_dev->name,
+			(unsigned long long)v->hash_start << (v->hash_dev_block_bits - SECTOR_SHIFT),
+			1 << v->data_dev_block_bits,
+			v->alg_name
+			);
+		for (x = 0; x < v->digest_size; x++)
+			DMEMIT("%02x", v->root_digest[x]);
+		DMEMIT(" ");
+		if (!v->salt_size)
+			DMEMIT("-");
+		else
+			for (x = 0; x < v->salt_size; x++)
+				DMEMIT("%02x", v->salt[x]);
+		if (v->data_dev_block_bits != v->hash_dev_block_bits)
+			DMEMIT(" %u", 1 << v->hash_dev_block_bits);
+		break;
+	}
+	return 0;
+}
+
+static int verity_ioctl(struct dm_target *ti, unsigned cmd,
+			unsigned long arg)
+{
+	struct dm_verity *v = ti->private;
+	int r = 0;
+
+	if (v->data_start ||
+	    ti->len != i_size_read(v->data_dev->bdev->bd_inode) >> SECTOR_SHIFT)
+		r = scsi_verify_blk_ioctl(NULL, cmd);
+
+	return r ? : __blkdev_driver_ioctl(v->data_dev->bdev, v->data_dev->mode,
+				     cmd, arg);
+}
+
+static int verity_merge(struct dm_target *ti, struct bvec_merge_data *bvm,
+			struct bio_vec *biovec, int max_size)
+{
+	struct dm_verity *v = ti->private;
+	struct request_queue *q = bdev_get_queue(v->data_dev->bdev);
+
+	if (!q->merge_bvec_fn)
+		return max_size;
+
+	bvm->bi_bdev = v->data_dev->bdev;
+	bvm->bi_sector = verity_map_sector(v, bvm->bi_sector);
+
+	return min(max_size, q->merge_bvec_fn(q, bvm, biovec));
+}
+
+static int verity_iterate_devices(struct dm_target *ti,
+				  iterate_devices_callout_fn fn, void *data)
+{
+	struct dm_verity *v = ti->private;
+	return fn(ti, v->data_dev, v->data_start, ti->len, data);
+}
+
+static void verity_io_hints(struct dm_target *ti, struct queue_limits *limits)
+{
+	struct dm_verity *v = ti->private;
+
+	if (limits->logical_block_size < 1 << v->data_dev_block_bits)
+		limits->logical_block_size = 1 << v->data_dev_block_bits;
+	if (limits->physical_block_size < 1 << v->data_dev_block_bits)
+		limits->physical_block_size = 1 << v->data_dev_block_bits;
+	blk_limits_io_min(limits, limits->logical_block_size);
+}
+
+static void verity_dtr(struct dm_target *ti);
+
+static int verity_ctr(struct dm_target *ti, unsigned argc, char **argv)
+{
+	struct dm_verity *v;
+	unsigned num;
+	unsigned long long hs;
+	int r;
+	int i;
+	sector_t hash_position;
+	char dummy;
+
+	v = kzalloc(sizeof(struct dm_verity), GFP_KERNEL);
+	if (!v) {
+		ti->error = "Cannot allocate verity structure";
+		return -ENOMEM;
+	}
+	ti->private = v;
+	v->ti = ti;
+
+	if ((dm_table_get_mode(ti->table) & ~FMODE_READ) != 0) {
+		ti->error = "Device must be readonly";
+		r = -EINVAL;
+		goto bad;
+	}
+
+	if (argc < 7) {
+		ti->error = "Not enough arguments";
+		r = -EINVAL;
+		goto bad;
+	}
+
+	if (sscanf(argv[0], "%d%c", &num, &dummy) != 1 ||
+	    num != 0) {
+		ti->error = "Invalid version";
+		r = -EINVAL;
+		goto bad;
+	}
+
+	r = dm_get_device(ti, argv[1], FMODE_READ, &v->data_dev);
+	if (r) {
+		ti->error = "Data device lookup failed";
+		goto bad;
+	}
+
+	r = dm_get_device(ti, argv[2], FMODE_READ, &v->hash_dev);
+	if (r) {
+		ti->error = "Hash device lookup failed";
+		goto bad;
+	}
+
+	if (sscanf(argv[3], "%llu%c", &hs, &dummy) != 1 ||
+	    hs != (sector_t)hs) {
+		ti->error = "Invalid hash start";
+		r = -EINVAL;
+		goto bad;
+	}
+
+	if (sscanf(argv[4], "%u%c", &num, &dummy) != 1 ||
+	    !num || (num & (num - 1)) ||
+	    num < bdev_logical_block_size(v->data_dev->bdev) ||
+	    num > PAGE_SIZE) {
+		ti->error = "Invalid data device block size";
+		r = -EINVAL;
+		goto bad;
+	}
+	v->data_dev_block_bits = ffs(num) - 1;
+	v->hash_dev_block_bits = ffs(num) - 1;
+
+	v->alg_name = kstrdup(argv[5], GFP_KERNEL);
+	if (!v->alg_name) {
+		ti->error = "Cannot allocate algorithm name";
+		r = -ENOMEM;
+		goto bad;
+	}
+
+	v->tfm = crypto_alloc_shash(v->alg_name, 0, 0);
+	if (IS_ERR(v->tfm)) {
+		ti->error = "Cannot initialize hash function";
+		r = PTR_ERR(v->tfm);
+		v->tfm = NULL;
+		goto bad;
+	}
+	v->digest_size = crypto_shash_digestsize(v->tfm);
+	if ((1 << v->hash_dev_block_bits) < v->digest_size * 2) {
+		ti->error = "Digest size too big";
+		r = -EINVAL;
+		goto bad;
+	}
+	v->shash_descsize =
+		sizeof(struct shash_desc) + crypto_shash_descsize(v->tfm);
+
+	v->root_digest = kmalloc(v->digest_size, GFP_KERNEL);
+	if (!v->root_digest) {
+		ti->error = "Cannot allocate root digest";
+		r = -ENOMEM;
+		goto bad;
+	}
+	if (strlen(argv[6]) != v->digest_size * 2 ||
+	    hex2bin(v->root_digest, argv[6], v->digest_size)) {
+		ti->error = "Invalid root digest";
+		r = -EINVAL;
+		goto bad;
+	}
+
+	if (argc > 7 && strcmp(argv[7], "-")) {
+		v->salt_size = strlen(argv[7]) / 2;
+		v->salt = kmalloc(v->salt_size, GFP_KERNEL);
+		if (!v->salt) {
+			ti->error = "Cannot allocate salt";
+			r = -ENOMEM;
+			goto bad;
+		}
+		if (strlen(argv[7]) != v->salt_size * 2 ||
+		    hex2bin(v->salt, argv[7], v->salt_size)) {
+			ti->error = "Invalid salt";
+			r = -EINVAL;
+			goto bad;
+		}
+	}
+
+	if (argc > 8) {
+		if (sscanf(argv[8], "%u%c", &num, &dummy) != 1 ||
+		    !num || (num & (num - 1)) ||
+		    num < bdev_logical_block_size(v->hash_dev->bdev) ||
+		    num > INT_MAX) {
+			ti->error = "Invalid hash device block size";
+			r = -EINVAL;
+			goto bad;
+		}
+		v->hash_dev_block_bits = ffs(num) - 1;
+	}
+
+	if (hs & ((1 << (v->hash_dev_block_bits - SECTOR_SHIFT)) - 1)) {
+		ti->error = "Hash start not aligned on block boundary";
+		r = -EINVAL;
+		goto bad;
+	}
+	v->hash_start = hs >> (v->hash_dev_block_bits - SECTOR_SHIFT);
+
+	if (ti->len > i_size_read(v->data_dev->bdev->bd_inode) >> SECTOR_SHIFT) {
+		ti->error = "Data device si too small";
+		r = -EINVAL;
+		goto bad;
+	}
+
+	if (ti->len & ((1 << (v->data_dev_block_bits - SECTOR_SHIFT)) - 1)) {
+		ti->error = "Data device length is not aligned to block size";
+		r = -EINVAL;
+		goto bad;
+	}
+
+	v->data_blocks = ti->len >> (v->data_dev_block_bits - SECTOR_SHIFT);
+
+	v->hash_per_block_bits =
+		fls((1 << v->hash_dev_block_bits) / v->digest_size) - 1;
+
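+	/*
+	 * Compute the number of tree levels: each level fans out by
+	 * 2^hash_per_block_bits, so add levels until a single hash block
+	 * at the top covers all data blocks.
+	 */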
+	v->levels = 0;
+	if (v->data_blocks)
+		while (v->hash_per_block_bits * v->levels < 64 &&
+		       (unsigned long long)(v->data_blocks - 1) >>
+		       (v->hash_per_block_bits * v->levels))
+			v->levels++;
+
+	if (v->levels > DM_VERITY_MAX_LEVELS) {
+		ti->error = "Too many tree levels";
+		r = -E2BIG;
+		goto bad;
+	}
+
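+	/*
+	 * The tree is laid out on disk top-down: the top level starts at
+	 * hash_start and level 0 (the leaves) comes last. Record where each
+	 * level begins and check for sector overflow along the way.
+	 */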
+	hash_position = v->hash_start;
+	for (i = v->levels - 1; i >= 0; i--) {
+		sector_t s;
+		v->hash_level_block[i] = hash_position;
+		s = verity_position_at_level(v, v->data_blocks, i);
+		s = (s >> v->hash_per_block_bits) +
+		    !!(s & ((1 << v->hash_per_block_bits) - 1));
+		if (hash_position + s < hash_position) {
+			ti->error = "Hash device offset overflow";
+			r = -E2BIG;
+			goto bad;
+		}
+		hash_position += s;
+	}
+	v->hash_blocks = hash_position;
+
+	v->bufio = dm_bufio_client_create(v->hash_dev->bdev,
+		1 << v->hash_dev_block_bits, 1, sizeof(struct buffer_aux),
+		dm_bufio_alloc_callback, NULL);
+	if (IS_ERR(v->bufio)) {
+		ti->error = "Cannot initialize dm-bufio";
+		r = PTR_ERR(v->bufio);
+		v->bufio = NULL;
+		goto bad;
+	}
+
+	if (dm_bufio_get_device_size(v->bufio) < v->hash_blocks) {
+		ti->error = "Hash device is too small";
+		r = -E2BIG;
+		goto bad;
+	}
+
+	v->io_mempool = mempool_create_kmalloc_pool(DM_VERITY_MEMPOOL_SIZE,
+	  sizeof(struct dm_verity_io) + v->shash_descsize + v->digest_size * 2);
+	if (!v->io_mempool) {
+		ti->error = "Cannot allocate io mempool";
+		r = -ENOMEM;
+		goto bad;
+	}
+
+	v->vec_mempool = mempool_create_kmalloc_pool(DM_VERITY_MEMPOOL_SIZE,
+					BIO_MAX_PAGES * sizeof(struct bio_vec));
+	if (!v->vec_mempool) {
+		ti->error = "Cannot allocate vector mempool";
+		r = -ENOMEM;
+		goto bad;
+	}
+
+	/*v->verify_wq = alloc_workqueue("verityd", WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM, 1);*/
+	/* WQ_UNBOUND greatly improves performance when running on ramdisk */
+	v->verify_wq = alloc_workqueue("verityd", WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM | WQ_UNBOUND, num_online_cpus());
+	if (!v->verify_wq) {
+		ti->error = "Cannot allocate workqueue";
+		r = -ENOMEM;
+		goto bad;
+	}
+
+	return 0;
+
+bad:
+	verity_dtr(ti);
+	return r;
+}
+
+static void verity_dtr(struct dm_target *ti)
+{
+	struct dm_verity *v = ti->private;
+
+	if (v->verify_wq)
+		destroy_workqueue(v->verify_wq);
+	if (v->vec_mempool)
+		mempool_destroy(v->vec_mempool);
+	if (v->io_mempool)
+		mempool_destroy(v->io_mempool);
+	if (v->bufio)
+		dm_bufio_client_destroy(v->bufio);
+	kfree(v->salt);
+	kfree(v->root_digest);
+	if (v->tfm)
+		crypto_free_shash(v->tfm);
+	kfree(v->alg_name);
+	if (v->hash_dev)
+		dm_put_device(ti, v->hash_dev);
+	if (v->data_dev)
+		dm_put_device(ti, v->data_dev);
+	kfree(v);
+}
+
+static struct target_type verity_target = {
+	.name		= "verity",
+	.version	= {1, 0, 0},
+	.module		= THIS_MODULE,
+	.ctr		= verity_ctr,
+	.dtr		= verity_dtr,
+	.map		= verity_map,
+	.status		= verity_status,
+	.ioctl		= verity_ioctl,
+	.merge		= verity_merge,
+	.iterate_devices = verity_iterate_devices,
+	.io_hints	= verity_io_hints,
+};
+
+static int __init dm_verity_init(void)
+{
+	int r;
+	r = dm_register_target(&verity_target);
+	if (r < 0)
+		DMERR("register failed %d", r);
+	return r;
+}
+
+static void __exit dm_verity_exit(void)
+{
+	dm_unregister_target(&verity_target);
+}
+
+module_init(dm_verity_init);
+module_exit(dm_verity_exit);
+
+MODULE_AUTHOR("Mikulas Patocka <mpatocka@redhat.com>");
+MODULE_DESCRIPTION(DM_NAME " target for transparent disk integrity checking");
+MODULE_LICENSE("GPL");
+
Index: linux-3.3-rc6-fast/drivers/md/dm-bufio.c
===================================================================
--- linux-3.3-rc6-fast.orig/drivers/md/dm-bufio.c	2012-03-12 22:43:23.000000000 +0100
+++ linux-3.3-rc6-fast/drivers/md/dm-bufio.c	2012-03-13 15:41:02.000000000 +0100
@@ -579,7 +579,7 @@ static void write_endio(struct bio *bio,
 	struct dm_buffer *b = container_of(bio, struct dm_buffer, bio);
 
 	b->write_error = error;
-	if (error) {
+	if (unlikely(error)) {
 		struct dm_bufio_client *c = b->c;
 		(void)cmpxchg(&c->async_write_error, 0, error);
 	}
@@ -698,13 +698,20 @@ static void __wait_for_free_buffer(struc
 	dm_bufio_lock(c);
 }
 
+enum new_flag {
+	NF_FRESH = 0,
+	NF_READ = 1,
+	NF_GET = 2,
+	NF_PREFETCH = 3
+};
+
 /*
  * Allocate a new buffer. If the allocation is not possible, wait until
  * some other thread frees a buffer.
  *
  * May drop the lock and regain it.
  */
-static struct dm_buffer *__alloc_buffer_wait_no_callback(struct dm_bufio_client *c)
+static struct dm_buffer *__alloc_buffer_wait_no_callback(struct dm_bufio_client *c, enum new_flag nf)
 {
 	struct dm_buffer *b;
 
@@ -727,6 +734,9 @@ static struct dm_buffer *__alloc_buffer_
 				return b;
 		}
 
+		if (nf == NF_PREFETCH)
+			return NULL;
+
 		if (!list_empty(&c->reserved_buffers)) {
 			b = list_entry(c->reserved_buffers.next,
 				       struct dm_buffer, lru_list);
@@ -744,9 +754,12 @@ static struct dm_buffer *__alloc_buffer_
 	}
 }
 
-static struct dm_buffer *__alloc_buffer_wait(struct dm_bufio_client *c)
+static struct dm_buffer *__alloc_buffer_wait(struct dm_bufio_client *c, enum new_flag nf)
 {
-	struct dm_buffer *b = __alloc_buffer_wait_no_callback(c);
+	struct dm_buffer *b = __alloc_buffer_wait_no_callback(c, nf);
+
+	if (!b)
+		return NULL;
 
 	if (c->alloc_callback)
 		c->alloc_callback(b);
@@ -866,15 +879,8 @@ static struct dm_buffer *__find(struct d
  * Getting a buffer
  *--------------------------------------------------------------*/
 
-enum new_flag {
-	NF_FRESH = 0,
-	NF_READ = 1,
-	NF_GET = 2
-};
-
 static struct dm_buffer *__bufio_new(struct dm_bufio_client *c, sector_t block,
-				     enum new_flag nf, struct dm_buffer **bp,
-				     int *need_submit)
+				     enum new_flag nf, int *need_submit)
 {
 	struct dm_buffer *b, *new_b = NULL;
 
@@ -882,6 +888,19 @@ static struct dm_buffer *__bufio_new(str
 
 	b = __find(c, block);
 	if (b) {
+found_buffer:
+		if (nf == NF_PREFETCH)
+			return NULL;
+		/*
+		 * Note: it is essential that we don't wait for the buffer to be
+		 * read if dm_bufio_get function is used. Both dm_bufio_get and
+		 * dm_bufio_prefetch can be used in the driver request routine.
+		 * If the user called both dm_bufio_prefetch and dm_bufio_get on
+		 * the same buffer, it would deadlock if we waited.
+		 */
+		if (nf == NF_GET && unlikely(test_bit(B_READING, &b->state)))
+			return NULL;
+
 		b->hold_count++;
 		__relink_lru(b, test_bit(B_DIRTY, &b->state) ||
 			     test_bit(B_WRITING, &b->state));
@@ -891,7 +910,9 @@ static struct dm_buffer *__bufio_new(str
 	if (nf == NF_GET)
 		return NULL;
 
-	new_b = __alloc_buffer_wait(c);
+	new_b = __alloc_buffer_wait(c, nf);
+	if (!new_b)
+		return NULL;
 
 	/*
 	 * We've had a period where the mutex was unlocked, so need to
@@ -900,10 +921,7 @@ static struct dm_buffer *__bufio_new(str
 	b = __find(c, block);
 	if (b) {
 		__free_buffer_wake(new_b);
-		b->hold_count++;
-		__relink_lru(b, test_bit(B_DIRTY, &b->state) ||
-			     test_bit(B_WRITING, &b->state));
-		return b;
+		goto found_buffer;
 	}
 
 	__check_watermark(c);
@@ -957,7 +975,7 @@ static void *new_read(struct dm_bufio_cl
 	struct dm_buffer *b;
 
 	dm_bufio_lock(c);
-	b = __bufio_new(c, block, nf, bp, &need_submit);
+	b = __bufio_new(c, block, nf, &need_submit);
 	dm_bufio_unlock(c);
 
 	if (!b || IS_ERR(b))
@@ -1006,13 +1024,46 @@ void *dm_bufio_new(struct dm_bufio_clien
 }
 EXPORT_SYMBOL_GPL(dm_bufio_new);
 
+void dm_bufio_prefetch(struct dm_bufio_client *c,
+		       sector_t block, unsigned n_blocks)
+{
+	struct blk_plug plug;
+
+	blk_start_plug(&plug);
+	dm_bufio_lock(c);
+
+	for (; n_blocks--; block++) {
+		int need_submit;
+		struct dm_buffer *b;
+		b = __bufio_new(c, block, NF_PREFETCH, &need_submit);
+		if (unlikely(b != NULL)) {
+			dm_bufio_unlock(c);
+
+			if (need_submit)
+				submit_io(b, READ, b->block, read_endio);
+			dm_bufio_release(b);
+
+			dm_bufio_cond_resched();
+
+			if (!n_blocks)
+				goto flush_plug;
+			dm_bufio_lock(c);
+		}
+
+	}
+
+	dm_bufio_unlock(c);
+flush_plug:
+	blk_finish_plug(&plug);
+}
+EXPORT_SYMBOL_GPL(dm_bufio_prefetch);
+
 void dm_bufio_release(struct dm_buffer *b)
 {
 	struct dm_bufio_client *c = b->c;
 
 	dm_bufio_lock(c);
 
-	BUG_ON(test_bit(B_READING, &b->state));
 	BUG_ON(!b->hold_count);
 
 	b->hold_count--;
@@ -1025,6 +1076,7 @@ void dm_bufio_release(struct dm_buffer *
 		 * invalid buffer.
 		 */
 		if ((b->read_error || b->write_error) &&
+		    !test_bit(B_READING, &b->state) &&
 		    !test_bit(B_WRITING, &b->state) &&
 		    !test_bit(B_DIRTY, &b->state)) {
 			__unlink_buffer(b);
@@ -1042,6 +1094,8 @@ void dm_bufio_mark_buffer_dirty(struct d
 
 	dm_bufio_lock(c);
 
+	BUG_ON(test_bit(B_READING, &b->state));
+
 	if (!test_and_set_bit(B_DIRTY, &b->state))
 		__relink_lru(b, LIST_DIRTY);
 
Index: linux-3.3-rc6-fast/drivers/md/dm-bufio.h
===================================================================
--- linux-3.3-rc6-fast.orig/drivers/md/dm-bufio.h	2012-03-12 22:43:23.000000000 +0100
+++ linux-3.3-rc6-fast/drivers/md/dm-bufio.h	2012-03-12 22:43:25.000000000 +0100
@@ -63,6 +63,14 @@ void *dm_bufio_new(struct dm_bufio_clien
 		   struct dm_buffer **bp);
 
 /*
+ * Prefetch the specified blocks to the cache.
+ * The function starts to read the blocks and returns without waiting for
+ * I/O to finish.
+ */
+void dm_bufio_prefetch(struct dm_bufio_client *c,
+		       sector_t block, unsigned n_blocks);
+
+/*
  * Release a reference obtained with dm_bufio_{read,get,new}. The data
  * pointer and dm_buffer pointer is no longer valid after this call.
  */

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH] dm: remake of the verity target
  2012-03-13 22:20     ` [PATCH] dm: remake of the verity target Mikulas Patocka
@ 2012-03-14 21:13       ` Will Drewry
  2012-03-17  1:16         ` Mikulas Patocka
  2012-03-14 21:43       ` Mandeep Singh Baines
  2012-03-20 15:41       ` Mandeep Singh Baines
  2 siblings, 1 reply; 34+ messages in thread
From: Will Drewry @ 2012-03-14 21:13 UTC (permalink / raw)
  To: Mikulas Patocka
  Cc: Mandeep Singh Baines, linux-kernel, dm-devel, Alasdair G Kergon,
	Elly Jones, Milan Broz, Olof Johansson, Steffen Klassert,
	Andrew Morton

Hi Mikulas,

This is a nice rewrite and takes advantage of your dm-bufio layer. I
wish it'd existed (and/or that we'd written it :) in 2009 when we started this
work!  Some comments below:

On Tue, Mar 13, 2012 at 5:20 PM, Mikulas Patocka <mpatocka@redhat.com> wrote:
> Hi
>
>> Hi Mikulas,
>>
>> This is some nice work. I like that you've been able to abstract a lot
>> of the hash buffer management with dm-bufio. You got rid of the I/O queue.
>> I've been meaning to do that for a while. The prefetch is also nice.
>> We planned to do this but I decided to not do it now in order to get the
>> base functionality in:
>>
>> http://crosbug.com/25441
>>
>> However, there are some things that I don't like. I don't like comments
>> either but you have none. You also removed our documentation. You are
>
> I added some comments. As for documentation, it's OK to use documentation
> from your patch because the on-disk format and the target arguments are
> the same (with an enhancement that my code supports different data and
> metadata block sizes and a variable-length salt).

Sounds good.

>> allocated a complete shash_desc per I/O. We only allocate one per CPU.
>
> The hash of a 4k block takes 174000 cycles. So trying to optimize away
> a memory latency of about 250 cycles doesn't make much sense.
>
> I actually observed better performance using verity on a ramdisk with
> the workqueue unbound to specific CPUs. The reason is that the ramdisk bio
> completion routine always runs on the same CPU (the one that submitted
> the request), so with a bound workqueue, everything was executing on one
> CPU. With an unbound workqueue, I got parallelism.
>
>> We short-circuit the hash at any level. Your implementation can only
>> short-circuit at the lowest level.
>
> It short-circuits the hash at all levels. If the function
> "verity_verify_level" finds out that "aux->hash_verified" is non-zero, it
> doesn't do any hashing, it just copies the hash for the lower level. My
> implementation walks the tree from the top to the bottom, but it doesn't
> do hash verification if the same block has been verified before.
>
> All this tree-walking from the root to the bottom is 50 times faster than
> the actual hashing of the data block (I measured that), so there's not
> much point in trying to optimize it. I did a simple optimization (don't
> walk the tree if the lowest block is already verified) and I don't need to
> do anything complicated given that it can't improve things by more than
> 2%.

All we'd done was reverse the walk (we've had it both ways now :),
nothing complicated, but I don't think it's a problem.
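
(The 2% figure checks out, too: if hashing the data block costs ~50x
the walk, a full verification is 51 units of work, so eliminating the
walk entirely saves at most 1/51, i.e. about 2%.)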

>> I'd like to propose that we get the version we sent upstream and then work
>> together on adding some of your enhancements incrementally.
>
> If you add dm-bufio support, you end up deleting the majority of the original
> code anyway. That's why I wrote it from scratch and that's why I didn't
> attempt to morph your code.
>
> It's simpler to write the code from scratch and it is also less bug-prone.
>
>> Other than
>> the changes we've made to cleanup for upstreaming, the version I
>> submitted is the code we are using in production.
>>
>> I'm happy to add prefetch now if that is required for merging.
>>
>> What do you think?
>>
>> Regards,
>> Mandeep
>
> This is the version with comments added:
>
> Mikulas
>
> ----
>
> Remake of the google dm-verity patch.
>
> Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
>
> ---
>  drivers/md/Kconfig     |   17
>  drivers/md/Makefile    |    1
>  drivers/md/dm-verity.c |  851 +++++++++++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 869 insertions(+)
>
> Index: linux-3.3-rc6-fast/drivers/md/Kconfig
> ===================================================================
> --- linux-3.3-rc6-fast.orig/drivers/md/Kconfig  2012-03-13 21:46:03.000000000 +0100
> +++ linux-3.3-rc6-fast/drivers/md/Kconfig       2012-03-13 21:46:05.000000000 +0100
> @@ -404,4 +404,21 @@ config DM_VERITY2
>
>           If unsure, say N.
>
> +config DM_VERITY
> +       tristate "Verity target support"
> +       depends on BLK_DEV_DM
> +       select CRYPTO
> +       select CRYPTO_HASH
> +       select DM_BUFIO
> +       ---help---
> +         This device-mapper target allows you to create a device that
> +         transparently integrity checks the data on it. You'll need to
> +         activate the digests you're going to use in the cryptoapi
> +         configuration.
> +
> +         To compile this code as a module, choose M here: the module will
> +         be called dm-verity.
> +
> +         If unsure, say N.
> +
>  endif # MD
> Index: linux-3.3-rc6-fast/drivers/md/Makefile
> ===================================================================
> --- linux-3.3-rc6-fast.orig/drivers/md/Makefile 2012-03-13 21:46:03.000000000 +0100
> +++ linux-3.3-rc6-fast/drivers/md/Makefile      2012-03-13 21:46:05.000000000 +0100
> @@ -29,6 +29,7 @@ obj-$(CONFIG_MD_FAULTY)               += faulty.o
>  obj-$(CONFIG_BLK_DEV_MD)       += md-mod.o
>  obj-$(CONFIG_BLK_DEV_DM)       += dm-mod.o
>  obj-$(CONFIG_DM_BUFIO)         += dm-bufio.o
> +obj-$(CONFIG_DM_VERITY)                += dm-verity.o
>  obj-$(CONFIG_DM_CRYPT)         += dm-crypt.o
>  obj-$(CONFIG_DM_DELAY)         += dm-delay.o
>  obj-$(CONFIG_DM_FLAKEY)                += dm-flakey.o
> Index: linux-3.3-rc6-fast/drivers/md/dm-verity.c
> ===================================================================
> --- /dev/null   1970-01-01 00:00:00.000000000 +0000
> +++ linux-3.3-rc6-fast/drivers/md/dm-verity.c   2012-03-13 22:02:05.000000000 +0100
> @@ -0,0 +1,851 @@
> +/*
> + * Copyright (C) 2012 Red Hat, Inc.
> + *
> + * Author: Mikulas Patocka <mpatocka@redhat.com>
> + *
> + * Based on Chromium dm-verity driver (C) 2011 The Chromium OS Authors
> + *
> + * This file is released under the GPLv2.
> + *
> + * Device mapper target parameters:
> + *     <version>       0
> + *     <data device>
> + *     <hash device>
> + *     <hash start>    (typically 0)
> + *     <block size>    (typically 4096)
> + *     <algorithm>
> + *     <digest>
> + *     optional parameters:
> + *             <salt> (should have 32 bytes for compatibility with Google code)
> + *             <hash block size> (by default it is the same as data block size)
> + *
> + * In the file "/sys/module/dm_verity/parameters/prefetch_cluster" you can set
> + * default prefetch value. Data are read in "prefetch_cluster" chunks from the
> + * hash device. Prefetch cluster greatly improves performance when data and hash
> + * are on the same disk on different partitions.

... on different partitions on devices with poor random access behavior.

> + */
> +
> +#include <linux/module.h>
> +#include <linux/device-mapper.h>
> +#include <crypto/hash.h>
> +#include "dm-bufio.h"
> +
> +#define DM_MSG_PREFIX                  "verity"
> +
> +#define DM_VERITY_IO_VEC_INLINE                16
> +#define DM_VERITY_MEMPOOL_SIZE         4
> +#define DM_VERITY_DEFAULT_PREFETCH_SIZE        262144
> +
> +#define DM_VERITY_MAX_LEVELS           63
> +
> +static unsigned prefetch_cluster = DM_VERITY_DEFAULT_PREFETCH_SIZE;
> +
> +module_param_named(prefetch_cluster, prefetch_cluster, uint, S_IRUGO | S_IWUSR);
> +
> +struct dm_verity {
> +       struct dm_dev *data_dev;
> +       struct dm_dev *hash_dev;
> +       struct dm_target *ti;
> +       struct dm_bufio_client *bufio;
> +       char *alg_name;
> +       struct crypto_shash *tfm;
> +       u8 *root_digest;        /* digest of the root block */
> +       u8 *salt;               /* salt, its size is salt_size */
> +       unsigned salt_size;
> +       sector_t data_start;    /* data offset in 512-byte sectors */
> +       sector_t hash_start;    /* hash start in blocks */
> +       sector_t data_blocks;   /* the number of data blocks */
> +       sector_t hash_blocks;   /* the number of hash blocks */
> +       unsigned char data_dev_block_bits;      /* log2(data blocksize) */
> +       unsigned char hash_dev_block_bits;      /* log2(hash blocksize) */
> +       unsigned char hash_per_block_bits;      /* log2(hashes in hash block) */
> +       unsigned char levels;   /* the number of tree levels */
> +       unsigned digest_size;   /* digest size for the current hash algorithm */
> +       unsigned shash_descsize;/* the size of temporary space for crypto */
> +
> +       mempool_t *io_mempool;  /* mempool of struct dm_verity_io */
> +       mempool_t *vec_mempool; /* mempool of bio vector */
> +
> +       struct workqueue_struct *verify_wq;
> +
> +       /* starting blocks for each tree level. 0 is the lowest level. */
> +       sector_t hash_level_block[DM_VERITY_MAX_LEVELS];
> +};
> +
> +struct dm_verity_io {
> +       struct dm_verity *v;
> +       struct bio *bio;
> +
> +       /* original values of bio->bi_end_io and bio->bi_private */
> +       bio_end_io_t *orig_bi_end_io;
> +       void *orig_bi_private;
> +
> +       sector_t block;
> +       unsigned n_blocks;
> +
> +       /* saved bio vector */
> +       struct bio_vec *io_vec;
> +       unsigned io_vec_size;
> +
> +       struct work_struct work;
> +
> +       /* a space for short vectors; longer vectors are allocated separately */
> +       struct bio_vec io_vec_inline[DM_VERITY_IO_VEC_INLINE];
> +
> +       /* variable-size fields, accessible with functions
> +               io_hash_desc, io_real_digest, io_want_digest */
> +       /* u8 hash_desc[crypto_shash_descsize(v->tfm)]; */
> +       /* u8 real_digest[v->digest_size]; */
> +       /* u8 want_digest[v->digest_size]; */
> +};
> +
> +static struct shash_desc *io_hash_desc(struct dm_verity *v, struct dm_verity_io *io)
> +{
> +       return (struct shash_desc *)(io + 1);
> +}
> +
> +static u8 *io_real_digest(struct dm_verity *v, struct dm_verity_io *io)
> +{
> +       return (u8 *)(io + 1) + v->shash_descsize;
> +}
> +
> +static u8 *io_want_digest(struct dm_verity *v, struct dm_verity_io *io)
> +{
> +       return (u8 *)(io + 1) + v->shash_descsize + v->digest_size;
> +}
> +
> +/*
> + * Auxiliary structure appended to each dm-bufio buffer. If the value
> + * hash_verified is nonzero, hash of the block has been verified.
> + *
> + * There is no lock around this value, a race condition can at worst cause
> + * that multiple processes verify the hash of the same buffer simultaneously.
> + * This condition is harmless, so we don't need locking.
> + */

Might be worth clarifying that no consumer will ever write a 0 value
to the hash_verified field after dm_bufio_alloc_callback, as that is
the critical constraint.  It's what makes lockless/atomic-less access
acceptable.  As you say, the worst case is that you over-verify or you
over-write.
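
Sketched as a comment, just to capture the invariant (my wording, not
from the patch):

	/*
	 * hash_verified only ever goes 0 -> 1, and only after the buffer
	 * contents have been verified. A racing reader that sees a stale
	 * 0 merely re-verifies; it can never see 1 on unverified data.
	 */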

bufio is nice for this use since it encapsulates the need for the
atomic state transitions we were using in our module to stay
lock-free.

> +struct buffer_aux {
> +       int hash_verified;
> +};
> +
> +/*
> + * Initialize struct buffer_aux for a freshly created buffer.
> + */
> +static void dm_bufio_alloc_callback(struct dm_buffer *buf)
> +{
> +       struct buffer_aux *aux = dm_bufio_get_aux_data(buf);
> +       aux->hash_verified = 0;
> +}
> +
> +/*
> + * Translate input sector number to the sector number on the target device.
> + */
> +static sector_t verity_map_sector(struct dm_verity *v, sector_t bi_sector)
> +{
> +       return v->data_start + dm_target_offset(v->ti, bi_sector);
> +}
> +
> +/*
> + * Return hash position of a specified block at a specified tree level
> + * (0 is the lowest level).
> + * The lowest "hash_per_block_bits"-bits of the result denote hash position
> + * inside a hash block. The remaining bits denode location of the hash block.

s/denode/denote/
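
To make the layout concrete, a quick worked example (my numbers,
assuming 4096-byte hash blocks and a 32-byte digest):

	/*
	 * hash_per_block_bits = fls(4096 / 32) - 1 = 7, i.e. 128 digests
	 * per hash block. For data block 1000 at level 0:
	 *   position    = 1000 >> (0 * 7)                   = 1000
	 *   *hash_block = hash_level_block[0] + (1000 >> 7)  (base + 7)
	 *   *offset     = 32 * (1000 & 127)                  = 3328 bytes
	 */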

> + */
> +static sector_t verity_position_at_level(struct dm_verity *v, sector_t block,
> +                                        int level)
> +{
> +       return block >> (level * v->hash_per_block_bits);
> +}
> +
> +static void verity_hash_at_level(struct dm_verity *v, sector_t block, int level,
> +                                sector_t *hash_block, unsigned *offset)
> +{
> +       sector_t position = verity_position_at_level(v, block, level);
> +
> +       *hash_block = v->hash_level_block[level] + (position >> v->hash_per_block_bits);
> +       if (offset)
> +               *offset = v->digest_size * (position & ((1 << v->hash_per_block_bits) - 1));
> +}
> +
> +/*
> + * Verify hash of a metadata block pertaining to the specified data block
> + * ("block" argument) at a specified level ("level" argument).
> + *
> + * On successful return, io_want_digest(v, io) contains the hash value for
> + * a lower tree level or for the data block (if we're at the lowest level).
> + *
> + * If "skip_unverified" is true, unverified buffer is skipped and 1 is returned.
> + * If "skip_unverified" is false, unverified buffer is hashed and verified
> + * against current value of io_want_digest(v, io).
> + */
> +static int verity_verify_level(struct dm_verity_io *io, sector_t block,
> +                              int level, bool skip_unverified)
> +{
> +       struct dm_verity *v = io->v;
> +       struct dm_buffer *buf;
> +       struct buffer_aux *aux;
> +       u8 *data;
> +       int r;
> +       sector_t hash_block;
> +       unsigned offset;
> +
> +       verity_hash_at_level(v, block, level, &hash_block, &offset);
> +
> +       data = dm_bufio_read(v->bufio, hash_block, &buf);
> +       if (unlikely(IS_ERR(data)))
> +               return PTR_ERR(data);
> +
> +       aux = dm_bufio_get_aux_data(buf);
> +
> +       if (!aux->hash_verified) {
> +               struct shash_desc *desc;
> +               u8 *result;
> +
> +               if (skip_unverified) {
> +                       r = 1;
> +                       goto release_ret_r;
> +               }
> +
> +               desc = io_hash_desc(v, io);
> +               desc->tfm = v->tfm;
> +               desc->flags = CRYPTO_TFM_REQ_MAY_SLEEP;
> +               r = crypto_shash_init(desc);
> +               if (r < 0) {
> +                       DMERR("crypto_shash_init failed: %d", r);
> +                       goto release_ret_r;
> +               }
> +
> +               r = crypto_shash_update(desc, data, 1 << v->hash_dev_block_bits);
> +               if (r < 0) {
> +                       DMERR("crypto_shash_update failed: %d", r);
> +                       goto release_ret_r;
> +               }
> +
> +               r = crypto_shash_update(desc, v->salt, v->salt_size);
> +               if (r < 0) {
> +                       DMERR("crypto_shash_update failed: %d", r);
> +                       goto release_ret_r;
> +               }
> +
> +               result = io_real_digest(v, io);
> +               r = crypto_shash_final(desc, result);
> +               if (r < 0) {
> +                       DMERR("crypto_shash_final failed: %d", r);
> +                       goto release_ret_r;
> +               }
> +               if (unlikely(memcmp(result, io_want_digest(v, io), v->digest_size))) {
> +                       DMERR_LIMIT("metadata block %llu is corrupted",
> +                               (unsigned long long)hash_block);
> +                       r = -EIO;
> +                       goto release_ret_r;
> +               } else
> +                       aux->hash_verified = 1;
> +       }
> +
> +       data += offset;
> +
> +       memcpy(io_want_digest(v, io), data, v->digest_size);
> +
> +       dm_bufio_release(buf);
> +       return 0;
> +
> +release_ret_r:
> +       dm_bufio_release(buf);
> +       return r;
> +}
> +
> +/*
> + * Verify one "dm_verity_io" structure.
> + */
> +static int verity_verify_io(struct dm_verity_io *io)
> +{
> +       struct dm_verity *v = io->v;
> +       unsigned b;
> +       int i;
> +       unsigned vector = 0, offset = 0;
> +       for (b = 0; b < io->n_blocks; b++) {
> +               struct shash_desc *desc;
> +               u8 *result;
> +               int r;
> +               unsigned todo;
> +
> +               if (likely(v->levels)) {
> +                       /*
> +                        * First, we try to get the requested hash for
> +                        * the current block. If the hash block itself is
> +                        * verified, zero is returned. If it isn't, this
> +                        * function returns 1 and we fall back to whole
> +                        * chain verification.
> +                        */
> +                       int r = verity_verify_level(io, io->block + b, 0, true);
> +                       if (likely(!r))
> +                               goto test_block_hash;
> +                       if (r < 0)
> +                               return r;
> +               }
> +
> +               memcpy(io_want_digest(v, io), v->root_digest, v->digest_size);
> +
> +               for (i = v->levels - 1; i >= 0; i--) {
> +                       int r = verity_verify_level(io, io->block + b, i, false);
> +                       if (unlikely(r))
> +                               return r;
> +               }
> +
> +test_block_hash:
> +               desc = io_hash_desc(v, io);
> +               desc->tfm = v->tfm;
> +               desc->flags = CRYPTO_TFM_REQ_MAY_SLEEP;
> +               r = crypto_shash_init(desc);
> +               if (r < 0) {
> +                       DMERR("crypto_shash_init failed: %d", r);
> +                       return r;
> +               }
> +
> +               todo = 1 << v->data_dev_block_bits;
> +               do {
> +                       struct bio_vec *bv;
> +                       u8 *page;
> +                       unsigned len;
> +
> +                       BUG_ON(vector >= io->io_vec_size);
> +                       bv = &io->io_vec[vector];
> +                       page = kmap_atomic(bv->bv_page, KM_USER0);
> +                       len = bv->bv_len - offset;
> +                       if (likely(len >= todo))
> +                               len = todo;
> +                       r = crypto_shash_update(desc,
> +                                       page + bv->bv_offset + offset, len);
> +                       kunmap_atomic(page, KM_USER0);
> +                       if (r < 0) {
> +                               DMERR("crypto_shash_update failed: %d", r);
> +                               return r;
> +                       }
> +                       offset += len;
> +                       if (likely(offset == bv->bv_len)) {
> +                               offset = 0;
> +                               vector++;
> +                       }
> +                       todo -= len;
> +               } while (todo);
> +
> +               r = crypto_shash_update(desc, v->salt, v->salt_size);
> +               if (r < 0) {
> +                       DMERR("crypto_shash_update failed: %d", r);
> +                       return r;
> +               }
> +
> +               result = io_real_digest(v, io);
> +               r = crypto_shash_final(desc, result);
> +               if (r < 0) {
> +                       DMERR("crypto_shash_final failed: %d", r);
> +                       return r;
> +               }
> +               if (unlikely(memcmp(result, io_want_digest(v, io), v->digest_size))) {
> +                       DMERR_LIMIT("data block %llu is corrupted",
> +                               (unsigned long long)(io->block + b));
> +                       return -EIO;
> +               }
> +       }
> +       BUG_ON(vector != io->io_vec_size);
> +       BUG_ON(offset);
> +       return 0;
> +}
> +
> +/*
> + * End one "io" structure with a given error.
> + */
> +static void verity_finish_io(struct dm_verity_io *io, int error)
> +{
> +       struct bio *bio = io->bio;
> +       struct dm_verity *v = io->v;
> +
> +       bio->bi_end_io = io->orig_bi_end_io;
> +       bio->bi_private = io->orig_bi_private;
> +
> +       if (io->io_vec != io->io_vec_inline)
> +               mempool_free(io->io_vec, v->vec_mempool);
> +       mempool_free(io, v->io_mempool);
> +
> +       bio_endio(bio, error);
> +}
> +
> +static void verity_work(struct work_struct *w)
> +{
> +       struct dm_verity_io *io = container_of(w, struct dm_verity_io, work);
> +
> +       verity_finish_io(io, verity_verify_io(io));
> +}
> +
> +static void verity_end_io(struct bio *bio, int error)
> +{
> +       struct dm_verity_io *io = bio->bi_private;
> +       if (error) {
> +               verity_finish_io(io, error);
> +               return;
> +       }
> +
> +       INIT_WORK(&io->work, verity_work);
> +       queue_work(io->v->verify_wq, &io->work);
> +}
> +
> +/*
> + * Prefetch buffers for the specified io.
> + * The root buffer is not prefetched, it is assumed that it will be cached
> + * all the time.
> + */
> +static void verity_prefetch_io(struct dm_verity *v, struct dm_verity_io *io)
> +{
> +       int i;
> +       for (i = v->levels - 2; i >= 0; i--) {
> +               sector_t hash_block_start;
> +               sector_t hash_block_end;
> +               verity_hash_at_level(v, io->block, i, &hash_block_start, NULL);
> +               verity_hash_at_level(v, io->block + io->n_blocks - 1, i, &hash_block_end, NULL);
> +               if (!i) {
> +                       unsigned cluster = prefetch_cluster;
> +        /* barrier to stop GCC from re-reading prefetch_cluster again */
> +                       barrier();
> +                       cluster >>= v->data_dev_block_bits;

Would:
  unsigned cluster = prefetch_cluster >> v->data_dev_block_bits;
not have similar behavior without a barrier?  (Yeah yeah I could
compile and see, but I was curious if you already had.)

Since the max iterations here is 61 in a worst-case, I don't think
it's a big deal to barrier() each time, just thought I'd ask.
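
For the record, the single-read variant I had in mind would be
something like this (untested, assuming ACCESS_ONCE() is acceptable
here):

	/* force exactly one load of the module parameter, so no
	 * barrier() is needed to prevent a re-read */
	unsigned cluster = ACCESS_ONCE(prefetch_cluster)
				>> v->data_dev_block_bits;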

> +                       if (unlikely(!cluster))
> +                               goto no_prefetch_cluster;
> +                       if (unlikely(cluster & (cluster - 1)))
> +                               cluster = 1 << (fls(cluster) - 1);
> +
> +                       hash_block_start &= ~(sector_t)(cluster - 1);
> +                       hash_block_end |= cluster - 1;
> +                       if (unlikely(hash_block_end >= v->hash_blocks))
> +                               hash_block_end = v->hash_blocks - 1;
> +               }
> +no_prefetch_cluster:
> +               dm_bufio_prefetch(v->bufio, hash_block_start,
> +                                       hash_block_end - hash_block_start + 1);
> +       }
> +}
> +
> +/*
> + * Bio map function. It allocates dm_verity_io structure and bio vector and
> + * fills them. Then it issues prefetches and the I/O.
> + */
> +static int verity_map(struct dm_target *ti, struct bio *bio,
> +                     union map_info *map_context)
> +{
> +       struct dm_verity *v = ti->private;
> +       struct dm_verity_io *io;
> +
> +       if (((unsigned)bio->bi_sector | bio_sectors(bio)) &
> +           ((1 << (v->data_dev_block_bits - SECTOR_SHIFT)) - 1)) {
> +               DMERR_LIMIT("unaligned io");
> +               return -EIO;
> +       }
> +
> +       if ((bio->bi_sector + bio_sectors(bio)) >>
> +           (v->data_dev_block_bits - SECTOR_SHIFT) > v->data_blocks) {
> +               DMERR_LIMIT("io out of range");
> +               return -EIO;
> +       }
> +
> +       if (bio_data_dir(bio) == WRITE)
> +               return -EIO;
> +
> +       io = mempool_alloc(v->io_mempool, GFP_NOIO);
> +       io->v = v;
> +       io->bio = bio;
> +       io->orig_bi_end_io = bio->bi_end_io;
> +       io->orig_bi_private = bio->bi_private;
> +       io->block = bio->bi_sector >> (v->data_dev_block_bits - SECTOR_SHIFT);
> +       io->n_blocks = bio->bi_size >> v->data_dev_block_bits;
> +
> +       bio->bi_end_io = verity_end_io;
> +       bio->bi_private = io;
> +       bio->bi_bdev = v->data_dev->bdev;
> +       bio->bi_sector = verity_map_sector(v, bio->bi_sector);
> +
> +       io->io_vec_size = bio->bi_vcnt - bio->bi_idx;
> +       if (io->io_vec_size < DM_VERITY_IO_VEC_INLINE)
> +               io->io_vec = io->io_vec_inline;
> +       else
> +               io->io_vec = mempool_alloc(v->vec_mempool, GFP_NOIO);
> +       memcpy(io->io_vec, bio_iovec(bio),
> +              io->io_vec_size * sizeof(struct bio_vec));
> +
> +       verity_prefetch_io(v, io);
> +
> +       generic_make_request(bio);
> +
> +       return DM_MAPIO_SUBMITTED;
> +}
> +
> +static int verity_status(struct dm_target *ti, status_type_t type,
> +                        char *result, unsigned maxlen)
> +{
> +       struct dm_verity *v = ti->private;
> +       unsigned sz = 0;
> +       unsigned x;
> +
> +       switch (type) {
> +       case STATUSTYPE_INFO:
> +               result[0] = 0;
> +               break;
> +       case STATUSTYPE_TABLE:
> +               DMEMIT("%u %s %s %llu %u %s ",
> +                       0,
> +                       v->data_dev->name,
> +                       v->hash_dev->name,

I understand the new approach is to use major:minor instead of the
device name.  I don't care which, but I believe agk@ requested that.

> +                       (unsigned long long)v->hash_start << (v->hash_dev_block_bits - SECTOR_SHIFT),
> +                       1 << v->data_dev_block_bits,
> +                       v->alg_name
> +                       );
> +               for (x = 0; x < v->digest_size; x++)
> +                       DMEMIT("%02x", v->root_digest[x]);
> +               DMEMIT(" ");
> +               if (!v->salt_size)
> +                       DMEMIT("-");
> +               else
> +                       for (x = 0; x < v->salt_size; x++)
> +                               DMEMIT("%02x", v->salt[x]);
> +               if (v->data_dev_block_bits != v->hash_dev_block_bits)
> +                       DMEMIT(" %u", 1 << v->hash_dev_block_bits);
> +               break;
> +       }
> +       return 0;
> +}
> +
> +static int verity_ioctl(struct dm_target *ti, unsigned cmd,
> +                       unsigned long arg)
> +{
> +       struct dm_verity *v = ti->private;
> +       int r = 0;
> +
> +       if (v->data_start ||
> +           ti->len != i_size_read(v->data_dev->bdev->bd_inode) >> SECTOR_SHIFT)
> +               r = scsi_verify_blk_ioctl(NULL, cmd);
> +

Is it worth supporting ioctl at all given these hoops?  Nothing stops
a privileged user from directly running the ioctl on the underlying
device/devices, it's just very inconvenient :)

> +       return r ? : __blkdev_driver_ioctl(v->data_dev->bdev, v->data_dev->mode,
> +                                    cmd, arg);
> +}
> +
> +static int verity_merge(struct dm_target *ti, struct bvec_merge_data *bvm,
> +                       struct bio_vec *biovec, int max_size)
> +{
> +       struct dm_verity *v = ti->private;
> +       struct request_queue *q = bdev_get_queue(v->data_dev->bdev);
> +
> +       if (!q->merge_bvec_fn)
> +               return max_size;
> +
> +       bvm->bi_bdev = v->data_dev->bdev;
> +       bvm->bi_sector = verity_map_sector(v, bvm->bi_sector);
> +
> +       return min(max_size, q->merge_bvec_fn(q, bvm, biovec));
> +}
> +
> +static int verity_iterate_devices(struct dm_target *ti,
> +                                 iterate_devices_callout_fn fn, void *data)
> +{
> +       struct dm_verity *v = ti->private;
> +       return fn(ti, v->data_dev, v->data_start, ti->len, data);
> +}
> +
> +static void verity_io_hints(struct dm_target *ti, struct queue_limits *limits)
> +{
> +       struct dm_verity *v = ti->private;
> +
> +       if (limits->logical_block_size < 1 << v->data_dev_block_bits)
> +               limits->logical_block_size = 1 << v->data_dev_block_bits;
> +       if (limits->physical_block_size < 1 << v->data_dev_block_bits)
> +               limits->physical_block_size = 1 << v->data_dev_block_bits;
> +       blk_limits_io_min(limits, limits->logical_block_size);
> +}
> +
> +static void verity_dtr(struct dm_target *ti);
> +
> +static int verity_ctr(struct dm_target *ti, unsigned argc, char **argv)
> +{
> +       struct dm_verity *v;
> +       unsigned num;
> +       unsigned long long hs;
> +       int r;
> +       int i;
> +       sector_t hash_position;
> +       char dummy;
> +
> +       v = kzalloc(sizeof(struct dm_verity), GFP_KERNEL);
> +       if (!v) {
> +               ti->error = "Cannot allocate verity structure";
> +               return -ENOMEM;
> +       }
> +       ti->private = v;
> +       v->ti = ti;
> +
> +       if ((dm_table_get_mode(ti->table) & ~FMODE_READ) != 0) {
> +               ti->error = "Device must be readonly";
> +               r = -EINVAL;
> +               goto bad;
> +       }
> +
> +       if (argc < 7) {
> +               ti->error = "Not enough arguments";
> +               r = -EINVAL;
> +               goto bad;
> +       }
> +
> +       if (sscanf(argv[0], "%d%c", &num, &dummy) != 1 ||
> +           num != 0) {
> +               ti->error = "Invalid version";
> +               r = -EINVAL;
> +               goto bad;
> +       }
> +
> +       r = dm_get_device(ti, argv[1], FMODE_READ, &v->data_dev);
> +       if (r) {
> +               ti->error = "Data device lookup failed";
> +               goto bad;
> +       }
> +
> +       r = dm_get_device(ti, argv[2], FMODE_READ, &v->hash_dev);
> +       if (r) {
> +               ti->error = "Hash device lookup failed";
> +               goto bad;
> +       }
> +
> +       if (sscanf(argv[3], "%llu%c", &hs, &dummy) != 1 ||
> +           hs != (sector_t)hs) {
> +               ti->error = "Invalid hash start";
> +               r = -EINVAL;
> +               goto bad;
> +       }
> +
> +       if (sscanf(argv[4], "%u%c", &num, &dummy) != 1 ||
> +           !num || (num & (num - 1)) ||
> +           num < bdev_logical_block_size(v->data_dev->bdev) ||
> +           num > PAGE_SIZE) {
> +               ti->error = "Invalid data device block size";
> +               r = -EINVAL;
> +               goto bad;
> +       }
> +       v->data_dev_block_bits = ffs(num) - 1;
> +       v->hash_dev_block_bits = ffs(num) - 1;
> +
> +       v->alg_name = kstrdup(argv[5], GFP_KERNEL);
> +       if (!v->alg_name) {
> +               ti->error = "Cannot allocate algorithm name";
> +               r = -ENOMEM;
> +               goto bad;
> +       }
> +
> +       v->tfm = crypto_alloc_shash(v->alg_name, 0, 0);
> +       if (IS_ERR(v->tfm)) {
> +               ti->error = "Cannot initialize hash function";
> +               r = PTR_ERR(v->tfm);
> +               v->tfm = NULL;
> +               goto bad;
> +       }
> +       v->digest_size = crypto_shash_digestsize(v->tfm);
> +       if ((1 << v->hash_dev_block_bits) < v->digest_size * 2) {
> +               ti->error = "Digest size too big";
> +               r = -EINVAL;
> +               goto bad;
> +       }
> +       v->shash_descsize =
> +               sizeof(struct shash_desc) + crypto_shash_descsize(v->tfm);
> +
> +       v->root_digest = kmalloc(v->digest_size, GFP_KERNEL);
> +       if (!v->root_digest) {
> +               ti->error = "Cannot allocate root digest";
> +               r = -ENOMEM;
> +               goto bad;
> +       }
> +       if (strlen(argv[6]) != v->digest_size * 2 ||
> +           hex2bin(v->root_digest, argv[6], v->digest_size)) {
> +               ti->error = "Invalid root digest";
> +               r = -EINVAL;
> +               goto bad;
> +       }
> +
> +       if (argc > 7 && strcmp(argv[7], "-")) {
> +               v->salt_size = strlen(argv[7]) / 2;
> +               v->salt = kmalloc(v->salt_size, GFP_KERNEL);
> +               if (!v->salt) {
> +                       ti->error = "Cannot allocate salt";
> +                       r = -ENOMEM;
> +                       goto bad;
> +               }
> +               if (strlen(argv[7]) != v->salt_size * 2 ||
> +                   hex2bin(v->salt, argv[7], v->salt_size)) {
> +                       ti->error = "Invalid salt";
> +                       r = -EINVAL;
> +                       goto bad;
> +               }
> +       }
> +
> +       if (argc > 8) {
> +               if (sscanf(argv[8], "%u%c", &num, &dummy) != 1 ||
> +                   !num || (num & (num - 1)) ||
> +                   num < bdev_logical_block_size(v->hash_dev->bdev) ||
> +                   num > INT_MAX) {
> +                       ti->error = "Invalid hash device block size";
> +                       r = -EINVAL;
> +                       goto bad;
> +               }
> +               v->hash_dev_block_bits = ffs(num) - 1;
> +       }
> +
> +       if (hs & ((1 << (v->hash_dev_block_bits - SECTOR_SHIFT)) - 1)) {
> +               ti->error = "Hash start not aligned on block boundary";
> +               r = -EINVAL;
> +               goto bad;
> +       }
> +       v->hash_start = hs >> (v->hash_dev_block_bits - SECTOR_SHIFT);
> +
> +       if (ti->len > i_size_read(v->data_dev->bdev->bd_inode) >> SECTOR_SHIFT) {
> +               ti->error = "Data device si too small";

s/si/is

Should this also check ti->start + ti->len to ensure it isn't reading
off the end or do you just rely on the requests failing?

> +               r = -EINVAL;
> +               goto bad;
> +       }
> +
> +       if (ti->len & ((1 << (v->data_dev_block_bits - SECTOR_SHIFT)) - 1)) {
> +               ti->error = "Data device length is not aligned to block size";
> +               r = -EINVAL;
> +               goto bad;
> +       }
> +
> +       v->data_blocks = ti->len >> (v->data_dev_block_bits - SECTOR_SHIFT);
> +
> +       v->hash_per_block_bits =
> +               fls((1 << v->hash_dev_block_bits) / v->digest_size) - 1;
> +
> +       v->levels = 0;
> +       if (v->data_blocks)
> +               while (v->hash_per_block_bits * v->levels < 64 &&
> +                      (unsigned long long)(v->data_blocks - 1) >>
> +                      (v->hash_per_block_bits * v->levels))
> +                       v->levels++;
> +
> +       if (v->levels > DM_VERITY_MAX_LEVELS) {
> +               ti->error = "Too many tree levels";
> +               r = -E2BIG;
> +               goto bad;
> +       }
> +
> +       hash_position = v->hash_start;
> +       for (i = v->levels - 1; i >= 0; i--) {
> +               sector_t s;
> +               v->hash_level_block[i] = hash_position;
> +               s = verity_position_at_level(v, v->data_blocks, i);
> +               s = (s >> v->hash_per_block_bits) +
> +                   !!(s & ((1 << v->hash_per_block_bits) - 1));
> +               if (hash_position + s < hash_position) {
> +                       ti->error = "Hash device offset overflow";
> +                       r = -E2BIG;
> +                       goto bad;
> +               }
> +               hash_position += s;
> +       }
> +       v->hash_blocks = hash_position;
> +
> +       v->bufio = dm_bufio_client_create(v->hash_dev->bdev,
> +               1 << v->hash_dev_block_bits, 1, sizeof(struct buffer_aux),
> +               dm_bufio_alloc_callback, NULL);
> +       if (IS_ERR(v->bufio)) {
> +               ti->error = "Cannot initialize dm-bufio";
> +               r = PTR_ERR(v->bufio);
> +               v->bufio = NULL;
> +               goto bad;
> +       }
> +
> +       if (dm_bufio_get_device_size(v->bufio) < v->hash_blocks) {
> +               ti->error = "Hash device is too small";
> +               r = -E2BIG;
> +               goto bad;
> +       }
> +
> +       v->io_mempool = mempool_create_kmalloc_pool(DM_VERITY_MEMPOOL_SIZE,
> +         sizeof(struct dm_verity_io) + v->shash_descsize + v->digest_size * 2);
> +       if (!v->io_mempool) {
> +               ti->error = "Cannot allocate io mempool";
> +               r = -ENOMEM;
> +               goto bad;
> +       }
> +
> +       v->vec_mempool = mempool_create_kmalloc_pool(DM_VERITY_MEMPOOL_SIZE,
> +                                       BIO_MAX_PAGES * sizeof(struct bio_vec));
> +       if (!v->vec_mempool) {
> +               ti->error = "Cannot allocate vector mempool";
> +               r = -ENOMEM;
> +               goto bad;
> +       }
> +
> +       /*v->verify_wq = alloc_workqueue("verityd", WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM, 1);*/
> +       /* WQ_UNBOUND greatly improves performance when running on ramdisk */
> +       v->verify_wq = alloc_workqueue("verityd", WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM | WQ_UNBOUND, num_online_cpus());
> +       if (!v->verify_wq) {
> +               ti->error = "Cannot allocate workqueue";
> +               r = -ENOMEM;
> +               goto bad;
> +       }
> +
> +       return 0;
> +
> +bad:
> +       verity_dtr(ti);
> +       return r;
> +}
> +
> +static void verity_dtr(struct dm_target *ti)
> +{
> +       struct dm_verity *v = ti->private;
> +
> +       if (v->verify_wq)
> +               destroy_workqueue(v->verify_wq);
> +       if (v->vec_mempool)
> +               mempool_destroy(v->vec_mempool);
> +       if (v->io_mempool)
> +               mempool_destroy(v->io_mempool);
> +       if (v->bufio)
> +               dm_bufio_client_destroy(v->bufio);
> +       kfree(v->salt);
> +       kfree(v->root_digest);
> +       if (v->tfm)
> +               crypto_free_shash(v->tfm);
> +       kfree(v->alg_name);
> +       if (v->hash_dev)
> +               dm_put_device(ti, v->hash_dev);
> +       if (v->data_dev)
> +               dm_put_device(ti, v->data_dev);
> +       kfree(v);
> +}
> +
> +static struct target_type verity_target = {
> +       .name           = "verity",
> +       .version        = {1, 0, 0},
> +       .module         = THIS_MODULE,
> +       .ctr            = verity_ctr,
> +       .dtr            = verity_dtr,
> +       .map            = verity_map,
> +       .status         = verity_status,
> +       .ioctl          = verity_ioctl,
> +       .merge          = verity_merge,
> +       .iterate_devices = verity_iterate_devices,
> +       .io_hints       = verity_io_hints,
> +};
> +
> +static int __init dm_verity_init(void)
> +{
> +       int r;
> +       r = dm_register_target(&verity_target);
> +       if (r < 0)
> +               DMERR("register failed %d", r);
> +       return r;
> +}
> +
> +static void __exit dm_verity_exit(void)
> +{
> +       dm_unregister_target(&verity_target);
> +}
> +
> +module_init(dm_verity_init);
> +module_exit(dm_verity_exit);
> +
> +MODULE_AUTHOR("Mikulas Patocka <mpatocka@redhat.com>");

As per linux/module.h, I'd welcome additional authors reflecting the
lkml/patch lineage:
MODULE_AUTHOR("Mandeep Baines <msb@chromium.org>");
MODULE_AUTHOR("Will Drewry <wad@chromium.org>");

Regardless, I'll just be happy to see this functionality merge.

> +MODULE_DESCRIPTION(DM_NAME " target for transparent disk integrity checking");
> +MODULE_LICENSE("GPL");
> +
> Index: linux-3.3-rc6-fast/drivers/md/dm-bufio.c

This should be in a separate patch I think.

> ===================================================================
> --- linux-3.3-rc6-fast.orig/drivers/md/dm-bufio.c       2012-03-12 22:43:23.000000000 +0100
> +++ linux-3.3-rc6-fast/drivers/md/dm-bufio.c    2012-03-13 15:41:02.000000000 +0100
[snip]
> @@ -882,6 +888,19 @@ static struct dm_buffer *__bufio_new(str
>
>        b = __find(c, block);
>        if (b) {
> +found_buffer:
> +               if (nf == NF_PREFETCH)
> +                       return NULL;
> +               /*
> +                * Note: it is essential that we don't wait for the buffer to be
> +                * read if dm_bufio_get function is used. Both dm_bufio_get and
> +                * dm_bufio_prefetch can be used in the driver request routine.
> +                * If the user called both dm_bufio_prefetch and dm_bufio_get on
> +                * the same buffer, it would deadlock if we waited.
> +                */
> +               if (nf == NF_GET && unlikely(test_bit(B_READING, &b->state)))
> +                       return NULL;
> +
>                b->hold_count++;

Are these hold_counts safe on architectures with weak memory models?
Should they be atomic_ts?   I haven't looked at them in context, but
based on what I see here they make me a bit nervous.

Thanks for jumping in to the fray!  None of my comments are blocking,
so I believe the following is appropriate (if not
s/Signed-off/Reviewed-by/).

Signed-off-by: Will Drewry <wad@chromium.org>

cheers!
will


* Re: [PATCH] dm: remake of the verity target
  2012-03-13 22:20     ` [PATCH] dm: remake of the verity target Mikulas Patocka
  2012-03-14 21:13       ` Will Drewry
@ 2012-03-14 21:43       ` Mandeep Singh Baines
  2012-03-20 15:41       ` Mandeep Singh Baines
  2 siblings, 0 replies; 34+ messages in thread
From: Mandeep Singh Baines @ 2012-03-14 21:43 UTC (permalink / raw)
  To: Mikulas Patocka
  Cc: Mandeep Singh Baines, linux-kernel, dm-devel, Alasdair G Kergon,
	Will Drewry, Elly Jones, Milan Broz, Olof Johansson,
	Steffen Klassert, Andrew Morton

Mikulas Patocka (mpatocka@redhat.com) wrote:
> Hi
> 
> > Hi Mikulas,
> > 
> > This is some nice work. I like that you've been able to abstract a lot
> > of the hash buffer management with dm-bufio. You got rid of the I/O queue.
> > I've been meaning to do that for a while. The prefetch is also nice.
> > We planned to do this but I decided to not do it now in order to get the
> > base functionality in:
> > 
> > http://crosbug.com/25441
> > 
> > However, there are some things that I don't like. I don't like comments
> > either but you have none. You also removed our documentation. You are
> 
> I added some comments. As for documentation, it's OK to use documentation 

Thanks.

> from your patch because the on-disk format and the target arguments are 
> the same (with an enhancement that my code supports different data and 
> metadata block size and it has a variable-length salt).
> 

Would you mind adding the documentation as part 2 of your series?

> > allocated a complete shash_desc per I/O. We only allocate one per CPU.
> 
> The hash of a 4k block takes 174000 cycles, so trying to optimize away a
> memory latency of about 250 cycles (around 0.15%) doesn't make much sense.
> 
> I actually observed better performance using verity on ramdisk with 
> workqueue unbound to specific CPUs. The reason is that the ramdisk bio 
> completion routine is always run on the same CPU (that one that submitted 
> the request), so with bound workqueue, everything was executing on one 
> CPU. With unbound workqueue, I got parallelism.
> 

I guess it depends on whether you're CPU-bound or I/O-bound. If you're
CPU-bound and the scheduler is doing a good job of keeping all the
cores busy, then you're just adding extra cache misses. But if you're
not CPU-bound, then you can parallelize the hashing. Anyway, it's
arguable which is better without data from real workloads.

At some point, it would be interesting to compare ChromeOS boot performance
with both approaches.


> > We short-circuit the hash at any level. Your implementation can only
> > short-circuit at the lowest level.
> 
> It short-circuits hash at all levels. If the function 
> "verity_verify_level" finds out that "aux->hash_verified" is non-zero, it 
> doesn't do any hashing, it just copies the hash for the lower level. My 
> implementation walks the tree from the top to the bottom, but it doesn't 
> do hash verification if the same block has been verified before.
> 

I agree. Short-circuiting won't give an extra benefit. For some reason,
I thought you might be re-verifying a node but that's not the case.

> All this tree-walking from the root to the bottom is 50-times faster than 
> the actual hashing of the data block (I measured that), so there's not 
> much point in trying to optimize it. I did a simple optimization (don't 
> walk the tree if the lowest block is already verified) and I don't need to 
> do anything complicated given the fact that it can't improve more than by 
> 2%.
> 
> > I'd like to propose that we get the version we sent upstream and then work
> > together on adding some of your enhancements incrementally.
> 
> If you add dm-bufio support, you end up deleting the majority of the original 
> code anyway. That's why I wrote it from scratch and that's why I didn't 
> attempt to morph your code.
> 
> It's simpler to write the code from scratch and it is also less bug-prone. 
> 
> > Other than
> > the changes we've made to cleanup for upstreaming, the version I
> > submitted is the code we are using in production.
> > 
> > I'm happy to add prefetch now if that is required for merging.
> > 
> > What do you think?
> > 
> > Regards,
> > Mandeep
> 
> This is the version with comments added:
> 
> Mikulas
> 
> ----
> 
> Remake of the google dm-verity patch.
> 
> Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
> 

Signed-off-by: Mandeep Singh Baines <msb@chromium.org>

Nice work.

I have a few nits but would be happy to see this merged. It doesn't
look like the version I worked on will ever get merged; maybe you'll
have better luck :)

> ---
>  drivers/md/Kconfig     |   17 
>  drivers/md/Makefile    |    1 
>  drivers/md/dm-verity.c |  851 +++++++++++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 869 insertions(+)
> 
> Index: linux-3.3-rc6-fast/drivers/md/Kconfig
> ===================================================================
> --- linux-3.3-rc6-fast.orig/drivers/md/Kconfig	2012-03-13 21:46:03.000000000 +0100
> +++ linux-3.3-rc6-fast/drivers/md/Kconfig	2012-03-13 21:46:05.000000000 +0100
> @@ -404,4 +404,21 @@ config DM_VERITY2
>  
>            If unsure, say N.
>  
> +config DM_VERITY
> +	tristate "Verity target support"
> +	depends on BLK_DEV_DM
> +	select CRYPTO
> +	select CRYPTO_HASH
> +	select DM_BUFIO
> +	---help---
> +	  This device-mapper target allows you to create a device that
> +	  transparently integrity checks the data on it. You'll need to
> +	  activate the digests you're going to use in the cryptoapi
> +	  configuration.
> +
> +	  To compile this code as a module, choose M here: the module will
> +	  be called dm-verity.
> +
> +	  If unsure, say N.
> +
>  endif # MD
> Index: linux-3.3-rc6-fast/drivers/md/Makefile
> ===================================================================
> --- linux-3.3-rc6-fast.orig/drivers/md/Makefile	2012-03-13 21:46:03.000000000 +0100
> +++ linux-3.3-rc6-fast/drivers/md/Makefile	2012-03-13 21:46:05.000000000 +0100
> @@ -29,6 +29,7 @@ obj-$(CONFIG_MD_FAULTY)		+= faulty.o
>  obj-$(CONFIG_BLK_DEV_MD)	+= md-mod.o
>  obj-$(CONFIG_BLK_DEV_DM)	+= dm-mod.o
>  obj-$(CONFIG_DM_BUFIO)		+= dm-bufio.o
> +obj-$(CONFIG_DM_VERITY)		+= dm-verity.o
>  obj-$(CONFIG_DM_CRYPT)		+= dm-crypt.o
>  obj-$(CONFIG_DM_DELAY)		+= dm-delay.o
>  obj-$(CONFIG_DM_FLAKEY)		+= dm-flakey.o
> Index: linux-3.3-rc6-fast/drivers/md/dm-verity.c
> ===================================================================
> --- /dev/null	1970-01-01 00:00:00.000000000 +0000
> +++ linux-3.3-rc6-fast/drivers/md/dm-verity.c	2012-03-13 22:02:05.000000000 +0100
> @@ -0,0 +1,851 @@
> +/*
> + * Copyright (C) 2012 Red Hat, Inc.
> + *
> + * Author: Mikulas Patocka <mpatocka@redhat.com>
> + *
> + * Based on Chromium dm-verity driver (C) 2011 The Chromium OS Authors
> + *
> + * This file is released under the GPLv2.
> + *
> + * Device mapper target parameters:
> + *	<version>	0
> + *	<data device>
> + *	<hash device>
> + *	<hash start>	(typically 0)
> + *	<block size>	(typically 4096)
> + *	<algorithm>
> + *	<digest>
> + *	optional parameters:
> + *		<salt> (should have 32 bytes for compatibility with Google code)
> + *		<hash block size> (by default it is the same as data block size)
> + *
> + * In the file "/sys/module/dm_verity/parameters/prefetch_cluster" you can set
> + * the default prefetch value. Data are read in "prefetch_cluster" chunks from
> + * the hash device. Prefetching greatly improves performance when data and hash
> + * are on the same disk, on different partitions.
> + */
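
As a usage illustration (hypothetical: the device names, length and the
digest/salt placeholders below are mine, not from this patch), a table
for this target could be loaded with:

  dmsetup create vroot --readonly --table \
    "0 409600 verity 0 /dev/sda1 /dev/sda2 0 4096 sha256 <digest-hex> <salt-hex>"

where 409600 is the data device length in 512-byte sectors, the first 0
after "verity" is the version, the second is the hash start (in 512-byte
sectors), and 4096 is the block size.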
> +
> +#include <linux/module.h>
> +#include <linux/device-mapper.h>
> +#include <crypto/hash.h>
> +#include "dm-bufio.h"
> +
> +#define DM_MSG_PREFIX			"verity"
> +
> +#define DM_VERITY_IO_VEC_INLINE		16
> +#define DM_VERITY_MEMPOOL_SIZE		4
> +#define DM_VERITY_DEFAULT_PREFETCH_SIZE	262144
> +
> +#define DM_VERITY_MAX_LEVELS		63
> +
> +static unsigned prefetch_cluster = DM_VERITY_DEFAULT_PREFETCH_SIZE;
> +
> +module_param_named(prefetch_cluster, prefetch_cluster, uint, S_IRUGO | S_IWUSR);
> +
> +struct dm_verity {
> +	struct dm_dev *data_dev;
> +	struct dm_dev *hash_dev;
> +	struct dm_target *ti;
> +	struct dm_bufio_client *bufio;
> +	char *alg_name;
> +	struct crypto_shash *tfm;
> +	u8 *root_digest;	/* digest of the root block */
> +	u8 *salt;		/* salt, its size is salt_size */
> +	unsigned salt_size;
> +	sector_t data_start;	/* data offset in 512-byte sectors */
> +	sector_t hash_start;	/* hash start in blocks */
> +	sector_t data_blocks;	/* the number of data blocks */
> +	sector_t hash_blocks;	/* the number of hash blocks */
> +	unsigned char data_dev_block_bits;	/* log2(data blocksize) */
> +	unsigned char hash_dev_block_bits;	/* log2(hash blocksize) */
> +	unsigned char hash_per_block_bits;	/* log2(hashes in hash block) */
> +	unsigned char levels;	/* the number of tree levels */
> +	unsigned digest_size;	/* digest size for the current hash algorithm */
> +	unsigned shash_descsize;/* the size of temporary space for crypto */
> +
> +	mempool_t *io_mempool;	/* mempool of struct dm_verity_io */
> +	mempool_t *vec_mempool;	/* mempool of bio vector */
> +

Since there are no writes, do we even need a mempool? I was thinking of
removing all the mempools. I can't think of a case where a mempool helps
for a read-only device. There is no reading under memory pressure.

> +	struct workqueue_struct *verify_wq;
> +
> +	/* starting blocks for each tree level. 0 is the lowest level. */
> +	sector_t hash_level_block[DM_VERITY_MAX_LEVELS];
> +};
> +
> +struct dm_verity_io {
> +	struct dm_verity *v;
> +	struct bio *bio;
> +
> +	/* original values of bio->bi_end_io and bio->bi_private */
> +	bio_end_io_t *orig_bi_end_io;
> +	void *orig_bi_private;
> +
> +	sector_t block;
> +	unsigned n_blocks;
> +
> +	/* saved bio vector */
> +	struct bio_vec *io_vec;
> +	unsigned io_vec_size;
> +
> +	struct work_struct work;
> +
> +	/* a space for short vectors; longer vectors are allocated separately */
> +	struct bio_vec io_vec_inline[DM_VERITY_IO_VEC_INLINE];
> +
> +	/* variable-size fields, accessible with functions
> +		io_hash_desc, io_real_digest, io_want_digest */
> +	/* u8 hash_desc[crypto_shash_descsize(v->tfm)]; */
> +	/* u8 real_digest[v->digest_size]; */
> +	/* u8 want_digest[v->digest_size]; */

Nit: commented-out code should be removed.

> +};
> +
> +static struct shash_desc *io_hash_desc(struct dm_verity *v, struct dm_verity_io *io)
> +{
> +	return (struct shash_desc *)(io + 1);
> +}
> +
> +static u8 *io_real_digest(struct dm_verity *v, struct dm_verity_io *io)
> +{
> +	return (u8 *)(io + 1) + v->shash_descsize;
> +}
> +
> +static u8 *io_want_digest(struct dm_verity *v, struct dm_verity_io *io)
> +{
> +	return (u8 *)(io + 1) + v->shash_descsize + v->digest_size;
> +}
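
For reference, these three helpers assume each per-io allocation is laid
out as follows, matching the element size the constructor passes to the
io mempool (sizeof(struct dm_verity_io) + v->shash_descsize +
v->digest_size * 2):

  [ struct dm_verity_io ][ shash_desc + state: shash_descsize ][ real_digest: digest_size ][ want_digest: digest_size ]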
> +
> +/*
> + * Auxiliary structure appended to each dm-bufio buffer. If the value
> + * hash_verified is nonzero, hash of the block has been verified.
> + *
> + * There is no lock around this value, a race condition can at worst cause
> + * multiple processes to verify the hash of the same buffer simultaneously.
> + * This condition is harmless, so we don't need locking.
> + */
> +struct buffer_aux {
> +	int hash_verified;
> +};
> +
> +/*
> + * Initialize struct buffer_aux for a freshly created buffer.
> + */
> +static void dm_bufio_alloc_callback(struct dm_buffer *buf)
> +{
> +	struct buffer_aux *aux = dm_bufio_get_aux_data(buf);
> +	aux->hash_verified = 0;
> +}
> +
> +/*
> + * Translate input sector number to the sector number on the target device.
> + */
> +static sector_t verity_map_sector(struct dm_verity *v, sector_t bi_sector)
> +{
> +	return v->data_start + dm_target_offset(v->ti, bi_sector);
> +}
> +
> +/*
> + * Return hash position of a specified block at a specified tree level
> + * (0 is the lowest level).
> + * The lowest "hash_per_block_bits"-bits of the result denote hash position
> + * inside a hash block. The remaining bits denote the location of the hash block.
> + */
> +static sector_t verity_position_at_level(struct dm_verity *v, sector_t block,
> +					 int level)
> +{
> +	return block >> (level * v->hash_per_block_bits);
> +}
> +
> +static void verity_hash_at_level(struct dm_verity *v, sector_t block, int level,
> +				 sector_t *hash_block, unsigned *offset)
> +{
> +	sector_t position = verity_position_at_level(v, block, level);
> +
> +	*hash_block = v->hash_level_block[level] + (position >> v->hash_per_block_bits);
> +	if (offset)
> +		*offset = v->digest_size * (position & ((1 << v->hash_per_block_bits) - 1));
> +}
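
As a worked example: with 4096-byte hash blocks and a 32-byte digest
(sha256), hash_per_block_bits = fls(4096 / 32) - 1 = 7, i.e. 128 hashes
per hash block. For data block 1000 at level 0, position = 1000, so its
hash sits in block v->hash_level_block[0] + (1000 >> 7) =
v->hash_level_block[0] + 7, at byte offset 32 * (1000 & 127) = 3328.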
> +
> +/*
> + * Verify hash of a metadata block pertaining to the specified data block
> + * ("block" argument) at a specified level ("level" argument).
> + *
> + * On successful return, io_want_digest(v, io) contains the hash value for
> + * a lower tree level or for the data block (if we're at the lowest level).
> + *
> + * If "skip_unverified" is true, an unverified buffer is skipped and 1 is
> + * returned. If "skip_unverified" is false, an unverified buffer is hashed and
> + * verified against the current value of io_want_digest(v, io).
> + */
> +static int verity_verify_level(struct dm_verity_io *io, sector_t block,
> +			       int level, bool skip_unverified)
> +{
> +	struct dm_verity *v = io->v;
> +	struct dm_buffer *buf;
> +	struct buffer_aux *aux;
> +	u8 *data;
> +	int r;
> +	sector_t hash_block;
> +	unsigned offset;
> +
> +	verity_hash_at_level(v, block, level, &hash_block, &offset);
> +
> +	data = dm_bufio_read(v->bufio, hash_block, &buf);
> +	if (unlikely(IS_ERR(data)))
> +		return PTR_ERR(data);
> +
> +	aux = dm_bufio_get_aux_data(buf);
> +
> +	if (!aux->hash_verified) {
> +		struct shash_desc *desc;
> +		u8 *result;
> +
> +		if (skip_unverified) {
> +			r = 1;
> +			goto release_ret_r;
> +		}
> +
> +		desc = io_hash_desc(v, io);
> +		desc->tfm = v->tfm;
> +		desc->flags = CRYPTO_TFM_REQ_MAY_SLEEP;
> +		r = crypto_shash_init(desc);
> +		if (r < 0) {
> +			DMERR("crypto_shash_init failed: %d", r);
> +			goto release_ret_r;
> +		}
> +
> +		r = crypto_shash_update(desc, data, 1 << v->hash_dev_block_bits);
> +		if (r < 0) {
> +			DMERR("crypto_shash_update failed: %d", r);
> +			goto release_ret_r;
> +		}
> +
> +		r = crypto_shash_update(desc, v->salt, v->salt_size);
> +		if (r < 0) {
> +			DMERR("crypto_shash_update failed: %d", r);
> +			goto release_ret_r;
> +		}
> +
> +		result = io_real_digest(v, io);
> +		r = crypto_shash_final(desc, result);
> +		if (r < 0) {
> +			DMERR("crypto_shash_final failed: %d", r);
> +			goto release_ret_r;
> +		}
> +		if (unlikely(memcmp(result, io_want_digest(v, io), v->digest_size))) {
> +			DMERR_LIMIT("metadata block %llu is corrupted",
> +				(unsigned long long)hash_block);
> +			r = -EIO;
> +			goto release_ret_r;
> +		} else
> +			aux->hash_verified = 1;
> +	}
> +
> +	data += offset;
> +
> +	memcpy(io_want_digest(v, io), data, v->digest_size);
> +
> +	dm_bufio_release(buf);
> +	return 0;
> +
> +release_ret_r:
> +	dm_bufio_release(buf);
> +	return r;
> +}
> +
> +/*
> + * Verify one "dm_verity_io" structure.
> + */
> +static int verity_verify_io(struct dm_verity_io *io)
> +{
> +	struct dm_verity *v = io->v;
> +	unsigned b;
> +	int i;
> +	unsigned vector = 0, offset = 0;
> +	for (b = 0; b < io->n_blocks; b++) {
> +		struct shash_desc *desc;
> +		u8 *result;
> +		int r;
> +		unsigned todo;
> +
> +		if (likely(v->levels)) {
> +			/*
> +			 * First, we try to get the requested hash for
> +			 * the current block. If the hash block itself is
> +			 * verified, zero is returned. If it isn't, this
> +			 * function returns 1 and we fall back to whole
> +			 * chain verification.
> +			 */
> +			int r = verity_verify_level(io, io->block + b, 0, true);
> +			if (likely(!r))
> +				goto test_block_hash;
> +			if (r < 0)
> +				return r;
> +		}
> +
> +		memcpy(io_want_digest(v, io), v->root_digest, v->digest_size);
> +
> +		for (i = v->levels - 1; i >= 0; i--) {
> +			int r = verity_verify_level(io, io->block + b, i, false);
> +			if (unlikely(r))
> +				return r;
> +		}
> +
> +test_block_hash:
> +		desc = io_hash_desc(v, io);
> +		desc->tfm = v->tfm;
> +		desc->flags = CRYPTO_TFM_REQ_MAY_SLEEP;
> +		r = crypto_shash_init(desc);
> +		if (r < 0) {
> +			DMERR("crypto_shash_init failed: %d", r);
> +			return r;
> +		}
> +
> +		todo = 1 << v->data_dev_block_bits;
> +		do {
> +			struct bio_vec *bv;
> +			u8 *page;
> +			unsigned len;
> +
> +			BUG_ON(vector >= io->io_vec_size);
> +			bv = &io->io_vec[vector];
> +			page = kmap_atomic(bv->bv_page, KM_USER0);
> +			len = bv->bv_len - offset;
> +			if (likely(len >= todo))
> +				len = todo;
> +			r = crypto_shash_update(desc,
> +					page + bv->bv_offset + offset, len);
> +			kunmap_atomic(page, KM_USER0);
> +			if (r < 0) {
> +				DMERR("crypto_shash_update failed: %d", r);
> +				return r;
> +			}
> +			offset += len;
> +			if (likely(offset == bv->bv_len)) {
> +				offset = 0;
> +				vector++;
> +			}
> +			todo -= len;
> +		} while (todo);
> +
> +		r = crypto_shash_update(desc, v->salt, v->salt_size);
> +		if (r < 0) {
> +			DMERR("crypto_shash_update failed: %d", r);
> +			return r;
> +		}
> +
> +		result = io_real_digest(v, io);
> +		r = crypto_shash_final(desc, result);
> +		if (r < 0) {
> +			DMERR("crypto_shash_final failed: %d", r);
> +			return r;
> +		}
> +		if (unlikely(memcmp(result, io_want_digest(v, io), v->digest_size))) {
> +			DMERR_LIMIT("data block %llu is corrupted",
> +				(unsigned long long)(io->block + b));
> +			return -EIO;
> +		}
> +	}
> +	BUG_ON(vector != io->io_vec_size);
> +	BUG_ON(offset);
> +	return 0;
> +}
> +
> +/*
> + * End one "io" structure with a given error.
> + */
> +static void verity_finish_io(struct dm_verity_io *io, int error)
> +{
> +	struct bio *bio = io->bio;
> +	struct dm_verity *v = io->v;
> +
> +	bio->bi_end_io = io->orig_bi_end_io;
> +	bio->bi_private = io->orig_bi_private;
> +
> +	if (io->io_vec != io->io_vec_inline)
> +		mempool_free(io->io_vec, v->vec_mempool);
> +	mempool_free(io, v->io_mempool);
> +
> +	bio_endio(bio, error);
> +}
> +
> +static void verity_work(struct work_struct *w)
> +{
> +	struct dm_verity_io *io = container_of(w, struct dm_verity_io, work);
> +
> +	verity_finish_io(io, verity_verify_io(io));
> +}
> +
> +static void verity_end_io(struct bio *bio, int error)
> +{
> +	struct dm_verity_io *io = bio->bi_private;
> +	if (error) {
> +		verity_finish_io(io, error);
> +		return;
> +	}
> +
> +	INIT_WORK(&io->work, verity_work);
> +	queue_work(io->v->verify_wq, &io->work);
> +}
> +
> +/*
> + * Prefetch buffers for the specified io.
> + * The root buffer is not prefetched, it is assumed that it will be cached
> + * all the time.
> + */
> +static void verity_prefetch_io(struct dm_verity *v, struct dm_verity_io *io)
> +{
> +	int i;
> +	for (i = v->levels - 2; i >= 0; i--) {
> +		sector_t hash_block_start;
> +		sector_t hash_block_end;
> +		verity_hash_at_level(v, io->block, i, &hash_block_start, NULL);
> +		verity_hash_at_level(v, io->block + io->n_blocks - 1, i, &hash_block_end, NULL);
> +		if (!i) {
> +			unsigned cluster = prefetch_cluster;
> +	 /* barrier to stop GCC from re-reading prefetch_cluster again */
> +			barrier();
> +			cluster >>= v->data_dev_block_bits;
> +			if (unlikely(!cluster))
> +				goto no_prefetch_cluster;
> +			if (unlikely(cluster & (cluster - 1)))
> +				cluster = 1 << (fls(cluster) - 1);
> +
> +			hash_block_start &= ~(sector_t)(cluster - 1);
> +			hash_block_end |= cluster - 1;
> +			if (unlikely(hash_block_end >= v->hash_blocks))
> +				hash_block_end = v->hash_blocks - 1;
> +		}
> +no_prefetch_cluster:
> +		dm_bufio_prefetch(v->bufio, hash_block_start,
> +					hash_block_end - hash_block_start + 1);
> +	}
> +}
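
With the default prefetch_cluster of 262144 bytes and 4096-byte blocks,
cluster = 262144 >> 12 = 64 blocks (already a power of two), so the
level-0 range is widened to a 64-block-aligned window: hash_block_start
is rounded down and hash_block_end rounded up to the cluster boundary,
clamped to the last hash block.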
> +
> +/*
> + * Bio map function. It allocates dm_verity_io structure and bio vector and
> + * fills them. Then it issues prefetches and the I/O.
> + */
> +static int verity_map(struct dm_target *ti, struct bio *bio,
> +		      union map_info *map_context)
> +{
> +	struct dm_verity *v = ti->private;
> +	struct dm_verity_io *io;
> +
> +	if (((unsigned)bio->bi_sector | bio_sectors(bio)) &
> +	    ((1 << (v->data_dev_block_bits - SECTOR_SHIFT)) - 1)) {
> +		DMERR_LIMIT("unaligned io");
> +		return -EIO;
> +	}
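
With 4096-byte data blocks (data_dev_block_bits = 12, SECTOR_SHIFT = 9)
the mask is (1 << 3) - 1 = 7, so ORing the start sector with the sector
count and masking rejects, in a single test, any bio whose start or
length is not a multiple of 8 sectors (one data block).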
> +
> +	if ((bio->bi_sector + bio_sectors(bio)) >>
> +	    (v->data_dev_block_bits - SECTOR_SHIFT) > v->data_blocks) {
> +		DMERR_LIMIT("io out of range");
> +		return -EIO;
> +	}
> +
> +	if (bio_data_dir(bio) == WRITE)
> +		return -EIO;
> +
> +	io = mempool_alloc(v->io_mempool, GFP_NOIO);
> +	io->v = v;
> +	io->bio = bio;
> +	io->orig_bi_end_io = bio->bi_end_io;
> +	io->orig_bi_private = bio->bi_private;
> +	io->block = bio->bi_sector >> (v->data_dev_block_bits - SECTOR_SHIFT);
> +	io->n_blocks = bio->bi_size >> v->data_dev_block_bits;
> +
> +	bio->bi_end_io = verity_end_io;
> +	bio->bi_private = io;
> +	bio->bi_bdev = v->data_dev->bdev;
> +	bio->bi_sector = verity_map_sector(v, bio->bi_sector);
> +
> +	io->io_vec_size = bio->bi_vcnt - bio->bi_idx;
> +	if (io->io_vec_size < DM_VERITY_IO_VEC_INLINE)
> +		io->io_vec = io->io_vec_inline;
> +	else
> +		io->io_vec = mempool_alloc(v->vec_mempool, GFP_NOIO);
> +	memcpy(io->io_vec, bio_iovec(bio),
> +	       io->io_vec_size * sizeof(struct bio_vec));
> +
> +	verity_prefetch_io(v, io);
> +
> +	generic_make_request(bio);
> +
> +	return DM_MAPIO_SUBMITTED;
> +}
> +
> +static int verity_status(struct dm_target *ti, status_type_t type,
> +			 char *result, unsigned maxlen)
> +{
> +	struct dm_verity *v = ti->private;
> +	unsigned sz = 0;
> +	unsigned x;
> +
> +	switch (type) {
> +	case STATUSTYPE_INFO:
> +		result[0] = 0;
> +		break;
> +	case STATUSTYPE_TABLE:
> +		DMEMIT("%u %s %s %llu %u %s ",
> +			0,
> +			v->data_dev->name,
> +			v->hash_dev->name,
> +			(unsigned long long)v->hash_start << (v->hash_dev_block_bits - SECTOR_SHIFT),
> +			1 << v->data_dev_block_bits,
> +			v->alg_name
> +			);
> +		for (x = 0; x < v->digest_size; x++)
> +			DMEMIT("%02x", v->root_digest[x]);
> +		DMEMIT(" ");
> +		if (!v->salt_size)
> +			DMEMIT("-");
> +		else
> +			for (x = 0; x < v->salt_size; x++)
> +				DMEMIT("%02x", v->salt[x]);
> +		if (v->data_dev_block_bits != v->hash_dev_block_bits)
> +			DMEMIT(" %u", 1 << v->hash_dev_block_bits);
> +		break;
> +	}
> +	return 0;
> +}
> +
> +static int verity_ioctl(struct dm_target *ti, unsigned cmd,
> +			unsigned long arg)
> +{
> +	struct dm_verity *v = ti->private;
> +	int r = 0;
> +
> +	if (v->data_start ||
> +	    ti->len != i_size_read(v->data_dev->bdev->bd_inode) >> SECTOR_SHIFT)
> +		r = scsi_verify_blk_ioctl(NULL, cmd);
> +
> +	return r ? : __blkdev_driver_ioctl(v->data_dev->bdev, v->data_dev->mode,
> +				     cmd, arg);
> +}
> +
> +static int verity_merge(struct dm_target *ti, struct bvec_merge_data *bvm,
> +			struct bio_vec *biovec, int max_size)
> +{
> +	struct dm_verity *v = ti->private;
> +	struct request_queue *q = bdev_get_queue(v->data_dev->bdev);
> +
> +	if (!q->merge_bvec_fn)
> +		return max_size;
> +
> +	bvm->bi_bdev = v->data_dev->bdev;
> +	bvm->bi_sector = verity_map_sector(v, bvm->bi_sector);
> +
> +	return min(max_size, q->merge_bvec_fn(q, bvm, biovec));
> +}
> +
> +static int verity_iterate_devices(struct dm_target *ti,
> +				  iterate_devices_callout_fn fn, void *data)
> +{
> +	struct dm_verity *v = ti->private;
> +	return fn(ti, v->data_dev, v->data_start, ti->len, data);
> +}
> +
> +static void verity_io_hints(struct dm_target *ti, struct queue_limits *limits)
> +{
> +	struct dm_verity *v = ti->private;
> +
> +	if (limits->logical_block_size < 1 << v->data_dev_block_bits)
> +		limits->logical_block_size = 1 << v->data_dev_block_bits;
> +	if (limits->physical_block_size < 1 << v->data_dev_block_bits)
> +		limits->physical_block_size = 1 << v->data_dev_block_bits;
> +	blk_limits_io_min(limits, limits->logical_block_size);
> +}
> +
> +static void verity_dtr(struct dm_target *ti);
> +
> +static int verity_ctr(struct dm_target *ti, unsigned argc, char **argv)
> +{
> +	struct dm_verity *v;
> +	unsigned num;
> +	unsigned long long hs;
> +	int r;
> +	int i;
> +	sector_t hash_position;
> +	char dummy;
> +
> +	v = kzalloc(sizeof(struct dm_verity), GFP_KERNEL);
> +	if (!v) {
> +		ti->error = "Cannot allocate verity structure";
> +		return -ENOMEM;
> +	}
> +	ti->private = v;
> +	v->ti = ti;
> +
> +	if ((dm_table_get_mode(ti->table) & ~FMODE_READ) != 0) {
> +		ti->error = "Device must be readonly";
> +		r = -EINVAL;
> +		goto bad;
> +	}
> +
> +	if (argc < 7) {
> +		ti->error = "Not enough arguments";
> +		r = -EINVAL;
> +		goto bad;
> +	}
> +
> +	if (sscanf(argv[0], "%d%c", &num, &dummy) != 1 ||
> +	    num != 0) {
> +		ti->error = "Invalid version";
> +		r = -EINVAL;
> +		goto bad;
> +	}
> +
> +	r = dm_get_device(ti, argv[1], FMODE_READ, &v->data_dev);
> +	if (r) {
> +		ti->error = "Data device lookup failed";
> +		goto bad;
> +	}
> +
> +	r = dm_get_device(ti, argv[2], FMODE_READ, &v->hash_dev);
> +	if (r) {
> +		ti->error = "Hash device lookup failed";
> +		goto bad;
> +	}
> +
> +	if (sscanf(argv[3], "%llu%c", &hs, &dummy) != 1 ||
> +	    hs != (sector_t)hs) {
> +		ti->error = "Invalid hash start";
> +		r = -EINVAL;
> +		goto bad;
> +	}
> +
> +	if (sscanf(argv[4], "%u%c", &num, &dummy) != 1 ||
> +	    !num || (num & (num - 1)) ||
> +	    num < bdev_logical_block_size(v->data_dev->bdev) ||
> +	    num > PAGE_SIZE) {
> +		ti->error = "Invalid data device block size";
> +		r = -EINVAL;
> +		goto bad;
> +	}
> +	v->data_dev_block_bits = ffs(num) - 1;
> +	v->hash_dev_block_bits = ffs(num) - 1;
> +
> +	v->alg_name = kstrdup(argv[5], GFP_KERNEL);
> +	if (!v->alg_name) {
> +		ti->error = "Cannot allocate algorithm name";
> +		r = -ENOMEM;
> +		goto bad;
> +	}
> +
> +	v->tfm = crypto_alloc_shash(v->alg_name, 0, 0);
> +	if (IS_ERR(v->tfm)) {
> +		ti->error = "Cannot initialize hash function";
> +		r = PTR_ERR(v->tfm);
> +		v->tfm = NULL;
> +		goto bad;
> +	}
> +	v->digest_size = crypto_shash_digestsize(v->tfm);
> +	if ((1 << v->hash_dev_block_bits) < v->digest_size * 2) {
> +		ti->error = "Digest size too big";
> +		r = -EINVAL;
> +		goto bad;
> +	}
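
Note: this check guarantees (for the block size chosen so far) that at
least two digests fit into one hash block, so hash_per_block_bits
computed further down is at least 1 and each tree level is strictly
smaller than the level below it.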
> +	v->shash_descsize =
> +		sizeof(struct shash_desc) + crypto_shash_descsize(v->tfm);
> +
> +	v->root_digest = kmalloc(v->digest_size, GFP_KERNEL);
> +	if (!v->root_digest) {
> +		ti->error = "Cannot allocate root digest";
> +		r = -ENOMEM;
> +		goto bad;
> +	}
> +	if (strlen(argv[6]) != v->digest_size * 2 ||
> +	    hex2bin(v->root_digest, argv[6], v->digest_size)) {
> +		ti->error = "Invalid root digest";
> +		r = -EINVAL;
> +		goto bad;
> +	}
> +
> +	if (argc > 7 && strcmp(argv[7], "-")) {
> +		v->salt_size = strlen(argv[7]) / 2;
> +		v->salt = kmalloc(v->salt_size, GFP_KERNEL);
> +		if (!v->salt) {
> +			ti->error = "Cannot allocate salt";
> +			r = -ENOMEM;
> +			goto bad;
> +		}
> +		if (strlen(argv[7]) != v->salt_size * 2 ||
> +		    hex2bin(v->salt, argv[7], v->salt_size)) {
> +			ti->error = "Invalid salt";
> +			r = -EINVAL;
> +			goto bad;
> +		}
> +	}
> +
> +	if (argc > 8) {
> +		if (sscanf(argv[8], "%u%c", &num, &dummy) != 1 ||
> +		    !num || (num & (num - 1)) ||
> +		    num < bdev_logical_block_size(v->hash_dev->bdev) ||
> +		    num > INT_MAX) {
> +			ti->error = "Invalid hash device block size";
> +			r = -EINVAL;
> +			goto bad;
> +		}
> +		v->hash_dev_block_bits = ffs(num) - 1;
> +	}
> +
> +	if (hs & ((1 << (v->hash_dev_block_bits - SECTOR_SHIFT)) - 1)) {
> +		ti->error = "Hash start not aligned on block boundary";
> +		r = -EINVAL;
> +		goto bad;
> +	}
> +	v->hash_start = hs >> (v->hash_dev_block_bits - SECTOR_SHIFT);
> +
> +	if (ti->len > i_size_read(v->data_dev->bdev->bd_inode) >> SECTOR_SHIFT) {
> +		ti->error = "Data device si too small";
> +		r = -EINVAL;
> +		goto bad;
> +	}
> +
> +	if (ti->len & ((1 << (v->data_dev_block_bits - SECTOR_SHIFT)) - 1)) {
> +		ti->error = "Data device length is not aligned to block size";
> +		r = -EINVAL;
> +		goto bad;
> +	}
> +
> +	v->data_blocks = ti->len >> (v->data_dev_block_bits - SECTOR_SHIFT);
> +
> +	v->hash_per_block_bits =
> +		fls((1 << v->hash_dev_block_bits) / v->digest_size) - 1;
> +
> +	v->levels = 0;
> +	if (v->data_blocks)
> +		while (v->hash_per_block_bits * v->levels < 64 &&
> +		       (unsigned long long)(v->data_blocks - 1) >>
> +		       (v->hash_per_block_bits * v->levels))
> +			v->levels++;
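
Worked example: a 1 GiB data device with 4096-byte blocks has
data_blocks = 262144. With hash_per_block_bits = 7 the loop evaluates
(262143 >> (7 * levels)): 262143 >> 7 = 2047, >> 14 = 15, >> 21 = 0,
giving v->levels = 3.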
> +
> +	if (v->levels > DM_VERITY_MAX_LEVELS) {
> +		ti->error = "Too many tree levels";
> +		r = -E2BIG;
> +		goto bad;
> +	}
> +
> +	hash_position = v->hash_start;
> +	for (i = v->levels - 1; i >= 0; i--) {
> +		sector_t s;
> +		v->hash_level_block[i] = hash_position;
> +		s = verity_position_at_level(v, v->data_blocks, i);
> +		s = (s >> v->hash_per_block_bits) +
> +		    !!(s & ((1 << v->hash_per_block_bits) - 1));
> +		if (hash_position + s < hash_position) {
> +			ti->error = "Hash device offset overflow";
> +			r = -E2BIG;
> +			goto bad;
> +		}
> +		hash_position += s;
> +	}
> +	v->hash_blocks = hash_position;
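
Continuing that example, the loop above lays the levels out top-down
from hash_start: level 2 needs ceil((262144 >> 14) / 128) = 1 hash
block, level 1 needs ceil((262144 >> 7) / 128) = 16, and level 0 needs
ceil(262144 / 128) = 2048, so hash_blocks ends up at hash_start + 2065.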
> +
> +	v->bufio = dm_bufio_client_create(v->hash_dev->bdev,
> +		1 << v->hash_dev_block_bits, 1, sizeof(struct buffer_aux),
> +		dm_bufio_alloc_callback, NULL);
> +	if (IS_ERR(v->bufio)) {
> +		ti->error = "Cannot initialize dm-bufio";
> +		r = PTR_ERR(v->bufio);
> +		v->bufio = NULL;
> +		goto bad;
> +	}
> +
> +	if (dm_bufio_get_device_size(v->bufio) < v->hash_blocks) {
> +		ti->error = "Hash device is too small";
> +		r = -E2BIG;
> +		goto bad;
> +	}
> +
> +	v->io_mempool = mempool_create_kmalloc_pool(DM_VERITY_MEMPOOL_SIZE,
> +	  sizeof(struct dm_verity_io) + v->shash_descsize + v->digest_size * 2);
> +	if (!v->io_mempool) {
> +		ti->error = "Cannot allocate io mempool";
> +		r = -ENOMEM;
> +		goto bad;
> +	}
> +
> +	v->vec_mempool = mempool_create_kmalloc_pool(DM_VERITY_MEMPOOL_SIZE,
> +					BIO_MAX_PAGES * sizeof(struct bio_vec));
> +	if (!v->vec_mempool) {
> +		ti->error = "Cannot allocate vector mempool";
> +		r = -ENOMEM;
> +		goto bad;
> +	}
> +
> +	/*v->verify_wq = alloc_workqueue("verityd", WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM, 1);*/
> +	/* WQ_UNBOUND greatly improves performance when running on ramdisk */
> +	v->verify_wq = alloc_workqueue("verityd", WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM | WQ_UNBOUND, num_online_cpus());
> +	if (!v->verify_wq) {
> +		ti->error = "Cannot allocate workqueue";
> +		r = -ENOMEM;
> +		goto bad;
> +	}
> +
> +	return 0;
> +
> +bad:
> +	verity_dtr(ti);
> +	return r;
> +}
> +
> +static void verity_dtr(struct dm_target *ti)
> +{
> +	struct dm_verity *v = ti->private;
> +
> +	if (v->verify_wq)
> +		destroy_workqueue(v->verify_wq);
> +	if (v->vec_mempool)
> +		mempool_destroy(v->vec_mempool);
> +	if (v->io_mempool)
> +		mempool_destroy(v->io_mempool);
> +	if (v->bufio)
> +		dm_bufio_client_destroy(v->bufio);
> +	kfree(v->salt);
> +	kfree(v->root_digest);
> +	if (v->tfm)
> +		crypto_free_shash(v->tfm);
> +	kfree(v->alg_name);
> +	if (v->hash_dev)
> +		dm_put_device(ti, v->hash_dev);
> +	if (v->data_dev)
> +		dm_put_device(ti, v->data_dev);
> +	kfree(v);
> +}
> +
> +static struct target_type verity_target = {
> +	.name		= "verity",
> +	.version	= {1, 0, 0},
> +	.module		= THIS_MODULE,
> +	.ctr		= verity_ctr,
> +	.dtr		= verity_dtr,
> +	.map		= verity_map,
> +	.status		= verity_status,
> +	.ioctl		= verity_ioctl,
> +	.merge		= verity_merge,
> +	.iterate_devices = verity_iterate_devices,
> +	.io_hints	= verity_io_hints,
> +};
> +
> +static int __init dm_verity_init(void)
> +{
> +	int r;
> +	r = dm_register_target(&verity_target);
> +	if (r < 0)
> +		DMERR("register failed %d", r);
> +	return r;
> +}
> +
> +static void __exit dm_verity_exit(void)
> +{
> +	dm_unregister_target(&verity_target);
> +}
> +
> +module_init(dm_verity_init);
> +module_exit(dm_verity_exit);
> +
> +MODULE_AUTHOR("Mikulas Patocka <mpatocka@redhat.com>");
> +MODULE_DESCRIPTION(DM_NAME " target for transparent disk integrity checking");
> +MODULE_LICENSE("GPL");
> +
> Index: linux-3.3-rc6-fast/drivers/md/dm-bufio.c
> ===================================================================
> --- linux-3.3-rc6-fast.orig/drivers/md/dm-bufio.c	2012-03-12 22:43:23.000000000 +0100
> +++ linux-3.3-rc6-fast/drivers/md/dm-bufio.c	2012-03-13 15:41:02.000000000 +0100
> @@ -579,7 +579,7 @@ static void write_endio(struct bio *bio,
>  	struct dm_buffer *b = container_of(bio, struct dm_buffer, bio);
>  
>  	b->write_error = error;
> -	if (error) {
> +	if (unlikely(error)) {
>  		struct dm_bufio_client *c = b->c;
>  		(void)cmpxchg(&c->async_write_error, 0, error);
>  	}
> @@ -698,13 +698,20 @@ static void __wait_for_free_buffer(struc
>  	dm_bufio_lock(c);
>  }
>  
> +enum new_flag {
> +	NF_FRESH = 0,
> +	NF_READ = 1,
> +	NF_GET = 2,
> +	NF_PREFETCH = 3
> +};
> +
>  /*
>   * Allocate a new buffer. If the allocation is not possible, wait until
>   * some other thread frees a buffer.
>   *
>   * May drop the lock and regain it.
>   */
> -static struct dm_buffer *__alloc_buffer_wait_no_callback(struct dm_bufio_client *c)
> +static struct dm_buffer *__alloc_buffer_wait_no_callback(struct dm_bufio_client *c, enum new_flag nf)
>  {
>  	struct dm_buffer *b;
>  
> @@ -727,6 +734,9 @@ static struct dm_buffer *__alloc_buffer_
>  				return b;
>  		}
>  
> +		if (nf == NF_PREFETCH)
> +			return NULL;
> +
>  		if (!list_empty(&c->reserved_buffers)) {
>  			b = list_entry(c->reserved_buffers.next,
>  				       struct dm_buffer, lru_list);
> @@ -744,9 +754,12 @@ static struct dm_buffer *__alloc_buffer_
>  	}
>  }
>  
> -static struct dm_buffer *__alloc_buffer_wait(struct dm_bufio_client *c)
> +static struct dm_buffer *__alloc_buffer_wait(struct dm_bufio_client *c, enum new_flag nf)
>  {
> -	struct dm_buffer *b = __alloc_buffer_wait_no_callback(c);
> +	struct dm_buffer *b = __alloc_buffer_wait_no_callback(c, nf);
> +
> +	if (!b)
> +		return NULL;
>  
>  	if (c->alloc_callback)
>  		c->alloc_callback(b);
> @@ -866,15 +879,8 @@ static struct dm_buffer *__find(struct d
>   * Getting a buffer
>   *--------------------------------------------------------------*/
>  
> -enum new_flag {
> -	NF_FRESH = 0,
> -	NF_READ = 1,
> -	NF_GET = 2
> -};
> -
>  static struct dm_buffer *__bufio_new(struct dm_bufio_client *c, sector_t block,
> -				     enum new_flag nf, struct dm_buffer **bp,
> -				     int *need_submit)
> +				     enum new_flag nf, int *need_submit)
>  {
>  	struct dm_buffer *b, *new_b = NULL;
>  
> @@ -882,6 +888,19 @@ static struct dm_buffer *__bufio_new(str
>  
>  	b = __find(c, block);
>  	if (b) {
> +found_buffer:
> +		if (nf == NF_PREFETCH)
> +			return NULL;
> +		/*
> +		 * Note: it is essential that we don't wait for the buffer to be
> +		 * read if dm_bufio_get function is used. Both dm_bufio_get and
> +		 * dm_bufio_prefetch can be used in the driver request routine.
> +		 * If the user called both dm_bufio_prefetch and dm_bufio_get on
> +		 * the same buffer, it would deadlock if we waited.
> +		 */
> +		if (nf == NF_GET && unlikely(test_bit(B_READING, &b->state)))
> +			return NULL;
> +
>  		b->hold_count++;
>  		__relink_lru(b, test_bit(B_DIRTY, &b->state) ||
>  			     test_bit(B_WRITING, &b->state));
> @@ -891,7 +910,9 @@ static struct dm_buffer *__bufio_new(str
>  	if (nf == NF_GET)
>  		return NULL;
>  
> -	new_b = __alloc_buffer_wait(c);
> +	new_b = __alloc_buffer_wait(c, nf);
> +	if (!new_b)
> +		return NULL;
>  
>  	/*
>  	 * We've had a period where the mutex was unlocked, so need to
> @@ -900,10 +921,7 @@ static struct dm_buffer *__bufio_new(str
>  	b = __find(c, block);
>  	if (b) {
>  		__free_buffer_wake(new_b);
> -		b->hold_count++;
> -		__relink_lru(b, test_bit(B_DIRTY, &b->state) ||
> -			     test_bit(B_WRITING, &b->state));
> -		return b;
> +		goto found_buffer;
>  	}
>  
>  	__check_watermark(c);
> @@ -957,7 +975,7 @@ static void *new_read(struct dm_bufio_cl
>  	struct dm_buffer *b;
>  
>  	dm_bufio_lock(c);
> -	b = __bufio_new(c, block, nf, bp, &need_submit);
> +	b = __bufio_new(c, block, nf, &need_submit);
>  	dm_bufio_unlock(c);
>  
>  	if (!b || IS_ERR(b))
> @@ -1006,13 +1024,46 @@ void *dm_bufio_new(struct dm_bufio_clien
>  }
>  EXPORT_SYMBOL_GPL(dm_bufio_new);
>  
> +void dm_bufio_prefetch(struct dm_bufio_client *c,
> +		       sector_t block, unsigned n_blocks)
> +{
> +	struct blk_plug plug;
> +
> +	blk_start_plug(&plug);
> +	dm_bufio_lock(c);
> +
> +	for (; n_blocks--; block++) {
> +		int need_submit;
> +		struct dm_buffer *b;
> +		b = __bufio_new(c, block, NF_PREFETCH, &need_submit);
> +		if (unlikely(b != NULL)) {
> +			dm_bufio_unlock(c);
> +
> +			if (need_submit)
> +				submit_io(b, READ, b->block, read_endio);
> +			dm_bufio_release(b);
> +
> +			dm_bufio_cond_resched();
> +
> +			if (!n_blocks)
> +				goto flush_plug;
> +			dm_bufio_lock(c);
> +		}
> +
> +	}
> +
> +	dm_bufio_unlock(c);
> +flush_plug:
> +	blk_finish_plug(&plug);
> +}
> +EXPORT_SYMBOL_GPL(dm_bufio_prefetch);
> +
>  void dm_bufio_release(struct dm_buffer *b)
>  {
>  	struct dm_bufio_client *c = b->c;
>  
>  	dm_bufio_lock(c);
>  
> -	BUG_ON(test_bit(B_READING, &b->state));
>  	BUG_ON(!b->hold_count);
>  
>  	b->hold_count--;
> @@ -1025,6 +1076,7 @@ void dm_bufio_release(struct dm_buffer *
>  		 * invalid buffer.
>  		 */
>  		if ((b->read_error || b->write_error) &&
> +		    !test_bit(B_READING, &b->state) &&
>  		    !test_bit(B_WRITING, &b->state) &&
>  		    !test_bit(B_DIRTY, &b->state)) {
>  			__unlink_buffer(b);
> @@ -1042,6 +1094,8 @@ void dm_bufio_mark_buffer_dirty(struct d
>  
>  	dm_bufio_lock(c);
>  
> +	BUG_ON(test_bit(B_READING, &b->state));
> +
>  	if (!test_and_set_bit(B_DIRTY, &b->state))
>  		__relink_lru(b, LIST_DIRTY);
>  
> Index: linux-3.3-rc6-fast/drivers/md/dm-bufio.h
> ===================================================================
> --- linux-3.3-rc6-fast.orig/drivers/md/dm-bufio.h	2012-03-12 22:43:23.000000000 +0100
> +++ linux-3.3-rc6-fast/drivers/md/dm-bufio.h	2012-03-12 22:43:25.000000000 +0100
> @@ -63,6 +63,14 @@ void *dm_bufio_new(struct dm_bufio_clien
>  		   struct dm_buffer **bp);
>  
>  /*
> + * Prefetch the specified blocks to the cache.
> + * The function starts to read the blocks and returns without waiting for
> + * I/O to finish.
> + */
> +void dm_bufio_prefetch(struct dm_bufio_client *c,
> +		       sector_t block, unsigned n_blocks);
> +
> +/*
>   * Release a reference obtained with dm_bufio_{read,get,new}. The data
>   * pointer and dm_buffer pointer is no longer valid after this call.
>   */


* Re: [PATCH] dm: remake of the verity target
  2012-03-14 21:13       ` Will Drewry
@ 2012-03-17  1:16         ` Mikulas Patocka
  2012-03-17  3:06           ` Will Drewry
  0 siblings, 1 reply; 34+ messages in thread
From: Mikulas Patocka @ 2012-03-17  1:16 UTC (permalink / raw)
  To: Will Drewry
  Cc: Mandeep Singh Baines, linux-kernel, dm-devel, Alasdair G Kergon,
	Elly Jones, Milan Broz, Olof Johansson, Steffen Klassert,
	Andrew Morton

Hi Will


On Wed, 14 Mar 2012, Will Drewry wrote:

> Hi Mikulas,
> 
> This is a nice rewrite and takes advantage of your dm-bufio layer. I
> wish it'd existed (and or we wrote it :) in 2009 when we started this
> work!  Some comments below:
> 
> > ---
> > +static void verity_prefetch_io(struct dm_verity *v, struct dm_verity_io *io)
> > +{
> > +       int i;
> > +       for (i = v->levels - 2; i >= 0; i--) {
> > +               sector_t hash_block_start;
> > +               sector_t hash_block_end;
> > +               verity_hash_at_level(v, io->block, i, &hash_block_start, NULL);
> > +               verity_hash_at_level(v, io->block + io->n_blocks - 1, i, &hash_block_end, NULL);
> > +               if (!i) {
> > +                       unsigned cluster = prefetch_cluster;
> > +        /* barrier to stop GCC from re-reading prefetch_cluster again */
> > +                       barrier();
> > +                       cluster >>= v->data_dev_block_bits;
> 
> Would:
>   unsigned cluster = prefetch_cluster >> v->data_dev_block_bits;
> not have similar behavior without a barrier?  (Yeah yeah I could
> compile and see, but I was curious if you already had.)
> 
> Since the max iterations here is 61 in a worst-case, I don't think
> it's a big deal to barrier() each time, just thought I'd ask.
> 
> > +                       if (unlikely(!cluster))
> > +                               goto no_prefetch_cluster;
> > +                       if (unlikely(cluster & (cluster - 1)))
> > +                               cluster = 1 << (fls(cluster) - 1);
> > +
> > +                       hash_block_start &= ~(sector_t)(cluster - 1);
> > +                       hash_block_end |= cluster - 1;
> > +                       if (unlikely(hash_block_end >= v->hash_blocks))
> > +                               hash_block_end = v->hash_blocks - 1;
> > +               }
> > +no_prefetch_cluster:
> > +               dm_bufio_prefetch(v->bufio, hash_block_start,
> > +                                       hash_block_end - hash_block_start + 1);

The problem here is this. If you look at the code, you think that after 
the clause "if (unlikely(!cluster)) goto no_prefetch_cluster;", cluster 
can't be zero. But this assumption is wrong. The C compiler is allowed to 
transform the above code into:

unsigned cluster;
if (!(prefetch_cluster >> v->data_dev_block_bits))
	goto no_prefetch_cluster;
cluster = prefetch_cluster >> v->data_dev_block_bits;
if (unlikely(cluster & (cluster - 1)))
	cluster = 1 << (fls(cluster) - 1);

I know it's suboptimal, but the C compiler is just allowed to perform this 
transformation. Now, if you know that "prefetch_cluster" can change 
asynchronously by another thread running simultaneously, the condition "if 
(!(prefetch_cluster >> v->data_dev_block_bits))" is useless --- 
prefetch_cluster may change just after this condition and we won't catch 
the zero value. (if the cluster value is zero in the above code, it ends 
up in hash_block_end being ORed with -1 and the prefetch goes wild over 
the whole hash device).

That's why I put that "barrier()" there. It would be better to declare 
"prefetch_cluster" as volatile, but the module param macros issue warnings 
if the variable is volatile.

Or maybe I can change it this way:
"unsigned cluster = *(volatile unsigned *)&prefetch_cluster", it could be 
better than the "barrier()".
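
For illustration, the start of the level-0 branch would then read (a
sketch; the rest of verity_prefetch_io stays as above):

unsigned cluster = *(volatile unsigned *)&prefetch_cluster;

cluster >>= v->data_dev_block_bits;
if (unlikely(!cluster))
	goto no_prefetch_cluster;

so prefetch_cluster is loaded exactly once and the zero test can't be
defeated by a concurrent update.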

> > +       case STATUSTYPE_TABLE:
> > +               DMEMIT("%u %s %s %llu %u %s ",
> > +                       0,
> > +                       v->data_dev->name,
> > +                       v->hash_dev->name,
> 
> I understand the new approach is to use major:minor instead of the
> device name.  I don't care which, but I believe agk@ requested that.

All the device mappers report dm_dev->name in their status routine, so I 
do it this way too.

> > +static int verity_ioctl(struct dm_target *ti, unsigned cmd,
> > +                       unsigned long arg)
> > +{
> > +       struct dm_verity *v = ti->private;
> > +       int r = 0;
> > +
> > +       if (v->data_start ||
> > +           ti->len != i_size_read(v->data_dev->bdev->bd_inode) >> SECTOR_SHIFT)
> > +               r = scsi_verify_blk_ioctl(NULL, cmd);
> > +
> 
> Is it worth supporting ioctl at all given these hoops?  Nothing stops
> a privileged user from directly running the ioctl on the underlying
> device/devices, it's just very inconvenient :)

I don't know. The other dm targets attempt to pass-thru ioctls too.

You need ioctl pass-thru if you want to run it over a cd-rom because 
the iso9660 filesystem needs to send an ioctl to find its superblock. 
Other than that I don't know if other filesystems need ioctls.

> > +       if (ti->len > i_size_read(v->data_dev->bdev->bd_inode) >> SECTOR_SHIFT) {
> > +               ti->error = "Data device si too small";
> 
> s/si/is
> 
> Should this also check ti->start + ti->len to ensure it isn't reading
> off the end or do you just rely on the requests failing?

ti->start is the offset in the target table --- so it shouldn't be checked
here (for example, you can map a verity device having 1024 blocks to a
sector offset 1000000 in the table --- so ti->start == 1000000 and ti->len
== 1024 --- in this case, you have to test that the underlying device has
at least 1024 blocks, but you shouldn't test it for 1000000 sectors ---
1000000 is an offset in the table, not the required device size).

But this reminds me that I had the size test wrong in verity_map ... 
fixed.

> > +MODULE_AUTHOR("Mikulas Patocka <mpatocka@redhat.com>");
> 
> As per linux/module.h, I'd welcome additional authors as per the
> lkml/patch lineage:
> MODULE_AUTHOR("Mandeep Baines <msb@chromium.org>");
> MODULE_AUTHOR("Will Drewry <wad@chromium.org>");

OK, I added you there.

> Regardless, I'll just be happy to see this functionality merge.
> 
> > +MODULE_DESCRIPTION(DM_NAME " target for transparent disk integrity checking");
> > +MODULE_LICENSE("GPL");
> > +
> > Index: linux-3.3-rc6-fast/drivers/md/dm-bufio.c
> 
> This should be in a separate patch I think.

Yes, it is a separate patch.

> >                b->hold_count++;
> 
> Are these hold_counts safe on architectures with weak memory models?
> Should they be atomic_ts?   I haven't looked at them in context, but
> based on what I see here they make me a bit nervous.
> 
> Thanks for jumping in to the fray!  None of my comments are blocking,
> so I believe the following is appropriate (if not
> s/Signed-off/Reviewed-by/).
> 
> Signed-off-by: Will Drewry <wad@chromium.org>
> 
> cheers!
> will

hold_count is read or changed only when we hold dm_bufio_client->lock, so 
it doesn't have to be atomic.

Mikulas


* Re: [PATCH] dm: remake of the verity target
  2012-03-17  1:16         ` Mikulas Patocka
@ 2012-03-17  3:06           ` Will Drewry
  0 siblings, 0 replies; 34+ messages in thread
From: Will Drewry @ 2012-03-17  3:06 UTC (permalink / raw)
  To: Mikulas Patocka
  Cc: Mandeep Singh Baines, linux-kernel, dm-devel, Alasdair G Kergon,
	Elly Jones, Milan Broz, Olof Johansson, Steffen Klassert,
	Andrew Morton

On Fri, Mar 16, 2012 at 8:16 PM, Mikulas Patocka <mpatocka@redhat.com> wrote:
> Hi Will
>
>
> On Wed, 14 Mar 2012, Will Drewry wrote:
>
>> Hi Mikulas,
>>
>> This is a nice rewrite and takes advantage of your dm-bufio layer. I
>> wish it'd existed (and or we wrote it :) in 2009 when we started this
>> work!  Some comments below:
>>
>> > ---
>> > +static void verity_prefetch_io(struct dm_verity *v, struct dm_verity_io *io)
>> > +{
>> > +       int i;
>> > +       for (i = v->levels - 2; i >= 0; i--) {
>> > +               sector_t hash_block_start;
>> > +               sector_t hash_block_end;
>> > +               verity_hash_at_level(v, io->block, i, &hash_block_start, NULL);
>> > +               verity_hash_at_level(v, io->block + io->n_blocks - 1, i, &hash_block_end, NULL);
>> > +               if (!i) {
>> > +                       unsigned cluster = prefetch_cluster;
>> > +        /* barrier to stop GCC from re-reading prefetch_cluster again */
>> > +                       barrier();
>> > +                       cluster >>= v->data_dev_block_bits;
>>
>> Would:
>>   unsigned cluster = prefetch_cluster >> v->data_dev_block_bits;
>> not have similar behavior without a barrier?  (Yeah yeah I could
>> compile and see, but I was curious if you already had.)
>>
>> Since the max iterations here is 61 in a worst-case, I don't think
>> it's a big deal to barrier() each time, just thought I'd ask.
>>
>> > +                       if (unlikely(!cluster))
>> > +                               goto no_prefetch_cluster;
>> > +                       if (unlikely(cluster & (cluster - 1)))
>> > +                               cluster = 1 << (fls(cluster) - 1);
>> > +
>> > +                       hash_block_start &= ~(sector_t)(cluster - 1);
>> > +                       hash_block_end |= cluster - 1;
>> > +                       if (unlikely(hash_block_end >= v->hash_blocks))
>> > +                               hash_block_end = v->hash_blocks - 1;
>> > +               }
>> > +no_prefetch_cluster:
>> > +               dm_bufio_prefetch(v->bufio, hash_block_start,
>> > +                                       hash_block_end - hash_block_start + 1);
>
> The problem here is this. If you look at the code, you think that after
> the clause "if (unlikely(!cluster)) goto no_prefetch_cluster;", cluster
> can't be zero. But this assumption is wrong. The C compiler is allowed to
> transform the above code into:
>
> unsigned cluster;
> if (!(prefetch_cluster >> v->data_dev_block_bits))
>        goto no_prefetch_cluster;
> cluster = prefetch_cluster >> v->data_dev_block_bits;
> if (unlikely(cluster & (cluster - 1)))
>        cluster = 1 << (fls(cluster) - 1);
>
> I know it's suboptimal, but the C compiler is just allowed to perform this
> transformation. Now, if you know that "prefetch_cluster" can change
> asynchronously by another thread running simultaneously, the condition "if
> (!(prefetch_cluster >> v->data_dev_block_bits))" is useless ---
> prefetch_cluster may change just after this condition and we won't catch
> the zero value. (if the cluster value is zero in the above code, it ends
> up in hash_block_end being ORed with -1 and the prefetch goes wild over
> the whole hash device).
>
> That's why I put that "barrier()" there. It would be better to declare
> "prefetch_cluster" as volatile, but the module param macros issue warnings
> if the variable is volatile.
>
> Or maybe I can change it this way:
> "unsigned cluster = *(volatile unsigned *)&prefetch_cluster", it could be
> better than the "barrier()".

I think that'd read a little bit more clearly, and I think the C
standard supports that approach. If it doesn't work in practice, the
barrier isn't the worst.

>> > +       case STATUSTYPE_TABLE:
>> > +               DMEMIT("%u %s %s %llu %u %s ",
>> > +                       0,
>> > +                       v->data_dev->name,
>> > +                       v->hash_dev->name,
>>
>> I understand the new approach is to use major:minor instead of the
>> device name.  I don't care which, but I believe agk@ requested that.
>
> All the device mappers report dm_dev->name in their status routine, so I
> do it this way too.
>
>> > +static int verity_ioctl(struct dm_target *ti, unsigned cmd,
>> > +                       unsigned long arg)
>> > +{
>> > +       struct dm_verity *v = ti->private;
>> > +       int r = 0;
>> > +
>> > +       if (v->data_start ||
>> > +           ti->len != i_size_read(v->data_dev->bdev->bd_inode) >> SECTOR_SHIFT)
>> > +               r = scsi_verify_blk_ioctl(NULL, cmd);
>> > +
>>
>> Is it worth supporting ioctl at all given these hoops?  Nothing stops
>> a privileged user from directly running the ioctl on the underlying
>> device/devices, it's just very inconvenient :)
>
> I don't know. The other dm targets attempt to pass-thru ioctls too.
>
> You need ioctl pass-thru if you want to run it over a cd-rom because
> the iso9660 filesystem needs to send an ioctl to find its superblock.
> Other than that I don't know if other filesystems need ioctls.

Makes sense. I just think the passthrough condition is ugly, but at
least it provides some support.

>> > +       if (ti->len > i_size_read(v->data_dev->bdev->bd_inode) >> SECTOR_SHIFT) {
>> > +               ti->error = "Data device si too small";
>>
>> s/si/is
>>
>> Should this also check ti->start + ti->len to ensure it isn't reading
>> off the end or do you just rely on the requests failing?
>
> ti->start is the offset in the target table --- so it shouldn't be checked
> here (for example, you can map a verity device having 1024 blocks to a
> sector offset 1000000 in the table --- so ti->start == 1000000 and ti->len
> == 1024 --- in this case, you have to test that the underlying device has
> at least 1024 blocks, but you shouldn't test it for 1000000 sectors ---
> 1000000 is an offset in the table, not the required device size).
>
> But this reminds me that I had the size test wrong in verity_map ...
> fixed.

Well, at least some good came of it!  I typed ti->start, but I had meant
v->data_start.  However, I was misinterpreting i_size_read as giving the
last sector rather than the actual size, so my comment was still
pointless.

>> > +MODULE_AUTHOR("Mikulas Patocka <mpatocka@redhat.com>");
>>
>> As per linux/module.h, I'd welcome additional authors as per the
>> lkml/patch lineage:
>> MODULE_AUTHOR("Mandeep Baines <msb@chromium.org>");
>> MODULE_AUTHOR("Will Drewry <wad@chromium.org>");
>
> OK, I added you there.

Very much appreciated.

>> Regardless, I'll just be happy to see this functionality merge.
>>
>> > +MODULE_DESCRIPTION(DM_NAME " target for transparent disk integrity checking");
>> > +MODULE_LICENSE("GPL");
>> > +
>> > Index: linux-3.3-rc6-fast/drivers/md/dm-bufio.c
>>
>> This should be in a separate patch I think.
>
> Yes, it is a separate patch.
>
>> >                b->hold_count++;
>>
>> Are these hold_counts safe on architectures with weak memory models?
>> Should they be atomic_ts?   I haven't looked at them in context, but
>> based on what I see here they make me a bit nervous.
>>
>> Thanks for jumping in to the fray!  None of my comments are blocking,
>> so I believe the following is appropriate (if not
>> s/Signed-off/Reviewed-by/).
>>
>> Signed-off-by: Will Drewry <wad@chromium.org>
>>
>> cheers!
>> will
>
> hold_count is read or changed only when we hold dm_bufio_client->lock, so
> it doesn't have to be atomic.

Ah nice!  The little snippet had me worried, but I should've looked at
the full context first.

Thanks!
will


* Re: [PATCH] dm: remake of the verity target
  2012-03-13 22:20     ` [PATCH] dm: remake of the verity target Mikulas Patocka
  2012-03-14 21:13       ` Will Drewry
  2012-03-14 21:43       ` Mandeep Singh Baines
@ 2012-03-20 15:41       ` Mandeep Singh Baines
  2012-03-21  0:54         ` Mikulas Patocka
  2012-03-21  1:10         ` Mikulas Patocka
  2 siblings, 2 replies; 34+ messages in thread
From: Mandeep Singh Baines @ 2012-03-20 15:41 UTC (permalink / raw)
  To: Mikulas Patocka
  Cc: Mandeep Singh Baines, linux-kernel, dm-devel, Alasdair G Kergon,
	Will Drewry, Elly Jones, Milan Broz, Olof Johansson,
	Steffen Klassert, Andrew Morton

Hi Mikulas,

Can you please resend this patch with a proper commit message?
We'd really like to see this merged. Alasdair, other than that,
what work remains for verity to be merged?

Regards,
Mandeep

Mikulas Patocka (mpatocka@redhat.com) wrote:
> Hi
> 
> > Hi Mikulus,
> > 
> > This is some nice work. I like that you've been able to abstract a lot
> > of the hash buffer management with dm-bufio. You got rid of the I/O queue.
> > I've been meaning to do that for a while. The prefetch is also nice.
> > We planned to do this but I decided to not do it now in order to get the
> > base functionality in:
> > 
> > http://crosbug.com/25441
> > 
> > However, there are some things that I don't like. I don't like comments
> > either but you have none. You also removed our documentation. You are
> 
> I added some comments. As for documentation, it's OK to use documentation 
> from your patch because the on-disk format and the target arguments are 
> the same (with an enhancement that my code supports different data and 
> metadata block size and it has a variable-length salt).
> 
> > allocated a complete shash_desc per I/O. We only allocate one per CPU.
> 
> The hash of a 4k block takes 174000 cycles, so trying to optimize away a
> memory latency of about 250 cycles (around 0.15%) doesn't make much sense.
> 
> I actually observed better performance using verity on ramdisk with 
> workqueue unbound to specific CPUs. The reason is that the ramdisk bio 
> completion routine is always run on the same CPU (that one that submitted 
> the request), so with bound workqueue, everything was executing on one 
> CPU. With unbound workqueue, I got parallelism.
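>
> For reference, the workqueue allocation in the patch below reflects this
> (the bound, single-threaded variant is kept as a comment above it):
>
> 	v->verify_wq = alloc_workqueue("verityd", WQ_CPU_INTENSIVE |
> 			WQ_MEM_RECLAIM | WQ_UNBOUND, num_online_cpus());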
> 
> > We short-circuit the hash at any level. Your implementation can only
> > short-circuit at the lowest level.
> 
> It short-circuits hash at all levels. If the function 
> "verity_verify_level" finds out that "aux->hash_verified" is non-zero, it 
> doesn't do any hashing, it just copies the hash for the lower level. My 
> implementation walks the tree from the top to the bottom, but it doesn't 
> do hash verification if the same block has been verified before.
> 
> All this tree-walking from the root to the bottom is 50 times faster than 
> the actual hashing of the data block (I measured that), so there's not 
> much point in trying to optimize it. I did a simple optimization (don't 
> walk the tree if the lowest block is already verified) and I don't need to 
> do anything complicated given that it can't improve things by more than 
> 2%.
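>
> A minimal sketch of that walk, condensed from verity_verify_io in the
> patch below:
>
> 	/* Try the lowest level first; 0 means the hash block is already
> 	   verified and the whole chain can be skipped. */
> 	r = verity_verify_level(io, io->block + b, 0, true);
> 	if (!r)
> 		goto test_block_hash;
> 	/* Otherwise walk down from the root, verifying and caching. */
> 	memcpy(io_want_digest(v, io), v->root_digest, v->digest_size);
> 	for (i = v->levels - 1; i >= 0; i--)
> 		verity_verify_level(io, io->block + b, i, false);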
> 
> > I'd like to propose that we get the version we sent upstream and then work
> > together on adding some of your enhancements incrementally.
> 
> If you add dm-bufio support, you end up deleting the majority of the original 
> code anyway. That's why I wrote it from scratch and that's why I didn't 
> attempt to morph your code.
> 
> It's simpler to write the code from scratch and it is also less bug-prone. 
> 
> > Other than
> > the changes we've made to cleanup for upstreaming, the version I
> > submitted is the code we are using in production.
> > 
> > I'm happy to add prefetch now if that is required for merging.
> > 
> > What do you think?
> > 
> > Regards,
> > Mandeep
> 
> This is the version with comments added:
> 
> Mikulas
> 
> ----
> 
> Remake of the google dm-verity patch.
> 
> Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
> 
> ---
>  drivers/md/Kconfig     |   17 
>  drivers/md/Makefile    |    1 
>  drivers/md/dm-verity.c |  851 +++++++++++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 869 insertions(+)
> 
> Index: linux-3.3-rc6-fast/drivers/md/Kconfig
> ===================================================================
> --- linux-3.3-rc6-fast.orig/drivers/md/Kconfig	2012-03-13 21:46:03.000000000 +0100
> +++ linux-3.3-rc6-fast/drivers/md/Kconfig	2012-03-13 21:46:05.000000000 +0100
> @@ -404,4 +404,21 @@ config DM_VERITY2
>  
>            If unsure, say N.
>  
> +config DM_VERITY
> +	tristate "Verity target support"
> +	depends on BLK_DEV_DM
> +	select CRYPTO
> +	select CRYPTO_HASH
> +	select DM_BUFIO
> +	---help---
> +	  This device-mapper target allows you to create a device that
> +	  transparently integrity checks the data on it. You'll need to
> +	  activate the digests you're going to use in the cryptoapi
> +	  configuration.
> +
> +	  To compile this code as a module, choose M here: the module will
> +	  be called dm-verity.
> +
> +	  If unsure, say N.
> +
>  endif # MD
> Index: linux-3.3-rc6-fast/drivers/md/Makefile
> ===================================================================
> --- linux-3.3-rc6-fast.orig/drivers/md/Makefile	2012-03-13 21:46:03.000000000 +0100
> +++ linux-3.3-rc6-fast/drivers/md/Makefile	2012-03-13 21:46:05.000000000 +0100
> @@ -29,6 +29,7 @@ obj-$(CONFIG_MD_FAULTY)		+= faulty.o
>  obj-$(CONFIG_BLK_DEV_MD)	+= md-mod.o
>  obj-$(CONFIG_BLK_DEV_DM)	+= dm-mod.o
>  obj-$(CONFIG_DM_BUFIO)		+= dm-bufio.o
> +obj-$(CONFIG_DM_VERITY)		+= dm-verity.o
>  obj-$(CONFIG_DM_CRYPT)		+= dm-crypt.o
>  obj-$(CONFIG_DM_DELAY)		+= dm-delay.o
>  obj-$(CONFIG_DM_FLAKEY)		+= dm-flakey.o
> Index: linux-3.3-rc6-fast/drivers/md/dm-verity.c
> ===================================================================
> --- /dev/null	1970-01-01 00:00:00.000000000 +0000
> +++ linux-3.3-rc6-fast/drivers/md/dm-verity.c	2012-03-13 22:02:05.000000000 +0100
> @@ -0,0 +1,851 @@
> +/*
> + * Copyright (C) 2012 Red Hat, Inc.
> + *
> + * Author: Mikulas Patocka <mpatocka@redhat.com>
> + *
> + * Based on Chromium dm-verity driver (C) 2011 The Chromium OS Authors
> + *
> + * This file is released under the GPLv2.
> + *
> + * Device mapper target parameters:
> + *	<version>	0
> + *	<data device>
> + *	<hash device>
> + *	<hash start>	(typically 0)
> + *	<block size>	(typically 4096)
> + *	<algorithm>
> + *	<digest>
> + *	optional parameters:
> + *		<salt> (should have 32 bytes for compatibility with Google code)
> + *		<hash block size> (by default it is the same as data block size)
> + *
> + * In the file "/sys/module/dm_verity/parameters/prefetch_cluster" you can set
> + * the default prefetch value. Data are read in "prefetch_cluster" chunks from the
> + * hash device. Prefetching greatly improves performance when data and hash
> + * are on the same disk on different partitions.
> + */
> +
> +#include <linux/module.h>
> +#include <linux/device-mapper.h>
> +#include <crypto/hash.h>
> +#include "dm-bufio.h"
> +
> +#define DM_MSG_PREFIX			"verity"
> +
> +#define DM_VERITY_IO_VEC_INLINE		16
> +#define DM_VERITY_MEMPOOL_SIZE		4
> +#define DM_VERITY_DEFAULT_PREFETCH_SIZE	262144
> +
> +#define DM_VERITY_MAX_LEVELS		63
> +
> +static unsigned prefetch_cluster = DM_VERITY_DEFAULT_PREFETCH_SIZE;
> +
> +module_param_named(prefetch_cluster, prefetch_cluster, uint, S_IRUGO | S_IWUSR);
> +
> +struct dm_verity {
> +	struct dm_dev *data_dev;
> +	struct dm_dev *hash_dev;
> +	struct dm_target *ti;
> +	struct dm_bufio_client *bufio;
> +	char *alg_name;
> +	struct crypto_shash *tfm;
> +	u8 *root_digest;	/* digest of the root block */
> +	u8 *salt;		/* salt, its size is salt_size */
> +	unsigned salt_size;
> +	sector_t data_start;	/* data offset in 512-byte sectors */
> +	sector_t hash_start;	/* hash start in blocks */
> +	sector_t data_blocks;	/* the number of data blocks */
> +	sector_t hash_blocks;	/* the number of hash blocks */
> +	unsigned char data_dev_block_bits;	/* log2(data blocksize) */
> +	unsigned char hash_dev_block_bits;	/* log2(hash blocksize) */
> +	unsigned char hash_per_block_bits;	/* log2(hashes in hash block) */
> +	unsigned char levels;	/* the number of tree levels */
> +	unsigned digest_size;	/* digest size for the current hash algorithm */
> +	unsigned shash_descsize;/* the size of temporary space for crypto */
> +
> +	mempool_t *io_mempool;	/* mempool of struct dm_verity_io */
> +	mempool_t *vec_mempool;	/* mempool of bio vector */
> +
> +	struct workqueue_struct *verify_wq;
> +
> +	/* starting blocks for each tree level. 0 is the lowest level. */
> +	sector_t hash_level_block[DM_VERITY_MAX_LEVELS];
> +};
> +
> +struct dm_verity_io {
> +	struct dm_verity *v;
> +	struct bio *bio;
> +
> +	/* original values of bio->bi_end_io and bio->bi_private */
> +	bio_end_io_t *orig_bi_end_io;
> +	void *orig_bi_private;
> +
> +	sector_t block;
> +	unsigned n_blocks;
> +
> +	/* saved bio vector */
> +	struct bio_vec *io_vec;
> +	unsigned io_vec_size;
> +
> +	struct work_struct work;
> +
> +	/* a space for short vectors; longer vectors are allocated separately */
> +	struct bio_vec io_vec_inline[DM_VERITY_IO_VEC_INLINE];
> +
> +	/* variable-size fields, accessible with functions
> +		io_hash_desc, io_real_digest, io_want_digest */
> +	/* u8 hash_desc[crypto_shash_descsize(v->tfm)]; */
> +	/* u8 real_digest[v->digest_size]; */
> +	/* u8 want_digest[v->digest_size]; */
> +};
> +
> +static struct shash_desc *io_hash_desc(struct dm_verity *v, struct dm_verity_io *io)
> +{
> +	return (struct shash_desc *)(io + 1);
> +}
> +
> +static u8 *io_real_digest(struct dm_verity *v, struct dm_verity_io *io)
> +{
> +	return (u8 *)(io + 1) + v->shash_descsize;
> +}
> +
> +static u8 *io_want_digest(struct dm_verity *v, struct dm_verity_io *io)
> +{
> +	return (u8 *)(io + 1) + v->shash_descsize + v->digest_size;
> +}
> +
> +/*
> + * Auxiliary structure appended to each dm-bufio buffer. If the value
> + * hash_verified is nonzero, hash of the block has been verified.
> + *
> + * There is no lock around this value; at worst, a race condition causes
> + * multiple processes to verify the hash of the same buffer simultaneously.
> + * This condition is harmless, so we don't need locking.
> + */
> +struct buffer_aux {
> +	int hash_verified;
> +};
> +
> +/*
> + * Initialize struct buffer_aux for a freshly created buffer.
> + */
> +static void dm_bufio_alloc_callback(struct dm_buffer *buf)
> +{
> +	struct buffer_aux *aux = dm_bufio_get_aux_data(buf);
> +	aux->hash_verified = 0;
> +}
> +
> +/*
> + * Translate input sector number to the sector number on the target device.
> + */
> +static sector_t verity_map_sector(struct dm_verity *v, sector_t bi_sector)
> +{
> +	return v->data_start + dm_target_offset(v->ti, bi_sector);
> +}
> +
> +/*
> + * Return hash position of a specified block at a specified tree level
> + * (0 is the lowest level).
> + * The lowest "hash_per_block_bits"-bits of the result denote hash position
> + * inside a hash block. The remaining bits denote the location of the hash block.
> + */
> +static sector_t verity_position_at_level(struct dm_verity *v, sector_t block,
> +					 int level)
> +{
> +	return block >> (level * v->hash_per_block_bits);
> +}
> +
> +static void verity_hash_at_level(struct dm_verity *v, sector_t block, int level,
> +				 sector_t *hash_block, unsigned *offset)
> +{
> +	sector_t position = verity_position_at_level(v, block, level);
> +
> +	*hash_block = v->hash_level_block[level] + (position >> v->hash_per_block_bits);
> +	if (offset)
> +		*offset = v->digest_size * (position & ((1 << v->hash_per_block_bits) - 1));
> +}
> +
> +/*
> + * Verify hash of a metadata block pertaining to the specified data block
> + * ("block" argument) at a specified level ("level" argument).
> + *
> + * On successful return, io_want_digest(v, io) contains the hash value for
> + * a lower tree level or for the data block (if we're at the lowest level).
> + *
> + * If "skip_unverified" is true, unverified buffer is skipped an 1 is returned.
> + * If "skip_unverified" is false, unverified buffer is hashed and verified
> + * against current value of io_want_digest(v, io).
> + */
> +static int verity_verify_level(struct dm_verity_io *io, sector_t block,
> +			       int level, bool skip_unverified)
> +{
> +	struct dm_verity *v = io->v;
> +	struct dm_buffer *buf;
> +	struct buffer_aux *aux;
> +	u8 *data;
> +	int r;
> +	sector_t hash_block;
> +	unsigned offset;
> +
> +	verity_hash_at_level(v, block, level, &hash_block, &offset);
> +
> +	data = dm_bufio_read(v->bufio, hash_block, &buf);
> +	if (unlikely(IS_ERR(data)))
> +		return PTR_ERR(data);
> +
> +	aux = dm_bufio_get_aux_data(buf);
> +
> +	if (!aux->hash_verified) {
> +		struct shash_desc *desc;
> +		u8 *result;
> +
> +		if (skip_unverified) {
> +			r = 1;
> +			goto release_ret_r;
> +		}
> +
> +		desc = io_hash_desc(v, io);
> +		desc->tfm = v->tfm;
> +		desc->flags = CRYPTO_TFM_REQ_MAY_SLEEP;
> +		r = crypto_shash_init(desc);
> +		if (r < 0) {
> +			DMERR("crypto_shash_init failed: %d", r);
> +			goto release_ret_r;
> +		}
> +
> +		r = crypto_shash_update(desc, data, 1 << v->hash_dev_block_bits);
> +		if (r < 0) {
> +			DMERR("crypto_shash_update failed: %d", r);
> +			goto release_ret_r;
> +		}
> +
> +		r = crypto_shash_update(desc, v->salt, v->salt_size);
> +		if (r < 0) {
> +			DMERR("crypto_shash_update failed: %d", r);
> +			goto release_ret_r;
> +		}
> +
> +		result = io_real_digest(v, io);
> +		r = crypto_shash_final(desc, result);
> +		if (r < 0) {
> +			DMERR("crypto_shash_final failed: %d", r);
> +			goto release_ret_r;
> +		}
> +		if (unlikely(memcmp(result, io_want_digest(v, io), v->digest_size))) {
> +			DMERR_LIMIT("metadata block %llu is corrupted",
> +				(unsigned long long)hash_block);
> +			r = -EIO;
> +			goto release_ret_r;
> +		} else
> +			aux->hash_verified = 1;
> +	}
> +
> +	data += offset;
> +
> +	memcpy(io_want_digest(v, io), data, v->digest_size);
> +
> +	dm_bufio_release(buf);
> +	return 0;
> +
> +release_ret_r:
> +	dm_bufio_release(buf);
> +	return r;
> +}
> +
> +/*
> + * Verify one "dm_verity_io" structure.
> + */
> +static int verity_verify_io(struct dm_verity_io *io)
> +{
> +	struct dm_verity *v = io->v;
> +	unsigned b;
> +	int i;
> +	unsigned vector = 0, offset = 0;
> +	for (b = 0; b < io->n_blocks; b++) {
> +		struct shash_desc *desc;
> +		u8 *result;
> +		int r;
> +		unsigned todo;
> +
> +		if (likely(v->levels)) {
> +			/*
> +			 * First, we try to get the requested hash for
> +			 * the current block. If the hash block itself is
> +			 * verified, zero is returned. If it isn't, this
> +			 * function returns 1 and we fall back to whole
> +			 * chain verification.
> +			 */
> +			int r = verity_verify_level(io, io->block + b, 0, true);
> +			if (likely(!r))
> +				goto test_block_hash;
> +			if (r < 0)
> +				return r;
> +		}
> +
> +		memcpy(io_want_digest(v, io), v->root_digest, v->digest_size);
> +
> +		for (i = v->levels - 1; i >= 0; i--) {
> +			int r = verity_verify_level(io, io->block + b, i, false);
> +			if (unlikely(r))
> +				return r;
> +		}
> +
> +test_block_hash:
> +		desc = io_hash_desc(v, io);
> +		desc->tfm = v->tfm;
> +		desc->flags = CRYPTO_TFM_REQ_MAY_SLEEP;
> +		r = crypto_shash_init(desc);
> +		if (r < 0) {
> +			DMERR("crypto_shash_init failed: %d", r);
> +			return r;
> +		}
> +
> +		todo = 1 << v->data_dev_block_bits;
> +		do {
> +			struct bio_vec *bv;
> +			u8 *page;
> +			unsigned len;
> +
> +			BUG_ON(vector >= io->io_vec_size);
> +			bv = &io->io_vec[vector];
> +			page = kmap_atomic(bv->bv_page, KM_USER0);
> +			len = bv->bv_len - offset;
> +			if (likely(len >= todo))
> +				len = todo;
> +			r = crypto_shash_update(desc,
> +					page + bv->bv_offset + offset, len);
> +			kunmap_atomic(page, KM_USER0);
> +			if (r < 0) {
> +				DMERR("crypto_shash_update failed: %d", r);
> +				return r;
> +			}
> +			offset += len;
> +			if (likely(offset == bv->bv_len)) {
> +				offset = 0;
> +				vector++;
> +			}
> +			todo -= len;
> +		} while (todo);
> +
> +		r = crypto_shash_update(desc, v->salt, v->salt_size);
> +		if (r < 0) {
> +			DMERR("crypto_shash_update failed: %d", r);
> +			return r;
> +		}
> +
> +		result = io_real_digest(v, io);
> +		r = crypto_shash_final(desc, result);
> +		if (r < 0) {
> +			DMERR("crypto_shash_final failed: %d", r);
> +			return r;
> +		}
> +		if (unlikely(memcmp(result, io_want_digest(v, io), v->digest_size))) {
> +			DMERR_LIMIT("data block %llu is corrupted",
> +				(unsigned long long)(io->block + b));
> +			return -EIO;
> +		}
> +	}
> +	BUG_ON(vector != io->io_vec_size);
> +	BUG_ON(offset);
> +	return 0;
> +}
> +
> +/*
> + * End one "io" structure with a given error.
> + */
> +static void verity_finish_io(struct dm_verity_io *io, int error)
> +{
> +	struct bio *bio = io->bio;
> +	struct dm_verity *v = io->v;
> +
> +	bio->bi_end_io = io->orig_bi_end_io;
> +	bio->bi_private = io->orig_bi_private;
> +
> +	if (io->io_vec != io->io_vec_inline)
> +		mempool_free(io->io_vec, v->vec_mempool);
> +	mempool_free(io, v->io_mempool);
> +
> +	bio_endio(bio, error);
> +}
> +
> +static void verity_work(struct work_struct *w)
> +{
> +	struct dm_verity_io *io = container_of(w, struct dm_verity_io, work);
> +
> +	verity_finish_io(io, verity_verify_io(io));
> +}
> +
> +static void verity_end_io(struct bio *bio, int error)
> +{
> +	struct dm_verity_io *io = bio->bi_private;
> +	if (error) {
> +		verity_finish_io(io, error);
> +		return;
> +	}
> +
> +	INIT_WORK(&io->work, verity_work);
> +	queue_work(io->v->verify_wq, &io->work);
> +}
> +
> +/*
> + * Prefetch buffers for the specified io.
> + * The root buffer is not prefetched; it is assumed that it will be cached
> + * all the time.
> + */
> +static void verity_prefetch_io(struct dm_verity *v, struct dm_verity_io *io)
> +{
> +	int i;
> +	for (i = v->levels - 2; i >= 0; i--) {
> +		sector_t hash_block_start;
> +		sector_t hash_block_end;
> +		verity_hash_at_level(v, io->block, i, &hash_block_start, NULL);
> +		verity_hash_at_level(v, io->block + io->n_blocks - 1, i, &hash_block_end, NULL);
> +		if (!i) {
> +			unsigned cluster = prefetch_cluster;
> +			/* barrier to stop GCC from re-reading prefetch_cluster */
> +			barrier();
> +			cluster >>= v->data_dev_block_bits;
> +			if (unlikely(!cluster))
> +				goto no_prefetch_cluster;
> +			if (unlikely(cluster & (cluster - 1)))
> +				cluster = 1 << (fls(cluster) - 1);
> +
> +			hash_block_start &= ~(sector_t)(cluster - 1);
> +			hash_block_end |= cluster - 1;
> +			if (unlikely(hash_block_end >= v->hash_blocks))
> +				hash_block_end = v->hash_blocks - 1;
> +		}
> +no_prefetch_cluster:
> +		dm_bufio_prefetch(v->bufio, hash_block_start,
> +					hash_block_end - hash_block_start + 1);
> +	}
> +}
> +
> +/*
> + * Bio map function. It allocates dm_verity_io structure and bio vector and
> + * fills them. Then it issues prefetches and the I/O.
> + */
> +static int verity_map(struct dm_target *ti, struct bio *bio,
> +		      union map_info *map_context)
> +{
> +	struct dm_verity *v = ti->private;
> +	struct dm_verity_io *io;
> +
> +	if (((unsigned)bio->bi_sector | bio_sectors(bio)) &
> +	    ((1 << (v->data_dev_block_bits - SECTOR_SHIFT)) - 1)) {
> +		DMERR_LIMIT("unaligned io");
> +		return -EIO;
> +	}
> +
> +	if ((bio->bi_sector + bio_sectors(bio)) >>
> +	    (v->data_dev_block_bits - SECTOR_SHIFT) > v->data_blocks) {
> +		DMERR_LIMIT("io out of range");
> +		return -EIO;
> +	}
> +
> +	if (bio_data_dir(bio) == WRITE)
> +		return -EIO;
> +
> +	io = mempool_alloc(v->io_mempool, GFP_NOIO);
> +	io->v = v;
> +	io->bio = bio;
> +	io->orig_bi_end_io = bio->bi_end_io;
> +	io->orig_bi_private = bio->bi_private;
> +	io->block = bio->bi_sector >> (v->data_dev_block_bits - SECTOR_SHIFT);
> +	io->n_blocks = bio->bi_size >> v->data_dev_block_bits;
> +
> +	bio->bi_end_io = verity_end_io;
> +	bio->bi_private = io;
> +	bio->bi_bdev = v->data_dev->bdev;
> +	bio->bi_sector = verity_map_sector(v, bio->bi_sector);
> +
> +	io->io_vec_size = bio->bi_vcnt - bio->bi_idx;
> +	if (io->io_vec_size < DM_VERITY_IO_VEC_INLINE)
> +		io->io_vec = io->io_vec_inline;
> +	else
> +		io->io_vec = mempool_alloc(v->vec_mempool, GFP_NOIO);
> +	memcpy(io->io_vec, bio_iovec(bio),
> +	       io->io_vec_size * sizeof(struct bio_vec));
> +
> +	verity_prefetch_io(v, io);
> +
> +	generic_make_request(bio);
> +
> +	return DM_MAPIO_SUBMITTED;
> +}
> +
> +static int verity_status(struct dm_target *ti, status_type_t type,
> +			 char *result, unsigned maxlen)
> +{
> +	struct dm_verity *v = ti->private;
> +	unsigned sz = 0;
> +	unsigned x;
> +
> +	switch (type) {
> +	case STATUSTYPE_INFO:
> +		result[0] = 0;
> +		break;
> +	case STATUSTYPE_TABLE:
> +		DMEMIT("%u %s %s %llu %u %s ",
> +			0,
> +			v->data_dev->name,
> +			v->hash_dev->name,
> +			(unsigned long long)v->hash_start << (v->hash_dev_block_bits - SECTOR_SHIFT),
> +			1 << v->data_dev_block_bits,
> +			v->alg_name
> +			);
> +		for (x = 0; x < v->digest_size; x++)
> +			DMEMIT("%02x", v->root_digest[x]);
> +		DMEMIT(" ");
> +		if (!v->salt_size)
> +			DMEMIT("-");
> +		else
> +			for (x = 0; x < v->salt_size; x++)
> +				DMEMIT("%02x", v->salt[x]);
> +		if (v->data_dev_block_bits != v->hash_dev_block_bits)
> +			DMEMIT(" %u", 1 << v->hash_dev_block_bits);
> +		break;
> +	}
> +	return 0;
> +}
> +
> +static int verity_ioctl(struct dm_target *ti, unsigned cmd,
> +			unsigned long arg)
> +{
> +	struct dm_verity *v = ti->private;
> +	int r = 0;
> +
> +	if (v->data_start ||
> +	    ti->len != i_size_read(v->data_dev->bdev->bd_inode) >> SECTOR_SHIFT)
> +		r = scsi_verify_blk_ioctl(NULL, cmd);
> +
> +	return r ? : __blkdev_driver_ioctl(v->data_dev->bdev, v->data_dev->mode,
> +				     cmd, arg);
> +}
> +
> +static int verity_merge(struct dm_target *ti, struct bvec_merge_data *bvm,
> +			struct bio_vec *biovec, int max_size)
> +{
> +	struct dm_verity *v = ti->private;
> +	struct request_queue *q = bdev_get_queue(v->data_dev->bdev);
> +
> +	if (!q->merge_bvec_fn)
> +		return max_size;
> +
> +	bvm->bi_bdev = v->data_dev->bdev;
> +	bvm->bi_sector = verity_map_sector(v, bvm->bi_sector);
> +
> +	return min(max_size, q->merge_bvec_fn(q, bvm, biovec));
> +}
> +
> +static int verity_iterate_devices(struct dm_target *ti,
> +				  iterate_devices_callout_fn fn, void *data)
> +{
> +	struct dm_verity *v = ti->private;
> +	return fn(ti, v->data_dev, v->data_start, ti->len, data);
> +}
> +
> +static void verity_io_hints(struct dm_target *ti, struct queue_limits *limits)
> +{
> +	struct dm_verity *v = ti->private;
> +
> +	if (limits->logical_block_size < 1 << v->data_dev_block_bits)
> +		limits->logical_block_size = 1 << v->data_dev_block_bits;
> +	if (limits->physical_block_size < 1 << v->data_dev_block_bits)
> +		limits->physical_block_size = 1 << v->data_dev_block_bits;
> +	blk_limits_io_min(limits, limits->logical_block_size);
> +}
> +
> +static void verity_dtr(struct dm_target *ti);
> +
> +static int verity_ctr(struct dm_target *ti, unsigned argc, char **argv)
> +{
> +	struct dm_verity *v;
> +	unsigned num;
> +	unsigned long long hs;
> +	int r;
> +	int i;
> +	sector_t hash_position;
> +	char dummy;
> +
> +	v = kzalloc(sizeof(struct dm_verity), GFP_KERNEL);
> +	if (!v) {
> +		ti->error = "Cannot allocate verity structure";
> +		return -ENOMEM;
> +	}
> +	ti->private = v;
> +	v->ti = ti;
> +
> +	if ((dm_table_get_mode(ti->table) & ~FMODE_READ) != 0) {
> +		ti->error = "Device must be readonly";
> +		r = -EINVAL;
> +		goto bad;
> +	}
> +
> +	if (argc < 7) {
> +		ti->error = "Not enough arguments";
> +		r = -EINVAL;
> +		goto bad;
> +	}
> +
> +	if (sscanf(argv[0], "%u%c", &num, &dummy) != 1 ||
> +	    num != 0) {
> +		ti->error = "Invalid version";
> +		r = -EINVAL;
> +		goto bad;
> +	}
> +
> +	r = dm_get_device(ti, argv[1], FMODE_READ, &v->data_dev);
> +	if (r) {
> +		ti->error = "Data device lookup failed";
> +		goto bad;
> +	}
> +
> +	r = dm_get_device(ti, argv[2], FMODE_READ, &v->hash_dev);
> +	if (r) {
> +		ti->error = "Hash device lookup failed";
> +		goto bad;
> +	}
> +
> +	if (sscanf(argv[3], "%llu%c", &hs, &dummy) != 1 ||
> +	    hs != (sector_t)hs) {
> +		ti->error = "Invalid hash start";
> +		r = -EINVAL;
> +		goto bad;
> +	}
> +
> +	if (sscanf(argv[4], "%u%c", &num, &dummy) != 1 ||
> +	    !num || (num & (num - 1)) ||
> +	    num < bdev_logical_block_size(v->data_dev->bdev) ||
> +	    num > PAGE_SIZE) {
> +		ti->error = "Invalid data device block size";
> +		r = -EINVAL;
> +		goto bad;
> +	}
> +	v->data_dev_block_bits = ffs(num) - 1;
> +	v->hash_dev_block_bits = ffs(num) - 1;
> +
> +	v->alg_name = kstrdup(argv[5], GFP_KERNEL);
> +	if (!v->alg_name) {
> +		ti->error = "Cannot allocate algorithm name";
> +		r = -ENOMEM;
> +		goto bad;
> +	}
> +
> +	v->tfm = crypto_alloc_shash(v->alg_name, 0, 0);
> +	if (IS_ERR(v->tfm)) {
> +		ti->error = "Cannot initialize hash function";
> +		r = PTR_ERR(v->tfm);
> +		v->tfm = NULL;
> +		goto bad;
> +	}
> +	v->digest_size = crypto_shash_digestsize(v->tfm);
> +	if ((1 << v->hash_dev_block_bits) < v->digest_size * 2) {
> +		ti->error = "Digest size too big";
> +		r = -EINVAL;
> +		goto bad;
> +	}
> +	v->shash_descsize =
> +		sizeof(struct shash_desc) + crypto_shash_descsize(v->tfm);
> +
> +	v->root_digest = kmalloc(v->digest_size, GFP_KERNEL);
> +	if (!v->root_digest) {
> +		ti->error = "Cannot allocate root digest";
> +		r = -ENOMEM;
> +		goto bad;
> +	}
> +	if (strlen(argv[6]) != v->digest_size * 2 ||
> +	    hex2bin(v->root_digest, argv[6], v->digest_size)) {
> +		ti->error = "Invalid root digest";
> +		r = -EINVAL;
> +		goto bad;
> +	}
> +
> +	if (argc > 7 && strcmp(argv[7], "-")) {
> +		v->salt_size = strlen(argv[7]) / 2;
> +		v->salt = kmalloc(v->salt_size, GFP_KERNEL);
> +		if (!v->salt) {
> +			ti->error = "Cannot allocate salt";
> +			r = -ENOMEM;
> +			goto bad;
> +		}
> +		if (strlen(argv[7]) != v->salt_size * 2 ||
> +		    hex2bin(v->salt, argv[7], v->salt_size)) {
> +			ti->error = "Invalid salt";
> +			r = -EINVAL;
> +			goto bad;
> +		}
> +	}
> +
> +	if (argc > 8) {
> +		if (sscanf(argv[8], "%u%c", &num, &dummy) != 1 ||
> +		    !num || (num & (num - 1)) ||
> +		    num < bdev_logical_block_size(v->hash_dev->bdev) ||
> +		    num > INT_MAX) {
> +			ti->error = "Invalid hash device block size";
> +			r = -EINVAL;
> +			goto bad;
> +		}
> +		v->hash_dev_block_bits = ffs(num) - 1;
> +	}
> +
> +	if (hs & ((1 << (v->hash_dev_block_bits - SECTOR_SHIFT)) - 1)) {
> +		ti->error = "Hash start not aligned on block boundary";
> +		r = -EINVAL;
> +		goto bad;
> +	}
> +	v->hash_start = hs >> (v->hash_dev_block_bits - SECTOR_SHIFT);
> +
> +	if (ti->len > i_size_read(v->data_dev->bdev->bd_inode) >> SECTOR_SHIFT) {
> +		ti->error = "Data device is too small";
> +		r = -EINVAL;
> +		goto bad;
> +	}
> +
> +	if (ti->len & ((1 << (v->data_dev_block_bits - SECTOR_SHIFT)) - 1)) {
> +		ti->error = "Data device length is not aligned to block size";
> +		r = -EINVAL;
> +		goto bad;
> +	}
> +
> +	v->data_blocks = ti->len >> (v->data_dev_block_bits - SECTOR_SHIFT);
> +
> +	v->hash_per_block_bits =
> +		fls((1 << v->hash_dev_block_bits) / v->digest_size) - 1;
> +
> +	v->levels = 0;
> +	if (v->data_blocks)
> +		while (v->hash_per_block_bits * v->levels < 64 &&
> +		       (unsigned long long)(v->data_blocks - 1) >>
> +		       (v->hash_per_block_bits * v->levels))
> +			v->levels++;
> +
> +	if (v->levels > DM_VERITY_MAX_LEVELS) {
> +		ti->error = "Too many tree levels";
> +		r = -E2BIG;
> +		goto bad;
> +	}
> +
> +	hash_position = v->hash_start;
> +	for (i = v->levels - 1; i >= 0; i--) {
> +		sector_t s;
> +		v->hash_level_block[i] = hash_position;
> +		s = verity_position_at_level(v, v->data_blocks, i);
> +		s = (s >> v->hash_per_block_bits) +
> +		    !!(s & ((1 << v->hash_per_block_bits) - 1));
> +		if (hash_position + s < hash_position) {
> +			ti->error = "Hash device offset overflow";
> +			r = -E2BIG;
> +			goto bad;
> +		}
> +		hash_position += s;
> +	}
> +	v->hash_blocks = hash_position;
> +
> +	v->bufio = dm_bufio_client_create(v->hash_dev->bdev,
> +		1 << v->hash_dev_block_bits, 1, sizeof(struct buffer_aux),
> +		dm_bufio_alloc_callback, NULL);
> +	if (IS_ERR(v->bufio)) {
> +		ti->error = "Cannot initialize dm-bufio";
> +		r = PTR_ERR(v->bufio);
> +		v->bufio = NULL;
> +		goto bad;
> +	}
> +
> +	if (dm_bufio_get_device_size(v->bufio) < v->hash_blocks) {
> +		ti->error = "Hash device is too small";
> +		r = -E2BIG;
> +		goto bad;
> +	}
> +
> +	v->io_mempool = mempool_create_kmalloc_pool(DM_VERITY_MEMPOOL_SIZE,
> +	  sizeof(struct dm_verity_io) + v->shash_descsize + v->digest_size * 2);
> +	if (!v->io_mempool) {
> +		ti->error = "Cannot allocate io mempool";
> +		r = -ENOMEM;
> +		goto bad;
> +	}
> +
> +	v->vec_mempool = mempool_create_kmalloc_pool(DM_VERITY_MEMPOOL_SIZE,
> +					BIO_MAX_PAGES * sizeof(struct bio_vec));
> +	if (!v->vec_mempool) {
> +		ti->error = "Cannot allocate vector mempool";
> +		r = -ENOMEM;
> +		goto bad;
> +	}
> +
> +	/*v->verify_wq = alloc_workqueue("verityd", WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM, 1);*/
> +	/* WQ_UNBOUND greatly improves performance when running on ramdisk */
> +	v->verify_wq = alloc_workqueue("verityd", WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM | WQ_UNBOUND, num_online_cpus());
> +	if (!v->verify_wq) {
> +		ti->error = "Cannot allocate workqueue";
> +		r = -ENOMEM;
> +		goto bad;
> +	}
> +
> +	return 0;
> +
> +bad:
> +	verity_dtr(ti);
> +	return r;
> +}
> +
> +static void verity_dtr(struct dm_target *ti)
> +{
> +	struct dm_verity *v = ti->private;
> +
> +	if (v->verify_wq)
> +		destroy_workqueue(v->verify_wq);
> +	if (v->vec_mempool)
> +		mempool_destroy(v->vec_mempool);
> +	if (v->io_mempool)
> +		mempool_destroy(v->io_mempool);
> +	if (v->bufio)
> +		dm_bufio_client_destroy(v->bufio);
> +	kfree(v->salt);
> +	kfree(v->root_digest);
> +	if (v->tfm)
> +		crypto_free_shash(v->tfm);
> +	kfree(v->alg_name);
> +	if (v->hash_dev)
> +		dm_put_device(ti, v->hash_dev);
> +	if (v->data_dev)
> +		dm_put_device(ti, v->data_dev);
> +	kfree(v);
> +}
> +
> +static struct target_type verity_target = {
> +	.name		= "verity",
> +	.version	= {1, 0, 0},
> +	.module		= THIS_MODULE,
> +	.ctr		= verity_ctr,
> +	.dtr		= verity_dtr,
> +	.map		= verity_map,
> +	.status		= verity_status,
> +	.ioctl		= verity_ioctl,
> +	.merge		= verity_merge,
> +	.iterate_devices = verity_iterate_devices,
> +	.io_hints	= verity_io_hints,
> +};
> +
> +static int __init dm_verity_init(void)
> +{
> +	int r;
> +	r = dm_register_target(&verity_target);
> +	if (r < 0)
> +		DMERR("register failed %d", r);
> +	return r;
> +}
> +
> +static void __exit dm_verity_exit(void)
> +{
> +	dm_unregister_target(&verity_target);
> +}
> +
> +module_init(dm_verity_init);
> +module_exit(dm_verity_exit);
> +
> +MODULE_AUTHOR("Mikulas Patocka <mpatocka@redhat.com>");
> +MODULE_DESCRIPTION(DM_NAME " target for transparent disk integrity checking");
> +MODULE_LICENSE("GPL");
> +
> Index: linux-3.3-rc6-fast/drivers/md/dm-bufio.c
> ===================================================================
> --- linux-3.3-rc6-fast.orig/drivers/md/dm-bufio.c	2012-03-12 22:43:23.000000000 +0100
> +++ linux-3.3-rc6-fast/drivers/md/dm-bufio.c	2012-03-13 15:41:02.000000000 +0100
> @@ -579,7 +579,7 @@ static void write_endio(struct bio *bio,
>  	struct dm_buffer *b = container_of(bio, struct dm_buffer, bio);
>  
>  	b->write_error = error;
> -	if (error) {
> +	if (unlikely(error)) {
>  		struct dm_bufio_client *c = b->c;
>  		(void)cmpxchg(&c->async_write_error, 0, error);
>  	}
> @@ -698,13 +698,20 @@ static void __wait_for_free_buffer(struc
>  	dm_bufio_lock(c);
>  }
>  
> +enum new_flag {
> +	NF_FRESH = 0,
> +	NF_READ = 1,
> +	NF_GET = 2,
> +	NF_PREFETCH = 3
> +};
> +
>  /*
>   * Allocate a new buffer. If the allocation is not possible, wait until
>   * some other thread frees a buffer.
>   *
>   * May drop the lock and regain it.
>   */
> -static struct dm_buffer *__alloc_buffer_wait_no_callback(struct dm_bufio_client *c)
> +static struct dm_buffer *__alloc_buffer_wait_no_callback(struct dm_bufio_client *c, enum new_flag nf)
>  {
>  	struct dm_buffer *b;
>  
> @@ -727,6 +734,9 @@ static struct dm_buffer *__alloc_buffer_
>  				return b;
>  		}
>  
> +		if (nf == NF_PREFETCH)
> +			return NULL;
> +
>  		if (!list_empty(&c->reserved_buffers)) {
>  			b = list_entry(c->reserved_buffers.next,
>  				       struct dm_buffer, lru_list);
> @@ -744,9 +754,12 @@ static struct dm_buffer *__alloc_buffer_
>  	}
>  }
>  
> -static struct dm_buffer *__alloc_buffer_wait(struct dm_bufio_client *c)
> +static struct dm_buffer *__alloc_buffer_wait(struct dm_bufio_client *c, enum new_flag nf)
>  {
> -	struct dm_buffer *b = __alloc_buffer_wait_no_callback(c);
> +	struct dm_buffer *b = __alloc_buffer_wait_no_callback(c, nf);
> +
> +	if (!b)
> +		return NULL;
>  
>  	if (c->alloc_callback)
>  		c->alloc_callback(b);
> @@ -866,15 +879,8 @@ static struct dm_buffer *__find(struct d
>   * Getting a buffer
>   *--------------------------------------------------------------*/
>  
> -enum new_flag {
> -	NF_FRESH = 0,
> -	NF_READ = 1,
> -	NF_GET = 2
> -};
> -
>  static struct dm_buffer *__bufio_new(struct dm_bufio_client *c, sector_t block,
> -				     enum new_flag nf, struct dm_buffer **bp,
> -				     int *need_submit)
> +				     enum new_flag nf, int *need_submit)
>  {
>  	struct dm_buffer *b, *new_b = NULL;
>  
> @@ -882,6 +888,19 @@ static struct dm_buffer *__bufio_new(str
>  
>  	b = __find(c, block);
>  	if (b) {
> +found_buffer:
> +		if (nf == NF_PREFETCH)
> +			return NULL;
> +		/*
> +		 * Note: it is essential that we don't wait for the buffer to be
> +		 * read if dm_bufio_get function is used. Both dm_bufio_get and
> +		 * dm_bufio_prefetch can be used in the driver request routine.
> +		 * If the user called both dm_bufio_prefetch and dm_bufio_get on
> +		 * the same buffer, it would deadlock if we waited.
> +		 */
> +		if (nf == NF_GET && unlikely(test_bit(B_READING, &b->state)))
> +			return NULL;
> +
>  		b->hold_count++;
>  		__relink_lru(b, test_bit(B_DIRTY, &b->state) ||
>  			     test_bit(B_WRITING, &b->state));
> @@ -891,7 +910,9 @@ static struct dm_buffer *__bufio_new(str
>  	if (nf == NF_GET)
>  		return NULL;
>  
> -	new_b = __alloc_buffer_wait(c);
> +	new_b = __alloc_buffer_wait(c, nf);
> +	if (!new_b)
> +		return NULL;
>  
>  	/*
>  	 * We've had a period where the mutex was unlocked, so need to
> @@ -900,10 +921,7 @@ static struct dm_buffer *__bufio_new(str
>  	b = __find(c, block);
>  	if (b) {
>  		__free_buffer_wake(new_b);
> -		b->hold_count++;
> -		__relink_lru(b, test_bit(B_DIRTY, &b->state) ||
> -			     test_bit(B_WRITING, &b->state));
> -		return b;
> +		goto found_buffer;
>  	}
>  
>  	__check_watermark(c);
> @@ -957,7 +975,7 @@ static void *new_read(struct dm_bufio_cl
>  	struct dm_buffer *b;
>  
>  	dm_bufio_lock(c);
> -	b = __bufio_new(c, block, nf, bp, &need_submit);
> +	b = __bufio_new(c, block, nf, &need_submit);
>  	dm_bufio_unlock(c);
>  
>  	if (!b || IS_ERR(b))
> @@ -1006,13 +1024,46 @@ void *dm_bufio_new(struct dm_bufio_clien
>  }
>  EXPORT_SYMBOL_GPL(dm_bufio_new);
>  
> +void dm_bufio_prefetch(struct dm_bufio_client *c,
> +		       sector_t block, unsigned n_blocks)
> +{
> +	struct blk_plug plug;
> +
> +	blk_start_plug(&plug);
> +	dm_bufio_lock(c);
> +
> +	for (; n_blocks--; block++) {
> +		int need_submit;
> +		struct dm_buffer *b;
> +		b = __bufio_new(c, block, NF_PREFETCH, &need_submit);
> +		if (unlikely(b != NULL)) {
> +			dm_bufio_unlock(c);
> +
> +			if (need_submit)
> +				submit_io(b, READ, b->block, read_endio);
> +			dm_bufio_release(b);
> +
> +			dm_bufio_cond_resched();
> +
> +			if (!n_blocks)
> +				goto flush_plug;
> +			dm_bufio_lock(c);
> +		}
> +
> +	}
> +
> +	dm_bufio_unlock(c);
> +flush_plug:
> +	blk_finish_plug(&plug);
> +}
> +EXPORT_SYMBOL_GPL(dm_bufio_prefetch);
> +
>  void dm_bufio_release(struct dm_buffer *b)
>  {
>  	struct dm_bufio_client *c = b->c;
>  
>  	dm_bufio_lock(c);
>  
> -	BUG_ON(test_bit(B_READING, &b->state));
>  	BUG_ON(!b->hold_count);
>  
>  	b->hold_count--;
> @@ -1025,6 +1076,7 @@ void dm_bufio_release(struct dm_buffer *
>  		 * invalid buffer.
>  		 */
>  		if ((b->read_error || b->write_error) &&
> +		    !test_bit(B_READING, &b->state) &&
>  		    !test_bit(B_WRITING, &b->state) &&
>  		    !test_bit(B_DIRTY, &b->state)) {
>  			__unlink_buffer(b);
> @@ -1042,6 +1094,8 @@ void dm_bufio_mark_buffer_dirty(struct d
>  
>  	dm_bufio_lock(c);
>  
> +	BUG_ON(test_bit(B_READING, &b->state));
> +
>  	if (!test_and_set_bit(B_DIRTY, &b->state))
>  		__relink_lru(b, LIST_DIRTY);
>  
> Index: linux-3.3-rc6-fast/drivers/md/dm-bufio.h
> ===================================================================
> --- linux-3.3-rc6-fast.orig/drivers/md/dm-bufio.h	2012-03-12 22:43:23.000000000 +0100
> +++ linux-3.3-rc6-fast/drivers/md/dm-bufio.h	2012-03-12 22:43:25.000000000 +0100
> @@ -63,6 +63,14 @@ void *dm_bufio_new(struct dm_bufio_clien
>  		   struct dm_buffer **bp);
>  
>  /*
> + * Prefetch the specified blocks to the cache.
> + * The function starts to read the blocks and returns without waiting for
> + * I/O to finish.
> + */
> +void dm_bufio_prefetch(struct dm_bufio_client *c,
> +		       sector_t block, unsigned n_blocks);
> +
> +/*
>   * Release a reference obtained with dm_bufio_{read,get,new}. The data
>   * pointer and dm_buffer pointer is no longer valid after this call.
>   */

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH] dm: remake of the verity target
  2012-03-20 15:41       ` Mandeep Singh Baines
@ 2012-03-21  0:54         ` Mikulas Patocka
  2012-03-21  3:03           ` Mandeep Singh Baines
  2012-03-21  3:11           ` [dm-devel] " Mikulas Patocka
  2012-03-21  1:10         ` Mikulas Patocka
  1 sibling, 2 replies; 34+ messages in thread
From: Mikulas Patocka @ 2012-03-21  0:54 UTC (permalink / raw)
  To: Mandeep Singh Baines
  Cc: linux-kernel, dm-devel, Alasdair G Kergon, Will Drewry,
	Elly Jones, Milan Broz, Olof Johansson, Steffen Klassert,
	Andrew Morton



On Tue, 20 Mar 2012, Mandeep Singh Baines wrote:

> Hi Mikulas,
> 
> Can you please resend this patch with a proper commit message?
> We'd really like to see this merged. Alasdair, other than that,
> what work is remaining for verity to be merged?
> 
> Regards,
> Mandeep

Hi

I'm sending this new version of dm-verity to be merged. I've made some 
final changes to the format; hopefully no more changes will be needed. These 
changes make it incompatible with the original Google code (but the 
original code can be trivially changed to support these modifications).

Changes:

* Salt is hashed before the block (it used to be hashed after). The reason 
is that hashing a random salt before the block makes the process 
resilient to hash function collisions - so you can safely use md5, even if 
there's a collision attack against it.
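
In code terms, the per-block hashing in the patch below now does this
(error handling omitted):

	crypto_shash_init(desc);
	crypto_shash_update(desc, v->salt, v->salt_size);	/* salt first */
	crypto_shash_update(desc, data, 1 << v->hash_dev_block_bits);
	crypto_shash_final(desc, result);

i.e. the digest is H(salt || block) rather than H(block || salt).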

* The argument line was simplified; there are no optional arguments.

* There is a new argument specifying the size of the data device.
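
For example, a complete table line with the new argument layout could look
like this (the devices, block count, start block, digest and salt here are
made-up placeholders):

	dmsetup create vroot --readonly --table \
	  "0 409600 verity 0 /dev/sda1 /dev/sda2 4096 4096 51200 1 sha256 <digest> <salt>"

(51200 data blocks of 4096 bytes correspond to the 409600 sectors that the
target maps.)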

Mikulas

---

Remake of the google dm-verity patch.

Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>

---
 drivers/md/Kconfig     |   17 
 drivers/md/Makefile    |    1 
 drivers/md/dm-verity.c |  849 +++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 867 insertions(+)

Index: linux-3.3-fast/drivers/md/Kconfig
===================================================================
--- linux-3.3-fast.orig/drivers/md/Kconfig	2012-03-19 13:46:54.000000000 +0100
+++ linux-3.3-fast/drivers/md/Kconfig	2012-03-19 13:46:55.000000000 +0100
@@ -404,4 +404,21 @@ config DM_VERITY2
 
           If unsure, say N.
 
+config DM_VERITY
+	tristate "Verity target support"
+	depends on BLK_DEV_DM
+	select CRYPTO
+	select CRYPTO_HASH
+	select DM_BUFIO
+	---help---
+	  This device-mapper target allows you to create a device that
+	  transparently integrity checks the data on it. You'll need to
+	  activate the digests you're going to use in the cryptoapi
+	  configuration.
+
+	  To compile this code as a module, choose M here: the module will
+	  be called dm-verity.
+
+	  If unsure, say N.
+
 endif # MD
Index: linux-3.3-fast/drivers/md/Makefile
===================================================================
--- linux-3.3-fast.orig/drivers/md/Makefile	2012-03-19 13:46:54.000000000 +0100
+++ linux-3.3-fast/drivers/md/Makefile	2012-03-19 13:46:55.000000000 +0100
@@ -29,6 +29,7 @@ obj-$(CONFIG_MD_FAULTY)		+= faulty.o
 obj-$(CONFIG_BLK_DEV_MD)	+= md-mod.o
 obj-$(CONFIG_BLK_DEV_DM)	+= dm-mod.o
 obj-$(CONFIG_DM_BUFIO)		+= dm-bufio.o
+obj-$(CONFIG_DM_VERITY)		+= dm-verity.o
 obj-$(CONFIG_DM_CRYPT)		+= dm-crypt.o
 obj-$(CONFIG_DM_DELAY)		+= dm-delay.o
 obj-$(CONFIG_DM_FLAKEY)		+= dm-flakey.o
Index: linux-3.3-fast/drivers/md/dm-verity.c
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ linux-3.3-fast/drivers/md/dm-verity.c	2012-03-20 22:03:53.000000000 +0100
@@ -0,0 +1,849 @@
+/*
+ * Copyright (C) 2012 Red Hat, Inc.
+ *
+ * Author: Mikulas Patocka <mpatocka@redhat.com>
+ *
+ * Based on Chromium dm-verity driver (C) 2011 The Chromium OS Authors
+ *
+ * This file is released under the GPLv2.
+ *
+ * Device mapper target parameters:
+ *	<version>		0
+ *	<data device>
+ *	<hash device>
+ *	<data block size>
+ *	<hash block size>
+ *	<the number of data blocks>
+ *	<hash start block>
+ *	<algorithm>
+ *	<digest>
+ *	<salt>			(hex bytes or "-" for no salt)
+ *
+ * In the file "/sys/module/dm_verity/parameters/prefetch_cluster" you can set
+ * the default prefetch value. Data are read in "prefetch_cluster" chunks from the
+ * hash device. Prefetching greatly improves performance when data and hash
+ * are on the same disk on different partitions on devices with poor random
+ * access behavior.
+ */
+
+#include <linux/module.h>
+#include <linux/device-mapper.h>
+#include <crypto/hash.h>
+#include "dm-bufio.h"
+
+#define DM_MSG_PREFIX			"verity"
+
+#define DM_VERITY_IO_VEC_INLINE		16
+#define DM_VERITY_MEMPOOL_SIZE		4
+#define DM_VERITY_DEFAULT_PREFETCH_SIZE	262144
+
+#define DM_VERITY_MAX_LEVELS		63
+
+static unsigned prefetch_cluster = DM_VERITY_DEFAULT_PREFETCH_SIZE;
+
+module_param_named(prefetch_cluster, prefetch_cluster, uint, S_IRUGO | S_IWUSR);
+
+struct dm_verity {
+	struct dm_dev *data_dev;
+	struct dm_dev *hash_dev;
+	struct dm_target *ti;
+	struct dm_bufio_client *bufio;
+	char *alg_name;
+	struct crypto_shash *tfm;
+	u8 *root_digest;	/* digest of the root block */
+	u8 *salt;		/* salt, its size is salt_size */
+	unsigned salt_size;
+	sector_t data_start;	/* data offset in 512-byte sectors */
+	sector_t hash_start;	/* hash start in blocks */
+	sector_t data_blocks;	/* the number of data blocks */
+	sector_t hash_blocks;	/* the number of hash blocks */
+	unsigned char data_dev_block_bits;	/* log2(data blocksize) */
+	unsigned char hash_dev_block_bits;	/* log2(hash blocksize) */
+	unsigned char hash_per_block_bits;	/* log2(hashes in hash block) */
+	unsigned char levels;	/* the number of tree levels */
+	unsigned digest_size;	/* digest size for the current hash algorithm */
+	unsigned shash_descsize;/* the size of temporary space for crypto */
+
+	mempool_t *io_mempool;	/* mempool of struct dm_verity_io */
+	mempool_t *vec_mempool;	/* mempool of bio vector */
+
+	struct workqueue_struct *verify_wq;
+
+	/* starting blocks for each tree level. 0 is the lowest level. */
+	sector_t hash_level_block[DM_VERITY_MAX_LEVELS];
+};
+
+struct dm_verity_io {
+	struct dm_verity *v;
+	struct bio *bio;
+
+	/* original values of bio->bi_end_io and bio->bi_private */
+	bio_end_io_t *orig_bi_end_io;
+	void *orig_bi_private;
+
+	sector_t block;
+	unsigned n_blocks;
+
+	/* saved bio vector */
+	struct bio_vec *io_vec;
+	unsigned io_vec_size;
+
+	struct work_struct work;
+
+	/* a space for short vectors; longer vectors are allocated separately */
+	struct bio_vec io_vec_inline[DM_VERITY_IO_VEC_INLINE];
+
+	/* variable-size fields, accessible with functions
+		io_hash_desc, io_real_digest, io_want_digest */
+	/* u8 hash_desc[v->shash_descsize]; */
+	/* u8 real_digest[v->digest_size]; */
+	/* u8 want_digest[v->digest_size]; */
+};
+
+static struct shash_desc *io_hash_desc(struct dm_verity *v, struct dm_verity_io *io)
+{
+	return (struct shash_desc *)(io + 1);
+}
+
+static u8 *io_real_digest(struct dm_verity *v, struct dm_verity_io *io)
+{
+	return (u8 *)(io + 1) + v->shash_descsize;
+}
+
+static u8 *io_want_digest(struct dm_verity *v, struct dm_verity_io *io)
+{
+	return (u8 *)(io + 1) + v->shash_descsize + v->digest_size;
+}
+
+/*
+ * Auxiliary structure appended to each dm-bufio buffer. If the value
+ * hash_verified is nonzero, hash of the block has been verified.
+ *
+ * The variable hash_verified is set to 0 when allocating the buffer, then
+ * it can be changed to 1 and it is never reset to 0 again.
+ *
+ * There is no lock around this value; at worst, a race condition causes
+ * multiple processes to verify the hash of the same buffer simultaneously
+ * and write 1 to hash_verified at the same time.
+ * This condition is harmless, so we don't need locking.
+ */
+struct buffer_aux {
+	int hash_verified;
+};
+
+/*
+ * Initialize struct buffer_aux for a freshly created buffer.
+ */
+static void dm_bufio_alloc_callback(struct dm_buffer *buf)
+{
+	struct buffer_aux *aux = dm_bufio_get_aux_data(buf);
+	aux->hash_verified = 0;
+}
+
+/*
+ * Translate input sector number to the sector number on the target device.
+ */
+static sector_t verity_map_sector(struct dm_verity *v, sector_t bi_sector)
+{
+	return v->data_start + dm_target_offset(v->ti, bi_sector);
+}
+
+/*
+ * Return hash position of a specified block at a specified tree level
+ * (0 is the lowest level).
+ * The lowest "hash_per_block_bits"-bits of the result denote hash position
+ * inside a hash block. The remaining bits denote location of the hash block.
+ */
+static sector_t verity_position_at_level(struct dm_verity *v, sector_t block,
+					 int level)
+{
+	return block >> (level * v->hash_per_block_bits);
+}
+
+static void verity_hash_at_level(struct dm_verity *v, sector_t block, int level,
+				 sector_t *hash_block, unsigned *offset)
+{
+	sector_t position = verity_position_at_level(v, block, level);
+
+	*hash_block = v->hash_level_block[level] + (position >> v->hash_per_block_bits);
+	if (offset)
+		*offset = (position & ((1 << v->hash_per_block_bits) - 1)) << (v->hash_dev_block_bits - v->hash_per_block_bits);
+}
+
+/*
+ * Verify hash of a metadata block pertaining to the specified data block
+ * ("block" argument) at a specified level ("level" argument).
+ *
+ * On successful return, io_want_digest(v, io) contains the hash value for
+ * a lower tree level or for the data block (if we're at the lowest level).
+ *
+ * If "skip_unverified" is true, unverified buffer is skipped an 1 is returned.
+ * If "skip_unverified" is false, unverified buffer is hashed and verified
+ * against current value of io_want_digest(v, io).
+ */
+static int verity_verify_level(struct dm_verity_io *io, sector_t block,
+			       int level, bool skip_unverified)
+{
+	struct dm_verity *v = io->v;
+	struct dm_buffer *buf;
+	struct buffer_aux *aux;
+	u8 *data;
+	int r;
+	sector_t hash_block;
+	unsigned offset;
+
+	verity_hash_at_level(v, block, level, &hash_block, &offset);
+
+	data = dm_bufio_read(v->bufio, hash_block, &buf);
+	if (unlikely(IS_ERR(data)))
+		return PTR_ERR(data);
+
+	aux = dm_bufio_get_aux_data(buf);
+
+	if (!aux->hash_verified) {
+		struct shash_desc *desc;
+		u8 *result;
+
+		if (skip_unverified) {
+			r = 1;
+			goto release_ret_r;
+		}
+
+		desc = io_hash_desc(v, io);
+		desc->tfm = v->tfm;
+		desc->flags = CRYPTO_TFM_REQ_MAY_SLEEP;
+		r = crypto_shash_init(desc);
+		if (r < 0) {
+			DMERR("crypto_shash_init failed: %d", r);
+			goto release_ret_r;
+		}
+
+		r = crypto_shash_update(desc, v->salt, v->salt_size);
+		if (r < 0) {
+			DMERR("crypto_shash_update failed: %d", r);
+			goto release_ret_r;
+		}
+
+		r = crypto_shash_update(desc, data, 1 << v->hash_dev_block_bits);
+		if (r < 0) {
+			DMERR("crypto_shash_update failed: %d", r);
+			goto release_ret_r;
+		}
+
+		result = io_real_digest(v, io);
+		r = crypto_shash_final(desc, result);
+		if (r < 0) {
+			DMERR("crypto_shash_final failed: %d", r);
+			goto release_ret_r;
+		}
+		if (unlikely(memcmp(result, io_want_digest(v, io), v->digest_size))) {
+			DMERR_LIMIT("metadata block %llu is corrupted",
+				(unsigned long long)hash_block);
+			r = -EIO;
+			goto release_ret_r;
+		} else
+			aux->hash_verified = 1;
+	}
+
+	data += offset;
+
+	memcpy(io_want_digest(v, io), data, v->digest_size);
+
+	dm_bufio_release(buf);
+	return 0;
+
+release_ret_r:
+	dm_bufio_release(buf);
+	return r;
+}
+
+/*
+ * Verify one "dm_verity_io" structure.
+ */
+static int verity_verify_io(struct dm_verity_io *io)
+{
+	struct dm_verity *v = io->v;
+	unsigned b;
+	int i;
+	unsigned vector = 0, offset = 0;
+	for (b = 0; b < io->n_blocks; b++) {
+		struct shash_desc *desc;
+		u8 *result;
+		int r;
+		unsigned todo;
+
+		if (likely(v->levels)) {
+			/*
+			 * First, we try to get the requested hash for
+			 * the current block. If the hash block itself is
+			 * verified, zero is returned. If it isn't, this
+			 * function returns 1 and we fall back to whole
+			 * chain verification.
+			 */
+			int r = verity_verify_level(io, io->block + b, 0, true);
+			if (likely(!r))
+				goto test_block_hash;
+			if (r < 0)
+				return r;
+		}
+
+		memcpy(io_want_digest(v, io), v->root_digest, v->digest_size);
+
+		for (i = v->levels - 1; i >= 0; i--) {
+			int r = verity_verify_level(io, io->block + b, i, false);
+			if (unlikely(r))
+				return r;
+		}
+
+test_block_hash:
+		desc = io_hash_desc(v, io);
+		desc->tfm = v->tfm;
+		desc->flags = CRYPTO_TFM_REQ_MAY_SLEEP;
+		r = crypto_shash_init(desc);
+		if (r < 0) {
+			DMERR("crypto_shash_init failed: %d", r);
+			return r;
+		}
+
+		r = crypto_shash_update(desc, v->salt, v->salt_size);
+		if (r < 0) {
+			DMERR("crypto_shash_update failed: %d", r);
+			return r;
+		}
+
+		todo = 1 << v->data_dev_block_bits;
+		do {
+			struct bio_vec *bv;
+			u8 *page;
+			unsigned len;
+
+			BUG_ON(vector >= io->io_vec_size);
+			bv = &io->io_vec[vector];
+			page = kmap_atomic(bv->bv_page, KM_USER0);
+			len = bv->bv_len - offset;
+			if (likely(len >= todo))
+				len = todo;
+			r = crypto_shash_update(desc,
+					page + bv->bv_offset + offset, len);
+			kunmap_atomic(page, KM_USER0);
+			if (r < 0) {
+				DMERR("crypto_shash_update failed: %d", r);
+				return r;
+			}
+			offset += len;
+			if (likely(offset == bv->bv_len)) {
+				offset = 0;
+				vector++;
+			}
+			todo -= len;
+		} while (todo);
+
+		result = io_real_digest(v, io);
+		r = crypto_shash_final(desc, result);
+		if (r < 0) {
+			DMERR("crypto_shash_final failed: %d", r);
+			return r;
+		}
+		if (unlikely(memcmp(result, io_want_digest(v, io), v->digest_size))) {
+			DMERR_LIMIT("data block %llu is corrupted",
+				(unsigned long long)(io->block + b));
+			return -EIO;
+		}
+	}
+	BUG_ON(vector != io->io_vec_size);
+	BUG_ON(offset);
+	return 0;
+}
+
+/*
+ * End one "io" structure with a given error.
+ */
+static void verity_finish_io(struct dm_verity_io *io, int error)
+{
+	struct bio *bio = io->bio;
+	struct dm_verity *v = io->v;
+
+	bio->bi_end_io = io->orig_bi_end_io;
+	bio->bi_private = io->orig_bi_private;
+
+	if (io->io_vec != io->io_vec_inline)
+		mempool_free(io->io_vec, v->vec_mempool);
+	mempool_free(io, v->io_mempool);
+
+	bio_endio(bio, error);
+}
+
+static void verity_work(struct work_struct *w)
+{
+	struct dm_verity_io *io = container_of(w, struct dm_verity_io, work);
+
+	verity_finish_io(io, verity_verify_io(io));
+}
+
+static void verity_end_io(struct bio *bio, int error)
+{
+	struct dm_verity_io *io = bio->bi_private;
+	if (error) {
+		verity_finish_io(io, error);
+		return;
+	}
+
+	INIT_WORK(&io->work, verity_work);
+	queue_work(io->v->verify_wq, &io->work);
+}
+
+/*
+ * Prefetch buffers for the specified io.
+ * The root buffer is not prefetched; it is assumed that it will be cached
+ * all the time.
+ */
+static void verity_prefetch_io(struct dm_verity *v, struct dm_verity_io *io)
+{
+	int i;
+	for (i = v->levels - 2; i >= 0; i--) {
+		sector_t hash_block_start;
+		sector_t hash_block_end;
+		verity_hash_at_level(v, io->block, i, &hash_block_start, NULL);
+		verity_hash_at_level(v, io->block + io->n_blocks - 1, i, &hash_block_end, NULL);
+		if (!i) {
+			unsigned cluster = *(volatile unsigned *)&prefetch_cluster;
+			cluster >>= v->data_dev_block_bits;
+			if (unlikely(!cluster))
+				goto no_prefetch_cluster;
+			if (unlikely(cluster & (cluster - 1)))
+				cluster = 1 << (fls(cluster) - 1);
+
+			hash_block_start &= ~(sector_t)(cluster - 1);
+			hash_block_end |= cluster - 1;
+			if (unlikely(hash_block_end >= v->hash_blocks))
+				hash_block_end = v->hash_blocks - 1;
+		}
+no_prefetch_cluster:
+		dm_bufio_prefetch(v->bufio, hash_block_start,
+					hash_block_end - hash_block_start + 1);
+	}
+}
+
+/*
+ * Bio map function. It allocates dm_verity_io structure and bio vector and
+ * fills them. Then it issues prefetches and the I/O.
+ */
+static int verity_map(struct dm_target *ti, struct bio *bio,
+		      union map_info *map_context)
+{
+	struct dm_verity *v = ti->private;
+	struct dm_verity_io *io;
+
+	bio->bi_bdev = v->data_dev->bdev;
+	bio->bi_sector = verity_map_sector(v, bio->bi_sector);
+
+	if (((unsigned)bio->bi_sector | bio_sectors(bio)) &
+	    ((1 << (v->data_dev_block_bits - SECTOR_SHIFT)) - 1)) {
+		DMERR_LIMIT("unaligned io");
+		return -EIO;
+	}
+
+	if ((bio->bi_sector + bio_sectors(bio)) >>
+	    (v->data_dev_block_bits - SECTOR_SHIFT) > v->data_blocks) {
+		DMERR_LIMIT("io out of range");
+		return -EIO;
+	}
+
+	if (bio_data_dir(bio) == WRITE)
+		return -EIO;
+
+	io = mempool_alloc(v->io_mempool, GFP_NOIO);
+	io->v = v;
+	io->bio = bio;
+	io->orig_bi_end_io = bio->bi_end_io;
+	io->orig_bi_private = bio->bi_private;
+	io->block = bio->bi_sector >> (v->data_dev_block_bits - SECTOR_SHIFT);
+	io->n_blocks = bio->bi_size >> v->data_dev_block_bits;
+
+	bio->bi_end_io = verity_end_io;
+	bio->bi_private = io;
+	io->io_vec_size = bio->bi_vcnt - bio->bi_idx;
+	if (io->io_vec_size < DM_VERITY_IO_VEC_INLINE)
+		io->io_vec = io->io_vec_inline;
+	else
+		io->io_vec = mempool_alloc(v->vec_mempool, GFP_NOIO);
+	memcpy(io->io_vec, bio_iovec(bio),
+	       io->io_vec_size * sizeof(struct bio_vec));
+
+	verity_prefetch_io(v, io);
+
+	generic_make_request(bio);
+
+	return DM_MAPIO_SUBMITTED;
+}
+
+static int verity_status(struct dm_target *ti, status_type_t type,
+			 char *result, unsigned maxlen)
+{
+	struct dm_verity *v = ti->private;
+	unsigned sz = 0;
+	unsigned x;
+
+	switch (type) {
+	case STATUSTYPE_INFO:
+		result[0] = 0;
+		break;
+	case STATUSTYPE_TABLE:
+		DMEMIT("%u %s %s %u %u %llu %llu %s ",
+			0,
+			v->data_dev->name,
+			v->hash_dev->name,
+			1 << v->data_dev_block_bits,
+			1 << v->hash_dev_block_bits,
+			(unsigned long long)v->data_blocks,
+			(unsigned long long)v->hash_start,
+			v->alg_name
+			);
+		for (x = 0; x < v->digest_size; x++)
+			DMEMIT("%02x", v->root_digest[x]);
+		DMEMIT(" ");
+		if (!v->salt_size)
+			DMEMIT("-");
+		else
+			for (x = 0; x < v->salt_size; x++)
+				DMEMIT("%02x", v->salt[x]);
+		break;
+	}
+	return 0;
+}
+
+static int verity_ioctl(struct dm_target *ti, unsigned cmd,
+			unsigned long arg)
+{
+	struct dm_verity *v = ti->private;
+	int r = 0;
+
+	if (v->data_start ||
+	    ti->len != i_size_read(v->data_dev->bdev->bd_inode) >> SECTOR_SHIFT)
+		r = scsi_verify_blk_ioctl(NULL, cmd);
+
+	return r ? : __blkdev_driver_ioctl(v->data_dev->bdev, v->data_dev->mode,
+				     cmd, arg);
+}
+
+static int verity_merge(struct dm_target *ti, struct bvec_merge_data *bvm,
+			struct bio_vec *biovec, int max_size)
+{
+	struct dm_verity *v = ti->private;
+	struct request_queue *q = bdev_get_queue(v->data_dev->bdev);
+
+	if (!q->merge_bvec_fn)
+		return max_size;
+
+	bvm->bi_bdev = v->data_dev->bdev;
+	bvm->bi_sector = verity_map_sector(v, bvm->bi_sector);
+
+	return min(max_size, q->merge_bvec_fn(q, bvm, biovec));
+}
+
+static int verity_iterate_devices(struct dm_target *ti,
+				  iterate_devices_callout_fn fn, void *data)
+{
+	struct dm_verity *v = ti->private;
+	return fn(ti, v->data_dev, v->data_start, ti->len, data);
+}
+
+static void verity_io_hints(struct dm_target *ti, struct queue_limits *limits)
+{
+	struct dm_verity *v = ti->private;
+
+	if (limits->logical_block_size < 1 << v->data_dev_block_bits)
+		limits->logical_block_size = 1 << v->data_dev_block_bits;
+	if (limits->physical_block_size < 1 << v->data_dev_block_bits)
+		limits->physical_block_size = 1 << v->data_dev_block_bits;
+	blk_limits_io_min(limits, limits->logical_block_size);
+}
+
+static void verity_dtr(struct dm_target *ti);
+
+static int verity_ctr(struct dm_target *ti, unsigned argc, char **argv)
+{
+	struct dm_verity *v;
+	unsigned num;
+	unsigned long long num_ll;
+	int r;
+	int i;
+	sector_t hash_position;
+	char dummy;
+
+	v = kzalloc(sizeof(struct dm_verity), GFP_KERNEL);
+	if (!v) {
+		ti->error = "Cannot allocate verity structure";
+		return -ENOMEM;
+	}
+	ti->private = v;
+	v->ti = ti;
+
+	if ((dm_table_get_mode(ti->table) & ~FMODE_READ) != 0) {
+		ti->error = "Device must be readonly";
+		r = -EINVAL;
+		goto bad;
+	}
+
+	if (argc != 10) {
+		ti->error = "Invalid argument count";
+		r = -EINVAL;
+		goto bad;
+	}
+
+	if (sscanf(argv[0], "%u%c", &num, &dummy) != 1 ||
+	    num != 0) {
+		ti->error = "Invalid version";
+		r = -EINVAL;
+		goto bad;
+	}
+
+	r = dm_get_device(ti, argv[1], FMODE_READ, &v->data_dev);
+	if (r) {
+		ti->error = "Data device lookup failed";
+		goto bad;
+	}
+
+	r = dm_get_device(ti, argv[2], FMODE_READ, &v->hash_dev);
+	if (r) {
+		ti->error = "Hash device lookup failed";
+		goto bad;
+	}
+
+	if (sscanf(argv[3], "%u%c", &num, &dummy) != 1 ||
+	    !num || (num & (num - 1)) ||
+	    num < bdev_logical_block_size(v->data_dev->bdev) ||
+	    num > PAGE_SIZE) {
+		ti->error = "Invalid data device block size";
+		r = -EINVAL;
+		goto bad;
+	}
+	v->data_dev_block_bits = ffs(num) - 1;
+
+	if (sscanf(argv[4], "%u%c", &num, &dummy) != 1 ||
+	    !num || (num & (num - 1)) ||
+	    num < bdev_logical_block_size(v->hash_dev->bdev) ||
+	    num > INT_MAX) {
+		ti->error = "Invalid hash device block size";
+		r = -EINVAL;
+		goto bad;
+	}
+	v->hash_dev_block_bits = ffs(num) - 1;
+
+	if (sscanf(argv[5], "%llu%c", &num_ll, &dummy) != 1 ||
+	    num_ll << (v->data_dev_block_bits - SECTOR_SHIFT) !=
+	    (sector_t)num_ll << (v->data_dev_block_bits - SECTOR_SHIFT)) {
+		ti->error = "Invalid data blocks";
+		r = -EINVAL;
+		goto bad;
+	}
+	v->data_blocks = num_ll;
+
+	if (ti->len > (v->data_blocks << (v->data_dev_block_bits - SECTOR_SHIFT))) {
+		ti->error = "Data device is too small";
+		r = -EINVAL;
+		goto bad;
+	}
+
+	if (sscanf(argv[6], "%llu%c", &num_ll, &dummy) != 1 ||
+	    num_ll << (v->hash_dev_block_bits - SECTOR_SHIFT) !=
+	    (sector_t)num_ll << (v->hash_dev_block_bits - SECTOR_SHIFT)) {
+		ti->error = "Invalid hash start";
+		r = -EINVAL;
+		goto bad;
+	}
+	v->hash_start = num_ll;
+
+	v->alg_name = kstrdup(argv[7], GFP_KERNEL);
+	if (!v->alg_name) {
+		ti->error = "Cannot allocate algorithm name";
+		r = -ENOMEM;
+		goto bad;
+	}
+
+	v->tfm = crypto_alloc_shash(v->alg_name, 0, 0);
+	if (IS_ERR(v->tfm)) {
+		ti->error = "Cannot initialize hash function";
+		r = PTR_ERR(v->tfm);
+		v->tfm = NULL;
+		goto bad;
+	}
+	v->digest_size = crypto_shash_digestsize(v->tfm);
+	if ((1 << v->hash_dev_block_bits) < v->digest_size * 2) {
+		ti->error = "Digest size too big";
+		r = -EINVAL;
+		goto bad;
+	}
+	v->shash_descsize =
+		sizeof(struct shash_desc) + crypto_shash_descsize(v->tfm);
+
+	v->root_digest = kmalloc(v->digest_size, GFP_KERNEL);
+	if (!v->root_digest) {
+		ti->error = "Cannot allocate root digest";
+		r = -ENOMEM;
+		goto bad;
+	}
+	if (strlen(argv[8]) != v->digest_size * 2 ||
+	    hex2bin(v->root_digest, argv[8], v->digest_size)) {
+		ti->error = "Invalid root digest";
+		r = -EINVAL;
+		goto bad;
+	}
+
+	if (strcmp(argv[9], "-")) {
+		v->salt_size = strlen(argv[9]) / 2;
+		v->salt = kmalloc(v->salt_size, GFP_KERNEL);
+		if (!v->salt) {
+			ti->error = "Cannot allocate salt";
+			r = -ENOMEM;
+			goto bad;
+		}
+		if (strlen(argv[9]) != v->salt_size * 2 ||
+		    hex2bin(v->salt, argv[9], v->salt_size)) {
+			ti->error = "Invalid salt";
+			r = -EINVAL;
+			goto bad;
+		}
+	}
+
+	v->hash_per_block_bits =
+		fls((1 << v->hash_dev_block_bits) / v->digest_size) - 1;
+
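+	/*
+	 * Illustrative example: with 4096-byte hash blocks and a 32-byte
+	 * digest, hash_per_block_bits is 7 (128 hashes per block), so
+	 * 2^20 data blocks need 3 tree levels.
+	 */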
+	v->levels = 0;
+	if (v->data_blocks)
+		while (v->hash_per_block_bits * v->levels < 64 &&
+		       (unsigned long long)(v->data_blocks - 1) >>
+		       (v->hash_per_block_bits * v->levels))
+			v->levels++;
+
+	if (v->levels > DM_VERITY_MAX_LEVELS) {
+		ti->error = "Too many tree levels";
+		r = -E2BIG;
+		goto bad;
+	}
+
+	hash_position = v->hash_start;
+	for (i = v->levels - 1; i >= 0; i--) {
+		sector_t s;
+		v->hash_level_block[i] = hash_position;
+		s = verity_position_at_level(v, v->data_blocks, i);
+		s = (s >> v->hash_per_block_bits) +
+		    !!(s & ((1 << v->hash_per_block_bits) - 1));
+		if (hash_position + s < hash_position) {
+			ti->error = "Hash device offset overflow";
+			r = -E2BIG;
+			goto bad;
+		}
+		hash_position += s;
+	}
+	v->hash_blocks = hash_position;
+
+	v->bufio = dm_bufio_client_create(v->hash_dev->bdev,
+		1 << v->hash_dev_block_bits, 1, sizeof(struct buffer_aux),
+		dm_bufio_alloc_callback, NULL);
+	if (IS_ERR(v->bufio)) {
+		ti->error = "Cannot initialize dm-bufio";
+		r = PTR_ERR(v->bufio);
+		v->bufio = NULL;
+		goto bad;
+	}
+
+	if (dm_bufio_get_device_size(v->bufio) < v->hash_blocks) {
+		ti->error = "Hash device is too small";
+		r = -E2BIG;
+		goto bad;
+	}
+
+	v->io_mempool = mempool_create_kmalloc_pool(DM_VERITY_MEMPOOL_SIZE,
+	  sizeof(struct dm_verity_io) + v->shash_descsize + v->digest_size * 2);
+	if (!v->io_mempool) {
+		ti->error = "Cannot allocate io mempool";
+		r = -ENOMEM;
+		goto bad;
+	}
+
+	v->vec_mempool = mempool_create_kmalloc_pool(DM_VERITY_MEMPOOL_SIZE,
+					BIO_MAX_PAGES * sizeof(struct bio_vec));
+	if (!v->vec_mempool) {
+		ti->error = "Cannot allocate vector mempool";
+		r = -ENOMEM;
+		goto bad;
+	}
+
+	/*v->verify_wq = alloc_workqueue("verityd", WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM, 1);*/
+	/* WQ_UNBOUND greatly improves performance when running on ramdisk */
+	v->verify_wq = alloc_workqueue("verityd", WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM | WQ_UNBOUND, num_online_cpus());
+	if (!v->verify_wq) {
+		ti->error = "Cannot allocate workqueue";
+		r = -ENOMEM;
+		goto bad;
+	}
+
+	return 0;
+
+bad:
+	verity_dtr(ti);
+	return r;
+}
+
+static void verity_dtr(struct dm_target *ti)
+{
+	struct dm_verity *v = ti->private;
+
+	if (v->verify_wq)
+		destroy_workqueue(v->verify_wq);
+	if (v->vec_mempool)
+		mempool_destroy(v->vec_mempool);
+	if (v->io_mempool)
+		mempool_destroy(v->io_mempool);
+	if (v->bufio)
+		dm_bufio_client_destroy(v->bufio);
+	kfree(v->salt);
+	kfree(v->root_digest);
+	if (v->tfm)
+		crypto_free_shash(v->tfm);
+	kfree(v->alg_name);
+	if (v->hash_dev)
+		dm_put_device(ti, v->hash_dev);
+	if (v->data_dev)
+		dm_put_device(ti, v->data_dev);
+	kfree(v);
+}
+
+static struct target_type verity_target = {
+	.name		= "verity",
+	.version	= {1, 0, 0},
+	.module		= THIS_MODULE,
+	.ctr		= verity_ctr,
+	.dtr		= verity_dtr,
+	.map		= verity_map,
+	.status		= verity_status,
+	.ioctl		= verity_ioctl,
+	.merge		= verity_merge,
+	.iterate_devices = verity_iterate_devices,
+	.io_hints	= verity_io_hints,
+};
+
+static int __init dm_verity_init(void)
+{
+	int r;
+	r = dm_register_target(&verity_target);
+	if (r < 0)
+		DMERR("register failed %d", r);
+	return r;
+}
+
+static void __exit dm_verity_exit(void)
+{
+	dm_unregister_target(&verity_target);
+}
+
+module_init(dm_verity_init);
+module_exit(dm_verity_exit);
+
+MODULE_AUTHOR("Mikulas Patocka <mpatocka@redhat.com>");
+MODULE_AUTHOR("Mandeep Baines <msb@chromium.org>");
+MODULE_AUTHOR("Will Drewry <wad@chromium.org>");
+MODULE_DESCRIPTION(DM_NAME " target for transparent disk integrity checking");
+MODULE_LICENSE("GPL");
+

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH] dm: remake of the verity target
  2012-03-20 15:41       ` Mandeep Singh Baines
  2012-03-21  0:54         ` Mikulas Patocka
@ 2012-03-21  1:10         ` Mikulas Patocka
  1 sibling, 0 replies; 34+ messages in thread
From: Mikulas Patocka @ 2012-03-21  1:10 UTC (permalink / raw)
  To: Mandeep Singh Baines
  Cc: linux-kernel, dm-devel, Alasdair G Kergon, Will Drewry,
	Elly Jones, Milan Broz, Olof Johansson, Steffen Klassert,
	Andrew Morton

Hi

Here I'm sending the new userspace tool:

Changes since the last version:

* Format is changed to be compatible with the current kernel code (i.e. the 
salt is hashed before the data; see the sketch after this list)

* If no salt is specified, random salt is generated

* Switch -a is used to activate the device (it executes dmsetup with 
the correct parameters)

* There is a superblock that can be used to identify a verity hash 
partition. Most arguments are stored in the superblock, so they don't have 
to be specified when verifying or activating. The superblock stores all the 
options except the device names, --hash-start and the root block hash.

* The superblock is read and written only by the userspace tool; it is not 
read by the kernel code.
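
For reference, the per-block digest is now computed in this order (a minimal
sketch matching create_or_verify_stream in the tool below):

	EVP_DigestInit_ex(&ctx, evp, NULL);
	EVP_DigestUpdate(&ctx, salt_bytes, salt_size);		/* salt first */
	EVP_DigestUpdate(&ctx, data_buffer, block_size);	/* then the data block */
	EVP_DigestFinal_ex(&ctx, calculated_digest, NULL);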


Example use:

# create some filesystem on /dev/sdc2, fill it with data and unmount it
... 
# read /dev/sdc2 and create appropriate hashes on /dev/sdc3
./veritysetup -c /dev/sdc2 /dev/sdc3
# verify the hashes (supply the root hash as reported by ./veritysetup -c)
./veritysetup -v /dev/sdc2 /dev/sdc3 3f1fe1c1ff7f229f1ff113c2b4b76ce4bcd9f455dd7f990673b3e71f6a406e38
# activate the kernel driver with device mapper name "v"
./veritysetup -a v /dev/sdc2 /dev/sdc3 3f1fe1c1ff7f229f1ff113c2b4b76ce4bcd9f455dd7f990673b3e71f6a406e38
# mount the activated filesystem
mount -o ro /dev/mapper/v /mnt/test
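
For reference, -a builds the same target line that -c prints and execs
dmsetup on it; with the devices above this is roughly equivalent to (the
bracketed placeholders stand for the values reported by the create step):

dmsetup -r create v --table "0 <data sectors> verity 0 /dev/sdc2 /dev/sdc3 4096 4096 <data blocks> <hash start> sha256 <root hash> <salt>"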

---

/* link with -lpopt -lcrypto */

#define _FILE_OFFSET_BITS	64

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <stdarg.h>
#include <unistd.h>
#include <errno.h>
#include <string.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/ioctl.h>
#include <sys/mount.h>
#include <arpa/inet.h>
#include <popt.h>
#include <openssl/evp.h>
#include <openssl/rand.h>

#define DEFAULT_BLOCK_SIZE	4096
#define DM_VERITY_MAX_LEVELS	63

#define DEFAULT_SALT_SIZE	32
#define MAX_SALT_SIZE		384

#define MODE_VERIFY	0
#define MODE_CREATE	1
#define MODE_ACTIVATE	2

static int mode = -1;
static int use_superblock = 1;

static const char *dm_device;
static const char *data_device;
static const char *hash_device;
static const char *hash_algorithm = NULL;
static const char *root_hash;

static int data_block_size = 0;
static int hash_block_size = 0;
static long long hash_start = 0;
static long long data_blocks = 0;
static const char *salt_string = NULL;

static FILE *data_file;
static FILE *hash_file;

static off_t data_file_blocks;
static off_t hash_file_blocks;
static off_t used_hash_blocks;

static const EVP_MD *evp;

static unsigned char *root_hash_bytes;
static unsigned char *calculated_digest;

static unsigned char *salt_bytes;
static unsigned salt_size;

static unsigned digest_size;
static unsigned char digest_size_bits;
static unsigned char levels;
static unsigned char hash_per_block_bits;

static off_t hash_level_block[DM_VERITY_MAX_LEVELS];
static off_t hash_level_size[DM_VERITY_MAX_LEVELS];

static off_t superblock_position;

static int retval = 0;

struct superblock {
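	/* On-disk layout: 512 bytes total (checked in main()); multi-byte
	   fields (salt_size, data_blocks_hi/lo) are stored big-endian, see
	   load_superblock() and save_superblock(). */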
	uint8_t signature[8];
	uint8_t version;
	uint8_t data_block_bits;
	uint8_t hash_block_bits;
	uint8_t pad1[1];
	uint16_t salt_size;
	uint8_t pad2[2];
	uint32_t data_blocks_hi;
	uint32_t data_blocks_lo;
	uint8_t algorithm[16];
	uint8_t salt[MAX_SALT_SIZE];
	uint8_t pad3[88];
};

#define DM_VERITY_SIGNATURE	"verity\0\0"
#define DM_VERITY_VERSION	0

static void help(poptContext popt_context,
		enum poptCallbackReason reason,
		struct poptOption *key,
		const char *arg,
		void *data)
{
	poptPrintHelp(popt_context, stdout, 0);
	exit(0);
}

static struct poptOption popt_help_options[] = {
	{ NULL,			0,	POPT_ARG_CALLBACK, help, 0, NULL, NULL },
	{ "help",		'h',	POPT_ARG_NONE, NULL, 0, "Show help", NULL },
	POPT_TABLEEND
};

static struct poptOption popt_options[] = {
	{ NULL,			'\0', POPT_ARG_INCLUDE_TABLE, popt_help_options, 0, NULL, NULL },
	{ "create",		'c',	POPT_ARG_VAL, &mode, MODE_CREATE, "Create hash", NULL },
	{ "verify",		'v',	POPT_ARG_VAL, &mode, MODE_VERIFY, "Verify integrity", NULL },
	{ "activate",		'a',	POPT_ARG_VAL, &mode, MODE_ACTIVATE, "Activate the device", NULL },
	{ "data-block-size",	0, 	POPT_ARG_INT, &data_block_size, 0, "Block size on the data device", "bytes" },
	{ "hash-block-size",	0, 	POPT_ARG_INT, &hash_block_size, 0, "Block size on the hash device", "bytes" },
	{ "data-blocks",	0,	POPT_ARG_LONGLONG, &data_blocks, 0, "The number of blocks in the data file", "blocks" },
	{ "hash-start",		0,	POPT_ARG_LONGLONG, &hash_start, 0, "Starting block on the hash device", "512-byte sectors" },
	{ "salt",		0,	POPT_ARG_STRING, &salt_string, 0, "Salt", "hex string" },
	{ "algorithm",		0,	POPT_ARG_STRING, &hash_algorithm, 0, "Hash algorithm (default sha256)", "string" },
	{ "no-superblock",	0,	POPT_ARG_VAL, &use_superblock, 0, "Do not create/use superblock" },
	POPT_TABLEEND
};

#if defined(__GNUC__) && __GNUC__ >= 2
	__attribute__((__format__(__printf__, 1, 2)))
#endif
static void exit_err(const char *msg, ...)
{
	va_list args;
	va_start(args, msg);
	vfprintf(stderr, msg, args);
	va_end(args);
	fputc('\n', stderr);
	exit(2);
}

static void stream_err(FILE *f, const char *msg)
{
	if (ferror(f)) {
		perror(msg);
		exit(2);
	} else if (feof(f)) {
		exit_err("eof on %s", msg);
	} else {
		exit_err("unknown error on %s", msg);
	}
}

static void *xmalloc(size_t s)
{
	void *ptr = malloc(!s ? 1 : s);
	if (!ptr) exit_err("out of memory");
	return ptr;
}

static char *xstrdup(const char *str)
{
	return strcpy(xmalloc(strlen(str) + 1), str);
}

static char *xprint(unsigned long long num)
{
	size_t s = snprintf(NULL, 0, "%llu", num);
	char *p = xmalloc(s + 1);
	snprintf(p, s + 1, "%llu", num);
	return p;
}

static char *xhexprint(unsigned char *bytes, size_t len)
{
	size_t i;
	char *p = xmalloc(len * 2 + 1);
	p[0] = 0;
	for (i = 0; i < len; i++)
		snprintf(p + i * 2, 3, "%02x", bytes[i]);
	return p;
}

static off_t get_size(FILE *f, const char *name)
{
	struct stat st;
	int h = fileno(f);
	if (h < 0) {
		perror("fileno");
		exit(2);
	}
	if (fstat(h, &st)) {
		perror("fstat");
		exit(2);
	}
	if (S_ISREG(st.st_mode)) {
		return st.st_size;
	} else if (S_ISBLK(st.st_mode)) {
		unsigned long long size64;
		unsigned long sizeul;
		if (!ioctl(h, BLKGETSIZE64, &size64)) {
			return_size64:
			if ((off_t)size64 < 0 || (off_t)size64 != size64) {
				size_overflow:
				exit_err("%s: device size overflow", name);
			}
			return size64;
		}
		if (!ioctl(h, BLKGETSIZE, &sizeul)) {
			size64 = (unsigned long long)sizeul * 512;
			if (size64 / 512 != sizeul) goto size_overflow;
			goto return_size64;
		}
		perror("BLKGETSIZE");
		exit(2);
	} else {
		exit_err("%s is not a file or a block device", name);
	}
	return -1;	/* never reached, shut up warning */
}

static void block_fseek(FILE *f, off_t block, int block_size)
{
	unsigned long long pos = (unsigned long long)block * block_size;
	if (pos / block_size != block ||
	    (off_t)pos < 0 ||
	    (off_t)pos != pos)
		exit_err("seek position overflow");
	if (fseeko(f, pos, SEEK_SET)) {
		perror("fseek");
		exit(2);
	}
}

static off_t verity_position_at_level(off_t block, int level)
{
	return block >> (level * hash_per_block_bits);
}

static void calculate_positions(void)
{
	unsigned long long hash_position;
	int i;

	digest_size_bits = 0;
	while (1 << digest_size_bits < digest_size)
		digest_size_bits++;
	hash_per_block_bits = 0;
	while (((hash_block_size / digest_size) >> hash_per_block_bits) > 1)
		hash_per_block_bits++;
	if (!hash_per_block_bits)
		exit_err("at least two hashes must fit in a hash file block");
	levels = 0;

	if (data_file_blocks) {
		while (hash_per_block_bits * levels < 64 &&
		       (unsigned long long)(data_file_blocks - 1) >>
		       (hash_per_block_bits * levels))
			levels++;
	}
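	/*
	 * Illustrative example: with 4096-byte hash blocks and SHA-256
	 * (32-byte digests), hash_per_block_bits is 7, i.e. 128 hashes per
	 * block; 2^20 data blocks then need 3 levels: 8192 hash blocks at
	 * level 0, 64 at level 1 and 1 at level 2.
	 */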

	if (levels > DM_VERITY_MAX_LEVELS) exit_err("too many tree levels");

	hash_position = hash_start * 512 / hash_block_size;
	for (i = levels - 1; i >= 0; i--) {
		off_t s;
		hash_level_block[i] = hash_position;
		s = verity_position_at_level(data_file_blocks, i);
		s = (s >> hash_per_block_bits) +
		    !!(s & ((1 << hash_per_block_bits) - 1));
		hash_level_size[i] = s;
		if (hash_position + s < hash_position ||
		    (off_t)(hash_position + s) < 0 ||
		    (off_t)(hash_position + s) != hash_position + s)
			exit_err("hash device offset overflow");
		hash_position += s;
	}
	used_hash_blocks = hash_position;
}

static void create_or_verify_zero(FILE *wr, unsigned char *left_block, unsigned left_bytes)
{
	if (left_bytes) {
		if (mode != MODE_CREATE) {
			unsigned x;
			if (fread(left_block, left_bytes, 1, wr) != 1)
				stream_err(wr, "read");
			for (x = 0; x < left_bytes; x++) if (left_block[x]) {
				retval = 1;
				fprintf(stderr, "spare area is not zeroed at position %lld\n", (long long)ftello(wr) - left_bytes);
			}
		} else {
			if (fwrite(left_block, left_bytes, 1, wr) != 1)
				stream_err(wr, "write");
		}
	}
}

static void create_or_verify_stream(FILE *rd, FILE *wr, int block_size, off_t blocks)
{
	unsigned char *left_block = xmalloc(hash_block_size);
	unsigned char *data_buffer = xmalloc(block_size);
	unsigned char *read_digest = mode != MODE_CREATE ? xmalloc(digest_size) : NULL;
	off_t blocks_to_write = (blocks >> hash_per_block_bits) +
				!!(blocks & ((1 << hash_per_block_bits) - 1));
	EVP_MD_CTX ctx;
	EVP_MD_CTX_init(&ctx);
	memset(left_block, 0, hash_block_size);
	while (blocks_to_write--) {
		unsigned x;
		unsigned left_bytes = hash_block_size;
		for (x = 0; x < 1 << hash_per_block_bits; x++) {
			if (!blocks)
				break;
			blocks--;
			if (fread(data_buffer, block_size, 1, rd) != 1)
				stream_err(rd, "read");
			if (EVP_DigestInit_ex(&ctx, evp, NULL) != 1)
				exit_err("EVP_DigestInit_ex failed");
			if (EVP_DigestUpdate(&ctx, salt_bytes, salt_size) != 1)
				exit_err("EVP_DigestUpdate failed");
			if (EVP_DigestUpdate(&ctx, data_buffer, block_size) != 1)
				exit_err("EVP_DigestUpdate failed");
			if (EVP_DigestFinal_ex(&ctx, calculated_digest, NULL) != 1)
				exit_err("EVP_DigestFinal_ex failed");
			if (!wr)
				break;
			if (mode != MODE_CREATE) {
				if (fread(read_digest, digest_size, 1, wr) != 1)
					stream_err(wr, "read");
				if (memcmp(read_digest, calculated_digest, digest_size)) {
					retval = 1;
					fprintf(stderr, "verification failed at position %lld in %s file\n", (long long)ftello(rd) - block_size, rd == data_file ? "data" : "metadata");
				}
			} else {
				if (fwrite(calculated_digest, digest_size, 1, wr) != 1)
					stream_err(wr, "write");
			}
			create_or_verify_zero(wr, left_block, (1 << digest_size_bits) - digest_size);
			left_bytes -= 1 << digest_size_bits;
		}
		if (wr)
			create_or_verify_zero(wr, left_block, left_bytes);
	}
	if (mode == MODE_CREATE && wr) {
		if (fflush(wr)) {
			perror("fflush");
			exit(1);
		}
		if (ferror(wr)) {
			stream_err(wr, "write");
		}
	}
	if (EVP_MD_CTX_cleanup(&ctx) != 1)
		exit_err("EVP_MD_CTX_cleanup failed");
	free(left_block);
	free(data_buffer);
	if (mode != MODE_CREATE) free(read_digest);
}

char **make_target_line(void)
{
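	/*
	 * Field order matches the kernel target's parameters: start and
	 * length (512-byte sectors), "verity", version, data device, hash
	 * device, data block size, hash block size, data blocks, hash start
	 * (in hash blocks), algorithm, root digest, salt.
	 */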
	const int line_elements = 14;
	char **line = xmalloc(line_elements * sizeof(char *));
	int i = 0;
	char *algorithm_copy = xstrdup(hash_algorithm);
		/* transform ripemdXXX to rmdXXX */
	if (!strncmp(algorithm_copy, "ripemd", 6))
		memmove(algorithm_copy + 1, algorithm_copy + 4, strlen(algorithm_copy + 4) + 1);
	line[i++] = xstrdup("0");
	line[i++] = xprint((unsigned long long)data_file_blocks * data_block_size / 512);
	line[i++] = xstrdup("verity");
	line[i++] = xstrdup("0");
	line[i++] = xstrdup(data_device);
	line[i++] = xstrdup(hash_device);
	line[i++] = xprint(data_block_size);
	line[i++] = xprint(hash_block_size);
	line[i++] = xprint(data_file_blocks);
	line[i++] = xprint(hash_start * 512 / hash_block_size);
	line[i++] = algorithm_copy;
	line[i++] = xhexprint(calculated_digest, digest_size);
	line[i++] = !salt_size ? xstrdup("-") : xhexprint(salt_bytes, salt_size);
	line[i++] = NULL;
	if (i > line_elements) exit_err("INTERNAL ERROR: insufficient array size");
	return line;
}

void free_target_line(char **line)
{
	int i;
	for (i = 0; line[i]; i++)
		free(line[i]);
	free(line);
}

static void create_or_verify(void)
{
	int i;
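	/*
	 * Build (or verify) the tree bottom-up: level 0 is computed from the
	 * data blocks, each higher level from the level below it, and the
	 * root digest from the single top-level block (or directly from the
	 * data when there are no levels).
	 */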
	if (mode != MODE_ACTIVATE) for (i = 0; i < levels; i++) {
		block_fseek(hash_file, hash_level_block[i], hash_block_size);
		if (!i) {
			block_fseek(data_file, 0, data_block_size);
			create_or_verify_stream(data_file, hash_file, data_block_size, data_file_blocks);
		} else {
			FILE *hash_file_2 = fopen(hash_device, "r");
			if (!hash_file_2) {
				perror(hash_device);
				exit(2);
			}
			block_fseek(hash_file_2, hash_level_block[i - 1], hash_block_size);
			create_or_verify_stream(hash_file_2, hash_file, hash_block_size, hash_level_size[i - 1]);
			fclose(hash_file_2);
		}
	}

	if (levels) {
		block_fseek(hash_file, hash_level_block[levels - 1], hash_block_size);
		create_or_verify_stream(hash_file, NULL, hash_block_size, 1);
	} else {
		block_fseek(data_file, 0, data_block_size);
		create_or_verify_stream(data_file, NULL, data_block_size, data_file_blocks);
	}

	if (mode != MODE_CREATE) {
		if (memcmp(calculated_digest, root_hash_bytes, digest_size)) {
			fprintf(stderr, "verification failed in the root block\n");
			retval = 1;
		}
		if (!retval && mode == MODE_VERIFY)
			fprintf(stderr, "hash successfully verified\n");
	} else {
		char **target_line;
		char *p;
		if (fsync(fileno(hash_file))) {
			perror("fsync");
			exit(1);
		}
		printf("hash device size: %llu\n", (unsigned long long)used_hash_blocks * hash_block_size);
		printf("data block size %u, hash block size %u, %u tree levels\n", data_block_size, hash_block_size, levels);
		if (salt_size) p = xhexprint(salt_bytes, salt_size);
		else p = xstrdup("-");
		printf("salt: %s\n", p);
		free(p);
		p = xhexprint(calculated_digest, digest_size);
		printf("root hash: %s\n", p);
		free(p);
		printf("target line:");
		target_line = make_target_line();
		for (i = 0; target_line[i]; i++)
			printf(" %s", target_line[i]);
		free_target_line(target_line);
		printf("\n");
	}
}

static void activate(void)
{
	int i;
	size_t len = 1;
	char *table_arg;
	char **target_line = make_target_line();
	for (i = 0; target_line[i]; i++) {
		if (i) len++;
		len += strlen(target_line[i]);
	}
	table_arg = xmalloc(len);
	table_arg[0] = 0;
	for (i = 0; target_line[i]; i++) {
		if (i) strcat(table_arg, " ");
		strcat(table_arg, target_line[i]);
	}
	free_target_line(target_line);
	execlp("dmsetup", "dmsetup", "-r", "create", dm_device, "--table", table_arg, NULL);
	perror("dmsetup");
	exit(2);
}

static void get_hex(const char *string, unsigned char **result, size_t len, const char *description)
{
	size_t rl = strlen(string);
	unsigned u;
	if (strspn(string, "0123456789ABCDEFabcdef") != rl)
		exit_err("invalid %s", description);
	if (rl != len * 2)
		exit_err("invalid length of %s", description);
	*result = xmalloc(len);
	memset(*result, 0, len);
	for (u = 0; u < rl; u++) {
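		/* hex digit -> 4-bit value: '0'-'9' use the low nibble;
		   'a'-'f'/'A'-'F' use the low nibble plus 9; two digits are
		   packed into each output byte, high nibble first */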
		unsigned char c = (string[u] & 15) + (string[u] > '9' ? 9 : 0);
		(*result)[u / 2] |= c << (((u & 1) ^ 1) << 2);
	}
}

static struct superblock superblock;

static void load_superblock(void)
{
	long long sb_data_blocks;

	block_fseek(hash_file, superblock_position, 1);
	if (fread(&superblock, sizeof(struct superblock), 1, hash_file) != 1)
		stream_err(hash_file, "read");
	if (memcmp(superblock.signature, DM_VERITY_SIGNATURE, sizeof(superblock.signature)))
		exit_err("superblock not found on the hash device");
	if (superblock.version != 0)
		exit_err("unknown version");
	if (superblock.data_block_bits < 9 || superblock.data_block_bits >= 31)
		exit_err("invalid data_block_bits in the superblock");
	if (superblock.hash_block_bits < 9 || superblock.hash_block_bits >= 31)
		exit_err("invalid data_block_bits in the superblock");
	sb_data_blocks = ((unsigned long long)ntohl(superblock.data_blocks_hi) << 31 << 1) | ntohl(superblock.data_blocks_lo);
	if (sb_data_blocks < 0 || (off_t)sb_data_blocks < 0 || (off_t)sb_data_blocks != sb_data_blocks)
		exit_err("invalid data blocks in the superblock");
	if (!memchr(superblock.algorithm, 0, sizeof(superblock.algorithm)))
		exit_err("invalid hash algorithm in the superblock");
	if (ntohs(superblock.salt_size) > MAX_SALT_SIZE)
		exit_err("invalid salt_size in the superblock");

	if (!data_block_size) {
		data_block_size = 1 << superblock.data_block_bits;
	} else {
		if (data_block_size != 1 << superblock.data_block_bits)
			exit_err("data block size (%d) does not match superblock value (%d)", data_block_size, 1 << superblock.data_block_bits);
	}

	if (!hash_block_size) {
		hash_block_size = 1 << superblock.hash_block_bits;
	} else {
		if (hash_block_size != 1 << superblock.hash_block_bits)
			exit_err("hash block size (%d) does not match superblock value (%d)", hash_block_size, 1 << superblock.hash_block_bits);
	}

	if (!data_blocks) {
		data_blocks = sb_data_blocks;
	} else {
		if (data_blocks != sb_data_blocks)
			exit_err("data blocks (%lld) does not match superblock value (%lld)", data_blocks, sb_data_blocks);
	}

	if (!hash_algorithm) {
		hash_algorithm = (char *)superblock.algorithm;
	} else {
		if (strcmp(hash_algorithm, (char *)superblock.algorithm))
			exit_err("hash algorithm (%s) does not match superblock value (%s)", hash_algorithm, superblock.algorithm);
	}

	if (!salt_bytes) {
		salt_size = ntohs(superblock.salt_size);
		salt_bytes = xmalloc(salt_size);
		memcpy(salt_bytes, superblock.salt, salt_size);
	} else {
		if (salt_size != ntohs(superblock.salt_size) ||
		    memcmp(salt_bytes, superblock.salt, salt_size))
			exit_err("salt does not match superblock value");
	}
}

static void save_superblock(void)
{
	memset(&superblock, 0, sizeof(struct superblock));

	memcpy(&superblock.signature, DM_VERITY_SIGNATURE, sizeof(superblock.signature));
	superblock.version = 0;
	superblock.data_block_bits = ffs(data_block_size) - 1;
	superblock.hash_block_bits = ffs(hash_block_size) - 1;
	superblock.salt_size = htons(salt_size);
	superblock.data_blocks_hi = htonl(data_blocks >> 31 >> 1);
	superblock.data_blocks_lo = htonl(data_blocks & 0xFFFFFFFF);
	strncpy((char *)superblock.algorithm, hash_algorithm, sizeof superblock.algorithm);
	memcpy(superblock.salt, salt_bytes, salt_size);

	block_fseek(hash_file, superblock_position, 1);
	if (fwrite(&superblock, sizeof(struct superblock), 1, hash_file) != 1)
		stream_err(hash_file, "write");
}

int main(int argc, const char **argv)
{
	poptContext popt_context;
	int r;
	const char *s;

	if (sizeof(struct superblock) != 512)
		exit_err("INTERNAL ERROR: bad superblock size %d", sizeof(struct superblock));

	OpenSSL_add_all_digests();

	popt_context = poptGetContext("verity", argc, argv, popt_options, 0);

	poptSetOtherOptionHelp(popt_context, "[-c | -v | -a] [<device name> if activating] <data device> <hash device> [<root hash> if activating or verifying] [OPTION...]");

	if (argc <= 1) {
		poptPrintHelp(popt_context, stdout, 0);
		exit(1);
	}

	r = poptGetNextOpt(popt_context);
	if (r < -1) exit_err("bad option %s", poptBadOption(popt_context, 0));

	if (mode < 0) exit_err("verify, create or activate mode not specified");

	if (mode == MODE_ACTIVATE) {
		dm_device = poptGetArg(popt_context);
		if (!dm_device) exit_err("device name is missing");
		if (!*dm_device || strchr(dm_device, '/')) exit_err("invalid device name");
	}

	data_device = poptGetArg(popt_context);
	if (!data_device) exit_err("data device is missing");

	hash_device = poptGetArg(popt_context);
	if (!hash_device) exit_err("metadata device is missing");

	if (mode != MODE_CREATE) {
		root_hash = poptGetArg(popt_context);
		if (!root_hash) exit_err("root hash not specified");
	}

	s = poptGetArg(popt_context);
	if (s) exit_err("extra argument %s", s);

	data_file = fopen(data_device, "r");
	if (!data_file) {
		perror(data_device);
		exit(2);
	}

	hash_file = fopen(hash_device, mode != MODE_CREATE ? "r" : "r+");
	if (!hash_file && errno == ENOENT && mode == MODE_CREATE)
		hash_file = fopen(hash_device, "w+");
	if (!hash_file) {
		perror(hash_device);
		exit(2);
	}

	if (hash_start < 0 ||
	   (unsigned long long)hash_start * 512 / 512 != hash_start ||
	   (off_t)(hash_start * 512) < 0 ||
	   (off_t)(hash_start * 512) != hash_start * 512) exit_err("invalid hash start");

	if (salt_string || !use_superblock) {
		if (!salt_string || !strcmp(salt_string, "-"))
			salt_string = "";
		salt_size = strlen(salt_string) / 2;
		if (salt_size > MAX_SALT_SIZE)
			exit_err("too long salt (max %d bytes)", MAX_SALT_SIZE);
		get_hex(salt_string, &salt_bytes, salt_size, "salt");
	}

	if (use_superblock) {
		superblock_position = hash_start * 512;
		if (mode != MODE_CREATE)
			load_superblock();
	}

	if (!data_block_size) data_block_size = DEFAULT_BLOCK_SIZE;
	if (!hash_block_size) hash_block_size = data_block_size;

	if (data_block_size < 512 || (data_block_size & (data_block_size - 1)) || data_block_size >= 1U << 31)
		exit_err("invalid data block size");

	if (hash_block_size < 512 || (hash_block_size & (hash_block_size - 1)) || hash_block_size >= 1U << 31)
		exit_err("invalid hash block size");

	if (data_blocks < 0 || (off_t)data_blocks < 0 || (off_t)data_blocks != data_blocks) exit_err("invalid number of data blocks");

	data_file_blocks = get_size(data_file, data_device) / data_block_size;
	hash_file_blocks = get_size(hash_file, hash_device) / hash_block_size;

	if (data_file_blocks < data_blocks) exit_err("data file is too small");
	if (data_blocks) {
		data_file_blocks = data_blocks;
	}

	if (use_superblock) {
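		/* reserve room for the 512-byte superblock, then round
		   hash_start (in 512-byte sectors) up to a hash-block
		   boundary */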
		hash_start = hash_start + (sizeof(struct superblock) + 511) / 512;
		hash_start = (hash_start + (hash_block_size / 512 - 1)) & ~(long long)(hash_block_size / 512 - 1);
	}

	if ((unsigned long long)hash_start * 512 % hash_block_size) exit_err("hash start not aligned on block size");

	if (!hash_algorithm)
		hash_algorithm = "sha256";
	if (strlen(hash_algorithm) >= sizeof(superblock.algorithm) && use_superblock)
		exit_err("hash algorithm name is too long");
	evp = EVP_get_digestbyname(hash_algorithm);
	if (!evp) exit_err("hash algorithm %s not found", hash_algorithm);
	digest_size = EVP_MD_size(evp);

	if (!salt_bytes) {
		salt_size = DEFAULT_SALT_SIZE;
		salt_bytes = xmalloc(salt_size);
		if (RAND_bytes(salt_bytes, salt_size) != 1)
			exit_err("RAND_bytes failed");
	}

	calculated_digest = xmalloc(digest_size);

	if (mode != MODE_CREATE) {
		get_hex(root_hash, &root_hash_bytes, digest_size, "root_hash");
	}

	calculate_positions();

	create_or_verify();

	if (use_superblock) {
		if (mode == MODE_CREATE)
			save_superblock();
	}

	fclose(data_file);
	fclose(hash_file);

	if (mode == MODE_ACTIVATE && !retval)
		activate();

	free(salt_bytes);
	free(calculated_digest);
	if (mode != MODE_CREATE)
		free(root_hash_bytes);
	poptFreeContext(popt_context);

	return retval;
}


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH] dm: remake of the verity target
  2012-03-21  0:54         ` Mikulas Patocka
@ 2012-03-21  3:03           ` Mandeep Singh Baines
  2012-03-21  3:11           ` [dm-devel] " Mikulas Patocka
  1 sibling, 0 replies; 34+ messages in thread
From: Mandeep Singh Baines @ 2012-03-21  3:03 UTC (permalink / raw)
  To: Mikulas Patocka
  Cc: Mandeep Singh Baines, linux-kernel, dm-devel, Alasdair G Kergon,
	Will Drewry, Elly Jones, Milan Broz, Olof Johansson,
	Steffen Klassert, Andrew Morton

Mikulas Patocka (mpatocka@redhat.com) wrote:
> 
> 
> On Tue, 20 Mar 2012, Mandeep Singh Baines wrote:
> 
> > Hi Mikulas,
> > 
> > Can you please resend this patch with a proper commit message.
> > We'd really like to see this merged. Alasdair, other than that,
> > what work is remaining for verity to be merged?
> > 
> > Regards,
> > Mandeep
> 
> Hi
> 
> I'm sending this new version of dm-verity to be merged. I've made some 
> last changes in the format, hopefully no more changes will be needed. This 
> changes make it incompatible with the original Google code (but the 
> original code can be trivially changed to support these modifications).
> 
> Changes:
> 
> * Salt is hashed before the block (it used to be hashed after). The reason 
> is that if a random salt is hashed before the block, it makes the process 
> resilient to hash function collisions - so you can safely use md5, even if 
> there's a collision attack for it.
> 

I am not aware of any additional benefit to prepending the salt versus
appending it. Could you please provide a reference?

I would like to avoid breaking backward compatibility unless there is
a real benefit.

Regards,
Mandeep


> * Argument line was simplified, there are no optional arguments.
> 
> * There is new argument specifying the size of the data device.
> 
> Mikulas
> 
> ---
> 
> Remake of the google dm-verity patch.
> 
> Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
> 
> ---
>  drivers/md/Kconfig     |   17 
>  drivers/md/Makefile    |    1 
>  drivers/md/dm-verity.c |  849 +++++++++++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 867 insertions(+)
> 
> Index: linux-3.3-fast/drivers/md/Kconfig
> ===================================================================
> --- linux-3.3-fast.orig/drivers/md/Kconfig	2012-03-19 13:46:54.000000000 +0100
> +++ linux-3.3-fast/drivers/md/Kconfig	2012-03-19 13:46:55.000000000 +0100
> @@ -404,4 +404,21 @@ config DM_VERITY2
>  
>            If unsure, say N.
>  
> +config DM_VERITY
> +	tristate "Verity target support"
> +	depends on BLK_DEV_DM
> +	select CRYPTO
> +	select CRYPTO_HASH
> +	select DM_BUFIO
> +	---help---
> +	  This device-mapper target allows you to create a device that
> +	  transparently integrity checks the data on it. You'll need to
> +	  activate the digests you're going to use in the cryptoapi
> +	  configuration.
> +
> +	  To compile this code as a module, choose M here: the module will
> +	  be called dm-verity.
> +
> +	  If unsure, say N.
> +
>  endif # MD
> Index: linux-3.3-fast/drivers/md/Makefile
> ===================================================================
> --- linux-3.3-fast.orig/drivers/md/Makefile	2012-03-19 13:46:54.000000000 +0100
> +++ linux-3.3-fast/drivers/md/Makefile	2012-03-19 13:46:55.000000000 +0100
> @@ -29,6 +29,7 @@ obj-$(CONFIG_MD_FAULTY)		+= faulty.o
>  obj-$(CONFIG_BLK_DEV_MD)	+= md-mod.o
>  obj-$(CONFIG_BLK_DEV_DM)	+= dm-mod.o
>  obj-$(CONFIG_DM_BUFIO)		+= dm-bufio.o
> +obj-$(CONFIG_DM_VERITY)		+= dm-verity.o
>  obj-$(CONFIG_DM_CRYPT)		+= dm-crypt.o
>  obj-$(CONFIG_DM_DELAY)		+= dm-delay.o
>  obj-$(CONFIG_DM_FLAKEY)		+= dm-flakey.o
> Index: linux-3.3-fast/drivers/md/dm-verity.c
> ===================================================================
> --- /dev/null	1970-01-01 00:00:00.000000000 +0000
> +++ linux-3.3-fast/drivers/md/dm-verity.c	2012-03-20 22:03:53.000000000 +0100
> @@ -0,0 +1,849 @@
> +/*
> + * Copyright (C) 2012 Red Hat, Inc.
> + *
> + * Author: Mikulas Patocka <mpatocka@redhat.com>
> + *
> + * Based on Chromium dm-verity driver (C) 2011 The Chromium OS Authors
> + *
> + * This file is released under the GPLv2.
> + *
> + * Device mapper target parameters:
> + *	<version>		0
> + *	<data device>
> + *	<hash device>
> + *	<data block size>
> + *	<hash block size>
> + *	<the number of data blocks>
> + *	<hash start block>
> + *	<algorithm>
> + *	<digest>
> + *	<salt>			(hex bytes or "-" for no salt)
> + *
> + * In the file "/sys/module/dm_verity/parameters/prefetch_cluster" you can set
> + * the default prefetch value. Data is read in "prefetch_cluster" chunks from the
> + * hash device. Prefetch cluster greatly improves performance when data and hash
> + * are on the same disk on different partitions on devices with poor random
> + * access behavior.
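> + *
> + * For example, to read hashes in 1 MiB chunks (an illustrative value):
> + *	echo 1048576 > /sys/module/dm_verity/parameters/prefetch_cluster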
> + */
> +
> +#include <linux/module.h>
> +#include <linux/device-mapper.h>
> +#include <crypto/hash.h>
> +#include "dm-bufio.h"
> +
> +#define DM_MSG_PREFIX			"verity"
> +
> +#define DM_VERITY_IO_VEC_INLINE		16
> +#define DM_VERITY_MEMPOOL_SIZE		4
> +#define DM_VERITY_DEFAULT_PREFETCH_SIZE	262144
> +
> +#define DM_VERITY_MAX_LEVELS		63
> +
> +static unsigned prefetch_cluster = DM_VERITY_DEFAULT_PREFETCH_SIZE;
> +
> +module_param_named(prefetch_cluster, prefetch_cluster, uint, S_IRUGO | S_IWUSR);
> +
> +struct dm_verity {
> +	struct dm_dev *data_dev;
> +	struct dm_dev *hash_dev;
> +	struct dm_target *ti;
> +	struct dm_bufio_client *bufio;
> +	char *alg_name;
> +	struct crypto_shash *tfm;
> +	u8 *root_digest;	/* digest of the root block */
> +	u8 *salt;		/* salt, its size is salt_size */
> +	unsigned salt_size;
> +	sector_t data_start;	/* data offset in 512-byte sectors */
> +	sector_t hash_start;	/* hash start in blocks */
> +	sector_t data_blocks;	/* the number of data blocks */
> +	sector_t hash_blocks;	/* the number of hash blocks */
> +	unsigned char data_dev_block_bits;	/* log2(data blocksize) */
> +	unsigned char hash_dev_block_bits;	/* log2(hash blocksize) */
> +	unsigned char hash_per_block_bits;	/* log2(hashes in hash block) */
> +	unsigned char levels;	/* the number of tree levels */
> +	unsigned digest_size;	/* digest size for the current hash algorithm */
> +	unsigned shash_descsize;/* the size of temporary space for crypto */
> +
> +	mempool_t *io_mempool;	/* mempool of struct dm_verity_io */
> +	mempool_t *vec_mempool;	/* mempool of bio vector */
> +
> +	struct workqueue_struct *verify_wq;
> +
> +	/* starting blocks for each tree level. 0 is the lowest level. */
> +	sector_t hash_level_block[DM_VERITY_MAX_LEVELS];
> +};
> +
> +struct dm_verity_io {
> +	struct dm_verity *v;
> +	struct bio *bio;
> +
> +	/* original values of bio->bi_end_io and bio->bi_private */
> +	bio_end_io_t *orig_bi_end_io;
> +	void *orig_bi_private;
> +
> +	sector_t block;
> +	unsigned n_blocks;
> +
> +	/* saved bio vector */
> +	struct bio_vec *io_vec;
> +	unsigned io_vec_size;
> +
> +	struct work_struct work;
> +
> +	/* a space for short vectors; longer vectors are allocated separately */
> +	struct bio_vec io_vec_inline[DM_VERITY_IO_VEC_INLINE];
> +
> +	/* variable-size fields, accessible with functions
> +		io_hash_desc, io_real_digest, io_want_digest */
> +	/* u8 hash_desc[v->shash_descsize]; */
> +	/* u8 real_digest[v->digest_size]; */
> +	/* u8 want_digest[v->digest_size]; */
> +};
> +
> +static struct shash_desc *io_hash_desc(struct dm_verity *v, struct dm_verity_io *io)
> +{
> +	return (struct shash_desc *)(io + 1);
> +}
> +
> +static u8 *io_real_digest(struct dm_verity *v, struct dm_verity_io *io)
> +{
> +	return (u8 *)(io + 1) + v->shash_descsize;
> +}
> +
> +static u8 *io_want_digest(struct dm_verity *v, struct dm_verity_io *io)
> +{
> +	return (u8 *)(io + 1) + v->shash_descsize + v->digest_size;
> +}
> +
> +/*
> + * Auxiliary structure appended to each dm-bufio buffer. If the value
> + * hash_verified is nonzero, hash of the block has been verified.
> + *
> + * The variable hash_verified is set to 0 when allocating the buffer, then
> + * it can be changed to 1 and it is never reset to 0 again.
> + *
> + * There is no lock around this value; at worst, a race condition can cause
> + * multiple processes to verify the hash of the same buffer simultaneously
> + * and write 1 to hash_verified at the same time.
> + * This condition is harmless, so we don't need locking.
> + */
> +struct buffer_aux {
> +	int hash_verified;
> +};
> +
> +/*
> + * Initialize struct buffer_aux for a freshly created buffer.
> + */
> +static void dm_bufio_alloc_callback(struct dm_buffer *buf)
> +{
> +	struct buffer_aux *aux = dm_bufio_get_aux_data(buf);
> +	aux->hash_verified = 0;
> +}
> +
> +/*
> + * Translate input sector number to the sector number on the target device.
> + */
> +static sector_t verity_map_sector(struct dm_verity *v, sector_t bi_sector)
> +{
> +	return v->data_start + dm_target_offset(v->ti, bi_sector);
> +}
> +
> +/*
> + * Return hash position of a specified block at a specified tree level
> + * (0 is the lowest level).
> + * The lowest "hash_per_block_bits"-bits of the result denote hash position
> + * inside a hash block. The remaining bits denote location of the hash block.
> + */
> +static sector_t verity_position_at_level(struct dm_verity *v, sector_t block,
> +					 int level)
> +{
> +	return block >> (level * v->hash_per_block_bits);
> +}
> +
> +static void verity_hash_at_level(struct dm_verity *v, sector_t block, int level,
> +				 sector_t *hash_block, unsigned *offset)
> +{
> +	sector_t position = verity_position_at_level(v, block, level);
> +
> +	*hash_block = v->hash_level_block[level] + (position >> v->hash_per_block_bits);
> +	if (offset)
> +		*offset = (position & ((1 << v->hash_per_block_bits) - 1)) << (v->hash_dev_block_bits - v->hash_per_block_bits);
> +}
> +
> +/*
> + * Verify hash of a metadata block pertaining to the specified data block
> + * ("block" argument) at a specified level ("level" argument).
> + *
> + * On successful return, io_want_digest(v, io) contains the hash value for
> + * a lower tree level or for the data block (if we're at the lowest level).
> + *
> + * If "skip_unverified" is true, unverified buffer is skipped an 1 is returned.
> + * If "skip_unverified" is false, unverified buffer is hashed and verified
> + * against current value of io_want_digest(v, io).
> + */
> +static int verity_verify_level(struct dm_verity_io *io, sector_t block,
> +			       int level, bool skip_unverified)
> +{
> +	struct dm_verity *v = io->v;
> +	struct dm_buffer *buf;
> +	struct buffer_aux *aux;
> +	u8 *data;
> +	int r;
> +	sector_t hash_block;
> +	unsigned offset;
> +
> +	verity_hash_at_level(v, block, level, &hash_block, &offset);
> +
> +	data = dm_bufio_read(v->bufio, hash_block, &buf);
> +	if (unlikely(IS_ERR(data)))
> +		return PTR_ERR(data);
> +
> +	aux = dm_bufio_get_aux_data(buf);
> +
> +	if (!aux->hash_verified) {
> +		struct shash_desc *desc;
> +		u8 *result;
> +
> +		if (skip_unverified) {
> +			r = 1;
> +			goto release_ret_r;
> +		}
> +
> +		desc = io_hash_desc(v, io);
> +		desc->tfm = v->tfm;
> +		desc->flags = CRYPTO_TFM_REQ_MAY_SLEEP;
> +		r = crypto_shash_init(desc);
> +		if (r < 0) {
> +			DMERR("crypto_shash_init failed: %d", r);
> +			goto release_ret_r;
> +		}
> +
> +		r = crypto_shash_update(desc, v->salt, v->salt_size);
> +		if (r < 0) {
> +			DMERR("crypto_shash_update failed: %d", r);
> +			goto release_ret_r;
> +		}
> +
> +		r = crypto_shash_update(desc, data, 1 << v->hash_dev_block_bits);
> +		if (r < 0) {
> +			DMERR("crypto_shash_update failed: %d", r);
> +			goto release_ret_r;
> +		}
> +
> +		result = io_real_digest(v, io);
> +		r = crypto_shash_final(desc, result);
> +		if (r < 0) {
> +			DMERR("crypto_shash_final failed: %d", r);
> +			goto release_ret_r;
> +		}
> +		if (unlikely(memcmp(result, io_want_digest(v, io), v->digest_size))) {
> +			DMERR_LIMIT("metadata block %llu is corrupted",
> +				(unsigned long long)hash_block);
> +			r = -EIO;
> +			goto release_ret_r;
> +		} else
> +			aux->hash_verified = 1;
> +	}
> +
> +	data += offset;
> +
> +	memcpy(io_want_digest(v, io), data, v->digest_size);
> +
> +	dm_bufio_release(buf);
> +	return 0;
> +
> +release_ret_r:
> +	dm_bufio_release(buf);
> +	return r;
> +}
> +
> +/*
> + * Verify one "dm_verity_io" structure.
> + */
> +static int verity_verify_io(struct dm_verity_io *io)
> +{
> +	struct dm_verity *v = io->v;
> +	unsigned b;
> +	int i;
> +	unsigned vector = 0, offset = 0;
> +	for (b = 0; b < io->n_blocks; b++) {
> +		struct shash_desc *desc;
> +		u8 *result;
> +		int r;
> +		unsigned todo;
> +
> +		if (likely(v->levels)) {
> +			/*
> +			 * First, we try to get the requested hash for
> +			 * the current block. If the hash block itself is
> +			 * verified, zero is returned. If it isn't, this
> +			 * function returns 1 and we fall back to whole
> +			 * chain verification.
> +			 */
> +			int r = verity_verify_level(io, io->block + b, 0, true);
> +			if (likely(!r))
> +				goto test_block_hash;
> +			if (r < 0)
> +				return r;
> +		}
> +
> +		memcpy(io_want_digest(v, io), v->root_digest, v->digest_size);
> +
> +		for (i = v->levels - 1; i >= 0; i--) {
> +			int r = verity_verify_level(io, io->block + b, i, false);
> +			if (unlikely(r))
> +				return r;
> +		}
> +
> +test_block_hash:
> +		desc = io_hash_desc(v, io);
> +		desc->tfm = v->tfm;
> +		desc->flags = CRYPTO_TFM_REQ_MAY_SLEEP;
> +		r = crypto_shash_init(desc);
> +		if (r < 0) {
> +			DMERR("crypto_shash_init failed: %d", r);
> +			return r;
> +		}
> +
> +		r = crypto_shash_update(desc, v->salt, v->salt_size);
> +		if (r < 0) {
> +			DMERR("crypto_shash_update failed: %d", r);
> +			return r;
> +		}
> +
> +		todo = 1 << v->data_dev_block_bits;
> +		do {
> +			struct bio_vec *bv;
> +			u8 *page;
> +			unsigned len;
> +
> +			BUG_ON(vector >= io->io_vec_size);
> +			bv = &io->io_vec[vector];
> +			page = kmap_atomic(bv->bv_page, KM_USER0);
> +			len = bv->bv_len - offset;
> +			if (likely(len >= todo))
> +				len = todo;
> +			r = crypto_shash_update(desc,
> +					page + bv->bv_offset + offset, len);
> +			kunmap_atomic(page, KM_USER0);
> +			if (r < 0) {
> +				DMERR("crypto_shash_update failed: %d", r);
> +				return r;
> +			}
> +			offset += len;
> +			if (likely(offset == bv->bv_len)) {
> +				offset = 0;
> +				vector++;
> +			}
> +			todo -= len;
> +		} while (todo);
> +
> +		result = io_real_digest(v, io);
> +		r = crypto_shash_final(desc, result);
> +		if (r < 0) {
> +			DMERR("crypto_shash_final failed: %d", r);
> +			return r;
> +		}
> +		if (unlikely(memcmp(result, io_want_digest(v, io), v->digest_size))) {
> +			DMERR_LIMIT("data block %llu is corrupted",
> +				(unsigned long long)(io->block + b));
> +			return -EIO;
> +		}
> +	}
> +	BUG_ON(vector != io->io_vec_size);
> +	BUG_ON(offset);
> +	return 0;
> +}
> +
> +/*
> + * End one "io" structure with a given error.
> + */
> +static void verity_finish_io(struct dm_verity_io *io, int error)
> +{
> +	struct bio *bio = io->bio;
> +	struct dm_verity *v = io->v;
> +
> +	bio->bi_end_io = io->orig_bi_end_io;
> +	bio->bi_private = io->orig_bi_private;
> +
> +	if (io->io_vec != io->io_vec_inline)
> +		mempool_free(io->io_vec, v->vec_mempool);
> +	mempool_free(io, v->io_mempool);
> +
> +	bio_endio(bio, error);
> +}
> +
> +static void verity_work(struct work_struct *w)
> +{
> +	struct dm_verity_io *io = container_of(w, struct dm_verity_io, work);
> +
> +	verity_finish_io(io, verity_verify_io(io));
> +}
> +
> +static void verity_end_io(struct bio *bio, int error)
> +{
> +	struct dm_verity_io *io = bio->bi_private;
> +	if (error) {
> +		verity_finish_io(io, error);
> +		return;
> +	}
> +
> +	INIT_WORK(&io->work, verity_work);
> +	queue_work(io->v->verify_wq, &io->work);
> +}
> +
> +/*
> + * Prefetch buffers for the specified io.
> + * The root buffer is not prefetched; it is assumed that it will be cached
> + * all the time.
> + */
> +static void verity_prefetch_io(struct dm_verity *v, struct dm_verity_io *io)
> +{
> +	int i;
> +	for (i = v->levels - 2; i >= 0; i--) {
> +		sector_t hash_block_start;
> +		sector_t hash_block_end;
> +		verity_hash_at_level(v, io->block, i, &hash_block_start, NULL);
> +		verity_hash_at_level(v, io->block + io->n_blocks - 1, i, &hash_block_end, NULL);
> +		if (!i) {
> +			unsigned cluster = *(volatile unsigned *)&prefetch_cluster;
> +			cluster >>= v->data_dev_block_bits;
> +			if (unlikely(!cluster))
> +				goto no_prefetch_cluster;
> +			if (unlikely(cluster & (cluster - 1)))
> +				cluster = 1 << (fls(cluster) - 1);
> +
> +			hash_block_start &= ~(sector_t)(cluster - 1);
> +			hash_block_end |= cluster - 1;
> +			if (unlikely(hash_block_end >= v->hash_blocks))
> +				hash_block_end = v->hash_blocks - 1;
> +		}
> +no_prefetch_cluster:
> +		dm_bufio_prefetch(v->bufio, hash_block_start,
> +					hash_block_end - hash_block_start + 1);
> +	}
> +}
> +
> +/*
> + * Bio map function. It allocates dm_verity_io structure and bio vector and
> + * fills them. Then it issues prefetches and the I/O.
> + */
> +static int verity_map(struct dm_target *ti, struct bio *bio,
> +		      union map_info *map_context)
> +{
> +	struct dm_verity *v = ti->private;
> +	struct dm_verity_io *io;
> +
> +	bio->bi_bdev = v->data_dev->bdev;
> +	bio->bi_sector = verity_map_sector(v, bio->bi_sector);
> +
> +	if (((unsigned)bio->bi_sector | bio_sectors(bio)) &
> +	    ((1 << (v->data_dev_block_bits - SECTOR_SHIFT)) - 1)) {
> +		DMERR_LIMIT("unaligned io");
> +		return -EIO;
> +	}
> +
> +	if ((bio->bi_sector + bio_sectors(bio)) >>
> +	    (v->data_dev_block_bits - SECTOR_SHIFT) > v->data_blocks) {
> +		DMERR_LIMIT("io out of range");
> +		return -EIO;
> +	}
> +
> +	if (bio_data_dir(bio) == WRITE)
> +		return -EIO;
> +
> +	io = mempool_alloc(v->io_mempool, GFP_NOIO);
> +	io->v = v;
> +	io->bio = bio;
> +	io->orig_bi_end_io = bio->bi_end_io;
> +	io->orig_bi_private = bio->bi_private;
> +	io->block = bio->bi_sector >> (v->data_dev_block_bits - SECTOR_SHIFT);
> +	io->n_blocks = bio->bi_size >> v->data_dev_block_bits;
> +
> +	bio->bi_end_io = verity_end_io;
> +	bio->bi_private = io;
> +	io->io_vec_size = bio->bi_vcnt - bio->bi_idx;
> +	if (io->io_vec_size < DM_VERITY_IO_VEC_INLINE)
> +		io->io_vec = io->io_vec_inline;
> +	else
> +		io->io_vec = mempool_alloc(v->vec_mempool, GFP_NOIO);
> +	memcpy(io->io_vec, bio_iovec(bio),
> +	       io->io_vec_size * sizeof(struct bio_vec));
> +
> +	verity_prefetch_io(v, io);
> +
> +	generic_make_request(bio);
> +
> +	return DM_MAPIO_SUBMITTED;
> +}
> +
> +static int verity_status(struct dm_target *ti, status_type_t type,
> +			 char *result, unsigned maxlen)
> +{
> +	struct dm_verity *v = ti->private;
> +	unsigned sz = 0;
> +	unsigned x;
> +
> +	switch (type) {
> +	case STATUSTYPE_INFO:
> +		result[0] = 0;
> +		break;
> +	case STATUSTYPE_TABLE:
> +		DMEMIT("%u %s %s %u %u %llu %llu %s ",
> +			0,
> +			v->data_dev->name,
> +			v->hash_dev->name,
> +			1 << v->data_dev_block_bits,
> +			1 << v->hash_dev_block_bits,
> +			(unsigned long long)v->data_blocks,
> +			(unsigned long long)v->hash_start,
> +			v->alg_name
> +			);
> +		for (x = 0; x < v->digest_size; x++)
> +			DMEMIT("%02x", v->root_digest[x]);
> +		DMEMIT(" ");
> +		if (!v->salt_size)
> +			DMEMIT("-");
> +		else
> +			for (x = 0; x < v->salt_size; x++)
> +				DMEMIT("%02x", v->salt[x]);
> +		break;
> +	}
> +	return 0;
> +}
> +
> +static int verity_ioctl(struct dm_target *ti, unsigned cmd,
> +			unsigned long arg)
> +{
> +	struct dm_verity *v = ti->private;
> +	int r = 0;
> +
> +	if (v->data_start ||
> +	    ti->len != i_size_read(v->data_dev->bdev->bd_inode) >> SECTOR_SHIFT)
> +		r = scsi_verify_blk_ioctl(NULL, cmd);
> +
> +	return r ? : __blkdev_driver_ioctl(v->data_dev->bdev, v->data_dev->mode,
> +				     cmd, arg);
> +}
> +
> +static int verity_merge(struct dm_target *ti, struct bvec_merge_data *bvm,
> +			struct bio_vec *biovec, int max_size)
> +{
> +	struct dm_verity *v = ti->private;
> +	struct request_queue *q = bdev_get_queue(v->data_dev->bdev);
> +
> +	if (!q->merge_bvec_fn)
> +		return max_size;
> +
> +	bvm->bi_bdev = v->data_dev->bdev;
> +	bvm->bi_sector = verity_map_sector(v, bvm->bi_sector);
> +
> +	return min(max_size, q->merge_bvec_fn(q, bvm, biovec));
> +}
> +
> +static int verity_iterate_devices(struct dm_target *ti,
> +				  iterate_devices_callout_fn fn, void *data)
> +{
> +	struct dm_verity *v = ti->private;
> +	return fn(ti, v->data_dev, v->data_start, ti->len, data);
> +}
> +
> +static void verity_io_hints(struct dm_target *ti, struct queue_limits *limits)
> +{
> +	struct dm_verity *v = ti->private;
> +
> +	if (limits->logical_block_size < 1 << v->data_dev_block_bits)
> +		limits->logical_block_size = 1 << v->data_dev_block_bits;
> +	if (limits->physical_block_size < 1 << v->data_dev_block_bits)
> +		limits->physical_block_size = 1 << v->data_dev_block_bits;
> +	blk_limits_io_min(limits, limits->logical_block_size);
> +}
> +
> +static void verity_dtr(struct dm_target *ti);
> +
> +static int verity_ctr(struct dm_target *ti, unsigned argc, char **argv)
> +{
> +	struct dm_verity *v;
> +	unsigned num;
> +	unsigned long long num_ll;
> +	int r;
> +	int i;
> +	sector_t hash_position;
> +	char dummy;
> +
> +	v = kzalloc(sizeof(struct dm_verity), GFP_KERNEL);
> +	if (!v) {
> +		ti->error = "Cannot allocate verity structure";
> +		return -ENOMEM;
> +	}
> +	ti->private = v;
> +	v->ti = ti;
> +
> +	if ((dm_table_get_mode(ti->table) & ~FMODE_READ) != 0) {
> +		ti->error = "Device must be readonly";
> +		r = -EINVAL;
> +		goto bad;
> +	}
> +
> +	if (argc != 10) {
> +		ti->error = "Invalid argument count";
> +		r = -EINVAL;
> +		goto bad;
> +	}
> +
> +	if (sscanf(argv[0], "%u%c", &num, &dummy) != 1 ||
> +	    num != 0) {
> +		ti->error = "Invalid version";
> +		r = -EINVAL;
> +		goto bad;
> +	}
> +
> +	r = dm_get_device(ti, argv[1], FMODE_READ, &v->data_dev);
> +	if (r) {
> +		ti->error = "Data device lookup failed";
> +		goto bad;
> +	}
> +
> +	r = dm_get_device(ti, argv[2], FMODE_READ, &v->hash_dev);
> +	if (r) {
> +		ti->error = "Data device lookup failed";
> +		goto bad;
> +	}
> +
> +	if (sscanf(argv[3], "%u%c", &num, &dummy) != 1 ||
> +	    !num || (num & (num - 1)) ||
> +	    num < bdev_logical_block_size(v->data_dev->bdev) ||
> +	    num > PAGE_SIZE) {
> +		ti->error = "Invalid data device block size";
> +		r = -EINVAL;
> +		goto bad;
> +	}
> +	v->data_dev_block_bits = ffs(num) - 1;
> +
> +	if (sscanf(argv[4], "%u%c", &num, &dummy) != 1 ||
> +	    !num || (num & (num - 1)) ||
> +	    num < bdev_logical_block_size(v->hash_dev->bdev) ||
> +	    num > INT_MAX) {
> +		ti->error = "Invalid hash device block size";
> +		r = -EINVAL;
> +		goto bad;
> +	}
> +	v->hash_dev_block_bits = ffs(num) - 1;
> +
> +	if (sscanf(argv[5], "%llu%c", &num_ll, &dummy) != 1 ||
> +	    num_ll << (v->data_dev_block_bits - SECTOR_SHIFT) !=
> +	    (sector_t)num_ll << (v->data_dev_block_bits - SECTOR_SHIFT)) {
> +		ti->error = "Invalid data blocks";
> +		r = -EINVAL;
> +		goto bad;
> +	}
> +	v->data_blocks = num_ll;
> +
> +	if (ti->len > (v->data_blocks << (v->data_dev_block_bits - SECTOR_SHIFT))) {
> +		ti->error = "Data device is too small";
> +		r = -EINVAL;
> +		goto bad;
> +	}
> +
> +	if (sscanf(argv[6], "%llu%c", &num_ll, &dummy) != 1 ||
> +	    num_ll << (v->hash_dev_block_bits - SECTOR_SHIFT) !=
> +	    (sector_t)num_ll << (v->hash_dev_block_bits - SECTOR_SHIFT)) {
> +		ti->error = "Invalid hash start";
> +		r = -EINVAL;
> +		goto bad;
> +	}
> +	v->hash_start = num_ll;
> +
> +	v->alg_name = kstrdup(argv[7], GFP_KERNEL);
> +	if (!v->alg_name) {
> +		ti->error = "Cannot allocate algorithm name";
> +		r = -ENOMEM;
> +		goto bad;
> +	}
> +
> +	v->tfm = crypto_alloc_shash(v->alg_name, 0, 0);
> +	if (IS_ERR(v->tfm)) {
> +		ti->error = "Cannot initialize hash function";
> +		r = PTR_ERR(v->tfm);
> +		v->tfm = NULL;
> +		goto bad;
> +	}
> +	v->digest_size = crypto_shash_digestsize(v->tfm);
> +	if ((1 << v->hash_dev_block_bits) < v->digest_size * 2) {
> +		ti->error = "Digest size too big";
> +		r = -EINVAL;
> +		goto bad;
> +	}
> +	v->shash_descsize =
> +		sizeof(struct shash_desc) + crypto_shash_descsize(v->tfm);
> +
> +	v->root_digest = kmalloc(v->digest_size, GFP_KERNEL);
> +	if (!v->root_digest) {
> +		ti->error = "Cannot allocate root digest";
> +		r = -ENOMEM;
> +		goto bad;
> +	}
> +	if (strlen(argv[8]) != v->digest_size * 2 ||
> +	    hex2bin(v->root_digest, argv[8], v->digest_size)) {
> +		ti->error = "Invalid root digest";
> +		r = -EINVAL;
> +		goto bad;
> +	}
> +
> +	if (strcmp(argv[9], "-")) {
> +		v->salt_size = strlen(argv[9]) / 2;
> +		v->salt = kmalloc(v->salt_size, GFP_KERNEL);
> +		if (!v->salt) {
> +			ti->error = "Cannot allocate salt";
> +			r = -ENOMEM;
> +			goto bad;
> +		}
> +		if (strlen(argv[9]) != v->salt_size * 2 ||
> +		    hex2bin(v->salt, argv[9], v->salt_size)) {
> +			ti->error = "Invalid salt";
> +			r = -EINVAL;
> +			goto bad;
> +		}
> +	}
> +
> +	v->hash_per_block_bits =
> +		fls((1 << v->hash_dev_block_bits) / v->digest_size) - 1;
> +
> +	v->levels = 0;
> +	if (v->data_blocks)
> +		while (v->hash_per_block_bits * v->levels < 64 &&
> +		       (unsigned long long)(v->data_blocks - 1) >>
> +		       (v->hash_per_block_bits * v->levels))
> +			v->levels++;
> +
> +	if (v->levels > DM_VERITY_MAX_LEVELS) {
> +		ti->error = "Too many tree levels";
> +		r = -E2BIG;
> +		goto bad;
> +	}
> +
> +	hash_position = v->hash_start;
> +	for (i = v->levels - 1; i >= 0; i--) {
> +		sector_t s;
> +		v->hash_level_block[i] = hash_position;
> +		s = verity_position_at_level(v, v->data_blocks, i);
> +		s = (s >> v->hash_per_block_bits) +
> +		    !!(s & ((1 << v->hash_per_block_bits) - 1));
> +		if (hash_position + s < hash_position) {
> +			ti->error = "Hash device offset overflow";
> +			r = -E2BIG;
> +			goto bad;
> +		}
> +		hash_position += s;
> +	}
> +	v->hash_blocks = hash_position;
> +
> +	v->bufio = dm_bufio_client_create(v->hash_dev->bdev,
> +		1 << v->hash_dev_block_bits, 1, sizeof(struct buffer_aux),
> +		dm_bufio_alloc_callback, NULL);
> +	if (IS_ERR(v->bufio)) {
> +		ti->error = "Cannot initialize dm-bufio";
> +		r = PTR_ERR(v->bufio);
> +		v->bufio = NULL;
> +		goto bad;
> +	}
> +
> +	if (dm_bufio_get_device_size(v->bufio) < v->hash_blocks) {
> +		ti->error = "Hash device is too small";
> +		r = -E2BIG;
> +		goto bad;
> +	}
> +
> +	v->io_mempool = mempool_create_kmalloc_pool(DM_VERITY_MEMPOOL_SIZE,
> +	  sizeof(struct dm_verity_io) + v->shash_descsize + v->digest_size * 2);
> +	if (!v->io_mempool) {
> +		ti->error = "Cannot allocate io mempool";
> +		r = -ENOMEM;
> +		goto bad;
> +	}
> +
> +	v->vec_mempool = mempool_create_kmalloc_pool(DM_VERITY_MEMPOOL_SIZE,
> +					BIO_MAX_PAGES * sizeof(struct bio_vec));
> +	if (!v->vec_mempool) {
> +		ti->error = "Cannot allocate vector mempool";
> +		r = -ENOMEM;
> +		goto bad;
> +	}
> +
> +	/*v->verify_wq = alloc_workqueue("verityd", WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM, 1);*/
> +	/* WQ_UNBOUND greatly improves performance when running on ramdisk */
> +	v->verify_wq = alloc_workqueue("verityd", WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM | WQ_UNBOUND, num_online_cpus());
> +	if (!v->verify_wq) {
> +		ti->error = "Cannot allocate workqueue";
> +		r = -ENOMEM;
> +		goto bad;
> +	}
> +
> +	return 0;
> +
> +bad:
> +	verity_dtr(ti);
> +	return r;
> +}
> +
> +static void verity_dtr(struct dm_target *ti)
> +{
> +	struct dm_verity *v = ti->private;
> +
> +	if (v->verify_wq)
> +		destroy_workqueue(v->verify_wq);
> +	if (v->vec_mempool)
> +		mempool_destroy(v->vec_mempool);
> +	if (v->io_mempool)
> +		mempool_destroy(v->io_mempool);
> +	if (v->bufio)
> +		dm_bufio_client_destroy(v->bufio);
> +	kfree(v->salt);
> +	kfree(v->root_digest);
> +	if (v->tfm)
> +		crypto_free_shash(v->tfm);
> +	kfree(v->alg_name);
> +	if (v->hash_dev)
> +		dm_put_device(ti, v->hash_dev);
> +	if (v->data_dev)
> +		dm_put_device(ti, v->data_dev);
> +	kfree(v);
> +}
> +
> +static struct target_type verity_target = {
> +	.name		= "verity",
> +	.version	= {1, 0, 0},
> +	.module		= THIS_MODULE,
> +	.ctr		= verity_ctr,
> +	.dtr		= verity_dtr,
> +	.map		= verity_map,
> +	.status		= verity_status,
> +	.ioctl		= verity_ioctl,
> +	.merge		= verity_merge,
> +	.iterate_devices = verity_iterate_devices,
> +	.io_hints	= verity_io_hints,
> +};
> +
> +static int __init dm_verity_init(void)
> +{
> +	int r;
> +	r = dm_register_target(&verity_target);
> +	if (r < 0)
> +		DMERR("register failed %d", r);
> +	return r;
> +}
> +
> +static void __exit dm_verity_exit(void)
> +{
> +	dm_unregister_target(&verity_target);
> +}
> +
> +module_init(dm_verity_init);
> +module_exit(dm_verity_exit);
> +
> +MODULE_AUTHOR("Mikulas Patocka <mpatocka@redhat.com>");
> +MODULE_AUTHOR("Mandeep Baines <msb@chromium.org>");
> +MODULE_AUTHOR("Will Drewry <wad@chromium.org>");
> +MODULE_DESCRIPTION(DM_NAME " target for transparent disk integrity checking");
> +MODULE_LICENSE("GPL");
> +

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [dm-devel] [PATCH] dm: remake of the verity target
  2012-03-21  0:54         ` Mikulas Patocka
  2012-03-21  3:03           ` Mandeep Singh Baines
@ 2012-03-21  3:11           ` Mikulas Patocka
  2012-03-21  3:30             ` Mandeep Singh Baines
  1 sibling, 1 reply; 34+ messages in thread
From: Mikulas Patocka @ 2012-03-21  3:11 UTC (permalink / raw)
  To: device-mapper development
  Cc: Mandeep Singh Baines, Steffen Klassert, Will Drewry,
	linux-kernel, Elly Jones, Olof Johansson, Andrew Morton,
	Alasdair G Kergon, Milan Broz



On Tue, 20 Mar 2012, Mikulas Patocka wrote:

> > Changes:
> > 
> > * Salt is hashed before the block (it used to be hashed after). The reason
> > is that if a random salt is hashed before the block, it makes the process
> > resilient to hash function collisions - so you can safely use md5, even if
> > there's a collision attack for it.
> 
> I am not aware of any additional benefit to prepending the salt versus
> appending. Could you please provide such a reference?
> 
> I would like to avoid breaking backward compatibility unless there is
> a real benefit.
> 
> Regards,
> Mandeep
> 

Here is a deeper explanation of why I do it this way. The reason is to
protect against collision attacks (such as the known attack on MD5 or
possible future attacks against other hash functions).

"Preimage attack" means that you are given a hash value and you create a 
message that hashes to that hash value. There is no known preimage attack 
for currently used hash functions.

"Collision attack" means that you are able to create two messages that 
hash into the same hash value. There is currently collision attack known 
for MD5.


Suppose that I publish some software, calculate the MD5 digest of it and
sign that digest. This is safe (despite the existing collision attack on
MD5) because there is no preimage attack --- no one is able to create
another file with the same MD5 hash.

However, it is still possible to break security with a collision attack,
provided the attacker is able to submit some of his own data into the
software signed with MD5.


Suppose for example that a software developer publishes "real_program"
and signs it with MD5. The attacker inserts some security backdoor into
the program and gets "insecure_program". The attacker takes two MD5
states --- the state as it was after hashing "real_program" and the state
as it was after hashing "insecure_program" --- and with the collision
attack, he is able to create two messages "m1" and "m2" that result in an
MD5 collision.

The result is that
MD5("real_program"+"m1") and MD5("insecure_program"+"m2") hash to the same 
value.

Now, to make the attack successful, the attacker must trick the software 
developer somehow into inserting "m1" into his program.

It is not trivial, but possible, to trick the software developer into
inserting attacker-controlled data into the program. For example, the
attacker can send him a file containing the string "m1" and claim that it
is a Chinese localization of the program --- if the software developer has
no knowledge of the Chinese writing system, he can't differentiate real
Chinese text from a string of random characters --- so he inserts the file
containing "m1" into his program and publishes it.

Now the attack is finished: the software developer has published and
signed "real_program"+"m1", and there exists another file
"insecure_program"+"m2" that hashes to the same MD5 value. So the attacker
can misrepresent "insecure_program"+"m2" as being real.


You can protect against this situation either by using a hash function
with no known collision attack or by prepending some random data to the
program to be hashed. If the developer signs "random_data"+"real_program"
with MD5 in the above example, there is no way the attacker can create a
collision --- the attacker can still create "m1" and "m2" and trick the
developer into including "m1" in the program --- but the developer uses
different "random_data" the next time he publishes a version of the
software, so there will be no MD5 collision.


This is the reason why I changed the dm-verity hashing scheme. The salt
is randomly generated when creating the hashes. When we hash a data
block, we prepend the salt to the data. Consequently, the attacker can't
exploit the collision attack as described above. If we appended the salt
(as it used to be done before), it would be possible to exploit the
collision attack.
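
To make the two orderings concrete, here is a minimal sketch using the
OpenSSL EVP interface (the same interface the userspace tool uses).
block_hash_v0/block_hash_v1 are names made up for this illustration; they
are not functions of the driver or the tool:

#include <openssl/evp.h>

/* format 0 (old): digest = H(block || salt) */
static int block_hash_v0(const EVP_MD *md,
			 const unsigned char *block, size_t block_size,
			 const unsigned char *salt, size_t salt_size,
			 unsigned char *digest)
{
	EVP_MD_CTX ctx;
	int ok;
	EVP_MD_CTX_init(&ctx);
	ok = EVP_DigestInit_ex(&ctx, md, NULL) == 1 &&
	     EVP_DigestUpdate(&ctx, block, block_size) == 1 &&
	     EVP_DigestUpdate(&ctx, salt, salt_size) == 1 &&
	     EVP_DigestFinal_ex(&ctx, digest, NULL) == 1;
	EVP_MD_CTX_cleanup(&ctx);
	return ok ? 0 : -1;
}

/* format 1 (new): digest = H(salt || block) */
static int block_hash_v1(const EVP_MD *md,
			 const unsigned char *block, size_t block_size,
			 const unsigned char *salt, size_t salt_size,
			 unsigned char *digest)
{
	EVP_MD_CTX ctx;
	int ok;
	EVP_MD_CTX_init(&ctx);
	ok = EVP_DigestInit_ex(&ctx, md, NULL) == 1 &&
	     EVP_DigestUpdate(&ctx, salt, salt_size) == 1 &&
	     EVP_DigestUpdate(&ctx, block, block_size) == 1 &&
	     EVP_DigestFinal_ex(&ctx, digest, NULL) == 1;
	EVP_MD_CTX_cleanup(&ctx);
	return ok ? 0 : -1;
}

The point of format 1 is that the internal hash state after processing the
salt is unknown to the attacker at the time he has to compute his collision
pair, so a collision prepared against the standard initial state does not
carry over.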

Mikulas

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [dm-devel] [PATCH] dm: remake of the verity target
  2012-03-21  3:11           ` [dm-devel] " Mikulas Patocka
@ 2012-03-21  3:30             ` Mandeep Singh Baines
  2012-03-21  3:44               ` Mikulas Patocka
  0 siblings, 1 reply; 34+ messages in thread
From: Mandeep Singh Baines @ 2012-03-21  3:30 UTC (permalink / raw)
  To: Mikulas Patocka
  Cc: device-mapper development, Steffen Klassert, Will Drewry,
	linux-kernel, Elly Jones, Olof Johansson, Andrew Morton,
	Alasdair G Kergon, Milan Broz

On Tue, Mar 20, 2012 at 8:11 PM, Mikulas Patocka <mpatocka@redhat.com> wrote:
>
>
> On Tue, 20 Mar 2012, Mikulas Patocka wrote:
>
>> > Changes:
>> >
>> > * Salt is hashed before the block (it used to be hashed after). The reason
>> > is that if a random salt is hashed before the block, it makes the process
>> > resilient to hash function collisions - so you can safely use md5, even if
>> > there's a collision attack for it.
>>
>> I am not aware of any additional benefit to prepending the salt versus
>> appending. Could you please provide such a reference?
>>
>> I would like to avoid breaking backward compatibility unless there is
>> a real benefit.
>>
>> Regards,
>> Mandeep
>>
>
> Here is a deeper explanation of why I do it this way. The reason is to
> protect against collision attacks (such as the known attack on MD5 or
> possible future attacks against other hash functions).
>
> "Preimage attack" means that you are given a hash value and you create a
> message that hashes to that hash value. There is no known preimage attack
> for currently used hash functions.
>
> "Collision attack" means that you are able to create two messages that
> hash into the same hash value. There is currently collision attack known
> for MD5.
>
>
> Suppose that I publish some software, calculate MD5 digest of it and sign
> that digest. This is safe (despite the existing collision attack on MD5)
> beacuse there is no preimage attack --- no one is able to create another
> file with the same MD5 hash.
>
> However, it is still possible to break security with collision attack, but
> the attacker must be able to submit some of his data into the software
> signed with MD5.
>
>
> Suppose for example that software developer publishes "real_program" and
> signs it with MD5. The attacker inserts some security backdoor into the
> program and gets "insecure_program". The attacker takes two MD5 states ---
> the state as it was after hashing "real_program" and the state as it was
> after hashing "insecure_program" --- and with collision attack, he is able
> to create two messages "m1" and "m2" such that they result in MD5
> collision.
>
> The result is that
> MD5("real_program"+"m1") and MD5("insecure_program"+"m2") hash to the same
> value.
>
> Now, to make the attack successful, the attacker must trick the software
> developer somehow into inserting "m1" into his program.
>
> It is not trivial, but possible to trick the software developer into
> inserting attacker-controlled data into the program. For example, the
> attacker can send him a file containing the string "m1" and claim that it
> is Chinese localization of the program --- if the software developer has
> no knowledge of Chinese writing system, he can't diferentiate a real
> Chinese text from a string of random characters --- so he inserts the file
> containing "m1" into his program and publishes it.
>
> Now the attack is finished, the software developer published and signed
> "real_program"+"m1" and there exists another file "insecure_program"+"m2"
> that hashes into the same MD5 value. So the attacker can misrepresent
> "insecure_program"+"m2" as being real.
>
>
> You can protect from this situation either by using a hash function
> without collision attack or by prepending some random data before the
> program to be hashed. If the developer signs "random_data"+"real_program"
> with MD5 in the above example, there is no way how the attacker can create
> a collision --- the attacker can still create "m1" and "m2" and trick the
> developer into including "m1" in the program --- but the developer uses
> different "random_data" next time he publishes a next version of the
> software, so there will be no MD5 collision.
>
>
> This is the reason why I changed the dm-verity hashing scheme. The salt
> is randomly generated when creating the hashes. When we hash a data
> block, we prepend the salt to the data. Consequently, the attacker can't
> exploit the collision attack as described above. If we appended the salt
> (as it used to be done before), it would be possible to exploit the
> collision attack.
>

But we are hashing fixed-sized blocks so there is no possibility of extension.

However, better safe than sorry. I guess we'll just have to deal with the
backward incompatibility. So feel free to add my sign-off to the new
patch as well.

Signed-off-by: Mandeep Singh Baines <msb@chromium.org>

Regards,
Mandeep

> Mikulas

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [dm-devel] [PATCH] dm: remake of the verity target
  2012-03-21  3:30             ` Mandeep Singh Baines
@ 2012-03-21  3:44               ` Mikulas Patocka
  2012-03-21  3:49                 ` Mandeep Singh Baines
  0 siblings, 1 reply; 34+ messages in thread
From: Mikulas Patocka @ 2012-03-21  3:44 UTC (permalink / raw)
  To: Mandeep Singh Baines
  Cc: device-mapper development, Steffen Klassert, Will Drewry,
	linux-kernel, Elly Jones, Olof Johansson, Andrew Morton,
	Alasdair G Kergon, Milan Broz



On Tue, 20 Mar 2012, Mandeep Singh Baines wrote:

> But we are hashing fixed-sized blocks so there is no possibility of extension.

If the first part of some block contains some security-sensitive data and 
the last part of the same block can be attacker-controlled, he can use the 
collision attack to change the security-sensitive data.
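
As a purely hypothetical illustration (the struct and its field names are
invented for this example), consider one 4096-byte data block laid out as:

/* one 4096-byte data block covered by a single dm-verity hash */
struct config_block {
	char admin_user[64];		/* security-sensitive prefix */
	char admin_shell[64];
	unsigned char padding[3968];	/* attacker-influenced suffix */
};

With the salt appended, the attacker can compute a collision pair (m1, m2)
for the padding area offline, because the hash state after the fixed prefix
does not depend on the salt, and the identical salt appended afterwards
preserves the collision. With the salt prepended, the state against which
the collision would have to be computed is unknown until the salt is
published.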

> However, better safe than sorry. I guess we'll just have to deal with
> backward incompatibility. So feel free to add my sign off to the new
> patch also.
> 
> Signed-off-by: Mandeep Singh Baines <msb@chromium.org>
> 
> Regards,
> Mandeep
> 
> > Mikulas

I can introduce a switch to make it accept the old format. Do you want me to?

Mikulas

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [dm-devel] [PATCH] dm: remake of the verity target
  2012-03-21  3:44               ` Mikulas Patocka
@ 2012-03-21  3:49                 ` Mandeep Singh Baines
  2012-03-21 17:08                   ` Mikulas Patocka
  0 siblings, 1 reply; 34+ messages in thread
From: Mandeep Singh Baines @ 2012-03-21  3:49 UTC (permalink / raw)
  To: Mikulas Patocka
  Cc: device-mapper development, Steffen Klassert, Will Drewry,
	linux-kernel, Elly Jones, Olof Johansson, Andrew Morton,
	Alasdair G Kergon, Milan Broz

On Tue, Mar 20, 2012 at 8:44 PM, Mikulas Patocka <mpatocka@redhat.com> wrote:
>
>
> On Tue, 20 Mar 2012, Mandeep Singh Baines wrote:
>
>> But we are hashing fixed-sized blocks so there is no possibility of extension.
>
> If the first part of some block contains some security-sensitive data and
> the last part of the same block can be attacker-controlled, he can use the
> collision attack to change the security-sensitive data.
>
>> However, better safe than sorry. I guess we'll just have to deal with
>> backward incompatibility. So feel free to add my sign off to the new
>> patch also.
>>
>> Signed-off-by: Mandeep Singh Baines <msb@chromium.org>
>>
>> Regards,
>> Mandeep
>>
>> > Mikulas
>
> I can introduce a switch to make it accept the old format. Do you want to?
>

We can carry that as an out-of-tree patch until we migrate.

On the other hand, it might be nice to support both prepend and append.

I don't have strong feelings either way, but a prepend/append flag would
definitely make our life easier.

> Mikulas

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [dm-devel] [PATCH] dm: remake of the verity target
  2012-03-21  3:49                 ` Mandeep Singh Baines
@ 2012-03-21 17:08                   ` Mikulas Patocka
  2012-03-21 17:09                     ` Mikulas Patocka
  2012-03-22 17:41                     ` Mandeep Singh Baines
  0 siblings, 2 replies; 34+ messages in thread
From: Mikulas Patocka @ 2012-03-21 17:08 UTC (permalink / raw)
  To: Mandeep Singh Baines
  Cc: device-mapper development, Steffen Klassert, Will Drewry,
	linux-kernel, Elly Jones, Olof Johansson, Andrew Morton,
	Alasdair G Kergon, Milan Broz



On Tue, 20 Mar 2012, Mandeep Singh Baines wrote:

> > I can introduce a switch to make it accept the old format. Do you want me to?
> >
> 
> We can carry that as an out-of-tree patch until we migrate.
> 
> On the other hand, it might be nice to support both prepend and append.
>
> I don't have strong feelings either way, but a prepend/append flag would
> definitely make our life easier.

This is an improved patch that supports both the old format and the new
format. I checked that it is interoperable with the old Google userspace
tool and with the original Google kernel driver.

Mikulas

---

Remake of the google dm-verity patch.

Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>

---
 drivers/md/Kconfig     |   17 
 drivers/md/Makefile    |    1 
 drivers/md/dm-verity.c |  876 +++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 894 insertions(+)

Index: linux-3.3-fast/drivers/md/Kconfig
===================================================================
--- linux-3.3-fast.orig/drivers/md/Kconfig	2012-03-21 01:17:56.000000000 +0100
+++ linux-3.3-fast/drivers/md/Kconfig	2012-03-21 01:17:56.000000000 +0100
@@ -404,4 +404,21 @@ config DM_VERITY2
 
           If unsure, say N.
 
+config DM_VERITY
+	tristate "Verity target support"
+	depends on BLK_DEV_DM
+	select CRYPTO
+	select CRYPTO_HASH
+	select DM_BUFIO
+	---help---
+	  This device-mapper target allows you to create a device that
+	  transparently integrity checks the data on it. You'll need to
+	  activate the digests you're going to use in the cryptoapi
+	  configuration.
+
+	  To compile this code as a module, choose M here: the module will
+	  be called dm-verity.
+
+	  If unsure, say N.
+
 endif # MD
Index: linux-3.3-fast/drivers/md/Makefile
===================================================================
--- linux-3.3-fast.orig/drivers/md/Makefile	2012-03-21 01:17:56.000000000 +0100
+++ linux-3.3-fast/drivers/md/Makefile	2012-03-21 01:17:56.000000000 +0100
@@ -29,6 +29,7 @@ obj-$(CONFIG_MD_FAULTY)		+= faulty.o
 obj-$(CONFIG_BLK_DEV_MD)	+= md-mod.o
 obj-$(CONFIG_BLK_DEV_DM)	+= dm-mod.o
 obj-$(CONFIG_DM_BUFIO)		+= dm-bufio.o
+obj-$(CONFIG_DM_VERITY)		+= dm-verity.o
 obj-$(CONFIG_DM_CRYPT)		+= dm-crypt.o
 obj-$(CONFIG_DM_DELAY)		+= dm-delay.o
 obj-$(CONFIG_DM_FLAKEY)		+= dm-flakey.o
Index: linux-3.3-fast/drivers/md/dm-verity.c
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ linux-3.3-fast/drivers/md/dm-verity.c	2012-03-21 18:01:11.000000000 +0100
@@ -0,0 +1,876 @@
+/*
+ * Copyright (C) 2012 Red Hat, Inc.
+ *
+ * Author: Mikulas Patocka <mpatocka@redhat.com>
+ *
+ * Based on Chromium dm-verity driver (C) 2011 The Chromium OS Authors
+ *
+ * This file is released under the GPLv2.
+ *
+ * Device mapper target parameters:
+ *	<version>		(0 - original Google's format, 1 - new format)
+ *	<data device>
+ *	<hash device>
+ *	<data block size>
+ *	<hash block size>
+ *	<the number of data blocks>
+ *	<hash start block>
+ *	<algorithm>
+ *	<digest>
+ *	<salt>			(hex bytes or "-" for no salt)
+ *
+ * In the file "/sys/module/dm_verity/parameters/prefetch_cluster" you can set
+ * the default prefetch value. Data are read in "prefetch_cluster" chunks from
+ * the hash device. Prefetching greatly improves performance when data and hash
+ * blocks are on the same disk in different partitions, on devices with poor
+ * random access behavior.
+ */
+
+#include <linux/module.h>
+#include <linux/device-mapper.h>
+#include <crypto/hash.h>
+#include "dm-bufio.h"
+
+#define DM_MSG_PREFIX			"verity"
+
+#define DM_VERITY_IO_VEC_INLINE		16
+#define DM_VERITY_MEMPOOL_SIZE		4
+#define DM_VERITY_DEFAULT_PREFETCH_SIZE	262144
+
+#define DM_VERITY_MAX_LEVELS		63
+
+static unsigned prefetch_cluster = DM_VERITY_DEFAULT_PREFETCH_SIZE;
+
+module_param_named(prefetch_cluster, prefetch_cluster, uint, S_IRUGO | S_IWUSR);
+
+struct dm_verity {
+	struct dm_dev *data_dev;
+	struct dm_dev *hash_dev;
+	struct dm_target *ti;
+	struct dm_bufio_client *bufio;
+	char *alg_name;
+	struct crypto_shash *tfm;
+	u8 *root_digest;	/* digest of the root block */
+	u8 *salt;		/* salt, its size is salt_size */
+	unsigned salt_size;
+	sector_t data_start;	/* data offset in 512-byte sectors */
+	sector_t hash_start;	/* hash start in blocks */
+	sector_t data_blocks;	/* the number of data blocks */
+	sector_t hash_blocks;	/* the number of hash blocks */
+	unsigned char data_dev_block_bits;	/* log2(data blocksize) */
+	unsigned char hash_dev_block_bits;	/* log2(hash blocksize) */
+	unsigned char hash_per_block_bits;	/* log2(hashes in hash block) */
+	unsigned char levels;	/* the number of tree levels */
+	unsigned char version;
+	unsigned digest_size;	/* digest size for the current hash algorithm */
+	unsigned shash_descsize;/* the size of temporary space for crypto */
+
+	mempool_t *io_mempool;	/* mempool of struct dm_verity_io */
+	mempool_t *vec_mempool;	/* mempool of bio vector */
+
+	struct workqueue_struct *verify_wq;
+
+	/* starting blocks for each tree level. 0 is the lowest level. */
+	sector_t hash_level_block[DM_VERITY_MAX_LEVELS];
+};
+
+struct dm_verity_io {
+	struct dm_verity *v;
+	struct bio *bio;
+
+	/* original values of bio->bi_end_io and bio->bi_private */
+	bio_end_io_t *orig_bi_end_io;
+	void *orig_bi_private;
+
+	sector_t block;
+	unsigned n_blocks;
+
+	/* saved bio vector */
+	struct bio_vec *io_vec;
+	unsigned io_vec_size;
+
+	struct work_struct work;
+
+	/* a space for short vectors; longer vectors are allocated separately */
+	struct bio_vec io_vec_inline[DM_VERITY_IO_VEC_INLINE];
+
+	/* variable-size fields, accessible with functions
+		io_hash_desc, io_real_digest, io_want_digest */
+	/* u8 hash_desc[v->shash_descsize]; */
+	/* u8 real_digest[v->digest_size]; */
+	/* u8 want_digest[v->digest_size]; */
+};
+
+static struct shash_desc *io_hash_desc(struct dm_verity *v, struct dm_verity_io *io)
+{
+	return (struct shash_desc *)(io + 1);
+}
+
+static u8 *io_real_digest(struct dm_verity *v, struct dm_verity_io *io)
+{
+	return (u8 *)(io + 1) + v->shash_descsize;
+}
+
+static u8 *io_want_digest(struct dm_verity *v, struct dm_verity_io *io)
+{
+	return (u8 *)(io + 1) + v->shash_descsize + v->digest_size;
+}
+
+/*
+ * Auxiliary structure appended to each dm-bufio buffer. If the value
+ * hash_verified is nonzero, hash of the block has been verified.
+ *
+ * The variable hash_verified is set to 0 when allocating the buffer, then
+ * it can be changed to 1 and it is never reset to 0 again.
+ *
+ * There is no lock around this value, a race condition can at worst cause
+ * that multiple processes verify the hash of the same buffer simultaneously
+ * and write 1 to hash_verified simultaneously.
+ * This condition is harmless, so we don't need locking.
+ */
+struct buffer_aux {
+	int hash_verified;
+};
+
+/*
+ * Initialize struct buffer_aux for a freshly created buffer.
+ */
+static void dm_bufio_alloc_callback(struct dm_buffer *buf)
+{
+	struct buffer_aux *aux = dm_bufio_get_aux_data(buf);
+	aux->hash_verified = 0;
+}
+
+/*
+ * Translate input sector number to the sector number on the target device.
+ */
+static sector_t verity_map_sector(struct dm_verity *v, sector_t bi_sector)
+{
+	return v->data_start + dm_target_offset(v->ti, bi_sector);
+}
+
+/*
+ * Return hash position of a specified block at a specified tree level
+ * (0 is the lowest level).
+ * The lowest "hash_per_block_bits"-bits of the result denote hash position
+ * inside a hash block. The remaining bits denote location of the hash block.
+ */
+static sector_t verity_position_at_level(struct dm_verity *v, sector_t block,
+					 int level)
+{
+	return block >> (level * v->hash_per_block_bits);
+}
+
+static void verity_hash_at_level(struct dm_verity *v, sector_t block, int level,
+				 sector_t *hash_block, unsigned *offset)
+{
+	sector_t position = verity_position_at_level(v, block, level);
+
+	*hash_block = v->hash_level_block[level] + (position >> v->hash_per_block_bits);
+	if (offset) {
+		unsigned idx = position & ((1 << v->hash_per_block_bits) - 1);
+		if (!v->version)
+			*offset = idx * v->digest_size;
+		else
+			*offset = idx << (v->hash_dev_block_bits - v->hash_per_block_bits);
+	}
+}
+
+/*
+ * Verify hash of a metadata block pertaining to the specified data block
+ * ("block" argument) at a specified level ("level" argument).
+ *
+ * On successful return, io_want_digest(v, io) contains the hash value for
+ * a lower tree level or for the data block (if we're at the lowest level).
+ *
+ * If "skip_unverified" is true, unverified buffer is skipped an 1 is returned.
+ * If "skip_unverified" is false, unverified buffer is hashed and verified
+ * against current value of io_want_digest(v, io).
+ */
+static int verity_verify_level(struct dm_verity_io *io, sector_t block,
+			       int level, bool skip_unverified)
+{
+	struct dm_verity *v = io->v;
+	struct dm_buffer *buf;
+	struct buffer_aux *aux;
+	u8 *data;
+	int r;
+	sector_t hash_block;
+	unsigned offset;
+
+	verity_hash_at_level(v, block, level, &hash_block, &offset);
+
+	data = dm_bufio_read(v->bufio, hash_block, &buf);
+	if (unlikely(IS_ERR(data)))
+		return PTR_ERR(data);
+
+	aux = dm_bufio_get_aux_data(buf);
+
+	if (!aux->hash_verified) {
+		struct shash_desc *desc;
+		u8 *result;
+
+		if (skip_unverified) {
+			r = 1;
+			goto release_ret_r;
+		}
+
+		desc = io_hash_desc(v, io);
+		desc->tfm = v->tfm;
+		desc->flags = CRYPTO_TFM_REQ_MAY_SLEEP;
+		r = crypto_shash_init(desc);
+		if (r < 0) {
+			DMERR("crypto_shash_init failed: %d", r);
+			goto release_ret_r;
+		}
+
+		if (likely(v->version >= 1)) {
+			r = crypto_shash_update(desc, v->salt, v->salt_size);
+			if (r < 0) {
+				DMERR("crypto_shash_update failed: %d", r);
+				goto release_ret_r;
+			}
+		}
+
+		r = crypto_shash_update(desc, data, 1 << v->hash_dev_block_bits);
+		if (r < 0) {
+			DMERR("crypto_shash_update failed: %d", r);
+			goto release_ret_r;
+		}
+
+		if (!v->version) {
+			r = crypto_shash_update(desc, v->salt, v->salt_size);
+			if (r < 0) {
+				DMERR("crypto_shash_update failed: %d", r);
+				goto release_ret_r;
+			}
+		}
+
+		result = io_real_digest(v, io);
+		r = crypto_shash_final(desc, result);
+		if (r < 0) {
+			DMERR("crypto_shash_final failed: %d", r);
+			goto release_ret_r;
+		}
+		if (unlikely(memcmp(result, io_want_digest(v, io), v->digest_size))) {
+			DMERR_LIMIT("metadata block %llu is corrupted",
+				(unsigned long long)hash_block);
+			r = -EIO;
+			goto release_ret_r;
+		} else
+			aux->hash_verified = 1;
+	}
+
+	data += offset;
+
+	memcpy(io_want_digest(v, io), data, v->digest_size);
+
+	dm_bufio_release(buf);
+	return 0;
+
+release_ret_r:
+	dm_bufio_release(buf);
+	return r;
+}
+
+/*
+ * Verify one "dm_verity_io" structure.
+ */
+static int verity_verify_io(struct dm_verity_io *io)
+{
+	struct dm_verity *v = io->v;
+	unsigned b;
+	int i;
+	unsigned vector = 0, offset = 0;
+	for (b = 0; b < io->n_blocks; b++) {
+		struct shash_desc *desc;
+		u8 *result;
+		int r;
+		unsigned todo;
+
+		if (likely(v->levels)) {
+			/*
+			 * First, we try to get the requested hash for
+			 * the current block. If the hash block itself is
+			 * verified, zero is returned. If it isn't, this
+			 * function returns 1 and we fall back to whole
+			 * chain verification.
+			 */
+			int r = verity_verify_level(io, io->block + b, 0, true);
+			if (likely(!r))
+				goto test_block_hash;
+			if (r < 0)
+				return r;
+		}
+
+		memcpy(io_want_digest(v, io), v->root_digest, v->digest_size);
+
+		for (i = v->levels - 1; i >= 0; i--) {
+			int r = verity_verify_level(io, io->block + b, i, false);
+			if (unlikely(r))
+				return r;
+		}
+
+test_block_hash:
+		desc = io_hash_desc(v, io);
+		desc->tfm = v->tfm;
+		desc->flags = CRYPTO_TFM_REQ_MAY_SLEEP;
+		r = crypto_shash_init(desc);
+		if (r < 0) {
+			DMERR("crypto_shash_init failed: %d", r);
+			return r;
+		}
+
+		if (likely(v->version >= 1)) {
+			r = crypto_shash_update(desc, v->salt, v->salt_size);
+			if (r < 0) {
+				DMERR("crypto_shash_update failed: %d", r);
+				return r;
+			}
+		}
+
+		todo = 1 << v->data_dev_block_bits;
+		do {
+			struct bio_vec *bv;
+			u8 *page;
+			unsigned len;
+
+			BUG_ON(vector >= io->io_vec_size);
+			bv = &io->io_vec[vector];
+			page = kmap_atomic(bv->bv_page, KM_USER0);
+			len = bv->bv_len - offset;
+			if (likely(len >= todo))
+				len = todo;
+			r = crypto_shash_update(desc,
+					page + bv->bv_offset + offset, len);
+			kunmap_atomic(page, KM_USER0);
+			if (r < 0) {
+				DMERR("crypto_shash_update failed: %d", r);
+				return r;
+			}
+			offset += len;
+			if (likely(offset == bv->bv_len)) {
+				offset = 0;
+				vector++;
+			}
+			todo -= len;
+		} while (todo);
+
+		if (!v->version) {
+			r = crypto_shash_update(desc, v->salt, v->salt_size);
+			if (r < 0) {
+				DMERR("crypto_shash_update failed: %d", r);
+				return r;
+			}
+		}
+
+		result = io_real_digest(v, io);
+		r = crypto_shash_final(desc, result);
+		if (r < 0) {
+			DMERR("crypto_shash_final failed: %d", r);
+			return r;
+		}
+		if (unlikely(memcmp(result, io_want_digest(v, io), v->digest_size))) {
+			DMERR_LIMIT("data block %llu is corrupted",
+				(unsigned long long)(io->block + b));
+			return -EIO;
+		}
+	}
+	BUG_ON(vector != io->io_vec_size);
+	BUG_ON(offset);
+	return 0;
+}
+
+/*
+ * End one "io" structure with a given error.
+ */
+static void verity_finish_io(struct dm_verity_io *io, int error)
+{
+	struct bio *bio = io->bio;
+	struct dm_verity *v = io->v;
+
+	bio->bi_end_io = io->orig_bi_end_io;
+	bio->bi_private = io->orig_bi_private;
+
+	if (io->io_vec != io->io_vec_inline)
+		mempool_free(io->io_vec, v->vec_mempool);
+	mempool_free(io, v->io_mempool);
+
+	bio_endio(bio, error);
+}
+
+static void verity_work(struct work_struct *w)
+{
+	struct dm_verity_io *io = container_of(w, struct dm_verity_io, work);
+
+	verity_finish_io(io, verity_verify_io(io));
+}
+
+static void verity_end_io(struct bio *bio, int error)
+{
+	struct dm_verity_io *io = bio->bi_private;
+	if (error) {
+		verity_finish_io(io, error);
+		return;
+	}
+
+	INIT_WORK(&io->work, verity_work);
+	queue_work(io->v->verify_wq, &io->work);
+}
+
+/*
+ * Prefetch buffers for the specified io.
+ * The root buffer is not prefetched, it is assumed that it will be cached
+ * all the time.
+ */
+static void verity_prefetch_io(struct dm_verity *v, struct dm_verity_io *io)
+{
+	int i;
+	for (i = v->levels - 2; i >= 0; i--) {
+		sector_t hash_block_start;
+		sector_t hash_block_end;
+		verity_hash_at_level(v, io->block, i, &hash_block_start, NULL);
+		verity_hash_at_level(v, io->block + io->n_blocks - 1, i, &hash_block_end, NULL);
+		if (!i) {
+			unsigned cluster = *(volatile unsigned *)&prefetch_cluster;
+			cluster >>= v->data_dev_block_bits;
+			if (unlikely(!cluster))
+				goto no_prefetch_cluster;
+			if (unlikely(cluster & (cluster - 1)))
+				cluster = 1 << (fls(cluster) - 1);
+
+			hash_block_start &= ~(sector_t)(cluster - 1);
+			hash_block_end |= cluster - 1;
+			if (unlikely(hash_block_end >= v->hash_blocks))
+				hash_block_end = v->hash_blocks - 1;
+		}
+no_prefetch_cluster:
+		dm_bufio_prefetch(v->bufio, hash_block_start,
+					hash_block_end - hash_block_start + 1);
+	}
+}
+
+/*
+ * Bio map function. It allocates dm_verity_io structure and bio vector and
+ * fills them. Then it issues prefetches and the I/O.
+ */
+static int verity_map(struct dm_target *ti, struct bio *bio,
+		      union map_info *map_context)
+{
+	struct dm_verity *v = ti->private;
+	struct dm_verity_io *io;
+
+	bio->bi_bdev = v->data_dev->bdev;
+	bio->bi_sector = verity_map_sector(v, bio->bi_sector);
+
+	if (((unsigned)bio->bi_sector | bio_sectors(bio)) &
+	    ((1 << (v->data_dev_block_bits - SECTOR_SHIFT)) - 1)) {
+		DMERR_LIMIT("unaligned io");
+		return -EIO;
+	}
+
+	if ((bio->bi_sector + bio_sectors(bio)) >>
+	    (v->data_dev_block_bits - SECTOR_SHIFT) > v->data_blocks) {
+		DMERR_LIMIT("io out of range");
+		return -EIO;
+	}
+
+	if (bio_data_dir(bio) == WRITE)
+		return -EIO;
+
+	io = mempool_alloc(v->io_mempool, GFP_NOIO);
+	io->v = v;
+	io->bio = bio;
+	io->orig_bi_end_io = bio->bi_end_io;
+	io->orig_bi_private = bio->bi_private;
+	io->block = bio->bi_sector >> (v->data_dev_block_bits - SECTOR_SHIFT);
+	io->n_blocks = bio->bi_size >> v->data_dev_block_bits;
+
+	bio->bi_end_io = verity_end_io;
+	bio->bi_private = io;
+	io->io_vec_size = bio->bi_vcnt - bio->bi_idx;
+	if (io->io_vec_size < DM_VERITY_IO_VEC_INLINE)
+		io->io_vec = io->io_vec_inline;
+	else
+		io->io_vec = mempool_alloc(v->vec_mempool, GFP_NOIO);
+	memcpy(io->io_vec, bio_iovec(bio),
+	       io->io_vec_size * sizeof(struct bio_vec));
+
+	verity_prefetch_io(v, io);
+
+	generic_make_request(bio);
+
+	return DM_MAPIO_SUBMITTED;
+}
+
+static int verity_status(struct dm_target *ti, status_type_t type,
+			 char *result, unsigned maxlen)
+{
+	struct dm_verity *v = ti->private;
+	unsigned sz = 0;
+	unsigned x;
+
+	switch (type) {
+	case STATUSTYPE_INFO:
+		result[0] = 0;
+		break;
+	case STATUSTYPE_TABLE:
+		DMEMIT("%u %s %s %u %u %llu %llu %s ",
+			v->version,
+			v->data_dev->name,
+			v->hash_dev->name,
+			1 << v->data_dev_block_bits,
+			1 << v->hash_dev_block_bits,
+			(unsigned long long)v->data_blocks,
+			(unsigned long long)v->hash_start,
+			v->alg_name
+			);
+		for (x = 0; x < v->digest_size; x++)
+			DMEMIT("%02x", v->root_digest[x]);
+		DMEMIT(" ");
+		if (!v->salt_size)
+			DMEMIT("-");
+		else
+			for (x = 0; x < v->salt_size; x++)
+				DMEMIT("%02x", v->salt[x]);
+		break;
+	}
+	return 0;
+}
+
+static int verity_ioctl(struct dm_target *ti, unsigned cmd,
+			unsigned long arg)
+{
+	struct dm_verity *v = ti->private;
+	int r = 0;
+
+	if (v->data_start ||
+	    ti->len != i_size_read(v->data_dev->bdev->bd_inode) >> SECTOR_SHIFT)
+		r = scsi_verify_blk_ioctl(NULL, cmd);
+
+	return r ? : __blkdev_driver_ioctl(v->data_dev->bdev, v->data_dev->mode,
+				     cmd, arg);
+}
+
+static int verity_merge(struct dm_target *ti, struct bvec_merge_data *bvm,
+			struct bio_vec *biovec, int max_size)
+{
+	struct dm_verity *v = ti->private;
+	struct request_queue *q = bdev_get_queue(v->data_dev->bdev);
+
+	if (!q->merge_bvec_fn)
+		return max_size;
+
+	bvm->bi_bdev = v->data_dev->bdev;
+	bvm->bi_sector = verity_map_sector(v, bvm->bi_sector);
+
+	return min(max_size, q->merge_bvec_fn(q, bvm, biovec));
+}
+
+static int verity_iterate_devices(struct dm_target *ti,
+				  iterate_devices_callout_fn fn, void *data)
+{
+	struct dm_verity *v = ti->private;
+	return fn(ti, v->data_dev, v->data_start, ti->len, data);
+}
+
+static void verity_io_hints(struct dm_target *ti, struct queue_limits *limits)
+{
+	struct dm_verity *v = ti->private;
+
+	if (limits->logical_block_size < 1 << v->data_dev_block_bits)
+		limits->logical_block_size = 1 << v->data_dev_block_bits;
+	if (limits->physical_block_size < 1 << v->data_dev_block_bits)
+		limits->physical_block_size = 1 << v->data_dev_block_bits;
+	blk_limits_io_min(limits, limits->logical_block_size);
+}
+
+static void verity_dtr(struct dm_target *ti);
+
+static int verity_ctr(struct dm_target *ti, unsigned argc, char **argv)
+{
+	struct dm_verity *v;
+	unsigned num;
+	unsigned long long num_ll;
+	int r;
+	int i;
+	sector_t hash_position;
+	char dummy;
+
+	v = kzalloc(sizeof(struct dm_verity), GFP_KERNEL);
+	if (!v) {
+		ti->error = "Cannot allocate verity structure";
+		return -ENOMEM;
+	}
+	ti->private = v;
+	v->ti = ti;
+
+	if ((dm_table_get_mode(ti->table) & ~FMODE_READ) != 0) {
+		ti->error = "Device must be readonly";
+		r = -EINVAL;
+		goto bad;
+	}
+
+	if (argc != 10) {
+		ti->error = "Invalid argument count";
+		r = -EINVAL;
+		goto bad;
+	}
+
+	if (sscanf(argv[0], "%u%c", &num, &dummy) != 1 ||
+	    num > 1) {
+		ti->error = "Invalid version";
+		r = -EINVAL;
+		goto bad;
+	}
+	v->version = num;
+
+	r = dm_get_device(ti, argv[1], FMODE_READ, &v->data_dev);
+	if (r) {
+		ti->error = "Data device lookup failed";
+		goto bad;
+	}
+
+	r = dm_get_device(ti, argv[2], FMODE_READ, &v->hash_dev);
+	if (r) {
+		ti->error = "Hash device lookup failed";
+		goto bad;
+	}
+
+	if (sscanf(argv[3], "%u%c", &num, &dummy) != 1 ||
+	    !num || (num & (num - 1)) ||
+	    num < bdev_logical_block_size(v->data_dev->bdev) ||
+	    num > PAGE_SIZE) {
+		ti->error = "Invalid data device block size";
+		r = -EINVAL;
+		goto bad;
+	}
+	v->data_dev_block_bits = ffs(num) - 1;
+
+	if (sscanf(argv[4], "%u%c", &num, &dummy) != 1 ||
+	    !num || (num & (num - 1)) ||
+	    num < bdev_logical_block_size(v->hash_dev->bdev) ||
+	    num > INT_MAX) {
+		ti->error = "Invalid hash device block size";
+		r = -EINVAL;
+		goto bad;
+	}
+	v->hash_dev_block_bits = ffs(num) - 1;
+
+	if (sscanf(argv[5], "%llu%c", &num_ll, &dummy) != 1 ||
+	    num_ll << (v->data_dev_block_bits - SECTOR_SHIFT) !=
+	    (sector_t)num_ll << (v->data_dev_block_bits - SECTOR_SHIFT)) {
+		ti->error = "Invalid data blocks";
+		r = -EINVAL;
+		goto bad;
+	}
+	v->data_blocks = num_ll;
+
+	if (ti->len > (v->data_blocks << (v->data_dev_block_bits - SECTOR_SHIFT))) {
+		ti->error = "Data device is too small";
+		r = -EINVAL;
+		goto bad;
+	}
+
+	if (sscanf(argv[6], "%llu%c", &num_ll, &dummy) != 1 ||
+	    num_ll << (v->hash_dev_block_bits - SECTOR_SHIFT) !=
+	    (sector_t)num_ll << (v->hash_dev_block_bits - SECTOR_SHIFT)) {
+		ti->error = "Invalid hash start";
+		r = -EINVAL;
+		goto bad;
+	}
+	v->hash_start = num_ll;
+
+	v->alg_name = kstrdup(argv[7], GFP_KERNEL);
+	if (!v->alg_name) {
+		ti->error = "Cannot allocate algorithm name";
+		r = -ENOMEM;
+		goto bad;
+	}
+
+	v->tfm = crypto_alloc_shash(v->alg_name, 0, 0);
+	if (IS_ERR(v->tfm)) {
+		ti->error = "Cannot initialize hash function";
+		r = PTR_ERR(v->tfm);
+		v->tfm = NULL;
+		goto bad;
+	}
+	v->digest_size = crypto_shash_digestsize(v->tfm);
+	if ((1 << v->hash_dev_block_bits) < v->digest_size * 2) {
+		ti->error = "Digest size too big";
+		r = -EINVAL;
+		goto bad;
+	}
+	v->shash_descsize =
+		sizeof(struct shash_desc) + crypto_shash_descsize(v->tfm);
+
+	v->root_digest = kmalloc(v->digest_size, GFP_KERNEL);
+	if (!v->root_digest) {
+		ti->error = "Cannot allocate root digest";
+		r = -ENOMEM;
+		goto bad;
+	}
+	if (strlen(argv[8]) != v->digest_size * 2 ||
+	    hex2bin(v->root_digest, argv[8], v->digest_size)) {
+		ti->error = "Invalid root digest";
+		r = -EINVAL;
+		goto bad;
+	}
+
+	if (strcmp(argv[9], "-")) {
+		v->salt_size = strlen(argv[9]) / 2;
+		v->salt = kmalloc(v->salt_size, GFP_KERNEL);
+		if (!v->salt) {
+			ti->error = "Cannot allocate salt";
+			r = -ENOMEM;
+			goto bad;
+		}
+		if (strlen(argv[9]) != v->salt_size * 2 ||
+		    hex2bin(v->salt, argv[9], v->salt_size)) {
+			ti->error = "Invalid salt";
+			r = -EINVAL;
+			goto bad;
+		}
+	}
+
+	v->hash_per_block_bits =
+		fls((1 << v->hash_dev_block_bits) / v->digest_size) - 1;
+
+	v->levels = 0;
+	if (v->data_blocks)
+		while (v->hash_per_block_bits * v->levels < 64 &&
+		       (unsigned long long)(v->data_blocks - 1) >>
+		       (v->hash_per_block_bits * v->levels))
+			v->levels++;
+
+	if (v->levels > DM_VERITY_MAX_LEVELS) {
+		ti->error = "Too many tree levels";
+		r = -E2BIG;
+		goto bad;
+	}
+
+	hash_position = v->hash_start;
+	for (i = v->levels - 1; i >= 0; i--) {
+		sector_t s;
+		v->hash_level_block[i] = hash_position;
+		s = verity_position_at_level(v, v->data_blocks, i);
+		s = (s >> v->hash_per_block_bits) +
+		    !!(s & ((1 << v->hash_per_block_bits) - 1));
+		if (hash_position + s < hash_position) {
+			ti->error = "Hash device offset overflow";
+			r = -E2BIG;
+			goto bad;
+		}
+		hash_position += s;
+	}
+	v->hash_blocks = hash_position;
+
+	v->bufio = dm_bufio_client_create(v->hash_dev->bdev,
+		1 << v->hash_dev_block_bits, 1, sizeof(struct buffer_aux),
+		dm_bufio_alloc_callback, NULL);
+	if (IS_ERR(v->bufio)) {
+		ti->error = "Cannot initialize dm-bufio";
+		r = PTR_ERR(v->bufio);
+		v->bufio = NULL;
+		goto bad;
+	}
+
+	if (dm_bufio_get_device_size(v->bufio) < v->hash_blocks) {
+		ti->error = "Hash device is too small";
+		r = -E2BIG;
+		goto bad;
+	}
+
+	v->io_mempool = mempool_create_kmalloc_pool(DM_VERITY_MEMPOOL_SIZE,
+	  sizeof(struct dm_verity_io) + v->shash_descsize + v->digest_size * 2);
+	if (!v->io_mempool) {
+		ti->error = "Cannot allocate io mempool";
+		r = -ENOMEM;
+		goto bad;
+	}
+
+	v->vec_mempool = mempool_create_kmalloc_pool(DM_VERITY_MEMPOOL_SIZE,
+					BIO_MAX_PAGES * sizeof(struct bio_vec));
+	if (!v->vec_mempool) {
+		ti->error = "Cannot allocate vector mempool";
+		r = -ENOMEM;
+		goto bad;
+	}
+
+	/* WQ_UNBOUND greatly improves performance when running on ramdisk */
+	v->verify_wq = alloc_workqueue("verityd",
+		WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM | WQ_UNBOUND,
+		num_online_cpus());
+	if (!v->verify_wq) {
+		ti->error = "Cannot allocate workqueue";
+		r = -ENOMEM;
+		goto bad;
+	}
+
+	return 0;
+
+bad:
+	verity_dtr(ti);
+	return r;
+}
+
+static void verity_dtr(struct dm_target *ti)
+{
+	struct dm_verity *v = ti->private;
+
+	if (v->verify_wq)
+		destroy_workqueue(v->verify_wq);
+	if (v->vec_mempool)
+		mempool_destroy(v->vec_mempool);
+	if (v->io_mempool)
+		mempool_destroy(v->io_mempool);
+	if (v->bufio)
+		dm_bufio_client_destroy(v->bufio);
+	kfree(v->salt);
+	kfree(v->root_digest);
+	if (v->tfm)
+		crypto_free_shash(v->tfm);
+	kfree(v->alg_name);
+	if (v->hash_dev)
+		dm_put_device(ti, v->hash_dev);
+	if (v->data_dev)
+		dm_put_device(ti, v->data_dev);
+	kfree(v);
+}
+
+static struct target_type verity_target = {
+	.name		= "verity",
+	.version	= {1, 0, 0},
+	.module		= THIS_MODULE,
+	.ctr		= verity_ctr,
+	.dtr		= verity_dtr,
+	.map		= verity_map,
+	.status		= verity_status,
+	.ioctl		= verity_ioctl,
+	.merge		= verity_merge,
+	.iterate_devices = verity_iterate_devices,
+	.io_hints	= verity_io_hints,
+};
+
+static int __init dm_verity_init(void)
+{
+	int r;
+	r = dm_register_target(&verity_target);
+	if (r < 0)
+		DMERR("register failed %d", r);
+	return r;
+}
+
+static void __exit dm_verity_exit(void)
+{
+	dm_unregister_target(&verity_target);
+}
+
+module_init(dm_verity_init);
+module_exit(dm_verity_exit);
+
+MODULE_AUTHOR("Mikulas Patocka <mpatocka@redhat.com>");
+MODULE_AUTHOR("Mandeep Baines <msb@chromium.org>");
+MODULE_AUTHOR("Will Drewry <wad@chromium.org>");
+MODULE_DESCRIPTION(DM_NAME " target for transparent disk integrity checking");
+MODULE_LICENSE("GPL");
+

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [dm-devel] [PATCH] dm: remake of the verity target
  2012-03-21 17:08                   ` Mikulas Patocka
@ 2012-03-21 17:09                     ` Mikulas Patocka
  2012-03-22 17:41                     ` Mandeep Singh Baines
  1 sibling, 0 replies; 34+ messages in thread
From: Mikulas Patocka @ 2012-03-21 17:09 UTC (permalink / raw)
  To: Mandeep Singh Baines
  Cc: device-mapper development, Steffen Klassert, Will Drewry,
	linux-kernel, Elly Jones, Olof Johansson, Andrew Morton,
	Alasdair G Kergon, Milan Broz



On Wed, 21 Mar 2012, Mikulas Patocka wrote:

> 
> 
> On Tue, 20 Mar 2012, Mandeep Singh Baines wrote:
> 
> > > I can introduce a switch to make it accept the old format. Do you want me to?
> > >
> > 
> > We can carry that as an out-of-tree patch until we migrate.
> > 
> > On the other hand, it might be nice to support both prepend and append.
> > 
> > I don't have strong feelings either way, but a prepend/append flag would
> > definitely make our life easier.
> 
> This is an improved patch that supports both the old format and the new
> format. I checked that it is interoperable with the old Google userspace
> tool and with the original Google kernel driver.
> 
> Mikulas

This is the new userspace tool that supports both formats.

Mikulas

---

/* link with -lpopt -lcrypto */

#define _FILE_OFFSET_BITS	64

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <stdarg.h>
#include <unistd.h>
#include <errno.h>
#include <string.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/ioctl.h>
#include <sys/mount.h>
#include <arpa/inet.h>
#include <popt.h>
#include <openssl/evp.h>
#include <openssl/rand.h>

#define DEFAULT_BLOCK_SIZE	4096
#define DM_VERITY_MAX_LEVELS	63

#define DEFAULT_SALT_SIZE	32
#define MAX_SALT_SIZE		384

#define MODE_VERIFY	0
#define MODE_CREATE	1
#define MODE_ACTIVATE	2

#define MAX_FORMAT_VERSION	1

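/*
 * Example invocations (device names here are made up for illustration;
 * see the poptSetOtherOptionHelp() string in main() for the synopsis):
 *
 *   verity -c /dev/sdb /dev/sdc --format 1          create, print root hash
 *   verity -v /dev/sdb /dev/sdc <root hash>         verify the whole tree
 *   verity -a vroot /dev/sdb /dev/sdc <root hash>   activate via dmsetup
 */
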
static int mode = -1;
static int use_superblock = 1;

static const char *dm_device;
static const char *data_device;
static const char *hash_device;
static const char *hash_algorithm = NULL;
static const char *root_hash;

static int version = -1;
static int data_block_size = 0;
static int hash_block_size = 0;
static long long hash_start = 0;
static long long data_blocks = 0;
static const char *salt_string = NULL;

static FILE *data_file;
static FILE *hash_file;

static off_t data_file_blocks;
static off_t hash_file_blocks;
static off_t used_hash_blocks;

static const EVP_MD *evp;

static unsigned char *root_hash_bytes;
static unsigned char *calculated_digest;

static unsigned char *salt_bytes;
static unsigned salt_size;

static unsigned digest_size;
static unsigned char digest_size_bits;
static unsigned char levels;
static unsigned char hash_per_block_bits;

static off_t hash_level_block[DM_VERITY_MAX_LEVELS];
static off_t hash_level_size[DM_VERITY_MAX_LEVELS];

static off_t superblock_position;

static int retval = 0;

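/*
 * On-disk superblock, read and written at byte offset "superblock_position"
 * on the hash device. Multi-byte fields are stored big-endian (written with
 * htonl()/htons(), read back with ntohl()/ntohs()). The pad fields bring
 * the structure to exactly 512 bytes; main() checks this at startup.
 */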
struct superblock {
	uint8_t signature[8];
	uint8_t version;
	uint8_t data_block_bits;
	uint8_t hash_block_bits;
	uint8_t pad1[1];
	uint16_t salt_size;
	uint8_t pad2[2];
	uint32_t data_blocks_hi;
	uint32_t data_blocks_lo;
	uint8_t algorithm[16];
	uint8_t salt[MAX_SALT_SIZE];
	uint8_t pad3[88];
};

#define DM_VERITY_SIGNATURE	"verity\0\0"
#define DM_VERITY_VERSION	0

static void help(poptContext popt_context,
		enum poptCallbackReason reason,
		struct poptOption *key,
		const char *arg,
		void *data)
{
	poptPrintHelp(popt_context, stdout, 0);
	exit(0);
}

static struct poptOption popt_help_options[] = {
	{ NULL,			0,	POPT_ARG_CALLBACK, help, 0, NULL, NULL },
	{ "help",		'h',	POPT_ARG_NONE, NULL, 0, "Show help", NULL },
	POPT_TABLEEND
};

static struct poptOption popt_options[] = {
	{ NULL,			'\0', POPT_ARG_INCLUDE_TABLE, popt_help_options, 0, NULL, NULL },
	{ "create",		'c',	POPT_ARG_VAL, &mode, MODE_CREATE, "Create hash", NULL },
	{ "verify",		'v',	POPT_ARG_VAL, &mode, MODE_VERIFY, "Verify integrity", NULL },
	{ "activate",		'a',	POPT_ARG_VAL, &mode, MODE_ACTIVATE, "Activate the device", NULL },
	{ "data-block-size",	0, 	POPT_ARG_INT, &data_block_size, 0, "Block size on the data device", "bytes" },
	{ "hash-block-size",	0, 	POPT_ARG_INT, &hash_block_size, 0, "Block size on the hash device", "bytes" },
	{ "data-blocks",	0,	POPT_ARG_LONGLONG, &data_blocks, 0, "The number of blocks in the data file", "blocks" },
	{ "hash-start",		0,	POPT_ARG_LONGLONG, &hash_start, 0, "Starting block on the hash device", "512-byte sectors" },
	{ "salt",		0,	POPT_ARG_STRING, &salt_string, 0, "Salt", "hex string" },
	{ "algorithm",		0,	POPT_ARG_STRING, &hash_algorithm, 0, "Hash algorithm (default sha256)", "string" },
	{ "no-superblock",	0,	POPT_ARG_VAL, &use_superblock, 0, "Do not create/use superblock" },
	{ "format",		0,	POPT_ARG_INT, &version, 0, "Format version (0 - original Google code, 1 - new format)", "number" },
	POPT_TABLEEND
};

#if defined(__GNUC__) && __GNUC__ >= 2
	__attribute__((__format__(__printf__, 1, 2)))
#endif
static void exit_err(const char *msg, ...)
{
	va_list args;
	va_start(args, msg);
	vfprintf(stderr, msg, args);
	va_end(args);
	fputc('\n', stderr);
	exit(2);
}

static void stream_err(FILE *f, const char *msg)
{
	if (ferror(f)) {
		perror(msg);
		exit(2);
	} else if (feof(f)) {
		exit_err("eof on %s", msg);
	} else {
		exit_err("unknown error on %s", msg);
	}
}

static void *xmalloc(size_t s)
{
	void *ptr = malloc(!s ? 1 : s);
	if (!ptr) exit_err("out of memory");
	return ptr;
}

static char *xstrdup(const char *str)
{
	return strcpy(xmalloc(strlen(str) + 1), str);
}

static char *xprint(unsigned long long num)
{
	size_t s = snprintf(NULL, 0, "%llu", num);
	char *p = xmalloc(s + 1);
	snprintf(p, s + 1, "%llu", num);
	return p;
}

static char *xhexprint(unsigned char *bytes, size_t len)
{
	size_t i;
	char *p = xmalloc(len * 2 + 1);
	p[0] = 0;
	for (i = 0; i < len; i++)
		snprintf(p + i * 2, 3, "%02x", bytes[i]);
	return p;
}

static off_t get_size(FILE *f, const char *name)
{
	struct stat st;
	int h = fileno(f);
	if (h < 0) {
		perror("fileno");
		exit(2);
	}
	if (fstat(h, &st)) {
		perror("fstat");
		exit(2);
	}
	if (S_ISREG(st.st_mode)) {
		return st.st_size;
	} else if (S_ISBLK(st.st_mode)) {
		unsigned long long size64;
		unsigned long sizeul;
		if (!ioctl(h, BLKGETSIZE64, &size64)) {
			return_size64:
			if ((off_t)size64 < 0 || (off_t)size64 != size64) {
				size_overflow:
				exit_err("%s: device size overflow", name);
			}
			return size64;
		}
		if (!ioctl(h, BLKGETSIZE, &sizeul)) {
			size64 = (unsigned long long)sizeul * 512;
			if (size64 / 512 != sizeul) goto size_overflow;
			goto return_size64;
		}
		perror("BLKGETSIZE");
		exit(2);
	} else {
		exit_err("%s is not a file or a block device", name);
	}
	return -1;	/* never reached, shut up warning */
}

static void block_fseek(FILE *f, off_t block, int block_size)
{
	unsigned long long pos = (unsigned long long)block * block_size;
	if (pos / block_size != block ||
	    (off_t)pos < 0 ||
	    (off_t)pos != pos)
		exit_err("seek position overflow");
	if (fseeko(f, pos, SEEK_SET)) {
		perror("fseek");
		exit(2);
	}
}

static off_t verity_position_at_level(off_t block, int level)
{
	return block >> (level * hash_per_block_bits);
}

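/*
 * Worked example (numbers are illustrative, not from the code): with
 * 4096-byte hash blocks and SHA-256 (32-byte digests), 128 digests fit in
 * one hash block, so hash_per_block_bits = 7. For 1000000 data blocks this
 * yields levels = 3 with per-level sizes 7813 -> 62 -> 1 (the root block).
 */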
static void calculate_positions(void)
{
	unsigned long long hash_position;
	int i;

	digest_size_bits = 0;
	while (1 << digest_size_bits < digest_size)
		digest_size_bits++;
	hash_per_block_bits = 0;
	while (((hash_block_size / digest_size) >> hash_per_block_bits) > 1)
		hash_per_block_bits++;
	if (!hash_per_block_bits)
		exit_err("at least two hashes must fit in a hash file block");
	levels = 0;

	if (data_file_blocks) {
		while (hash_per_block_bits * levels < 64 &&
		       (unsigned long long)(data_file_blocks - 1) >>
		       (hash_per_block_bits * levels))
			levels++;
	}

	if (levels > DM_VERITY_MAX_LEVELS) exit_err("too many tree levels");

	hash_position = hash_start * 512 / hash_block_size;
	for (i = levels - 1; i >= 0; i--) {
		off_t s;
		hash_level_block[i] = hash_position;
		s = verity_position_at_level(data_file_blocks, i);
		s = (s >> hash_per_block_bits) +
		    !!(s & ((1 << hash_per_block_bits) - 1));
		hash_level_size[i] = s;
		if (hash_position + s < hash_position ||
		    (off_t)(hash_position + s) < 0 ||
		    (off_t)(hash_position + s) != hash_position + s)
			exit_err("hash device offset overflow");
		hash_position += s;
	}
	used_hash_blocks = hash_position;
}

static void create_or_verify_zero(FILE *wr, unsigned char *left_block, unsigned left_bytes)
{
	if (left_bytes) {
		if (mode != MODE_CREATE) {
			unsigned x;
			if (fread(left_block, left_bytes, 1, wr) != 1)
				stream_err(wr, "read");
			for (x = 0; x < left_bytes; x++) if (left_block[x]) {
				retval = 1;
				fprintf(stderr, "spare area is not zeroed at position %lld\n", (long long)ftello(wr) - left_bytes);
			}
		} else {
			if (fwrite(left_block, left_bytes, 1, wr) != 1)
				stream_err(wr, "write");
		}
	}
}

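/*
 * Read "blocks" blocks of "block_size" bytes from "rd", hash each block
 * (salt first for format 1, salt last for format 0) and either write the
 * digests to "wr" (create mode) or compare them with what "wr" already
 * contains (verify mode). For format 1 each digest is zero-padded to a
 * power-of-two slot; in both formats the tail of each hash block is
 * zero-padded. A NULL "wr" is only used on a single block, to leave the
 * root digest in "calculated_digest".
 */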
static void create_or_verify_stream(FILE *rd, FILE *wr, int block_size, off_t blocks)
{
	unsigned char *left_block = xmalloc(hash_block_size);
	unsigned char *data_buffer = xmalloc(block_size);
	unsigned char *read_digest = mode != MODE_CREATE ? xmalloc(digest_size) : NULL;
	off_t blocks_to_write = (blocks >> hash_per_block_bits) +
				!!(blocks & ((1 << hash_per_block_bits) - 1));
	EVP_MD_CTX ctx;
	EVP_MD_CTX_init(&ctx);
	memset(left_block, 0, hash_block_size);
	while (blocks_to_write--) {
		unsigned x;
		unsigned left_bytes = hash_block_size;
		for (x = 0; x < 1 << hash_per_block_bits; x++) {
			if (!blocks)
				break;
			blocks--;
			if (fread(data_buffer, block_size, 1, rd) != 1)
				stream_err(rd, "read");
			if (EVP_DigestInit_ex(&ctx, evp, NULL) != 1)
				exit_err("EVP_DigestInit_ex failed");
			if (version >= 1) {
				if (EVP_DigestUpdate(&ctx, salt_bytes, salt_size) != 1)
					exit_err("EVP_DigestUpdate failed");
			}
			if (EVP_DigestUpdate(&ctx, data_buffer, block_size) != 1)
				exit_err("EVP_DigestUpdate failed");
			if (!version) {
				if (EVP_DigestUpdate(&ctx, salt_bytes, salt_size) != 1)
					exit_err("EVP_DigestUpdate failed");
			}
			if (EVP_DigestFinal_ex(&ctx, calculated_digest, NULL) != 1)
				exit_err("EVP_DigestFinal_ex failed");
			if (!wr)
				break;
			if (mode != MODE_CREATE) {
				if (fread(read_digest, digest_size, 1, wr) != 1)
					stream_err(wr, "read");
				if (memcmp(read_digest, calculated_digest, digest_size)) {
					retval = 1;
					fprintf(stderr, "verification failed at position %lld in %s file\n", (long long)ftello(rd) - block_size, rd == data_file ? "data" : "metadata");
				}
			} else {
				if (fwrite(calculated_digest, digest_size, 1, wr) != 1)
					stream_err(wr, "write");
			}
			if (!version) {
				left_bytes -= digest_size;
			} else {
				create_or_verify_zero(wr, left_block, (1 << digest_size_bits) - digest_size);
				left_bytes -= 1 << digest_size_bits;
			}
		}
		if (wr)
			create_or_verify_zero(wr, left_block, left_bytes);
	}
	if (mode == MODE_CREATE && wr) {
		if (fflush(wr)) {
			perror("fflush");
			exit(1);
		}
		if (ferror(wr)) {
			stream_err(wr, "write");
		}
	}
	if (EVP_MD_CTX_cleanup(&ctx) != 1)
		exit_err("EVP_MD_CTX_cleanup failed");
	free(left_block);
	free(data_buffer);
	if (mode != MODE_CREATE) free(read_digest);
}

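/*
 * Build the dmsetup table as a NULL-terminated vector of strings:
 *   0 <size in sectors> verity <version> <data dev> <hash dev>
 *   <data block size> <hash block size> <#data blocks>
 *   <hash start block> <algorithm> <root digest> <salt or "-">
 * which matches the target parameter list documented at the top of
 * dm-verity.c.
 */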
char **make_target_line(void)
{
	const int line_elements = 14;
	char **line = xmalloc(line_elements * sizeof(char *));
	int i = 0;
	char *algorithm_copy = xstrdup(hash_algorithm);
		/* transform ripemdXXX to rmdXXX */
	if (!strncmp(algorithm_copy, "ripemd", 6))
		memmove(algorithm_copy + 1, algorithm_copy + 4, strlen(algorithm_copy + 4) + 1);
	line[i++] = xstrdup("0");
	line[i++] = xprint((unsigned long long)data_file_blocks * data_block_size / 512);
	line[i++] = xstrdup("verity");
	line[i++] = xprint(version);
	line[i++] = xstrdup(data_device);
	line[i++] = xstrdup(hash_device);
	line[i++] = xprint(data_block_size);
	line[i++] = xprint(hash_block_size);
	line[i++] = xprint(data_file_blocks);
	line[i++] = xprint(hash_start * 512 / hash_block_size);
	line[i++] = algorithm_copy;
	line[i++] = xhexprint(calculated_digest, digest_size);
	line[i++] = !salt_size ? xstrdup("-") : xhexprint(salt_bytes, salt_size);
	line[i++] = NULL;
	if (i > line_elements) exit_err("INTERNAL ERROR: insufficient array size");
	return line;
}

void free_target_line(char **line)
{
	int i;
	for (i = 0; line[i]; i++)
		free(line[i]);
	free(line);
}

static void create_or_verify(void)
{
	int i;
	if (mode != MODE_ACTIVATE) for (i = 0; i < levels; i++) {
		block_fseek(hash_file, hash_level_block[i], hash_block_size);
		if (!i) {
			block_fseek(data_file, 0, data_block_size);
			create_or_verify_stream(data_file, hash_file, data_block_size, data_file_blocks);
		} else {
			FILE *hash_file_2 = fopen(hash_device, "r");
			if (!hash_file_2) {
				perror(hash_device);
				exit(2);
			}
			block_fseek(hash_file_2, hash_level_block[i - 1], hash_block_size);
			create_or_verify_stream(hash_file_2, hash_file, hash_block_size, hash_level_size[i - 1]);
			fclose(hash_file_2);
		}
	}

	if (levels) {
		block_fseek(hash_file, hash_level_block[levels - 1], hash_block_size);
		create_or_verify_stream(hash_file, NULL, hash_block_size, 1);
	} else {
		block_fseek(data_file, 0, data_block_size);
		create_or_verify_stream(data_file, NULL, data_block_size, data_file_blocks);
	}

	if (mode != MODE_CREATE) {
		if (memcmp(calculated_digest, root_hash_bytes, digest_size)) {
			fprintf(stderr, "verification failed in the root block\n");
			retval = 1;
		}
		if (!retval && mode == MODE_VERIFY)
			fprintf(stderr, "hash successfully verified\n");
	} else {
		char **target_line;
		char *p;
		if (fsync(fileno(hash_file))) {
			perror("fsync");
			exit(1);
		}
		printf("hash device size: %llu\n", (unsigned long long)used_hash_blocks * hash_block_size);
		printf("data block size %u, hash block size %u, %u tree levels\n", data_block_size, hash_block_size, levels);
		if (salt_size) p = xhexprint(salt_bytes, salt_size);
		else p = xstrdup("-");
		printf("salt: %s\n", p);
		free(p);
		p = xhexprint(calculated_digest, digest_size);
		printf("root hash: %s\n", p);
		free(p);
		printf("target line:");
		target_line = make_target_line();
		for (i = 0; target_line[i]; i++)
			printf(" %s", target_line[i]);
		free_target_line(target_line);
		printf("\n");
	}
}

static void activate(void)
{
	int i;
	size_t len = 1;
	char *table_arg;
	char **target_line = make_target_line();
	for (i = 0; target_line[i]; i++) {
		if (i) len++;
		len += strlen(target_line[i]);
	}
	table_arg = xmalloc(len);
	table_arg[0] = 0;
	for (i = 0; target_line[i]; i++) {
		if (i) strcat(table_arg, " ");
		strcat(table_arg, target_line[i]);
	}
	free_target_line(target_line);
	execlp("dmsetup", "dmsetup", "-r", "create", dm_device, "--table", table_arg, NULL);
	perror("dmsetup");
	exit(2);
}

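/*
 * Parse a hex string of exactly len * 2 digits into len bytes.
 * (c & 15) + (c > '9' ? 9 : 0) maps '0'-'9', 'a'-'f' and 'A'-'F' to their
 * nibble value; even string positions fill the high nibble of each output
 * byte.
 */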
static void get_hex(const char *string, unsigned char **result, size_t len, const char *description)
{
	size_t rl = strlen(string);
	unsigned u;
	if (strspn(string, "0123456789ABCDEFabcdef") != rl)
		exit_err("invalid %s", description);
	if (rl != len * 2)
		exit_err("invalid length of %s", description);
	*result = xmalloc(len);
	memset(*result, 0, len);
	for (u = 0; u < rl; u++) {
		unsigned char c = (string[u] & 15) + (string[u] > '9' ? 9 : 0);
		(*result)[u / 2] |= c << (((u & 1) ^ 1) << 2);
	}
}

static struct superblock superblock;

static void load_superblock(void)
{
	long long sb_data_blocks;

	block_fseek(hash_file, superblock_position, 1);
	if (fread(&superblock, sizeof(struct superblock), 1, hash_file) != 1)
		stream_err(hash_file, "read");
	if (memcmp(superblock.signature, DM_VERITY_SIGNATURE, sizeof(superblock.signature)))
		exit_err("superblock not found on the hash device");
	if (superblock.version > MAX_FORMAT_VERSION)
		exit_err("unknown version");
	if (superblock.data_block_bits < 9 || superblock.data_block_bits >= 31)
		exit_err("invalid data_block_bits in the superblock");
	if (superblock.hash_block_bits < 9 || superblock.hash_block_bits >= 31)
		exit_err("invalid data_block_bits in the superblock");
	sb_data_blocks = ((unsigned long long)ntohl(superblock.data_blocks_hi) << 31 << 1) | ntohl(superblock.data_blocks_lo);
	if (sb_data_blocks < 0 || (off_t)sb_data_blocks < 0 || (off_t)sb_data_blocks != sb_data_blocks)
		exit_err("invalid data blocks in the superblock");
	if (!memchr(superblock.algorithm, 0, sizeof(superblock.algorithm)))
		exit_err("invalid hash algorithm in the superblock");
	if (ntohs(superblock.salt_size) > MAX_SALT_SIZE)
		exit_err("invalid salt_size in the superblock");

	if (version == -1) {
		version = superblock.version;
	} else {
		if (version != superblock.version)
			exit_err("version (%d) does not match superblock value (%d)", version, superblock.version);
	}

	if (!data_block_size) {
		data_block_size = 1 << superblock.data_block_bits;
	} else {
		if (data_block_size != 1 << superblock.data_block_bits)
			exit_err("data block size (%d) does not match superblock value (%d)", data_block_size, 1 << superblock.data_block_bits);
	}

	if (!hash_block_size) {
		hash_block_size = 1 << superblock.hash_block_bits;
	} else {
		if (hash_block_size != 1 << superblock.hash_block_bits)
			exit_err("hash block size (%d) does not match superblock value (%d)", hash_block_size, 1 << superblock.hash_block_bits);
	}

	if (!data_blocks) {
		data_blocks = sb_data_blocks;
	} else {
		if (data_blocks != sb_data_blocks)
			exit_err("data blocks (%lld) does not match superblock value (%lld)", data_blocks, sb_data_blocks);
	}

	if (!hash_algorithm) {
		hash_algorithm = (char *)superblock.algorithm;
	} else {
		if (strcmp(hash_algorithm, (char *)superblock.algorithm))
			exit_err("hash algorithm (%s) does not match superblock value (%s)", hash_algorithm, superblock.algorithm);
	}

	if (!salt_bytes) {
		salt_size = ntohs(superblock.salt_size);
		salt_bytes = xmalloc(salt_size);
		memcpy(salt_bytes, superblock.salt, salt_size);
	} else {
		if (salt_size != ntohs(superblock.salt_size) ||
		    memcmp(salt_bytes, superblock.salt, salt_size))
			exit_err("salt does not match superblock value");
	}
}

static void save_superblock(void)
{
	memset(&superblock, 0, sizeof(struct superblock));

	memcpy(&superblock.signature, DM_VERITY_SIGNATURE, sizeof(superblock.signature));
	superblock.version = version;
	superblock.data_block_bits = ffs(data_block_size) - 1;
	superblock.hash_block_bits = ffs(hash_block_size) - 1;
	superblock.salt_size = htons(salt_size);
	superblock.data_blocks_hi = htonl(data_blocks >> 31 >> 1);
	superblock.data_blocks_lo = htonl(data_blocks & 0xFFFFFFFF);
	strncpy((char *)superblock.algorithm, hash_algorithm, sizeof superblock.algorithm);
	memcpy(superblock.salt, salt_bytes, salt_size);

	block_fseek(hash_file, superblock_position, 1);
	if (fwrite(&superblock, sizeof(struct superblock), 1, hash_file) != 1)
		stream_err(hash_file, "write");
}

int main(int argc, const char **argv)
{
	poptContext popt_context;
	int r;
	const char *s;

	if (sizeof(struct superblock) != 512)
		exit_err("INTERNAL ERROR: bad superblock size %d", (int)sizeof(struct superblock));

	OpenSSL_add_all_digests();

	popt_context = poptGetContext("verity", argc, argv, popt_options, 0);

	poptSetOtherOptionHelp(popt_context, "[-c | -v | -a] [<device name> if activating] <data device> <hash device> [<root hash> if activating or verifying] [OPTION...]");

	if (argc <= 1) {
		poptPrintHelp(popt_context, stdout, 0);
		exit(1);
	}

	r = poptGetNextOpt(popt_context);
	if (r < -1) exit_err("bad option %s", poptBadOption(popt_context, 0));

	if (mode < 0) exit_err("verify, create or activate mode not specified");

	if (mode == MODE_ACTIVATE) {
		dm_device = poptGetArg(popt_context);
		if (!dm_device) exit_err("device name is missing");
		if (!*dm_device || strchr(dm_device, '/')) exit_err("invalid device name");
	}

	data_device = poptGetArg(popt_context);
	if (!data_device) exit_err("data device is missing");

	hash_device = poptGetArg(popt_context);
	if (!hash_device) exit_err("metadata device is missing");

	if (mode != MODE_CREATE) {
		root_hash = poptGetArg(popt_context);
		if (!root_hash) exit_err("root hash not specified");
	}

	s = poptGetArg(popt_context);
	if (s) exit_err("extra argument %s", s);

	data_file = fopen(data_device, "r");
	if (!data_file) {
		perror(data_device);
		exit(2);
	}

	hash_file = fopen(hash_device, mode != MODE_CREATE ? "r" : "r+");
	if (!hash_file && errno == ENOENT && mode == MODE_CREATE)
		hash_file = fopen(hash_device, "w+");
	if (!hash_file) {
		perror(hash_device);
		exit(2);
	}

	if (hash_start < 0 ||
	   (unsigned long long)hash_start * 512 / 512 != hash_start ||
	   (off_t)(hash_start * 512) < 0 ||
	   (off_t)(hash_start * 512) != hash_start * 512) exit_err("invalid hash start");

	if (salt_string || !use_superblock) {
		if (!salt_string || !strcmp(salt_string, "-"))
			salt_string = "";
		salt_size = strlen(salt_string) / 2;
		if (salt_size > MAX_SALT_SIZE)
			exit_err("too long salt (max %d bytes)", MAX_SALT_SIZE);
		get_hex(salt_string, &salt_bytes, salt_size, "salt");
	}

	if (use_superblock) {
		superblock_position = hash_start * 512;
		if (mode != MODE_CREATE)
			load_superblock();
	}

	if (version == -1) version = MAX_FORMAT_VERSION;
	if (version < 0 || version > MAX_FORMAT_VERSION)
		exit_err("invalid format version");

	if (!data_block_size) data_block_size = DEFAULT_BLOCK_SIZE;
	if (!hash_block_size) hash_block_size = data_block_size;

	if (data_block_size < 512 || (data_block_size & (data_block_size - 1)) || data_block_size >= 1U << 31)
		exit_err("invalid data block size");

	if (hash_block_size < 512 || (hash_block_size & (hash_block_size - 1)) || hash_block_size >= 1U << 31)
		exit_err("invalid hash block size");

	if (data_blocks < 0 || (off_t)data_blocks < 0 || (off_t)data_blocks != data_blocks) exit_err("invalid number of data blocks");

	data_file_blocks = get_size(data_file, data_device) / data_block_size;
	hash_file_blocks = get_size(hash_file, hash_device) / hash_block_size;

	if (data_file_blocks < data_blocks) exit_err("data file is too small");
	if (data_blocks) {
		data_file_blocks = data_blocks;
	}

	if (use_superblock) {
		hash_start = hash_start + (sizeof(struct superblock) + 511) / 512;
		hash_start = (hash_start + (hash_block_size / 512 - 1)) & ~(long long)(hash_block_size / 512 - 1);
	}

	if ((unsigned long long)hash_start * 512 % hash_block_size) exit_err("hash start not aligned on block size");

	if (!hash_algorithm)
		hash_algorithm = "sha256";
	if (strlen(hash_algorithm) >= sizeof(superblock.algorithm) && use_superblock)
		exit_err("hash algorithm name is too long");
	evp = EVP_get_digestbyname(hash_algorithm);
	if (!evp) exit_err("hash algorithm %s not found", hash_algorithm);
	digest_size = EVP_MD_size(evp);

	if (!salt_bytes) {
		salt_size = DEFAULT_SALT_SIZE;
		salt_bytes = xmalloc(salt_size);
		if (RAND_bytes(salt_bytes, salt_size) != 1)
			exit_err("RAND_bytes failed");
	}

	calculated_digest = xmalloc(digest_size);

	if (mode != MODE_CREATE) {
		get_hex(root_hash, &root_hash_bytes, digest_size, "root_hash");
	}

	calculate_positions();

	create_or_verify();

	if (use_superblock) {
		if (mode == MODE_CREATE)
			save_superblock();
	}

	fclose(data_file);
	fclose(hash_file);

	if (mode == MODE_ACTIVATE && !retval)
		activate();

	free(salt_bytes);
	free(calculated_digest);
	if (mode != MODE_CREATE)
		free(root_hash_bytes);
	poptFreeContext(popt_context);

	return retval;
}

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [dm-devel] [PATCH] dm: remake of the verity target
  2012-03-21 17:08                   ` Mikulas Patocka
  2012-03-21 17:09                     ` Mikulas Patocka
@ 2012-03-22 17:41                     ` Mandeep Singh Baines
  2012-03-22 21:52                       ` Mikulas Patocka
  1 sibling, 1 reply; 34+ messages in thread
From: Mandeep Singh Baines @ 2012-03-22 17:41 UTC (permalink / raw)
  To: Mikulas Patocka
  Cc: Mandeep Singh Baines, device-mapper development,
	Steffen Klassert, Will Drewry, linux-kernel, Elly Jones,
	Olof Johansson, Andrew Morton, Alasdair G Kergon, Milan Broz

Mikulas Patocka (mpatocka@redhat.com) wrote:
> 
> 
> On Tue, 20 Mar 2012, Mandeep Singh Baines wrote:
> 
> > > I can introduce a switch to make it accept the old format. Do you want to?
> > >
> > 
> > We can carry that as an out-of-tree patch until we migrate.
> > 
> > On the other hand, it might be nice to support prepend or append.
> > 
> > I don't have too strong feelings, but it might be nice to have a
> > prepend/append flag. It would definitely make our life easier.
> 
> This is an improved patch that supports both the old format and the new
> format. I checked that it is interoperable with the old Google
> userspace tool and with the original Google kernel driver.
> 

Thanks much for doing this:)

This looks good but would a prepend/append flag be better?

> Mikulas
> 
> ---
> 
> Remake of the google dm-verity patch.
> 
> Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
> 
> ---
>  drivers/md/Kconfig     |   17 
>  drivers/md/Makefile    |    1 
>  drivers/md/dm-verity.c |  876 +++++++++++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 894 insertions(+)
> 
> Index: linux-3.3-fast/drivers/md/Kconfig
> ===================================================================
> --- linux-3.3-fast.orig/drivers/md/Kconfig	2012-03-21 01:17:56.000000000 +0100
> +++ linux-3.3-fast/drivers/md/Kconfig	2012-03-21 01:17:56.000000000 +0100
> @@ -404,4 +404,21 @@ config DM_VERITY2
>  
>            If unsure, say N.
>  
> +config DM_VERITY
> +	tristate "Verity target support"
> +	depends on BLK_DEV_DM
> +	select CRYPTO
> +	select CRYPTO_HASH
> +	select DM_BUFIO
> +	---help---
> +	  This device-mapper target allows you to create a device that
> +	  transparently checks the integrity of the data on it. You'll need to
> +	  activate the digests you're going to use in the cryptoapi
> +	  configuration.
> +
> +	  To compile this code as a module, choose M here: the module will
> +	  be called dm-verity.
> +
> +	  If unsure, say N.
> +
>  endif # MD
> Index: linux-3.3-fast/drivers/md/Makefile
> ===================================================================
> --- linux-3.3-fast.orig/drivers/md/Makefile	2012-03-21 01:17:56.000000000 +0100
> +++ linux-3.3-fast/drivers/md/Makefile	2012-03-21 01:17:56.000000000 +0100
> @@ -29,6 +29,7 @@ obj-$(CONFIG_MD_FAULTY)		+= faulty.o
>  obj-$(CONFIG_BLK_DEV_MD)	+= md-mod.o
>  obj-$(CONFIG_BLK_DEV_DM)	+= dm-mod.o
>  obj-$(CONFIG_DM_BUFIO)		+= dm-bufio.o
> +obj-$(CONFIG_DM_VERITY)		+= dm-verity.o
>  obj-$(CONFIG_DM_CRYPT)		+= dm-crypt.o
>  obj-$(CONFIG_DM_DELAY)		+= dm-delay.o
>  obj-$(CONFIG_DM_FLAKEY)		+= dm-flakey.o
> Index: linux-3.3-fast/drivers/md/dm-verity.c
> ===================================================================
> --- /dev/null	1970-01-01 00:00:00.000000000 +0000
> +++ linux-3.3-fast/drivers/md/dm-verity.c	2012-03-21 18:01:11.000000000 +0100
> @@ -0,0 +1,876 @@
> +/*
> + * Copyright (C) 2012 Red Hat, Inc.
> + *
> + * Author: Mikulas Patocka <mpatocka@redhat.com>
> + *
> + * Based on Chromium dm-verity driver (C) 2011 The Chromium OS Authors
> + *
> + * This file is released under the GPLv2.
> + *
> + * Device mapper target parameters:
> + *	<version>		(0 - original Google's format, 1 - new format)
> + *	<data device>
> + *	<hash device>
> + *	<data block size>
> + *	<hash block size>
> + *	<the number of data blocks>
> + *	<hash start block>
> + *	<algorithm>
> + *	<digest>
> + *	<salt>			(hex bytes or "-" for no salt)
> + *
> + * In the file "/sys/module/dm_verity/parameters/prefetch_cluster" you can set
> + * the default prefetch value. Data are read from the hash device in
> + * "prefetch_cluster" chunks. Prefetching greatly improves performance when the
> + * data and the hashes are on the same disk, in different partitions, on
> + * devices with poor random access behavior.
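> + *
> + * For example (illustrative), as root:
> + *	echo 1048576 >/sys/module/dm_verity/parameters/prefetch_cluster
> + * would raise the prefetch cluster to 1 MiB.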
> + */
> +
> +#include <linux/module.h>
> +#include <linux/device-mapper.h>
> +#include <crypto/hash.h>
> +#include "dm-bufio.h"
> +
> +#define DM_MSG_PREFIX			"verity"
> +
> +#define DM_VERITY_IO_VEC_INLINE		16
> +#define DM_VERITY_MEMPOOL_SIZE		4
> +#define DM_VERITY_DEFAULT_PREFETCH_SIZE	262144
> +
> +#define DM_VERITY_MAX_LEVELS		63
> +
> +static unsigned prefetch_cluster = DM_VERITY_DEFAULT_PREFETCH_SIZE;
> +
> +module_param_named(prefetch_cluster, prefetch_cluster, uint, S_IRUGO | S_IWUSR);
> +
> +struct dm_verity {
> +	struct dm_dev *data_dev;
> +	struct dm_dev *hash_dev;
> +	struct dm_target *ti;
> +	struct dm_bufio_client *bufio;
> +	char *alg_name;
> +	struct crypto_shash *tfm;
> +	u8 *root_digest;	/* digest of the root block */
> +	u8 *salt;		/* salt, its size is salt_size */
> +	unsigned salt_size;
> +	sector_t data_start;	/* data offset in 512-byte sectors */
> +	sector_t hash_start;	/* hash start in blocks */
> +	sector_t data_blocks;	/* the number of data blocks */
> +	sector_t hash_blocks;	/* the number of hash blocks */
> +	unsigned char data_dev_block_bits;	/* log2(data blocksize) */
> +	unsigned char hash_dev_block_bits;	/* log2(hash blocksize) */
> +	unsigned char hash_per_block_bits;	/* log2(hashes in hash block) */
> +	unsigned char levels;	/* the number of tree levels */
> +	unsigned char version;
> +	unsigned digest_size;	/* digest size for the current hash algorithm */
> +	unsigned shash_descsize;/* the size of temporary space for crypto */
> +
> +	mempool_t *io_mempool;	/* mempool of struct dm_verity_io */
> +	mempool_t *vec_mempool;	/* mempool of bio vector */
> +
> +	struct workqueue_struct *verify_wq;
> +
> +	/* starting blocks for each tree level. 0 is the lowest level. */
> +	sector_t hash_level_block[DM_VERITY_MAX_LEVELS];
> +};
> +
> +struct dm_verity_io {
> +	struct dm_verity *v;
> +	struct bio *bio;
> +
> +	/* original values of bio->bi_end_io and bio->bi_private */
> +	bio_end_io_t *orig_bi_end_io;
> +	void *orig_bi_private;
> +
> +	sector_t block;
> +	unsigned n_blocks;
> +
> +	/* saved bio vector */
> +	struct bio_vec *io_vec;
> +	unsigned io_vec_size;
> +
> +	struct work_struct work;
> +
> +	/* a space for short vectors; longer vectors are allocated separately */
> +	struct bio_vec io_vec_inline[DM_VERITY_IO_VEC_INLINE];
> +
> +	/* variable-size fields, accessible with functions
> +		io_hash_desc, io_real_digest, io_want_digest */
> +	/* u8 hash_desc[v->shash_descsize]; */
> +	/* u8 real_digest[v->digest_size]; */
> +	/* u8 want_digest[v->digest_size]; */
> +};
> +
> +static struct shash_desc *io_hash_desc(struct dm_verity *v, struct dm_verity_io *io)
> +{
> +	return (struct shash_desc *)(io + 1);
> +}
> +
> +static u8 *io_real_digest(struct dm_verity *v, struct dm_verity_io *io)
> +{
> +	return (u8 *)(io + 1) + v->shash_descsize;
> +}
> +
> +static u8 *io_want_digest(struct dm_verity *v, struct dm_verity_io *io)
> +{
> +	return (u8 *)(io + 1) + v->shash_descsize + v->digest_size;
> +}
> +
> +/*
> + * Auxiliary structure appended to each dm-bufio buffer. If the value
> + * hash_verified is nonzero, the hash of the block has been verified.
> + *
> + * The variable hash_verified is set to 0 when allocating the buffer, then
> + * it can be changed to 1 and it is never reset to 0 again.
> + *
> + * There is no lock around this value; a race condition can at worst cause
> + * multiple processes to verify the hash of the same buffer simultaneously
> + * and all of them to write 1 to hash_verified.
> + * This condition is harmless, so we don't need locking.
> + */
> +struct buffer_aux {
> +	int hash_verified;
> +};
> +
> +/*
> + * Initialize struct buffer_aux for a freshly created buffer.
> + */
> +static void dm_bufio_alloc_callback(struct dm_buffer *buf)
> +{
> +	struct buffer_aux *aux = dm_bufio_get_aux_data(buf);
> +	aux->hash_verified = 0;
> +}
> +
> +/*
> + * Translate input sector number to the sector number on the target device.
> + */
> +static sector_t verity_map_sector(struct dm_verity *v, sector_t bi_sector)
> +{
> +	return v->data_start + dm_target_offset(v->ti, bi_sector);
> +}
> +
> +/*
> + * Return hash position of a specified block at a specified tree level
> + * (0 is the lowest level).
> + * The lowest "hash_per_block_bits"-bits of the result denote hash position
> + * inside a hash block. The remaining bits denote location of the hash block.
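> + *
> + * Worked example (illustrative): with 4096-byte hash blocks and sha256,
> + * hash_per_block_bits is 7 (128 hashes per block), so data block 1000 at
> + * level 0 maps to hash block hash_level_block[0] + (1000 >> 7) = +7, at
> + * index 1000 & 127 = 104 within that block.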
> + */
> +static sector_t verity_position_at_level(struct dm_verity *v, sector_t block,
> +					 int level)
> +{
> +	return block >> (level * v->hash_per_block_bits);
> +}
> +
> +static void verity_hash_at_level(struct dm_verity *v, sector_t block, int level,
> +				 sector_t *hash_block, unsigned *offset)
> +{
> +	sector_t position = verity_position_at_level(v, block, level);
> +
> +	*hash_block = v->hash_level_block[level] + (position >> v->hash_per_block_bits);
> +	if (offset) {
> +		unsigned idx = position & ((1 << v->hash_per_block_bits) - 1);
> +		if (!v->version)
> +			*offset = idx * v->digest_size;
> +		else
> +			*offset = idx << (v->hash_dev_block_bits - v->hash_per_block_bits);
> +	}
> +}
> +
> +/*
> + * Verify hash of a metadata block pertaining to the specified data block
> + * ("block" argument) at a specified level ("level" argument).
> + *
> + * On successful return, io_want_digest(v, io) contains the hash value for
> + * a lower tree level or for the data block (if we're at the lowest level).
> + *
> + * If "skip_unverified" is true, an unverified buffer is skipped and 1 is returned.
> + * If "skip_unverified" is false, an unverified buffer is hashed and verified
> + * against the current value of io_want_digest(v, io).
> + */
> +static int verity_verify_level(struct dm_verity_io *io, sector_t block,
> +			       int level, bool skip_unverified)
> +{
> +	struct dm_verity *v = io->v;
> +	struct dm_buffer *buf;
> +	struct buffer_aux *aux;
> +	u8 *data;
> +	int r;
> +	sector_t hash_block;
> +	unsigned offset;
> +
> +	verity_hash_at_level(v, block, level, &hash_block, &offset);
> +
> +	data = dm_bufio_read(v->bufio, hash_block, &buf);
> +	if (unlikely(IS_ERR(data)))
> +		return PTR_ERR(data);
> +
> +	aux = dm_bufio_get_aux_data(buf);
> +
> +	if (!aux->hash_verified) {
> +		struct shash_desc *desc;
> +		u8 *result;
> +
> +		if (skip_unverified) {
> +			r = 1;
> +			goto release_ret_r;
> +		}
> +
> +		desc = io_hash_desc(v, io);
> +		desc->tfm = v->tfm;
> +		desc->flags = CRYPTO_TFM_REQ_MAY_SLEEP;
> +		r = crypto_shash_init(desc);
> +		if (r < 0) {
> +			DMERR("crypto_shash_init failed: %d", r);
> +			goto release_ret_r;
> +		}
> +
> +		if (likely(v->version >= 1)) {
> +			r = crypto_shash_update(desc, v->salt, v->salt_size);
> +			if (r < 0) {
> +				DMERR("crypto_shash_update failed: %d", r);
> +				goto release_ret_r;
> +			}
> +		}
> +
> +		r = crypto_shash_update(desc, data, 1 << v->hash_dev_block_bits);
> +		if (r < 0) {
> +			DMERR("crypto_shash_update failed: %d", r);
> +			goto release_ret_r;
> +		}
> +
> +		if (!v->version) {
> +			r = crypto_shash_update(desc, v->salt, v->salt_size);
> +			if (r < 0) {
> +				DMERR("crypto_shash_update failed: %d", r);
> +				goto release_ret_r;
> +			}
> +		}
> +
> +		result = io_real_digest(v, io);
> +		r = crypto_shash_final(desc, result);
> +		if (r < 0) {
> +			DMERR("crypto_shash_final failed: %d", r);
> +			goto release_ret_r;
> +		}
> +		if (unlikely(memcmp(result, io_want_digest(v, io), v->digest_size))) {
> +			DMERR_LIMIT("metadata block %llu is corrupted",
> +				(unsigned long long)hash_block);
> +			r = -EIO;
> +			goto release_ret_r;
> +		} else
> +			aux->hash_verified = 1;
> +	}
> +
> +	data += offset;
> +
> +	memcpy(io_want_digest(v, io), data, v->digest_size);
> +
> +	dm_bufio_release(buf);
> +	return 0;
> +
> +release_ret_r:
> +	dm_bufio_release(buf);
> +	return r;
> +}
> +
> +/*
> + * Verify one "dm_verity_io" structure.
> + */
> +static int verity_verify_io(struct dm_verity_io *io)
> +{
> +	struct dm_verity *v = io->v;
> +	unsigned b;
> +	int i;
> +	unsigned vector = 0, offset = 0;
> +	for (b = 0; b < io->n_blocks; b++) {
> +		struct shash_desc *desc;
> +		u8 *result;
> +		int r;
> +		unsigned todo;
> +
> +		if (likely(v->levels)) {
> +			/*
> +			 * First, we try to get the requested hash for
> +			 * the current block. If the hash block itself is
> +			 * verified, zero is returned. If it isn't, this
> +			 * function returns 1 and we fall back to whole
> +			 * chain verification.
> +			 */
> +			int r = verity_verify_level(io, io->block + b, 0, true);
> +			if (likely(!r))
> +				goto test_block_hash;
> +			if (r < 0)
> +				return r;
> +		}
> +
> +		memcpy(io_want_digest(v, io), v->root_digest, v->digest_size);
> +
> +		for (i = v->levels - 1; i >= 0; i--) {
> +			int r = verity_verify_level(io, io->block + b, i, false);
> +			if (unlikely(r))
> +				return r;
> +		}
> +
> +test_block_hash:
> +		desc = io_hash_desc(v, io);
> +		desc->tfm = v->tfm;
> +		desc->flags = CRYPTO_TFM_REQ_MAY_SLEEP;
> +		r = crypto_shash_init(desc);
> +		if (r < 0) {
> +			DMERR("crypto_shash_init failed: %d", r);
> +			return r;
> +		}
> +
> +		if (likely(v->version >= 1)) {
> +			r = crypto_shash_update(desc, v->salt, v->salt_size);
> +			if (r < 0) {
> +				DMERR("crypto_shash_update failed: %d", r);
> +				return r;
> +			}
> +		}
> +
> +		todo = 1 << v->data_dev_block_bits;
> +		do {
> +			struct bio_vec *bv;
> +			u8 *page;
> +			unsigned len;
> +
> +			BUG_ON(vector >= io->io_vec_size);
> +			bv = &io->io_vec[vector];
> +			page = kmap_atomic(bv->bv_page, KM_USER0);
> +			len = bv->bv_len - offset;
> +			if (likely(len >= todo))
> +				len = todo;
> +			r = crypto_shash_update(desc,
> +					page + bv->bv_offset + offset, len);
> +			kunmap_atomic(page, KM_USER0);
> +			if (r < 0) {
> +				DMERR("crypto_shash_update failed: %d", r);
> +				return r;
> +			}
> +			offset += len;
> +			if (likely(offset == bv->bv_len)) {
> +				offset = 0;
> +				vector++;
> +			}
> +			todo -= len;
> +		} while (todo);
> +
> +		if (!v->version) {
> +			r = crypto_shash_update(desc, v->salt, v->salt_size);
> +			if (r < 0) {
> +				DMERR("crypto_shash_update failed: %d", r);
> +				return r;
> +			}
> +		}
> +
> +		result = io_real_digest(v, io);
> +		r = crypto_shash_final(desc, result);
> +		if (r < 0) {
> +			DMERR("crypto_shash_final failed: %d", r);
> +			return r;
> +		}
> +		if (unlikely(memcmp(result, io_want_digest(v, io), v->digest_size))) {
> +			DMERR_LIMIT("data block %llu is corrupted",
> +				(unsigned long long)(io->block + b));
> +			return -EIO;
> +		}
> +	}
> +	BUG_ON(vector != io->io_vec_size);
> +	BUG_ON(offset);
> +	return 0;
> +}
> +
> +/*
> + * End one "io" structure with a given error.
> + */
> +static void verity_finish_io(struct dm_verity_io *io, int error)
> +{
> +	struct bio *bio = io->bio;
> +	struct dm_verity *v = io->v;
> +
> +	bio->bi_end_io = io->orig_bi_end_io;
> +	bio->bi_private = io->orig_bi_private;
> +
> +	if (io->io_vec != io->io_vec_inline)
> +		mempool_free(io->io_vec, v->vec_mempool);
> +	mempool_free(io, v->io_mempool);
> +
> +	bio_endio(bio, error);
> +}
> +
> +static void verity_work(struct work_struct *w)
> +{
> +	struct dm_verity_io *io = container_of(w, struct dm_verity_io, work);
> +
> +	verity_finish_io(io, verity_verify_io(io));
> +}
> +
> +static void verity_end_io(struct bio *bio, int error)
> +{
> +	struct dm_verity_io *io = bio->bi_private;
> +	if (error) {
> +		verity_finish_io(io, error);
> +		return;
> +	}
> +
> +	INIT_WORK(&io->work, verity_work);
> +	queue_work(io->v->verify_wq, &io->work);
> +}
> +
> +/*
> + * Prefetch buffers for the specified io.
> + * The root buffer is not prefetched, it is assumed that it will be cached
> + * all the time.
> + */
> +static void verity_prefetch_io(struct dm_verity *v, struct dm_verity_io *io)
> +{
> +	int i;
> +	for (i = v->levels - 2; i >= 0; i--) {
> +		sector_t hash_block_start;
> +		sector_t hash_block_end;
> +		verity_hash_at_level(v, io->block, i, &hash_block_start, NULL);
> +		verity_hash_at_level(v, io->block + io->n_blocks - 1, i, &hash_block_end, NULL);
> +		if (!i) {
> +			unsigned cluster = *(volatile unsigned *)&prefetch_cluster;
> +			cluster >>= v->data_dev_block_bits;
> +			if (unlikely(!cluster))
> +				goto no_prefetch_cluster;
> +			if (unlikely(cluster & (cluster - 1)))
> +				cluster = 1 << (fls(cluster) - 1);
> +
> +			hash_block_start &= ~(sector_t)(cluster - 1);
> +			hash_block_end |= cluster - 1;
> +			if (unlikely(hash_block_end >= v->hash_blocks))
> +				hash_block_end = v->hash_blocks - 1;
> +		}
> +no_prefetch_cluster:
> +		dm_bufio_prefetch(v->bufio, hash_block_start,
> +					hash_block_end - hash_block_start + 1);
> +	}
> +}
> +
> +/*
> + * Bio map function. It allocates dm_verity_io structure and bio vector and
> + * fills them. Then it issues prefetches and the I/O.
> + */
> +static int verity_map(struct dm_target *ti, struct bio *bio,
> +		      union map_info *map_context)
> +{
> +	struct dm_verity *v = ti->private;
> +	struct dm_verity_io *io;
> +
> +	bio->bi_bdev = v->data_dev->bdev;
> +	bio->bi_sector = verity_map_sector(v, bio->bi_sector);
> +
> +	if (((unsigned)bio->bi_sector | bio_sectors(bio)) &
> +	    ((1 << (v->data_dev_block_bits - SECTOR_SHIFT)) - 1)) {
> +		DMERR_LIMIT("unaligned io");
> +		return -EIO;
> +	}
> +
> +	if ((bio->bi_sector + bio_sectors(bio)) >>
> +	    (v->data_dev_block_bits - SECTOR_SHIFT) > v->data_blocks) {
> +		DMERR_LIMIT("io out of range");
> +		return -EIO;
> +	}
> +
> +	if (bio_data_dir(bio) == WRITE)
> +		return -EIO;
> +
> +	io = mempool_alloc(v->io_mempool, GFP_NOIO);
> +	io->v = v;
> +	io->bio = bio;
> +	io->orig_bi_end_io = bio->bi_end_io;
> +	io->orig_bi_private = bio->bi_private;
> +	io->block = bio->bi_sector >> (v->data_dev_block_bits - SECTOR_SHIFT);
> +	io->n_blocks = bio->bi_size >> v->data_dev_block_bits;
> +
> +	bio->bi_end_io = verity_end_io;
> +	bio->bi_private = io;
> +	io->io_vec_size = bio->bi_vcnt - bio->bi_idx;
> +	if (io->io_vec_size < DM_VERITY_IO_VEC_INLINE)
> +		io->io_vec = io->io_vec_inline;
> +	else
> +		io->io_vec = mempool_alloc(v->vec_mempool, GFP_NOIO);
> +	memcpy(io->io_vec, bio_iovec(bio),
> +	       io->io_vec_size * sizeof(struct bio_vec));
> +
> +	verity_prefetch_io(v, io);
> +
> +	generic_make_request(bio);
> +
> +	return DM_MAPIO_SUBMITTED;
> +}
> +
> +static int verity_status(struct dm_target *ti, status_type_t type,
> +			 char *result, unsigned maxlen)
> +{
> +	struct dm_verity *v = ti->private;
> +	unsigned sz = 0;
> +	unsigned x;
> +
> +	switch (type) {
> +	case STATUSTYPE_INFO:
> +		result[0] = 0;
> +		break;
> +	case STATUSTYPE_TABLE:
> +		DMEMIT("%u %s %s %u %u %llu %llu %s ",
> +			v->version,
> +			v->data_dev->name,
> +			v->hash_dev->name,
> +			1 << v->data_dev_block_bits,
> +			1 << v->hash_dev_block_bits,
> +			(unsigned long long)v->data_blocks,
> +			(unsigned long long)v->hash_start,
> +			v->alg_name
> +			);
> +		for (x = 0; x < v->digest_size; x++)
> +			DMEMIT("%02x", v->root_digest[x]);
> +		DMEMIT(" ");
> +		if (!v->salt_size)
> +			DMEMIT("-");
> +		else
> +			for (x = 0; x < v->salt_size; x++)
> +				DMEMIT("%02x", v->salt[x]);
> +		break;
> +	}
> +	return 0;
> +}
> +
> +static int verity_ioctl(struct dm_target *ti, unsigned cmd,
> +			unsigned long arg)
> +{
> +	struct dm_verity *v = ti->private;
> +	int r = 0;
> +
> +	if (v->data_start ||
> +	    ti->len != i_size_read(v->data_dev->bdev->bd_inode) >> SECTOR_SHIFT)
> +		r = scsi_verify_blk_ioctl(NULL, cmd);
> +
> +	return r ? : __blkdev_driver_ioctl(v->data_dev->bdev, v->data_dev->mode,
> +				     cmd, arg);
> +}
> +
> +static int verity_merge(struct dm_target *ti, struct bvec_merge_data *bvm,
> +			struct bio_vec *biovec, int max_size)
> +{
> +	struct dm_verity *v = ti->private;
> +	struct request_queue *q = bdev_get_queue(v->data_dev->bdev);
> +
> +	if (!q->merge_bvec_fn)
> +		return max_size;
> +
> +	bvm->bi_bdev = v->data_dev->bdev;
> +	bvm->bi_sector = verity_map_sector(v, bvm->bi_sector);
> +
> +	return min(max_size, q->merge_bvec_fn(q, bvm, biovec));
> +}
> +
> +static int verity_iterate_devices(struct dm_target *ti,
> +				  iterate_devices_callout_fn fn, void *data)
> +{
> +	struct dm_verity *v = ti->private;
> +	return fn(ti, v->data_dev, v->data_start, ti->len, data);
> +}
> +
> +static void verity_io_hints(struct dm_target *ti, struct queue_limits *limits)
> +{
> +	struct dm_verity *v = ti->private;
> +
> +	if (limits->logical_block_size < 1 << v->data_dev_block_bits)
> +		limits->logical_block_size = 1 << v->data_dev_block_bits;
> +	if (limits->physical_block_size < 1 << v->data_dev_block_bits)
> +		limits->physical_block_size = 1 << v->data_dev_block_bits;
> +	blk_limits_io_min(limits, limits->logical_block_size);
> +}
> +
> +static void verity_dtr(struct dm_target *ti);
> +
> +static int verity_ctr(struct dm_target *ti, unsigned argc, char **argv)
> +{
> +	struct dm_verity *v;
> +	unsigned num;
> +	unsigned long long num_ll;
> +	int r;
> +	int i;
> +	sector_t hash_position;
> +	char dummy;
> +
> +	v = kzalloc(sizeof(struct dm_verity), GFP_KERNEL);
> +	if (!v) {
> +		ti->error = "Cannot allocate verity structure";
> +		return -ENOMEM;
> +	}
> +	ti->private = v;
> +	v->ti = ti;
> +
> +	if ((dm_table_get_mode(ti->table) & ~FMODE_READ) != 0) {
> +		ti->error = "Device must be readonly";
> +		r = -EINVAL;
> +		goto bad;
> +	}
> +
> +	if (argc != 10) {
> +		ti->error = "Invalid argument count";
> +		r = -EINVAL;
> +		goto bad;
> +	}
> +
> +	if (sscanf(argv[0], "%u%c", &num, &dummy) != 1 ||
> +	    num > 1) {
> +		ti->error = "Invalid version";
> +		r = -EINVAL;
> +		goto bad;
> +	}
> +	v->version = num;
> +
> +	r = dm_get_device(ti, argv[1], FMODE_READ, &v->data_dev);
> +	if (r) {
> +		ti->error = "Data device lookup failed";
> +		goto bad;
> +	}
> +
> +	r = dm_get_device(ti, argv[2], FMODE_READ, &v->hash_dev);
> +	if (r) {
> +		ti->error = "Hash device lookup failed";
> +		goto bad;
> +	}
> +
> +	if (sscanf(argv[3], "%u%c", &num, &dummy) != 1 ||
> +	    !num || (num & (num - 1)) ||
> +	    num < bdev_logical_block_size(v->data_dev->bdev) ||
> +	    num > PAGE_SIZE) {
> +		ti->error = "Invalid data device block size";
> +		r = -EINVAL;
> +		goto bad;
> +	}
> +	v->data_dev_block_bits = ffs(num) - 1;
> +
> +	if (sscanf(argv[4], "%u%c", &num, &dummy) != 1 ||
> +	    !num || (num & (num - 1)) ||
> +	    num < bdev_logical_block_size(v->hash_dev->bdev) ||
> +	    num > INT_MAX) {
> +		ti->error = "Invalid hash device block size";
> +		r = -EINVAL;
> +		goto bad;
> +	}
> +	v->hash_dev_block_bits = ffs(num) - 1;
> +
> +	if (sscanf(argv[5], "%llu%c", &num_ll, &dummy) != 1 ||
> +	    num_ll << (v->data_dev_block_bits - SECTOR_SHIFT) !=
> +	    (sector_t)num_ll << (v->data_dev_block_bits - SECTOR_SHIFT)) {
> +		ti->error = "Invalid data blocks";
> +		r = -EINVAL;
> +		goto bad;
> +	}
> +	v->data_blocks = num_ll;
> +
> +	if (ti->len > (v->data_blocks << (v->data_dev_block_bits - SECTOR_SHIFT))) {
> +		ti->error = "Data device is too small";
> +		r = -EINVAL;
> +		goto bad;
> +	}
> +
> +	if (sscanf(argv[6], "%llu%c", &num_ll, &dummy) != 1 ||
> +	    num_ll << (v->hash_dev_block_bits - SECTOR_SHIFT) !=
> +	    (sector_t)num_ll << (v->hash_dev_block_bits - SECTOR_SHIFT)) {
> +		ti->error = "Invalid hash start";
> +		r = -EINVAL;
> +		goto bad;
> +	}
> +	v->hash_start = num_ll;
> +
> +	v->alg_name = kstrdup(argv[7], GFP_KERNEL);
> +	if (!v->alg_name) {
> +		ti->error = "Cannot allocate algorithm name";
> +		r = -ENOMEM;
> +		goto bad;
> +	}
> +
> +	v->tfm = crypto_alloc_shash(v->alg_name, 0, 0);
> +	if (IS_ERR(v->tfm)) {
> +		ti->error = "Cannot initialize hash function";
> +		r = PTR_ERR(v->tfm);
> +		v->tfm = NULL;
> +		goto bad;
> +	}
> +	v->digest_size = crypto_shash_digestsize(v->tfm);
> +	if ((1 << v->hash_dev_block_bits) < v->digest_size * 2) {
> +		ti->error = "Digest size too big";
> +		r = -EINVAL;
> +		goto bad;
> +	}
> +	v->shash_descsize =
> +		sizeof(struct shash_desc) + crypto_shash_descsize(v->tfm);
> +
> +	v->root_digest = kmalloc(v->digest_size, GFP_KERNEL);
> +	if (!v->root_digest) {
> +		ti->error = "Cannot allocate root digest";
> +		r = -ENOMEM;
> +		goto bad;
> +	}
> +	if (strlen(argv[8]) != v->digest_size * 2 ||
> +	    hex2bin(v->root_digest, argv[8], v->digest_size)) {
> +		ti->error = "Invalid root digest";
> +		r = -EINVAL;
> +		goto bad;
> +	}
> +
> +	if (strcmp(argv[9], "-")) {
> +		v->salt_size = strlen(argv[9]) / 2;
> +		v->salt = kmalloc(v->salt_size, GFP_KERNEL);
> +		if (!v->salt) {
> +			ti->error = "Cannot allocate salt";
> +			r = -ENOMEM;
> +			goto bad;
> +		}
> +		if (strlen(argv[9]) != v->salt_size * 2 ||
> +		    hex2bin(v->salt, argv[9], v->salt_size)) {
> +			ti->error = "Invalid salt";
> +			r = -EINVAL;
> +			goto bad;
> +		}
> +	}
> +
> +	v->hash_per_block_bits =
> +		fls((1 << v->hash_dev_block_bits) / v->digest_size) - 1;
> +
> +	v->levels = 0;
> +	if (v->data_blocks)
> +		while (v->hash_per_block_bits * v->levels < 64 &&
> +		       (unsigned long long)(v->data_blocks - 1) >>
> +		       (v->hash_per_block_bits * v->levels))
> +			v->levels++;
> +
> +	if (v->levels > DM_VERITY_MAX_LEVELS) {
> +		ti->error = "Too many tree levels";
> +		r = -E2BIG;
> +		goto bad;
> +	}
> +
> +	hash_position = v->hash_start;
> +	for (i = v->levels - 1; i >= 0; i--) {
> +		sector_t s;
> +		v->hash_level_block[i] = hash_position;
> +		s = verity_position_at_level(v, v->data_blocks, i);
> +		s = (s >> v->hash_per_block_bits) +
> +		    !!(s & ((1 << v->hash_per_block_bits) - 1));
> +		if (hash_position + s < hash_position) {
> +			ti->error = "Hash device offset overflow";
> +			r = -E2BIG;
> +			goto bad;
> +		}
> +		hash_position += s;
> +	}
> +	v->hash_blocks = hash_position;
> +
> +	v->bufio = dm_bufio_client_create(v->hash_dev->bdev,
> +		1 << v->hash_dev_block_bits, 1, sizeof(struct buffer_aux),
> +		dm_bufio_alloc_callback, NULL);
> +	if (IS_ERR(v->bufio)) {
> +		ti->error = "Cannot initialize dm-bufio";
> +		r = PTR_ERR(v->bufio);
> +		v->bufio = NULL;
> +		goto bad;
> +	}
> +
> +	if (dm_bufio_get_device_size(v->bufio) < v->hash_blocks) {
> +		ti->error = "Hash device is too small";
> +		r = -E2BIG;
> +		goto bad;
> +	}
> +
> +	v->io_mempool = mempool_create_kmalloc_pool(DM_VERITY_MEMPOOL_SIZE,
> +	  sizeof(struct dm_verity_io) + v->shash_descsize + v->digest_size * 2);
> +	if (!v->io_mempool) {
> +		ti->error = "Cannot allocate io mempool";
> +		r = -ENOMEM;
> +		goto bad;
> +	}
> +
> +	v->vec_mempool = mempool_create_kmalloc_pool(DM_VERITY_MEMPOOL_SIZE,
> +					BIO_MAX_PAGES * sizeof(struct bio_vec));
> +	if (!v->vec_mempool) {
> +		ti->error = "Cannot allocate vector mempool";
> +		r = -ENOMEM;
> +		goto bad;
> +	}
> +
> +	/*v->verify_wq = alloc_workqueue("verityd", WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM, 1);*/
> +	/* WQ_UNBOUND greatly improves performance when running on ramdisk */
> +	v->verify_wq = alloc_workqueue("verityd", WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM | WQ_UNBOUND, num_online_cpus());
> +	if (!v->verify_wq) {
> +		ti->error = "Cannot allocate workqueue";
> +		r = -ENOMEM;
> +		goto bad;
> +	}
> +
> +	return 0;
> +
> +bad:
> +	verity_dtr(ti);
> +	return r;
> +}
> +
> +static void verity_dtr(struct dm_target *ti)
> +{
> +	struct dm_verity *v = ti->private;
> +
> +	if (v->verify_wq)
> +		destroy_workqueue(v->verify_wq);
> +	if (v->vec_mempool)
> +		mempool_destroy(v->vec_mempool);
> +	if (v->io_mempool)
> +		mempool_destroy(v->io_mempool);
> +	if (v->bufio)
> +		dm_bufio_client_destroy(v->bufio);
> +	kfree(v->salt);
> +	kfree(v->root_digest);
> +	if (v->tfm)
> +		crypto_free_shash(v->tfm);
> +	kfree(v->alg_name);
> +	if (v->hash_dev)
> +		dm_put_device(ti, v->hash_dev);
> +	if (v->data_dev)
> +		dm_put_device(ti, v->data_dev);
> +	kfree(v);
> +}
> +
> +static struct target_type verity_target = {
> +	.name		= "verity",
> +	.version	= {1, 0, 0},
> +	.module		= THIS_MODULE,
> +	.ctr		= verity_ctr,
> +	.dtr		= verity_dtr,
> +	.map		= verity_map,
> +	.status		= verity_status,
> +	.ioctl		= verity_ioctl,
> +	.merge		= verity_merge,
> +	.iterate_devices = verity_iterate_devices,
> +	.io_hints	= verity_io_hints,
> +};
> +
> +static int __init dm_verity_init(void)
> +{
> +	int r;
> +	r = dm_register_target(&verity_target);
> +	if (r < 0)
> +		DMERR("register failed %d", r);
> +	return r;
> +}
> +
> +static void __exit dm_verity_exit(void)
> +{
> +	dm_unregister_target(&verity_target);
> +}
> +
> +module_init(dm_verity_init);
> +module_exit(dm_verity_exit);
> +
> +MODULE_AUTHOR("Mikulas Patocka <mpatocka@redhat.com>");
> +MODULE_AUTHOR("Mandeep Baines <msb@chromium.org>");
> +MODULE_AUTHOR("Will Drewry <wad@chromium.org>");
> +MODULE_DESCRIPTION(DM_NAME " target for transparent disk integrity checking");
> +MODULE_LICENSE("GPL");
> +

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [dm-devel] [PATCH] dm: remake of the verity target
  2012-03-22 17:41                     ` Mandeep Singh Baines
@ 2012-03-22 21:52                       ` Mikulas Patocka
  2012-03-23  3:15                         ` Mandeep Singh Baines
  0 siblings, 1 reply; 34+ messages in thread
From: Mikulas Patocka @ 2012-03-22 21:52 UTC (permalink / raw)
  To: Mandeep Singh Baines
  Cc: device-mapper development, Steffen Klassert, Will Drewry,
	linux-kernel, Elly Jones, Olof Johansson, Andrew Morton,
	Alasdair G Kergon, Milan Broz



On Thu, 22 Mar 2012, Mandeep Singh Baines wrote:

> Mikulas Patocka (mpatocka@redhat.com) wrote:
> > 
> > This is an improved patch that supports both the old format and the new
> > format. I checked that it is interoperable with the old Google
> > userspace tool and with the original Google kernel driver.
> > 
> 
> Thanks much for doing this:)
> 
> This looks good but would a prepend/append flag be better?

It does more than changing prepend/append salt. I changed alignment in the 
new format so that it doesn't have to use a multiply instruction.

In the old format, if digest size is not a power of two, all digests are 
placed first and the rest of the block is padded with zeros. In the new 
format, each digest is padded with zeros to a power of two.

For example, when using sha1, the old format padding looks like this:
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaabbbbbbbbbbbbbbbbbbbbbbbb
bbbbbbbbbbbbbbbbccccccccccccccccccccccccccccccccccccccccdddddddd
dddddddddddddddddddddddddddddddd00000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000

... and the new format padding looks like this:
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa000000000000000000000000
bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb000000000000000000000000
cccccccccccccccccccccccccccccccccccccccc000000000000000000000000
dddddddddddddddddddddddddddddddddddddddd000000000000000000000000

The version "0" (first argument in the target line) actually means the old 
style padding and the salt is hashed after the data. The version "1" means 
new style padding and the salt is hashed before the data. If someone comes 
with another format, we can use version "2" for it, etc.
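
To make the difference concrete, here is a minimal sketch (an
illustration, not code from the patch) of the byte offset of digest
number idx within a hash block under the two layouts:

static unsigned digest_offset(unsigned idx, unsigned digest_size,
			      unsigned digest_size_bits, int version)
{
	if (!version)
		return idx * digest_size;	/* version 0: packed, needs a multiply */
	return idx << digest_size_bits;		/* version 1: padded to 1 << digest_size_bits */
}

For sha1 (20-byte digests padded to 32, digest_size_bits = 5), digest 3
sits at byte 60 in the old format and at byte 96 in the new one.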

Mikulas

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [dm-devel] [PATCH] dm: remake of the verity target
  2012-03-22 21:52                       ` Mikulas Patocka
@ 2012-03-23  3:15                         ` Mandeep Singh Baines
  2012-03-24  3:48                           ` Mikulas Patocka
  0 siblings, 1 reply; 34+ messages in thread
From: Mandeep Singh Baines @ 2012-03-23  3:15 UTC (permalink / raw)
  To: Mikulas Patocka
  Cc: Mandeep Singh Baines, device-mapper development,
	Steffen Klassert, Will Drewry, linux-kernel, Elly Jones,
	Olof Johansson, Andrew Morton, Alasdair G Kergon, Milan Broz,
	taysom

Mikulas Patocka (mpatocka@redhat.com) wrote:
> 
> 
> On Thu, 22 Mar 2012, Mandeep Singh Baines wrote:
> 
> > Mikulas Patocka (mpatocka@redhat.com) wrote:
> > > 
> > > This is an improved patch that supports both the old format and the new
> > > format. I checked that it is interoperable with the old Google
> > > userspace tool and with the original Google kernel driver.
> > > 
> > 
> > Thanks much for doing this:)
> > 
> > This looks good but would a prepend/append flag be better?
> 
> It does more than changing prepend/append salt. I changed alignment in the 
> new format so that it doesn't have to use a multiply instruction.
> 
> In the old format, if digest size is not a power of two, all digests are 
> placed first and the rest of the block is padded with zeros. In the new 
> format, each digest is padded with zeros to a power of two.
> 
> For example, when using sha1, the old format padding looks like this:
> aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaabbbbbbbbbbbbbbbbbbbbbbbb
> bbbbbbbbbbbbbbbbccccccccccccccccccccccccccccccccccccccccdddddddd
> dddddddddddddddddddddddddddddddd00000000000000000000000000000000
> 0000000000000000000000000000000000000000000000000000000000000000
> 
> ... and the new format padding looks like this:
> aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa000000000000000000000000
> bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb000000000000000000000000
> cccccccccccccccccccccccccccccccccccccccc000000000000000000000000
> dddddddddddddddddddddddddddddddddddddddd000000000000000000000000
> 
> The version "0" (first argument in the target line) actually means the old 
> style padding and the salt is hashed after the data. The version "1" means 
> new style padding and the salt is hashed before the data. If someone comes 
> with another format, we can use version "2" for it, etc.
> 

+cc taysom

Makes sense.

Signed-off-by: Mandeep Singh Baines <msb@chromium.org>

Speaking of V2, one idea a colleague of mine (taysom) just had was to
drop the power-of-2 alignment. For SHA-1, this shrinks the tree by 37.5%.
You have to replace the shifts with divides, but the reduction in I/O
more than makes up for it. For the different levels, you could
pre-calculate the divisor.
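
A quick back-of-the-envelope sketch of where the 37.5% comes from
(illustrative numbers, not from the patch): with 4096-byte hash blocks,

	per_block_padded = 4096 / 32;	/* sha1 padded to 32 bytes: 128 hashes, shifts */
	per_block_packed = 4096 / 20;	/* sha1 unpadded: 204 hashes, divides */

so 12 of every 32 bytes (37.5%) of the bottom tree level is padding.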

Regards,
Mandeep

> Mikulas

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [dm-devel] [PATCH] dm: remake of the verity target
  2012-03-23  3:15                         ` Mandeep Singh Baines
@ 2012-03-24  3:48                           ` Mikulas Patocka
  0 siblings, 0 replies; 34+ messages in thread
From: Mikulas Patocka @ 2012-03-24  3:48 UTC (permalink / raw)
  To: Mandeep Singh Baines
  Cc: device-mapper development, Steffen Klassert, Will Drewry,
	linux-kernel, Elly Jones, Olof Johansson, Andrew Morton,
	Alasdair G Kergon, Milan Broz, taysom



On Thu, 22 Mar 2012, Mandeep Singh Baines wrote:

> Mikulas Patocka (mpatocka@redhat.com) wrote:
> > 
> > 
> > On Thu, 22 Mar 2012, Mandeep Singh Baines wrote:
> > 
> > > Mikulas Patocka (mpatocka@redhat.com) wrote:
> > > > 
> > > > This is an improved patch that supports both the old format and the new
> > > > format. I checked that it is interoperable with the old Google
> > > > userspace tool and with the original Google kernel driver.
> > > > 
> > > 
> > > Thanks much for doing this:)
> > > 
> > > This looks good but would a prepend/append flag be better?
> > 
> > It does more than changing prepend/append salt. I changed alignment in the 
> > new format so that it doesn't have to use a multiply instruction.
> > 
> > In the old format, if digest size is not a power of two, all digests are 
> > placed first and the rest of the block is padded with zeros. In the new 
> > format, each digest is padded with zeros to a power of two.
> > 
> > For example, when using sha1, the old format padding looks like this:
> > aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaabbbbbbbbbbbbbbbbbbbbbbbb
> > bbbbbbbbbbbbbbbbccccccccccccccccccccccccccccccccccccccccdddddddd
> > dddddddddddddddddddddddddddddddd00000000000000000000000000000000
> > 0000000000000000000000000000000000000000000000000000000000000000
> > 
> > ... and the new format padding looks like this:
> > aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa000000000000000000000000
> > bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb000000000000000000000000
> > cccccccccccccccccccccccccccccccccccccccc000000000000000000000000
> > dddddddddddddddddddddddddddddddddddddddd000000000000000000000000
> > 
> > The version "0" (first argument in the target line) actually means the old 
> > style padding and the salt is hashed after the data. The version "1" means 
> > new style padding and the salt is hashed before the data. If someone comes 
> > with another format, we can use version "2" for it, etc.
> > 
> 
> +cc taysom
> 
> Makes sense.
> 
> Signed-off-by: Mandeep Singh Baines <msb@chromium.org>
> 
> Speaking of V2, one idea a colleague of mine (taysom) just had was to
> drop the power-of-2 alignment. For SHA-1, this shrinks the tree by 37.5%.
> You have to replace the shifts with divides, but the reduction in I/O
> more than makes up for it. For the different levels, you could
> pre-calculate the divisor.
> 
> Regards,
> Mandeep
> 
> > Mikulas
> 

BTW, here I'm sending you the new veritysetup tool. The old one that I
sent before has a bug: it stores zero as the device size in the
superblock (and ignores it when reading it back), so don't use that old
tool; use this instead.

It is already integrated in lvm2 cvs.

Mikulas

---

/*
 * veritysetup
 *
 * (C) 2012 Red Hat Inc.
 *
 * This copyrighted material is made available to anyone wishing to use,
 * modify, copy, or redistribute it subject to the terms and conditions
 * of the GNU General Public License v.2.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software Foundation,
 * Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
 */

/*
 * Compile flags to use a specific crypto library:
 * openssl: -lpopt -DCRYPT_OPENSSL -lcrypto
 * nss: -lpopt -DCRYPT_NSS -I/usr/include/nspr/ -I/usr/include/nss -lnss3
 * gcrypt: -lpopt -DCRYPT_GCRYPT -lgcrypt -lgpg-error
 */
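
/*
 * Example build (assuming this file is saved as veritysetup.c):
 *	cc -O2 -o veritysetup veritysetup.c -lpopt -DCRYPT_OPENSSL -lcrypto
 */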

#define _FILE_OFFSET_BITS	64

#ifdef HAVE_CONFIG_H
#include "configure.h"
#endif

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <stdarg.h>
#include <unistd.h>
#include <errno.h>
#include <string.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/ioctl.h>
#include <sys/mount.h>
#include <arpa/inet.h>
#include <popt.h>
#ifdef CRYPT_OPENSSL
#include <openssl/evp.h>
#include <openssl/rand.h>
#endif
#ifdef CRYPT_GCRYPT
#include <gcrypt.h>
#endif
#ifdef CRYPT_NSS
#include <nss.h>
#include <sechash.h>
#include <pk11pub.h>
#endif

#if !defined(CRYPT_OPENSSL) && !defined(CRYPT_GCRYPT) && !defined(CRYPT_NSS)
#error No crypto engine specified
#endif

#define DEFAULT_BLOCK_SIZE	4096
#define DM_VERITY_MAX_LEVELS	63

#define DEFAULT_SALT_SIZE	32
#define MAX_SALT_SIZE		384

#define MODE_VERIFY	0
#define MODE_CREATE	1
#define MODE_ACTIVATE	2

#define MAX_FORMAT_VERSION	1

static int mode = -1;
static int use_superblock = 1;

static const char *dm_device;
static const char *data_device;
static const char *hash_device;
static const char *hash_algorithm = NULL;
static const char *root_hash;

static int version = -1;
static int data_block_size = 0;
static int hash_block_size = 0;
static char *data_blocks_string = NULL;
static long long data_blocks = 0;
static char *hash_start_string = NULL;
static long long hash_start = 0;
static const char *salt_string = NULL;

static FILE *data_file;
static FILE *hash_file;

static off_t data_file_blocks;
static off_t hash_file_blocks;
static off_t used_hash_blocks;

static unsigned char *root_hash_bytes;
static unsigned char *calculated_digest;

static unsigned char *salt_bytes;
static unsigned salt_size;

static unsigned digest_size;
static unsigned char digest_size_bits;
static unsigned char levels;
static unsigned char hash_per_block_bits;

static off_t hash_level_block[DM_VERITY_MAX_LEVELS];
static off_t hash_level_size[DM_VERITY_MAX_LEVELS];

static off_t superblock_position;

static int retval = 0;

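/*
 * On-disk superblock: the field layout adds up to exactly 512 bytes, and
 * multi-byte fields are stored big-endian (they go through htons/htonl
 * when the superblock is written).
 */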
struct superblock {
	uint8_t signature[8];
	uint8_t version;
	uint8_t data_block_bits;
	uint8_t hash_block_bits;
	uint8_t pad1[1];
	uint16_t salt_size;
	uint8_t pad2[2];
	uint32_t data_blocks_hi;
	uint32_t data_blocks_lo;
	uint8_t algorithm[16];
	uint8_t salt[MAX_SALT_SIZE];
	uint8_t pad3[88];
};

#define DM_VERITY_SIGNATURE	"verity\0\0"
#define DM_VERITY_VERSION	0

#if defined(__GNUC__) && __GNUC__ >= 2
	__attribute__((__noreturn__))
#endif
static void help(poptContext popt_context,
		enum poptCallbackReason reason,
		struct poptOption *key,
		const char *arg,
		void *data)
{
	if (!strcmp(key->longName, "help")) {
		poptPrintHelp(popt_context, stdout, 0);
	} else {
		printf("veritysetup");
#ifdef DM_LIB_VERSION
		printf(", device mapper version %s", DM_LIB_VERSION);
#endif
		printf("\n");
	}
	exit(0);
}

static struct poptOption popt_help_options[] = {
	{ NULL,			0,	POPT_ARG_CALLBACK, help, 0, NULL, NULL },
	{ "help",		'h',	POPT_ARG_NONE, NULL, 0, "Show help", NULL },
	{ "version",		0,	POPT_ARG_NONE, NULL, 0, "Show version", NULL },
	POPT_TABLEEND
};

static struct poptOption popt_options[] = {
	{ NULL,			'\0', POPT_ARG_INCLUDE_TABLE, popt_help_options, 0, NULL, NULL },
	{ "create",		'c',	POPT_ARG_VAL, &mode, MODE_CREATE, "Create hash", NULL },
	{ "verify",		'v',	POPT_ARG_VAL, &mode, MODE_VERIFY, "Verify integrity", NULL },
	{ "activate",		'a',	POPT_ARG_VAL, &mode, MODE_ACTIVATE, "Activate the device", NULL },
	{ "no-superblock",	0,	POPT_ARG_VAL, &use_superblock, 0, "Do not create/use superblock", NULL },
	{ "format",		0,	POPT_ARG_INT, &version, 0, "Format version (0 - original Google code, 1 - new format)", "number" },
	{ "data-block-size",	0, 	POPT_ARG_INT, &data_block_size, 0, "Block size on the data device", "bytes" },
	{ "hash-block-size",	0, 	POPT_ARG_INT, &hash_block_size, 0, "Block size on the hash device", "bytes" },
	{ "data-blocks",	0,	POPT_ARG_STRING, &data_blocks_string, 0, "The number of blocks in the data file", "blocks" },
	{ "hash-start",		0,	POPT_ARG_STRING, &hash_start_string, 0, "Starting block on the hash device", "512-byte sectors" },
	{ "algorithm",		0,	POPT_ARG_STRING, &hash_algorithm, 0, "Hash algorithm (default sha256)", "string" },
	{ "salt",		0,	POPT_ARG_STRING, &salt_string, 0, "Salt", "hex string" },
	POPT_TABLEEND
};

#if defined(__GNUC__) && __GNUC__ >= 2
	__attribute__((__format__(__printf__, 1, 2), __noreturn__))
#endif
static void exit_err(const char *msg, ...)
{
	va_list args;
	va_start(args, msg);
	vfprintf(stderr, msg, args);
	va_end(args);
	fputc('\n', stderr);
	exit(2);
}

#if defined(__GNUC__) && __GNUC__ >= 2
	__attribute__((__noreturn__))
#endif
static void stream_err(FILE *f, const char *msg)
{
	if (ferror(f)) {
		perror(msg);
		exit(2);
	} else if (feof(f)) {
		exit_err("eof on %s", msg);
	} else {
		exit_err("unknown error on %s", msg);
	}
}

static void *xmalloc(size_t s)
{
	void *ptr = malloc(!s ? 1 : s);
	if (!ptr) exit_err("out of memory");
	return ptr;
}

static char *xstrdup(const char *str)
{
	return strcpy(xmalloc(strlen(str) + 1), str);
}

static char *xprint(unsigned long long num)
{
	size_t s = snprintf(NULL, 0, "%llu", num);
	char *p = xmalloc(s + 1);
	snprintf(p, s + 1, "%llu", num);
	return p;
}

static char *xhexprint(unsigned char *bytes, size_t len)
{
	size_t i;
	char *p = xmalloc(len * 2 + 1);
	p[0] = 0;
	for (i = 0; i < len; i++)
		snprintf(p + i * 2, 3, "%02x", bytes[i]);
	return p;
}

static off_t get_size(FILE *f, const char *name)
{
	struct stat st;
	int h = fileno(f);
	if (h < 0) {
		perror("fileno");
		exit(2);
	}
	if (fstat(h, &st)) {
		perror("fstat");
		exit(2);
	}
	if (S_ISREG(st.st_mode)) {
		return st.st_size;
	} else if (S_ISBLK(st.st_mode)) {
		unsigned long long size64;
		unsigned long sizeul;
		if (!ioctl(h, BLKGETSIZE64, &size64)) {
			return_size64:
			if ((off_t)size64 < 0 || (off_t)size64 != size64) {
				size_overflow:
				exit_err("%s: device size overflow", name);
			}
			return size64;
		}
		if (!ioctl(h, BLKGETSIZE, &sizeul)) {
			size64 = (unsigned long long)sizeul * 512;
			if (size64 / 512 != sizeul) goto size_overflow;
			goto return_size64;
		}
		perror("BLKGETSIZE");
		exit(2);
	} else {
		exit_err("%s is not a file or a block device", name);
	}
	return -1;	/* never reached, shut up warning */
}

static void block_fseek(FILE *f, off_t block, int block_size)
{
	unsigned long long pos = (unsigned long long)block * block_size;
	if (pos / block_size != block ||
	    (off_t)pos < 0 ||
	    (off_t)pos != pos)
		exit_err("seek position overflow");
	if (fseeko(f, pos, SEEK_SET)) {
		perror("fseek");
		exit(2);
	}
}


#ifdef CRYPT_OPENSSL

static const EVP_MD *evp;

static int hash_init(const char *name)
{
	OpenSSL_add_all_digests();
	evp = EVP_get_digestbyname(name);
	if (!evp) return 0;
	return EVP_MD_size(evp);
}

typedef EVP_MD_CTX hash_context;

static void hash_context_init(hash_context *ctx)
{
	EVP_MD_CTX_init(ctx);
}

static void hash_context_reset(hash_context *ctx)
{
	if (EVP_DigestInit_ex(ctx, evp, NULL) != 1)
		exit_err("EVP_DigestInit_ex failed");
}

static void hash_context_update(hash_context *ctx, unsigned char *data, size_t len)
{
	if (EVP_DigestUpdate(ctx, data, len) != 1)
		exit_err("EVP_DigestUpdate failed");
}

static void hash_context_final(hash_context *ctx, unsigned char *digest)
{
	if (EVP_DigestFinal_ex(ctx, digest, NULL) != 1)
		exit_err("EVP_DigestFinal_ex failed");
}

static void hash_context_destroy(hash_context *ctx)
{
	if (EVP_MD_CTX_cleanup(ctx) != 1)
		exit_err("EVP_MD_CTX_cleanup failed");
}

static void crypto_rand_bytes(unsigned char *data, size_t len)
{
	if (RAND_bytes(data, len) != 1)
		exit_err("RAND_bytes failed");
}

#endif


#ifdef CRYPT_GCRYPT

static int gcrypt_id;

static int hash_init(const char *name)
{
	retry:
	gcrypt_id = gcry_md_map_name(name);
	if (!gcrypt_id) {
		if (!strcmp(name, "wp512")) {
			name = "whirlpool";
			goto retry;
		}
		if (!strcmp(name, "rmd160")) {
			name = "ripemd160";
			goto retry;
		}
		return 0;
	}
	return gcry_md_get_algo_dlen(gcrypt_id);
}

typedef gcry_md_hd_t hash_context;

static void hash_context_init(hash_context *ctx)
{
	if (gcry_md_open(ctx, gcrypt_id, 0))
		exit_err("gcry_md_open failed");
}

static void hash_context_reset(hash_context *ctx)
{
	gcry_md_reset(*ctx);
}

static void hash_context_update(hash_context *ctx, unsigned char *data, size_t len)
{
	gcry_md_write(*ctx, data, len);
}

static void hash_context_final(hash_context *ctx, unsigned char *digest)
{
	unsigned char *p = gcry_md_read(*ctx, gcrypt_id);
	memcpy(digest, p, gcry_md_get_algo_dlen(gcrypt_id));
}

static void hash_context_destroy(hash_context *ctx)
{
	gcry_md_close(*ctx);
}

static void crypto_rand_bytes(unsigned char *data, size_t len)
{
	gcry_randomize(data, len, GCRY_STRONG_RANDOM);
}

#endif


#ifdef CRYPT_NSS

static HASH_HashType nss_alg;

static int hash_init(const char *name)
{
	if (NSS_NoDB_Init(NULL) != SECSuccess)
		exit_err("NSS_NoDB_Init failed");
	if (!strcmp(name, "md2")) nss_alg = HASH_AlgMD2;
	else if (!strcmp(name, "md5")) nss_alg = HASH_AlgMD5;
	else if (!strcmp(name, "sha1")) nss_alg = HASH_AlgSHA1;
	else if (!strcmp(name, "sha256")) nss_alg = HASH_AlgSHA256;
	else if (!strcmp(name, "sha384")) nss_alg = HASH_AlgSHA384;
	else if (!strcmp(name, "sha512")) nss_alg = HASH_AlgSHA512;
	else return 0;
	return HASH_ResultLen(nss_alg);
}

typedef HASHContext *hash_context;

static void hash_context_init(hash_context *ctx)
{
	*ctx = HASH_Create(nss_alg);
	if (!*ctx) exit_err("HASH_Create failed");
}

static void hash_context_reset(hash_context *ctx)
{
	HASH_Begin(*ctx);
}

static void hash_context_update(hash_context *ctx, unsigned char *data, size_t len)
{
	HASH_Update(*ctx, data, len);
}

static void hash_context_final(hash_context *ctx, unsigned char *digest)
{
	unsigned result_len;
	HASH_End(*ctx, digest, &result_len, HASH_ResultLen(nss_alg));
}

static void hash_context_destroy(hash_context *ctx)
{
	HASH_Destroy(*ctx);
}

static void crypto_rand_bytes(unsigned char *data, size_t len)
{
	if (PK11_GenerateRandom(data, len) != SECSuccess)
		exit_err("PK11_GenerateRandom failed");
}

#endif


static off_t verity_position_at_level(off_t block, int level)
{
	return block >> (level * hash_per_block_bits);
}

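/*
 * Worked example (illustrative): 1,000,000 data blocks of 4096 bytes with
 * sha256 (hash_per_block_bits = 7, i.e. 128 hashes per block) gives
 * levels = 3 and hash_level_size[] = { 7813, 62, 1 }, lowest level first.
 */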
static void calculate_positions(void)
{
	unsigned long long hash_position;
	int i;

	digest_size_bits = 0;
	while (1 << digest_size_bits < digest_size)
		digest_size_bits++;
	hash_per_block_bits = 0;
	while (((hash_block_size / digest_size) >> hash_per_block_bits) > 1)
		hash_per_block_bits++;
	if (!hash_per_block_bits)
		exit_err("at least two hashes must fit in a hash file block");
	levels = 0;

	if (data_file_blocks) {
		while (hash_per_block_bits * levels < 64 &&
		       (unsigned long long)(data_file_blocks - 1) >>
		       (hash_per_block_bits * levels))
			levels++;
	}

	if (levels > DM_VERITY_MAX_LEVELS) exit_err("too many tree levels");

	hash_position = hash_start * 512 / hash_block_size;
	for (i = levels - 1; i >= 0; i--) {
		off_t s;
		hash_level_block[i] = hash_position;
		s = verity_position_at_level(data_file_blocks, i);
		s = (s >> hash_per_block_bits) +
		    !!(s & ((1 << hash_per_block_bits) - 1));
		hash_level_size[i] = s;
		if (hash_position + s < hash_position ||
		    (off_t)(hash_position + s) < 0 ||
		    (off_t)(hash_position + s) != hash_position + s)
			exit_err("hash device offset overflow");
		hash_position += s;
	}
	used_hash_blocks = hash_position;
}

static void create_or_verify_zero(FILE *wr, unsigned char *left_block, unsigned left_bytes)
{
	if (left_bytes) {
		if (mode != MODE_CREATE) {
			unsigned x;
			if (fread(left_block, left_bytes, 1, wr) != 1)
				stream_err(wr, "read");
			for (x = 0; x < left_bytes; x++) if (left_block[x]) {
				retval = 1;
				fprintf(stderr, "spare area is not zeroed at position %lld\n", (long long)ftello(wr) - left_bytes);
			}
		} else {
			if (fwrite(left_block, left_bytes, 1, wr) != 1)
				stream_err(wr, "write");
		}
	}
}

static void create_or_verify_stream(FILE *rd, FILE *wr, int block_size, off_t blocks)
{
	unsigned char *left_block = xmalloc(hash_block_size);
	unsigned char *data_buffer = xmalloc(block_size);
	unsigned char *read_digest = mode != MODE_CREATE ? xmalloc(digest_size) : NULL;
	off_t blocks_to_write = (blocks >> hash_per_block_bits) +
				!!(blocks & ((1 << hash_per_block_bits) - 1));
	hash_context ctx;
	hash_context_init(&ctx);
	memset(left_block, 0, hash_block_size);
	while (blocks_to_write--) {
		unsigned x;
		unsigned left_bytes = hash_block_size;
		for (x = 0; x < 1 << hash_per_block_bits; x++) {
			if (!blocks)
				break;
			blocks--;
			if (fread(data_buffer, block_size, 1, rd) != 1)
				stream_err(rd, "read");
			hash_context_reset(&ctx);
			if (version >= 1) {
				hash_context_update(&ctx, salt_bytes, salt_size);
			}
			hash_context_update(&ctx, data_buffer, block_size);
			if (!version) {
				hash_context_update(&ctx, salt_bytes, salt_size);
			}
			hash_context_final(&ctx, calculated_digest);
			if (!wr)
				break;
			if (mode != MODE_CREATE) {
				if (fread(read_digest, digest_size, 1, wr) != 1)
					stream_err(wr, "read");
				if (memcmp(read_digest, calculated_digest, digest_size)) {
					retval = 1;
					fprintf(stderr, "verification failed at position %lld in %s file\n", (long long)ftello(rd) - block_size, rd == data_file ? "data" : "metadata");
				}
			} else {
				if (fwrite(calculated_digest, digest_size, 1, wr) != 1)
					stream_err(wr, "write");
			}
			if (!version) {
				left_bytes -= digest_size;
			} else {
				create_or_verify_zero(wr, left_block, (1 << digest_size_bits) - digest_size);
				left_bytes -= 1 << digest_size_bits;
			}
		}
		if (wr)
			create_or_verify_zero(wr, left_block, left_bytes);
	}
	if (mode == MODE_CREATE && wr) {
		if (fflush(wr)) {
			perror("fflush");
			exit(1);
		}
		if (ferror(wr)) {
			stream_err(wr, "write");
		}
	}
	hash_context_destroy(&ctx);
	free(left_block);
	free(data_buffer);
	if (mode != MODE_CREATE) free(read_digest);
}

static char **make_target_line(void)
{
	const int line_elements = 14;
	char **line = xmalloc(line_elements * sizeof(char *));
	int i = 0;
	char *algorithm_copy = xstrdup(hash_algorithm);
	/* transform ripemdXXX to rmdXXX */
	if (!strncmp(algorithm_copy, "ripemd", 6))
		memmove(algorithm_copy + 1, algorithm_copy + 4, strlen(algorithm_copy + 4) + 1);
	if (!strcmp(algorithm_copy, "whirlpool"))
		strcpy(algorithm_copy, "wp512");
	line[i++] = xstrdup("0");
	line[i++] = xprint((unsigned long long)data_file_blocks * data_block_size / 512);
	line[i++] = xstrdup("verity");
	line[i++] = xprint(version);
	line[i++] = xstrdup(data_device);
	line[i++] = xstrdup(hash_device);
	line[i++] = xprint(data_block_size);
	line[i++] = xprint(hash_block_size);
	line[i++] = xprint(data_file_blocks);
	line[i++] = xprint(hash_start * 512 / hash_block_size);
	line[i++] = algorithm_copy;
	line[i++] = xhexprint(calculated_digest, digest_size);
	line[i++] = !salt_size ? xstrdup("-") : xhexprint(salt_bytes, salt_size);
	line[i++] = NULL;
	if (i > line_elements) exit_err("INTERNAL ERROR: insufficient array size");
	return line;
}

static void free_target_line(char **line)
{
	int i;
	for (i = 0; line[i]; i++)
		free(line[i]);
	free(line);
}

static void create_or_verify(void)
{
	int i;
	memset(calculated_digest, 0, digest_size);
	if (mode != MODE_ACTIVATE) for (i = 0; i < levels; i++) {
		block_fseek(hash_file, hash_level_block[i], hash_block_size);
		if (!i) {
			block_fseek(data_file, 0, data_block_size);
			create_or_verify_stream(data_file, hash_file, data_block_size, data_file_blocks);
		} else {
			FILE *hash_file_2 = fopen(hash_device, "r");
			if (!hash_file_2) {
				perror(hash_device);
				exit(2);
			}
			block_fseek(hash_file_2, hash_level_block[i - 1], hash_block_size);
			create_or_verify_stream(hash_file_2, hash_file, hash_block_size, hash_level_size[i - 1]);
			fclose(hash_file_2);
		}
	}

	if (levels) {
		block_fseek(hash_file, hash_level_block[levels - 1], hash_block_size);
		create_or_verify_stream(hash_file, NULL, hash_block_size, 1);
	} else {
		block_fseek(data_file, 0, data_block_size);
		create_or_verify_stream(data_file, NULL, data_block_size, data_file_blocks);
	}

	if (mode != MODE_CREATE) {
		if (memcmp(calculated_digest, root_hash_bytes, digest_size)) {
			fprintf(stderr, "verification failed in the root block\n");
			retval = 1;
		}
		if (!retval && mode == MODE_VERIFY)
			fprintf(stderr, "hash successfully verified\n");
	} else {
		char **target_line;
		char *p;
		if (fsync(fileno(hash_file))) {
			perror("fsync");
			exit(1);
		}
		printf("hash device size: %llu\n", (unsigned long long)used_hash_blocks * hash_block_size);
		printf("data block size %u, hash block size %u, %u tree levels\n", data_block_size, hash_block_size, levels);
		if (salt_size) p = xhexprint(salt_bytes, salt_size);
		else p = xstrdup("-");
		printf("salt: %s\n", p);
		free(p);
		p = xhexprint(calculated_digest, digest_size);
		printf("root hash: %s\n", p);
		free(p);
		printf("target line:");
		target_line = make_target_line();
		for (i = 0; target_line[i]; i++)
			printf(" %s", target_line[i]);
		free_target_line(target_line);
		printf("\n");
	}
}

#if defined(__GNUC__) && __GNUC__ >= 2
	__attribute__((__noreturn__))
#endif
static void activate(void)
{
	int i;
	size_t len = 1;
	char *table_arg;
	char **target_line = make_target_line();
	for (i = 0; target_line[i]; i++) {
		if (i) len++;
		len += strlen(target_line[i]);
	}
	table_arg = xmalloc(len);
	table_arg[0] = 0;
	for (i = 0; target_line[i]; i++) {
		if (i) strcat(table_arg, " ");
		strcat(table_arg, target_line[i]);
	}
	free_target_line(target_line);
	execlp("dmsetup", "dmsetup", "-r", "create", dm_device, "--table", table_arg, NULL);
	perror("dmsetup");
	exit(2);
}
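
/*
 * For illustration (hypothetical devices and sizes), the execlp() above
 * amounts to running:
 *
 *   dmsetup -r create vroot --table "0 2097152 verity 1 /dev/sdb /dev/sdc
 *     4096 4096 262144 1 sha256 <root_hash_hex> <salt_hex>"
 *
 * i.e. a read-only target spanning 2097152 sectors (262144 data blocks
 * of 4096 bytes), with the hash tree starting at hash block 1, after
 * the superblock.
 */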

static void get_hex(const char *string, unsigned char **result, size_t len, const char *description)
{
	size_t rl = strlen(string);
	unsigned u;
	if (strspn(string, "0123456789ABCDEFabcdef") != rl)
		exit_err("invalid %s", description);
	if (rl != len * 2)
		exit_err("invalid length of %s", description);
	*result = xmalloc(len);
	memset(*result, 0, len);
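	/*
	 * Pack two hex digits per byte, high nibble first; the arithmetic
	 * below maps '0'-'9' and 'a'-'f'/'A'-'F' to their 4-bit values.
	 */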
	for (u = 0; u < rl; u++) {
		unsigned char c = (string[u] & 15) + (string[u] > '9' ? 9 : 0);
		(*result)[u / 2] |= c << (((u & 1) ^ 1) << 2);
	}
}

static struct superblock superblock;

static void load_superblock(void)
{
	long long sb_data_blocks;

	block_fseek(hash_file, superblock_position, 1);
	if (fread(&superblock, sizeof(struct superblock), 1, hash_file) != 1)
		stream_err(hash_file, "read");
	if (memcmp(superblock.signature, DM_VERITY_SIGNATURE, sizeof(superblock.signature)))
		exit_err("superblock not found on the hash device");
	if (superblock.version > MAX_FORMAT_VERSION)
		exit_err("unknown version");
	if (superblock.data_block_bits < 9 || superblock.data_block_bits >= 31)
		exit_err("invalid data_block_bits in the superblock");
	if (superblock.hash_block_bits < 9 || superblock.hash_block_bits >= 31)
		exit_err("invalid hash_block_bits in the superblock");
	sb_data_blocks = ((unsigned long long)ntohl(superblock.data_blocks_hi) << 31 << 1) | ntohl(superblock.data_blocks_lo);
	if (sb_data_blocks < 0 || (off_t)sb_data_blocks < 0 || (off_t)sb_data_blocks != sb_data_blocks)
		exit_err("invalid data blocks in the superblock");
	if (!memchr(superblock.algorithm, 0, sizeof(superblock.algorithm)))
		exit_err("invalid hash algorithm in the superblock");
	if (ntohs(superblock.salt_size) > MAX_SALT_SIZE)
		exit_err("invalid salt_size in the superblock");

	if (version == -1) {
		version = superblock.version;
	} else {
		if (version != superblock.version)
			exit_err("version (%d) does not match superblock value (%d)", version, superblock.version);
	}

	if (!data_block_size) {
		data_block_size = 1 << superblock.data_block_bits;
	} else {
		if (data_block_size != 1 << superblock.data_block_bits)
			exit_err("data block size (%d) does not match superblock value (%d)", data_block_size, 1 << superblock.data_block_bits);
	}

	if (!hash_block_size) {
		hash_block_size = 1 << superblock.hash_block_bits;
	} else {
		if (hash_block_size != 1 << superblock.hash_block_bits)
			exit_err("hash block size (%d) does not match superblock value (%d)", hash_block_size, 1 << superblock.hash_block_bits);
	}

	if (!data_blocks_string) {
		data_blocks = sb_data_blocks;
		data_blocks_string = (char *)"";
	} else {
		if (data_blocks != sb_data_blocks)
			exit_err("data blocks (%lld) does not match superblock value (%lld)", data_blocks, sb_data_blocks);
	}

	if (!hash_algorithm) {
		hash_algorithm = (char *)superblock.algorithm;
	} else {
		if (strcmp(hash_algorithm, (char *)superblock.algorithm))
			exit_err("hash algorithm (%s) does not match superblock value (%s)", hash_algorithm, superblock.algorithm);
	}

	if (!salt_bytes) {
		salt_size = ntohs(superblock.salt_size);
		salt_bytes = xmalloc(salt_size);
		memcpy(salt_bytes, superblock.salt, salt_size);
	} else {
		if (salt_size != ntohs(superblock.salt_size) ||
		    memcmp(salt_bytes, superblock.salt, salt_size))
			exit_err("salt does not match superblock value");
	}
}

static void save_superblock(void)
{
	memset(&superblock, 0, sizeof(struct superblock));

	memcpy(&superblock.signature, DM_VERITY_SIGNATURE, sizeof(superblock.signature));
	superblock.version = version;
	superblock.data_block_bits = ffs(data_block_size) - 1;
	superblock.hash_block_bits = ffs(hash_block_size) - 1;
	superblock.salt_size = htons(salt_size);
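	/*
	 * Split the 64-bit block count into 32-bit halves; shifting in two
	 * steps keeps the shift count below the width of the type.
	 */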
	superblock.data_blocks_hi = htonl(data_file_blocks >> 31 >> 1);
	superblock.data_blocks_lo = htonl(data_file_blocks & 0xFFFFFFFF);
	strncpy((char *)superblock.algorithm, hash_algorithm, sizeof superblock.algorithm);
	memcpy(superblock.salt, salt_bytes, salt_size);

	block_fseek(hash_file, superblock_position, 1);
	if (fwrite(&superblock, sizeof(struct superblock), 1, hash_file) != 1)
		stream_err(hash_file, "write");
}

int main(int argc, const char **argv)
{
	poptContext popt_context;
	int r;
	const char *s;
	char *end;

	if (sizeof(struct superblock) != 512)
		exit_err("INTERNAL ERROR: bad superblock size %ld", (long)sizeof(struct superblock));

	popt_context = poptGetContext("verity", argc, argv, popt_options, 0);

	poptSetOtherOptionHelp(popt_context, "[-c | -v | -a] [<device name> if activating] <data device> <hash device> [<root hash> if activating or verifying] [OPTION...]");

	if (argc <= 1) {
		poptPrintHelp(popt_context, stdout, 0);
		exit(1);
	}

	r = poptGetNextOpt(popt_context);
	if (r < -1) exit_err("bad option %s", poptBadOption(popt_context, 0));

	if (mode < 0) exit_err("verify, create or activate mode not specified");

	if (mode == MODE_ACTIVATE) {
		dm_device = poptGetArg(popt_context);
		if (!dm_device) exit_err("device name is missing");
		if (!*dm_device || strchr(dm_device, '/')) exit_err("invalid device name to activate");
	}

	data_device = poptGetArg(popt_context);
	if (!data_device) exit_err("data device is missing");

	hash_device = poptGetArg(popt_context);
	if (!hash_device) exit_err("metadata device is missing");

	if (mode != MODE_CREATE) {
		root_hash = poptGetArg(popt_context);
		if (!root_hash) exit_err("root hash not specified");
	}

	s = poptGetArg(popt_context);
	if (s) exit_err("extra argument %s", s);

	data_file = fopen(data_device, "r");
	if (!data_file) {
		perror(data_device);
		exit(2);
	}

	hash_file = fopen(hash_device, mode != MODE_CREATE ? "r" : "r+");
	if (!hash_file && errno == ENOENT && mode == MODE_CREATE)
		hash_file = fopen(hash_device, "w+");
	if (!hash_file) {
		perror(hash_device);
		exit(2);
	}

	if (data_blocks_string) {
		data_blocks = strtoll(data_blocks_string, &end, 10);
		if (!*data_blocks_string || *end)
			exit_err("invalid data blocks");
	}

	if (hash_start_string) {
		hash_start = strtoll(hash_start_string, &end, 10);
		if (!*hash_start_string || *end)
			exit_err("invalid hash start");
	}

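	/* Reject a hash_start whose byte offset (hash_start * 512) would
	   overflow unsigned long long or off_t. */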
	if (hash_start < 0 ||
	   (unsigned long long)hash_start * 512 / 512 != hash_start ||
	   (off_t)(hash_start * 512) < 0 ||
	   (off_t)(hash_start * 512) != hash_start * 512) exit_err("invalid hash start");

	if (salt_string || !use_superblock) {
		if (!salt_string || !strcmp(salt_string, "-"))
			salt_string = "";
		salt_size = strlen(salt_string) / 2;
		if (salt_size > MAX_SALT_SIZE)
			exit_err("too long salt (max %d bytes)", MAX_SALT_SIZE);
		get_hex(salt_string, &salt_bytes, salt_size, "salt");
	}

	if (use_superblock) {
		superblock_position = hash_start * 512;
		if (mode != MODE_CREATE)
			load_superblock();
	}

	if (version == -1) version = MAX_FORMAT_VERSION;
	if (version < 0 || version > MAX_FORMAT_VERSION)
		exit_err("invalid format version");

	if (!data_block_size) data_block_size = DEFAULT_BLOCK_SIZE;
	if (!hash_block_size) hash_block_size = data_block_size;

	if (data_block_size < 512 || (data_block_size & (data_block_size - 1)) || data_block_size >= 1U << 31)
		exit_err("invalid data block size");

	if (hash_block_size < 512 || (hash_block_size & (hash_block_size - 1)) || hash_block_size >= 1U << 31)
		exit_err("invalid hash block size");

	if (data_blocks < 0 || (off_t)data_blocks < 0 || (off_t)data_blocks != data_blocks) exit_err("invalid number of data blocks");

	data_file_blocks = get_size(data_file, data_device) / data_block_size;
	hash_file_blocks = get_size(hash_file, hash_device) / hash_block_size;

	if (data_file_blocks < data_blocks) exit_err("data file is too small");
	if (data_blocks_string) {
		data_file_blocks = data_blocks;
	}

	if (use_superblock) {
		hash_start = hash_start + (sizeof(struct superblock) + 511) / 512;
		hash_start = (hash_start + (hash_block_size / 512 - 1)) & ~(long long)(hash_block_size / 512 - 1);
	}

	if ((unsigned long long)hash_start * 512 % hash_block_size) exit_err("hash start not aligned on block size");

	if (!hash_algorithm)
		hash_algorithm = "sha256";
	if (strlen(hash_algorithm) >= sizeof(superblock.algorithm) && use_superblock)
		exit_err("hash algorithm name is too long");

	digest_size = hash_init(hash_algorithm);
	if (!digest_size) exit_err("hash algorithm %s not found", hash_algorithm);

	if (!salt_bytes) {
		salt_size = DEFAULT_SALT_SIZE;
		salt_bytes = xmalloc(salt_size);
		crypto_rand_bytes(salt_bytes, salt_size);
	}

	calculated_digest = xmalloc(digest_size);

	if (mode != MODE_CREATE) {
		get_hex(root_hash, &root_hash_bytes, digest_size, "root_hash");
	}

	calculate_positions();

	create_or_verify();

	if (use_superblock) {
		if (mode == MODE_CREATE)
			save_superblock();
	}

	fclose(data_file);
	fclose(hash_file);

	if (mode == MODE_ACTIVATE && !retval)
		activate();

	free(salt_bytes);
	free(calculated_digest);
	if (mode != MODE_CREATE)
		free(root_hash_bytes);
	poptFreeContext(popt_context);

	return retval;
}

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: workqueues and percpu (was: [PATCH] dm: remake of the verity target)
  2012-03-09 22:06               ` Mandeep Singh Baines
@ 2012-08-14 17:54                 ` Tejun Heo
  0 siblings, 0 replies; 34+ messages in thread
From: Tejun Heo @ 2012-08-14 17:54 UTC (permalink / raw)
  To: Mandeep Singh Baines
  Cc: Andrew Morton, Mikulas Patocka, linux-kernel, dm-devel,
	Alasdair G Kergon, Will Drewry, Elly Jones, Milan Broz,
	Olof Johansson, Steffen Klassert, Rusty Russell

Hello,

On Fri, Mar 09, 2012 at 02:06:23PM -0800, Mandeep Singh Baines wrote:
> > I think the right thing to do for now is to add cpu hotplug notifier
> > and do flush_work_sync() on the work item.  We can later move that
> > logic into workqueue and remove it from crypto.
> > 
> 
> That seems like the correct solution. I will implement that.

So, I've been looking at it and now am not so sure whether moving it
to workqueue core is necessary.  With the proposed workqueue updates,
workqueue's behavior is now closely aligned with timers, which also
consider the specified affinity overridable (to avoid reentrancy and
during CPU offlining), and I don't think it's reasonable to require
users that need strict affinity to implement proper CPU up/down
notifiers - in many cases, they need them anyway.  I'll try to review
the current users and think more about it.
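
For reference, a minimal sketch of the notifier approach being
discussed (using the 2012-era hotplug notifier API; the per-cpu work
item and all names are illustrative, not taken from the actual patch):

	#include <linux/cpu.h>
	#include <linux/notifier.h>
	#include <linux/percpu.h>
	#include <linux/workqueue.h>

	/* Hypothetical per-cpu work item standing in for the verity work. */
	static DEFINE_PER_CPU(struct work_struct, verity_work);

	static int verity_cpu_callback(struct notifier_block *nb,
				       unsigned long action, void *hcpu)
	{
		unsigned int cpu = (unsigned long)hcpu;

		switch (action & ~CPU_TASKS_FROZEN) {
		case CPU_DOWN_PREPARE:
			/* Drain this CPU's work before the CPU goes away. */
			flush_work_sync(&per_cpu(verity_work, cpu));
			break;
		}
		return NOTIFY_OK;
	}

	static struct notifier_block verity_cpu_nb = {
		.notifier_call = verity_cpu_callback,
	};

	/* At init time: register_hotcpu_notifier(&verity_cpu_nb); */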

Thanks.

-- 
tejun

^ permalink raw reply	[flat|nested] 34+ messages in thread

end of thread, other threads:[~2012-08-14 17:57 UTC | newest]

Thread overview: 34+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2012-03-02  0:33 [PATCH] dm: verity target Mandeep Singh Baines
2012-03-02 16:08 ` Mandeep Singh Baines
2012-03-04 19:18 ` [PATCH] dm: remake of the " Mikulas Patocka
2012-03-04 19:35   ` userspace hashing utility for dm-verity Mikulas Patocka
2012-03-06 21:59   ` [PATCH] dm: remake of the verity target Mandeep Singh Baines
2012-03-08 22:21     ` workqueues and percpu (was: [PATCH] dm: remake of the verity target) Mikulas Patocka
2012-03-08 22:39       ` Andrew Morton
2012-03-08 23:15         ` Tejun Heo
2012-03-08 23:30           ` Andrew Morton
2012-03-09  0:33             ` Tejun Heo
2012-03-09  0:51               ` Tejun Heo
2012-03-09 21:15           ` Mandeep Singh Baines
2012-03-09 21:20             ` Tejun Heo
2012-03-09 22:06               ` Mandeep Singh Baines
2012-08-14 17:54                 ` Tejun Heo
2012-03-13 22:20     ` [PATCH] dm: remake of the verity target Mikulas Patocka
2012-03-14 21:13       ` Will Drewry
2012-03-17  1:16         ` Mikulas Patocka
2012-03-17  3:06           ` Will Drewry
2012-03-14 21:43       ` Mandeep Singh Baines
2012-03-20 15:41       ` Mandeep Singh Baines
2012-03-21  0:54         ` Mikulas Patocka
2012-03-21  3:03           ` Mandeep Singh Baines
2012-03-21  3:11           ` [dm-devel] " Mikulas Patocka
2012-03-21  3:30             ` Mandeep Singh Baines
2012-03-21  3:44               ` Mikulas Patocka
2012-03-21  3:49                 ` Mandeep Singh Baines
2012-03-21 17:08                   ` Mikulas Patocka
2012-03-21 17:09                     ` Mikulas Patocka
2012-03-22 17:41                     ` Mandeep Singh Baines
2012-03-22 21:52                       ` Mikulas Patocka
2012-03-23  3:15                         ` Mandeep Singh Baines
2012-03-24  3:48                           ` Mikulas Patocka
2012-03-21  1:10         ` Mikulas Patocka
